Matrix Theory
About this ebook
Not only is matrix theory significant in a wide range of fields (mathematical economics, quantum physics, geophysics, electrical network synthesis, crystallography, and structural engineering, among others), but with the vast proliferation of digital computers, knowledge of matrix theory is a must for every modern engineer, mathematician, and scientist. Matrices represent linear transformations from a finite set of numbers to another finite set of numbers.
Since many important problems are linear, and since digital computers with finite memory manipulate only finite sets of numbers, the solution of linear problems by digital computers usually involves matrices. Developed from the author's course on matrix theory at the California Institute of Technology, the book begins with a concise presentation of the theory of determinants and continues with a discussion of classical linear algebra, including an optional chapter on the use of matrices to solve systems of linear equations. Later chapters treat triangularizations of Hermitian and non-Hermitian matrices, and a further chapter presents a proof of the difficult and important theorem of Jordan. The book concludes with discussions of variational principles and perturbation theory of matrices, matrix numerical analysis, and an introduction to the subject of linear computations.
The book is designed to meet many different needs, and because it is mathematically rigorous, it may be used by students of pure and applied mathematics. Since it is oriented towards applications, it is valuable to students of engineering, science, and the social sciences. And because it contains the basic preparation in matrix theory required for numerical analysis, it can be used by students whose main interest is computers. The book assumes very little mathematical preparation, and except for the single section on the continuous dependence of eigenvalues on matrices, a knowledge of elementary algebra and calculus is sufficient.
Matrix Theory - Joel N. Franklin
1 DETERMINANTS
1.1 INTRODUCTION
Suppose that we wish to solve the equations
To solve for x1 we multiply the first equation by 8, multiply the second equation by 7, and subtract the resulting equations. This procedure gives
Thus x1 is found. To find x2, multiply the second equation by 2, the first equation by 3, and subtract. The result is
Thus x2 is found. To generalize these equations, write

$$a_{11}x_1 + a_{12}x_2 = b_1, \qquad a_{21}x_1 + a_{22}x_2 = b_2.$$
In the example (1), a11 = 2, a12 = 7, a21 = 3, a22 = 8.
Solving for x1 and x2, we find

$$x_1 = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11}a_{22} - a_{12}a_{21}}, \qquad x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{12}a_{21}}, \tag{2}$$
If we introduce the notation

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc, \tag{3}$$

formula (2) yields

$$x_1 = \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} \bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \qquad x_2 = \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix} \bigg/ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \tag{4}$$
if the denominator in each fraction is not zero.
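As an illustrative sketch (the numbers below are chosen for this example, not taken from the text), the two-unknown solution formulas can be written directly in Python:

```python
# Illustrative only: solve a 2x2 linear system by the determinant
# formulas above. The coefficients are example values, not the book's.

def det2(a11, a12, a21, a22):
    """2x2 determinant: a11*a22 - a12*a21."""
    return a11 * a22 - a12 * a21

def solve2(a11, a12, b1, a21, a22, b2):
    """Solve a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2 by the quotient formulas."""
    delta = det2(a11, a12, a21, a22)
    if delta == 0:
        raise ValueError("zero determinant: no unique solution")
    x1 = det2(b1, a12, b2, a22) / delta   # numerator: column 1 replaced by (b1, b2)
    x2 = det2(a11, b1, a21, b2) / delta   # numerator: column 2 replaced by (b1, b2)
    return x1, x2

print(solve2(2, 7, 16, 3, 8, 19))   # -> (1.0, 2.0)
```

The two numerator determinants are exactly the determinants in formula (4), with the right-hand sides substituted for the corresponding column of coefficients.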
A rectangular collection of numbers is a matrix. Thus,
are matrices. So is the number 17—a single number is a 1 × 1 matrix. Systems of linear equations are completely specified by the matrices containing their coefficients and by their right-hand sides. A vector is a matrix with only one column or only one row. Thus
are vectors. The first is a row vector; the second, a column vector.
A determinant is a single number computed from a square matrix. We speak of the determinant of a matrix,
and in the next section, we shall define the determinant of an n × n matrix. For n = 2 we define

$$\det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.$$
For example, the collection of four coefficients
from equations (1) is a matrix with a determinant that is the single number 2 · 8 − 7 · 3 = −5.
As we saw in (4), determinants appear in the solution of linear equations.
1.2 THE DEFINITION OF A DETERMINANT
We wish to generalize our solution of two equations in two unknowns to n equations in n unknowns. For n = 3 write

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1, \quad a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2, \quad a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3. \tag{1}$$
If x2 and x3 are eliminated, we can obtain by a long computation the formula

$$\Delta\, x_1 = \Delta_1, \tag{2}$$

where

$$\Delta = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} \tag{3}$$
and where ∆1 is found by substituting b1, b2, b3, respectively, for a11, a21, a31 in the expression for ∆. A second enormous computation would yield

$$\Delta\, x_2 = \Delta_2, \tag{4}$$
where ∆2 is found by substituting b1, b2, b3, respectively, for a12, a22, a32 in the expression for ∆. A third computation would give

$$\Delta\, x_3 = \Delta_3, \tag{5}$$
where ∆3 is found by substituting b1, b2, b3, respectively, for a13, a23, a33. In a later section these results will be proved and generalized. Here we only wish to motivate the definition of the determinant.
Every term in the expression for ∆ has the form a1ja2ka3l where j, k, l is a permutation of 1, 2, 3. With each term there is a sign ±1 = s(j, k, l), which we call the sign of the permutation j, k, l. Thus

$$s(1,2,3) = s(2,3,1) = s(3,1,2) = 1, \qquad s(1,3,2) = s(2,1,3) = s(3,2,1) = -1. \tag{6}$$

Evidently,

$$s(j,k,l) = \operatorname{sign}\left[(k-j)(l-j)(l-k)\right]. \tag{7}$$
Now (3) takes the form

$$\Delta = \sum s(j,k,l)\, a_{1j} a_{2k} a_{3l}, \tag{8}$$

where the summation extends over all six permutations j, k, l.
For an n × n matrix (aij)(i, j = 1, . . . , n) we define the determinant ∆ as the sum of n! terms

$$\Delta = \sum s(j_1, \ldots, j_n)\, a_{1j_1} a_{2j_2} \cdots a_{nj_n}. \tag{9}$$

The summation extends over all n! permutations j1, . . . , jn of 1, . . . , n. The sign s(j1, . . . , jn) is defined as

$$s(j_1, \ldots, j_n) = \operatorname{sign} \prod_{q > p} (j_q - j_p). \tag{10}$$
In other words, s = 1 if the product of all jq – jp for q > p is positive; s = −1 if the product is negative.
For example, the determinant of a 5 × 5 matrix has 120 terms. One of the terms is −a13a25a31a42a54. The minus sign appears because the permutation 3, 5, 1, 2, 4 has five pairs out of natural order, so the product of the differences jq − jp for q > p is negative, and s(3, 5, 1, 2, 4) = −1.
It should be emphasized that so far we have proved nothing. We merely have a definition (9) which we hope will be useful. For n = 2, the definition gives

$$\Delta = s(1,2)\, a_{11}a_{22} + s(2,1)\, a_{12}a_{21} = a_{11}a_{22} - a_{12}a_{21},$$

which is consistent with the definition given in the preceding section. For n = 1, we define ∆ = a11, s(1) = 1.
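Definition (9) translates directly into a short program. The following Python sketch (an illustration, using 0-based indices rather than the text's 1-based ones) computes ∆ by summing over all n! permutations, with the sign taken from the parity of the product of differences in (10):

```python
# Illustrative transcription of definition (9), with 0-based indices.
from itertools import permutations

def perm_sign(j):
    """+1 if the product of (j[q] - j[p]) over q > p is positive, -1 if negative."""
    sign = 1
    for p in range(len(j)):
        for q in range(p + 1, len(j)):
            if j[q] < j[p]:      # each out-of-order pair flips the sign
                sign = -sign
    return sign

def det(a):
    """Determinant as the sum of n! signed products, one element per row."""
    n = len(a)
    total = 0
    for j in permutations(range(n)):
        term = perm_sign(j)
        for i in range(n):
            term *= a[i][j[i]]
        total += term
    return total
```

For the 5 × 5 term discussed above, the permutation 3, 5, 1, 2, 4 becomes (2, 4, 0, 1, 3) in 0-based indexing, and `perm_sign((2, 4, 0, 1, 3))` is −1, matching the minus sign.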
PROBLEMS
1. Verify formula (7) for the six permutations in formula (6).
2. In the expansion (9) of the determinant of a 7 × 7 matrix there will be a term ±a17a26a35a44a53a62a71. Is the sign plus or is it minus?
3. Consider the equations
Evaluate the determinants ∆, ∆1, ∆2, ∆3 and solve for x1, x2, x3 by formulas (2), (4), and (5).
1.3 PROPERTIES OF DETERMINANTS
The definitions (9) and (10) in the last section show that, to study determinants, we must study permutations.
Theorem 1. If two numbers in the permutation j1, . . . , jn are interchanged, the sign of the permutation is reversed. For example,
Proof. Suppose that the two numbers are adjacent, say k and l, in the permutation j1, . . . , k, l, . . . , jn. When k and l are interchanged, the product ∏ (js – jr) for s > r is unchanged except that the single term l – k becomes k – l. Therefore, the sign is reversed.
If k and l are not adjacent, let them be separated by m numbers. Move k to the right by m successive interchanges of adjacent numbers so that k is just to the left of l. Now move l to the left by m + 1 successive interchanges of adjacent numbers. The effect of these interchanges is simply to interchange k and l in the original permutation. Since the sign is reversed an odd number of times, namely m + m + 1 times, the sign is reversed when k and l are interchanged.
Theorem 2. Let the permutation j1, . . . , jn be formed from 1, 2, . . . , n by t successive interchanges of pairs of numbers. Then s(j1, . . . , jn) = (−1)t.
Proof. According to the last theorem, the sign of the permutation 1, 2, . . . , n is reversed t times as the permutation j1, . . . , jn is formed. The result follows because s(1, 2, . . . , n) = 1. For example,
If t is even, the permutation j1, . . . , jn is called an even permutation; if t is odd, it is called an odd permutation. Since the sign of a permutation is uniquely defined, a permutation cannot be both even and odd. For n > 1 the number of even permutations equals the number of odd permutations, namely n!/2, since the even permutations j1, j2, . . . , jn may be put into a one-to-one correspondence with the odd permutations by the single interchange of j1 and j2.
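The even/odd split can be checked by brute force. This Python sketch (illustrative only) verifies that the n! permutations divide into n!/2 of each sign for several small n, with parity measured by the inversion count:

```python
# Illustrative check: for n > 1, exactly half of the n! permutations
# are even and half are odd.
from itertools import permutations
from math import factorial

def perm_sign(j):
    """Sign from the parity of the number of out-of-order pairs (inversions)."""
    inversions = sum(1 for p in range(len(j))
                       for q in range(p + 1, len(j)) if j[q] < j[p])
    return -1 if inversions % 2 else 1

for n in range(2, 7):
    signs = [perm_sign(j) for j in permutations(range(n))]
    assert signs.count(+1) == signs.count(-1) == factorial(n) // 2
```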
The transpose of a matrix is formed by interchanging corresponding rows and columns. Thus, if A = (aij), the component bij of AT equals aji. For example,
Another example is
Theorem 3. If A is a square matrix, det A = det AT.
Proof. Let B = (bij) = AT = (aji). By definition
where the summation extends over all n! permutations j = j1, . . . , jn. Since bij = aji,
By rearranging the terms, we may write
For example, a31a12a23 = a12a23a31. The rearrangement (2) establishes a one-to-one correspondence between permutations j and permutations i. Therefore, as j goes through all n! permutations in the summation (1), i goes through all permutations. If a rearrangement from the left-hand side of (2) can be accomplished by t interchanges of pairs of a’s, then the right-hand side can be rearranged by t interchanges to produce the left-hand side. Thus,
implies that
Therefore, s(j) = (−1)t = s(i), and
Theorem 4. If two rows of a square matrix A are interchanged, the sign of the determinant is reversed. Or, if two columns are interchanged, the sign is reversed. For example,
Proof. Suppose that rows r and s are interchanged in the matrix A to produce a matrix B. Then
By definition,
Let k be the permutation produced from j by interchanging jr and js. This interchange establishes a one-to-one correspondence between permutations j and k. The sign s(k) = –s(j). Interchanging brjr with bsjs gives
Therefore,
To establish the column analog, we use Theorem 3. Let C be produced by interchanging two columns in A. Then CT is produced by interchanging two rows in AT. Therefore,
Corollary. If two rows, or two columns, of a square matrix are identical, the determinant is zero.
Proof. If the two identical rows or columns are interchanged, the matrix is unaltered, but the determinant must change sign. Therefore, the determinant equals minus itself, and is thus equal to zero.
Theorem 5. If a row or a column of a square matrix is multiplied by a constant, c, the determinant is multiplied by c. For example,
Proof. Each term a1j1 . . . anjn in the expansion of det A contains exactly one component from each row. Therefore, if any one row is multiplied by c, the determinant is multiplied by c. Each term a1j1 . . . anjn also contains exactly one component from each column. Therefore, if a column is multiplied by c, the determinant is multiplied by c.
Theorem 6. If a multiple of one row (column) is subtracted from another row (column) of a matrix, the determinant is unchanged. For example,
Proof. By the transpose theorem, Theorem 3, we only need to prove the row part of Theorem 6. Let λ times row r be subtracted from row s. Then asjs is replaced by asjs – λarjs. For definiteness, assume that r < s. The new matrix has the determinant
The first sum equals det A. The second sum is zero because it is the expansion of a determinant with identical rows r and s. This completes the proof.
Theorem 6 is used to evaluate determinants by reducing them to triangular form. For example,
The last determinant equals 1 because of the following result:
Theorem 7. If aij = 0 for i > j, then det A = a11a22 . . . ann.
Proof. A term s(j)a1j1 . . . anjn in the expansion of det A vanishes unless j1 ≥ 1, j2 ≥ 2, . . . , jn ≥ n. But the only permutation j that satisfies all of these inequalities is j = 1, 2, . . . , n. Therefore, det A equals the single term a11a22 . . . ann.
Theorem 8. If aij = 0 for i < j, then det A = a11a22 . . . ann.
Proof. det A = det AT = a11a22 . . . ann.
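Theorems 4 through 7 together justify the practical way of evaluating a determinant: subtract row multiples (Theorem 6, determinant unchanged), swapping rows when a pivot is zero (Theorem 4, sign reversed), until the matrix is upper triangular, and then multiply the diagonal (Theorem 7). An illustrative Python sketch of this procedure:

```python
# Illustrative evaluation of det A by reduction to triangular form.
def det_by_elimination(a):
    a = [row[:] for row in a]              # work on a copy of the matrix
    n = len(a)
    sign = 1
    for k in range(n):
        # Find a nonzero pivot in column k at or below row k.
        pivot = next((r for r in range(k, n) if a[r][k] != 0), None)
        if pivot is None:
            return 0                       # no pivot: the determinant is zero
        if pivot != k:
            a[k], a[pivot] = a[pivot], a[k]
            sign = -sign                   # Theorem 4: a row swap flips the sign
        for r in range(k + 1, n):
            m = a[r][k] / a[k][k]
            a[r] = [a[r][c] - m * a[k][c] for c in range(n)]  # Theorem 6
    product = sign
    for k in range(n):
        product *= a[k][k]                 # Theorem 7: diagonal product
    return product
```

This takes on the order of n³ arithmetic operations, in contrast to the n! terms of definition (9).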
PROBLEMS
1. By applying Theorems 4 and 6, show that
2. Let A have the form
Show that
Use an argument similar to the proof of Theorem 7.
3. Generalize the result of Problem 2. Let A be a block-triangular matrix
Show that
Can you prove this result by induction from the result of Problem 2?
4. Is it possible to rearrange the letters of the alphabet a, b, . . . , z in reverse order z, y, . . . , a by exactly 100 successive interchanges of pairs of letters?
1.4 ROW AND COLUMN EXPANSIONS
If A is a square matrix, we may regard det A as a function of the elements a11, a12, . . . , a1n in the first row. Each term a1j1a2j2 . . . anjn contains exactly one element, a1j1, from the first row. Therefore, we may write the determinant as a linear combination
From the definition of det A we observe that
where the summation ∑k is extended over all (n – 1)! permutations j = j1, j2, . . . , jn with j1 = k. Note that
where
since the numbers 1, 2, . . . , k – 1 appear to the right of k in the permutation j = k, j2, . . . , jn. Therefore,
where the summation extends over all the (n – 1)! permutations j2, . . . , jn of the n – 1 numbers 1, . . . , k – 1, k + 1, . . . , n. Thus,
where A1k is the (n − 1) × (n − 1) matrix formed by eliminating row 1 and column k from A.
Formulas (1) and (5) give the expansion of det A by the first row. For example,
Suppose that we wish to expand the determinant (6) by the third row. We bring the third row to the top, without disturbing the order of the other rows, by interchanging rows 3 and 2 and then interchanging rows 2 and 1.
We now expand by the first row to obtain
This illustrates the general result:
Theorem 1. Let A = (aij) be an n × n matrix, with n > 1. Let Aij be the (n − 1) × (n − 1) matrix formed by deleting row i and column j from A. Defining the cofactors

$$c_{ij} = (-1)^{i+j} \det A_{ij}, \tag{9}$$

we then have the expansion by row i:

$$\det A = a_{i1}c_{i1} + a_{i2}c_{i2} + \cdots + a_{in}c_{in}; \tag{10}$$

and we have the expansion by column j:

$$\det A = a_{1j}c_{1j} + a_{2j}c_{2j} + \cdots + a_{nj}c_{nj}. \tag{11}$$
Proof. We have already proved (9) for i = 1 in formulas (1) and (5). For i > 1 bring row i to the top by i − 1 successive interchanges of pairs of rows: i and i − 1, i − 1 and i − 2, . . . , 2 and 1. Call the resulting matrix B. Because of the i − 1 interchanges,
Expand det B by its first row.
Since b1j = aij and B1j = Aij, we have
Therefore, by (12),
This establishes the row expansion (10).
To establish the column expansion (11), let R = AT. Expand R by row j:
But rjk = akj. For example, if n = 3,
Therefore, since det A = det AT,
This is the expansion by column j.
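The row expansion of Theorem 1 gives a natural recursive program for the determinant. This Python sketch (illustrative; 0-based indices, expanding along the first row, so the cofactor sign (−1)^(i+j) becomes (−1)^j) mirrors formula (10):

```python
# Illustrative recursive determinant via cofactor expansion along row 0.
def minor(a, i, j):
    """The matrix A_ij: delete row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

def det_cofactor(a):
    n = len(a)
    if n == 1:
        return a[0][0]                     # the 1x1 case: det A = a11
    return sum((-1) ** j * a[0][j] * det_cofactor(minor(a, 0, j))
               for j in range(n))
```

The recursion evaluates n! products in total, so it matches definition (9) in cost; its value is conceptual rather than computational.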
As an example of a row expansion, we shall evaluate Vandermonde's determinant:

$$V_n = \begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{vmatrix}. \tag{14}$$
If xi = xj for some i ≠ j, the determinant equals 0. Suppose that x1, . . . , xn are distinct. We regard Vn as a function of the variable x ≡ xn, and we regard x1, . . . , xn−1 as constants. Now Vn is a polynomial of degree n − 1 in x, and Vn has the distinct roots x = x1, x = x2, . . . , x = xn−1. Therefore,

$$V_n = \alpha (x - x_1)(x - x_2) \cdots (x - x_{n-1}),$$

where α is independent of x. But α is the coefficient of x^(n−1) in Vn. Expanding the determinant (14) by its last row, we see that α is the cofactor of the element x^(n−1). But this cofactor is the lower-order Vandermonde determinant, Vn−1. Therefore,

$$V_n = (x - x_1)(x - x_2) \cdots (x - x_{n-1})\, V_{n-1}. \tag{15}$$
Direct computation for n = 2 gives

$$V_2 = x_2 - x_1.$$

Now (15) gives

$$V_3 = (x_3 - x_1)(x_3 - x_2)\, V_2 = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2).$$

We conjecture

$$V_n = \prod_{i<j} (x_j - x_i). \tag{17}$$

But if

$$V_{n-1} = \prod_{i<j\le n-1} (x_j - x_i),$$

then (15) gives

$$V_n = (x_n - x_1)(x_n - x_2) \cdots (x_n - x_{n-1})\, V_{n-1} = \prod_{i<j} (x_j - x_i),$$
which establishes (17) by induction. The identity (17) is correct even if the xi are not distinct because in that case both sides are zero.
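The identity (17) can be spot-checked numerically. This Python sketch (illustrative) compares a direct determinant computation of Vn, with row i equal to (1, x_i, x_i², . . . , x_i^(n−1)), against the product over all pairs i < j:

```python
# Illustrative check of Vandermonde's identity V_n = prod_{i<j} (x_j - x_i).
def _minor(a, j):
    return [row[:j] + row[j + 1:] for row in a[1:]]

def _det(a):
    """Cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * _det(_minor(a, j)) for j in range(len(a)))

def vandermonde_det(xs):
    n = len(xs)
    return _det([[x ** k for k in range(n)] for x in xs])

def vandermonde_product(xs):
    product = 1
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            product *= xs[j] - xs[i]
    return product

assert vandermonde_det([1, 2, 4]) == vandermonde_product([1, 2, 4]) == 6
```

As the text notes, the identity also holds with repeated points: both sides are then zero.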
PROBLEMS
1. Evaluate
by the three row expansions and by the three column expansions.
2. By certain simplifications, evaluate
3. Give a simple expression for
4. Give a simple expression for
Note that this determinant, as a function of a, is a quartic polynomial with roots b, c, and d; and the coefficient of a³ is zero.
1.5 VECTORS AND MATRICES
A set of m linear equations in n unknowns
may be represented in a compact form
Here A is the matrix
and x and y are the column vectors
If A is fixed while x is allowed to vary over all column vectors with n components, the equation Ax = y gives a mapping from the space X of all n-component vectors x into the space Y of all m-component vectors y. When we solve the linear equations (1) we find those points x in X which are mapped into a given point y in Y by the matrix A.
A mapping Ax = y might be followed by another mapping, By = z, from Y into the space Z of p-component column-vectors z. Let B be the matrix
The successive mappings Ax = y, By = z map X into Z. Explicitly,
yield
or, equivalently,
Therefore, the successive mappings Ax = y, By = z yield a mapping Cx = z, where C is the p × n matrix with the components

$$c_{ik} = \sum_{j=1}^{m} b_{ij} a_{jk} \qquad (i = 1, \ldots, p;\; k = 1, \ldots, n). \tag{6}$$

In this case we write

$$C = BA.$$
The product BA of the matrices B and A is defined as the matrix C with the components (6).
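Definition (6) of the product C = BA can be transcribed as follows (an illustrative Python sketch, with matrices as lists of rows); the final assertion echoes the point made in the text that BA and AB generally differ:

```python
# Illustrative matrix product from definition (6):
# c[i][k] = sum over j of b[i][j] * a[j][k], so C = BA composes
# y = Ax followed by z = By into z = Cx.
def matmul(b, a):
    p, m = len(b), len(b[0])
    assert m == len(a), "columns of B must equal rows of A"
    n = len(a[0])
    return [[sum(b[i][j] * a[j][k] for j in range(m)) for k in range(n)]
            for i in range(p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert matmul(B, A) != matmul(A, B)   # as a rule, BA != AB
```

Here B swaps the two components of a vector, so BA swaps the rows of A while AB swaps its columns; the two products are visibly different.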
Two matrices are called equal if their corresponding components are equal. As a rule, BA ≠ AB. The product is not even defined unless the number of columns of the matrix on the left equals the number of rows of the matrix on the right. Thus, if
we have
but AB is undefined. If
we have
The inequality AB ≠ BA means that the transformation AB, coming from B followed by A, is different from the transformation BA, coming from A followed by B.
If Ax = y, By = z, and Dz = w we may write
or
Thus, the product of three matrices may be taken in either order:

$$(DB)A = D(BA) = DBA.$$
The i, j component of DBA is the double sum

$$\sum_{k}\sum_{l} d_{ik} b_{kl} a_{lj}.$$
The inequality AB ≠ BA states that matrix multiplication is not commutative. The equality (DB)A = D(BA) states that matrix multiplication is associative.
Sometimes it is convenient to multiply every component of a vector x or a matrix A by the same number α. We then write
Obviously, multiplication of a matrix or vector by a number is commutative. Thus, αABx = A(αB)x = AB(αx).
PROBLEMS
1. Let A and B be the matrices
Form the products AB and BA.
2. Let
and let
Express z1 and z2 in terms of x1 and x2. If y = Ax and z = By, what is the product BA?
3. If y = Ax for all x, and if z = By for all y, we have shown that z = Cx for all x, where C is defined in (6). Could there be another matrix, C′, with this property? Suppose that z = Cx = C′x for all x; show that C = C′. [Hint: Let x, successively, be the unit vectors column (1,0, . . . , 0), column (0, 1, 0, . . . ), . . . , column (0, 0, . . . , 1).]
4. If y = Ax can be solved for x in terms of y by a formula x = A′y, we call A′ an inverse of A, and we write A′ = A−1. Compute an inverse A−1 for
5. If A−1 is an inverse of A, and if B−1 is an inverse of B, what is an inverse for BA expressed as a matrix product?
6. Let A be an m × n matrix. Let y = Ax for all x. Define the row-vectors x′ = (x1, . . . , xn) and y′ = (y1, . . . , ym), where x and y are the column-vectors with the same components. How can you construct an n × m matrix, A′, such that y′ = x′A′ for all x′? If
what is A′?
7. Define A′ as in the last problem. Express (AB)′ in terms of A′ and B′.
1.6 THE INVERSE MATRIX
Let a system of n equations in n unknowns be written in matrix-vector form: Ax = b. Define the n × n identity matrix
as the matrix with ones on the main diagonal and zeros off the main diagonal. We can solve Ax = b if we can find an n × n matrix B such that AB = I. A solution is x = Bb because Ax = A(Bb) = (AB)b = Ib = b.
Theorem 1. Let A be an n × n matrix, with n > 1. Let Aij be the (n − 1) × (n − 1) matrix formed by deleting row i and column j from A. Define the cofactor matrix C = (cij), where

$$c_{ij} = (-1)^{i+j} \det A_{ij}.$$

Let ∆ = det A. Then

$$AC^{T} = C^{T}A = \Delta I.$$

If ∆ ≠ 0, then A has an inverse matrix B = ∆−1CT satisfying

$$AB = BA = I.$$
EXAMPLE 1. Let
Then
In this example, det A = 0. We will show later that det A = 0 implies that no matrix B exists such that either AB = I or BA = I.
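Theorem 1 suggests a direct (if inefficient) way to compute an inverse: form the cofactors and divide the transposed cofactor matrix by ∆. A Python sketch (illustrative only), which raises an error when ∆ = 0, as in Example 1:

```python
# Illustrative inverse via Theorem 1: B = (1/det A) * C^T, where the
# cofactor c[i][j] is (-1)**(i + j) times the determinant of the minor.
def minor(a, i, j):
    """Delete row i and column j from a."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

def det(a):
    """Cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(len(a)))

def inverse(a):
    n = len(a)
    d = det(a)
    if d == 0:
        raise ValueError("det A = 0: no inverse exists")
    # Entry (i, j) of the inverse is the cofactor c[j][i] divided by det A,
    # i.e. the transpose of the cofactor matrix, scaled by 1/det A.
    return [[(-1) ** (i + j) * det(minor(a, j, i)) / d for j in range(n)]
            for i in range(n)]
```

For numerical work one would instead solve by elimination; the cofactor formula's cost grows factorially with n, but it makes the role of ∆ in Theorem 1 concrete.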
EXAMPLE 2. Let
Then
Since ∆ ≠