Matrix Theory
Ebook · 498 pages · 3 hours

About this ebook

Not only is matrix theory significant in a wide range of fields (mathematical economics, quantum physics, geophysics, electrical network synthesis, crystallography, and structural engineering, among others), but with the vast proliferation of digital computers, knowledge of matrix theory is a must for every modern engineer, mathematician, and scientist. Matrices represent linear transformations from a finite set of numbers to another finite set of numbers.
Since many important problems are linear, and since digital computers with finite memory manipulate only finite sets of numbers, the solution of linear problems by digital computers usually involves matrices. Developed from the author's course on matrix theory at the California Institute of Technology, the book begins with a concise presentation of the theory of determinants, continues with a discussion of classical linear algebra, and includes an optional chapter on the use of matrices to solve systems of linear equations, triangularizations of Hermitian and non-Hermitian matrices, as well as a chapter presenting a proof of the difficult and important matrix theorem of Jordan. The book concludes with discussions of variational principles and perturbation theory of matrices, matrix numerical analysis, and an introduction to the subject of linear computations.
The book is designed to meet many different needs, and because it is mathematically rigorous, it may be used by students of pure and applied mathematics. Since it is oriented toward applications, it is valuable to students of engineering, science, and the social sciences. And because it contains the basic preparation in matrix theory required for numerical analysis, it can be used by students whose main interest is computers. The book assumes very little mathematical preparation, and except for the single section on the continuous dependence of eigenvalues on matrices, a knowledge of elementary algebra and calculus is sufficient.

Language: English
Release date: July 31, 2012
ISBN: 9780486136387


    Book preview

    Matrix Theory - Joel N. Franklin

    1     DETERMINANTS

    1.1   INTRODUCTION

    Suppose that we wish to solve the equations

        2x1 + 7x2 = b1
        3x1 + 8x2 = b2     (1)

    To solve for x1 we multiply the first equation by 8, multiply the second equation by 7, and subtract the resulting equations. This procedure gives

        (2 · 8 − 3 · 7)x1 = 8b1 − 7b2,  that is,  −5x1 = 8b1 − 7b2

    Thus, x1 = (8b1 − 7b2)/(−5). To find x2, multiply the second equation by 2, the first equation by 3, and subtract. The result is

        −5x2 = 2b2 − 3b1

    Thus x2 = (2b2 − 3b1)/(−5). To generalize these equations, write

        a11x1 + a12x2 = b1
        a21x1 + a22x2 = b2

    In the example (1), a11 = 2, a12 = 7, a21 = 3, a22 = 8.

    Solving for x1 and x2, we find

        x1 = (a22b1 − a12b2)/(a11a22 − a12a21),  x2 = (a11b2 − a21b1)/(a11a22 − a12a21)     (2)

    If we introduce the notation

        | a11  a12 ; a21  a22 | = a11a22 − a12a21     (3)

    (the semicolon separating the rows of the array), formula (2) yields

        x1 = | b1  a12 ; b2  a22 | / | a11  a12 ; a21  a22 |
        x2 = | a11  b1 ; a21  b2 | / | a11  a12 ; a21  a22 |     (4)

    if the denominator in each fraction is not zero.

    A rectangular array of numbers is a matrix. The array of coefficients of a system of linear equations is a matrix, and so is the number 17—a single number is a 1 × 1 matrix. Systems of linear equations are completely specified by the matrices containing their coefficients and by their right-hand sides. A vector is a matrix with only one column or only one row. A matrix with a single row is a row vector; a matrix with a single column, a column vector.

    A determinant is a single number computed from a square matrix. We speak of the determinant of a matrix, and in the next section, we shall define the determinant of an n × n matrix. For n = 2 we define

        | a11  a12 ; a21  a22 | = a11a22 − a12a21

    For example, the collection of four coefficients

        ( 2  7 )
        ( 3  8 )

    from equations (1) is a matrix with a determinant that is the single number

        2 · 8 − 7 · 3 = −5

    As we saw in (4), determinants appear in the solution of linear equations.
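    The quotient-of-determinants formula (4) is easy to check numerically. Here is a minimal Python sketch (the helper names det2 and solve2 are ours, and the right-hand sides b1 = b2 = 1 are chosen only for illustration; the coefficients 2, 7, 3, 8 are those of the worked example):

```python
# Formula (4): a 2 x 2 linear system solved by quotients of determinants.

def det2(a11, a12, a21, a22):
    """Determinant of a 2 x 2 matrix, as defined in this section."""
    return a11 * a22 - a12 * a21

def solve2(a11, a12, a21, a22, b1, b2):
    """Two equations in two unknowns via formula (4)."""
    delta = det2(a11, a12, a21, a22)
    if delta == 0:
        raise ValueError("zero determinant: formula (4) does not apply")
    x1 = det2(b1, a12, b2, a22) / delta
    x2 = det2(a11, b1, a21, b2) / delta
    return x1, x2

# Coefficients of the worked example; illustrative right-hand sides b1 = b2 = 1.
x1, x2 = solve2(2, 7, 3, 8, 1, 1)
# Substitute back into both equations to check.
assert abs(2 * x1 + 7 * x2 - 1) < 1e-12
assert abs(3 * x1 + 8 * x2 - 1) < 1e-12
```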

    1.2   THE DEFINITION OF A DETERMINANT

    We wish to generalize our solution of two equations in two unknowns to n equations in n unknowns. For n = 3 write

        a11x1 + a12x2 + a13x3 = b1
        a21x1 + a22x2 + a23x3 = b2     (1)
        a31x1 + a32x2 + a33x3 = b3

    If x2 and x3 are eliminated, we can obtain by a long computation the formula

        x1 = ∆1/∆     (2)

    where

        ∆ = a11a22a33 + a12a23a31 + a13a21a32 − a11a23a32 − a12a21a33 − a13a22a31     (3)

    and where ∆1 is found by substituting b1, b2, b3, respectively, for a11, a21, a31 in the expression for ∆. A second enormous computation would yield

        x2 = ∆2/∆     (4)

    where ∆2 is found by substituting b1, b2, b3, respectively, for a12, a22, a32 in the expression for ∆. A third computation would give

        x3 = ∆3/∆     (5)

    where ∆3 is found by substituting b1, b2, b3, respectively, for a13, a23, a33. In a later section these results will be proved and generalized. Here we only wish to motivate the definition of the determinant.

    Every term in the expression for ∆ has the form a1ja2ka3l where j, k, l is a permutation of 1, 2, 3. With each term there is a sign ±1 = s(j, k, l), which we call the sign of the permutation j, k, l. Thus

        s(1, 2, 3) = s(2, 3, 1) = s(3, 1, 2) = +1
        s(1, 3, 2) = s(2, 1, 3) = s(3, 2, 1) = −1     (6)

    Evidently,

        s(j, k, l) = sign of the product (k − j)(l − j)(l − k)     (7)

    Now (3) takes the form

        ∆ = Σ s(j, k, l) a1j a2k a3l     (8)

    where the summation extends over all six permutations j, k, l.

    For an n × n matrix (aij)(i, j = 1, . . . , n) we define the determinant ∆ as the sum of n! terms

        ∆ = Σ s(j1, . . . , jn) a1j1 a2j2 · · · anjn     (9)

    The summation extends over all n! permutations j1, . . . , jn of 1, . . . , n. The sign s(j1, . . . , jn) is defined as

        s(j1, . . . , jn) = sign of the product ∏ (jq − jp) for q > p     (10)

    In other words, s = 1 if the product of all differences jq − jp for q > p is positive; s = −1 if the product is negative.

    For example, the determinant of a 5 × 5 matrix has 120 terms. One of the terms is −a13a25a31a42a54. The minus sign appears because the permutation 3, 5, 1, 2, 4 is odd: five of the differences jq − jp for q > p are negative, so their product is negative, and s(3, 5, 1, 2, 4) = −1.

    It should be emphasized that so far we have proved nothing. We merely have a definition (9) which we hope will be useful. For n = 2, the definition gives

        ∆ = s(1, 2) a11a22 + s(2, 1) a12a21 = a11a22 − a12a21

    which is consistent with the definition given in the preceding section. For n = 1, we define ∆ = a11, s(1) = 1.
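    Definition (9) can be transcribed directly into code. The following Python sketch (our own illustration, not the book's) computes each permutation's sign from definition (10) and sums the n! signed products; it is hopelessly slow for large n, but it matches the definition term for term:

```python
# Definition (9): det A is a sum of n! signed products, one element from each
# row, with the column indices given by a permutation.
from itertools import permutations

def sign(p):
    """Sign of a permutation p of 0, 1, ..., n-1: formula (10), the sign of
    the product of all differences p[q] - p[r] for q > r."""
    s = 1
    for q in range(len(p)):
        for r in range(q):
            if p[q] < p[r]:     # a negative difference flips the sign
                s = -s
    return s

def det(a):
    """Determinant of the square matrix a (a list of rows), by definition (9)."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

# The 2 x 2 case reproduces a11*a22 - a12*a21:
assert det([[2, 7], [3, 8]]) == 2 * 8 - 7 * 3
```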

    PROBLEMS

    1.  Verify formula (7) for the six permutations in formula (6).

    2.  In the expansion (9) of the determinant of a 7 × 7 matrix there will be a term ±a17a26a35a44a53a62a71. Is the sign plus or is it minus?

    3.  Consider the equations

    Evaluate the determinants ∆, ∆1, ∆2, ∆3 and solve for x1, x2, x3 by formulas (2), (4), and (5).

    1.3   PROPERTIES OF DETERMINANTS

    The definitions (9) and (10) in the last section show that, to study determinants, we must study permutations.

    Theorem 1. If two numbers in the permutation j1, . . . , jn are interchanged, the sign of the permutation is reversed. For example, s(1, 2, 3) = +1, but interchanging 1 and 3 gives s(3, 2, 1) = −1.

    Proof. Suppose that the two numbers are adjacent, say k and l, in the permutation j1, . . . , k, l, . . . , jn. When k and l are interchanged, the product ∏ (js − jr) for s > r is unchanged except that the single factor l − k becomes k − l. Therefore, the sign is reversed.

    If k and l are not adjacent, let them be separated by m numbers. Move k to the right by m successive interchanges of adjacent numbers so that k is just to the left of l. Now move l to the left by m + 1 successive interchanges of adjacent numbers. The effect of these interchanges is simply to interchange k and l in the original permutation. Since the sign is reversed an odd number of times, namely m + m + 1 times, the sign is reversed when k and l are interchanged.

    Theorem 2. Let the permutation j1, . . . , jn be formed from 1, 2, . . . , n by t successive interchanges of pairs of numbers. Then s(j1, . . . , jn) = (−1)^t.

    Proof. According to the last theorem, the sign of the permutation 1, 2, . . . , n is reversed t times as the permutation j1, . . . , jn is formed. The result follows because s(1, 2, . . . , n) = 1. For example, 3, 1, 2 is formed from 1, 2, 3 by the two interchanges 1, 2, 3 → 3, 2, 1 → 3, 1, 2, and indeed s(3, 1, 2) = (−1)² = +1.

    If t is even, the permutation j1, . . . , jn is called an even permutation; if t is odd, it is called an odd permutation. Since the sign of a permutation is uniquely defined, a permutation cannot be both even and odd. The number of even permutations equals the number of odd permutations, namely n!/2 if n > 1, since the even permutations j1, j2, . . . , jn may be put into a one-to-one correspondence with the odd permutations by the single interchange of j1 and j2.
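    These parity facts are easy to verify by computation. A small Python sketch (our own; the sign is computed by counting inversions, i.e. pairs standing in the wrong order, which is equivalent to formula (10)):

```python
# Each permutation's sign computed by counting inversions; for n > 1 the
# even and odd permutations split evenly, n!/2 of each.
from itertools import permutations
from math import factorial

def sign(p):
    """(-1)**t, where t is the number of inversions in the permutation p."""
    t = sum(1 for q in range(len(p)) for r in range(q) if p[r] > p[q])
    return (-1) ** t

for n in range(2, 6):
    signs = [sign(p) for p in permutations(range(n))]
    assert signs.count(1) == signs.count(-1) == factorial(n) // 2
```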

    The transpose of a matrix is formed by interchanging corresponding rows and columns. Thus, if A = (aij), the component bij of AT equals aji. For example, the transpose of ( 1  2 ; 3  4 ) is ( 1  3 ; 2  4 ). Another example is the transpose of the row vector (1, 2, 3), which is the column vector with components 1, 2, 3.

    Theorem 3. If A is a square matrix, det A = det AT.

    Proof. Let B = (bij) = AT = (aji). By definition

    where the summation extends over all n! permutations, j = j1, . . . , jn. Since bij = aji,

    By rearranging the terms, we may write

    For example, a31a12a23 = a12a23a31. The rearrangement (2) establishes a one-to-one correspondence between permutations j and permutations i. Therefore, as j goes through all n! permutations in the summation (1), i goes through all permutations. If a rearrangement from the left-hand side of (2) can be accomplished by t interchanges of pairs of a’s, then the right-hand side can be rearranged by t interchanges to produce the left-hand side. Thus,

    implies that

    Therefore, s(j) = (−1)^t = s(i), and

    Theorem 4. If two rows of a square matrix A are interchanged, the sign of the determinant is reversed. Or, if two columns are interchanged, the sign is reversed. For example, | c  d ; a  b | = cb − da = −(ad − bc) = −| a  b ; c  d |.

    Proof. Suppose that rows r and s are interchanged in the matrix A to produce a matrix B. Then

    By definition,

    Let k be the permutation produced from j by interchanging jr and js. This interchange establishes a one-to-one correspondence between permutations j and k. The sign s(k) = –s(j). Interchanging brjr with bsjs gives

    Therefore,

    To establish the column-analog, we use Theorem 3. Let C be produced by interchanging two columns in A. Then CT is produced by interchanging two rows in AT. Therefore,

    Corollary. If two rows, or two columns, of a square matrix are identical, the determinant is zero.

    Proof. If the two identical rows or columns are interchanged, the matrix is unaltered, but the determinant must change sign. Therefore, the determinant equals minus itself, and is thus equal to zero.

    Theorem 5. If a row or a column of a square matrix is multiplied by a constant, c, the determinant is multiplied by c. For example, | ca  cb ; e  f | = c | a  b ; e  f |.

    Proof. Each term a1j1 . . . anjn in the expansion of det A contains exactly one component from each row. Therefore, if any one row is multiplied by c, the determinant is multiplied by c. Each term a1j1 . . . anjn also contains exactly one component from each column. Therefore, if a column is multiplied by c, the determinant is multiplied by c.

    Theorem 6. If a multiple of one row (column) is subtracted from another row (column) of a matrix, the determinant is unchanged. For example, | a  b ; c − λa  d − λb | = | a  b ; c  d |.

    Proof. By the transpose theorem, Theorem 3, we only need to prove the row part of Theorem 6. Let λ times row r be subtracted from row s. Then asjs is replaced by asjs − λarjs. For definiteness, assume that r < s. The new matrix has the determinant

        Σ s(j) a1j1 · · · arjr · · · (asjs − λarjs) · · · anjn
            = Σ s(j) a1j1 · · · anjn − λ Σ s(j) a1j1 · · · arjr · · · arjs · · · anjn

    The first sum equals det A. The second sum is zero because it is the expansion of a determinant with identical rows r and s. This completes the proof.

    Theorem 6 is used to evaluate determinants by reducing them to triangular form. For example,

    The last determinant equals 1 because of the following result:

    Theorem 7. If aij = 0 for i > j, then det A = a11a22 . . . ann.

    Proof. A term s(j)a1j1 . . . anjn in the expansion of det A can be different from zero only if j1 ≥ 1, j2 ≥ 2, . . . , jn ≥ n. But the only permutation j that satisfies all of these inequalities is j = 1, 2, . . . , n. Therefore, det A equals the single term a11a22 . . . ann.

    Theorem 8. If aij = 0 for i < j, then det A = a11a22 . . . ann.

    Proof. det A = det AT = a11a22 . . . ann.
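    The reduction to triangular form described above can be sketched in Python (our own illustration; the function name det_by_elimination is an assumption). A row swap reverses the sign (Theorem 4), subtracting a multiple of one row from another leaves the determinant unchanged (Theorem 6), and the triangular result is evaluated by Theorem 7:

```python
# Evaluate a determinant by reduction to triangular form.

def det_by_elimination(a):
    a = [row[:] for row in a]          # work on a copy
    n = len(a)
    sign = 1
    for k in range(n):
        # Find a row at or below row k with a nonzero entry in column k.
        pivot = next((i for i in range(k, n) if a[i][k] != 0), None)
        if pivot is None:
            return 0.0                 # whole column is zero: determinant is 0
        if pivot != k:
            a[k], a[pivot] = a[pivot], a[k]
            sign = -sign               # Theorem 4: interchange reverses sign
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]   # Theorem 6: determinant unchanged
    product = sign
    for k in range(n):
        product *= a[k][k]             # Theorem 7: product of the diagonal
    return product

assert abs(det_by_elimination([[2, 7], [3, 8]]) - (-5)) < 1e-12
```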

    PROBLEMS

    1.  By applying Theorems 4 and 6, show that

    2.  Let A have the form

    Show that

    Use an argument similar to the proof of Theorem 7.

    3.  Generalize the result of Problem 2. Let A be a block-triangular matrix

    Show that

    Can you prove this result by induction from the result of Problem 2?

    4.  Is it possible to rearrange the letters of the alphabet a, b, . . . , z in reverse order z, y, . . . , a by exactly 100 successive interchanges of pairs of letters?

    1.4   ROW AND COLUMN EXPANSIONS

    If A is a square matrix, we may regard det A as a function of the elements a11, a12, . . . , a1n in the first row. Each term a1j1a2j2 . . . anjn contains exactly one element, a1j1, from the first row. Therefore, we may write the determinant as a linear combination

        det A = a11c1 + a12c2 + · · · + a1ncn     (1)

    From the definition of det A we observe that

        ck = Σk s(k, j2, . . . , jn) a2j2 · · · anjn     (2)

    where the summation ∑k is extended over all (n − 1)! permutations j = j1, j2, . . . , jn with j1 = k. Note that

        s(k, j2, . . . , jn) = (−1)^(k−1) s′(j2, . . . , jn)     (3)

    where

        s′(j2, . . . , jn) = the sign of j2, . . . , jn as a permutation of the n − 1 numbers 1, . . . , k − 1, k + 1, . . . , n     (4)

    since the numbers 1, 2, . . . , k − 1 appear to the right of k in the permutation j = k, j2, . . . , jn. Therefore,

        ck = (−1)^(k−1) Σ s′(j2, . . . , jn) a2j2 · · · anjn

    where the summation extends over all the (n − 1)! permutations j2, . . . , jn of the n − 1 numbers 1, . . . , k − 1, k + 1, . . . , n. Thus,

        ck = (−1)^(1+k) det A1k     (5)

    where A1k is the (n − 1) × (n − 1) matrix formed by eliminating row 1 and column k from A.

    Formulas (1) and (5) give the expansion of det A by the first row. For example,

    Suppose that we wish to expand the determinant (6) by the third row. We bring the third row to the top, without disturbing the order of the other rows, by interchanging rows 3 and 2 and then interchanging rows 2 and 1.

    We now expand by the first row to obtain

    This illustrates the general result:

    Theorem 1. Let A = (aij) be an n × n matrix, n > 1. Let Aij be the (n − 1) × (n − 1) matrix formed by deleting row i and column j from A. Defining the cofactors

        cij = (−1)^(i+j) det Aij     (9)

    we then have the expansion by row i:

        det A = ai1ci1 + ai2ci2 + · · · + aincin     (10)

    and we have the expansion by column j:

        det A = a1jc1j + a2jc2j + · · · + anjcnj     (11)

    Proof. We have already proved (10) for i = 1 in formulas (1) and (5). For i > 1 bring row i to the top by i − 1 successive interchanges of pairs of rows: i and i − 1, i − 1 and i − 2, . . . , 2 and 1. Call the resulting matrix B. Because of the i − 1 interchanges,

        det A = (−1)^(i−1) det B     (12)

    Expand det B by its first row:

        det B = Σj b1j (−1)^(1+j) det B1j

    Since b1j = aij and B1j = Aij, we have

        det B = Σj aij (−1)^(1+j) det Aij

    Therefore, by (12),

        det A = Σj aij (−1)^(i−1)(−1)^(1+j) det Aij = Σj aij (−1)^(i+j) det Aij = Σj aij cij

    This establishes the row expansion (10).

    To establish the column expansion (11), let R = AT. Expand R by row j:

        det R = Σk rjk (−1)^(j+k) det Rjk

    But rjk = akj, and Rjk = (Akj)T, so that det Rjk = det Akj. Therefore, since det A = det AT,

        det A = Σk akj (−1)^(j+k) det Akj = Σk akj ckj

    This is the expansion by column j.
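    The row expansion lends itself to a recursive evaluation of det A. A minimal Python sketch (our own; the function name det_by_expansion is an assumption), expanding by the first row and recursing on the minors A1k:

```python
# Recursive first-row (cofactor) expansion of a determinant.

def det_by_expansion(a):
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # Minor A1k: delete row 0 and column k (0-based indices, so the
        # textbook sign (-1)**(1 + k) becomes (-1)**k here).
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det_by_expansion(minor)
    return total

assert det_by_expansion([[2, 7], [3, 8]]) == -5
```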

    As an example of a row expansion, we shall evaluate Vandermonde's determinant:

        Vn = | 1   x1   x1²   · · ·   x1^(n−1) |
             | 1   x2   x2²   · · ·   x2^(n−1) |
             | ·   ·    ·             ·        |
             | 1   xn   xn²   · · ·   xn^(n−1) |     (14)

    If xi = xj for some i ≠ j, the determinant equals 0. Suppose that x1, . . . , xn are distinct. We regard Vn as a function of the variable x = xn, and we regard x1, . . . , xn−1 as constants. Now Vn is a polynomial of degree n − 1 in x, and Vn has the distinct roots x = x1, x = x2, . . . , x = xn−1. Therefore,

        Vn = α(x − x1)(x − x2) · · · (x − xn−1)     (15)

    where α is independent of x. But α is the coefficient of x^(n−1). Expanding the determinant (14) by its last row, we see that α is the cofactor of the element x^(n−1) = xn^(n−1). But this cofactor is the lower-order Vandermonde determinant,

        α = Vn−1

    Direct computation for n = 2 gives

        V2 = x2 − x1

    Now (15) gives

        V3 = V2 (x3 − x1)(x3 − x2) = (x2 − x1)(x3 − x1)(x3 − x2)     (16)

    We conjecture

        Vn = ∏ (xj − xi),  taken over all i < j     (17)

    But if

        Vn−1 = ∏ (xj − xi),  taken over all i < j ≤ n − 1

    then (15) gives

        Vn = Vn−1 (xn − x1) · · · (xn − xn−1) = ∏ (xj − xi),  over all i < j ≤ n

    which establishes (17) by induction. The identity (17) is correct even if the xi are not distinct because in that case both sides are zero.
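    Identity (17) can be spot-checked numerically. In this Python sketch (our own; the sample points 2, 3, 5, 7 are arbitrary), the determinant is evaluated by the permutation definition (9), and the result is compared with the product of all differences xj − xi for i < j:

```python
# Numerical check of identity (17): det of the Vandermonde matrix (14)
# equals the product of all differences xj - xi with i < j.
from itertools import permutations

def det(a):
    """Determinant by the permutation definition (9)."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for q in range(n) for r in range(q) if p[r] > p[q])
        term = (-1) ** inv
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

def vandermonde(xs):
    """Matrix (14): row i is 1, xi, xi^2, ..., xi^(n-1)."""
    n = len(xs)
    return [[x ** j for j in range(n)] for x in xs]

xs = [2, 3, 5, 7]            # arbitrary sample points
product = 1
for j in range(len(xs)):
    for i in range(j):
        product *= xs[j] - xs[i]
assert det(vandermonde(xs)) == product
```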

    PROBLEMS

    1.  Evaluate

    by the three row expansions and by the three column expansions.

    2.  By certain simplifications, evaluate

    3.  Give a simple expression for

    4.  Give a simple expression for

    Note that this determinant, as a function of a, is a quartic polynomial with roots b, c, and d; and the coefficient of a³ is zero.

    1.5   VECTORS AND MATRICES

    A set of m linear equations in n unknowns

        ai1x1 + ai2x2 + · · · + ainxn = yi     (i = 1, . . . , m)     (1)

    may be represented in a compact form

        Ax = y

    Here A is the matrix

        A = (aij)     (i = 1, . . . , m; j = 1, . . . , n)

    and x and y are the column vectors

        x = column (x1, . . . , xn),  y = column (y1, . . . , ym)

    If A is fixed while x is allowed to vary over all column vectors with n components, the equation Ax = y gives a mapping from the space X of all n-component vectors x into the space Y of all m-component vectors y. When we solve the linear equations (1) we find those points x in X which are mapped into a given point y in Y by the matrix A.

    A mapping Ax = y might be followed by another mapping, By = z, from Y into the space Z of p-component column-vectors z. Let B be the matrix

        B = (bij)     (i = 1, . . . , p; j = 1, . . . , m)

    The successive mappings Ax = y, By = z map X into Z. Explicitly,

        yj = Σk ajkxk,  zi = Σj bijyj

    yield

        zi = Σj bij Σk ajkxk

    or, equivalently,

        zi = Σk (Σj bij ajk) xk

    Therefore, the successive mappings Ax = y, By = z yield a mapping Cx = z, where C is the p × n matrix

        C = (cik),  cik = Σj bij ajk     (6)

    In this case we write

        C = BA

    The product BA of the matrices B and A is defined as the matrix C with the components (6).

    Two matrices are called equal if their corresponding components are equal. As a rule, BA ≠ AB. The product is not even defined unless the number of columns of the matrix on the left equals the number of rows of the matrix on the right. Thus, if B is 2 × 3 and A is 3 × 3, the product BA is defined, but AB is undefined. If A and B are both n × n, both products are defined, but in general they are different.

    The inequality AB ≠ BA means that the transformation AB, coming from B followed by A, is different from the transformation BA, coming from A followed by B.

    If Ax = y, By = z, and Dz = w we may write

        w = D(By) = D(B(Ax))

    or

        w = (DB)y = (DB)(Ax)

    Thus, the product of three matrices may be taken in either order:

        DBA = D(BA) = (DB)A

    The i, j component of DBA is the double sum

        Σk Σl dik bkl alj

    The inequality AB ≠ BA states that matrix multiplication is not commutative. The equality (DB)A = D(BA) states that matrix multiplication is associative.

    Sometimes it is convenient to multiply every component of a vector x or a matrix A by the same number α. We then write

        αx = (αxi),  αA = (αaij)

    Obviously, multiplication of a matrix or vector by a number is commutative. Thus, αABx = A(αB)x = AB(αx).
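    The component formula (6) translates directly into code. A minimal Python sketch (our own; the function name matmul is an assumption), which also exhibits the failure of commutativity and the associativity just discussed:

```python
# The product (6): C = BA has components c_ik = sum over j of b_ij * a_jk,
# defined only when the columns of the left factor match the rows of the right.

def matmul(b, a):
    p, m = len(b), len(b[0])
    assert m == len(a), "columns of left factor must equal rows of right factor"
    n = len(a[0])
    return [[sum(b[i][j] * a[j][k] for j in range(m)) for k in range(n)]
            for i in range(p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
# As a rule BA differs from AB ...
assert matmul(B, A) != matmul(A, B)
# ... but the triple product may be taken in either order:
D = [[2, 0], [0, 3]]
assert matmul(matmul(D, B), A) == matmul(D, matmul(B, A))
```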

    PROBLEMS

    1.  Let A and B be the matrices

    Form the products AB and BA.

    2.  Let

    and let

    Express z1 and z2 in terms of x1 and x2. If y = Ax and z = By, what is the product BA?

    3.  If y = Ax for all x, and if z = By for all y, we have shown that z = Cx for all x, where C is defined in (6). Could there be another matrix, C′, with this property? Suppose that z = Cx = C′x for all x; show that C = C′. [Hint: Let x, successively, be the unit vectors column (1, 0, . . . , 0), column (0, 1, 0, . . . ), . . . , column (0, 0, . . . , 1).]

    4.  If y = Ax can be solved for x in terms of y by a formula x = A′y, we call A′ an inverse of A, and we write A′ = A−1. Compute an inverse A−1 for

    5.  If A−1 is an inverse of A, and if B−1 is an inverse of B, what is an inverse for BA expressed as a matrix product?

    6.  Let A be an m × n matrix. Let y = Ax for all x. Define the row-vectors x′ = (x1, . . . , xn) and y′ = (y1, . . . , ym), where x and y are the column-vectors with the same components. How can you construct an n × m matrix, A′, such that y′ = x′A′ for all x′? If

    what is A′?

    7.  Define A′ as in the last problem. Express (AB)′ in terms of A′ and B′.

    1.6   THE INVERSE MATRIX

    Let a system of n equations in n unknowns be written in matrix-vector form : Ax = b. Define the n × n identity matrix

    as the matrix with ones on the main diagonal and zeros off the main diagonal. We can solve Ax = b if we can find an n × n matrix B such that AB = I. A solution is x = Bb because Ax = A(Bb) = (AB)b = Ib = b.

    Theorem 1. Let A be an n × n matrix, with n > 1. Let Aij be the (n − 1) × (n − 1) matrix formed by deleting row i and column j from A. Define the cofactor matrix

        C = (cij),  cij = (−1)^(i+j) det Aij

    Let ∆ = det A. Then

        ACT = CTA = ∆I

    If ∆ ≠ 0, then A has an inverse matrix B = ∆−1CT satisfying

        AB = BA = I

    EXAMPLE 1. Let

    Then

    In this example, det A = 0. We will show later that det A = 0 implies that no matrix B exists such that either AB = I or BA = I.

    EXAMPLE 2. Let

    Then

    Since ∆ ≠ 0, the matrix A has the inverse B = ∆−1CT.
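    Theorem 1 of this section can be sketched in code: form the cofactor matrix C, and when ∆ = det A is nonzero take B = ∆−1CT. A minimal Python illustration (our own; the function names det and inverse are assumptions, and det is evaluated by recursive row expansion from Section 1.4):

```python
# Inverse via the cofactor matrix: B = (1/delta) * C-transpose, delta = det A.

def det(a):
    """Determinant by recursive first-row expansion (0-based indices)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def inverse(a):
    n = len(a)
    delta = det(a)
    if delta == 0:
        raise ValueError("det A = 0: no inverse exists")
    # Cofactor matrix: c_ij = (-1)**(i+j) det A_ij, A_ij deleting row i, col j.
    cof = [[(-1) ** (i + j) *
            det([r[:j] + r[j + 1:] for k, r in enumerate(a) if k != i])
            for j in range(n)] for i in range(n)]
    # B = delta^(-1) * C-transpose
    return [[cof[j][i] / delta for j in range(n)] for i in range(n)]

A = [[2, 7], [3, 8]]
B = inverse(A)
# Check that A B = I.
I = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```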
