SMA1102 Linear Algebra Matrices Updated1


SMA1102 : Linear Algebra

Definition (Linear Algebra) Linear algebra is the study of vectors and linear
functions.

1 Basic Operations of matrices


Definition (Matrix) A rectangular array of numbers is called a matrix.
Am×n = [aij] means matrix A has order m × n (i.e. A has m rows and n
columns), where aij is the entry at the intersection of the ith row and jth
column. We shall consider matrices whose entries aij ∈ ℝ. A may also be
represented in either of the following forms:

A = [ a11 a12 · · · a1n ]       ( a11 a12 · · · a1n )
    [ a21 a22 · · · a2n ]  or   ( a21 a22 · · · a2n )
    [  ·    ·   · ·  ·  ]       (  ·    ·   · ·  ·  )
    [ am1 am2 · · · amn ]       ( am1 am2 · · · amn )

Example If A = [ 2 3 0 ]
               [ 8 1 9 ]
               [ 7 6 5 ]
then a11 = 2, a12 = 3, a13 = 0, a21 = 8, a22 = 1, a23 = 9, a31 = 7,
a32 = 6, a33 = 5.

Definition (Column vector)- A matrix having only one column is called a
column vector.

Definition (Row vector)- A matrix with only one row is called a row
vector.

Definition (Equality of two matrices)- Two matrices A = [aij] and B = [bij]
having the same order m × n are equal if aij = bij for each i = 1, 2, ..., m
and j = 1, 2, ..., n.

Definition (Zero-matrix or null matrix)- A matrix in which each entry is zero
is called a zero-matrix, denoted by O. e.g.

O2×2 = [ 0 0 ]   and   O2×3 = [ 0 0 0 ]
       [ 0 0 ]                [ 0 0 0 ]
Definition (Square matrix)- A matrix that has the same number of rows as
columns is called a square matrix. A square matrix is said to have order n if
it is an n × n matrix.

Definition(Diagonal entries of square matrix A)- The entries a11 , a22 , ..., ann of
an n × n square matrix A = [aij ] are called the diagonal entries (the principal
diagonal) of A.

Definition (Diagonal matrix)- A square matrix A = [aij] is said to be a
diagonal matrix if aij = 0 for i ≠ j. In other words, the non-zero entries
appear only on the principal diagonal. e.g.

[ 4 0 ]
[ 0 1 ]

A diagonal matrix D of order n with diagonal entries d1, d2, ..., dn is
denoted by D = diag(d1, d2, ..., dn). If d1 = d2 = · · · = dn = d, then the
diagonal matrix D is called a scalar matrix.

Definition (Identity matrix)- A scalar matrix A of order n is called an
identity matrix if d = 1. This matrix is denoted by In.

e.g. I2 = [ 1 0 ]   and   I3 = [ 1 0 0 ]
          [ 0 1 ]              [ 0 1 0 ]
                               [ 0 0 1 ]
The subscript n is suppressed in case the order is clear from the context or
if no confusion arises.

Definition (Triangular matrix)- A square matrix A = [aij] is said to be an
upper triangular matrix if aij = 0 for i > j. A square matrix A = [aij] is
said to be a lower triangular matrix if aij = 0 for i < j. A square matrix A
is said to be triangular if it is an upper or a lower triangular matrix.

e.g. upper triangular: [ 1 1 6  8 ? ]
Correction:
e.g. upper triangular: [ 1 6  8 ]   and lower triangular: [  1 0 0 ]
                       [ 0 1 −1 ]                         [  5 1 0 ]
                       [ 0 0  2 ]                         [ −2 3 1 ]
Definition (Trace of a matrix)- If A is a square matrix, then the trace of A,
denoted by tr(A), is defined to be the sum of the entries on the main diagonal
of A. The trace of A is undefined if A is not a square matrix.

e.g. If A = [ 1 6  8 ]
            [ 0 1 −1 ]
            [ 0 0  2 ]
then tr(A) = 1 + 1 + 2 = 4.
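Computing the trace is a one-liner once the squareness check is in place; a minimal Python sketch using nested lists (the function name `trace` is illustrative, not from the notes):

```python
def trace(A):
    """Sum of the diagonal entries a_ii of a square matrix A."""
    if any(len(row) != len(A) for row in A):
        raise ValueError("trace is undefined for non-square matrices")
    return sum(A[i][i] for i in range(len(A)))

# The matrix from the example above
A = [[1, 6, 8],
     [0, 1, -1],
     [0, 0, 2]]
print(trace(A))  # 4
```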

2 Operations on matrices
Definition (Transpose of a matrix)- The transpose of an m × n matrix
A = [aij] is the n × m matrix B = [bij] with bij = aji for 1 ≤ i ≤ m and
1 ≤ j ≤ n. The transpose of A is denoted by At or AT.

e.g. If A = [ −1  2  4 ]  then At = [ −1  7 ]
            [  7 10 15 ]            [  2 10 ]
                                    [  4 15 ]

Thus, the transpose of a row vector is a column vector and vice-versa.

Theorem For any matrix A, (At )t = A.

Proof: Let A = [aij ], At = [bij ] and (At )t = [cij ]. Then, the definition of
transpose gives cij = bji = aij for all i, j and the result follows.
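Both the definition and the theorem can be checked mechanically; a small Python sketch using nested lists (the helper name `transpose` is my own):

```python
def transpose(A):
    """Return A^t: entry (i, j) of the result is entry (j, i) of A."""
    m, n = len(A), len(A[0])
    return [[A[i][j] for i in range(m)] for j in range(n)]

# The matrix from the transpose example above
A = [[-1, 2, 4],
     [7, 10, 15]]
At = transpose(A)
print(At)                  # [[-1, 7], [2, 10], [4, 15]]
assert transpose(At) == A  # the theorem (A^t)^t = A
```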

Definition (Addition of matrices)- Let A = [aij] and B = [bij] be two m × n
matrices. Then the sum A + B is defined to be the matrix C = [cij] with
cij = aij + bij.

N.B. We define the sum of two matrices only when the orders of the two
matrices are the same.

Definition (Multiplying a matrix by a scalar)- Let A = [aij] be an m × n
matrix. Then for any element k ∈ ℝ, we define kA = [kaij].

e.g. If A = [ 4 2 8 ]  and k = −2, then −2A = [ −8 −4 −16 ]
            [ 1 0 5 ]                         [ −2  0 −10 ]
Theorem Let A, B and C be matrices of order m × n, and let k, l ∈ ℝ. Then
1. A + B = B + A (Commutativity).
2. (A + B) + C = A + (B + C) (Associativity).
3. k(lA) = (kl)A.
4. (k + l)A = kA + lA.
5. k(B + C) = kB + kC.
6. k(B − C) = kB − kC.
7. (k − l)C = kC − lC.
8. k(lC) = (kl)C.
Proof for 1.
Let A = [aij ] and B = [bij ]. Then
A+B = [aij ] + [bij ]
= [aij + bij ]
= [bij + aij ]
= [bij ] + [aij ]
= B + A.
(Since real numbers commute).
Proofs for 2 − 8 are left as homework.
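Matrix addition and scalar multiplication work entrywise, so properties such as 1 and 4 can be verified on a small example; a short Python sketch (the helper names `add` and `scale` are illustrative):

```python
def add(A, B):
    """Entrywise sum of two matrices of the same order."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[4, 2, 8], [1, 0, 5]]
B = [[1, -1, 0], [2, 3, 4]]
assert add(A, B) == add(B, A)                         # property 1
assert scale(-2, A) == [[-8, -4, -16], [-2, 0, -10]]  # scalar example above
assert add(scale(2, A), scale(3, A)) == scale(5, A)   # property 4
```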

Definition (Multiplication of matrices) Let A = [aij] be an m × n matrix and
B = [bij] be an n × r matrix. The product AB is a matrix C = [cij] of order
m × r, with

cij = Σ_{k=1}^{n} aik bkj = ai1 b1j + ai2 b2j + · · · + ain bnj.

That is, (AB)ij is obtained by multiplying the ith row of A entry-by-entry
with the jth column of B and summing: if the ith row of Am×n is
(ai1, ai2, ..., ain) and the jth column of Bn×r is (b1j, b2j, ..., bnj), then
AB = [(AB)ij]m×r and (AB)ij = ai1 b1j + ai2 b2j + · · · + ain bnj. Observe
that the product AB is defined if and only if

the number of columns of A = the number of rows of B.

     
e.g. Let A = [  5 3 ] , B = [ −1  0  5 4 ]  and  AB = [ 1 30  16 41 ]
             [ −2 1 ]       [  2 10 −3 7 ]            [ 4 10 −13 −1 ]
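The row-by-column rule translates directly into a nested comprehension; a minimal Python sketch (`matmul` is an illustrative name, and the matrices are small enough to check by hand):

```python
def matmul(A, B):
    """Product of an m×n matrix A and an n×r matrix B: c_ij = sum_k a_ik b_kj."""
    m, n, r = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[5, 3], [-2, 1]]
B = [[-1, 0, 5, 4], [2, 10, -3, 7]]
print(matmul(A, B))  # [[1, 30, 16, 41], [4, 10, -13, -1]]
```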
Theorem Assuming that the sizes of the matrices are such that the indicated
operations can be performed, the following rules of matrix arithmetic are valid:
1. A(BC) = (AB)C (Associative law for multiplication)
2. A(B + C) = AB + AC ( Left distributive law)
3. (B + C)A = BA + CA (Right distributive law)
4. A(B − C) = AB − AC
5. (B − C)A = BA − CA
6. k(BC) = (kB)C = B(kC)
Proof for 2.
We are required to show that A(B + C) and AB + AC have the same size and
that corresponding entries are equal. To form A(B + C), the matrices B and
C must have the same size, say m × n, and the matrix A must then have m
columns, so its size must be of the form r × m. This makes A(B + C) an r × n
matrix. It follows that AB + AC is also an r × n matrix and, consequently,
A(B + C) and AB + AC have the same size.

Suppose that A = [aij], B = [bij], and C = [cij]. We want to show that cor-
responding entries of A(B + C) and AB + AC are equal; that is,

[A(B + C)]ij = [AB + AC]ij

for all values of i and j. But from the definitions of matrix addition and matrix
multiplication we have

[A(B + C)]ij = ai1 (b1j + c1j ) + ai2 (b2j + c2j ) + · · · + aim (bmj + cmj )
= (ai1 b1j + ai2 b2j + · · · + aim bmj ) + (ai1 c1j + ai2 c2j + · · · + aim cmj )
= [AB]ij + [AC]ij
= [AB + AC]ij

Commutativity of matrix product Two square matrices A and B are said to
commute if AB = BA.

Remark: Note that if A is a square matrix of order n and if B is a scalar
matrix of order n, then AB = BA. In general, the matrix product is not
commutative. For example, consider

A = [ 1 1 ]  and  B = [ 1 0 ]
    [ 0 0 ]           [ 1 0 ]

The matrix product,

AB = [ 2 0 ]  ≠  [ 1 1 ] = BA.
     [ 0 0 ]     [ 1 1 ]

NB Matrices AB and BA need not be equal for the following reasons:

1. AB may be defined whereas BA is undefined, e.g. for matrices A2×3 and
B3×4.
2. AB and BA are both defined but have different sizes, e.g. for matrices
A2×3 and B3×2.
3. AB and BA are defined and have the same size but AB ≠ BA because
corresponding entries of AB and BA are not equal.
Definition(Inverse of a matrix) Let A be a square matrix of order n.

1. A square matrix B is said to be a left inverse of matrix A if BA = In .


2. A square matrix C is said to be a right inverse of matrix A if AC = In .
3. A matrix A is said to be invertible (or is said to have an inverse) if there
exists a matrix B such that AB = BA = In .

Lemma Let A be an n × n matrix. Suppose that there exist n × n matrices B


and C such that AB = In and CA = In , then B = C.
Proof Note that
C = CIn = C(AB) = (CA)B = In B = B.

Remarks:
1. From the above lemma, we observe that if a matrix A is invertible, then
the inverse is unique.
2. As the inverse of matrix A is unique, we denote it by A−1 . That is,
AA−1 = A−1 A = I.
Theorem If B and C are both inverses of the matrix A, then B = C.

Proof Since B is an inverse of A, we have BA = I. Multiplying both sides


on the right by C gives (BA)C = IC = C. But (BA)C = B(AC) = BI = B,
so that C = B.

Theorem If A and B are invertible matrices of the same size, then AB is
invertible and (AB)−1 = B −1 A−1 .
Proof If we can show that (AB)(B −1 A−1 ) = (B −1 A−1 )(AB) = I, then we will
have simultaneously shown that the matrix AB is invertible and that
(AB)−1 = B −1 A−1 .
But (AB)(B −1 A−1 ) = A(BB −1 )A−1 = AIA−1 = AA−1 = I. A similar
argument shows that (B −1 A−1 )(AB) = I.
NB A product of any number of invertible matrices is invertible, and the inverse
of the product is the product of the inverses in the reverse order.
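The identity (AB)−1 = B−1A−1 is easy to confirm numerically, assuming NumPy is available; note that the reversed order matters:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])

lhs = np.linalg.inv(A @ B)                 # (AB)^{-1}
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # B^{-1} A^{-1}
assert np.allclose(lhs, rhs)               # (AB)^{-1} = B^{-1} A^{-1}
# The same-order product A^{-1} B^{-1} generally differs:
assert not np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B))
```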

3 Elementary Row Operations (Gauss Elimination)
Definition (Elementary Row Operations)- Let A be an m × n matrix. Then the
elementary row operations are defined as follows:
1. Rij (or Ri ↔ Rj): Interchange the ith and the jth rows of A.
2. For c ≠ 0, cRk: Multiply the kth row of A by c.
3. For c ≠ 0, cRi + Rj → Rj: Replace the jth row of A by the jth row of A
plus c times the ith row of A.
Definition(Row Equivalent matrices)- Two matrices are said to be row-equivalent
if one can be obtained from the other by a finite number of elementary row
operations.
Definition A matrix C is said to be in row echelon form if
1. the rows consisting entirely of zeros appear after the non-zero rows,
2. the first non-zero entry in a non-zero row is 1. This entry is called the
leading term or a leading 1. The column containing this term is called the
leading column.
3. In any two successive non-zero rows, the leading 1 in the lower row occurs
further to the right than the leading 1 in the higher row.

For example, the matrices

[ 0 1 −2 4 ]       [ 1 1 0 2 3 ]
[ 0 0  1 1 ]  and  [ 0 0 0 1 8 ]
[ 0 0  0 0 ]       [ 0 0 0 0 1 ]

are in row-echelon form, whereas the matrices

[ 0 1 4 2 ]   [ 1 1 0 7  9 ]       [ 1 1 0 2 3 ]
[ 0 0 0 0 ] , [ 0 0 0 1 −6 ]  and  [ 0 0 0 0 1 ]
[ 0 0 1 1 ]   [ 0 0 0 0  3 ]       [ 0 0 0 1 8 ]

are not in row-echelon form.

Definition (Row-reduced echelon form)- A matrix C is said to be in row-
reduced echelon form or reduced row echelon form if
1. C is already in row echelon form;
2. every leading column (a column containing a leading 1) has zeros in all
its other entries.
A matrix which is in row-reduced echelon form is also called a row-reduced
echelon matrix.

Row rank of a matrix- Let C be the row-reduced echelon form of a matrix A.
The number of non-zero rows in C is called the row-rank of A.

Example Determine the row-rank of A = [ 2 3 1 2 ]
                                      [ 1 2 1 1 ]
                                      [ 1 1 2 1 ]
Solution
• Row reduce matrix A to obtain matrix C, the row-reduced echelon form of A.
• Count the number of non-zero rows in matrix C.
• Perform the following row operations on A:

[ 2 3 1 2 ]  R1↔R3  [ 1 1 2 1 ]  (−1)R1+R2→R2  [ 1 1  2 1 ]  (−2)R1+R3→R3  [ 1 1  2 1 ]
[ 1 2 1 1 ]    →    [ 1 2 1 1 ]       →        [ 0 1 −1 0 ]       →        [ 0 1 −1 0 ]
[ 1 1 2 1 ]         [ 2 3 1 2 ]                [ 2 3  1 2 ]                [ 0 1 −3 0 ]

(−1)R2+R3→R3  [ 1 1  2 1 ]  (−1/2)R3→R3  [ 1 1  2 1 ]  (−1)R2+R1→R1  [ 1 0  3 1 ]
     →        [ 0 1 −1 0 ]       →       [ 0 1 −1 0 ]       →        [ 0 1 −1 0 ]
              [ 0 0 −2 0 ]               [ 0 0  1 0 ]                [ 0 0  1 0 ]

R3+R2→R2  [ 1 0 3 1 ]  (−3)R3+R1→R1  [ 1 0 0 1 ]
    →     [ 0 1 0 0 ]       →        [ 0 1 0 0 ]
          [ 0 0 1 0 ]                [ 0 0 1 0 ]

Therefore the row-rank of A = 3.
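The row-rank of the example can be double-checked numerically, assuming NumPy is available (`matrix_rank` works via the SVD rather than row reduction, but gives the same answer here):

```python
import numpy as np

A = np.array([[2, 3, 1, 2],
              [1, 2, 1, 1],
              [1, 1, 2, 1]])
print(np.linalg.matrix_rank(A))  # 3
```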

Definition(Linear system) A system of m linear equations in n unknowns


x1 , x2 , · · · , xn is a set of equations of the form,

a11 x1 + a12 x2 + · · · + a1n xn = b1 ,


a21 x1 + a22 x2 + · · · + a2n xn = b2 ,
··························· = ···, (1)
am1 x1 + am2 x2 + · · · + amn xn = bm ,

where for 1 ≤ i ≤ m and 1 ≤ j ≤ n, aij, bi ∈ ℝ.
Linear system (1) is called homogeneous if b1 = b2 = · · · = bm = 0 and non-
homogeneous otherwise. The above system may be rewritten in the form
Ax = b, where

A = [ a11 a12 · · · a1n ]    x = [ x1 ]    and   b = [ b1 ]
    [ a21 a22 · · · a2n ]        [ x2 ]              [ b2 ]
    [  ·    ·   · ·  ·  ]        [  · ]              [  · ]
    [ am1 am2 · · · amn ]        [ xn ]              [ bm ]

Matrix A is called the coefficient matrix and matrix [A|b] is called the
augmented matrix of the linear system (1).

Definition (Solution of system (1))- A solution (or a particular solution) of
the system (1) is a list of values for the unknowns or, equivalently, a vector
x ∈ ℝn, which is a solution of each of the equations in the system. The set
of all solutions of the system is called the solution set or the general
solution of the system.

The system (1) of linear equations is said to be consistent if it has one or


more solutions, and it is said to be inconsistent if it has no solution.

Example

1. Solve the following systems:


(a)

x1 + x2 − 2x3 + 4x4 = 5,
2x1 + 2x2 − 3x3 + x4 = 3,
3x1 + 3x2 − 4x3 − 2x4 = 1.

(b)

x1 + x2 − 2x3 + 3x4 = 4,
2x1 + 3x2 + 3x3 − x4 = 3,
5x1 + 7x2 + 4x3 + x4 = 5.

(c)

x + 2y + z = 3,
2x + 5y − z = −4,
3x − 2y − z = 5.

Solution

(a)
[ 1 1 −2  4 5 ]  (−2)R1+R2→R2  [ 1 1 −2  4  5 ]  (−3)R1+R3→R3  [ 1 1 −2   4   5 ]
[ 2 2 −3  1 3 ]       →        [ 0 0  1 −7 −7 ]       →        [ 0 0  1  −7  −7 ]
[ 3 3 −4 −2 1 ]                [ 3 3 −4 −2  1 ]                [ 0 0  2 −14 −14 ]

(−2)R2+R3→R3  [ 1 1 −2  4  5 ]  2R2+R1→R1  [ 1 1 0 −10 −9 ]
     →        [ 0 0  1 −7 −7 ]      →      [ 0 0 1  −7 −7 ]
              [ 0 0  0  0  0 ]             [ 0 0 0   0  0 ]

Hence
x1 + x2 − 10x4 = −9,
x3 − 7x4 = −7.
Therefore,
x1 = −9 − x2 + 10x4,
x3 = −7 + 7x4.
Hence x2 and x4 are free variables.
   
(b)
[ 1 1 −2  3 4 ]  (−2)R1+R2→R2  [ 1 1 −2  3  4 ]  (−5)R1+R3→R3  [ 1 1 −2   3   4 ]
[ 2 3  3 −1 3 ]       →        [ 0 1  7 −7 −5 ]       →        [ 0 1  7  −7  −5 ]
[ 5 7  4  1 5 ]                [ 5 7  4  1  5 ]                [ 0 2 14 −14 −15 ]

(−2)R2+R3→R3  [ 1 1 −2  3  4 ]
     →        [ 0 1  7 −7 −5 ]
              [ 0 0  0  0 −5 ]

Therefore from the third row,

0x1 + 0x2 + 0x3 + 0x4 = −5.

Hence the system has no solution.
   
(c)
[ 1  2  1  3 ]  (−2)R1+R2→R2  [ 1  2  1   3 ]  (−3)R1+R3→R3  [ 1  2  1   3 ]
[ 2  5 −1 −4 ]       →        [ 0  1 −3 −10 ]       →        [ 0  1 −3 −10 ]
[ 3 −2 −1  5 ]                [ 3 −2 −1   5 ]                [ 0 −8 −4  −4 ]

8R2+R3→R3  [ 1 2   1   3 ]  (−1/28)R3→R3  [ 1 2  1   3 ]  3R3+R2→R2  [ 1 2 1  3 ]
    →      [ 0 1  −3 −10 ]       →        [ 0 1 −3 −10 ]      →      [ 0 1 0 −1 ]
           [ 0 0 −28 −84 ]                [ 0 0  1   3 ]             [ 0 0 1  3 ]

(−1)R3+R1→R1  [ 1 2 0  0 ]  (−2)R2+R1→R1  [ 1 0 0  2 ]
     →        [ 0 1 0 −1 ]       →        [ 0 1 0 −1 ]
              [ 0 0 1  3 ]                [ 0 0 1  3 ]

Thus, x = 2, y = −1, z = 3, a unique solution.
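System (c) can also be checked numerically, assuming NumPy is available:

```python
import numpy as np

# System (c): coefficient matrix and right-hand side
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 5.0, -1.0],
              [3.0, -2.0, -1.0]])
b = np.array([3.0, -4.0, 5.0])

x = np.linalg.solve(A, b)
print(x)  # approximately [2, -1, 3]
```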

Theorem Consider a system of linear equations in n unknowns with augmented


matrix M = [A|b]. Then,

1. The system has a solution if and only if rank(A) = rank(M).
2. The solution is unique if and only if rank(A) = rank(M) = n.
The three possibilities
Possibility 1: rank(A) < rank[Ab]

e.g. [ 1 1 0 | 1 ]
     [ 0 1 0 | 1 ]
     [ 0 0 0 | 1 ]

therefore rank(A) = 2 and rank[Ab] = 3. Hence rank(A) < rank[Ab]. From the
third row,

0x1 + 0x2 + 0x3 = 1,

which can never be true. Thus when rank(A) < rank[Ab], the linear system has
no solution at all.

Possibility 2: rank(A) = rank[Ab] = number of unknowns

e.g. [ 1 0 0 | 1 ]
     [ 0 1 0 | 1 ]
     [ 0 0 1 | 1 ]

therefore rank(A) = rank[Ab] = 3 = number of unknowns. There are no free
variables and there is exactly one solution.

Possibility 3: rank(A) = rank[Ab] < number of unknowns

e.g. [ 1 1 1 | 1 ]
     [ 0 1 1 | 1 ]
     [ 0 0 0 | 0 ]

therefore rank[A] = rank[Ab] = 2 < number of unknowns = 3. There are 3 − 2 = 1
free variables (i.e. 1 free variable). Hence the linear system has infinitely
many solutions. In this case the three planes intersect in a line, since there
is one free variable. If there are two free variables, the three planes
coincide.

Example
Consider the system of equations

x1 + x2 + 2x3 = q,
x2 + x3 + 2x4 = 0,
x1 + x2 + 3x3 + 3x4 = 0,
2x2 + 5x3 + px4 = 3.

For which values of p and q does the system have


1. no solutions,

2. a unique solution,
3. exactly two solutions,
4. more than two solutions?

Solution
[ 1 1 2 0 | q ]  (−1)R1+R3→R3  [ 1 1 2 0 |  q ]  (−2)R2+R4→R4  [ 1 1 2  0  |  q ]
[ 0 1 1 2 | 0 ]       →        [ 0 1 1 2 |  0 ]       →        [ 0 1 1  2  |  0 ]
[ 1 1 3 3 | 0 ]                [ 0 0 1 3 | −q ]                [ 0 0 1  3  | −q ]
[ 0 2 5 p | 3 ]                [ 0 2 5 p |  3 ]                [ 0 0 3 p−4 |  3 ]

(−3)R3+R4→R4  [ 1 1 2   0  |   q  ]
     →        [ 0 1 1   2  |   0  ]
              [ 0 0 1   3  |  −q  ]
              [ 0 0 0 p−13 | 3+3q ]

rank[A] = rank[Ab] = 4 whenever p ≠ 13. If p = 13, rank[A] = 3. In this case,
rank[Ab] = 3 if 3 + 3q = 0 and rank[Ab] = 4 if 3 + 3q ≠ 0. There

1. are no solutions if p = 13, q ≠ −1,
2. is exactly one solution if p ≠ 13,
3. are NEVER exactly two solutions for any linear system of equations,
4. are infinitely many solutions if p = 13, q = −1.
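The rank analysis can be spot-checked numerically, assuming NumPy is available (the helper `ranks` is an illustrative name):

```python
import numpy as np

def ranks(p, q):
    """Return (rank(A), rank([A|b])) for the system in the example."""
    A = np.array([[1, 1, 2, 0],
                  [0, 1, 1, 2],
                  [1, 1, 3, 3],
                  [0, 2, 5, p]], dtype=float)
    b = np.array([[q], [0], [0], [3]], dtype=float)
    M = np.hstack([A, b])
    return np.linalg.matrix_rank(A), np.linalg.matrix_rank(M)

assert ranks(13, 5) == (3, 4)    # p = 13, q != -1: no solutions
assert ranks(2, 7) == (4, 4)     # p != 13: exactly one solution
assert ranks(13, -1) == (3, 3)   # p = 13, q = -1: infinitely many solutions
```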

4 Determinants
Each n-square matrix A = [aij] is assigned a special scalar called the
determinant of A, denoted by det(A) or |A| or

| a11 a12 · · · a1n |
| a21 a22 · · · a2n |
| ···  ···  ··· ··· |
| an1 an2 · · · ann |

Determinants of order 1 and 2


Determinants of order 1 and 2 are defined as follows: |a11| = a11 and

| a11 a12 |
| a21 a22 | = a11 a22 − a12 a21.

Example
1. det(100) = 100, det(k − 5) = k − 5.
2. |  8 5 |
   | −1 0 | = (8)(0) − (5)(−1) = 5.

Application to linear equations
Consider two linear equations in two unknowns, x and y;

a1 x + b1 y = c1 ,
a2 x + b2 y = c2 .

Let D = a1 b2 − a2 b1, the determinant of the matrix of coefficients. The
system has a unique solution if and only if D ≠ 0. Thus, if D ≠ 0,

x = Nx/D, where Nx = | c1 b1 |, so that x = (b2 c1 − b1 c2)/(a1 b2 − a2 b1);
                     | c2 b2 |

y = Ny/D, where Ny = | a1 c1 |, so that y = (a1 c2 − c1 a2)/(a1 b2 − a2 b1).
                     | a2 c2 |

On the other hand, if D = 0, then the system may have no solution or more
than one solution.

Example
Solve the following system by determinants:

2x − 3y = −1,
4x + 7y = −1.

Solution

D = | 2 −3 | = 2(7) − (4)(−3) = 26,
    | 4  7 |

x = (1/26) | −1 −3 | = (−7 − 3)/26 = −10/26 = −5/13,
           | −1  7 |

y = (1/26) | 2 −1 | = (−2 + 4)/26 = 2/26 = 1/13.
           | 4 −1 |
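Cramer's rule for the 2 × 2 case translates directly to code; a minimal sketch using exact fractions (the helper name `cramer_2x2` is my own):

```python
from fractions import Fraction

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1 x + b1 y = c1, a2 x + b2 y = c2 by determinants (D != 0)."""
    D = a1 * b2 - a2 * b1
    if D == 0:
        raise ValueError("D = 0: no unique solution")
    x = Fraction(b2 * c1 - b1 * c2, D)   # Nx / D
    y = Fraction(a1 * c2 - c1 * a2, D)   # Ny / D
    return x, y

# The example above: 2x - 3y = -1, 4x + 7y = -1
x, y = cramer_2x2(2, -3, -1, 4, 7, -1)
print(x, y)  # -5/13 1/13
```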
Determinant of order 3
The determinant of the 3 × 3 matrix A = [aij] may be written as

det(A) = a11 | a22 a23 | − a12 | a21 a23 | + a13 | a21 a22 |
             | a32 a33 |       | a31 a33 |       | a31 a32 |
       = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)
       = a11 a22 a33 − a11 a23 a32 − a12 a21 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31.

Example

| 1  5  8 |
| 3 −1  4 | = 1 | −1  4 | − 5 | 3  4 | + 8 | 3 −1 |
| 5  0 −2 |     |  0 −2 |     | 5 −2 |     | 5  0 |
            = 1(2 − 0) − 5(−6 − 20) + 8(0 + 5)
            = 2 + 130 + 40
            = 172.
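The first-row expansion can be coded directly; a minimal sketch (the helper names `det2` and `det3` are illustrative):

```python
def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Expansion of a 3x3 determinant along the first row."""
    minor = lambda j: [[row[k] for k in range(3) if k != j] for row in m[1:]]
    return sum((-1) ** j * m[0][j] * det2(minor(j)) for j in range(3))

# The matrix from the example above
A = [[1, 5, 8],
     [3, -1, 4],
     [5, 0, -2]]
print(det3(A))  # 172
```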

Properties of determinants
Theorem 1 : The determinant of a matrix A and its transpose AT are equal;
that is, |A| = |AT |.
Theorem 2: Let A be a square matrix.
1. If A has a row (column) of zeros, then |A| = 0.
2. If A has two identical rows (columns), then |A| = 0.
3. If A is triangular, then |A| = product of diagonal elements. Thus, in par-
ticular, |I| = 1, where I is the identity matrix.
Theorem 3: Suppose B is obtained from A by an elementary row (column)
operation.
1. If two rows (columns) of A were interchanged, then |B| = −|A|.
2. If a row (column) of A were multiplied by a scalar k, then |B| = k|A|.
3. If a multiple of a row (column) of A were added to another row (column)
of A, then |B| = |A|.
Major properties of Determinants
Theorem 4: The determinant of a product of two matrices A and B is the
product of their determinants; that is det(AB) = det(A)det(B).
Theorem 5: Let A be a square matrix. Then the following are equivalent:
1. A is invertible; that is, A has an inverse A−1.
2. AX = 0 has only the zero solution.
3. The determinant of A is not zero; that is, |A| ≠ 0.
Remarks A non-singular matrix A is variously defined as an invertible matrix,
or as a matrix for which AX = 0 has only the zero solution. Theorem 5 shows
that all such definitions are equivalent.

Minors and cofactors


Consider an n-square matrix A = [aij]. Let Mij denote the (n−1)-square sub-
matrix of A obtained by deleting its ith row and jth column. The determinant
|Mij| is called the minor of the element aij of A, and we define the cofactor
of aij, denoted by Aij, to be the “signed” minor: Aij = (−1)^(i+j) |Mij|,
where the signs (−1)^(i+j) follow the chessboard pattern

[ + − + − · · · ]
[ − + − + · · · ]
[ + − + − · · · ]
[ · · ·   · · · ]
We emphasize that Mij denotes a matrix whereas Aij denotes a scalar.

Example Let A = [ 2  3 −4 ]
                [ 0 −4  2 ]
                [ 1 −1  5 ]
The cofactors of the nine elements of A are:

A11 = + | −4 2 | = −18,   A12 = − | 0 2 | = 2,    A13 = + | 0 −4 | = 4,
        | −1 5 |                  | 1 5 |                 | 1 −1 |

A21 = − |  3 −4 | = −11,  A22 = + | 2 −4 | = 14,  A23 = − | 2  3 | = 5,
        | −1  5 |                 | 1  5 |                | 1 −1 |

A31 = + |  3 −4 | = −10,  A32 = − | 2 −4 | = −4,  A33 = + | 2  3 | = −8.
        | −4  2 |                 | 0  2 |                | 0 −4 |
   
Example Let A = [ 2 3 4 ]  The matrix of cofactors of A = [ −57  51 −3 ]
                [ 5 6 7 ] .                               [  33 −30  6 ]
                [ 8 9 1 ]                                 [  −3   6 −3 ]
Classical adjoint Consider an n-square matrix A = [aij] over ℝ:

A = [ a11 a12 · · · a1n ]
    [ a21 a22 · · · a2n ]
    [ ···  ···  ··· ··· ]
    [ an1 an2 · · · ann ]

The transpose of the matrix of cofactors of the elements aij of A, denoted by
Adj(A), is called the classical adjoint of A:

Adj(A) = [ A11 A21 · · · An1 ]
         [ A12 A22 · · · An2 ]
         [ ···  ···  ··· ··· ]
         [ A1n A2n · · · Ann ]

e.g. If A = [ 2 3 4 ]  then Adj(A) = [ −57  33 −3 ]
            [ 5 6 7 ]                [  51 −30  6 ]
            [ 8 9 1 ]                [  −3   6 −3 ]
Theorem For any square matrix A, A · Adj(A) = Adj(A) · A = |A| I, where I is
the identity matrix. Thus, if |A| ≠ 0,

A−1 = (1/|A|) Adj(A).

   
e.g. Let A = [ 2 3 4 ]  hence from the previous example, Adj(A) = [ −57  33 −3 ]
             [ 5 6 7 ]                                            [  51 −30  6 ]
             [ 8 9 1 ]                                            [  −3   6 −3 ]
thus,

A(Adj(A)) = [ 2 3 4 ] [ −57  33 −3 ]   [ 27  0  0 ]
            [ 5 6 7 ] [  51 −30  6 ] = [  0 27  0 ] = 27I = |A| I.
            [ 8 9 1 ] [  −3   6 −3 ]   [  0  0 27 ]

Therefore,

A−1 = (1/|A|) Adj(A) = [ −57/27  33/27 −3/27 ]   [ −19/9  11/9 −1/9 ]
                       [  51/27 −30/27  6/27 ] = [  17/9 −10/9  2/9 ]
                       [  −3/27   6/27 −3/27 ]   [  −1/9   2/9 −1/9 ]
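The cofactor/adjugate route to the inverse can be sketched for the 3 × 3 case, using exact arithmetic via Fraction (all helper names here are my own):

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(A, i, j):
    """Delete row i and column j of the 3x3 matrix A."""
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def adjugate(A):
    """Transpose of the matrix of cofactors: Adj(A)[j][i] = (-1)^(i+j) |M_ij|."""
    C = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
         for i in range(3)]
    return [[C[j][i] for j in range(3)] for i in range(3)]

A = [[2, 3, 4], [5, 6, 7], [8, 9, 1]]
adj = adjugate(A)             # [[-57, 33, -3], [51, -30, 6], [-3, 6, -3]]
# (A . Adj(A))[0][0] equals |A|, here 27
detA = sum(A[0][k] * adj[k][0] for k in range(3))
Ainv = [[Fraction(adj[i][j], detA) for j in range(3)] for i in range(3)]
```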
