MATH0047 Lecture Notes 1
Department of Mathematics
UCL
Lecture Notes
2020-2021
Isidoros Strouthos
In this course, we will aim to develop aspects of the theory of matrices and give an introduction to
the theory of vector spaces and the theory of linear maps. During the course, we are also due to see
examples of techniques which illustrate the use of the relevant mathematical objects.
Some of the topics we will cover in the first part of the course, apart from a revision of the basic
theory of matrices, will be row reduction and Gaussian elimination, elementary matrices, and matrix
determinants. The material will allow us to describe techniques used to find matrix inverses and
solve (systems of) linear equations.
The second part of the course will be an introduction to vector spaces and linear maps. It will include
the notions of inner products, subspaces, bases, orthogonality, norm, eigenvalues and eigenvectors,
results such as the Cauchy-Schwarz inequality, and techniques such as the Gram-Schmidt process.
Chapter 1: Matrices and Linear Equations
The mathematical objects we will be working with for most of this course are matrices. So, before
we go on to study some of the tools and techniques we will see in this course, let us review some
of the basic notions related to matrices, and introduce the subscript notation, which allows us to
prove, in a concise way, that matrices have various properties, by letting us “go down” to the level
of matrix entries, or numbers.
In this course, our matrices will always contain complex numbers, so we may refer to them as
complex matrices, even though we shall often simply refer to them as matrices. So, unless otherwise
stated, below, a matrix is necessarily a complex matrix, i.e. it contains complex numbers as (array)
entries (though many of our examples will involve only real matrices, i.e. matrices with real numbers
as entries).
We will denote the set of complex numbers by C and the set of real numbers by R. So, x ∈ C means
that “x is a complex number”, while x ∈ R means that “x is a real number”.
If a matrix A has m rows and n columns, we will say that A has size m × n, or, equivalently, that A
is an m × n matrix, e.g.

A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{pmatrix} is an m × n matrix.
Note the way in which we label the entries in a matrix. For a matrix, A say (we will almost always
use capital letters for matrices), Aij is the number in row i and column j. It is often useful to move
over to this subscript notation in order to show that matrices have certain properties.
1. A = \begin{pmatrix} 1 & 0 & -2 \\ 4 & 1 & 7 \end{pmatrix} is a 2 × 3 matrix, where, for example, A_{11} = 1, A_{13} = −2 and A_{21} = 4.
2. \begin{pmatrix} 1 & 5 & 7 \end{pmatrix} is a 1 × 3 matrix.

3. B = \begin{pmatrix} \frac{1}{3} & 2 \\ 5 & -6 \end{pmatrix} is a 2 × 2 matrix.

4. \begin{pmatrix} 5 \end{pmatrix} is a 1 × 1 matrix.

5. b = \begin{pmatrix} 1 + 5i \\ 0 \\ 3 \\ -2 \end{pmatrix} is a 4 × 1 matrix.

6. x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} is a 3 × 1 matrix, if x_1, x_2, x_3 are complex numbers, i.e. if x_1, x_2, x_3 ∈ C.
Sometimes, as in example 6 above, we will not explicitly give the numbers in the matrix, but will
label them instead (for example, the entries may correspond to the unknowns or variables appearing
in a set of equations that we are trying to solve, using matrices).
Definition 1.2. A matrix M is a square matrix if it has the same number of rows and columns, i.e.
if it is a matrix of size n × n, for some n.
In the examples given above, (only) the matrices given in examples 3 and 4 are square matrices.
As mentioned earlier, we shall almost always use capital letters to denote matrices, except, possibly,
when a matrix is a vector, i.e. when it consists of a single column (or row). In those cases, we may
use a small (lowercase) letter, as in the cases of the matrices given in examples 5 and 6 above.
We would like to refer to two matrices, A and B say, as equal if they have the same size and contain the same numbers in corresponding positions, i.e. if all corresponding entries are equal.
Another way of saying that “all corresponding entries are equal” is to use the subscript notation,
which “picks out” the entries for us.
Definition 1.3. Two matrices, A and B say, are equal if they are of the same size, and if, for all i, j:
Aij = Bij
e.g.
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}

\begin{pmatrix} 1 & 2 & 5 \\ 3 & 4 & 6 \end{pmatrix} ≠ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} (the two matrices do not even have the same size)

\begin{pmatrix} a & 0 \\ 1 & b \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ 1 & -4 \end{pmatrix} precisely when a = 3 and b = −4
If two matrices are of the same size, then we can add them together simply by adding corresponding
entries together.
Another way of expressing the same idea is to say that, assuming that A and B have the same size,
we can form a matrix A + B, such that, for all (relevant) i and j, the number in row i, column j of
A + B is the number in row i, column j of A plus the number in row i, column j of B. Once again,
the subscript ‘entrywise’ notation allows us to write this down in a concise way:
Definition 1.4. Suppose that two matrices, A and B say, have the same size, m × n say. Then, the
sum of A and B, denoted by A + B, is the m × n matrix defined by:
(A + B)ij = Aij + Bij for all i, j (i.e. for all 1 ≤ i ≤ m, 1 ≤ j ≤ n)
A + B = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mn} \end{pmatrix} + \begin{pmatrix} B_{11} & \cdots & B_{1n} \\ \vdots & \ddots & \vdots \\ B_{m1} & \cdots & B_{mn} \end{pmatrix} = \begin{pmatrix} A_{11} + B_{11} & \cdots & A_{1n} + B_{1n} \\ \vdots & \ddots & \vdots \\ A_{m1} + B_{m1} & \cdots & A_{mn} + B_{mn} \end{pmatrix}
For example, if A = \begin{pmatrix} 1 & 8 & -3 \\ 2 & 5 & 7 \end{pmatrix} and B = \begin{pmatrix} 0 & -5 & 1 \\ 1 & 2 & 3 \end{pmatrix}, then:

A + B = \begin{pmatrix} 1 & 3 & -2 \\ 3 & 7 & 10 \end{pmatrix}
We can also multiply a matrix by a constant, a fixed complex number, by simply multiplying each
entry in the matrix by the given number.
Definition 1.5. Suppose that A is a complex matrix, of size m × n say, and c is a fixed complex
number. Then, the scalar multiple cA is the matrix defined (entrywise) by
(cA)ij = c(Aij ) for all i, j
cA = c \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mn} \end{pmatrix} = \begin{pmatrix} cA_{11} & \cdots & cA_{1n} \\ \vdots & \ddots & \vdots \\ cA_{m1} & \cdots & cA_{mn} \end{pmatrix}
For example, if A = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix}, then:

3A = \begin{pmatrix} 3 & 6 \\ 24 & 15 \\ -9 & 21 \end{pmatrix}, −A = \begin{pmatrix} -1 & -2 \\ -8 & -5 \\ 3 & -7 \end{pmatrix}, −2A = \begin{pmatrix} -2 & -4 \\ -16 & -10 \\ 6 & -14 \end{pmatrix}, \frac{1}{2}A = \begin{pmatrix} \frac{1}{2} & 1 \\ 4 & \frac{5}{2} \\ -\frac{3}{2} & \frac{7}{2} \end{pmatrix}
It is also (sometimes) possible to multiply two given matrices together, as you might have seen
before.
If we wish to find the product of matrices A and B, then the number of columns of A must be equal
to the number of rows of B.
This is because of the way in which multiplication is defined. If we want to find the number which
should appear in row i, column j of the product AB, then we must “pair up” row i of A with column
j of B, multiply corresponding entries together and then add the resulting products up. For this to
work “properly”, we must have the same number of entries in each row of A as in each column of
B, i.e. the number of columns of A must be equal to the number of rows of B.
For example, suppose that A is a matrix with 3 columns and B is a matrix with 3 rows.
Let us write down what will appear in row 1, column 2 of the product, i.e. what (AB)12 will be:
(AB)12 = A11 B12 + A12 B22 + A13 B32
Observe that, in each of the “little entry products” above, the column of the entry from A matches
the row of the entry from B, so that we may rewrite the above as:
(AB)_{12} = \sum_{k=1}^{3} A_{1k} B_{k2}
The same idea applies when we wish to find the answer for an entry in any row and column of the
product matrix, not simply for the entry in row 1, column 2. So, we could write, for all i, j:
(AB)_{ij} = \sum_{k=1}^{3} A_{ik} B_{kj}
Notice also that k “goes” from 1 to 3 because, in this example, A has 3 columns (and B has 3 rows).
We are, therefore, led to the following definition (or description) of (general) matrix multiplication:
Definition 1.6. Suppose that A is an m × r matrix and B is an r × n matrix, so that the number of
columns of A is equal to the number of rows of B. Then, we can define the matrix product AB to be
the m × n matrix satisfying:
(AB)_{ij} = \sum_{k=1}^{r} A_{ik} B_{kj} for all i, j (i.e. for all 1 ≤ i ≤ m, 1 ≤ j ≤ n)
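As a brief aside, the defining formula translates almost directly into code. The following is a minimal Python sketch (not part of the notes themselves, and assuming the NumPy library is available) which computes a matrix product entry by entry, exactly as in Definition 1.6:

    import numpy as np

    def matmul(A, B):
        # (AB)_ij = sum over k of A_ik * B_kj, as in Definition 1.6
        m, r = A.shape
        r2, n = B.shape
        assert r == r2, "number of columns of A must equal number of rows of B"
        AB = np.zeros((m, n))
        for i in range(m):          # row index of the product
            for j in range(n):      # column index of the product
                for k in range(r):  # the summation index k = 1, ..., r
                    AB[i, j] += A[i, k] * B[k, j]
        return AB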
For example, if A = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix}, B = \begin{pmatrix} 0 & 2 & 1 \\ 1 & 0 & -4 \end{pmatrix}, C = \begin{pmatrix} 3 & 0 \\ 5 & -2 \end{pmatrix}, then:

AB = \begin{pmatrix} 2 & 2 & -7 \\ 5 & 16 & -12 \\ 7 & -6 & -31 \end{pmatrix}, BA = \begin{pmatrix} 13 & 17 \\ 13 & -26 \end{pmatrix}, AC = \begin{pmatrix} 13 & -4 \\ 49 & -10 \\ 26 & -14 \end{pmatrix}, CB = \begin{pmatrix} 0 & 6 & 3 \\ -2 & 10 & 13 \end{pmatrix}

and C^2 = CC = \begin{pmatrix} 9 & 0 \\ 5 & 4 \end{pmatrix}

whereas the products A^2, B^2, CA, BC are not defined.
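If you would like to check computations such as the ones above, a short sketch along the following lines may be used (again a Python illustration, assuming NumPy; the @ operator performs matrix multiplication):

    import numpy as np

    A = np.array([[1, 2], [8, 5], [-3, 7]])
    B = np.array([[0, 2, 1], [1, 0, -4]])
    C = np.array([[3, 0], [5, -2]])

    print(A @ B)  # the 3 x 3 matrix AB
    print(B @ A)  # the 2 x 2 matrix BA
    print(C @ C)  # C squared
    # B @ C would raise an error: BC is not defined (sizes 2 x 3 and 2 x 2)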
Suppose that A is a square matrix, of size n × n. Then, the sum A11 + A22 + · · · + Ann turns out to
be quite an important feature of the matrix, for some purposes. This number is known as the trace
of the matrix, and can otherwise be described as the sum of numbers on the main diagonal of the
matrix (the main diagonal of an n × n matrix A is the diagonal “running from the top left to the
bottom right” i.e. the one containing the set of entries {A11 , A22 , · · · , Ann }).
Definition 1.7. Let A be a square matrix, of size n × n. Then the trace of A, denoted by tr(A), or
trace(A), is the sum of the entries along the main diagonal:
tr(A) = A_{11} + A_{22} + · · · + A_{nn}, or, equivalently, tr(A) = \sum_{i=1}^{n} A_{ii}
For example, if A = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 5 & 6 \\ 0 & 0 & -4 \end{pmatrix}, then:

tr(A) = 1 + 5 + (−4) = 2
As you might have seen before, we can also “flip” a matrix, in order to obtain its transpose. If we are dealing with a square matrix, we can also imagine the “transpose action” as a “reflection” along the main diagonal. However, the transpose is defined for a matrix of any size: it interchanges rows and columns, and so contains in row i, column j what was “before” in row j, column i.

Definition 1.8. Let A be an m × n matrix. Then, the transpose of A, denoted by A^T, is the n × m matrix defined (entrywise) by:

(A^T)_{ij} = A_{ji} for all i, j
For example:

If A = \begin{pmatrix} 1 & 8 & -3 \\ 2 & 5 & 7 \\ 0 & 0 & 4 \end{pmatrix}, then A^T = \begin{pmatrix} 1 & 2 & 0 \\ 8 & 5 & 0 \\ -3 & 7 & 4 \end{pmatrix}

If B = \begin{pmatrix} 1 & 8 & -3 \\ 2 & 5 & 7 \end{pmatrix}, then B^T = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix}

If C = \begin{pmatrix} 1 & 8 \\ 2 & 5 \\ 0 & 0 \end{pmatrix}, then C^T = \begin{pmatrix} 1 & 2 & 0 \\ 8 & 5 & 0 \end{pmatrix}
Please see exercise 1 on Sample Exercises 1 and exercise 1 on Exercise Set 1 for some more practice
related to the ideas mentioned above.
Before we proceed to give a description of the inverse of a matrix, let us give some results on
matrices, which involve the properties and operations defined above. We will prove a few of these
results by moving over to the subscript ‘entrywise’ setting, and making use of the fact that, once
we are dealing with subscripts and actual numbers, rather than whole matrices, many properties and
operations “work in a simple way”.
At first glance, the proofs below might appear quite complicated. However, using subscripts turns
out to provide a concise way of proving results like the ones that follow, and a quite powerful way
too. In the proofs, most of the steps follow by simply using one of the definitions given above. So,
once you are comfortable with the main definitions given above, most of the steps in the proofs will
hopefully be understandable.
The following are some concepts and techniques that crop up quite often in proofs of results similar
to the ones we are about to see:
· Subscripts appear in most of the definitions above, so it is natural, in a way, to use them
below. In fact, most of the individual steps in the proofs below follow by simply using one of
the definitions above (one which is relevant to what we are dealing with at any step), and then
rearranging our expression in some way to get closer to the desired end.
· A useful tip when “rearranging” is that, once we are at the level of subscripts, we are working
with actual numbers in a matrix, rather than matrices themselves, so have more “freedom”.
For example, we can add numbers in any order we wish:

A_{ij} + B_{ij} = B_{ij} + A_{ij}, (A_{ij} + B_{ij}) + C_{ij} = A_{ij} + (B_{ij} + C_{ij}), etc.
· Another useful “rearranging tip” is that, once we are at the level of subscripts and are therefore
dealing with numbers, if sums also happen to appear (e.g. if we are dealing with matrix
multiplication), we can “do sums” in any order we wish.
So, for example, if C is an m × n matrix:

\sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} = \sum_{j=1}^{n} \sum_{i=1}^{m} C_{ij}
Furthermore, we can suitably rename the ‘counters’ appearing in a sum, e.g. for n × n matrices C, D:

\sum_{k=1}^{n} C_{ik} D_{kj} = \sum_{l=1}^{n} C_{il} D_{lj}
Proposition 1.9. Suppose that A and B are matrices of the same size, m × n say. Then:

A + B = B + A

Proof. To show that A + B is equal to B + A, we will show that their corresponding entries are equal, i.e. that (A + B)_{ij} = (B + A)_{ij} for all i and j. Now, for all 1 ≤ i ≤ m, 1 ≤ j ≤ n:

(A + B)_{ij} = A_{ij} + B_{ij}   by definition of matrix addition; Definition 1.4
            = B_{ij} + A_{ij}   since we are dealing with numbers
            = (B + A)_{ij}      by definition of matrix addition; Definition 1.4

So, since for all i, j: (A + B)_{ij} = (B + A)_{ij}, we deduce that A + B = B + A, as required.

Proposition 1.10. Suppose that A, B and C are matrices of the same size. Then:

A + (B + C) = (A + B) + C

(The proof is similar to the one just given, this time using the fact that we can add numbers in any order we wish.)
We can use the same method to prove many more results related to matrices, e.g.:
Proposition 1.11. Suppose that A is an m × r matrix, and that B and C are r × n matrices. Then:
A(B + C) = AB + AC

Proof. For all i, j:

(A(B + C))_{ij} = \sum_{k=1}^{r} A_{ik} (B + C)_{kj}   by definition of matrix multiplication; Definition 1.6
              = \sum_{k=1}^{r} A_{ik} (B_{kj} + C_{kj})   by definition of matrix addition; Definition 1.4
              = \sum_{k=1}^{r} A_{ik} B_{kj} + \sum_{k=1}^{r} A_{ik} C_{kj}   since we are dealing with numbers
              = (AB)_{ij} + (AC)_{ij}   by definition of matrix multiplication; Definition 1.6
              = (AB + AC)_{ij}   by definition of matrix addition; Definition 1.4

So, since for all i, j: (A(B + C))_{ij} = (AB + AC)_{ij}, we deduce that A(B + C) = AB + AC, as required.
Proposition 1.12. Suppose that A is an r × n matrix, and that B and C are m × r matrices. Then:
(B + C)A = BA + CA
Proposition 1.13. Suppose that A is an m × r matrix, B is an r × s matrix, and C is an s × n matrix. Then:

A(BC) = (AB)C

Proof. Note that we have chosen the sizes of the matrices so that we are allowed to multiply all three together, as A(BC) or (AB)C. Now, for all i, j:

(A(BC))_{ij} = \sum_{k=1}^{r} A_{ik} (BC)_{kj} = \sum_{k=1}^{r} A_{ik} \left( \sum_{l=1}^{s} B_{kl} C_{lj} \right) = \sum_{l=1}^{s} \left( \sum_{k=1}^{r} A_{ik} B_{kl} \right) C_{lj} = \sum_{l=1}^{s} (AB)_{il} C_{lj} = ((AB)C)_{ij}

So, since for all i, j: (A(BC))_{ij} = ((AB)C)_{ij}, we deduce that A(BC) = (AB)C, as required.
Note 1.14. The above result shows that matrix multiplication is associative, i.e. that, when the
matrix products are defined: (AB)C = A(BC).
Meanwhile, Proposition 1.9 and Proposition 1.10 show that matrix addition is commutative and
associative, respectively.
A similar, subscript-based, argument applies to results involving scalar multiples; for example, if A is a matrix and c_1, c_2 are complex numbers, then, for all i, j:

((c_1 + c_2)A)_{ij} = (c_1 + c_2)A_{ij} = c_1 A_{ij} + c_2 A_{ij} = (c_1 A)_{ij} + (c_2 A)_{ij} = (c_1 A + c_2 A)_{ij}

So, since for all i, j: ((c_1 + c_2)A)_{ij} = (c_1 A + c_2 A)_{ij}, we deduce that (c_1 + c_2)A = c_1 A + c_2 A, as required.
We next give some of the basic results related to the transpose and trace.
Proposition 1.19. For any matrix A:
(A^T)^T = A
Proof. For all i, j:

((A^T)^T)_{ij} = (A^T)_{ji}   by definition of transpose; Definition 1.8
             = A_{ij}        by definition of transpose; Definition 1.8

So, since for all i, j: ((A^T)^T)_{ij} = A_{ij}, we deduce that (A^T)^T = A, as required.
Proposition 1.20. If A and B are two matrices of the same size, then
(A + B)^T = A^T + B^T

Proof. For all i, j:

((A + B)^T)_{ij} = (A + B)_{ji} = A_{ji} + B_{ji} = (A^T)_{ij} + (B^T)_{ij} = (A^T + B^T)_{ij}

So, since this holds for all i, j, we deduce that (A + B)^T = A^T + B^T, as required.

Let us now give a result similar to the previous proposition, in the setting of matrix multiplication:
Proposition 1.21. Suppose that A is an m × r matrix and B is an r × n matrix. Then:

(AB)^T = B^T A^T

Proof. For all i, j:

((AB)^T)_{ij} = (AB)_{ji}   by definition of transpose; Definition 1.8
            = \sum_{k=1}^{r} A_{jk} B_{ki}   by definition of matrix multiplication; Definition 1.6
            = \sum_{k=1}^{r} (A^T)_{kj} (B^T)_{ik}   by definition of matrix transpose; Definition 1.8
            = \sum_{k=1}^{r} (B^T)_{ik} (A^T)_{kj}   since we are dealing with numbers
            = (B^T A^T)_{ij}   by definition of matrix multiplication; Definition 1.6

So, since for all i, j: ((AB)^T)_{ij} = (B^T A^T)_{ij}, we deduce that (AB)^T = B^T A^T, as required.
Proposition 1.22. Suppose that A is an m × n matrix and B is an n × m matrix, so that both AB and BA are defined (and square). Then:

tr(AB) = tr(BA)

Proof. Using the definitions of the trace and of matrix multiplication, and interchanging the order of the two (finite) sums:

tr(AB) = \sum_{i=1}^{m} (AB)_{ii} = \sum_{i=1}^{m} \sum_{k=1}^{n} A_{ik} B_{ki} = \sum_{k=1}^{n} \sum_{i=1}^{m} B_{ki} A_{ik} = \sum_{k=1}^{n} (BA)_{kk} = tr(BA)
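As a quick numerical sanity check of this result (a Python/NumPy sketch; the matrices A and B below are arbitrary choices of ours):

    import numpy as np

    A = np.array([[1, 8, -3], [2, 5, 7]])    # 2 x 3
    B = np.array([[0, 1], [-5, 2], [1, 3]])  # 3 x 2
    print(np.trace(A @ B))  # -10
    print(np.trace(B @ A))  # -10, even though AB (2 x 2) and BA (3 x 3) differ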
We now give the definition of a particularly useful concept related to matrices, that of an inverse
matrix.
Before we do so, let us introduce the notation we will use for two special types of matrices:
Definition 1.23. A matrix, of any size, which has the number 0 in each position or entry, is known
as a zero matrix.
Definition 1.24. The identity matrix of size n × n, denoted by I_n, is the square matrix which has the number 1 in each entry on the main diagonal, and the number 0 in every other entry.

For example:

I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
The following result follows easily, and suggests why we use the name ‘identity matrix’:
Proposition 1.25. Consider the m × m and n × n identity matrices, Im and In respectively. Then,
for any m × n matrix A: Im A = A and AIn = A.
For example, if A = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix}:

I_3 A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix} = A

A I_2 = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 8 & 5 \\ -3 & 7 \end{pmatrix} = A
Definition 1.26. Let A be a square matrix, of size n × n say. Then, a matrix B is an inverse of A if:

AB = I_n and BA = I_n

If such a matrix B exists, we say that A is invertible, and we write A^{-1} for the (as we shall see, unique) inverse of A.

Note that, if we are verifying that a matrix has a specific inverse, we must verify that the potential inverse “multiplies to give the identity” on both sides.
Not all matrices have inverses, as we shall see (in more detail) later on in this course. However, the
same matrix cannot have two different inverses:
Proposition 1.27. Suppose that the matrices B and C are inverses for a matrix A. Then: B = C
(i.e. if a matrix has an inverse, then its inverse is unique).
Proof. We assume that B and C are inverse matrices for A. By the definition of an inverse:

AB = BA = I and AC = CA = I

Then, we can show that B = C by using the fact that (BA)C = B(AC) (see Proposition 1.13):

B = BI = B(AC) = (BA)C = IC = C
Proposition 1.28. Suppose that A and B are invertible matrices of the same size. Then, the matrix AB is also invertible, with inverse B^{-1}A^{-1}.

Proof. Using the associativity of matrix multiplication (see Proposition 1.13 and Note 1.14):

(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I

Similarly:

(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I

So, we have shown that (AB)(B^{-1}A^{-1}) = I and that (B^{-1}A^{-1})(AB) = I. Hence, the matrix AB is invertible, with inverse B^{-1}A^{-1}, i.e. (AB)^{-1} = B^{-1}A^{-1}.
Proposition 1.29. Suppose that A_1, · · · , A_n are invertible matrices of the same size. Then, the product matrix A_1 · · · A_n is also invertible, with inverse A_n^{-1} · · · A_1^{-1}.
Exercise 1.30. Prove that, if A is a square matrix satisfying A^2 = 0, then the matrix A − I is invertible, and find the inverse of A − I.
By definition of a matrix inverse, we need to find a matrix, C say, such that (A − I)C = I and also
C(A − I) = I.
Let us first try the matrix A + I as a candidate:

(A − I)(A + I) = A^2 + AI − IA − I^2
             = A^2 + A − A − I
             = A^2 − I
             = −I   since we assume that A^2 = 0
This product is not equal to I, but −I, so let us choose −(A + I) = −A − I instead of A + I above.
This will hopefully “get rid of the minus sign”:
(A − I)(−A − I) = −A^2 − AI + IA + I^2
              = −A^2 − A + A + I
              = −A^2 + I
              = I   since we assume that A^2 = 0

Similarly:

(−A − I)(A − I) = −A^2 + AI − IA + I^2
              = −A^2 + I
              = I   since we assume that A^2 = 0
So, we have shown that (A − I)(−A − I) = I and that (−A − I)(A − I) = I. Hence, the matrix
A − I is invertible, with inverse −A − I.
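As a concrete check of this exercise, we may pick a specific matrix satisfying A^2 = 0 and verify the conclusion numerically (a Python/NumPy sketch, with our own choice of A):

    import numpy as np

    A = np.array([[0, 1], [0, 0]])  # A @ A is the zero matrix, so A^2 = 0
    I = np.eye(2)
    print((A - I) @ (-A - I))  # the 2 x 2 identity matrix
    print((-A - I) @ (A - I))  # the 2 x 2 identity matrix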
For the remainder of this chapter, we will describe a way in which matrices can help us solve,
systematically, simultaneous (linear) equations in any (finite) number of unknowns. While setting
up and describing the structures and ideas involved, we will also be able to prove some more results
related to matrices, and will see that the invertibility of a matrix is a property that plays a key role
when trying to solve linear equations.
Definition 1.31. A complex linear equation is an equation of the form:

a_1 x_1 + · · · + a_n x_n = b

where a_1, · · · , a_n, b are (fixed) complex numbers and x_1, · · · , x_n are the unknowns of the equation.

We commonly also use other letters for the unknowns, e.g. x, y, z instead of x_1, x_2, x_3 etc.
For the remainder of this chapter, we will, in fact, restrict attention to matrices with real numbers
as entries (though much of the theory we will describe “carries over” to the setting of matrices with
complex numbers as entries); we will restrict attention to real linear equations:
Definition 1.32. A real linear equation is an equation of the form:

a_1 x_1 + · · · + a_n x_n = b

where a_1, · · · , a_n, b are (fixed) real numbers and x_1, · · · , x_n are the unknowns of the equation.

For example, the following are real linear equations:

2x_1 + 3x_2 = 0
x + 5y = 0
x_1 + 2x_2 − x_5 = 7
3.1x = 0
(1/2)x_1 + x_3 = 5

whereas the following are not real linear equations:

x_1^2 = 0
x^2 + y^2 = 25
2xy = 0
x = e^y
y = sin x

Observe that linear equations cannot contain powers of unknowns, products of unknowns, or other, more complicated, functions of the unknowns.
We can generalise these ideas to sets of (simultaneous) equations in any number of unknowns. These
are referred to as systems of equations:
Definition 1.34. A system of real linear equations is a finite set of, m say, real linear equations in a
finite number of unknowns, n say:
a_{11} x_1 + · · · + a_{1n} x_n = b_1
⋮
a_{m1} x_1 + · · · + a_{mn} x_n = b_m
where, for each 1 ≤ i ≤ m, 1 ≤ j ≤ n, aij is a real number and bi is a real number, and where
x1 , · · · , xn are the n unknowns of the system of real linear equations.
The crucial observation which allows us to use matrices to describe and solve general systems of
linear equations is that we can express any such system as a matrix equation. For example, the
general system given in Definition 1.34 can be written as:
\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}
When we are dealing with systems of real linear equations, it is possible to have no solutions, one
solution, or infinitely many solutions, as the (hopefully familiar) case of two simultaneous equations
in two unknowns shows.
2x − y = 7          2x − y = 7          2x − y = 7
4x − 2y = 10        3x − 4y = 3         −4x + 2y = −14
In the first case, there are no (real) numbers that we can substitute for x and y, such that both
equations given are satisfied simultaneously. One way to see this is to “subtract 2 times the first
equation from the second equation”, obtaining the new equation:
0 = −4
This is an equation which can never hold, so neither can the original system of equations. We say
that the first system of equations is inconsistent, and that it therefore has no solutions.
In the second case, we can find a solution for this system of equations, and it is, in fact, a unique
solution. For example if we “subtract 4 times the first equation from the second equation”, we obtain
−5x = −25, from which we deduce that, necessarily, x = 5. We may then substitute this into any
one of the two equations, to obtain y = 3. So, the second system of equations has a unique solution,
given by:
x=5
y=3
Finally, in the third case, it is easy to see that the two equations really “depend on one another”, e.g.
if we multiply the first equation through by a factor of −2, we obtain precisely the second equation.
So, in reality, we do not have two “different” equations in two unknowns, but a single equation in
two unknowns, for example:
2x − y = 7
Such an equation has infinitely many solutions (they are all of the points on the line 2x − y = 7).
If we want to find a way of generating all these solutions, we could choose whichever real number
we wish for x say, and then find a corresponding value of y using a rearrangement of the above
equation:
y = 2x − 7
We could also make it more obvious that we are selecting any real number for x before finding y,
by writing out the solutions in the following form:
x=λ
y = 2λ − 7
Essentially, even when we have any number of equations in any number of unknowns, we apply
similar ideas to the ones mentioned above in order to solve them. However, with more than two
equations, it might not be obvious which equations to add or subtract from one another in order to
find one of the unknowns, or whether some of the equations in the system depend on one another.
So, instead of trying to solve more complicated systems by “spotting a good thing to do next”, we
use a systematic method. This method involves matrices, and is known as reduction to reduced row
echelon form or Gauss-Jordan elimination.
To develop the method, let us first list some of the types of operations we may wish to apply to a
system of equations in order to make it simpler:
· We may wish to multiply (through) an equation by a fixed non-zero number, for example in
order to simplify the coefficients. Multiplying by zero would simply remove the equation
from the given system, and so would not help us when trying to find solutions that satisfy that
equation, along with the other ones in the same system.
· We may wish to add or subtract a multiple of one equation to or from another equation, e.g.
as we did in the second case above, in order to eliminate or “remove” one of the unknowns.
· We may wish to permute or interchange two equations, for example to “switch” the positions
of the two equations above. This type of operation becomes especially useful when we wish
to give a method that simplifies a greater number of equations, and, in particular, the method
we will describe below.
Each of these operations is allowed, in the sense that the system of equations that we obtain after we
apply each of the above types of operations has (precisely) the same solutions as the original system
of equations.
Let us now move to the setting of matrices by transforming a system of equations into a matrix
equation, as given above, i.e. by writing a general system of m real equations in n real unknowns:
a_{11} x_1 + · · · + a_{1n} x_n = b_1
⋮
a_{m1} x_1 + · · · + a_{mn} x_n = b_m
in the form
Ax = b
where
A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}
Then, when seeking solutions to the system of equations we started with, we might just as well try
to solve the matrix equation given above.
Now that we have (visually) “separated the coefficients from their unknowns”, we see that each of
the earlier equations has been “encoded” into a row of the above matrix equation, in the sense that,
for the ith equation a_{i1} x_1 + · · · + a_{in} x_n = b_i, the coefficients a_{i1}, · · · , a_{in} now form row i of matrix
A, while the “solution value” on the right, bi , is now the entry in row i of the vector b.
So, if we wish to apply operations like the ones described above when using matrices, we have
to regard the rows as equations, so that the operations apply to rows. Furthermore, so that we do
not have to separately apply the same operation to a row of matrix A as well as to an entry in row
b (since they are both needed to form the equation in position i), we form the augmented matrix
( A | b ), which combines both A and b, and work with this new structure:
( A | b ) = \left(\begin{array}{ccc|c} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & \ddots & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{array}\right)
In this (matrix) setting, the three types of operations we may wish to apply to equations become
elementary row operations:
Definition 1.35. Given any matrix, a real elementary row operation is an operation which can be described in terms of one of the following:

1) Multiply (each entry of) a row of the matrix by a non-zero real number.
2) Add or subtract a (real) multiple of one row of the matrix to or from another row of the matrix.
3) Interchange (permute) two rows of the matrix.

In order to avoid writing out, in words, the description of an operation each time we use it, we introduce some notation:

1) If we multiply row i of a matrix by a non-zero real number λ, we write: R_i → λR_i
2) If, to row i of a matrix, we add λ times row j of the matrix, we write: R_i → R_i + λR_j (then, for subtraction, we may use a negative λ)
3) If we interchange rows i and j of a matrix, we write: R_i ↔ R_j
These operations correspond to the three earlier types of operations that we can “safely” perform
on equations. As it turns out, combinations of these three types of operations can always reduce
a matrix to a relatively simple form, and this allows us to systematically solve any system of real
linear equations (or to show that no solutions exist, if and when this is the case).
The simplified form of the matrix that we aim to arrive at is the reduced row echelon form:
Definition 1.36. A matrix is in reduced row echelon form if it satisfies the following four conditions:
1) If there are any zero rows (rows consisting only of zero entries), then they are the final rows
of the matrix (each zero row present appears below all rows that are not zero rows).
2) For each non-zero row (each row that contains at least one non-zero entry), the first (leftmost)
non-zero entry is a 1 (known as that row’s leading 1).
3) If two consecutive rows are non-zero, then the lower row starts with more zeros than the row
above, i.e. the leading 1 of the lower row is (below and) to the right of the leading 1 of the
row above.
4) Each column that contains a leading 1 has zero entries everywhere else.
For example, the following matrices are in reduced row echelon form:

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & -2 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}

whereas the following matrices are not in reduced row echelon form:

\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & -5 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 2 \end{pmatrix}, \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
It is also possible to have a slightly weaker ‘echelon form’, which only satisfies the first three of the
above properties.
Definition 1.37. A matrix (of any size) is in row echelon form if it satisfies the following three
conditions:
1) If there are any zero rows (rows consisting only of zero entries), then they appear at the bottom
of the matrix.
2) For each non-zero row, the first (leftmost) non-zero entry is a 1 (known as the leading 1 of that
row).
3) If two successive rows are non-zero, then the lower row starts with more zeros than the row
above, i.e. the leading 1 of the lower row is (below and) to the right of the leading 1 of the
row above.
However, we will usually try to transform matrices to the “full” reduced row echelon form, partly
because this will make it even easier for us to work out the solutions of any system of real linear
equations (or allow us to verify that no such solutions exist, if and when this is the case).
Another reason to wish to transform a matrix to reduced row echelon form, rather than simply
row echelon form, is that, even though there are many ways to row reduce a matrix, and many
combinations of elementary row operations that we can use, a given matrix has a unique reduced
row echelon form:
Proposition 1.38. A matrix has a unique reduced row echelon form.
If we apply a sequence of elementary row operations to a matrix, A say, and obtain a matrix A0 say,
it is common to say that the two matrices, A and A0 , are row equivalent. So, the result above may
be rephrased as:
A matrix is row equivalent to a unique matrix that is in reduced row echelon form.
Note that Proposition 1.38 does not say that two different matrices cannot have the same reduced
row echelon form, but, rather, that, if we start with a given matrix, and apply any sequence of row
operations which reduces the matrix to reduced row echelon form, then the matrix we will end up
with will always be the same.
Reducing a matrix to the above form allows us to easily study the original set of real linear equations,
as we shall see in the examples below. The process involves writing down the augmented matrix
corresponding to a system of real linear equations, and then applying elementary row operations to
transform this matrix to reduced row echelon form, possibly using the following strategy:
· At any stage, if there are any rows full of zeros in the matrix, make sure that they appear at
the bottom of the matrix, below any non-zero rows (by, for example, interchanging them with
non-zero rows).
1) Start at the row 1, column 1 position (or entry).
2) Try to use one or more of the elementary row operations to get a 1 in this position (without
using any rows above the one you are currently considering). If this is not possible, try the
same for the next entry (the entry immediately to the right) in the same row.
3) If it is possible to obtain a 1, add or subtract a suitable multiple of the row under consideration
to or from every other row, so as to obtain a zero in every position above and below the 1 (i.e.
so that the column with the leading 1 contains zeros everywhere else).
4) Consider the next row of the matrix, and the “earliest” column that does not contain a leading
1 above it.
5) Repeat steps 2), 3) and 4) until you have reached the end of the matrix (i.e. until after you
have performed the steps above on the bottom right entry of the matrix).
Your matrix should now be in reduced row echelon form.
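As an aside, a hand reduction can be checked with computer algebra: for instance, the SymPy library provides a rref() method. A minimal sketch, applied to the augmented matrix of the second of the three earlier systems:

    from sympy import Matrix

    M = Matrix([[2, -1, 7], [3, -4, 3]])  # augmented matrix ( A | b )
    R, pivot_columns = M.rref()           # reduced row echelon form and pivot columns
    print(R)              # Matrix([[1, 0, 5], [0, 1, 3]]), i.e. x = 5, y = 3
    print(pivot_columns)  # (0, 1): the columns (numbered from 0) containing leading 1s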
Please note that it is not necessary to memorise the strategy given above. However, it is quite a
useful algorithm to follow, as we will (more or less) show in the examples below. Of course, as
mentioned earlier, there are many ways to transform a given matrix to reduced row echelon form.
Once we have the reduced row echelon form of the augmented matrix, we can “unpack” it to obtain
a simpler set of equations that have the same solution(s), if any, as those in the original set of
equations. From these, resulting, equations, it is relatively simple to check if no solutions of the
original system of equations exist (i.e. if the equations contain any inconsistencies), or to find the
unique solution, or the infinitely many solutions, of the original system of equations, whichever the
case might be.
This way of solving systems of (real) linear equations is often called the Gauss-Jordan elimination
process.
If the system of equations we are considering has no solutions, then one of the equations we end up
with after we row reduce will be of the form ‘0 = 1’, which is inconsistent. So, the row reduction
will have directly shown that the original system of equations has no solutions.
If there is a unique solution, then we will be able to “read it off” the reduced row echelon form of
the augmented matrix.
Finally, if the system of equations has infinitely many solutions, the reduced row echelon form
will not only lead to a way of presenting the solutions, but it will also suggest which variables or
unknowns we could choose as free, and which variables or unknowns we could express in terms of
the free ones. We will see this in an example below, but let us first describe the main idea involved.
This observation relies on the fact that for a matrix equation of the form Ax = b, each column of
A contains the coefficients of one of the unknowns. So, if the unknowns are named x1 , · · · , xn , in
order, then the variable x1 corresponds to column 1 of A, the variable x2 corresponds to column 2
of A, and so on.
Suppose that we have row reduced our matrix, and there are infinitely many solutions. Then, it
turns out to be quite convenient to look at which columns of the reduced row echelon form of the
matrix A contain a leading 1 (and zeros everywhere else). It is generally “efficient” to express the
variables corresponding to these columns in terms of the remaining variables present in the original
system of real linear equations. In other words, it is relatively simple to consider as a free variable
each variable that corresponds to a column not containing a leading 1, and to then express the other
variables present in terms of any free variable(s) present.
Once again, it is not necessary to memorise the above argument, but it might be quite useful to have
it in mind. The ideas involved will hopefully become clearer when we go through the examples
below.
Let us first use the row reduction method to solve the three earlier systems of equations.

Consider the first system. Writing out its augmented matrix and row reducing:

\left(\begin{array}{cc|c} 2 & -1 & 7 \\ 4 & -2 & 10 \end{array}\right) \xrightarrow{R_2 \to R_2 - 2R_1} \left(\begin{array}{cc|c} 2 & -1 & 7 \\ 0 & 0 & -4 \end{array}\right)

The second row corresponds to the equation 0 = −4, which can never hold, so the first system is inconsistent: it has no solutions, as we found earlier.
Now, consider the second system of real linear equations given earlier
2x − y = 7
3x − 4y = 3
Writing out the augmented matrix and row reducing:

\left(\begin{array}{cc|c} 2 & -1 & 7 \\ 3 & -4 & 3 \end{array}\right)
\xrightarrow{R_2 \to R_2 - R_1} \left(\begin{array}{cc|c} 2 & -1 & 7 \\ 1 & -3 & -4 \end{array}\right)
\xrightarrow{R_1 \leftrightarrow R_2} \left(\begin{array}{cc|c} 1 & -3 & -4 \\ 2 & -1 & 7 \end{array}\right)
\xrightarrow{R_2 \to R_2 - 2R_1} \left(\begin{array}{cc|c} 1 & -3 & -4 \\ 0 & 5 & 15 \end{array}\right)
\xrightarrow{R_2 \to \frac{1}{5}R_2} \left(\begin{array}{cc|c} 1 & -3 & -4 \\ 0 & 1 & 3 \end{array}\right)
\xrightarrow{R_1 \to R_1 + 3R_2} \left(\begin{array}{cc|c} 1 & 0 & 5 \\ 0 & 1 & 3 \end{array}\right)

The final matrix is in reduced row echelon form, and corresponds to the system:

x = 5
y = 3
We have therefore obtained the unique solution to the above system of equations.
(We may verify that our answer is correct by substituting the values x = 5 and y = 3 into each of
the original equations, in order to check that these values satisfy the original system of equations.)
Finally, consider the third system of real linear equations given earlier:

2x − y = 7
−4x + 2y = −14

Writing out the augmented matrix and row reducing:

\left(\begin{array}{cc|c} 2 & -1 & 7 \\ -4 & 2 & -14 \end{array}\right) \xrightarrow{R_2 \to R_2 + 2R_1} \left(\begin{array}{cc|c} 2 & -1 & 7 \\ 0 & 0 & 0 \end{array}\right) \xrightarrow{R_1 \to \frac{1}{2}R_1} \left(\begin{array}{cc|c} 1 & -\frac{1}{2} & \frac{7}{2} \\ 0 & 0 & 0 \end{array}\right)

The second row corresponds to the equation 0 = 0, which always holds, so we are left with the single equation x − (1/2)y = 7/2. Choosing y = λ, for any λ ∈ R, we obtain the infinitely many solutions:

x = (1/2)λ + 7/2
y = λ

(We may verify that our answer is correct by substituting x = (1/2)λ + 7/2 and y = λ, or simply x = (1/2)y + 7/2 and ‘y = y’, into each of the original equations, and checking that these equations hold.)
In the above cases, where we had systems of two simultaneous equations in two unknowns, using
row reduction to get to the solutions was possibly more time consuming than us “spotting” how to
simplify the equations ourselves. However, when we get to systems containing more equations and
more unknowns, the systematic use of row reduction might often be more convenient, as might be
illustrated by the next few exercises; please see also exercise 2 on Sample Exercises 1 and exercise
2 on Exercise Set 1.
Exercise 1.39. Solve the following system of linear equations, by writing out an augmented matrix,
and then using elementary row operations to transform it to reduced row echelon form (i.e. by using
the process of Gauss-Jordan elimination):
3x1 + x2 + 5x3 = 2
x1 + 2x2 = 4
x1 − 3x2 + 5x3 = 0
Let us first write out the augmented matrix of the system, and use elementary row operations to find its reduced row echelon form:

\left(\begin{array}{ccc|c} 3 & 1 & 5 & 2 \\ 1 & 2 & 0 & 4 \\ 1 & -3 & 5 & 0 \end{array}\right)
\xrightarrow{R_1 \leftrightarrow R_2} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 3 & 1 & 5 & 2 \\ 1 & -3 & 5 & 0 \end{array}\right)
\xrightarrow{R_2 \to R_2 - 3R_1} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & -5 & 5 & -10 \\ 1 & -3 & 5 & 0 \end{array}\right)
\xrightarrow{R_3 \to R_3 - R_1} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & -5 & 5 & -10 \\ 0 & -5 & 5 & -4 \end{array}\right)
\xrightarrow{R_2 \to -\frac{1}{5}R_2} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 2 \\ 0 & -5 & 5 & -4 \end{array}\right)
\xrightarrow{R_1 \to R_1 - 2R_2} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & -5 & 5 & -4 \end{array}\right)
\xrightarrow{R_3 \to R_3 + 5R_2} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 6 \end{array}\right)
\xrightarrow{R_3 \to \frac{1}{6}R_3} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 1 \end{array}\right)
\xrightarrow{R_2 \to R_2 - 2R_3} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)

The row reduced matrix corresponds to the system:

x_1 + 2x_3 = 0
x_2 − x_3 = 0
0 = 1
The third equation in this system can “never” hold, so the system is inconsistent: it has no solutions.
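As before, we may double-check this reduction with computer algebra (a SymPy sketch; the final row of the output corresponds to the inconsistent equation 0 = 1):

    from sympy import Matrix

    M = Matrix([[3, 1, 5, 2], [1, 2, 0, 4], [1, -3, 5, 0]])
    print(M.rref()[0])
    # Matrix([[1, 0, 2, 0], [0, 1, -1, 0], [0, 0, 0, 1]])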
Exercise 1.40. Solve the following system of linear equations:

3x_1 + x_2 + 5x_3 = 2
x_1 + 2x_2 = 4
3x_2 − 7x_3 = 2

Let us first write out the augmented matrix and use elementary row operations to find its reduced row echelon form:

\left(\begin{array}{ccc|c} 3 & 1 & 5 & 2 \\ 1 & 2 & 0 & 4 \\ 0 & 3 & -7 & 2 \end{array}\right)
\xrightarrow{R_1 \leftrightarrow R_2} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 3 & 1 & 5 & 2 \\ 0 & 3 & -7 & 2 \end{array}\right)
\xrightarrow{R_2 \to R_2 - 3R_1} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & -5 & 5 & -10 \\ 0 & 3 & -7 & 2 \end{array}\right)
\xrightarrow{R_2 \to -\frac{1}{5}R_2} \left(\begin{array}{ccc|c} 1 & 2 & 0 & 4 \\ 0 & 1 & -1 & 2 \\ 0 & 3 & -7 & 2 \end{array}\right)
\xrightarrow{R_1 \to R_1 - 2R_2} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 3 & -7 & 2 \end{array}\right)
\xrightarrow{R_3 \to R_3 - 3R_2} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & -4 & -4 \end{array}\right)
\xrightarrow{R_3 \to -\frac{1}{4}R_3} \left(\begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 1 & 1 \end{array}\right)
\xrightarrow{R_1 \to R_1 - 2R_3} \left(\begin{array}{ccc|c} 1 & 0 & 0 & -2 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 1 & 1 \end{array}\right)
\xrightarrow{R_2 \to R_2 + R_3} \left(\begin{array}{ccc|c} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 1 \end{array}\right)
So, the original linear system of equations is equivalent to the following system:
x1 = −2
x2 = 3
x3 = 1
and this is the unique solution to the system of equations given.
(The uniqueness of the solution is related to the fact that there are ‘leading ones’ in columns 1, 2, 3
of the row reduced matrix, suggesting that we can regard the variables x1 , x2 , x3 , respectively, as
not being free.)
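Alternatively, the whole system may be handed to a solver in one step (a SymPy sketch; linsolve accepts an augmented matrix together with the unknowns):

    from sympy import Matrix, linsolve, symbols

    x1, x2, x3 = symbols('x1 x2 x3')
    M = Matrix([[3, 1, 5, 2], [1, 2, 0, 4], [0, 3, -7, 2]])
    print(linsolve(M, x1, x2, x3))  # {(-2, 3, 1)}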
Exercise 1.41. Solve the following system of linear equations:
x1 + x3 + 2x4 + 3x5 = 7
x1 + 4x3 − 2x4 = −5
3x1 + 9x3 − 2x4 + 3x5 = −3
Let us first write out the augmented matrix and use elementary row operations to find its reduced row echelon form:

\left(\begin{array}{ccccc|c} 1 & 0 & 1 & 2 & 3 & 7 \\ 1 & 0 & 4 & -2 & 0 & -5 \\ 3 & 0 & 9 & -2 & 3 & -3 \end{array}\right)
\xrightarrow{R_2 \to R_2 - R_1} \left(\begin{array}{ccccc|c} 1 & 0 & 1 & 2 & 3 & 7 \\ 0 & 0 & 3 & -4 & -3 & -12 \\ 3 & 0 & 9 & -2 & 3 & -3 \end{array}\right)
\xrightarrow{R_3 \to R_3 - 3R_1} \left(\begin{array}{ccccc|c} 1 & 0 & 1 & 2 & 3 & 7 \\ 0 & 0 & 3 & -4 & -3 & -12 \\ 0 & 0 & 6 & -8 & -6 & -24 \end{array}\right)
\xrightarrow{R_2 \to \frac{1}{3}R_2} \left(\begin{array}{ccccc|c} 1 & 0 & 1 & 2 & 3 & 7 \\ 0 & 0 & 1 & -\frac{4}{3} & -1 & -4 \\ 0 & 0 & 6 & -8 & -6 & -24 \end{array}\right)
\xrightarrow{R_1 \to R_1 - R_2} \left(\begin{array}{ccccc|c} 1 & 0 & 0 & \frac{10}{3} & 4 & 11 \\ 0 & 0 & 1 & -\frac{4}{3} & -1 & -4 \\ 0 & 0 & 6 & -8 & -6 & -24 \end{array}\right)
\xrightarrow{R_3 \to R_3 - 6R_2} \left(\begin{array}{ccccc|c} 1 & 0 & 0 & \frac{10}{3} & 4 & 11 \\ 0 & 0 & 1 & -\frac{4}{3} & -1 & -4 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right)
(As may be seen, there are ‘leading ones’ in columns 1 and 3 of the row reduced matrix, suggesting
that it might be helpful to regard the variables x1 and x3 , respectively, as not being free, but possibly
dependent on (some of) the remaining free variables x2 , x4 , x5 ; these, free, variables are indicated
by columns which do not contain ‘leading ones’.
Using this observation, we will try to rearrange the resulting system of real linear equations to
obtain equations for x1 and x3 .)
The row reduced matrix corresponds to the system:

x_1 + \frac{10}{3}x_4 + 4x_5 = 11
x_3 − \frac{4}{3}x_4 − x_5 = −4
0 = 0
The third equation “always” holds, so we may safely ignore it, and rearrange the first two equations
in order to obtain x1 and x3 in terms of the other variables:
x_1 = 11 − \frac{10}{3}x_4 − 4x_5
x_3 = −4 + \frac{4}{3}x_4 + x_5
There are infinitely many solutions to these equations; we may choose whichever (real) values we
like for x2 , x4 and x5 .
Then, the general solution to this system of linear equations is given by:
x_1 = 11 − \frac{10}{3}μ − 4ν
x_2 = λ
x_3 = −4 + \frac{4}{3}μ + ν
x_4 = μ
x_5 = ν
for any λ, µ, ν ∈ R.
(Observe that x2 is a free variable that does not appear in the expressions for x1 or x3 . So, it is not
necessarily true that every free variable must affect the value of some other variable appearing in
a system of equations. Here, we would expect this result anyway, since no x2 terms appear in our
equations, so x2 can “be whatever it likes”, without affecting our equations.
Furthermore, it might be a good idea to check that the answer is correct, e.g. here, if we choose the
value of zero for each of the free variables, and therefore set λ = 0, µ = 0, ν = 0, we obtain the
solution x1 = 11, x2 = 0, x3 = −4, x4 = 0, x5 = 0, which, as we may verify, holds in the original
system of equations.
If we wish to be “more certain” that our answers are correct, we could try substituting each of
the expressions obtained for x1 and x3 above, in terms of x4 and x5 , into the original system of
equations, and verify that the original equations hold.)
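For a system with infinitely many solutions, a solver returns a parametrised family of solutions, with the free unknowns x2, x4, x5 appearing exactly as in our general solution (a SymPy sketch):

    from sympy import Matrix, linsolve, symbols

    x1, x2, x3, x4, x5 = symbols('x1:6')
    M = Matrix([[1, 0, 1, 2, 3, 7],
                [1, 0, 4, -2, 0, -5],
                [3, 0, 9, -2, 3, -3]])
    print(linsolve(M, x1, x2, x3, x4, x5))
    # a one-element set: (11 - 10*x4/3 - 4*x5, x2, -4 + 4*x4/3 + x5, x4, x5)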
In the examples we have seen so far, the systems of equations we have studied have had no solutions,
or one (unique) solution, or infinitely many solutions. We have not come across a system with any
other number of solutions, i.e. with exactly 2, 3, 4, · · · solutions. In fact, it can never be the case
that a system of (real) linear equations has more than one solution but not infinitely many.
We can try to think about why this result is true by expressing it in the following form: if a system
of real linear equations has at least two solutions, then it must have infinitely many.
Let us give a way to show how, given two (distinct) solutions, we may obtain as many as we like. Consider again the third system given earlier:

2x − y = 7
−4x + 2y = −14

and suppose that we have already found two solutions to this system of equations:

x_1 = 0, y_1 = −7 and x_2 = 4, y_2 = 1
We may then generate as many valid solutions as we like by “taking combinations of the two given
solutions adding up to 1”:
For instance, suppose we take, for our “new” x, half of x_1 plus half of x_2, and similarly for y:

x = (1/2)x_1 + (1/2)x_2 = 0 + 2 = 2 and y = (1/2)y_1 + (1/2)y_2 = −7/2 + 1/2 = −3
Then, setting x = 2 and y = −3 also solves the original system of real linear equations.
Similarly, suppose we take, for our “new” x, a third of x_1 plus two thirds of x_2, and similarly for y:

x = (1/3)x_1 + (2/3)x_2 = 0 + 8/3 = 8/3 and y = (1/3)y_1 + (2/3)y_2 = −7/3 + 2/3 = −5/3

Then, setting x = 8/3 and y = −5/3 also solves the original system of real linear equations.
Finally, suppose we take, for our “new” x, 5 lots of x_1 minus 4 lots of x_2, and similarly for y:

x = 5x_1 − 4x_2 = 0 − 16 = −16 and y = 5y_1 − 4y_2 = −35 − 4 = −39

Then, setting x = −16 and y = −39 also solves the original system of real linear equations.
A similar idea applies in the case of any number of equations in any number of unknowns. Given
two distinct solutions of a system of real linear equations, say
x = x1 , y = y1 and x = x2 , y = y2
we can generate a solution by taking, for any real number c, the ‘combinations’:

x = c x_1 + (1 − c) x_2 and y = c y_1 + (1 − c) y_2
Since this works for whichever real number c we choose, it allows us to obtain infinitely many
solutions of the original system of real linear equations.
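A small numerical experiment confirms this (a Python/NumPy sketch, using the two solutions of the third system found above):

    import numpy as np

    A = np.array([[2, -1], [-4, 2]])
    b = np.array([7, -14])
    s1 = np.array([0, -7])  # first known solution (x, y)
    s2 = np.array([4, 1])   # second known solution
    for c in (0.5, 1/3, 5):
        s = c * s1 + (1 - c) * s2        # a 'combination adding up to 1'
        print(s, np.allclose(A @ s, b))  # each combination solves the system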
This method also gives a way of generating new solutions of systems of linear equations, even when
we do not know the equations themselves:
Exercise 1.42. Suppose that a system of linear equations in three unknowns, x, y, z, say, has the following two solutions:

x = −5, y = 2, z = −1 and x = 10, y = −3, z = 4

Find two distinct solutions of the same system of real linear equations, which are different from the ones given above.
We can use the idea described above, and consider any ‘combination’ we like. We begin by setting
x1 = −5, y1 = 2, z1 = −1 and x2 = 10, y2 = −3, z2 = 4.
Then, for any real number c, we can find a solution of the form:

x = c x_1 + (1 − c) x_2, y = c y_1 + (1 − c) y_2, z = c z_1 + (1 − c) z_2

To find one solution, we could set c = 1/2 to obtain:

x = (1/2)x_1 + (1/2)x_2 = −5/2 + 5 = 5/2, y = (1/2)y_1 + (1/2)y_2 = 1 − 3/2 = −1/2, z = (1/2)z_1 + (1/2)z_2 = −1/2 + 2 = 3/2
For another solution, we could take almost any other combination, e.g. set c = 3 to obtain:

x = 3x_1 − 2x_2 = −15 − 20 = −35, y = 3y_1 − 2y_2 = 6 + 6 = 12, z = 3z_1 − 2z_2 = −3 − 8 = −11

Then, whichever system is satisfied by the two earlier solutions is also satisfied by the solutions x = 5/2, y = −1/2, z = 3/2 and x = −35, y = 12, z = −11 (as well as infinitely many more solutions).
If you would like to verify this for an actual system of equations, try “substituting the new solutions”
into, for example, the following system of equations, which is satisfied by the original solutions:
x + 2y − z = 0
y + z = 1
Proposition 1.43. A system of real linear equations has no solutions, or one (unique) solution, or
infinitely many solutions.
Finally, let us consider the special kind of system in which each of the numbers on the right-hand side is zero:

a_{11} x_1 + · · · + a_{1n} x_n = 0
⋮
a_{m1} x_1 + · · · + a_{mn} x_n = 0
Written in matrix form, this system of equations becomes the matrix equation:
Ax = 0
where

A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, 0 = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}
Such a system of real linear equations is called a homogeneous system of real linear equations.
It can never be the case that a homogeneous system of real linear equations has no solutions, since
it always has at least one solution: the solution we obtain when we set x1 = 0, x2 = 0, · · · , xn = 0
(the zero solution).
So, for this special kind of system of equations, where the possibility of no solutions is ruled out,
Proposition 1.43 has the following form:
Proposition 1.44. A homogeneous system of real linear equations either has a unique solution (the
zero solution) or infinitely many solutions.
So far, we have seen how matrices can be used to solve systems of (real) linear equations. The tools
we have developed in order to do this can be translated completely to the setting of matrices, where
they will play a key role in helping us better understand matrices themselves.
In order to see how this can be achieved, let us return to the notion of elementary row operations
(see Definition 1.35). When solving a system of equations above, we expressed the equations in
matrix form, and then, at each stage, “manually” performed one of the three types of elementary
row operations, in order to transform (and hopefully simplify) the matrix.
It is possible to find matrices that ‘emulate’ elementary row operations themselves, and this allows
us to express the tools developed so far in this chapter purely in terms of matrices.
Consider, for example, the matrix:

A = \begin{pmatrix} 1 & 0 & 0 \\ 5 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

Let us see the effect of multiplying some other matrices by A on the left, e.g. consider its ‘effect’ on:

B = \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix}, B' = \begin{pmatrix} 1 & 0 & 2 & 1 \\ -5 & 1 & 3 & 0 \\ 1 & 2 & 3 & 1 \end{pmatrix}
We obtain:

AB = \begin{pmatrix} 1 & 0 & 0 \\ 5 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 \\ 7 & 9 & 8 \\ 4 & 5 & 7 \end{pmatrix}

AB' = \begin{pmatrix} 1 & 0 & 0 \\ 5 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 2 & 1 \\ -5 & 1 & 3 & 0 \\ 1 & 2 & 3 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 2 & 1 \\ 0 & 1 & 13 & 5 \\ 1 & 2 & 3 & 1 \end{pmatrix}
Multiplying by A on the left has not changed rows 1 and 3 of each of the matrices B, B 0 , but only row
2 (in each case). In fact, it seems to have added 5 copies of row 1 to row 2. So, multiplying B, B 0 by
A (on the left) gives the same answer as performing the elementary row operation R2 → R2 + 5R1 :
B = \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix} \xrightarrow{R_2 \to R_2 + 5R_1} \begin{pmatrix} 1 & 2 & 1 \\ 7 & 9 & 8 \\ 4 & 5 & 7 \end{pmatrix} = AB

B' = \begin{pmatrix} 1 & 0 & 2 & 1 \\ -5 & 1 & 3 & 0 \\ 1 & 2 & 3 & 1 \end{pmatrix} \xrightarrow{R_2 \to R_2 + 5R_1} \begin{pmatrix} 1 & 0 & 2 & 1 \\ 0 & 1 & 13 & 5 \\ 1 & 2 & 3 & 1 \end{pmatrix} = AB'
It seems that, in general, multiplication of any matrix (with 3 rows), M say, on the left by A leads
to the same answer, AM , as the answer that we would obtain by performing the elementary row
operation R2 → R2 + 5R1 on the matrix M .
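This is easy to experiment with numerically (a Python/NumPy sketch; M below is an arbitrary matrix with 3 rows):

    import numpy as np

    A = np.array([[1, 0, 0], [5, 1, 0], [0, 0, 1]])  # emulates R2 -> R2 + 5R1
    M = np.array([[1, 2, 1], [2, -1, 3], [4, 5, 7]])

    M_op = M.copy().astype(float)
    M_op[1] = M_op[1] + 5 * M_op[0]     # perform the row operation directly
    print(np.array_equal(A @ M, M_op))  # True: both give the same matrix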
If we were interested in “going the other way”, and finding the matrix A, given the row operation
we wish to emulate, there is no need to guess a possible answer and then, perhaps, multiply other
matrices by it on the left in order to verify that we have chosen the appropriate matrix.
A simpler way to arrive at the matrix we want is to apply the row operation we wish to emulate to
the relevant identity matrix (by relevant, we mean the identity matrix that contains the same number
of rows as the matrix we are “operating on”).
For example, suppose that we are dealing with a matrix with 3 rows and we wish to find a matrix corresponding to the elementary row operation R2 → R2 + 5R1. If we apply the row operation R2 → R2 + 5R1 to the 3 × 3 identity matrix, we obtain:

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \xrightarrow{R_2 \to R_2 + 5R_1} \begin{pmatrix} 1 & 0 & 0 \\ 5 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
This leads to precisely the matrix we want, i.e. the matrix A from above, and it is a technique that
works in general, as we see below.
Let us now suppose that we wish to emulate the elementary row operation R1 → 4R1 , which
multiplies each entry in row 1 by 4, on a matrix with 3 rows say. If we apply the operation R1 → 4R1
to the 3 × 3 identity matrix, I3 , we obtain:
A' = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
and we can verify that, when we multiply any matrix (with 3 rows), M say, on the left by A', the final answer, A'M, is precisely the same as if we had performed the row operation R1 → 4R1 on the matrix M. For example:
If M = \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix}, then A'M = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix} = \begin{pmatrix} 4 & 8 & 4 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix}

If M = \begin{pmatrix} \frac{1}{4} & 3 & 0 & -1 \\ 0 & 3 & 4 & 5 \\ 0 & 1 & 0 & 1 \end{pmatrix}, then A'M = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{4} & 3 & 0 & -1 \\ 0 & 3 & 4 & 5 \\ 0 & 1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 12 & 0 & -4 \\ 0 & 3 & 4 & 5 \\ 0 & 1 & 0 & 1 \end{pmatrix}
Similarly, if we wish to find a matrix that emulates the action of the row operation R2 → (1/3)R2 on a matrix with 4 rows, we would apply this operation to the 4 × 4 identity matrix, I_4, to obtain:

\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{1}{3} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
Suppose now that we wish to emulate the elementary row operation R2 ↔ R3 , which interchanges
rows 2 and 3 of a matrix. We may apply this operation to the identity matrix I3 , to obtain:
A'' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
Let us see some examples that verify that multiplication on the left by the matrix found above, A'', corresponds to performing the required operation:
If M = \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix}, then A''M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 2 & -1 & 3 \\ 4 & 5 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 \\ 4 & 5 & 7 \\ 2 & -1 & 3 \end{pmatrix}

If M = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -2 \end{pmatrix}, then A''M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -2 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 0 \end{pmatrix}
We refer to the matrices that correspond to elementary row operations, in the way described above,
as elementary matrices, and we denote them more concisely:
Definition 1.45. A real square matrix A, of size n × n say, is an elementary matrix if it has one of
the following forms, for given i, j (where 1 ≤ i ≤ n, 1 ≤ j ≤ n):
1) The matrix A is the same as the identity matrix, except that A_{ii} = λ, for λ ∈ R, λ ≠ 0.
We denote such a matrix by D_n(i; λ).
2) The matrix A is the same as the identity matrix, except that A_{ij} = λ, for λ ∈ R and i ≠ j.
We denote such a matrix by E_n(i, j; λ).
3) The matrix A is the same as the identity matrix, except that rows i and j are interchanged.
We denote such a matrix by P_n(i, j).

i.e. for given i, j (where 1 ≤ i ≤ n, 1 ≤ j ≤ n):

1) D_n(i; λ) has a λ (λ ≠ 0) in the entry in row i, column i; otherwise it is the same as I_n.
2) E_n(i, j; λ) has a λ in the entry in row i, column j (i ≠ j); otherwise it is the same as I_n.
3) P_n(i, j) is the same as I_n, except that rows i and j are interchanged.
For example, the matrices A, A', A'', considered earlier, are all elementary matrices; they are the matrices E_3(2, 1; 5), D_3(1; 4), P_3(2, 3), respectively.

In general, for matrices with n rows:

D_n(i; λ) corresponds to the elementary row operation R_i → λR_i
E_n(i, j; λ) corresponds to the elementary row operation R_i → R_i + λR_j
P_n(i, j) corresponds to the elementary row operation R_i ↔ R_j
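In computations, it can be convenient to build these matrices directly from the description above; a Python/NumPy sketch (the function names D, E, P are our own, mirroring the notation just introduced):

    import numpy as np

    def D(n, i, lam):
        """D_n(i; lam): same as I_n except entry (i, i) is lam (lam != 0); i is 1-based."""
        M = np.eye(n)
        M[i - 1, i - 1] = lam
        return M

    def E(n, i, j, lam):
        """E_n(i, j; lam): same as I_n except entry (i, j) is lam (i != j)."""
        M = np.eye(n)
        M[i - 1, j - 1] = lam
        return M

    def P(n, i, j):
        """P_n(i, j): same as I_n with rows i and j interchanged."""
        M = np.eye(n)
        M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
        return M

    print(E(3, 2, 1, 5))  # the matrix A considered earlier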
Note 1.46. The subscript that denotes the size of an elementary matrix may often be dropped from
its description, when we are making a general statement or when the size to be used is clear from
the situation we are in.
For example, for any matrix size, we may say that the operation R1 → 5R1 corresponds to the
elementary matrix D(1; 5); or, if we are using elementary matrices to reduce a matrix with 4 rows
say, then all the elementary matrices involved will have size 4 × 4, so we may omit the subscript 4
from each elementary matrix present in such an instance (e.g. from each elementary matrix present
in such an example or exercise in our course).
Note 1.47. As you might have realised, we have insisted that, when we find the elementary matrix
corresponding to an elementary row operation, we multiply by this matrix on the left of any other
matrix that we wish to modify, in order to emulate the given elementary row operation. This is an
important point, since multiplying on the right will not have the desired effect.
Multiplying by elementary matrices on the right is, in fact, similar to applying an elementary
operation, but on the columns rather than the rows of a matrix. In this course, we (will) deal only
with elementary row operations though, so we will always require the elementary matrices to act
on the left of whichever matrix we wish to transform.
Before we continue, let us see some more examples of elementary matrices (see exercise 1 on
Sample Exercises 2):
Exercise 1.48. Write out the following elementary matrices: E_3(1, 2; 5), P_4(1, 3), D_4(2; −1/3), and describe the elementary row operation corresponding to each of the given matrices.

Let us try to solve both parts of this exercise together; it might be helpful to consider the elementary row operation corresponding to each of the matrices before writing down the corresponding matrix:

E_3(1, 2; 5) corresponds to the operation which adds 5 times row 2 to row 1:

R_1 → R_1 + 5R_2

P_4(1, 3) corresponds to the operation which interchanges rows 1 and 3:

R_1 ↔ R_3

D_4(2; −1/3) corresponds to the operation which multiplies row 2 by the number −1/3:

R_2 → (−1/3)R_2
This takes care of the second part of the exercise. We may now apply these operations to suitable
identity matrices (where the size is indicated by the subscript present in the elementary matrix
in each case) in order to actually obtain (and write out) the elementary matrices themselves, as
required:
E_3(1, 2; 5) = \begin{pmatrix} 1 & 5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

P_4(1, 3) = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

D_4(2; −1/3) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\frac{1}{3} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
Please see exercise 1 on Exercise Set 2 for more practice on writing out elementary matrices and
describing the corresponding elementary row operations.
Now that we have seen how to use matrices to express elementary row operations, let us view this
idea in the context of the Gauss-Jordan elimination process, which may be used to find the reduced
row echelon form of a matrix.
Consider, for example, the row reduction given below (let us refer to the original matrix as A):

A = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix} \xrightarrow{R_1 \to \frac{1}{3}R_1} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix} \xrightarrow{R_2 \leftrightarrow R_3} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix} \xrightarrow{R_3 \to R_3 - 5R_2} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
By using three elementary row operations, we have shown that the matrix A is row equivalent to the
identity matrix, I3 . Let us view the above row reduction in the setting of elementary matrices.
We first applied the row operation R1 → (1/3)R1, which corresponds to left multiplication by the elementary matrix D(1; 1/3):

D(1; 1/3) = \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

So, the matrix we obtained at the end of the first ‘row reduction stage’ is the same as D(1; 1/3)A, which we can confirm:

D(1; 1/3)A = \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 3 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix}
Then, to this “new” matrix, D(1; 1/3)A, we applied the row operation R2 ↔ R3, i.e. multiplication on the left by P(2, 3), so that the matrix we obtained at the end of the second ‘row reduction stage’ is the same as P(2, 3)D(1; 1/3)A:

P(2, 3)D(1; 1/3)A = P(2, 3) (D(1; 1/3)A) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix}
Finally, to this “new” matrix, P(2, 3)D(1; 1/3)A, we applied the row operation R3 → R3 − 5R2, i.e. multiplication on the left by E(3, 2; −5), to finally obtain the identity matrix:

E(3, 2; −5)P(2, 3)D(1; 1/3)A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -5 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I
Even though this is a rather brief example, it illustrates some of the points we will consider next.
To start with, observe that the final answer of the row reduction is E(3, 2; −5)P(2, 3)D(1; 1/3)A = I, which we obtained by applying the ‘D(1; 1/3) row operation’, R1 → (1/3)R1, first, followed by the ‘P(2, 3) row operation’, R2 ↔ R3, and, then, by the ‘E(3, 2; −5) row operation’, R3 → R3 − 5R2.

So, in the final answer, E(3, 2; −5)P(2, 3)D(1; 1/3)A, the elementary matrices seem to have appeared in the reverse of the order in which the corresponding elementary row operations were performed.
This is because we are multiplying on the left each time, so that the first elementary row operation
applied to the matrix A corresponds to the first elementary matrix multiplying A, i.e. the rightmost
one (nearest to A), whereas the matrix corresponding to the next elementary row operation applied
to A appears further to the left.
A similar observation holds in general, however many elementary row operations we use.
Secondly, observe that, in the given example, our row reduction leads to the identity matrix. This
allows us to show that the original matrix, A, is invertible. Having “translated” the elementary row
operations used to the setting of elementary matrices, we obtain:
So, it seems as if the product E(3, 2; −5)P (2, 3)D(1; 31 ) is the inverse of A, i.e. that
This is indeed the case, as we can verify by checking that AA−1 = I and that A−1 A = I:
1
3 0 0 3
0 0 1 0 0
AA−1 = 0 5 1 0 0 1 = 0 1 0 = I3
0 1 0 0 1 −5 0 0 1
1
3
0 0 3 0 0 1 0 0
A−1 A = 0 0 1 0 5 1 = 0 1 0 = I3
0 1 −5 0 1 0 0 0 1
So, when the reduced row echelon form of a (square) matrix is the (relevant) identity matrix, we are
able to show that the matrix is invertible, and also find its inverse, by keeping track of the elementary
row operations used in the row reduction process, and by then “translating” them to the setting of
elementary matrices.
(Note that not all square matrices have the identity matrix as reduced row echelon form.)
However, the above computation of the matrix A^{-1}, the inverse of A, could have been performed more efficiently. What we did was first to row reduce the matrix A to the identity matrix I_3, and then to multiply together the elementary matrices present in the row reduction process, in order to find A^{-1}.
We could have actually arrived at the matrix A^{-1} by using the original row reduction, together with the observation that, if we were to apply the same elementary row operations applied to A, in the same order, to the identity matrix I_3, we would end up with A^{-1}.

Since the row reduction process is identical in the two cases, we could do it in one step, by using an augmented matrix. If we start with the augmented matrix ( A | I ) and row reduce this, then, when the “left half”, corresponding to A, has been row reduced to an identity matrix, the “right half”, corresponding to the original matrix I, will simply be the matrix A^{-1}.
Let us now show this method being applied to the matrix A from the example described above:

( A | I_3 ) = \left(\begin{array}{ccc|ccc} 3 & 0 & 0 & 1 & 0 & 0 \\ 0 & 5 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right)
\xrightarrow{R_1 \to \frac{1}{3}R_1} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{3} & 0 & 0 \\ 0 & 5 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right)
\xrightarrow{R_2 \leftrightarrow R_3} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{3} & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 5 & 1 & 0 & 1 & 0 \end{array}\right)
\xrightarrow{R_3 \to R_3 - 5R_2} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{3} & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & -5 \end{array}\right)
As required, at the end of the process, when the “left half” is an identity matrix, the “right half” is
precisely the inverse of the original “left half”.
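The same computation may be automated: row reduce ( A | I ) and read off the “right half” (a SymPy sketch):

    from sympy import Matrix, eye

    A = Matrix([[3, 0, 0], [0, 5, 1], [0, 1, 0]])
    augmented = A.row_join(eye(3))  # the augmented matrix ( A | I )
    R = augmented.rref()[0]         # reduced row echelon form
    print(R[:, 3:])                 # right half: the inverse of A
    print(A.inv() == R[:, 3:])      # True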
Once again, this holds in general. In fact, as we shall soon mention, it gives a way of always, and
not just sometimes, determining whether or not a matrix is invertible, and of finding its inverse if it
is invertible. This is because a (square) matrix is invertible if and only if its reduced row echelon
form is the (relevant) identity matrix, so that we could apply the above process to any square matrix,
and find its inverse (if the reduced row echelon form of the matrix is the identity matrix) or show
that it is not invertible (if the reduced row echelon form of the matrix is not the identity matrix).
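As an illustration of this method, here is a short sketch using the sympy library (assumed available), whose rref routine carries out the full Gauss-Jordan reduction in exact arithmetic:

import sympy as sp

A = sp.Matrix([[3, 0, 0],
               [0, 5, 1],
               [0, 1, 0]])

# Form the augmented matrix ( A | I ) and fully row reduce it
rref_form, _pivots = A.row_join(sp.eye(3)).rref()

left_half = rref_form[:, :3]    # the reduced row echelon form of A
right_half = rref_form[:, 3:]   # A^{-1}, whenever the left half is the identity

if left_half == sp.eye(3):
    print(right_half)           # Matrix([[1/3, 0, 0], [0, 0, 1], [0, 1, -5]])
else:
    print("A is not invertible")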
Furthermore, the method described earlier, which allows us to express the inverse of an invertible
matrix as a product of elementary matrices, can also be used to express the original invertible matrix
as a product of elementary matrices.
Before we show this, let us first show that each elementary matrix itself is invertible. A hopefully
simple way to realise that elementary matrices are invertible and to find their inverses is to think
about the corresponding elementary row operations. For example, let us consider the elementary
matrices mentioned in Exercise 1.48:
Let us start with E3 (1, 2; 5). This corresponds to the operation which adds 5 times row 2 to row 1:
R1 → R1 +5R2 . The ‘inverse operation’, which we could apply to “undo” R1 → R1 +5R2 is the one
that subtracts 5 times row 2 from row 1: R1 → R1 − 5R2 . If we performed these two operations,
one after another, they would “cancel each other out”. The ‘inverse operation’ R1 → R1 − 5R2
corresponds to the elementary matrix E3 (1, 2; −5), so this is the inverse of the original matrix, i.e.
E3 (1, 2; 5)−1 = E3 (1, 2; −5)
We can confirm this by using the matrices to verify that E3 (1, 2; 5)E3 (1, 2; −5) = I3 and that
E3 (1, 2; −5)E3 (1, 2; 5) = I3 , as required:
E_3(1, 2; 5) E_3(1, 2; −5) = \begin{pmatrix} 1 & 5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

E_3(1, 2; −5) E_3(1, 2; 5) = \begin{pmatrix} 1 & -5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
Similarly, consider the elementary matrix P4 (1, 3). This is a matrix that corresponds to the operation
that interchanges rows 1 and 3: R1 ↔ R3 . But, in this case, the inverse operation is the same
operation. To “undo” the interchange, we simply interchange rows 1 and 3 again: R1 ↔ R3 . So,
the corresponding inverse (elementary) matrix is P4 (1, 3) once again, i.e.
P4 (1, 3)−1 = P4 (1, 3)
You may verify this by multiplying the matrix P4 (1, 3) by itself and checking that the result you
obtain is the identity matrix.
These rules generalise to all elementary matrices. In general, the inverse of an elementary matrix is
another elementary matrix of the same form:
Proposition 1.49. Each elementary matrix, of any size, is invertible, and its inverse is an elementary
matrix of the same form. For each n (and each non-zero λ):

D_n(i; λ)^{-1} = D_n(i; 1/λ)
E_n(i, j; λ)^{-1} = E_n(i, j; −λ)
P_n(i, j)^{-1} = P_n(i, j)
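The following sketch checks Proposition 1.49 computationally for one choice of n, i, j and λ. The constructors D, E, P are hypothetical helpers of our own (written with sympy, assumed available, so that the arithmetic is exact):

import sympy as sp

def D(n, i, lam):        # D_n(i; lam): multiply row i by lam (lam non-zero)
    M = sp.eye(n)
    M[i - 1, i - 1] = lam
    return M

def E(n, i, j, lam):     # E_n(i, j; lam): add lam times row j to row i
    M = sp.eye(n)
    M[i - 1, j - 1] = lam
    return M

def P(n, i, j):          # P_n(i, j): interchange rows i and j
    M = sp.eye(n)
    M[i - 1, i - 1] = M[j - 1, j - 1] = 0
    M[i - 1, j - 1] = M[j - 1, i - 1] = 1
    return M

n, lam = 4, sp.Rational(5)
assert D(n, 2, lam) * D(n, 2, 1 / lam) == sp.eye(n)     # D_n(i; lam)^{-1} = D_n(i; 1/lam)
assert E(n, 1, 3, lam) * E(n, 1, 3, -lam) == sp.eye(n)  # E_n(i, j; lam)^{-1} = E_n(i, j; -lam)
assert P(n, 1, 3) * P(n, 1, 3) == sp.eye(n)             # P_n(i, j)^{-1} = P_n(i, j)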
Having this in mind, let us return to our earlier row reduction process, which led to:
E(3, 2; −5)P(2, 3)D(1; 1/3)A = I
If we wish to find an expression for A itself, we may multiply each side of the above equation, on
the left, in order, by the inverses of the elementary matrices present, thus “cancelling them out”:
E(3, 2; −5)P(2, 3)D(1; 1/3)A = I
=⇒ E(3, 2; 5)E(3, 2; −5)P(2, 3)D(1; 1/3)A = E(3, 2; 5)I, i.e. P(2, 3)D(1; 1/3)A = E(3, 2; 5)
=⇒ P(2, 3)P(2, 3)D(1; 1/3)A = P(2, 3)E(3, 2; 5), i.e. D(1; 1/3)A = P(2, 3)E(3, 2; 5)
=⇒ D(1; 3)D(1; 1/3)A = D(1; 3)P(2, 3)E(3, 2; 5), i.e. A = D(1; 3)P(2, 3)E(3, 2; 5)
Equivalently, we could have obtained this by starting from the expression for A−1 as a product of
elementary matrices
A^{-1} = E(3, 2; −5)P(2, 3)D(1; 1/3)
and then using Proposition 1.28 or Proposition 1.29 (together with Proposition 1.49 and the fact that
the inverse of the inverse of A is A: (A−1 )−1 = A), to obtain:
A = (A^{-1})^{-1} = D(1; 1/3)^{-1} P(2, 3)^{-1} E(3, 2; −5)^{-1} = D(1; 3)P(2, 3)E(3, 2; 5)
Let us verify that the expression for A, as a product of elementary matrices, is correct/valid:
D(1; 3)P(2, 3)E(3, 2; 5) = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix}

= \begin{pmatrix} 3 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix}

= \begin{pmatrix} 3 & 0 & 0 \\ 0 & 5 & 1 \\ 0 & 1 & 0 \end{pmatrix} = A
Let us now see how, having calculated the matrix for A−1 earlier, we may solve a general matrix
equation of the form A x = b in a more straightforward way.
For example, suppose that we wish to solve the matrix equation A x = b, where A is as above and
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad b = \begin{pmatrix} 6 \\ 5 \\ 3 \end{pmatrix}
Then, we may “cancel out” the matrix A from the “left” of the equation and find the (unique) values
of x1 , x2 , x3 directly:
If A x = b, then
A−1 (A x) = A−1 b
i.e. (A−1 A) x = A−1 b
So
x = A−1 b
Therefore:

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & -5 \end{pmatrix} \begin{pmatrix} 6 \\ 5 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ -10 \end{pmatrix}
This method gives the unique solution of the above matrix equation, and such a method works
generally, for any matrix equation of the form Ax = b in which the matrix A is invertible.
For the example we have just seen, it might have been quicker to solve the system of real linear
equations (corresponding to the matrix equation Ax = b) directly. However, the method we have
described is a useful one, particularly when we are dealing with larger and more “complicated”
matrices.
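As a quick computational check of this (a sketch in Python, numpy assumed available); note that, in floating-point practice, one usually solves the system directly rather than forming the inverse explicitly:

import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 5.0, 1.0],
              [0.0, 1.0, 0.0]])
b = np.array([6.0, 5.0, 3.0])

# x = A^{-1} b, with the inverse found earlier
A_inv = np.array([[1/3, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 1.0, -5.0]])
print(A_inv @ b)               # [  2.   3. -10.]

# The numerically preferred route: solve A x = b directly
print(np.linalg.solve(A, b))   # [  2.   3. -10.]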
The following exercises will help us review some of the ideas and techniques introduced in the last
few pages (see exercises 2, 3 on Sample Exercises 2):
Exercise 1.50. Consider the matrix

M = \begin{pmatrix} 2 & 3 & 3 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{pmatrix}
By row reducing the augmented matrix ( M | I3 ), deduce that M is invertible, and write down the
inverse of M .
Let us row reduce the augmented matrix ( M | I3 ), where I3 is the 3 × 3 identity matrix:
( M \,|\, I_3 ) = \left( \begin{array}{ccc|ccc} 2 & 3 & 3 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 2 & 0 & 0 & 1 \end{array} \right) \xrightarrow{R_1 \leftrightarrow R_3} \left( \begin{array}{ccc|ccc} 1 & 1 & 2 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 2 & 3 & 3 & 1 & 0 & 0 \end{array} \right)

\xrightarrow{R_3 \to R_3 - 2R_1} \left( \begin{array}{ccc|ccc} 1 & 1 & 2 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & -1 & 1 & 0 & -2 \end{array} \right)

\xrightarrow{R_1 \to R_1 - R_2} \left( \begin{array}{ccc|ccc} 1 & 0 & 2 & 0 & -1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & -1 & 1 & 0 & -2 \end{array} \right)

\xrightarrow{R_3 \to R_3 - R_2} \left( \begin{array}{ccc|ccc} 1 & 0 & 2 & 0 & -1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 1 & -1 & -2 \end{array} \right)

\xrightarrow{R_3 \to -R_3} \left( \begin{array}{ccc|ccc} 1 & 0 & 2 & 0 & -1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 2 \end{array} \right)

\xrightarrow{R_1 \to R_1 - 2R_3} \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 2 & -3 & -3 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 2 \end{array} \right)
At the end of the row reduction, we obtain the identity matrix on the “left half” of the augmented
matrix (i.e. the reduced row echelon form of M is the identity matrix). So, we deduce that the matrix
M is invertible. Its inverse is the matrix on the “right half” of the final augmented matrix:
M^{-1} = \begin{pmatrix} 2 & -3 & -3 \\ 0 & 1 & 0 \\ -1 & 1 & 2 \end{pmatrix}
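(As a sanity check, the inverse may also be verified computationally; a short sketch with sympy, assumed available:)

import sympy as sp

M = sp.Matrix([[2, 3, 3], [0, 1, 0], [1, 1, 2]])
M_inv = sp.Matrix([[2, -3, -3], [0, 1, 0], [-1, 1, 2]])

# Both products should be the 3x3 identity
assert M * M_inv == sp.eye(3) and M_inv * M == sp.eye(3)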
Exercise 1.51. Consider the matrix

A = \begin{pmatrix} 0 & 1 & 2 \\ 1 & -1 & 1 \\ 0 & 2 & 7 \end{pmatrix}

a) By row reducing the augmented matrix ( A | I3 ), deduce that A is invertible, and write down the inverse of A.
b) Express A^{-1} as a product of elementary matrices.
c) Express A as a product of elementary matrices.
d) Hence, find the unique solution of the matrix equation A x = b, where b = \begin{pmatrix} 0 \\ 5 \\ 3 \end{pmatrix}.
We begin by row reducing the augmented matrix ( A | I3 ), where I3 is the 3 × 3 identity matrix; at
each step, we give the elementary row operation being performed, as well as the elementary matrix
corresponding to it (since these matrices will be required in parts (b) and (c)).
( A \,|\, I_3 ) = \left( \begin{array}{ccc|ccc} 0 & 1 & 2 & 1 & 0 & 0 \\ 1 & -1 & 1 & 0 & 1 & 0 \\ 0 & 2 & 7 & 0 & 0 & 1 \end{array} \right) \xrightarrow[P(1,2)]{R_1 \leftrightarrow R_2} \left( \begin{array}{ccc|ccc} 1 & -1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 2 & 7 & 0 & 0 & 1 \end{array} \right)

\xrightarrow[E(1,2;1)]{R_1 \to R_1 + R_2} \left( \begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 2 & 7 & 0 & 0 & 1 \end{array} \right)

\xrightarrow[E(3,2;-2)]{R_3 \to R_3 - 2R_2} \left( \begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 0 & 3 & -2 & 0 & 1 \end{array} \right)

\xrightarrow[D(3;\frac{1}{3})]{R_3 \to \frac{1}{3} R_3} \left( \begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 0 & 1 & -\frac{2}{3} & 0 & \frac{1}{3} \end{array} \right)

\xrightarrow[E(1,3;-3)]{R_1 \to R_1 - 3R_3} \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 3 & 1 & -1 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 0 & 1 & -\frac{2}{3} & 0 & \frac{1}{3} \end{array} \right)

\xrightarrow[E(2,3;-2)]{R_2 \to R_2 - 2R_3} \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 3 & 1 & -1 \\ 0 & 1 & 0 & \frac{7}{3} & 0 & -\frac{2}{3} \\ 0 & 0 & 1 & -\frac{2}{3} & 0 & \frac{1}{3} \end{array} \right)
a) At the end of the row reduction, we see that we obtain the identity matrix on the “left half” of
the augmented matrix (i.e. the reduced row echelon form of A is the identity matrix). So, we
deduce that the matrix A is invertible. Its inverse is the matrix on the “right half” of the final
augmented matrix:
A^{-1} = \begin{pmatrix} 3 & 1 & -1 \\ \frac{7}{3} & 0 & -\frac{2}{3} \\ -\frac{2}{3} & 0 & \frac{1}{3} \end{pmatrix}
(It might be a good idea to verify that this is the inverse, by checking that its product with A
is the identity: AA−1 = I = A−1 A.)
b) As we have seen, we may now express A−1 as a product of elementary matrices by writing
down the elementary matrices of the operations performed during the row reduction process,
in (seemingly) the reverse of the order in which the operations were performed:
A^{-1} = E(2, 3; −2)E(1, 3; −3)D(3; 1/3)E(3, 2; −2)E(1, 2; 1)P(1, 2)
c) Also, we may now express A as a product of elementary matrices, as shown in the example
earlier. This results in a matrix product consisting of the inverses of the elementary matrices
from above, in the reverse of the order in which they appear in A^{-1} (we can write
down the inverse of each elementary matrix using the ideas developed earlier, leading to
Proposition 1.49):
A = P (1, 2)E(1, 2; −1)E(3, 2; 2)D(3; 3)E(1, 3; 3)E(2, 3; 2)
d) Since we have shown that A is an invertible matrix, and have found the matrix for A−1 , we
may cancel out A from the “left” of the equation and find the (unique) values of x1 , x2 , x3
directly:
If A x = b, then
A−1 (A x) = A−1 b
i.e. (A−1 A) x = A−1 b. So
x = A−1 b
Therefore:

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 & 1 & -1 \\ \frac{7}{3} & 0 & -\frac{2}{3} \\ -\frac{2}{3} & 0 & \frac{1}{3} \end{pmatrix} \begin{pmatrix} 0 \\ 5 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}
So, the equation given has the unique solution

x = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}, \quad \text{i.e.} \quad x_1 = 2, \; x_2 = -2, \; x_3 = 1
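As a final check of parts (b) and (c), the sketch below multiplies out the two products of elementary matrices and compares them with A^{-1} and A; sympy is assumed available, and the constructors D, E, P are the same hypothetical helpers as in the sketch after Proposition 1.49:

import sympy as sp

def D(n, i, lam):
    M = sp.eye(n); M[i - 1, i - 1] = lam; return M

def E(n, i, j, lam):
    M = sp.eye(n); M[i - 1, j - 1] = lam; return M

def P(n, i, j):
    M = sp.eye(n)
    M[i - 1, i - 1] = M[j - 1, j - 1] = 0
    M[i - 1, j - 1] = M[j - 1, i - 1] = 1
    return M

A = sp.Matrix([[0, 1, 2], [1, -1, 1], [0, 2, 7]])

# Part (b): A^{-1} as a product of elementary matrices
A_inv = (E(3, 2, 3, -2) * E(3, 1, 3, -3) * D(3, 3, sp.Rational(1, 3))
         * E(3, 3, 2, -2) * E(3, 1, 2, 1) * P(3, 1, 2))
assert A_inv == A.inv()

# Part (c): A as a product of elementary matrices (the inverses, reversed)
A_prod = (P(3, 1, 2) * E(3, 1, 2, -1) * E(3, 3, 2, 2)
          * D(3, 3, 3) * E(3, 1, 3, 3) * E(3, 2, 3, 2))
assert A_prod == A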
For more practice on using the ideas mentioned above, please see exercises 2, 3 on Exercise Set 2.
So far in this course, we have seen quite a few ideas and concepts related to (real) matrices and
systems of (real) linear equations. The following result brings together most of the points of view
we have seen, and gives a way of expressing one central idea using equivalent statements, which
involve general (invertible) matrices, elementary matrices, reduced row echelon forms of matrices
and solutions of systems of linear equations:
Theorem 1.52. Suppose that A is a (real) square n × n matrix, and let x, b be n × 1 matrices (x, b
are vectors). Then, the following statements are equivalent:
(1) The matrix A is invertible.
(2) The matrix equation Ax = b has a unique solution for any given vector b.
(3) The reduced row echelon form of the matrix A is the n × n identity matrix, In.
(4) The matrix A can be expressed as a product of elementary matrices.
Proof. We would like to prove that the statements all imply one another. In order to do this, we will
prove that (1) implies (2), (2) implies (3), (3) implies (4) and (4) implies (1). Having done this, it
will be possible to find a “path” from any one statement to any other statement.
(1) =⇒ (2) :
The matrix A is invertible, so there exists a matrix A−1 such that AA−1 = I and A−1 A = I.
Let us recall that a matrix equation can have no solutions, one solution or infinitely many solutions
(by Proposition 1.43).
First, let us show that the above equation has at least one solution, for any given vector b. Since
Ax = b, we obtain A−1 (Ax) = A−1 b, i.e. (A−1 A) x = A−1 b. Then, since A−1 A = I, the previous
equation becomes:
x = A−1 b
which gives one solution for x (as may be verified by substituting x = A−1 b into the original matrix
equation; then, Ax = A(A−1 b) = (AA−1 )b = In b = b, as required).
Let us now show that we cannot have more than one solution, by showing that any two solutions are
the same.
Suppose that there are solutions x and y, i.e. that Ax = b and Ay = b. In that case, Ax = Ay, from
which we obtain A−1 (Ax) = A−1 (Ay), i.e. (A−1 A)x = (A−1 A)y. Then, using A−1 A = I once
again, we deduce that x = y.
So, for any given vector b, the equation Ax = b has a unique solution, as required.
(2) =⇒ (3) :
The matrix A has a unique reduced row echelon form, A0 say (e.g. by Proposition 1.38). Suppose
that we find the reduced row echelon form, ( A0 | b0 ) say, of the augmented matrix ( A | b ).
Let us try to show that the n × n matrix A0 must be the identity matrix In . We are assuming that the
matrix equation Ax = b has a unique solution; we have excluded the possibilities that Ax = b has
no solutions or infinitely many solutions.
From this, we may deduce that the reduced matrix A0 does not contain any zero rows.
We may do so using a proof by contradiction. For example, let us suppose that the reduced matrix
A0 contains one or more zero rows. Then, in the setting of the reduced augmented matrix ( A0 | b0 ),
we have the following possibilities:
· At least one of the zero rows of A0 is followed by a non-zero number in b0 . This shows that
the matrix equation Ax = b is inconsistent and that it has no solutions. But this cannot be the
case, since we are assuming that the equation Ax = b has a unique solution.
· In the reduced augmented matrix ( A0 | b0 ), all of the zero rows of A0 are followed by zeros in
b0 . The presence of one or more zero rows means that not every row of the n × n matrix A0
contains a leading 1, i.e. there are fewer than n leading 1s in A0 . So, it cannot be the case that
each of the n columns of A0 contains a leading 1. The number of columns of the (reduced)
matrix A0 not containing a leading 1 is equal to the number of free variables of the original
matrix equation, so we may conclude that the matrix equation Ax = b has at least one free
variable, and that it therefore has infinitely many solutions. But this cannot be the case, since
we are assuming that the equation Ax = b has a unique solution.
The above argument shows that, if Ax = b has a unique solution, then the (reduced) matrix A0 does
not contain any zero rows. As a result, each row of the matrix A0 must contain a leading 1.
The identity matrix is the only possible n × n matrix that is in reduced row echelon form and that
contains a leading 1 in every row. So, the n × n matrix A0 , which is the reduced row echelon form
of the matrix A, must be the identity matrix In , as required.
(3) =⇒ (4) :
If the reduced row echelon form of A is the identity In , there must be a sequence of elementary
row operations transforming A to In . If we write down the elementary matrices, E1 , · · · , Ek say,
corresponding to those elementary operations, applied in order, we obtain:
Ek · · · E1 A = In
From this, we can deduce (e.g. by Proposition 1.29) that A can be written as
A = E_1^{-1} · · · E_k^{-1}
But, the inverse of an elementary matrix is also an elementary matrix (by Proposition 1.49), so that
each matrix in the above product for A is an elementary matrix, i.e. the matrix A can be expressed
as a product of elementary matrices, as required.
(4) =⇒ (1) :
Suppose that the matrix A can be expressed as a product of elementary matrices, say A = F1 · · · Fm.
Then, since each elementary matrix is invertible (by Proposition 1.49), we may deduce that their
product, A, is also invertible (by Proposition 1.28), with inverse matrix

A^{-1} = F_m^{-1} · · · F_1^{-1}
As mentioned at the beginning of the proof above, we made sure that we proved enough steps so as
to allow any two of the four given statements to be “connected” in “both directions”.
For example, if we now wish to show that ‘(1) =⇒ (3)’, we may use the proof of ‘(1) =⇒ (2)’
together with the proof of ‘(2) =⇒ (3)’; similarly, to “go the other way” and show the implication
‘(3) =⇒ (1)’, we may combine the proofs for ‘(3) =⇒ (4)’ and ‘(4) =⇒ (1)’.
So, overall, the proof above is enough to show that any two of the statements given in the theorem
are equivalent, and, therefore, that all the given statements are equivalent.
It might also be worth noting that, in the last part of the proof, we may verify that A^{-1} = F_m^{-1} · · · F_1^{-1}
“works” as an inverse matrix for the matrix A, by using the idea behind Proposition 1.29 (together
with Proposition 1.49), to “cancel out”, in pairs, all the “little matrices” in

AA^{-1} = (F_1 · · · F_m)(F_m^{-1} · · · F_1^{-1}) and A^{-1}A = (F_m^{-1} · · · F_1^{-1})(F_1 · · · F_m)
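Finally, statement (3) of the theorem itself suggests a computational test for invertibility. The sketch below (sympy assumed available; the helper name and the matrix B are our own illustrative choices) decides invertibility purely from the reduced row echelon form:

import sympy as sp

def invertible_via_rref(A):
    # Statement (3): A is invertible iff rref(A) is the identity matrix
    rref_form, _pivots = A.rref()
    return rref_form == sp.eye(A.rows)

A = sp.Matrix([[3, 0, 0], [0, 5, 1], [0, 1, 0]])   # invertible (as shown earlier)
B = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # singular: row 3 = 2(row 2) - row 1
print(invertible_via_rref(A), invertible_via_rref(B))  # True False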