Linear Algebra Week 3
Definition 1 (Nullity)
If A is any m × n real matrix, the dimension of the null-space N (A) of A is
called the nullity of A and is denoted null(A).
Example 3
Consider the augmented matrix of a homogeneous system Ax = 0:
\[
\left[\begin{array}{rrrrr|r}
1 & 3 & 1 & -2 & -3 & 0\\
1 & 3 & 3 & -1 & -4 & 0\\
2 & 6 & -4 & -7 & -3 & 0\\
3 & 9 & 1 & -7 & -8 & 0
\end{array}\right]
\]
Performing EROs yields:
\[
\xrightarrow{E_{21}(-1),\,E_{31}(-2),\,E_{41}(-3)}
\left[\begin{array}{rrrrr|r}
1 & 3 & 1 & -2 & -3 & 0\\
0 & 0 & 2 & 1 & -1 & 0\\
0 & 0 & -6 & -3 & 3 & 0\\
0 & 0 & -2 & -1 & 1 & 0
\end{array}\right]
\xrightarrow{E_{32}(3),\,E_{42}(1)}
\left[\begin{array}{rrrrr|r}
1 & 3 & 1 & -2 & -3 & 0\\
0 & 0 & 2 & 1 & -1 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\quad(\text{REF})
\]
The solution set is
\[
N(A) = \left\{\begin{bmatrix} -6x_2 + 5x_4 + 5x_5\\ 2x_2\\ -x_4 + x_5\\ 2x_4\\ 2x_5 \end{bmatrix} : x_2, x_4, x_5 \in \mathbb{R}\right\} \subset \mathbb{R}^5.
\]
Here \(x_2, x_4, x_5\) take arbitrary values since the second, fourth and fifth columns are pivot-free.
Finally, null(A) = 3, rank(A) = 2 and null(A) + rank(A) = 5, which is the number of variables, i.e. the number of columns of A.
Given
\[
A = \begin{bmatrix} 1 & 3 & 1 & -2 & -3\\ 1 & 3 & 3 & -1 & -4\\ 2 & 6 & -4 & -7 & -3\\ 3 & 9 & 1 & -7 & -8 \end{bmatrix},
\]
find a basis of the null space N(A) of A. The null space is described in the previous slide. Since the real triple \((x_2, x_4, x_5)\) can take any value in \(\mathbb{R}^3\), we assign the values (1, 0, 0), (0, 1, 0) and (0, 0, 1) successively, to find
\[
v_1 = \begin{bmatrix} -6\\ 2\\ 0\\ 0\\ 0 \end{bmatrix},\quad
v_2 = \begin{bmatrix} 5\\ 0\\ -1\\ 2\\ 0 \end{bmatrix},\quad
v_3 = \begin{bmatrix} 5\\ 0\\ 1\\ 0\\ 2 \end{bmatrix}
\]
as three linearly independent elements of N(A), which form a basis.
We have parametrized N(A) by \(\mathbb{R}^3\) and also used its standard basis to find a basis of N(A).
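As a sanity check, the three basis vectors above can be verified to lie in N(A) directly. A minimal pure-Python sketch (the matrix and vectors are copied from the example; `matvec` is a helper of mine, not part of the notes):

```python
# Verify that v1, v2, v3 from the example lie in the null space N(A).
A = [[1, 3,  1, -2, -3],
     [1, 3,  3, -1, -4],
     [2, 6, -4, -7, -3],
     [3, 9,  1, -7, -8]]

def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

v1 = [-6, 2, 0, 0, 0]   # from (x2, x4, x5) = (1, 0, 0)
v2 = [5, 0, -1, 2, 0]   # from (x2, x4, x5) = (0, 1, 0)
v3 = [5, 0, 1, 0, 2]    # from (x2, x4, x5) = (0, 0, 1)

for v in (v1, v2, v3):
    assert matvec(A, v) == [0, 0, 0, 0]
```

Linear independence of v1, v2, v3 is clear from their second, fourth and fifth entries.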
Theorem 4
An n × n matrix A is invertible if and only if its rank is n. Equivalently, if
and only if the null space of A is trivial.
Proof: The Gauss-Jordan method of finding A−1 shows that the inverse exists if and only if the reduced row echelon form of the n × n matrix A is the identity matrix In , which has n pivots.
Definition 5 (Minors of A)
The (jk)th minor Mjk = Mjk (A) of A is defined to be the determinant of
the (n − 1) × (n − 1) sub-matrix of A obtained by deleting its j th row and
the k th column.
Definition 6 (Determinant of A)
The determinant |A| is defined as the sum
\[
|A| = \sum_{k=1}^{n} (-1)^{j+k} a_{jk} M_{jk} \quad (\text{expansion by the } j\text{th row})
    = \sum_{j=1}^{n} (-1)^{j+k} a_{jk} M_{jk} \quad (\text{expansion by the } k\text{th column})
\]
Remark: We will consistently use the row expansions and finally prove that column expansion is also valid. We will NOT prove that expansions by any two rows are equal. It is given in the handout.
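The row expansion in Definition 6 translates directly into a recursive procedure. A small illustrative sketch in Python (function names are mine, not from the handout), always expanding by the first row:

```python
def minor(M, j, k):
    """Delete row j and column k of M (0-based indices here)."""
    return [[M[r][c] for c in range(len(M)) if c != k]
            for r in range(len(M)) if r != j]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det(minor(M, 0, k)) for k in range(n))

assert det([[1, 2], [3, 4]]) == -2                    # ad - bc
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1    # |I3| = 1
```

This is O(n!) and only suitable for tiny matrices; it is meant to mirror the definition, not to be efficient.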
Example 7
\[
A = \begin{bmatrix} a & b\\ c & d \end{bmatrix},\quad
M = \begin{bmatrix} d & c\\ b & a \end{bmatrix},\quad
C = \begin{bmatrix} d & -c\\ -b & a \end{bmatrix},\quad
\operatorname{Adj} A = \begin{bmatrix} d & -b\\ -c & a \end{bmatrix}
\]
Theorem 8
Let A be any n × n matrix, n ≥ 2, and let B be obtained from A by interchanging two of its rows. Then |B| = −|A|.
Proof:
Direct check if n = 2. For n ≥ 3, expand by a row not involved in the interchange. By the induction hypothesis, each cofactor involved changes sign.
Corollary 9
If two of the rows are the same, then the determinant vanishes.
Proof:
On interchanging the said rows, the determinant changes sign, while the matrix, and therefore the determinant, remains the same.
Theorem 10
Let A be an n × n matrix. If its jth row is a linear combination of two rows, say \(A_j = c' A'_j + c'' A''_j\), then \(|A| = c'|A'| + c''|A''|\). The notations are as follows:
\[
A = \begin{bmatrix} A_1\\ \vdots\\ A_j\\ \vdots\\ A_n \end{bmatrix},\quad
A_j = c'A'_j + c''A''_j,\quad
A' = \begin{bmatrix} A_1\\ \vdots\\ A'_j\\ \vdots\\ A_n \end{bmatrix},\quad
A'' = \begin{bmatrix} A_1\\ \vdots\\ A''_j\\ \vdots\\ A_n \end{bmatrix}
\]
Proof:
Obvious. Just expand by the j th row. Cofactors remain unchanged.
Theorem 11
Let In be the n × n identity matrix. Then |In | = 1.
Proof:
Obvious for n = 1. Assuming for (n − 1) and expanding |In | by the first
row we get |In | = 1.|In−1 | + 0 + · · · + 0 = 1.
For the converse, we actually construct the inverse using the adjoint.
Lemma 13
Let A be a square matrix. Then A(Adj A) = (Adj A)A = |A|I.
Proof: The (j, k) entry of A(Adj A) is
\[
\sum_{\ell=1}^{n} a_{j\ell}\, C_{k\ell} = \sum_{\ell=1}^{n} (-1)^{k+\ell} a_{j\ell}\, M_{k\ell}.
\]
For j = k this is exactly the expansion of |A| by the jth row, so each diagonal entry equals |A|. For j ≠ k it is the expansion, by the kth row, of the determinant of the matrix obtained from A by replacing its kth row with a copy of its jth row. That matrix has two equal rows (jth row = kth row), so the determinant vanishes by Corollary 9. Hence A(Adj A) = |A|I, and similarly (Adj A)A = |A|I.
Expand by the first row and assume by induction that \(|I_{n-1}| = 1\). For example,
\[
\begin{vmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{vmatrix}
= 1\cdot\begin{vmatrix} 1 & 0\\ 0 & 1 \end{vmatrix} + 0\cdot(*) + 0\cdot(*) = 1
\]
Example 15
Find the matrices of minors, cofactors and the adjoint of the matrix
\[
A = \begin{bmatrix} 2 & \sqrt{3} & \sqrt{2}\\ -1 & 0 & 1\\ \sqrt{2} & \sqrt{3} & 2 \end{bmatrix}.
\]
\[
A(\operatorname{Adj}A) =
\begin{bmatrix} 2 & \sqrt{3} & \sqrt{2}\\ -1 & 0 & 1\\ \sqrt{2} & \sqrt{3} & 2 \end{bmatrix}
\begin{bmatrix} -\sqrt{3} & (\sqrt{2}-2)\sqrt{3} & \sqrt{3}\\ 2+\sqrt{2} & 2 & -(2+\sqrt{2})\\ -\sqrt{3} & (\sqrt{2}-2)\sqrt{3} & \sqrt{3} \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}
\]
Further, expanding by the second row,
\[
|A| = \begin{vmatrix} 2 & \sqrt{3} & \sqrt{2}\\ -1 & 0 & 1\\ \sqrt{2} & \sqrt{3} & 2 \end{vmatrix}
= -(-1)(2\sqrt{3} - \sqrt{6}) - 1\,(2\sqrt{3} - \sqrt{6}) = 0
\implies A(\operatorname{Adj}A) = O = |A|\,I
\]
and the inverse does not exist.
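The identity A(Adj A) = |A|I = O for this matrix can be checked numerically. A sketch in Python (the cofactor and product helpers are mine; since the entries are floating-point, the test uses a small tolerance):

```python
import math

r2, r3 = math.sqrt(2), math.sqrt(3)
A = [[2, r3, r2],
     [-1, 0, 1],
     [r2, r3, 2]]

def cofactor(M, j, k):
    """(-1)^(j+k) times the (j,k) minor of a 3x3 matrix."""
    rows = [r for i, r in enumerate(M) if i != j]
    sub = [[row[c] for c in range(3) if c != k] for row in rows]
    return (-1) ** (j + k) * (sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0])

# Adjoint = transpose of the cofactor matrix.
adj = [[cofactor(A, k, j) for k in range(3)] for j in range(3)]
prod = [[sum(A[i][t] * adj[t][j] for t in range(3)) for j in range(3)]
        for i in range(3)]

# |A| = 0 here, so A(Adj A) should be the zero matrix up to rounding.
assert all(abs(prod[i][j]) < 1e-12 for i in range(3) for j in range(3))
```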
Notation:
For any natural number n we denote by n the (ordered) set
{1, 2, ..., n}.
Let A be any real (or complex) m × n matrix and let J ⊂ m and
K ⊂ n be ordered subsets of indices. We denote by AJK the
submatrix of A obtained by selecting the rows corresponding to
indices in J and the columns corresponding to the indices in K .
Thus for example the (j, k)-minor Mjk = Aĵ k̂ for ĵ = n \ {j}, k̂ = n \ {k}.
Example 16
Let A = [ajk ] be a 5 × 10 matrix. Let J = {3, 5}, K = {2, 4, 6}. Determine the submatrix AJK . Answer:
\[
A_{JK} = \begin{bmatrix} a_{32} & a_{34} & a_{36}\\ a_{52} & a_{54} & a_{56} \end{bmatrix}.
\]
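Submatrix selection by index sets is a one-line list comprehension. A sketch (the entries are labelled strings standing in for the \(a_{jk}\); `submatrix` is a helper of mine, using the 1-based indices of the notes):

```python
def submatrix(A, J, K):
    """A_JK: rows indexed by J, columns by K (1-based, as in the notes)."""
    return [[A[j - 1][k - 1] for k in K] for j in J]

# A 5x10 matrix whose (j,k) entry is the label "a{j}{k}".
A = [[f"a{j}{k}" for k in range(1, 11)] for j in range(1, 6)]

assert submatrix(A, [3, 5], [2, 4, 6]) == [["a32", "a34", "a36"],
                                           ["a52", "a54", "a56"]]
```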
Proof:
If r = rank_d (A) (the determinantal rank), there is an r × r submatrix AJK of nonzero determinant. Then the rows of AJK are linearly independent.
Example 19
Find the rank by determinants. Verify by row reduction.
\[
\text{(i)}\ \begin{bmatrix} 0 & 2 & 3\\ 2 & 0 & 5\\ -3 & 5 & 0 \end{bmatrix}
\qquad
\text{(ii)}\ \begin{bmatrix} 4 & 3\\ -8 & -6\\ 16 & 12 \end{bmatrix}.
\]
(i)
\[
\begin{vmatrix} 0 & 2 & 3\\ 2 & 0 & 5\\ -3 & 5 & 0 \end{vmatrix} = 0 + (-2)\times 15 + 3\times 10 = 0
\implies \operatorname{rank}(A) \le 2 \quad (\nexists\ 3\times 3 \text{ subdeterminant} \neq 0).
\]
On the other hand,
\[
\begin{vmatrix} 0 & 2\\ 2 & 0 \end{vmatrix} = -4 \neq 0
\]
for the 2 × 2 submatrix \(A_{\{1,2\}\{1,2\}}\), so rank(A) = 2.
Verification by row reduction:
January 19, 2017 30 / 51
Example contd.
\[
\begin{bmatrix} 0 & 2 & 3\\ 2 & 0 & 5\\ -3 & 5 & 0 \end{bmatrix}
\mapsto \begin{bmatrix} -3 & 5 & 0\\ 2 & 0 & 5\\ 0 & 2 & 3 \end{bmatrix}
\mapsto \begin{bmatrix} -3 & 5 & 0\\ 0 & 10/3 & 5\\ 0 & 2 & 3 \end{bmatrix}
\mapsto \begin{bmatrix} -3 & 5 & 0\\ 0 & 2 & 3\\ 0 & 2 & 3 \end{bmatrix}
\mapsto \begin{bmatrix} -3 & 5 & 0\\ 0 & 2 & 3\\ 0 & 0 & 0 \end{bmatrix}
\quad(\text{2 pivots})
\]
(ii) \(A = \begin{bmatrix} 4 & 3\\ -8 & -6\\ 16 & 12 \end{bmatrix}\). All 2 × 2 subdeterminants vanish:
\[
|A_{\{1,2\}\{1,2\}}| = |A_{\{2,3\}\{1,2\}}| = |A_{\{1,3\}\{1,2\}}| = 0
\implies \operatorname{rank}(A) < 2 \implies \operatorname{rank}(A) = 1 \quad \because A \neq [0].
\]
Verification by row reduction:
\[
\begin{bmatrix} 4 & 3\\ -8 & -6\\ 16 & 12 \end{bmatrix}
\mapsto \begin{bmatrix} 4 & 3\\ 0 & 0\\ 0 & 0 \end{bmatrix}
\]
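The determinantal rank can be computed by brute force: search for the largest r with a nonzero r × r subdeterminant. A sketch (exponential in the matrix size, fine for these tiny examples; the function names are mine):

```python
from itertools import combinations

def det(M):
    """Cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)))

def rank_by_minors(A):
    """Largest r such that some r x r submatrix has nonzero determinant."""
    m, n = len(A), len(A[0])
    for r in range(min(m, n), 0, -1):
        for J in combinations(range(m), r):
            for K in combinations(range(n), r):
                if det([[A[j][k] for k in K] for j in J]) != 0:
                    return r
    return 0

assert rank_by_minors([[0, 2, 3], [2, 0, 5], [-3, 5, 0]]) == 2   # example (i)
assert rank_by_minors([[4, 3], [-8, -6], [16, 12]]) == 1          # example (ii)
```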
Applications of determinant: Cramer’s Rule
Let Ax = b be a system of n equations in the same number n of variables.
Thus A is a square matrix in this case. We may (temporarily) call such a system a square system.
If |A| ≠ 0, the square system Ax = b has the unique solution
\[
x_k = \frac{|A_{\hat{k}}[b]|}{|A|}, \qquad k = 1, \ldots, n,
\]
where \(A_{\hat{k}}[b] = [A_1 , ..., b, ..., A_n ]\) is the n × n matrix obtained from A by replacing its kth column by b.
Proof: Omitted.
Decode the column
\[
x = A^{-1}b = \frac{(\operatorname{Adj} A)\, b}{|A|}
\]
entry-wise.
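Cramer's rule is mechanical to implement with exact rational arithmetic. A sketch, illustrated on a small 2 × 2 system of my own choosing (the `det` and `cramer` helpers are mine):

```python
from fractions import Fraction

def det(M):
    """Cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)))

def cramer(A, b):
    """Solve Ax = b via x_k = |A_k[b]| / |A|; requires |A| != 0."""
    d = det(A)
    if d == 0:
        raise ValueError("Cramer's rule needs |A| != 0")
    n = len(A)
    # Replace column k of A by b, take the determinant, divide by |A|.
    return [Fraction(det([[b[i] if c == k else A[i][c] for c in range(n)]
                          for i in range(n)]), d)
            for k in range(n)]

# 2x + y = 5, x + 3y = 10 has the solution x = 1, y = 3.
assert cramer([[2, 1], [1, 3]], [5, 10]) == [1, 3]
```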
Example
Example 21
Find the values of β for which Cramer’s rule is applicable. For the
remaining value(s) of β, find the number of solutions.
x + 2y + 3z = 20
x + 3y + z = 13
x + 6y + βz = β.
Cramer's rule applies exactly when |A| ≠ 0. Here
\[
|A| = \begin{vmatrix} 1 & 2 & 3\\ 1 & 3 & 1\\ 1 & 6 & \beta \end{vmatrix} = \beta + 5,
\]
so Cramer's rule is applicable for β ≠ −5. For β = −5 we row-reduce the augmented matrix:
\[
\left[\begin{array}{ccc|c} 1 & 2 & 3 & 20\\ 1 & 3 & 1 & 13\\ 1 & 6 & -5 & -5 \end{array}\right]
\mapsto \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20\\ 0 & 1 & -2 & -7\\ 0 & 4 & -8 & -25 \end{array}\right]
\mapsto \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20\\ 0 & 1 & -2 & -7\\ 0 & 0 & 0 & 3 \end{array}\right].
\]
The pivot in the last column means that rank(A) = 2 < 3 = rank(A+ ). Hence the system has no solutions for β = −5.
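The threshold value β = −5 comes from the coefficient determinant. A quick check of |A| = β + 5 using the explicit 3 × 3 formula (a sketch; the helper names are mine):

```python
def det3(M):
    """Explicit 3x3 determinant a(ei - fh) - b(di - fg) + c(dh - eg)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def detA(beta):
    """Coefficient determinant of the system in Example 21."""
    return det3([[1, 2, 3], [1, 3, 1], [1, 6, beta]])

# |A| = beta + 5 over a range of integer values; zero exactly at beta = -5.
assert all(detA(beta) == beta + 5 for beta in range(-10, 11))
assert detA(-5) == 0
```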
Example 23
Let
\[
A = \begin{bmatrix} 0 & 2 & 3\\ 2 & 0 & 5\\ -3 & 5 & 0 \end{bmatrix}
\]
be a matrix considered earlier. Find bases for the row and column spaces.
|A| = 0 ⟹ rank(A) < 3, and \(|A_{\{1,2\}\{1,2\}}| = -4 \neq 0\). Hence the row and column spaces are 2-dimensional; the first two rows of A can be chosen as a basis for the row space, and the first two columns of A as a basis for the column space of A.
Expansion in a basis
Recall that V ⊆ Rn is a vector space, i.e. V = L(S) for some finite set S of vectors.
Assume S is linearly independent, so that it is also a basis of V .
Theorem 24
Let B = {v1 , v2 , ..., vk } be a basis of a vector space V ⊆ Rn and v ∈ V .
There exist unique scalars c1 , c2 , ..., ck s.t. v = c1 v1 + c2 v2 + · · · + ck vk .
Proof:
Existence is obvious since V = L(B). For uniqueness,
\(v = c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = c'_1 v_1 + c'_2 v_2 + \cdots + c'_k v_k\)
\(\implies (c_1 - c'_1)v_1 + (c_2 - c'_2)v_2 + \cdots + (c_k - c'_k)v_k = 0\)
\(\implies c_r - c'_r = 0,\ 1 \le r \le k\), since B is linearly independent ⟹ uniqueness.
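For a concrete instance of this theorem in R², the unique coordinates can be computed by solving a 2 × 2 linear system. A sketch with a hypothetical basis of my own choosing (`coords2` is my helper, using the 2 × 2 Cramer formulas):

```python
def coords2(b1, b2, v):
    """Unique (c1, c2) with v = c1*b1 + c2*b2, for a basis {b1, b2} of R^2."""
    d = b1[0] * b2[1] - b2[0] * b1[1]          # det[b1 b2], nonzero for a basis
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / d     # Cramer's rule for the 2x2 system
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / d
    return c1, c2

# v = (3, 1) in the basis {(1, 1), (1, -1)}: v = 2*(1,1) + 1*(1,-1).
assert coords2((1, 1), (1, -1), (3, 1)) == (2.0, 1.0)
```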
Recall that for vectors v, w ∈ Rn , their dot product or standard inner product is defined to be the scalar
\[
v \cdot w = v^T w = \sum_{j=1}^{n} v_j w_j.
\]
It is also denoted ⟨v, w⟩. Evidently, ⟨v, v⟩ ≥ 0 with equality if and only if v = 0.
Definition 26 (Orthogonality)
We say v, w ∈ Rn are mutually orthogonal if hv, wi = 0.
2 Triangle inequality: \(\|v + w\| \le \|v\| + \|w\|\).
3 Pythagoras theorem: If v ⊥ w then \(\|v + w\|^2 = \|v\|^2 + \|w\|^2\).
Theorem 28
Let S = {v1 , v2 , ..., vk } be a set of non-zero vectors in Rn . If the vectors
are mutually orthogonal, then S is a l.i. set.
Proof:
\[
\sum_j c_j v_j = 0 \implies \Big\langle \sum_j c_j v_j ,\, v_\ell \Big\rangle = 0
\implies c_\ell \|v_\ell\|^2 = 0 \quad (1 \le \ell \le k)
\implies c_\ell = 0 \text{ since } v_\ell \neq 0.
\]
Definition 29 (Orthogonality/Orthonormality)
Let S = {v1 , v2 , ..., vk } be a set of vectors in Rn .
1 We say that S is an orthogonal set if vj ⊥ v` ; ∀ j 6= ` ∈ {1, 2, ..., k}.
(One or more vectors in S may be zero.)
2 We say that S is an orthonormal set if vj ⊥ v` ; ∀ j 6= ` ∈ {1, 2, ..., k}
and kvj k = 1 for each j (normalization).
Theorem 31
Let B = {u1 , u2 , ..., uk } be an orthonormal basis of a vector space V and x ∈ V . Then
\[
x = \langle x, u_1\rangle u_1 + \langle x, u_2\rangle u_2 + \cdots + \langle x, u_k\rangle u_k .
\]
Proof: Direct check.
Start with \(x = c_1 u_1 + \cdots + c_k u_k\) and take the inner product successively with \(u_1 , u_2 , ..., u_k\) to get \(c_j = \langle x, u_j\rangle\).
Example 32
Verify that \(u_1 = \begin{bmatrix}0.6\\0.8\end{bmatrix},\ u_2 = \begin{bmatrix}-0.8\\0.6\end{bmatrix}\) is an orthonormal basis of R2 .
Expand the vector \(x = \begin{bmatrix}1\\1\end{bmatrix}\) in this basis and verify equality.
Solution: (0.6)2 + (0.8)2 = 1 shows that \(\|u_j\|^2 = 1\), j = 1, 2. Further, \(u_1 \cdot u_2 = 0\)
is easy AND \(x \cdot u_1 = 1.4,\ x \cdot u_2 = -0.2\) is also easy. Verification:
\[
(x \cdot u_1)u_1 + (x \cdot u_2)u_2
= 1.4\begin{bmatrix}0.6\\0.8\end{bmatrix} - 0.2\begin{bmatrix}-0.8\\0.6\end{bmatrix}
= \begin{bmatrix}0.84 + 0.16\\ 1.12 - 0.12\end{bmatrix}
= \begin{bmatrix}1\\1\end{bmatrix} = x.
\]
Example 33
Verify that \(u_1 = \begin{bmatrix}2/3\\-2/3\\1/3\end{bmatrix},\ u_2 = \begin{bmatrix}1/3\\2/3\\2/3\end{bmatrix},\ u_3 = \begin{bmatrix}2/3\\1/3\\-2/3\end{bmatrix}\) is an orthonormal basis of R3 .
Expand the vector \(x = \begin{bmatrix}1\\1\\1\end{bmatrix}\) in this basis and verify equality.
Solution: \(\frac{1}{9} + \frac{4}{9} + \frac{4}{9} = 1\) shows that \(\|u_j\|^2 = 1\), j = 1, 2, 3. Further,
\(u_j \cdot u_k = 0\) for j ≠ k is easy AND \(x \cdot u_1 = 1/3,\ x \cdot u_2 = 5/3,\ x \cdot u_3 = 1/3\) is also easy.
Verification:
Theorem 34
If A, B are m × n and n × p matrices respectively, which give us linear maps \(\mathbb{R}^n \xrightarrow{A} \mathbb{R}^m\) and \(\mathbb{R}^p \xrightarrow{B} \mathbb{R}^n\), then the product AB represents the composite \(\mathbb{R}^p \xrightarrow{B} \mathbb{R}^n \xrightarrow{A} \mathbb{R}^m\), viz. A ◦ B.
Proof:
Prove inductively that ⟨wj , w` ⟩ = 0 for ` = 1, 2, ..., j − 1. In other words,
check at each stage, which is routine.
Remark: If {v1 , ..., vk } is a l.i. set, then at no stage do we get wj = 0. If not,
we get \(w_{j_t} = 0\) for a few indices \(1 \le j_1 < \cdots < j_r \le k\). We drop these to get
an o.g. basis. Next we normalize to get an o.n. basis of V .
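The process itself can be sketched in exact arithmetic. The following assumes the usual Gram-Schmidt update \(w_j = v_j - \sum_\ell \frac{\langle v_j, w_\ell\rangle}{\|w_\ell\|^2} w_\ell\), with zero vectors skipped as in the remark (the helper names are mine):

```python
from fractions import Fraction

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def gram_schmidt(vectors):
    """Orthogonalize a list of vectors; a zero output signals dependence."""
    ws = []
    for v in vectors:
        w = [Fraction(x) for x in v]
        for u in ws:
            if dot(u, u):                       # skip any zero w's
                c = dot(w, u) / dot(u, u)       # projection coefficient
                w = [wi - c * ui for wi, ui in zip(w, u)]
        ws.append(w)
    return ws

# Example 36 below: {i, i+j, i-j, i+j+k} orthogonalizes to {i, j, 0, k}.
out = gram_schmidt([[1, 0, 0], [1, 1, 0], [1, -1, 0], [1, 1, 1]])
assert out == [[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]]
```

Using `Fraction` keeps every intermediate result exact, so the zero vectors that signal linear dependence are detected exactly rather than as floating-point noise.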
Examples
Example 35
Apply the G-S process to {i + j + k, i + j, i} in R3 .
Solution:
\[
w_1 = i + j + k.
\]
\[
w_2 = i + j - \frac{\langle i+j,\, i+j+k\rangle}{\|i+j+k\|^2}\, w_1 = \frac{i + j - 2k}{3}.
\]
\[
w_3 = i - \frac{\langle i,\, i+j+k\rangle}{3}\,(i+j+k)
        - \frac{\big\langle i,\, \frac{i+j-2k}{3}\big\rangle}{\big\|\frac{i+j-2k}{3}\big\|^2}\cdot\frac{i+j-2k}{3}
     = \frac{i - j}{2}.
\]
Thus {w1 , w2 , w3 } = {i + j + k, i + j − 2k, i − j} is an o.g. set on dropping the denominators for convenience. The corresponding o.n. set is
\[
\{u_1 , u_2 , u_3\} = \left\{\frac{i+j+k}{\sqrt{3}},\ \frac{i+j-2k}{\sqrt{6}},\ \frac{i-j}{\sqrt{2}}\right\}
\]
Example 36
Apply the G-S process to {i, i + j, i − j, i + j + k} in R3 .
Solution:
\[
w_1 = i.
\]
\[
w_2 = (i + j) - \frac{\langle i+j,\, i\rangle}{\|i\|^2}\, i = j.
\]
\[
w_3 = (i - j) - \frac{\langle i-j,\, i\rangle}{1}\, i - \frac{\langle i-j,\, j\rangle}{1}\, j = 0.
\]
\[
w_4 = (i + j + k) - \frac{\langle i+j+k,\, i\rangle}{1}\, i - \frac{\langle i+j+k,\, j\rangle}{1}\, j = k.
\]
Thus {w1 , w2 , w3 , w4 } = {i, j, 0, k} is an o.g. set. After dropping 0, the corresponding o.n. set is {u1 , u2 , u3 } = {i, j, k}, the standard o.n. basis of R3 .
In particular, if the given o.n. set is a basis of V , Bessel's inequality becomes an equality for each vector v ∈ V .
Examples
Example 38
Orthogonalize the following ordered set of row-vectors in R4 .
{[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]}
Do you get an orthogonal basis? Does [−2, −1, 1, 2] belong to the linear span?
Use Bessel’s inequality.
Solution: After the Gram-Schmidt process we get
\[
\left\{[1, 1, 0, 0],\ \tfrac{1}{2}[1, -1, 2, 0],\ \tfrac{1}{3}[1, -1, -1, 3],\ \tfrac{1}{2}[-1, 1, 1, 1],\ 0,\ 0\right\}.
\]
The 4 nonzero elements give an orthogonal basis. [−2, −1, 1, 2] must be in the linear span. To verify Bessel's equality, we first note that
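The Bessel check that the example begins can be finished numerically. With an orthogonal (not necessarily normalized) basis \(w_j\), equality reads \(\sum_j \langle x, w_j\rangle^2 / \|w_j\|^2 = \|x\|^2\); a sketch in exact arithmetic (scalar multiples of the \(w_j\) do not affect the sum, so the fractions 1/2, 1/3, 1/2 are dropped):

```python
from fractions import Fraction

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

# The four nonzero Gram-Schmidt outputs, with denominators cleared.
ws = [[1, 1, 0, 0], [1, -1, 2, 0], [1, -1, -1, 3], [-1, 1, 1, 1]]
x = [-2, -1, 1, 2]

# Bessel sum: <x, w>^2 / ||w||^2 over the orthogonal set.
bessel = sum(Fraction(dot(x, w) ** 2, dot(w, w)) for w in ws)
assert bessel == dot(x, x) == 10   # equality holds, so x lies in the span
```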