WEEK 5-Student


WEEK 5: MATRIX ALGEBRA

Matrix Algebra 1
5.1 INTRODUCTION
5.2 DEFINITION
A matrix is an array of mn elements (where m and n are integers) arranged
in m rows and n columns. Such a matrix A is usually denoted by:

where a11, a12, …, amn are called its elements and may be either real or complex.
The matrix A is said to be of size (m x n). If m = n, the matrix is said to be a
square matrix of order n.

If the matrix has one column or one row then it is called column vector or row vector
respectively.

The elements aii in a square matrix form the principal diagonal (or main diagonal).
Their sum a11 + a22 + a33 + … + ann is called the trace of A. If all the elements of a square
matrix are zero, then the matrix is called a null matrix. On the other hand, if only the
elements on the main diagonal are non-zero, then the matrix is said to be a diagonal
matrix.
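A quick numerical illustration of these terms (the 3 x 3 matrix here is my own example, not one from the slides):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 10]])

print(np.diag(A))    # principal diagonal: [1 5 10]
print(np.trace(A))   # trace = 1 + 5 + 10 = 16

D = np.diag([1, 5, 10])   # a diagonal matrix built from those elements
Z = np.zeros((3, 3))      # a null matrix
```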

In particular, if all the diagonal elements are equal to 1, it is called a unit matrix and
is usually denoted by I. Thus,

A square matrix is said to be an upper-triangular matrix if aij = 0 for i>j, and a lower-
triangular matrix if aij = 0 for i<j. The matrices U (upper-triangular matrix) and L
(lower-triangular matrix) are defined by:

A square matrix A is said to be banded if

aij = 0, when |i - j| > k,

in which case only (2k + 1) elements in every row are non-zero. In particular,
matrices of this type with k = 1

are called tridiagonal matrices.

Every square matrix A is associated with a number called its determinant, and it
is written as

If |A| ≠ 0, the matrix is said to be non-singular; otherwise it is a singular matrix. As
an example, the matrix

is singular, since |A| = 0.
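Singularity can be checked by computing the determinant; both matrices below are my own examples, since the one in the slides is not reproduced here:

```python
import numpy as np

# Singular: the second row is twice the first, so |A| = 0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))   # ~0, so A is singular

# Non-singular: |B| = 1*4 - 2*3 = -2 != 0
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(B))   # ~-2, so B is non-singular
```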

A square matrix A in which aij = aji is said to be symmetric. If aij = -aji, it is said to
be skew-symmetric. For a skew-symmetric matrix we therefore have

aii = -aii,

so that aii = 0. For example, the matrix

is a skew-symmetric matrix.

Basic properties of matrices are elaborated in detail in Appendix 5.

5.3 SOLUTION OF LINEAR EQUATIONS
A system of m linear equations in n unknowns x1, x2, x3, …, xn is defined as follows:

….

which may be written in matrix form as

AX = B

where

A = (aij) is called the coefficient matrix, and X = (xi) and B = (bi) are called the
column vectors of the unknowns and of the constants respectively.
If B = 0 (that is, bi = 0 for all i), the system is called homogeneous.
If B ≠ 0 (that is, bi ≠ 0 for at least one i), the system is called non-homogeneous.

A system of linear equations having no solution is called inconsistent, while a
system having one or more solutions is called consistent.

Geometric Interpretation, Existence and Uniqueness of solutions

If we have two equations in two unknowns x1 and x2

If we interpret x1, x2 as coordinates in the x1x2-plane, then each of the two equations
represents a straight line, and (x1, x2) is a solution if and only if the point P with
coordinates (x1, x2) lies on both lines. Hence there are three possible cases:

(a) Precisely one solution if the lines intersect
(b) Infinitely many solutions if the lines coincide
(c) No solution if the lines are parallel

For instance,

5.3.1 GAUSSIAN ELIMINATION
Let A and B be the above matrices. We may write the above system of linear
equations in its augmented matrix form [A | B]. Gauss elimination is then a
reduction of the augmented matrix [A | B] by elementary row operations to
"triangular form", from which we readily obtain the values of the unknowns by
"back substitution".

Steps:

i. Suppose a11 ≠ 0, that is, the first entry of the first row is nonzero. (If a11 = 0 and ai1 ≠ 0,
then interchange the 1st row and the i-th row.) This row, called the pivot row,
remains untouched.
ii. Use elementary row operations to form a new matrix with ai1 = 0 for all
i = 2, 3, …, m.

iii. Next take the 2nd row as the new pivot row if a22 ≠ 0 (if a22 = 0 and ai2 ≠ 0, i ≠ 1, then
interchange the 2nd row and the i-th row). Repeat step ii so that ai2 = 0 for all
i = 3, 4, …, m.
iv. Repeat the process until we form an "upper triangular matrix".
v. Work backward from the last to the first row of this system to obtain the
solution.
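The steps above can be sketched in Python. This is a minimal sketch: the function name gauss_solve and the use of plain nested lists are my own choices, and a unique solution is assumed.

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with row interchanges
    and back substitution (assumes a unique solution exists)."""
    n = len(A)
    # Build the augmented matrix [A | b]
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for k in range(n):
        # Step i: if the pivot is zero, interchange with a row below
        if M[k][k] == 0:
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    break
        # Step ii: eliminate the entries below the pivot
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Step v: back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: x + y = 3, x - y = 1  ->  x = 2, y = 1
print(gauss_solve([[1, 1], [1, -1]], [3, 1]))
```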

Example 1

a)Solve the following system of linear equations:

Solution

b) Solve the following system of linear equations:

Solution:

Let x3 = k be any number. Then x1 = -k - 9 and x2 = k + 6.

Example: Gauss Elimination when Infinitely Many Solutions Exist

Solve the following linear system of three equations in four unknowns whose
augmented matrix is

Solution
We circle pivots and box terms of equations and corresponding entries to be
eliminated. We indicate the operations in terms of equations and operate on both
equations and matrices.

Step 1: Elimination of x1 from the second and third equations:

Step 2: Elimination of x2 from the third equation of (2) by adding

Back substitution. From the second equation, x2 = 1- x3 + 4x4. From this and the
first equation, x1 = 2 – x4. Since x3 and x4 remain arbitrary, we have infinitely many
solutions. If we choose a value of x3 and a value of x4, then the corresponding
values of x1 and x2 are uniquely determined.

On notation: if unknowns remain arbitrary, it is also customary to denote them by
other letters t1, t2, …. In this example we may write x1 = 2 - x4 = 2 - t2, x2 = 1 - x3 +
4x4 = 1 - t1 + 4t2, x3 = t1 (first arbitrary unknown), and x4 = t2 (second arbitrary unknown).

Example: Gauss Elimination if no Solution Exists

What will happen if we apply the Gauss elimination to a linear system that has no
solution? The answer is that in this case the method will show this fact by
producing a contradiction. For instance, consider

Step 1: Elimination of x1 from the second and third equations

This gives

Step 2: Elimination of x2 from the third equation gives

The false statement 0 = 12 shows that the system has no solution.

5.3.2 CRAMER'S RULE

Let AX = B be a system of n linear equations in n unknowns. If A is an n x n
nonsingular matrix, then AX = B has a unique solution given by:
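Cramer's rule gives each unknown as a ratio of determinants, xi = |Ai| / |A|, where Ai is A with its i-th column replaced by B. A minimal sketch (the function name cramer_solve and the example system are my own):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve AX = B for nonsingular square A using Cramer's rule:
    x_i = det(A_i) / det(A), where A_i is A with column i replaced by B."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("A is singular; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace the i-th column by the constants
        x[i] = np.linalg.det(Ai) / d
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))
```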

Examples:

(a) Solve the system of linear equations

Solution:
In the form AX = B

(b) Solve the system of linear equations

Find the determinant of A

By using Cramer’s rule, the solution is given by:

5.4 EIGENVALUE & EIGENVECTOR
Let A be an n x n matrix. A number λ is called an eigenvalue of A if there exists a
nonzero column vector X ϵ Mn,1 such that AX = λX. X is called an eigenvector of A
corresponding to (or associated with) the eigenvalue λ.

(Note that the eigenvalue λ can be the number 0, but the eigenvector X must be a
nonzero vector, i.e. X ≠ 0.)

Properties of eigenvalues and eigenvectors

The following properties of eigenvalues and eigenvectors are useful in
applications:

(i) If
a)

b)

c)

d)
(ii) The matrices A and AT have the same eigenvalues
(iii) If two or more eigenvalues are equal, then the eigenvectors may be linearly
independent or linearly dependent.
(iv) If X1 and X2 are eigenvectors of a matrix A corresponding to eigenvalues λ1 and
λ2 respectively, and if λ1 ≠ λ2, then X1 and X2 are linearly independent.

To solve for the eigenvalues λi and the corresponding eigenvectors xi of an n x n
matrix A, do the following steps:

1. Multiply an n x n identity matrix by the scalar λ.
2. Subtract the identity matrix multiple from the matrix A.
3. Find the determinant of the difference, det(A - λI).
4. Solve for the values of λ that satisfy the equation det(A - λI) = 0.
5. Solve for the corresponding eigenvector for each λ.
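As a numerical cross-check of steps 1-5 (the 2 x 2 matrix here is my own illustration, not the example from the slides), numpy.linalg.eig returns the roots of det(A - λI) = 0 together with the corresponding eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Steps 1-4: the eigenvalues are the roots of det(A - lam*I) = 0,
# here lam**2 - 4*lam + 3 = 0, i.e. lam = 1 and lam = 3.
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors
print(sorted(eigvals))

# Step 5 check: A X = lam X holds for each eigenpair
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```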

Example
Find the eigenvalues and eigenvectors of the following matrix

Solution:
To do this, we find the values of λ which satisfy the characteristic equation of
the matrix A, namely those values of λ for which

det(A - λI) = 0

Finding eigenvectors:

For each eigenvalue λ, we have

(A - λI)x = 0

Case 1: λ = 4

(A – 4I)x = 0

To check:

Substitute all values into AX = λX

Cases 2 & 3: λ = - 2

5.5 CAYLEY-HAMILTON THEOREM
This theorem states that every square matrix satisfies its own characteristic equation.
Let the characteristic polynomial be

i.e.

Then the Cayley-Hamilton theorem states that

Proof: The determinant |A - λI| is a polynomial of degree n in λ, and the elements
of adj(A - λI) are polynomials of degree (n - 1) or less in λ. We may therefore write

where the Qi's are square matrices of order n whose elements do not involve λ.

Since

We have

Equating the coefficients of like powers of λ on both sides of the above equation, we
get:

.
.

Pre-multiplying these equations respectively by I, A, A2, …, An and adding, we
obtain

Or

since the terms on the left side cancel each other. This proves the Cayley-Hamilton
theorem. We can write the above statement as:

Pre-multiplying both sides of the above equation by A-1, we obtain:

Or
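For a 2 x 2 matrix the characteristic equation is λ2 - tr(A)λ + |A| = 0, so the statement above can be checked numerically. The matrix here is my own example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
tr, det = np.trace(A), np.linalg.det(A)
I = np.eye(2)

# Cayley-Hamilton: A satisfies lam**2 - tr(A)*lam + det(A) = 0
assert np.allclose(A @ A - tr * A + det * I, 0)

# Pre-multiplying by the inverse: A - tr(A)*I + det(A)*A_inv = 0,
# hence A_inv = (tr(A)*I - A) / det(A)
A_inv = (tr * I - A) / det
assert np.allclose(A_inv @ A, I)
```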

Example

Given that A = . Use the Cayley-Hamilton theorem to find A-1.

Solution:

The characteristic equation of A is

By the Cayley-Hamilton theorem we have

Pre-multiplying the above equation by A-1, we obtain

5.6 LINEAR DEPENDENCE; ROW ECHELON MATRIX; REDUCED
ROW ECHELON MATRIX
Definition 1:

Let v1, v2, …, vn be vectors in a vector set V (that is, V = {v1, v2, …, vn}).

(i) We say that the vectors v1, v2, …, vn are linearly independent if

implies λ1 = λ2 = ⋯ = λn = 0.

(ii) v1, v2, …, vn are linearly dependent if there exist numbers λ1, λ2, …, λn,
not all of them zero, such that

Example:

a) Let V = {(1,0,0), (1,2,0), (1,0,1)}.
λ1(1,0,0) + λ2(1,2,0) + λ3(1,0,1) = 0

gives λ1 = 0, λ2 = 0, λ3 = 0.

Therefore (1,0,0), (1,2,0), (1,0,1) are linearly independent.

b) Let V = {(1,0,0), (1,2,2), (1,-4,-4)}.
Setting λ1(1,0,0) + λ2(1,2,2) + λ3(1,-4,-4) = 0 gives λ1 + λ2 + λ3 = 0 and
2λ2 - 4λ3 = 0, so λ3 is a free variable.

If λ3 = 1, then λ2 = 2 and λ1 = -3.

Therefore (1,0,0), (1,2,2), (1,-4,-4) are linearly dependent.
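The dependence in example (b) can be confirmed with a rank computation; numpy.linalg.matrix_rank is my choice of tool here:

```python
import numpy as np

# Rows are the vectors of example (b)
V = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 2.0],
              [1.0, -4.0, -4.0]])

# Independent iff the rank equals the number of vectors
print(np.linalg.matrix_rank(V))   # 2 < 3, so the vectors are dependent

# The relation found above: -3*v1 + 2*v2 + 1*v3 = 0
coeffs = np.array([-3.0, 2.0, 1.0])
assert np.allclose(coeffs @ V, 0)
```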

Definition 2:

Let A = (aij) ϵ Mm,n. The maximum number of linearly independent row vectors of A
is called the rank of A and is denoted by ρ(A) or rank A.

Remark:

Let V = {v1, v2, …, vn}. If there exists a zero row vector in V (that is, vi = 0 for
some i), then v1, v2, …, vn are linearly dependent.

If vi = 0, then λi need not be zero in order that

Definition 3 (Row Echelon Matrices):

A row is called a zero row if every element in the row is zero.

A row is called a non-zero row if there is at least one element in the row which is not
zero.

Let A = (aij) ϵ Mm,n, A ≠ 0, and let Ri be the i-th row of A, i = 1, 2, …, m.

Let Ri be a non-zero row.

The first non-zero element in Ri is called the distinguished element of Ri.

A matrix A is called a row echelon matrix if there is a number k, 1 ≤ k ≤ m, such that:

(i) only R1, R2, …, Rk are non-zero and they lie above all zero rows (if any zero
rows exist), and
(ii) if ci = the number of zeros before the distinguished element in Ri, then ci < ci+1,
i = 1, 2, …, k-1.

Furthermore, a row echelon matrix is called a row reduced echelon matrix, if the
distinguished elements are:

(iii) each equal to 1, and
(iv) the only nonzero entries in their respective columns.

The matrices A, C and F are not row echelon matrices.

The matrices B and D are row echelon matrices but not row reduced echelon matrices.

The matrix E is both a row echelon matrix and a row reduced echelon matrix.


Procedure to row reduce a matrix to echelon form:

1. Suppose the j1-th column is the first column with a non-zero entry. Interchange the
rows so that this non-zero entry appears in the first row, that is, so that a1j1 ≠ 0.
2. For each i > 1, apply the operation Ri → a1j1 Ri - aij1 R1.
3. Repeat steps 1 and 2 with the submatrix formed by all the rows excluding the
first. Continue the process until the matrix is in echelon form.
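A sketch of this procedure in Python (the function name to_echelon is my own; note that the step-2 operation keeps integer entries integer):

```python
def to_echelon(M):
    """Row reduce M (a list of rows) to echelon form using the
    step-2 operation R_i -> a_1j1 * R_i - a_ij1 * R_1."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Step 1: find a row at or below r with a non-zero entry in column c
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Step 2: zero out the entries below the pivot
        for i in range(r + 1, rows):
            p, e = M[r][c], M[i][c]
            M[i] = [p * M[i][j] - e * M[r][j] for j in range(cols)]
        # Step 3: continue with the submatrix below the pivot row
        r += 1
        if r == rows:
            break
    return M

print(to_echelon([[0, 1, 2],
                  [1, 2, 1],
                  [2, 4, 2]]))
# [[1, 2, 1], [0, 1, 2], [0, 0, 0]]
```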

The following examples are of matrices in echelon form:

The following examples are not in echelon form

Matrix A does not have all its zero rows below the non-zero rows.

Matrix B has a 1 in the 2nd position of the third row. For row echelon form, it needs
to be to the right of the leading coefficient above it; in other words, it should be in
the fourth position, in place of the 3.

Matrix C has a 2 as a leading coefficient instead of a 1.

Matrix D has a -1 as a leading coefficient instead of a 1.

Definition 4:
If A = (aij) ϵ Mm,n is a row echelon matrix, then the rank of A, ρ(A), equals the
number of non-zero rows in A.

Example:
Let A be the matrix formed by the vector set
V1 = {(0,1,0), (1,0,2), (2,0,-1)}.

i) Determine the rank of A.
ii) Determine whether the row vector set V1 is linearly dependent or linearly
independent.
iii) Find the maximum number of linearly independent row vectors in V1.

Solution:

i) The number of non-zero rows in the row echelon matrix is 3, so the rank of A = 3.
ii) There is no zero row in the row echelon matrix, so the row vector set V1 is
linearly independent.
iii) The maximum number of linearly independent row vectors in V1 = 3.
5.7 DIAGONALIZATION
Recall that a matrix D = (dij) ϵ Mn is called a diagonal matrix if dij = 0 for all i ≠ j.
That is,

Properties of diagonal matrices:

Let D and W be two diagonal matrices of order n, that is,

1. DW and WD are also diagonal matrices and DW = WD =

2. The determinant of

3. If all of the diagonal elements of D are nonzero (that is, ), then D is invertible
(or non-singular) and

4. The eigenvalues of D are its diagonal elements, that is, d11, …, dnn.

Of course, most square matrices are not diagonal, but some are related to
diagonal matrices in a way we will find useful in solving problems.

Definition 1:
Let A be an n-square matrix. Then A is diagonalizable if and only if there exists an
n-square invertible matrix P such that P-1AP is a diagonal matrix (that is,
P-1AP = D where D is a diagonal matrix). When such a matrix P exists, we say that
P diagonalizes A.

Example:

Let . There exists a matrix such that

Therefore, P diagonalizes A.

Definition 2:
Let A be an n-square matrix. Then A is diagonalizable if and only if A has n linearly
independent eigenvectors.

Definition 3:
n column vectors v1, v2, …, vn are linearly independent
↔ the matrix A = (v1, v2, …, vn) is invertible (non-singular) ↔

Procedure to determine a matrix P that diagonalizes A:

Example:
 1 4 
Let A   . Find a matrix P that diagonalizes A and find the diagonal matrix D such that
0 3

D=P-1AP.

Solution:
The characteristic equation is p    A  I2  0

1  4
A   I2  0
0 3
 2  2  3  0
    3   1  0    -1,3

Next, find eigenvectors ei corresponding to i , by solving the system  A  In  ei  0

 1   4  x1   0 
That is, solve    
 0 3    x2   0 

Case (i) λ1 = -1:

Augmented matrix:

So 4x2 = 0 → x2 = 0

The set of eigenvectors corresponding to λ1 = -1 is given by

When k = 1, (1, 0) is an eigenvector corresponding to λ1 = -1.

Case (ii): λ2 = 3

Augmented matrix is

The set of eigenvectors corresponding to λ2 = 3 is given by

When k = 1, (1, 1) is an eigenvector corresponding to λ2 = 3.

Let P = [ 1  1 ]
        [ 0  1 ],

the matrix whose columns are the two eigenvectors. Since |P| = 1 ≠ 0, P-1 exists.

Since P-1AP = [ -1  0 ]
              [  0  3 ],

therefore P diagonalizes the matrix A and D = P-1AP.
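Assuming the matrices in this example are A = (-1, 4; 0, 3) and P = (1, 1; 0, 1), as reconstructed from the eigenvalues -1 and 3 above, the diagonalization can be verified numerically:

```python
import numpy as np

A = np.array([[-1.0, 4.0],
              [ 0.0, 3.0]])
# Columns of P are the eigenvectors (1, 0) and (1, 1)
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([-1.0, 3.0]))   # D = diag(-1, 3)
```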

