Matrix Algebra
In Chapter 1 we discussed the basic steps involved in any finite element analysis. These
steps include discretizing the problem into elements and nodes, assuming a function
that represents behavior of an element, developing a set of equations for an element, assembling the elemental formulations to present the entire problem, and applying the
boundary conditions and loading. These steps lead to a set of linear (nonlinear for some
problems) algebraic equations that must be solved simultaneously. A good understanding
of matrix algebra is essential in the formulation and solution of finite element models. As
is the case with any topic, matrix algebra has its own terminology and follows a set of
rules. We provide an overview of matrix terminology and matrix algebra in this chapter.
The main topics discussed in Chapter 2 include
2.1 Basic Definitions
2.2 Matrix Addition or Subtraction
2.3 Matrix Multiplication
2.4 Partitioning of a Matrix
2.5 Transpose of a Matrix
2.6 Determinant of a Matrix
2.7 Solutions of Simultaneous Linear Equations
2.8 Inverse of a Matrix
2.9 Eigenvalues and Eigenvectors
2.1
BASIC DEFINITIONS
A matrix is an array of numbers or mathematical terms. The numbers or the mathematical terms that make up the matrix are called the elements of the matrix. The size of a matrix is defined by its number of rows and columns. A matrix may consist of a single row or a single column, or of many rows and many columns. Consider the following examples:
[T] = [ cos θ   sin θ    0       0
       -sin θ   cos θ    0       0
        0       0        cos θ   sin θ
        0       0       -sin θ   cos θ ]

{L} = { ∂f(x, y, z)/∂x
        ∂f(x, y, z)/∂y
        ∂f(x, y, z)/∂z }

[I] = [ ∫x dx    ∫y dy
        ∫x² dx   ∫y² dy ]

Matrix [N] is a 3 by 3 (or 3 × 3) matrix whose elements are numbers (for example, 5, 26, 8, and -5), [T] is a 4 × 4 matrix that has sine and cosine terms as its elements, {L} is a 3 × 1 matrix whose elements are partial derivatives, and [I] is a 2 × 2 matrix with integrals for its elements. [N], [T], and [I] are square matrices. A square matrix has the same number of rows and columns. An element of a matrix is denoted by its location. For example, the element in the first row and the third column of a matrix [B] is denoted by b13, and an element occurring in matrix [A] in row 2 and column 3 is denoted by a23. In this book, we denote a matrix by a bold-face letter in brackets [ ] or braces { }, for example, [K], [T], and {F}, and the elements of matrices are represented by regular lowercase letters. The { } notation is used to distinguish a column matrix.
Examples of column matrices include {A}, {X}, and

{L} = { ∂f(x, y, z)/∂x
        ∂f(x, y, z)/∂y
        ∂f(x, y, z)/∂z }

whereas [C] = [5 …] is an example of a row matrix.
A diagonal matrix is a square matrix whose nonzero elements lie only along the principal diagonal; for example,

[A] = [ a1  0   0   0
        0   a2  0   0
        0   0   a3  0
        0   0   0   a4 ]

The diagonal along which a1, a2, a3, and a4 lie is called the principal diagonal. An identity or unit matrix is a diagonal matrix whose principal-diagonal elements each have a value of 1. An example of an identity matrix follows.
[I] = [ 1 0 0 0 0
        0 1 0 0 0
        0 0 1 0 0
        0 0 0 1 0
        0 0 0 0 1 ]
A banded matrix is a matrix that has a band of nonzero elements parallel to its principal diagonal. As shown in the example that follows, all other elements outside the band
are zero.
[B] = [ b11 b12  0   0   0   0   0
        b21 b22 b23  0   0   0   0
         0  b32 b33 b34  0   0   0
         0   0  b43 b44 b45  0   0
         0   0   0  b54 b55 b56  0
         0   0   0   0  b65 b66 b67
         0   0   0   0   0  b76 b77 ]
An upper triangular matrix is a square matrix whose elements below the principal diagonal are all zero, whereas a lower triangular matrix has all zero elements above the principal diagonal. Examples of 4 × 4 upper and lower triangular matrices follow.

[U] = [ u11 u12 u13 u14
         0  u22 u23 u24
         0   0  u33 u34
         0   0   0  u44 ]

[L] = [ l11  0   0   0
        l21 l22  0   0
        l31 l32 l33  0
        l41 l42 l43 l44 ]
2.2
MATRIX ADDITION OR SUBTRACTION
Two matrices can be added together or subtracted from each other provided that they are of the same size; each matrix must have the same number of rows and columns. We can add matrix [A]m×n of dimension m by n to matrix [B]m×n of the same dimension by adding the like elements. Matrix subtraction follows a similar rule, as shown.
[A] ± [B] = [ a11 a12 … a1n        [ b11 b12 … b1n
              a21 a22 … a2n    ±     b21 b22 … b2n
               ⋮                      ⋮
              am1 am2 … amn ]        bm1 bm2 … bmn ]

          = [ (a11 ± b11)  (a12 ± b12)  …  (a1n ± b1n)
              (a21 ± b21)  (a22 ± b22)  …  (a2n ± b2n)
               ⋮
              (am1 ± bm1)  (am2 ± bm2)  …  (amn ± bmn) ]
The rule for matrix addition or subtraction can be generalized in the following manner. Let us denote the elements of matrix [A] by aij and the elements of matrix [B] by bij, where the row index i varies from 1 to m and the column index j varies from 1 to n. If we add matrix [A] to matrix [B] and denote the resulting matrix by [C], it follows that

cij = aij + bij    (2.1)
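As a quick numerical illustration, the element-by-element rule of Eq. (2.1) is exactly what array libraries implement. The sketch below uses Python with NumPy (an assumption of this example, not part of the text) on two arbitrary 2 × 2 matrices:

```python
import numpy as np

# Eq. (2.1): c_ij = a_ij + b_ij, applied element by element.
# The matrices here are arbitrary 2 x 2 examples.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

C = A + B   # element-by-element addition
D = A - B   # subtraction follows the identical rule

print(C)
```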
2.3
MATRIX MULTIPLICATION
In this section we discuss the rules for multiplying a matrix by a scalar quantity and by
another matrix.
When a matrix [A] is multiplied by a scalar quantity β, every element of the matrix is multiplied by β:

β[A] = β [ a11 a12 … a1n        [ βa11 βa12 … βa1n
           a21 a22 … a2n    =     βa21 βa22 … βa2n
            ⋮                      ⋮
           am1 am2 … amn ]        βam1 βam2 … βamn ]    (2.2)

Two matrices can be multiplied only when the number of columns of the premultiplying matrix equals the number of rows of the postmultiplying matrix:

[A]m×n [B]n×p = [C]m×p
      └──┘
    must match
[A][B] = [ a11 a12 … a1n   [ b11 b12 … b1p     [ c11 c12 … c1p
           a21 a22 … a2n     b21 b22 … b2p   =   c21 c22 … c2p
            ⋮                 ⋮                   ⋮
           am1 am2 … amn ]   bn1 bn2 … bnp ]     cm1 cm2 … cmp ]
where the elements in the first column of the [C] matrix are computed from

c11 = a11 b11 + a12 b21 + … + a1n bn1
c21 = a21 b11 + a22 b21 + … + a2n bn1
 ⋮
cm1 = am1 b11 + am2 b21 + … + amn bn1

the elements in the second column of the [C] matrix are

c12 = a11 b12 + a12 b22 + … + a1n bn2
 ⋮
cm2 = am1 b12 + am2 b22 + … + amn bn2

and similarly the elements in the other columns are computed, leading to the last column of the [C] matrix:

c1p = a11 b1p + a12 b2p + … + a1n bnp
 ⋮
cmp = am1 b1p + am2 b2p + … + amn bnp
The multiplication procedure that leads to the values of the elements in the [C] matrix may be represented in a compact summation form by

cij = Σ (k = 1 to n) aik bkj    (2.3)
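The summation of Eq. (2.3) can be coded directly as a triple loop. The sketch below (Python/NumPy, with arbitrary example matrices) implements the formula literally and checks it against NumPy's built-in product:

```python
import numpy as np

# Eq. (2.3) implemented literally: c_ij = sum over k of a_ik * b_kj.
def matmul(A, B):
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])       # 2 x 3
B = np.array([[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]])  # 3 x 2
assert np.allclose(matmul(A, B), A @ B)  # matches NumPy's built-in product
```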
When multiplying matrices, keep in mind the following rules. Matrix multiplication is not commutative, except for very special cases:

[A][B] ≠ [B][A]    (2.4)

Matrix multiplication is associative:

[A]([B][C]) = ([A][B])[C]    (2.5)

The distributive rule holds for matrix multiplication:

([A] + [B])[C] = [A][C] + [B][C]    (2.6)

or

[A]([B] + [C]) = [A][B] + [A][C]    (2.7)
For a square matrix, the matrix may be raised to an integer power n in the following manner:

[A]^n = [A][A] … [A]    (n times)    (2.8)

This may be a good place to point out that if [I] is an identity matrix and [A] is a square matrix of matching size, then it can readily be shown that [I][A] = [A][I] = [A]. See Example 2.1 for the proof.
EXAMPLE 2.1
Given the matrices

[A] = [ 0  5  0        [B] = [ 4  6 -2        {C} = { c1
        8  3  7                7  2  3                c2
        9 -2  9 ]              1  3 -4 ]              c3 }

compute:

a. [A] + [B] = ?
b. [A] - [B] = ?
c. 3[A] = ?
d. [A][B] = ?
e. [A]{C} = ?
f. [A]² = ?
g. Show that [I][A] = [A][I] = [A]
We will use the operation rules discussed in the preceding sections to answer these questions.
a. [A] + [B] = [ (0 + 4)   (5 + 6)    (0 + (-2))
                 (8 + 7)   (3 + 2)    (7 + 3)
                 (9 + 1)   (-2 + 3)   (9 + (-4)) ]
             = [  4  11  -2
                 15   5  10
                 10   1   5 ]

b. [A] - [B] = [ (0 - 4)   (5 - 6)    (0 - (-2))
                 (8 - 7)   (3 - 2)    (7 - 3)
                 (9 - 1)   (-2 - 3)   (9 - (-4)) ]
             = [ -4  -1   2
                  1   1   4
                  8  -5  13 ]

c. 3[A] = [ (3)(0)   (3)(5)    (3)(0)
            (3)(8)   (3)(3)    (3)(7)
            (3)(9)   (3)(-2)   (3)(9) ]
        = [  0  15   0
            24   9  21
            27  -6  27 ]

d. [A][B] = [ 0  5  0   [ 4  6 -2
              8  3  7     7  2  3
              9 -2  9 ]   1  3 -4 ]
          = [ 35  10   15
              60  75  -35
              31  77  -60 ]

e. [A]{C} = [ 0  5  0   { c1
              8  3  7     c2
              9 -2  9 ]   c3 }
          = {        5c2
              8c1 + 3c2 + 7c3
              9c1 - 2c2 + 9c3 }
f. [A]² = [A][A] = [ 0  5  0   [ 0  5  0
                     8  3  7     8  3  7
                     9 -2  9 ]   9 -2  9 ]
                 = [ 40  15  35
                     87  35  84
                     65  21  67 ]

g. [I][A] = [ 1 0 0   [ 0  5  0     [ 0  5  0
              0 1 0     8  3  7   =   8  3  7   = [A]
              0 0 1 ]   9 -2  9 ]     9 -2  9 ]

and

   [A][I] = [ 0  5  0   [ 1 0 0     [ 0  5  0
              8  3  7     0 1 0   =   8  3  7   = [A]
              9 -2  9 ]   0 0 1 ]     9 -2  9 ]
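The results of Example 2.1 can be verified numerically. The following sketch assumes Python with NumPy; only parts a, d, f, and g are checked here, since part e involves the column matrix {C}, whose elements are arbitrary:

```python
import numpy as np

# The matrices [A] and [B] of Example 2.1.
A = np.array([[0, 5, 0], [8, 3, 7], [9, -2, 9]])
B = np.array([[4, 6, -2], [7, 2, 3], [1, 3, -4]])

assert np.array_equal(A + B, [[4, 11, -2], [15, 5, 10], [10, 1, 5]])        # part a
assert np.array_equal(A @ B, [[35, 10, 15], [60, 75, -35], [31, 77, -60]])  # part d
assert np.array_equal(np.linalg.matrix_power(A, 2),
                      [[40, 15, 35], [87, 35, 84], [65, 21, 67]])           # part f
I = np.eye(3, dtype=int)
assert np.array_equal(I @ A, A) and np.array_equal(A @ I, A)                # part g
print("Example 2.1 results verified")
```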
2.4
PARTITIONING OF A MATRIX
Finite element formulation of complex problems typically involves relatively large sized
matrices. For these situations, when performing numerical analysis dealing with matrix
operations, it may be advantageous to partition the matrix and deal with a subset of
elements. The partitioned matrices require less computer memory to perform the operations. Traditionally, dashed horizontal and vertical lines are used to show how a matrix is partitioned. For example , we may partition matrix [A] into four smaller matrices
in the following manner:
[A] = [ a11 a12 a13 | a14 a15 a16
        a21 a22 a23 | a24 a25 a26
        ------------+------------
        a31 a32 a33 | a34 a35 a36
        a41 a42 a43 | a44 a45 a46
        a51 a52 a53 | a54 a55 a56 ]

[A] = [ A11  A12
        A21  A22 ]

where

[A11] = [ a11 a12 a13      [A12] = [ a14 a15 a16
          a21 a22 a23 ]              a24 a25 a26 ]

[A21] = [ a31 a32 a33      [A22] = [ a34 a35 a36
          a41 a42 a43                a44 a45 a46
          a51 a52 a53 ]              a54 a55 a56 ]
It is important to note that matrix [A] could have been partitioned in a number of other ways, and the way a matrix is partitioned defines the sizes of the submatrices.
Similarly, a matrix [B] of the same size may be partitioned in the same manner:

[B] = [ b11 b12 b13 | b14 b15 b16
        b21 b22 b23 | b24 b25 b26
        ------------+------------
        b31 b32 b33 | b34 b35 b36
        b41 b42 b43 | b44 b45 b46
        b51 b52 b53 | b54 b55 b56 ]

where

[B11] = [ b11 b12 b13      [B12] = [ b14 b15 b16
          b21 b22 b23 ]              b24 b25 b26 ]

[B21] = [ b31 b32 b33      [B22] = [ b34 b35 b36
          b41 b42 b43                b44 b45 b46
          b51 b52 b53 ]              b54 b55 b56 ]

Because [A] and [B] are partitioned in exactly the same way, their sum can be computed from the submatrices:

[A] + [B] = [ A11 + B11   A12 + B12
              A21 + B21   A22 + B22 ]
Next, consider a matrix [C] with six rows and three columns, partitioned between the third and the fourth rows and between the second and the third columns:

[C] = [ c11 c12 | c13
        c21 c22 | c23
        c31 c32 | c33
        --------+----
        c41 c42 | c43
        c51 c52 | c53
        c61 c62 | c63 ]

[C] = [ C11  C12
        C21  C22 ]

where

[C11] = [ c11 c12      [C12] = { c13
          c21 c22                c23
          c31 c32 ]              c33 }

[C21] = [ c41 c42      [C22] = { c43
          c51 c52                c53
          c61 c62 ]              c63 }
Next, consider premultiplying matrix [C] by matrix [A]. Let us refer to the result of this multiplication as matrix [D], of size 5 × 3. In addition to paying attention to the size requirement for matrix multiplication, to carry out the multiplication using partitioned matrices, the premultiplying and postmultiplying matrices must be partitioned in such a way that the resulting submatrices conform to the multiplication rule. That is, if we partition matrix [A] between the third and the fourth columns, then matrix [C] must be partitioned between the third and the fourth rows. However, the column partitioning of matrix [C] may be done arbitrarily, because regardless of how the columns are partitioned, the resulting submatrices will still conform to the multiplication rule. In other words, instead of partitioning matrix [C] between columns two and three, we could have partitioned the matrix between columns one and two and still carried out the multiplication using the resulting submatrices.
[A][C] = [D] = [ A11  A12   [ C11  C12     [ A11C11 + A12C21   A11C12 + A12C22     [ D11  D12
                 A21  A22 ]   C21  C22 ]  =  A21C11 + A22C21   A21C12 + A22C22 ]  =  D21  D22 ]

where [D11] = A11C11 + A12C21, and so on.
EXAMPLE 2.2
Consider a 5 × 6 matrix [A], partitioned between the second and the third rows and between the third and the fourth columns, and a 6 × 3 matrix [B], partitioned between the third and the fourth rows and between the second and the third columns so that the submatrices conform:

[A] = [ A11  A12        [B] = [ B11  B12
        A21  A22 ]              B21  B22 ]

The product [A][B] = [C] is then computed from the submatrices:

[C11] = [A11][B11] + [A12][B21]
[C12] = [A11][B12] + [A12][B22]
[C21] = [A21][B11] + [A22][B21]
[C22] = [A21][B12] + [A22][B22]

Here [C11] is 2 × 2, [C12] is 2 × 1, [C21] is 3 × 2, and [C22] is 3 × 1. Assembling these submatrices gives the complete 5 × 3 product matrix:

[C] = [  70  164 |  62
         73   80 |  43
        ---------+-----
        116  319 | 174
        111  207 | 179
        -29  185 | -105 ]
As explained earlier, the column partitioning of matrix [B] may be done arbitrarily because the resulting submatrices still conform to the multiplication rule. It is left as an exercise for you (see Problem 3 at the end of this chapter) to show that we could have partitioned matrix [B] between columns one and two and used the resulting submatrices to compute [A][B] = [C].
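The submatrix relations above are straightforward to check numerically. In the following sketch (Python/NumPy, with randomly generated matrices rather than those of Example 2.2), the block products reproduce the ordinary product exactly:

```python
import numpy as np

# Block (partitioned) multiplication: [C11] = [A11][B11] + [A12][B21], etc.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(5, 6))
B = rng.integers(-5, 5, size=(6, 3))

A11, A12 = A[:2, :3], A[:2, 3:]   # partition A after row 2, column 3
A21, A22 = A[2:, :3], A[2:, 3:]
B11, B12 = B[:3, :2], B[:3, 2:]   # B must be partitioned after row 3
B21, B22 = B[3:, :2], B[3:, 2:]   # to conform with A's column partition

C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])

assert np.array_equal(C, A @ B)   # identical to the unpartitioned product
```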
2.5
TRANSPOSE OF A MATRIX
As you will see in the following chapters, the finite element formulation lends itself to
situations wherein it is desirable to rearrange the rows of a matrix into the columns of
another matrix. To demonstrate this idea, let us go back and consider step 4 in Example 1.1. In step 4 we assembled the elemental stiffness matrices to obtain the global
stiffness matrix. You will recall that we constructed the stiffness matrix for each element with its position in the global stiffness matrix by inspection. Let's recall the stiffness matrix for element (1), which is shown here again for the sake of continuity and
convenience.
[K]^(1) = [  k1  -k1
            -k1   k1 ]

and, positioned in the global stiffness matrix,

[K]^(1G) = [  k1  -k1  0  0  0
             -k1   k1  0  0  0
              0    0   0  0  0
              0    0   0  0  0
              0    0   0  0  0 ]

Instead of putting together [K]^(1G) by inspection as we did, we could obtain [K]^(1G) using the following procedure:

[K]^(1G) = [A1]^T [K]^(1) [A1]    (2.9)
where

[A1] = [ 1 0 0 0 0
         0 1 0 0 0 ]

and

[A1]^T = [ 1 0
           0 1
           0 0
           0 0
           0 0 ]
[A1]^T, called the transpose of [A1], is obtained by taking the first and the second rows of [A1] and making them into the first and the second columns of the transpose matrix. It is easily verified that by carrying out the multiplication given by Eq. (2.9), we arrive at the same result that was obtained by inspection:

[K]^(1G) = [ 1 0
             0 1     [  k1  -k1    [ 1 0 0 0 0
             0 0       -k1   k1 ]    0 1 0 0 0 ]
             0 0
             0 0 ]

         = [  k1  -k1  0  0  0
             -k1   k1  0  0  0
              0    0   0  0  0
              0    0   0  0  0
              0    0   0  0  0 ]
Similarly, for element (2),

[K]^(2G) = [A2]^T [K]^(2) [A2]
where

[A2] = [ 0 1 0 0 0
         0 0 1 0 0 ]

and

[A2]^T = [ 0 0
           1 0
           0 1
           0 0
           0 0 ]

so that

[K]^(2G) = [A2]^T [  k2  -k2    [ 0 1 0 0 0
                    -k2   k2 ]    0 0 1 0 0 ]

         = [ 0   0    0   0  0
             0   k2  -k2  0  0
             0  -k2   k2  0  0
             0   0    0   0  0
             0   0    0   0  0 ]
As you have seen from the previous examples, we can use a positioning matrix, such as [A] and its transpose, to methodically create the global stiffness matrix for finite element models.
In general, to obtain the transpose of a matrix [B] of size m × n, the first row of the given matrix becomes the first column of [B]^T, the second row of [B] becomes the second column of [B]^T, and so on, leading to the mth row of [B] becoming the mth column of [B]^T, resulting in a matrix of size n × m. Clearly, if you take the transpose of [B]^T, you end up with [B]. That is,

([B]^T)^T = [B]    (2.10)
As you will see in succeeding chapters, in order to save space we write the solution matrices, which are column matrices, as row matrices using the transpose of the solution, which is another use for the transpose of a matrix. For example, we represent the displacement solution
{U} = { u1
        u2
        u3
        ⋮
        un }    by    {U}^T = [u1  u2  u3  …  un]
When performing matrix operations dealing with the transpose of matrices, the following identities hold:

([A] + [B])^T = [A]^T + [B]^T    (2.11)

([A][B])^T = [B]^T [A]^T    (2.12)

A square matrix is said to be symmetric when its elements are symmetric with respect to its principal diagonal. Note that for a symmetric matrix, element amn is equal to anm for all values of m and n. Therefore, for a symmetric matrix, [A] = [A]^T.
EXAMPLE 2.3
Given the matrices

[A] = [ 0  5  0        [B] = [ 4  6 -2
        8  3  7                7  2  3
        9 -2  9 ]              1  3 -4 ]

a. find [A]^T and [B]^T
b. verify that ([A] + [B])^T = [A]^T + [B]^T
c. verify that ([A][B])^T = [B]^T [A]^T

a. Interchanging the rows and columns of [A] gives

[A]^T = [ 0  8  9
          5  3 -2
          0  7  9 ]

Similarly,

[B]^T = [  4  7  1
           6  2  3
          -2  3 -4 ]

b. Verify that ([A] + [B])^T = [A]^T + [B]^T:

[A] + [B] = [  4  11  -2        ([A] + [B])^T = [  4  15  10
              15   5  10    so                    11   5   1
              10   1   5 ]                        -2  10   5 ]

whereas

[A]^T + [B]^T = [ 0  8  9     [  4  7  1     [  4  15  10
                  5  3 -2   +    6  2  3   =   11   5   1
                  0  7  9 ]     -2  3 -4 ]     -2  10   5 ]

Comparing the results, we see that the identity holds.

c. Verify that ([A][B])^T = [B]^T [A]^T:

[A][B] = [ 0  5  0   [ 4  6 -2     [ 35  10   15
           8  3  7     7  2  3   =   60  75  -35
           9 -2  9 ]   1  3 -4 ]     31  77  -60 ]

so

([A][B])^T = [ 35   60   31
               10   75   77
               15  -35  -60 ]

Alternatively,

[B]^T [A]^T = [  4  7  1   [ 0  8  9     [ 35   60   31
                 6  2  3     5  3 -2   =   10   75   77
                -2  3 -4 ]   0  7  9 ]     15  -35  -60 ]

Again, by comparing results we see that the given identity is true.
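The transpose identities can also be confirmed programmatically; a minimal check in Python/NumPy using the matrices of Example 2.3:

```python
import numpy as np

# Transpose identities checked on the matrices of Example 2.3:
# the transpose of a sum is the sum of the transposes, and the
# transpose of a product reverses the order of the factors.
A = np.array([[0, 5, 0], [8, 3, 7], [9, -2, 9]])
B = np.array([[4, 6, -2], [7, 2, 3], [1, 3, -4]])

assert np.array_equal((A + B).T, A.T + B.T)   # sum rule
assert np.array_equal((A @ B).T, B.T @ A.T)   # product (reverse-order) rule
assert np.array_equal(A.T.T, A)               # transposing twice recovers A
```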
2.6
DETERMINANT OF A MATRIX
Up to this point we have defined essential matrix terminology and discussed basic matrix operations. In this section we define what is meant by a determinant of a matrix. As
you will see in the succeeding sections, determinant of a matrix is used in solving a set
of simultaneous equations, obtaining the inverse of a matrix, and forming the characteristic equations for a dynamic problem (eigenvalue problem).
Let us consider the solution to the following set of simultaneous equations:
a11 x1 + a12 x2 = b1    (2.13a)
a21 x1 + a22 x2 = b2    (2.13b)

or, in matrix form,

[A]{X} = {B}    (2.14)
To solve for the unknowns x1 and x2, we may first solve for x2 in terms of x1 using Eq. (2.13b), and then substitute that relationship into Eq. (2.13a). These steps lead to

x1 = (b1 a22 - a12 b2)/(a11 a22 - a12 a21)    (2.15a)

After we substitute for x1 in either Eq. (2.13a) or (2.13b), we get

x2 = (a11 b2 - a21 b1)/(a11 a22 - a12 a21)    (2.15b)
Referring to the solutions given by Eqs. (2.15a) and (2.15b), we see that the denominators in these equations represent the product of the coefficients in the main diagonal minus the product of the coefficients in the other diagonal of the [A] matrix. The quantity a11 a22 - a12 a21 is the determinant of the 2 × 2 matrix [A] and is represented by

det[A] = | a11  a12 | = a11 a22 - a12 a21    (2.16)
         | a21  a22 |

Only the determinant of a square matrix is defined. Moreover, keep in mind that the determinant of the [A] matrix is a single number. That is, after we substitute for the values
of a11, a12, a21, and a22 into a11 a22 - a12 a21, we get a single number. In general, the determinant of a matrix is a single value. However, as you will see later, for dynamic problems the determinant of the matrix resulting from the equations of motion is a polynomial expression.
Cramer's rule is a numerical technique that can be used to obtain solutions to a relatively small set of equations similar to the previous example. Using Cramer's rule, we can represent the solutions to the set of simultaneous equations given by Eqs. (2.13a) and (2.13b) with the following determinants:

     | b1  a12 |              | a11  b1 |
     | b2  a22 |              | a21  b2 |
x1 = -----------    and   x2 = -----------    (2.17)
     | a11 a12 |              | a11 a12 |
     | a21 a22 |              | a21 a22 |
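Cramer's rule from Eq. (2.17) translates directly into code. The sketch below (Python/NumPy, with arbitrary 2 × 2 coefficients) forms each numerator determinant by replacing a column of [A] with {b}:

```python
import numpy as np

# Cramer's rule for a 2 x 2 system: each unknown is a ratio of determinants.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])

det_A = np.linalg.det(A)
x1 = np.linalg.det(np.column_stack((b, A[:, 1]))) / det_A  # replace column 1 with b
x2 = np.linalg.det(np.column_stack((A[:, 0], b))) / det_A  # replace column 2 with b

assert np.allclose([x1, x2], np.linalg.solve(A, b))  # agrees with direct solve
```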
For a 3 × 3 matrix [C], the determinant is computed from

         | c11 c12 c13 |
det[C] = | c21 c22 c23 | = c11c22c33 + c12c23c31 + c13c21c32 - c13c22c31 - c11c23c32 - c12c21c33    (2.18)
         | c31 c32 c33 |
There is a simple procedure called direct expansion that you can use to obtain the result given by Eq. (2.18). Direct expansion proceeds in the following manner. First, we repeat and place the first and the second columns of the matrix [C] next to the third column, as shown in Figure 2.1. Then we add the products of the elements lying along the solid arrows and subtract from them the products of the elements lying along the dashed arrows. This procedure, shown in Figure 2.1, results in the determinant value given by Eq. (2.18).
The direct expansion procedure cannot be used to obtain higher-order determinants. Instead, we resort to a method that first reduces the order of the determinant (to what is called a minor) and then evaluates the lower-order determinants. To demonstrate this method, let's consider the right-hand side of Eq. (2.18) and factor out c11, -c12, and c13:

c11c22c33 + c12c23c31 + c13c21c32 - c13c22c31 - c11c23c32 - c12c21c33
    = c11(c22c33 - c23c32) - c12(c21c33 - c23c31) + c13(c21c32 - c22c31)
As you can see, the expressions in the parentheses represent the determinants of reduced 2 × 2 matrices. Thus, we can express the determinant of the given 3 × 3 matrix in terms of the determinants of its minors:

det[C] = c11 | c22 c23 |  -  c12 | c21 c23 |  +  c13 | c21 c22 |
             | c32 c33 |         | c31 c33 |         | c31 c32 |
EXAMPLE 2.4
Given

[A] = [ 1  5  0
        8  3  7
        6 -2  9 ]

calculate

a. the determinant of [A]
b. the determinant of [A]^T

a. For this example, we use both the direct expansion and the minor methods to compute the determinant of [A]. As explained earlier, using the direct expansion method, we repeat and place the first and the second columns of the matrix next to the third column, compute the products of the elements along the solid arrows, and then subtract from them the products of the elements along the dashed arrows:

| 1  5  0 | 1  5
| 8  3  7 | 8  3
| 6 -2  9 | 6 -2

det[A] = (1)(3)(9) + (5)(7)(6) + (0)(8)(-2) - (0)(3)(6) - (1)(7)(-2) - (5)(8)(9)
       = 27 + 210 + 0 - 0 + 14 - 360 = -109

Next, we use the minors to compute the determinant of [A]. For this example, we expand along the first row, eliminating in turn the first, second, and third columns:

det[A] = (1) |  3  7 |  -  (5) | 8  7 |  +  (0) | 8  3 |
             | -2  9 |         | 6  9 |         | 6 -2 |
       = (1)[(3)(9) - (7)(-2)] - (5)[(8)(9) - (7)(6)] + 0 = 41 - 150 = -109

b. As already mentioned, the determinant of [A]^T is equal to the determinant of [A]. Therefore, there is no need to perform any additional calculations. However, as a means of verifying this identity, we will compute the determinant of [A]^T and compare it to the determinant of [A]. Recall that [A]^T is obtained by turning the first, second, and third rows of [A] into the first, second, and third columns:

[A]^T = [ 1  8  6
          5  3 -2
          0  7  9 ]

Expanding along the second column,

det[A]^T = -(8)[(5)(9) - (0)(-2)] + (3)[(1)(9) - (0)(6)] - (7)[(1)(-2) - (5)(6)]
         = -360 + 27 + 224 = -109

As expected, det[A]^T = det[A].
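The determinant calculation of Example 2.4 can be reproduced programmatically. The sketch below (Python/NumPy) performs the cofactor (minor) expansion along the first row and compares it with the library routine:

```python
import numpy as np

# Determinant of the 3 x 3 matrix of Example 2.4 by cofactor expansion
# along the first row, checked against np.linalg.det.
A = np.array([[1.0, 5.0, 0.0], [8.0, 3.0, 7.0], [6.0, -2.0, 9.0]])

def det2(m):  # determinant of a 2 x 2 matrix, Eq. (2.16)
    return m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]

# minor j: delete row 0 and column j
minors = [np.delete(np.delete(A, 0, axis=0), j, axis=1) for j in range(3)]
det_A = sum((-1) ** j * A[0, j] * det2(minors[j]) for j in range(3))

assert det_A == -109
assert np.isclose(np.linalg.det(A), -109)
```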
When the determinant of a matrix is zero, the matrix is called singular. A singular matrix results when the elements in two or more rows of a given matrix are identical. For example, consider the following matrix:

[A] = [ 2 1 4
        2 1 4
        1 3 5 ]

whose rows one and two are identical. As shown below, the determinant of [A] is zero:

det[A] = 2[(1)(5) - (4)(3)] - 1[(2)(5) - (4)(1)] + 4[(2)(3) - (1)(1)] = -14 - 6 + 20 = 0

Matrix singularity can also occur when the elements in two or more rows of a matrix are linearly dependent. For example, if we multiply the elements of the second row of matrix [A] by a scalar factor such as 7, then the resulting matrix,

[A] = [  2 1  4
        14 7 28
         1 3  5 ]
is singular because rows one and two are now linearly dependent. As shown below, the determinant of the new [A] matrix is zero:

det[A] = 2[(7)(5) - (28)(3)] - 1[(14)(5) - (28)(1)] + 4[(14)(3) - (7)(1)] = -98 - 42 + 140 = 0
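The singularity of a matrix with linearly dependent rows is easy to confirm numerically; a short check in Python/NumPy:

```python
import numpy as np

# Rows one and two are linearly dependent (row 2 = 7 * row 1), so the
# matrix is singular: its determinant is zero and it has no inverse.
A = np.array([[2.0, 1.0, 4.0], [14.0, 7.0, 28.0], [1.0, 3.0, 5.0]])

assert np.isclose(np.linalg.det(A), 0.0)
```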
2.7
SOLUTIONS OF SIMULTANEOUS LINEAR EQUATIONS
As you saw in Chapter 1, the finite element formulation leads to a system of algebraic
equations. Recall that for Example 1.1, the bar with a variable cross section supporting
a load, the finite element approximation and the application of the boundary condition
and the load resulted in a set of four linear equations:
10³ [ 1820  -845     0     0   { u2       { 0
      -845  1560  -715     0     u3         0
         0  -715  1300  -585     u4    =    0
         0     0  -585   585 ]   u5 }       10³ }
In the sections that follow we discuss two methods that you can use to obtain solutions to a set of linear equations.
Gauss Elimination Method
To demonstrate the Gauss elimination method, consider the following three equations with three unknowns:

2x1 + x2 + x3 = 13    (2.19a)
3x1 + 2x2 + 4x3 = 32    (2.19b)
5x1 - x2 + 3x3 = 17    (2.19c)

1. We begin by dividing Eq. (2.19a) by 2, the coefficient of x1, which gives

x1 + (1/2)x2 + (1/2)x3 = 13/2    (2.20)

2. We multiply Eq. (2.20) by 3, the coefficient of x1 in Eq. (2.19b):

3x1 + (3/2)x2 + (3/2)x3 = 39/2    (2.21)
We then subtract Eq. (2.21) from Eq. (2.19b). This step eliminates x1 from Eq. (2.19b). The operation leads to

   3x1 + 2x2 + 4x3 = 32
 -(3x1 + (3/2)x2 + (3/2)x3 = 39/2)
 ---------------------------------
         (1/2)x2 + (5/2)x3 = 25/2    (2.22)
3. Similarly, to eliminate x1 from Eq. (2.19c), we multiply Eq. (2.20) by 5, the coefficient of x1 in Eq. (2.19c):

5x1 + (5/2)x2 + (5/2)x3 = 65/2    (2.23)

We then subtract the above equation from Eq. (2.19c), which eliminates x1 from Eq. (2.19c). The operation leads to

   5x1 - x2 + 3x3 = 17
 -(5x1 + (5/2)x2 + (5/2)x3 = 65/2)
 ---------------------------------
        -(7/2)x2 + (1/2)x3 = -31/2    (2.24)
Let us summarize the results of the operations performed during steps 1 through 3. These operations eliminated x1 from Eqs. (2.19b) and (2.19c):

x1 + (1/2)x2 + (1/2)x3 = 13/2    (2.25a)
     (1/2)x2 + (5/2)x3 = 25/2    (2.25b)
    -(7/2)x2 + (1/2)x3 = -31/2    (2.25c)
4. To eliminate x2 from Eq. (2.25c), first we divide Eq. (2.25b) by 1/2, the coefficient of x2:

x2 + 5x3 = 25    (2.26)

Then we multiply Eq. (2.26) by -7/2, the coefficient of x2 in Eq. (2.25c), and subtract that equation from Eq. (2.25c). These operations lead to

   -(7/2)x2 + (1/2)x3 = -31/2
 -(-(7/2)x2 - (35/2)x3 = -175/2)
 -------------------------------
                 18x3 = 72    (2.27)
Dividing both sides of Eq. (2.27) by 18 gives x3 = 4, and the original system of equations is reduced to the upper triangular form

x1 + (1/2)x2 + (1/2)x3 = 13/2    (2.28)
x2 + 5x3 = 25    (2.29)
x3 = 4    (2.30)

Note that Eqs. (2.28) and (2.29) are the same as Eqs. (2.25a) and (2.26), which are renumbered for convenience. Now we can use back substitution to compute the values of x2 and x1. We substitute for x3 in Eq. (2.29) and solve for x2:

x2 + 5(4) = 25, so x2 = 5

Similarly, substituting for x3 and x2 in Eq. (2.28) and solving for x1,

x1 + (1/2)(5) + (1/2)(4) = 13/2, so x1 = 2
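The elimination and back-substitution steps above generalize to any n. The following is a minimal sketch of naive Gauss elimination in Python/NumPy, applied to the system of Eqs. (2.19a) through (2.19c); it omits pivoting, which practical implementations require:

```python
import numpy as np

# Naive Gauss elimination with back substitution (no pivoting).
A = np.array([[2.0, 1.0, 1.0], [3.0, 2.0, 4.0], [5.0, -1.0, 3.0]])
b = np.array([13.0, 32.0, 17.0])
n = len(b)

for k in range(n - 1):              # forward elimination
    for i in range(k + 1, n):
        factor = A[i, k] / A[k, k]
        A[i, k:] -= factor * A[k, k:]
        b[i] -= factor * b[k]

x = np.zeros(n)
for i in range(n - 1, -1, -1):      # back substitution
    x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

assert np.allclose(x, [2.0, 5.0, 4.0])   # matches the hand calculation
```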
LU Decomposition Method

When designing structures, it is often necessary to change the load, and consequently the load matrix, to determine its effect on the resulting displacements and stresses. Some heat transfer analyses also require experimenting with the heat load to reach the desired temperature distribution within the medium. The Gauss elimination method requires full implementation of the coefficient matrix (the stiffness or the conductance matrix) and the right-hand-side matrix (the load matrix) in order to solve for the unknown displacements (or temperatures). When using Gauss elimination, the entire process must be repeated each time a change in the load matrix is made. Whereas Gauss elimination is not well suited for such situations, the LU method handles changes in the load matrix much more efficiently. The LU method consists of two major parts: a decomposition part and a solution part. We explain the LU method using the following three equations:
a11 x1 + a12 x2 + a13 x3 = b1    (2.31)
a21 x1 + a22 x2 + a23 x3 = b2    (2.32)
a31 x1 + a32 x2 + a33 x3 = b3    (2.33)

or, in matrix form,

[A]{x} = {b}    (2.34)
Decomposition Part  The main idea behind the LU method is first to decompose the coefficient matrix [A] into lower and upper triangular matrices

[L] = [ 1    0   0        [U] = [ u11 u12 u13
        l21  1   0                 0  u22 u23
        l31 l32  1 ]               0   0  u33 ]

such that

[ a11 a12 a13     [ 1    0   0   [ u11 u12 u13
  a21 a22 a23  =    l21  1   0      0  u22 u23     (2.35)
  a31 a32 a33 ]     l31 l32  1 ]    0   0  u33 ]
Carrying out the multiplication [L][U], Eq. (2.35) becomes

[ a11 a12 a13     [ u11      u12              u13
  a21 a22 a23  =    l21u11   l21u12 + u22     l21u13 + u23               (2.36)
  a31 a32 a33 ]     l31u11   l31u12 + l32u22  l31u13 + l32u23 + u33 ]
Now let us compare the elements in the first row of the [A] matrix in Eq. (2.36) to the elements in the first row of the [L][U] multiplication results. From this comparison we can see the following relationships:

u11 = a11    u12 = a12    u13 = a13
Now, by comparing the elements in the first column of the [A] matrix in Eq. (2.36) to the elements in the first column of the [L][U] product, we can obtain the values of l21 and l31:

l21 u11 = a21, so l21 = a21/u11 = a21/a11    (2.37)
l31 u11 = a31, so l31 = a31/u11 = a31/a11    (2.38)
Note that the value of u11 was determined in the previous step; that is, u11 = a11. We can obtain the values of u22 and u23 by comparing the elements in the second rows of the matrices in Eq. (2.36):

l21 u12 + u22 = a22, so u22 = a22 - l21 u12    (2.39)
l21 u13 + u23 = a23, so u23 = a23 - l21 u13    (2.40)
When examining Eqs. (2.39) and (2.40), remember that the values of l21, u12, and u13 are known from previous steps. Now we compare the elements in the second columns of
Eq. (2.36). This comparison leads to the value of l32. Note that we already know the values of the other terms from previous steps:

l32 = (a32 - l31 u12)/u22    (2.41)

Finally, the comparison of the elements in the third rows leads to the value of u33:

u33 = a33 - (l31 u13 + l32 u23)    (2.42)
We used a simple 3 X 3 matrix to show how the LU decomposition is performed.
We can now generalize the scheme for a square matrix of any size n in the following
manner:
Step 1. The values of the elements in the first row of the [U] matrix are obtained from

u1j = a1j    for j = 1 to n    (2.43)
Step 2. The unknown values of the elements in the first column of the [L] matrix are obtained from

li1 = ai1/u11    for i = 2 to n    (2.44)
Step 3. The unknown values of the elements in the second row of the [U] matrix are computed from

u2j = a2j - l21 u1j    for j = 2 to n    (2.45)
Step 4. The values of the elements in the second column of the [L] matrix are calculated from

li2 = (ai2 - li1 u12)/u22    for i = 3 to n    (2.46)
Next, we determine the unknown values of the elements in the third row of the [U] matrix and the third column of [L]. By now you should see a clear pattern: we evaluate the values of the elements in a row first and then switch to a column. This procedure is repeated until all the unknown elements are computed. We can generalize the above steps in the following way. To obtain the values of the elements in the kth row of the [U] matrix, we use
ukj = akj - Σ (p = 1 to k-1) lkp upj    for j = k to n    (2.47)
We then switch to the kth column of [L] and determine the unknown values in that column:

lik = ( aik - Σ (p = 1 to k-1) lip upk ) / ukk    for i = k + 1 to n    (2.48)
Solution Part  So far you have seen how to decompose a square coefficient matrix [A] into lower and upper triangular [L] and [U] matrices. Next, we use the [L]
and the [U] matrices to solve a set of linear equations. Let's turn our attention back to
the three equations and three unknowns example and replace the coefficient matrix
[A] with the [L] and [U] matrices:
[A]{x} = {b}    (2.49)
[L][U]{x} = {b}    (2.50)

We now replace the product [U]{x} by a column matrix {z} such that

[U]{x} = {z}    (2.51)
[L]{z} = {b}    (2.52)
Because [L] is a lower triangular matrix, we can easily solve for the values of the
elements in the {z} matrix, and then use the known values of the {z} matrix to solve for
the unknowns in the {x} from the relationship [U] {x} = {z}. These steps are demonstrated next.
[ 1    0   0   { z1       { b1
  l21  1   0     z2    =    b2      (2.53)
  l31 l32  1 ]   z3 }       b3 }

z1 = b1    (2.54)
z2 = b2 - l21 z1    (2.55)
z3 = b3 - l31 z1 - l32 z2    (2.56)
Now that the values of the elements in the {z} matrix are known , we can solve for the
unknown matrix {x} using
[ u11 u12 u13   { x1       { z1
   0  u22 u23     x2    =    z2      (2.57)
   0   0  u33 ]   x3 }       z3 }

x3 = z3/u33    (2.58)
x2 = (z2 - u23 x3)/u22    (2.59)
x1 = (z1 - u12 x2 - u13 x3)/u11    (2.60)
Here we used a simple system of three equations with three unknowns to demonstrate how best to proceed to obtain solutions; we can now generalize the scheme to obtain the solutions for a set of n equations with n unknowns.
z1 = b1    and    zi = bi - Σ (j = 1 to i-1) lij zj    for i = 2 to n    (2.61)

xn = zn/unn    and    xi = ( zi - Σ (j = i+1 to n) uij xj ) / uii    for i = n-1, n-2, n-3, …, 3, 2, 1    (2.62)
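Steps (2.43) through (2.48), together with Eqs. (2.61) and (2.62), can be collected into one routine. The following Python/NumPy sketch implements the decomposition and solution parts and checks them on the system used earlier in this section:

```python
import numpy as np

# Doolittle LU decomposition (unit diagonal in [L]) with forward and
# back substitution, following the row/column pattern described above.
def lu_solve(A, b):
    n = len(b)
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        for j in range(k, n):                 # k-th row of U
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        for i in range(k + 1, n):             # k-th column of L
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    z = np.zeros(n)
    for i in range(n):                        # forward substitution
        z[i] = b[i] - L[i, :i] @ z[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return L, U, x

A = np.array([[2.0, 1.0, 1.0], [3.0, 2.0, 4.0], [5.0, -1.0, 3.0]])
L, U, x = lu_solve(A, np.array([13.0, 32.0, 17.0]))
assert np.allclose(L @ U, A)            # decomposition recovers [A]
assert np.allclose(x, [2.0, 5.0, 4.0])  # same solution as Gauss elimination
```

Once [L] and [U] are known, a new load vector {b} only requires repeating the two substitution loops, which is the efficiency advantage described above.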
Next, we apply the LU method to the set of equations that we used to demonstrate
the Gauss elimination method.
EXAMPLE 2.5
Apply the LU decomposition method to the following set of three equations with three unknowns:

2x1 + x2 + x3 = 13
3x1 + 2x2 + 4x3 = 32
5x1 - x2 + 3x3 = 17

For this system,

[A] = [ 2  1  1        {b} = { 13
        3  2  4                32
        5 -1  3 ]              17 }

and n = 3.
Decomposition Part

Step 1. The values of the elements in the first row of the [U] matrix are obtained from

u1j = a1j    for j = 1 to n
u11 = a11 = 2    u12 = a12 = 1    u13 = a13 = 1

Step 2. The unknown values of the elements in the first column of the [L] matrix are obtained from

li1 = ai1/u11    for i = 2 to n
l21 = a21/u11 = 3/2    l31 = a31/u11 = 5/2

Step 3. The unknown values of the elements in the second row of the [U] matrix are computed from

u2j = a2j - l21 u1j    for j = 2 to n
u22 = a22 - l21 u12 = 2 - (3/2)(1) = 1/2
u23 = a23 - l21 u13 = 4 - (3/2)(1) = 5/2
Step 4. The unknown values of the elements in the second column of the [L] matrix are determined from

li2 = (ai2 - li1 u12)/u22    for i = 3 to n
l32 = (a32 - l31 u12)/u22 = (-1 - (5/2)(1))/(1/2) = -7

Step 5. Compute the remaining unknown elements in the [U] and [L] matrices:

ukj = akj - Σ (p = 1 to k-1) lkp upj    for j = k to n
u33 = a33 - (l31 u13 + l32 u23) = 3 - ((5/2)(1) + (-7)(5/2)) = 18

Because of the size of this problem (n = 3) and the fact that the elements along the main diagonal of the [L] matrix have values of 1 (that is, l33 = 1), we do not need to proceed any further. Therefore, the application of the last step,

lik = ( aik - Σ (p = 1 to k-1) lip upk ) / ukk    for i = k + 1 to n

is omitted. We have now decomposed the coefficient matrix [A] into the following lower and upper triangular [L] and [U] matrices:

[L] = [ 1    0  0        [U] = [ 2  1    1
        3/2  1  0                0  1/2  5/2
        5/2 -7  1 ]              0  0    18 ]

When performing this method by hand, here is a good place to check the decomposition results by multiplying the [L] matrix by the [U] matrix to see whether the [A] matrix is recovered. We now proceed with the solution phase of the LU method, Eq. (2.61).
Solution Part
z1 = b1    and    zi = bi - Σ (j = 1 to i-1) lij zj    for i = 2 to n

z1 = b1 = 13
z2 = b2 - l21 z1 = 32 - (3/2)(13) = 25/2
z3 = b3 - (l31 z1 + l32 z2) = 17 - ((5/2)(13) + (-7)(25/2)) = 72
xn = zn/unn    and    xi = ( zi - Σ (j = i+1 to n) uij xj ) / uii    for i = n-1, …, 1

x3 = z3/u33 = 72/18 = 4
x2 = (z2 - u23 x3)/u22 = (25/2 - (5/2)(4))/(1/2) = 5
x1 = (z1 - u12 x2 - u13 x3)/u11 = (13 - (1)(5) - (1)(4))/2 = 2

2.8
INVERSE OF A MATRIX
In the previous sections we discussed matrix addition, subtraction, and multiplication, but you may have noticed that we did not say anything about matrix division. That is because such an operation is not formally defined. Instead, we define the inverse of a matrix in such a way that when it is multiplied by the original matrix, the identity matrix is obtained:
[A][A]⁻¹ = [A]⁻¹[A] = [I]    (2.63)

In Eq. (2.63), [A]⁻¹ is called the inverse of [A]. Only a square and nonsingular matrix has an inverse. In Section 2.7 we explained the Gauss elimination and LU methods that you can use to obtain solutions to a set of linear equations. Matrix inversion allows for yet another way of solving for the solutions of a set of linear equations. Once again, recall from our discussion in Chapter 1 that the finite element formulation of an engineering problem leads to a set of linear equations, and the solution of these equations renders the nodal values. For instance, the formulation of the problem in Example 1.1 led to the set of linear equations given by
[K]{u} = {F}    (2.64)

To obtain the nodal displacement values {u}, we premultiply Eq. (2.64) by [K]⁻¹, which leads to

[K]⁻¹[K]{u} = [K]⁻¹{F}    (2.65)
[I]{u} = [K]⁻¹{F}    (2.66)

and noting that [I]{u} = {u},

{u} = [K]⁻¹{F}    (2.67)
From the matrix relationship given by Eq. (2.67), you can see that the nodal solutions can be obtained easily, provided the value of [K]⁻¹ is known. This example shows the important role of the inverse of a matrix in obtaining the solution to a set of linear equations. Now that you see why the inverse of a matrix is important, the next question is, how do we compute the inverse of a square and nonsingular matrix? There are a number of established methods that we can use to determine the inverse of a matrix. Here we discuss a procedure based on the LU decomposition method. Let us refer back to the relationship given by Eq. (2.63) and decompose matrix [A] into lower and upper triangular [L] and [U] matrices:
[L][U][A]⁻¹ = [I]    (2.68)

Next, we represent the product [U][A]⁻¹ by another matrix, say matrix [Y]:

[U][A]⁻¹ = [Y]    (2.69)

and substitute for [U][A]⁻¹ in terms of [Y] in Eq. (2.68), which leads to

[L][Y] = [I]    (2.70)

We then use the relationship given by Eq. (2.70) to solve for the unknown values of the elements in matrix [Y], and then use Eq. (2.69) to solve for the values of the elements in matrix [A]⁻¹. These steps are demonstrated in Example 2.6.
EXAMPLE 2.6
Given

[A] = [ 2  1  1
        3  2  4
        5 -1  3 ]

compute [A]⁻¹.

Step 1. Decompose the given matrix into lower and upper triangular matrices. In Example 2.5 we showed the procedure for decomposing this [A] matrix into

[L] = [ 1    0  0        [U] = [ 2  1    1
        3/2  1  0                0  1/2  5/2
        5/2 -7  1 ]              0  0    18 ]
Step 2. Use Eq. (2.70), [L][Y] = [I], to determine the unknown values of the elements in the [Y] matrix:

[ 1    0  0   [ y11 y12 y13     [ 1 0 0
  3/2  1  0     y21 y22 y23  =    0 1 0
  5/2 -7  1 ]   y31 y32 y33 ]     0 0 1 ]
First, let us consider the multiplication results pertaining to the first column of the [Y] matrix:

y11 = 1
(3/2)y11 + y21 = 0, so y21 = -3/2
(5/2)y11 - 7y21 + y31 = 0, so y31 = -13

Next, consider the multiplication results pertaining to the second column of [Y]:

y12 = 0
(3/2)y12 + y22 = 1, so y22 = 1
(5/2)y12 - 7y22 + y32 = 0, so y32 = 7

Similarly, solving for the unknown values of the elements in the remaining column of the [Y] matrix gives

y13 = 0    y23 = 0    y33 = 1
Now that the values of the elements of [Y] are known, we can proceed with the calculation of the values of the elements comprising [A]⁻¹, denoted by x11, x12, and so on, as shown. Using the relationship given by Eq. (2.69), we have

[U][A]⁻¹ = [Y]

[ 2  1    1     [ x11 x12 x13     [  1    0  0
  0  1/2  5/2     x21 x22 x23  =    -3/2  1  0
  0  0    18 ]    x31 x32 x33 ]    -13    7  1 ]
Again, we consider the multiplication results pertaining to one column at a time. Considering the first column of the [A]⁻¹ matrix,

18x31 = -13, so x31 = -13/18
(1/2)x21 + (5/2)x31 = -3/2, so x21 = 11/18
2x11 + x21 + x31 = 1, so x11 = 10/18

For the second column,

18x32 = 7, so x32 = 7/18
(1/2)x22 + (5/2)x32 = 1, so x22 = 1/18
2x12 + x22 + x32 = 0, so x12 = -4/18

and for the third column,

18x33 = 1, so x33 = 1/18
(1/2)x23 + (5/2)x33 = 0, so x23 = -5/18
2x13 + x23 + x33 = 0, so x13 = 2/18
Therefore,

[A]⁻¹ = (1/18) [ 10  -4   2
                 11   1  -5
                -13   7   1 ]

You can verify the result by checking that

[A]⁻¹[A] = (1/18) [ 10  -4   2   [ 2  1  1     [ 1 0 0
                    11   1  -5     3  2  4  =    0 1 0    = [I]
                   -13   7   1 ]   5 -1  3 ]     0 0 1 ]
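The same inverse can be obtained column by column without hand computation: each column of [A]⁻¹ is the solution of [A]{x} = {e_j}, where {e_j} is a column of the identity matrix. A Python/NumPy sketch using the matrix of Example 2.6:

```python
import numpy as np

# Each column of the inverse solves [A]{x_j} = {e_j}, the j-th
# column of the identity matrix.
A = np.array([[2.0, 1.0, 1.0], [3.0, 2.0, 4.0], [5.0, -1.0, 3.0]])
n = A.shape[0]

A_inv = np.column_stack([np.linalg.solve(A, e) for e in np.eye(n).T])

expected = np.array([[10, -4, 2], [11, 1, -5], [-13, 7, 1]]) / 18.0
assert np.allclose(A_inv, expected)        # matches the hand-computed inverse
assert np.allclose(A_inv @ A, np.eye(n))   # [A]^-1 [A] = [I]
```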
Finally, it is worth noting that the inverse of a diagonal matrix is computed simply by inverting its elements. That is, the inverse of a diagonal matrix is also a diagonal matrix,
with its elements being the reciprocals of the elements of the original matrix. For example, the inverse of the 4 × 4 diagonal matrix

[A] = [ a1  0   0   0
        0   a2  0   0
        0   0   a3  0
        0   0   0   a4 ]

is

[A]⁻¹ = [ 1/a1  0     0     0
          0     1/a2  0     0
          0     0     1/a3  0
          0     0     0     1/a4 ]

because

[A]⁻¹[A] = [ 1/a1  0     0     0      [ a1  0   0   0     [ 1 0 0 0
             0     1/a2  0     0        0   a2  0   0       0 1 0 0
             0     0     1/a3  0        0   0   a3  0   =   0 0 1 0    = [I]
             0     0     0     1/a4 ]   0   0   0   a4 ]    0 0 0 1 ]
2.9
EIGENVALUES AND EIGENVECTORS

Recall the general matrix form of a set of linear equations,

[A]{x} = {b}    (2.71)
For the sets of linear equations that we have considered so far, the values of the elements of the {b} matrix were typically nonzero. This type of system of linear equations is commonly referred to as nonhomogeneous. For a nonhomogeneous system, unique solutions exist as long as the determinant of the coefficient matrix [A] is nonzero. We now discuss the type of problems that render a set of linear equations of the form
[A]{X} - λ{X} = 0    (2.72)
This type of problem, called an eigenvalue problem, occurs in analysis of buckling problems, vibration of elastic structures, and electrical systems. In general, this class of problems has nonunique solutions. That is, we can establish relationships among the
unknowns, and many values can satisfy these relationships. It is common practice to
write Eq. (2.72) as
[[A] - λ[I]]{X} = 0    (2.73)
where [I] is the identity matrix having the same dimension as the [A] matrix. In Eq. (2.73), the unknown matrix {X} is called the eigenvector, and the values of λ for which nontrivial solutions exist are called the eigenvalues. We demonstrate how to obtain the eigenvectors using the following vibration example.
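Numerically, eigenvalues and eigenvectors are obtained with library routines rather than by expanding the characteristic determinant by hand. A minimal Python/NumPy sketch on an arbitrary symmetric 2 × 2 matrix (not the vibration example that follows):

```python
import numpy as np

# Eigenvalue problem of Eq. (2.73): values of lambda and vectors {X}
# satisfying [A]{X} = lambda {X}. Eigenvectors are returned as columns.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, X in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ X, lam * X)   # each pair satisfies [A]{X} = lambda {X}

assert np.allclose(sorted(eigenvalues), [1.0, 3.0])
```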