
Contents

1 Formula Collection
1.1 Algebra
1.1.1 Calculation Rules for Powers
1.1.2 Binomial Formulas
1.1.3 Quadratic Equation
1.1.4 Calculation Rules for Roots
1.1.5 Fraction Rules
1.1.6 Calculation Rules for Logarithms
1.2 Trigonometric Formulas
1.3 Complex Numbers
1.3.1 Cartesian Form of a Complex Number
1.3.2 Polar Form of a Complex Number
1.4 Vectors
1.4.1 Vectors in 3-dimensional Space
1.4.2 Vectors in n-dimensional Space
1.4.3 Dot Product
1.4.4 Cross Product
1.5 Matrices and Determinants
1.5.1 Matrices, Basic Concepts
1.5.2 Addition of Matrices
1.5.3 Matrix Multiplication
1.5.4 Properties of the Transpose of a Matrix
1.5.5 Determinant
Determinant of Order 2
1.5.6 Determinant of Order 3
1.5.7 Minor, Cofactor
1.5.8 Laplace's Formula
1.5.9 Properties of Determinants
1.5.10 Inverse Matrix
Eigenvalues and Eigenvectors


Chapter 1
Formula Collection (Formelsammlung)

1.1 Algebra

1.1.1 Calculation Rules for Powers


For a, b > 0 the following rules hold:

b^n · b^m = b^(n+m),    b^n / b^m = b^(n−m),    (b^n)^m = b^(n·m),

a^n · b^n = (ab)^n,    (a/b)^n = a^n / b^n,    b^(−n) = 1/b^n,    b^0 = 1,    0^b = 0.

1.1.2 Binomial Formulas


The binomial formula describes the algebraic expansion of powers of a binomial, a polynomial with two terms:

(a + b)^n = Σ_{m=0}^{n} C_n^m a^(n−m) b^m = Σ_{m=0}^{n} C_n^m a^m b^(n−m),    C_n^m = (n choose m) = n! / (m! (n − m)!),

where each C_n^m is a specific positive integer known as a binomial coefficient. The basic example of the binomial formula is the formula for the square of a ± b:

(a ± b)^2 = a^2 ± 2ab + b^2.

Here we also give the higher powers of a ± b:

(a ± b)^3 = a^3 ± 3a^2 b + 3a b^2 ± b^3,
(a ± b)^4 = a^4 ± 4a^3 b + 6a^2 b^2 ± 4a b^3 + b^4,
(a ± b)^5 = a^5 ± 5a^4 b + 10a^3 b^2 ± 10a^2 b^3 + 5a b^4 ± b^5.


Other useful formulas are

a^2 − b^2 = (a − b)(a + b),    a^3 − b^3 = (a − b)(a^2 + ab + b^2).
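As a numerical cross-check of the binomial formula, the following Python sketch (illustrative only, not part of the original text; `binomial_expand` is a hypothetical helper) evaluates the sum term by term with `math.comb` and compares it with direct exponentiation:

```python
from math import comb

def binomial_expand(a, b, n):
    """Evaluate (a + b)**n term by term via the binomial formula."""
    return sum(comb(n, m) * a ** (n - m) * b ** m for m in range(n + 1))

# The term-by-term sum matches direct exponentiation:
print(binomial_expand(2, 3, 5))   # same as (2 + 3)**5 = 3125
```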

1.1.3 Quadratic Equation


The solutions of a quadratic equation ax^2 + bx + c = 0 (a, b, c ∈ R, a ≠ 0) are given by

x_{1,2} = (−b ± √D) / (2a) = (−b ± √(b^2 − 4ac)) / (2a).

The expression D = b^2 − 4ac, called the discriminant, determines the nature of the solutions.
1. D > 0: Two different real solutions.
2. D = 0: One (repeated) real solution.
3. D < 0: Two different complex solutions.

The solutions of a quadratic equation x^2 + px + q = 0 are

x_{1,2} = −p/2 ± √((p/2)^2 − q).
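The case distinction on the discriminant translates directly into Python; `math.sqrt` covers D ≥ 0 and `cmath.sqrt` the complex case. `solve_quadratic` is a hypothetical helper, shown only as an illustration:

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0; real for D >= 0, complex for D < 0."""
    D = b * b - 4 * a * c              # discriminant
    if D >= 0:
        r = math.sqrt(D)               # D >= 0: real square root
    else:
        r = cmath.sqrt(D)              # D < 0: two complex conjugate roots
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

print(solve_quadratic(1, -3, 2))   # D > 0: (2.0, 1.0)
print(solve_quadratic(1, 0, 1))    # D < 0: (1j, -1j)
```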

1.1.4 Calculation Rules for Roots


For a, b, c ∈ R, a, b > 0, c > 0 and m, n ≠ 0 the following rules hold:

√[n]{b^m} = b^(m/n),    √[n]{a · b} = √[n]{a} · √[n]{b},    √[n]{a} / √[n]{c} = √[n]{a/c},    √[m]{√[n]{b}} = √[n]{√[m]{b}} = √[m·n]{b}.

1.1.5 Fraction Rules


a/b + c/b = (a + c)/b,    a/b + c/d = (a·d + c·b)/(b·d),    (a/b) · (c/d) = (a·c)/(b·d),    (a/b) : (c/d) = (a·d)/(b·c).

1.1.6 Calculation Rules for Logarithms


For b, x, y > 0, b ≠ 1 the following holds:

1. Product Rule: log_b(x · y) = log_b x + log_b y,
2. Quotient Rule: log_b(x / y) = log_b x − log_b y,
3. Power Rule: log_b(x^n) = n log_b x.

A special case of rule 2 is log_b(1/x) = −log_b x.

Change of base formula: log_a x = log_b x / log_b a.
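Python's `math.log(x, b)` computes log_b x, so the rules above can be verified numerically. An illustrative sketch, not part of the original formula collection:

```python
import math

b, x, y, n = 2.0, 8.0, 32.0, 3

# Product rule: log_b(x*y) = log_b x + log_b y
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
# Quotient rule: log_b(x/y) = log_b x - log_b y
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))
# Power rule: log_b(x^n) = n * log_b x
assert math.isclose(math.log(x ** n, b), n * math.log(x, b))
# Change of base with a = 10: log_10 x = log_2 x / log_2 10
assert math.isclose(math.log(x, 10), math.log(x, b) / math.log(10, b))
print("all logarithm rules verified")
```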

1.2 Trigonometric Formulas

The trigonometric functions y = sin ϕ and y = cos ϕ are periodic functions with the
period T = 2π:

sin(ϕ + 2π) = sin ϕ, cos(ϕ + 2π) = cos ϕ.

The functions sin ϕ and cos ϕ can be written without parentheses if the argument is a single letter, or with parentheses as sin(ϕ) and cos(ϕ). Arguments with more than one letter are written with parentheses, like sin(3ϕ) and cos(ϕ + π/6).

Pythagorean trigonometric identity:

sin^2 ϕ + cos^2 ϕ = 1.

The function y = cos ϕ is even and y = sin ϕ is odd:

cos(−ϕ) = cos ϕ,    sin(−ϕ) = − sin ϕ.

Double-angle formulas:

sin(2ϕ) = 2 sin ϕ cos ϕ,
cos(2ϕ) = cos^2 ϕ − sin^2 ϕ = 2 cos^2 ϕ − 1 = 1 − 2 sin^2 ϕ.

Half-angle formulas:

sin^2 ϕ = (1 − cos(2ϕ)) / 2,    cos^2 ϕ = (1 + cos(2ϕ)) / 2.

Addition formulas:

sin(ϕ ± θ) = sin ϕ cos θ ± cos ϕ sin θ,


cos(ϕ ± θ) = cos ϕ cos θ ∓ sin ϕ sin θ.

Product identities:

sin ϕ sin θ = (cos(ϕ − θ) − cos(ϕ + θ)) / 2,
cos ϕ cos θ = (cos(ϕ − θ) + cos(ϕ + θ)) / 2,
sin ϕ cos θ = (sin(ϕ − θ) + sin(ϕ + θ)) / 2.

Sum-to-product identities:

sin ϕ ± sin θ = 2 sin((ϕ ± θ)/2) cos((ϕ ∓ θ)/2),
cos ϕ + cos θ = 2 cos((ϕ + θ)/2) cos((ϕ − θ)/2),
cos ϕ − cos θ = −2 sin((ϕ + θ)/2) sin((ϕ − θ)/2).

Table 1.1 Values of Trigonometric Functions

ϕ (degrees) |  0°   30°    45°    60°   90°   120°   135°    150°   180°  270°  360°
ϕ (radians) |  0    π/6    π/4    π/3   π/2   2π/3   3π/4    5π/6    π    3π/2   2π
sin ϕ       |  0    1/2   1/√2   √3/2    1    √3/2   1/√2    1/2     0    −1     0
cos ϕ       |  1   √3/2   1/√2    1/2    0    −1/2  −1/√2   −√3/2   −1     0     1
tan ϕ       |  0   1/√3     1     √3    ±∞    −√3     −1    −1/√3    0    ±∞     0

(tan ϕ is not defined at 90° and 270°.)
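The identities and table entries above can be spot-checked numerically. An illustrative Python sketch, not part of the formula collection:

```python
import math

phi, theta = 0.7, 1.9   # arbitrary angles in radians

# Pythagorean identity
assert math.isclose(math.sin(phi) ** 2 + math.cos(phi) ** 2, 1.0)
# Double-angle formulas
assert math.isclose(math.sin(2 * phi), 2 * math.sin(phi) * math.cos(phi))
assert math.isclose(math.cos(2 * phi), math.cos(phi) ** 2 - math.sin(phi) ** 2)
# Addition formula for the sine
assert math.isclose(math.sin(phi + theta),
                    math.sin(phi) * math.cos(theta) + math.cos(phi) * math.sin(theta))
# Table 1.1 entry: sin 30 degrees = 1/2
assert math.isclose(math.sin(math.radians(30)), 0.5)
print("identities hold")
```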

1.3 Complex Numbers

1.3.1 Cartesian Form of a Complex Number


A complex number z is generally written as

z = x + iy, x = Re(z), y = Im(z), x, y ∈ R.



The symbol i stands for a number with the property i^2 = −1. The set of complex numbers is denoted by the symbol C: C = { z | z = x + iy, x, y ∈ R }.
A complex number z = x + iy has the complex conjugate number z* = x − iy.
The absolute value (or modulus or magnitude) of a complex number z, | z |, is the distance of the point z from O, i.e. the length of its position vector:

| z | = √(z z*) = √(x^2 + y^2),    | z | ≥ 0.

The following operations apply for any complex numbers z1 = x1 + iy1 and z2 = x2 + iy2 and real λ:

Addition: z1 + z2 = (x1 + x2) + i(y1 + y2),
Subtraction: z1 − z2 = (x1 − x2) + i(y1 − y2),
Multiplication with a real number: λz1 = λ(x1 + iy1) = λx1 + iλy1,
Multiplication: z1 · z2 = (x1 x2 − y1 y2) + i(x1 y2 + x2 y1),
Division: z1 / z2 = (z1 z2*) / (z2 z2*) = (x1 x2 + y1 y2)/(x2^2 + y2^2) + i (x2 y1 − x1 y2)/(x2^2 + y2^2).
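The multiplication and division formulas can be compared against Python's built-in complex arithmetic; `c_div` is a hypothetical helper implementing the conjugate formula above (illustrative sketch):

```python
def c_div(z1, z2):
    """Division via the conjugate formula; compare with Python's complex /."""
    x1, y1, x2, y2 = z1.real, z1.imag, z2.real, z2.imag
    d = x2 ** 2 + y2 ** 2                       # z2 * conj(z2)
    return complex((x1 * x2 + y1 * y2) / d, (x2 * y1 - x1 * y2) / d)

z1, z2 = 3 + 4j, 1 - 2j
assert abs(c_div(z1, z2) - z1 / z2) < 1e-12
# Multiplication formula: (x1 x2 - y1 y2) + i (x1 y2 + x2 y1)
assert z1 * z2 == complex(z1.real * z2.real - z1.imag * z2.imag,
                          z1.real * z2.imag + z2.real * z1.imag)
print(c_div(z1, z2))
```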

1.3.2 Polar Form of a Complex Number


The form

z = r (cos ϕ + i sin ϕ)

is called the trigonometric form of a complex number. The complex conjugate of z is z* = r (cos ϕ − i sin ϕ). The absolute value r = | z | is the distance of z from the origin (r ≥ 0), and ϕ is the argument, also denoted as arg(z), ϕ = arg(z). Here ϕ is the main (principal) value of the argument, 0 ≤ ϕ < 2π. The argument can be written as

ϕ_k = ϕ + 2kπ,    0 ≤ ϕ < 2π,    k ∈ Z.

With the help of Euler's formula

e^(iϕ) = cos ϕ + i sin ϕ,

a complex number in the trigonometric form can be written in exponential form as

z = r (cos ϕ + i sin ϕ) = r e^(iϕ).
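Python's `cmath` module works with the polar form. Note one difference from the text: `cmath.phase` returns the argument in (−π, π] rather than the principal value in [0, 2π) used above. An illustrative sketch:

```python
import cmath
import math

z = 1 + 1j
r, phi = abs(z), cmath.phase(z)        # modulus and argument
assert math.isclose(r, math.sqrt(2))
assert math.isclose(phi, math.pi / 4)

# The trigonometric form and Euler's formula both reproduce z:
assert abs(r * (math.cos(phi) + 1j * math.sin(phi)) - z) < 1e-12
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12
print(r, phi)
```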



1.4 Vectors

1.4.1 Vectors in 3-dimensional Space


Canonical unit vectors of three-dimensional space:

~ex = (1, 0, 0), ~ey = (0, 1, 0), ~ez = (0, 0, 1)

form an orthonormal basis

|~ex |2 = |~ey |2 = |~ez |2 = 1, ~ex ⊥ ~ey ⊥ ~ez .

A three-dimensional vector ~u can be represented by three components ux , uy and uz .

~u = ~ux + ~uy + ~uz = ux~ex + uy ~ey + uz ~ez = (ux , uy , uz ).

The magnitude of ~u is:

| ~u | = √(u_x^2 + u_y^2 + u_z^2).
The position vector ~u of a point P can also be written as ~u = (x, y, z), where x, y and z are the coordinates of the point P. The Cartesian coordinates of a vector AB with initial point A = (x_A, y_A, z_A) and terminal point B = (x_B, y_B, z_B) are:

AB = (x_B − x_A)~ex + (y_B − y_A)~ey + (z_B − z_A)~ez = (x_B − x_A, y_B − y_A, z_B − z_A).

The unit vector of a nonzero vector ~u is determined by normalizing the vector with its magnitude | ~u |:

~e_u = ~u / | ~u |.

1.4.2 Vectors in n-dimensional Space


Canonical unit vectors in n-dimensional space

~e1 = (1, 0, 0, . . .),    ~e2 = (0, 1, 0, . . .),    ~e3 = (0, 0, 1, . . .),    . . .

form an orthonormal basis:

| ~ei |^2 = 1,  i = 1, 2, . . . , n,    ~ei ⊥ ~ej  (i ≠ j).

Instead of the indexes x, y, z, the components are labeled by i = 1, 2, 3, 4, . . .. An n-dimensional vector ~u can be represented by its components u_i:

~u = u1 ~e1 + u2 ~e2 + u3 ~e3 + . . . + un ~en = (u1, u2, u3, . . . , un).

The magnitude of ~u is: | ~u | = √(u1^2 + u2^2 + u3^2 + . . . + un^2).

1.4.3 Dot Product


D e f i n i t i o n. Dot product: geometric definition
The dot product ~u · ~v of two vectors ~u and ~v is defined as:

~u · ~v = | ~u | · | ~v | cos ϕ,    0° ≤ ϕ ≤ 180°,

where ϕ is the angle between these vectors.

D e f i n i t i o n. Dot product: algebraic definition

The dot product ~u · ~v can be represented by the vector coordinates. In the case of n-dimensional vectors it can be written:

~u · ~v = (u1, u2, . . . , un) · (v1, v2, . . . , vn)^T = u1 v1 + u2 v2 + . . . + un vn = Σ_{i=1}^{n} u_i v_i.

The dot product of the three-dimensional vectors ~u = (ux, uy, uz) and ~v = (vx, vy, vz) is:

~u · ~v = ux vx + uy vy + uz vz.

The angle ϕ between these vectors is:

cos ϕ = (~u · ~v) / (| ~u | · | ~v |) = (ux vx + uy vy + uz vz) / (√(ux^2 + uy^2 + uz^2) · √(vx^2 + vy^2 + vz^2)).

The dot product is commutative and distributive over vector addition:

~u · ~v = ~v · ~u,    ~u · (~v + ~w) = ~u · ~v + ~u · ~w.
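Both definitions of the dot product combine into a formula for the angle ϕ. The sketch below is illustrative only; `dot` and `angle` are hypothetical helper names:

```python
import math

def dot(u, v):
    """Dot product of two same-length vectors: sum of u_i * v_i."""
    return sum(ui * vi for ui, vi in zip(u, v))

def angle(u, v):
    """Angle between two vectors, in radians, via cos(phi) = u.v / (|u||v|)."""
    norm = lambda w: math.sqrt(dot(w, w))
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u, v = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
assert dot(u, v) == 1.0
assert math.isclose(angle(u, v), math.pi / 4)   # 45 degrees
print(math.degrees(angle(u, v)))
```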

1.4.4 Cross Product


D e f i n i t i o n. Cross product: geometric definition
The cross product, also vector product, of two vectors ~a and ~b, denoted as ~a × ~b, is a vector ~c which is orthogonal to ~a and ~b:

~c = ~a × ~b,    ~c ⊥ ~a,    ~c ⊥ ~b.

The magnitude of ~c is the area of the parallelogram formed by ~a and ~b:

| ~c | = | ~a | · | ~b | sin ϕ,    0 ≤ ϕ ≤ π,

where ϕ is the angle between these two vectors.


10 1 Formelsammlung

The cross product can also be written as

~a × ~b = | ~a | · | ~b | sin ϕ  ~n.

The vector ~n is the unit vector orthogonal to ~a and ~b. The direction of ~n is determined by convention by the right-hand rule. The cross product is an operation on two three-dimensional vectors.

D e f i n i t i o n. Cross product: algebraic definition

The cross product of two vectors ~a and ~b is the following vector:

~c = ~a × ~b = (ay bz − az by,  az bx − ax bz,  ax by − ay bx).

~c can be represented by a three-row determinant:

            | ~ex  ~ey  ~ez |
~c = ~a × ~b = | ax   ay   az | =
            | bx   by   bz |

= ~ex | ay  az | − ~ey | ax  az | + ~ez | ax  ay | =
      | by  bz |       | bx  bz |       | bx  by |

= ~ex (ay bz − az by) − ~ey (ax bz − az bx) + ~ez (ax by − ay bx).

Properties of the cross product:

~a × ~b = −~b × ~a,
(~a + ~b) × ~c = ~a × ~c + ~b × ~c,
(λ~a) × ~b = ~a × (λ~b) = λ(~a × ~b), λ ∈ R.
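The component formula is easy to check in Python, including anticommutativity and orthogonality to both factors. An illustrative sketch; `cross` is a hypothetical helper:

```python
def cross(a, b):
    """Cross product of two 3-dimensional vectors, by the component formula."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

ex, ey, ez = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(ex, ey) == ez                 # right-hand rule: e_x x e_y = e_z
assert cross(ey, ex) == (0, 0, -1)         # anticommutativity: b x a = -(a x b)

a, b = (1, 2, 3), (4, 5, 6)
c = cross(a, b)
# The result is orthogonal to both factors:
assert sum(ci * ai for ci, ai in zip(c, a)) == 0
assert sum(ci * bi for ci, bi in zip(c, b)) == 0
print(c)
```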

1.5 Matrices and Determinants

1.5.1 Matrices, Basic Concepts


D e f i n i t i o n. An (m, n)-matrix is generally written as follows:

    | a11  a12  ...  a1k  ...  a1n |
    | a21  a22  ...  a2k  ...  a2n |
A = | ...  ...  ...  ...  ...  ... |
    | ai1  ai2  ...  aik  ...  ain |
    | ...  ...  ...  ...  ...  ... |
    | am1  am2  ...  amk  ...  amn |

It is not a number, but an ordered scheme consisting of m rows and n columns. aik is called an element or an entry of the matrix A; it lies in row i and column k. An element of a matrix has two indices; the first stands for the row and the second for the column in which the entry is.
Conventionally, rows are mentioned before columns. The matrix A(m,n) has m rows and n columns. A column vector of A has m components, a row vector has n components. The number of rows and columns is also denoted as the dimension or size of the matrix. The matrix A has dimension m × n and can be written A = A(m,n) or A = (aij)(m,n) (spoken: m by n matrix).
The elements of a matrix can also be complex numbers. The following matrices A and B are complex matrices:

A = | 1   −i |      B = | 3  1 + i   8i |
    | 2i   3 |,         | 1   −i    −7 |.

A complex conjugate matrix M* is the matrix M with complex conjugate elements. The conjugate matrices of A and B are

A* = | 1    i |      B* = | 3  1 − i  −8i |
     | −2i  3 |,          | 1    i    −7 |.
The transpose of a matrix A, written A^T, is created by interchanging the rows and columns of a given matrix. For example, the first row of A is the first column of A^T, the second row of A is the second column of A^T, and so on:

(A^T)_ik = a_ki.

The transpose of the matrix

A = | 1  3   5 |      is      A^T = | 1   7 |
    | 7  5  −9 |                    | 3   5 |
                                    | 5  −9 |.
Two matrices A and B are called equal, A = B, if they have the same dimension and the corresponding elements are equal.
A square matrix has the same number of rows and columns; it is an n × n-matrix. The matrices M and N are square matrices:

M = | 6   0 |      N = | 1  −2  3 |
    | 0  −9 |,         | 0  −3  9 |
                       | 2  19  5 |.

The elements aii on the main diagonal of a square matrix are called diagonal elements. The other diagonal of a square matrix, from the top right to the bottom left corner, is called the antidiagonal.
The trace of a square matrix is the sum of the elements on the main diagonal. The trace of a matrix A(n,n) is

tr(A) = a11 + a22 + . . . + ann = Σ_{i=1}^{n} aii.

A diagonal matrix is a square matrix in which all non-diagonal elements are zero. The matrices F and G are diagonal matrices:

F = | 6   0 |      G = | 5   0  0 |
    | 0  −9 |,         | 0  −3  0 |
                       | 0   0  9 |.

The n × n identity matrix or unit matrix is a diagonal matrix with ones on the main diagonal and zeros elsewhere. The matrix E is a 3 × 3 identity matrix:

E = | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |.
A square matrix is called an upper triangular matrix if the elements located below the diagonal are zeros. A square matrix is called a lower triangular matrix if the elements located above the diagonal are zeros.
A symmetric matrix is a square matrix that is equal to its transpose:

A^T = A,    aik = aki.

An antisymmetric or skew-symmetric matrix is a square matrix whose transpose is its negation:

A^T = −A,    aik = −aki.

1.5.2 Addition of matrices


Two matrices A and B can be added if they have the same dimension, that is, the same number of rows and columns:

A + B = C :    A(m,n) + B(m,n) = C(m,n).

An element cik of the matrix C is created by adding the corresponding elements of the matrices A and B: aik + bik = cik. The order in which matrices are added does not matter: A + B = B + A.
A matrix is multiplied by a number by multiplying each element of the matrix by this number:

     | λa11  λa12  ...  λa1k  ...  λa1n |
     | λa21  λa22  ...  λa2k  ...  λa2n |
λA = | ...   ...   ...  ...   ...  ...  |
     | λai1  λai2  ...  λaik  ...  λain |
     | ...   ...   ...  ...   ...  ...  |
     | λam1  λam2  ...  λamk  ...  λamn |

The expression α1 A1 + α2 A2 + . . . + αn An is called a linear combination of A1, A2, . . ., An with the coefficients α1, α2, . . ., αn.
Subtraction is defined as:

A − B = C :    A(m,n) − B(m,n) = C(m,n),    aik − bik = cik,
A − B = A + (−1)B = (−1)B + A.

Each square matrix M can be represented as the sum of a symmetric matrix MS and an antisymmetric matrix MA:

M = MS + MA,    MS = (M + M^T)/2,    MA = (M − M^T)/2.
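The decomposition M = MS + MA can be sketched directly in Python; `transpose` and `decompose` are hypothetical helper names used only for this illustration:

```python
def transpose(M):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

def decompose(M):
    """Split a square matrix into its symmetric and antisymmetric parts."""
    n = len(M)
    MT = transpose(M)
    MS = [[(M[i][j] + MT[i][j]) / 2 for j in range(n)] for i in range(n)]
    MA = [[(M[i][j] - MT[i][j]) / 2 for j in range(n)] for i in range(n)]
    return MS, MA

M = [[1, 2], [4, 3]]
MS, MA = decompose(M)
assert MS == [[1, 3], [3, 3]]            # MS^T == MS
assert MA == [[0, -1], [1, 0]]           # MA^T == -MA
assert all(MS[i][j] + MA[i][j] == M[i][j] for i in range(2) for j in range(2))
```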
A complex matrix M can be represented as the sum of two matrices R and I: M = R + iI. The matrices R and I have real elements and are called the real part and the imaginary part of the matrix M. For example,

M = | 2i  1 − 5i | = | 0  1 | + i | 2  −5 | = R + iI,    R = Re(M),    I = Im(M).
    | 3   2 + 3i |   | 3  2 |     | 0   3 |
Rules of addition
The matrices A, B and C are chosen in a way that addition and subtraction are defined. The following rules apply to the matrices:

A + B = B + A                      commutative law
(A + B) + C = A + (B + C)          associative law
A + 0 = A                          addition of the zero matrix
A − A = A + (−A) = 0               addition of the opposite matrix

1.5.3 Matrix Multiplication


The product of two matrices A and B is defined when the number of columns of the first matrix A is equal to the number of rows of the second matrix B. If A is an (m, k)-matrix and B a (k, n)-matrix, then the product AB is an (m, n)-matrix:

A · B = A(m,k) B(k,n) = C(m,n),    cij = Σ_{r=1}^{k} air brj.

The element cij of the product matrix is calculated by multiplying the i-th row of the first matrix by the j-th column of the second matrix.
The matrices A, B and C are defined in a way that the indicated operations can be performed (α, β ∈ R):

(A · B) · C = A · (B · C)                          associative law
A · (B + C) = A · B + A · C                        distributive law
α(A + B) = αA + αB,    (α + β)A = αA + βA
α(A · B) = (αA) · B = A · (αB)

A commutator of the matrices A and B is defined by the following equation:


[A, B] = AB − BA.
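The summation formula c_ij = Σ a_ir b_rj translates directly into Python; `matmul` is a hypothetical helper, shown only as an illustration. The example also shows why the commutator [A, B] need not vanish:

```python
def matmul(A, B):
    """Product of an (m,k)-matrix and a (k,n)-matrix: c_ij = sum_r a_ir * b_rj."""
    k = len(B)
    assert len(A[0]) == k, "columns of A must equal rows of B"
    return [[sum(A[i][r] * B[r][j] for r in range(k)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert matmul(A, B) == [[2, 1], [4, 3]]
# Matrix multiplication is generally not commutative, so the commutator
# [A, B] = AB - BA need not be the zero matrix:
assert matmul(A, B) != matmul(B, A)
```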

1.5.4 Properties of Transpose of a Matrix


For transposed matrices the following properties hold:
1. The matrices A and B have the same dimension m × n. The transpose of a sum of
two matrices is the sum of transposed matrices:

(A + B)T = AT + B T .

This property can be applied to the sum of several matrices of the same dimension:

(A1 + A2 + A3 + . . . + An )T = AT1 + AT2 + AT3 + . . . + ATn .

2. α is a scalar. The transpose of the product αA is:

(αA)T = αAT .

3. Two times applied transformation results in the original matrix.

(AT )T = A.

4. The transpose of the product of two matrices A and B is:

(AB)T = B T AT .

Here we assume that the conditions of matrix multiplication are fulfilled. Generally, for the product of several matrices:

(A1 · A2 · . . . · An−1 · An )T = ATn · ATn−1 · . . . · AT2 · AT1 .
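Property 4, (AB)^T = B^T A^T with the factors in reversed order, can be verified numerically. An illustrative sketch with hypothetical helpers `transpose` and `matmul`:

```python
def transpose(M):
    """Transpose: rows become columns."""
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    """Matrix product via row-times-column sums."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [4, 5, 6]]          # (2,3)-matrix
B = [[7, 8], [9, 10], [11, 12]]     # (3,2)-matrix

# (AB)^T = B^T A^T, with the factors in reversed order:
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# Applying the transposition twice gives back the original matrix:
assert transpose(transpose(A)) == A
```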



1.5.5 Determinant
Determinant of Order 2

A determinant is a number which


1. is computed from the elements of a square matrix. A determinant of non-square
matrix is not defined.
2. is unique. Only one determinant corresponds to a square matrix.
3. is real, if all elements of the matrix are real.
4. is complex, if all elements of the matrix are complex.
The determinant of a matrix A is denoted det A or | A |. The determinant of a 2 × 2-matrix is a determinant of second order. It is evaluated by subtracting the products of its diagonals:

A = | a11  a12 |      det A = | a11  a12 | = a11 a22 − a12 a21.
    | a21  a22 |,             | a21  a22 |

1.5.6 Determinant of Order 3


The rule of Sarrus is a scheme with which the determinant of a 3 × 3-matrix can be calculated:

        | a11  a12  a13 |
det A = | a21  a22  a23 | = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 −    (1.1)
        | a31  a32  a33 |
        − a13 a22 a31 − a11 a23 a32 − a12 a21 a33.

I m p o r t a n t : The rule of Sarrus applies to 3-row determinants only.

1.5.7 Minor, Cofactor


D e f i n i t i o n. Submatrix
A submatrix of a matrix is obtained by removing some rows and/or columns. For example, by removing a column of a matrix

A = | a1  b1  c1 |
    | a2  b2  c2 |

one gets three square matrices of second order:

A1 = | b1  c1 |      A2 = | a1  c1 |      A3 = | a1  b1 |
     | b2  c2 |,          | a2  c2 |,          | a2  b2 |.
D e f i n i t i o n. Minor
Let K be a square matrix. A minor (or first minor) Mij of the element kij of this matrix is the determinant of the submatrix obtained by removing the i-th row and the j-th column of K.

D e f i n i t i o n. Cofactor
The cofactor Cij of a matrix K is defined by the relation Cij = (−1)i+j Mij .
The numerical values of the minor and the corresponding cofactor can differ from each
other in their sign only.

1.5.8 Laplace’s Formula


Any large determinant of order n is calculated by Laplace's formula. The determinant of a matrix is calculated with the help of this formula in terms of matrix minors of order n − 1. These determinants can be further calculated in terms of matrix minors of order n − 2.
The following formula shows the Laplace expansion along the first row:

        | a11  a12  a13 |
det A = | a21  a22  a23 | = a11 C11 + a12 C12 + a13 C13 =
        | a31  a32  a33 |

= a11 (−1)^(1+1) M11 + a12 (−1)^(1+2) M12 + a13 (−1)^(1+3) M13 =

= a11 | a22  a23 | − a12 | a21  a23 | + a13 | a21  a22 |
      | a32  a33 |       | a31  a33 |       | a31  a32 |.

The following formula shows the Laplace expansion of a determinant of 4th order along the first column:

        | a11  a12  a13  a14 |
det A = | a21  a22  a23  a24 | = a11 C11 + a21 C21 + a31 C31 + a41 C41 =
        | a31  a32  a33  a34 |
        | a41  a42  a43  a44 |

= a11 (−1)^(1+1) M11 + a21 (−1)^(2+1) M21 + a31 (−1)^(3+1) M31 + a41 (−1)^(4+1) M41
= a11 M11 − a21 M21 + a31 M31 − a41 M41,

where

M11 = | a22  a23  a24 |        M21 = | a12  a13  a14 |
      | a32  a33  a34 |,             | a32  a33  a34 |,
      | a42  a43  a44 |              | a42  a43  a44 |

M31 = | a12  a13  a14 |        M41 = | a12  a13  a14 |
      | a22  a23  a24 |,             | a22  a23  a24 |.
      | a42  a43  a44 |              | a32  a33  a34 |

Generally, the Laplace expansion of a determinant of n-th order can be represented as:

det A = Σ_{k=1}^{n} (−1)^(i+k) aik Mik    (expansion along the i-th row),

det A = Σ_{i=1}^{n} (−1)^(i+k) aik Mik    (expansion along the k-th column).

I m p o r t a n t . To save computation time, it is advisable to select an expansion row or an expansion column with as many zeros as possible.
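Laplace's formula gives a direct recursive implementation. The sketch below (hypothetical helper `det`, illustrative only) expands along the first row and compares a 3 × 3 case against the rule of Sarrus:

```python
def det(M):
    """Determinant via Laplace expansion along the first row (recursive)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for k in range(n):
        # Minor M_0k: remove row 0 and column k
        minor = [row[:k] + row[k + 1:] for row in M[1:]]
        total += (-1) ** k * M[0][k] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2

# The rule of Sarrus agrees with the Laplace expansion for a 3x3 matrix:
A = [[2, 0, 1], [1, 3, 0], [0, 5, 4]]
sarrus = (2*3*4 + 0*0*0 + 1*1*5) - (1*3*0 + 2*0*5 + 0*1*4)
assert det(A) == sarrus == 29
```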

1.5.9 Properties of Determinants


1. The determinants of a matrix and its transpose are the same.

det A = det AT . (1.2)

Since the transpose of the sum of two matrices A and B of the same dimension satisfies (A + B)^T = A^T + B^T, the determinant of A^T + B^T equals:

det(A^T + B^T) = det(A + B)^T = det(A + B).

2. Interchanging two rows or two columns changes the sign of the determinant:

| a1  a2  a3 |     | b1  b2  b3 |
| b1  b2  b3 | = − | a1  a2  a3 |
| c1  c2  c3 |     | c1  c2  c3 |.

3. The determinant of a matrix is zero if

– two rows or two columns are proportional to each other, e.g. R2 = λR1:

| a1   a2   a3  |
| λa1  λa2  λa3 | = 0,
| c1   c2   c3  |

– all elements of a row or a column are zero:

| a1  a2  0 |
| b1  b2  0 | = 0.
| c1  c2  0 |

4. A common factor of a row (or column) can be factored out of the determinant:

  | a1  a2  a3 |   | a1   a2   a3  |   | a1  a2  λa3 |
λ | b1  b2  b3 | = | λb1  λb2  λb3 | = | b1  b2  λb3 |
  | c1  c2  c3 |   | c1   c2   c3  |   | c1  c2  λc3 |.

5. The value of a determinant does not change when an arbitrary multiple of another row (or column) is added to a row (or column):

| a1  a2  a3 |   | a1 + λb1  a2 + λb2  a3 + λb3 |
| b1  b2  b3 | = | b1        b2        b3       |
| c1  c2  c3 |   | c1        c2        c3       |.

6. The determinant of a triangular matrix is equal to the product of the diagonal elements:

| a11  a12  a13 |
| 0    a22  a23 | = a11 a22 a33.
| 0    0    a33 |

7. The determinant of a product of two matrices is equal to the product of the determinants of these matrices:

det(A · B) = (det A) · (det B).

1.5.10 Inverse Matrix


D e f i n i t i o n. Nonsingular Matrix
A square matrix is called nonsingular (invertible) if and only if its determinant is not
zero.

D e f i n i t i o n. Singular Matrix
A square matrix is called singular if its determinant is zero.

D e f i n i t i o n. An n × n matrix A is invertible (nonsingular) if there exists a matrix A^(−1) such that the product

A · A^(−1) = A^(−1) · A = E

is the n × n identity matrix E. The uniquely determined matrix A^(−1) is called the inverse of A.

The inverse of a 2 × 2 invertible matrix is determined according to the following rule:

A = | a11  a12 |      A^(−1) = 1/(det A) |  a22  −a12 |
    | a21  a22 |,                        | −a21   a11 |.

The diagonal elements are interchanged, the antidiagonal elements are multiplied by −1, and the matrix is divided by det A.
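The 2 × 2 rule translates directly into Python; `inv2` is a hypothetical helper, shown only as an illustrative sketch:

```python
def inv2(A):
    """Inverse of a 2x2 matrix: swap diagonal, negate antidiagonal, divide by det."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "a singular matrix has no inverse"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4, 7], [2, 6]]          # det A = 10
Ainv = inv2(A)
assert Ainv == [[0.6, -0.7], [-0.2, 0.4]]

# A * A^-1 is the identity matrix (up to floating-point rounding):
prod = [[sum(A[i][r] * Ainv[r][j] for r in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```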
Properties of the inverse matrix:
1. The inverse of an inverse matrix is again the initial matrix:

(A^(−1))^(−1) = A.

2. If two n × n matrices A and B are invertible, then the product AB is also invertible:

(A · B)^(−1) = B^(−1) · A^(−1).

The rule also applies to three or more matrices:

(A · B · C)^(−1) = C^(−1) · B^(−1) · A^(−1).

3. The inverse of a matrix αA, where α is a real number not equal to zero, is:

(αA)^(−1) = (1/α) A^(−1).
α

4. If A is an invertible matrix, then the transpose A^T is also invertible:

(A^T)^(−1) = (A^(−1))^T.

5. The inverse of a diagonal matrix whose diagonal elements are not zero is also a diagonal matrix:

A = | d1  0   0  |      A^(−1) = | 1/d1  0     0    |
    | 0   d2  0  |,              | 0     1/d2  0    |
    | 0   0   d3 |               | 0     0     1/d3 |.

Each nonsingular n × n matrix A has exactly one inverse matrix:

    | a11  a12  ...  a1n |                         | A11  A21  ...  An1 |
A = | a21  a22  ...  a2n |      A^(−1) = 1/(det A) | A12  A22  ...  An2 |
    | ...  ...  ...  ... |,                        | ...  ...  ...  ... |
    | an1  an2  ...  ann |                         | A1n  A2n  ...  Ann |.

Here Aij is the cofactor of aij in A: Aij = (−1)^(i+j) Mij, where Mij is the minor obtained by removing the i-th row and the j-th column of A. Note that the matrix of cofactors appears transposed. The inverse of a 3 × 3 matrix is determined as follows:
follows:
A = | a11  a12  a13 |      A^(−1) = 1/(det A) | A11  A21  A31 |
    | a21  a22  a23 |,                        | A12  A22  A32 |
    | a31  a32  a33 |                         | A13  A23  A33 |,

A11 = (−1)^(1+1) M11 = | a22  a23 |      A21 = (−1)^(2+1) M21 = − | a12  a13 |
                       | a32  a33 |,                             | a32  a33 |,

A31 = (−1)^(3+1) M31 = | a12  a13 |      A12 = (−1)^(1+2) M12 = − | a21  a23 |
                       | a22  a23 |,                             | a31  a33 |,

A22 = (−1)^(2+2) M22 = | a11  a13 |      A32 = (−1)^(3+2) M32 = − | a11  a13 |
                       | a31  a33 |,                             | a21  a23 |,

A13 = (−1)^(1+3) M13 = | a21  a22 |      A23 = (−1)^(2+3) M23 = − | a11  a12 |
                       | a31  a32 |,                             | a31  a32 |,

A33 = (−1)^(3+3) M33 = | a11  a12 |
                       | a21  a22 |.
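The cofactor formula for the general inverse can be sketched recursively; `det` and `inverse` are hypothetical helpers, illustrative only (note the transposition of the cofactor matrix, as in the formula above):

```python
def det(M):
    """Determinant via Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k]
               * det([row[:k] + row[k + 1:] for row in M[1:]]) for k in range(n))

def inverse(M):
    """Inverse via cofactors: (A^-1)_ij = A_ji / det A  (note the transpose)."""
    n = len(M)
    d = det(M)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                   for r, row in enumerate(M) if r != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

A = [[2, 0, 1], [1, 3, 0], [0, 5, 4]]
Ainv = inverse(A)
# A * A^-1 is the identity matrix (up to floating-point rounding):
prod = [[sum(A[i][r] * Ainv[r][j] for r in range(3)) for j in range(3)]
        for i in range(3)]
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))
```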

Eigenvalues and Eigenvectors

Let A be an n × n matrix. The number λ is an eigenvalue of A if there exists a non-zero vector ~v such that

A~v = λ~v.

The vector ~v is called an eigenvector of A corresponding to λ.
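For a 2 × 2 matrix the eigenvalues can be obtained from the characteristic polynomial λ^2 − tr(A)·λ + det(A) = 0, a standard approach that is not derived in the text above. The illustrative sketch below solves it with the quadratic formula and verifies A~v = λ~v for one eigenvector:

```python
import math

A = [[2, 1], [1, 2]]
tr = A[0][0] + A[1][1]                      # trace of A
det = A[0][0] * A[1][1] - A[0][1] * A[1][0] # determinant of A

# Roots of lambda^2 - tr*lambda + det = 0 (real here, since disc >= 0):
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert (lam1, lam2) == (3.0, 1.0)

# Verify A v = lambda v for the eigenvector v = (1, 1) of lam1 = 3:
v = (1.0, 1.0)
Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
assert Av == (lam1 * v[0], lam1 * v[1])
print(lam1, lam2)
```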
