Chapter 2. Linear Systems: Lecture Notes For MA2327
Linear systems
Lecture notes for MA2327
P. Karageorgis
Linear homogeneous systems
$c_1 e^t + c_2 t + c_3 = 0$ for all $t$. Differentiating twice gives $c_1 e^t = 0$, so $c_1 = 0$ and
$c_2 t + c_3 = 0$ for all $t$.
Then y1 (t), y2 (t), . . . , yn (t) are easily seen to form a basis for the
space of solutions. However, such a basis is not usually explicit.
Basis of solutions: Example, page 1
We obtain a basis of solutions for the linear homogeneous system
$$y'(t) = A(t)\,y(t), \qquad A(t) = \begin{bmatrix} 1 & 0 \\ e^t & 2 \end{bmatrix}.$$
This shows that every solution of the system has the form
$$y(t) = \begin{bmatrix} c_1 e^t \\ (c_1 t + c_2)\,e^{2t} \end{bmatrix} = c_1 \begin{bmatrix} e^t \\ t e^{2t} \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ e^{2t} \end{bmatrix}.$$
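This claim is easy to verify directly: the short SymPy check below (a sanity check added to the notes, not part of them) confirms that the stated $y(t)$ satisfies $y'(t) = A(t)\,y(t)$ for all $c_1, c_2$.

```python
import sympy as sp

t, c1, c2 = sp.symbols("t c1 c2")
A = sp.Matrix([[1, 0], [sp.exp(t), 2]])
y = sp.Matrix([c1*sp.exp(t), (c1*t + c2)*sp.exp(2*t)])
# residual of y'(t) - A(t) y(t); it should simplify to the zero vector
residual = sp.simplify(y.diff(t) - A*y)
```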
Systems with constant coefficients
$$y(t) = c_1 e^{7t} v_1 + c_2 e^{t} v_2 = \begin{bmatrix} c_1 e^{7t} + c_2 e^{t} \\ 2c_1 e^{7t} - c_2 e^{t} \end{bmatrix}.$$
Eigenvector method: Example 2
We use the eigenvector method to solve the linear system
$$y'(t) = A\,y(t), \qquad A = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 3 & 0 \\ 2 & 1 & 4 \end{bmatrix}.$$
Since A is lower triangular, its eigenvalues λ = 1, 3, 4 are merely the
diagonal entries of A. These are distinct, so A is diagonalisable and
one may easily check that the corresponding eigenvectors are
$$v_1 = \begin{bmatrix} -3 \\ 3 \\ 1 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}, \qquad v_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$
In view of the previous theorem, the solution of the system is thus
$$y(t) = c_1 e^{t} v_1 + c_2 e^{3t} v_2 + c_3 e^{4t} v_3 = \begin{bmatrix} -3c_1 e^{t} \\ 3c_1 e^{t} - c_2 e^{3t} \\ c_1 e^{t} + c_2 e^{3t} + c_3 e^{4t} \end{bmatrix}.$$
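The eigenpairs used in this example can be verified with a quick SymPy computation (a check added to the notes, not part of them):

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0], [2, 3, 0], [2, 1, 4]])
# the eigenvalue-eigenvector pairs claimed in the example
eigenpairs = {1: sp.Matrix([-3, 3, 1]),
              3: sp.Matrix([0, -1, 1]),
              4: sp.Matrix([0, 0, 1])}
for lam, v in eigenpairs.items():
    assert A*v == lam*v  # v is an eigenvector for eigenvalue lam
```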
Given a square matrix $A$, the matrix exponential $e^{tA}$ is defined in terms of the power series $e^{tA} = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}$. It can be shown that this series converges for every square matrix $A$.
Matrix exponential: Jordan forms
Theorem 2.8 – Matrix exponential of a Jordan form
Suppose that J is a k × k Jordan block with eigenvalue λ. Then the
exponential $e^{tJ}$ is a lower triangular matrix and the entries that lie $j$ steps below the diagonal are equal to $\frac{t^j}{j!}\,e^{\lambda t}$ for each $0 \leq j < k$.
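The theorem can be tested numerically. The sketch below (an addition to the notes) builds a lower-triangular Jordan block, computes its exponential by summing the defining power series, and compares the result with the formula of Theorem 2.8; the eigenvalue, the value of $t$ and the block size are arbitrary choices.

```python
import math
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via the truncated power series sum_k M^k / k!."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

lam, t, k = 2.0, 0.7, 3
# k x k lower-triangular Jordan block: lam on the diagonal, ones one step below
J = lam*np.eye(k) + np.diag(np.ones(k - 1), -1)
# Theorem 2.8: entries j steps below the diagonal equal (t^j / j!) e^{lam t}
expected = math.exp(lam*t) * sum(
    (t**j / math.factorial(j)) * np.diag(np.ones(k - j), -j) for j in range(k))
assert np.allclose(expm_series(t*J), expected)
```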
Matrix exponential: Example 1, page 1
We compute the matrix exponential of the diagonalisable matrix
$$A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}.$$
The characteristic polynomial of this matrix is given by
$$f(\lambda) = \lambda^2 - (\operatorname{tr} A)\lambda + \det A = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5),$$
so the eigenvalues are real and distinct, namely λ1 = 2 and λ2 = 5.
The corresponding eigenvectors are easily found to be
$$v_1 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Merging the eigenvectors to form a matrix $B$, we now get
$$B = \begin{bmatrix} 1 & 1 \\ -2 & 1 \end{bmatrix} \quad\Longrightarrow\quad J = B^{-1}AB = \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}.$$
Matrix exponential: Example 1, page 2
Since the Jordan form $J$ is diagonal, the same is true for $e^{tJ}$ and
$$J = B^{-1}AB = \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix} \quad\Longrightarrow\quad e^{tJ} = \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{5t} \end{bmatrix}.$$
$$e^{tA} = B e^{tJ} B^{-1} = \begin{bmatrix} 1 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{5t} \end{bmatrix} \begin{bmatrix} 1/3 & -1/3 \\ 2/3 & 1/3 \end{bmatrix} = \frac{1}{3} \begin{bmatrix} e^{2t} + 2e^{5t} & -e^{2t} + e^{5t} \\ -2e^{2t} + 2e^{5t} & 2e^{2t} + e^{5t} \end{bmatrix}.$$
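The closed form for $e^{tA}$ can be double-checked numerically by summing the defining power series (a sanity check added to the notes; the value $t = 0.3$ is an arbitrary choice):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via the truncated power series sum_k M^k / k!."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[4.0, 1.0], [2.0, 3.0]])
t = 0.3
e2, e5 = np.exp(2*t), np.exp(5*t)
# the closed form (e^{2t} + 2e^{5t} etc.) divided by 3, as in the example
closed_form = np.array([[e2 + 2*e5, -e2 + e5],
                        [-2*e2 + 2*e5, 2*e2 + e5]]) / 3
assert np.allclose(expm_series(t*A), closed_form)
```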
$$e^{tA} = B e^{tJ} B^{-1} = \begin{bmatrix} 1 & 6 \\ 0 & 9 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ t e^{3t} & e^{3t} \end{bmatrix} \begin{bmatrix} 1 & -2/3 \\ 0 & 1/9 \end{bmatrix} = e^{3t} \begin{bmatrix} 1 + 6t & -4t \\ 9t & 1 - 6t \end{bmatrix}.$$
On the other hand, one has $e^{\pm it} = \cos t \pm i\sin t$, so this implies that
$$e^{tA} = \frac{e^t}{2} \begin{bmatrix} 2\cos t & -2\sin t \\ 2\sin t & 2\cos t \end{bmatrix} = \begin{bmatrix} e^t\cos t & -e^t\sin t \\ e^t\sin t & e^t\cos t \end{bmatrix}.$$
Needless to say, etA will always turn out to be real when A is real.
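As an illustration, the rotation-type closed form can be confirmed numerically; the matrix $A$ below is an assumed example with eigenvalues $1 \pm i$, not necessarily the one used in the notes.

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via the truncated power series sum_k M^k / k!."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# hypothetical example matrix with eigenvalues 1 +/- i
A = np.array([[1.0, -1.0], [1.0, 1.0]])
t = 0.5
# e^{tA} = e^t times a rotation through the angle t
closed_form = np.exp(t) * np.array([[np.cos(t), -np.sin(t)],
                                    [np.sin(t), np.cos(t)]])
assert np.allclose(expm_series(t*A), closed_form)
```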
Fundamental matrix
Then y1 (t), y2 (t), . . . , yn (t) form a basis for the space of solutions.
When $A(t)$ is a matrix that commutes with its antiderivative $B(t)$, a fundamental matrix for the system is given by
$$\Phi(t) = e^{B(t)}, \qquad B(t) = \int_0^t A(s)\, ds.$$
If $A(t)$ and $b(t)$ are continuous, then every solution has the form
$$y(t) = \Phi(t)\,c + \Phi(t) \int \Phi(t)^{-1} b(t)\, dt,$$
where $c$ is an arbitrary constant vector.
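A concrete instance of the commuting-antiderivative criterion (an illustrative choice, not taken from the notes) is $A(t) = \begin{bmatrix} 0 & t \\ -t & 0 \end{bmatrix}$. The sketch below checks that $A(t)$ commutes with its antiderivative and that $\Phi(t) = e^{B(t)}$, computed by hand as a rotation through the angle $t^2/2$, satisfies $\Phi'(t) = A(t)\Phi(t)$.

```python
import sympy as sp

t, s = sp.symbols("t s")
# hypothetical example: A(t) commutes with its antiderivative B(t)
A = sp.Matrix([[0, t], [-t, 0]])
B = A.subs(t, s).integrate((s, 0, t))     # B(t) = [[0, t^2/2], [-t^2/2, 0]]
assert sp.simplify(A*B - B*A) == sp.zeros(2, 2)
# e^{B(t)}, computed by hand: rotation through the angle t^2/2
Phi = sp.Matrix([[sp.cos(t**2/2), sp.sin(t**2/2)],
                 [-sp.sin(t**2/2), sp.cos(t**2/2)]])
assert sp.simplify(Phi.diff(t) - A*Phi) == sp.zeros(2, 2)
```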
Higher-order scalar equations
Since the scalar equation is linear, the same is true for the system, so
one may determine y using methods we have already developed.
This kind of approach is certainly valid, but it is not very efficient, as
we are only interested in the first entry of y. It is thus worth having
some related results that deal with scalar equations directly.
Linear homogeneous equations
If the coefficients ak are all constant, then one may obtain a basis of
solutions by solving the corresponding characteristic equation
$$a_n\lambda^n + \ldots + a_2\lambda^2 + a_1\lambda + a_0 = 0.$$
For instance, one such characteristic equation is
$$\lambda^3 - 5\lambda^2 + 7\lambda - 3 = 0.$$
Linear homogeneous equations: Example 2
Let us now solve an initial value problem such as
$$\lambda^2 - 1 = 0 \quad\Longrightarrow\quad (\lambda+1)(\lambda-1) = 0 \quad\Longrightarrow\quad \lambda = -1, 1.$$
Since the roots are both simple, every solution has the form
$$y(t) = c_1 e^t + c_2 e^{-t}.$$
$$m y''(t) = -k y(t).$$
Here, the constants $k, m$ are both positive, so one may also write
$$y''(t) + \omega^2 y(t) = 0, \qquad \omega = \sqrt{k/m}.$$
In this case, the characteristic equation gives
$$\lambda^2 + \omega^2 = 0 \quad\Longrightarrow\quad \lambda^2 = -\omega^2 \quad\Longrightarrow\quad \lambda = \pm i\omega.$$
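One can confirm symbolically that $y(t) = c_1\sin(\omega t) + c_2\cos(\omega t)$ solves the oscillator equation (a quick check added to the notes):

```python
import sympy as sp

t, omega, c1, c2 = sp.symbols("t omega c1 c2", positive=True)
y = c1*sp.sin(omega*t) + c2*sp.cos(omega*t)
# y'' + omega^2 y should vanish identically
assert sp.simplify(y.diff(t, 2) + omega**2*y) == 0
```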
Suppose that the coefficients ak are all constant and that the right
hand side f (t) is a linear combination of terms that have the form
One typically uses this theorem to write down an explicit formula for
a particular solution yp . It is easy to predict the terms that appear in
the formula, but their exact coefficients need to be determined.
Undetermined coefficients: General rules
The general rules for finding a particular solution yp are the following.
1 If $f(t)$ contains the term $t^k e^{\lambda t}$, then $y_p$ contains the expression
$$\sum_{j=0}^{k} A_j t^j e^{\lambda t} = A_k t^k e^{\lambda t} + \ldots + A_1 t e^{\lambda t} + A_0 e^{\lambda t}.$$
2 If $f(t)$ contains either the term $t^k e^{at} \sin(bt)$ or the term $t^k e^{at} \cos(bt)$, but not necessarily both, then $y_p$ contains the expression
$$\sum_{j=0}^{k} A_j t^j e^{at} \sin(bt) + \sum_{j=0}^{k} B_j t^j e^{at} \cos(bt).$$
$$\lambda^2 - 3\lambda + 2 = 0 \quad\Longrightarrow\quad (\lambda-1)(\lambda-2) = 0 \quad\Longrightarrow\quad y_h = c_1 e^t + c_2 e^{2t}.$$
Undetermined coefficients: Example 2
$$\lambda^2 + 5\lambda + 6 = 0 \quad\Longrightarrow\quad (\lambda+2)(\lambda+3) = 0 \quad\Longrightarrow\quad y_h = c_1 e^{-2t} + c_2 e^{-3t}.$$
$$\lambda^2 + 1 = 0 \quad\Longrightarrow\quad \lambda = \pm i \quad\Longrightarrow\quad y_h = c_1 \sin t + c_2 \cos t.$$
Let us now worry about the particular solution yp . Based on the right
hand side of the given equation, a natural guess for yp would be
Undetermined coefficients: Example 4, page 2
$$2A = 0, \qquad -2B = 2, \qquad 2C = 4.$$
$$\lambda^2 - 2\lambda + 1 = 0 \quad\Longrightarrow\quad (\lambda-1)^2 = 0 \quad\Longrightarrow\quad y_h = c_1 e^t + c_2 t e^t.$$
Since $e^t$ and $t e^t$ already solve the homogeneous equation, the natural guess $y_p = A e^t + Bt + C$ needs to be adjusted to
$$y_p = A t^2 e^t + Bt + C.$$
Undetermined coefficients: Example 5, page 2
Differentiating the last equation twice, one easily finds that
$$y_p = A t^2 e^t + Bt + C,$$
$$y_p' = 2At e^t + A t^2 e^t + B,$$
$$y_p'' = 2A e^t + 4At e^t + A t^2 e^t,$$
$$y_p'' - 2y_p' + y_p = 2A e^t + Bt + C - 2B.$$
On the other hand, we need to ensure that the solution yp satisfies
$$y_p'' - 2y_p' + y_p = 2e^t + 3t + 4.$$
Comparing these two expressions, we arrive at the system
$$2A = 2, \qquad B = 3, \qquad C - 2B = 4.$$
This gives $A = 1$, $B = 3$ and $C = 10$, so the solution is
$$y = y_h + y_p = c_1 e^t + c_2 t e^t + t^2 e^t + 3t + 10.$$
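The final answer can be verified symbolically (a check added to the notes):

```python
import sympy as sp

t, c1, c2 = sp.symbols("t c1 c2")
y = c1*sp.exp(t) + c2*t*sp.exp(t) + t**2*sp.exp(t) + 3*t + 10
# y'' - 2y' + y should equal the right hand side 2e^t + 3t + 4
residual = sp.simplify(y.diff(t, 2) - 2*y.diff(t) + y - (2*sp.exp(t) + 3*t + 4))
assert residual == 0
```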
Linear independence and Wronskian
Definition 2.14 – Wronskian
The Wronskian of the functions $y_1(t), y_2(t), \ldots, y_n(t)$ is defined as the determinant
$$W(t) = \det \begin{bmatrix} y_1(t) & y_2(t) & \cdots & y_n(t) \\ y_1'(t) & y_2'(t) & \cdots & y_n'(t) \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)}(t) & y_2^{(n-1)}(t) & \cdots & y_n^{(n-1)}(t) \end{bmatrix}.$$
The converse of this theorem is not true in general. For instance, the Wronskian of the functions $y_1(t) = t^2$ and $y_2(t) = t|t|$ is identically zero, but these functions are linearly independent.
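The counterexample is easy to confirm numerically (a check added to the notes): the Wronskian of $t^2$ and $t|t|$ vanishes on a whole grid of points, even though neither function is a constant multiple of the other on all of $\mathbb{R}$.

```python
import numpy as np

# y1 = t^2 and y2 = t|t|, with y2' = 2|t|
ts = np.linspace(-2, 2, 101)
y1, dy1 = ts**2, 2*ts
y2, dy2 = ts*np.abs(ts), 2*np.abs(ts)
# Wronskian W = y1*y2' - y1'*y2 is identically zero
W = y1*dy2 - dy1*y2
assert np.allclose(W, 0)
```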
Variation of parameters: General case
Variation of parameters: Second-order case
where $W(t) = y_1(t)y_2'(t) - y_1'(t)y_2(t)$ is the Wronskian of $y_1$ and $y_2$.
Variation of parameters: Example
We use variation of parameters to find a particular solution of
$$y''(t) + y(t) = \sec t.$$
The solution of the associated homogeneous equation is given by
$$\lambda^2 + 1 = 0 \quad\Longrightarrow\quad \lambda = \pm i \quad\Longrightarrow\quad y_h = c_1 \sin t + c_2 \cos t.$$
Letting y1 (t) = sin t and y2 (t) = cos t, we now find that
$$W(t) = \det \begin{bmatrix} \sin t & \cos t \\ \cos t & -\sin t \end{bmatrix} = -\sin^2 t - \cos^2 t = -1.$$
According to the previous theorem, a particular solution is thus
$$y_p(t) = \sin t \int \cos t \cdot \sec t \, dt - \cos t \int \sin t \cdot \sec t \, dt$$
$$= \sin t \int \frac{\cos t}{\cos t}\, dt - \cos t \int \frac{\sin t}{\cos t}\, dt$$
$$= t \sin t + (\cos t) \log(\cos t).$$
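The particular solution can be verified symbolically (a check added to the notes):

```python
import sympy as sp

t = sp.symbols("t")
yp = t*sp.sin(t) + sp.cos(t)*sp.log(sp.cos(t))
# y'' + y should equal sec t
residual = sp.simplify(yp.diff(t, 2) + yp - 1/sp.cos(t))
assert residual == 0
```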
Reduction of order
Reduction of order: Example, page 1
$$z = t^2 v, \qquad z' = 2tv + t^2 v', \qquad z'' = 2v + 4tv' + t^2 v''$$
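These derivatives of the substitution are easily confirmed with SymPy (a check added to the notes):

```python
import sympy as sp

t = sp.symbols("t")
v = sp.Function("v")
z = t**2*v(t)
# product-rule expansions of z' and z'' for the substitution z = t^2 v
assert sp.expand(z.diff(t) - (2*t*v(t) + t**2*v(t).diff(t))) == 0
assert sp.expand(z.diff(t, 2)
                 - (2*v(t) + 4*t*v(t).diff(t) + t**2*v(t).diff(t, 2))) == 0
```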
Reduction of order: Example, page 2