E0 219 Linear Algebra and Applications / August-December 2011
Evaluation Weightage: Assignments: 20%, Midterms (Two): 30%, Final Examination: 50%
6.5 Let η be a positive real number and let Vη be the C-subspace (of the C-vector space C^R of all
functions R → C) generated by the functions e^{iωt}, ω ∈ (−η, η).
(a) The given family of functions forms a C-basis of Vη. Further, show that the (real-valued)
functions 1, sin ωt, cos ωt, ω ∈ (0, η), also form a C-basis of Vη. In particular, the real-valued
functions in Vη are precisely the (finite) sums of constant real functions and of harmonic oscillations
with angular frequency < η. (Proof: Since Vη is generated (over C) by e^{iωt}, ω ∈ (−η, η), it is
enough to prove that these functions are linearly independent over C. For this we need to show that the functions
e^{iω_j t}, j = 1, …, n, n ∈ N⁺, are linearly independent over C for all pairwise distinct ω_1, …, ω_n ∈ (−η, η). We shall
prove this by induction on n. The case n = 1 is trivial. Now, suppose that ∑_{j=1}^{n} a_j e^{iω_j t} = 0 for a_1, …, a_n ∈ C
and all t ∈ R. Then iω_n · ∑_{j=1}^{n} a_j e^{iω_j t} = ∑_{j=1}^{n} iω_n a_j e^{iω_j t} = 0; on the other hand, by differentiating we
get ∑_{j=1}^{n} iω_j a_j e^{iω_j t} = 0 and hence ∑_{j=1}^{n−1} i(ω_n − ω_j) a_j e^{iω_j t} = 0. Now, since by the induction hypothesis the
functions e^{iω_j t}, j = 1, …, n−1, are linearly independent over C, it follows that i(ω_n − ω_j) a_j = 0, and hence
a_j = 0 for j = 1, …, n−1, because the ω_j are pairwise distinct. The original relation then reduces to a_n e^{iω_n t} = 0, so a_n = 0 as well.)
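A quick numerical plausibility check of this linear independence (not a proof, and not part of the exercise) is to sample exponentials with a few pairwise distinct frequencies at generic times and verify that the resulting matrix has full column rank; the frequencies, the sample times and the use of Python/numpy below are arbitrary illustrative choices.

# Sanity check (not a proof): exponentials e^{i*omega_j*t} with pairwise distinct
# frequencies, sampled at generic times, give a matrix of full column rank, so no
# nontrivial C-linear combination of them vanishes at all of these times.
import numpy as np

omegas = np.array([-0.7, -0.1, 0.3, 0.9])   # pairwise distinct, in (-eta, eta) for eta = 1
times = np.linspace(0.0, 5.0, 12)           # more sample times than frequencies

A = np.exp(1j * np.outer(times, omegas))    # entry (k, j) is e^{i*omega_j*t_k}
print(np.linalg.matrix_rank(A))             # expected: 4 = number of frequencies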
(b) The C-linear map F : Vη → C^N defined by f ↦ (f(nπ/η))_{n∈N} is injective. (Proof: The
C-linearity of the map F is clear from the definitions. Consider f ∈ Ker F. By part (a) we know that f is a (finite)
linear combination f = ∑_{ω∈(−η,η)} a_ω e^{iωt}. Since f ∈ Ker F, we have f(nπ/η) = 0 for all n ∈ N. Putting
λ_ω := e^{iωπ/η}, we get ∑_{ω∈(−η,η)} a_ω λ_ω^n = ∑_{ω∈(−η,η)} a_ω e^{inωπ/η} = 0 for all n ∈ N. From −π < ωπ/η < π it
follows that the λ_ω are pairwise distinct. Now Exercise 3.4-(b) shows that a_ω = 0 for all ω, i.e. f = 0.
Therefore we have Ker F = 0 and hence F is injective.)
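The crucial point in this argument is the Vandermonde-type system ∑ a_ω λ_ω^n = 0 with pairwise distinct λ_ω. A minimal numerical sketch of this step (with made-up values of η and of the frequencies; Python/numpy is used only for illustration):

# Sketch of the Vandermonde argument behind (b): with eta = 1 and a few frequencies
# in (-eta, eta), the numbers lambda_omega = e^{i*omega*pi/eta} are pairwise distinct,
# and the matrix (lambda_omega^n) has full column rank, so the only coefficients
# a_omega vanishing on all the samples n are a_omega = 0.
import numpy as np

eta = 1.0
omegas = np.array([-0.8, -0.2, 0.4, 0.9])        # pairwise distinct in (-eta, eta)
lam = np.exp(1j * omegas * np.pi / eta)

N = 8                                            # use the samples n = 0, ..., 7
V = np.vander(lam, N, increasing=True).T         # row n contains the powers lambda_omega^n
print(np.linalg.matrix_rank(V))                  # expected: 4, i.e. trivial null space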
– Remarks: A linear combination f of harmonic oscillations with angular frequencies < η, i.e. with frequencies below ν := η/2π,
is uniquely determined by its values f(n/2ν), n ∈ N, measured at times differing by 1/2ν, i.e. measured with frequency
2ν. This statement (proved above) is part of the Sampling Theorem from signal processing, which will be treated rigorously
and in full in the context of Fourier transformations. The interpolation problem, to reconstruct the function f from
its values a_n := f(n/2ν), measured with frequency 2ν, arises from this in a natural way. It is solved by the so-called
(Whittaker–)Shannon interpolation formula

  f(t) = a_0 sinc(ηt/π) + ∑_{n=1}^{∞} ( a_n sinc(ηt/π − n) + a_{−n} sinc(ηt/π + n) ) ,

using the function sinc t := sin(πt)/(πt). Since this formula is linear in f, it is enough to prove it in the special case
f(t) = e^{iωt}, |ω| < η, i.e. to show the equality

(6.5.c)   e^{iωt} = sinc(ηt/π) + ∑_{n=1}^{∞} ( e^{iπnω/η} sinc(ηt/π − n) + e^{−iπnω/η} sinc(ηt/π + n) )
for |ω| < η and all t ∈ R. Proof: We use the formulas from the theorem¹ to compute the complex Fourier coefficients
c_n of the function x(t) := e^{zt}, for a fixed z ∈ C \ (πi/η)Z, with respect to the interval (−η, η) and get

  c_n = (1/2η) ∫_{−η}^{η} e^{zt} e^{−(π/η)int} dt = (1/2η) ∫_{−η}^{η} e^{(zη−πin)t/η} dt = (1/2η) · (η/(zη−πin)) · [ e^{(zη−πin)t/η} ]_{t=−η}^{t=η}
      = (−1/(πn+izη)) · (1/2i) · ( e^{zη−πin} − e^{−zη+πin} ) = (1/(πn+izη)) · (1/2i) · ( e^{i(izη+πn)} − e^{−i(izη+πn)} ) = sin(izη+nπ)/(izη+nπ) = sinc( izη/π + n ) .

¹ Theorem For every piecewise continuous function x : [a, b] → C one has (in the sense of mean-square convergence) the
Fourier expansion x = ∑_{n∈Z} c_n exp( (2πi/ℓ) n(t−c) ) and x = a_0/2 + ∑_{n=1}^{∞} ( a_n cos( (2π/ℓ) n(t−c) ) + b_n sin( (2π/ℓ) n(t−c) ) ), ℓ := b−a,
with the Fourier coefficients a_n := (2/ℓ) ∫_a^b x(t) cos( (2π/ℓ) n(t−c) ) dt, n ∈ N, b_n := (2/ℓ) ∫_a^b x(t) sin( (2π/ℓ) n(t−c) ) dt, n ∈ N*,
and c_n := (1/ℓ) ∫_a^b x(t) exp( −(2πi/ℓ) n(t−c) ) dt, n ∈ Z.
Now the theorem² gives (with absolute and uniform convergence in t ∈ (−η, η))

  e^{zt} = ∑_{n=−∞}^{∞} sinc( izη/π + n ) e^{iπnt/η} .

Replacing first t by ω and then z by it, we obtain (using sinc(−ω) = sinc(ω)) the equality in (6.5.c). •
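The identity (6.5.c) can also be checked numerically by truncating the series; the following sketch (with arbitrary values of η, ω, t and an arbitrary truncation length, Python/numpy used only for illustration) is of course no substitute for the proof above.

# Numerical check of (6.5.c): reconstruct f(t) = e^{i*omega*t} from its samples
# f(n*pi/eta) by a truncated sinc series.  np.sinc(x) = sin(pi*x)/(pi*x), which is
# exactly the sinc used in the text.
import numpy as np

eta, omega, t = 1.0, 0.6, 2.3                    # |omega| < eta
N = 20000                                        # truncation; the series converges slowly

n = np.arange(1, N + 1)
approx = (np.sinc(eta * t / np.pi)
          + np.sum(np.exp(1j * np.pi * n * omega / eta) * np.sinc(eta * t / np.pi - n)
                   + np.exp(-1j * np.pi * n * omega / eta) * np.sinc(eta * t / np.pi + n)))

print(approx, np.exp(1j * omega * t))            # the two values should be close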
For the values a_{m+1/2} := f( (2m+1)π/2η ), m ∈ Z, of f at the midpoints between two neighbouring sample points one gets the
interpolation formula

  a_{m+1/2} = ∑_{k=0}^{∞} ( 2(−1)^k / ((2k+1)π) ) (a_{m−k} + a_{m+k+1}) = (2/π)(a_m + a_{m+1}) − (2/3π)(a_{m−1} + a_{m+2}) + (2/5π)(a_{m−2} + a_{m+3}) − ··· ,

with the weights 2/π, 2/π, −2/3π, −2/3π, 2/5π, 2/5π, …, occurring in the Leibniz series

  ∑_{k=0}^{∞} ( 2(−1)^k/((2k+1)π) + 2(−1)^k/((2k+1)π) ) = 1   (note that this series converges very slowly).
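The midpoint formula, too, can be tested numerically; in the sketch below f, η, m and the truncation length are arbitrary illustrative choices (and, as the text warns, the convergence is slow).

# Illustrative check of the midpoint formula: for f(t) = cos(omega*t) with |omega| < eta,
# approximate a_{m+1/2} = f((2m+1)*pi/(2*eta)) from the samples a_n = f(n*pi/eta)
# using the weights 2*(-1)^k / ((2k+1)*pi).
import numpy as np

eta, omega, m = 1.0, 0.6, 3
f = lambda t: np.cos(omega * t)
a = lambda n: f(n * np.pi / eta)                 # the samples a_n

K = 5000                                         # truncation of the slowly convergent series
k = np.arange(K)
weights = 2 * (-1.0) ** k / ((2 * k + 1) * np.pi)
approx = np.sum(weights * (a(m - k) + a(m + k + 1)))

print(approx, f((2 * m + 1) * np.pi / (2 * eta)))  # should agree approximately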
Starting from the premise that the human ear perceives only frequencies below 20 kHz, the result of this exercise
suggests that for digital sound recording the amplitude of a sound wave should be measured about 2ν (= 40000) times a second.
For the recording of usual audio CDs the amplitude is measured 44100 times a second (for DAT it is
48000 and for DVD-Audio 96000) and each value is digitised with 16 bit = 2 Byte (according to the Sony Red Book;
Philips uses 14 bit and oversampling), i.e. with a 0-1-sequence of length 16. Therefore, in order to record one hour
of stereo music with two channels, a disc space of

  2 · 2 · 44100 · 3600 ≈ 6.35 · 10^8 Byte ≈ 600 Megabyte

(1 Megabyte = 2^10 Kilobyte (K) = 2^20 Byte) is needed. (Actually, storage densities for audio CDs and for data CDs
differ.) For error correction and for information management even more disc space has to be provided. In order to
enhance sound reproduction, intermediate values are usually interpolated (this is called O v e r s a m p l i n g).
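The arithmetic behind the 600 Megabyte figure can be redone in a few lines (a plain back-of-the-envelope computation, not part of the exercise):

# One hour of 16-bit stereo audio at 44.1 kHz, as in the text.
channels, bytes_per_sample, rate_hz, seconds = 2, 2, 44100, 3600
total_bytes = channels * bytes_per_sample * rate_hz * seconds
print(total_bytes)                        # 635040000 bytes, i.e. about 6.35 * 10**8
print(total_bytes / 2**20)                # about 605.6 Megabyte (1 Megabyte = 2**20 Byte)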
You might enjoy the recreational application of Linear Algebra in the following Exercise 6.6. Its complete solution has
been given in GhostWhite colour so that it is invisible (in the hope that you may be tempted to find your own (different)
solution!).
² Theorem Let x : R → C be a continuous, piecewise continuously differentiable and periodic function with period
1. Then the Fourier series of x converges normally on R and, in particular, converges absolutely and uniformly to the
function x.
³ In the notion of magic squares used here we have dropped the usual condition that all entries should be
different from each other, or even be the natural numbers 1, 2, …, n², because this way we can make use of Linear
Algebra; otherwise it would be just combinatorics!
was first found in the right-hand upper corner of the famous copper-plate engraving by the German 16th-century
painter Albrecht Dürer. It is an allegorical composition entitled M e l e n c o l i a I which has been the
subject of many interpretations.⁴ This engraving portrays melancholia as the state of waiting for inspiration
to strike, and not necessarily as a depressive affliction. The image in turn inspired a passage in The City of
Dreadful Night by James Thomson (B.V.) and, a few years later, a sonnet by Edward Dowden.
⁴ One interpretation suggests the image references the depressive or melancholy state and accordingly explains
various elements of the picture. Among the most conspicuous are: (i) The tools of geometry and architecture surrounding
the figure, unused. (ii) The 4 × 4 magic square, with the two middle cells of the bottom row giving the year of the
engraving: 1514. Besides satisfying the traditional magic-square rules, its four quadrants, its corners
and its centre each sum to the same number, 34, which happens to belong to the Fibonacci sequence. (iii) The truncated
rhombohedron with a faint human skull on it. This shape is now known as Dürer's solid; over the years, there have been
numerous articles disputing the precise shape of this polyhedron. (iv) The hourglass showing time running out. (v)
The empty scale (balance). (vi) The despondent winged figure of genius. (vii) The purse and keys. (viii) The beacon
and rainbow in the sky.
Show that the magic (respectively, super-magic) n × n-matrices Mag_n(K) (respectively, Mag^s_n(K))
form a K-subspace of M_n(K) and compute the dimensions of these subspaces.
(Ans: Dim_K Mag_n(K) = (n−1)² + 1 and
  Dim_K Mag^s_n(K) =  1,      if n = 1, 2 ,
                      3,      if n = 3 ,
                      8,      if n = 4 and Char K ≠ 2 ,
                      9,      if n = 4 and Char K = 2 ,
                      n(n−2), if n ≥ 5 .
Moreover, one can easily find a K-basis of Mag_n(K).)
(Proof: For A, B ∈ Mag_n(K) (respectively, Mag^s_n(K)), it is easy to see that A + B and aA, a ∈ K, belong to
Mag_n(K) (respectively, Mag^s_n(K)) and hence both Mag_n(K) and Mag^s_n(K) are K-subspaces of M_n(K).
For an n × n-matrix A = (a_ij), let z_i := ∑_{j=1}^{n−1} a_ij be the sum of the first n−1 elements in the i-th row,
i = 1, …, n−1, let s_j := ∑_{i=1}^{n−1} a_ij be the sum of the first n−1 elements in the j-th column, j = 1, …, n−1, of A,
and further put a := ∑_{i=1}^{n−1} ∑_{j=1}^{n−1} a_ij = ∑_{i=1}^{n−1} z_i = ∑_{j=1}^{n−1} s_j. Then the matrix A is magic with row- and column-sum
s if and only if a_in = s − z_i for i = 1, …, n−1, a_nj = s − s_j for j = 1, …, n−1, and a_nn = a − (n−2)s. (The choice
of the a_in and a_nj is necessary and sufficient for the row-sums of the first n−1 rows and the column-sums of the first
n−1 columns to equal s.) The sum of the elements in the n-th row is s if and only if ∑_{j=1}^{n−1} a_nj + a_nn = ∑_{j=1}^{n−1} (s − s_j) + a_nn =
(n−1)s − a + a_nn = s, i.e. a_nn = a − (n−2)s. In this case it is automatic that the sum of the elements in the n-th
column also equals s, because ∑_{i=1}^{n−1} a_in + a_nn = ∑_{i=1}^{n−1} (s − z_i) + a − (n−2)s = (n−1)s − a + a − (n−2)s = s.
Therefore the general form of a magic matrix with row- and column-sum s is

  f(a_11, …, a_{1,n−1}, …, a_{n−1,1}, …, a_{n−1,n−1}, s) :=

    ⎛ a_11        ···  a_{1,n−1}      s − z_1      ⎞
    ⎜  ⋮           ⋱       ⋮              ⋮        ⎟
    ⎜ a_{n−1,1}   ···  a_{n−1,n−1}    s − z_{n−1}  ⎟
    ⎝ s − s_1     ···  s − s_{n−1}    a − (n−2)s   ⎠
This defines a map f : K^{(n−1)²+1} → Mag_n(K), which is clearly K-linear and, as seen above, surjective.
If f(a_11, …, a_{1,n−1}, …, a_{n−1,1}, …, a_{n−1,n−1}, s) = 0, then first all a_ij with 1 ≤ i, j ≤ n−1 are 0, hence the
z_i, the s_j and a are 0, and then also s = 0. Therefore Ker f = 0 and hence f is injective. Altogether f is an isomorphism
and hence Dim_K Mag_n(K) = (n−1)² + 1.
2
The image of the standard basis of K (n−1) +1 is clearly a basis of Magn (K). These are precisely the following
(n−1)2 magic-matrices with the upper-left (n−1) × (n−1)-submatrix which has (p, q)-th entry is 1 and all
other entries are 0, where p, q = 1, . . . n−1,
0 ··· 0 ··· 0 0
.. . . . . . .. ..
. . .. . . .
0 · · · 1 · · · 0 −1
E pq − Enq − E pn + Enn = . .
.. . . ... . . . ... ...
0 · · · 0 · · · 0 0
0 · · · −1 · · · 0 1
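The dimension count Dim_K Mag_n(K) = (n−1)² + 1 can also be checked by brute force in characteristic 0: write the conditions "all row sums equal s" and "all column sums equal s" as a homogeneous linear system in the n² entries together with s and compare n² + 1 − rank with the claimed value. A minimal sketch of such a check (floating-point rank computation in Python/numpy, purely as a plausibility check and not a proof):

# Plausibility check of Dim Mag_n = (n-1)^2 + 1 over a field of characteristic 0.
import numpy as np

def dim_mag(n):
    rows = []
    for i in range(n):                     # i-th row sum minus s = 0
        v = np.zeros(n * n + 1)
        v[i * n:(i + 1) * n] = 1
        v[-1] = -1
        rows.append(v)
    for j in range(n):                     # j-th column sum minus s = 0
        v = np.zeros(n * n + 1)
        v[j:n * n:n] = 1
        v[-1] = -1
        rows.append(v)
    return n * n + 1 - np.linalg.matrix_rank(np.array(rows))

for n in range(1, 7):
    print(n, dim_mag(n), (n - 1) ** 2 + 1)   # the last two columns should agree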
In the case n = 1 every matrix is super-magic; if n = 2, then clearly the matrices with all entries equal are
precisely the super-magic matrices. In both these cases Dim_K Mag^s_n(K) = 1.
Now, assume that n ≥ 3. Then f(a_11, …, a_{1,n−1}, …, a_{n−1,1}, …, a_{n−1,n−1}, s) is super-magic if and only if

  ∑_{i=1}^{n−1} a_ii + a − (n−2)s = s   and   (s − z_1) + ∑_{i=2}^{n−1} a_{i,n−i+1} + (s − s_1) = s ,

i.e. if and only if (a_11, …, a_{1,n−1}, …, a_{n−1,1}, …, a_{n−1,n−1}, s) is a solution of the system of linear equations

  ∑_{i=1}^{n−1} a_ii + ∑_{i=1}^{n−1} ∑_{j=1}^{n−1} a_ij − (n−1)s = 0 ,   s − ∑_{j=1}^{n−1} a_1j − ∑_{i=1}^{n−1} a_i1 + ∑_{i=2}^{n−1} a_{i,n−i+1} = 0 .
The solution space of this system of linear equations has (see Theorem 5.E.6 in the Lecture Notes)
dimension (n−1)² + 1 − r, where r is the rank of the homogeneous system, i.e. r is
the dimension of Im g, where g : K^{(n−1)²+1} → K² is defined by

  g(a_11, …, a_{1,n−1}, …, a_{n−1,1}, …, a_{n−1,n−1}, s) := ( ∑_{i=1}^{n−1} a_ii + ∑_{i=1}^{n−1} ∑_{j=1}^{n−1} a_ij − (n−1)s ,  s − ∑_{j=1}^{n−1} a_1j − ∑_{i=1}^{n−1} a_i1 + ∑_{i=2}^{n−1} a_{i,n−i+1} ) .
If we put all a_ij = 0 and s = 1, then g(0, …, 0, 1) = (1−n, 1) ≠ 0, i.e. Dim_K Im g ≥ 1. If we put s = 0
and all a_ij = 0 except a_12 = −1, then g(0, −1, 0, …, 0, 0) = (−1, 1). In the case n = 3 both (−2, 1) and
(−1, 1) are contained in Im g, and since −2 ≠ −1, they are linearly independent, i.e. r = Dim_K Im g = 2 and
Dim_K Mag^s_n(K) = (n−1)² + 1 − 2 = 5 − 2 = 3.
In the case n = 4 and Char K ≠ 2, both (−3, 1) and (−1, 1) belong to Im g, and since −3 ≠ −1, they are
linearly independent, i.e. r = Dim_K Im g = 2 and Dim_K Mag^s_n(K) = (n−1)² + 1 − 2 = 10 − 2 = 8.
In the case n = 4 and Char K = 2, since 2 = 0 we have −a_ij = a_ij and −3s = s, so

  g(a_11, …, a_33, s) = ( 2(a_11 + a_22 + a_33) + a_12 + a_13 + a_21 + a_23 + a_31 + a_32 − 3s ,  s − 2a_11 − a_12 − a_13 − a_21 − a_31 + a_23 + a_32 )
                      = (s + a_12 + a_13 + a_21 + a_31 + a_23 + a_32) · (1, 1) ,

i.e. Im g is only 1-dimensional and hence Dim_K Mag^s_n(K) = (n−1)² + 1 − 1 = 10 − 1 = 9.
Now assume that n ≥ 5. Then the position (2, 3) lies neither in the first row or column nor among the anti-diagonal
positions (i, n−i+1), i = 2, …, n−1, since the anti-diagonal entry in the second row sits in column n−2+1 ≥ 4. If we put
s = 0 and all a_ij = 0 except a_23 = 1, then the value of g is (1, 0). Since
g(0, …, 0, 1) = (1−n, 1) ≠ 0 and (1, 0) are linearly independent, we have r = 2, and hence Dim_K Mag^s_n(K) =
(n−1)² + 1 − 2 = n(n−2) . •
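The same kind of rank computation, now with the two diagonal conditions added, serves as a plausibility check for the super-magic dimensions in characteristic 0 (the case Char K = 2 would of course require exact arithmetic over F₂ instead of floating point; this sketch is not part of the exercise):

# Plausibility check of Dim Mag^s_n over a field of characteristic 0.
import numpy as np

def dim_super_magic(n):
    rows = []
    for i in range(n):                          # row sums = s
        v = np.zeros(n * n + 1); v[i * n:(i + 1) * n] = 1; v[-1] = -1; rows.append(v)
    for j in range(n):                          # column sums = s
        v = np.zeros(n * n + 1); v[j:n * n:n] = 1; v[-1] = -1; rows.append(v)
    v = np.zeros(n * n + 1)                     # main diagonal sum = s
    for i in range(n):
        v[i * n + i] = 1
    v[-1] = -1; rows.append(v)
    v = np.zeros(n * n + 1)                     # anti-diagonal sum = s
    for i in range(n):
        v[i * n + (n - 1 - i)] = 1
    v[-1] = -1; rows.append(v)
    return n * n + 1 - np.linalg.matrix_rank(np.array(rows))

expected = {1: 1, 2: 1, 3: 3, 4: 8, 5: 15, 6: 24}   # n(n-2) for n >= 5, characteristic 0
for n in range(1, 7):
    print(n, dim_super_magic(n), expected[n])       # the last two columns should agree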
Auxiliary Results/Test-Exercises
T6.1 Determine whether there are R-linear maps with the following properties:
(a) f : R3 → R2 ; f (1, 0, 1) = (1, 0) , f (0, 1, 0) = (0, 1) .
(b) f : R2 → R3 ; f (1, 2) = (1, 0, 0) , f (2, 1) = (0, 1, 0) , f (−1, 4) = (0, 0, 1) .
(c) f : R2 → R3 ; f (1, 2) = (1, 0, 0) , f (2, 1) = (0, 1, 0) , f (−1, 4) = (3, −2, 0) .
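For parts (b) and (c) the question is whether the prescribed values are consistent with linearity; a short sketch of that consistency check (Python/numpy used only as a calculator, not part of the exercise):

# Sketch for T6.1(b)/(c): express (-1, 4) in terms of (1, 2) and (2, 1) and see
# which image is forced by linearity.
import numpy as np

A = np.array([[1, 2], [2, 1]], dtype=float)      # columns are (1, 2) and (2, 1)
coeffs = np.linalg.solve(A, np.array([-1, 4], dtype=float))
print(coeffs)                                    # [3, -2]: (-1, 4) = 3*(1, 2) - 2*(2, 1)

forced = coeffs[0] * np.array([1, 0, 0]) + coeffs[1] * np.array([0, 1, 0])
print(forced)                                    # [3, -2, 0]: so (c) is consistent, (b) is not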
T6.2 Show that the vectors x_1 := (1, 2, 1), x_2 := (2, 1, 1), x_3 := (1, 1, 1) form a basis of R³. For the linear
map f : R³ → R³ defined by f(x_1) := (1, 0, 3), f(x_2) := (−1, 3, 2), f(x_3) := (0, −1, 1), write down the images of
the vectors (2, −1, 5), (−1, 0, 1), (0, 1, 0) and further compute a basis of Ker f and of Im f.
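A computational sketch of T6.2 (exact arithmetic with sympy; the only real work is the change of basis from x_1, x_2, x_3 to the standard basis, after which kernel and image can be read off):

# Sketch for T6.2: B has x1, x2, x3 as columns, C the prescribed images f(x_i);
# then the matrix of f with respect to the standard basis is A = C * B^{-1}.
import sympy as sp

B = sp.Matrix([[1, 2, 1], [2, 1, 1], [1, 1, 1]]).T     # columns x1, x2, x3
C = sp.Matrix([[1, 0, 3], [-1, 3, 2], [0, -1, 1]]).T   # columns f(x1), f(x2), f(x3)
print(B.det())                                         # nonzero, so x1, x2, x3 form a basis

A = C * B.inv()
for v in [(2, -1, 5), (-1, 0, 1), (0, 1, 0)]:
    print(v, '->', list(A * sp.Matrix(v)))
print('Ker f basis:', A.nullspace())
print('Im f basis:', A.columnspace())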
T6.3 Let V be a finite dimensional K-vector space and let U,W be subspaces of V of equal dimension. Then
there exists a K-automorphism f of V such that f (U) = W .
T6.4 Let V be a finite dimensional K-vector space and let f : V → V be an endomorphism of V. Show that
the following statements are equivalent:
(i) f is not an automorphism of V.
(ii) There exists a K-endomorphism g ≠ 0 of V such that g ◦ f = 0.
(ii′) There exists a K-endomorphism g′ ≠ id_V of V such that g′ ◦ f = f.
(iii) There exists a K-endomorphism h ≠ 0 of V such that f ◦ h = 0.
(Hint: We may assume that g is surjective and apply the inequality of Sylvester; see Test-Exercise T6.16.)
T6.18 Let f : V → W be a homomorphism of K-vector spaces. Show that Ker f is finite dimensional if and
only if there exist a homomorphism of K-vector spaces g : W → V and an operator h : V → V on V such that
Rank h is finite and g ◦ f = h + id_V.
T6.19 Let f be an operator on the finite dimensional K-vector space V. Show that the following statements
are equivalent: (i) Rank f = Rank f². (i′) Im f = Im f². (ii) Dim_K Ker f = Dim_K Ker f².
(ii′) Ker f = Ker f². (iii) Im f ∩ Ker f = 0. (iv) Im f + Ker f = V.
(Remark : (iii) and (iv) together mean that V is the direct sum of Im f and Ker f .)
T6.20 Let f1 , . . . , fr ∈ HomK (V,W ) be K-vector space homomorphisms of finite rank. For arbitrary
a1 , . . . , ar ∈ K, show that the rank of a1 f1 + · · · + ar fr is finite and
Rank (a1 f1 + · · · + ar fr ) ≤ Rank f1 + · · · + Rank fr .
(Remark: From this Test-Exercise it follows that the subset A := { f ∈ End_K V | Rank f is finite } is a two-sided ideal
in the K-algebra End_K V.)
T6.21 Let f : U → V and g : V → W be homomorphisms of K-vector spaces. If one of these homomor-
phisms has finite rank, then the composition g ◦ f also has finite rank. If f is surjective (respectively, g is
injective), then Rank(g ◦ f) = Rank g (respectively, Rank(g ◦ f) = Rank f).
T6.22 Let f :V → W be a homomorphism of K-vector spaces and let ui , i ∈ I, be a basis of Ker f . Then for
a family v j , j ∈ J, of vectors of V , the family f (v j ) , j ∈ J, of the image vectors is a basis of Im f if and only
if the families ui , i ∈ I ; v j , j ∈ J, together form a basis of V . (Hint : Look at the proof of the Rank-Theorem.)
T6.23 Let V and W be finite dimensional K-vector spaces and let V′, W′ be subspaces of V and W respectively.
Show that there exists a K-homomorphism f : V → W with Ker f = V′ and Im f = W′ if and only if Dim_K V′ +
Dim_K W′ = Dim_K V.
T6.24 Let a_ij ∈ K, i = 1, …, m, j = 1, …, n. Then the linear system of equations
  a_11 x_1 + ··· + a_1n x_n = b_1
  ·····················
  a_m1 x_1 + ··· + a_mn x_n = b_m
over K has a solution in K^n for every (b_1, …, b_m) ∈ K^m if and only if its rank is m. Moreover, in this case the
solution space is an affine subspace of dimension n − m.
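A small numerical illustration of T6.24 (the matrix and right-hand side below are arbitrary choices; Python/numpy used only for illustration):

# A 2x4 system of rank 2 (= m) is solvable for every right-hand side, and its
# solution set is an affine subspace of dimension n - m = 2.
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 3.]])                 # an example with rank m = 2
print(np.linalg.matrix_rank(A))                  # 2 = m

b = np.array([5., -1.])                          # an arbitrary right-hand side
x0 = np.linalg.lstsq(A, b, rcond=None)[0]        # one particular solution
print(np.allclose(A @ x0, b))                    # True: the system is consistent
print(A.shape[1] - np.linalg.matrix_rank(A))     # 2 = dimension of the solution space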
T6.25 Let s, n ∈ N, s ≤ n. Then every affine subspace of K n of dimension s is a solution space of a linear
system of equations of rank n − s in n unknowns and n − s equations.
GOOD LUCK
for the first Midterm-Test on September 17, 2011