Lecture Notes W2
We can generalize the notion of a vector space R^N, where the vector elements are
real numbers in R, to C^N, where they are complex numbers. One important difference
is that the dual vector is defined with complex conjugation. To remind, a complex
number can be written in terms of its real and imaginary parts, z = x + iy, and its
complex conjugate is defined by z* = x − iy. The inner product in C^N is then
defined with conjugation of the dual vector,

⟨v|u⟩ = Σ_{i=1}^N v_i* u_i,

so that

⟨v|u⟩ = ⟨u|v⟩*.
The squared norm of a complex vector is the sum of the squared moduli of its
elements,

⟨v|v⟩ = Σ_{i=1}^N |v_i|² = Σ_{i=1}^N [(Re v_i)² + (Im v_i)²].    (4)
We know that real symmetric matrices, where Aᵀ = A, are special - they have a
basis of eigenvectors and other nice properties. Similarly, self-adjoint complex
matrices, where A† = A, have a basis of eigenvectors, and moreover they have
real eigenvalues. We will return to these important operators later.
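As a quick numerical illustration of this claim (my own sketch with NumPy, not part of the notes; the matrix is randomly generated), one can build a self-adjoint matrix and verify that its eigenvalues are real and its eigenvectors form an orthonormal basis:

```python
import numpy as np

# My own numerical sketch (not from the notes): build a self-adjoint
# matrix A = B + B† and check the two claims above.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B + B.conj().T                    # A† = A by construction

w, V = np.linalg.eigh(A)              # eigh assumes a Hermitian matrix
assert np.allclose(w.imag, 0)                    # eigenvalues are real
assert np.allclose(V.conj().T @ V, np.eye(4))    # orthonormal eigenbasis
assert np.allclose(A @ V, V @ np.diag(w))        # A v_i = w_i v_i
```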
More abstract linear spaces
Imagine an abstract linear space V over a field Φ, where addition and multiplication
by a scalar from the field are defined. Using these operations we can form linear
combinations of the abstract elements. Linear independence follows from the
generalized concept of a linear combination: a set of elements is linearly
independent if no linear combination with coefficients that are not all zero
equals zero.
Example 1: Space of polynomials of degree at most N on [−1, 1]. A general element
has the form

a₀ + a₁x + a₂x² + . . . + a_N x^N.

Addition is defined by adding the polynomials - that still remains in the space,
and the same for multiplication by a scalar. In this linear space |0⟩ is the zero
polynomial. The following set is a basis:

{1, x, x², . . . , x^N},

so the space has dimension N + 1 and is equivalent to R^{N+1}, the space of
coefficient vectors.
Example 2: Space of all polynomials on [−1, 1], of arbitrary finite order. The operations
(addition, multiplication by a scalar) are the same as before; however, this
space is no longer equivalent to R^N because of the variable length of coefficient
vectors. The set {1, x, x², x³, ...} is an independent set, showing that we can find an
infinite set of independent elements! It is a basis of the space since any polynomial
can be created by a linear combination of its elements. Therefore, the dimensionality
of this space is infinite (countable).
Example 3: Space of all matrices of size n × n with real entries. Their addition is
defined elementwise; multiplication by a scalar multiplies all elements.
Example 4: Space of all real functions f(x) on [0, 1]. Each element is identified
with a function,

|f⟩ = f(x).
Example 5: The space of solutions of the differential equation y'' + y = 0 on the
real line. This is a linear space, since for any two solutions y1 (x), y2 (x), their linear
combination a1 y1 (x) + a2 y2 (x) is also a solution. The elements in this space are
also functions, as in the previous example; but it is more constrained since it only
contains solutions.
Consider the two solutions y₁(x) = cos x and y₂(x) = sin x: are they linearly
independent? Let us try to construct a zero linear combination,

a₁ cos x + a₂ sin x = 0 for all x.

Setting x = 0 forces a₁ = 0, and setting x = π/2 forces a₂ = 0. Therefore, these
are two linearly independent solutions.
Any solution of the differential equation can be written as a combination of
these two; therefore they span the entire space, which means it has dimension 2.
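This can be checked symbolically (my own SymPy sketch, not part of the notes; the Wronskian test is an alternative independence check to the argument above):

```python
import sympy as sp

# My own SymPy check (not from the notes): cos x and sin x, and any
# combination a1*cos x + a2*sin x, solve y'' + y = 0.
x, a1, a2 = sp.symbols('x a1 a2')
for y in (sp.cos(x), sp.sin(x), a1 * sp.cos(x) + a2 * sp.sin(x)):
    assert sp.simplify(sp.diff(y, x, 2) + y) == 0

# The Wronskian of (cos x, sin x) is identically 1, so the pair is
# linearly independent.
W = sp.Matrix([[sp.cos(x), sp.sin(x)],
               [-sp.sin(x), sp.cos(x)]]).det()
assert sp.simplify(W) == 1
```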
Inner Product
In a generalized linear space an inner product can be defined that obeys the four
properties (see last class notes). As in R^N, the inner product is not unique; in most
cases many alternative definitions are legitimate. A linear space with a particular
inner product is called an "inner product space", and is identified by specifying its
elements and its operations - addition, multiplication by a scalar, and the inner
product.
Example 1: In the space of real functions f(x), x ∈ [a, b], the following inner
product can be defined:

⟨f|g⟩ = ∫_a^b f(x)g(x) dx.
Checking for the inner product properties:
• ⟨f|g⟩ = ∫_a^b f(x)g(x) dx = ∫_a^b g(x)f(x) dx = ⟨g|f⟩
• ⟨f₁ + f₂|g⟩ = ∫_a^b [f₁(x) + f₂(x)]g(x) dx = ⟨f₁|g⟩ + ⟨f₂|g⟩
• ⟨cf|g⟩ = ∫_a^b cf(x)g(x) dx = c ∫_a^b f(x)g(x) dx = c⟨f|g⟩
• ⟨f|f⟩ = ∫_a^b f(x)² dx ≥ 0, with
  ⟨f|f⟩ = 0 ↔ f(x) ≡ 0, i.e. |f⟩ = |0⟩.
Is this really the case? Suppose a function f(x) is defined to be zero everywhere
except at one point, where it is 7. Its integral is still 0! We find ourselves forced
to confine the discussion to "well-behaved", or smooth enough, functions. For our
purposes we will most often talk about the space of piecewise continuous functions.
These are functions that are continuous everywhere except possibly at a
finite number of points.
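The four properties above can be verified symbolically for sample functions (my own SymPy check; the interval [0, 1] and the choices f = x², g = 1 − x, c = 3/2 are arbitrary and not from the notes):

```python
import sympy as sp

# My own SymPy check of the inner product properties for sample
# functions; the interval and functions are arbitrary choices.
x = sp.symbols('x')

def inner(p, q):
    return sp.integrate(p * q, (x, 0, 1))   # <p|q> = ∫ p(x) q(x) dx

f, g, c = x**2, 1 - x, sp.Rational(3, 2)
assert inner(f, g) == inner(g, f)                     # symmetry
assert inner(f + g, g) == inner(f, g) + inner(g, g)   # additivity
assert inner(c * f, g) == c * inner(f, g)             # homogeneity
assert inner(f, f) >= 0                               # positivity
```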
Example 2: In the same space of functions as in the previous example, we can
define

⟨f|g⟩ = ∫_a^b f(x)g(x)w(x) dx,

where w(x) > 0 is a weight function. Similar to the weighted inner product we
defined in R², here too we modify the inner product by counting different points
x with different weights. You may easily convince yourself that all inner product
properties are obeyed.
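As a sketch (the weight w(x) = 1 + x > 0 and the interval [0, 1] are my own arbitrary choices, since the notes leave w general):

```python
import sympy as sp

# Weighted inner product with an assumed weight w(x) = 1 + x > 0 on
# [0, 1]; this particular w is my choice, not from the notes.
x = sp.symbols('x')
w = 1 + x

def winner(p, q):
    return sp.integrate(p * q * w, (x, 0, 1))   # <p|q> = ∫ p q w dx

assert winner(x, x**2) == sp.Rational(9, 20)    # = 1/4 + 1/5
assert winner(x, x**2) == winner(x**2, x)       # still symmetric
assert winner(x, x) > 0                         # still positive
```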
In a complex linear space, the inner product properties read:

• ⟨v|u⟩ = ⟨u|v⟩*
• ⟨v + w|u⟩ = ⟨v|u⟩ + ⟨w|u⟩
• ⟨av|u⟩ = a*⟨v|u⟩,

where we note that in the third property, the notation ⟨av| stands for the dual
element to |av⟩ and therefore includes conjugation of a. For example, if |v⟩ =
(1, 2)ᵀ and a = 2 + i, then |av⟩ = (2 + i, 4 + 2i)ᵀ and ⟨av| = (2 − i, 4 − 2i) = a*(1, 2).
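The 2-component example above can be reproduced numerically (my own NumPy sketch):

```python
import numpy as np

# The 2-component example from the text: |v> = (1, 2), a = 2 + i.
v = np.array([1, 2], dtype=complex)
a = 2 + 1j
av = a * v                         # |av> = (2+i, 4+2i)
dual = av.conj()                   # <av|: conjugation included

assert np.allclose(av, [2 + 1j, 4 + 2j])
assert np.allclose(dual, [2 - 1j, 4 - 2j])
assert np.allclose(dual, np.conj(a) * v)   # <av| = a* <v|
```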
Computing Projections
Since inner products and norms are defined most generally, the formula for
projection holds true in any linear space:

|u⟩ = |u_∥⟩ + |u_⊥⟩ = (|v⟩⟨v| / ⟨v|v⟩) |u⟩ + |u_⊥⟩.

This is because we derived this formula without actually using a specific form of
the inner product, only the general notation ⟨v|u⟩. You may go back and check
in lecture notes 1, and appreciate the strength of this general notation, made
without reference to the details of any particular product. We now illustrate how
it works in a few examples.
Example 1: In the space of functions on [0, 1], decompose the linear space element
|g⟩ = 1 − x into its parallel and perpendicular components with respect to |f⟩ = x².

⟨f|g⟩ = ∫₀¹ x²(1 − x) dx = ∫₀¹ x² dx − ∫₀¹ x³ dx = [x³/3 − x⁴/4]₀¹ = 1/3 − 1/4 = 1/12

⟨f|f⟩ = ∫₀¹ x⁴ dx = [x⁵/5]₀¹ = 1/5

|g⟩ = |g_∥⟩ + |g_⊥⟩ = λ|f⟩ + |g_⊥⟩

λ = ⟨f|g⟩/⟨f|f⟩ = (1/12)/(1/5) = 5/12  →  |g_∥⟩ = (5/12) x²

|g_⊥⟩ = |g⟩ − |g_∥⟩ = 1 − x − (5/12) x².
You may check that the two parts add up to the whole function |gi.
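The whole decomposition can be verified symbolically (my own SymPy check of the numbers above, not code from the notes):

```python
import sympy as sp

# My own SymPy verification of the projection computed above.
x = sp.symbols('x')
f, g = x**2, 1 - x

def inner(p, q):
    return sp.integrate(p * q, (x, 0, 1))   # <p|q> on [0, 1]

assert inner(f, g) == sp.Rational(1, 12)
assert inner(f, f) == sp.Rational(1, 5)

lam = inner(f, g) / inner(f, f)
assert lam == sp.Rational(5, 12)

g_perp = g - lam * f
assert sp.simplify(inner(f, g_perp)) == 0        # perpendicular part ⊥ f
assert sp.simplify(lam * f + g_perp - g) == 0    # the parts add up to g
```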
Example 2: In the space of real functions on [−π, π] with the integral inner product,
compute the product of |f⟩ = 1 with |g⟩ = sin x.
It is useful to note that the integral of a function which is antisymmetric around
some point, over an interval which is symmetric around this point, is always zero.
For example, if g(−x) = −g(x) the function is antisymmetric (odd) around zero.
Then

∫_{−π}^{π} g(x) dx = 0,

as is its integral over any symmetric region: ∫_{−A}^{A} g(x) dx = 0. All the area
you gain by integrating over the negative part of the interval is exactly cancelled
by its negative in the positive part. Since sin x is odd,

⟨f|g⟩ = ∫_{−π}^{π} sin x dx = 0,

so the two elements are orthogonal.
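A quick symbolic confirmation (my own, with SymPy) that the odd integrand vanishes over symmetric intervals:

```python
import sympy as sp

# My own check that sin x integrates to zero over symmetric intervals;
# A > 0 is a free symbolic endpoint.
x = sp.symbols('x')
A = sp.symbols('A', positive=True)

assert sp.integrate(sp.sin(x), (x, -sp.pi, sp.pi)) == 0
assert sp.simplify(sp.integrate(sp.sin(x), (x, -A, A))) == 0
```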
Example 3: In the space of real functions on [0, π], decompose |g⟩ = sin x into its
projection and orthogonal parts with respect to |f⟩ = 1.
This example looks very much like the previous one, but it is very different! This
is because the space itself is different: the functions are defined on a different
interval, one that is not symmetric around zero.
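Carrying out the decomposition (my own SymPy sketch; the computation itself is not spelled out at this point): on [0, π], ⟨f|g⟩ = 2 ≠ 0, so sin x has a nonzero projection on the constant function.

```python
import sympy as sp

# My own SymPy sketch of the decomposition on [0, pi].
x = sp.symbols('x')
f, g = sp.Integer(1), sp.sin(x)

def inner(p, q):
    return sp.integrate(p * q, (x, 0, sp.pi))   # <p|q> on [0, pi]

assert inner(f, g) == 2          # nonzero: on [0, pi] sin x is NOT ⊥ 1
lam = inner(f, g) / inner(f, f)  # = 2/pi
g_perp = g - lam * f             # sin x - 2/pi
assert sp.simplify(inner(f, g_perp)) == 0
```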
Example 4: In the same space as the previous example, check the orthonormality
of the following pair of elements:

Example 5: In the space of real polynomials on [−1, 1] with the integral inner
product, turn the independent set {1, x, x²} into an orthonormal set by repeated
projection subtraction and normalization (the Gram-Schmidt procedure).
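A sketch of the Gram-Schmidt steps named above in SymPy (my own implementation, not code from the notes):

```python
import sympy as sp

# My own SymPy implementation of Gram-Schmidt on {1, x, x^2}.
x = sp.symbols('x')

def inner(p, q):
    return sp.integrate(p * q, (x, -1, 1))      # <p|q> on [-1, 1]

basis = []
for v in (sp.Integer(1), x, x**2):
    for e in basis:                              # subtract projections on
        v = v - inner(e, v) * e                  # the elements built so far
    v = sp.simplify(v / sp.sqrt(inner(v, v)))    # normalize
    basis.append(v)

# check pairwise orthonormality
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        assert sp.simplify(inner(ei, ej) - (1 if i == j else 0)) == 0
```

Up to normalization, the three results are proportional to the Legendre polynomials 1, x, and x² − 1/3.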
Cauchy-Schwarz inequality

In the familiar vector space R^N, the standard inner product has a geometric
interpretation:

⟨v|u⟩ = ||v|| ||u|| cos θ,

where θ is the angle between the two vectors. (In R² this is easy to show by
writing the components v₁ = |v| cos θ_v, v₂ = |v| sin θ_v, where θ_v is the angle
the vector v makes with the x-axis, and similarly for u, and multiplying the
components explicitly.) Since |cos θ| ≤ 1, it follows that for any two vectors

|⟨v|u⟩| ≤ ||v|| ||u||.
Example: The following formula is true for any two functions f(x), g(x) on the
interval [0, 1]:

(∫₀¹ f(x)g(x) dx)² ≤ ∫₀¹ f(x)² dx · ∫₀¹ g(x)² dx.
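A spot check of this inequality (my own SymPy sketch; the pair f = eˣ, g = x is an arbitrary choice, not from the notes):

```python
import sympy as sp

# My own spot check of the Cauchy-Schwarz inequality on [0, 1]
# for the arbitrary pair f = exp(x), g = x.
x = sp.symbols('x')
f, g = sp.exp(x), x

lhs = sp.integrate(f * g, (x, 0, 1))**2                              # = 1
rhs = sp.integrate(f**2, (x, 0, 1)) * sp.integrate(g**2, (x, 0, 1))
assert float(lhs) < float(rhs)
```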
Linear Operators
In generalized linear spaces, many linear operators can be defined. They cannot
always be written as matrices; for example, in linear spaces of infinite dimensionality
there are infinitely many basis elements, and therefore no representation as
a finite matrix. We will give a few examples of operators in linear function spaces.
Example 1: in the space V of polynomials with real coefficients on the real line,
x ∈ (−∞, ∞), we define the derivative operator, D1 : V → V , as
D₁[p] = D₁p(x) = dp/dx.
This is an operator in the space, because the derivative of a polynomial is also a
polynomial, and therefore the image remains in the space. Its linear properties
stem from the linearity of the derivative itself:
• D₁(|p⟩ + |q⟩) = D₁|p⟩ + D₁|q⟩
• D₁(a|p⟩) = a D₁|p⟩
• D₁|0⟩ = |0⟩
The kernel of this operator is all constant polynomials - i.e., polynomials of degree
0. It is a singular operator that cannot be inverted (recall that in calculus, when
"inverting" the derivative by integration, there is always an unspecified constant
that can be added! This shows that there is no well-defined operator that inverts
the derivative).
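The non-invertibility can be seen concretely (my own SymPy sketch): two polynomials differing by a constant have the same derivative, so the map cannot be undone.

```python
import sympy as sp

# My own SymPy sketch: polynomials that differ by a constant have the
# same derivative, so D1 is singular; constants are in its kernel.
x = sp.symbols('x')
p = x**2 + 3*x
q = x**2 + 3*x + 7

assert sp.diff(p, x) == sp.diff(q, x)        # same image, different elements
assert sp.diff(sp.Integer(7), x) == 0        # a constant maps to zero
```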
In this space, the derivative operator has no eigenfunctions with a nonzero
eigenvalue; the only eigenvectors are the constant polynomials, with eigenvalue 0.
Whenever you differentiate a nonconstant polynomial, you get a polynomial of a
lower degree, which is linearly independent of the original polynomial and
therefore can never be a nonzero scalar multiple of it.
If instead we let the derivative act on the space of all differentiable functions,
the exponentials f_a(x) = e^{ax} satisfy D₁f_a = af_a; there the operator has
infinitely many eigenvectors - a continuum labeled by a.
The second derivative operator is defined similarly,

D₂|f⟩ = D₂f(x) = d²f/dx².
It is easy to see that this is a linear operator that obeys all requirements - actually,
derivatives of all orders are. All functions of the form f(x) = e^{kx} are
eigenvectors of D₂, with eigenvalue k².
You may get some intuition for this condition by recalling that the matrix element
A_ij was found by applying the operator to one basis element |e_j⟩ and projecting
the result on |e_i⟩. The properties of the adjoint operator are very similar to those
of a complex conjugate: