Chapter 3
Eigenvalue Problems
Aleksandra Kostić
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/62267
Abstract
Differential equations and systems of differential equations are used throughout the natural sciences and engineering, and their solution leads to eigenvalue problems. Because of that, the eigenvalue problem occupies an important place in linear algebra. In this chapter we consider the eigenvalue problem, both the linear and the quadratic eigenvalue problem. In studying the linear eigenvalue problem, we put emphasis on the QR algorithm for the unsymmetric case and on the minmax characterization of the symmetric case. In studying the quadratic eigenvalue problem, we consider linearization and variational characterization. We illustrate everything with practical examples.
Keywords: QR algorithm, minmax principle, Rayleigh functional, linearization, eigenvalues
1. Introduction
Every mechanical system has the property of vibration. Analog phenomenon can be observed
in the electrical systems in the form of oscillation circuits. Neglecting terms of vibration can lead
to resonance, which on one hand can have catastrophic consequences as the demolition of the
bridge, on the other hand can be used positively. Mathematical description of vibratory condition
leads to a differential equation or a system of differential equations. This problem is further
transformed to the eigenvalue problem. This is the motivation for considering the eigenvalue
problems.
This chapter is organized as follows: The Linear eigenvalue problem and the quadratic
eigenvalue problem.
© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons
Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
2. The linear eigenvalue problem
This section considers the linear eigenvalue problem of finding a parameter λ such that the linear system

$$Ax = \lambda x \tag{1}$$

has a nontrivial solution x, where A ∈ ℂ(n,n). The scalar λ is called an eigenvalue of A, and x is an eigenvector of A corresponding to λ. The set of all eigenvalues of the matrix A is called the spectrum of A and is denoted σ(A).
The literature distinguishes right and left eigenvectors. In our deliberations we restrict ourselves to the right eigenvectors defined above.
This section is organized as follows:
1. Basic properties (characteristic polynomial, bases for eigenspaces, eigenvalues and invertibility, diagonalization)
2. QR algorithm (the QR algorithm is used for determining all the eigenvalues of a matrix; today it is the best method for solving unsymmetric eigenvalue problems)
3. Mathematical background for the Hermitian (symmetric) case (Rayleigh quotient, minmax principle of Poincaré, minmax principle of Courant–Fischer)
4. Physical background
5. General linear eigenvalue problem

$$Ax = \lambda Bx \tag{2}$$
2.1. Basic properties
In this section we outline the basic concepts and theorems that underpin the later discussion. The eigenvalue problem is related to a homogeneous system of linear equations, as we will see in the following.
To find the eigenvalues of an n × n matrix A we rewrite (1) as

$$Ax = \lambda I x \tag{3}$$

by inserting an identity matrix I, or equivalently

$$(A - \lambda I)x = 0. \tag{4}$$
This is a homogeneous system, and system (3) has a nontrivial solution if and only if

$$\det(A - \lambda I) = 0. \tag{5}$$

This is called the characteristic equation of A; the scalars satisfying this equation are the eigenvalues of A. When expanded, the determinant det(A − λI) is a polynomial p in λ, called the characteristic polynomial of A.
The following theorem gives the link between the characteristic polynomial of the matrix A
and its eigenvalues.
Theorem 2.1.1. (Equivalent statements)
If A is an n × n matrix and λ is a complex number, then the following are equivalent:
a. λ is an eigenvalue of A.
b. The system of equations (A − λI)x = 0 has nontrivial solutions.
c. There is a nonzero vector x in ℂn such that Ax = λx.
d. λ is a solution of the characteristic equation det(A − λI) = 0.
Some coefficients of the characteristic polynomial of A have a specific form. The following theorem gives this information.
Theorem 2.1.2.
If A is an n × n matrix, then the characteristic polynomial p(λ) of A has degree n, the coefficient of λ^n is (−1)^n, the coefficient of λ^(n−1) is (−1)^(n−1) trace(A), and the constant term is det(A), where trace(A) := a11 + a22 + ⋯ + ann.
For some structured matrices the eigenvalues can be read off directly, as shown in Theorem 2.1.3.
Theorem 2.1.3.
If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
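As a quick numerical illustration of Theorems 2.1.2 and 2.1.3, the following sketch (not part of the original text; the matrix is an arbitrary example) checks the coefficient identities and reads the eigenvalues of a triangular matrix off its diagonal. Note that numpy's np.poly returns the coefficients of the monic polynomial det(λI − A), which differs from det(A − λI) by the factor (−1)^n.

```python
import numpy as np

# Arbitrary upper triangular example matrix.
A = np.array([[2.0, 1.0, 0.5],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

coeffs = np.poly(A)                    # det(lambda*I - A): [1., -10., 31., -30.]
print(-coeffs[1], np.trace(A))         # lambda^2 coefficient vs trace(A): 10.0, 10.0
print(coeffs[-1], (-1)**3 * np.linalg.det(A))  # constant term vs (-1)^n det(A): -30.0

# Theorem 2.1.3: eigenvalues of a triangular matrix are its diagonal entries.
print(np.linalg.eigvals(A))            # [2., 3., 5.]
```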
The Cayley–Hamilton theorem is one of the most important statements in linear algebra. The theorem states:
Theorem 2.1.4.
Substituting the matrix A for λ in the characteristic polynomial of A yields the zero matrix, i.e., p(A) = 0.
There are a number of methods for determining eigenvalues. Some methods find all the eigenvalues, others just a few. Methods based on first determining the coefficients of the characteristic polynomial and later determining the eigenvalues by solving the resulting algebraic equation are rarely implemented, because they are numerically unstable: the computed coefficients of the characteristic polynomial are burdened with rounding errors, and the resulting numerical instability causes large errors in the eigenvalues. Because of that, the characteristic polynomial has mainly theoretical significance. Methods based on the direct application of the characteristic polynomial are applied in practice only when the characteristic polynomial is well conditioned. For some structured matrices we can also apply characteristic-polynomial methods without calculating the polynomial coefficients directly. The following example describes a class of such matrices.
Example 2.1.1. An example of structured matrices for which the characteristic polynomial is used to determine the eigenvalues are Toeplitz matrices. Toeplitz matrices, denoted Tn, are matrices with constant diagonals. If the Toeplitz matrix is symmetric and positive definite, the recursive relation pn(λ) = pn−1(λ)βn−1(λ) holds, where pn and pn−1 are the characteristic polynomials of Tn and Tn−1, respectively, and βn−1 is the Schur–Szegő parameter of the Yule–Walker system. This recursion makes it possible to work with the characteristic polynomial without computing its individual coefficients. More information can be found in [1].
The following definitions introduce two important terms: the geometric multiplicity and the algebraic multiplicity of an eigenvalue λ0.
The eigenvectors corresponding to λ are the nonzero vectors in the solution space of (A − λI)x = 0. We call this solution space the eigenspace of A corresponding to λ.
Definition 2.1.1.
If λ0 is an eigenvalue of an n × n matrix A, then the dimension of the eigenspace corresponding to λ0 is called the geometric multiplicity of λ0, and the number of times that λ − λ0 appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0.
Eigenvalues and eigenvectors have some specific features that are easy to prove. The following theorem lists some obvious properties of an eigenvalue λ of a matrix A and a corresponding eigenvector x:
Theorem 2.1.5.
a. If μ ≠ 0 is a complex number, λ is an eigenvalue of the matrix A, and x ≠ 0 a corresponding eigenvector, then μx is also a corresponding eigenvector.
b. If k is a positive integer, λ is an eigenvalue of the matrix A, and x ≠ 0 a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
c. The matrices A and A^T have the same eigenvalues.
Invertible matrices are important in linear algebra. From the eigenvalue problem we can easily conclude whether the matrix A is invertible or not. What is more, the eigenvalues of the inverse of an invertible matrix A can be read off immediately from the eigenvalues of A. Because of that, the following theorem summarizes some properties of invertible matrices.
Theorem 2.1.6.
If A is an n × n matrix, then the following are equivalent:
a. A is invertible.
b. λ = 0 is not an eigenvalue of A.
c. If λ is an eigenvalue of the invertible matrix A and x ≠ 0 a corresponding eigenvector, then 1/λ is an eigenvalue of A^{-1} and x is a corresponding eigenvector.
d. det(A) ≠ 0.
e. A has rank n.
f. Ax = 0 has only the trivial solution.
g. Ax = b has exactly one solution for every n × 1 matrix b.
h. The column vectors of A form a basis for ℝn.
i. The row vectors of A form a basis for ℝn.
j. A^T A is invertible.
The problem of finding a basis of ℝn consisting of eigenvectors is very important in linear algebra. Because of that, in this section we will consider the following two equivalent problems:
The Eigenvector Problem. Given an n × n matrix A, does there exist a basis for ℝn consisting
of eigenvectors of A?
The Diagonalization Problem (Matrix Form). Given an n × n matrix A, does there exist an
invertible matrix P such that P− 1AP is a diagonal matrix?
The latter problem suggests the following terminology.
Definition 2.1.3. Two square matrices A and B are called similar if there is an invertible matrix P such that B = P^{-1}AP. The transition from the matrix A to the matrix B is called a similarity transformation.
The importance of similar matrices can be seen in the following theorem:
Theorem 2.1.7. Similar matrices A and B have the same characteristic polynomial. Hence they have the same eigenvalues, including, for each eigenvalue λ0, its geometric multiplicity and its algebraic multiplicity.
Based on previous definitions, we can define the term diagonalizable as follows:
Definition 2.1.4. A square matrix A is called diagonalizable if it can be brought into diagonal form by a similarity transformation.
That the above two problems are equivalent is made precise by the following theorem.
Theorem 2.1.8.
If A is an n × n matrix, then the following are equivalent:
a. A is diagonalizable.
b. A has n linearly independent eigenvectors.
The following algorithm diagonalizes a matrix.
Algorithm for Diagonalizing a Matrix
Find n linearly independent eigenvectors of A, denoted x1, x2, ⋯, xn.
Form the matrix P having x1, x2, ⋯, xn as its column vectors.
The matrix P^{-1}AP is then diagonal with λ1, λ2, ⋯, λn as successive diagonal entries, where λi is the eigenvalue corresponding to xi, for i = 1, 2, ⋯, n.
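A minimal numpy sketch of this algorithm (assuming A is diagonalizable; the test matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # arbitrary diagonalizable example

lam, P = np.linalg.eig(A)       # columns of P are the eigenvectors x1, ..., xn
D = np.linalg.inv(P) @ A @ P    # P^{-1} A P

print(np.round(D, 10))          # diagonal matrix with lam on the diagonal
print(lam)                      # [5., 2.]
```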
Theorem 2.1.9.
If x1, x2, ⋯, xk are eigenvectors of A corresponding to distinct eigenvalues λ1, λ2, ⋯, λk, then
{x1, x2, ⋯, xk } is a linearly independent set.
As a consequence of Theorems 2.1.8 and 2.1.9, we obtain the following important result.
Theorem 2.1.10.
If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.
There are matrices that have repeated eigenvalues and are nevertheless diagonalizable. The broadest such class of matrices is the class of normal matrices, introduced in the following definition.
Definition 2.1.5. A matrix A ∈ ℂ(n,n) is called normal if A^H A = A A^H holds.
A more general characterization of diagonalizable matrices A is given in the following theorem.
Theorem 2.1.11.
If A is a square matrix, then:
a. For every eigenvalue of A the geometric multiplicity is less than or equal to the algebraic multiplicity.
b. A is diagonalizable if and only if the geometric multiplicity is equal to the algebraic multiplicity for every eigenvalue.
2.2. QR algorithm
In this section we present the QR algorithm. In numerical linear algebra, the QR algorithm is an eigenvalue algorithm, that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm is a factorization method, because it is based on a matrix decomposition. The first factorization method, the LR algorithm, was developed by H. Rutishauser in 1958. Due to its many shortcomings, the LR algorithm is rarely used today (see Wilkinson's monograph). The better factorization method is the QR algorithm. Its basic form was developed independently in 1962 by J. G. F. Francis (England) and Vera N. Kublanovskaya (USSR). Today it is the best method for solving the unsymmetric eigenvalue problem when all eigenvalues of the matrix are to be determined. It is particularly effective when the matrix is first brought into a so-called condensed form; for the unsymmetric problem the condensed form is the Hessenberg form, which will be discussed later. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
For the understanding of the QR algorithm we will need the following terms:
Definition 2.2.1. A matrix Q ∈ ℝ(n,n) is called orthogonal if Q^T Q = I holds.
Remark 2.2.1. Orthogonal matrices are a special case of unitary matrices. A matrix U ∈ ℂ(n,n) is called unitary if U^H U = I holds. It is now clear that an orthogonal matrix is a unitary matrix in which all elements are real numbers. In practice, of course, it is easier to work with orthogonal than with unitary matrices.
Remark 2.2.2. Orthogonal and unitary matrices are normal matrices.
Let A and B be similar matrices. If the similarity transformation is performed by an orthogonal matrix Q or a unitary matrix U, i.e., if B = Q^T A Q or B = U^H A U, we say that the matrices A and B are unitarily similar. Since unitarily similar matrices are a special case of similar matrices, the eigenvalues of unitarily similar matrices are the same.
In addition to unitarily similar matrices and their properties, for the introduction of the QR algorithm we will need the following theorem.
Theorem 2.2.1. Let A ∈ ℝ(n,n) be a regular matrix. Then there is a decomposition A = QR, where Q is an orthogonal matrix and R an upper triangular matrix. If the diagonal elements of the matrix R are positive, the decomposition is unique.
The decomposition of the matrix A from Theorem 2.2.1 is called the QR decomposition of the matrix A.
The following is the basic form of the QR algorithm.
Let A0 := A. The basic form of the QR algorithm is given by
Algorithm 2.2.1. (QR algorithm, basic form)
For i = 0, 1, ⋯ until convergence
  Decompose Ai = QiRi (QR decomposition)
  Ai+1 = RiQi
End
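The following is a minimal Python/numpy sketch of Algorithm 2.2.1 (a fixed iteration count stands in for a real convergence test):

```python
import numpy as np

def qr_basic(A, iterations=200):
    """Basic QR iteration. When the method converges, the final iterate is
    (nearly) upper triangular and its diagonal approximates the eigenvalues."""
    Ai = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ai)   # Ai = Qi Ri
        Ai = R @ Q                # Ai+1 = Ri Qi, unitarily similar to Ai
    return Ai

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric example: real, distinct eigenvalues
print(np.round(np.sort(np.diag(qr_basic(A))), 6))
print(np.round(np.linalg.eigvalsh(A), 6))   # reference values (ascending)
```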
Theorem 2.2.2. All matrices Ai produced by Algorithm 2.2.1 are unitarily similar.
Proof
Since Qi^T Ai = Ri, we have Ai+1 = Ri Qi = Qi^T Ai Qi. From this relationship it is clear that

$$A_{i+1} = Q_i^T Q_{i-1}^T \cdots Q_0^T A_0 Q_0 \cdots Q_{i-1} Q_i.$$

It is obvious that the matrix Q := Q0 ⋯ Qi is an orthogonal matrix, and the theorem is proved.
Let us look briefly at some properties of the QR algorithm:
a. For the QR factorization of a full matrix, O(n³) flops are necessary per factorization. The whole QR iteration therefore needs O(n⁴) flops, which means that it is objectively very slow. The slowness of the basic form of the algorithm can, however, be overcome with two strategies, namely: transforming the matrix into Hessenberg form and introducing shifts.
b. Let A ∈ ℝ(n,n); then Qi, Ri ∈ ℝ(n,n) for every i, and the whole algorithm is performed in real arithmetic. If the moduli of all eigenvalues of the matrix A are distinct, then the eigenvalues are all real and the algorithm converges. However, if there is at least one pair with |λi+1/λi| ≈ 1, the QR algorithm is very slow. If the matrix A has complex eigenvalues, the basic form of the QR algorithm does not converge.
We have already seen the need for the Hessenberg form of a matrix; it is introduced in the following definition.
Definition 2.2.2. We say that a matrix A is in upper Hessenberg form if

aij = 0 for i − j ≥ 2,

i.e., A is an upper triangular matrix with one additional subdiagonal below the main diagonal.
The reduction to Hessenberg form can be performed using Householder reflectors or Givens rotations. Let us look at the reduction to Hessenberg form using Householder reflectors.
Let

$$A = \begin{pmatrix} a_{11} & c^T \\ b & B \end{pmatrix} \quad \text{with } b \neq 0.$$

Our goal is to determine ω ∈ ℂ^{n−1} with ‖ω‖₂ = 1 and

$$Q_1 b := (I_{n-1} - 2\omega\omega^H)\, b = k e_1,$$

where I_{n−1} is the identity matrix of order n − 1 and e1 is the first column of the matrix I_{n−1}.
We define the Householder reflector as follows:

$$P_1 := \begin{pmatrix} 1 & 0^T \\ 0 & Q_1 \end{pmatrix}.$$

Now

$$A_1 := P_1 A P_1 = \begin{pmatrix} a_{11} & c^T Q_1 \\ k e_1 & Q_1 B Q_1 \end{pmatrix},$$

so the first column of A1 is (a11, k, 0, ⋯, 0)^T. Obviously, the first column of the matrix A1 has the form required of an upper Hessenberg matrix. In this way we have shown how the first column is converted into a suitable form. An analogous procedure can be performed on columns 2, 3, …, n − 1.
For the implementation of the QR algorithm it is important that each QR iteration preserves the structure of the matrix. For a matrix in upper Hessenberg form the following theorem applies.
Theorem 2.2.3. If A is an upper Hessenberg matrix, then the factor Q in its QR factorization A = QR is also an upper Hessenberg matrix.
By the above theorem, the next iterate RQ is again an upper Hessenberg matrix, since it is the product of an upper triangular matrix and an upper Hessenberg matrix.
Preserving the structure of the matrix is very important for the efficiency of the algorithm. Namely, for an upper Hessenberg matrix A the QR factorization requires only O(n²) flops instead of the O(n³) necessary for the QR factorization (decomposition) of a full matrix.
Let us look at one more advantage of transforming the matrix into Hessenberg form. Namely, Ai → R (i → ∞), where R is an upper triangular matrix. Because each matrix Ai is an upper Hessenberg matrix, convergence means that the subdiagonal elements tend to zero, i.e.,

$$a^{(i)}_{j+1,j} \to 0 \ (i \to \infty), \qquad j = 1, 2, \cdots, n-1,$$

where a^{(i)}_{j+1,j} are the elements of the matrix Ai. It is now clear that for sufficiently large i the eigenvalues of the initial matrix A can be read off, on the basis of Theorem 2.1.3, as the diagonal elements of the matrix Ai.
For a further improvement of the algorithm, a shift is used. The idea of improving the QR algorithm is based on the simple fact that if the eigenvalues of A are λi, then the eigenvalues of the matrix A − σI are λi − σ. If the shift σ is chosen close to an eigenvalue, the algorithm accelerates strongly.
Let A0 := A.
Algorithm 2.2.2. (QR algorithm with shift)
For i = 0, 1, ⋯ until convergence
  Choose a shift σi near an eigenvalue
  Decompose Ai − σiI = QiRi (QR decomposition)
  Ai+1 = RiQi + σiI
End
It is easy to prove that all matrices in Algorithm 2.2.2 are unitarily similar. From the above it is clear that in the case of real matrices with real eigenvalues the best choice of shift parameter is σi = a^{(i)}_{n,n}.
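A Python sketch of Algorithm 2.2.2 with the shift σi = a^{(i)}_{n,n}, after first reducing A to Hessenberg form with scipy (deflation and a proper convergence test are omitted for brevity):

```python
import numpy as np
from scipy.linalg import hessenberg

def qr_shifted(A, iterations=100):
    """QR iteration with the simple shift sigma_i = (Ai)[n-1, n-1],
    applied to the Hessenberg form of A. Sketch only: no deflation."""
    H = hessenberg(np.array(A, dtype=float))
    n = H.shape[0]
    I = np.eye(n)
    for _ in range(iterations):
        sigma = H[n - 1, n - 1]
        Q, R = np.linalg.qr(H - sigma * I)   # Ai - sigma_i I = Qi Ri
        H = R @ Q + sigma * I                # Ai+1 = Ri Qi + sigma_i I
    return H

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 1.0]])              # symmetric: real eigenvalues
print(np.round(np.sort(np.diag(qr_shifted(A))), 6))
print(np.round(np.linalg.eigvalsh(A), 6))    # reference values (ascending)
```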
For a further analysis of the shift parameter we will need the concept of an unreduced upper Hessenberg matrix as well as the implicit Q theorem.
Definition 2.2.3. An upper Hessenberg matrix H is an unreduced upper Hessenberg matrix if its first subdiagonal contains not a single zero.
Theorem 2.2.4. (Implicit Q theorem) Let Q^T A Q = H be an unreduced upper Hessenberg matrix with positive subdiagonal elements hk+1,k, where Q is a unitary matrix. Then the columns of the matrix Q from the second to the n-th, and the matrix H, are uniquely determined by the first column of the matrix Q.
Proof
Let Q = (q1, q2, ⋯, qn), and suppose q1, q2, ⋯, qk and the first k − 1 columns of the matrix H have been determined. The proof is carried out by mathematical induction on k. For k = 1, q1 is given and the process can start. Because QH = AQ and H = (hij) is upper Hessenberg, it holds that

$$h_{k+1,k}\, q_{k+1} + h_{kk}\, q_k + \cdots + h_{1k}\, q_1 = A q_k.$$

If we multiply the last equality by (qi)^H we get

$$h_{ik} = q_i^H A q_k \qquad (i = 1, 2, \cdots, k).$$

From here the k-th column of H is specified, except for the element hk+1,k. Because hk+1,k ≠ 0 we have

$$q_{k+1} = \frac{1}{h_{k+1,k}} \left( A q_k - \sum_{i=1}^{k} h_{ik}\, q_i \right).$$

From (qk+1)^H qk+1 = 1 and the positivity of hk+1,k we obtain hk+1,k in a unique way.
The theorem is proved.
Remark 2.2.3. The condition hk+1,k > 0 in the previous theorem is needed only to ensure the uniqueness of the matrices Q and H.
With the help of the implicit Q theorem we discuss the selection of shifts when the real matrix A = A0 has complex eigenvalues. Then one has to perform a double shift with σ and σ̄:

$$A_0 - \sigma I = Q_1 R_1, \qquad A_1 = R_1 Q_1 + \sigma I,$$
$$A_1 - \bar{\sigma} I = Q_2 R_2, \qquad A_2 = R_2 Q_2 + \bar{\sigma} I.$$

From there it is easy to get A2 = Q2^T Q1^T A0 Q1 Q2. The matrices Q1 and Q2 can be chosen so that Q1Q2 is a real matrix, and therefore the matrix A2 is a real matrix. Applying

$$Q_1 Q_2 R_2 R_1 = Q_1 (A_1 - \bar{\sigma} I) R_1 = Q_1 (R_1 Q_1 + (\sigma - \bar{\sigma}) I) R_1 = Q_1 R_1 Q_1 R_1 + (\sigma - \bar{\sigma}) Q_1 R_1$$
$$= (A_0 - \sigma I)^2 + (\sigma - \bar{\sigma})(A_0 - \sigma I) = A_0^2 - (\sigma + \bar{\sigma}) A_0 + |\sigma|^2 I =: M.$$

Because σ + σ̄ ∈ ℝ and σσ̄ = |σ|² ∈ ℝ, the matrix M is real. Then Q1Q2R2R1 is a QR factorization of a real matrix, and that means Q1Q2 and R2R1 can be chosen as real matrices. The first column of the matrix Q1Q2 is proportional to the first column of the matrix M, and the other columns are calculated by applying the implicit Q theorem.
2.3. Mathematical background for Hermitian (symmetric) case
In this section we look at the eigenvalue problem in the case of a symmetric or Hermitian matrix.
Definition 2.3.1. A matrix A ∈ ℝ(n,n) is called symmetric if A = A^T holds.
Definition 2.3.2. A matrix A ∈ ℂ(n,n) is called Hermitian if A = A^H holds.
Remark 2.3.1. Symmetric matrices are simply the special case of Hermitian matrices in which the elements of the matrix are real numbers. Therefore, we will formulate the theorems for Hermitian matrices.
Remark 2.3.2. Hermitian and symmetric matrices are normal matrices, which means that they can be diagonalized.
The following theorem gives important information on the realness of the eigenvalues of Hermitian (symmetric) matrices. This feature greatly facilitates the eigenvalue problem for this class of matrices, which makes this class of matrices very useful in practice.
Theorem 2.3.1. If A is a Hermitian (symmetric) matrix, then:
a. The eigenvalues of A are all real numbers.
b. Eigenvectors from different eigenspaces are orthogonal.
Since all the eigenvalues are real, they can be ordered by size. Therefore, we assume throughout that λ1 ≤ λ2 ≤ ⋯ ≤ λn, with x1, ⋯, xn the corresponding orthonormal eigenvectors.
If the matrix A is symmetric, exploiting the symmetry leads to a significant acceleration of the algorithms for the unsymmetric case. We demonstrate this with the QR algorithm presented in Section 2.2 for the unsymmetric case. In the symmetric case it is important to note that the upper Hessenberg form of a symmetric matrix is a tridiagonal matrix, whose QR decomposition requires only O(n) operations. It is also important that the QR algorithm preserves this structure, so all iterates Ai are tridiagonal. As shift one usually takes in this case the Wilkinson shift, which is defined as the eigenvalue of the matrix

$$\begin{pmatrix} a_{n-1,n-1} & a_{n-1,n} \\ a_{n-1,n} & a_{n,n} \end{pmatrix}$$

that is closest to an,n.
For the QR algorithm with Wilkinson shifts the following theorem applies, whose proof is given in [2].
Theorem 2.3.2. (Wilkinson) The QR algorithm with Wilkinson shifts for a symmetric tridiagonal matrix converges globally and at least linearly. For almost all matrices the convergence is asymptotically cubic.
Now we introduce the very important concept of the Rayleigh quotient, because it gives the best estimate of an eigenvalue for a given vector x ∈ ℂn, x ≠ 0.
Definition 2.3.3. Let A be a Hermitian (symmetric) matrix. For a given vector x ∈ ℂn, x ≠ 0, the Rayleigh quotient is defined as

$$R(x) := \frac{x^H A x}{x^H x}.$$
The importance of the Rayleigh quotient is seen in the following theorem.
Theorem 2.3.3. (Properties of the Rayleigh quotient)
a. For all x ∈ ℂn, x ≠ 0, it holds that λ1 ≤ R(x) ≤ λn.
b. λ1 = min_{x≠0} R(x), λn = max_{x≠0} R(x).
c. If x ≠ 0 with λ1 = R(x), respectively λn = R(x), then x is an eigenvector corresponding to λ1, respectively λn.
d. λi = min{ R(x) : x^H xj = 0, j = 1, ⋯, i − 1, x ≠ 0 } = max{ R(x) : x^H xj = 0, j = i + 1, ⋯, n, x ≠ 0 }.
Paragraph (d) of the previous theorem is known as the Rayleigh principle. Numerically, however, it is worthless, because to determine λi, for example, we would already need the eigenvectors x1, ⋯, xi−1. To overcome this disadvantage, the minmax principle of Poincaré, stated in the following theorem, was introduced.
Theorem 2.3.4. (minmax principle of Poincaré)

$$\lambda_i = \min_{\dim V = i}\ \max_{x \in V \setminus \{0\}} R(x) = \max_{\dim V = n-i+1}\ \min_{x \in V \setminus \{0\}} R(x)$$

The following formulation is known as the minmax principle of Courant–Fischer and is often more convenient to use.
Theorem 2.3.5. (minmax principle of Courant–Fischer)

$$\lambda_i = \max_{p_1, \ldots, p_{i-1}} \min \left\{ R(x) : x^H p_j = 0,\ j = 1, \ldots, i-1,\ x \neq 0 \right\} = \min_{p_1, \ldots, p_{n-i}} \max \left\{ R(x) : x^H p_j = 0,\ j = 1, \ldots, n-i,\ x \neq 0 \right\}.$$
From the above it is clear that these theorems are important for the localization of eigenvalues.
The following algorithm is known in linear algebra as Rayleigh quotient iteration; it reads as follows.
Let A ∈ ℝ(n,n) be a symmetric matrix and x0 a normalized initial vector, i.e., ‖x0‖2 = 1.
Algorithm 2.3.1. (Rayleigh quotient iteration)
σ0 = (x0)^T A x0
For i = 1, 2, ⋯ until convergence
  Solve (A − σi−1 I) yi = xi−1
  xi = yi / ‖yi‖2
  σi = (xi)^T A xi
End
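A minimal Python sketch of Algorithm 2.3.1 (the linear solve becomes nearly singular as σ approaches an eigenvalue, which a robust implementation must handle; here we simply stop after a fixed number of steps):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iterations=5):
    """Rayleigh quotient iteration for a symmetric matrix A."""
    x = x0 / np.linalg.norm(x0)
    sigma = x @ A @ x                 # Rayleigh quotient of the start vector
    for _ in range(iterations):
        y = np.linalg.solve(A - sigma * np.eye(A.shape[0]), x)
        x = y / np.linalg.norm(y)
        sigma = x @ A @ x
    return sigma, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
sigma, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
print(sigma)                          # approximates an eigenvalue of A
print(np.linalg.eigvalsh(A))          # reference: [1.381966, 3.618034]
```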
Theorem 2.3.6. Rayleigh quotient iteration converges cubically.
Finally, we point out that the most effective method for symmetric matrices is the divide-and-conquer method. This method was introduced by Cuppen [3], and the first effective implementation is the work of Gu and Eisenstat [4]. More information about this method can be found in [4].
2.4. Physical background
We have already mentioned that the eigenvalue problem has numerous applications in engineering; indeed, much of the motivation for considering eigenvalue problems comes from their wide application in the technical disciplines. We illustrate the application of eigenvalues in engineering on a simple example of a mass–spring system. We assume that each spring has the same natural length l and the same spring constant k. Finally, we assume that the displacement of each spring is measured relative to its own local coordinate system with an origin at the spring's equilibrium position.
Applying Newton's second law we get the following system:

$$m_1 \frac{d^2 x_1}{dt^2} - k(-2x_1 + x_2) = 0,$$
$$m_2 \frac{d^2 x_2}{dt^2} - k(x_1 - 2x_2) = 0.$$

We know from vibration theory that

$$x_i = a_i \sin \omega t,$$

where ai is the amplitude of the vibration of mass i and ω is the frequency of the vibration. Differentiating the last equation twice with respect to t, we get

$$\frac{d^2 x_i}{dt^2} = -a_i \omega^2 \sin \omega t.$$

If we substitute the expressions obtained for xi and d²xi/dt² (i = 1, 2) into the initial system and write the system in matrix form, we get
$$A \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \omega^2 \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}, \tag{6}$$

where

$$A = \begin{pmatrix} \dfrac{2k}{m_1} & -\dfrac{k}{m_1} \\[2mm] -\dfrac{k}{m_2} & \dfrac{2k}{m_2} \end{pmatrix}.$$

Equation (6) represents an unsymmetric eigenvalue problem (the eigenvalue is ω²). More information on this example can be found in [5].
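A short numerical check of (6); the spring constant and masses are hypothetical values chosen for illustration:

```python
import numpy as np

k, m1, m2 = 200.0, 40.0, 40.0           # hypothetical spring constant and masses

A = np.array([[2 * k / m1, -k / m1],
              [-k / m2,  2 * k / m2]])

omega_sq, modes = np.linalg.eig(A)      # eigenvalues of A are the squared frequencies
print(np.sqrt(omega_sq))                # natural frequencies omega (rad/s)
print(modes)                            # columns: amplitude ratios (a1, a2)
```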
2.5. General linear eigenvalue problem
In this section we will deal with the general linear eigenvalue problem, i.e., the problem

$$Ax = \lambda Bx, \quad x \neq 0, \tag{7}$$

where A, B ∈ ℂ(n,n). The scalar λ is called an eigenvalue of the problem (7), and x is said to be an eigenvector of (7) corresponding to λ.
A common acronym for the general linear eigenvalue problem is GEP. The eigenvalue problem discussed previously is called the standard eigenvalue problem and tagged SEP. In practice we meet the GEP more often than the SEP. Let us now consider some features of the GEP and establish its relationship with the SEP.
It is obvious that the eigenvalues of (7) are the zeros of the characteristic polynomial, which is defined as pn(λ) := det(A − λB). In the case of the GEP the degree of the polynomial pn is less than or equal to n. The characteristic polynomial pn has degree n if and only if B is a regular matrix. In the case B = I we get the SEP, and in this case the SEP has n eigenvalues. A GEP can have fewer than n eigenvalues. It can also happen that for a GEP pn(λ) ≡ 0, and in that case the GEP has infinitely many eigenvalues.
The following two examples illustrate this situation for the GEP.

Example 4.5.1. Let the GEP be given by

$$A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 6 \\ 0 & 0 \end{pmatrix}.$$

Then pn(λ) ≡ 0, every λ ∈ ℂ is an eigenvalue, and x = (1, 0)^T is a corresponding eigenvector.

Example 4.5.2. Let the GEP be given by

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}.$$

Then we get the characteristic polynomial pn(λ) = (1 − λ)(4 − λ) − (3 + λ)(2 + λ) = −2 − 10λ, and the only eigenvalue is λ = −1/5.
The atypical behavior we met in the last two examples is the result of the fact that the matrix B was not regular. Therefore, it is usual for the GEP to assume that A and B are Hermitian matrices and that B is positive definite, that is, all the eigenvalues of the matrix B are positive.
Our goal is to find a connection between the GEP taken in this form and a symmetric SEP. As we said, B is positive definite, so B has a Cholesky decomposition, i.e., B = CC^H. Then

$$Ax = \lambda Bx \iff Fy = \lambda y, \qquad F := C^{-1} A C^{-H}, \quad y := C^H x. \tag{8}$$
Since the Hermitian matrix F has n real eigenvalues, the GEP to which it belongs also has n real eigenvalues; by (8) they are the same eigenvalues.
Let yi and yj be orthonormal eigenvectors of F. Then for xi = C^{-H} yi and xj = C^{-H} yj we have

$$\delta_{ij} = (y^i)^H y^j = (C^H x^i)^H C^H x^j = (x^i)^H C C^H x^j = (x^i)^H B x^j. \tag{9}$$
From equation (9) it is clear that the eigenvectors of the GEP (7) are orthonormal vectors with respect to the new inner product defined as ⟨x, y⟩_B := x^H B y. For the GEP (7) we now redefine the Rayleigh quotient as

$$R_{(A,B)}(x) := \frac{x^H A x}{x^H B x}.$$

With this redefined inner product and Rayleigh quotient, all theorems of Section 2.3 apply, with the appropriate modifications of the definitions. This yields a self-contained generalization of the Hermitian (symmetric) SEP.
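A sketch of the reduction (8) in Python for the real symmetric case (scipy.linalg.eigh can also solve the pair (A, B) directly; the explicit Cholesky route shown here just mirrors the derivation):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric example
B = np.array([[4.0, 1.0], [1.0, 2.0]])   # symmetric positive definite example

C = cholesky(B, lower=True)              # B = C C^T
# F = C^{-1} A C^{-T}, formed with two triangular solves (A symmetric)
F = solve_triangular(C, solve_triangular(C, A, lower=True).T, lower=True)
print(np.linalg.eigvalsh(F))             # eigenvalues of the symmetric SEP
print(eigh(A, B, eigvals_only=True))     # same eigenvalues, solved directly
```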
3. The quadratic eigenvalue problem
In practice, nonlinear eigenproblems commonly arise in the dynamic/stability analysis of structures, in fluid mechanics, in the electronic behavior of semiconductor heterostructures, in the vibration of fluid–solid structures, in the vibration of sandwich plates, in accelerator design, in the vibro-acoustics of piezoelectric/poroelastic structures, in nonlinear integrated optics, in the regularization of total least squares problems, and in the stability of delay differential equations. In practice, the most important is the quadratic eigenvalue problem Q(λ)x = 0, where

$$Q(\lambda) := \lambda^2 A + \lambda B + C, \qquad A, B, C \in \mathbb{C}^{n \times n}, \quad A \neq 0,\ x \neq 0. \tag{10}$$
This section is organized as follows:
1. Basic properties: we consider the Rayleigh functional and the minmax characterization.
2. Linearization: a standard approach for investigating or numerically solving quadratic eigenvalue problems, in which the original problem is transformed into a generalized linear eigenvalue problem with the same spectrum.
3. Physical background: we study the vibration analysis of structural systems.
3.1. Basic properties
Variational characterization is important for finding eigenvalues. In this section we give a brief review of the variational characterization of nonlinear eigenvalue problems. Since quadratic eigenproblems are a special case of nonlinear eigenvalue problems, results for nonlinear eigenvalue problems can in particular be applied to quadratic eigenvalue problems. The variational characterization is a generalization of the well-known minmax characterization for linear eigenvalue problems.
We consider nonlinear eigenvalue problems

$$T(\lambda)x = 0, \tag{11}$$

where T(λ) ∈ ℂn×n, λ ∈ J, is a family of Hermitian matrices depending continuously on the parameter λ ∈ J, and J is a real open interval which may be unbounded.
Problems of this type arise in damped vibrations of structures, conservative gyroscopic systems, lateral buckling problems, problems with retarded arguments, fluid–solid vibrations, and quantum dot heterostructures.
To generalize the variational characterization of eigenvalues we need a generalization of the Rayleigh quotient. To this end we assume that
(A) for every fixed x ∈ ℂn, x ≠ 0, the real scalar equation

$$f(\lambda; x) := x^H T(\lambda) x = 0 \tag{12}$$

has at most one solution p(x) ∈ J. Then f(λ; x) = 0 implicitly defines a functional p on some subset D ⊂ ℂn, which is called the Rayleigh functional of (11).
(B) for every x ∈ D and every λ ∈ J with λ ≠ p(x) it holds that (λ − p(x)) f(λ; x) > 0.
If p is defined on D = ℂn\{0}, then problem (11) is called overdamped; otherwise it is called nonoverdamped.
Generalizations of the minmax and maxmin characterizations of the eigenvalues were proved by Duffin [6] for the quadratic case and by Rogers [7] for general overdamped problems. For nonoverdamped eigenproblems the natural ordering, calling the smallest eigenvalue the first one, the second smallest the second one, etc., is not appropriate. The next theorem, proved in [8], gives the appropriate minmax characterization of the eigenvalues.
Theorem 3.1.1.
Let J be an open interval in ℝ, and let T(λ) ∈ ℂn×n, λ ∈ J, be a family of Hermitian matrices depending continuously on the parameter λ ∈ J, such that the conditions (A) and (B) are satisfied. Let Hl denote the set of all l-dimensional subspaces of ℂn. Then the following statements hold.
a. For every l ∈ ℕ there is at most one l-th eigenvalue of T(⋅), which can be characterized by

$$\lambda_l = \min_{V \in H_l,\ V \cap D \neq \emptyset}\ \sup_{v \in V \cap D} p(v). \tag{13}$$

b. If

$$\lambda_l := \inf_{V \in H_l,\ V \cap D \neq \emptyset}\ \sup_{v \in V \cap D} p(v) \in J$$

for some l ∈ ℕ, then λl is the l-th eigenvalue of T(⋅) in J, and (13) holds.
c. If there exist the k-th and the l-th eigenvalues λk and λl in J (k < l), then J contains the j-th eigenvalue λj (k ≤ j ≤ l) as well, and λk ≤ λj ≤ λl.
d. Let λ1 = inf_{x∈D} p(x) ∈ J and λl ∈ J. If the minimum in (13) is attained for an l-dimensional subspace V, then V ⊂ D ∪ {0}, and (13) can be replaced with

$$\lambda_l = \min_{V \in H_l,\ V \subset D \cup \{0\}}\ \sup_{v \in V,\ v \neq 0} p(v).$$

e. λ̃ is an l-th eigenvalue if and only if μ = 0 is the l-th largest eigenvalue of the linear eigenproblem T(λ̃)x = μx.
f. The minimum in (13) is attained for the invariant subspace of T(λl) corresponding to its l largest eigenvalues.
Sylvester's law of inertia plays an important role in nonlinear eigenvalue problems. We will briefly recall it. With this purpose we define the inertia of a Hermitian matrix T as follows [9].
Definition 3.1.1. The inertia of a Hermitian matrix T is the triplet of nonnegative integers In(T) = (np, nn, nz), where np, nn, and nz are the numbers of positive, negative, and zero eigenvalues of T (counting multiplicities).
Next, we consider the case that an extreme eigenvalue λ1 := inf_{x∈D} p(x) or λn := sup_{x∈D} p(x) is contained in J.
Theorem 3.1.2. Assume that T : J → ℂn×n satisfies the conditions of the minmax characterization, and let (np, nn, nz) be the inertia of T(σ) for some σ ∈ J.
a. If λ1 := inf_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly np eigenvalues λ1 ≤ ⋯ ≤ λnp in J which are less than σ.
b. If sup_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly nn eigenvalues λn−nn+1 ≤ ⋯ ≤ λn in J exceeding σ.
We denote the quadratic eigenvalue problem by QEP. To study it we adapt the real scalar equation (12) to the QEP. In this way we get

$$f(\lambda; x) := \lambda^2 x^H A x + \lambda x^H B x + x^H C x = 0 \quad \text{for every fixed } x \in \mathbb{C}^n,\ x \neq 0. \tag{14}$$

The natural candidates for the Rayleigh functionals of the QEP (10) are

$$p_+(x) := -\frac{x^H B x}{2 x^H A x} + \sqrt{\left(\frac{x^H B x}{2 x^H A x}\right)^2 - \frac{x^H C x}{x^H A x}} \tag{15}$$

and

$$p_-(x) := -\frac{x^H B x}{2 x^H A x} - \sqrt{\left(\frac{x^H B x}{2 x^H A x}\right)^2 - \frac{x^H C x}{x^H A x}}. \tag{16}$$

The Rayleigh functionals are the generalization of the Rayleigh quotient.
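A direct transcription of (15) and (16) as a Python sketch (for vectors x where the discriminant is negative, f(·; x) = 0 has no real root and p±(x) is undefined, which is signaled here with NaN):

```python
import numpy as np

def rayleigh_functionals(A, B, C, x):
    """Evaluate p+(x) and p-(x) from (15)-(16) for Q(l) = l^2 A + l B + C."""
    a = np.vdot(x, A @ x).real
    b = np.vdot(x, B @ x).real
    c = np.vdot(x, C @ x).real
    disc = (b / (2 * a)) ** 2 - c / a
    if disc < 0:
        return np.nan, np.nan      # no real root of f(.; x): x lies outside D
    root = np.sqrt(disc)
    return -b / (2 * a) + root, -b / (2 * a) - root

A = np.eye(2)
B = np.array([[5.0, 0.0], [0.0, 6.0]])
C = 4.0 * np.eye(2)
print(rayleigh_functionals(A, B, C, np.array([1.0, 0.0])))   # (-1.0, -4.0)
```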
In this section we deal with the hyperbolic quadratic pencil as an example of overdamped problems, and with the gyroscopically stabilized pencil as an example of problems that are not overdamped.
Now, let us look briefly at the hyperbolic quadratic pencil. It is the overdamped quadratic pencil given by (10) in which A = A^H > 0, B = B^H, C = C^H. The hyperbolic quadratic pencil has the following interesting features:
The ranges J+ := p+(ℂn\{0}) and J− := p−(ℂn\{0}) are disjoint real intervals with max J− < min J+. Q(λ) is positive definite for λ < min J− and λ > max J+, and it is negative definite for λ ∈ (max J−, min J+).
(Q, J+) and (−Q, J−) satisfy the conditions of the variational characterization of the eigenvalues, i.e., there exist 2n eigenvalues [1]
$$\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n < \lambda_{n+1} \leq \cdots \leq \lambda_{2n} \tag{17}$$

and

$$\lambda_j = \min_{\dim V = j}\ \max_{x \in V,\ x \neq 0} p_-(x), \qquad \lambda_{n+j} = \min_{\dim V = j}\ \max_{x \in V,\ x \neq 0} p_+(x), \qquad j = 1, 2, \ldots, n. \tag{18}$$
Now we will look at the gyroscopically stabilized system, denoted GSS. A quadratic matrix polynomial

$$Q(\lambda) = \lambda^2 I + \lambda B + C, \qquad B = B^H,\ \det(B) \neq 0,\ C = C^H \tag{19}$$

is gyroscopically stabilized if for some k > 0 it holds that

$$|B| > kI + k^{-1} C, \tag{20}$$

where |B| denotes the positive square root of B².
Definition 3.1.2. An eigenvalue λ is of positive type if x^H Q′(λ) x > 0 for all x ∈ ℂn, x ≠ 0.
An eigenvalue λ is of negative type if x^H Q′(λ) x < 0 for all x ∈ ℂn, x ≠ 0.
Theorem (Barkwell, Lancaster, Markus 1992)
a. The spectrum of a gyroscopically stabilized pencil is real, i.e., Q is quasi-hyperbolic.
b. All eigenvalues are either of positive type or of negative type.
c. If (np, nn, nz) is the inertia of B, then Q(λ)x = 0 has 2np negative and 2nn positive eigenvalues.
d. The 2np negative eigenvalues lie in two disjoint intervals, with np eigenvalues in each; the ones in the left interval are of negative type, the ones in the right interval are of positive type.
e. The 2nn positive eigenvalues lie in two disjoint intervals, with nn eigenvalues in each; the ones in the left interval are of negative type, the ones in the right interval are of positive type.
Without loss of generality, we will consider only the positive eigenvalues.
Now let

$$p_+(x) := -\frac{x^H B x}{2 x^H x} + \sqrt{\left(\frac{x^H B x}{2 x^H x}\right)^2 - \frac{x^H C x}{x^H x}} \quad \text{and} \quad p_-(x) := -\frac{x^H B x}{2 x^H x} - \sqrt{\left(\frac{x^H B x}{2 x^H x}\right)^2 - \frac{x^H C x}{x^H x}}$$

be the functionals appropriate for the GSS. With them we can define the Rayleigh functionals

$$p_{-+}(x) := \begin{cases} p_-(x) & \text{if } p_-(x) > 0 \\ 0 & \text{else} \end{cases} \qquad \text{and} \qquad p_{++}(x) := \begin{cases} p_+(x) & \text{if } p_+(x) > 0 \\ 0 & \text{else.} \end{cases}$$

Voss and Kostić defined, for these functionals, intervals in which the eigenvalues admit a minmax characterization.
In order to characterize all eigenvalues by minmax principles, we will introduce a new Rayleigh functional. This is a new strategy. With this aim we write the matrices B and C, as well as the vector x, in the following form:

$$B = \begin{pmatrix} B_1 & 0 \\ 0 & -B_2 \end{pmatrix}, \quad B_i > 0\ (i = 1, 2), \qquad C = \begin{pmatrix} C_{11} & C_{12} \\ C_{12}^H & C_{22} \end{pmatrix}, \quad C_{ii} > 0\ (i = 1, 2), \qquad x = \begin{pmatrix} z \\ y \end{pmatrix} \neq 0.$$
We define

$$Q_1(\lambda) := \lambda^2 I + \lambda B_1 + C_{11},$$
$$Q_2(\lambda) := \lambda^2 I - \lambda B_2 + C_{22},$$
$$T(\lambda) := Q_2(\lambda) - C_{12}^H \left( Q_1(\lambda) \right)^{-1} C_{12}.$$

Because of condition (20), Q1(λ) and Q2(λ) are hyperbolic.
We will consider the following eigenvalue problem:

$$T(\lambda)\, y = 0, \qquad y \in \mathbb{C}^{n - n_p},\ y \neq 0.$$

The following theorem applies.
Theorem 3.1.3. λ is an eigenvalue of Q(⋅) if and only if λ is an eigenvalue of T(⋅).
Proof

$$Q(\lambda)x = 0 \iff \begin{pmatrix} Q_1(\lambda) & C_{12} \\ C_{12}^H & Q_2(\lambda) \end{pmatrix} \begin{pmatrix} z \\ y \end{pmatrix} = 0 \iff \begin{pmatrix} Q_1(\lambda) z + C_{12}\, y \\ C_{12}^H z + Q_2(\lambda)\, y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

$$\iff Q_1(\lambda) z + C_{12}\, y = 0 \ \wedge\ C_{12}^H z + Q_2(\lambda)\, y = 0$$

$$\iff z = -\left( Q_1(\lambda) \right)^{-1} C_{12}\, y \ \wedge\ \left( Q_2(\lambda) - C_{12}^H \left( Q_1(\lambda) \right)^{-1} C_{12} \right) y = 0$$

$$\iff z = -\left( Q_1(\lambda) \right)^{-1} C_{12}\, y \ \wedge\ T(\lambda)\, y = 0.$$

The theorem is proved.
Analogously to (14) we define the following functions:

$$q(\lambda; y) := y^H T(\lambda)\, y \qquad \text{and} \qquad f_2(\lambda; y) := y^H Q_2(\lambda)\, y.$$
In the following theorem we give information about the properties of q(λ; y).
Theorem 3.1.4. The function q(λ; y) has the following characteristics:
a. For each vector y, q(0; y) > 0.
b. For λ > 0 the function q(λ; y) has exactly two zeros for each vector y. The smaller zero of q(λ; y) is smaller than the smaller zero of f2(λ; y), and the larger zero of q(λ; y) is greater than the larger zero of f2(λ; y).
c. From f2′(λ; y) > 0 and λ > 0 it follows that q′(λ; y) > 0.
Proof
a. Because C > 0 we have

$$\begin{pmatrix} -y^H C_{12}^H (C_{11})^{-1} & y^H \end{pmatrix} \begin{pmatrix} C_{11} & C_{12} \\ C_{12}^H & C_{22} \end{pmatrix} \begin{pmatrix} -(C_{11})^{-1} C_{12}\, y \\ y \end{pmatrix} > 0.$$

It follows that

$$\begin{pmatrix} -y^H C_{12}^H (C_{11})^{-1} & y^H \end{pmatrix} \begin{pmatrix} 0 \\ C_{22}\, y - C_{12}^H (C_{11})^{-1} C_{12}\, y \end{pmatrix} > 0,$$

and then

$$q(0; y) = y^H \left( C_{22} - C_{12}^H (C_{11})^{-1} C_{12} \right) y > 0.$$

b. We have already mentioned that Q2(λ) is hyperbolic. This means that the function f2(λ; y) has two different real roots for each vector y. For λ > 0 we have Q1(λ) > 0. Therefore

$$q(\lambda; y) = y^H Q_2(\lambda)\, y - \underbrace{y^H C_{12}^H \left( Q_1(\lambda) \right)^{-1} C_{12}\, y}_{>0} < y^H Q_2(\lambda)\, y = f_2(\lambda; y). \tag{21}$$

Furthermore,

$$\lim_{\lambda \to +\infty} q(\lambda; y) = +\infty. \tag{22}$$

From (a) and because of (21) and (22) it follows that the function q(λ; y) has two zeros for λ > 0. The smaller zero a of q(λ; y) is smaller than the smaller zero of f2(λ; y), and the larger zero b of q(λ; y) is greater than the larger zero of f2(λ; y).
c.

$$q'(\lambda; y) = 2\lambda\, y^H y - y^H B_2\, y + y^H C_{12}^H \left( Q_1(\lambda) \right)^{-1} Q_1'(\lambda) \left( Q_1(\lambda) \right)^{-1} C_{12}\, y > 0.$$

The theorem is proved.
We now define a new functional.
Definition 3.1.3. Let q(a; y) = q(b; y) = 0 with 0 < a < b. We define two new functionals

$$t_-(y) := a, \qquad t_+(y) := b,$$

and set W± := t±(ℂ^{n−np}\{0}).
Theorem 3.1.5. max W− < min W+.
Proof
Let

$$p_{2\pm}(x) := \frac{x^H B_2 x}{2 x^H x} \pm \sqrt{\left( \frac{x^H B_2 x}{2 x^H x} \right)^2 - \frac{x^H C_{22} x}{x^H x}}$$

and

$$J_\pm := p_{2\pm}\!\left( \mathbb{C}^{n-n_p} \setminus \{0\} \right).$$

Because Q2(λ) is hyperbolic, max J− < min J+. By Theorem 3.1.4(b),

$$t_-(y) < \max J_- < \min J_+ < t_+(y) \qquad \text{for every } y \in \mathbb{C}^{n-n_p} \setminus \{0\},$$

which implies max W− ≤ max J− < min J+ ≤ min W+.
The theorem is proved.
Theorem 3.1.6. All the positive eigenvalues of (19) are either minmax values of t−(y) or maxmin values of t+(y).
3.2. Linearization
In this section we deal with linearization. As mentioned, linearization is the standard procedure for reducing a QEP to a GEP, with a view to facilitating the computation of the eigenvalues. We have already seen that eigenvalue problems usually arise from solving differential equations or systems of differential equations. Indeed, the basic idea of linearization comes from the field of differential equations, where a differential equation of second order can be reduced to a system of two differential equations of first order with two unknown functions.
The basic idea of linearization of a QEP is the introduction of the substitution z = λx in

$$(\lambda^2 A + \lambda B + C)x = 0.$$

Then we get
a. λAz + Bz + Cx = 0, or
b. λAz + λBx + Cx = 0.
The resulting equations are GEPs because they can be written, respectively, in the form

$$\text{a.} \quad \begin{pmatrix} -B & -C \\ I & O \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix} = \lambda \begin{pmatrix} A & O \\ O & I \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix}$$

$$\text{b.} \quad \begin{pmatrix} O & -C \\ I & O \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix} = \lambda \begin{pmatrix} A & B \\ O & I \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix}.$$

Since all matrices of the corresponding GEPs are 2n × 2n, the GEP has 2n eigenvalues, and therefore the corresponding QEP also has 2n eigenvalues. From the above it is clear that linearization is not unique. In the choice of a linearization of a QEP, an important factor is to maintain the symmetry and some spectral properties of the QEP, if possible. An application of linearization follows in the next section.
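A sketch of linearization (a) in Python: the QEP (10) is embedded in a 2n × 2n GEP and handed to scipy's generalized eigensolver (the matrices are arbitrary illustrative data):

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigenvalues(A, B, C):
    """Solve (l^2 A + l B + C) x = 0 via linearization (a):
    [[-B, -C], [I, O]] [z; x] = l [[A, O], [O, I]] [z; x], with z = l x."""
    n = A.shape[0]
    I, O = np.eye(n), np.zeros((n, n))
    L = np.block([[-B, -C], [I, O]])
    M = np.block([[A, O], [O, I]])
    lam, V = eig(L, M)            # generalized eigenproblem L v = lambda M v
    return lam, V[n:, :]          # lower block of v = [z; x] is x

A = np.eye(2)
B = np.array([[0.0, -1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 3.0]])
lam, X = quadratic_eigenvalues(A, B, C)
print(np.sort_complex(lam))       # 2n = 4 eigenvalues of the QEP

# residual check: ||(lam^2 A + lam B + C) x|| should be near zero
r = (lam[0]**2 * A + lam[0] * B + C) @ X[:, 0]
print(np.linalg.norm(r))
```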
3.3. Physical background
Let us now look at an engineering application of the quadratic eigenvalue problem. The largest review of applications of the QEP is given in [10]. We have already mentioned in the introduction that the eigenvalue problem arises in connection with differential equations or systems of differential equations. In structural mechanics, differential equations, and therefore eigenvalue problems, are the most commonly used tools. Note that the ultimate goal is to determine the effect of vibrations on the performance and reliability of the system, and to control these effects.
We will now demonstrate the linearization of a QEP on a concrete example from engineering. A low-vibration system with n unknowns is described by the following system of differential equations:

$$M\ddot{y} + C\dot{y} + Ky = 0, \tag{23}$$

where M is the mass matrix, C is the viscous damping matrix, and K is the stiffness matrix. Because of the underlying physics, M and K are related to the kinetic and strain energy, respectively, by a quadratic form, which makes them symmetric. For most structures, M and K are positive definite and sparse.
The substitution y = xe^{λt} gives, after rearrangement,

$$(\lambda^2 M + \lambda C + K)\, x e^{\lambda t} = 0,$$

respectively

$$(\lambda^2 M + \lambda C + K)\, x = 0. \tag{24}$$

Therefore, the system (23) has a nontrivial solution y exactly when λ is chosen such that the QEP (24) has a nontrivial solution x.
Now we apply to the QEP (24) the linearization method presented in Section 3.2. Thus we obtain the corresponding GEP

$$\begin{pmatrix} -C & -K \\ I & O \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix} = \lambda \begin{pmatrix} M & O \\ O & I \end{pmatrix} \begin{pmatrix} z \\ x \end{pmatrix}.$$

When the system is undamped (C = O), (24) reduces to λ²Mx + Kx = 0; with λ = iω this is the GEP Kx = ω²Mx. Because the matrices M and K are most often symmetric, positive definite, and sparse, the obtained GEP is easy to solve.
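For the undamped case the pair (K, M) is symmetric positive definite, so scipy's generalized symmetric eigensolver applies directly (the 2-degree-of-freedom data are hypothetical):

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([40.0, 40.0])             # hypothetical mass matrix
K = np.array([[400.0, -200.0],
              [-200.0, 400.0]])       # hypothetical stiffness matrix

omega_sq, modes = eigh(K, M)          # solves K x = omega^2 M x
print(np.sqrt(omega_sq))              # natural frequencies (rad/s)
```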
4. Conclusion
Because of its great practical applicability, the eigenvalue problem occupies an important place in linear algebra. In this chapter we discussed linear and quadratic eigenvalue problems. In particular, the emphasis was on numerical methods: the QR algorithm and Rayleigh quotient iteration for linear eigenvalue problems, and linearization and the minmax characterization for quadratic eigenvalue problems. The whole chapter shows that the structure of the matrices participating in an eigenvalue problem strongly influences the choice of method. It is also clear that by using the features of structured matrices, existing algorithms can be made much more effective. Further studies will therefore aim at an increased use of the features of the matrices involved in the eigenvalue problem, with the aim of improving the effectiveness of the methods. Finally, we point out that in this chapter we introduced new Rayleigh functionals for gyroscopically stabilized systems, which enable a complete minmax (maxmin) characterization of the eigenvalues. This is a new strategy. We have proved all the relevant features of the new Rayleigh functionals.
Author details
Aleksandra Kostić
Address all correspondence to:
[email protected]
Faculty of Mechanical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina
References
[1] Kostić, A.: Methods for the Determination of Some Extremal Eigenvalues of a Symmetric Toeplitz Matrix [thesis]. Hamburg, Germany: TUHH.
[2] Parlett, B. N.: The Symmetric Eigenvalue Problem. SIAM Classics in Applied Mathematics 20, Philadelphia, 1998. DOI: 10.1137/1.9781611971163
[3] Cuppen, J. J. M.: A divide and conquer method for the symmetric tridiagonal eigenproblem. Numer. Math. 1981; 36: 177–195. DOI: 10.1007/BF01396757
[4] Gu, M. & Eisenstat, S. C.: A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem. SIAM J. Matrix Anal. Appl. 1995; 16: 172–191. DOI: 10.1137/S0895479892241287
[5] Chapra, S. C. & Canale, R. P.: Numerical Methods for Engineers. McGraw-Hill Book Company, Singapore, 1990.
[6] Duffin, R. J.: A minimax theory for overdamped networks. J. Rat. Mech. Anal. 1955; 4: 221–233.
[7] Rogers, E. H.: A minimax theory for overdamped systems. Arch. Ration. Mech. Anal. 1964; 16: 89–96. DOI: 10.1007/BF00281333
[8] Voss, H.: A minmax principle for nonlinear eigenproblems depending continuously on the eigenparameter. Numer. Lin. Algebra Appl. 2009; 16: 899–913. DOI: 10.1002/nla.670
[9] Kostić, A. & Voss, H.: On Sylvester's law of inertia for nonlinear eigenvalue problems. Electr. Trans. Numer. Anal. 2013; 40: 82–93.
[10] Tisseur, F. & Meerbergen, K.: The quadratic eigenvalue problem. SIAM Review 2001; 43: 235–286.