Project, Due: Friday, 2/23
Arnold
This project (report and accompanying code) is due on Friday, February 23, by 11:59 PM. Your
report should be well-organized, typed (using LaTeX or Microsoft Word) and saved as a .pdf for
submission on Canvas. You must show all of your work within the report to receive full credit.
For portions of the project requiring the use of MATLAB code, remember to also submit your
.m-files on Canvas as a part of your completed project.
Project Description
The Jacobi, Gauss-Seidel, and SOR iterative methods for approximating the solution to Ax = b
are known as stationary iterative methods, since each method can be written as
x(k) = Tx(k−1) + c
where the iteration matrix T is constant and does not depend on the iteration k. Alternatively,
Krylov subspace methods are nonstationary iterative methods which do not have iteration matrices.
Instead, these methods minimize (in a method-dependent norm) the residual r(k) = b − Ax(k)
over the Krylov subspace
K_k(A, b) = span{b, Ab, A^2 b, . . . , A^(k−1) b}.
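To make the definition concrete, the first k Krylov vectors can be generated by repeated multiplication by A. The sketch below assumes A (n × n), b (n × 1), and k are already defined; in practice these columns quickly become nearly linearly dependent, which is why CG and GMRES work with an orthonormal basis (built by the Lanczos and Arnoldi processes, respectively) rather than the raw powers of A.

```matlab
% Build the first k Krylov vectors b, Ab, ..., A^(k-1)b as columns of K.
K = zeros(n, k);
K(:, 1) = b;
for j = 2:k
    K(:, j) = A * K(:, j-1);   % next power of A applied to b
end
```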
Two standard Krylov subspace methods are the conjugate gradient (CG) method and the gener-
alized minimal residual (GMRES) method. The CG method is designed for solving systems where
the coefficient matrix A is symmetric positive definite, while GMRES works for nonsymmetric
systems. The goal of this project is to explore the use of the CG and GMRES iterative methods
in approximating the solution to Ax = b.
Problems
1 (12 points) In your own words, briefly describe the main ideas behind the CG and GMRES
algorithms. Your description of each method should be clear and concise, including the key
points and defining factors of each method. Note that each method minimizes the residual
in a different way, so you’ll want to highlight the differences. As you research the algorithms,
you may notice that Section 7.6 in the text gives a derivation of the CG method, but it does
not discuss GMRES or Krylov subspaces. You may find the following references useful:
• L. N. Trefethen and D. Bau (1997) Numerical Analysis – Lecture 35 on GMRES,
Lecture 38 on CG (available online as an ebook through Gordon Library)
• C. T. Kelley (1995) Iterative Methods for Linear and Nonlinear Equations – Chapter
2 on CG, Chapter 3 on GMRES (available in hard copy at Gordon Library and online
at https://www.siam.org/books/textbooks/fr16_book.pdf)
You may also use additional references; just be sure to cite any references you use in a
bibliography at the end of your report. For example, if you use [1] as a reference, then you
should include the following bibliography entry at the end of your report:
[1] L. N. Trefethen and D. Bau (1997) Numerical Analysis. SIAM: Philadelphia.
(a) (4 points) Show that A is symmetric positive definite. Is this true for all n? Explain.
(b) (20 points) For n = 10, 50, 100, 500, use MATLAB implementations of the following iterative
schemes to approximate the solution to the linear system to within a tolerance of 10^(−6):
• Jacobi’s method with x(0) = (1, 1, . . . , 1)T ∈ Rn , using jacobi.m.
• Gauss-Seidel with x(0) = (1, 1, . . . , 1)T ∈ Rn , using gauss_seidel.m from HW4.
• SOR with x(0) = (1, 1, . . . , 1)T ∈ Rn and the optimal choice of ω. For this method,
modify gauss_seidel.m to account for relaxation. Call your new function SOR.m. Does
the optimal choice of ω change with n? Discuss.
• CG, using MATLAB’s pcg.m. Note that the default settings will not be sufficient, so
you will need to modify them.
• GMRES, using MATLAB’s gmres.m. You will also need to modify these default settings
for the algorithm to converge.
For each n, use the numerical solution obtained by the command A\b as the “true” solution,
and report the number of iterations it took each of the algorithms to converge (if they
converge at all). Before running the stationary iterative methods, verify that they will
converge by computing ρ(T); report these values for each stationary method and each n. Summarize
your results. In particular, discuss how the number of iterations and computational time for
each method to converge changes with n.
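One way to compute ρ(T) is to form each iteration matrix explicitly from the splitting A = D − L − U and take the largest eigenvalue magnitude. The sketch below assumes A is already defined with nonzero diagonal entries; the w_opt formula in the last line is the classical result for consistently ordered matrices (e.g. tridiagonal systems) and should be checked against your particular A before relying on it.

```matlab
% Splitting A = D - L - U (note the sign convention on L and U).
D = diag(diag(A));
L = -tril(A, -1);          % strictly lower triangular part
U = -triu(A,  1);          % strictly upper triangular part

T_jac = D \ (L + U);       % Jacobi iteration matrix
T_gs  = (D - L) \ U;       % Gauss-Seidel iteration matrix

rho_jac = max(abs(eig(full(T_jac))));
rho_gs  = max(abs(eig(full(T_gs))));
% Each stationary method converges for every x(0) iff rho(T) < 1.

% For consistently ordered matrices, the optimal SOR parameter is
w_opt = 2 / (1 + sqrt(1 - rho_jac^2));
```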