Numerical Methods For Computational Science and Engineering: Lecture 1, Sept 19, 2013: Introduction
Peter Arbenz
Computer Science Department, ETH Zürich
E-mail: [email protected]
Scientific Computing
Survey on lecture
1. Introduction
2. Roundoff errors
3. Nonlinear equations in one variable (2 lectures)
4. Linear algebra review
5. Direct methods for linear systems (2)
6. Linear least squares problems (2)
7. Iterative methods for linear systems (2)
8. Eigenvalues and singular values (2)
9. Nonlinear systems and optimization (3)
10. (Piecewise) polynomial interpolation (3)
11. Best approximation
Goals
• Knowledge of the fundamental algorithms in numerical
mathematics
• Knowledge of the essential terms in numerical mathematics
and the techniques used for the analysis of numerical
algorithms
• Ability to choose the appropriate numerical method for
concrete problems
• Ability to interpret numerical results
• Ability to implement numerical algorithms efficiently in
Matlab
Indispensable: Learning by doing (→ exercises)
Literature
Uri Ascher & Chen Greif: A First Course in Numerical
Methods. SIAM, 2011.
http://www.siam.org/books/cs07/
• Excellent reference.
• Main reference for large parts of this course.
• Target audience: undergraduate students in computer science.
• I will follow this book quite closely.
Literature (cont.)
W. Dahmen & A. Reusken: Numerik für Ingenieure und
Naturwissenschaftler, Springer, 2006.
Good reference for large parts of this course; many simple examples and lucid explanations, but also
rigorous mathematical treatment. Target audience: undergraduate students in science and engineering.
C. Moler: Numerical Computing with Matlab. SIAM, 2004.
http://www.mathworks.ch/moler/
Prerequisites
An essential prerequisite for this course is solid knowledge of linear
algebra and calculus. Familiarity with the topics covered in the
first-semester courses is taken for granted; see
• K. Nipp and D. Stoffer, Lineare Algebra, vdf
Hochschulverlag, Zürich, 5 ed., 2002.
• M. Gutknecht, Lineare Algebra, lecture notes, SAM, ETH
Zürich, 2009.
http://www.sam.math.ethz.ch/~mhg/unt/LA/HS07/.
• M. Struwe, Analysis für Informatiker. Lecture notes, ETH
Zürich, 2009.
Organization
Lecturer:
Prof. Peter Arbenz [email protected]
Assistants:
Stefan Pauli [email protected]
Daniel Hupp [email protected]
Christian Schüller [email protected]
Robert Carnecky [email protected]
Laura Scarabosio [email protected]
Jonas Šukys [email protected]
Cecilia Pagliantini [email protected]
Alexander Lobbe [email protected]
Sebastian Grandis [email protected]
Sharan Jagathrakashakan [email protected]
Venue
Classes: Mon 10.15-12.00 (CAB G11); Thu 10.15-12.00 (HG G5)
Tutorials: Mon 13.15-15.00
Thu 8.15-10.00
Please register (on the course website) for tutorial groups by
September 23rd:
http://www.math.ethz.ch/education/bachelor/lectures/
hs2013/math/nummath_cse
Consulting hours: if needed, see the course website.
Assignments
• The assignment sheets will be uploaded to the course
webpage by Monday each week at the latest.
• The exercises should be solved by the following tutorial class.
(Hand them in to the assistant or grade them yourself.)
Examination
• Three-hour written examination involving coding problems to
be done at the computer on
TBA
• Dry-run for computer-based examination:
No longer offered.
Try out a computer in the student labs in HG instead.
• Pre-exam question session:
TBA
Examination (cont.)
• Topics of examination:
  – All topics that have been addressed in class or in a
homework assignment.
  – One exam question will be one of the homework
assignments.
• Lecture slides will be available as (a single) PDF file during
the examination.
• The Ascher-Greif book will be made available, too.
• The exam questions will be asked in both German and
English. You can choose between the two.
     u        v   Absolute error   Relative error
     1     0.99             0.01             0.01
     1     1.01             0.01             0.01
  –1.5     –1.2              0.3              0.2
   100    99.99             0.01           0.0001
   100       99                1             0.01
Numerical algorithms and errors
Approximation example
The Stirling approximation
v = S_n = √(2πn) · (n/e)^n
is used to approximate u = n! = 1 · 2 · · · n for large n.
The formula involves the constant e = exp(1) = 2.7182818 . . ..
We use the Matlab program Example1_1.m from
http://www.siam.org/books/cs07/programs.zip
to compute the values of u and v for n = 1, . . . , 10.
Notice that the quality of v as an approximation to u improves
(the relative error decreases) even though the absolute error grows rapidly!
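The computation of Example1_1.m can be sketched in a few lines of Python (a sketch; the original is a Matlab script):

```python
import math

# Compare u = n! with the Stirling approximation v = sqrt(2*pi*n) * (n/e)^n.
for n in range(1, 11):
    u = math.factorial(n)
    v = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    abs_err = abs(u - v)
    rel_err = abs_err / u
    print(f"n = {n:2d}   abs. error = {abs_err:10.2f}   rel. error = {rel_err:.2e}")
```

Running it shows the absolute error growing with n while the relative error shrinks (roughly like 1/(12n)).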
Types of errors
• Errors in the formulation of the problem to be solved:
  – Errors in the mathematical model. Simplifications.
  – Errors in input data. Measurements.
• Approximation errors:
  – Discretization error.
  – Convergence error in iterative methods.
  – Discretization/convergence errors may be assessed by an
    analysis of the method used.
• Roundoff errors:
  – Roundoff errors arise everywhere in numerical computation
    because of the finite precision arithmetic.
  – Roundoff errors behave quite erratically.
The forward difference approximation to the first derivative is

f'(x₀) ≈ D_{x₀,h}(f) = (f(x₀ + h) − f(x₀)) / h.

By Taylor's theorem it is exact up to a remainder term:

f'(x₀) = (f(x₀ + h) − f(x₀)) / h − (h/2) f''(ξ),   x₀ < ξ < x₀ + h.

So, we expect the error to decrease linearly with h:

f'(x₀) − (f(x₀ + h) − f(x₀)) / h = −(h/2) f''(ξ) ≈ −(h/2) f''(x₀)

Or, using the big-O notation:

f'(x₀) − D_{x₀,h}(f) = O(h).
Results
Try for f (x) = sin(x) at x0 = 1.2.
(So, we are approximating cos(1.2) = 0.362357754476674 . . .)
h        Absolute error
0.1      4.71667 · 10⁻²
0.01     4.666196 · 10⁻³
0.001    4.660799 · 10⁻⁴
10⁻⁴     4.660256 · 10⁻⁵
10⁻⁷     4.619326 · 10⁻⁸
If we wanted
|cos(1.2) − (sin(1.2 + h) − sin(1.2))/h| < 10⁻¹⁰:
h        Absolute error
10⁻⁸     4.36105 · 10⁻¹⁰
10⁻⁹     5.594726 · 10⁻⁸
10⁻¹⁰    1.669696 · 10⁻⁷
10⁻¹¹    7.938531 · 10⁻⁶
10⁻¹³    6.851746 · 10⁻⁴
10⁻¹⁵    8.173146 · 10⁻²
10⁻¹⁶    3.623578 · 10⁻¹
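Both tables can be reproduced with a short sketch (in Python here rather than the course's Matlab). Truncation error dominates for moderate h; roundoff in the cancelling difference sin(1.2 + h) − sin(1.2) takes over for very small h:

```python
import math

x0 = 1.2
exact = math.cos(x0)  # the derivative of sin at x0

def fd_error(h):
    """Absolute error of the forward difference (sin(x0+h) - sin(x0))/h."""
    return abs(exact - (math.sin(x0 + h) - math.sin(x0)) / h)

# Error first decreases linearly with h (truncation error O(h)),
# then grows again once roundoff dominates; at h = 1e-16 the
# computed difference is exactly zero in double precision.
for h in [1e-1, 1e-2, 1e-4, 1e-8, 1e-11, 1e-16]:
    print(f"h = {h:.0e}   error = {fd_error(h):.6e}")
```

The minimum error near h ≈ 10⁻⁸ is no accident: it balances the O(h) truncation error against the O(eps/h) roundoff error.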
Algorithm properties
Performance features that may be expected from a good numerical
algorithm.
• Accuracy
  Relates to errors. How accurate is the result going to be when
  a numerical algorithm is run with some particular input data?
• Efficiency
  – How fast can we solve a certain problem?
    Rate of convergence. Floating point operations (flops).
  – How much memory space do we need?
  – These issues may affect each other.
• Robustness
  (Numerical) software should run under all circumstances.
  It should yield correct results to within an acceptable error,
  or fail gracefully if not successful.
Complexity I
Complexity/computational cost of an algorithm :⇔ number of
elementary operations.
Asymptotic complexity ≙ “leading order term” of the complexity
w.r.t. large problem size parameters.
The usual choice of problem size parameters in numerical linear
algebra is the number of independent real variables needed to
describe the input data (vector length, matrix sizes).
operation         description                              #mul/div   #add/sub    asymptotic
inner product     (x ∈ Rⁿ, y ∈ Rⁿ) ↦ xᴴy                   n          n − 1       O(n)
outer product     (x ∈ Rᵐ, y ∈ Rⁿ) ↦ xyᴴ                   nm         0           O(mn)
(tensor product)
matrix product    (A ∈ Rᵐˣⁿ, B ∈ Rⁿˣᵏ) ↦ AB                mnk        mk(n − 1)   O(mnk)
e = O(h^q)
w = O(n log n), i.e. |w| ≤ C·n·log n as n → ∞.
Example:
O(h²) means at least “quadratic convergence” (see later). Θ(h²)
is exact quadratic convergence.
Complexity II
To a certain extent, the asymptotic complexity allows one to predict
the dependence of the runtime of a particular implementation of
an algorithm on the problem size (for large problems). For
instance, an algorithm with asymptotic complexity O(n²) is likely
to take 4× as much time when the problem size is doubled.
One may argue that memory accesses are more decisive for run
times than floating point operations. In general there is a linear
dependence between the two, so the distinction makes no
difference in the O notation.
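The doubling argument can be checked by instrumenting a dense matrix-vector product with an operation counter (a Python sketch with ad-hoc data; the count, not the wall-clock time, is measured):

```python
def matvec(A, x):
    """Dense matrix-vector product; returns the result and the multiplication count."""
    n = len(A)
    mults = 0
    y = [0.0] * n
    for i in range(n):
        for j in range(n):
            y[i] += A[i][j] * x[j]
            mults += 1
    return y, mults

counts = {}
for n in [100, 200]:
    A = [[1.0] * n for _ in range(n)]  # made-up all-ones test data
    x = [1.0] * n
    y, counts[n] = matvec(A, x)
    print(f"n = {n}: {counts[n]} multiplications")

# Doubling the problem size quadruples the work, as O(n^2) predicts.
assert counts[200] == 4 * counts[100]
```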
Scaling
Scaling ≡ multiplication with diagonal matrices (with non-zero
diagonal entries) from left and/or right.
It is important to know the different effects of multiplying with a
diagonal matrix from left or right:
DA vs. AD, with A, D ∈ Rⁿˣⁿ, D = diag(d₁, . . . , dₙ)
y = diag(d)*x;
or
y = d.*x;
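The difference matters: forming diag(d) explicitly costs O(n²) storage and work, while the entrywise product costs only O(n). A Python analogue of the two Matlab lines above (a sketch with made-up data):

```python
n = 4
d = [2.0, 3.0, 4.0, 5.0]  # made-up scaling factors
x = [1.0, 1.0, 1.0, 1.0]

# Wasteful: build diag(d) explicitly, then multiply (Matlab: y = diag(d)*x).
# O(n^2) storage and O(n^2) operations.
D = [[d[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
y_slow = [sum(D[i][j] * x[j] for j in range(n)) for i in range(n)]

# Cheap: entrywise product (Matlab: y = d.*x). O(n) operations.
y_fast = [d[i] * x[i] for i in range(n)]

assert y_slow == y_fast
```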
Elementary matrices
Matrices of the form A = I + αuvᵀ are called elementary.
Again we can apply A to a vector x in a straightforward and a
more clever way:
Ax = (I + αuvᵀ)x
or
Ax = x + αu(vᵀx)
Cf. exercises.
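A Python sketch of the two variants (made-up data; the point is O(n²) vs. O(n) work):

```python
n = 5
alpha = 2.0
u = [1.0, 2.0, 3.0, 4.0, 5.0]  # made-up vectors
v = [1.0, 0.0, 1.0, 0.0, 1.0]
x = [1.0] * n

# Straightforward: form A = I + alpha*u*v^T explicitly, then multiply.
# O(n^2) storage and operations.
A = [[(1.0 if i == j else 0.0) + alpha * u[i] * v[j] for j in range(n)]
     for i in range(n)]
y_slow = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# Clever: Ax = x + alpha*u*(v^T x) -- one inner product, one axpy. O(n).
vtx = sum(v[j] * x[j] for j in range(n))
y_fast = [x[i] + alpha * u[i] * vtx for i in range(n)]

assert all(abs(a - b) < 1e-12 for a, b in zip(y_slow, y_fast))
```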
Unstable algorithm
Problem statement: evaluate the integrals
y_n = ∫₀¹ xⁿ/(x + 10) dx,   for n = 0, 1, 2, . . . , 30.
Eₙ ≈ 10ⁿ · E₀: the initial error E₀ is amplified by a factor 10 in
every step.
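Since xⁿ/(x + 10) = xⁿ⁻¹ − 10·xⁿ⁻¹/(x + 10), the integrals satisfy the recurrence yₙ = 1/n − 10·yₙ₋₁ with y₀ = ln(11/10). A Python sketch of the unstable forward recurrence and a stable backward variant (the starting index 50 for the backward sweep is an ad-hoc choice):

```python
import math

# Unstable: forward recurrence. Any error in y_0 is multiplied by -10
# in every step, so by n = 30 it has been amplified by 10^30.
y = math.log(11 / 10)  # y_0 = ln(1.1)
for n in range(1, 31):
    y = 1.0 / n - 10.0 * y
print("forward  y_30 =", y)  # nonsense, although 0 < y_30 < 1/310 must hold

# Stable: run the recurrence backwards, y_{n-1} = (1/n - y_n)/10.
# Errors are now *damped* by a factor 10 per step, so even the crude
# starting guess y_50 = 0 yields an accurate y_30.
z = 0.0
for n in range(50, 30, -1):
    z = (1.0 / n - z) / 10.0
print("backward y_30 =", z)
```

Since 10 ≤ x + 10 ≤ 11 on [0, 1], the true value satisfies 1/(11·31) ≤ y₃₀ ≤ 1/(10·31), i.e. roughly 0.003; the forward recurrence misses it by many orders of magnitude.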