Numerical Analysis
Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead. These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.
1 General introduction
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...[1]
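The base-60 arithmetic in the caption can be checked directly; a quick Python sketch (not part of the original article):

```python
import math

# the tablet's approximation 1;24,51,10 in base 60
approx = 1 + 24/60 + 51/60**2 + 10/60**3
print(approx)                      # 1.4142129629...
print(abs(approx - math.sqrt(2)))  # error is about 6e-7, i.e. six decimal figures
```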
1.1 History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

1.2 Direct and iterative methods

... the same manner as for an iterative method.

1.3 Discretization

Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called discretization. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points of its domain, even though this domain is a continuum.

2 Generation and propagation of errors
2.1 Round-off

Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
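A minimal Python illustration of this (the specific values are standard IEEE 754 double-precision behavior, not taken from this article):

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so arithmetic that looks
# exact in decimal accumulates round-off error.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004, not 0.3
print(a == 0.3)   # False

# Decimal reveals the value actually stored for the literal 0.1
print(Decimal(0.1))
```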
2.3 Numerical stability and well-posed problems

For instance, consider the function f(x) = x(√(x+1) − √x). It can be rewritten in a mathematically equivalent form that avoids subtracting two nearly equal numbers:

\begin{aligned}
x\left(\sqrt{x+1}-\sqrt{x}\right)
&= x\left(\sqrt{x+1}-\sqrt{x}\right)\cdot\frac{\sqrt{x+1}+\sqrt{x}}{\sqrt{x+1}+\sqrt{x}} \\
&= x\,\frac{\left(\sqrt{x+1}\right)^{2}-\left(\sqrt{x}\right)^{2}}{\sqrt{x+1}+\sqrt{x}} \\
&= x\,\frac{x+1-x}{\sqrt{x+1}+\sqrt{x}} \\
&= x\,\frac{1}{\sqrt{x+1}+\sqrt{x}} \\
&= \frac{x}{\sqrt{x+1}+\sqrt{x}} \\
&= g(x)
\end{aligned}
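The rewriting above has practical consequences. A small Python sketch evaluates both forms in double precision at the arbitrarily chosen point x = 10¹², where the true value is very close to 500000:

```python
import math

def f(x):
    # naive form: subtracts two nearly equal square roots
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def g(x):
    # algebraically equivalent form that avoids the cancellation
    return x / (math.sqrt(x + 1) + math.sqrt(x))

x = 10.0**12
print(f(x))  # off by several units due to loss of significance
print(g(x))  # accurate to nearly full double precision
```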
Just plugging the number into the formula is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating point arithmetic.
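Horner's scheme can be sketched as follows (the function name and example polynomial are my own illustration):

```python
def horner(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x.

    coeffs are ordered from the highest power down to the constant
    term; only n multiplications and n additions are needed for a
    polynomial of degree n.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # 5.0
```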
f(500) = 500\left(\sqrt{501}-\sqrt{500}\right) = 500\,(22.38-22.36) = 500\,(0.02) = 10

g(500) = \frac{500}{\sqrt{501}+\sqrt{500}} = \frac{500}{22.38+22.36} = \frac{500}{44.74} = 11.17

By looking at the two results above, we realize that loss of significance (caused here by catastrophic cancellation in the subtraction 22.38 − 22.36) costs the first form almost all of its accuracy: the true value is 11.174755..., which the stable form g approximates well even though the two expressions are mathematically equivalent.

Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The least-squares method is one popular way to achieve this.

The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
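The least-squares approach mentioned above can be sketched for the simplest case, fitting a straight line through noisy data (the helper function and sample points are invented for illustration):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# imprecise measurements scattered around y = 1 + 2x
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
print(a, b)  # intercept near 1, slope near 2
```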
3.3 Solving equations and systems of equations

Much effort has been put in the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or hermitian) and positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
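The matrix-splitting idea behind iterative methods can be sketched with a minimal Jacobi iteration (the 2×2 system is an invented, strictly diagonally dominant example, which guarantees convergence):

```python
def jacobi(A, b, iterations=50):
    """Solve A x = b by Jacobi iteration.

    Splits A into its diagonal D and remainder R and iterates
    x_{k+1} = D^{-1} (b - R x_k), starting from the zero vector.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# diagonally dominant system: 4x + y = 9, x + 3y = 5
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 5.0]
print(jacobi(A, b))  # converges to the exact solution x = 2, y = 1
```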
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
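Newton's method can be sketched as follows (the function names are my own; f(x) = x² − 2 is chosen so that the root is √2, echoing the Babylonian example earlier):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 using Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # Newton update: x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2: converges quadratically to sqrt(2) = 1.41421356...
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```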
3.4 Solving eigenvalue or singular value problems

3.5 Optimization

Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.

The field of optimization is further split into several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.

3.7 Differential equations

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace.

4 Software

Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU Scientific Library.

There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, LabVIEW,
and IDL as well as free and open source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), IT++ (a C++ library), R (similar to S-PLUS) and certain variants of Python. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.[5][6]

Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.

5 See also
Analysis of algorithms
Computational science
List of numerical analysis topics
Numerical differentiation
Numerical Recipes
Symbolic-numeric computation
6 Notes

7 References

Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Johns Hopkins University Press. ISBN 0-8018-5413-X.

Hildebrand, F. B. (1974). Introduction to Numerical Analysis (2nd ed.). McGraw-Hill. ISBN 0-07-028761-9.

8 External links

Journals

Online course material
Lectures on Numerical Analysis, Dennis DeTurck and Herbert S. Wilf, University of Pennsylvania
Numerical methods, John D. Fenton, University of Karlsruhe
Numerical Methods for Science, Technology, Engineering and Mathematics, Autar Kaw, University of South Florida
Numerical Analysis Project, John H. Mathews, California State University, Fullerton
Numerical Methods Online Course, Aaron Naiman, Jerusalem College of Technology
Numerical Methods for Physicists, Anthony O'Hare, Oxford University
Lectures in Numerical Analysis (archived), R. Radok, Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt, Massachusetts Institute of Technology
Numerical Methods for time-dependent Partial Differential Equations, J.W. Haverkort, based on a course by P.A. Zegeling, Utrecht University
Numerical Analysis for Engineering, D. W. Harder, University of Waterloo