Numerical Analysis
Course contents
1. Introduction to Numerical Methods
- Motivation and importance of numerical techniques.
- Sources of errors in numerical computations.
2. Root Finding Methods
- Bisection method
- Newton-Raphson method
- Secant method
- Fixed-point iteration
3. Interpolation and Approximation
- Lagrange interpolation
- Newton's divided differences
- Polynomial approximation
4. Numerical Differentiation and Integration
- Numerical differentiation techniques
- Newton-Cotes formulas (Trapezoidal rule, Simpson's rule)
- Gaussian quadrature
5. Error Analysis and Convergence
- Round-off errors
- Truncation errors
- Convergence analysis of numerical methods
6. Introduction to Numerical Software and Programming
- Using computational tools (MATLAB, Python, etc.) for numerical
computation.
- Coding and implementation of numerical algorithms.
Course Overview:
This course introduces fundamental numerical methods and techniques used to
solve mathematical problems, especially those that are complex or difficult to
solve analytically. The course emphasizes both theoretical foundations and
practical implementations of numerical algorithms.
Course Objectives:
- Understand the limitations of analytical solutions and the necessity of
numerical methods.
- Develop skills in selecting and applying appropriate numerical techniques for
different types of problems.
- Gain proficiency in using computational tools and programming languages for
implementing numerical algorithms.
- Analyze the accuracy, stability, and convergence of numerical methods.
- Apply numerical methods to solve mathematical problems from various
disciplines.
Chapter one
INTRODUCTION TO NUMERICAL METHODS
Introduction to Numerical Methods is a pivotal field within applied mathematics
and computer science, focusing on the development, analysis, and
implementation of algorithms to solve mathematical problems that are
challenging or impossible to solve using analytical methods alone. It serves as a
bridge between mathematical theory and practical problem-solving in various
disciplines.
KEY ASPECTS:
1. Foundation of Numerical Analysis:
Numerical Methods play a critical role in situations where exact analytical
solutions are unattainable. These methods encompass a range of techniques and
algorithms designed to approximate solutions to mathematical problems
involving continuous variables, differential equations, optimization,
interpolation, integration, and more.
7. Continuous Advancements and Research:
The field of Numerical Methods continually evolves with the advancement of
technology and the increasing need for more sophisticated computational
techniques. Ongoing research aims to develop new algorithms that are more
accurate, efficient, and applicable to a wider range of problems.
When choosing or evaluating a numerical algorithm, several factors should be considered:
1. Time Complexity:
Runtime Performance: How efficiently the algorithm solves the problem,
considering the time taken for execution. It's crucial to assess how the
algorithm's performance scales with the size of the input.
2. Space Complexity:
Memory Usage: The amount of memory or space an algorithm requires for
execution. This includes consideration of auxiliary space, variables, data
structures, etc.
3. Input Size:
Scale of Problem: Understanding how the algorithm performs with different
sizes of input. Some algorithms might be efficient for small inputs but
inefficient for larger datasets.
4. Resource Limitations:
Hardware/Resource Constraints: Considering the hardware or resource
limitations in the environment where the algorithm will run. This involves
assessing CPU capacity, memory constraints, and system architecture.
5. Algorithm Robustness:
Reliability and Accuracy: Ensuring the algorithm provides accurate results for
different types of inputs and edge cases without failing or producing incorrect
outputs.
6. Complexity of Implementation:
Readability and Ease of Implementation: The ease of understanding,
implementing, and maintaining the algorithm. Simplicity and readability can be
crucial for debugging and future modifications.
7. Stability:
Stable Output: Some algorithms may exhibit unstable behavior for certain
inputs. Stability refers to the consistency of output for the same input across
different executions.
8. Special Requirements:
Specific Problem Requirements: Considering specific constraints or
requirements of the problem such as sorting, searching, or unique constraints
imposed by the problem itself.
The motivation and importance of numerical techniques lie in their ability to
solve complex mathematical problems that often lack explicit analytical
solutions. Several factors contribute to the significance of numerical methods in
various fields:
Computational Challenges:
Large-scale Calculations: Problems involving a vast number of variables or
complex systems cannot be practically solved by hand or using conventional
analytical methods. Numerical techniques enable the handling of extensive
computations efficiently.
Advancements in Computing Power:
Technological Progress: With the advancement in computational power and
access to high-performance computing, the application of numerical methods
has become more feasible and accessible.
The motivation and importance of numerical techniques stem from their ability
to address real-world complexities, providing practical solutions to problems
across various disciplines. From engineering design and technological
advancements to scientific research and predictive analysis, numerical methods
serve as an invaluable tool for handling complex mathematical problems,
enabling innovation and progress in numerous fields.
1. Round-off Errors:
Definition: Round-off errors occur due to the finite precision of numerical
representation. They arise when a real number with an infinite decimal
expansion is approximated by a finite number of digits.
Example: Consider the number 1/3 in decimal form: 0.3333333.... Rounding it
to a finite decimal representation, say 0.333, introduces a round-off error,
causing a deviation from the actual value.
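The effect can be reproduced directly in Python (a small illustrative sketch, not part of the original notes):

```python
# Round-off error arises because floating-point numbers carry finite precision.

# 0.1 and 0.2 have no exact binary representation, so their computed sum
# is not exactly 0.3.
s = 0.1 + 0.2
print(s)         # 0.30000000000000004
print(s == 0.3)  # False

# Rounding 1/3 = 0.3333... to three digits introduces a round-off error
# of about 3.3e-4.
one_third = 1 / 3
approx = round(one_third, 3)        # 0.333
error = abs(one_third - approx)
print(f"approx = {approx}, error = {error:.2e}")
```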
2. Truncation Errors:
Definition: Truncation errors arise when an infinite process (like a series or
iterative computation) is approximated by a finite one, leading to an incomplete
representation of the exact solution.
Example: Using a finite number of terms in the Taylor series to approximate a
function. For instance, approximating e^x using only the first few terms of the
series expansion and neglecting higher-order terms results in a truncation error.
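The size of the truncation error can be seen by comparing partial sums of the Taylor series of e^x with the true value; the helper name exp_taylor below is illustrative:

```python
import math

# Truncation error: approximate e^x by the first n terms of its Taylor
# series, sum_{k=0}^{n-1} x^k / k!, and compare with math.exp(x).
def exp_taylor(x, n_terms):
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 12):
    approx = exp_taylor(x, n)
    err = abs(math.exp(x) - approx)
    print(f"{n:2d} terms: {approx:.10f}  truncation error = {err:.2e}")
```

Adding more terms shrinks the truncation error; the part of the series that is discarded is exactly the error.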
3. Discretization Errors:
Definition: Discretization errors occur when continuous problems are
approximated in discrete form, leading to inaccuracies due to the discretization
process.
Example: Numerical integration methods applied to approximate a definite
integral by dividing the interval into finite subintervals. The difference between
the exact integral and the numerical approximation due to discretization is a
form of discretization error.
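A short Python sketch (illustrative names, not from the notes) showing discretization error for the composite trapezoidal rule; halving the step size cuts the error roughly fourfold:

```python
# Discretization error: approximate the integral of f(x) = x^2 on [0, 1]
# (exact value 1/3) with the composite trapezoidal rule on n subintervals.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

exact = 1 / 3
for n in (4, 8, 16):
    approx = trapezoid(lambda x: x * x, 0.0, 1.0, n)
    print(f"n = {n:2d}: approx = {approx:.6f}, error = {abs(approx - exact):.2e}")
```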
4. Iterative Convergence Errors:
Definition: In iterative methods, errors can arise due to the inability to reach
the true solution after a finite number of iterations, either because of numerical
instability or limitations of the method.
Example: The iterative process of solving systems of equations, such as the
Gauss-Seidel method for solving linear systems, may not converge for certain
matrices, resulting in divergence and error in the solution.
5. Data Errors:
Definition: Data errors occur due to inaccuracies or imprecisions in input data,
which propagate through computations, leading to erroneous results.
Example: In scientific experiments, if measurements are imprecise or
contaminated by random noise, subsequent calculations or simulations using
this data will inherit these inaccuracies, resulting in data errors.
6. Modeling Errors:
Definition: Modeling errors stem from inaccuracies in the mathematical models
used to represent real-world systems, leading to deviations between the model
and the actual system behavior.
Example: Simulating a physical system using a simplified model that neglects
certain relevant parameters or physical effects, causing discrepancies between
the model's predictions and real-world behavior.
7. Stability and Algorithmic Errors:
Definition: Errors can arise from the numerical instability of algorithms used in
computations, especially when dealing with ill-conditioned problems or
inappropriate algorithms.
Example: The use of an unstable algorithm in solving differential equations
where small changes in input data lead to significantly amplified errors in the
output, making the solution unreliable.
CHAPTER TWO
ROOT FINDING METHODS
Introduction
Root finding methods are numerical techniques used to locate the roots (or
solutions) of an equation. The roots of an equation are the values of the
variable(s) that make the equation true. There are various methods to find these
roots, some of the most commonly used ones include: Newton-Raphson
method, Bisection method, Secant method, Fixed-point iteration and so on.
The Newton-Raphson method
The Newton-Raphson method, also known as Newton's method, is an iterative
numerical technique used for finding successively better approximations to the
roots of a real-valued function. This method is particularly effective for root
finding and is widely used due to its rapid convergence if certain conditions are
met.
Methodology:
1. Root Finding Objective:
The goal of the Newton-Raphson method is to approximate a root of an
equation f(x) = 0 by iteratively refining an initial guess towards a more
accurate solution.
2. Derivative Usage:
The method utilizes the derivative of the function f(x), denoted f'(x), to
calculate subsequent approximations. The derivative represents the slope of the
function and aids in determining the direction and magnitude of corrections.
3. Iterative Procedure:
Initial Guess: Start with an initial guess, x0, for the root of the equation
f(x) = 0.
Update Formula: Improve the approximation using the formula:
x_{n+1} = x_n - f(x_n)/f'(x_n)
Iterate: Repeat the process by substituting the previously computed value, x_n,
into the update formula to obtain x_{n+1}.
4. Convergence Criteria:
The process continues until the difference between successive approximations,
|x_{n+1} - x_n|, falls below a specified tolerance or until a predetermined
number of iterations is reached.
1. Input:
- f(x): The function for which the root is to be found.
- f'(x): The derivative of f(x).
- x0: Initial guess for the root.
- ε: Tolerance limit for convergence.
- N: Maximum number of iterations.
2. Procedure:
1. Initialize: set n = 0 and take the initial guess x0 for the root.
2. Repeat the following steps until convergence or until the maximum
number of iterations is reached:
3. Calculate the next approximation:
x_{n+1} = x_n - f(x_n)/f'(x_n)
4. If |x_{n+1} - x_n| < ε, return x_{n+1} as the root; otherwise set
n = n + 1 and repeat. If n exceeds N, report that the method did not converge.
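The procedure above can be sketched in Python (the function name newton_raphson and its default parameters are illustrative, not from the text):

```python
# Newton-Raphson iteration: repeat x_{n+1} = x_n - f(x_n)/f'(x_n)
# until |x_{n+1} - x_n| < tol or the iteration budget N is exhausted.
def newton_raphson(f, df, x0, tol=1e-10, N=50):
    x = x0
    for _ in range(N):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0; Newton step undefined")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within N iterations")

# Demonstration on f(x) = x^3 - 2x - 5, starting from x0 = 2.
root = newton_raphson(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2, x0=2.0)
print(f"root ≈ {root:.5f}")  # ≈ 2.09455
```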
SYNTHETIC DIVISION
Synthetic division is a shorthand method used to divide polynomials, especially
when dividing by a linear polynomial of the form (x - x0), where x0 is a constant.
This method is particularly useful when dividing by linear factors because it's
quicker and more straightforward compared to long division.
Here is a step-by-step guide to performing synthetic division:
Example 2.1:
Divide the polynomial P(x) = 2x³ + 5x² - 3x - 7 by (x - 2):
Solution
Arrange the coefficients: (2, 5, -3, -7).
Perform synthetic division:
2 | 2   5   -3   -7
  |     4   18   30
  |-----------------
    2   9   15   23
The quotient is 2x² + 9x + 15 and the remainder is 23, so
P(x) = (x - 2)(2x² + 9x + 15) + 23 and P(2) = 23.
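The same tableau can be computed in Python; the helper synthetic_division below is an illustrative sketch:

```python
# Synthetic division of a polynomial by (x - x0), with coefficients given
# in descending order of power. Returns the quotient coefficients and the
# remainder, which equals P(x0) by the remainder theorem.
def synthetic_division(coeffs, x0):
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + x0 * row[-1])   # bring down, multiply, add
    return row[:-1], row[-1]           # quotient, remainder

# Divide P(x) = 2x^3 + 5x^2 - 3x - 7 by (x - 2).
quotient, remainder = synthetic_division([2, 5, -3, -7], 2)
print(quotient, remainder)  # [2, 9, 15] 23
```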
Example 2.2:
Find the root of the function f(x) = x³ - 2x - 5.
Solution
1. Initial guess: let's say x0 = 2 (an initial guess close to the actual root).
2. The derivative of f(x) is f'(x) = 3x² - 2.
Iteration 1 (n = 0):
x1 = x0 - f(x0)/f'(x0) = 2 - (2³ - 2(2) - 5)/(3(2)² - 2) = 2 - (-1)/10 = 2.1
Iteration 2 (n = 1):
x2 = 2.1 - (2.1³ - 2(2.1) - 5)/(3(2.1)² - 2) = 2.1 - 0.061/11.23 ≈ 2.09457
Iteration 3 (n = 2):
x3 = 2.09457 - (2.09457³ - 2(2.09457) - 5)/(3(2.09457)² - 2) ≈ 2.09455
Iteration 4 (n = 3):
x4 = 2.09455 - (2.09455³ - 2(2.09455) - 5)/(3(2.09455)² - 2) ≈ 2.09455
The value of x after four iterations remains approximately 2.09455. The
method converges quickly: the approximation of the root stabilizes by the
third iteration and remains consistent in subsequent iterations.
Example 2.3
Consider the function f(x) = x² - 3x + 2, for which we want to find a root.
1. Function: f(x) = x² - 3x + 2
2. Derivative: f'(x) = 2x - 3
Initial Guess:
Let's start with an initial guess, x0 = 3.
Iterations:
Iteration 1:
x1 = x0 - f(x0)/f'(x0) = 3 - (3² - 3(3) + 2)/(2(3) - 3) = 3 - 2/3 ≈ 2.3333
Iteration 2:
x2 = 2.3333 - ((2.3333)² - 3(2.3333) + 2)/(2(2.3333) - 3)
   = 2.3333 - 0.4444/1.6667 ≈ 2.0667
Iteration 3:
x3 = 2.0667 - ((2.0667)² - 3(2.0667) + 2)/(2(2.0667) - 3)
   = 2.0667 - 0.0711/1.1333 ≈ 2.0039
Iteration 4:
x4 = 2.0039 - ((2.0039)² - 3(2.0039) + 2)/(2(2.0039) - 3) ≈ 2.0000
Here, the method converges to the root x = 2; the correction term shrinks
rapidly because Newton's method converges quadratically near a simple root.
Example 2.4
Use the Newton-Raphson method with the synthetic division approach (or
otherwise) to compute an approximation to one of the zeros of
P(x) = 2x⁴ - 3x² + 3x - 4.
Solution
Take x0 = -2. Evaluate P(-2) by synthetic division:
-2 | 2    0    -3     3    -4
   |     -4     8   -10    14
   |--------------------------
     2   -4     5    -7    10
So P(-2) = 10 and
P(x) = (x + 2)(2x³ - 4x² + 5x - 7) + 10.
To find Q(-2), where Q(x) = 2x³ - 4x² + 5x - 7 is the quotient, divide again:
-2 | 2   -4     5    -7
   |     -4    16   -42
   |--------------------
     2   -8    21   -49 = Q(-2) = P'(-2)
The first Newton step is therefore
x1 = x0 - P(x0)/P'(x0) = -2 - 10/(-49) = -2 + 10/49 ≈ -1.796
Next, evaluate P(-1.796) by synthetic division:
-1.796 | 2    0       -3       3      -4
       |     -3.592    6.451  -6.197   5.742
       |-------------------------------------
         2   -3.592    3.451  -3.197   1.742 = P(-1.796)
Dividing the quotient row again by (x + 1.796) gives P'(-1.796) ≈ -32.565, so
x2 = x1 - P(x1)/P'(x1) = -1.796 - 1.742/(-32.565) ≈ -1.7425
Continuing in this way, the iterates converge to the zero of P near x ≈ -1.739.
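Combining the two ideas, here is a Python sketch (illustrative names) that evaluates P(x0) and P'(x0) with two Horner passes and applies Newton's update:

```python
# Newton's method where P(x0) and P'(x0) are both obtained by synthetic
# (Horner) division: one pass gives the remainder P(x0); a second pass on
# the quotient gives P'(x0).
def horner_eval(coeffs, x0):
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + x0 * row[-1])
    p = row[-1]                       # remainder = P(x0)
    quotient = row[:-1]
    row2 = [quotient[0]]
    for c in quotient[1:]:
        row2.append(c + x0 * row2[-1])
    return p, row2[-1]                # P(x0), P'(x0)

# P(x) = 2x^4 - 3x^2 + 3x - 4, starting at x0 = -2; converges to the
# negative zero near -1.739.
coeffs = [2, 0, -3, 3, -4]
x = -2.0
for _ in range(6):
    p, dp = horner_eval(coeffs, x)
    x -= p / dp
print(f"x ≈ {x:.4f}")
```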
The Newton-Raphson method is a powerful numerical tool for finding roots of
equations, particularly when the function and its derivative are well-behaved
around the root and a good initial guess is provided.
EXERCISE 2.1
Use the Newton-Raphson method to find roots of the following equations,
focusing on different initial guesses and convergence criteria:
1. f(x) = x³ - 6x² + 11x - 6 with an initial guess x0 = 0.
2. f(x) = x⁴ - 3x³ + 2x² - 5x + 1 with an initial guess x0 = 1.5.
3. f(x) = e^x - x² - 3 with an initial guess x0 = 1.
4. f(x) = x⁵ - 4x³ + x + 1 with an initial guess x0 = 1.
5. f(x) = 2x³ - 11.7x² + 17.7x - 5. Start with an initial guess of x0 = 3,
and perform iterations until the solution reaches a tolerance of 10⁻⁴.
THE SECANT METHOD
The Secant Method is an iterative root-finding technique that replaces the
derivative in Newton's formula with a finite-difference slope through the two
most recent iterates. Its key features include:
1. No Derivative Required:
Unlike some root-finding methods, such as Newton's method, the Secant
Method doesn't require knowledge of the derivative. This makes it more
adaptable in situations where obtaining the derivative is complex or impossible.
2. Convergence Rate:
The convergence rate of the Secant Method is superlinear, of order equal to the
golden ratio (≈ 1.618), meaning it converges relatively quickly, though not as
rapidly as Newton's method, whose convergence is quadratic.
3. Applicability:
It's effective for functions whose derivative is unavailable in closed form, or
where the derivative calculation is challenging or impractical.
Limitations and Considerations:
1. Convergence Rate:
- While it converges faster than some simpler methods, such as bisection, it is
somewhat slower than Newton's method, which uses the derivative directly.
2. Convergence Guarantee:
- Unlike the Newton-Raphson method, the Secant Method doesn’t guarantee
convergence in all cases. Oscillation or divergence might occur under certain
conditions.
3. Initial Guesses:
- The accuracy of the result is highly dependent on the choice of initial
guesses. Poor initial guesses can lead to slow convergence or divergence.
Algorithm for the Secant Method:
Input:
- f(x): The function for which the root needs to be found.
- x0, x1: Two initial approximations.
- ε: Tolerance limit for convergence.
- N: Maximum number of iterations.
1. Initialize:
- Set the iteration counter n = 1.
2. Compute the next approximation:
- Calculate the next approximation using the formula:
x = x1 - f(x1)(x1 - x0) / (f(x1) - f(x0))
3. Update:
- Set x0 = x1 and x1 = x.
4. Increment:
- Increment n by 1.
5. Return:
- If |f(x1)| < ε, return x as the root.
- If n > N, return "no convergence after N iterations."
The algorithm iteratively refines the approximation of the root by updating the
guesses x0 and x1 based on the function values until the convergence condition
is met or the maximum number of iterations is reached.
Remember, this algorithm assumes that the chosen initial guesses x0 and x1 are
close enough to the actual root and that the function is continuous in the given
interval. Adjustments in initial guesses and iterations may be required for
specific functions to achieve accurate results.
The Secant Method serves as a valuable alternative to find roots of functions,
especially when the computation of derivatives is challenging. It strikes a
balance between convergence speed and not requiring derivative information,
making it versatile in various problem-solving scenarios. However, careful
consideration of initial guesses and potential convergence issues is crucial for
its effective application.
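The algorithm above can be sketched in Python (the function name secant and its defaults are illustrative):

```python
# Secant method: each step replaces the derivative in Newton's formula
# with the finite-difference slope through the two most recent iterates.
def secant(f, x0, x1, tol=1e-10, N=50):
    for _ in range(N):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            raise ZeroDivisionError("f(x1) - f(x0) = 0; secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x2)) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("no convergence within N iterations")

# Demonstration on f(x) = x^3 - 2x - 5 with starting points 2 and 3.
root = secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(f"root ≈ {root:.5f}")  # ≈ 2.09455
```

Note the guard on the denominator f(x1) - f(x0): when the two function values coincide, the secant step is undefined.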
Example 2.4
Use the Secant Method to find an approximate root of
f(x) = 2x⁴ - 3x² + x - 4 with initial approximations x0 = 1 and x1 = 2.
Solution
Function:
f(x) = 2x⁴ - 3x² + x - 4
Initial Approximations:
x0 = 1 and x1 = 2.
Let's begin the iterations to find the approximate root using the Secant Method.
Iteration 1:
1. Calculate f(x0) = f(1) = 2 - 3 + 1 - 4 = -4
2. Calculate f(x1) = f(2) = 32 - 12 + 2 - 4 = 18
3. Using the Secant Method formula:
x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)) = 2 - 18(2 - 1)/(18 - (-4))
   = 2 - 18/22 ≈ 1.1818
Iteration 2:
1. f(x2) = f(1.1818) ≈ -3.1069
2. x3 = 1.1818 - (-3.1069)(1.1818 - 2)/(-3.1069 - 18) ≈ 1.3022
Iteration 3:
1. f(x3) = f(1.3022) ≈ -2.0340
2. x4 = 1.3022 - (-2.0340)(1.3022 - 1.1818)/(-2.0340 - (-3.1069)) ≈ 1.5305
Performing additional iterations further refines the approximation; the iterates
continue toward the root near x ≈ 1.451. This process can be continued until
reaching the desired level of accuracy or convergence.
Example 2.5
Use the Secant Method to find an approximate root of f(x) = x² - 3x + 2
using initial approximations x0 = 0 and x1 = 1.
Solution
Function:
f(x) = x² - 3x + 2
Initial Approximations:
x0 = 0 and x1 = 1
Let's start the iteration process.
Iteration 1:
1. f(x0) = f(0) = 2
2. f(x1) = f(1) = 0
3. Using the Secant Method formula:
x = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)) = 1 - 0(1 - 0)/(0 - 2) = 1
Since f(x1) = f(1) = 0, the point x = 1 is already an exact root of f, and the
convergence criterion |f(x1)| < ε is satisfied immediately: the method
terminates after a single step. Note that blindly continuing the iteration would
be invalid: after updating x0 = 1 and x1 = 1, the denominator f(x1) - f(x0)
becomes zero and the secant formula is undefined. A practical implementation
should therefore check |f(x1)| (and the denominator) before each step.
Example 2.6
Use the Secant Method to find an approximate root of the function
f(x) = x³ - 2x - 5 using the initial approximations x0 = 2 and x1 = 3.
Solution
Function:
f(x) = x³ - 2x - 5
Initial Approximations:
x0 = 2 and x1 = 3.
Let's start with the iteration process.
Iteration 1:
1. f(x0) = f(2) = 2³ - 2(2) - 5 = 8 - 4 - 5 = -1
2. f(x1) = f(3) = 3³ - 2(3) - 5 = 27 - 6 - 5 = 16
3. Using the Secant Method formula:
x2 = 3 - 16(3 - 2)/(16 - (-1)) = 3 - 16/17 ≈ 2.0588
Iteration 2:
1. f(2.0588) ≈ -0.3909
2. x3 = 2.0588 - (-0.3909)(2.0588 - 3)/(-0.3909 - 16) ≈ 2.0813
Iteration 3:
1. f(2.0813) ≈ -0.1468
2. x4 = 2.0813 - (-0.1468)(2.0813 - 2.0588)/(-0.1468 - (-0.3909)) ≈ 2.0948
Iteration 4:
1. f(2.0948) ≈ 0.0028
2. x5 = 2.0948 - 0.0028(2.0948 - 2.0813)/(0.0028 - (-0.1468)) ≈ 2.0946
After four iterations the estimate has settled near x ≈ 2.0946, in agreement
with the known root of x³ - 2x - 5 at x ≈ 2.09455.
5. m(x) = x⁵ - 4x³ + x + 1; use the Secant Method to find an approximate
root of the equation with initial approximations x0 = 1 and x1 = 2. Carry out
iterations until the solution converges, and determine the approximated root.
These exercises are designed to practice the application of the Secant Method in
finding roots of different functions, employing varying initial guesses and
iterations to obtain approximate roots using this numerical technique.
FIXED POINT ITERATION METHOD
Fixed-point iteration methods are numerical techniques used to find a fixed
point of a function, that is, a point x where g(x) = x. The method is used to
solve equations of the form g(x) = x, where g(x) is a rearrangement of the
original equation f(x) = 0, often chosen to make the root easier to compute by
iteration.
- Verify convergence by checking whether |x_{n+1} - x_n| is less than a
specified tolerance, or stop on reaching a maximum number of iterations.
Advantages and Considerations:
1. Simplicity: - Fixed-point iteration methods are straightforward and often
easier to implement compared to some other numerical techniques.
2. Convergence and Divergence:
- The convergence of fixed-point iteration methods is not guaranteed for all
functions or initial guesses. Some functions might diverge or converge slowly.
3. Choosing g(x):
- The choice of g(x) is crucial. A proper rearrangement of the equation is
required to ensure convergence: g(x) should be a contraction near the root,
with |g'(x)| < 1 in the vicinity of the fixed point.
Limitations:
1. Convergence Rate:
- Fixed-point iteration may converge slowly for certain functions or initial
guesses, requiring more iterations to achieve the desired accuracy.
2. Convergence Guarantee:
- Unlike some other methods, fixed-point iteration methods do not guarantee
convergence for all functions or initial guesses.
Fixed-point iteration methods are a fundamental tool in numerical analysis,
often used for solving equations or finding fixed points of functions. They are
versatile but require careful selection of the function g(x) and initial guess to
ensure convergence and accurate solutions.
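A minimal Python sketch of the method (the function name fixed_point is illustrative; g(x) = cos x is a standard demonstration contraction, not an example from these notes):

```python
import math

# Fixed-point iteration x_{n+1} = g(x_n). Convergence requires g to be a
# contraction, i.e. |g'(x)| < 1, near the fixed point.
def fixed_point(g, x0, tol=1e-10, N=100):
    x = x0
    for _ in range(N):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within N iterations")

# g(x) = cos(x) has |g'(x)| = |sin(x)| < 1 near its fixed point,
# so the iteration converges to x = cos(x) ≈ 0.739085.
p = fixed_point(math.cos, 1.0)
print(f"fixed point ≈ {p:.6f}")  # ≈ 0.739085
```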
Example 2.7
Use the Fixed-Point Iteration method with g(x) = √((x + 3)/2), a
rearrangement of 2x² - x - 3 = 0, and initial guess x0 = 1.
Solution
1. Initialize:
- Set the initial guess x0 = 1.
- Set a tolerance, say 10⁻⁵, and a maximum number of iterations, let's take 10.
2. Iteration 1:
x1 = g(x0) = √((1 + 3)/2) = √2 ≈ 1.4142
3. Iteration 2:
x2 = g(x1) = √((1.4142 + 3)/2) ≈ 1.4856
4. Iteration 3:
x3 = g(x2) = √((1.4856 + 3)/2) ≈ 1.4976
5. Iteration 4:
x4 = g(x3) = √((1.4976 + 3)/2) ≈ 1.4996
6. Iteration 5:
x5 = g(x4) = √((1.4996 + 3)/2) ≈ 1.4999
7. Iteration 6:
x6 = g(x5) ≈ 1.5000
The iterates approach x = 1.5, which is indeed a fixed point:
g(1.5) = √((1.5 + 3)/2) = √2.25 = 1.5. The method converges because
|g'(x)| < 1 near the fixed point.
Example 2.8
Use the Fixed-Point Iteration method to find an approximate solution of
f(x) = 2x⁴ - 3x² + x - 4 = 0.
Solution
Function: f(x) = 2x⁴ - 3x² + x - 4
Rearranged Equation for Fixed Point:
Rearranging the equation as 2x⁴ = 3x² - x + 4 and solving for x gives
x = g(x) = ((3x² - x + 4)/2)^(1/4)
Fixed-Point Iteration Steps:
1. Initialization:
- Set the initial guess x0 to a convenient value, for instance x0 = 0.
- Define a tolerance level, say 10⁻⁵, and a maximum number of iterations of 5.
2. Iterations:
Iteration 1:
x1 = g(x0) = ((3(0)² - 0 + 4)/2)^(1/4) = 2^(1/4) ≈ 1.1892
Iteration 2:
x2 = g(x1) = ((3(1.1892)² - 1.1892 + 4)/2)^(1/4) ≈ 1.3704
Iteration 3:
x3 = g(x2) ≈ 1.4257
Iteration 4:
x4 = g(x3) ≈ 1.4430
Iteration 5:
x5 = g(x4) ≈ 1.4485
After five iterations the method is approaching the fixed point x ≈ 1.451,
which is a root of f(x).
Example 2.9
Find an approximate root of the function f(x) = x² - 3x + 2 using the fixed-
point iteration method.
Solution
Function:
f(x) = x² - 3x + 2
Rearranged Equation for Fixed Point:
To find x = g(x), add x to both sides of f(x) = 0:
x = x² - 3x + 2 + x
x = g(x) = x² - 2x + 2
Initial Approximation:
For the initial guess, let's take x0 = 2.1, near the root x = 2. (Starting exactly
at x0 = 2 would sit on the fixed point, but that fixed point is unstable.)
Fixed-Point Iteration Steps:
1. Initialize:
- Set the initial guess x0 = 2.1.
- Choose a tolerance, say 10⁻⁵, and a maximum number of iterations, say 10.
2. Iteration 1:
x1 = g(x0) = (2.1)² - 2(2.1) + 2 = 4.41 - 4.2 + 2 = 2.21
3. Iteration 2:
x2 = g(x1) = (2.21)² - 2(2.21) + 2 = 4.8841 - 4.42 + 2 = 2.4641
4. Iteration 3:
x3 = g(x2) = (2.4641)² - 2(2.4641) + 2 ≈ 3.1436
5. Iteration 4:
x4 = g(x3) ≈ (3.1436)² - 2(3.1436) + 2 ≈ 5.5950
6. Iteration 5:
x5 = g(x4) ≈ (5.5950)² - 2(5.5950) + 2 ≈ 22.114
The iterations diverge, producing increasingly large values. The reason is that
|g'(x)| = |2x - 2| > 1 near x = 2, so this fixed point repels the iterates. Not all
functions, rearrangements, and initial guesses will converge using the Fixed-
Point Iteration method.
Example 2.10
Use the Fixed-Point Iteration method to find an approximate solution of the
function f(x) = x³ - 2x - 5.
Solution
Function: f(x) = x³ - 2x - 5
Rearranged Equation for Fixed Point:
Adding x to both sides of x³ - 2x - 5 = 0 gives x = g(x):
x = x³ - 2x - 5 + x
x = g(x) = x³ - x - 5
Fixed-Point Iteration Steps:
1. Initialize:
- Set the initial guess x0 to start the iteration.
- Choose a tolerance value for the convergence check.
- Select the maximum number of iterations.
2. Iteration:
- Apply the fixed-point iteration formula x_{n+1} = g(x_n) and calculate
x_{n+1} at each step.
Let's choose an initial guess, such as x0 = 2, and perform the first few
iterations to check for convergence.
Initial Approximation:
x0 = 2
Fixed-Point Iteration:
1. Iteration 1:
x1 = g(x0) = 2³ - 2 - 5 = 8 - 7 = 1
2. Iteration 2:
x2 = g(x1) = 1³ - 1 - 5 = -5
3. Iteration 3:
x3 = g(x2) = (-5)³ - (-5) - 5 = -125 (a large negative value)
From the third iteration the values grow rapidly in magnitude and the
iterations diverge, indicating that the method is not converging for this
rearrangement and initial guess. Indeed, |g'(x)| = |3x² - 1| > 1 near the root
x ≈ 2.0946, so this choice of g(x) is unsuitable for the Fixed-Point Iteration
method.
Exercise 2.4
Use the Fixed-Point Iteration method to find the roots of the following:
1. f(x) = x² - x - 2; x0 = 1.
2. g(x) = 5x - 2; x0 = 0.
3. h(x) = e^x - 2; x0 = 1.
4. k(x) = (2x³ + 1)/(x + 1); x0 = 2.
5. m(x) = ln(x² + 3); x0 = 1.
These exercises are designed to help practice the application of the Fixed-Point
Iteration method, involving rearranging functions to find appropriate fixed-point
equations and conducting iterations to calculate the fixed points of various
functions.
BISECTION METHOD:
AN ITERATIVE ROOT-FINDING TECHNIQUE
The bisection method is an iterative numerical technique used for finding
approximate solutions of an equation. It's particularly useful for finding roots of
continuous functions. The method is based on the Intermediate Value Theorem,
which states that if a function f(x) is continuous on an interval [a, b] and f(a)
and f(b) have opposite signs, then there exists at least one value c in the interval
[a, b] such that f(c) = 0 (i.e., there is a root or a zero of the function in that
interval).
1. Initial Bracketing: Start with an interval [a, b] that contains the root of the
function, and make sure that f(a) and f(b) have opposite signs. This is crucial
for the application of the Intermediate Value Theorem.
2. Iterative Process: Divide the interval in half and check which subinterval
contains the root. To do this, find the midpoint c of the interval [a, b] using
c = (a + b)/2.
3. Checking Sign: Calculate f(c) and examine the sign of f(c). Update the
interval [a, b] to be [a, c] or [c, b] based on the sign of f(c). The new interval
will still contain the root due to the sign change criterion (since the function is
continuous).
4. Termination Condition:
- Iterations continue until the length of the interval is sufficiently small or
until a predefined tolerance is reached.
5. Repeat: Continue this process iteratively by halving the interval that contains
the root until the desired level of accuracy is achieved.
The bisection method guarantees convergence to a root because at each step, the
interval containing the root is reduced by half. The process continues until the
width of the interval becomes smaller than a predefined tolerance level.
This method is relatively simple but effective for finding roots of continuous
functions, especially when other methods (such as Newton's method) might not
be suitable or when a reliable initial guess is not available.
Algorithm Steps:
1. Initialize:
- Set n = 0 (iteration counter).
- Calculate the initial midpoint: c0 = (a + b)/2.
2. Loop:
- Repeat until the interval is sufficiently small or until a predetermined
number of iterations.
2.1. Evaluate Function:
- Calculate f(c_n) at the current midpoint.
2.2. Check Tolerance or Convergence:
- If the absolute value of the function at the midpoint satisfies |f(c_n)| ≤ ε,
the algorithm has converged; the current midpoint c_n is an acceptable root
approximation.
- If the interval width satisfies (b - a) < ε, the algorithm has converged; the
midpoint c_n is an acceptable root approximation.
2.3. Update Interval:
- If f(a) · f(c_n) < 0, set b = c_n; otherwise set a = c_n. Increment n and
return to step 2.1.
This algorithm converges to a root of the function within the provided interval
by iteratively narrowing down the interval where the root is expected to lie,
based on the behavior of the function.
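The algorithm can be sketched in Python as follows (the function name bisection and its defaults are illustrative):

```python
# Bisection: keep the half-interval on which the function changes sign,
# halving the bracket at every iteration.
def bisection(f, a, b, tol=1e-10, N=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(N):
        c = (a + b) / 2
        fc = f(c)
        if abs(fc) <= tol or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:       # sign change in [a, c]: keep left half
            b, fb = c, fc
        else:                 # sign change in [c, b]: keep right half
            a, fa = c, fc
    return (a + b) / 2

# Demonstration on f(x) = x^2 - 3x + 2 over [1.5, 3], where
# f(1.5) = -0.25 < 0 and f(3) = 2 > 0.
root = bisection(lambda x: x**2 - 3*x + 2, 1.5, 3.0)
print(f"root ≈ {root:.6f}")  # ≈ 2.0
```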
Example 2.11: Use the bisection method to find a root of the function
f(x) = x² - 3x + 2.
Solution
Bisection Method Steps:
Given: f(x) = x² - 3x + 2.
- We need an interval [a, b] where f(a) · f(b) < 0 (indicating a root exists
within the interval).
- Let's choose the interval [1.5, 3], because f(1.5) = -0.25 and f(3) = 2, so
their product is negative.
Let's perform the iterations to find the root within [1.5, 3] using the bisection
method.
Iteration 1:
- Initial interval: [a0, b0] = [1.5, 3]
- Midpoint: c0 = (1.5 + 3)/2 = 2.25
- Calculate f(c0):
  f(2.25) = (2.25)² - 3(2.25) + 2 = 5.0625 - 6.75 + 2 = 0.3125
- New interval selection based on the sign of f(c0):
  f(2.25) > 0 and f(1.5) < 0, so the root is in the subinterval [1.5, 2.25].
- Set a1 = 1.5 and b1 = 2.25.
Iteration 2:
- Updated interval: [a1, b1] = [1.5, 2.25]
- Midpoint: c1 = (1.5 + 2.25)/2 = 1.875
- f(1.875) = 3.515625 - 5.625 + 2 = -0.109375
- f(1.875) < 0, so the root is in the subinterval [1.875, 2.25].
- Set a2 = 1.875 and b2 = 2.25.
Iteration 3:
- Updated interval: [a2, b2] = [1.875, 2.25]
- Midpoint: c2 = (1.875 + 2.25)/2 = 2.0625
- f(2.0625) = 4.25390625 - 6.1875 + 2 = 0.06640625
- f(2.0625) > 0, so the root is in the subinterval [1.875, 2.0625].
- Set a3 = 1.875 and b3 = 2.0625.
Iteration 4:
- Updated interval: [a3, b3] = [1.875, 2.0625]
- Midpoint: c3 = (1.875 + 2.0625)/2 = 1.96875
- f(1.96875) = 3.8759765625 - 5.90625 + 2 = -0.0302734375
- f(1.96875) < 0, so the root is in the subinterval [1.96875, 2.0625].
- Set a4 = 1.96875 and b4 = 2.0625.
Iteration 5:
- Updated interval: [a4, b4] = [1.96875, 2.0625]
- Midpoint: c4 = (1.96875 + 2.0625)/2 = 2.015625
- f(2.015625) = 4.062744140625 - 6.046875 + 2 = 0.015869140625
- f(2.015625) > 0, so the root is in the subinterval [1.96875, 2.015625].
- Set a5 = 1.96875 and b5 = 2.015625.
The process can be continued for more iterations to achieve a desired level of
accuracy or until convergence. Within five iterations, the bisection method has
narrowed the bracket to [1.96875, 2.015625], closing in on the exact root x = 2.
Example 2.12
Use the bisection method (or otherwise) to find a root of the equation
x³ - x - 1 = 0.
Solution
1st iteration:
f(1) = -1 < 0 and f(2) = 5 > 0
∴ A root lies between 1 and 2.
x0 = (1 + 2)/2 = 1.5, f(x0) = f(1.5) = 0.875 > 0
2nd iteration:
f(1) = -1 < 0 and f(1.5) = 0.875 > 0
∴ The root lies between 1 and 1.5.
x1 = (1 + 1.5)/2 = 1.25, f(x1) = f(1.25) = -0.29688 < 0
3rd iteration:
f(1.25) = -0.29688 < 0 and f(1.5) = 0.875 > 0
∴ The root lies between 1.25 and 1.5.
x2 = (1.25 + 1.5)/2 = 1.375, f(x2) = f(1.375) = 0.22461 > 0
Continuing in the same way, at the 11th iteration we have
f(x10) = f(1.32471) ≈ 0.00005, so the root is approximately 1.32471.
Example 2.13
Solve the equation f(x) = x^3 - 2x - 5 = 0 using the bisection method to find
the root within the interval [2, 3].
Solution
The bisection method relies on the intermediate value theorem and iteratively
narrows down the interval where the root lies.
Given:
- Function: f(x) = x^3 - 2x - 5
- Initial interval: [2, 3]
Bisection Method:
Iteration 1:
- Initial Interval: [2, 3]
- Midpoint: c_1 = (2 + 3)/2 = 2.5
- Function Values:
f(2) = 2^3 - 2(2) - 5 = -1, f(2.5) = 2.5^3 - 2(2.5) - 5 = 5.625
- Root in Interval: [2, 2.5]
Iteration 2:
- Updated Interval: [2, 2.5]
- Midpoint: c_2 = (2 + 2.5)/2 = 2.25
- Function Values: f(2) = -1, f(2.25) = 2.25^3 - 2(2.25) - 5 = 1.890625
- Root in Interval: [2, 2.25]
Iteration 3:
- Updated Interval: [2, 2.25]
- Midpoint: c_3 = (2 + 2.25)/2 = 2.125
- Function Values: f(2.125) = 2.125^3 - 2(2.125) - 5 = 0.345703125
- Root in Interval: [2, 2.125]
Iteration 4:
- Updated Interval: [2, 2.125]
- Midpoint: c_4 = (2 + 2.125)/2 = 2.0625
- Function Values:
f(2.0625) = 2.0625^3 - 2(2.0625) - 5 = -0.351318359375
- Root in Interval: [2.0625, 2.125]
The process continues until the width of the interval reaches the desired
tolerance or the maximum number of iterations. As we proceed, the bisection
method repeatedly narrows down the interval where the root exists. In this
case, after four iterations, the approximate solution is within the interval
[2.0625, 2.125], getting closer to the actual root value (≈ 2.0946).
Example 2.14
Use the bisection method to find an approximate solution for the equation
f(x) = x^5 - 4x^3 + x + 1 = 0 within the interval [-2, 2].
Solution
Bisection Method Steps:
1. Initial Interval: [-2, 2]
- f(-2) = (-2)^5 - 4(-2)^3 + (-2) + 1 = -32 + 32 - 2 + 1 = -1
- f(2) = 2^5 - 4(2)^3 + 2 + 1 = 32 - 32 + 2 + 1 = 3
- Since the signs of f(-2) and f(2) are different, there is a root within the
interval [-2, 2].
2. Iterative Process:
- 1st iteration:
- Midpoint: c_1 = (-2 + 2)/2 = 0
- f(0) = 0^5 - 4(0)^3 + 0 + 1 = 1
- New interval: [-2, 0] (as the signs of f(-2) and f(0) are different)
- 2nd iteration:
- Midpoint: c_2 = (-2 + 0)/2 = -1
- f(-1) = (-1)^5 - 4(-1)^3 + (-1) + 1 = -1 + 4 - 1 + 1 = 3
- New interval: [-2, -1] (as the signs of f(-2) and f(-1) are different)
- 3rd iteration:
- Midpoint: c_3 = (-2 + (-1))/2 = -1.5
- f(-1.5) = (-1.5)^5 - 4(-1.5)^3 + (-1.5) + 1 = -7.59375 + 13.5 - 0.5 = 5.40625
- New interval: [-2, -1.5] (as the signs of f(-2) and f(-1.5) are different)
- 4th iteration:
- Midpoint: c_4 = (-2 + (-1.5))/2 = -1.75
- f(-1.75) = (-1.75)^5 - 4(-1.75)^3 + (-1.75) + 1 ≈ -16.4131 + 21.4375 - 0.75 ≈ 4.2744
- New interval: [-2, -1.75] (as the signs of f(-2) and f(-1.75) are different)
- 5th iteration:
- Midpoint: c_5 = (-2 + (-1.75))/2 = -1.875
- f(-1.875) = (-1.875)^5 - 4(-1.875)^3 + (-1.875) + 1 ≈ -23.1743 + 26.3672 - 0.875 ≈ 2.3179
- New interval: [-2, -1.875] (as the signs of f(-2) and f(-1.875) are different)
The process continues until the interval becomes sufficiently small or a set
number of iterations have been reached. In this case, after five iterations,
the approximate solution for the root lies within the interval [-2, -1.875],
with the root near -1.96.
The bisection method is a fundamental and reliable technique, especially in
situations where other methods might not be applicable. While it might
converge slower than some other methods, its robustness and simplicity make it
a useful tool for root-finding in various real-world applications.
Exercise 2.5
Use the bisection method to find roots of the following equations. Perform
iterations until the solution converges, and provide the approximated root.
1. f(x) = x^3 - 2x - 5, [1, 3].
2. f(x) = cos(x) - x, [0, π/2].
3. f(x) = e^x - 3x^2, [-2, 2].
4. f(x) = x^4 - 8x^2 + 9, [-3, -1].
5. f(x) = 2x^3 - 4x + 1, [-2, 2]. Perform iterations until the solution reaches
a tolerance of 10^-4.
CHAPTER THREE
INTERPOLATION AND APPROXIMATION
Interpolation and approximation are two fundamental techniques in
mathematics and data analysis used to estimate unknown values between known
data points or to represent a complicated function with a simpler one,
respectively.
INTERPOLATION:
Interpolation involves estimating the value of a function between two known
data points. The primary objective is to construct a function that passes through
all the given data points. This is particularly useful when you have discrete data
and need to estimate values at points that lie between the known data points.
Key methods for interpolation include: Linear Interpolation, Finite Differences,
Newton's Divided Differences, Lagrange Interpolation.
Interpolation methods can accurately estimate intermediate values, assuming the
underlying function behaves consistently between data points. However, using
high-degree polynomials for interpolation can sometimes lead to oscillations or
numerical instability, a phenomenon known as Runge's phenomenon.
APPROXIMATION:
Approximation, on the other hand, aims to represent a complex function with a
simpler one while minimizing the error. This method is often used when dealing
with functions that are difficult to work with or computationally expensive.
Common techniques for approximation include: Polynomial Approximation,
Least Squares Approximation, Chebyshev Approximation.
Linear Interpolation
Linear interpolation is a method used to estimate unknown values between two
known data points. It assumes that there's a linear relationship between the
known data points. This technique is particularly useful when you have two data
points and want to estimate a value that lies between them.
To estimate a value by linear interpolation:
1. Identify the known data points:
Let the two known points be (x_1, y_1) and (x_2, y_2). The linear interpolation
formula is
y = y_1 + ((x - x_1)/(x_2 - x_1)) (y_2 - y_1)
where x is the point between x_1 and x_2 where you want to estimate the value
and y is the estimated value at x.
2. Determine the interval:
Identify the interval where the estimation is required. This interval should lie
between the known data points.
Example 3.1:
You have two data points: (3, 12) and (8, 24). Estimate the value at x = 5 using
linear interpolation.
Solution:
Using the provided data points:
Given:
- (x_1, y_1) = (3, 12)
- (x_2, y_2) = (8, 24)
- x = 5
Using the linear interpolation formula:
y = y_1 + ((x - x_1)/(x_2 - x_1)) (y_2 - y_1)
y = 12 + ((5 - 3)/(8 - 3)) (24 - 12) = 12 + (2/5)(12) = 12 + 4.8 = 16.8
So the estimated value at x = 5 is y = 16.8.
Example 3.2:
Given the points (-1, 4) and (5, 20), find the value at x = 2 using linear
interpolation.
Solution:
Given:
- (x_1, y_1) = (-1, 4)
- (x_2, y_2) = (5, 20)
- x = 2
Using the linear interpolation formula:
y = y_1 + ((x - x_1)/(x_2 - x_1)) (y_2 - y_1)
y = 4 + ((2 - (-1))/(5 - (-1))) (20 - 4) = 4 + (3/6)(16) = 4 + 8 = 12
So the estimated value at x = 2 is y = 12.
Example of Extrapolation:
Suppose you have data points for the temperature at various times:
- At 12:00 PM, the temperature is 20°C
- At 3:00 PM, the temperature is 25°C
Now, using extrapolation, you might attempt to estimate the temperature at 6:00
PM by assuming the same rate of change observed between 12:00 PM and 3:00
PM will continue.
Extrapolation Method:
1. Identify the known data points:
Determine the last known points within the data range.
2. Understand the trend or relationship:
Analyze the trend or relationship between the known data points. This could
be linear, quadratic, exponential, etc.
3. Apply the pattern:
Use this trend to predict or estimate values outside the given range.
- Extrapolation assumes that the trend observed within the known range will
continue. However, this might not always be the case.
- If the relationship between the variables changes outside the range of known
data, extrapolation can yield inaccurate results.
- It's important to exercise caution and consider the validity of the model or
pattern used for extrapolation.
Example of Extrapolation in a Formula:
Given the points (1, 3) and (3, 9), if you notice a linear relationship
y = mx + c, the slope of the line is
m = (change in y)/(change in x) = (9 - 3)/(3 - 1) = 3.
The intercept c can be calculated using one of the known points, say (1, 3), as
c = y - mx = 3 - 3(1) = 0. So the linear relationship is y = 3x. If you want to
extrapolate for x = 4, you would use y = 3(4) = 12. This is based on the
assumption that the linear relationship continues beyond the observed data
points.
Example 3.3
Given the points (-2, 6) and (1, 9), determine an estimate for x = 4 using linear
extrapolation.
Solution:
Given data points:
- (x_1, y_1) = (-2, 6)
- (x_2, y_2) = (1, 9)
- x = 4
Calculate the slope of the line:
m = (y_2 - y_1)/(x_2 - x_1) = (9 - 6)/(1 - (-2)) = 3/3 = 1
The intercept (c) can be calculated using one of the known points and the
slope, for instance, using (1, 9):
c = y - mx = 9 - 1(1) = 8
Therefore, the linear relationship is y = x + 8, and the estimated value at
x = 4 using linear extrapolation is y = 4 + 8 = 12.
Example 3.4
You have two data points: (3, 10) and (7, 22). Estimate the value at x = 10
using linear extrapolation.
Solution:
Given data points:
- (x_1, y_1) = (3, 10)
- (x_2, y_2) = (7, 22)
- x = 10
Calculate the slope of the line:
m = (change in y)/(change in x) = (y_2 - y_1)/(x_2 - x_1) = (22 - 10)/(7 - 3) = 12/4 = 3
Calculate the intercept (c) using one of the known points, for instance, using
(7, 22):
c = y - mx = 22 - 3(7) = 22 - 21 = 1
Thus, the linear relationship is y = 3x + 1.
For x = 10:
y = 3x + 1 = 3(10) + 1 = 30 + 1 = 31
Therefore, the estimated value at x = 10 using linear extrapolation is y = 31.
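Both linear interpolation (Examples 3.1 and 3.2) and linear extrapolation (Examples 3.3 and 3.4) evaluate the same straight-line formula; only the location of x relative to the known points differs. A minimal Python sketch (the name `linear_estimate` is our own):

```python
def linear_estimate(p1, p2, x):
    """Estimate y at x on the line through p1 = (x1, y1) and p2 = (x2, y2).

    For x inside [x1, x2] this is interpolation; outside, extrapolation.
    """
    (x1, y1), (x2, y2) = p1, p2
    return y1 + (x - x1) / (x2 - x1) * (y2 - y1)

print(linear_estimate((3, 12), (8, 24), 5))    # Example 3.1: 16.8
print(linear_estimate((3, 10), (7, 22), 10))   # Example 3.4: 31.0
```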
Exercise 3.1:
You are given two data points: (2, 8) and (5, 17).
1. Estimate the value at x = 3 using linear interpolation.
2. If there's a linear relationship, predict the value at x = 6.
Exercise 3.2:
Consider the data points: (-1, 3) and (2, 9).
1. Using linear interpolation, estimate the value at x = 0.
2. Assuming the same rate of change continues, extrapolate the value at x = 5.
Exercise 3.3:
Given the points: (1, 5) and (4, 11),
1. Estimate the value at x = 2 using linear interpolation.
2. Assuming a linear relationship, predict the value at x = 6 using extrapolation.
Exercise 3.4:
You have data points: (-3, 2) and (2, 12).
1. Using linear interpolation, estimate the value at x = 0.
2. If the relationship is quadratic, y = ax^2 + bx + c, attempt to extrapolate
the value at x = 4.
FINITE DIFFERENCES
1. Forward Difference: It is calculated by taking the difference between
function values at two consecutive points in the forward direction.
For a function f(x), the forward difference is given by:
Δf(x) = (f(x + Δx) - f(x)) / Δx
where Δx is the step size.
These methods are extensively used in numerical analysis and are fundamental
in many algorithms used for solving differential equations or analyzing data in
various fields, including physics, engineering, finance, and computer science.
2. Second Differences: The second differences are obtained by applying the
difference operator to the first differences. For a sequence y_i,
Δ^2 y_i = Δ(Δy_i) = Δ(y_{i+1} - y_i) = y_{i+2} - 2y_{i+1} + y_i
The second differences provide information about the acceleration or curvature
of the sequence.
3. Third Differences: Continuing the process of differences, the third
differences involve taking the differences between consecutive second
differences. If you have second differences Δ^2 y_i, the third differences are
calculated as:
Δ^3 y_i = Δ(Δ^2 y_i) = y_{i+3} - 3y_{i+2} + 3y_{i+1} - y_i
The third differences provide information about the rate of change in the
second differences.
Now, let's establish the general formula for the nth order differences.
For a sequence y_i, where i = 1, 2, 3, ..., n:
The first differences Δy_i are calculated as Δy_i = y_{i+1} - y_i.
The second differences Δ^2 y_i are calculated as
Δ^2 y_i = Δ(Δy_i) = y_{i+2} - 2y_{i+1} + y_i
The third differences Δ^3 y_i are calculated as
Δ^3 y_i = Δ(Δ^2 y_i) = y_{i+3} - 3y_{i+2} + 3y_{i+1} - y_i
The nth differences Δ^n y_i can be expressed using the binomial theorem and
the finite differences. In general,
Δ^n y_i = Δ^{n-1} y_{i+1} - Δ^{n-1} y_i
defines the nth order differences.
Difference table
The difference table is the standard format for displaying finite differences.
Its diagonal pattern makes each entry, except for the x_i and y_i columns, the
difference of its two nearest neighbours to the left. Each difference proves to
be a combination of the y values in column two.

x_0   y_0
             Δy_0
x_1   y_1            Δ^2 y_0
             Δy_1             Δ^3 y_0
x_2   y_2            Δ^2 y_1            Δ^4 y_0
             Δy_2             Δ^3 y_1
x_3   y_3            Δ^2 y_2
             Δy_3
x_4   y_4
The general result is
Δ^n y_i = Σ_{k=0}^{n} (-1)^k C(n, k) y_{i+n-k}
Here, C(n, k) represents the binomial coefficient "n choose k", and y_{i+n-k}
denotes the (n - k)th term beyond y_i. This general formula allows you to
calculate the nth order differences for a given sequence.
DIFFERENCE FORMULAS
The difference formulas for elementary functions often parallel those in
calculus.
i. The differences of a constant function are zero:
Δc = 0
ii. The difference of a constant multiple of a function:
Δ(c u_i) = c Δu_i
iii. The difference of a sum of two functions:
Δ(u_i + v_i) = Δu_i + Δv_i
iv. More generally, for a linear combination,
Δ(c_1 u_i + c_2 v_i) = c_1 Δu_i + c_2 Δv_i
Here, u_i and v_i are functions of some variable (e.g., x), and Δ represents
the difference operator.
This property allows you to apply finite differences to each term in a linear
combination of functions separately, and the result is the same as taking the
linear combination of the finite differences of individual functions. It is
analogous to the linearity property of derivatives in calculus.
v. The difference of a product
The difference of a product of two functions can be expressed using the product
rule for finite differences. If you have two functions u(x) and v(x), the
finite difference of their product u(x) v(x) is given by:
Δ(u(x) v(x)) = u(x + Δx) Δv(x) + v(x) Δu(x)
This expression is similar to the product rule for derivatives in calculus,
where the derivative of a product of two functions is the product of the first
function and the derivative of the second, plus the product of the second
function and the derivative of the first.
vi. The difference of a quotient
The difference of a quotient of two functions can be expressed using the
quotient rule for finite differences. If you have two functions u(x) and v(x)
such that v(x) is not equal to zero, the finite difference of their quotient
u(x)/v(x) is given by:
Δ(u(x)/v(x)) = (u(x + Δx) v(x) - u(x) v(x + Δx)) / (v(x + Δx) v(x))
This expression involves the products and differences of the functions and their
finite differences. Similar to the product rule, it becomes more involved than the
sum or product rule. In practice, the finite difference of a quotient is less
common than the sum or product rules due to its complexity.
The key is to be cautious of the denominator v(x) , making sure it is not equal to
zero to avoid division by zero issues. If v(x) is equal to zero at any point, the
finite difference for the quotient is not well-defined.
The finite difference quotient rule is analogous to the quotient rule for
derivatives in calculus, where the derivative of a quotient is given by a formula
involving the numerator, denominator, and their derivatives.
vii. The difference of the power function
Δ(c^k) = c^{k+1} - c^k = c^k (c - 1)
viii. The differences of the trigonometric functions
Sine Function:
Δ(sin(x)) = sin(x + Δx) - sin(x) = 2 cos(x + Δx/2) sin(Δx/2)
Cosine Function:
Δ(cos(x)) = cos(x + Δx) - cos(x) = -2 sin(x + Δx/2) sin(Δx/2)
These expressions involve the trigonometric functions sine and cosine and
include the angle sum and difference identities.
These formulas provide a way to compute the finite differences for sine and
cosine functions. Note that for small values of Δx, these expressions simplify
further. Also, since sine and cosine functions are periodic, the finite differences
will exhibit periodic behavior as well.
ix. The difference of the logarithmic function
The finite difference of the natural logarithmic function, Δ(ln(x)), is
Δ(ln(x)) = ln(x + Δx) - ln(x) = ln((x + Δx)/x)
This expression arises from the fact that the logarithm of a quotient is equal
to the difference of the logarithms:
ln((x + Δx)/x) = ln(x + Δx) - ln(x)
The finite difference of the logarithmic function represents the logarithm of
the ratio of the new value x + Δx to the old value x.
It's worth noting that this formula holds as long as x and x + Δx are both
positive, as the natural logarithm is undefined for non-positive arguments.
If you have a different base for the logarithm (e.g., log_10(x) or log_2(x)),
you can apply the same identity in that base.
Example 3.5
For the sequence y_k = c^k,
Δy_k = y_{k+1} - y_k = c^{k+1} - c^k = c^k (c - 1)
Example 3.6
Prove that Δ(c y_k) = c Δy_k.
Solution
This is analogous to a result of calculus:
Δ(c y_k) = c y_{k+1} - c y_k = c (y_{k+1} - y_k) = c Δy_k
Essentially, this problem involves two functions, y_k and c y_k, defined for
the same arguments x_i.
Example 3.7
Consider two function defined for the same set of argument x . Call the values i
w cu c v
i 1 i 2 i
w c u c v
i 1 i 2 i
Solution
This is the property of the difference operation.
The proof is direct from the definations.
w w w i i 1 i
(c u c v ) (c u c v )
1 i 1 2 i 1 1 i 2 i
c (u u ) c (v v )
1 i 1 i 2 i 1 i
c u c v 1 i 2 i
Clearly the same proof would apply to sums of any finite length.
Example 3.8
With the same symbolism as in Example 3.7, consider the function
z_i = u_i v_i.
Prove that Δz_i = u_i Δv_i + v_{i+1} Δu_i.
Solution
Again starting from the definition,
Δz_i = u_{i+1} v_{i+1} - u_i v_i
= u_i (v_{i+1} - v_i) + v_{i+1} (u_{i+1} - u_i)
= u_i Δv_i + v_{i+1} Δu_i
Example 3.9
Verify that Δ^3 y_0 = y_3 - 3y_2 + 3y_1 - y_0.
Solution
First,
Δ^2 y_i = Δ(y_{i+1} - y_i) = (y_{i+2} - y_{i+1}) - (y_{i+1} - y_i) = y_{i+2} - 2y_{i+1} + y_i
Now, the third-order forward difference can be found by differencing the
second-order differences:
Δ^3 y_i = Δ(Δ^2 y_i) = Δ(y_{i+2} - 2y_{i+1} + y_i)
= (y_{i+3} - y_{i+2}) - 2(y_{i+2} - y_{i+1}) + (y_{i+1} - y_i)
= y_{i+3} - 3y_{i+2} + 3y_{i+1} - y_i
Setting i = 0 gives Δ^3 y_0 = y_3 - 3y_2 + 3y_1 - y_0, so the expression holds
true for the third-order forward difference of the sequence y_i.
Example 3.10
Compute the complete table of differences for the function
P(x) = x^3 + 2x^2 + 3x + 4 for x = -2, -1, 0, 1, 2, 3.
Solution
The values of the polynomial at these points are -2, 2, 4, 10, 26, 58, and so
the table of differences is:

x     P(x)   ΔP     Δ^2 P   Δ^3 P   Δ^4 P
-2    -2
              4
-1     2            -2
              2              6
 0     4             4              0
              6              6
 1    10            10              0
             16              6
 2    26            16
             32
 3    58
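The table in Example 3.10 can be generated mechanically by repeatedly differencing the list of values. This Python sketch (the function name is our own) reproduces it row by row:

```python
def difference_table(values):
    """Return [values, first differences, second differences, ...]."""
    table = [list(values)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

P = lambda x: x**3 + 2*x**2 + 3*x + 4
table = difference_table([P(x) for x in range(-2, 4)])
for row in table:
    print(row)
```

The third differences are constant (6, 6, 6) and the fourth differences vanish, confirming that P is a cubic.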
Exercise 3.1:
Given the function g(x) = 2x^2 + 3x - 5, compute the complete table of finite
differences for the range x ∈ [1, 5]. Show the step-by-step calculations for
the first and second finite differences. Based on the finite differences, can
you determine the degree of the polynomial function?
Exercise 3.2:
Consider a sequence of numbers: y_1 = 3, y_2 = 8, y_3 = 15, y_4 = 24, ....
Compute the first and second finite differences for this sequence. Based on
your observations, propose a formula or expression for y_n in terms of n.
Explain how the finite differences provide insights into the nature of the
sequence.
Exercise 3.3:
Compute Δ(ln(x)) for x = 4 and Δx = 0.1.
Exercise 3.4:
Determine Δ(sin(x)) for x = π/4 and Δx = 0.2.
Exercise 3.5:
Calculate Δ(cos(x)) for x = π/3 and Δx = 0.1.
Exercise 3.6:
Find Δ(x^3) for x = 2 and Δx = 0.5.
Exercise 3.7:
If u(x) = 2x + 1 and v(x) = sin(x), calculate Δ(u v) for x = π/6 and Δx = 0.1.
Exercise 3.8:
Given f(x) = ln(x) and g(x) = x^2 + 1, determine Δ(f/g) for x = 3 and
Δx = 0.2.
NEWTON'S DIVIDED DIFFERENCES
The method of divided differences is particularly useful in numerical analysis
for constructing interpolating polynomials.
Divided differences are a way of expressing the coefficients of interpolating
polynomials. Given a set of data points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n),
where x_i and y_i are the coordinates of the data points, the divided
differences are defined recursively as follows:
f(x_i) = y_i
f(x_i, x_{i+1}) = (f(x_{i+1}) - f(x_i)) / (x_{i+1} - x_i)
and in general
f(x_i, ..., x_{i+k}) = (f(x_{i+1}, ..., x_{i+k}) - f(x_i, ..., x_{i+k-1})) / (x_{i+k} - x_i)
In many ways these differences play a role equivalent to that of the simpler
differences used earlier.
A difference table is again a convenient device for displaying differences, the
standard diagonal form being used.
x_0   f_0
             f(x_0, x_1)
x_1   f_1                f(x_0, x_1, x_2)
             f(x_1, x_2)                f(x_0, x_1, x_2, x_3)
x_2   f_2                f(x_1, x_2, x_3)                f(x_0, x_1, x_2, x_3, x_4)
             f(x_2, x_3)                f(x_1, x_2, x_3, x_4)
x_3   f_3                f(x_2, x_3, x_4)
             f(x_3, x_4)
x_4   f_4
Each divided difference can be written symmetrically in terms of the y values:
f(x_0, x_1, ..., x_n) = Σ_{i=0}^{n} f_i / F'(x_i)
where F(x) = (x - x_0)(x - x_1)...(x - x_n). Since this sum is unchanged when
the arguments are permuted, a divided difference keeps the same value when its
arguments are permuted in the same way. This very useful result is an easy
consequence of the representation theorem.
There is also a mean value result: for some ξ between the smallest and largest
of the arguments,
f(x_0, x_1, ..., x_n) = f^(n)(ξ) / n!
For equally spaced arguments with spacing h, divided differences reduce to
ordinary finite differences:
f(x_0, x_1, ..., x_n) = Δ^n f_0 / (n! h^n)
A useful property of ordinary finite differences may be obtained as follows:
Δ^n f_0 = f^(n)(ξ) h^n
For a function f(x) with bounded derivatives, all f^(n)(x) having a bound
independent of n, it follows that, for small h,
lim Δ^n f_0 = 0
for increasing n. This generalizes the result found earlier for polynomials and
explains why the higher differences in a table are often found to tend towards
zero.
Newton's divided difference interpolating polynomial is
P(x) = f[x_0] + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2] + ...
+ (x - x_0)(x - x_1)...(x - x_{n-1}) f[x_0, x_1, ..., x_n]
This polynomial will pass through all the given data points. Newton's divided
differences method is often used for numerical interpolation because it
provides a straightforward way to compute the coefficients of the interpolating
polynomial.
The error f(x) - p(x), where f(x) and p(x) collocate at the arguments
x_0, x_1, ..., x_n, is still given by the formula obtained earlier:
f(x) - p(x) = (f^(n+1)(ξ) / (n+1)!) π(x)
where π(x) = (x - x_0)(x - x_1)...(x - x_n). An alternative form of this error
using divided differences is
f(x) - P(x) = f(x, x_0, ..., x_n)(x - x_0)...(x - x_n)
Example 3.11
For the function f(x) = 2x^2 + 3x - 5, calculate the first and second divided
differences.
Example 3.12
1. Create a difference table for g(x) = 3x^2 + 2x + 1 using the points
x = 0, 1, 2, 3, 4.
2. Fill in values for g(x), first differences, second differences, and so on.
3. Analyze the patterns in the table to infer information about the nature of
the function and its derivatives.
Example 3.13
Compute the divided differences through the third order for the f_k values in
the table below.

x_k:  0  1  2  4
f_k:  1  1  2  5

Solution
The divided difference table is:

x_k   f_k
 0     1
             f(0,1) = 0
 1     1               f(0,1,2) = 1/2
             f(1,2) = 1                 f(0,1,2,4) = -1/12
 2     2               f(1,2,4) = 1/6
             f(2,4) = 3/2
 4     5

where, for instance,
f(2,4) = (5 - 2)/(4 - 2) = 3/2,   f(0,1,2) = (1 - 0)/(2 - 0) = 1/2,
f(1,2,4) = (3/2 - 1)/(4 - 1) = 1/6,   f(0,1,2,4) = (1/6 - 1/2)/(4 - 0) = -1/12
Substituting into Newton's divided difference formula
P(x) = f[x_0] + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2]
+ (x - x_0)(x - x_1)(x - x_2) f[x_0, x_1, x_2, x_3]
we have
P(x) = 1 + 0·x + (1/2) x(x - 1) - (1/12) x(x - 1)(x - 2)
     = (1/12)(-x^3 + 9x^2 - 8x + 12)
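The hand computation above can be organized as an algorithm: build the top diagonal of the divided difference table, then evaluate the Newton form by nested multiplication. A Python sketch (the function names are our own):

```python
def divided_coeffs(xs, ys):
    """Top diagonal f[x0], f[x0,x1], ..., f[x0,...,xn] of the table."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update in place from the bottom up so lower-order entries survive.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolating polynomial at x (Horner style)."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs, ys = [0, 1, 2, 4], [1, 1, 2, 5]
coef = divided_coeffs(xs, ys)
print(coef)                       # [1, 0, 1/2, -1/12] as in Example 3.13
print(newton_eval(xs, coef, 4))   # reproduces the data value 5
```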
Exercise 3.9
Elaborate on the significance of the symmetry property of divided differences.
How does this property impact the process of constructing interpolating
polynomials, and why is it advantageous?
Exercise 3.10
Given the function p(x) = 2x^3 + 3x^2 - 5x + 7, calculate the first and second
divided differences.
Exercise 3.11
For a set of data points (1, 4), (2, 9), (3, 16), (4, 25) , compute the first, second,
and third divided differences. Use these differences to construct the
interpolating polynomial and evaluate it at x = 2.5.
Exercise 3.12
Consider the function q(x) = 4x^2 + 2x - 9. Compute the second divided
difference.
LAGRANGE INTERPOLATION
Lagrange interpolation is a method for constructing a polynomial that passes
through a given set of points. Given a set of distinct points
(x_0, y_0), (x_1, y_1), ..., (x_n, y_n), the Lagrange interpolating polynomial
P(x) of degree at most n is
P(x) = Σ_{i=0}^{n} y_i L_i(x)
where the Lagrange basis polynomials are
L_i(x) = Π_{j=0, j≠i}^{n} (x - x_j) / (x_i - x_j)
for each i.
Lagrange interpolation is a useful method for approximating a function when
the exact functional form is unknown, but a set of data points is available. It
is widely used in numerical analysis and computer science.
To construct each basis polynomial:
1. Set up the product:
For each i, set up the product in the Lagrange basis polynomial formula.
2. Compute the individual factors:
Compute the value of each factor in the product.
3. Combine the factors:
Multiply all the individual factors together to obtain L_i(x).
Example 3.14
Suppose we have three points: (1, 2), (3, 4), (5, 6). Compute the basis
polynomial L_1(x) associated with the first point, x_1 = 1.
Solution
L_1(x) = ((x - 3)(x - 5)) / ((1 - 3)(1 - 5)) = (x - 3)(x - 5) / 8
You can simplify this expression further if needed. Repeat this process for
each Lagrange basis polynomial L_i(x) corresponding to a different i.
Example 3.15
Given the points (1, 2), (2, 5), (4, 10), find the Lagrange interpolating
polynomial P(x) that passes through these points.
Solution
1. Set up the Lagrange interpolating polynomial:
P(x) = 2 L_0(x) + 5 L_1(x) + 10 L_2(x)
2. Construct the basis polynomials:
L_0(x) = ((x - 2)(x - 4)) / ((1 - 2)(1 - 4)) = (x - 2)(x - 4) / 3
L_1(x) = ((x - 1)(x - 4)) / ((2 - 1)(2 - 4)) = -(x - 1)(x - 4) / 2
L_2(x) = ((x - 1)(x - 2)) / ((4 - 1)(4 - 2)) = (x - 1)(x - 2) / 6
3. Combine:
P(x) = (2/3)(x^2 - 6x + 8) - (5/2)(x^2 - 5x + 4) + (10/6)(x^2 - 3x + 2)
     = -(1/6) x^2 + (7/2) x - 4/3
As a check, P(1) = 2, P(2) = 5, P(4) = 10.
Example 3.16
Use Lagrange interpolation to find a polynomial Q(x) passing through the points
(-1, 3), (0, 1), (1, -1), (2, 5).
Solution
1. Set up the Lagrange interpolating polynomial:
Q(x) = 3 L_0(x) + 1 L_1(x) + (-1) L_2(x) + 5 L_3(x)
2. Construct the basis polynomials:
L_0(x) = (x(x - 1)(x - 2)) / ((-1 - 0)(-1 - 1)(-1 - 2)) = -x(x - 1)(x - 2) / 6
L_1(x) = ((x + 1)(x - 1)(x - 2)) / ((0 + 1)(0 - 1)(0 - 2)) = (x + 1)(x - 1)(x - 2) / 2
L_2(x) = ((x + 1) x (x - 2)) / ((1 + 1)(1 - 0)(1 - 2)) = -(x + 1) x (x - 2) / 2
L_3(x) = ((x + 1) x (x - 1)) / ((2 + 1)(2 - 0)(2 - 1)) = (x + 1) x (x - 1) / 6
3. Combine:
Q(x) = -(1/2)(x^3 - 3x^2 + 2x) + (1/2)(x^3 - 2x^2 - x + 2)
+ (1/2)(x^3 - x^2 - 2x) + (5/6)(x^3 - x)
     = (4/3) x^3 - (10/3) x + 1
As a check, Q(-1) = 3, Q(0) = 1, Q(1) = -1, Q(2) = 5.
Example 3.17
Find the polynomial of degree three which takes the values below, as in
Example 3.13.

x_k:  0  1  2  4
f_k:  1  1  2  5

Solution
The polynomial can be written directly as follows:
P(x) = 1 · ((x - 1)(x - 2)(x - 4)) / ((0 - 1)(0 - 2)(0 - 4))
+ 1 · (x(x - 2)(x - 4)) / ((1 - 0)(1 - 2)(1 - 4))
+ 2 · (x(x - 1)(x - 4)) / ((2 - 0)(2 - 1)(2 - 4))
+ 5 · (x(x - 1)(x - 2)) / ((4 - 0)(4 - 1)(4 - 2))
This can be rearranged into P(x) = (1/12)(-x^3 + 9x^2 - 8x + 12), in agreement
with Example 3.13.
Exercise 3.13
Find the Lagrange interpolating polynomial for the points (0, 1), (2, 3), (4, 5),
(6, 7).
These examples illustrate the process of using Lagrange interpolation to find
polynomials that pass through given sets of points.
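The construction used in these examples can be sketched directly from the definition of the basis polynomials (the function name is our own):

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        Li = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # one factor of the basis polynomial
        total += yi * Li
    return total

pts = [(0, 1), (1, 1), (2, 2), (4, 5)]   # the data of Example 3.17
print(lagrange(pts, 2))                   # reproduces the data value 2
print(lagrange(pts, 3))                   # value of P(3), exactly 3.5
```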
POLYNOMIAL APPROXIMATION
Polynomial approximation is a mathematical technique used to represent a
given function by a polynomial of a certain degree. This is often done to
simplify calculations or to make it easier to work with the function. The goal is
to find a polynomial that closely matches the behavior of the original function
within a specific range or over the entire domain.
There are various methods for polynomial approximation, and one common
approach is the Taylor series expansion. The Taylor series represents a function
as an infinite sum of terms, each derived from the function's derivatives at a
particular point (usually the center of approximation). However, in practice,
only a finite number of terms are used to approximate the function.
For a function f(x), its nth-degree Taylor polynomial P_n(x) centered at a is
given by:
P_n(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + (f'''(a)/3!)(x - a)^3
+ ... + (f^(n)(a)/n!)(x - a)^n
This polynomial approximates the function f(x) near the point a up to the nth
degree. The higher the degree of the polynomial, the better the approximation,
but it may also lead to more complex calculations.
It's important to note that while polynomial approximations can be useful, they
may not always provide accurate representations of functions, especially for
functions with complex behavior or sharp changes. In some cases, other
methods, such as piecewise polynomial interpolation or spline interpolation,
may be more appropriate.
Example 3.18
Given the function f(x) = e^x, find the quadratic Taylor polynomial P_2(x)
centered at a = 0.
Solution:
The Taylor polynomial P_2(x) is given by:
P_2(x) = f(0) + f'(0) x + (f''(0)/2!) x^2
1. Find f(0): f(0) = e^0 = 1
2. Find f'(0): f'(x) = e^x, so f'(0) = 1
3. Find f''(0): f''(x) = e^x, so f''(0) = 1
Therefore,
P_2(x) = 1 + x + x^2/2
Example 3.19
Approximate the square root function g ( x) x by a linear Taylor polynomial
P( x) centered at ( a = 4 \).
1
Solution:
The Taylor polynomial P( x) is given by:
1
58
P( x) g (4) g(4)( x 4)
1
Solution
f x 1 x f (0) 1
1
2
1 1
f ' x 1 x f '(0)
1
2
2 2
1 1
f '' x 1 x f ''(0)
3/2
4 4
3 3
f ''' x 1 x f '''(0)
5/2
4 8
15 15
1 1
5/2 5/2
f ( lv )
f ( lv )
16 16
where 0, x .
Therefore,
f "( x) f "'( x)
P ( x) f (0) f '(0) x
3
x x 2 3
2! 3!
1 1 1 3 1
1 x x x 2 3
2 4 2! 8 3!
Therefore,
2 3
1 x x
P ( x) 1 x
3
2 8 16
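As a quick numerical check of the cubic just obtained, P_3(x) can be compared with √(1 + x) for small x; this sketch uses only the standard library:

```python
import math

def p3(x):
    """Cubic Taylor polynomial of sqrt(1 + x) at a = 0."""
    return 1 + x / 2 - x**2 / 8 + x**3 / 16

for x in (0.0, 0.1, 0.5):
    print(x, p3(x), math.sqrt(1 + x))   # agreement degrades as x grows
```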
Exercise 3.14
Given the function f(x) = e^x, find the quadratic Taylor polynomial P_2(x)
centered at a = 0.
Exercise 3.15
Approximate the square root function g(x) = √x by a linear Taylor polynomial
P_1(x) centered at a = 4.
CHAPTER FOUR
Numerical Differentiation and Integration
Numerical differentiation techniques
Numerical differentiation is an approach to estimating the derivative of a
function at a given point using discrete data points. One commonly used method
is the finite difference method, which has three basic variants:
1. Forward Difference:
f'(x) ≈ (f(x + h) - f(x)) / h
2. Backward Difference:
f'(x) ≈ (f(x) - f(x - h)) / h
3. Central Difference:
f'(x) ≈ (f(x + h) - f(x - h)) / (2h)
Here, h is the step size, and smaller values of h generally lead to more
accurate approximations.
Algorithm 4.1
A general algorithm for numerical differentiation, specifically using the central
difference method, can be outlined as follows:
Numerical Differentiation Algorithm (Central Difference Method):
1. Input:
- (f(x)): The function to be differentiated.
- (x0): The point at which the derivative is to be estimated.
- (h): Step size for the central difference method.
2. Initialization:
- Calculate the forward point (x+) and backward point (x-) using the step
size: x+ = x0 + h and x- = x0 - h.
3. Evaluation:
- Evaluate the function at the central point x0 and at the forward and
backward points: f0 = f(x0), f+ = f(x+), f- = f(x-).
4. Estimation:
- Compute the central difference approximation:
f'(x0) ≈ (f+ - f-) / (2h)
5. Output:
- Return the estimated derivative.
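The steps above amount to only a few lines of code. A minimal Python sketch of the central difference method (the function name is our own):

```python
def central_diff(f, x0, h):
    """Estimate f'(x0) by (f(x0 + h) - f(x0 - h)) / (2h)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f = lambda x: x**2 - 3*x + 2
print(central_diff(f, 1.0, 0.1))   # close to the exact derivative f'(1) = -1
```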
Numerical Integration:
Numerical integration is the process of estimating the definite integral of a
function over a given interval. Different methods exist, such as the trapezoidal
rule, Simpson's rule, and numerical quadrature methods.
1. Trapezoidal Rule:
∫_a^b f(x) dx ≈ (h/2) [f(a) + 2f(x_1) + 2f(x_2) + ... + 2f(x_{n-1}) + f(b)]
where h = (b - a)/n is the step size and the x_i are the nodes within the
interval.
2. Simpson's Rule (n even):
∫_a^b f(x) dx ≈ (h/3) [f(a) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ...
+ 2f(x_{n-2}) + 4f(x_{n-1}) + f(b)]
Simpson's rule generally provides more accurate results than the trapezoidal rule
for smooth functions.
3. Numerical Quadrature:
Numerical quadrature methods use various mathematical techniques to
approximate the integral. These methods include Gaussian quadrature, adaptive
quadrature, and others.
These numerical techniques are particularly useful when an analytical solution
to the differentiation or integration problem is difficult or impossible to obtain.
The choice of method depends on the specific characteristics of the function and
the required level of accuracy.
Algorithm 4.2
A general algorithm for numerical integration, often applicable to methods like
the trapezoidal rule or Simpson's rule, can be outlined as follows:
Numerical Integration Algorithm:
1. Input:
- Define the function f(x) to be integrated over the interval ([a, b]).
- Specify the number of subintervals (n) or the step size (h) for the numerical
integration.
2. Initialization:
- Calculate the step size (h) if it's not provided:
h = (b - a) / n
3. Summation:
- Initialize a variable, call it result, to store the accumulated sum.
- For i = 0, 1, 2, ..., n:
- Calculate the current point x_i = a + i h.
- Add w_i f(x_i) to result, where the weights w_i are those of the chosen rule
(for the trapezoidal rule, w_0 = w_n = 1 and w_i = 2 otherwise).
4. Scaling:
- Multiply result by h/2 for the trapezoidal rule (or h/3 for Simpson's rule)
to obtain the approximation of the integral.
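The algorithm above, specialised to the trapezoidal rule, can be sketched as follows (the function name is our own):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals on [a, b]."""
    h = (b - a) / n
    total = f(a) + f(b)            # endpoint weights are 1
    for i in range(1, n):
        total += 2 * f(a + i * h)  # interior weights are 2
    return total * h / 2

print(trapezoidal(lambda x: x**2 + 1, 0, 2, 4))   # 4.75, as in Example 4.2
```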
Example 4.1: Numerical Differentiation
Use the central difference method to estimate the derivative of the function
f(x) = x^2 - 3x + 2 at x = 1 with a step size of h = 0.1.
Solution:
f'(x) ≈ (f(x + h) - f(x - h)) / (2h)
Substitute the values:
f'(1) ≈ [((1.1)^2 - 3(1.1) + 2) - ((0.9)^2 - 3(0.9) + 2)] / (2 × 0.1)
Simplify the expression:
f'(1) ≈ [(1.21 - 3.3 + 2) - (0.81 - 2.7 + 2)] / 0.2 = (-0.09 - 0.11) / 0.2
f'(1) ≈ -0.2 / 0.2 = -1
(For a quadratic, the central difference is exact: f'(x) = 2x - 3 gives
f'(1) = -1.)
Example 4.2
Estimate the definite integral ∫_0^2 (x^2 + 1) dx using the trapezoidal rule
with four subintervals.
Solution:
∫_0^2 f(x) dx ≈ (h/2) [f(0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(2)]
Calculate h and the x_i values:
h = (2 - 0)/4 = 0.5
x_1 = 0.5, x_2 = 1.0, x_3 = 1.5
Substitute into the formula:
∫_0^2 (x^2 + 1) dx ≈ (0.5/2) [(0^2 + 1) + 2(0.5^2 + 1) + 2(1^2 + 1) + 2(1.5^2 + 1) + (2^2 + 1)]
= 0.25 [1 + 2.5 + 4 + 6.5 + 5] = 0.25 × 19 = 4.75
The exact value is 14/3 ≈ 4.667, so the trapezoidal estimate overstates it
slightly.
Example 4.3: Simpson's Rule
Apply Simpson's rule to approximate ∫_1^3 (2x + 1) dx with a step size of
h = 0.5.
Solution:
∫_1^3 f(x) dx ≈ (h/3) [f(1) + 4f(x_1) + 2f(x_2) + 4f(x_3) + f(3)]
Calculate h and the x_i values:
h = 0.5, so n = 4 and x_1 = 1.5, x_2 = 2.0, x_3 = 2.5
∫_1^3 (2x + 1) dx ≈ (0.5/3) [3 + 4(4) + 2(5) + 4(6) + 7] = (0.5/3)(60) = 10
Since the integrand is linear, this agrees with the exact value, 10.
Example 4.4
For the function g(x) = e^(2x) + 2x^2 - 3, find a) the derivative g'(x) and
b) the definite integral ∫_1^3 g(x) dx.
Solution:
a) Derivative g'(x):
g'(x) = d/dx (e^(2x) + 2x^2 - 3) = 2e^(2x) + 4x
b) Definite Integral ∫_1^3 g(x) dx:
∫_1^3 (e^(2x) + 2x^2 - 3) dx = [(1/2)e^(2x) + (2/3)x^3 - 3x]_1^3
= ((1/2)e^6 + 18 - 9) - ((1/2)e^2 + 2/3 - 3)
The numerical result will be the difference between these values.
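Simpson's rule as used in Example 4.3 can be sketched the same way (the function name is our own):

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # weights 4, 2, 4, ...
    return total * h / 3

print(simpson(lambda x: 2*x + 1, 1, 3, 4))   # 10.0, the exact integral
```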
Exercise 4.1: Forward Difference Method
Given the function f(x) = 2x^2 + 3x + 1, find the numerical approximation of
f'(2) using the forward difference method with a step size h = 0.1.
Exercise 4.3
Use the trapezoidal rule to approximate ∫_0^1 e^(x^2) dx with four
subintervals.
Exercise 4.4
Apply Simpson's rule to estimate ∫_1^2 (1/x) dx using six subintervals.
Exercise 4.5
Utilize Gaussian quadrature to approximate ∫_-1^1 1/(1 + x^2) dx.