Numerical Analysis


INTRODUCTION TO NUMERICAL METHODS

Course contents
1. Introduction to Numerical Methods
- Motivation and importance of numerical techniques.
- Sources of errors in numerical computations.
2. Root Finding Methods
- Bisection method
- Newton-Raphson method
- Secant method
- Fixed-point iteration
3. Interpolation and Approximation
- Lagrange interpolation
- Newton's divided differences
- Polynomial approximation
4. Numerical Differentiation and Integration
- Numerical differentiation techniques
- Newton-Cotes formulas (Trapezoidal rule, Simpson's rule)
- Gaussian quadrature
5. Error Analysis and Convergence
- Round-off errors
- Truncation errors
- Convergence analysis of numerical methods
6. Introduction to Numerical Software and Programming
- Using computational tools (MATLAB, Python, etc.) for numerical
computation.
- Coding and implementation of numerical algorithms.
Course Overview:
This course introduces fundamental numerical methods and techniques used to
solve mathematical problems, especially those that are complex or difficult to
solve analytically. The course emphasizes both theoretical foundations and
practical implementations of numerical algorithms.
Course Objectives:
- Understand the limitations of analytical solutions and the necessity of
numerical methods.
- Develop skills in selecting and applying appropriate numerical techniques for
different types of problems.
- Gain proficiency in using computational tools and programming languages for
implementing numerical algorithms.
- Analyze the accuracy, stability, and convergence of numerical methods.
- Apply numerical methods to solve mathematical problems from various
disciplines.

CHAPTER ONE
INTRODUCTION TO NUMERICAL METHODS
Introduction to Numerical Methods is a pivotal field within applied mathematics
and computer science, focusing on the development, analysis, and
implementation of algorithms to solve mathematical problems that are
challenging or impossible to solve using analytical methods alone. It serves as a
bridge between mathematical theory and practical problem-solving in various
disciplines.

KEY ASPECTS:
1. Foundation of Numerical Analysis:
Numerical Methods play a critical role in situations where exact analytical
solutions are unattainable. These methods encompass a range of techniques and
algorithms designed to approximate solutions to mathematical problems
involving continuous variables, differential equations, optimization,
interpolation, integration, and more.

2. Problem Solving and Algorithm Development:


Numerical Methods facilitate the development of algorithms that help in
approximating solutions for problems across various domains. These algorithms
include root-finding methods, interpolation, differentiation, integration, solving
linear systems, and differential equations.
3. Error Analysis and Computational Challenges:
An essential aspect of numerical analysis involves understanding and
managing errors arising from approximations and computational limitations.
The field examines sources of error and methods to minimize them, ensuring
the accuracy and reliability of numerical solutions.
4. Computational Tools and Software Implementation:
Utilizing computational tools and programming languages like MATLAB,
Python, or C/C++ is integral to the practical implementation of numerical
algorithms. This involves coding, testing, and executing numerical methods to
solve real-world problems.
5. Multidisciplinary Applications:
Numerical Methods have applications across diverse fields such as
engineering, physics, economics, biology, and more. They enable the modeling,
simulation, and analysis of complex systems that have no exact analytical
solution.
6. Accuracy, Convergence, and Efficiency:
Numerical methods are evaluated based on their accuracy, convergence
properties, and computational efficiency. Understanding these aspects is crucial
for selecting the most appropriate method for a given problem.

7. Continuous Advancements and Research:
The field of Numerical Methods continually evolves with the advancement of
technology and the increasing need for more sophisticated computational
techniques. Ongoing research aims to develop new algorithms that are more
accurate, efficient, and applicable to a wider range of problems.

Introduction to Numerical Methods provides a foundational understanding of
the principles, techniques, and practical applications essential for solving
complex problems in the absence of exact analytical solutions. It equips
students with the knowledge and skills necessary to apply numerical algorithms
effectively across diverse fields, making it a fundamental discipline in both
mathematics and computational sciences.

What are algorithms?


An algorithm is a step-by-step procedure or set of instructions designed to solve
a problem or perform a specific task. In computing and mathematics, an
algorithm acts as a blueprint for solving problems systematically, with a
sequence of well-defined, unambiguous, and finite steps that, when executed,
accomplishes a particular goal.

Characteristics of algorithms include:


1. Definiteness: Each step of the algorithm must be clear and unambiguous,
leaving no room for uncertainty in execution.
2. Finiteness: Algorithms must have a finite number of steps. They should
eventually come to an end after a specific number of actions or iterations.
3. Input/Output: Algorithms take input (information or data) and produce an
output, transforming the input according to the defined steps.
4. Effectiveness: An algorithm must be practical and feasible, meaning it can be
executed within a reasonable amount of time using available resources.

Algorithms are essential in various fields, including computer science,
mathematics, engineering, and everyday problem-solving. In computer
programming, algorithms are the foundation for writing code and developing
software to automate tasks, sort data, search for information, perform
calculations, and much more. They are fundamental in optimizing processes and
solving complex problems efficiently.
Factors considered when choosing algorithms
When choosing an algorithm to solve a particular problem, several factors need
consideration to ensure the efficiency and effectiveness of the solution. Some
crucial factors to consider when selecting an algorithm include:

1. Time Complexity:
Runtime Performance: How efficiently the algorithm solves the problem,
considering the time taken for execution. It's crucial to assess how the
algorithm's performance scales with the size of the input.

2. Space Complexity:
Memory Usage: The amount of memory or space an algorithm requires for
execution. This includes consideration of auxiliary space, variables, data
structures, etc.
3. Input Size:
Scale of Problem: Understanding how the algorithm performs with different
sizes of input. Some algorithms might be efficient for small inputs but
inefficient for larger datasets.

4. Resource Limitations:
Hardware/Resource Constraints: Considering the hardware or resource
limitations in the environment where the algorithm will run. This involves
assessing CPU capacity, memory constraints, and system architecture.

5. Algorithm Robustness:
Reliability and Accuracy: Ensuring the algorithm provides accurate results for
different types of inputs and edge cases without failing or producing incorrect
outputs.

6. Complexity of Implementation:
Readability and Ease of Implementation: The ease of understanding,
implementing, and maintaining the algorithm. Simplicity and readability can be
crucial for debugging and future modifications.
7. Stability:
Stable Output: Some algorithms may exhibit unstable behavior for certain
inputs. Stability refers to the consistency of output for the same input
across different executions.
8. Special Requirements:
Specific Problem Requirements: Considering specific constraints or
requirements of the problem such as sorting, searching, or unique constraints
imposed by the problem itself.

9. Parallelism and Scalability:
Parallel Execution and Scalability: Whether the algorithm can take advantage
of parallel processing or scale efficiently when additional computational
resources are available.

10. External Factors:
External Dependencies: Any external libraries or dependencies required by the
algorithm, and their availability or ease of integration.

Selecting the appropriate algorithm involves analyzing these factors to ensure
the most suitable solution for the specific problem context, balancing trade-offs
between time complexity, space complexity, and other requirements based on
the problem's characteristics and constraints.

THE MOTIVATION AND IMPORTANCE OF NUMERICAL TECHNIQUES

The motivation and importance of numerical techniques lie in their ability to
solve complex mathematical problems that often lack explicit analytical
solutions. Several factors contribute to the significance of numerical methods in
various fields:

Inherent Complexity of Real-world Problems:


Lack of Analytical Solutions: Many real-world problems, especially those in
science, engineering, economics, and biology, do not have straightforward
mathematical solutions. Numerical techniques offer practical tools to find
approximate solutions in such cases.

Computational Challenges:
Large-scale Calculations: Problems involving a vast number of variables or
complex systems cannot be practically solved by hand or using conventional
analytical methods. Numerical techniques enable the handling of extensive
computations efficiently.

Precision and Accuracy:


Precision in Approximations: While numerical methods provide approximate
solutions, advancements in algorithms and computational capabilities allow for
increasingly accurate approximations. These methods help attain solutions
within an acceptable margin of error.

Engineering and Technological Advancements:


Design and Simulation: Engineers heavily rely on numerical methods for
modeling and simulating complex systems, facilitating the development of new
technologies, structures, and processes.

Decision-making and Predictive Analysis:


Prediction and Analysis: In fields like weather forecasting, financial modeling,
and risk assessment, where precise future predictions are essential, numerical
methods enable the analysis of historical data and the prediction of future
trends.

Complementing Analytical Approaches:


Comprehensive Solutions: While analytical methods are elegant and provide
exact solutions in many cases, they often fail in intricate and real-world
situations. Numerical techniques complement analytical methods, offering
solutions where analytical solutions are either too complex or non-existent.

Innovation and Research:


Facilitating Innovation: Numerical methods are crucial in scientific research,
allowing for the study of complex systems, the testing of hypotheses, and the
exploration of new scientific frontiers that wouldn't be possible solely through
analytical methods.

Advancements in Computing Power:
Technological Progress: With the advancement in computational power and
access to high-performance computing, the application of numerical methods
has become more feasible and accessible.

The motivation and importance of numerical techniques stem from their ability
to address real-world complexities, providing practical solutions to problems
across various disciplines. From engineering design and technological
advancements to scientific research and predictive analysis, numerical methods
serve as an invaluable tool for handling complex mathematical problems,
enabling innovation and progress in numerous fields.

SOURCES OF ERRORS IN NUMERICAL COMPUTATIONS


Errors in numerical computations can arise from various sources, impacting the
accuracy and reliability of the computed results. Understanding these sources is
crucial for minimizing and managing errors in numerical analysis. Here are
some of the primary sources of errors in numerical computations:

1. Round-off Errors:
Definition: Round-off errors occur due to the finite precision of numerical
representation. They arise when a real number with an infinite decimal
expansion is approximated by a finite number of digits.
Example: Consider the number 1/3 in decimal form: 0.3333333.... Rounding it
to a finite decimal representation, say 0.333, introduces a round-off error,
causing a deviation from the actual value.
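The same effect is easy to observe in any double-precision environment; for instance, in Python (a small illustrative sketch, using the machine's binary floating-point format as the finite representation):

```python
# Round-off error: real numbers are stored with finite binary precision.
x = 1 / 3
print(f"{x:.20f}")             # the stored value is close to, but not exactly, 1/3

# A classic consequence: 0.1 and 0.2 have no exact binary representation.
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # small but nonzero round-off residual
```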

2. Truncation Errors:
Definition: Truncation errors arise when an infinite process (like a series or
iterative computation) is approximated by a finite one, leading to an incomplete
representation of the exact solution.
Example: Using a finite number of terms in the Taylor series to approximate a
function. For instance, approximating e^x using only the first few terms of the
series expansion, neglecting higher-order terms, results in a truncation error.
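The e^x example can be made concrete with a short sketch (the term counts below are arbitrary choices for illustration):

```python
import math

def exp_taylor(x, n_terms):
    """Approximate e^x by summing the first n_terms of its Taylor series."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 12):
    # The neglected tail of the series is the truncation error.
    error = abs(math.exp(x) - exp_taylor(x, n))
    print(f"{n:2d} terms -> truncation error {error:.2e}")
```

Adding more terms shrinks the truncation error rapidly, since the factorial in the denominator grows faster than the powers of x.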

3. Discretization Errors:
Definition: Discretization errors occur when continuous problems are
approximated in discrete form, leading to inaccuracies due to the discretization
process.
Example: Numerical integration methods applied to approximate a definite
integral by dividing the interval into finite subintervals. The difference between
the exact integral and the numerical approximation due to discretization is a
form of discretization error.
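A short sketch of this effect, using the trapezoidal rule on sin(x) over [0, π] (an arbitrary test integrand whose exact integral is 2):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: approximate the integral with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

exact = 2.0  # integral of sin(x) from 0 to pi
for n in (4, 16, 64):
    # The gap between the exact integral and the discrete sum is discretization error.
    print(n, abs(exact - trapezoid(math.sin, 0.0, math.pi, n)))
```

Refining the discretization (larger n) reduces the error, at the cost of more function evaluations.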
4. Iterative Convergence Errors:

Definition: In iterative methods, errors can arise due to the inability to reach
the true solution after a finite number of iterations, either because of numerical
instability or limitations of the method.
Example: The iterative process of solving systems of equations, such as the
Gauss-Seidel method for solving linear systems, may not converge for certain
matrices, resulting in divergence and error in the solution.

5. Data Errors:
Definition: Data errors occur due to inaccuracies or imprecisions in input data,
which propagate through computations, leading to erroneous results.
Example: In scientific experiments, if measurements are imprecise or
contaminated by random noise, subsequent calculations or simulations using
this data will inherit these inaccuracies, resulting in data errors.
6. Modeling Errors:
Definition: Modeling errors stem from inaccuracies in the mathematical models
used to represent real-world systems, leading to deviations between the model
and the actual system behavior.
Example: Simulating a physical system using a simplified model that neglects
certain relevant parameters or physical effects, causing discrepancies between
the model's predictions and real-world behavior.
7. Stability and Algorithmic Errors:
Definition: Errors can arise from the numerical instability of algorithms used in
computations, especially when dealing with ill-conditioned problems or
inappropriate algorithms.
Example: The use of an unstable algorithm in solving differential equations
where small changes in input data lead to significantly amplified errors in the
output, making the solution unreliable.

Understanding and managing these sources of errors is crucial in ensuring the
accuracy and reliability of numerical computations. Techniques such as error
analysis, careful algorithm design, utilizing higher precision arithmetic, and
conducting sensitivity analyses are employed to minimize the impact of these
errors in numerical computations.

CHAPTER TWO
ROOT FINDING METHODS
Introduction
Root finding methods are numerical techniques used to locate the roots (or
solutions) of an equation. The roots of an equation are the values of the
variable(s) that make the equation true. There are various methods to find these
roots, some of the most commonly used ones include: Newton-Raphson
method, Bisection method, Secant method, Fixed-point iteration and so on.
The Newton-Raphson method
The Newton-Raphson method, also known as Newton's method, is an iterative
numerical technique used for finding successively better approximations to the
roots of a real-valued function. This method is particularly effective for root
finding and is widely used due to its rapid convergence if certain conditions are
met.

Methodology:
1. Root Finding Objective:
The goal of the Newton-Raphson method is to approximate the root of an
equation f(x) = 0 by iteratively refining an initial guess towards a more
accurate solution.
2. Derivative Usage:
The method utilizes the derivative of the function f(x), denoted f'(x), to
calculate subsequent approximations. The derivative represents the slope of the
function and aids in determining the direction and magnitude of corrections.
3. Iterative Procedure:
Initial Guess: Start with an initial guess, x0, for the root of the function
f(x) = 0.
Update Formula: Improve the approximation using the formula:

    x_{n+1} = x_n - f(x_n) / f'(x_n)

Iterate: Repeat the process by substituting the previously computed value, x_n,
into the update formula to obtain x_{n+1}.
4. Convergence Criteria:
The process continues until the difference between successive approximations,
|x_{n+1} - x_n|, falls below a specified tolerance or until a predetermined
number of iterations is reached.

Newton-Raphson Method Algorithm:


This algorithm provides a framework for implementing the Newton-Raphson
method in programming languages and computational environments.

1. Input:
- f(x): The function for which the root is to be found.
- f'(x): The derivative of f(x).
- x0: Initial guess for the root.
- ε: Tolerance limit for convergence.
- N: Maximum number of iterations.
2. Procedure:
3. Initialize: Set n = 0 and take the initial guess x0 for the root.
4. Repeat the following steps until convergence or the maximum number of
iterations is reached:
5. Calculate the next approximation:

    x_{n+1} = x_n - f(x_n) / f'(x_n)

6. Increment the iteration count: n = n + 1
7. Check for convergence:
8. If |x_{n+1} - x_n| < ε, where ε is the tolerance, then stop the iteration.
9. Check for maximum iterations:
- If n ≥ N, where N is the maximum number of iterations, then stop
the iteration.
10. Output:
- If the process converges, the output is the approximated root x_{n+1}
within the specified tolerance.
- If the maximum number of iterations is reached without convergence, the
algorithm stops without providing a solution.
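The algorithm above can be sketched in Python (a minimal illustration; the default tolerance and iteration cap are arbitrary choices):

```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Approximate a root of f(x) = 0 using Newton's update x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)     # update formula
        if abs(x_next - x) < tol:     # convergence criterion |x_{n+1} - x_n| < tol
            return x_next
        x = x_next
    raise RuntimeError("no convergence within the maximum number of iterations")

# f(x) = x^3 - 2x - 5 with x0 = 2 (the setting of Example 2.2)
root = newton_raphson(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(round(root, 5))  # 2.09455
```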

Advantages and Considerations:

Rapid Convergence: The Newton-Raphson method typically converges
quickly if the initial guess is reasonably close to the root and if certain
conditions are met (e.g., the function's behavior around the root).
Sensitivity to Initial Guess: The method might fail to converge or converge to
a different root if the initial guess is far from the actual root or if the function
behaves irregularly around the root.
Use of Derivative: The availability and computation of the derivative are
prerequisites for implementing this method. In cases where the derivative is not
readily available, numerical approximations of the derivative might be used,
introducing additional complexity.

SYNTHETIC DIVISION
Synthetic division is a shorthand method used to divide polynomials, especially
when dividing by a linear polynomial of the form x - x0, where x0 is a constant.

This method is particularly useful when dividing by linear factors because it's
quicker and more straightforward compared to long division.
Here is a step-by-step guide to performing synthetic division:

Synthetic Division Steps:


Given:
Let's say you want to divide a polynomial, represented in the form P(x), by a
linear factor of the form x - x0.
Recall that if x0 is a root of the polynomial P(x), then

    P(x) = (x - x0)Q(x)

where Q(x) is a polynomial of degree one less than that of P(x) and P(x0) = 0.
Also, if x0 is not a root, then we have

    P(x) = (x - x0)Q(x) + R

where R is the remainder and P(x0) = R.

1. Set up the division:

Arrange the coefficients of the polynomial P(x) in descending order of powers
of x, including any missing terms with a coefficient of zero.
For example, if you're dividing P(x) by x - x0, and P(x) is ax^3 + bx^2 + cx + d,
you'd set up the coefficients as (a, b, c, d), respectively.

2. Perform the synthetic division:

- Write down the value of x0 from x - x0.
- Perform synthetic division as follows:
a. Draw a horizontal line and write down the coefficients of the polynomial
to the right of the division symbol.
b. To the left of the vertical line, write x0. This value is derived from x - x0.
c. Bring down the first coefficient (the leading coefficient) below the line.
d. Multiply x0 by the number just written below the line, and write the result
in the next column above the line. Add this result to the next coefficient
and write the sum below the line. Continue this process until you've brought
down all coefficients.
3. Interpret the result:
The numbers below the line after the division represent the coefficients of the
quotient polynomial. The last number is the remainder.

Example 1.1:
Divide the polynomial P(x) = 2x^3 + 5x^2 - 3x - 7 by (x - 2).
Solution
Arrange the coefficients: (2, 5, -3, -7).
Perform synthetic division:

    2 | 2    5   -3   -7
      |      4   18   30
      |------------------
        2    9   15 | 23

The result of the division is 2x^2 + 9x + 15 with a remainder of 23, so

    P(x) = (x - 2)(2x^2 + 9x + 15) + 23

This method is particularly useful when dividing by linear factors but
remember, it only works when dividing by expressions of the form x - x0.

Example 2.2:
Find the root of the function f(x) = x^3 - 2x - 5.
Solution
1. Initial guess: Let's say x0 = 2 (an initial guess close to the actual root).

2. The derivative of f(x) is f'(x) = 3x^2 - 2.

3. Using the Newton-Raphson formula:

    x_{n+1} = x_n - f(x_n) / f'(x_n)

Iteration 1, n = 0:

    x1 = x0 - f(x0)/f'(x0) = 2 - (2^3 - 2(2) - 5)/(3(2)^2 - 2) = 2 - (-1)/10 = 2.1

Iteration 2, n = 1:

    x2 = 2.1 - (2.1^3 - 2(2.1) - 5)/(3(2.1)^2 - 2) = 2.1 - 0.061/11.23 ≈ 2.09457

Iteration 3, n = 2:

    x3 = 2.09457 - (2.09457^3 - 2(2.09457) - 5)/(3(2.09457)^2 - 2) ≈ 2.09455

Iteration 4, n = 3:

    x4 = 2.09455 - (2.09455^3 - 2(2.09455) - 5)/(3(2.09455)^2 - 2) ≈ 2.09455

The value of x after four iterations remains approximately 2.09455. In this case,
the method converges quickly, and the approximation of the root stabilizes after
the third iteration, reaching a value that remains consistent in subsequent
iterations.

Example 2.3
Consider the function f(x) = x^2 - 3x + 2, for which we want to find a root.

1. Function: f(x) = x^2 - 3x + 2
2. Derivative: f'(x) = 2x - 3

Initial Guess:
Let's start with an initial guess, x0 = 3.
Iterations:
Iteration 1:

    x1 = x0 - f(x0)/f'(x0) = 3 - (3^2 - 3(3) + 2)/(2(3) - 3) = 3 - 2/3 ≈ 2.3333

Iteration 2:

    x2 = 2.3333 - ((2.3333)^2 - 3(2.3333) + 2)/(2(2.3333) - 3)
       = 2.3333 - 0.4444/1.6667 ≈ 2.0667

Iteration 3:

    x3 = 2.0667 - ((2.0667)^2 - 3(2.0667) + 2)/(2(2.0667) - 3)
       = 2.0667 - 0.0711/1.1333 ≈ 2.0039

Iteration 4:

    x4 = 2.0039 - ((2.0039)^2 - 3(2.0039) + 2)/(2(2.0039) - 3)
       = 2.0039 - 0.0039/1.0078 ≈ 2.0000

Here the corrections shrink rapidly and the approximation settles at the root.

The Newton-Raphson method iterates to find a root of f(x) = x^2 - 3x + 2 and
converges after four iterations in this specific case, providing an approximation
of x ≈ 2 as the root. (The exact roots are x = 1 and x = 2; starting from
x0 = 3, the iteration converges to the nearer root, x = 2.)

Example 2.4
Use the Newton-Raphson method with the synthetic division approach, or
otherwise, to compute an approximation to one of the zeros of
P(x) = 2x^4 - 3x^2 + 3x - 4, with x0 = -2 as the initial approximation.

Solution
Evaluate P(x0) = P(-2) by synthetic division:

    -2 | 2     0    -3     3    -4
       |      -4     8   -10    14
       |----------------------------
         2    -4     5    -7 |  10

⇒ P(-2) = 10, and

    P(x) = (x + 2)(2x^3 - 4x^2 + 5x - 7) + 10

    Q(x) = 2x^3 - 4x^2 + 5x - 7  and  P'(-2) = Q(-2)

To find Q(-2), we have

    -2 | 2    -4     5    -7
       |      -4    16   -42
       |----------------------
         2    -8    21 | -49 = Q(-2) = P'(-2)

And

    x1 = x0 - P(x0)/P'(x0) = -2 - 10/(-49) ≈ -1.796

Repeat the same process to find x2. We have

    -1.796 | 2      0       -3       3       -4
           |       -3.592    6.451  -6.197    5.742
           |-----------------------------------------
             2     -3.592    3.451  -3.197 |  1.742 = P(x1)
           |       -3.592   12.902 -29.368
           |-------------------------------
             2     -7.184   16.353 | -32.565 = Q(x1) = P'(x1)

⇒ P(-1.796) ≈ 1.742, P'(-1.796) ≈ -32.565

    x2 = x1 - P(x1)/P'(x1) = -1.796 - 1.742/(-32.565) ≈ -1.7425

Repeating the process, the third iteration gives x3 ≈ -1.73896.
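The combined procedure (two synthetic divisions per Newton step, the first giving P(x_n) and the second giving P'(x_n)) can be sketched in Python; the function names are illustrative:

```python
def horner_eval(coeffs, x0):
    """Two passes of synthetic division: returns (P(x0), P'(x0)).

    coeffs lists the polynomial's coefficients in descending powers.
    """
    p = coeffs[0]      # running value of P(x0) (first synthetic division)
    dp = 0.0           # running value of P'(x0) (second synthetic division)
    for c in coeffs[1:]:
        dp = dp * x0 + p
        p = p * x0 + c
    return p, dp

def newton_horner(coeffs, x0, tol=1e-6, max_iter=50):
    """Newton's method for a polynomial, evaluating P and P' by synthetic division."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_eval(coeffs, x)
        x_next = x - p / dp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within the maximum number of iterations")

# Example 2.4: P(x) = 2x^4 - 3x^2 + 3x - 4 with x0 = -2
print(horner_eval([2, 0, -3, 3, -4], -2.0))   # (10.0, -49.0), as in the tables
print(round(newton_horner([2, 0, -3, 3, -4], -2.0), 5))
```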

The Newton-Raphson method is a powerful numerical tool for finding roots of
equations, particularly when the function and its derivative are well-behaved
around the root and a good initial guess is provided.

EXERCISE 2.1
Use the Newton-Raphson method to find roots of the following equations,
focusing on different initial guesses and convergence criteria:
1. f(x) = x^3 - 6x^2 + 11x - 6 with an initial guess x0 = 0.
2. f(x) = x^4 - 3x^3 + 2x^2 - 5x + 1 with an initial guess x0 = 1.5.
3. f(x) = e^x - x^2 - 3 with an initial guess x0 = 1.
4. f(x) = x^5 - 4x^3 + x - 1 with an initial guess x0 = 1.
5. f(x) = 2x^3 - 11.7x^2 + 17.7x - 5. Start with an initial guess of x0 = 3,
and perform iterations until the solution reaches a tolerance of 10^-4.

THE SECANT METHOD


The Secant Method is a numerical technique used to find the root of a function.
Similar to the Newton-Raphson method, the Secant Method approximates the
root of an equation; however, it doesn’t require the derivative of the function,
making it more versatile in some scenarios.
Key Steps of the Secant Method:
1. Initialization:
The method begins with initial guesses, x0 and x1, close to the root of the
function.
2. Approximation of the Derivative:
Rather than computing the derivative, the Secant Method approximates it by
using the two points (x0, f(x0)) and (x1, f(x1)) on the curve of the function.
3. Secant Line and Intersection:
A line is drawn through the two points, approximating the tangent to the curve.
The intersection of this line with the x-axis gives a better estimate of the root.
4. Iterative Refinement:
The method continues by iteratively updating the points x0 and x1, based on
the intersection with the x-axis, until the desired accuracy is achieved.
Advantages and Characteristics of the Secant Method:
1. Derivative-Free Approach:

Unlike some root-finding methods, such as Newton's method, the Secant
Method doesn’t require knowledge of the derivative. This makes it more
adaptable in situations where obtaining the derivative is complex or impossible.
2. Convergence Rate:
The convergence rate of the Secant Method is superlinear, of order equal to the
golden ratio (≈1.618), meaning it converges relatively quickly but not as rapidly
as Newton's method, which uses the derivative.
3. Applicability:
It's effective for functions that are not differentiable, discontinuous, or those
where the derivative calculation is challenging or impractical.
Limitations and Considerations:
1. Convergence Rate:
- While it converges faster than some simpler methods, it is slower than
methods that use derivative information, such as Newton's method.
2. Convergence Guarantee:
- Unlike the Newton-Raphson method, the Secant Method doesn’t guarantee
convergence in all cases. Oscillation or divergence might occur under certain
conditions.
3. Initial Guesses:
- The accuracy of the result is highly dependent on the choice of initial
guesses. Poor initial guesses can lead to slow convergence or divergence.
Algorithm for the Secant Method:
Input:
- f(x): The function for which the root needs to be found.
- x0, x1: Initial guesses close to the root.
- ε: The desired tolerance, or the level of accuracy.
- N: The maximum number of iterations.
Output:
- x: Approximated root of the function f(x).
Procedure:
1. Initialize:
- Set n = 0 (iteration counter).
- While |f(x1)| > ε and n < N, repeat steps 2-4:
2. Compute the next approximation:
- Calculate the next approximation using the formula:

    x = x1 - f(x1)(x1 - x0) / (f(x1) - f(x0))

3. Update:
- Set x0 = x1 and x1 = x.
4. Increment:
- Increment n by 1.
5. Return:
- If |f(x1)| ≤ ε, return x as the root.
- If n = N, report that no convergence was reached within N iterations.
The algorithm iteratively refines the approximation of the root by updating the
guesses x0 and x1 based on the function values until the convergence condition
is met or the maximum number of iterations is reached.
Remember, this algorithm assumes that the chosen initial guesses x0 and x1 are
close enough to the actual root and that the function is continuous in the given
interval. Adjustments in initial guesses and iterations may be required for
specific functions to achieve accurate results.
The Secant Method serves as a valuable alternative to find roots of functions,
especially when the computation of derivatives is challenging. It strikes a
balance between convergence speed and not requiring derivative information,
making it versatile in various problem-solving scenarios. However, careful
consideration of initial guesses and potential convergence issues is crucial for
its effective application.
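A Python sketch of the algorithm above (the default tolerance and iteration cap are arbitrary; the test function is the one used in Example 2.6):

```python
def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Approximate a root of f(x) = 0 using the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) <= tol:                          # convergence test on |f(x1)|
            return x1
        x_new = x1 - f1 * (x1 - x0) / (f1 - f0)     # secant update
        x0, x1 = x1, x_new                          # shift the pair of points
    raise RuntimeError("no convergence within the maximum number of iterations")

# f(x) = x^3 - 2x - 5 with x0 = 2, x1 = 3
root = secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(round(root, 5))  # 2.09455
```

Note that no derivative is passed in: the quotient (f1 - f0)/(x1 - x0) plays its role.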
Example 2.4
Use the Secant Method to find an approximate root of
f(x) = 2x^4 - 3x^2 + x - 4 with initial approximates x0 = 1 and x1 = 2.
Solution
Function:
f(x) = 2x^4 - 3x^2 + x - 4
Initial Approximates:
x0 = 1 and x1 = 2.
Let's begin the iterations to find the approximate root using the Secant Method.
Iteration 1:
1. Calculate f(x0) = f(1):

    f(1) = 2(1)^4 - 3(1)^2 + 1 - 4 = 2 - 3 + 1 - 4 = -4

2. Calculate f(x1) = f(2):

    f(2) = 2(2)^4 - 3(2)^2 + 2 - 4 = 32 - 12 + 2 - 4 = 18

3. Using the Secant Method formula:

    x = 2 - 18(2 - 1)/(18 - (-4)) = 2 - 18/22 ≈ 1.182

Iteration 2:
1. Update values:
- Set x0 = 2 and x1 = 1.182
2. Calculate f(x0) = f(2):

    f(2) = 18

3. Calculate f(x1) = f(1.182):

    f(1.182) = 2(1.182)^4 - 3(1.182)^2 + 1.182 - 4 ≈ -3.105

4. Using the Secant Method formula:

    x = 1.182 - (-3.105)(1.182 - 2)/(-3.105 - 18) ≈ 1.182 + 0.120 = 1.302

Iteration 3:
1. Update values: x0 = 1.182 and x1 = 1.302
2. f(1.182) ≈ -3.105
3. f(1.302) ≈ -2.036
4. Using the Secant Method formula:

    x = 1.302 - (-2.036)(1.302 - 1.182)/(-2.036 - (-3.105)) ≈ 1.302 + 0.229 = 1.531

Performing additional iterations of the Secant Method will further refine the
approximation of the root. In this case, after three iterations, the estimate of
the root is approximately x ≈ 1.531, and the iterates continue toward the root
near x ≈ 1.450. This process can be continued until reaching the desired level
of accuracy or convergence.

Example 2.5
Use the Secant Method to find an approximate root of f(x) = x^2 - 3x + 2
using initial approximates x0 = 0 and x1 = 1.
Solution
Function:
f(x) = x^2 - 3x + 2
Initial Approximates:
x0 = 0 and x1 = 1
Let's start the iteration process.
Iteration 1:
1. f(x0) = f(0) = 2
2. f(x1) = f(1) = 0
3. Using the Secant Method formula:

    x = 1 - 0(1 - 0)/(0 - 2) = 1 - 0 = 1

Since f(x1) = 0, the point x1 = 1 is already an exact root of the function, and
the convergence test |f(x1)| ≤ ε is satisfied immediately. Note that a further
update would set x0 = 1 and x1 = 1, making the denominator f(x1) - f(x0)
equal to zero, so in practice the algorithm must stop as soon as |f(x1)| ≤ ε.
For this specific case, using the Secant Method with the given initial
approximates x0 = 0 and x1 = 1 yields the root x = 1 at once, because the
initial guess x1 happens to land exactly on a root (the roots of
x^2 - 3x + 2 are x = 1 and x = 2).
Example 2.6
Use the Secant Method to find an approximate root of the function
f(x) = x^3 - 2x - 5 using the initial approximations x0 = 2 and x1 = 3.
Solution
Function:
f(x) = x^3 - 2x - 5
Initial Approximations:
x0 = 2 and x1 = 3.
Let's start with the iteration process.
Iteration 1:
1. f(x0) = f(2) = 2^3 - 2(2) - 5 = 8 - 4 - 5 = -1
2. f(x1) = f(3) = 3^3 - 2(3) - 5 = 27 - 6 - 5 = 16
3. Using the Secant Method formula:
x = 3 - 16(3 - 2) / (16 - (-1)) = 3 - 16/17 ≈ 2.059
Iteration 2:
1. Update values: x0 = 3 and x1 = 2.059
2. f(3) = 16
3. f(2.059) ≈ -0.391
4. Using the Secant Method formula:
x = 2.059 - (-0.391)(2.059 - 3) / (-0.391 - 16) ≈ 2.081
Iteration 3:
1. Update values: x0 = 2.059 and x1 = 2.081
2. f(2.059) ≈ -0.391
3. f(2.081) ≈ -0.147
4. Using the Secant Method formula:
x = 2.081 - (-0.147)(2.081 - 2.059) / (-0.147 - (-0.391)) ≈ 2.094
Iteration 4:
1. Update values: x0 = 2.081 and x1 = 2.094
2. f(2.081) ≈ -0.147
3. f(2.094) ≈ -0.006
4. Using the Secant Method formula:
x = 2.094 - (-0.006)(2.094 - 2.081) / (-0.006 - (-0.147)) ≈ 2.0946
Iteration 5:
1. Update values: x0 = 2.094 and x1 = 2.0946
2. f(2.094) ≈ -0.006
3. f(2.0946) ≈ 0.000, so the function value has essentially vanished
4. Using the Secant Method formula:
- The update produces no further change to the quoted precision, since
f(x1) is already nearly zero.
After five iterations using the Secant Method, the approximate root of
f(x) = x^3 - 2x - 5 is x ≈ 2.0946. The method refines the estimate of the root
with each iteration, moving closer to the actual root.
EXERCISE 2.3
Use the Secant Method for finding roots of the following functions:
1. f(x) = x^3 - 2x - 5; starting with initial approximations x0 = 2 and x1 = 3.
Perform three iterations and provide the approximated root.
2. g(x) = 2x^4 - 3x^2 + x - 4; with initial guesses x0 = 1 and x1 = 2.
Continue the iterations until the solution converges.
3. h(x) = e^x - x^2 - 3; with initial approximations x0 = 1 and x1 = 2. Perform
four iterations and report the approximations for the root.
4. k(x) = cos(x) - x; within the interval [0, π/2]. Start with x0 = 0.5 and
x1 = 1, and perform five iterations to find the approximated root.
5. m(x) = x^5 - 4x^3 + x - 1; use the Secant Method to find an approximate
root of the equation with initial approximations x0 = 1 and x1 = 2. Carry out
iterations until the solution converges, and determine the approximated root.
These exercises are designed to practice the application of the Secant Method in
finding roots of different functions, employing varying initial guesses and
iterations to obtain approximate roots using this numerical technique.
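The secant update used throughout the worked examples above can be automated. Below is a minimal Python sketch (the function name and stopping rule are our own choices, not from the text), applied here to k(x) = cos(x) - x from Exercise 2.3:

```python
import math

def secant(f, x0, x1, tol=1e-5, max_iter=50):
    """Approximate a root of f using the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # equal function values: the secant update is undefined
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # the secant update formula
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2  # keep only the two most recent estimates
    return x1

# k(x) = cos(x) - x with x0 = 0.5 and x1 = 1, as in Exercise 2.3
root = secant(lambda x: math.cos(x) - x, 0.5, 1.0)
print(round(root, 4))  # → 0.7391
```

Because it reuses the previous function value, each iteration costs only one new evaluation of f, which is one reason the secant method is attractive when derivatives are unavailable.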
FIXED POINT ITERATION METHOD
Fixed-point iteration methods are numerical techniques used to find the fixed
point of a function, that is, a point where the output of the function equals
its input. This method is used to solve equations of the form g(x) = x, where
g(x) is a function rearranged from the original equation, often making it
easier to solve for a root.

Key Concepts of Fixed-Point Iteration Methods:

1. Fixed Point:
- A fixed point of a function g(x) is a value x such that g(x) = x.
2. Iterative Approach:
- Fixed-point iteration involves repeatedly applying a function g(x) to an
initial guess x0 to find the fixed point. The method uses the relationship
x_(n+1) = g(x_n) iteratively until convergence.
3. Convergence Criteria:
- Convergence is achieved when the difference between consecutive iterations
is smaller than a specified tolerance or when a maximum number of iterations is
reached.
Steps in Fixed-Point Iteration:
1. Rearranging the Equation:
- Rearrange the original equation into the form g(x) = x to establish the
fixed-point equation.
2. Selecting an Initial Guess:
- Choose an initial guess x0 close to the expected root or fixed point.
3. Iterative Process:
- Apply the fixed-point iteration formula x_(n+1) = g(x_n), iterating from the
initial guess.
4. Convergence Check:
- Verify convergence by checking whether |x_(n+1) - x_n| is less than a
specified tolerance, or whether the maximum number of iterations has been
reached.
Advantages and Considerations:
1. Simplicity: - Fixed-point iteration methods are straightforward and often
easier to implement compared to some other numerical techniques.
2. Convergence and Divergence:
- The convergence of fixed-point iteration methods is not guaranteed for all
functions or initial guesses. Some functions might diverge or converge slowly.
3. Choosing g(x):
- The choice of g(x) is crucial. A proper rearrangement of the equation is
required to ensure convergence. g(x) should contract the space and have a
derivative with an absolute value less than 1 in the vicinity of the root.
Limitations:

1. Convergence Rate:
- Fixed-point iteration may converge slowly for certain functions or initial
guesses, requiring more iterations to achieve accuracy.
2. Convergence Guarantee:
- Unlike some other methods, fixed-point iteration methods do not guarantee
convergence for all functions or initial guesses.
Fixed-point iteration methods are a fundamental tool in numerical analysis,
often used for solving equations or finding fixed points of functions. They are
versatile but require careful selection of the function g(x) and initial guess to
ensure convergence and accurate solutions.

Algorithm for Fixed-Point Iteration:

Input:
g(x): the function rearranged in the form x = g(x).
x0: initial guess.
ε: the desired tolerance, or level of accuracy.
N: the maximum number of iterations allowed.
Output:
x: approximated fixed point of the function g(x).
Procedure:
1. Initialize:
Set n = 0 (iteration counter).
While n < N:
2. Compute the next approximation:
Calculate the next approximation using the formula:
x = g(x0)
3. Convergence Check:
If |x - x0| < ε, return x as the fixed point.
4. Update:
Set x0 = x.
5. Increment:
Increment n by 1.
6. Return:
If n = N, return "Maximum iterations reached; no convergence."
The algorithm iterates by applying the function g(x) to an initial guess x0,
and then continues this process, updating the value of x0 with the obtained x,
until convergence is achieved or the maximum number of iterations is reached.
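The procedure above translates almost line-for-line into code. A minimal Python sketch (the function name and the test function g(x) = e^(-x) are our own choices, not from the text):

```python
import math

def fixed_point(g, x0, tol=1e-5, max_iter=100):
    """Iterate x = g(x) from x0 until successive values agree within tol."""
    for _ in range(max_iter):
        x = g(x0)
        if abs(x - x0) < tol:
            return x
        x0 = x  # update the current guess with the obtained value
    raise RuntimeError("Maximum iterations reached; no convergence.")

# g(x) = e^(-x) has a single fixed point where x = e^(-x), near 0.567
fp = fixed_point(lambda x: math.exp(-x), 0.5)
print(round(fp, 4))  # → 0.5671
```

Here |g'(x)| = e^(-x) ≈ 0.57 < 1 near the fixed point, which is why the iteration converges.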
Example 2.7
Use the Fixed-Point Iteration method to find the fixed point of
g(x) = sqrt((x + 3) / 2).
Solution
We are looking for the fixed point x where x = g(x). We'll perform the
iterations starting with an initial guess x0 = 1.
Function:
g(x) = sqrt((x + 3) / 2)
Rearranged Equation for Fixed Point:
x = g(x) = sqrt((x + 3) / 2)
Initial Approximation:
x0 = 1
Fixed-Point Iteration Steps:

1. Initialize:
- Set the initial guess x0 = 1.
- Set a tolerance, say 10^-5, and a maximum number of iterations, say 10.

2. Iteration 1:
- Calculate g(x0):
x1 = g(1) = sqrt((1 + 3) / 2) = sqrt(2) ≈ 1.414
3. Iteration 2:
- Update x0 using the obtained value:
x2 = g(x1) = sqrt((1.414 + 3) / 2) = sqrt(2.207) ≈ 1.486
4. Iteration 3:
x3 = g(x2) = sqrt((1.486 + 3) / 2) = sqrt(2.243) ≈ 1.498
5. Iteration 4:
x4 = g(x3) = sqrt((1.498 + 3) / 2) = sqrt(2.249) ≈ 1.4997
6. Iteration 5:
x5 = g(x4) = sqrt((1.4997 + 3) / 2) = sqrt(2.24985) ≈ 1.49995
7. Iteration 6:
x6 = g(x5) = sqrt((1.49995 + 3) / 2) ≈ 1.49998
The successive differences are now well below any reasonable tolerance. The
iterates 1.414, 1.486, 1.498, 1.4997, ... approach x = 1.5, which is indeed a
fixed point: g(1.5) = sqrt(4.5 / 2) = sqrt(2.25) = 1.5. The method converges
to the fixed point x = 1.5 after a handful of iterations.
Example 2.8
Use the Fixed-Point Iteration method to find an approximate root of
f(x) = 2x^4 - 3x^2 + x - 4.
Solution
Function: f(x) = 2x^4 - 3x^2 + x - 4
Rearranged Equation for Fixed Point:
Setting f(x) = 0 and solving for the x^4 term gives 2x^4 = 3x^2 - x + 4, and
hence a rearrangement of the form x = g(x):
x = g(x) = ((3x^2 - x + 4) / 2)^(1/4)
Fixed-Point Iteration Steps:
1. Initialization:
- Set the initial guess x0 to a convenient value, for instance x0 = 0.
- Define a tolerance level, say 10^-5, and a maximum number of iterations of 5.
2. Iterations:
Iteration 1:
x1 = g(x0) = ((3(0)^2 - 0 + 4) / 2)^(1/4) = 2^(1/4) ≈ 1.189
Iteration 2:
x2 = g(x1) = ((3(1.189)^2 - 1.189 + 4) / 2)^(1/4) ≈ 1.370
Iteration 3:
x3 = g(x2) = ((3(1.370)^2 - 1.370 + 4) / 2)^(1/4) ≈ 1.426
Iteration 4:
x4 = g(x3) = ((3(1.426)^2 - 1.426 + 4) / 2)^(1/4) ≈ 1.443
Iteration 5:
x5 = g(x4) = ((3(1.443)^2 - 1.443 + 4) / 2)^(1/4) ≈ 1.448
The Fixed-Point Iteration method is applied to the rearranged equation,
producing the values x1 through x5 in turn. After five iterations the method
is approaching the fixed point x ≈ 1.45, which is a root of f(x).

Example 2.9
Find an approximate root of the function f(x) = x^2 - 3x + 2 using the
fixed-point iteration method.

Solution
Function:
f(x) = x^2 - 3x + 2
Rearranged Equation for Fixed Point:
To find x = g(x), set f(x) = 0 and add x to both sides:
0 = x^2 - 3x + 2
x = x^2 - 2x + 2
x = g(x) = x^2 - 2x + 2
Initial Approximation:
For the initial guess, let's take x0 = 3.
Fixed-Point Iteration Steps:
1. Initialize:
- Set the initial guess x0 = 3.
- Choose a tolerance, say 10^-5, and a maximum number of iterations, say 10.

2. Iteration 1:
- Calculate g(x0):
x1 = g(3) = 3^2 - 2(3) + 2 = 9 - 6 + 2 = 5
3. Iteration 2:
- Update x0 using the obtained value:
x2 = g(x1) = 5^2 - 2(5) + 2 = 25 - 10 + 2 = 17
4. Iteration 3:
x3 = g(x2) = 17^2 - 2(17) + 2 = 289 - 34 + 2 = 257
5. Iteration 4:
x4 = g(x3) = 257^2 - 2(257) + 2 = 66049 - 514 + 2 = 65537
6. Iteration 5:
x5 = g(x4) = 65537^2 - 2(65537) + 2 (an enormous value)

The iterations diverge, producing ever larger values, which indicates that the
method is not converging for this rearrangement and initial guess. (Near the
root x = 2 the derivative g'(x) = 2x - 2 satisfies |g'(2)| = 2 > 1, so iterates
starting on that side are pushed away.) It is important to note that not all
functions and initial guesses will converge using the Fixed-Point Iteration
method.

Example 2.10
Use the Fixed-Point Iteration method to find an approximate root of the
function f(x) = x^3 - 2x - 5.
Solution
Function: f(x) = x^3 - 2x - 5
Rearranged Equation for Fixed Point:
Setting f(x) = 0 and adding x to both sides to find x = g(x):
0 = x^3 - 2x - 5
x = x^3 - x - 5
x = g(x) = x^3 - x - 5
Fixed-Point Iteration Steps:

1. Initialize:
- Set the initial guess x0 to start the iteration.
- Choose a tolerance value for the convergence check.
- Select the maximum number of iterations.

2. Iteration:
- Apply the fixed-point iteration formula x_(n+1) = g(x_n).
- Calculate x_(n+1) for each iteration.

Let's choose an initial guess, such as x0 = 2, and perform the first few
iterations to check for convergence.

Initial Approximation:
x0 = 2
Fixed-Point Iteration:

1. Iteration 1:
- Calculate g(x0):
x1 = g(2) = 2^3 - 2 - 5 = 8 - 7 = 1
2. Iteration 2:
- Update x0 using the obtained value:
x2 = g(x1) = 1^3 - 1 - 5 = -5
3. Iteration 3:
x3 = g(x2) = (-5)^3 - (-5) - 5 = -125 + 5 - 5 = -125
From the third iteration the values grow rapidly in magnitude and the iterates
diverge, indicating that the method is not converging for this rearrangement
and initial guess. This outcome suggests that the chosen rearrangement and
initial guess are not suitable for the Fixed-Point Iteration method.
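Whether the iteration converges hinges on the rearrangement chosen: convergence requires |g'(x)| < 1 near the fixed point. For the same equation x^3 - 2x - 5 = 0, a rearrangement that isolates the 2x term and takes a cube root does converge. A short Python sketch (this rearrangement is our own choice, not from the text):

```python
def g(x):
    # x^3 - 2x - 5 = 0 rewritten as x = (2x + 5)^(1/3); |g'(x)| < 1 near the root
    return (2 * x + 5) ** (1 / 3)

x = 2.0
for _ in range(20):  # a fixed number of iterations is enough to see convergence
    x = g(x)
print(round(x, 4))  # → 2.0946
```

Rearrangements whose derivative exceeds 1 in magnitude near the root, like those in the two divergent examples above, push the iterates away instead.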

Exercise 2.4
Use the Fixed-Point Iteration method to find the roots of the following:
1. f(x) = x^2 - x - 2; x0 = 1.
2. g(x) = 5x - 2; x0 = 0.
3. h(x) = e^(x-2); x0 = 1.
4. k(x) = (2x^3 + 1) / (x - 1); x0 = 2.
5. m(x) = ln(x^2 + 3); x0 = 1.

These exercises are designed to help practice the application of the
Fixed-Point Iteration method, involving rearranging functions to find
appropriate fixed-point equations and conducting iterations to calculate the
fixed points of various functions.

BISECTION METHOD:
AN ITERATIVE ROOT-FINDING TECHNIQUE
The bisection method is an iterative numerical technique used for finding
approximate solutions of an equation. It's particularly useful for finding roots of
continuous functions. The method is based on the Intermediate Value Theorem,
which states that if a function f(x) is continuous on an interval [a, b] and f(a)
and f(b) have opposite signs, then there exists at least one value c in the interval
[a, b] such that f  c   0 (i.e., there is a root or a zero of the function in that
interval).

Here's a simple breakdown of the bisection method:

1. Initial Bracketing: Start with an interval [a, b] that contains the root of
the function, and make sure that f(a) and f(b) have opposite signs. This is
crucial for the application of the Intermediate Value Theorem.

2. Iterative Process: Divide the interval in half and check which subinterval
contains the root. To do this, find the midpoint c of the interval [a, b] using
c = (a + b) / 2.
3. Checking Sign: Calculate f(c) and examine the sign of f(c). Update the
interval [a, b] to be [a, c] or [c, b] based on the sign of f(c). The new interval
will still contain the root due to the sign change criterion (since the function is
continuous).
4. Termination Condition:
- Iterations continue until the length of the interval is sufficiently small
or until a predefined tolerance is reached.
5. Repeat: Continue this process iteratively by halving the interval that contains
the root until the desired level of accuracy is achieved.

The bisection method guarantees convergence to a root because at each step, the
interval containing the root is reduced by half. The process continues until the
width of the interval becomes smaller than a predefined tolerance level.

The fundamental principle behind the bisection method is that of continually
narrowing down the interval where the root exists, guided by the Intermediate
Value Theorem, which ensures the existence of a root in the interval where the
function changes sign.

This method is relatively simple but effective for finding roots of continuous
functions, especially when other methods (such as Newton's method) might not
be suitable or when a reliable initial guess is not available.

Advantages and Considerations:


Convergence: The bisection method guarantees convergence to a root if the
initial interval contains a root and the function is continuous within that interval.
Robustness: It is a robust method and doesn’t rely on the function being
differentiable, making it applicable to a wide range of functions.
Slow Convergence: The bisection method converges linearly, which means the
rate of convergence is relatively slow compared to other methods, especially
when the initial interval is large.
Ease of Implementation: Its straightforward nature makes it easy to understand
and implement.
Below is a basic algorithm for the bisection method, which outlines the steps
involved in finding a root within a given interval:

Bisection Method Algorithm:


Inputs:
- Function f(x)
- Initial interval endpoints a and b such that f(a) * f(b) < 0
- Tolerance ε for convergence

Algorithm Steps:

1. Initialize:
- Set n = 0 (iteration counter).
- Calculate the initial midpoint: c0 = (a + b) / 2.
2. Loop:
- Repeat until the interval is sufficiently small or until a predetermined
number of iterations is reached.
2.1. Evaluate Function:
- Calculate the function value at the current midpoint: f(c_n).
2.2. Check Tolerance or Convergence:
- If the absolute value of the function at the midpoint is less than or equal
to the tolerance, |f(c_n)| ≤ ε, the algorithm has converged. The current
midpoint c_n is an acceptable root approximation.
- If the interval width is smaller than the tolerance, b - a < ε, the
algorithm has converged. The midpoint c_n is an acceptable root approximation.

2.3. Update Interval:

- If f(a) * f(c_n) < 0, the root lies in the subinterval [a, c_n]: update the
interval by setting b = c_n.
- Otherwise, the root lies in the subinterval [c_n, b]: update the interval by
setting a = c_n.

2.4. Calculate New Midpoint:

- Recalculate the midpoint for the updated interval:
c_(n+1) = (a + b) / 2.
2.5. Increment Iteration:
- Increase the iteration counter: n = n + 1.
3. Output:
- The final midpoint c_n within the defined tolerance represents the root
approximation within the given interval.

This algorithm converges to a root of the function within the provided interval
by iteratively narrowing down the interval where the root is expected to lie,
based on the behavior of the function.
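As a complement to the pseudocode, here is a compact Python sketch of the same algorithm (the function name and stopping rule are our own choices), applied to the equation x^3 - x - 1 = 0 treated in Example 2.12:

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b]; f(a) and f(b) must have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2  # midpoint of the current bracket
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:  # sign change in [a, c]: the root lies there
            b = c
        else:                # otherwise the root lies in [c, b]
            a = c
    return (a + b) / 2

# x^3 - x - 1 = 0 on [1, 2], as in Example 2.12
print(round(bisect(lambda x: x**3 - x - 1, 1.0, 2.0), 4))  # → 1.3247
```

Each pass halves the bracket, so roughly 20 iterations shrink an interval of width 1 below 10^-6.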

Example 2.11: Use the bisection method to find a root of the following
function: f(x) = x^2 - 3x + 2.
Solution
Bisection Method Steps:
Given: f(x) = x^2 - 3x + 2.

Initial Interval Selection:

- We need to find an interval [a, b] where f(a) * f(b) < 0 (indicating a root
exists within the interval).
- Note that f(0) = 2 and f(3) = 2 are both positive, so [0, 3] is not a valid
bracket even though it contains both roots (x = 1 and x = 2).
- Let's choose the interval [1.5, 3], because f(1.5) = -0.25 and f(3) = 2, and
their product is less than zero: (-0.25)(2) < 0.
Let's perform the iterations to find the root within the interval [1.5, 3]
using the bisection method.
Iteration 1:
- Initial interval: [a0, b0] = [1.5, 3]
- Midpoint: c0 = (1.5 + 3) / 2 = 2.25
- Calculate f(c0):
- f(2.25) = (2.25)^2 - 3(2.25) + 2 = 5.0625 - 6.75 + 2 = 0.3125
- New interval selection based on the sign of f(c0):
- f(2.25) > 0 and f(1.5) < 0, so the root is in the subinterval [1.5, 2.25].
- Set a1 = 1.5 and b1 = 2.25.
Iteration 2:
- Updated interval: [a1, b1] = [1.5, 2.25]
- Midpoint: c1 = (1.5 + 2.25) / 2 = 1.875
- Calculate f(c1):
- f(1.875) = (1.875)^2 - 3(1.875) + 2 = 3.515625 - 5.625 + 2 = -0.109375
- New interval selection based on the sign of f(c1):
- f(1.875) < 0 and f(2.25) > 0, so the root is in the subinterval
[1.875, 2.25].
- Set a2 = 1.875 and b2 = 2.25.
Iteration 3:
- Updated interval: [a2, b2] = [1.875, 2.25]
- Midpoint: c2 = (1.875 + 2.25) / 2 = 2.0625
- Calculate f(c2):
- f(2.0625) = (2.0625)^2 - 3(2.0625) + 2 = 4.25390625 - 6.1875 + 2
= 0.06640625
- New interval selection based on the sign of f(c2):
- f(2.0625) > 0 and f(1.875) < 0, so the root is in the subinterval
[1.875, 2.0625].
- Set a3 = 1.875 and b3 = 2.0625.
Iteration 4:
- Updated interval: [a3, b3] = [1.875, 2.0625]
- Midpoint: c3 = (1.875 + 2.0625) / 2 = 1.96875
- Calculate f(c3):
- f(1.96875) = (1.96875)^2 - 3(1.96875) + 2 = 3.8759765625 - 5.90625 + 2
= -0.0302734375
- New interval selection based on the sign of f(c3):
- f(1.96875) < 0, so the root is in the subinterval [1.96875, 2.0625].
- Set a4 = 1.96875 and b4 = 2.0625.

Iteration 5:
- Updated interval: [a4, b4] = [1.96875, 2.0625]
- Midpoint: c4 = (1.96875 + 2.0625) / 2 = 2.015625
- Calculate f(c4):
- f(2.015625) = (2.015625)^2 - 3(2.015625) + 2 = 4.062744140625 - 6.046875 + 2
= 0.015869140625
- New interval selection based on the sign of f(c4):
- f(2.015625) > 0, so the root is in the subinterval [1.96875, 2.015625].
- Set a5 = 1.96875 and b5 = 2.015625.
The process can be continued for more iterations to achieve a desired level of
accuracy or until convergence. Within five iterations, the bisection method has
significantly narrowed down the interval in which the root exists, with the
root x = 2 bracketed in [1.96875, 2.015625].

Example 2.12
Use the bisection method to find a root of the equation x^3 - x - 1 = 0.
Solution
1st iteration:
f(1) = -1 < 0 and f(2) = 5 > 0
∴ The root lies between 1 and 2.
x0 = (1 + 2) / 2 = 1.5; f(x0) = f(1.5) = 0.875 > 0
2nd iteration:
f(1) = -1 < 0 and f(1.5) = 0.875 > 0
∴ The root lies between 1 and 1.5.
x1 = (1 + 1.5) / 2 = 1.25; f(x1) = f(1.25) = -0.29688 < 0
3rd iteration:
f(1.25) = -0.29688 < 0 and f(1.5) = 0.875 > 0
∴ The root lies between 1.25 and 1.5.
x2 = (1.25 + 1.5) / 2 = 1.375; f(x2) = f(1.375) = 0.22461 > 0
Continuing in this way, at the 11th iteration we have
f(x10) = f(1.32471) = -0.00005 ≈ 0.
The approximate root of the equation x^3 - x - 1 = 0 using the bisection
method is therefore 1.32471.

Example 2.13
Solve the equation f(x) = x^3 - 2x - 5 using the bisection method to find the
root within the interval [2, 3].
Solution
The bisection method relies on the intermediate value theorem and iteratively
narrows down the interval where the root lies.
Given:
- Function: f(x) = x^3 - 2x - 5
- Initial interval: [2, 3], with f(2) = -1 < 0 and f(3) = 16 > 0
Bisection Method:
Iteration 1:
- Initial Interval: [2, 3]
- Midpoint: c1 = (2 + 3) / 2 = 2.5
- Function Values:
f(2) = 2^3 - 2(2) - 5 = -1, f(2.5) = 2.5^3 - 2(2.5) - 5 = 5.625
- Root in Interval: [2, 2.5]

Iteration 2:
- Updated Interval: [2, 2.5]
- Midpoint: c2 = (2 + 2.5) / 2 = 2.25
- Function Values: f(2) = -1, f(2.25) = 2.25^3 - 2(2.25) - 5 = 1.890625
- Root in Interval: [2, 2.25]

Iteration 3:
- Updated Interval: [2, 2.25]
- Midpoint: c3 = (2 + 2.25) / 2 = 2.125
- Function Values: f(2.125) = 2.125^3 - 2(2.125) - 5 = 0.345703125
- Root in Interval: [2, 2.125]

Iteration 4:
- Updated Interval: [2, 2.125]
- Midpoint: c4 = (2 + 2.125) / 2 = 2.0625
- Function Values:
f(2.0625) = 2.0625^3 - 2(2.0625) - 5 = -0.351318359375
- Root in Interval: [2.0625, 2.125]

The process continues until the width of the interval reaches the desired
tolerance or the maximum number of iterations. As we proceed, the bisection
method repeatedly narrows down the interval where the root exists. In this
case, after four iterations, the approximate solution is within the interval
[2.0625, 2.125], getting closer to the actual root value (x ≈ 2.0946).

Example 2.14
Use the bisection method to find an approximate solution of the equation
f(x) = x^5 - 4x^3 + x - 1 within the interval [-2, 2].
Solution
Bisection Method Steps:
1. Initial Interval: [-2, 2]
- f(-2) = (-2)^5 - 4(-2)^3 + (-2) - 1 = -32 + 32 - 2 - 1 = -3
- f(2) = 2^5 - 4(2)^3 + 2 - 1 = 32 - 32 + 2 - 1 = 1
- Since the signs of f(-2) and f(2) are different, there is a root within the
interval [-2, 2].

2. Iterative Process:
- 1st iteration:
- Midpoint: c1 = (-2 + 2) / 2 = 0
- f(0) = 0^5 - 4(0)^3 + 0 - 1 = -1
- New interval: [0, 2] (as the signs of f(-2) and f(0) are the same, the
bracketed sign change lies in [0, 2])

- 2nd iteration:
- Midpoint: c2 = (0 + 2) / 2 = 1
- f(1) = 1^5 - 4(1)^3 + 1 - 1 = -3
- New interval: [1, 2] (as the signs of f(1) and f(2) are different)
- 3rd iteration:
- Midpoint: c3 = (1 + 2) / 2 = 1.5
- f(1.5) = (1.5)^5 - 4(1.5)^3 + 1.5 - 1 = -5.40625
- New interval: [1.5, 2] (as the signs of f(1.5) and f(2) are different)
- 4th iteration:
- Midpoint: c4 = (1.5 + 2) / 2 = 1.75
- f(1.75) = (1.75)^5 - 4(1.75)^3 + 1.75 - 1 ≈ -4.27441
- New interval: [1.75, 2] (as the signs of f(1.75) and f(2) are different)
- 5th iteration:
- Midpoint: c5 = (1.75 + 2) / 2 = 1.875
- f(1.875) = (1.875)^5 - 4(1.875)^3 + 1.875 - 1 ≈ -2.31790
- New interval: [1.875, 2] (as the signs of f(1.875) and f(2) are different)
The process continues until the interval becomes sufficiently small or a set
number of iterations has been reached. In this case, after five iterations,
the approximate solution for this root lies within the interval [1.875, 2].
(The function also changes sign on [-2, -1] and [-1, 0], so bisection applied
to those brackets would locate the equation's other real roots.)
The bisection method is a fundamental and reliable technique, especially in
situations where other methods might not be applicable. While it might
converge more slowly than some other methods, its robustness and simplicity
make it a useful tool for root-finding in various real-world applications.

Exercise 2.5
Use the bisection method to find roots of the following equations. Perform
iterations until the solution converges, and provide the approximated root.
1. f(x) = x^3 - 2x - 5, [1, 3].
2. f(x) = cos(x) - x, [0, π/2].
3. f(x) = e^x - 3x^2, [-2, 2].
4. f(x) = x^4 - 8x^2 - 9, [-3, -1].
5. f(x) = 2x^3 - 4x + 1, [-2, 2]. Perform iterations until the solution
reaches a tolerance of 10^-4.

These exercise questions are designed to provide practice in applying the
bisection method to find roots of different equations within specified
intervals, aiming to reinforce the understanding of the iterative process and
the approximation of roots.
CHAPTER THREE
INTERPOLATION AND APPROXIMATION
Interpolation and approximation are two fundamental techniques in
mathematics and data analysis used to estimate unknown values between known
data points or to represent a complicated function with a simpler one,
respectively.

INTERPOLATION:
Interpolation involves estimating the value of a function between two known
data points. The primary objective is to construct a function that passes through
all the given data points. This is particularly useful when you have discrete data
and need to estimate values at points that lie between the known data points.
Key methods for interpolation include: Linear Interpolation, Finite Differences,
Newton's Divided Differences, Lagrange Interpolation.
Interpolation methods can accurately estimate intermediate values, assuming the
underlying function behaves consistently between data points. However, using
high-degree polynomials for interpolation can sometimes lead to oscillations or
numerical instability, a phenomenon known as Runge's phenomenon.

APPROXIMATION:
Approximation, on the other hand, aims to represent a complex function with a
simpler one while minimizing the error. This method is often used when dealing
with functions that are difficult to work with or computationally expensive.
Common techniques for approximation include: Polynomial Approximation,
Least Squares Approximation, Chebyshev Approximation.

Linear Interpolation
Linear interpolation is a method used to estimate unknown values between two
known data points. It assumes that there's a linear relationship between the
known data points. This technique is particularly useful when you have two data
points and want to estimate a value that lies between them.

The formula for linear interpolation is:

y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)

where (x1, y1) and (x2, y2) are the coordinates of the known data points, x is
the point between x1 and x2 where you want to estimate the value, and y is the
estimated value at x.

Steps for Linear Interpolation:

1. Identify the two data points:
You need at least two data points with known x and y values.
2. Determine the interval:
Identify the interval where the estimation is required. This interval should
lie between the known data points.

3. Apply the formula:

Plug the values of the known data points (x1, y1) and (x2, y2) into the linear
interpolation formula to find the estimated value y at a given x within the
interval.
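The three steps above reduce to a one-line function in code. A small Python sketch (the function name is our own choice):

```python
def lerp(x, x1, y1, x2, y2):
    """Estimate y at x on the straight line through (x1, y1) and (x2, y2)."""
    return y1 + (x - x1) / (x2 - x1) * (y2 - y1)

# The two known points (3, 12) and (8, 24), estimated at x = 5
print(round(lerp(5, 3, 12, 8, 24), 1))  # → 16.8
```

The same function can be called with any x inside the interval [x1, x2]; using it outside that interval amounts to the linear extrapolation discussed later.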

Example 3.1:
You have two data points: (3, 12) and (8, 24). Estimate the value at x = 5
using linear interpolation.
Solution:
Using the provided data points:
Given:
- (x1, y1) = (3, 12)
- (x2, y2) = (8, 24)
- x = 5

Apply the linear interpolation formula:

y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)

Substitute the values:

y = 12 + ((5 - 3) / (8 - 3)) * (24 - 12)
y = 12 + (2 / 5) * 12
y = 12 + 4.8 = 16.8
Therefore, the estimated value at x = 5 using linear interpolation is y = 16.8.

Example 3.2:
Given the points (-1, 4) and (5, 20), find the value at x = 2 using linear
interpolation.
Solution:
Given:
- (x1, y1) = (-1, 4)
- (x2, y2) = (5, 20)
- x = 2
Using the linear interpolation formula:

y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)

Substituting the values:

y = 4 + ((2 - (-1)) / (5 - (-1))) * (20 - 4)
y = 4 + (3 / 6) * 16
y = 4 + 8 = 12
Hence, the estimated value at x = 2 using linear interpolation is y = 12.
Extrapolation
Extrapolation is a method that extends estimation beyond the range of known
data points. It involves using the existing data to predict or estimate values
outside the given range. It assumes that the relationship between the variables
remains consistent beyond the observed data points.

The concept and approach of extrapolation are similar to interpolation, but
extrapolation extends estimations beyond the range of known data rather than
within it.

Example of Extrapolation:
Suppose you have data points for the temperature at various times:
- At 12:00 PM, the temperature is 20°C
- At 3:00 PM, the temperature is 25°C
Now, using extrapolation, you might attempt to estimate the temperature at 6:00
PM by assuming the same rate of change observed between 12:00 PM and 3:00
PM will continue.
Extrapolation Method:
1. Identify the known data points:
Determine the last known points within the data range.
2. Understand the trend or relationship:
Analyze the trend or relationship between the known data points. This could
be linear, quadratic, exponential, etc.
3. Apply the pattern:
Use this trend to predict or estimate values outside the given range.

Risks and Considerations:

- Extrapolation assumes that the trend observed within the known range will
continue. However, this might not always be the case.
- If the relationship between the variables changes outside the range of known
data, extrapolation can yield inaccurate results.
- It's important to exercise caution and consider the validity of the model or
pattern used for extrapolation.
Example of Extrapolation in a Formula:

Given the points (1, 3) and (3, 9), suppose you notice a linear relationship
y = mx + c. The slope of the line is
m = (change in y) / (change in x) = (9 - 3) / (3 - 1) = 3.
The intercept c can be calculated using one of the known points, say (1, 3),
as c = y - mx = 3 - 3(1) = 0. So the linear relationship is y = 3x. If you
want to extrapolate for x = 4, you would use y = 3(4) = 12. This is based on
the assumption that the linear relationship continues beyond the observed data
points.

Extrapolation can be a useful tool, but it's essential to be mindful of its
limitations and potential errors, especially when applying it to predict
values far beyond the known data range.

Example 3.3
Given the points (-2, 6) and (1, 9), determine an estimate for x = 4 using
linear extrapolation.
Solution:
Given data points:
- (x1, y1) = (-2, 6)
- (x2, y2) = (1, 9)
- x = 4

Identify the linear relationship:

The slope of the line can be calculated using the formula:
m = (change in y) / (change in x) = (y2 - y1) / (x2 - x1)
= (9 - 6) / (1 - (-2)) = 3 / 3 = 1.
The intercept c can be calculated using one of the known points and the slope,
for instance, using (1, 9):
c = y - mx = 9 - 1(1) = 8
Therefore, the linear relationship is y = x + 8.

Using this linear relationship, for x = 4:

y = x + 8 = 4 + 8 = 12

Hence, the estimated value at x = 4 using linear extrapolation is y = 12.

Example 3.4
You have two data points: (3, 10) and (7, 22). Estimate the value at x = 10
using linear extrapolation.
Solution:
Given data points:
(x1, y1) = (3, 10)
(x2, y2) = (7, 22)
x = 10
Calculate the slope of the line:
m = (change in y) / (change in x) = (y2 - y1) / (x2 - x1)
= (22 - 10) / (7 - 3) = 12 / 4 = 3
Calculate the intercept c using one of the known points, for instance, using
(7, 22):
c = y - mx = 22 - 3(7) = 22 - 21 = 1
Thus, the linear relationship is y = 3x + 1.

For x = 10:
y = 3x + 1 = 3(10) + 1 = 30 + 1 = 31
Therefore, the estimated value at x = 10 using linear extrapolation is y = 31.
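The slope-and-intercept computation used in Examples 3.3 and 3.4 can be captured in a small helper. A Python sketch (the function name is our own choice):

```python
def fit_line(p1, p2):
    """Return slope m and intercept c of the straight line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # change in y over change in x
    c = y1 - m * x1            # intercept recovered from one known point
    return m, c

# Example 3.4's data, extrapolated beyond the known range to x = 10
m, c = fit_line((3, 10), (7, 22))
print(m * 10 + c)  # → 31.0
```

The code makes no distinction between interpolation and extrapolation; the reliability of the result depends entirely on whether the linear trend really continues at the requested x.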
Exercise 3.1:
You are given two data points: (2, 8) and (5, 17).
1. Estimate the value at x = 3 using linear interpolation.
2. If there's a linear relationship, predict the value at x = 6.
Exercise 3.2:
Consider the data points: (-1, 3) and (2, 9).
1. Using linear interpolation, estimate the value at x = 0.
2. Assuming the same rate of change continues, extrapolate the value at x = 5.
Exercise 3.3:
Given the points: (1, 5) and (4, 11),
1. Estimate the value at x = 2 using linear interpolation.
2. Assuming a linear relationship, predict the value at x = 6 using extrapolation.
Exercise 3.4:
You have data points: (-3, 2) and (2, 12).
1. Using linear interpolation, estimate the value at x = 0.
2. If the relationship is quadratic, y = ax^2 + bx + c, attempt to extrapolate
the value at x = 4.

FINITE DIFFERENCES

Finite differences are a method used in mathematics, particularly in calculus and


numerical analysis, to approximate derivatives or evaluate functions. The basic
idea is to compute the difference between function values at different points.

There are various types of finite differences, including:

1. Forward Difference: It's calculated by taking the difference between
function values at two consecutive points in the forward direction.
For a function f(x), the forward difference is given by:
f′(x) ≈ [f(x + Δx) − f(x)] / Δx
where Δx is the step size.

2. Backward Difference: Similar to the forward difference, but it computes the
difference in the backward direction.
For a function f(x), the backward difference is given by:
f′(x) ≈ [f(x) − f(x − Δx)] / Δx
3. Central Difference: It's calculated using function values on both sides of the
point at which the derivative is being estimated.
For a function f(x), the central difference is given by:
f′(x) ≈ [f(x + Δx) − f(x − Δx)] / (2Δx)
Finite differences are not only limited to approximating derivatives. They can
also be used to solve differential equations numerically, interpolate between
data points, and perform numerical integration.
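As a rough illustration (our own sketch, not part of the text), the three difference quotients can be implemented directly:

```python
def forward_diff(f, x, dx):
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    return (f(x) - f(x - dx)) / dx

def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# For f(x) = x**2 the exact derivative at x = 1 is 2; the one-sided
# differences carry an O(dx) error, the central difference an O(dx**2) error.
f = lambda x: x**2
print(forward_diff(f, 1.0, 0.1))    # ≈ 2.1
print(backward_diff(f, 1.0, 0.1))   # ≈ 1.9
print(central_diff(f, 1.0, 0.1))    # ≈ 2.0
```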

These methods are extensively used in numerical analysis and are fundamental
in many algorithms used for solving differential equations or analyzing data in
various fields, including physics, engineering, finance, and computer science.

First differences, second differences, and so on are a way to analyze the


differences between consecutive terms in a sequence. These differences can
reveal patterns or characteristics of the sequence, and they are commonly used
in mathematics, statistics, and data analysis. Let's define first and second
differences:

1. First Differences: To calculate the first differences of a sequence, subtract


each term from the next term. If you have a sequence y₁, y₂, y₃, …, y_n, the first
differences are given by:
Δy_i = y_{i+1} − y_i
In other words, it's the difference between each term and the one immediately
following it. The first differences provide information about the rate of change
in the sequence.

2. Second Differences: To calculate the second differences, repeat the process


above but with the first differences you've just calculated. That is, find the
differences between consecutive first differences. If you have first differences
Δy_i, the second differences are given by:
Δ²y_i = Δ(Δy_i) = Δy_{i+1} − Δy_i = y_{i+2} − 2y_{i+1} + y_i
The second differences provide information about the acceleration or curvature
of the sequence.
3. Third Differences: Continuing the process of differences, the third
differences involve taking the differences between consecutive second
differences. If you have second differences Δ²y_i, the third differences are
calculated as:
Δ³y_i = Δ(Δ²y_i) = y_{i+3} − 3y_{i+2} + 3y_{i+1} − y_i
The third differences provide information about the rate of change in the
second differences.
Now, let's establish the general formula for the nth order differences.
For a sequence y_i, where i = 1, 2, 3, …, n:
The first differences Δy_i are calculated as y_{i+1} − y_i.
The second differences Δ²y_i are calculated as Δ(Δy_i) = y_{i+2} − 2y_{i+1} + y_i.
The third differences Δ³y_i are calculated as Δ(Δ²y_i) = y_{i+3} − 3y_{i+2} + 3y_{i+1} − y_i.
The nth differences Δⁿy_i can be expressed using the binomial theorem and the
finite differences. In general,
Δⁿy_i = Δⁿ⁻¹y_{i+1} − Δⁿ⁻¹y_i
defines the nth order differences.
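The repeated application of Δ described above can be sketched as follows (function names are our own):

```python
def differences(ys):
    """One application of the forward difference operator: dy_i = y_{i+1} - y_i."""
    return [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]

def nth_differences(ys, n):
    """Apply the difference operator n times to obtain the nth-order differences."""
    for _ in range(n):
        ys = differences(ys)
    return ys

ys = [1, 8, 27, 64, 125]            # y_k = k**3 for k = 1..5
print(nth_differences(ys, 1))       # [7, 19, 37, 61]
print(nth_differences(ys, 3))       # [6, 6] -- constant, as expected for a cubic
print(nth_differences(ys, 4))       # [0]
```

The constant third differences and vanishing fourth differences illustrate the behaviour of polynomial sequences noted in the text.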

Difference table
The difference table is the standard format for displaying the finite difference.
Its diagonal pattern makes each entry, except for those in the x_i and y_i columns, the difference of its
two nearest neighbours to the left. Each difference proves to be a combination of
the y values in column two.

x_i    y_i    Δy_i    Δ²y_i    Δ³y_i    Δ⁴y_i

x0     y0
              Δy0
x1     y1             Δ²y0
              Δy1              Δ³y0
x2     y2             Δ²y1              Δ⁴y0
              Δy2              Δ³y1
x3     y3             Δ²y2
              Δy3
x4     y4
The general result is
Δⁿy_i = Σ_{k=0}^{n} (−1)^k C(n, k) y_{i+n−k}
Here, C(n, k) represents the binomial coefficient "n choose k", and y_{i+n−k}
denotes the (n − k)th term beyond y_i. This general formula allows you to calculate
the nth order differences for a given sequence.

These higher-order differences can provide insights into more complex


relationships and patterns within sequences, helping in trend analysis, curve
fitting, and understanding the behavior of data over multiple levels of changes.

DIFFERENCE FORMULAS
The difference formulas for elementary functions often parallel those in
calculus.
i. The differences of a constant function are zero.
In symbols, if f(x) = C where C is a constant, then the difference is given as:
ΔC = f(x + Δx) − f(x) = C − C = 0
This indicates that the difference between any two points in a constant function
is always zero, emphasizing that a constant function remains unchanged when
shifted by any amount. This property is applicable in calculus and various
mathematical fields where understanding changes between function values is
crucial.

ii. Constant Times a Function


The relationship for a constant times a function in the context of finite
differences is:
Δ(c·u_i) = c·Δu_i
Here, c is a constant, and u_i is a function of x. The finite difference of the
product of a constant and a function is equal to the constant multiplied by the
finite difference of the function.
This relationship is a consequence of the linearity property of finite differences.


iii. The difference of the sum of two functions
The difference of the sum of two functions is indeed equal to the sum of their
differences. In mathematical notation, this property is expressed as:
Δ(u_i + v_i) = Δu_i + Δv_i
Here, u_i and v_i are functions of some variable (e.g., x), and Δ represents the

finite difference operator. This property is a consequence of the linearity of


finite differences. It is a fundamental rule and is analogous to the corresponding
property in calculus, where the derivative of a sum of functions is equal to the
sum of their derivatives.
iv. The linearity property
The linearity property generalizes the two previous results. The generalization is
that for any constants c₁ and c₂ and functions u_i and v_i, the linearity property of
finite differences states:
Δ(c₁ u_i + c₂ v_i) = c₁ Δu_i + c₂ Δv_i

This property allows you to apply finite differences to each term in a linear
combination of functions separately, and the result is the same as taking the
linear combination of the finite differences of individual functions. It is
analogous to the linearity property of derivatives in calculus.
v. The difference of a product
The difference of a product of two functions can be expressed using the product
rule for finite differences. If you have two functions u(x) and v(x), the finite
difference of their product u_i·v_i is given by:
Δ(u_i v_i) = u(x + Δx)·v(x + Δx) − u(x)·v(x)
Expanding this expression, you get:
Δ(u_i v_i) = u(x)·Δv_i + v(x)·Δu_i + Δu_i·Δv_i
The last term, Δu_i·Δv_i, is often neglected when the focus is on small
increments (Δx going to zero), and it is considered of higher order. If the finite
differences are small, this term becomes negligible compared to the other terms.
So, in a more simplified form:
Δ(u_i v_i) ≈ u(x)·Δv_i + v(x)·Δu_i

This expression is similar to the product rule for derivatives in calculus, where
the derivative of a product of two functions is the product of the first function
and the derivative of the second, plus the product of the second function and the
derivative of the first.

vi. The difference of a quotient
The difference of a quotient of two functions can be expressed using the
quotient rule for finite differences. If you have two functions u(x) and v(x)
such that v(x) is not equal to zero, the finite difference of their quotient u_i / v_i is
given by:
Δ(u_i / v_i) = [u(x + Δx)·v(x) − u(x)·v(x + Δx)] / [v(x + Δx)·v(x)]

This expression involves the products and differences of the functions and their
finite differences. Similar to the product rule, it becomes more involved than the
sum or product rule. In practice, the finite difference of a quotient is less
common than the sum or product rules due to its complexity.
The key is to be cautious of the denominator v(x) , making sure it is not equal to
zero to avoid division by zero issues. If v(x) is equal to zero at any point, the
finite difference for the quotient is not well-defined.
The finite difference quotient rule is analogous to the quotient rule for
derivatives in calculus, where the derivative of a quotient is given by a formula
involving the numerator, denominator, and their derivatives.
vii. The difference of the power function
c  c (c  1)
k k

The special case of c=2 brings y  y .


k k

viii. The difference of sine and cosine function


The finite differences of sine and cosine functions follow a pattern that can be
derived based on the periodic nature of these functions. The key relationships
are:
Sine Function:
Δ(sin x) = sin(x + Δx) − sin(x) = 2 cos(x + Δx/2) sin(Δx/2)
Cosine Function:
Δ(cos x) = cos(x + Δx) − cos(x) = −2 sin(x + Δx/2) sin(Δx/2)
These expressions involve the trigonometric functions sine and cosine and
include the angle sum and difference identities.
These formulas provide a way to compute the finite differences for sine and
cosine functions. Note that for small values of x , these expressions simplify
further. Also, since sine and cosine functions are periodic, the finite differences
will exhibit periodic behavior as well.
ix. The difference of the logarithmic function
The finite difference of the logarithmic function, Δ(log_a x), can be expressed
using the properties of logarithms. Let's consider the natural logarithm ln(x);
the base a will be assumed to be greater than 1. The formula is as follows:

 x  x 
(ln( x))  ln( x  x)  ln( x)  ln  
 x 
This expression arises from the fact that the logarithm of a quotient is equal to
the difference of the logarithms:

 x  x 
ln    ln( x  x)  ln( x)
 x 
The finite difference of the logarithmic function represents the logarithm of the
ratio of the new value x  x to the old value x .
It's worth noting that this formula holds as long as x and x  x are both
positive, as the natural logarithm is undefined for non-positive arguments.

If you have a different base for the logarithm (e.g., log₁₀(x) or log₂(x)), you can
adapt the formula accordingly using the logarithmic properties.
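A quick numerical spot-check of the sine and logarithm difference formulas above (our own sketch; the sample point and step are arbitrary choices):

```python
import math

x, dx = 0.7, 0.1          # arbitrary sample point and step size

# Direct subtraction vs. the product form of the sine difference.
lhs_sin = math.sin(x + dx) - math.sin(x)
rhs_sin = 2 * math.cos(x + dx / 2) * math.sin(dx / 2)
print(lhs_sin, rhs_sin)   # agree to rounding error

# Direct subtraction vs. the log-of-ratio form.
lhs_ln = math.log(x + dx) - math.log(x)
rhs_ln = math.log((x + dx) / x)
print(lhs_ln, rhs_ln)     # agree to rounding error
```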


Example 3.5
Prove that for a constant function all differences are zero
Solution
Let y  c for all k . This is a constant function, Then, for all k ,
k

y  y  y  c  c
k k 1 k

Example 3.6
Prove that (cy )  cy
k k

Solution
This is analogous to a result of calculus
(cy )  cy  cy  cy
k k 1 k k

Essesntially, this problem involves two function defined for same argument x . i

One function has the values y , the other has values


k

46
y  cy k k

Example 3.7
Consider two functions defined for the same set of arguments x_i. Call the values
of these functions u_i and v_i. Also consider a third function with values
w_i = c₁ u_i + c₂ v_i
where c₁ and c₂ are two constants (independent of x_i). Prove that
Δw_i = c₁ Δu_i + c₂ Δv_i
Solution
This is the linearity property of the difference operation.
The proof is direct from the definitions:
Δw_i = w_{i+1} − w_i
= (c₁ u_{i+1} + c₂ v_{i+1}) − (c₁ u_i + c₂ v_i)
= c₁ (u_{i+1} − u_i) + c₂ (v_{i+1} − v_i)
= c₁ Δu_i + c₂ Δv_i
Clearly the same proof would apply to sums of any finite length.
Example 3.8
With the same symbolism as in Example 3.7, consider the function with values
z_i = u_i v_i. Prove that
Δz_i = u_i Δv_i + v_{i+1} Δu_i
Solution
Again starting from the definition:
Δz_i = u_{i+1} v_{i+1} − u_i v_i
= u_{i+1} v_{i+1} − u_i v_{i+1} + u_i v_{i+1} − u_i v_i
= v_{i+1}(u_{i+1} − u_i) + u_i(v_{i+1} − v_i)
= u_i Δv_i + v_{i+1} Δu_i
The result Δz_i = u_{i+1} Δv_i + v_i Δu_i could also be proved.

Example 3.9
Verify that Δ³y₀ = y₃ − 3y₂ + 3y₁ − y₀
Solution
The expression is the third-order forward difference of the sequence y_i, obtained
by differencing consecutive second-order differences. Let's break it down step by step.
The second-order forward difference is given by:
Δ²y_i = Δ(Δy_i) = (y_{i+2} − y_{i+1}) − (y_{i+1} − y_i) = y_{i+2} − 2y_{i+1} + y_i
Now, the third-order forward difference can be found from the second-order
differences:
Δ³y_i = Δ(Δ²y_i) = Δ²y_{i+1} − Δ²y_i
Expanding this further:
Δ³y_i = (y_{i+3} − 2y_{i+2} + y_{i+1}) − (y_{i+2} − 2y_{i+1} + y_i)
Now combine like terms:
Δ³y_i = y_{i+3} − 3y_{i+2} + 3y_{i+1} − y_i
Setting i = 0 gives Δ³y₀ = y₃ − 3y₂ + 3y₁ − y₀. Therefore, the expression holds true for
the third-order forward difference of the sequence y_i.

Example 3.10
Compute the complete table of differences for the function
P(x) = x³ + 2x² + 3x + 4 for x = −2, −1, 0, 1, 2, 3.
Solution
The values of the polynomial at these points are −2, 2, 4, 10, 26, 58, and so the
table of differences is:

x     P(x)    ΔP(x)    Δ²P(x)    Δ³P(x)    Δ⁴P(x)
-2     -2
-1      2       4
 0      4       2        -2
 1     10       6         4         6
 2     26      16        10         6         0
 3     58      32        16         6         0
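As an illustrative check (our own sketch), the columns of this table can be regenerated programmatically:

```python
def diff_column(col):
    """First differences of one column of the table."""
    return [col[i + 1] - col[i] for i in range(len(col) - 1)]

P = lambda x: x**3 + 2 * x**2 + 3 * x + 4
xs = [-2, -1, 0, 1, 2, 3]
col = [P(x) for x in xs]
print(col)                          # [-2, 2, 4, 10, 26, 58]
while len(col) > 1:                 # print each successive difference column
    col = diff_column(col)
    print(col)                      # ends with [6, 6, 6], [0, 0], [0]
```

The constant third column and zero fourth column match the table, as expected for a cubic.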

Exercise 3.1:

Given the function g(x) = 2x² + 3x + 5, compute the complete table of finite
differences for the range x ∈ [1, 5]. Show the step-by-step calculations for the
first and second finite differences. Based on the finite differences, can you
determine the degree of the polynomial function?
Exercise 3.2:
Consider a sequence of numbers: y₁ = 3, y₂ = 8, y₃ = 15, y₄ = 24, …. Compute
the first and second finite differences for this sequence. Based on your
observations, propose a formula or expression for y_n in terms of n. Explain how
the finite differences provide insights into the nature of the sequence.
Exercise 3.3:
Compute Δ(ln x) for x = 4 and Δx = 0.1.
Exercise 3.4:
Determine Δ(sin x) for x = π/4 and Δx = 0.2.
Exercise 3.5:
Calculate Δ(cos x) for x = π/3 and Δx = 0.1.
Exercise 3.6:
Find Δ²(x³) for x = 2 and Δx = 0.5.
Exercise 3.7:
If u(x) = 2x + 1 and v(x) = sin(x), calculate Δ(u·v) for x = π/6 and Δx = 0.1.
Exercise 3.8:
Given f(x) = ln(x) and g(x) = x² + 1, determine Δ(f/g) for x = 3 and Δx = 0.2.

UNEQUALLY SPACED ARGUMENTS

Newton's divided differences


Newton's divided differences are a method used for interpolating or fitting a
polynomial to a set of data points. The goal is to find a polynomial that passes
through given data points. This method was developed by Sir Isaac Newton and

is particularly useful in numerical analysis for constructing interpolating
polynomials.
Divided differences are a way of expressing the coefficients of interpolating
polynomials. Given a set of data points (x₀, y₀), (x₁, y₁), …, (x_n, y_n), where x_i
and y_i are the coordinates of the data points, the divided differences are defined
recursively as follows:

Zeroth Divided Difference:
f[x_i] = y_i
First Divided Differences:
f[x_i, x_{i+1}] = (f[x_{i+1}] − f[x_i]) / (x_{i+1} − x_i)
Second Divided Differences:
f[x_i, x_{i+1}, x_{i+2}] = (f[x_{i+1}, x_{i+2}] − f[x_i, x_{i+1}]) / (x_{i+2} − x_i)
And so on...
The nth divided difference is given by:
f[x_0, x_1, …, x_n] = (f[x_1, x_2, …, x_n] − f[x_0, x_1, …, x_{n−1}]) / (x_n − x_0)

In many ways these differences play a role equivalent to that of the simpler
differences used earlier.
A difference table is again a convenient device for displaying differences, the
standard diagonal form being used.

x0   f0
          f(x0, x1)
x1   f1              f(x0, x1, x2)
          f(x1, x2)               f(x0, x1, x2, x3)
x2   f2              f(x1, x2, x3)                f(x0, x1, x2, x3, x4)
          f(x2, x3)               f(x1, x2, x3, x4)
x3   f3              f(x2, x3, x4)
          f(x3, x4)
x4   f4

These concepts are foundational in understanding and implementing


interpolation methods, particularly Newton's divided difference interpolation.
They provide a systematic way to represent and calculate coefficients of
interpolating polynomials based on given data points.

The representation theorem shows how each divided difference may be
represented as a combination of the y_k values:
f(x_0, x_1, …, x_n) = Σ_{i=0}^{n} f_i / Π_{j≠i}(x_i − x_j)

The symmetry property of divided differences is a characteristic feature of


these differences and is an important property in the context of interpolation.
The symmetry property states that the divided difference of a set of points is the
same regardless of the order in which the points are considered. The difference
is invariant under all permutations of the arguments x_k, provided the f_k are
permuted in the same way. This very useful result is an easy consequence of
the representation theorem.

Divided differences and derivatives are related by
f(x_0, x_1, …, x_n) = f⁽ⁿ⁾(ξ) / n!
for some ξ between the smallest and largest of the arguments.

In the case of equally spaced arguments, divided differences reduce to ordinary
finite differences; specifically,
f(x_0, x_1, …, x_n) = Δⁿf_0 / (n! hⁿ)
A useful property of ordinary finite differences may be obtained as follows:
Δⁿf_0 = f⁽ⁿ⁾(ξ) hⁿ
For a function f(x) with bounded derivatives, all f⁽ⁿ⁾(x) having a bound
independent of n, it follows that, for small h,
lim Δⁿf_0 = 0
for increasing n. This generalizes the result found earlier for polynomials and
explains why the higher differences in a table are often found to tend towards
zero.

The collocation polynomial may now be obtained in terms of divided


differences. The classical result is Newton's divided difference formula:
P(x) = f[x_0] + (x − x_0) f[x_0, x_1] + (x − x_0)(x − x_1) f[x_0, x_1, x_2] + ⋯
+ (x − x_0)(x − x_1)⋯(x − x_{n−1}) f[x_0, x_1, …, x_n]
This polynomial will pass through all the given data points. Newton's divided
differences method is often used for numerical interpolation because it provides
a straightforward way to compute the coefficients of the interpolating
polynomial.
The error f(x) − p(x), where f(x) and p(x) collocate at the arguments
x_0, x_1, …, x_n, is still given by the formula obtained earlier:
f(x) − p(x) = [f⁽ⁿ⁺¹⁾(ξ) / (n + 1)!] (x − x_0)(x − x_1)⋯(x − x_n)
An alternative form of this error using divided differences is
f(x) − P(x) = f(x, x_0, …, x_n)(x − x_0)⋯(x − x_n)

Example 3.11
For the function f(x) = 2x² + 3x + 5, calculate the first and second divided
differences for x = 1, 2, 3. How does the symmetry property of divided
differences manifest in this case?
Solution
For f(x) = 2x² + 3x + 5, calculate f(1), f(2), f(3).
The first divided differences are calculated as
f[1, 2] = (f[2] − f[1])/(2 − 1),   f[2, 3] = (f[3] − f[2])/(3 − 2),
and the second divided difference as
f[1, 2, 3] = (f[2, 3] − f[1, 2])/(3 − 1).
Observe the symmetry: f[1, 2, 3] = f[3, 2, 1] due to the symmetry property.
The symmetry property ensures that changing the order of the points doesn't
impact the results, facilitating consistent calculations.
Example 3.12
Construct a difference table for the function g(x) = 3x² + 2x + 1 using the
points x = 0, 1, 2, 3, 4. Compute at least three finite differences. What insights
can be gained from the difference table?
Solution
1. Create a difference table for g(x) = 3x² + 2x + 1 using the points
x = 0, 1, 2, 3, 4.
2. Fill in values for g(x), first differences, second differences, and so on.
3. Analyze the patterns in the table to infer information about the nature of the
function and its derivatives.
Example 3.13
Compute the divided differences through the third order for the f_k values in the table
below.

x_k:  0   1   2   4
f_k:  1   1   2   5

Solution
The divided difference table is:

x_k   f_k
 0     1
              0
 1     1              1/2
              1              −1/12
 2     2              1/6
             3/2
 4     5

f(2, 4) = (5 − 2)/(4 − 2) = 3/2,   f(0, 1, 2) = (1 − 0)/(2 − 0) = 1/2,
f(1, 2, 4) = (3/2 − 1)/(4 − 1) = 1/6,   f(0, 1, 2, 4) = (1/6 − 1/2)/(4 − 0) = −1/12
Substituting into Newton's divided difference formula,
P(x) = f[x_0] + (x − x_0) f[x_0, x_1] + (x − x_0)(x − x_1) f[x_0, x_1, x_2]
+ (x − x_0)(x − x_1)(x − x_2) f[x_0, x_1, x_2, x_3]
we have
P(x) = (1/12)(−x³ + 9x² − 8x + 12)
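The divided-difference coefficients and the Newton form from Example 3.13 can be computed with a short sketch like the following (the in-place, column-by-column update is one standard way to organize the recursion; the function names are our own):

```python
def divided_differences(xs, fs):
    """Coefficients f[x0], f[x0,x1], ..., f[x0,...,xn], built column by column."""
    coef = [float(f) for f in fs]
    n = len(xs)
    for j in range(1, n):                      # order of the difference
        for i in range(n - 1, j - 1, -1):      # update in place, bottom-up
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate Newton's divided difference formula in nested (Horner-like) form."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, fs = [0, 1, 2, 4], [1, 1, 2, 5]            # data of Example 3.13
coef = divided_differences(xs, fs)
print(coef)                                    # last entry is f(0,1,2,4) = -1/12
print([newton_eval(xs, coef, x) for x in xs])  # reproduces the f_k values
```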
Exercise 3.9
Elaborate on the significance of the symmetry property of divided differences.
How does this property impact the process of constructing interpolating
polynomials, and why is it advantageous?
Exercise 3.10
Given the function p(x) = 2x³ + 3x² + 5x + 7, calculate the first and second
divided differences for the points x = 1, 2, 3, 4. Show the step-by-step
computations and interpret the results.

Exercise 3.11
For a set of data points (1, 4), (2, 9), (3, 16), (4, 25) , compute the first, second,
and third divided differences. Use these differences to construct the
interpolating polynomial and evaluate it at x = 2.5.

Exercise 3.12
Consider the function q(x) = 4x² + 2x + 9. Compute the second divided
differences for the points x = −1, 0, 1, 2, 3. Determine the degree of the
interpolating polynomial based on the pattern in the divided differences.

LAGRANGE INTERPOLATION
Lagrange interpolation is a method for constructing a polynomial that passes
through a given set of points. Given a set of distinct points
(x_0, y_0), (x_1, y_1), …, (x_n, y_n), the Lagrange interpolating polynomial P(x) of
degree at most n is defined as:
P(x) = L_0(x) y_0 + L_1(x) y_1 + ⋯ + L_n(x) y_n
where each L_i(x) is a Lagrange basis polynomial, given by:
L_i(x) = Π_{j=0, j≠i}^{n} (x − x_j) / (x_i − x_j)
The Lagrange interpolating polynomial P(x) satisfies P(x_i) = y_i for
i = 0, 1, …, n, meaning it passes through all the given points.

The steps for Lagrange interpolation are as follows:

1. Compute Lagrange basis polynomials:
Calculate each L_i(x) based on the formula above.
2. Construct the interpolating polynomial:
Form the interpolating polynomial P(x) by summing up the products L_i(x)·y_i for
each i.
Lagrange interpolation is a useful method for approximating a function when
the exact functional form is unknown, but a set of data points is available. It is
widely used in numerical analysis and computer science.
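A minimal sketch of the method (our own illustration, not from the text):

```python
def lagrange_eval(points, x):
    """Evaluate P(x) = sum_i L_i(x) * y_i for the given (x_i, y_i) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        Li = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # one factor of the basis product
        total += Li * yi
    return total

pts = [(1, 2), (2, 5), (4, 10)]                 # data of Example 3.15
print([lagrange_eval(pts, x) for x, _ in pts])  # reproduces [2, 5, 10]
print(lagrange_eval(pts, 3.0))                  # value between the nodes
```

Evaluating the basis polynomials directly like this avoids ever expanding P(x) into coefficient form.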

Here are the steps to compute L_i(x):
1. Set up the product:
For each i, set up the product in the Lagrange basis polynomial formula.
2. Compute the individual factors:
Compute the value of each factor in the product.
3. Combine the factors:
Multiply all the individual factors together to obtain L_i(x).

Example 3.14
Suppose we have three points: (1, 2), (3, 4), (5, 6). Compute L_1(x), the basis
polynomial for the node x = 1.
Solution
L_1(x) = [(x − 3)/(1 − 3)] · [(x − 5)/(1 − 5)]
Now, let's compute each factor:
Factor 1: (x − 3)/(1 − 3) = −(x − 3)/2
Factor 2: (x − 5)/(1 − 5) = −(x − 5)/4
Now, combine the factors:
L_1(x) = (x − 3)(x − 5)/8
You can simplify this expression further if needed. Repeat this process for each
Lagrange basis polynomial L_i(x) corresponding to different i.

Example 3.15
Given the points (1, 2), (2, 5), (4, 10), find the Lagrange interpolating
polynomial P(x) that passes through these points.
Solution
1. Set up the Lagrange interpolating polynomial:
P(x) = L_0(x)·2 + L_1(x)·5 + L_2(x)·10
2. Compute each Lagrange basis polynomial:
L_0(x) = (x − 2)(x − 4) / [(1 − 2)(1 − 4)] = (x − 2)(x − 4)/3
L_1(x) = (x − 1)(x − 4) / [(2 − 1)(2 − 4)] = −(x − 1)(x − 4)/2
L_2(x) = (x − 1)(x − 2) / [(4 − 1)(4 − 2)] = (x − 1)(x − 2)/6
3. Substitute these into the interpolating polynomial P(x) and simplify:
P(x) = (2/3)(x − 2)(x − 4) − (5/2)(x − 1)(x − 4) + (5/3)(x − 1)(x − 2)
= (2/3)(x² − 6x + 8) − (5/2)(x² − 5x + 4) + (5/3)(x² − 3x + 2)
= −(1/6)x² + (7/2)x − 4/3
As a check, P(1) = 2, P(2) = 5, P(4) = 10.
Example 3.16
Use Lagrange interpolation to find a polynomial Q(x) passing through the points
(−1, 3), (0, 1), (1, −1), (2, 5).
Solution
1. Set up the Lagrange interpolating polynomial:
Q(x) = L_0(x)·3 + L_1(x)·1 + L_2(x)·(−1) + L_3(x)·5
2. Compute each Lagrange basis polynomial:
L_0(x) = (x − 0)(x − 1)(x − 2) / [(−1 − 0)(−1 − 1)(−1 − 2)] = −x(x − 1)(x − 2)/6
L_1(x) = (x + 1)(x − 1)(x − 2) / [(0 + 1)(0 − 1)(0 − 2)] = (x + 1)(x − 1)(x − 2)/2
L_2(x) = (x + 1)(x − 0)(x − 2) / [(1 + 1)(1 − 0)(1 − 2)] = −(x + 1)x(x − 2)/2
L_3(x) = (x + 1)(x − 0)(x − 1) / [(2 + 1)(2 − 0)(2 − 1)] = (x + 1)x(x − 1)/6
3. Substitute these into Q(x) and simplify:
Q(x) = −(1/2)x(x − 1)(x − 2) + (1/2)(x + 1)(x − 1)(x − 2) + (1/2)(x + 1)x(x − 2) + (5/6)(x + 1)x(x − 1)
= (4/3)x³ − (10/3)x + 1
As a check, Q(−1) = 3, Q(0) = 1, Q(1) = −1, Q(2) = 5.
Example 3.17
Find the polynomial of degree three which takes the values below, as in
Example 3.13.

x_k:  0   1   2   4
f_k:  1   1   2   5

Solution
The polynomial can be written directly as follows:
P(x) = [(x − 1)(x − 2)(x − 4) / ((0 − 1)(0 − 2)(0 − 4))]·1 + [x(x − 2)(x − 4) / (1·(1 − 2)(1 − 4))]·1
+ [x(x − 1)(x − 4) / (2·(2 − 1)(2 − 4))]·2 + [x(x − 1)(x − 2) / (4·(4 − 1)(4 − 2))]·5
This can be rearranged into P(x) = (1/12)(−x³ + 9x² − 8x + 12).
Exercise 3.13
Find the Lagrange interpolating polynomial for the points (0, 1), (2, 3), (4, 5),
(6, 7).
These examples illustrate the process of using Lagrange interpolation to find
polynomials that pass through given sets of points.

POLYNOMIAL APPROXIMATION
Polynomial approximation is a mathematical technique used to represent a
given function by a polynomial of a certain degree. This is often done to
simplify calculations or to make it easier to work with the function. The goal is
to find a polynomial that closely matches the behavior of the original function
within a specific range or over the entire domain.

There are various methods for polynomial approximation, and one common
approach is the Taylor series expansion. The Taylor series represents a function
as an infinite sum of terms, each derived from the function's derivatives at a
particular point (usually the center of approximation). However, in practice,
only a finite number of terms are used to approximate the function.

For a function f(x), its nth-degree Taylor polynomial P_n(x) centered at a is
given by:
P_n(x) = f(a) + f′(a)(x − a) + [f″(a)/2!](x − a)² + [f‴(a)/3!](x − a)³ + ⋯ + [f⁽ⁿ⁾(a)/n!](x − a)ⁿ
This polynomial approximates the function f(x) near the point a up to the nth
degree. The higher the degree of the polynomial, the better the approximation,
but it may also lead to more complex calculations.
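For f(x) = eˣ every derivative at a equals eᵃ, so the Taylor polynomial above reduces to a simple sum. A small sketch assuming that special case (the function name is our own):

```python
import math

def taylor_exp(x, n, a=0.0):
    """P_n(x) for f(x) = exp(x) about a; every derivative of exp at a is exp(a)."""
    return sum(math.exp(a) * (x - a)**k / math.factorial(k) for k in range(n + 1))

# The approximation of e = exp(1) improves as the degree n grows.
for n in (2, 4, 8):
    print(n, taylor_exp(1.0, n), math.exp(1.0))
```

Printing successive degrees makes the trade-off in the paragraph above concrete: each extra term shrinks the error but lengthens the computation.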

Another common method is the least squares polynomial approximation. In this


approach, a polynomial of a certain degree is chosen to minimize the sum of the
squared differences between the actual function values and the values predicted
by the polynomial.

It's important to note that while polynomial approximations can be useful, they
may not always provide accurate representations of functions, especially for
functions with complex behavior or sharp changes. In some cases, other
methods, such as piecewise polynomial interpolation or spline interpolation,
may be more appropriate.

Example 3.18
Given the function f(x) = eˣ, find the quadratic Taylor polynomial P_2(x)
centered at a = 0.
Solution:
The Taylor polynomial P_2(x) is given by:
P_2(x) = f(0) + f′(0)x + [f″(0)/2!]x²
1. Find f(0): f(0) = e⁰ = 1
2. Find f′(x): f′(x) = eˣ, so f′(0) = 1
3. Find f″(x): f″(x) = eˣ, so f″(0) = 1
Now substitute these into the formula:
P_2(x) = 1 + x + x²/2
Example 3.19
Approximate the square root function g(x) = √x by a linear Taylor polynomial
P_1(x) centered at a = 4.
Solution:
The Taylor polynomial P_1(x) is given by:
P_1(x) = g(4) + g′(4)(x − 4)
1. Find g(4): g(4) = √4 = 2
2. Find g′(x): g′(x) = 1/(2√x), so g′(4) = 1/4
Now substitute these into the formula:
P_1(x) = 2 + (1/4)(x − 4)
So, the linear Taylor polynomial is P_1(x) = 2 + (x − 4)/4.
Example 3.20

Compute the third Taylor polynomial about x = 0 for f(x) = (1 + x)^(1/2).
Solution
f(x) = (1 + x)^(1/2),  so f(0) = 1
f′(x) = (1/2)(1 + x)^(−1/2),  so f′(0) = 1/2
f″(x) = −(1/4)(1 + x)^(−3/2),  so f″(0) = −1/4
f‴(x) = (3/8)(1 + x)^(−5/2),  so f‴(0) = 3/8
f⁗(x) = −(15/16)(1 + x)^(−7/2),  so f⁗(ξ) = −(15/16)(1 + ξ)^(−7/2)
where ξ ∈ (0, x) appears in the remainder term.
Therefore,
P_3(x) = f(0) + f′(0)x + [f″(0)/2!]x² + [f‴(0)/3!]x³
= 1 + (1/2)x − (1/4)(x²/2!) + (3/8)(x³/3!)
Therefore,
P_3(x) = 1 + x/2 − x²/8 + x³/16
Exercise 3.14
Given the function f(x) = eˣ, find the quadratic Taylor polynomial P_2(x)
centered at a = 0.
Exercise 3.15
Approximate the square root function g(x) = √x by a linear Taylor polynomial
P_1(x) centered at a = 4.
CHAPTER FOUR
Numerical Differentiation and Integration
Numerical differentiation techniques
Numerical differentiation is an approach to estimating the derivative of a function
at a given point using discrete data points. The most commonly used technique is
the finite difference method, of which the forward, backward, and central
differences are the three common variants.

1. Forward Difference:
f′(x) ≈ [f(x + h) − f(x)] / h

2. Backward Difference:
f′(x) ≈ [f(x) − f(x − h)] / h

3. Central Difference:
f′(x) ≈ [f(x + h) − f(x − h)] / (2h)
Here, h is the step size, and smaller values of h generally lead to more accurate
approximations.

Algorithm 4.1
A general algorithm for numerical differentiation, specifically using the central
difference method, can be outlined as follows:
Numerical Differentiation Algorithm (Central Difference Method):
1. Input:
- f(x): the function to be differentiated.
- x0: the point at which the derivative is to be estimated.
- h: step size for the central difference method.
2. Initialization:
- Calculate the forward point (x+) and backward point (x−) using the step
size: x+ = x0 + h and x− = x0 − h.
3. Evaluation:
- Evaluate the function at the central point x0 and at the forward and backward
points: f0 = f(x0), f+ = f(x+), f− = f(x−).
4. Central Difference Method:
- Use the central difference formula to estimate the derivative:
f′(x0) ≈ (f+ − f−) / (2h)
5. Output:
- The result f′(x0) is the numerical approximation of the derivative at the point
x0.
This algorithm can be applied to estimate the derivative of a given function at a
specified point using the central difference method. Users can adapt the
algorithm based on specific programming languages and requirements.
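Algorithm 4.1 can be sketched in Python as follows (a minimal illustration; the function name is our own):

```python
def central_difference(f, x0, h):
    """Algorithm 4.1: estimate f'(x0) from the forward and backward points."""
    f_plus = f(x0 + h)      # x+ = x0 + h
    f_minus = f(x0 - h)     # x- = x0 - h
    return (f_plus - f_minus) / (2 * h)

# A quadratic test function (also used in Example 4.1 below): exact f'(1) = 5.
f = lambda x: x**2 + 3 * x + 2
print(central_difference(f, 1.0, 0.1))   # ≈ 5.0
```

Because the central difference error is O(h²) and a quadratic has zero third derivative, the estimate here matches the exact derivative up to rounding.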

Numerical Integration:
Numerical integration is the process of estimating the definite integral of a
function over a given interval. Different methods exist, such as the trapezoidal
rule, Simpson's rule, and numerical quadrature methods.

1. Trapezoidal Rule:
∫_a^b f(x) dx ≈ (h/2)[f(a) + 2f(x_1) + 2f(x_2) + ⋯ + 2f(x_{n−1}) + f(b)]
where h = (b − a)/n is the step size and the x_i are the nodes within the interval.

2. Simpson's Rule:

∫_a^b f(x) dx ≈ (h/3)[f(a) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ⋯ + 2f(x_{n−2}) + 4f(x_{n−1}) + f(b)]
Simpson's rule generally provides more accurate results than the trapezoidal rule
for smooth functions.

3. Numerical Quadrature:
Numerical quadrature methods use various mathematical techniques to
approximate the integral. These methods include Gaussian quadrature, adaptive
quadrature, and others.

These numerical techniques are particularly useful when an analytical solution
to the differentiation or integration problem is difficult or impossible to obtain.
The choice of method depends on the specific characteristics of the function and
the required level of accuracy.
Algorithm 4.2
A general algorithm for numerical integration, often applicable to methods like
the trapezoidal rule or Simpson's rule, can be outlined as follows:
Numerical Integration Algorithm:
1. Input:
- Define the function f(x) to be integrated over the interval [a, b].
- Specify the number of subintervals (n) or the step size (h) for the numerical
integration.
2. Initialization:
- Calculate the step size (h) if it's not provided: h = (b - a)/n.
3. Summation:
- Initialize a variable, let's call it (result), to store the accumulated sum.
- Loop over the nodes xi = a + ih, evaluate the function value f(xi) at each, and
update the result based on the specific integration method:
- For the trapezoidal rule (i = 0, 1, 2, ..., n - 1):
result = result + (h/2) [ f(xi) + f(x_{i+1}) ]
- For Simpson's rule (i = 0, 2, 4, ..., n - 2, stepping by two, with n even):
result = result + (h/3) [ f(xi) + 4 f(x_{i+1}) + f(x_{i+2}) ]
4. Output:
- The variable (result) now contains the numerical approximation of the
integral.
This algorithm provides a general framework for numerical integration. Specific
integration methods may require adjustments in the summation step, but the
overall structure remains consistent.
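The framework above can be sketched in Python; the function names trapezoid and simpson are illustrative choices:

```python
# A minimal sketch of the numerical integration algorithm above,
# specialized to the two Newton-Cotes rules.

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    result = 0.0
    for i in range(n):                 # i = 0, 1, ..., n - 1
        xi = a + i * h
        result += (h / 2) * (f(xi) + f(xi + h))
    return result

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    result = 0.0
    for i in range(0, n, 2):           # each pass covers the pair [x_i, x_{i+2}]
        xi = a + i * h
        result += (h / 3) * (f(xi) + 4 * f(xi + h) + f(xi + 2 * h))
    return result

print(trapezoid(lambda x: x**2 + 1, 0.0, 2.0, 4))  # 4.75
print(simpson(lambda x: 2 * x + 1, 1.0, 3.0, 4))   # matches the exact value 10
```

Only the per-step update differs between the two functions, mirroring the single summation step in the algorithm.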

Example 4.1: Numerical Differentiation

Use the central difference method to estimate the derivative of the function
f(x) = x^2 + 3x + 2 at x = 1 with a step size of h = 0.1.
Solution:

f'(x) ≈ (f(x + h) - f(x - h)) / (2h)

Substitute the values:

f'(1) ≈ [ ((1.1)^2 + 3(1.1) + 2) - ((0.9)^2 + 3(0.9) + 2) ] / (2 × 0.1)

Simplify the expression:

f'(1) ≈ [ (1.21 + 3.3 + 2) - (0.81 + 2.7 + 2) ] / 0.2 = (6.51 - 5.51) / 0.2 = 1.0 / 0.2 = 5

So, the estimated derivative of the function f(x) = x^2 + 3x + 2 at x = 1 using the
central difference method with a step size of h = 0.1 is f'(1) ≈ 5. This agrees with
the exact derivative f'(x) = 2x + 3, since the central difference is exact for
quadratic functions.
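A quick Python check of this example, assuming f(x) = x^2 + 3x + 2: the central difference reproduces the exact derivative f'(1) = 2(1) + 3 = 5 for every step size, because the formula is exact for quadratics.

```python
# Central difference check for Example 4.1 (assumed f(x) = x**2 + 3*x + 2).

def central_diff(f, x0, h):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f = lambda x: x**2 + 3 * x + 2
for h in (0.1, 0.01, 0.001):
    print(h, central_diff(f, 1.0, h))  # each estimate is 5 up to rounding
```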

Example 4.2

Estimate the definite integral ∫[0,2] (x^2 + 1) dx using the trapezoidal rule with four
subintervals.
Solution:

∫[0,2] f(x) dx ≈ (h/2) [ f(0) + 2f(x1) + 2f(x2) + 2f(x3) + f(2) ]

Calculate h and the xi values:

h = (2 - 0)/4 = 0.5
x1 = 0.5, x2 = 1.0, x3 = 1.5

Substitute into the formula:

∫[0,2] (x^2 + 1) dx ≈ (0.5/2) [ (0^2 + 1) + 2(0.5^2 + 1) + 2(1^2 + 1) + 2(1.5^2 + 1) + (2^2 + 1) ]
= 0.25 [ 1 + 2.5 + 4 + 6.5 + 5 ] = 0.25 × 19 = 4.75

The exact value is 14/3 ≈ 4.667, so the trapezoidal rule slightly overestimates this
convex integrand.
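A short sketch showing how the trapezoidal estimate for this integrand converges to the exact value 14/3 as the number of subintervals n grows:

```python
# Convergence of the composite trapezoidal rule for f(x) = x**2 + 1 on [0, 2].

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return (h / 2) * (f(a) + f(b)) + h * sum(f(a + i * h) for i in range(1, n))

f = lambda x: x**2 + 1
for n in (4, 8, 16):
    print(n, trapezoid(f, 0.0, 2.0, n))  # n = 4 gives 4.75; error shrinks ~4x per doubling
```

The roughly fourfold error reduction per doubling of n reflects the rule's O(h^2) accuracy.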

Example 4.3: Simpson's Rule

Apply Simpson's rule to approximate ∫[1,3] (2x + 1) dx with a step size of h = 0.5.

Solution:

∫[1,3] f(x) dx ≈ (h/3) [ f(1) + 4f(x1) + 2f(x2) + 4f(x3) + f(3) ]

Calculate h and the xi values:

h = 0.5, x1 = 1.5, x2 = 2.0, x3 = 2.5

Substitute into the formula:

∫[1,3] (2x + 1) dx ≈ (0.5/3) [ (2(1) + 1) + 4(2(1.5) + 1) + 2(2(2.0) + 1) + 4(2(2.5) + 1) + (2(3) + 1) ]
= (1/6) [ 3 + 16 + 10 + 24 + 7 ] = 60/6 = 10

This matches the exact value of the integral, since Simpson's rule is exact for
polynomials of degree up to three.
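Simpson's rule is exact for polynomials of degree at most three, which is why this estimate needs no refinement; a small demonstration:

```python
# Composite Simpson's rule and its exactness for low-degree polynomials.

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    return sum(
        (h / 3) * (f(a + i * h) + 4 * f(a + (i + 1) * h) + f(a + (i + 2) * h))
        for i in range(0, n, 2)
    )

print(simpson(lambda x: 2 * x + 1, 1.0, 3.0, 4))  # matches the exact value 10
print(simpson(lambda x: x**3, 0.0, 2.0, 2))       # matches the exact value 4 (degree 3)
```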


Example 4.4: Combined Differentiation and Integration
Given the function g(x) = e^(2x) + 2x^2 + 3, find the following:
a) Determine the derivative g'(x).
b) Evaluate the definite integral ∫[1,3] g(x) dx.

Solution:
a) Derivative g'(x):

g'(x) = d/dx (e^(2x) + 2x^2 + 3) = 2e^(2x) + 4x

b) Definite Integral ∫[1,3] g(x) dx:

∫[1,3] g(x) dx = ∫[1,3] (e^(2x) + 2x^2 + 3) dx

Apply the rules of integration to find the definite integral:

∫[1,3] g(x) dx = [ (1/2)e^(2x) + (2/3)x^3 + 3x ] evaluated from 1 to 3

Evaluate at the upper and lower limits:

= [ (1/2)e^6 + (2/3)(3)^3 + 3(3) ] - [ (1/2)e^2 + (2/3)(1)^3 + 3(1) ]
= (1/2)e^6 + 18 + 9 - (1/2)e^2 - 2/3 - 3

The numerical result is the difference between these values, approximately 221.35.
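Both parts of this example can be checked numerically in Python, using the central difference for the derivative and composite Simpson's rule for the integral, and comparing against the analytic answers derived above:

```python
# Numerical check of Example 4.4 for g(x) = e^(2x) + 2x^2 + 3.
import math

g = lambda x: math.exp(2 * x) + 2 * x**2 + 3

# a) Central difference estimate of g'(1); the analytic value is 2e^2 + 4.
h = 1e-5
deriv = (g(1 + h) - g(1 - h)) / (2 * h)

# b) Composite Simpson's rule on [1, 3] with n = 100 subintervals.
a, b, n = 1.0, 3.0, 100
hs = (b - a) / n
integral = sum(
    (hs / 3) * (g(a + i * hs) + 4 * g(a + (i + 1) * hs) + g(a + (i + 2) * hs))
    for i in range(0, n, 2)
)

# Analytic value: (1/2)e^6 + 18 + 9 - (1/2)e^2 - 2/3 - 3, about 221.35.
exact = 0.5 * math.exp(6) + 18 + 9 - 0.5 * math.exp(2) - 2 / 3 - 3
print(deriv, integral, exact)
```

Both numerical estimates agree with the analytic results to several decimal places, which is a useful sanity check on hand computations.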
Exercise 4.1: Forward Difference Method
Given the function f(x) = 2x^2 + 3x + 1, find the numerical approximation of
f'(2) using the forward difference method with a step size h = 0.1.

Exercise 4.2: Central Difference Method

Estimate the derivative g'(3) of the function g(x) = sin(x) at x = 3 using the
central difference method with a step size h = 0.05.
Applications of Numerical Differentiation
Suppose you have a set of temperature data T over time t for a chemical reaction.
Use numerical differentiation to find the rate of temperature change dT/dt at a
specific time point. Discuss the significance of this rate in the context of the
reaction dynamics.

Exercise 4.3
Use the trapezoidal rule to approximate ∫[0,1] e^(-x^2) dx with four subintervals.
Exercise 4.4
Apply Simpson's rule to estimate ∫[1,2] (1/x) dx using six subintervals.
Exercise 4.5
Utilize Gaussian quadrature to approximate ∫[-1,1] 1/(1 + x^2) dx.
