1. Introduction
Newton’s method is commonly used to find an approximate simple root of a given nonlinear equation. Many
researchers [1–6] have improved Newton’s method and designed its higher-order variants. Kalantari [7,8] established a
general and elegant approach for a k-point family of iterative methods, using the ideas of the divided-difference matrix,
confluent divided differences and the determinantal Taylor theorem. In addition, high-order algebraic methods [9] for
approximating square roots have been extensively investigated; visualization via polynomiography [9] in the complex plane
has enabled us to observe the fascinating beauty of fractals which attracts some readers to this area.
Suppose that a function f : C → C has a multiple root α with integer multiplicity m ≥ 1 and is analytic in a small
neighborhood of α . In this paper, a new iteration method free of second derivatives is proposed below for finding an
approximate root α, given an initial guess x0 sufficiently close to α:

x_{n+1} = x_n − [f(x_n − µ f(x_n)/f′(x_n)) + γ f(x_n)] / f′(x_n),  n = 0, 1, 2, . . . , (1.1)

where µ and γ are parameters to be chosen for maximal order of convergence [10,11]. Observe that the pair (µ = 0, γ = 0)
in (1.1) yields the classical Newton method for a simple root. For the case m = 1 with f : R → R and α ∈ R,
Kou [12] and Potra [5] investigated the cases (µ = −1, γ = 1) and (µ = 1, γ = −1), respectively. A general
extension of their studies with root multiplicity taken into account is the main objective of this paper. Observe that (1.1) is
free of second derivatives, unlike the modified Halley method [3] and the Euler–Chebyshev method [6] requiring the second
∗ Corresponding author.
E-mail addresses: [email protected] (Y.I. Kim), [email protected] (Y.H. Geum).
1 Instructor of Mathematics.
0898-1221/$ – see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.camwa.2011.04.069
Y.I. Kim, Y.H. Geum / Computers and Mathematics with Applications 62 (2011) 1634–1640 1635
derivative. For instance, the modified Halley method takes the form

x_{n+1} = x_n − 1 / [ ((1 + 1/m)/2) · f′(x_n)/f(x_n) − f′′(x_n)/(2 f′(x_n)) ],  n = 0, 1, 2, . . . , (1.2)
for a given x0 ∈ C.
Let p ∈ N be given and let g(x) satisfy the relation below:

|g′(α)| < 1, if p = 1;
g^(i)(α) = 0 for 1 ≤ i ≤ p − 1 and g^(p)(α) ≠ 0, if p ≥ 2.  (1.5)

Then it can be shown, by extending the similar analysis [2] to C, that the asymptotic error constant η with order of
convergence p is

η = lim_{n→∞} |e_{n+1}/e_n^p| = |g^(p)(α)|/p!,  (1.6)

where e_n = x_n − α.
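The quantities in (1.6) can be estimated numerically. As a quick illustration (a sketch of mine, not from the paper), the following runs the classical Newton iteration on f(x) = x² − 2, for which p = 2 and η = |f′′(α)/(2f′(α))| = 1/(2√2), and checks that the computed ratios e_{n+1}/e_n² approach that value; the test function and starting point are arbitrary choices.

```python
import math

# Newton iteration for f(x) = x**2 - 2; the simple root is alpha = sqrt(2).
# For Newton's method p = 2 and, by (1.6), eta = |g''(alpha)|/2! = |f''(alpha)/(2 f'(alpha))|.
alpha = math.sqrt(2.0)
eta_theory = abs(2.0 / (2.0 * 2.0 * alpha))   # f'' = 2, f' = 2*alpha  ->  1/(2*sqrt(2))

x = 1.5
errors = [abs(x - alpha)]
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)         # Newton step
    errors.append(abs(x - alpha))

# Computational asymptotic error constants e_{n+1}/e_n^2.
ratios = [errors[n + 1] / errors[n] ** 2 for n in range(3)]
print(ratios[-1], eta_theory)
```

The last computed ratio agrees with the theoretical constant to several digits before double precision is exhausted.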
Let us define

h(x) = { f(x)/f′(x), if x ≠ α;  lim_{x→α} f(x)/f′(x), if x = α }  (1.7)

and

g(x) = { x − G(x), if x ≠ α;  x − lim_{x→α} G(x), if x = α },  (1.8)

where G(x) = [f(x − µh(x)) + γ f(x)]/f′(x). Our aim is to obtain the maximal order of convergence p using (1.5) as well as to derive the
asymptotic error constant associated with p in terms of m and properly chosen µ and γ . To this end, we need to investigate
some local properties of g (x) in a small neighborhood of α . From g (x) as defined in (1.8), we obtain
(g − x) · f ′ = −(f (z ) + γ f ), (1.9)
where f = f (x), f ′ = f ′ (x), z = z (x) = x − µh(x) and g = g (x) are used for brevity and the symbol ′ denotes the derivative
with respect to x. With the aid of (1.9), some relationships between µ, γ , g ′ (α), g ′′ (α) and g ′′′ (α) are sought for maximum
order of convergence in Section 2.
(k)
By Lemmas 1.1 and 1.2, we have [f (z )]x=α = 0, 0 ≤ k ≤ m − 1 and f (α) = f ′ (α) = · · · = f (m−1) (α) = 0, f (m) ̸= 0.
Using L’Hospital’s rule [13] repeatedly, we get
Lemma 1.1. Let f : C → C have a root α with a given integer multiplicity m ≥ 1 and be analytic in a small neighborhood of α .
Then the function h(x) and its derivatives up to order 3 evaluated at α have the following properties, with θ_j = f^(m+j)(α)/f^(m)(α) for j ∈ N:
(1) h(α) = 0.
(2) h′(α) = 1/m.
(3) h′′(α) = −2θ_1/(m²(m + 1)).
(4) h^(3)(α) = (6/(m³(m + 1))) · (θ_1² − (2m/(m + 2)) · θ_2).
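Lemma 1.1 can be spot-checked numerically. The sketch below (my own illustration, not from the paper) uses f(x) = (x − 1)³(x + 2), so that m = 3, α = 1 and θ_1 = f^(4)(α)/f^(3)(α) = 4/3, and compares central finite differences of h = f/f′ with items (2) and (3).

```python
# Spot-check of Lemma 1.1 for f(x) = (x - 1)**3 * (x + 2):
# writing u = x - 1 gives f = 3u**3 + u**4, so m = 3, alpha = 1,
# f'''(1) = 18, f''''(1) = 24, hence theta1 = 24/18 = 4/3.
m, alpha, theta1 = 3, 1.0, 4.0 / 3.0

def f(x):
    return (x - 1.0) ** 3 * (x + 2.0)

def fp(x):
    return 3.0 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3

def h(x):
    # h(alpha) = 0 is the limiting value prescribed in (1.7).
    return 0.0 if x == alpha else f(x) / fp(x)

eps = 1e-5
h1 = (h(alpha + eps) - h(alpha - eps)) / (2.0 * eps)              # ~ h'(alpha)
h2 = (h(alpha + eps) - 2.0 * h(alpha) + h(alpha - eps)) / eps**2  # ~ h''(alpha)

print(h1, 1.0 / m)                           # item (2): h'(alpha) = 1/m
print(h2, -2.0 * theta1 / (m**2 * (m + 1)))  # item (3): h''(alpha) = -2*theta1/(m^2 (m+1))
```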
Lemma 1.2. Let z(x) = x − µh(x) ∈ C and h(x) be as described at the beginning of Section 1. Let α be a zero of multiplicity m
and t = z′(α) = 1 − µ/m with m ∈ N. Let θ_k for k ∈ N be as described in Lemma 1.1, and let F_k = d^k/dx^k f(z)|_{x=α} for k = 0, 1, 2, . . . .
Then the following hold, with t⁰ ≡ 1 for any t ∈ C:
(1) F_k = 0 for 0 ≤ k ≤ m − 1.
(2) F_m = f^(m)(α) · t^m.
(3) F_{m+1} = f^(m)(α) · θ_1 · t^(m−1) · (1 − t + t²).
(4) F_{m+2} = f^(m)(α) · t^(m−2) · (q_1(t)θ_1² + q_2(t)θ_2), where q_1(t) = −((m + 2)/m) · (t − 1)² · (t − (m − 1)/(2(m + 1))) and q_2(t) = t(t³ − 2t + 2).
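As with Lemma 1.1, the low-order cases of Lemma 1.2 can be verified by finite differences. The sketch below (an illustration of mine, not from the paper) takes f(x) = x²(1 + x), so m = 2, α = 0, θ_1 = f^(3)(0)/f^(2)(0) = 3, chooses µ = 1 so that t = 1 − µ/m = 1/2, and checks items (2) and (3).

```python
# Spot-check of Lemma 1.2 for f(x) = x**2 * (1 + x): m = 2, alpha = 0,
# f''(0) = 2, f'''(0) = 6, hence theta1 = 3.  With mu = 1 we get t = 1 - mu/m = 1/2.
m, mu, theta1, f2a = 2, 1.0, 3.0, 2.0          # f2a = f^(m)(alpha)
t = 1.0 - mu / m

def f(x):
    return x * x * (1.0 + x)

def z(x):
    # z = x - mu*h(x) with h = f/f'; h(0) = 0 is the limiting value in (1.7).
    return x if x == 0.0 else x - mu * f(x) / (2.0 * x + 3.0 * x * x)

def phi(x):
    return f(z(x))

eps = 1e-4
F2 = (phi(eps) - 2.0 * phi(0.0) + phi(-eps)) / eps**2   # central difference ~ F_m
F3 = (phi(2 * eps) - 2.0 * phi(eps) + 2.0 * phi(-eps) - phi(-2 * eps)) / (2.0 * eps**3)  # ~ F_{m+1}

print(F2, f2a * t**m)                                       # item (2): F_m = f^(m)(alpha) t^m
print(F3, f2a * theta1 * t**(m - 1) * (1.0 - t + t * t))    # item (3)
```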
2. Convergence analysis
In this section, we develop the order of convergence and the asymptotic error constant for iteration scheme (1.1) in terms
of parameters µ and γ .
We differentiate both sides of (1.9) with respect to x to obtain

g′(x) − 1 = { G_1(x), if x ≠ α;  lim_{x→α} G_1(x), if x = α },  (2.2)

where G_1(x) = [−(g − x) · f′′ − ([f(z)]^(1) + γ f′)]/f′. With the aid of Lemma 1.2 we find, for m ≥ 2,

[[f(z)]^(1) + γ f′]^(k)|_{x=α} = { 0, if 0 ≤ k ≤ m − 2;  f^(m)(α)(t^m + γ), if k = m − 1 }.  (2.4)

Applying L'Hospital's rule m − 1 times and requiring g′(α) = 0 for at least second-order convergence, with t = 1 − µ/m, it follows that

m = t^m + γ,  (2.6)

i.e. γ = m − t^m.
Hence, we have

g′′(x) = { G_2(x), if x ≠ α;  lim_{x→α} G_2(x), if x = α },  (2.8)

where G_2(x) = [−2(g′ − 1) · f′′ − (g − x) · f^(3) − ([f(z)]^(2) + γ f′′)]/f′, and

[−2(g′ − 1) · f′′ − (g − x) · f^(3) − ([f(z)]^(2) + γ f′′)]^(k)|_{x=α}
  = { 0, if 0 ≤ k ≤ m − 3;
      f^(m)(α)(m − t^m − γ), if k = m − 2;
      f^(m)(α)[θ_1(m + 1 − t^(m+1) + t^m − t^(m−1) − γ) − ((m + 2)(m − 1)/2) · g′′(α)], if k = m − 1 }.  (2.9)

Hence

G_2(α) = g′′(α) = (2/(m(m + 1))) · θ_1 · (m + 1 − t^(m+1) + t^m − t^(m−1) − γ).  (2.10)

Substituting γ = m − t^m from (2.6) yields

g′′(α) = −(2/(m(m + 1))) · θ_1 · ρ(t),  (2.11)
where ρ(t) = t^(m+1) − 2t^m + t^(m−1) − 1 with t⁰ ≡ 1 for any t ∈ C. Observe that ρ(t) = t² − 2t when m = 1. We seek t such
that g ′′ (α) = 0 as m varies to obtain possible third-order convergence. To this end, we find a root t of the polynomial ρ(t )
whose graph is sketched in Fig. 1. Such a root t is easily found to be an algebraic number [14] since all the coefficients are
integers. The next theorem guarantees the existence of real roots t of ρ(t ).
Theorem 2.1. Let ρ(t ) be defined as in (2.11) with m ≥ 1 as the multiplicity of root α of the given nonlinear equation f (x) = 0.
Then ρ has only two real roots t₁* ∈ (−1, 0] and t₂* ∈ (1, 2] for any odd m, while it has a unique real root t* ∈ (1, 2) for any
even m.
Proof. If m = 1, then ρ(t ) = t 2 − 2t clearly reflects the assertion. In fact, two roots t1 ∗ = 0 and t2 ∗ = 2 exist only when
m = 1. If m = 2, then ρ(t ) = t 3 − 2t 2 + t − 1 and ρ(1) · ρ(2) < 0 guarantees the existence of a root t ∗ ∈ (1, 2). Let
ρ(t) = t³ + a₁t² + a₂t + a₃, Q = (3a₂ − a₁²)/9 and R = (9a₁a₂ − 27a₃ − 2a₁³)/54 with a₁ = −2, a₂ = 1, a₃ = −1. Then
the discriminant [15] D = Q³ + R² = 23/108 > 0, showing that ρ has only one real root t*.
Let us consider the case when m ≥ 3. Since ρ′(t) = (m + 1) · t^(m−2) · (t − (m − 1)/(m + 1)) · (t − 1), ρ has local minima [16] at t = 0
and t = 1, while it has a local maximum at t = (m − 1)/(m + 1); furthermore, it is monotone increasing in [0, (m − 1)/(m + 1)] ∪ [1, ∞) and
monotone decreasing in [(m − 1)/(m + 1), 1]. By direct computation, ρ attains the local maximum value
(4/(m + 1)²) · ((m − 1)/(m + 1))^(m−1) − 1 < 0 at t = (m − 1)/(m + 1).
Combining this with ρ(0) = ρ(1) = −1 < 0, we find that no positive real root exists in [0, 1]. In view of the fact that
ρ(1) · ρ(2) = 1 − 2m−1 < 0, ρ has a positive real root t ∗ ∈ (1, 2) which is also unique due to the monotonicity of ρ in
(1, ∞).
Hence it suffices to prove the assertion for negative real roots. When m ≥ 3 is odd, ρ(−1) · ρ(0) < 0 means the existence
of a root t ∗ ∈ (−1, 0]. Since ρ(−t ) = t m+1 + 2t m + t m−1 − 1 has one sign change in its coefficients, ρ has at most one
negative real root according to the Descartes sign rule [2]; as a result, only one negative real root t ∗ ∈ (−1, 0] exists. When
m ≥ 4 is even, ρ(−t ) = −t m+1 − 2t m − t m−1 − 1 has no sign change in its coefficients, and the Descartes sign rule assures
that no negative real root of ρ exists. This completes the proof.
Table 1 shows the typical real roots of ρ(t) with 1 ≤ m ≤ 8, confirming Theorems 2.1 and 2.2. Complex roots of ρ(t) are not
selected, to keep iteration scheme (1.1) simpler.
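Since all coefficients of ρ are integers, its real roots are easy to bracket and refine numerically. The bisection sketch below (not part of the paper) reproduces several of the decimal root values listed in Table 1.

```python
def rho(t, m):
    # rho(t) = t^(m+1) - 2 t^m + t^(m-1) - 1, with t^0 = 1 when m = 1
    return t ** (m + 1) - 2.0 * t ** m + t ** (m - 1) - 1.0

def bisect(m, a, b, iters=200):
    # Plain bisection on a bracket [a, b] containing a sign change of rho(., m).
    fa = rho(a, m)
    for _ in range(iters):
        c = 0.5 * (a + b)
        fc = rho(c, m)
        if fa * fc <= 0.0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

# Unique positive real root t* in (1, 2) for even m (Theorem 2.1).
for m in (2, 4, 6, 8):
    print(m, bisect(m, 1.0, 2.0))
# Negative real root t* in (-1, 0) for odd m >= 3.
for m in (3, 5, 7):
    print(m, bisect(m, -1.0, 0.0))
```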
We differentiate both sides of (2.7) with respect to x to obtain

g^(3)(x) = { G_3(x), if x ≠ α;  lim_{x→α} G_3(x), if x = α },  (2.13)
Table 1
Typical real roots of ρ(t) with 1 ≤ m ≤ 8.

m  ρ(t)                        t
1  t² − 2t                     t₁₁ = 0, t₁₂ = 2
2  t³ − 2t² + t − 1            t₂ = (2 + ρ₁ + ρ₂)/3 ≈ 1.75487766624
3  (t² − t + 1)(t² − t − 1)    t₃₁ = (1 − √5)/2 ≈ −0.618033988749, t₃₂ = (1 + √5)/2 ≈ 1.61803398874
4  t⁵ − 2t⁴ + t³ − 1           t₄ ≈ 1.5289463545197057
5  (t³ − t² + 1)(t³ − t² − 1)  t₅₁ = (1 − ρ₁ − ρ₂)/3 ≈ −0.754877666247, t₅₂ = (1 + ρ₃ + ρ₄)/3 ≈ 1.465571231877
6  t⁷ − 2t⁶ + t⁵ − 1           t₆ ≈ 1.4177967508060623
7  (t⁴ − t³ + 1)(t⁴ − t³ − 1)  t₇₁ = (σ/4)(1 − √((3 − σ)/(σ − 1))) ≈ −0.819172513396, t₇₂ = (σ/4)(1 + √((3 − σ)/(σ − 1))) ≈ 1.380277569098
8  t⁹ − 2t⁸ + t⁷ − 1           t₈ ≈ 1.3499001972014136

Here ρ₁ = ((25 + 3√69)/2)^(1/3), ρ₂ = ((25 − 3√69)/2)^(1/3), ρ₃ = ((29 + 3√93)/2)^(1/3), ρ₄ = ((29 − 3√93)/2)^(1/3), and σ = 1 + √(1 + 4(τ₁ − τ₂)) with τ₁ = ((√849 − 9)/18)^(1/3), τ₂ = ((√849 + 9)/18)^(1/3).
where G_3(x) = [−3g′′ · f′′ − 3(g′ − 1) · f^(3) − (g − x) · f^(4) − ([f(z)]^(3) + γ f^(3))]/f′. Using Lemma 1.2 and the fact that g(α) = α, g′(α) = 0, g′′(α) = 0
for cubic order of convergence, we find the relation below:
[−3g′′ · f′′ − 3(g′ − 1) · f^(3) − (g − x) · f^(4) − ([f(z)]^(3) + γ f^(3))]^(k)|_{x=α}
  = { 0, if 0 ≤ k ≤ m − 4;
      f^(m)(α)(m − t^m − γ), if k = m − 3;
      θ_1 · f^(m)(α)(m + 1 − t^(m+1) + t^m − t^(m−1) − γ), if k = m − 2;
      f^(m)(α)[φ_1θ_1² + φ_2θ_2 − (1/6)(m − 1)(m² + 4m + 6) · g^(3)(α)], if k = m − 1 },  (2.14)

where φ_1 = φ_1(m, t) and φ_2 = φ_2(m, t) denote the coefficient polynomials arising from Lemma 1.2(4). Solving the k = m − 1 relation for g^(3)(α) yields

g^(3)(α) = (6/(m(m + 1)(m + 2))) · (φ_1θ_1² + φ_2θ_2).  (2.15)
Consequently, we obtain the asymptotic error constant η described in the theorem below.
Theorem 2.2. Let f : C → C have a zero α with integer multiplicity m ≥ 1 and be analytic in a small neighborhood of α . Let
θ1 , θ2 be defined as in Lemma 1.1 and φ1 , φ2 be defined as in (2.14). Let t be a root of ρ(t ) defined in (2.11). Let x0 be an initial
guess chosen in a sufficiently small neighborhood of α . Then the iteration method (1.1) with µ = m(1 − t ), γ = m − t m has
order 3 and its asymptotic error constant η is as follows:
η = (1/6)|g^(3)(α)| = (1/(m(m + 1)(m + 2))) · |φ_1θ_1² + φ_2θ_2|.  (2.16)
Remark. Two pairs (m = 1, t = 0) and (m = 1, t = 2) in (2.16) yield the asymptotic error constants given by Kou [12]
and Potra [5], respectively.
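To make the construction concrete, here is a small sketch (my own, not from the paper) of scheme (1.1) with the parameter choice of Theorem 2.2. The test function f(x) = (x − 2)³(x + 1) and the starting point are arbitrary choices; for m = 3 we take t = t₃₁ = (1 − √5)/2 from Table 1, so µ = 3(1 + √5)/2 and γ = 1 + √5.

```python
import math

def step(x, f, df, mu, gamma):
    # One step of scheme (1.1): x - [f(x - mu*f(x)/f'(x)) + gamma*f(x)] / f'(x).
    fx, dfx = f(x), df(x)
    z = x - mu * fx / dfx
    return x - (f(z) + gamma * fx) / dfx

# Test function with a root of multiplicity m = 3 at alpha = 2 (arbitrary choice).
f  = lambda x: (x - 2.0) ** 3 * (x + 1.0)
df = lambda x: 3.0 * (x - 2.0) ** 2 * (x + 1.0) + (x - 2.0) ** 3

m = 3
t = (1.0 - math.sqrt(5.0)) / 2.0       # t31 from Table 1, a root of rho for m = 3
mu, gamma = m * (1.0 - t), m - t ** m  # parameter choice of Theorem 2.2

x = 2.1
for _ in range(8):
    if f(x) == 0.0 or df(x) == 0.0:    # stop once the root is hit to machine precision
        break
    x = step(x, f, df, mu, gamma)
print(abs(x - 2.0))
```

Starting from x₀ = 2.1, the iterates reach the multiple root α = 2 to machine precision within a few steps.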
On the basis of the description stated in Sections 1 and 2, we develop a zero-finding algorithm to be implemented with
high-precision Mathematica [17] programming.
Table 2
Asymptotic error constants for f(x) = (x² − x + 7)²/(x² + cos x) with m = 2, α = (1 + 3√3 i)/2,
(t, µ, γ) = (1.75487766624, −1.50975533249, −1.07959562349).
Table 3
Convergence for f(x) = (e^(−x) sin x + log[1 + x − π])² (x − π)² sin² x with m = 7, α = π,
(t, µ, γ) = (−0.8191725133961718, 12.7342075937732, 7.247529862496331).

n  x_n               |x_n − α|         e_{n+1}/e_n³   η
0  2.90000000000000  0.241593                         0.2826722582
1  3.13818475241933  0.00340790        0.2416772773
2  3.14159264244215  1.11476 × 10⁻⁸    0.2816579059
3  3.14159265358979  3.91590 × 10⁻²⁵   0.2826722549
4  3.14159265358979  1.69738 × 10⁻⁷⁴   0.2826722582
5  3.14159265358979  1.38236 × 10⁻²²²  0.2826722582
6  3.14159265358979  0.0 × 10⁻²⁴⁹
Table 4
Convergence for f(x) = 208 − 37x⁴ + 3x⁷ with m = 1, α = 2, (t, µ, γ) = (2, −1, −1).

Tables 2–4 list the iterates x_n, the errors |x_n − α| and the computational asymptotic error constants e_{n+1}/e_n³ as well as the theoretical asymptotic error constant η.
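The Table 4 setting can be replayed directly: f(x) = 208 − 37x⁴ + 3x⁷ has the simple root α = 2 (m = 1), and t = 2 gives µ = −1, γ = −1. The sketch below (mine, not from the paper; the starting value x₀ = 1.98 is an arbitrary choice close to α) iterates scheme (1.1) with these parameters.

```python
# Scheme (1.1) on the Table 4 test function: m = 1, alpha = 2, (t, mu, gamma) = (2, -1, -1).
f  = lambda x: 208.0 - 37.0 * x ** 4 + 3.0 * x ** 7
df = lambda x: -148.0 * x ** 3 + 21.0 * x ** 6

mu, gamma = -1.0, -1.0
x = 1.98
for _ in range(6):
    fx, dfx = f(x), df(x)
    if fx == 0.0 or dfx == 0.0:
        break
    z = x - mu * fx / dfx          # here z = x + f(x)/f'(x), since mu = -1
    x = x - (f(z) + gamma * fx) / dfx
print(abs(x - 2.0))
```

Note that the attainable accuracy is limited by cancellation when evaluating the expanded polynomial near α, so the final error settles at roughly the rounding level rather than at an exact zero.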
Further test functions, with t-values shown in Table 1, are listed below:

f₁(x) = x⁹ − x⁴ + 73, α = −1.24943225052 − 1.04103553493i, m = 1, x₀ = −1.57 − 0.78i
f₂(x) = (x − 2) cos(π/x), α = 2, m = 2, x₀ = 1.97
f₃(x) = (x² + 16) log²(x² + 17), α = −4i, m = 3, x₀ = −3.92i
f₄(x) = (3 − x + x²)⁴/(x⁴ + sin x), α = (1 − √11 i)/2, m = 4, x₀ = 0.37 − 1.89i
Table 5
Convergence for various test functions.

f(x)   m  t    µ = m(1 − t)                γ = m − t^m             ν  η
f₁(x)  1  2    −1                          −1                      8  5.385517407
f₂(x)  2  t₂   −1.50975533249              −1.07959562349          5  0.040540846
f₃(x)  3  t₃₁  (3/2)(1 + √5) ≈ 4.85410196  1 + √5 ≈ 3.23606797     8  56.70330180
f₄(x)  4  t₄   −2.11578541807              −1.46473354593          5  0.2180188772
f₅(x)  5  t₅₁  8.77438833123               5.24512233375           6  0.7707803751
f₆(x)  6  t₆   −2.50678050483              −2.122390410052         6  0.0295374323
f₇(x)  7  t₇₂  −2.66194298368              −2.544759990360         6  0.6076535829
f₈(x)  8  t₈   −2.79920157761              −3.02588062855          5  0.1332832356
f₅(x) = (x³ − 4x² − 16x − 35) log³(x − 6) sin(πx/7), α = 7, m = 5, x₀ = 6.5
f₆(x) = (x − π)³ cos³(x/2), α = π, m = 6, x₀ = 3.75
f₇(x) = (e^(x²+7x−30) − 1)(x − 3)⁶, α = 3, m = 7, x₀ = 2.87
f₈(x) = (x − π)² log⁵(x − π + 1) sin x / e^x, α = π, m = 8, x₀ = 2.79.
Table 5 shows successful convergence for the test functions listed above, together with m, t, µ, γ, the least iteration number ν
required for convergence and the asymptotic error constant η.
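The parameter columns of Table 5 follow directly from the roots of ρ via µ = m(1 − t) and γ = m − t^m. The sketch below (not from the paper) recomputes them, deriving each t by bisection on the bracket that matches the root used in Table 5.

```python
def rho(t, m):
    # rho(t) = t^(m+1) - 2 t^m + t^(m-1) - 1
    return t ** (m + 1) - 2.0 * t ** m + t ** (m - 1) - 1.0

def bisect(m, a, b, iters=200):
    fa = rho(a, m)
    for _ in range(iters):
        c = 0.5 * (a + b)
        fc = rho(c, m)
        if fa * fc <= 0.0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

def t_for(m):
    # t-values used in Table 5: t = 2 for m = 1; the root in (-1, 0) for
    # m = 3, 5; the root in (1, 2) otherwise (for m = 7 this is t72).
    if m == 1:
        return 2.0
    if m in (3, 5):
        return bisect(m, -1.0, 0.0)
    return bisect(m, 1.0, 2.0)

for m in range(1, 9):
    t = t_for(m)
    mu, gamma = m * (1.0 - t), m - t ** m
    print(m, mu, gamma)
```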
Let p denote the order of convergence and d the number of new evaluations of f(x) or its derivatives per iteration.
Taking the computational cost into account, the efficiency of a given iteration function is measured by the efficiency index
EFF* = p^(1/d) introduced in [6]. A bigger efficiency index indicates a more efficient and less expensive iteration scheme. For
our proposed iteration scheme, we find p = 3 and d = 3, giving EFF* = 3^(1/3) ≈ 1.44224957, which is better than 2^(1/2), the
efficiency index of the modified Newton method.
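The comparison can be made explicit: with p = 3, d = 3 for scheme (1.1) versus p = 2, d = 2 for the modified Newton method, the short sketch below (not from the paper) evaluates both indices.

```python
# Efficiency index EFF = p**(1/d): scheme (1.1) vs the modified Newton method.
eff_new    = 3 ** (1 / 3)   # p = 3, d = 3: f, f', and one extra f-evaluation per step
eff_newton = 2 ** (1 / 2)   # p = 2, d = 2
print(eff_new, eff_newton, eff_new > eff_newton)
```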
In this paper, we have proposed a cubic-order iteration scheme (1.1) free of second derivatives. The current approach
can be extended to other iterative methods requiring higher-order derivatives, including the Halley method.
References
[1] Q. Du, M. Jin, T.Y. Li, Z. Zeng, The quasi-Laguerre iteration, Mathematics of Computation 66 (217) (1997) 345–361.
[2] Y.H. Geum, The asymptotic error constant of leap-frogging Newton's method locating a simple real zero, Applied Mathematics and Computation 189
(2007) 963–969.
[3] E. Hansen, M. Patrick, A family of root finding methods, Numerische Mathematik 27 (1977) 257–269.
[4] L.D. Petkovic, M.S. Petkovic, D. Zivkovic, Hansen–Patrick’s family is of Laguerre’s type, Novi Sad Journal of Mathematics 33 (1) (2003) 109–115.
[5] F.A. Potra, V. Pták, Nondiscrete Induction and Iterative Processes, in: Research Notes in Mathematics, vol. 103, Pitman, Boston, 1984.
[6] J.F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing Company, 1982.
[7] B. Kalantari, Generalization of Taylor’s theorem and Newton’s method via a new family of determinantal interpolation formulas, Journal of
Computational and Applied Mathematics 126 (2000) 287–318.
[8] B. Kalantari, S. Park, A computational comparison of the first nine members of a determinantal family of root-finding methods, Journal of
Computational and Applied Mathematics 130 (2001) 197–204.
[9] B. Kalantari, Polynomial Root-Finding and Polynomiography, World Scientific, 2008.
[10] S.D. Conte, Carl de Boor, Elementary Numerical Analysis, McGraw-Hill Inc., 1980.
[11] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag New York Inc., 1980, pp. 244–313.
[12] J. Kou, Y. Li, X. Wang, A modification of Newton method with third-order convergence, Applied Mathematics and Computation 181 (2) (2006)
1106–1111.
[13] J.H. Mathews, Basic Complex Variables for Mathematics and Engineering, Allyn and Bacon, Inc., 1982.
[14] A. Clark, Elements of Abstract Algebra, Dover Publications, Inc., 1971.
[15] M.R. Spiegel, Mathematical Handbook of Formulas and Tables, in: Schaum’s Outline Series, McGraw-Hill Inc., 1968.
[16] J.E. Marsden, Elementary Classical Analysis, W. H. Freeman and Company, 1974.
[17] S. Wolfram, The Mathematica Book, 4th ed., Cambridge University Press, 1999.