Nonlinear Solutions
Uploaded by free5050
Fixed Point Method, Secant Method
Open Methods

For the bracketing methods in Chap. 5, the root is located within an interval prescribed by a lower and an upper bound. Repeated application of these methods always results in closer estimates of the true value of the root. Such methods are said to be convergent because they move closer to the truth as the computation progresses (Fig. 6.1a).

In contrast, the open methods described in this chapter are based on formulas that require only a single starting value of x or two starting values that do not necessarily bracket the root. As such, they sometimes diverge or move away from the true root as the computation progresses (Fig. 6.1b). However, when the open methods converge (Fig. 6.1c), they usually do so much more quickly than the bracketing methods. We will begin our discussion of open techniques with a simple version that is useful for illustrating their general form and also for demonstrating the concept of convergence.

FIGURE 6.1
Graphical depiction of the fundamental difference between the (a) bracketing and (b) and (c) open methods for root location. In (a), which is the bisection method, the root is constrained within the interval prescribed by x_l and x_u. In contrast, for the open methods depicted in (b) and (c), a formula is used to project from x_i to x_{i+1} in an iterative fashion. Thus, the method can either (b) diverge or (c) converge rapidly, depending on the value of the initial guess.

6.1 SIMPLE FIXED-POINT ITERATION

As mentioned above, open methods employ a formula to predict the root. Such a formula can be developed for simple fixed-point iteration (or, as it is also called, one-point iteration or successive substitution) by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation:

    x = g(x)        (6.1)

This transformation can be accomplished either by algebraic manipulation or by simply adding x to both sides of the original equation.
For example,

    x^2 − 2x + 3 = 0

can be simply manipulated to yield

    x = (x^2 + 3) / 2

whereas sin x = 0 could be put into the form of Eq. (6.1) by adding x to both sides to yield

    x = sin x + x

The utility of Eq. (6.1) is that it provides a formula to predict a new value of x as a function of an old value of x. Thus, given an initial guess at the root x_i, Eq. (6.1) can be used to compute a new estimate x_{i+1}, as expressed by the iterative formula

    x_{i+1} = g(x_i)        (6.2)

As with other iterative formulas in this book, the approximate error for this equation can be determined using the error estimator [Eq. (3.5)]:

    ε_a = |(x_{i+1} − x_i) / x_{i+1}| · 100%

EXAMPLE 6.1  Simple Fixed-Point Iteration

Problem Statement. Use simple fixed-point iteration to locate the root of f(x) = e^(−x) − x.

Solution. The function can be separated directly and expressed in the form of Eq. (6.2) as

    x_{i+1} = e^(−x_i)

Starting with an initial guess of x_0 = 0, this iterative equation can be applied to compute:

    i     x_i         ε_a (%)    ε_t (%)
    0     0                      100.0
    1     1.000000    100.0      76.3
    2     0.367879    171.8      35.1
    3     0.692201    46.9       22.1
    4     0.500473    38.3       11.8
    5     0.606244    17.4       6.89
    6     0.545396    11.2       3.83
    7     0.579612    5.90       2.20
    8     0.560115    3.48       1.24
    9     0.571143    1.93       0.705
    10    0.564879    1.11       0.399

Thus, each iteration brings the estimate closer to the true value of the root: 0.56714329.

6.1.1 Convergence

Notice that the true percent relative error for each iteration of Example 6.1 is roughly proportional (by a factor of about 0.5 to 0.6) to the error from the previous iteration. This property, called linear convergence, is characteristic of fixed-point iteration.

Aside from the "rate" of convergence, we must comment at this point about the "possibility" of convergence. The concepts of convergence and divergence can be depicted graphically. Recall that in Sec. 5.1, we graphed a function to visualize its structure and behavior (Example 5.1). Such an approach is employed in Fig.
6.2a for the function f(x) = e^(−x) − x. An alternative graphical approach is to separate the equation into two component parts, as in

    f_1(x) = f_2(x)

Then the two equations

    y_1 = f_1(x)        (6.3)

and

    y_2 = f_2(x)        (6.4)

can be plotted separately (Fig. 6.2). The x values corresponding to the intersections of these functions represent the roots of f(x) = 0.

EXAMPLE 6.2  The Two-Curve Graphical Method

Problem Statement. Separate the equation e^(−x) − x = 0 into two parts and determine its root graphically.

Solution. Reformulate the equation as y_1 = x and y_2 = e^(−x). The following values can be computed:

    x      y_1    y_2
    0.0    0.0    1.000
    0.2    0.2    0.819
    0.4    0.4    0.670
    0.6    0.6    0.549
    0.8    0.8    0.449
    1.0    1.0    0.368

These points are plotted in Fig. 6.2b. The intersection of the two curves indicates a root estimate of approximately x = 0.57, which corresponds to the point where the single curve in Fig. 6.2a crosses the x axis.

FIGURE 6.2
Two alternative graphical methods for determining the root of f(x) = e^(−x) − x. (a) Root at the point where the curve crosses the x axis; (b) root at the intersection of the component functions.

The two-curve method can now be used to illustrate the convergence and divergence of fixed-point iteration. First, Eq. (6.1) can be reexpressed as a pair of equations y_1 = x and y_2 = g(x). These two equations can then be plotted separately. As was the case with Eqs. (6.3) and (6.4), the roots of f(x) = 0 correspond to the abscissa value at the intersection of the two curves. The function y_1 = x and four different shapes for y_2 = g(x) are plotted in Fig. 6.3.

For the first case (Fig. 6.3a), the initial guess of x_0 is used to determine the corresponding point on the y_2 curve [x_0, g(x_0)]. The point (x_1, x_1) is located by moving left horizontally to the y_1 curve. These movements are equivalent to the first iteration in the fixed-point method:

    x_1 = g(x_0)

Thus, in both the equation and in the plot, a starting value of x_0 is used to obtain an estimate of x_1.
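The iterations of Example 6.1 can be sketched in Python. This is a minimal rendering of the procedure, not code from the text; the function name fixpt echoes the pseudocode of Fig. 6.4, and the stopping test uses the error estimator of Eq. (3.5):

```python
import math

def fixpt(g, x0, es=0.01, imax=50):
    """Simple fixed-point iteration: repeatedly apply x = g(x) until the
    approximate percent relative error [Eq. (3.5)] drops below es."""
    xr = x0
    ea = 100.0
    for _ in range(imax):
        xrold = xr
        xr = g(xrold)
        if xr != 0:
            ea = abs((xr - xrold) / xr) * 100  # approximate error, in %
        if ea < es:
            break
    return xr, ea

# Example 6.1: f(x) = e^-x - x rearranged as x = e^-x, starting at x0 = 0
root, ea = fixpt(lambda x: math.exp(-x), 0.0)
```

With the default tolerance the loop stops near the true root 0.56714329, the error shrinking by a factor of roughly 0.5 to 0.6 per pass, consistent with the linear convergence noted in Sec. 6.1.1.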
The next iteration consists of moving to [x_1, g(x_1)] and then to (x_2, x_2). This iteration is equivalent to the equation

    x_2 = g(x_1)

FIGURE 6.3
Iteration cobwebs depicting convergence [(a) and (b)] and divergence [(c) and (d)] of simple fixed-point iteration. Graphs (a) and (c) are called monotone patterns, whereas (b) and (d) are called oscillating or spiral patterns. Note that convergence occurs when |g'(x)| < 1.

Box 6.1  Convergence of Fixed-Point Iteration

From studying Fig. 6.3, it should be clear that fixed-point iteration converges if, in the region of interest, |g'(x)| < 1. In other words, convergence occurs if the magnitude of the slope of g(x) is less than the slope of the line f(x) = x. This observation can be demonstrated theoretically. Recall that the iterative equation is

    x_{i+1} = g(x_i)

Suppose that the true solution is

    x_r = g(x_r)

Subtracting these equations yields

    x_r − x_{i+1} = g(x_r) − g(x_i)        (B6.1.1)

The derivative mean-value theorem (recall Sec. 4.1.1) states that if a function g(x) and its first derivative are continuous over an interval a ≤ x ≤ b, then there exists at least one value of x = ξ within the interval such that

    g'(ξ) = [g(b) − g(a)] / (b − a)        (B6.1.2)

The right-hand side of this equation is the slope of the line joining g(a) and g(b). Thus, the mean-value theorem states that there is at least one point between a and b that has a slope, designated by g'(ξ), which is parallel to the line joining g(a) and g(b) (recall Fig. 4.3).

Now, if we let a = x_i and b = x_r, the right-hand side of Eq. (B6.1.1) can be expressed as

    g(x_r) − g(x_i) = (x_r − x_i) g'(ξ)

where ξ is somewhere between x_i and x_r. This result can then be substituted into Eq. (B6.1.1) to yield

    x_r − x_{i+1} = (x_r − x_i) g'(ξ)        (B6.1.3)

If the true error for iteration i is defined as

    E_{t,i} = x_r − x_i

then Eq. (B6.1.3) becomes

    E_{t,i+1} = g'(ξ) E_{t,i}

Consequently, if |g'(ξ)| < 1, the errors decrease with each iteration. For |g'(ξ)| > 1, the errors grow.
Notice also that if the derivative is positive, the errors will be positive, and hence the iterative solution will be monotonic (Fig. 6.3a and c). If the derivative is negative, the errors will oscillate (Fig. 6.3b and d). An offshoot of the analysis is that it also demonstrates that when the method converges, the error is roughly proportional to and less than the error of the previous step. For this reason, simple fixed-point iteration is said to be linearly convergent.

The solution in Fig. 6.3a is convergent because the estimates of x move closer to the root with each iteration. The same is true for Fig. 6.3b. However, this is not the case for Fig. 6.3c and d, where the iterations diverge from the root. Notice that convergence seems to occur only when the absolute value of the slope of y_2 = g(x) is less than the slope of y_1 = x, that is, when |g'(x)| < 1. Box 6.1 provides a theoretical derivation of this result.

6.1.2 Algorithm for Fixed-Point Iteration

The computer algorithm for fixed-point iteration is extremely simple. It consists of a loop to iteratively compute new estimates until the termination criterion has been met. Figure 6.4 presents pseudocode for the algorithm. Other open methods can be programmed in a similar way, the major modification being to change the iterative formula that is used to compute the new root estimate.

FIGURE 6.4
Pseudocode for fixed-point iteration. Note that other open methods can be cast in this general format.

    FUNCTION Fixpt(x0, es, imax, iter, ea)
      xr = x0
      iter = 0
      DO
        xrold = xr
        xr = g(xrold)
        iter = iter + 1
        IF xr ≠ 0 THEN
          ea = ABS((xr − xrold) / xr) · 100
        END IF
        IF ea < es OR iter ≥ imax EXIT
      END DO
      Fixpt = xr
    END Fixpt

6.2 THE NEWTON-RAPHSON METHOD

Perhaps the most widely used of all root-locating formulas is the Newton-Raphson equation (Fig. 6.5).

FIGURE 6.5
Graphical depiction of the Newton-Raphson method. A tangent to the function at x_i [that is, f'(x_i)] is extrapolated down to the x axis to provide an estimate of the root at x_{i+1}.
If the initial guess at the root is x_i, a tangent can be extended from the point [x_i, f(x_i)]. The point where this tangent crosses the x axis usually represents an improved estimate of the root.

The Newton-Raphson method can be derived on the basis of this geometrical interpretation (an alternative method based on the Taylor series is described in Box 6.2). As in Fig. 6.5, the first derivative at x is equivalent to the slope:

    f'(x_i) = [f(x_i) − 0] / (x_i − x_{i+1})        (6.5)

which can be rearranged to yield

    x_{i+1} = x_i − f(x_i) / f'(x_i)        (6.6)

which is called the Newton-Raphson formula.

EXAMPLE 6.3  Newton-Raphson Method

Problem Statement. Use the Newton-Raphson method to estimate the root of f(x) = e^(−x) − x, employing an initial guess of x_0 = 0.

Solution. The first derivative of the function can be evaluated as

    f'(x) = −e^(−x) − 1

which can be substituted along with the original function into Eq. (6.6) to give

    x_{i+1} = x_i − (e^(−x_i) − x_i) / (−e^(−x_i) − 1)

Starting with an initial guess of x_0 = 0, this iterative equation can be applied to compute:

    i    x_i             ε_t (%)
    0    0               100
    1    0.500000000     11.8
    2    0.566311003     0.147
    3    0.567143165     0.0000220
    4    0.567143290     < 10^−8

Thus, the approach rapidly converges on the true root. Notice that the true percent relative error at each iteration decreases much faster than it does in simple fixed-point iteration (compare with Example 6.1).

6.2.1 Termination Criteria and Error Estimates

As with other root-location methods, Eq. (3.5) can be used as a termination criterion. In addition, however, the Taylor series derivation of the method (Box 6.2) provides theoretical insight regarding the rate of convergence, as expressed by E_{t,i+1} = O(E_{t,i}^2). Thus, the error should be roughly proportional to the square of the previous error. In other words,

Box 6.2  Derivation and Error Analysis of the Newton-Raphson Method

Aside from the geometric derivation [Eqs.
(6.5) and (6.6)], the Newton-Raphson method may also be developed from the Taylor series expansion. This alternative derivation is useful in that it also provides insight into the rate of convergence of the method.

Recall from Chap. 4 that the Taylor series expansion can be represented as

    f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} − x_i) + [f''(ξ)/2!](x_{i+1} − x_i)^2        (B6.2.1)

where ξ lies somewhere in the interval from x_i to x_{i+1}. An approximate version is obtainable by truncating the series after the first-derivative term:

    f(x_{i+1}) ≅ f(x_i) + f'(x_i)(x_{i+1} − x_i)        (B6.2.2)

At the intersection with the x axis, f(x_{i+1}) would be equal to zero, or

    0 = f(x_i) + f'(x_i)(x_{i+1} − x_i)

which can be solved for

    x_{i+1} = x_i − f(x_i) / f'(x_i)

which is identical to Eq. (6.6). Thus, we have derived the Newton-Raphson formula using a Taylor series.

Aside from the derivation, the Taylor series can also be used to estimate the error of the formula. This can be done by realizing that if the complete Taylor series were employed, an exact result would be obtained. For this situation x_{i+1} = x_r, where x_r is the true value of the root. Substituting this value along with f(x_r) = 0 into Eq. (B6.2.1) yields

    0 = f(x_i) + f'(x_i)(x_r − x_i) + [f''(ξ)/2!](x_r − x_i)^2        (B6.2.3)

Equation (B6.2.2) can be subtracted from Eq. (B6.2.3) to give

    0 = f'(x_i)(x_r − x_{i+1}) + [f''(ξ)/2!](x_r − x_i)^2        (B6.2.4)

Now, realize that the error is equal to the discrepancy between x_{i+1} and the true value x_r, as in

    E_{t,i+1} = x_r − x_{i+1}

and Eq. (B6.2.4) can be expressed as

    0 = f'(x_i) E_{t,i+1} + [f''(ξ)/2!] E_{t,i}^2        (B6.2.5)

If we assume convergence, both x_i and ξ should eventually be approximated by the root x_r, and Eq. (B6.2.5) can be rearranged to yield

    E_{t,i+1} = [−f''(x_r) / (2 f'(x_r))] E_{t,i}^2        (B6.2.6)

According to Eq. (B6.2.6), the error is roughly proportional to the square of the previous error. This means that the number of correct decimal places approximately doubles with each iteration. Such behavior is referred to as quadratic convergence. Example 6.4 manifests this property.

the number of significant figures of accuracy approximately doubles with each iteration. This behavior is examined in the following example.

EXAMPLE 6.4
Error Analysis of the Newton-Raphson Method

Problem Statement. As derived in Box 6.2, the Newton-Raphson method is quadratically convergent. That is, the error is roughly proportional to the square of the previous error, as in

    E_{t,i+1} ≅ [−f''(x_r) / (2 f'(x_r))] E_{t,i}^2        (E6.4.1)

Examine this formula and see if it applies to the results of Example 6.3.

Solution. The first derivative of f(x) = e^(−x) − x is

    f'(x) = −e^(−x) − 1

which can be evaluated at x_r = 0.56714329 as f'(0.56714329) = −1.56714329. The second derivative is

    f''(x) = e^(−x)

which can be evaluated as f''(0.56714329) = 0.56714329. These results can be substituted into Eq. (E6.4.1) to yield

    E_{t,i+1} ≅ [−0.56714329 / (2(−1.56714329))] E_{t,i}^2 = 0.18095 E_{t,i}^2

From Example 6.3, the initial error was E_{t,0} = 0.56714329, which can be substituted into the error equation to predict

    E_{t,1} ≅ 0.18095(0.56714329)^2 = 0.0582

which is close to the true error of 0.06714329. For the next iteration,

    E_{t,2} ≅ 0.18095(0.06714329)^2 = 0.0008158

which also compares favorably with the true error of 0.0008323. For the third iteration,

    E_{t,3} ≅ 0.18095(0.0008323)^2 = 0.000000125

which is the error obtained in Example 6.3. The error estimate improves in this manner because, as we come closer to the root, x_i and ξ are better approximated by x_r [recall our assumption in going from Eq. (B6.2.5) to Eq. (B6.2.6) in Box 6.2]. Finally,

    E_{t,4} ≅ 0.18095(0.000000125)^2 = 2.83 × 10^−15

Thus, this example illustrates that the error of the Newton-Raphson method for this case is, in fact, roughly proportional (by a factor of 0.18095) to the square of the error of the previous iteration.

6.2.2 Pitfalls of the Newton-Raphson Method

Although the Newton-Raphson method is often very efficient, there are situations where it performs poorly. A special case, multiple roots, will be addressed later in this chapter. However, even when dealing with simple roots, difficulties can also arise, as in the following example.

EXAMPLE 6.5  Example of a Slowly Converging Function with Newton-Raphson

Problem Statement.
Determine the positive root of f(x) = x^10 − 1 using the Newton-Raphson method and an initial guess of x = 0.5.

Solution. The Newton-Raphson formula for this case is

    x_{i+1} = x_i − (x_i^10 − 1) / (10 x_i^9)

which can be used to compute:

    i    x_i
    0    0.5
    1    51.65
    2    46.485
    3    41.8365
    4    37.65285
    5    33.887565
    ⋮    ⋮
    ∞    1.0000000

Thus, after the first poor prediction, the technique is converging on the true root of 1, but at a very slow rate.

Aside from slow convergence due to the nature of the function, other difficulties can arise, as illustrated in Fig. 6.6. For example, Fig. 6.6a depicts the case where an inflection point [that is, f''(x) = 0] occurs in the vicinity of a root. Notice that iterations beginning at x_0 progressively diverge from the root. Figure 6.6b illustrates the tendency of the Newton-Raphson technique to oscillate around a local maximum or minimum. Such oscillations may persist, or, as in Fig. 6.6b, a near-zero slope is reached, whereupon the solution is sent far from the area of interest. Figure 6.6c shows how an initial guess that is close to one root can jump to a location several roots away. This tendency to move away from the area of interest occurs because near-zero slopes are encountered. Obviously, a zero slope [f'(x) = 0] is truly a disaster because it causes division by zero in the Newton-Raphson formula [Eq. (6.6)]. Graphically (see Fig. 6.6d), it means that the solution shoots off horizontally and never hits the x axis.

Thus, there is no general convergence criterion for Newton-Raphson. Its convergence depends on the nature of the function and on the accuracy of the initial guess. The only remedy is to have an initial guess that is "sufficiently" close to the root. And for some functions, no guess will work!
Good guesses are usually predicated on knowledge of the physical problem setting or on devices such as graphs that provide insight into the behavior of the solution. The lack of a general convergence criterion also suggests that good computer software should be designed to recognize slow convergence or divergence. The next section addresses some of these issues.

6.2.3 Algorithm for Newton-Raphson

An algorithm for the Newton-Raphson method is readily obtained by substituting Eq. (6.6) for the predictive formula [Eq. (6.2)] in Fig. 6.4. Note, however, that the program must also be modified to compute the first derivative. This can be simply accomplished by the inclusion of a user-defined function.

FIGURE 6.6
Four cases where the Newton-Raphson method exhibits poor convergence.

Additionally, in light of the foregoing discussion of potential problems of the Newton-Raphson method, the program would be improved by incorporating several additional features:

1. A plotting routine should be included in the program.
2. At the end of the computation, the final root estimate should always be substituted into the original function to compute whether the result is close to zero. This check partially guards against those cases where slow or oscillating convergence may lead to a small value of ε_a while the solution is still far from a root.
3. The program should always include an upper limit on the number of iterations to guard against oscillating, slowly convergent, or divergent solutions that could persist interminably.
4. The program should alert the user and take account of the possibility that f'(x) might equal zero at any time during the computation.
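A sketch of such a Newton-Raphson program in Python follows. It is an illustrative rendering, not the text's own code; the iteration cap, the zero-derivative guard, and the returned residual correspond to features 3, 4, and 2 above:

```python
import math

def newton_raphson(f, df, x0, es=0.01, imax=50):
    """Newton-Raphson iteration, Eq. (6.6), with simple safeguards."""
    xr = x0
    ea = 100.0
    for _ in range(imax):                    # upper limit on iterations
        fp = df(xr)
        if fp == 0:                          # f'(x) = 0 would divide by zero
            raise ValueError("zero derivative encountered")
        xrold = xr
        xr = xr - f(xr) / fp                 # Eq. (6.6)
        if xr != 0:
            ea = abs((xr - xrold) / xr) * 100
        if ea < es:
            break
    return xr, f(xr), ea                     # f(xr) serves as a final check

# Example 6.3: f(x) = e^-x - x with f'(x) = -e^-x - 1, starting at x0 = 0
root, resid, ea = newton_raphson(lambda x: math.exp(-x) - x,
                                 lambda x: -math.exp(-x) - 1, 0.0)
```

Returning the residual f(x_r) lets the caller confirm that a small ε_a really corresponds to a point near a root, rather than to stalled or oscillating convergence.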
6.3 THE SECANT METHOD

A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative. Although this is not inconvenient for polynomials and many other functions, there are certain functions whose derivatives may be extremely difficult or inconvenient to evaluate. For these cases, the derivative can be approximated by a backward finite divided difference, as in (Fig. 6.7)

    f'(x_i) ≅ [f(x_{i−1}) − f(x_i)] / (x_{i−1} − x_i)

This approximation can be substituted into Eq. (6.6) to yield the following iterative equation:

    x_{i+1} = x_i − f(x_i)(x_{i−1} − x_i) / [f(x_{i−1}) − f(x_i)]        (6.7)

FIGURE 6.7
Graphical depiction of the secant method. This technique is similar to the Newton-Raphson technique (Fig. 6.5) in that an estimate of the root is predicted by extrapolating a tangent of the function to the x axis. However, the secant method uses a difference rather than a derivative to estimate the slope.

Equation (6.7) is the formula for the secant method. Notice that the approach requires two initial estimates of x. However, because f(x) is not required to change signs between the estimates, it is not classified as a bracketing method.

EXAMPLE 6.6  The Secant Method

Problem Statement. Use the secant method to estimate the root of f(x) = e^(−x) − x. Start with initial estimates of x_{−1} = 0 and x_0 = 1.0.

Solution. Recall that the true root is 0.56714329.

First iteration:

    x_{−1} = 0        f(x_{−1}) = 1.00000
    x_0 = 1           f(x_0) = −0.63212

    x_1 = 1 − [−0.63212(0 − 1)] / [1 − (−0.63212)] = 0.61270        ε_t = 8.0%

Second iteration:

    x_0 = 1           f(x_0) = −0.63212
    x_1 = 0.61270     f(x_1) = −0.07081

(Note that both estimates are now on the same side of the root.)

    x_2 = 0.61270 − [−0.07081(1 − 0.61270)] / [−0.63212 − (−0.07081)] = 0.56384        ε_t = 0.58%

Third iteration:

    x_1 = 0.61270     f(x_1) = −0.07081
    x_2 = 0.56384     f(x_2) = 0.00518

    x_3 = 0.56384 − [0.00518(0.61270 − 0.56384)] / [−0.07081 − 0.00518] = 0.56717        ε_t = 0.0048%

6.3.1 The Difference Between the Secant and False-Position Methods

Note the similarity between the secant method and the false-position method. For example, Eqs. (6.7) and (5.7) are identical on a term-by-term basis.
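The three iterations of Example 6.6 can be reproduced with a short Python sketch (the function name and signature are our own; the last line of the loop performs the strict-sequence replacement of the two estimates, with no bracketing):

```python
import math

def secant(f, x_prev, x_curr, n_iter=3):
    """Secant method, Eq. (6.7): Newton-Raphson with the derivative
    replaced by a backward finite divided difference."""
    for _ in range(n_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        x_new = x_curr - f_curr * (x_prev - x_curr) / (f_prev - f_curr)  # Eq. (6.7)
        x_prev, x_curr = x_curr, x_new   # strict sequence; no bracketing
    return x_curr

# Example 6.6: f(x) = e^-x - x with x_{-1} = 0 and x_0 = 1.0
root = secant(lambda x: math.exp(-x) - x, 0.0, 1.0)
```

Three passes give 0.61270, 0.56384, and 0.56717, matching the hand computation of Example 6.6.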
Both use two initial estimates to compute an approximation of the slope of the function that is used to project to the x axis for a new estimate of the root. However, a critical difference between the methods is how one of the initial values is replaced by the new estimate. Recall that in the false-position method the latest estimate of the root replaces whichever of the original values yielded a function value with the same sign as f(x_r). Consequently, the two estimates always bracket the root. Therefore, for all practical purposes, the method always converges because the root is kept within the bracket. In contrast, the secant method replaces the values in strict sequence, with the new value x_{i+1} replacing x_i and x_i replacing x_{i−1}. As a result, the two values can sometimes lie on the same side of the root. For certain cases, this can lead to divergence.

6.3.2 Algorithm for the Secant Method

As with the other open methods, an algorithm for the secant method is obtained simply by modifying Fig. 6.4 so that two initial guesses are input and by using Eq. (6.7) to calculate the root. In addition, the options suggested in Sec. 6.2.3 for the Newton-Raphson method can also be applied to good advantage for the secant program.

6.3.3 Modified Secant Method

Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x),

    f'(x_i) ≅ [f(x_i + δx_i) − f(x_i)] / (δx_i)

where δ = a small perturbation fraction. This approximation can be substituted into Eq. (6.6) to yield the following iterative equation:

    x_{i+1} = x_i − δx_i f(x_i) / [f(x_i + δx_i) − f(x_i)]        (6.8)

EXAMPLE 6.8  Modified Secant Method

Problem Statement. Use the modified secant method to estimate the root of f(x) = e^(−x) − x. Use a value of 0.01 for δ and start with x_0 = 1.0. Recall that the true root is
Recall that the true root is, 056714329, Solution First iteration: Xo flag) = 0.63212 Xo + Bay = LOL fay + Axo) = —0.64578 _001(-063212) 5 sang “OG4STS — (—0.63212) Ie Second iteration: p= 0.537263 Flas) = 0.047083, Xo + dp = 0.542635 flo + dro) ~ 0.038579 0.005373(0.047083) 01038579 — 0.047083 x = 53% xy = 0537263 — 0.56701 |s,| = 0.0236% ‘Third iteration p= 0.56701 ‘Flxa) = 0.000208 %o + Bty = 0.572680 fly + B19) = —0.00867 0.00567,(0.000209) os xy = 056701 561143 |e = 2.365 x 10°5% =0.00867 = 0.000205PROBLEMS: 173 PROBLEMS (3.5)? = 36.95 “Thus, the determinant of the Jacobian forthe fist iteration is 65(32.5) ~ 1.5(36.75) = 156.125 “The values of the functions can be evaluated a the intial guesses as My = (1.5)? + 1.53.5) — 10 = -2.5 wy = 35 + 3(15)G8)" = $7 = 102s ‘These values can be substituted into Bq, (6.24) to give 5(32.5) ~ 1.625(1.5) yes- - ‘ 156,125 2.08603 1.625(65) = (=2.5)3675) 35 — R265) — E2969) _ 4 54, 3 156.125 S388 “Thus, the results are converging to the true values of 2and y = 3. The computation ccan be repeated until an acceptable accuracy is obtained. Just as with fixed-point iteration, the Newton-Raphson approach will often diverge if the initial guesses aze not sullciently close to the true roots, Wheteas graphical methods ‘could be employed to derive good guesses for the singlo-equation case, no such simple procedure is available for the multiequation version. Although there are some advanced approaches for oblaining acceptable first estimates, olten the initial guesses must be ob- tained on the basis of tial and ertor and knowledge of the physical system being modeled, ‘The two-equation Newton-Raphson approach can be generalized to solve n simultar neous equations. Because the most efficient way to do this involves matrix algebra and the solution of simultaneous linear equations, we will defer discussion of the general approach to Part Three. 6. 
PROBLEMS

6.1 Use simple fixed-point iteration to locate the root of

    f(x) = sin(√x) − x

Use an initial guess of x_0 = 0.5 and iterate until ε_a ≤ 0.01%. Verify that the process is linearly convergent as described in Box 6.1.

6.2 Determine the highest real root of f(x) = 2x^3 − 11.7x^2 + 17.7x − 5:
(a) Graphically.
(b) Fixed-point iteration method (three iterations, x_0 = 3). Note: Make certain that you develop a solution that converges on the root.
(c) Newton-Raphson method (three iterations, x_0 = 3).
(d) Secant method (three iterations, x_{−1} = 3, x_0 = 4).
(e) Modified secant method (three iterations, x_0 = 3, δ = 0.01).
Compute the approximate percent relative errors for your solutions.

6.3 Use (a) fixed-point iteration and (b) the Newton-Raphson method to determine a root of f(x) = −0.9x^2 + 1.7x + 2.5 using x_0 = 5. Perform the computation until ε_a is less than ε_s = 0.01%. Also perform an error check of your final answer.

6.4 Determine the real roots of f(x) = −1 + 5.5x − 4x^2 + 0.5x^3: (a) graphically and (b) using the Newton-Raphson method to within ε_s = 0.01%.

6.5 Employ the Newton-Raphson method to determine a real root for f(x) = −1 + 5.5x − 4x^2 + 0.5x^3 using initial guesses of (a) 4.52 and (b) 4.58. Discuss and use graphical and analytical methods to explain any peculiarities in your results.

6.6 Determine the lowest real root of f(x) = −12 − 21x + 18x^2 − 2.4x^3: (a) graphically and (b) using the secant method to a value of ε_s corresponding to three significant figures.

6.7 Locate the first positive root of

    f(x) = sin x + cos(1 + x^2) − 1

where x is in radians. Use four iterations of the secant method with initial guesses of (a) x_{−1} = 1.0 and x_0 = 3.0; (b) x_{−1} = 1.5 and x_0 = 2.5; and (c) x_{−1} = 1.5 and x_0 = 2.25 to locate the root. (d) Use the graphical method to explain your results.

6.8 Determine the real root of x^3.5 = 80, with the modified secant method to within ε_s = 0.1%, using an initial guess of x_0 = 3.5 and δ = 0.01.

6.9 Determine the highest real root of f(x) = x^3 − 6x^2 + 11x − 6.1:
(a) Graphically.
(b) Using the Newton-Raphson method (three iterations, x_0 = 3.5).
(c) Using the secant method (three iterations, x_{−1} = 2.5 and x_0 = 3.5).
(d) Using the modified secant method (three iterations, x_0 = 3.5, δ = 0.01).

6.10 Determine the lowest positive root of f(x) = 7 sin(x) e^(−x) − 1:
(a) Graphically.
(b) Using the Newton-Raphson method (three iterations, x_0 = 0.3).
(c) Using the secant method (five iterations, x_{−1} = 0.5 and x_0 = 0.4).
(d) Using the modified secant method (three iterations, x_0 = 0.3, δ = 0.01).

6.11 Use the Newton-Raphson method to find the root of

    f(x) = e^(−0.5x)(4 − x) − 2

Employ initial guesses of (a) 2, (b) 6, and (c) 8. Explain your results.

6.12 Given

    f(x) = −2x^6 − 1.5x^4 + 10x + 2

Use a root-location technique to determine the maximum of this function. Perform iterations until the approximate relative error falls below 5%. If you use a bracketing method, use initial guesses of x_l = 0 and x_u = 1. If you use the Newton-Raphson or the modified secant method, use an initial guess of x_i = 1. If you use the secant method, use initial guesses of x_{i−1} = 0 and x_i = 1. Assuming that convergence is not an issue, choose the technique that is best suited to this problem. Justify your choice.

6.13 You must determine the root of the following easily differentiable function:

    e^(0.5x) = 5 − 5x

Pick the best numerical technique, justify your choice, and then use that technique to determine the root. Note that it is known that for positive initial guesses, all techniques except fixed-point iteration will eventually converge. Perform iterations until the approximate relative error falls below 2%. If you use a bracketing method, use initial guesses of x_l = 0 and x_u = 2. If you use the Newton-Raphson or the modified secant method, use an initial guess of x_i = 0.7. If you use the secant method, use guesses of x_{i−1} = 0 and x_i = 2.

6.14 Use (a) the Newton-Raphson method and (b) the modified secant method (δ = 0.05) to determine a root of f(x) = x^5 − 16.05x^4 + 88.75x^3 − 192.0375x^2 + 116.35x + 31.6875 using an initial guess of x = 0.5825 and ε_s = 0.01%.
Explain your results.

6.15 The "divide and average" method, an old-time method for approximating the square root of any positive number a, can be formulated as

    x = (x + a/x) / 2

Prove that this is equivalent to the Newton-Raphson algorithm.

6.16 (a) Apply the Newton-Raphson method to the function f(x) = tanh(x^2 − 9) to evaluate its known real root at x = 3. Use an initial guess of x_0 = 3.2 and take a minimum of four iterations. (b) Did the method exhibit convergence onto its real root? Sketch the plot with the results for each iteration shown.

6.17 The polynomial f(x) = 0.0074x^4 − 0.284x^3 + 3.355x^2 − 12.185x + 5 has a real root between 15 and 20. Apply the Newton-Raphson method to this function using an initial guess of x_0 = 16.15. Explain your results.

6.18 Use the secant method on the circle function (x + 1)^2 + (y − 2)^2 = 16 to find a positive real root. Set your initial guess to x_i = 2 and x_{i−1} = 0.5. Approach the solution from the first and fourth quadrants. When solving for f(x) in the fourth quadrant, be sure to take the negative value of the square root. Why does your solution diverge?

6.19 You are designing a spherical tank (Fig. P6.19) to hold water for a small village in a developing country. The volume of liquid it can hold can be computed as

    V = π h^2 (3R − h) / 3

where V = volume (m^3), h = depth of water in tank (m), and R = the tank radius (m). If R = 3 m, to what depth must the tank be filled so that it holds 30 m^3? Use three iterations of the Newton-Raphson method to determine your answer. Determine the approximate relative error after each iteration. Note that an initial guess of R will always converge.
G21 The function «° — 2c? — 4x + § has a double root at x = 2, Use (a) the standard Newton-Raphson (Eq. (6.6) (b) the modi fled Newton-Raphson (Eq. (6.12). and (c) the modified Newton- aphson (Eq, (6.16)] to solve forthe root at x = 2. Compare and cliecuss the rate of convergence using an initial guess of sy ~ 1.2, 1622 Determine the roots of the following simallancous nonlinear ‘equations using (a) fixed-point iteration and 0b) the Newtoo-Raphson, method: yor er sors yt Sy Employ initial guesses ofx = y ~ 1.2 and discuss the results. 4625 Determine the roots ofthe simultaneous nonlinear equations (aso atas v4y= 6 Use a graphical approach to obtain your initial guesses. Determine refined estimates with the two-equation Newton-Raphson method ‘eseribed in Sec. 6.6.2 (624 Repeat Prob, 6.23 except determine the pasiive root of yoeel 2eose 6.25. A mats bal de Va Given the parameter values V= 1 x 104m, 0 = 1 > 10" ye W> 1 10" gly, and k= 0.25 mI, use the moded secant method t solve forthe steady-state concentration. Employ an ini tial guess ofc ~ 4 gf! and 8 ~ 0.5, Perform thee seraons and determine the percent relative er alter the thin iteration {6.26 For Prob, 6.25, the root can be located with fixed-point (w-oey (ar) W-Wve . Q Only one will converge for intial guesses of 2 << 6, Select the ‘correct one and demonstrate why it wil always work 627 Develop a userriendly program for the Newtos-Raphson. ‘method based on Fig. 64 and Sec. 6.2.3. Test it by duplicating the ‘computation rom Example 6.3 {628 Develop auser-iiendly program forte secant method based, on Fig. 64 and Sec. 6.2. Test it by duplicating the computation 629 Develop a userstiendly program for the modified secant ‘method based on Fig, 64 and Sec. 6.3.2 Test it by duplicating the ‘computation fom Example 6.8 6.20 Develop a user-friendly program for Brent's root location method based on Fig 612. 
Test it by solving Prob. 6.6.

6.31 Develop a user-friendly program for the two-equation Newton-Raphson method based on Sec. 6.6.2. Test it by solving Example 6.12.

6.32 Use the program you developed in Prob. 6.31 to solve Probs. 6.22 and 6.23 to within a tolerance of ε_s = 0.01%.
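The two-equation Newton-Raphson update of Sec. 6.6.2 (used in Probs. 6.23 and 6.31) can be sketched as below; this is a generic illustration, not the text's program, tried here on the Prob. 6.23 system:

```python
def newton_2eq(u, v, dudx, dudy, dvdx, dvdy, x, y, iters=20):
    """Two-equation Newton-Raphson (Sec. 6.6.2 form):
    solves u(x, y) = 0 and v(x, y) = 0 simultaneously."""
    for _ in range(iters):
        J = dudx(x, y) * dvdy(x, y) - dudy(x, y) * dvdx(x, y)  # Jacobian determinant
        x, y = (x - (u(x, y) * dvdy(x, y) - v(x, y) * dudy(x, y)) / J,
                y - (v(x, y) * dudx(x, y) - u(x, y) * dvdx(x, y)) / J)
    return x, y

# Prob. 6.23 system, rewritten as u = 0 and v = 0:
u = lambda x, y: (x - 4)**2 + (y - 4)**2 - 5
v = lambda x, y: x**2 + y**2 - 16
x, y = newton_2eq(u, v,
                  lambda x, y: 2 * (x - 4), lambda x, y: 2 * (y - 4),
                  lambda x, y: 2 * x,       lambda x, y: 2 * y,
                  2.0, 3.5)  # graphical guess: the circles intersect near (2, 3.5)
print(x, y)  # ≈ (1.806, 3.569)
```

Note that the hand-coded partial derivatives are exactly what makes the two-equation method tedious to generalize, which is why Sec. 6.6.2 is usually extended to n equations with a full Jacobian matrix and an elimination solver.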