Talk:Brent's method
This article is rated C-class on Wikipedia's content assessment scale.
Unclear
The article states:
- then we set e = d, otherwise we set e = m. In the latter case, we do a step according to the bisection method.
Maybe I'm dense, but this is unclear to me. Bisect, using which points? In general, the whole if e then .. part of the article seems opaque, and I can't quite make sense of the intention of the code here. Can someone clarify? linas 23:22, 17 April 2006 (UTC)
BTW, I looked at Mathworld's entry for this. It seems to be a direct rip-off of what's in "Numerical Recipes in C". One minor advantage is that it seems to minimize the number of divisions required (useful in the case of divisions taking a long time, e.g. high-precision work). linas 12:54, 18 April 2006 (UTC)
- I added an algorithm, which should at least be unambiguous. I'll return and see if I can rewrite the text to make it clearer.
- I assume the "Numerical Recipes" book has a good reason for their formulation of inverse quadratic interpolation (perhaps it's minimizing the number of divisions, as you suggest), but I don't know it, which is why I stick to the more obvious formulation. -- Jitse Niesen (talk) 08:11, 19 April 2006 (UTC)
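- For comparison, here is a sketch (Python, with my own variable names r1, r2, r3, m — nothing here is the article's code) of the two algebraically equivalent forms of the inverse quadratic step; the ratio form is the one "Numerical Recipes" uses in zbrent, and it does need fewer divisions (four instead of six):

import math

def f(x):
    return math.cos(x) - x              # any sample function

a, b, c = 0.0, 1.0, 0.5                 # three points with distinct f-values
fa, fb, fc = f(a), f(b), f(c)

# Direct inverse quadratic interpolation, as in the article: six divisions.
s_direct = (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

# Ratio form, as in Numerical Recipes' zbrent: three divisions plus p/q.
r1, r2, r3 = fb / fa, fa / fc, fb / fc
m = 0.5 * (c - b)
p = r1 * (2.0 * m * r2 * (r2 - r3) - (b - a) * (r3 - 1.0))
q = (r2 - 1.0) * (r3 - 1.0) * (r1 - 1.0)
if p > 0.0:                             # sign normalization, as in zbrent
    q = -q
p = abs(p)
s_ratio = b + p / q

print(s_direct, s_ratio)                # agree up to rounding error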
- Thank you! FWIW, I've now implemented a one-off version in GMP. For my data, the thing very nearly doubles the number of sig figs each iteration, which is a vast improvement over linear interpolation or binary search. For your amusement (my amusement?), here's actual output, showing the width of the bracket around the zero, per iteration:
# count=0 delt 0.9958715e-1
# count=1 delt 0.1000055e0
# count=2 delt 0.4072413e-3
# count=3 delt 0.1123433e-7
# count=4 delt 0.1257072e-14
# count=5 delt 0.1586553e-28
# count=6 delt 0.6180884e-54
# count=7 delt 0.3400586e-100
linas 13:55, 21 April 2006 (UTC)
Questions about Brent's method
To Jitse Niesen:
1. How do you know that the method will "fall back to the more robust bisection method if necessary"?
2. What is the justification for assuming that the root is closer to b than to a when you choose between d and m?
3. Would it not be better to make the new c equal to whichever of the old a or old b does not become the new a, so that c will always (after the first iteration) be the most recently generated value outside the interval [a,b] or [b,a]? Thus the last part of the code would become:
- if f(b) f(d) < 0 then
- c := a
- a := b
- else
- c := b
- (a remains a)
- end if
- b := d
Thanks for your assistance. JRSpriggs 07:02, 25 April 2006 (UTC)
- Explaining my third question. c is only used to perform inverse quadratic interpolation. But that interpolation will not be done if c is the same as either a or b. So we should try to avoid choosing a value for c which will be equal to a in the "then" case. That is why I am suggesting that it should be the old a rather than the old b which is the new a. JRSpriggs 06:38, 26 April 2006 (UTC)
- Ad 1. I suppose that you realize that the statement d := m is the bisection method. As for how I know that the method falls back to the bisection method: I know it because I read it in Atkinson (as referenced in the article). I've looked a bit into it now, and this is discussed in Sections 4.2 and 4.3 in Brent. However, Atkinson's description of Brent's method is not complete, hence the algorithm given in the article is not quite the same as Brent's method, and therefore, the algorithm in the article might not always fall back to the bisection method. I'll look into it when I have some time.
- Ad 2. b is the latest guess for the root, and since the guesses converge to the root, b is usually a much better approximation than a. Perhaps the word assume should not have been used. It is not certain that b is closer to the root, but typically that is the case, and it makes sense to use this.
- Ad 3. I see your point. Let me think a bit about this and consult the original code. Brent does not say anything about it, as far as I can see. -- Jitse Niesen (talk) 13:19, 26 April 2006 (UTC)
Counter-example: If I understand the current version of Brent's method, it will NOT fall back on bisection in the following situation. Let f(x)=x^3+x^2-5x+3=(x+3)(x-1)^2. Let the initial values for a=-4 and b=+4/3. So that initially, f(a)=-25 and f(b)=13/27. For this problem, the bisection method will converge slowly to -3. Newton's method (beginning at b) will converge slowly to +1. The secant method will converge slowly to +1. The inverse quadratic method will converge slowly to +1. And Brent's method will NOT converge, because a will remain -4 while b slowly converges to +1. So the length of the interval from a to b will never drop below 5. JRSpriggs 04:40, 27 April 2006 (UTC)
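For illustration, a minimal Python sketch of this counter-example (my own code, not the article's algorithm: just bare secant steps with a kept as the contrapoint). Running it shows b creeping towards +1 while a stays at −4, so |b − a| never drops below 5:

def f(x):
    return (x + 3.0) * (x - 1.0) ** 2    # = x^3 + x^2 - 5x + 3

a = -4.0                                 # contrapoint, f(a) = -25
b_prev, b = a, 4.0 / 3.0                 # f(b) = 13/27
for k in range(1, 16):
    # secant step through the last two iterates
    s = b - f(b) * (b - b_prev) / (f(b) - f(b_prev))
    if f(a) * f(s) > 0:                  # keep a sign change bracketed
        a = b
    b_prev, b = b, s
    print(f"k={k:2d}  b={b:+.6f}  |b-a|={abs(b - a):.6f}")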
- Hmm, yes, I guess I have no choice but to add the complete algorithm as given by Brent. Not sure what's the best way to present it though. Perhaps I should start with Dekker's algorithm and then say what Brent added to make sure that it falls back to the bisection method. Let me see whether I can track down the reference to Dekker's algorithm. -- Jitse Niesen (talk) 11:39, 27 April 2006 (UTC)
- I do not know what Brent would have done. But if it was up to me to write a program with what I know now, I would force a bisection step after any step which did not reduce the length of the interval to half or less of its previous length. JRSpriggs 06:28, 18 May 2006 (UTC)
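- A sketch of that suggestion in Python (my formulation, not Brent's actual test):

def step_with_safeguard(f, a, b, proposed):
    # Accept the proposed iterate, then flag a forced bisection for the
    # next step if the bracket was not reduced to half or less.
    L = abs(b - a)                       # bracket length before the step
    if f(a) * f(proposed) < 0:
        b = proposed                     # root now in [a, proposed]
    else:
        a = proposed                     # root now in [proposed, b]
    force_bisection = abs(b - a) > L / 2
    return a, b, force_bisection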
I have taken the liberty of modifying the algorithm to: (1) agree with the verbal text as far as I can make sense of it; (2) avoid the problems which I mentioned above in this section of talk; and (3) add more detail to the input and output parts. If I have gotten any of it wrong (contrary to Brent, that is), feel free to fix it. JRSpriggs 04:08, 14 June 2006 (UTC)
- Regarding the third point, is that really necessary? I'd say that things like checking of inputs, explicitly mentioning that f(a) should be calculated, and outputting the result go too much into detail. After all, this is an encyclopaedia and not a source code repository (see WP:NOT). I added the algorithm to supplement the text, which is hard to follow, but I don't think it should be turned into a computer program. -- Jitse Niesen (talk) 05:34, 14 June 2006 (UTC)
- I used to be a computer programmer, so I have a strong habit of adding more and more detail until I have a workable program. I could change it back, if you think that that would make it more readable. JRSpriggs 05:44, 14 June 2006 (UTC)
- As far as mentioning when f should be calculated, I was trying to make clear that it only needs to be calculated once per iteration (and twice during initialization) because f might be arbitrarily difficult to calculate and it might be the limiting cost of performing the algorithm. JRSpriggs 07:52, 15 June 2006 (UTC)
- That's a good point. -- Jitse Niesen (talk) 11:37, 16 June 2006 (UTC)
How slow can Brent's method be?
Jitse added "Brent ... inserted an additional test which must be satisfied before the result of the secant method is accepted as the next iterate. ... otherwise the midpoint m is used for the next iterate. This modification ensures that a bisection step is done if the secant method does not progress fast enough. Brent proved that his method requires at most N² iterations, where N denotes the number of iterations for the bisection method." to explain part of how Brent modified Dekker's method. Each iteration reduces the length of the interval within which the root can lie. If an iteration using either the inverse quadratic interpolation or secant method reduces the length to half or less of its previous length, then it is better than a bisection step. Otherwise, a bisection step is used as the next iteration. So the length is halved or better in either one or two iterations. Thus no more than two times as many iterations are necessary to reduce the length by the same amount as the bisection method does to reach convergence. So I do not see how you can say it is N-squared rather than two times N. JRSpriggs 04:24, 14 June 2006 (UTC)
- The extra test that Brent introduced does not look at the length of the interval [a_k, b_k], but at the difference b_{k-1} − b_k. So there is no guarantee that the length of the interval [a_k, b_k] is halved every second iteration. -- Jitse Niesen (talk) 05:09, 14 June 2006 (UTC)
- PS: What do you think about adding an example; would that help to explain the method? -- Jitse Niesen (talk) 05:11, 14 June 2006 (UTC)
- Then I should take out the "if |b-a| > L/2 then (force bisection step)" if-statement, but that would make the method fail again because of the counter-example I gave in the previous section of talk.
- Regarding an example, we could add "debug" print statements to the algorithm and then give a listing of the printout they would generate. I do not have a compiler presently, so I cannot write it now. I tried doing it by hand, and it quickly becomes very tedious (after about 3 iterations). JRSpriggs 05:58, 14 June 2006 (UTC)
How clever of you to use my "counter-example" as your example. Now I see that I misunderstood the condition for using m instead of s. I thought that it was as given in the version of the algorithm that was there when I first looked at this and which was used by Dekker. So, I will try to change the algorithm to use Brent's actual test. JRSpriggs 07:57, 15 June 2006 (UTC)
- Well, your "counter-example" illustrates a lot of cases, which is the main reason why I decided to use it. And that reminds me, the condition is actually slightly different in Brent's algorithm.
- However, I think it still may happen that the b_n converge to a root, but the a_n stay at the same location. But that does not mean that the algorithm has failed: if the b_n converge, we have a root and we're happy. :) -- Jitse Niesen (talk) 11:37, 16 June 2006 (UTC)
- What then is the test for convergence? We cannot just compare the length of [a,b] to ε then. JRSpriggs 08:35, 17 June 2006 (UTC)
- Actually, on checking Brent's book I discovered that I was wrong. The length of [a,b] always converges to zero, and the convergence test is whether the length of [a,b] is small enough. -- Jitse Niesen (talk) 11:19, 17 June 2006 (UTC)
Swap all ak with bk or just the last?
I am wondering whether, when we swap the most recent a with b, we should also swap the earlier values? So that instead of "if |f(a)| < |f(b)| then swap (a,b) end if" we would have "if |f(a)| < |f(b)| then swap (a,b) and swap (p,c) and swap (q,d) end if" where p would be the old a and q would be the old p. JRSpriggs 11:37, 18 June 2006 (UTC)
- No, just the last. So "if |f(a)| < |f(b)| then swap (a,b) end if" is correct. -- Jitse Niesen (talk) 12:56, 18 June 2006 (UTC)
"f(a)*f(s) <= 0"
The condition "f(a)*f(s) <= 0" in the if-else at the end of the pseudocode was originally "f(a)*f(s) < 0" which I believe did not work for the following reason: if f(s)=0, then f(a)*f(s)=0 and we would land in the 'else' side, hence setting a := s. Upon the next repeat until we would then miss that s was actually a root.
On the other hand, if the test is f(a)*f(s) <= 0, we have the following possibilities:
- 1. if f(s)=0 then b := s, the next repeat until is the last since f(b)=0, and b is returned
- 2. if f(a)=0 then
- 2.a if f(b)!=0 then we swap a and b just after in "if |f(a)| <= |f(b)| then swap (a,b) end if" and a will therefore be identified as the root in the next repeat until test.
- 2.b or f(b)=0 and b is a root as well.
Erwan.lemonnier 14:07, 15 December 2006 (UTC)
- If f(s)=0, then a would become s and the argument which you gave for case 2 would work for case 1 as well. I am reverting both additions of equal signs. JRSpriggs 06:14, 16 December 2006 (UTC)
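- In code form (a Python fragment in the pseudocode's variable names, illustrating the argument above — not the article's exact code): with the strict test, an s with f(s) = 0 lands in a, and the swap immediately moves it into b, where the loop's termination test sees it.

if fa * fs < 0:
    b, fb = s, fs
else:
    a, fa = s, fs                        # covers the f(s) == 0 case
if abs(fa) < abs(fb):
    a, b, fa, fb = b, a, fb, fa          # a zero value is forced into b
# the loop's "repeat until f(b) = 0 or ..." test now terminates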
Discrepancy between discussion and example algorithm?
- d := c
- c := b
- if f(a) f(s) < 0 then b := s else a := s end if
- if |f(a)| < |f(b)| then swap (a,b) end if
At the end, it seems that the previous value of a could end up in c instead of b, whereas the discussion of Brent's improvement discusses using the old values of b. To match that discussion, surely the top 2 lines I've quoted should be after the other 2, not before? 193.133.239.201 09:38, 24 April 2007 (UTC)
- The algorithm is correct. Do not take the words too seriously. a itself is usually a previous value of b as are c and d. s is a probable future value. JRSpriggs 10:17, 24 April 2007 (UTC)
Possible algorithm error
Hello,
Towards the end of the algorithm there is the statement:
- d := c
however d is never mentioned earlier or later as a variable. Is it possible that it's supposed to be 's'? I see that in earlier incarnations of the page 's' was called 'd'. —Preceding unsigned comment added by 132.206.77.89 (talk • contribs)
- No. The variable d is used in one line
- if s is not between (3a + b)/4 and b or (mflag is set and |s−b| ≥ |b−c| / 2) or (mflag is cleared and |s−b| ≥ |c−d| / 2) then
- And it is supposed to be there and is distinct from s. JRSpriggs 03:11, 6 July 2007 (UTC)
How come d is used before it is declared?
D declaration?
As stated above, it isn't clear what d is initialized to; this needs a little attention from someone who understands the algorithm better than I do. 58.6.101.164 (talk) 13:42, 14 February 2008 (UTC)
- As JRSpriggs says, d is only used in one line:
- if s is not between (3a + b)/4 and b or (mflag is set and |s−b| ≥ |b−c| / 2) or (mflag is cleared and |s−b| ≥ |c−d| / 2) then
- In the first iteration of the repeat loop, mflag is set so it doesn't matter what d is. In all other iterations, the statement d := c has been executed so it's clear what d is. Does this answer your question? -- Jitse Niesen (talk) 13:54, 14 February 2008 (UTC)
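- The same point in executable form (a Python sketch of the pseudocode's condition, with my own helper name; nothing here is Brent's exact code): because "and" short-circuits, |c − d| is never read while mflag is set, so even a NaN placeholder for d is harmless in the first iteration.

def reject_interpolation(s, a, b, c, d, mflag):
    lo, hi = min((3*a + b) / 4, b), max((3*a + b) / 4, b)
    return (not (lo < s < hi)
            or (mflag and abs(s - b) >= abs(b - c) / 2)
            or (not mflag and abs(s - b) >= abs(c - d) / 2))

# d deliberately "uninitialized"; never touched because mflag is set
print(reject_interpolation(s=1.0, a=0.0, b=2.0, c=2.0,
                           d=float("nan"), mflag=True))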
Question on Brent's extra tests
Hello,
The article states a few things without giving a precise reason for them; could you please explain the following?
- Why does the "convergence speed test" depend on whether or not interpolation was used in the previous step? Why not require the same condition regardless of which method was used in the previous step, even if it was interpolation?
- When using inverse quadratic interpolation, why does "the condition for accepting s (the value proposed by either linear interpolation or inverse quadratic interpolation)" have to be changed to "s has to lie between (3a_k + b_k) / 4 and b_k"? (which is less constraining than "s has to lie between (a_k + b_k) / 2 and b_k")
Thanks for your help. laug (talk) 12:47, 10 May 2008 (UTC)
- I do not know. It has been a long time since I worked on this article, and I do not have the reference. But remember, this is Brent's method, not Niesen's method, Spriggs's method, or Laug's method. If I were writing the program freely for my own use, I would change the algorithm in some ways. But it would not then be Brent's method. And an article on Spriggs's method would be Original Research which is frowned on here. JRSpriggs (talk) 19:58, 14 May 2008 (UTC)
- Also, notice that the only way to guarantee that the convergence will not be slower than the Bisection method is to use the Bisection method itself which also guarantees that it will not converge faster. JRSpriggs (talk) 00:36, 18 May 2008 (UTC)
Step 6 of the example
Step 5 states:
- (1) The midpoint is taken
- (2) The iterate remains the same
Therefore:
- (1) implies that the inequality used is |s − b| < |b − c| / 2
- (2) implies that b = c
This would mean that the inequality cannot hold, but step 6 states that s "satisfies all the conditions".
Using the algorithm in pseudocode, this translates to mflag set and b=c.
What am I missing?
Thanks, laug (talk) 11:54, 25 September 2008 (UTC)
- OK, you have to be careful when mapping word descriptions to algorithms, because words are vague.
- (1) The midpoint is taken: this means that s = (a+b)/2.
- (2) The iterate remains the same: this means that at the end of this iteration of the algorithm, a = s, c = b, b remains the same. This does not imply ; rather, it implies . This is only accessed in the "if" statement in the next iteration of the algorithm.
- Hope this helps.
Yangli2 (talk) 19:27, 1 October 2008 (UTC)
I think that Laug is right. After the end of the 5th iteration :
s = -3.35724
a = -3.35724
b = -2.71449
c = -2.71449 (that's why quadratic interpolation can't be used in the 6th iteration)
d = -1.42897
So, in the 6th iteration (mflag is set) :
|s-b| = |-3.35724 + 2.71449| = 0.64275
|b-c| = 0
and the condition is not met, so in this iteration, bisection should be used. Is there any mistake in the above? Thanks! —Preceding unsigned comment added by 79.166.253.211 (talk) 23:11, 24 February 2009 (UTC)
- OK, re-reading my previous response to Laug, I realized that it is impossible to understand what I'm talking about. Sorry.
- You are not assessing the condition at the right time. The conditions are used to decide whether one should keep the results of an interpolation, or fall back to the result of the bisection. So you shouldn't use the s value from the previous iteration; you should instead first compute the linear interpolation, which gives you the new s = −2.95064, which... wait, you are right! There is no way to satisfy the condition since b-c = 0. Heh, I guess the example is wrong, I'll try to get to changing it some time. Yangli2 (talk) 22:10, 6 May 2009 (UTC)
- I left some correction notes on the example and posted some code and results as well. Please check and tidy up if necessary. --vpapanik 09:10, 9 September 2009 (UTC)
Serious Error in Secant Result Rejection Criterion?
OK, so the secant results are rejected, according to the "Brent's Method" section in the article, based on two inequalities, depending on the method used in the previous iteration of the algorithm:
Dekker's method performs well if the function f is reasonably well-behaved. However, there are circumstances in which every iteration employs the secant method, but the iterates b_k converge very slowly (in particular, |b_k − b_{k−1}| may be arbitrarily small). Dekker's method requires far more iterations than the bisection method in this case.
Brent proposed a small modification to avoid this problem. He inserted an additional test which must be satisfied before the result of the secant method is accepted as the next iterate. Specifically, if the previous step used the bisection method, the inequality
|s − b_k| < |b_k − b_{k−1}| / 2
must hold, otherwise the midpoint m is used for the next iterate. If the previous step used interpolation, then the inequality
|s − b_k| < |b_{k−1} − b_{k−2}| / 2
is used instead.
I believe these inequalities are conditions for scrapping the secant results, rather than accepting them. Since earlier in the paragraph, the flaw raised in Dekker's method was that it does not converge fast enough in certain cases, since "|b_k − b_{k−1}| may be arbitrarily small", one needs to use the bisection method when the result generated by the secant method makes |b_{k+1} − b_k| very small. But if we accept secant results when the above inequalities hold, we are saying that "OK, only when |s − b_k| is smaller than half of |b_k − b_{k−1}| or |b_{k−1} − b_{k−2}| will I be happy with it". Wouldn't that be the exact opposite of what we would like to do? I will check Brent's work later, but I believe that one should use the bisection result when the inequalities hold for the secant result. I will confirm this with research and possibly correct the article soon. Yangli2 (talk) 21:09, 1 October 2008 (UTC)
Ran the same example as the one used in the article, but with the condition for accepting the secant result reversed. I am pretty positive now that the article is wrong. Here are the outputs at each iteration, please compare with ones in the example.
- Init:
- Iteration 1: The conditions are satisfied, so actually throwing away secant result and doing bisection:
- Iteration 2: The conditions are satisfied, so throwing away inverse quadratic result and doing bisection:
- Iteration 3: The conditions are not satisfied, so actually keeping the inverse quadratic result:
- Iteration 4: The conditions are satisfied, so throwing away inverse quadratic result and doing bisection:
- Iteration 5: The conditions are not satisfied, so keeping secant result:
As one can see, this gives a tighter interval for every iteration up to the fifth. I did not continue the procedure, but I think it makes a lot more sense this way. Still, need references to confirm this. Yangli2 (talk) 23:41, 1 October 2008 (UTC)
- OK, never mind, it is as the article says in Brent's book, but with an extra condition: when |c-d| is smaller than a certain tolerance value given a priori, a bisection also needs to be performed. This makes the whole thing make sense, because when the secant method gets trapped in a situation where every step it takes becomes vanishingly small and underflows the processor, there is nothing to force a bisection in the algorithm given in the article. This condition, combined with the inequalities in the article, is basically saying "I will only accept step sizes made by the secant method that are halved every two iterations; that way the step sizes will vanish like 2^{−k/2}, and if convergence does not happen, eventually the step sizes will become smaller than the tolerance value, at which point I use the bisection method". The article is missing a very important condition which guarantees numerical convergence, and which makes sure that the bisection method is performed at least once every 2N iterations, so that the algorithm is never much slower than bisection. I will now make the necessary adjustments to the article. Yangli2 (talk) 19:17, 2 October 2008 (UTC)
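- In code form, a sketch of the full rejection test as just described (Python, pseudocode names; delta is the a-priori tolerance — this is my rendering, so check it against Brent's book before relying on it):

def reject_s(s, a, b, c, d, mflag, delta):
    # Any one of these clauses forces a bisection step (and sets mflag).
    lo, hi = min((3*a + b) / 4, b), max((3*a + b) / 4, b)
    return (not (lo < s < hi)                          # s outside the safe range
            or (mflag and abs(s - b) >= abs(b - c) / 2)
            or (not mflag and abs(s - b) >= abs(c - d) / 2)
            or (mflag and abs(b - c) < delta)          # the extra tolerance test
            or (not mflag and abs(c - d) < delta))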
- Thanks. That was probably my fault. I found Brent's description quite a hard read and I had to refer a lot to the actual implementation to make sense of it. It's eminently possible I missed some details. I do remember that there were a couple of things that I didn't understand at all (I believe the second question by laug above was one of the things that mystified me). -- Jitse Niesen (talk) 23:17, 2 October 2008 (UTC)
Thanks to the authors
Thank you to the authors and maintainers of this article. The joys of Wiki: I just implemented Brent's Method using the pseudocode at the bottom of the page and it works like a charm. The use of d before initialisation also had me stumped, but now I see (on the talk page) that it is not a problem. Perhaps it is worth noting that the "s not between... and..." criteria can be expanded to:
- tmp = (3 * a + b) / 4
- ((!(((s > tmp) && (s < b)) || ((s < tmp) && (s > b)))) || (mflag && (abs(s - b) >= (abs(b - c) / 2))) || (!mflag && (abs(s - b) >= (abs(c - d) / 2))))
At least I hope that is correct. Either way, I leave it to your discretion. Thanks again. Dhatfield (talk) 15:25, 9 February 2009 (UTC)
- Thanks for your input, it's just that English might be more readable than C, especially to someone who doesn't know C. We do want to keep this accessible to the widest of audiences, the downside of which is, of course, that you can't just copy and paste the algorithm into a C++ function. My eyes burn and water when they have to be aligned to that huge conditional you put up :) —Preceding unsigned comment added by Yangli2 (talk • contribs) 22:18, 6 May 2009 (UTC)
- Note also that x is not between w and y is the same as ( x - w ) * (x - y) >=0
- i.e. if x is between w and y then ( x - w ) and ( x - y ) will have different signs.
- That can be used to simplify your expression above and indeed remove the variable tmp.
- (s - (3 * a + b) / 4) * (s - b) >= 0
- || (mflag && (abs(s - b) >= (abs(b - c) / 2)))
- || (!mflag && (abs(s - b) >= (abs(c - d) / 2)))
- || (mflag && (abs(b - c) < abs(delta)))
- || (!mflag && (abs(c - d) < abs(delta)))
- Pma jones (talk) 08:02, 22 February 2010 (UTC)
- Thanks for your contribution! I will update the Delphi code accordingly. vpapanik 06:44, 24 February 2010 (UTC) —Preceding unsigned comment added by Vpapanik (talk • contribs)
Factual Issues and Comments
This article has been very useful and I have a few comments that I hope can improve it. The pseudo-code has significant differences from the algorithm presented in Brent's original text (I checked out the book from the library). Brent's original method uses a combination of linear interpolation, quadratic interpolation, and bisection. Perhaps the algorithm presented is a later modification by Brent, but if so it would greatly strengthen the article to include this information. If the modifications are not Brent's, then the authors of the modification should be credited. Also, the text references Jack Crenshaw incorrectly according to the reference provided. I read the reference and it says he uses parabolic interpolation while the text says he uses quadratic interpolation. This is a factual error that should be corrected.
Can someone point me to a primary reference for the algorithm presented here which uses quadratic interpolation, secant, and bisection? I am implementing a root finder and I am interested in the algorithm as I suspect it may outperform Brent's original method in some or many cases.
- How about the book? p. 58. It talks of linear interpolation rather than secant, if you wish to make that change. Bendel boy (talk) 15:59, 1 May 2019 (UTC)
I've noticed several others commenting on the use of the uninitialized value for d. This confused me when implementing as well and I suggest updating the pseudo-code to clearly indicate how this value is used. I realize that if readers dig through the talk, then they can find the answer, but it seems it has confused enough people to warrant including in the main body of the article.
This is my first wiki talk so I'm still learning the system, etiquette etc. sphEditor 01:48, 13 April 2009 (UTC)
- Did Brent really use quadratic interpolation? I did not look at that part too closely, when I read it, I was primarily looking for the condition upon which interpolation results were kept or rejected. How did he decide which root to keep and which to throw away, if they both fall in the bracket? Did he just pick a random one? Not to doubt your research, I'm just intrigued by this information.
- The Algol code states that there is a section that does this. The comment reads Inverse quadratic interpolation Bendel boy (talk) 15:59, 1 May 2019 (UTC)
- Regarding the Jack Crenshaw quote, it says he uses *inverse* quadratic interpolation, which is just another name for parabolic interpolation. So all's well.
- I'm not exactly sure what you mean by an algorithm which "uses quadratic interpolation, secant, and bisection". According to what you said earlier, isn't that what Brent proposed originally ("Brent's original method uses a combination of linear interpolation, quadratic interpolation, and bisection", whereas "linear interpolation" is synonymous with "secant")? In fact, if that is what you are looking for, it would have the exact performance as Brent's method, since it is Brent's method :)
- I agree on the d thing, I'll try to get to it. Yangli2 (talk) 22:10, 6 May 2009 (UTC)
pseudocode wrong
The pseudocode cannot be right, because near the bottom of the loop, c is set to b, and at the top of the loop, inverse quadratic interpolation is only done if f(c)!=f(b). So it never happens. —Preceding unsigned comment added by 193.113.37.7 (talk) 11:55, 13 May 2010 (UTC)
What? After c := b there are two more commands that very likely change b! vpapanik 11:17, 14 July 2010 (UTC) —Preceding unsigned comment added by Vpapanik (talk • contribs)
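To spell that out (a Python fragment in the pseudocode's names; the two "commands" meant above are the if/else and the swap):

d = c
c = b                                    # c receives the old b ...
if fa * fs < 0:
    b, fb = s, fs                        # ... and b itself can move here
else:
    a, fa = s, fs
if abs(fa) < abs(fb):
    a, b, fa, fb = b, a, fb, fa          # ... or here, via the swap
# so f(b) != f(c) can hold again at the top of the next iteration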
How about Brent's method for minimization?
Did Brent develop two methods for one-dimensional functions with derivatives? This page talks about a root-finding algorithm. But I thought Brent's method was a minimization algorithm, a variant on golden section search. The link from Powell's method to Brent's method also invokes Brent's method as a 1-dimensional minimization algorithm, but that is not what this page describes. Flipping through the Numerical Recipes book on Google Books, it looks like both of Brent's algorithms are given, without comment on the fact that they have the same name. The minimization algorithm appears in chapter 10. Eclecticos (talk) 21:06, 3 October 2011 (UTC)
- His code zero is for root finding. The code localmin in the same book is for 1d minimisation. Bendel boy (talk) 16:02, 1 May 2019 (UTC)
I agree with the previous comment. Would it be possible to rename this article "Brent's method (root finding)"? Brent's method (function minimization) is a separate method that needs its own article. The reference for Brent's method of function minimization is here:[1] — Preceding unsigned comment added by Rwsaustin (talk • contribs) 16:39, 21 October 2013 (UTC)
References
- ^ Brent, Richard (2002). Algorithms for Minimization Without Derivatives. Dover. ISBN 0-486-41998-3.
C# code
Why on earth is the example in C#? That is only supported on Microsoft systems and is a niche language anyway, used by only a small number of users. With only minor changes this would be a generic C program. (In fact I already made that C program.)
130.161.210.156 (talk) 10:36, 16 May 2014 (UTC) Kees Lemmens, Delft University
- If you had actually read the top of the source code box, you would have seen that the intended language was (and still is?) Java, or at least there was nothing preventing the code from being used in Java. Besides, Mono is an open source implementation of C# that certainly covers all numerical computations. Source code examples are never intended to be complete programs or modules, so standard include or import statements are superfluous. --LutzL (talk) 09:40, 18 May 2014 (UTC)
- Remove. The C# code should just be removed; except for the addition of a μ-bound, it is the same as the pseudocode 2 sections above. WP prefers pseudocode to concrete languages. Glrx (talk) 18:21, 19 May 2014 (UTC)
- I went ahead and removed the code (which is copied below). Glrx (talk) 21:38, 21 May 2014 (UTC)
Removed code
The above algorithm can be translated to C-like code as follows:
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <float.h>
double BrentsMethod(double lowerLimit, double upperLimit, double (*function)(double), double errorTol, double maxiter)
{
double a = lowerLimit;
double b = upperLimit;
double c = 0;
double d = DBL_MAX;
double fa = (*function)(a);
double fb = (*function)(b);
double fc = 0;
double s = 0;
double fs = 0;
// if f(a) f(b) >= 0 then error-exit
if (fa * fb >= 0)
{
if (fa < fb)
return a;
else
return b;
}
// if |f(a)| < |f(b)| then swap (a,b) end if
if (fabs(fa) < fabs(fb))
{ double tmp = a; a = b; b = tmp; tmp = fa; fa = fb; fb = tmp; }
c = a;
fc = fa;
int mflag = 1;
int i = 0;
while (!(fb==0) && (fabs(a-b) > errorTol))
{
if ((fa != fc) && (fb != fc))
// Inverse quadratic interpolation
s = a * fb * fc / (fa - fb) / (fa - fc) + b * fa * fc / (fb - fa) /
(fb - fc) + c * fa * fb / (fc - fa) / (fc - fb);
else
// Secant Rule
s = b - fb * (b - a) / (fb - fa);
double tmp2 = (3 * a + b) / 4;
if ((!(((s > tmp2) && (s < b)) || ((s < tmp2) && (s > b)))) ||
(mflag && (fabs(s - b) >= (fabs(b - c) / 2))) ||
(!mflag && (fabs(s - b) >= (fabs(c - d) / 2))))
{
s = (a + b) / 2;
mflag = 1;
}
else
{
if ((mflag && (fabs(b - c) < errorTol)) ||
(!mflag && (fabs(c - d) < errorTol)))
{
s = (a + b) / 2;
mflag = 1;
}
else
mflag = 0;
}
fs = (*function)(s);
d = c;
c = b;
fc = fb;
if (fa * fs < 0) { b = s; fb = fs; }
else { a = s; fa = fs; }
// if |f(a)| < |f(b)| then swap (a,b) end if
if (fabs(fa) < fabs(fb))
{ double tmp = a; a = b; b = tmp; tmp = fa; fa = fb; fb = tmp; }
i++;
if (i > maxiter)
{
printf("Error is %f \n", fb);
break;
}
}
printf("Number of iterations : %d\n",i);
return b;
}
Pseudocode and worked example both unclear
I'm trying to implement Brent's method and running into problems. In some parts of the program I can't easily make a function to pass to a Brent function, and in one part, the domain of the function is the 32-bit integers, so I can't use code that's out there. Instead I made an object of a Brent class and initialize it with two arguments and two values, then it returns the next argument to try, and I pass it back the value, until it's done. This works, most of the time, but occasionally encounters some weirdness.
There are two problems with the pseudocode:
- There are 18 calls to f on just the one line that computes the inverse parabolic interpolation. There should be just one call to f; since I return s and expect f(s) to be passed to the next call to the object, I need to know where it is.
- There is an uninitialized variable δ.
The main problem with the worked example is that it uses subscripts instead of the variables a, b, c, and d, which the code uses. It also computes the function of the same argument twice in a row, which shouldn't happen unless you're within two ulps of the root. phma (talk) 18:59, 16 October 2016 (UTC)
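For what it's worth, a sketch of one loop body with cached function values (Python, my own names following the pseudocode; the acceptance test and bisection fallback are omitted to keep the point visible), showing that only one new evaluation of f is needed per iteration:

def iterate_once(f, a, b, c, d, fa, fb, fc):
    if fa != fc and fb != fc:
        # inverse quadratic interpolation, from cached values only
        s = (a * fb * fc / ((fa - fb) * (fa - fc))
             + b * fa * fc / ((fb - fa) * (fb - fc))
             + c * fa * fb / ((fc - fa) * (fc - fb)))
    else:
        s = b - fb * (b - a) / (fb - fa)  # secant, cached values only
    fs = f(s)                             # the single new evaluation
    d, c, fc = c, b, fb                   # shift history, reuse cached f(b)
    if fa * fs < 0:
        b, fb = s, fs
    else:
        a, fa = s, fs
    if abs(fa) < abs(fb):
        a, b, fa, fb = b, a, fb, fa       # swap values along with points
    return a, b, c, d, fa, fb, fc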
possible error in the example?
I just wrote an algorithm to use Brent's method and I was testing it following the reported example. I could follow it without errors until step 7. At step 6, you have f(a) = −6.78239 and the resulting f(s) = −0.58418. Since f(a)·f(s) > 0, a becomes s, and since |f(a)| = 0.58418 < |f(b)| = 3.93934, a and b are swapped.
At this point, the example says: "In the seventh iteration, we can again use inverse quadratic interpolation", but this is not the case, since c = a (or c = the pre-swap b, to be accurate) and the condition f(a) ≠ f(c) cannot be satisfied. The code calculates the new s with the secant method instead, but since it fails the conditions, bisection is used in the end.
Is it just me missing something, or is it an error? Can someone check this, please?
Same issue. Nschloe (talk) 10:25, 21 February 2024 (UTC)
Working code
I needed to use Brent's method and I ended up adapting the code found in Numerical Recipes after cross-referencing it with Brent's publication. I have now a working function which I believe to be correct. I have the code both in python and C++. This is the python code:
import math
import sys

def brents_root_finder(func, x1, x2, tol, maxiter):
EPS = sys.float_info.epsilon
a = x1
b = x2
c = x2
fa = func(a)
fb = func(b)
if (fa > 0 and fb > 0) or (fa < 0 and fb < 0):
raise Exception(f"The interval [{x1},{x2}] is not guaranteed to contain a root")
fc = fb
d = b - a
e = d
for i in range(maxiter):
# the bracketing interval is (b,c) with b being the root candidate and a being the previous root candidate
if fb * fc > 0:
# if fb and fc have the same sign, then c becomes a since the root is within (a,b)
c = a
fc = fa
d = b - a
e = d
if abs(fc) < abs(fb):
# if abs(fc) < abs(fb), then c is a better root candidate, so they are flipped
# This has the side effect of a == c
a = b
b = c
c = a
fa = fb
fb = fc
fc = fa
tol1 = 2*EPS*abs(b) + 0.5*tol
m = 0.5*(c - b)
if abs(m) <= tol1 or fb == 0.0:
# tolerance reached, return root
return b
if abs(e) < tol1 or abs(fa) <= abs(fb):
# if the previous step was small or the previous root candidate seems better, do bisection
d = m
else:
s = fb/fa
if a == c:
# if a == c, then do secant
p = 2 * m * s
q = 1 - s
else:
# else do inverse quadratic interpolation
q = fa / fc
r = fb / fc
p = s * (2 * m * q * (q - r) - (b - a) * (r - 1))
q = (q - 1) * (r - 1) * (s - 1)
# the following conditional makes sure that p > 0 and if secant was what just happened
# then sign(m) == sign(q)
if p > 0:
q = -q
else:
p = -p
if 2 * p < 3 * m * q - abs(tol1 * q) and p < abs(e * q / 2):
# if the new root candidate is not more than 3/4 the distance from b to c
# and p/q is not bigger than half of the previous step
# then accept the previous step
# note that if sign(m) != sign(q) then the first check will fail
d = p / q
else:
# else do a bisection
d = m
e = d
a = b
fa = fb
# if d is smaller than tol1, then just take a tol1 step towards the middle
b += d if abs(d) > tol1 else math.copysign(tol1, m)
fb = func(b)
raise Exception(f"Iteration limit [{maxiter}] reached")
I didn't want to just go and add it, but I think it would be nice to have the code in the article. Gtusus (talk) 13:13, 9 January 2023 (UTC)