There are several reasons why this method can fail. In particular, it will fail if the curve only touches the x-axis (or barely dips across it) between tabulated points, or if the curve is discontinuous.
Example of a curve touching the x-axis:
e.g.: y = f(x) = x^3 - 2x^2 - x + 2.63
This actually has three solutions, but the change of sign method will only detect one: between 1 and 2, all values of f(x) at 0.1 intervals are positive, so no change of sign is found, because the curve dips below the x-axis only over a very small interval (between about 1.5 and 1.6).
Example of a discontinuous curve:
e.g.: y = f(x) = 1/(x - 2.0005) + x
Here there are two genuine solutions, but a search near x = 2 will not find them. Instead, the change of sign method converges on x = 2.0005, the asymptote, because f(x) is negative just to the left of it and positive just to the right. The mistake will only be discovered after the fourth set of calculations, when, trying 2.0001, 2.0002, 2.0003 and so on, you reach 2.0005 and find that f(x) is undefined there.
Clearly, both these mistakes would have been recognised or avoided if a graph had been drawn before attempting the technique, so make sure you draw one every time the method is used.
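The decimal-search scan described above can be sketched in a few lines of Python (my own illustration, using the touching-curve example with a step of 0.1). It shows that the two roots between 1.5 and 1.6 are invisible at this step size, while the third root, near x = -1, is found:

```python
def f(x):
    # The curve that only just dips below the x-axis
    return x**3 - 2*x**2 - x + 2.63

def sign_change_intervals(f, a, b, step):
    """Scan [a, b] in fixed steps and list the sub-intervals where f changes sign."""
    found = []
    n = round((b - a) / step)
    for i in range(n):
        lo = a + i * step
        hi = a + (i + 1) * step
        if f(lo) * f(hi) < 0:
            found.append((round(lo, 10), round(hi, 10)))
    return found

# The two roots between 1.5 and 1.6 are missed at 0.1 intervals...
print(sign_change_intervals(f, 1.0, 2.0, 0.1))
# ...while the root near x = -1 is detected.
print(sign_change_intervals(f, -2.0, -1.0, 0.1))
```

The first scan returns an empty list even though two roots lie inside it, which is exactly the failure described above.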
Task ii) Fixed point iteration using the Newton-Raphson method
Unlike the change of sign method, this requires only one starting value, x1, which should be an estimate of the root of f(x) = 0 that we wish to find. First, a graph should be drawn, even if only a rough sketch.
Now, you find the tangent to the curve at x1, and find where it hits the x-axis. This point is your new estimate, x2, and you repeat this technique to a required accuracy.
We can see how this method gives us a good approximation to the root, as each step gets us a lot closer to the root.
The gradient of the tangent at (x1, f(x1) ) is given by:
(y - f(x1)) / (x - x1) = f ' (x1)
Where (x,y) is a point on the tangent and f ' (x1) is dy/dx of the curve at x1. Where the tangent cuts the x-axis, x = x2 and y = 0:
(0 - f(x1)) / (x2 - x1) = f ' (x1)
- f(x1) / f ' (x1) = x2 - x1
x2 = x1 - f(x1) / f ' (x1)
This is a form of the Newton-Raphson Formula:
Xn+1 = Xn - f(Xn) / f ' (Xn)
Using this formula iteratively, one can find an accurate approximation to the root fairly quickly.
An Example of its use:
I will use this method to find all roots of y = f(x) = 2x^4.5 - 4x^3.5 + x
First, a graph
There are three roots, the first of which is 0, because f(0) = 0 (every term contains a positive power of x). If the first unknown root is a and the second is b then, using the graph, I now find the starting approximations:
a1 = 1
b1 = 2
Now I need to find f ' (x)
f(x) = 2x^4.5 - 4x^3.5 + x
so: f ' (x) = 9x^3.5 - 14x^2.5 + 1
The iterative formula is now:
Xn+1 = Xn - (2Xn^4.5 - 4Xn^3.5 + Xn) / (9Xn^3.5 - 14Xn^2.5 + 1)
Here is the result after five iterations
We can see just how close f(x) is to 0 at this last value, a5. Because the change in x at each step is given by f(x)/f '(x), and f(x) is this small, we know that further iterations will not produce significantly different values of x, so we can take this to be the answer, especially since f(x) is decreasing rapidly (the next values of f(x) are 6.6 x 10^-13 and 6.7 x 10^-16). I have used the same technique to find the other unknown root, b:
This has also converged very quickly, to b = 1.8994 to 4 d.p.
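The whole calculation can be sketched in Python (my own illustration; I run ten iterations for safety, although the method has converged well before then):

```python
def f(x):
    return 2*x**4.5 - 4*x**3.5 + x

def f_prime(x):
    return 9*x**3.5 - 14*x**2.5 + 1

def newton_raphson(f, f_prime, x, iterations=10):
    """Apply Xn+1 = Xn - f(Xn) / f'(Xn) a fixed number of times."""
    for _ in range(iterations):
        x = x - f(x) / f_prime(x)
    return x

a = newton_raphson(f, f_prime, 1.0)  # starting from a1 = 1
b = newton_raphson(f, f_prime, 2.0)  # starting from b1 = 2
print(round(a, 4), round(b, 4))
```

After ten iterations both residuals f(a) and f(b) are effectively zero at machine precision.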
Error Bounds
This method does not provide any obvious error bounds, unlike the change of sign method. However, error bounds can be confirmed by demonstrating a change of sign of f(x). In the example above, we can evaluate f(x) on either side of x = 1.8994:
f(1.8994) = -0.0007
f(1.8995) = 0.0009
We can therefore quote the error bounds as 1.8994 and 1.8995: the root lies between them.
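This check takes only a couple of lines (a sketch, reusing the f(x) from the Newton-Raphson example):

```python
def f(x):
    return 2*x**4.5 - 4*x**3.5 + x

# A change of sign between the two bounds confirms a root lies between them
print(f(1.8994), f(1.8995))
assert f(1.8994) < 0 < f(1.8995)
```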
Causes of Failure
Any possible causes of failure can usually be identified by drawing a graph, but if the method is applied blindly, the following can cause it to fail:
Graph of y = f(x) = 2ln(x) + 1/(3x) - x^4 +1
There are three things that could go wrong here, all to do with choosing the starting point badly. If a point just to the left of the minimum is chosen, the tangent will cross the x-axis at a negative value of x; as ln(x) is not defined for negative x, the iteration cannot continue. If a starting point just to the right of the minimum is used, in an attempt to find the middle solution, the tangent will cross the x-axis far to the right of the graph, and the third solution will be found instead. Finally, if the starting point is exactly at the minimum, where the gradient is 0, the method fails completely because the tangent is horizontal and never crosses the x-axis.
Task iii) Fixed point iteration after rearranging the equation f(x) = 0 into the form x = g(x)
The Newton-Raphson method described earlier is only a special case of this method. Suppose you want to solve f(x) = 0, and it cannot be done using normal algebra. You can rearrange the equation to the form: x = g(x)
The solutions to this are where the curve y = g(x) crosses the line y = x when both are drawn on a graph.
We can use the formula x = g(x) as an iterative one, giving:
Xn+1 = g(Xn)
This requires a starting value of x, which can be found from a graph or other method. The iteration can be explained most easily by reference to the graph. Using x1, we calculate y = g(x1)
By moving across from the curve to the line y=x, we find where y=x2 and therefore also where x2 = g(x1)
Now we find g(x2). This is like moving vertically from y = x2 to y = g(x2)
Now move across to the line y = x from y = g(x2) because x3 = g(x2)
This must be repeated to sufficient accuracy. We can see that in this case, a "staircase" effect is produced, and it clearly goes towards the root (where x = g(x)).
Another possibility is that a spiral is produced instead. This is illustrated in the next graph, but the process is exactly the same.
Still using Xn+1 = g(Xn), we get a different looking graph.
From x1, we calculate g(x1), then go across to the line y = x to get x2 = g(x1).
Next, we move vertically to the curve to find g(x2), then across to y = x, giving x3 = g(x2). Moving vertically again gives g(x3), and so on.
This also homes in on the root, this time alternating between values above and below it, with the error bounds continually shrinking. This spiral behaviour occurs when the gradient of g(x) near the root is negative.
An Example
I am going to use this method to find a solution to:
f(x) = 2x^4 - 2x^3 - 1 = 0
This rearranges as follows:
2x^4 - 1 = 2x^3
x^4 - 0.5 = x^3
(x^4 - 0.5) ^ (1/3) = x
This is the iterative formula I will use. The graph shows the three graphs y = x, y = f(x) and y = g(x). We can see that the points where y = g(x) crosses y = x are in fact the same places where f(x) = 0.
I will now find the 1st root on the left, and from the graph, I am starting with x1 = -0.8
As we can see, this does not converge very rapidly. This is because, at the point where y = g(x) and y = x cross, the two graphs are almost at right angles to each other (the gradient of g(x) there is close to -1), so the spiral moves inwards very slowly. From the above results, I can quote the answer as -0.669 +/- 0.001, because this encompasses the error bounds found, -0.6684 and -0.6699.
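The iteration can be sketched in Python (my own illustration). One caution: in Python, raising a negative number to the power 1/3 returns a complex result, so a signed cube-root helper is needed to stay on the real line:

```python
import math

def g(x):
    # x = (x^4 - 0.5)^(1/3), using a signed cube root so negative
    # arguments are handled on the real line
    u = x**4 - 0.5
    return math.copysign(abs(u) ** (1/3), u)

x = -0.8              # starting value read from the graph
for _ in range(200):  # convergence is slow, so iterate generously
    x = g(x)
print(round(x, 4))
```

After enough iterations the spiral settles on the fixed point, in agreement with the quoted answer of -0.669 +/- 0.001.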
Example of Non-convergence
If we try to use the same method to find the other root of the same equation, something different happens. The other root is close to 1.3, so I use that as the initial value with the same iteration.
This is clearly a diverging sequence, as each value is further from the root (which is just less than 1.3) than the previous one. The reason for this is best demonstrated graphically:
This is what happened when I tried to find the 2nd root. Moving vertically from the line y = x to the curve y = g(x) takes us further from the root rather than closer, because the gradient of g(x) there is greater than that of the line y = x, so y = g(x) lies above y = x. Since y = x has a gradient of 1, if the gradient of y = g(x) is greater than 1 the staircase will diverge; similarly, if g '(x) < -1, the spiral will diverge. Therefore we need |g '(x)| < 1 near the root for this method to succeed.
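Repeating the same iteration from 1.3 shows the divergence numerically (my own sketch, using the same signed cube-root trick as before):

```python
import math

def g(x):
    # x = (x^4 - 0.5)^(1/3), with a signed cube root for negative arguments
    u = x**4 - 0.5
    return math.copysign(abs(u) ** (1/3), u)

x = 1.3
iterates = []
for _ in range(6):
    x = g(x)
    iterates.append(round(x, 4))
# Each iterate is larger than the last, moving away from the root
print(iterates)
```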
This can be proved (N.B. dx here means 'delta x', a small change in x):
g '(x) ≈ dy/dx
dy ≈ dx × g '(x)
So, for a small change in x, the corresponding change in y will be smaller if |g '(x)| < 1. As each y is fed back in as the next x and the function is iterated, the change in x gets smaller and smaller, and the sequence homes in on the root. If g '(x) > 1 or g '(x) < -1, then x changes by a greater amount each time and so moves further away from any fixed value.
In the above example, the graph showed the curve y = g(x) to be steeper than y = x at the second crossing point, so its gradient there did not satisfy the criterion |g '(x)| < 1.
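The criterion can be checked numerically with a central-difference estimate of g '(x) (my own sketch; the approximate root locations -0.669 and 1.254 are estimates read from the equation, not values from the tables above):

```python
import math

def g(x):
    # x = (x^4 - 0.5)^(1/3), with a signed cube root for negative arguments
    u = x**4 - 0.5
    return math.copysign(abs(u) ** (1/3), u)

def gradient(gf, x, h=1e-6):
    """Central-difference estimate of the gradient of gf at x."""
    return (gf(x + h) - gf(x - h)) / (2 * h)

# Near the first root the gradient lies inside (-1, 1), so the spiral converges...
print(gradient(g, -0.669))
# ...but near the second root it exceeds 1, so the iteration diverges there.
print(gradient(g, 1.254))
```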
Comparison of methods
The change of sign method found the root fairly quickly, but the root it gave was only accurate to one decimal place, whereas the other two methods gave the root correct to nine decimal places. The change of sign method could still have reached that accuracy if required; it would simply have taken longer.
In terms of which method was easier and quicker, I would say that the change of sign method was the most convenient, since the Newton-Raphson method required you to differentiate first before working out the values.
The use of Microsoft Excel and Omnigraph was a great help with all three methods: the intervals in which x lay were much easier to obtain, as were the tangents.
Conclusion
I have found that the change of sign method is very simple to use, and only requires some simple checks beforehand. The built-in error bounds and steady rate of convergence (one decimal place per cycle) make it useful. Once programmed into a computer, the many simple calculations become even easier.
The rearrangement method was more complicated and required a certain amount of thinking before a calculator or computer could take over, including checking that g '(x) was between -1 and 1. Its rate of convergence also varied a lot, depending on g '(x), and it often requires more than one rearrangement to find all the roots of an equation.
The Newton-Raphson method converged the fastest of the three and was very easy to use once programmed into my computer. It always tries to converge to a real root, but you may find the wrong one if your initial estimate is near a turning point.
I think the Newton-Raphson method is the best because of its speed of convergence, and because a good starting estimate will always find the required root.