Xn+1 = Xn - f(Xn) / f'(Xn)
From this equation it can be seen that dividing the value of f(x) at Xn by the derivative at that point gives the change in X. If this change is subtracted from the current value of X (the starting value X0 is always chosen to be an integer), the result is a value of X which is closer to the root. So, by repeating this iterative process, the root can be found to the desired degree of accuracy.
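The iteration described above can be sketched in a few lines of Python. This is my own minimal illustration, not part of the coursework; here it is applied to f(x) = x² - 2, whose positive root is √2:

```python
def newton_raphson(f, f_dash, x, steps=20, tol=1e-10):
    """Repeatedly subtract f(x)/f'(x) from x until the change is tiny."""
    for _ in range(steps):
        change = f(x) / f_dash(x)  # change in X = Y value / gradient
        x = x - change
        if abs(change) < tol:
            break
    return x

# Example: f(x) = x^2 - 2, f'(x) = 2x, starting from the integer X0 = 2
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 2.0)
print(round(root, 5))  # 1.41421
```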
Success
Below is the equation used to prove that the Newton Raphson method can indeed find a root of an equation.
f(x) = 1.3x³ + 0.4x² - 0.75x - 0.1
Using the form stated above, this equation was iterated several times; the results for each root are as follows:
My desired accuracy was 5 decimal places. The lightly shaded areas of the tables are where the tangent taken at that point intercepts the X axis in exactly the same place as the previous tangent (to 5 decimal places). The images below show that this is true:
The image above shows the process taken to find the root. Due to the thickness of the lines used (for a clear image), only two steps can be demonstrated clearly; however, the theory applies to all points.
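As a numerical check of this success case, the same iteration can be run in Python. This is a sketch: the starting value x = 1 is my own choice, not necessarily the one used in the tables above.

```python
f = lambda x: 1.3 * x**3 + 0.4 * x**2 - 0.75 * x - 0.1
f_dash = lambda x: 3.9 * x**2 + 0.8 * x - 0.75  # derivative of f

x = 1.0                       # integer starting value to the right of the root
for _ in range(10):
    x = x - f(x) / f_dash(x)  # one Newton Raphson step
print(round(x, 5))            # ≈ 0.69, the root between 0 and 1
```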
Failure
However, like all of the methods in this coursework, there are circumstances in which this method fails to find a root. This occurs when the tangent at the integer starting value projects away from the desired root, for example when there is a stationary point very close to an integer value of X. The equation below contains examples of this.
f(x) = -1.11x⁵ + 3.2x⁴ + 0.15x³ - 4x² - 2x
Looking at the gradients at the integer values of X, it can be seen that zooming in on the root close to x = -1 will be possible; however, the gradient at X = 0 does not seem steep enough for the method to find the root closest to 0. We can see below that it does not.
As seen in the diagram above, the iterative process failed: instead of finding the desired root, it converged on another, thus resulting in a failure.
Error bound
The error bound for the demonstrated point of success is ±0.000002. This was calculated by taking the mean of the last two different values of Xn+1 and finding the difference between that mean and either value.
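The error-bound calculation above can be expressed in a couple of lines. The two iterate values below are hypothetical, chosen only to reproduce a bound of ±0.000002; they are not the coursework's actual figures.

```python
# Hypothetical last two differing values of Xn+1 (not the coursework's figures)
x_a, x_b = 0.690001, 0.689997

mean = (x_a + x_b) / 2   # best estimate of the root
bound = abs(x_a - mean)  # distance from the mean to either value
print(f"root = {mean} ± {bound:.6f}")
```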
Rearrangement equations
From previous knowledge, we know that if an equation f(x) = 0 is rearranged into the form g(x) = h(x), the points of intersection of the two new curves are the roots of the original equation.
By rearranging f(x) into g(x) and h(x), where h(x) = x, we have a straight line together with the newly formed g(x) curve, and the points of intersection are the roots of the original f(x) curve. So, by using the y = x line, the rearrangement method can be applied.
As the criteria for this method are very specific, and due to the nature of curves, it is very difficult to find a function for which this method totally succeeds or totally fails. So, in this instance, an example of a point where it succeeds and a point where it fails will be given from the same function.
For this method I chose to use the following function:
f(x) = x³ - 1.4x² - x + 0.72
This was then rearranged, removing an x from the function to pair with the y = x line, resulting in the new g(x) and h(x) curves:
g(x) = x³ - 1.4x² + 0.72
h(x) = x
The image above shows all three functions together, and it can be seen that the intersections between g(x) and h(x) are indeed the roots of f(x). So, by applying the iterative formula to the g(x) function, we can see whether the points converge or diverge.
Success
Below is the iterative process taken to prove that the method is successful for finding the root which is approximately halfway between 0 and 1, though slightly closer to 0 than 1, so the starting value of x will be 0.
As seen in the diagram above, the rearrangement method does indeed work: the diagram shows the root at just before 0.497, and the iterative process gives the root as 0.49697, which reinforces this result and further demonstrates that the method works.
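The same iteration can be reproduced with a short Python loop. This is a sketch; the tolerance and iteration cap are my own choices.

```python
g = lambda x: x**3 - 1.4 * x**2 + 0.72  # rearranged form; h(x) = x

x = 0.0                       # starting value
for _ in range(100):
    x_new = g(x)              # one step of x_(n+1) = g(x_n)
    if abs(x_new - x) < 1e-7:
        break
    x = x_new
print(round(x_new, 5))        # ≈ 0.49697
```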
Magnitude of g’(x)
This is the general gradient of the curve in the neighbourhood of the intersection/root. For the method to succeed, the gradient of the curve in this region must be between -1 and 1. From the diagram we can see that it is indeed within that region, and the method is a success.
To prove this: g(x) will be differentiated and the point of intersect will be used as the X value to determine the gradient of the curve at that point.
g’(x) = 3x² - 2.8x
x = 0.49697
g’(x) = -0.651 (to 3 significant figures)
This proves that the gradient is within the required region for the method to work.
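The check above is a one-line calculation in Python (a sketch using the figures already stated):

```python
g_dash = lambda x: 3 * x**2 - 2.8 * x  # derivative of g(x) = x^3 - 1.4x^2 + 0.72

grad = g_dash(0.49697)       # gradient at the point of intersection
print(round(grad, 3))        # -0.651, safely between -1 and 1
```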
Failure
Despite the method’s ability to find a root, this can only occur if certain criteria are met. Below is an example where these conditions are not met, so the method fails to find the root.
Using the same set of functions as above, there are a few starting points which do not converge to the correct root; these are as follows:
The left table shows convergence to the same root as the success; however, the original target root was the one between 1 and 2, so this is a failure. The other two processes show that the gradient of g(x) is unsuitable, and instead of converging the sequence diverges without bound, again a failure (the # represents a number too large to fit into the space available).
This image shows the failure of the rearrangement method. As seen above, the staircase pattern travels the wrong way, which results in a diverging sequence of results, hence a failure.
Magnitude of g’(x)
For the failure the gradient of g(x) at the point of intersection is:
g’(x) = 3x² - 2.8x
x = 1.737 (from zooming into the graph with software)
g’(x) = 4.19 (to 3 significant figures)
This is outside the required bounds of -1 and 1, and therefore this root was never found by the method. The actual root was found by using the software to zoom into the graph to read off the value of x.
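The divergence can be demonstrated numerically. This is a sketch: the starting value of 1.8, near the target root, and the cut-off of 10⁶ are my own choices.

```python
g = lambda x: x**3 - 1.4 * x**2 + 0.72

x = 1.8                      # start near the root between 1 and 2
diverged = False
for i in range(10):
    x = g(x)                 # each step moves further from the root
    if abs(x) > 1e6:         # treat this as divergence
        diverged = True
        break
print("diverged" if diverged else f"converged near {x}")  # prints "diverged"
```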
Comparisons
In order to compare the three methods, an equation is needed which succeeds with all three. I have decided to use the equation from the success of bisection, as it is the only function tested which works with all three methods.
f(x) = 0.64x⁴ + 0.28x² - 2x + 0.28
Bisection
The success of this was shown towards the beginning of the investigation. However, for ease of comparison, the tables of calculations are shown below:
…
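Since the tables themselves are omitted here, the bisection steps can be reproduced with a short Python sketch. The interval [0, 1] and the 10⁻⁵ width target are my reading of the 5-decimal-place requirement, not figures from the tables.

```python
f = lambda x: 0.64 * x**4 + 0.28 * x**2 - 2 * x + 0.28

a, b = 0.0, 1.0              # f(0) > 0 and f(1) < 0, so a root lies between
steps = 0
while b - a > 1e-5:          # halve until the interval fixes 5 decimal places
    m = (a + b) / 2
    if f(a) * f(m) > 0:      # no sign change in [a, m]: root is in [m, b]
        a = m
    else:
        b = m
    steps += 1
print(steps, round((a + b) / 2, 5))
```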
Newton Raphson
The function shown above was applied to the Newton Raphson form:
xn+1 = xn - (0.64xn⁴ + 0.28xn² - 2xn + 0.28) / (2.56xn³ + 0.56xn - 2)
…and these are the results:
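This iteration can be checked in Python. A sketch: the denominator is the derivative f'(x) = 2.56x³ + 0.56x - 2, and the starting value of 0 is my own choice.

```python
f = lambda x: 0.64 * x**4 + 0.28 * x**2 - 2 * x + 0.28
f_dash = lambda x: 2.56 * x**3 + 0.56 * x - 2  # derivative of f

x = 0.0
for _ in range(10):
    x = x - f(x) / f_dash(x)  # one Newton Raphson step
print(round(x, 5))            # agrees with the other methods to 5 dp
```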
Iteration after rearrangement
From the original function the new g(x) is as follows;
g(x) = 0.32x⁴ + 0.14x² + 0.14
…and the process was as follows:
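The rearrangement iteration for this g(x) can be reproduced the same way (a sketch, with the starting value 0 as before):

```python
g = lambda x: 0.32 * x**4 + 0.14 * x**2 + 0.14

x = 0.0
for _ in range(10):
    x = g(x)                 # x_(n+1) = g(x_n)
print(round(x, 5))           # settles after only a few steps
```

The gradient of g(x) near this root is very small, which is why the sequence settles so quickly.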
Conclusion
From the tables above, all three methods were successful in finding the root, as each either zoomed in on or converged to the same point: 0.14299.
Merits of each method and speed of convergence
The merits of the bisection method are that it is simple to remember and to apply to any function, as no rearrangement or differentiation is required. If a comparatively low level of accuracy is needed, numbers need simply be substituted into the function to calculate the Y values, so the method is only as complicated as the function itself. The speed of convergence of this method for my function is quite slow, requiring 17 steps in total to achieve the desired level of accuracy. However, this depends on where exactly the root is: if the root were at x = 0.5, it would take only 1 step to find it, and a further step to check that it is correct.
The Newton Raphson method is (in my opinion) the most difficult to implement, especially with more complex functions such as those containing trigonometric functions, as these have to be differentiated for the method to work. The speed of convergence for this function is mediocre, taking 6 iterations to find the root, and again a further step was taken to check. Once more, the speed depends on the function itself. For example, if the function were y = x - 0.5, it would cut the x axis at 0.5, and whichever integer starting value was taken (0 or 1), the root would be found on the first iteration.
The difficulty of the rearrangement method is average, as it only involves moving the x term and its coefficient to the other side and dividing the remaining function by that coefficient, resulting in the new g(x) and the y = x line. The speed in this case was very fast, as it took only 3 iterations to find the root. However, as with the other methods, it depends on the function itself: the smaller the gradient of g(x) at the intersection with the line, the fewer steps are needed (note that this gradient must be between -1 and 1 for the method to work).
Ease of use with technology
For this investigation, there was a host of technology available to aid in this task, the main two options being graphical calculators and computers. I personally chose computers for their speed and flexibility: one piece of software was used to find a suitable equation, and another was used to solve the function.
Omnigraph was used to determine the functions for all three methods (successes and failures) and was easy compared with using pen and paper. It was more difficult to find a function for some methods than for others, but this was not a limitation of the technology, rather a difficulty with the method (details of which are mentioned below).
Excel was used to solve the functions, as this software makes iterative processes very easy through its auto-fill feature (which looks at the pattern within the selected area, then repeats it throughout the dragged region). This feature was used repeatedly throughout all methods; however, it was easier to apply to some methods than to others (details of which are mentioned below).
The bisection method was very easily set up in Omnigraph, as it need only meet one requirement: that the target root intersects the X axis between two integers with no other intersection between them. However, using Excel to solve for the root was more difficult, as each step involved its own table, which had to be copied and pasted for each step, with the figures from the previous step entered as well, resulting in a long and repetitive process with a relatively inaccurate result.
The Newton Raphson method was completed with much more ease: after differentiating the function, the equation is simply inserted into the top cell and auto-filled until the sequence converges (or diverges). This method also makes much higher levels of accuracy easy to achieve, as each line of the table is one step and a subsequent increase in accuracy.
The rearrangement method may be the easiest to implement, as all that is needed is for g(x) to be placed into a cell and the cells dragged down until the sequence converges (or diverges). As with the Newton Raphson method, each line of the table represents one step, so the further the auto-fill is extended, the more accurate the root.