We can use the Newton-Raphson method to find the square root
Solution
The Newton-Raphson (NR) method is an iteration formula for approximating roots of the equation f(x) = 0 for some sufficiently well-behaved function f(x) that is at least once differentiable, given an initial guess that is "good enough".
The idea of NR iteration is to "follow the slope" of the function at the point (x_n, f(x_n)) back to where the tangent of the function at that point crosses the x-axis. Do this enough times and you'll eventually be "close enough" to the root for all practical purposes.
So NR iteration begins with a straight line through the point (x_n, f(x_n)) with slope f'(x_n). That is,

(y − f(x_n)) / (x − x_n) = f'(x_n).

This can be rearranged into the equation of a line, but we don't need to do that; what we want is the point on this line which satisfies (x_{n+1}, 0), so we substitute those in for (x, y):
−f(x_n) / (x_{n+1} − x_n) = f'(x_n),

which we solve for x_{n+1} to obtain

x_{n+1} = x_n − f(x_n) / f'(x_n).
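As a rough sketch (the function name newton_raphson and the tolerance defaults are my own choices, not part of the answer above), the iteration formula can be written as a short loop:

```python
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=100):
    """Approximate a root of f(x) = 0 starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        slope = f_prime(x)
        if slope == 0:
            raise ZeroDivisionError("tangent slope is zero; pick another x0")
        x_next = x - f(x) / slope   # x_{n+1} = x_n - f(x_n) / f'(x_n)
        if abs(x_next - x) < tol:   # "close enough" for practical purposes
            return x_next
        x = x_next
    return x
```

For example, `newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0)` approximates sqrt(2).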
So to find the square root of a number, you want to build the right function f(x). The number you are looking for satisfies the equation x^2 = a, where a is the number whose square root you wish to find. That is, formally, we know that x = ±sqrt(a), even if we don't know its value.
We can rearrange the equation above so that the right-hand side is zero, and therefore the left-hand side is the function we want: f(x) = x^2 − a = 0.
Now we find f'(x) = 2x, and we have everything we need to use NR iteration to approximate ±sqrt(a). Whether you get the positive or negative root depends on your initial guess x_0: a positive or negative number, respectively. The only problem is if you pick the initial point to be x_0 = 0, because then the tangent line has a slope of zero, and the NR formula has a division by zero. Not a good thing!
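Putting the pieces together for this particular f (the helper name sqrt_nr is mine), the update x_{n+1} = x_n − (x_n^2 − a) / (2 x_n) can be coded directly:

```python
def sqrt_nr(a, x0, tol=1e-12, max_iter=100):
    """Approximate a square root of a non-negative number a via NR iteration.

    The sign of the initial guess x0 picks the positive or negative root;
    x0 = 0 is rejected because the tangent there has slope zero.
    """
    if a < 0:
        raise ValueError("a must be non-negative")
    if x0 == 0:
        raise ZeroDivisionError("x0 = 0 gives a zero-slope tangent")
    x = x0
    for _ in range(max_iter):
        x_next = x - (x * x - a) / (2 * x)   # equivalently (x + a/x) / 2
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Note that the update simplifies to averaging x_n with a/x_n, so each step is just one division and one addition.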
A nice thing about this is that you don't have to pick a to be an integer; it can be any non-negative real number (although if you get far enough into maths, you'll learn how to deal with square roots of negative real numbers as well!).
If you're interested, see if you can figure out what function f(x) you need in order to approximate cube (or any other) roots of real numbers, and therefore how the NR formula will look in those particular cases.
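If you want to check your answer afterwards, here is one way it can work out, assuming you chose f(x) = x^k − a (the function name kth_root_nr is my own):

```python
def kth_root_nr(a, k, x0, tol=1e-12, max_iter=200):
    """Approximate a k-th root of a via NR on f(x) = x**k - a, f'(x) = k*x**(k-1)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - (x**k - a) / (k * x**(k - 1))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For instance, `kth_root_nr(8.0, 3, 1.0)` approximates the cube root of 8.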
If you have more complicated functions and not such good initial guesses, NR can give you some strange results, like answers that don't converge (oscillating through two or more values in the limit), or which converge to points far away from where you started, or which lead to "stationary points" of your function and thus stop because of a division by zero (or some number so small that your results might be thrown off if you're doing this on a computer). These are things that we just have to live with, because as far as approximating roots of functions goes, NR is one of the fastest methods we've got. It's possible to find similar schemes that use higher derivatives, but they are usually so much more expensive (in terms of computation time) that the little gain in accuracy with each step is not worth it. Other methods that don't suffer from NR's pitfalls will typically be slower, so in the real world you have to pick something that's appropriate for your problem.
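To see the oscillating behaviour for yourself, here's a small demo with a classic example of my own choosing: f(x) = x^3 − 2x + 2 and the unlucky guess x_0 = 0, whose iterates bounce between 0 and 1 forever:

```python
def nr_step(x):
    # One NR step for f(x) = x**3 - 2*x + 2, with f'(x) = 3*x**2 - 2.
    return x - (x**3 - 2*x + 2) / (3*x**2 - 2)

x = 0.0
orbit = []
for _ in range(6):
    orbit.append(x)
    x = nr_step(x)
print(orbit)  # prints [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

From x = 0 the step lands on 1, and from x = 1 it lands back on 0, so the iteration cycles without ever converging.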
