Lab 2: Finding Roots of Equations


We studied several methods to solve equations numerically.

1. Bisection Method:
This method has four steps:
  1. Start with two initial values $x_{low}$ and $x_{high}$, chosen so that the solution lies somewhere between them. This is verified by checking that the equation produces results with opposite signs (+ve and -ve) when evaluated at the two initial values.
  2. Check whether either of the initial values is already the solution, i.e., the equation evaluates to $0\pm\epsilon$ where $\epsilon$ is the specified error limit. If so, return that initial value as the solution.
  3. Otherwise, divide the interval between $x_{low}$ and $x_{high}$ into two equal halves and evaluate the equation at the midpoint $$x_{mid} = \frac{x_{low} + x_{high}}{2}$$ If $x_{mid}$ is the solution, return it. Otherwise, store the absolute value of the equation at $x_{mid}$ as the error at the current stage, and replace whichever of $x_{low}$ or $x_{high}$ gives the same sign of the equation as $x_{mid}$, so that the solution still lies between the two values.
  4. Repeat the last step until a solution is found or a pre-specified maximum number of iterations is exceeded. Return the solution as well as the stored errors at each stage. Also, print a message about the number of iterations needed to find the solution.
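The steps above can be sketched in a few lines of Python. This is only an illustration (the function name `bisect_sketch` and the test equation $x^2 - 2 = 0$ are choices made here, not part of the lab); the lab exercise below asks for a fuller version that also records the errors:

```python
def bisect_sketch(fn, xlow, xhigh, eps=1e-5, maxiter=50):
    # step 1: the root must be bracketed, i.e., fn changes sign
    assert fn(xlow) * fn(xhigh) < 0, 'root is not bracketed'
    for i in range(maxiter):
        xmid = (xlow + xhigh) / 2       # step 3: midpoint of the interval
        if abs(fn(xmid)) <= eps:        # step 2: within the error limit?
            return xmid
        if fn(xlow) * fn(xmid) < 0:     # root lies in the left half...
            xhigh = xmid
        else:                           # ...or in the right half
            xlow = xmid
    return xmid                         # best guess after maxiter steps

print(bisect_sketch(lambda x: x ** 2 - 2, 1., 2.))  # close to sqrt(2)
```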

 

2. Regula falsi Method:
Instead of always taking the midpoint, this method uses extra information from the given equation when dividing the interval in each iteration: the new point is where the straight line through $(x_{low}, f(x_{low}))$ and $(x_{high}, f(x_{high}))$ crosses zero, $$x_{new} = x_{high} - f(x_{high})\,\frac{x_{high} - x_{low}}{f(x_{high}) - f(x_{low})}$$ It is guaranteed to give the solution in just one iteration if the equation is linear. Otherwise its performance varies: the solution stays bracketed, but one endpoint can get stuck, and convergence may be very slow.
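A single interpolation step of this method is easy to write down. The sketch below (the function name `regula_falsi_step` and the linear test equation are illustrative choices) performs one step and confirms the one-iteration claim for a linear equation:

```python
def regula_falsi_step(fn, xlow, xhigh):
    # intersect the chord through (xlow, f(xlow)) and
    # (xhigh, f(xhigh)) with the x-axis
    flow, fhigh = fn(xlow), fn(xhigh)
    return xhigh - fhigh * (xhigh - xlow) / (fhigh - flow)

# for a linear equation the very first step lands on the root
f = lambda x: 2 * x - 6                 # root at x = 3
print(regula_falsi_step(f, 0., 5.))     # prints 3.0
```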

 

3. Newton (or Newton-Raphson) Method:
Theoretically the fastest method but requires explicitly computing the derivative of the given function as well. It starts with a single initial guess and finds a new point using the slope information provided by the derivative. The new guess for the solution is given by \[ x_{new} = x_{old} - \frac{f(x_{old})}{f^\prime(x_{old})} ~~~~~~~~~~~~ (1) \] where $f^\prime$ is the derivative of $f$.

 

4. Secant Method:
It is a variant of the Newton Method that does not require an explicit calculation of the derivative. The derivative is instead approximated by $$df(x) = \frac{f(x) - f(x - \Delta)}{\Delta} ~~~~~~~ (2)$$ where $\Delta$ is a step size, usually $\leq 1$. The approximate value is then used exactly as in the Newton Method: $$x_{new} = x_{old} - \frac{f(x_{old})}{df(x_{old})} ~~~~~~~~~~~~~~~ (3)$$
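To see how good the approximation in Equation (2) is, the short check below (purely illustrative, not part of the lab code) compares it against a known derivative, $f(x) = x^3$ with $f^\prime(2) = 12$:

```python
def df(fn, x, delta=1e-3):
    # backward-difference approximation from Equation (2)
    return (fn(x) - fn(x - delta)) / delta

approx = df(lambda x: x ** 3, 2.)
print(approx)   # about 11.994, close to the true slope of 12
```

The smaller $\Delta$ is, the closer the approximation, up to the limits of floating-point round-off.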

Implementing the Newton Method


The code for the Newton Method is given in the cell below and is fairly easy to follow. The guess is updated inside the while loop using Equation (1): x0 = x0 - fn(x0) / dfn(x0).

In [1]:
import numpy as np
import math
from matplotlib import pyplot as plt

np.set_printoptions(suppress=True, precision=5)

#################################################################
#    The following function finds a root of a given equation
#    using the Newton Method. Its parameters are:
#    fn:      the given function for which a solution is found
#    dfn:     derivative of fn
#    init:    the initial guess for the solution
#    eps:     error limit; default value is 1e-5 or 0.00001
#    maxiter: maximum number of iterations for which the update
#             loop runs
#    The function returns the solution and the errors array
#    containing the error at each iteration. The data in this
#    array may be used to see how the error changes from one
#    iteration to another.
#################################################################
def root_newton(fn, dfn, init, eps=1e-5, maxiter=50) :
    errors = np.zeros(maxiter)
    errors[0] = abs(fn(init))
    if errors[0] <= eps :
        print('Initial value is the solution!')
        return init, errors
    
    x0 = init
    i = 0
    while errors[i] >= eps and i < maxiter - 1 : 
        x0 = x0 - fn(x0) / dfn(x0)
        i = i + 1
        errors[i] = abs(fn(x0))
        
    if errors[i] > eps :
        print('Good Solution not found in ', i, ' iterations!')
    else :
        print('Found Solution after ', i, ' iterations.')
        
    return x0, errors

Given below is the code for finding the value of $\sqrt[3]{\pi^{e}}$ ($\pi^e\approx22$ and therefore its cube root is $\approx2.8$).

First, we need to define the necessary function and its derivative. The function we want is $$f(x) = x^3 - \pi^e$$ Its derivative is $3x^2$. These are defined by the two functions solve_eqn(x) and d_solve_eqn(x).

In [2]:
def solve_eqn(x) :
    return x ** 3 - math.pi ** math.e

def d_solve_eqn(x) :
    return 3. * x ** 2
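As a quick sanity check before running the solver (this cell is an illustration added here, not part of the original lab code), we can confirm that the function is small near the expected root $\approx 2.8214$ and that the derivative there is positive. The definitions are repeated so the cell is self-contained:

```python
import math

def solve_eqn(x):
    return x ** 3 - math.pi ** math.e

def d_solve_eqn(x):
    return 3. * x ** 2

print(solve_eqn(2.8214))     # very close to 0
print(d_solve_eqn(2.8214))   # about 23.88, so the curve is rising at the root
```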

We are now ready to use the Newton Method defined above to solve the equation. The cell below shows the necessary steps, prints the solution and also the absolute errors at each iteration. We can see that the Newton Method found the correct value to 4 decimal places in just 7 iterations, starting from an initial guess of $1$.

We also plot the errors to see how quickly (or slowly) they are decreasing with each iteration. What do you infer from the plot of errors?

In [3]:
ans, err = root_newton(solve_eqn, d_solve_eqn, 1.)
print(ans)
print(' Absolute Errors at each stage:')
print(err)
plt.grid(True)
plt.plot(err[:17])
plt.show()
Found Solution after  7  iterations.
2.821398891219445
 Absolute Errors at each stage:
[ 21.45916 519.49272 148.30928  38.79153   7.61293   0.62435   0.00561
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.     ]

Lab Exercises


  1. Implement Bisection Method using the information given in one of the cells below.
  2. Implement Secant Method as per the template given in one of the cells below.
  3. Use the three methods to find the roots of the following equations accurate to 8 decimal places.
    1. $-4.3x^3 + 6.74x^2 + 11.9x - 27.67 = 0$
    2. $x - \log_{10}2 = 0$
  4. Compare the number of iterations taken by each method for the two problems. What do you conclude?
  5. Plot the errors for each of the methods to see how the errors reduce with each iteration. Again, what is your conclusion?
In [4]:
def root_bisection(fn, xlow, xhigh, eps=1e-5, maxiter=50) :
    # Insert your code as per the above parameters, using the
    # midpoint update from step 3 of the Bisection Method
    # Declare the errors array exactly as in the root_newton method
    errors = np.zeros(maxiter)


    return xnew, errors
In [5]:
def root_secant(fn, init, step=1., eps=1e-5, maxiter=50) :
    # Insert your code here as per the above parameters
    # using the update equations (Equation 2 and Equation 3)
    # above.
    x0 = init
    
    
    return x0, errors


THE END