Lab 03: Solving Systems of Equations


There are three methods that we commonly use for solving systems of equations.

  • Gaussian Elimination
  • LU Decomposition
  • Jacobi Method


Gaussian Elimination

Gaussian elimination is the oldest and most well-known method for solving systems of equations. The method works as follows:

  1. Create an augmented matrix $C$ by appending the RHS vector $B$ as an extra column of the coefficient matrix $A$, giving an $N\times(N+1)$ matrix, where $N$ is the number of unknowns.
  2. Starting from the first row and going down to the last but one row, repeat Steps 3 to 5.
  3. In the current column, find the element with the largest magnitude (in the current row or below it) and make it the pivot.
  4. Swap the row containing the pivot with the current row.
  5. For each row $crow$ below the pivot row, make the element in the pivot column $0$ by applying the row operation $$C[crow,ccol] = C[crow,ccol] - C[rpiv,ccol] \times\frac{C[crow,cpiv]}{C[rpiv,cpiv]}$$ to every column $ccol$, where $(rpiv, cpiv)$ is the cell containing the pivot.
At the end, $C$ is in upper triangular form, i.e., all the elements below the diagonal are 0.
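
For example (a small system chosen here purely for illustration), take $2x_1 + x_2 = 5$ and $4x_1 + 3x_2 = 11$. The augmented matrix, the swap that brings the largest-magnitude element of column 1 into the pivot position, and the single row operation $R_2 \leftarrow R_2 - \frac{2}{4}R_1$ give $$\left[\begin{array}{cc|c} 2 & 1 & 5 \\ 4 & 3 & 11 \end{array}\right] \rightarrow \left[\begin{array}{cc|c} 4 & 3 & 11 \\ 2 & 1 & 5 \end{array}\right] \rightarrow \left[\begin{array}{cc|c} 4 & 3 & 11 \\ 0 & -0.5 & -0.5 \end{array}\right]$$ which is upper triangular.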

This code is in the cell below.

In [2]:
import numpy as np
# The following is only for pretty output
np.set_printoptions(precision=2, suppress=True)
In [3]:
def gauss_eliminate(eqns, debug=False) :
    nrows = eqns.shape[0]
    ncols = eqns.shape[1]
    
    # Initialise the pivot element as (0,0)
    r_piv = 0
    c_piv = 0
    
    # Repeat from the first row to the last but one row
    # and from the first column to the last but one column
    while r_piv < nrows-1 and c_piv < ncols-1 :
        # Pivot is the element with the largest magnitude in
        # the current column, at or below the current row
        piv_row = np.argmax(abs(eqns[r_piv:, c_piv])) + r_piv
        # If the pivot is 0, just move to the next col
        if eqns[piv_row, c_piv] == 0 :
            c_piv += 1
        else :
            # Swap the pivot row with the current row
            eqns[[r_piv, piv_row]] = eqns[[piv_row, r_piv]]

            # For all the rows below the pivot, make
            # elements below the pivot cell as 0
            for row in range(r_piv+1, nrows) :
                mult_fact = eqns[row,c_piv] / eqns[r_piv,c_piv]
                eqns[row,c_piv] = 0
                for col in range(c_piv+1, ncols) :
                    eqns[row,col] -= (eqns[r_piv,col] * mult_fact)
            # Output the current state for debugging    
            if debug :
                print('After reduction of row: ', r_piv+1)
                print(eqns)
            # Move to the next row and column and repeat
            # the above steps
            r_piv += 1
            c_piv += 1
            
    return eqns


Back-substitution Step

This step gets the values of the unknowns from the upper triangular matrix resulting from the previous Gauss elimination step.

Starting from the last row, which contains only one non-zero coefficient $C[nrow,nrow]$, where $nrow$ is the number of rows (the number of unknowns), we obtain the values of the unknowns $x_1, x_2, \ldots, x_N$ in reverse order, i.e., $x_N, x_{N-1}, \ldots, x_1$.

Note that the last row corresponds to the equation $$C[nrow, nrow]x_N = C[nrow,nrow+1]$$ where $C[nrow,nrow+1]$ is the RHS value of the $N^{th}$ equation ($C$ is the augmented matrix, remember!). From this, we get $$x_N = \frac{C[nrow,nrow+1]}{C[nrow,nrow]}$$ This value of $x_N$ can then be substituted into the $(N-1)^{th}$ equation (row $N-1$) to get $x_{N-1}$, and so on.
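
In general, once $x_{i+1}, \ldots, x_N$ are known, row $i$ of the augmented matrix gives $$x_i = \frac{1}{C[i,i]}\left(C[i,N+1] - \sum_{j=i+1}^{N} C[i,j]\,x_j\right)$$ which is what the code below computes for each row, working from the bottom up.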

The code is given below.

In [4]:
def back_substitute(eqns) :
    nrows = eqns.shape[0]
    ncols = eqns.shape[1]
    # Define a new vector for the answers and initialise
    # it to 0
    sol = np.full(nrows, 0.)
    
    # Work upwards from the last row to the first
    for row in range(nrows-1, -1, -1) :
        # No unique solution if a diagonal element is 0
        if eqns[row, row] == 0 :
            print('Unique solution does not exist!')
            return False
        # Start from the RHS value of this equation and
        # subtract the contributions of the unknowns that
        # have already been found
        rhs = eqns[row,ncols-1]
        for col in range(row+1, ncols-1) :
            rhs -= (eqns[row,col] * sol[col])
        # Divide by the diagonal element to get this unknown
        sol[row] = rhs / eqns[row,row]
    return sol

Try it on the problem below, where the $A$ and $B$ matrices have been given for you. These correspond to the system of equations $$\begin{aligned} x_1 + x_2 - x_3 & = 21 \\ x_1 + 2x_2 + 2x_3 & = 39 \\ -x_1 + 5x_2 - x_3 & = 7 \end{aligned}$$ If you ran everything correctly, you should get the answers $$x_1 = 19, \quad x_2 = 6, \quad x_3 = 4$$

In [ ]:
A = np.array([ 
    [1, 1, -1.],
    [1. , 2., 2],
    [-1., 5, -1.]
    ], dtype='float')
B = np.array([21, 39, 7])
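
One way to try it (a sketch, not the only way): build the augmented matrix with np.column_stack and pass it through the two functions defined above. Note that gauss_eliminate modifies the augmented matrix in place.

In [ ]:
# A minimal sketch: form the augmented matrix [A | B] and run
# the elimination and back-substitution steps defined above
C = np.column_stack((A, B))      # N x (N+1) augmented matrix
upper = gauss_eliminate(C, debug=True)
solution = back_substitute(upper)
print(solution)                  # expected: [19.  6.  4.]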


Implementation Exercise

Implement the Jacobi method as studied in class. The procedure starts with an initial guess for the unknowns (e.g., random values or all zeros). It then repeatedly updates the values as $$ x_i^{(k+1)} = \frac{1}{A_{ii}}\left( b_i - \sum_{j\neq i}A_{ij}x_j^{(k)}\right) $$ where $A$ and $B$ are the usual coefficient and RHS value matrices. The updates continue until the difference between the current value $x_i^{(k+1)}$ and the previous value $x_i^{(k)}$ is less than eps for every unknown.

The skeleton is given in the cell below.

In [13]:
def jacobi(A, B, eps=1e-5) :
    # Initial guess for the unknowns
    X = np.zeros(A.shape[0])
    
    # TODO: repeat the Jacobi updates until the change in
    # every X[i] is smaller than eps
    
    return X
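
As a starting point for intuition (not the full method), the cell below sketches what a single Jacobi update looks like; the helper name jacobi_step and passing A and B separately are assumptions made here for illustration.

In [ ]:
def jacobi_step(A, B, x_old) :
    # One Jacobi update: each x_i is recomputed from the
    # previous values of all the other unknowns
    x_new = np.zeros_like(x_old, dtype=float)
    for i in range(A.shape[0]) :
        # Sum of A[i,j] * x_old[j] over all j except j == i
        off_diag = A[i, :] @ x_old - A[i, i] * x_old[i]
        x_new[i] = (B[i] - off_diag) / A[i, i]
    return x_new

# For example, one step starting from all zeros:
# x1 = jacobi_step(A, B, np.zeros(A.shape[0]))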