Gauss Elimination works in two phases: the first is the elimination phase, where the augmented input matrix is transformed into an upper triangular matrix; the second is the backsubstitution phase, where the solution is constructed from that upper triangular matrix. The solution vector is obtained backwards, from the last unknown $x_n$ to the first $x_1$.
Code snippets are given below to guide you. Test code is also given so that you can verify if your program is working correctly.
import numpy as np
np.set_printoptions(precision=2, suppress=True) # Make printing less messy
# Here is where you need to write your code
def gauss_eliminate (A) :
    if A.ndim != 2 :
        return False
    sz = A.shape[0]
    for r in range(sz) :
        pivot = A[r,r] # Select diagonal element to eliminate values in rows below
        if pivot == 0 :
            print('Condition for pivot = 0 not implemented')
            return False
        # Uncomment the lines below and fill in your code
        # for ... :
        #
        #
        #
        #
        # end of inner for
    # end of outer for
    return True
def gauss_substitute(A) :
    xvec = np.zeros(A.shape[0])
    #
    # Fill in the code for backsubstitution here
    #
    return xvec
# This code is for giving input. Input is a random matrix A where the coefficients
# of the x's are in the range -2 to +5. The RHS of the equations, i.e., b, has values
# in the range -10 to +35.
N = int(input('Enter number of Equations: '))
A = np.zeros((N,N+1))
A[:,:-1] = np.random.uniform(-2, 5, (N,N))
A[:,-1] = np.random.uniform(-10,35,N)
B = A.copy() # So that we preserve the original
print(A)
gauss_eliminate(B)
print('Matrix after doing elimination step')
print(B)
x = gauss_substitute(B) # Call the backward substitution step
print('-------------- Solution: ')
print(x)
# This piece of code verifies that your solution is correct.
# It multiplies the solution vector with the coefficients given as input and
# checks if the values obtained match the b vector.
S = [x.dot(A[c,:-1]) for c in range(A.shape[0])]
if np.allclose(A[:,-1], S, atol=1e-6, rtol=0.1) :
    print('Solution ', x, ' is within the allowed tolerance of b')
else :
    print('Errors in Solution')
    print('Original Values: ', A[:,-1])
    print('Obtained Values: ', S)
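Once you have filled in the skeleton above, you can compare your code against the minimal sketch below of the two phases described at the start of this section. It uses no pivoting (like the skeleton, it simply gives up on a zero pivot), and the names ref_eliminate and ref_substitute are illustrative, not part of the assignment.
import numpy as np

def ref_eliminate(A) :
    # Reduce the augmented matrix A (n x n+1) to upper triangular form in place
    n = A.shape[0]
    for r in range(n) :
        pivot = A[r,r]
        if pivot == 0 :
            return False # same limitation as the skeleton above
        for k in range(r+1, n) : # eliminate the entries below the pivot
            A[k,:] -= (A[k,r] / pivot) * A[r,:]
    return True

def ref_substitute(A) :
    # Back-substitute on the upper triangular augmented matrix A
    n = A.shape[0]
    x = np.zeros(n)
    for r in range(n-1, -1, -1) : # from the last unknown back to the first
        x[r] = (A[r,-1] - A[r,r+1:n].dot(x[r+1:])) / A[r,r]
    return x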
Jacobi's method is a fast iterative method for solving large systems of equations, typically those with more than 100 unknowns. It is based on the following update step: $$ x_{k+1} = D^{-1}(b - A_{off}x_k) $$ where $x_{k}$ is the approximate solution at the $k$th iteration, $A_{off}$ is the $A$ matrix with 0's on the diagonal (i.e., it contains only the off-diagonal elements of $A$), and $D$ is a diagonal matrix containing only the diagonal elements of $A$. The iterations stop when any of the standard stopping conditions (discussed in class) is satisfied.
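The update step above translates almost directly into NumPy. The sketch below is one minimal way to write it; the function name jacobi_solve, the zero initial guess, and the stopping test (change between successive iterates below a tolerance, with a cap on the number of iterations) are assumptions for illustration, not the only valid choices.
import numpy as np

def jacobi_solve(A, b, tol=1e-8, max_iter=500) :
    D = np.diag(A) # vector of the diagonal elements of A
    A_off = A - np.diag(D) # A with 0's on the diagonal
    x = np.zeros_like(b, dtype=float) # initial guess x_0 = 0
    for _ in range(max_iter) :
        x_new = (b - A_off.dot(x)) / D # x_{k+1} = D^{-1} (b - A_off x_k)
        if np.allclose(x_new, x, atol=tol) : # stop when the iterates stop changing
            return x_new
        x = x_new
    return x # return the last iterate if the cap is reached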
LU Decomposition is useful when only the $b$ vector changes. The matrix $A$ is factorized into two matrices, a lower triangular matrix $L$ and an upper triangular matrix $U$, so that the linear system of equations becomes $$ (LU)x = b. $$ Writing $y = Ux$, the system is solved in two triangular steps: first solve $Ly = b$ for $y$ (by forward substitution, since $L$ is lower triangular), then solve $Ux = y$ for $x$ (by back substitution).
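A minimal sketch of this factor-then-substitute approach is given below. It assumes no pivoting is needed (i.e., every pivot encountered is non-zero), and the names lu_decompose and lu_solve are illustrative.
import numpy as np

def lu_decompose(A) :
    n = A.shape[0]
    L = np.eye(n) # unit lower triangular factor
    U = A.astype(float).copy()
    for r in range(n) :
        for k in range(r+1, n) :
            factor = U[k,r] / U[r,r] # assumes a non-zero pivot
            L[k,r] = factor # store the elimination multiplier in L
            U[k,:] -= factor * U[r,:] # eliminate below the pivot in U
    return L, U

def lu_solve(L, U, b) :
    n = b.shape[0]
    y = np.zeros(n)
    for r in range(n) : # forward substitution: L y = b
        y[r] = b[r] - L[r,:r].dot(y[:r])
    x = np.zeros(n)
    for r in range(n-1, -1, -1) : # back substitution: U x = y
        x[r] = (y[r] - U[r,r+1:].dot(x[r+1:])) / U[r,r]
    return x
Once $L$ and $U$ have been computed, each new right-hand side only needs the two cheap substitution passes, which is why the factorization pays off when only $b$ changes.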
Use the following linear system $Cx = d$ to test your program for the Jacobi Method:
C = np.array([
    [4., 0.5, 1., -1.],
    [-1., 4., -1., -1.],
    [1., -1., 6., -3.],
    [-2., -1., 1., 8.]
])
d = np.array([37., 11., 0., 1.])
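Assuming the jacobi_solve sketch given earlier, the test system can then be checked as follows. Since $C$ is strictly diagonally dominant, the Jacobi iteration is guaranteed to converge for it.
x = jacobi_solve(C, d) # uses C and d defined above
print(x)
print(np.allclose(C.dot(x), d, atol=1e-6)) # should print True once the iteration has converged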