We now see how to get these two matrices using the intermediate steps of Gaussian elimination. Remember, we eliminate the unknowns one at a time. To eliminate $x_1$, we find multiplication factors that make all the elements of the first column below the pivot equal to zero.
While we do the Gaussian elimination, let us keep track of the multiplication factors in each column in a matrix called the multiplication factor matrix (the variable MFM in the code below).
The elimination must be done using the following formula instead of the one used in Gaussian elimination; this ensures the resulting row-echelon matrix is the desired $U$.
As the multiplication factors are used to eliminate only entries below the diagonal in the $A$ portion of the augmented matrix, the matrix containing all the multiplication factors, with 1s on the diagonal, is a lower triangular matrix. This is the desired $L$.
Where do we get the $U$ matrix from? Well, we already have it! It is the row-echelon matrix from the Gaussian elimination method without the last column. Remember, the last column comes from the matrix $B$ and has nothing to do with $A$. Thus, we have both the $L$ and $U$ matrices from one single run of the Gaussian elimination method.
You can verify that these are indeed the required $L$ and $U$ matrices by multiplying them. The product is the $A$ matrix.
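To see this on a small case, here is a hand-worked $2\times2$ example (a standalone sketch, separate from the skeleton below):

```python
import numpy as np

# Hand-worked 2x2 example: eliminating row 2 of A uses the
# multiplication factor 6/4 = 1.5, which becomes L[1, 0].
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L = np.array([[1.0, 0.0],
              [1.5, 1.0]])   # factors below the diagonal, 1s on it
U = np.array([[4.0, 3.0],
              [0.0, -1.5]])  # row-echelon form after elimination
print(np.allclose(L.dot(U), A))  # True: the product recovers A
```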
The code skeleton is given in the cells below. Use the following steps to complete the code. In the code, matrix $C$ is given by the variable C.
The code skeleton is given in the cell below. The function takes a coefficient matrix $A$ as its argument and returns the $L$ and $U$ matrices.
import numpy as np
np.set_printoptions(precision=2, suppress=True)
def LU_decomposition(eqns, debug=False):
    nrows = eqns.shape[0]
    ncols = eqns.shape[1]
    temp = eqns.copy()
    MFM = np.eye(nrows)                  # Step 2: Done for you
    for r_piv in range(ncols):           # Repeat step
        # The diagonal element cannot be 0 in this
        # simple algorithm. Check for that.
        if temp[r_piv, r_piv] != 0:      # Part of Step 1
            # Compute the multiplication factors and
            # store them in the MFM matrix       # Step 4
            for row in range(r_piv + 1, nrows):
                k = temp[row, r_piv] / temp[r_piv, r_piv]
                temp[row] = temp[row] - k * temp[r_piv]
                MFM[row, r_piv] = k
    return MFM, temp
A = np.array([[6, 5, 6, 4, 4, 7],
              [5, 8, 4, 2, 2, 4],
              [7, 7, 1, 4, 3, 2],
              [5, 4, 7, 3, 3, 1],
              [4, 7, 2, 4, 3, 6],
              [5, 6, 1, 8, 2, 7]], dtype='float')
L, U = LU_decomposition(A)
print('Verification: ')
print(L.dot(U))
We use the following steps to solve such problems.
def fwd_substitute(eqns):
    nrows = eqns.shape[0]
    ncols = eqns.shape[1]
    sol = np.zeros((nrows, 1))
    for var_to_compute in range(nrows):        # Step 2
        eqn_num = var_to_compute
        # Check to see if the coefficient is 0. We cannot
        # solve the system of equations in that case.
        if eqns[eqn_num, var_to_compute] == 0:
            print('Unique solution does not exist!')
            return False
        rhs = eqns[eqn_num, ncols - 1]                # Step 3
        # Entries of sol beyond var_to_compute are still 0, so this
        # dot product picks up only the already-solved variables.
        psum = eqns[eqn_num, :ncols - 1].dot(sol)     # Step 4
        rhs = rhs - psum                              # Step 5
        sol[var_to_compute] = rhs / eqns[eqn_num, var_to_compute]  # Step 6
    return sol
###################################################
# PASTE back_substitute() CODE HERE! #
###################################################
B = np.array([[92, 91, 88, 56, 89, 91]])
print(B)
C = np.concatenate((L, B.T), axis=1)
Y = fwd_substitute(C)
print(Y)
print('############# ANSWER ######################')
ans = back_substitute(np.concatenate((U, Y), axis=1))
print(ans.T)
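To double-check the final answer, one can compare against NumPy's built-in direct solver; the sketch below re-declares the same system so it runs on its own:

```python
import numpy as np

# Independent reference: solve the same 6x6 system directly
# and confirm that the residual A x - B^T vanishes.
A = np.array([[6, 5, 6, 4, 4, 7],
              [5, 8, 4, 2, 2, 4],
              [7, 7, 1, 4, 3, 2],
              [5, 4, 7, 3, 3, 1],
              [4, 7, 2, 4, 3, 6],
              [5, 6, 1, 8, 2, 7]], dtype='float')
B = np.array([[92, 91, 88, 56, 89, 91]], dtype='float')
x = np.linalg.solve(A, B.T)
print(np.allclose(A.dot(x), B.T))  # True: residual vanishes
```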
Jacobi is a fast iterative method for certain systems of equations. The iterative Jacobi method works particularly well when the coefficient matrix $A$ is diagonally dominant.
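Strict diagonal dominance means each diagonal entry exceeds, in absolute value, the sum of the absolute values of the other entries in its row. A small helper to check this (a sketch; `is_diagonally_dominant` is a hypothetical name, not part of the assignment code):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row."""
    diag = np.abs(np.diagonal(A))
    off_row_sums = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_row_sums))

print(is_diagonally_dominant(np.array([[4.0, 1.0], [2.0, 5.0]])))  # True
print(is_diagonally_dominant(np.array([[1.0, 3.0], [2.0, 5.0]])))  # False
```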
The iterative Jacobi method starts with an initial guess of the solution, $X^0$. The update equation is given by
$$
X^{t+1} = D^{-1}\left(B - A_{off}X^{t}\right)
$$
where $X^t$ is the 'solution' at iteration $t$. The matrices $A_{off}$ and $D$ are derived from the $A$ matrix: $A_{off}$ contains the off-diagonal elements of $A$ (with zeros on the diagonal), and $D$ is the diagonal matrix containing the diagonal elements of $A$.
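To make the update concrete, here is a single Jacobi iteration on a tiny $2\times2$ system (a standalone sketch; the system itself is made up for illustration, and the names mirror the skeleton below):

```python
import numpy as np

# One Jacobi update on a 2x2 system, following the formula above.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
B = np.array([[9.0],
              [13.0]])
D = np.diagonal(A) * np.eye(2)              # diagonal part of A
A_off = A - D                               # off-diagonal part of A
x = np.zeros((2, 1))                        # initial guess X^0 = 0
x = np.linalg.inv(D).dot(B - A_off.dot(x))  # X^1 = D^-1 (B - A_off X^0)
print(x.T)                                  # first iterate: 2.25, 2.6
```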
The code skeleton is given below, including the creation of the two matrices $A_{off}$ and $D$. The only remaining piece is the update step.
def jacobi(A, B, eps=1e-3, debug=False):
    nrows = A.shape[0]
    ncols = A.shape[1]
    sol = np.zeros((nrows, 1))
    # Create a square matrix with only diagonal elements of A
    D = np.diagonal(A) * np.eye(nrows)
    # Remove diagonal elements from A to get A_off
    A_off = A - D
    change = 1.0                        # Criterion to stop iteration
    while change > eps:
        psol = sol.copy()
        # Update step: X^{t+1} = D^-1 (B - A_off X^t)
        sol = np.linalg.inv(D).dot(B.T - A_off.dot(sol))
        change = abs(sol - psol).max()  # Compute the change from
                                        # previous values
        if debug:
            print('Current Soln: ', sol.T)
    return sol
A = np.array([[12, -1, -1, -1, -1, -1, -1, -3, -1, -1],
              [-1, 10, -1, -1, -1, -3, -1, -1, -1, -1],
              [-1, -1, 7, 0, 0, 0, -2, 1, 1, 1],
              [-2, 1, -1, 15, -5, -1, 0, -2, 3, -1],
              [1, 1, 1, 1, -22, 1, 1, 1, -4, 1],
              [-3, -3, 0, -1, 0, 9, 0, 0, 0, 0],
              [5, -3, -2, -2, 0, 3, 15, -1, -3, -1],
              [1, -2, 1, -2, 1, -2, 1, 25, -1, 1],
              [1, 1, 1, 1, 1, -1, -1, -1, 18, -1],
              [-1, -1, -1, -1, 1, 1, 1, 1, 1, 10]], dtype='float')
B = np.array([[70, 0, 32, 5, 22, 0, 106, 30, 5, 69]], dtype='float')
ans = jacobi(A, B)
print(ans.T)
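As a cross-check on the iterative answer, the same system can be handed to NumPy's direct solver; the sketch below re-declares $A$ and $B$ so it runs on its own:

```python
import numpy as np

# Reference solution for the 10x10 system above, computed with
# NumPy's direct solver, and a residual check on the result.
A = np.array([[12, -1, -1, -1, -1, -1, -1, -3, -1, -1],
              [-1, 10, -1, -1, -1, -3, -1, -1, -1, -1],
              [-1, -1, 7, 0, 0, 0, -2, 1, 1, 1],
              [-2, 1, -1, 15, -5, -1, 0, -2, 3, -1],
              [1, 1, 1, 1, -22, 1, 1, 1, -4, 1],
              [-3, -3, 0, -1, 0, 9, 0, 0, 0, 0],
              [5, -3, -2, -2, 0, 3, 15, -1, -3, -1],
              [1, -2, 1, -2, 1, -2, 1, 25, -1, 1],
              [1, 1, 1, 1, 1, -1, -1, -1, 18, -1],
              [-1, -1, -1, -1, 1, 1, 1, 1, 1, 10]], dtype='float')
B = np.array([[70, 0, 32, 5, 22, 0, 106, 30, 5, 69]], dtype='float')
x = np.linalg.solve(A, B.T)
print(np.allclose(A.dot(x), B.T))  # True: the residual vanishes
```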
Solve the same system using Gaussian elimination. Did it work?