Introduction to Problem Description
1.1 Matrix Inversion
The inverse of a matrix A is defined by A·A⁻¹ = I (where I is the identity matrix). A matrix is invertible if and only if it is square (i.e. N×N) and its determinant is non-zero.
There are many matrix inversion techniques, such as Gauss-Jordan Inversion, Cramer's Rule (co-factors), LU decomposition, singular value decomposition and blockwise inversion. However, all but Gauss-Jordan Inversion and LU decomposition are inefficient for large matrices and as such are rarely used in applications such as image processing and scientific data processing.
The classical use of an inverted matrix is to solve the linear system Ax = b, where:
A is a matrix representing the coefficients of the variables in a set of simultaneous equations,
x is the vector of variables to be solved for,
and b is the vector of solutions.
By using matrix inversion the solution, x = A⁻¹b, becomes a trivial matter of matrix-vector multiplication.
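As an illustration of solving a system this way, the sketch below uses NumPy; the 3×3 matrix A and vector b are invented for the example and are not taken from this project.

```python
import numpy as np

# Hypothetical coefficient matrix A and solution vector b for the
# system of simultaneous equations A x = b.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

A_inv = np.linalg.inv(A)  # valid only because det(A) != 0
x = A_inv @ b             # solving is now a matrix-vector multiplication

print(x)                  # x satisfies A @ x == b (up to rounding error)
```

Note that once A⁻¹ has been computed it can be reused for any number of right-hand sides b, which is one reason explicit inversion is attractive despite its cost.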
1.2 Gauss-Jordan Inversion
Gauss-Jordan Inversion is a two-stage process. First, Gaussian Elimination is performed, using elementary row operations to fill the lower triangle of the matrix (i.e. below the main diagonal) with zeros, leaving it in row echelon form. In the second stage the upper triangle is similarly reduced to zeros, bringing the matrix to reduced row echelon form. Meanwhile, a second matrix that starts as the Identity Matrix has each elementary row operation that was applied to the first matrix applied to it as well.
Take a 3×3 matrix A and an identity matrix:
Once the first matrix has been reduced to the identity, the second matrix contains the inverse of A.
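The two-stage procedure described above can be sketched in plain Python as follows. The function name and the example matrix are invented for illustration, and partial pivoting is omitted for clarity; production code would swap rows to avoid zero (or very small) pivots.

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination.

    Every elementary row operation applied to the working copy of the
    matrix is also applied to a second matrix that starts as the
    identity; that second matrix ends up holding the inverse.
    NOTE: no partial pivoting, so a zero pivot raises an error.
    """
    n = len(a)
    m = [row[:] for row in a]                                   # copy of A
    inv = [[1.0 if i == j else 0.0 for j in range(n)]           # identity
           for i in range(n)]

    # Stage 1: Gaussian Elimination -- zero the lower triangle.
    for col in range(n):
        pivot = m[col][col]
        if pivot == 0.0:
            raise ValueError("zero pivot; matrix may be singular")
        # Scale the pivot row so the diagonal entry becomes 1.
        m[col] = [v / pivot for v in m[col]]
        inv[col] = [v / pivot for v in inv[col]]
        for row in range(col + 1, n):
            f = m[row][col]
            m[row] = [v - f * p for v, p in zip(m[row], m[col])]
            inv[row] = [v - f * p for v, p in zip(inv[row], inv[col])]

    # Stage 2: zero the upper triangle, giving reduced row echelon form.
    for col in range(n - 1, 0, -1):
        for row in range(col - 1, -1, -1):
            f = m[row][col]
            m[row] = [v - f * p for v, p in zip(m[row], m[col])]
            inv[row] = [v - f * p for v, p in zip(inv[row], inv[col])]

    return inv  # m is now the identity matrix


# Hypothetical 3x3 example matrix.
A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 0.0]]
A_inv = gauss_jordan_inverse(A)
```

Each column is processed in O(N) row operations of O(N) work across N columns, which is where the O(N³) cost discussed below comes from.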
Gauss-Jordan Inversion is an O(N³) algorithm and hence any method that speeds up the computation of the problem is helpful. The ultimate goal of this project is to compute the task faster than previously available methods.