Finding the inverse of a matrix using Gauss-Jordan Elimination
(By 2012311909 Kim Si Jeong)
A variant of Gaussian elimination called Gauss–Jordan elimination can be used to find the inverse of a matrix, if it exists. If $A$ is an $n\times n$ square matrix, then one can use row reduction to compute its inverse, if it exists. First, the $n\times n$ identity matrix is augmented to the right of $A$, forming an $n\times 2n$ block matrix $[A | I]$. Then, through application of elementary row operations, one finds the reduced row echelon form of this $n\times 2n$ matrix. The matrix $A$ is invertible if and only if the left block can be reduced to the identity matrix $I$; in this case the right block of the final matrix is $A^{-1}$. If the algorithm is unable to reduce the left block to $I$, then $A$ is not invertible.
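The augment-and-reduce procedure described above can be sketched in Python. This is a minimal illustration, not an optimized routine: it uses `fractions.Fraction` for exact arithmetic (so the answer matches hand computation), and the function name `gauss_jordan_inverse` is my own choice.

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing the augmented block [A | I].

    Returns the inverse as a list of lists of Fractions; raises ValueError
    if A is singular (the left block cannot be reduced to I).
    """
    n = len(A)
    # Build the n x 2n augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot entry.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column's entries in every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The left block is now I, so the right block holds the inverse.
    return [row[n:] for row in M]
```

Each pass of the outer loop clears one column, which is exactly one stage of the row reduction carried out by hand below.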
For example, consider the following matrix
${\displaystyle A={\begin{bmatrix}2&-1&0\\-1&2&-1\\0&-1&2\end{bmatrix}}}$
To find the inverse of this matrix, one takes the following matrix augmented by the identity, and row reduces it as a $3\times 6$ matrix:
$[A|I]=\left[{\begin{array}{rrr|rrr}2&-1&0&1&0&0\\-1&2&-1&0&1&0\\0&-1&2&0&0&1\end{array}}\right]$
By performing row operations, one can check that the reduced row echelon form of this augmented matrix is:
$[I|B]=\left[{\begin{array}{rrr|rrr}1&0&0&{\frac {3}{4}}&{\frac {1}{2}}&{\frac {1}{4}}\\[3pt]0&1&0&{\frac {1}{2}}&1&{\frac {1}{2}}\\[3pt]0&0&1&{\frac {1}{4}}&{\frac {1}{2}}&{\frac {3}{4}}\end{array}}\right]$
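One can confirm that the right block $B$ above really is the inverse by multiplying it against $A$; with exact fractions the product comes out to the identity with no rounding. A quick check:

```python
from fractions import Fraction as F

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
B = [[F(3, 4), F(1, 2), F(1, 4)],
     [F(1, 2), F(1, 1), F(1, 2)],
     [F(1, 4), F(1, 2), F(3, 4)]]

# Matrix product B * A, computed entry by entry in exact arithmetic.
BA = [[sum(B[i][k] * A[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(BA)  # the 3x3 identity matrix
```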
One can think of each row operation as left-multiplication by an elementary matrix. Denoting by $B$ the product of these elementary matrices, the left block shows that $BA = I$, and therefore $B = A^{-1}$. On the right, we kept a record of $BI = B$, which is the desired inverse. This procedure for finding the inverse works for square matrices of any size.
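To see one instance of this correspondence concretely: the row operation $R_2 \leftarrow R_2 + \tfrac{1}{2}R_1$ on the example matrix $A$ is the same as multiplying $A$ on the left by the identity matrix with $\tfrac{1}{2}$ placed in position $(2,1)$. A small demonstration (the helper `matmul` is written here just for this check):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Product of two square matrices of the same size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(2), F(-1), F(0)],
     [F(-1), F(2), F(-1)],
     [F(0), F(-1), F(2)]]

# Elementary matrix for the row operation R2 <- R2 + (1/2) R1:
# the identity with 1/2 in position (2, 1).
E = [[F(1), F(0), F(0)],
     [F(1, 2), F(1), F(0)],
     [F(0), F(0), F(1)]]

print(matmul(E, A))  # row 2 becomes [0, 3/2, -1]
```

Multiplying together all such elementary matrices, in the order the operations are applied, yields exactly the matrix $B$ recorded in the right block.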
I will also provide a useful interactive applet for finding the inverse of a $4\times 4$ matrix.