Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place. Commercial Alternative to JupyterHub.
Path: blob/master/notebooks/Chapter 3 - Determinant.ipynb
Visualization of Determinants
The physical interpretation of the determinant is the area enclosed by vectors. For instance, consider the matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. The determinant $\det A = ad - bc$ represents the area of the parallelogram formed by the column vectors $(a, c)$ and $(b, d)$.
Here we demonstrate with a diagonal matrix. It's also easy to see that the area formed by these two vectors is actually a rectangle.
If the matrix is not diagonal, the area takes the form of a parallelogram. Similarly, in 3D space, the determinant is the volume of a parallelepiped. First let's draw a parallelogram.
However, the determinant can be a negative number; this means the orientation of the area is flipped, like flipping a piece of paper.
What if the two vectors are linearly dependent? Then the area between them is zero.
Here's a plot of a parallelepiped, in case you are not sure how it looks.
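The parallelogram idea above can be sketched in a few lines of matplotlib. The vectors $u$ and $v$ below are hypothetical placeholders (the notebook's original code cells are not shown); the area of the shaded parallelogram equals the absolute value of the determinant of the matrix with $u$ and $v$ as columns.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Two hypothetical column vectors (stand-ins for the notebook's originals)
u = np.array([3.0, 1.0])
v = np.array([1.0, 2.0])

# The parallelogram has vertices 0, u, u+v, v; its area is |det([u v])|
A = np.column_stack([u, v])
area = abs(np.linalg.det(A))

fig, ax = plt.subplots()
verts = np.array([[0, 0], u, u + v, v])
ax.add_patch(plt.Polygon(verts, alpha=0.3))
ax.quiver([0, 0], [0, 0], [u[0], v[0]], [u[1], v[1]],
          angles="xy", scale_units="xy", scale=1)
ax.set_xlim(-1, 5)
ax.set_ylim(-1, 4)
ax.set_title(f"Parallelogram area = |det| = {area:.1f}")
fig.savefig("parallelogram.png")
```

Swapping in linearly dependent vectors (e.g. `v = 2 * u`) collapses the parallelogram to a line segment and the computed area becomes zero, matching the observation above.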
Computation of Determinants
For a $2 \times 2$ matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the algorithm of the determinant is
$$\det A = ad - bc$$
Now we experiment with SymPy
With defined symbols, the formulas for $2 \times 2$ and $3 \times 3$ determinants are
Anything larger than $3 \times 3$ becomes more complicated.
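The symbolic experiment described above can be sketched as follows; the symbol names are arbitrary choices, since the notebook's original code cells are not shown.

```python
import sympy as sy

# Symbols for a generic 2x2 matrix
a, b, c, d = sy.symbols('a b c d')
A2 = sy.Matrix([[a, b], [c, d]])
print(A2.det())  # a*d - b*c

# Symbols for a generic 3x3 matrix
e, f, g, h, i = sy.symbols('e f g h i')
A3 = sy.Matrix([[a, b, c], [d, e, f], [g, h, i]])
print(sy.expand(A3.det()))  # six terms, one per permutation
```

The $3 \times 3$ expansion already has $3! = 6$ terms; an $n \times n$ determinant expanded this way has $n!$ terms, which is why direct expansion quickly becomes impractical.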
Cofactor Expansion
The $(i, j)$-cofactor of $A$ is denoted as $C_{ij}$ and is given by $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor determinant obtained by excluding the $i$-th row and $j$-th column.
Let's learn from an example. Consider a $3 \times 3$ matrix $A$. Any determinant can be expanded along an arbitrary row or column. We will expand the determinant along the first row:
The scalars $a_{11}$, $a_{12}$, and $a_{13}$ in front of each minor determinant are the elements of the first row of $A$.
In general, the expansions across the $i$-th row or the $j$-th column are:
$$\det A = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in}$$
$$\det A = a_{1j} C_{1j} + a_{2j} C_{2j} + \cdots + a_{nj} C_{nj}$$
A SymPy Example of Determinant Expansion
Consider the matrix below and perform a cofactor expansion
Cofactor expansion along the column which has two zeros involves the least computational burden:
We can use the SymPy function `sy.matrices.matrices.MatrixDeterminant.minor(A, i, 1)` for calculating minors. We also define a function for cofactor expansion:
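A minimal sketch of such a function is below. The matrix and the function name `det_by_cofactor` are hypothetical stand-ins for the notebook's hidden code cell; it uses the method form `A.minor(i, j)`, which returns the $(i, j)$ minor determinant.

```python
import sympy as sy

def det_by_cofactor(A, col=0):
    """Compute det(A) by cofactor expansion along a chosen column.

    A.minor(i, j) returns the minor determinant M_ij, i.e. the
    determinant of A with row i and column j deleted.
    """
    n = A.rows
    if n == 1:
        return A[0, 0]
    return sum((-1) ** (i + col) * A[i, col] * A.minor(i, col)
               for i in range(n))

# Hypothetical example matrix with zeros scattered in it
A = sy.Matrix([[1, 0, 4],
               [2, 3, 0],
               [0, 5, 6]])
print(det_by_cofactor(A, col=1))  # expand along the second column
```

Expanding along the column with the most zeros skips those terms entirely, which is exactly the computational shortcut described above.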
It's easy to verify the expansion algorithm against SymPy's determinant evaluation function.
Actually you can experiment with any random matrix with multiple zeros; the function below has the parameter `percent=70`, which means $70\%$ of the elements are non-zero.
Calculate the determinant with our user-defined function, then verify the result using the determinant method `.det()`. We can see that cofactor expansion indeed works!
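The random-matrix experiment can be sketched as follows. The size, value range, and seed are arbitrary choices; the inline sum is equivalent to the user-defined cofactor expansion described above (here along the first column).

```python
import sympy as sy

# percent=70 keeps roughly 70% of the entries non-zero; the rest become 0
A = sy.randMatrix(5, min=-9, max=9, percent=70, seed=2024)
print(A)

# Cofactor expansion along the first column
expanded = sum((-1) ** i * A[i, 0] * A.minor(i, 0) for i in range(A.rows))

# Compare against SymPy's built-in determinant
print(expanded == A.det())
```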
Minor matrices can also be extracted by using the method `.minor_submatrix()`; for instance, we can extract the minor submatrix of any entry $a_{ij}$.
The cofactor matrix is the matrix containing all cofactors of the original matrix, and the function `.cofactor_matrix()` can easily produce this type of matrix.
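A short sketch of both methods, using a hypothetical example matrix:

```python
import sympy as sy

A = sy.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 10]])

# Minor submatrix of a_{01}: delete row 0 and column 1
M01 = A.minor_submatrix(0, 1)
print(M01)  # Matrix([[4, 6], [7, 10]])

# Cofactor matrix: entry (i, j) is (-1)**(i + j) times the minor determinant
C = A.cofactor_matrix()
print(C)
```

Note the distinction: `.minor_submatrix(i, j)` returns a matrix, `.minor(i, j)` returns its determinant, and `.cofactor(i, j)` additionally applies the $(-1)^{i+j}$ sign.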
Triangular Matrix
If $A$ is a triangular matrix, cofactor expansion can be applied repeatedly; the outcome is the product of the elements on the principal diagonal:
$$\det A = \prod_{i=1}^{n} a_{ii}$$
where $a_{ii}$ is the $i$-th diagonal element.
Here is the proof for an upper triangular matrix. Start with
$$\det A = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{vmatrix}$$
Cofactor expanding along the first column, only $a_{11}$ contributes, and its sign is always positive because $(-1)^{1+1} = 1$. Continue the cofactor expansion on the remaining $(n-1) \times (n-1)$ upper triangular minor. Iterating the expansion, eventually
$$\det A = a_{11} a_{22} \cdots a_{nn}$$
Now let's verify with a numeric example: generate a random upper triangular matrix.
Compute the determinant with `np.linalg.det`, then extract the diagonal with `np.diag()` and calculate its product. We should expect the same results.
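The verification above can be sketched as follows; the matrix size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
# Random upper triangular matrix: zero out everything below the diagonal
A = np.triu(rng.integers(1, 10, size=(5, 5)).astype(float))

det_np = np.linalg.det(A)          # LU-based determinant
det_diag = np.prod(np.diag(A))     # product of the diagonal elements

print(det_np, det_diag)
print(np.isclose(det_np, det_diag))  # agree up to floating-point error
```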
Properties of Determinants
Determinants have a long list of properties, but they are mostly derived from cofactor expansion. There is no need to memorize them.
1. Let $A$ be an $n \times n$ square matrix. If one row of $A$ is multiplied by $k$ to produce the matrix $B$, then $\det B = k \det A$.
2. Let $A$ be an $n \times n$ square matrix. If two rows of $A$ are interchanged to produce a matrix $B$, then $\det B = -\det A$.
3. Let $A$ be an $n \times n$ square matrix. If a multiple of one row of $A$ is added to another row to produce the matrix $B$, then $\det B = \det A$.
4. If $A$ is an $n \times n$ matrix, then $\det A^{T} = \det A$.
5. A square matrix $A$ is invertible if and only if $\det A \neq 0$.
6. If $A$ and $B$ are $n \times n$ matrices, then $\det(AB) = \det A \, \det B$.
7. If $A$ is an $n \times n$ matrix and $k$ is a scalar, then $\det(kA) = k^{n} \det A$.
8. If $A$ is an invertible square matrix, then $\det(A^{-1}) = \dfrac{1}{\det A}$.
All of these properties are straightforward. The key is to demonstrate them using cofactor expansion. Here are some casual proofs.
Proof of property 6: write $A$ as a product of elementary matrices; then $\det(AB) = \det A \, \det B$ follows by applying properties 1 through 3 to each elementary factor in turn.
Proof of property 7:
By property 1, if one row of $A$ is multiplied by $k$ to produce $B$, then $\det B = k \det A$. Forming $kA$ multiplies all $n$ rows of $A$ by $k$, so there are $n$ factors of $k$ in front of $\det A$, which is
$$\det(kA) = k^{n} \det A$$
Proof of property 8: since $A A^{-1} = I$, property 6 gives $\det A \, \det(A^{-1}) = \det I = 1$, hence $\det(A^{-1}) = \dfrac{1}{\det A}$.
These properties are useful in the analytical derivation of other theorems; however, they are not efficient for numerical computation.
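Even though these properties are proved analytically, it is easy to spot-check them numerically. The sketch below verifies the transpose, product, scalar-multiple, and inverse properties (6, 7, and 8 as referenced in the proofs above) on random matrices; the sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
k, n = 3.0, 4

det = np.linalg.det
assert np.isclose(det(A.T), det(A))                   # det(A^T) = det(A)
assert np.isclose(det(A @ B), det(A) * det(B))        # property 6
assert np.isclose(det(k * A), k ** n * det(A))        # property 7
assert np.isclose(det(np.linalg.inv(A)), 1 / det(A))  # property 8
print("all properties verified")
```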
Cramer's Rule
If a linear system has $n$ equations and $n$ variables, an algorithm called Cramer's Rule can solve the system in terms of determinants, as long as the solution is unique.
Some convenient notations are introduced here:
For any $n \times n$ matrix $A$ and vector $b$ in $\mathbb{R}^n$, denote by $A_i(b)$ the matrix obtained from replacing the $i$-th column of $A$ by $b$.
Cramer's Rule can solve for each $x_i$ without solving the whole system:
$$x_i = \frac{\det A_i(b)}{\det A}, \qquad i = 1, 2, \dots, n$$
To see why, note that $A \, I_i(x) = A_i(b)$, where $I_i(x)$ is an identity matrix whose $i$-th column is replaced by $x$. With the product property of determinants,
$$\det A \, \det I_i(x) = \det A_i(b)$$
and $\det I_i(x) = x_i$ can be shown by cofactor expansion.
A NumPy Example On Cramer's Rule
Consider the system
You surely know several ways to solve it, but let's test if Cramer's rule works.
Input the matrices into NumPy arrays.
According to Cramer's rule:
We can verify the results with the NumPy built-in function `np.linalg.solve`.
Or in a straightforward way
All results are the same!
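The procedure above can be sketched as follows. The $3 \times 3$ system here is a hypothetical stand-in, since the notebook's original system lived in a code cell that is not shown.

```python
import numpy as np

# Hypothetical system Ax = b
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i(b)) / det(A)."""
    detA = np.linalg.det(A)
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b  # replace the i-th column of A by b
        x[i] = np.linalg.det(Ai) / detA
    return x

x = cramer_solve(A, b)
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```

Note that this computes $n + 1$ determinants, which is exactly why the flop count grows so much faster than elimination-based solvers.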
However, remember that Cramer's rule is rarely used in practice for solving systems of equations, as its computational cost (measured by the number of floating-point operations, or flops) is much higher than that of Gauss-Jordan elimination.
A Determinant Formula For Inverse Matrix
An alternative formula for $A^{-1}$ is
$$A^{-1} = \frac{1}{\det A} \operatorname{adj} A$$
where the matrix of cofactors on the RHS, $\operatorname{adj} A$, is the adjugate matrix; the SymPy function is `sy.matrices.matrices.MatrixDeterminant.adjugate`. The adjugate is the transpose of the cofactor matrix, which we computed using `sy.matrices.matrices.MatrixDeterminant.cofactor_matrix`.
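A minimal sketch of both relations, using the method forms `A.adjugate()` and `A.cofactor_matrix()` on a hypothetical integer matrix:

```python
import sympy as sy

A = sy.Matrix([[2, 1, 1],
               [1, 3, 2],
               [1, 0, 0]])

adj = A.adjugate()
# The adjugate is the transpose of the cofactor matrix
print(adj == A.cofactor_matrix().T)  # True

# Inverse formula: A^{-1} = adj(A) / det(A)
print(adj / A.det() == A.inv())      # True
```

With exact integer entries SymPy keeps everything in rational arithmetic, so the two sides match exactly rather than merely up to rounding.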
A SymPy Example
Generate a random matrix with a certain percentage of zero elements.
Compute the adjugate matrix
We can verify whether this is really the adjugate of $A$ by picking an element and computing its cofactor. The adjugate is the transpose of the cofactor matrix, so we reverse the row and column indices when referring to the elements. As we have shown, it is the true adjugate matrix.
Next we will check if this inverse formula works with SymPy's function.
The function `sy.N()` converts the result to a float approximation, i.e. if you don't like fractions.
Now again, we can verify the results with `.inv()`, or we can show that their difference is the zero matrix.
So the determinant formula for the inverse indeed works perfectly.
Short Proof of the Inverse Formula With Determinants
We define $x_j$ as the $j$-th column of $A^{-1}$, which satisfies
$$A x_j = e_j$$
where $e_j$ is the $j$-th column of an identity matrix, and the $i$-th entry of $x_j$ is the $(i, j)$-entry of $A^{-1}$. By Cramer's rule,
$$(A^{-1})_{ij} = \frac{\det A_i(e_j)}{\det A}$$
In the cofactor expansion along column $i$ of $A_i(e_j)$, only the entry $1$ in row $j$ contributes, so
$$\det A_i(e_j) = (-1)^{i+j} M_{ji} = C_{ji}$$
Therefore $(A^{-1})_{ij} = \dfrac{C_{ji}}{\det A}$, which is exactly $A^{-1} = \dfrac{1}{\det A} \operatorname{adj} A$.