Math 208 Interactive Notebooks © 2024 by Soham Bhosale, Sara Billey, Herman Chau, Zihan Chen, Isaac Hartin Pasco, Jennifer Huang, Snigdha Mahankali, Clare Minnerath, and Anna Willis is licensed under CC BY-ND 4.0
Chapter 8: Orthogonality
Originally created by Clare Minnerath
This Jupyter notebook is part of a linear algebra tutorial created by Sara Billey's 2024 WXML group. It follows Chapter 8 of Holt's Linear Algebra with Applications [1], which explores orthogonality and new ways to decompose a matrix into more easily understood chunks. It is intended as a supplement to the UW Math 208 course, and the reader is encouraged to explore beyond the scope of this notebook as interested.
Note: This notebook primarily uses Sage's linear algebra libraries. However, the Singular Value Decomposition (SVD) application to image compression utilizes Python's numpy library. Also, methods such as Gram-Schmidt and QR factorization are available as built-in Sage functions, but they are written out in this tutorial so they can be compared with the text's algorithms.
1. Holt, J. (2017). Linear Algebra with Applications.
2. Cat image from Clare Minnerath: image of family cat Egg.
3. Dog image from Sara Billey: image of painting by Sara's uncle of her dog Valentine.
Two vectors in $\mathbb{R}^n$ are orthogonal if their dot product is equal to 0. The magnitude (or norm) of a vector $\mathbf{v}$ is given by $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$.
Since the Pythagorean Theorem only holds for right triangles, it follows that $\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2$ if and only if $\mathbf{u} \cdot \mathbf{v} = 0$. The Triangle Inequality, $\|\mathbf{u} + \mathbf{v}\| \leq \|\mathbf{u}\| + \|\mathbf{v}\|$, holds for all vectors $\mathbf{u}$ and $\mathbf{v}$.
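These facts are easy to check in Sage. The vectors below are chosen for illustration:

```
u = vector(QQ, [1, 2, 2])
v = vector(QQ, [2, 1, -2])

print(u.dot_product(v))   # 0, so u and v are orthogonal
print(u.norm())           # sqrt(1 + 4 + 4) = 3
print(bool((u + v).norm()^2 == u.norm()^2 + v.norm()^2))   # True (Pythagorean Theorem)
print(bool((u + v).norm() <= u.norm() + v.norm()))         # True (Triangle Inequality)
```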
An orthogonal basis, $\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$, is a basis for a subspace such that all of the basis vectors are orthogonal ($\mathbf{v}_i \cdot \mathbf{v}_j = 0$ whenever $i \neq j$). Given such a basis, any vector, $\mathbf{u}$, in the subspace can be expressed as $\mathbf{u} = c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k$ where $c_i = \frac{\mathbf{u} \cdot \mathbf{v}_i}{\mathbf{v}_i \cdot \mathbf{v}_i}$. Recalling that $\mathbf{v}_i \cdot \mathbf{v}_i = \|\mathbf{v}_i\|^2$, we can also express the formula as $\mathbf{u} = \frac{\mathbf{u} \cdot \mathbf{v}_1}{\|\mathbf{v}_1\|^2}\mathbf{v}_1 + \cdots + \frac{\mathbf{u} \cdot \mathbf{v}_k}{\|\mathbf{v}_k\|^2}\mathbf{v}_k$.
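For instance, here is a quick Sage computation of the coefficients $c_i$ for a vector in the plane spanned by an orthogonal pair $\mathbf{v}_1, \mathbf{v}_2$ (the vectors are illustrative choices):

```
v1 = vector(QQ, [1, 1, 0])
v2 = vector(QQ, [1, -1, 0])
u  = vector(QQ, [3, 5, 0])    # u lies in span{v1, v2}

coeffs = [u.dot_product(v) / v.dot_product(v) for v in [v1, v2]]
print(coeffs)                               # [4, -1]
print(coeffs[0]*v1 + coeffs[1]*v2 == u)     # True
```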
The Gram-Schmidt Process is a method for transforming a basis for a subspace into an orthogonal basis for that subspace. The following function performs the algorithm; more details can be found in section 8.2 of [1].
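Here is one way to write it, following the projection formula above (the name gram_schmidt_basis and the example basis are illustrative choices; Sage's built-in gram_schmidt would do the same job):

```
def gram_schmidt_basis(vectors):
    """Transform a basis (a list of Sage vectors) into an orthogonal
    basis for the same subspace."""
    orthogonal = []
    for x in vectors:
        w = x
        for v in orthogonal:
            # subtract the projection of x onto each vector found so far
            w = w - (x.dot_product(v) / v.dot_product(v)) * v
        orthogonal.append(w)
    return orthogonal

basis = [vector(QQ, [1, 1, 0]), vector(QQ, [1, 0, 1])]
print(gram_schmidt_basis(basis))   # [(1, 1, 0), (1/2, -1/2, 1)]
```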
An orthogonal matrix is a square matrix with unit length mutually orthogonal columns as well as unit length mutually orthogonal rows. A matrix, $A$, is orthogonally diagonalizable if there exists an orthogonal matrix, $P$, and a diagonal matrix, $D$, such that $A = PDP^T$. The Spectral Theorem from section 8.3 of [1] tells us that a matrix is orthogonally diagonalizable if and only if the matrix is symmetric. What a result! The following function takes a symmetric matrix and outputs corresponding orthogonal and diagonal matrices. In particular, this function will make use of the previous Gram-Schmidt process.
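A sketch of such a function is below, reusing gram_schmidt_basis from above to orthogonalize within each eigenspace (the function name and example matrix are illustrative choices):

```
def orthogonal_diagonalization(A):
    """Given a symmetric matrix A, return an orthogonal P and diagonal D
    with A == P * D * P.transpose()."""
    P_cols, diag_entries = [], []
    for eigenvalue, eigvecs, _ in A.eigenvectors_right():
        # eigenvectors for distinct eigenvalues of a symmetric matrix are
        # already orthogonal; Gram-Schmidt handles repeated eigenvalues
        for v in gram_schmidt_basis(list(eigvecs)):
            P_cols.append(v / v.norm())     # normalize to unit length
            diag_entries.append(eigenvalue)
    return matrix(P_cols).transpose(), diagonal_matrix(diag_entries)

A = matrix(QQ, [[2, 1], [1, 2]])
P, D = orthogonal_diagonalization(A)
print(P * D * P.transpose() == A)   # True
```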
In the more general case where we have an $m \times n$ matrix $A$ with linearly independent columns, we can factorize $A$ as $A = QR$ where $Q$ is an orthonormal matrix and $R$ is an upper triangular matrix with positive diagonal entries. This method is known as QR Factorization. The following function takes a matrix with linearly independent columns and outputs the corresponding orthonormal and upper triangular matrices. See section 8.3 of [1] for more details.
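One way to write this function, again applying the Gram-Schmidt idea column by column (names and the example matrix are illustrative):

```
def qr_factorization(A):
    """Given a matrix A with linearly independent columns, return (Q, R)
    with Q having orthonormal columns, R upper triangular with positive
    diagonal entries, and A == Q * R."""
    orthonormal = []
    for x in A.columns():
        w = x
        for q in orthonormal:
            w = w - x.dot_product(q) * q    # remove components along earlier columns
        orthonormal.append(w / w.norm())
    Q = matrix(orthonormal).transpose()
    R = Q.transpose() * A                   # R[i,j] = q_i . a_j, zero below the diagonal
    return Q, R

A = matrix(QQ, [[1, 1], [1, 0], [0, 1]])
Q, R = qr_factorization(A)
print(Q * R == A)   # True
```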
An even more general factorization is singular value decomposition (SVD). SVD can be applied to any matrix $A$ and decomposes the matrix into $A = U\Sigma V^T$, where $U$ is an orthogonal matrix, $\Sigma$ is a diagonal matrix padded with 0's, and $V$ is an orthogonal matrix. More specific details on the decomposition can be found in section 8.4 of [1]. The following function implements SVD for any matrix.
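Below is a simplified sketch built from the eigen-decomposition of $A^T A$, restricted to matrices with linearly independent columns so that it produces the "thin" SVD; handling rank-deficient matrices requires padding $U$ and $\Sigma$ as described in section 8.4 of [1]. It reuses gram_schmidt_basis from above, and the example matrix is an illustrative choice:

```
def thin_svd(A):
    """Sketch of A == U * S * V.transpose() for a matrix A with linearly
    independent columns, via the eigen-decomposition of A^T A."""
    AtA = A.transpose() * A
    V_cols, sigmas = [], []
    # largest eigenvalues first, so the singular values come out decreasing
    for eigenvalue, eigvecs, _ in sorted(AtA.eigenvectors_right(),
                                         key=lambda t: -t[0]):
        for v in gram_schmidt_basis(list(eigvecs)):
            V_cols.append(v / v.norm())
            sigmas.append(sqrt(eigenvalue))
    V = matrix(V_cols).transpose()
    S = diagonal_matrix(sigmas)
    U = A * V * S.inverse()                 # columns u_i = A*v_i / sigma_i
    return U, S, V

A = matrix(QQ, [[3, 1], [1, 3], [0, 0]])
U, S, V = thin_svd(A)
print(U * S * V.transpose() == A)   # True
```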
SVD has applications in image compression. We can express an $m \times n$ matrix, $A$, with the following sum: $A = \sigma_1 \mathbf{u}_1 \mathbf{v}_1^T + \sigma_2 \mathbf{u}_2 \mathbf{v}_2^T + \cdots + \sigma_r \mathbf{u}_r \mathbf{v}_r^T$, where $\mathbf{u}_i$ is a column vector of $U$, $\mathbf{v}_i^T$ is a row vector of $V^T$, and the $\sigma_i$ come from the diagonal of $\Sigma$ in the decomposition of $A$. The product used between $\mathbf{u}_i$ and $\mathbf{v}_i^T$ in the expansion is the outer product. Since the $\sigma_i$ are decreasing, taking the sum with only the first $k$ terms can still give us a good approximation of the original matrix. See section 8.4 of [1] for more details. We show the results of this application below utilizing Python's numpy library.
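As a sketch of the idea in NumPy (a random matrix stands in here for the grayscale pixel data of the cat and dog images):

```
import numpy as np

A = np.random.rand(256, 256)    # stands in for a grayscale image matrix
U, s, Vt = np.linalg.svd(A)     # s holds the singular values, largest first

k = 20                          # keep only the first k terms of the sum
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

# Storing U[:, :k], s[:k], and Vt[:k, :] takes k*(m + n + 1) numbers
# instead of m*n, and the approximation error shrinks as k grows.
print(np.linalg.norm(A - A_k))
```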