Diagonalization of Symmetric Matrices and QR Factorization

Theorem: If A is a symmetric matrix, then eigenvectors associated with distinct eigenvalues are orthogonal.

Proof: Suppose Av_1 = λ_1v_1 and Av_2 = λ_2v_2 with λ_1 ≠ λ_2. Then λ_1(v_1 ⋅ v_2) = (Av_1) ⋅ v_2 = v_1 ⋅ (Av_2) = λ_2(v_1 ⋅ v_2), where the middle equality uses A^t = A. Since λ_1 ≠ λ_2, it follows that v_1 ⋅ v_2 = 0.
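As a quick numerical illustration (my addition, assuming NumPy is available), the sketch below builds a small symmetric matrix and checks that eigenvectors for its distinct eigenvalues are orthogonal:

```python
import numpy as np

# A small symmetric matrix with distinct eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's eigensolver for symmetric matrices; it returns
# the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, V = np.linalg.eigh(A)
print(eigenvalues)            # [1. 3.] -- distinct

v1, v2 = V[:, 0], V[:, 1]
print(v1 @ v2)                # ~0, so the eigenvectors are orthogonal
```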

Definition: A square matrix P with orthonormal columns is called an orthogonal matrix.

Theorem: If P is an orthogonal matrix, then P is invertible and its inverse is its transpose: P^{-1} = P^t.

Proof: Let P = [p_1 … p_n]. Then the (i, j) entry of P^tP is p_i ⋅ p_j, which is 1 if i = j and 0 if i ≠ j. Hence P^tP = I, so P^{-1} = P^t.
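As a check on this proof (my addition, again assuming NumPy), take a rotation matrix, whose columns are orthonormal, and verify that P^tP = I and P^{-1} = P^t:

```python
import numpy as np

theta = 0.7  # any angle; the columns of a rotation matrix are orthonormal
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The (i, j) entry of P^t P is the dot product p_i . p_j.
print(P.T @ P)                 # ~ identity matrix
print(np.linalg.inv(P) - P.T)  # ~ zero matrix, so P^{-1} = P^t
```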

Questions:

Orthogonally Diagonalizable Matrices

Definition: A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P and a diagonal matrix D such that A = PDP^{-1} = PDP^t.

(Give an example in class.)
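One possible example (my construction, not necessarily the one used in class): diagonalize a small symmetric matrix with NumPy's eigh, which returns an orthogonal matrix of eigenvectors, and verify that A = PDP^t.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])    # symmetric

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthogonal matrix P whose columns are unit eigenvectors.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(P @ D @ P.T)            # reconstructs A
print(P.T @ P)                # ~ identity, so P is orthogonal
```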

Theorem: (Spectral Theorem) A matrix A is orthogonally diagonalizable if and only if A is symmetric.

It is easy to see that if A is orthogonally diagonalizable then A is symmetric: if A = PDP^t, then A^t = (PDP^t)^t = PD^tP^t = PDP^t = A, since a diagonal matrix equals its transpose. The converse is harder and won’t be proved in this class.

(Work out example 3 on page 354)

QR Factorization

Theorem: (QR factorization) Let A = [a_1 … a_m] be an n × m matrix with linearly independent columns. Then A can be factored as A = QR, where Q is an n × m matrix with orthonormal columns and R is an m × m upper triangular matrix with nonnegative diagonal entries.

See the book for a full proof.

The matrix Q = [q_1 … q_m] is obtained by applying the Gram-Schmidt process to the columns of A. We can then obtain R by computing R = Q^tA. This R will be upper triangular, but the entries on its diagonal could be negative. We can fix that: if the ith diagonal entry of R is negative, multiply the ith row of R and the ith column of Q by −1, which leaves the product QR unchanged. A sketch of the whole procedure follows.
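Here is that procedure in NumPy (my illustration, not the book's code): classical Gram-Schmidt builds Q, R = Q^tA gives the triangular factor, and the sign fix makes the diagonal nonnegative.

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR via classical Gram-Schmidt; assumes the columns of A are
    linearly independent."""
    n, m = A.shape
    Q = np.zeros((n, m))
    for j in range(m):
        # Subtract from a_j its projections onto the previous q's.
        v = A[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    R = Q.T @ A                  # upper triangular (up to roundoff)
    # If a diagonal entry of R is negative, flip the corresponding
    # column of Q and row of R; the product QR is unchanged.
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1
    return Q * signs, signs[:, None] * R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = qr_gram_schmidt(A)
print(np.allclose(Q @ R, A))     # True
```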

This is useful for solving linear systems numerically: the pivot manipulations in Gaussian elimination can cause significant roundoff errors, whereas multiplying by an orthogonal matrix preserves lengths and so does not amplify them.

If A is invertible, A = QR, and Ax = b, then QRx = b, so Rx = Q^tb (multiplying both sides by Q^t and using Q^tQ = I). This is now a triangular system, which can be solved by back substitution.
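A minimal sketch of this solve (my addition, using NumPy's built-in qr; its R may have negative diagonal entries, which does not affect the solve):

```python
import numpy as np

def solve_qr(A, b):
    """Solve Ax = b for invertible A via A = QR and back substitution."""
    Q, R = np.linalg.qr(A)
    y = Q.T @ b                       # the system is now Rx = Q^t b
    n = R.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):    # back substitution, bottom row up
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(solve_qr(A, b))                 # agrees with np.linalg.solve(A, b)
```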

(Work out example 4 on page 356)