Chapter 17 - Symmetric Matrices, Quadratic Form and Cholesky Decomposition
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import sympy as sy

np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)

def round_expr(expr, num_digits):
    # Round every numeric atom in a SymPy expression for cleaner display.
    return expr.xreplace({n: round(n, num_digits) for n in expr.atoms(sy.Number)})

Diagonalization of Symmetric Matrices

The first theorem of symmetric matrices:

If $A$ is symmetric, i.e. $A = A^T$, then any two eigenvectors from different eigenspaces are orthogonal. To see this, let $A\mathbf{v}_1 = \lambda_1\mathbf{v}_1$ and $A\mathbf{v}_2 = \lambda_2\mathbf{v}_2$ with $\lambda_1 \neq \lambda_2$. Then

$$\begin{aligned} \lambda_{1} \mathbf{v}_{1} \cdot \mathbf{v}_{2} &=\left(\lambda_{1} \mathbf{v}_{1}\right)^{T} \mathbf{v}_{2}=\left(A \mathbf{v}_{1}\right)^{T} \mathbf{v}_{2} \\ &=\left(\mathbf{v}_{1}^{T} A^{T}\right) \mathbf{v}_{2}=\mathbf{v}_{1}^{T}\left(A \mathbf{v}_{2}\right) \\ &=\mathbf{v}_{1}^{T}\left(\lambda_{2} \mathbf{v}_{2}\right) \\ &=\lambda_{2} \mathbf{v}_{1}^{T} \mathbf{v}_{2}=\lambda_{2} \mathbf{v}_{1} \cdot \mathbf{v}_{2} \end{aligned}$$

Because $\lambda_1 \neq \lambda_2$, the only way the equation above can hold is if

$$\mathbf{v}_{1} \cdot \mathbf{v}_{2}=0$$

With the help of this theorem, we can conclude that if a symmetric matrix $A$ has distinct eigenvalues, the corresponding eigenvectors must be mutually orthogonal.

The diagonalization of $A$ is

$$A = PDP^T = PDP^{-1}$$

where $P$ is an orthogonal matrix whose columns are the normalized eigenvectors of $A$.

The second theorem of symmetric matrices:

An $n \times n$ matrix $A$ is orthogonally diagonalizable if and only if $A$ is a symmetric matrix: $A^{T}=\left(P D P^{T}\right)^{T}=P^{T T} D^{T} P^{T}=P D P^{T}=A$. (This computation proves the "only if" direction; the converse is part of the spectral theorem below.)

An Example

Create a random matrix.

A = np.round(2*np.random.rand(3, 3)); A
array([[1., 0., 0.],
       [1., 2., 0.],
       [2., 1., 2.]])

Make it symmetric.

B = A@A.T; B # generate a symmetric matrix
array([[1., 1., 2.],
       [1., 5., 4.],
       [2., 4., 9.]])
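For any matrix $A$, the product $AA^T$ is automatically symmetric, since $(AA^T)^T = (A^T)^T A^T = AA^T$. A quick check of our own:

np.allclose(B, B.T)  # True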

Perform diagonalization with np.linalg.eig().

D, P = np.linalg.eig(B); P
array([[ 0.2  ,  0.975,  0.095],
       [ 0.511, -0.021, -0.859],
       [ 0.836, -0.22 ,  0.503]])
D = np.diag(D); D
array([[11.926,  0.   ,  0.   ],
       [ 0.   ,  0.527,  0.   ],
       [ 0.   ,  0.   ,  2.547]])
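A side note: for symmetric matrices, np.linalg.eigh is the more appropriate routine; it is specialized for symmetric (Hermitian) input, returns the eigenvalues in ascending order, and guarantees orthonormal eigenvectors. A minimal sketch:

# eigh is preferred over eig for symmetric input:
# real eigenvalues in ascending order, orthonormal eigenvectors
eigvals, eigvecs = np.linalg.eigh(B)
eigvals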

Check the norms of all eigenvectors.

for i in range(3):
    print(np.linalg.norm(P[:, i]))
1.0
1.0
1.0

Check the orthogonality of the eigenvectors by seeing if $PP^T=I$.

P@P.T
array([[ 1.,  0., -0.],
       [ 0.,  1., -0.],
       [-0., -0.,  1.]])
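The -0. entries are just tiny negative floats rounded for display; a tolerance-aware check of our own confirms orthogonality:

np.allclose(P @ P.T, np.eye(3))  # True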

The Spectral Theorem

An $n \times n$ symmetric matrix $A$ has the following properties:

  1. $A$ has $n$ real eigenvalues, counting multiplicities.

  2. The dimension of the eigenspace for each eigenvalue $\lambda$ equals the multiplicity of $\lambda$ as a root of the characteristic equation.

  3. The eigenspaces are mutually orthogonal, in the sense that eigenvectors corresponding to different eigenvalues are orthogonal.

  4. $A$ is orthogonally diagonalizable.

All these properties were visible in the example above, even without proof. However, the purpose of the theorem is not to reiterate the last section; it paves the way for the spectral decomposition.

Writing the diagonalization out explicitly, we get the representation of the spectral decomposition

$$\begin{aligned} A &=P D P^{T}=\left[\begin{array}{lll} \mathbf{u}_{1} & \cdots & \mathbf{u}_{n} \end{array}\right]\left[\begin{array}{ccc} \lambda_{1} & & 0 \\ & \ddots & \\ 0 & & \lambda_{n} \end{array}\right]\left[\begin{array}{c} \mathbf{u}_{1}^{T} \\ \vdots \\ \mathbf{u}_{n}^{T} \end{array}\right] \\ &=\left[\begin{array}{lll} \lambda_{1} \mathbf{u}_{1} & \cdots & \lambda_{n} \mathbf{u}_{n} \end{array}\right]\left[\begin{array}{c} \mathbf{u}_{1}^{T} \\ \vdots \\ \mathbf{u}_{n}^{T} \end{array}\right]\\ &= \lambda_{1} \mathbf{u}_{1} \mathbf{u}_{1}^{T}+\lambda_{2} \mathbf{u}_{2} \mathbf{u}_{2}^{T}+\cdots+\lambda_{n} \mathbf{u}_{n} \mathbf{u}_{n}^{T} \end{aligned}$$

Each $\mathbf{u}_{i} \mathbf{u}_{i}^{T}$ is a rank $1$ symmetric matrix, because all rows of $\mathbf{u}_{i} \mathbf{u}_{i}^{T}$ are multiples of $\mathbf{u}_{i}^{T}$.

Following the example above, we demonstrate in NumPy.

lamb0, lamb1, lamb2 = D[0,0], D[1,1], D[2,2]
u0, u1, u2 = P[:,0], P[:,1], P[:,2]

Check the rank of $\mathbf{u}_{i} \mathbf{u}_{i}^{T}$ with np.linalg.matrix_rank().

np.linalg.matrix_rank(np.outer(u0,u0))
1

Use the spectral decomposition to recover $B$:

specDecomp = lamb0*np.outer(u0, u0) + lamb1*np.outer(u1, u1) + lamb2*np.outer(u2, u2)
specDecomp
array([[1., 1., 2.],
       [1., 5., 4.],
       [2., 4., 9.]])
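A tolerance-aware check of our own that the sum of rank-1 pieces reproduces $B$:

np.allclose(specDecomp, B)  # True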

Quadratic Form

A quadratic form is a function of the form $Q(\mathbf{x})=\mathbf{x}^TA\mathbf{x}$, where $A$ is an $n\times n$ symmetric matrix, called the matrix of the quadratic form.

Consider a matrix of a quadratic form

$$A = \left[ \begin{matrix} 3 & 2 & 0\\ 2 & -1 & 4\\ 0 & 4 & -2 \end{matrix} \right]$$

and construct the quadratic form $\mathbf{x}^TA\mathbf{x}$:

$$\begin{align} \mathbf{x}^TA\mathbf{x}&= \left[ \begin{matrix} x_1 & x_2 & x_3 \end{matrix} \right] \left[ \begin{matrix} 3 & 2 & 0\\ 2 & -1 & 4\\ 0 & 4 & -2 \end{matrix} \right] \left[ \begin{matrix} x_1 \\ x_2\\ x_3 \end{matrix} \right]\\ & =\left[ \begin{matrix} x_1 & x_2 & x_3 \end{matrix} \right] \left[ \begin{matrix} 3x_1+2x_2 \\ 2x_1-x_2+4x_3 \\ 4x_2-2x_3 \end{matrix} \right]\\ & = x_1(3x_1+2x_2)+x_2(2x_1-x_2+4x_3)+x_3(4x_2-2x_3)\\ & = 3x_1^2+4x_1x_2-x_2^2+8x_2x_3-2x_3^2 \end{align}$$

Fortunately, there is an easier way to calculate a quadratic form.

Notice that the coefficient of $x_i^2$ sits on the principal diagonal, and the coefficient of $x_ix_j$ is split evenly between the $(i,j)$- and $(j,i)$-entries of $A$.
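As a sanity check of this rule, here is a minimal sketch of our own (the helper quadratic_form_matrix is not from the notebook) that assembles the matrix directly from the coefficients:

def quadratic_form_matrix(diag_coeffs, cross_coeffs):
    # diag_coeffs[i] is the coefficient of x_i^2 (principal diagonal);
    # cross_coeffs[(i, j)] is the coefficient of x_i*x_j for i < j,
    # split evenly between the (i, j)- and (j, i)-entries.
    A = np.diag(np.array(diag_coeffs, dtype=float))
    for (i, j), c in cross_coeffs.items():
        A[i, j] = A[j, i] = c / 2
    return A

quadratic_form_matrix([3, -1, -2], {(0, 1): 4, (1, 2): 8})  # reproduces the matrix above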

Example

Consider another example,

$$A = \left[ \begin{matrix} 3 & 2 & 0 & 5\\ 2 & -1 & 4 & -3\\ 0 & 4 & -2 & -4\\ 5 & -3 & -4 & 7 \end{matrix} \right]$$

All $x_i^2$ terms are

$$3x_1^2-x_2^2-2x_3^2+7x_4^2$$

whose coefficients come from the principal diagonal.

All $x_ix_j$ terms are

$$4x_1x_2+0x_1x_3+10x_1x_4+8x_2x_3-6x_2x_4-8x_3x_4$$

Adding them together, the quadratic form is

$$3x_1^2-x_2^2-2x_3^2+7x_4^2+4x_1x_2+0x_1x_3+10x_1x_4+8x_2x_3-6x_2x_4-8x_3x_4$$

Let's verify in SymPy.

x1, x2, x3, x4 = sy.symbols('x_1 x_2 x_3 x_4')
A = sy.Matrix([[3,2,0,5],[2,-1,4,-3],[0,4,-2,-4],[5,-3,-4,7]])
x = sy.Matrix([x1, x2, x3, x4])
sy.expand(x.T*A*x)

$$\left[\begin{matrix}3 x_{1}^{2} + 4 x_{1} x_{2} + 10 x_{1} x_{4} - x_{2}^{2} + 8 x_{2} x_{3} - 6 x_{2} x_{4} - 2 x_{3}^{2} - 8 x_{3} x_{4} + 7 x_{4}^{2}\end{matrix}\right]$$

The result is exactly the same as we derived.

Change of Variable in Quadratic Forms

Converting the matrix of a quadratic form into a diagonal matrix can save us some trouble; that is to say, the resulting form has no cross-product terms.

Since $A$ is symmetric, there is an orthogonal $P$ such that

$$PDP^T = A \qquad \text{and}\qquad PP^T = I$$

We can show that

$$\mathbf{x}^TA\mathbf{x}=\mathbf{x}^TIAI\mathbf{x}=\mathbf{x}^TPP^TAPP^T\mathbf{x}=\mathbf{x}^TPDP^T\mathbf{x}=(P^T\mathbf{x})^TD\,P^T\mathbf{x}=\mathbf{y}^T D \mathbf{y}$$

where $P^T$ defines a coordinate transformation and $\mathbf{y} = P^T\mathbf{x}$.

Consider $A$

$$A = \left[ \begin{matrix} 3 & 2 & 0\\ 2 & -1 & 4\\ 0 & 4 & -2 \end{matrix} \right]$$

Find the eigenvalues and eigenvectors.

A = np.array([[3,2,0],[2,-1,4],[0,4,-2]]); A
array([[ 3,  2,  0],
       [ 2, -1,  4],
       [ 0,  4, -2]])
D, P = np.linalg.eig(A)
D = np.diag(D); D
array([[ 4.388,  0.   ,  0.   ],
       [ 0.   ,  1.35 ,  0.   ],
       [ 0.   ,  0.   , -5.738]])
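A quick check of our own that the factorization reproduces $A$ (valid here because the eigenvalues are distinct, so the eigenvectors in P come out orthonormal):

np.allclose(P @ D @ P.T, A)  # True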

Test whether $P$ is orthogonal, i.e. $P^TP = I$.

P.T@P
array([[ 1., -0., -0.],
       [-0.,  1., -0.],
       [-0., -0.,  1.]])

We can compute $\mathbf{y}= P^T\mathbf{x}$.

x1, x2, x3 = sy.symbols('x1 x2 x3')
x = sy.Matrix([[x1], [x2], [x3]])
x

$$\left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]$$

P = round_expr(sy.Matrix(P), 4); P

$$\left[\begin{matrix}0.7738 & -0.6143 & 0.1544\\0.5369 & 0.5067 & -0.6746\\0.3362 & 0.6049 & 0.7219\end{matrix}\right]$$

So $\mathbf{y} = P^T \mathbf{x}$ is

$$\left[\begin{matrix}0.7738 x_{1} + 0.5369 x_{2} + 0.3362 x_{3}\\- 0.6143 x_{1} + 0.5067 x_{2} + 0.6049 x_{3}\\0.1544 x_{1} - 0.6746 x_{2} + 0.7219 x_{3}\end{matrix}\right]$$

The transformed quadratic form yTDy\mathbf{y}^T D \mathbf{y} is

D = round_expr(sy.Matrix(D),4);D

$$\left[\begin{matrix}4.3876 & 0.0 & 0.0\\0.0 & 1.3505 & 0.0\\0.0 & 0.0 & -5.7381\end{matrix}\right]$$

y1, y2, y3 = sy.symbols('y1 y2 y3')
y = sy.Matrix([[y1], [y2], [y3]]); y

$$\left[\begin{matrix}y_{1}\\y_{2}\\y_{3}\end{matrix}\right]$$

y.T*D*y

$$\left[\begin{matrix}4.3876 y_{1}^{2} + 1.3505 y_{2}^{2} - 5.7381 y_{3}^{2}\end{matrix}\right]$$
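As a numerical spot check of our own (the test point is arbitrary), $\mathbf{x}^TA\mathbf{x}$ and $\sum_i \lambda_i y_i^2$ should agree at any $\mathbf{x}$:

A_num = np.array([[3, 2, 0], [2, -1, 4], [0, 4, -2]])
lambdas, P_num = np.linalg.eig(A_num)
x0 = np.array([1.0, -2.0, 0.5])   # arbitrary test point
y0 = P_num.T @ x0                 # change of variable y = P^T x
print(x0 @ A_num @ x0)            # x^T A x
print(np.sum(lambdas * y0**2))    # sum of lambda_i * y_i^2, same value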

Visualize the Quadratic Form

The code below is repetitive but intuitive: each panel draws a wireframe surface plus three axis arrows, so we factor the drawing into a loop.

k = 6
x = np.linspace(-k, k)
y = np.linspace(-k, k)
X, Y = np.meshgrid(x, y)

def add_axis_arrows(ax, z_tail, z_len):
    # Draw x-, y- and z-axis arrows through the origin.
    arrows = [(-5, 0, 0, 10, 0, 0, .12),          # x-axis
              (0, -5, 0, 0, 10, 0, .12),          # y-axis
              (0, 0, z_tail, 0, 0, z_len, .001)]  # z-axis
    for x0, y0, z0, u, v, w, ratio in arrows:
        ax.quiver(x0, y0, z0, u, v, w, length=1, normalize=False,
                  color='black', alpha=.6, arrow_length_ratio=ratio,
                  pivot='tail', linestyles='solid', linewidths=2)

panels = [(221, 3*X**2 + 7*Y**2,  '$z = 3x^2+7y^2$',  -3,   300),
          (222, 3*X**2,           '$z = 3x^2$',       -3,   800),
          (223, 3*X**2 - 7*Y**2,  '$z = 3x^2-7y^2$',  -150, 300),
          (224, -3*X**2 - 7*Y**2, '$z = -3x^2-7y^2$', -300, 330)]

fig = plt.figure(figsize=(7, 7))
for pos, Z, title, z_tail, z_len in panels:
    ax = fig.add_subplot(pos, projection='3d')
    ax.plot_wireframe(X, Y, Z, linewidth=1.5, alpha=.3, color='r')
    ax.set_title(title)
    add_axis_arrows(ax, z_tail, z_len)
plt.show()
[Output: a 2×2 grid of wireframe surfaces for $z = 3x^2+7y^2$, $z = 3x^2$, $z = 3x^2-7y^2$, and $z = -3x^2-7y^2$.]

Some terms need to be defined; a quadratic form $Q$ is:

  1. positive definite if $Q(\mathbf{x})>0$ for all $\mathbf{x} \neq \mathbf{0}$

  2. negative definite if $Q(\mathbf{x})<0$ for all $\mathbf{x} \neq \mathbf{0}$

  3. positive semidefinite if $Q(\mathbf{x})\geq 0$ for all $\mathbf{x}$

  4. negative semidefinite if $Q(\mathbf{x})\leq 0$ for all $\mathbf{x}$

  5. indefinite if $Q(\mathbf{x})$ assumes both positive and negative values.

We have a theorem for quadratic forms and eigenvalues:

Let $A$ be an $n \times n$ symmetric matrix. Then the quadratic form $\mathbf{x}^{T} A \mathbf{x}$ is:

  1. positive definite if and only if the eigenvalues of $A$ are all positive

  2. negative definite if and only if the eigenvalues of $A$ are all negative

  3. indefinite if and only if $A$ has both positive and negative eigenvalues

With the help of this theorem, we can immediately tell whether a quadratic form has a maximum, minimum, or saddle point after calculating the eigenvalues, as the sketch below shows.
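Here is a small classifier of our own along those lines (the helper classify_quadratic_form is not from the notebook; the tolerance guards against floating-point noise):

def classify_quadratic_form(A, tol=1e-10):
    # Classify x^T A x by the eigenvalues of the symmetric matrix A.
    eigvals = np.linalg.eigvalsh(A)
    if np.all(eigvals > tol):
        return 'positive definite'
    if np.all(eigvals < -tol):
        return 'negative definite'
    if np.all(eigvals >= -tol):
        return 'positive semidefinite'
    if np.all(eigvals <= tol):
        return 'negative semidefinite'
    return 'indefinite'

classify_quadratic_form(np.array([[3, 2, 0], [2, -1, 4], [0, 4, -2]]))  # 'indefinite'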

Positive Definite Matrix

Symmetric matrices are among the most important matrices in linear algebra. We now show that a symmetric matrix is positive definite precisely when all of its eigenvalues are positive.

Let $A$ be a symmetric positive definite matrix. Premultiplying $A\mathbf{x}=\lambda \mathbf{x}$ by $\mathbf{x}^T$ gives

$$\mathbf{x}^T{A}\mathbf{x} = \lambda \mathbf{x}^T\mathbf{x} = \lambda \|\mathbf{x}\|^2$$

Since $\mathbf{x}^T{A}\mathbf{x}$ is positive by definition and $\|\mathbf{x}\|^2 > 0$, $\lambda$ must be greater than $0$.

Now ask the other way around: if all the eigenvalues are positive, is $A_{n\times n}$ positive definite? Yes.

Here is the Principal Axes Theorem, which employs the orthogonal change of variable $\mathbf{x}=P\mathbf{y}$:

$$Q(\mathbf{x})=\mathbf{x}^{T} A \mathbf{x}=\mathbf{y}^{T} D \mathbf{y}=\lambda_{1} y_{1}^{2}+\lambda_{2} y_{2}^{2}+\cdots+\lambda_{n} y_{n}^{2}$$

If all of the $\lambda$'s are positive, $\mathbf{x}^{T} A \mathbf{x}$ is also positive.

Cholesky Decomposition

Cholesky decomposition is a modification of $LU$ decomposition, and it is more efficient than the $LU$ algorithm.

Suppose $A$ is a positive definite matrix, i.e. $\mathbf{x}^{T} A \mathbf{x}>0$, or equivalently every eigenvalue is strictly positive. Then $A$ can be decomposed into the product of a lower triangular matrix and its transpose.

$$\begin{aligned} {A}={L} {L}^{T} &=\left[\begin{array}{ccc} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{array}\right]\left[\begin{array}{ccc} l_{11} & l_{21} & l_{31} \\ 0 & l_{22} & l_{32} \\ 0 & 0 & l_{33} \end{array}\right] \\ \left[\begin{array}{ccc} a_{11} & a_{21} & a_{31} \\ a_{21} & a_{22} & a_{32} \\ a_{31} & a_{32} & a_{33} \end{array}\right] &=\left[\begin{array}{ccc} l_{11}^{2} &l_{21} l_{11} & l_{31} l_{11} \\ l_{21} l_{11} & l_{21}^{2}+l_{22}^{2} & l_{31} l_{21}+l_{32} l_{22} \\ l_{31} l_{11} & l_{31} l_{21}+l_{32} l_{22} & l_{31}^{2}+l_{32}^{2}+l_{33}^{2} \end{array}\right] \end{aligned}$$

We will show this with SciPy.

A = np.array([[16, -8, -4], [-8, 29, 12], [-4, 12, 41]]); A
array([[16, -8, -4],
       [-8, 29, 12],
       [-4, 12, 41]])
L = sp.linalg.cholesky(A, lower = True); L
array([[ 4.,  0.,  0.],
       [-2.,  5.,  0.],
       [-1.,  2.,  6.]])

Check if $LL^T=A$.

L@L.T
array([[16., -8., -4.],
       [-8., 29., 12.],
       [-4., 12., 41.]])
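For intuition, the entry-by-entry equations above translate directly into code. A minimal hand-rolled sketch of our own (no pivoting; assumes A is symmetric positive definite; prefer the SciPy routine in practice):

def cholesky_lower(A):
    # Return lower-triangular L with A = L @ L.T
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        # diagonal entry: l_jj = sqrt(a_jj - sum_k l_jk^2)
        L[j, j] = np.sqrt(A[j, j] - np.sum(L[j, :j]**2))
        for i in range(j + 1, n):
            # below diagonal: l_ij = (a_ij - sum_k l_ik*l_jk) / l_jj
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j])) / L[j, j]
    return L

cholesky_lower(A)  # same L as above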

Some Facts of Symmetric Matrices

Rank and Positive Definiteness

If a symmetric matrix $A$ does not have full rank, then there must be a non-trivial vector $\mathbf{v}$ satisfying

$$A\mathbf{v} = \mathbf{0}$$

which also means the quadratic form vanishes: $\mathbf{v}^TA\mathbf{v} = 0$. Thus $A$ cannot be positive definite if it does not have full rank.

Conversely, a positive definite matrix must have full rank, as the sketch below illustrates.
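A quick illustration of our own (the rank-1 matrix $S = \mathbf{v}\mathbf{v}^T$ is an arbitrary example): a rank-deficient symmetric matrix sends some nonzero vector to zero, its quadratic form vanishes there, and Cholesky factorization fails.

v = np.array([1.0, 2.0, 3.0])
S = np.outer(v, v)                # symmetric, but rank 1
w = np.array([2.0, -1.0, 0.0])    # w is orthogonal to v, so S @ w = 0
print(np.linalg.matrix_rank(S))   # 1
print(w @ S @ w)                  # 0.0, so S is not positive definite
try:
    sp.linalg.cholesky(S, lower=True)
except Exception as e:
    print('Cholesky failed:', e)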