---
title: |
  Problem Set 1 -- Linear algebra and representations **Solutions**
author: George McNinch
date: 2024-01-29
---

$F$ denotes an algebraically closed field of characteristic 0. If you like, you can suppose that $F = \mathbf{C}$ is the field of complex numbers.

  1. Let $V$ be a finite dimensional vector space over the field $F$. Suppose that $\phi,\psi:V \to V$ are linear maps. Let $\lambda \in F$ be an eigenvalue of $\phi$ and write $W$ for the $\lambda$-eigenspace of $\phi$; i.e. $$W = \{v \in V \mid \phi(v) = \lambda v \}.$$ If $\phi \psi = \psi \phi$, show that $W$ is invariant under $\psi$ -- i.e. show that $\psi(W) \subseteq W$.

    ::: {.solution}
    Solution: Let $w \in W$. We must show that $x = \psi(w) \in W$. To do this, we must establish that $x = \psi(w)$ is a $\lambda$-eigenvector for $\phi$.

    We have
    $$\begin{align*}
    \phi(x) &= \phi(\psi(w)) & \\
    &= \psi(\phi(w)) & \text{since $\phi \circ \psi = \psi \circ \phi$} \\
    &= \psi(\lambda w) & \text{since $w$ is a $\lambda$-eigenvector} \\
    &= \lambda \psi(w) & \text{since $\psi$ is linear} \\
    &= \lambda x.
    \end{align*}$$
    This completes the proof.
    :::
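
    As a quick sanity check (not part of the proof), here is a small numerical illustration in Python/NumPy, assuming $F = \mathbf{C}$; the matrices below are made-up examples chosen only because they commute.

    ```python
    import numpy as np

    # phi = diag(2, 2, 5) and a map psi that is block-diagonal with respect to
    # the eigenspaces of phi, so that phi and psi commute.
    phi = np.diag([2.0, 2.0, 5.0])
    psi = np.array([[1.0, 3.0, 0.0],
                    [4.0, 2.0, 0.0],
                    [0.0, 0.0, 7.0]])
    assert np.allclose(phi @ psi, psi @ phi)   # the hypothesis: phi psi = psi phi

    w = np.array([1.0, -2.0, 0.0])             # a 2-eigenvector of phi
    assert np.allclose(phi @ w, 2 * w)

    x = psi @ w                                # the claim: psi(w) lies in the same eigenspace
    assert np.allclose(phi @ x, 2 * x)
    ```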

  2. Let $n \in \mathbf{N}$ be a non-zero natural number, and let $V$ be an $n$ dimensional $F$-vector space with a given basis $e_1,e_2,\cdots,e_n$.

    Consider the linear transformation $T:V \to V$ given by the rule $Te_i = e_{i+1 \pmod n}$. In other words $$Te_i = \left\{ \begin{matrix} e_{i+1} & i < n \\ e_1 & i = n \end{matrix} \right..$$

    a. Show that $T$ is invertible and that $T^n = \operatorname{id}_V$.

    ::: {.solution}
    To check that $T^n = \operatorname{id}_V$, we check that $T^n(e_i) = e_i$ for $1 \le i \le n$.

    From the definition, it follows by induction on the natural number $m$ that $T^m(e_i) = e_{i+m \pmod n}$. Thus $T^n(e_i) = e_{i+n \pmod n} = e_i$. Since this holds for every $i$, conclude that $T^n = \operatorname{id}_V$.

    Now $T$ is invertible since its inverse is given by $T^{n-1}$: indeed, $T^{n-1} \cdot T = T \cdot T^{n-1} = T^n = \operatorname{id}_V$.
    :::
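
    Here is a short NumPy sketch (assuming $F = \mathbf{C}$ and identifying $V$ with $\mathbf{C}^n$ via the basis $e_1,\dots,e_n$) that checks $T^n = \operatorname{id}_V$ and $T^{-1} = T^{n-1}$ for a sample value of $n$.

    ```python
    import numpy as np

    n = 5
    # Matrix of T in the basis e_1, ..., e_n: since T e_i = e_{i+1 (mod n)},
    # column i of the matrix is the standard basis vector e_{i+1 (mod n)}.
    T = np.roll(np.eye(n), 1, axis=0)

    assert np.allclose(np.linalg.matrix_power(T, n), np.eye(n))              # T^n = id
    assert np.allclose(np.linalg.matrix_power(T, n - 1), np.linalg.inv(T))   # T^{-1} = T^{n-1}
    ```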

    b. Consider the vector $v_0 = \displaystyle \sum_{i=1}^n e_i$. Show that $v_0$ is a $1$-eigenvector for $T$.

    ::: {.solution}
    We compute
    $$\begin{align*}
    T(v_0) &= T\left(\sum_{i=1}^n e_i\right) = \sum_{i=1}^n T(e_i) \\
    &= \sum_{i=1}^n e_{i+1 \pmod n} \\
    &= \sum_{j=2}^{n+1} e_{j \pmod n} & \text{(let $j = i+1$)} \\
    &= \sum_{j=1}^{n} e_{j \pmod n} = v_0.
    \end{align*}$$
    Thus $T(v_0) = v_0$, so indeed $v_0$ is a $1$-eigenvector.
    :::

    Let $\zeta \in F$ be a primitive $n$-th root of unity. (e.g. if you assume $F = \mathbf{C}$, you may as well take $\zeta = e^{2\pi i/n}$).

    c. Let $v_1 = \displaystyle \sum_{i=1}^n \zeta^i e_i$. Show that $v_1$ is a $\zeta^{-1}$-eigenvector for $T$.

    ::: {.solution}
    We compute
    $$\begin{align*}
    T(v_1) &= T\left(\sum_{i=1}^n \zeta^i e_i\right) \\
    &= \sum_{i=1}^n \zeta^i T(e_i) \\
    &= \sum_{i=1}^n \zeta^i e_{i+1 \pmod n} \\
    &= \sum_{j=2}^{n+1} \zeta^{j-1} e_{j \pmod n} & \text{(let $j = i+1$)} \\
    &= \zeta^{-1} \sum_{j=2}^{n+1} \zeta^{j} e_{j \pmod n} \\
    &= \zeta^{-1} \sum_{j=1}^{n} \zeta^j e_{j \pmod n} & \text{(since $\zeta^j = \zeta^{j \pmod n}$ for all $j$)} \\
    &= \zeta^{-1} v_1.
    \end{align*}$$
    Thus $T(v_1) = \zeta^{-1} v_1$, so indeed $v_1$ is a $\zeta^{-1}$-eigenvector.
    :::

    d. More generally, let $0 \le j < n$ and let $$v_j = \sum_{i=1}^n \zeta^{ij} e_i.$$ Show that $v_j$ is a $\zeta^{-j}$-eigenvector for $T$.

    ::: {.solution}
    The calculation in the solution to part (c) is valid for any $n$-th root of unity $\zeta$. Applying this calculation to $\zeta^j$ (which is again an $n$-th root of unity) shows that $v_j$ is a $(\zeta^j)^{-1} = \zeta^{-j}$-eigenvector for $T$, as required.
    :::
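
    The following sketch checks parts (b)-(d) numerically, again assuming $F = \mathbf{C}$ with $\zeta = e^{2\pi i/n}$: each $v_j = \sum_{i=1}^n \zeta^{ij} e_i$ satisfies $T v_j = \zeta^{-j} v_j$.

    ```python
    import numpy as np

    n = 6
    zeta = np.exp(2j * np.pi / n)          # a primitive n-th root of unity
    T = np.roll(np.eye(n), 1, axis=0)      # the cyclic-shift matrix from part (a)

    for j in range(n):
        # v_j has i-th coordinate zeta^(ij), for i = 1, ..., n as in the problem.
        v_j = np.array([zeta ** (i * j) for i in range(1, n + 1)])
        assert np.allclose(T @ v_j, zeta ** (-j) * v_j)   # v_j is a zeta^(-j)-eigenvector
    ```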

    e. Conclude that $v_0,v_1,\cdots,v_{n-1}$ is a basis of $V$ consisting of eigenvectors for $T$, so that $T$ is diagonalizable.

    Hint: You need to use the fact that eigenvectors for distinct eigenvalues are linearly independent.

    What is the matrix of $T$ in this basis?

    ::: {.solution}

    Since $\zeta$ is a primitive $n$-th root of unity, the eigenvalues $1,\zeta^{-1},\cdots,\zeta^{-(n-1)}$ are pairwise distinct. Since eigenvectors for distinct eigenvalues are linearly independent, conclude that the vectors $\mathcal{B}=\{v_0,v_1,\cdots,v_{n-1}\}$ are linearly independent. Since there are $n$ vectors in $\mathcal{B}$ and since $\dim V = n$, conclude that $\mathcal{B}$ is a basis for $V$.

    The matrix of $T$ in the basis $\mathcal{B}$ is given by
    $$[T]_{\mathcal{B}} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & \zeta^{-1} & 0 & \cdots & 0 \\ 0 & 0 & \zeta^{-2} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \zeta^{-(n-1)} \end{bmatrix}$$

    (This form explains why an $n\times n$ matrix $M$ is diagonalizable iff $F^n$ has a basis of eigenvectors for $M$.)
    :::
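
    Concretely, if $P$ is the matrix whose columns are $v_0,\dots,v_{n-1}$ (written in the basis $e_1,\dots,e_n$), then $P^{-1} T P$ should be the diagonal matrix displayed above. A NumPy check of this, under the same assumptions as before:

    ```python
    import numpy as np

    n = 6
    zeta = np.exp(2j * np.pi / n)
    T = np.roll(np.eye(n), 1, axis=0)

    # Column j of P is v_j, whose i-th coordinate is zeta^(ij) (i = 1, ..., n).
    P = np.array([[zeta ** (i * j) for j in range(n)] for i in range(1, n + 1)])

    D = np.linalg.inv(P) @ T @ P
    assert np.allclose(D, np.diag([zeta ** (-j) for j in range(n)]))   # [T]_B is diagonal
    ```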

  3. Let $G = \mathbb{Z}/3\mathbb{Z}$ be the additive group of order $3$, and let $\zeta$ be a primitive $3$rd root of unity in $F$.

    To define a representation $\rho:G \to \operatorname{GL}_n(F)$, it is enough to find a matrix $M \in \operatorname{GL}_n(F)$ with $M^3 = 1$; in turn, $M$ determines a representation $\rho$ by the rule $\rho(i + 3\mathbb{Z}) = M^i$.

    Consider the representation $\rho_1 : G \to \operatorname{GL}_3(F)$ given by the matrix $$\rho_1(1 + 3\mathbb{Z}) = M_1 = \begin{bmatrix} 1 & 0 & 0\\ 0 & \zeta & 0 \\ 0 & 0 & \zeta^2 \end{bmatrix}$$ and consider the representation $\rho_2:G \to \operatorname{GL}_3(F)$ given by the matrix $$\rho_2(1 + 3\mathbb{Z}) = M_2 = \begin{bmatrix} 0 & 0 & 1\\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$

    Show that the representations $\rho_1$ and $\rho_2$ are equivalent (alternative terminology: are isomorphic). In other words, find a linear bijection $\Phi:F^3 \to F^3$ with the property that $\Phi(\rho_2(g)v) = \rho_1(g)\Phi(v)$ for every $g \in G$ and $v \in F^3$.

    Hint: First find a basis of $F^3$ consisting of eigenvectors for the matrix $M_2$.

    ::: {.solution}
    The matrix $M_1$ is diagonal, which is to say that the standard basis vectors $$e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$ are eigenvectors for $M_1$ with respective eigenvalues $1,\zeta,\zeta^2$.

    By the work in problem 2, we see that $$v_1 = e_1 + e_2 + e_3, \quad v_2 = e_1 + \zeta e_2 + \zeta^2 e_3, \quad v_3 = e_1 + \zeta^2 e_2 + \zeta e_3$$ are eigenvectors for $M_2$ with respective eigenvalues $1,\zeta^2,\zeta$.

    Now let $\Phi:F^3 \to F^3$ be the linear transformation for which $$\Phi(e_1) = v_1, \quad \Phi(e_2) = v_3, \quad \Phi(e_3) = v_2.$$

    We claim that $\Phi$ defines an isomorphism of $G$-representations $$(\rho_1,F^3) \xrightarrow{\sim} (\rho_2,F^3).$$

    We must check that $\Phi(\rho_1(g)v) = \rho_2(g)\Phi(v)$ for all $g \in G$ and all $v \in F^3$.

    Since $G$ is cyclic, it suffices to check that $$(\clubsuit) \quad \Phi(M_1 v) = M_2 \Phi(v) \quad \forall v \in F^3.$$

    (Indeed, $(\clubsuit)$ amounts to "checking on a generator". If $(\clubsuit)$ holds, then for every natural number $i$ a straightforward induction argument shows for every $v \in F^3$ that
    $$\begin{align*}
    \Phi(\rho_1(i + 3\mathbb{Z})v) &= \Phi(\rho_1(1+3\mathbb{Z})^i v) \\
    &= \Phi( {M_1}^i v) \\
    &= {M_2}^i \Phi(v) \\
    &= \rho_2(1 + 3\mathbb{Z})^i\Phi(v) \\
    &= \rho_2(i+3\mathbb{Z})\Phi(v).
    \end{align*}$$)

    In turn, it suffices to verify that $(\clubsuit)$ holds for the basis vectors $e_1,e_2,e_3$ of $V = F^3$.

    Since $e_1$ and $v_1$ are $1$-eigenvectors for $M_1$ resp. $M_2$, we have $$\Phi(M_1 e_1) = \Phi(e_1) = v_1 = M_2 v_1.$$

    Since $e_2$ and $v_3$ are $\zeta$-eigenvectors for $M_1$ resp. $M_2$, we have $$\Phi(M_1 e_2) = \Phi(\zeta e_2) = \zeta\Phi(e_2) = \zeta v_3 = M_2 v_3.$$

    Since $e_3$ and $v_2$ are $\zeta^2$-eigenvectors for $M_1$ resp. $M_2$, we have $$\Phi(M_1 e_3) = \Phi(\zeta^2 e_3) = \zeta^2\Phi(e_3) = \zeta^2 v_2 = M_2 v_2.$$ Thus $(\clubsuit)$ holds and the proof is complete.

    ---

    Alternatively, note that the matrix of $\Phi$ in the standard basis is given by $$[\Phi] = \begin{bmatrix} 1 & 1 & 1 \\ 1 & \zeta^2 & \zeta \\ 1 & \zeta & \zeta^2 \end{bmatrix}.$$

    Now, to prove that $\Phi \circ \rho_1(g) = \rho_2(g) \circ \Phi$, it suffices to check that $M_2[\Phi] = [\Phi] M_1$, i.e. that
    $$\begin{bmatrix} 0 & 0 & 1\\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 & 1 \\ 1 & \zeta^2 & \zeta \\ 1 & \zeta & \zeta^2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & \zeta^2 & \zeta \\ 1 & \zeta & \zeta^2 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0\\ 0 & \zeta & 0 \\ 0 & 0 & \zeta^2 \end{bmatrix}.$$

    In fact, both products yield the matrix $$\begin{bmatrix} 1 & \zeta & \zeta^2 \\ 1 & 1 & 1 \\ 1 & \zeta^2 & \zeta \end{bmatrix}.$$
    :::
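
    A quick NumPy verification of the identity $M_2[\Phi] = [\Phi]M_1$, taking $F = \mathbf{C}$ and $\zeta = e^{2\pi i/3}$:

    ```python
    import numpy as np

    zeta = np.exp(2j * np.pi / 3)    # a primitive 3rd root of unity

    M1 = np.diag([1, zeta, zeta ** 2])
    M2 = np.array([[0, 0, 1],
                   [1, 0, 0],
                   [0, 1, 0]])
    Phi = np.array([[1, 1, 1],
                    [1, zeta ** 2, zeta],
                    [1, zeta, zeta ** 2]])

    assert np.allclose(M2 @ Phi, Phi @ M1)                          # the identity (clubsuit)
    assert np.allclose(np.linalg.matrix_power(M2, 3), np.eye(3))    # M2 has order dividing 3
    ```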

  4. Let $V$ be an $n$ dimensional $F$-vector space for $n \in \mathbb{N}$.

    Let $\operatorname{GL}(V)$ denote the group $$\operatorname{GL}(V) = \{ \text{all invertible $F$-linear transformations $\phi:V \to V$} \}$$ where the group operation is composition of linear transformations.

    Recall that $\operatorname{GL}_n(F)$ denotes the group of all invertible $n \times n$ matrices.

    If $\mathcal{B} = \{b_1,b_2,\cdots,b_n\}$ is a choice of basis, show that the assignment $\phi \mapsto [\phi]_{\mathcal{B}}$ determines an isomorphism $$\operatorname{GL}(V) \xrightarrow{\sim} \operatorname{GL}_n(F).$$

    Here $[\phi]_{\mathcal{B}} = [M_{ij}]$ denotes the matrix of $\phi$ in the basis $\mathcal{B}$ defined by the equations
    $$\phi(b_i) = \sum_{k=1}^n M_{ki} b_k.$$

    ::: {.solution}
    Let's write $\Phi$ for the mapping $\Phi:\operatorname{GL}(V) \to \operatorname{GL}_n(F)$ defined above.

    An important property -- proved in Linear Algebra -- is that for $\phi,\psi:V \to V$ we have $$(\heartsuit) \quad [\phi \circ \psi]_{\mathcal{B}} = [\phi]_{\mathcal{B}} \cdot [\psi]_{\mathcal{B}}.$$ In words: "once you choose a basis, composition of linear transformations corresponds to multiplication of the corresponding matrices".

    Now, since the matrix of the endomorphism $\phi:V \to V$ is equal to the identity matrix $\mathbf{I}_n$ if and only if $\phi = \operatorname{id}_V$, $(\heartsuit)$ shows at once that a linear transformation $\phi:V \to V$ is invertible if and only if $[\phi]_{\mathcal{B}}$ is an invertible matrix.

    This shows that $\Phi$ takes values in $\operatorname{GL}_n(F)$, and $(\heartsuit)$ confirms that $\Phi$ is indeed a group homomorphism.

    To show that $\Phi$ is an isomorphism, we exhibit its inverse. Namely, we define a group homomorphism $\Psi:\operatorname{GL}_n(F) \to \operatorname{GL}(V)$ and check that $\Psi$ is inverse to $\Phi$.

    To define $\Psi$, we introduce the linear isomorphism $\beta:F^n \to V$ defined by the rule $$\beta \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = \sum_{i=1}^n a_i b_i.$$

    For an invertible matrix $M$, we define $\Psi(M):V \to V$ by the rule $$\Psi(M)(v) = \beta M \cdot \beta^{-1} v.$$

    If $M_1,M_2 \in \operatorname{GL}_n(F)$, then for every $v \in V$ we have
    $$\begin{align*}
    \Psi(M_1M_2)v &= \beta M_1 M_2 \cdot \beta^{-1} v \\
    &= \beta M_1 \beta^{-1} \beta M_2 \cdot \beta^{-1} v \\
    &= \Psi(M_1)\Psi(M_2)v.
    \end{align*}$$
    This confirms that $\Psi$ is a group homomorphism.

    It remains to observe that for $M \in \operatorname{GL}_n(F)$ we have $$\Phi \circ \Psi (M) = M,$$ which amounts to the fact that $M$ is the matrix of $\Psi(M)$ in the basis $\mathcal{B}$, and we must observe for $g \in \operatorname{GL}(V)$ that $$\Psi \circ \Phi (g) = g,$$ which amounts to the observation that the transformation $g:V \to V$ is determined by its effect on the basis vectors $b_i$ and hence by the matrix $\Phi(g)$.

    :::
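
    To illustrate the dictionary above concretely, here is a hedged NumPy sketch that models $V$ as $F^n$ (with random real test matrices for simplicity) and a randomly chosen basis $\mathcal{B}$, given by the columns of the matrix `beta`; it checks the property $(\heartsuit)$ and that the maps $\Phi$ and $\Psi$ are mutually inverse. This is only an illustration, not part of the required proof.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # Model V as F^n; the columns of beta are the basis vectors b_1, ..., b_n,
    # so beta is the coordinate isomorphism F^n -> V from the solution.
    beta = rng.standard_normal((n, n))
    beta_inv = np.linalg.inv(beta)

    def Phi(A):
        """[phi]_B for the linear map phi(v) = A v (the map Phi of the solution)."""
        return beta_inv @ A @ beta

    def Psi(M):
        """The linear map v |-> beta M beta^{-1} v, returned as its standard matrix."""
        return beta @ M @ beta_inv

    A1 = rng.standard_normal((n, n))
    A2 = rng.standard_normal((n, n))

    # (heartsuit): the matrix of a composition is the product of the matrices.
    assert np.allclose(Phi(A1 @ A2), Phi(A1) @ Phi(A2))

    # Phi and Psi are mutually inverse.
    M = rng.standard_normal((n, n))
    assert np.allclose(Phi(Psi(M)), M)
    assert np.allclose(Psi(Phi(A1)), A1)
    ```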