


Math 208 Interactive Notebooks © 2024 by Soham Bhosale, Sara Billey, Herman Chau, Zihan Chen, Isaac Hartin Pasco, Jennifer Huang, Snigdha Mahankali, Clare Minerath, and Anna Willis is licensed under CC BY-ND 4.0

Image: ubuntu2204
Kernel: SageMath 10.3


Welcome to Chapter 3: The Paradise of Matrices

Originally created by Zihan Chen | zchen84@uw.edu

Preface:

This Sage worksheet is part of a linear algebra tutorial created by Prof. Sara Billey's 2024 WXML group. Following *Linear Algebra with Applications* by Holt, 2nd edition, this worksheet is intended to give future UW Math 208 students tools and software for presenting linear algebra content more visually. This worksheet revolves around Sections 3.2-3.5 of Holt's book.

3.2/3.3 - Matrix Computations

Next, let's take a look at matrix computations, which I believe are crucial in linear algebra. First, we'll explore matrix addition and subtraction.


Example 7: Suppose we have matrices $D = \begin{bmatrix} 4 & -1 \\ 2 & -3 \\ 7 & 0 \end{bmatrix}$ and $E = \begin{bmatrix} 3 & -1 \\ 5 & 0 \\ 0 & 2 \end{bmatrix}$. We want to calculate $3D$ and $D - 2E$.

Implementing this in Sage is quite straightforward, as follows:

To begin with, let's consider the computation of $2E$. It is simply the multiplication of every entry of the matrix $E$ by the scalar $2$. We will compute $2E$ using Sage as below.

D = matrix([[4, -1], [2, -3], [7, 0]])
E = matrix([[3, -1], [5, 0], [0, 2]])
print("3*D is")
print(3*D)
print("2*E is")
print(2*E)
print("D-2*E is")
print(D - 2*E)
print("D+E is")
print(D + E)
3*D is
[12 -3]
[ 6 -9]
[21  0]
2*E is
[ 6 -2]
[10  0]
[ 0  4]
D-2*E is
[-2  1]
[-8 -3]
[ 7 -4]
D+E is
[ 7 -2]
[ 7 -3]
[ 7  2]

In the last line, I also calculated $D + E$. When we see $D + E$, it is natural to ask whether the result of $D + E$ is the same as that of $E + D$. Let's give it a try.

print("E+D is") E+D
E+D is
[ 7 -2]
[ 7 -3]
[ 7  2]

The results are the same. What about three matrices: does their addition still give the same result in any order? I encourage you to check this conclusion yourself (a quick sketch follows this paragraph), and you should find that it is correct. Next, let's take a look at matrix multiplication. I won't go into the specific steps of multiplication, as you can learn them from the relevant reading. What I want you to check now is whether matrix multiplication follows the commutative law.
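Here is one way to run that addition check yourself (a quick sketch of my own, using three random integer matrices of the same shape):

A = random_matrix(ZZ, 3, 2)
B = random_matrix(ZZ, 3, 2)
C = random_matrix(ZZ, 3, 2)
# Reordering the summands never changes the result
print(A + B + C == C + B + A)
# Neither does regrouping them (the associative law)
print((A + B) + C == A + (B + C))

Both checks should print True on every run.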

Example 8: Suppose we have two matrices $F = \begin{bmatrix} 3 & 1 \\ -2 & 0 \end{bmatrix}$ and $G = \begin{bmatrix} -1 & 0 & 2 \\ 4 & -3 & -1 \end{bmatrix}$. Let's calculate $FG$ and $GF$ to see if the results of these two are the same.

F = matrix([[3, 1], [-2, 0]])
G = matrix([[-1, 0, 2], [4, -3, -1]])
print("F*G is")
print(str(F*G))
print("G*F is")
G*F
F*G is
[ 1 -3  5]
[ 2  0 -4]
G*F is
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[5], line 6
      4 print(str(F*G))
      5 print("G*F is")
----> 6 G*F

File /ext/sage/10.3/src/sage/structure/element.pyx:4090, in sage.structure.element.Matrix.__mul__()
   4088
   4089 if BOTH_ARE_ELEMENT(cl):
-> 4090     return coercion_model.bin_op(left, right, mul)
   4091
   4092 cdef long value

File /ext/sage/10.3/src/sage/structure/coerce.pyx:1278, in sage.structure.coerce.CoercionModel.bin_op()
   1276     # We should really include the underlying error.
   1277     # This causes so much headache.
-> 1278     raise bin_op_exception(op, x, y)
   1279
   1280 cpdef canonical_coercion(self, x, y) noexcept:

TypeError: unsupported operand parent(s) for *: 'Full MatrixSpace of 2 by 3 dense matrices over Integer Ring' and 'Full MatrixSpace of 2 by 2 dense matrices over Integer Ring'

We observe that $G \times F$ cannot be computed, while $F \times G$ yields a result. Does this, to some extent, explain why $F \times G$ is not equal to $G \times F$? It serves as a reminder that in the matrix multiplication $F \times G$, the number of columns of $F$ must equal the number of rows of $G$ for the product to be defined. With this in mind, let's see whether multiplication of $n \times n$ matrices follows the commutative law. We'll set $H = \begin{bmatrix} 2 & -1 \\ 1 & 3 \end{bmatrix}$ and $J = \begin{bmatrix} 4 & -2 \\ -1 & 1 \end{bmatrix}$ as in Example 8.

H = matrix(2, 2, [2, -1, 1, 3])
J = matrix(2, 2, [4, -2, -1, 1])
print("H*J is")
print(H*J)
print("J*H is")
print(J*H)
H*J is
[ 9 -5]
[ 1  1]
J*H is
[  6 -10]
[ -1   4]

Now I think we can state our conclusion: matrix multiplication does not follow the commutative law. Similarly, we may be curious about other properties, such as the associative law and the distributive law, and we can self-check these as well. Here, I'll use the distributive law as an example.

Example 9: Suppose we have three matrices $A_1, A_2, A_3$, where $A_1 = \begin{bmatrix} 2 & -3 \\ 1 & 5 \end{bmatrix}$, $A_2 = \begin{bmatrix} 0 & 7 \\ 4 & -2 \end{bmatrix}$, and $A_3 = \begin{bmatrix} -3 & -4 \\ 0 & -1 \end{bmatrix}$. Let's check if $(A_1 + A_2) \times A_3 = A_1 \times A_3 + A_2 \times A_3$ holds true.

A1 = matrix([[2, -3], [1, 5]])
A2 = matrix([[0, 7], [4, -2]])
A3 = matrix([[-3, -4], [0, -1]])
print("(A1+A2)*A3 is")
print((A1 + A2)*A3)
print("A1*A3+A2*A3 is")
print(A1*A3 + A2*A3)
(A1+A2)*A3 is
[ -6 -12]
[-15 -23]
A1*A3+A2*A3 is
[ -6 -12]
[-15 -23]

I think we have concluded on our own that matrix multiplication does comply with the distributive law. Let's continue with another important matrix operation: the transpose. The main idea of the transpose is to switch coordinates. If we have a matrix $A = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$, then $A^T$ would be $\begin{bmatrix} A & C \\ B & D \end{bmatrix}$. I'm sure you, being clever, have already spotted the pattern. Can you summarize it? For example, if the entries of $A$ are $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, what would the corresponding entries of $A^T$ be? Let's find the answer through an example.

Example 10: Suppose we have a matrix $K = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$; what should $K^T$ be? We can still work this out with Sage.

K = matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print("K is")
print(K)
print("K^T is")
print(K.transpose())  # or we can use K.T
K is
[1 2 3]
[4 5 6]
[7 8 9]
K^T is
[1 4 7]
[2 5 8]
[3 6 9]

Matrix transpose also satisfies many properties. For example, $(A + B)^T = A^T + B^T$ and $(AB)^T = B^T A^T$. We will use the following example to demonstrate the latter conclusion.

Example 11: Let's calculate $(L_1 \cdot L_2)^T$ and $L_2^T \cdot L_1^T$ for the given matrices: $L_1 = \begin{bmatrix} 1 & -2 & 0 \\ 3 & 1 & -4 \end{bmatrix}$, $L_2 = \begin{bmatrix} 5 & 0 \\ -1 & 2 \\ 0 & 3 \end{bmatrix}$

L1 = matrix([[1, -2, 0], [3, 1, -4]])
L2 = matrix([[5, 0], [-1, 2], [0, 3]])
print("L1 is")
print(L1)
print("L2 is")
print(L2)
print("(L1*L2)^T is")
print((L1*L2).transpose())
print("(L2^T)*(L1^T) is")
print(L2.transpose() * L1.transpose())
L1 is
[ 1 -2  0]
[ 3  1 -4]
L2 is
[ 5  0]
[-1  2]
[ 0  3]
(L1*L2)^T is
[  7  14]
[ -4 -10]
(L2^T)*(L1^T) is
[  7  14]
[ -4 -10]

Here, I'd like to pose a question to you. Do you know how to calculate the power of a matrix? Is a matrix power simply each entry of the matrix raised to that power? In fact, this holds true for diagonal matrices, but not for non-diagonal matrices. However, don't worry: at this stage, we won't be calculating very high powers of matrices. If we encounter something like $A^2$, we can simply compute $A \cdot A$ instead. The specifics of calculating high powers of matrices will be covered in Chapter 6 on diagonalization.
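Here is a quick sanity check of that claim (a sketch of my own; elementwise_product is Sage's entrywise multiplication):

A = matrix([[2, -1], [1, 3]])   # a non-diagonal matrix
D = diagonal_matrix([2, 3])     # a diagonal matrix
print(A^2 == A*A)                        # True: A^2 means A*A
print(A^2 == A.elementwise_product(A))   # False: powers are not entrywise for non-diagonal A
print(D^2 == D.elementwise_product(D))   # True: entrywise works for diagonal D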

I'd like to include the concept of inverses in the matrix calculations discussed in Section 3.2, because finding the inverse of a matrix is also a matrix operation. Let's consider a $2 \times 2$ matrix $A$:

$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

We have a handy formula to find the inverse of this matrix:

$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

For matrices larger than $2 \times 2$ there are hand methods for finding the inverse, but it's often more efficient to compute the inverse using SageMath.
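In fact, Sage can verify the $2 \times 2$ formula symbolically; here is a short sketch of my own over the symbolic ring SR:

a, b, c, d = var('a b c d')
A = matrix(SR, [[a, b], [c, d]])
A_inv = A.inverse().simplify_full()
print(A_inv)                        # the entries match 1/(ad - bc) * [[d, -b], [-c, a]]
print((A_inv * A).simplify_full())  # the 2x2 identity matrix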

Example 12: Let's take an example with a matrix $M$: $M = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 5 & -1 \\ 1 & 2 & 0 \end{bmatrix}$. We will now find its inverse and verify our result by checking that $MM^{-1} = I$.

M = matrix([[1, 2, -1], [2, 5, -1], [1, 2, 0]])
M_inv = M.inverse()
print("M inverse is")
print(M_inv)
print("Check M*M_inv")
print(M*M_inv)
M inverse is
[ 2 -2  3]
[-1  1 -1]
[-1  0  1]
Check M*M_inv
[1 0 0]
[0 1 0]
[0 0 1]

So what we've found is the inverse of matrix $M$. Of course, we won't stop here; knowledge of inverse matrices is not only applicable in this chapter but also comes into play in Section 4.4, "Change of Basis," particularly through the properties listed below:

  1. $(A^{-1})^{-1} = A$ (see the quick check after this list)
  2. $(AB)^{-1} = B^{-1}A^{-1}$
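The first property is easy to check in Sage; here is a one-line sketch reusing the matrix $M$ from Example 12:

M = matrix([[1, 2, -1], [2, 5, -1], [1, 2, 0]])  # the matrix from Example 12
print(M.inverse().inverse() == M)  # True: inverting twice returns the original matrix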

Let's explore the second operation through a simple example.

Example 13: Let's calculate $(NO)^{-1}$ and $O^{-1}N^{-1}$.

$N = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 5 & -1 \\ 1 & 2 & 0 \end{bmatrix}, \quad O = \begin{bmatrix} 2 & 3 & 1 \\ 4 & 6 & -1 \\ 4 & 2 & 0 \end{bmatrix}$
N = matrix([[1, 2, -1], [2, 5, -1], [1, 2, 0]])
O = matrix([[2, 3, 1], [4, 6, -1], [4, 2, 0]])
print("N*O's inverse is")
print((N*O).inverse())
N_inv = N.inverse()
O_inv = O.inverse()
print("O_inverse * N_inverse is")
print(O_inv * N_inv)
N*O's inverse is
[-11/24   1/12   5/24]
[  5/12   -1/6   1/12]
[   5/3   -5/3    7/3]
O_inverse * N_inverse is
[-11/24   1/12   5/24]
[  5/12   -1/6   1/12]
[   5/3   -5/3    7/3]

We can observe that they are the same. How can you remember this formula, and why do we first take the inverse of $B$ and then multiply by the inverse of $A$? Think of the following process: define $A$ as putting on your socks and $B$ as putting on your shoes. When you want to undo these steps, you must first take off your shoes, and then take off your socks. In other words, $(AB)^{-1} = B^{-1} A^{-1}$.
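Algebraically, the socks-and-shoes rule is a one-line verification (my own addition; it follows straight from the definition of the inverse):

$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I$

so $B^{-1}A^{-1}$ really is the inverse of $AB$.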

3.5* - Markov Chains

What is a Markov matrix? I believe the most important thing is that it satisfies two conditions. The first is that the matrix is non-negative, which means that every entry $a_{ij}$ of a Markov matrix is greater than or equal to $0$. The second important property of a Markov matrix is that each of its columns sums to $1$. A Markov matrix is a fascinating matrix, and I'll do my best to introduce you to some Markov matrices that I know.
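For instance, the following matrix (a small made-up illustration) satisfies both conditions: every entry is non-negative, and each column sums to $1$:

$P = \begin{bmatrix} \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{2}{3} \end{bmatrix}$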

First, we can generate matrices that satisfy the above properties randomly. Each run will produce a different matrix because it's a random process.

# Define the size of the matrix
n = 3  # Size of the matrix

# Generate a random non-negative matrix over QQ (the rational number field).
# Taking absolute values ensures every entry is >= 0, which random_matrix alone does not.
markov_matrix = random_matrix(QQ, n, n).apply_map(abs)

# Normalize each column so that its entries sum to 1
# (in the unlikely event a column sums to 0, rerun the cell)
for j in range(n):
    col_sum = sum(markov_matrix[i, j] for i in range(n))
    markov_matrix[:, j] = markov_matrix[:, j] / col_sum

# Print the generated Markov matrix
print(markov_matrix)
[  0 1/3   0]
[  0   0 1/2]
[  1 2/3 1/2]

Observe the randomly generated matrices above and determine whether each generation results in a Markov matrix. I would like to discuss a fascinating property of Markov matrices, which is that the product of two Markov matrices is still a Markov matrix. We will also verify this property using randomly generated matrices.

""" This function is to define a random Markow matrix """ # Define the size of the matrix n = 3 # Matrix size # Generate a random Markov matrix markov_matrix = random_matrix(QQ, n, n) # Using QQ to represent the rational number field # Ensure that the generated matrix is a Markov matrix (the sum of elements in each column equals 1) for j in range(n): col_sum = sum(markov_matrix[i, j] for i in range(n)) markov_matrix[:, j] = markov_matrix[:, j] / col_sum # Print the generated Markov matrix print("testMarkow_1") print(markov_matrix) print("testMarkow_2") print(markov_matrix*markov_matrix) print("testMarkow_3") print(markov_matrix*markov_matrix*markov_matrix) print("testMarkow_4") print(markov_matrix*markov_matrix*markov_matrix*markov_matrix)
(Output varies from run to run; testMarkov_1 through testMarkov_4 should each have non-negative entries and columns summing to 1.)

By multiplying several random matrices, we have observed that Markov matrices do indeed exhibit this property. Can you think of why? (Hint: try multiplying by the all-ones row vector $\mathbf{1}^T$ on the far left.) For Markov matrices, what we are most concerned with is their usefulness for studying long-term behavior. Let's look at a specific example below.
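Before the example, here is the hint worked out (a short derivation of the product property): a non-negative matrix $A$ has all column sums equal to $1$ exactly when $\mathbf{1}^T A = \mathbf{1}^T$, where $\mathbf{1}$ is the all-ones column vector. So if $A$ and $B$ are Markov, then $\mathbf{1}^T (AB) = (\mathbf{1}^T A) B = \mathbf{1}^T B = \mathbf{1}^T$, and every entry of $AB$ is a sum of products of non-negative numbers, hence non-negative. Therefore $AB$ is again a Markov matrix.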

G = matrix(3, 3, [0, 0, 2/3, 1, 0, 0, 0, 1, 1/3])
print(G)
print("All eigenvalues are")
print(G.eigenvalues())
print("All eigenvectors are")
print(G.right_eigenvectors())
[  0   0 2/3]
[  1   0   0]
[  0   1 1/3]
All eigenvalues are
[1, -0.3333333333333334? - 0.745355992499930?*I, -0.3333333333333334? + 0.745355992499930?*I]
All eigenvectors are
[(1, [ (1, 1, 3/2) ], 1), (-0.3333333333333334? - 0.745355992499930?*I, [(1, -0.500000000000000? + 1.118033988749895?*I, -0.500000000000000? - 1.118033988749895?*I)], 1), (-0.3333333333333334? + 0.745355992499930?*I, [(1, -0.500000000000000? - 1.118033988749895?*I, -0.500000000000000? + 1.118033988749895?*I)], 1)]

Looking at the eigenvalues we have printed, we can see that the absolute values of the other two eigenvalues are strictly less than $1$; that is, they do not exceed the dominant eigenvalue $1$. Another thing we can observe is that only the dominant eigenvector is a non-negative vector.

""" This function is to computer the eigenvalue/eigenvector for our Markov matrix, studying the long term behavior. """ # Define a non-negative matrix A A = matrix([[0, 0, 2/3], [1, 0, 0], [0, 1, 1/3]]) # fill in with non-negative values # Define a non-negative initial vector w0 w0 = vector([1, 2, 3]) # fill in with non-negative values # Compute the eigenvalues and right eigenvectors of A eigenvalues = A.eigenvalues() eigenvectors = A.eigenvectors_right() # Find the dominant eigenvalue and its corresponding eigenvector dominant_eigenvalue = max(eigenvalues, key=abs) dominant_eigenvector = None for eigenvalue, eigenvector_list, multiplicity in eigenvectors: if eigenvalue == dominant_eigenvalue: dominant_eigenvector = eigenvector_list[0] break # Normalize the dominant eigenvector for comparison dominant_eigenvector_normalized = dominant_eigenvector / dominant_eigenvector.norm() # Initialize a variable for the result result = w0 # Set a maximum number of iterations max_iterations = 100 # Set a tolerance for convergence tolerance = 1e-6 # Iterate and multiply for k in range(max_iterations): new_result = A^k * w0 # Normalize the result for comparison new_result_normalized = new_result / new_result.norm() # Check for convergence using a tolerance if (new_result_normalized - dominant_eigenvector_normalized).norm() < tolerance: break result = new_result # Display the result print(result) print(result.n(digits=2))
(72670611493141823277045902932/42391158275216203514294433201, 24223543699881339092309899804/14130386091738734504764811067, 109005707058511380531790996862/42391158275216203514294433201)
(1.7, 1.7, 2.6)

Super cool! It seems like we've discovered a fascinating aspect of Markov matrices. Taking a closer look at the result above, you'll notice that $A^k w_0$ converges to a scalar multiple of the dominant eigenvector $(1, 1, 3/2)$! Isn't that intriguing? Indeed, we can use this to study long-term behavior! I can't wait to share with you how it works in Google PageRank.

Example 14: Compute the page rank of the $5$ webpages you see in the following directed graph. An arrow from $i$ to $j$ indicates that page $i$ links to page $j$. Use the rule that, at any webpage, you teleport with probability $\frac{1}{4}$ and follow a link with probability $\frac{3}{4}$. Assume that all links from a page have equal probability of being followed.

import networkx as nx
import matplotlib.pyplot as plt

# Create a directed graph
G = nx.DiGraph()

# Add nodes
nodes = [1, 2, 3, 4, 5]
G.add_nodes_from(nodes)

# Add edges; page 4 links to pages 2, 3, and 5, matching the Link matrix below
edges = [(1, 4), (2, 1), (3, 1), (4, 2), (4, 3), (4, 5), (5, 3)]
G.add_edges_from(edges)

# Draw the graph
pos = nx.spring_layout(G)  # Choose a layout method (spring layout in this case)
nx.draw(G, pos, with_labels=True, node_size=500, node_color='skyblue',
        font_size=12, font_color='black', arrowsize=15)

# Show the graph
plt.show()
[Figure: the directed graph on pages 1-5, with edges 1→4, 2→1, 3→1, 4→2, 4→3, 4→5, 5→3]

Based on the instructions in the question and with reference to the diagram, we can construct the following matrices:

$Link = \begin{bmatrix} 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{3} & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{3} & 0 \end{bmatrix}$, $Teleport = \begin{bmatrix} \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \end{bmatrix}$

Now, the matrix that combines these with teleportation, known as the transition matrix with teleportation, can be represented as: $Page = \frac{3}{4} \cdot Link + \frac{1}{4} \cdot Teleport$

We will calculate this transition matrix using Sage:

# Define the Link matrix
Link = Matrix([[0, 1, 1, 0, 0],
               [0, 0, 0, 1/3, 0],
               [0, 0, 0, 1/3, 1],
               [1, 0, 0, 0, 0],
               [0, 0, 0, 1/3, 0]])

# Define the Teleport matrix
Teleport = Matrix([[1/5, 1/5, 1/5, 1/5, 1/5],
                   [1/5, 1/5, 1/5, 1/5, 1/5],
                   [1/5, 1/5, 1/5, 1/5, 1/5],
                   [1/5, 1/5, 1/5, 1/5, 1/5],
                   [1/5, 1/5, 1/5, 1/5, 1/5]])

# Define the transition matrix with teleportation
Page = (3/4) * Link + (1/4) * Teleport
print(Page)
[1/20  4/5  4/5 1/20 1/20]
[1/20 1/20 1/20 3/10 1/20]
[1/20 1/20 1/20 3/10  4/5]
[ 4/5 1/20 1/20 1/20 1/20]
[1/20 1/20 1/20 3/10 1/20]
Page.eigenvalues()
[1, 0, -0.3318700005183316?, -0.2090649997408343? - 0.6164633182425810?*I, -0.2090649997408343? + 0.6164633182425810?*I]
Page.right_eigenvectors()
[(1, [ (1, 92/229, 161/229, 211/229, 92/229) ], 1), (0, [ (0, 1, -1, 0, 0) ], 1), (-0.3318700005183316?, [(1, 1.702414383919316?, -2.144907717943758?, -2.259921049894873?, 1.702414383919316?)], 1), (-0.2090649997408343? - 0.6164633182425810?*I, [(1, -0.3512071919596577? - 0.2691725449816134?*I, 0.07245385897187869? - 0.5527785460084946?*I, -0.3700394750525635? + 1.091123635971722?*I, -0.3512071919596577? - 0.2691725449816134?*I)], 1), (-0.2090649997408343? + 0.6164633182425810?*I, [(1, -0.3512071919596577? + 0.2691725449816134?*I, 0.07245385897187869? + 0.5527785460084946?*I, -0.3700394750525635? - 1.091123635971722?*I, -0.3512071919596577? + 0.2691725449816134?*I)], 1)]

We may need to rescale it so the entries form a probability vector summing to $1$, but we can already read off the order of preference for these sites directly from the dominant eigenvector, which we call the page rank vector. That is, $1 > 4 > 3 > 5 = 2$.
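To rescale, divide the dominant eigenvector by the sum of its entries so that they add up to $1$; here is a quick sketch of my own, using the eigenvector computed above:

v = vector([1, 92/229, 161/229, 211/229, 92/229])  # dominant eigenvector from above
p = v / sum(v)  # rescale so the entries sum to 1
print(p)
print(sorted(range(1, 6), key=lambda i: -p[i-1]))  # pages sorted by rank: [1, 4, 3, 2, 5]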

3.4* - PA = LU

The equation $PA = LU$ expresses the relationship between matrix factorization and row permutation, and it is commonly used in linear algebra and numerical analysis. Let me explain the meaning of each symbol:

  • "PP" stands for the permutationpermutation matrixmatrix, which represents the permutation of rows. The PP matrix is a square matrix with the characteristic that each row and each column have exactly one element equal to 11, and all other elements are 00. The position of this 11 indicates the row permutation, i.e., the reordering of rows in the original matrix.
  • "AA" represents the original matrix, which is the matrix that needs to undergo LULU decomposition.
  • "LL" represents the lower triangular matrix, which is the lower triangular part of the LULU decomposition, with all elements above the main diagonal being 00, and some elements below the main diagonal may be non-zero. The LL matrix is used to represent the lower triangular part of the original matrix AA.
  • "UU" represents the upper triangular matrix, which is the upper triangular part of the LULU decomposition, with all elements below the main diagonal being 00, and some elements above the main diagonal may be non-zero. The U matrix is used to represent the upper triangular part of the original matrix AA.
""" This function is for illustrating the process of LU decomposition. Inputs: ------- The matrix we want to use the LU decomposition. Outputs: -------- P: the permutation matrix A: origin matrix we used L: lower triangular matrix U: upper triangular matrix """ # Create matrix A A = matrix([(2,1,1),(4,-6,0),(-2,7,2)]) # Perform LU decomposition and obtain P, L, and U matrices # Use pivot='nonzero' to select non-zero pivots for numerical stability and correctness P, L, U = A.LU(pivot='nonzero') # Transpose the P matrix (to convert column permutation matrix into row permutation matrix) P = P.T # Print the original matrix A print(f'A=\n{A}\n') # Print the permutation matrix P print(f'P (row permutation matrix)=\n{P}\n') # Print the lower triangular matrix L print(f'L (lower triangular matrix)=\n{L}\n') # Print the upper triangular matrix U print(f'U (upper triangular matrix)=\n{U}')
A=
[ 2  1  1]
[ 4 -6  0]
[-2  7  2]

P (row permutation matrix)=
[1 0 0]
[0 1 0]
[0 0 1]

L (lower triangular matrix)=
[ 1  0  0]
[ 2  1  0]
[-1 -1  1]

U (upper triangular matrix)=
[ 2  1  1]
[ 0 -8 -2]
[ 0  0  1]
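As a final check (one line of my own, reusing P, L, U, and A from the cell above), the factorization should satisfy its defining equation:

print(P*A == L*U)  # True: the row-permuted A factors as L times U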

Dear linear algebraist: this is the end of my introduction to matrices. Thank you for using this worksheet; I hope this tool helps you, at least a little, and I hope everything goes well in your future studies. If you encounter any problems, feel free to contact me :)

Best Regards,

Zihan@UW