



An overview of the definitions and theorems encountered in a first-year linear algebra course.
Typology: Cheat Sheet
1.1.7: If x⃗, y⃗, w⃗ ∈ Rⁿ and c, d ∈ R:
1.2.13: A set of vectors {v⃗₁, ..., v⃗ₖ} is linearly dependent if and only if v⃗ᵢ ∈ Span{v⃗₁, ..., v⃗ᵢ₋₁, v⃗ᵢ₊₁, ..., v⃗ₖ} for some i, 1 ≤ i ≤ k.
1.2.20: If a set of vectors {v⃗₁, ..., v⃗ₖ} contains the zero vector, then it is linearly dependent.
1.2.28: If β = {v⃗₁, ..., v⃗ₖ} is a basis for a subspace S of Rⁿ, then every vector x⃗ ∈ S can be written as a unique linear combination of the vectors in β.
1.3.2 (Subspace Test): Let S be a non-empty subset of Rⁿ. If x⃗ + y⃗ ∈ S and cx⃗ ∈ S for all x⃗, y⃗ ∈ S and c ∈ R, then S is a subspace of Rⁿ.
1.3.9: If v⃗₁, ..., v⃗ₖ ∈ Rⁿ, then S = Span{v⃗₁, ..., v⃗ₖ} is a subspace of Rⁿ.
1.4.1: If x⃗, y⃗ ∈ R² and θ is the angle between x⃗ and y⃗, then x⃗ · y⃗ = ∥x⃗∥ ∥y⃗∥ cos θ.
1.4.4: If x⃗, y⃗, z⃗ ∈ Rⁿ and s, t ∈ R, then:
1. x⃗ · x⃗ ≥ 0, and x⃗ · x⃗ = 0 if and only if x⃗ = 0⃗
2. x⃗ · y⃗ = y⃗ · x⃗
3. x⃗ · (sy⃗ + tz⃗) = s(x⃗ · y⃗) + t(x⃗ · z⃗)
1.4.8: If x⃗, y⃗ ∈ Rⁿ and c ∈ R, then:
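The dot product properties in 1.4.4 can be spot-checked numerically; a minimal sketch using NumPy, where the vectors and scalars are arbitrary choices rather than anything from the text:

```python
import numpy as np

# Arbitrary sample vectors and scalars for checking the properties in 1.4.4
x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])
z = np.array([2.0, 5.0, 1.0])
s, t = 3.0, -2.0

# Property 1: x·x >= 0, with equality only for the zero vector
assert x @ x >= 0
assert np.isclose(np.zeros(3) @ np.zeros(3), 0.0)
# Property 2: symmetry
assert np.isclose(x @ y, y @ x)
# Property 3: linearity in the second argument
assert np.isclose(x @ (s * y + t * z), s * (x @ y) + t * (x @ z))
```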
1.4.18: Let v⃗, w⃗, b⃗ ∈ R³ with {v⃗, w⃗} linearly independent, and let P be a plane in R³ with vector equation x⃗ = sv⃗ + tw⃗ + b⃗, s, t ∈ R. If n⃗ = v⃗ × w⃗, then an equation for the plane is (x⃗ − b⃗) · n⃗ = 0.
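A quick numerical check of 1.4.18 with NumPy; the vectors v⃗, w⃗, b⃗ below are arbitrary examples, not from the text:

```python
import numpy as np

# Arbitrary plane data: x = s*v + t*w + b
v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, -1.0])
b = np.array([3.0, 3.0, 3.0])

n = np.cross(v, w)  # normal vector n = v x w

# Every point on the plane satisfies (x - b) · n = 0
for s, t in [(0.0, 0.0), (2.0, -1.0), (0.5, 4.0)]:
    x = s * v + t * w + b
    assert np.isclose((x - b) @ n, 0.0)
```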
Theorem 2.1.11: If the system of linear equations
a₁₁x₁ + ... + a₁ₙxₙ = b₁
a₂₁x₁ + ... + a₂ₙxₙ = b₂
...
aₘ₁x₁ + ... + aₘₙxₙ = bₘ
has two distinct solutions s⃗ = (s₁, ..., sₙ)ᵀ and t⃗ = (t₁, ..., tₙ)ᵀ, then for every c ∈ R, s⃗ + c(s⃗ − t⃗) is a solution, and furthermore these solutions are all distinct.
2.2.11: If the augmented matrices [A₁ | b⃗₁] and [A₂ | b⃗₂] are row equivalent, then the systems of linear equations associated with each augmented matrix are equivalent.
2.2.13: If A is a matrix, then A has a unique reduced row echelon form R.
2.2.27: The solution set of a homogeneous system of m linear equations in n variables is a subspace of Rⁿ.
2.2.30: Let S_b be the solution set of the system [A | b⃗] and let S₀ be the solution set of the associated homogeneous system [A | 0⃗]. If x⃗ₚ is any particular solution in S_b, then
S_b = {x⃗ₚ + s⃗ | s⃗ ∈ S₀}
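The structure in 2.2.30 (particular solution plus homogeneous solutions) can be sketched with NumPy; A, b⃗, and the homogeneous solution below are arbitrary illustrations:

```python
import numpy as np

# Arbitrary consistent underdetermined system [A | b]
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])

# One particular solution x_p (least-squares solve is exact here)
x_p = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(A @ x_p, b)

# A homogeneous solution s with A s = 0 (found by inspection for this A)
s = np.array([2.0, -1.0, 1.0])
assert np.allclose(A @ s, 0.0)

# x_p + c*s solves the original system for every scalar c
for c in [-1.0, 0.0, 2.5]:
    assert np.allclose(A @ (x_p + c * s), b)
```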
2.3.4: For any m × n matrix A we have
rank A ≤ min(m, n)
2.3.5 (System Rank Theorem): Let A be the coefficient matrix of a system of m linear equations in n unknowns [A | b⃗]. The system is inconsistent if and only if
rank A < rank [A | b⃗]
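Comparing rank A with rank [A | b⃗] detects consistency numerically; a small NumPy sketch with arbitrary example matrices:

```python
import numpy as np

# Arbitrary rank-deficient coefficient matrix
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

b_consistent = np.array([[3.0], [6.0]])    # lies in the column space of A
b_inconsistent = np.array([[3.0], [7.0]])  # does not

rank_A = np.linalg.matrix_rank(A)
# Consistent: rank A = rank [A | b]
assert rank_A == np.linalg.matrix_rank(np.hstack([A, b_consistent]))
# Inconsistent: rank A < rank [A | b]
assert rank_A < np.linalg.matrix_rank(np.hstack([A, b_inconsistent]))
```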
3.1.4: If A, B, C ∈ M_{m×n}(R) and c, d ∈ R, then:
3.1.7: Let A, B ∈ M_{m×n}(R) and c ∈ R; then:
3.1.19 (Column Extraction Theorem): If e⃗ᵢ is the i-th standard basis vector and A = [a⃗₁ ... a⃗ₙ], then Ae⃗ᵢ = a⃗ᵢ.
3.1.20: If x⃗, y⃗ ∈ Rⁿ, then x⃗ᵀy⃗ = x⃗ · y⃗.
3.1.22: If A ∈ M_{m×n}(R), x⃗, y⃗ ∈ Rⁿ and c ∈ R, then:
3.1.29: If A, B and C are matrices of the correct size so that the required product is defined and t ∈ R, then
3.1.32 (Matrix Equality Theorem): If A and B are m × n matrices such that Ax⃗ = Bx⃗ for every x⃗ ∈ Rⁿ, then A = B.
3.1.32: If I = [e⃗₁ ... e⃗ₙ], then for any n × n matrix A, we have
AI = A = IA
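Both the Column Extraction Theorem and the identity property can be verified in a few lines of NumPy; the matrix A is an arbitrary example:

```python
import numpy as np

# Arbitrary 3x3 matrix
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
I = np.eye(3)

# Column Extraction (3.1.19): A e_i is the i-th column of A
for i in range(3):
    e_i = I[:, i]
    assert np.allclose(A @ e_i, A[:, i])

# Identity (3.1.32): AI = A = IA
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)
```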
3.2.2: If A is an m × n matrix and f: Rⁿ → Rᵐ is defined by f(x⃗) = Ax⃗, then for all x⃗, y⃗ ∈ Rⁿ and b, c ∈ R we have
f(bx⃗ + cy⃗) = bf(x⃗) + cf(y⃗)
3.2.8: If L: Rⁿ → Rᵐ is a linear mapping, then L(0⃗) = 0⃗.
3.2.9: Every linear mapping L: Rⁿ → Rᵐ can be represented by a matrix whose i-th column is the image under L of the i-th standard basis vector of Rⁿ, for all 1 ≤ i ≤ n. That is, L(x⃗) = [L]x⃗ where
[L] = [L(e⃗₁) ... L(e⃗ₙ)]
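Theorem 3.2.9 is directly constructive: stacking the images of the standard basis vectors as columns gives the standard matrix. A NumPy sketch, where the mapping L (a shear) is an arbitrary example:

```python
import numpy as np

# An arbitrary linear mapping L : R^2 -> R^2 (a shear)
def L(x):
    return np.array([x[0] + 2 * x[1], x[1]])

# Build [L] = [L(e1) L(e2)] from the images of the standard basis vectors
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([L(e1), L(e2)])

# The matrix reproduces L on arbitrary inputs: L(x) = [L] x
for x in [np.array([3.0, -1.0]), np.array([0.5, 2.0])]:
    assert np.allclose(M @ x, L(x))
```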
3.2.15: If R_θ: R² → R² is a rotation with rotation matrix A = [R_θ], then the columns of A are orthogonal unit vectors.
3.3.6: If L: Rⁿ → Rᵐ is a linear mapping, then Range(L) is a subspace of Rᵐ.
3.3.14: If L: Rⁿ → Rᵐ is a linear mapping, then Ker(L) is a subspace of Rⁿ.
3.3.16: Let L: Rⁿ → Rᵐ be a linear mapping. L is one-to-one if and only if for every u⃗, v⃗ ∈ Rⁿ such that L(u⃗) = L(v⃗), we must have u⃗ = v⃗.
3.3.18: Let L: Rⁿ → Rᵐ be a linear mapping with standard matrix [L]. Then x⃗ ∈ Ker(L) if and only if [L]x⃗ = 0⃗.
3.3.19: Let A ∈ M_{m×n}(R). The set {x⃗ ∈ Rⁿ | Ax⃗ = 0⃗} is a subspace of Rⁿ.
3.3.23: Let A be an m × n matrix. Suppose the vector equation of the solution set of Ax⃗ = 0⃗, as determined by the Gauss-Jordan algorithm, is given by
x⃗ = t₁v⃗₁ + ... + tₖv⃗ₖ, t₁, ..., tₖ ∈ R
Then {v⃗₁, ..., v⃗ₖ} is a basis for Null(A).
3.3.24: If A is an m × n matrix, then
dim N ull(A) = n − rank(A)
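The rank–nullity relation in 3.3.24 can be checked numerically; this sketch extracts a null space basis from the singular value decomposition (the matrix A is an arbitrary example):

```python
import numpy as np

# Arbitrary rank-deficient matrix (second row is twice the first)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
m, n = A.shape
rank = np.linalg.matrix_rank(A)

# Rows of Vh belonging to (near-)zero singular values span Null(A)
_, sigma, Vh = np.linalg.svd(A)
null_basis = Vh[np.sum(sigma > 1e-10):]

assert len(null_basis) == n - rank      # dim Null(A) = n - rank A
for v in null_basis:
    assert np.allclose(A @ v, 0.0)      # each basis vector solves Ax = 0
```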
3.3.27: If L: Rⁿ → Rᵐ is a linear mapping with standard matrix [L] = A = [a⃗₁ ... a⃗ₙ], then
Range(L) = Span{a⃗₁, ..., a⃗ₙ}
4.2.5: If E is an elementary matrix, then E is invertible and E⁻¹ is also an elementary matrix.
4.2.8: If A is an m × n matrix and E is the m × m elementary matrix corresponding to the row operation Rᵢ + cRⱼ for i ≠ j, then EA is the matrix obtained from A by performing the row operation Rᵢ + cRⱼ on A.
4.2.9: If A is an m × n matrix and E is the m × m elementary matrix corresponding to the row operation cRᵢ, then EA is the matrix obtained from A by performing the row operation cRᵢ on A.
4.2.10: If A is an m × n matrix and E is the m × m elementary matrix corresponding to the row operation Rᵢ ↔ Rⱼ for i ≠ j, then EA is the matrix obtained from A by performing the row operation Rᵢ ↔ Rⱼ on A.
4.2.11: If A is an m × n matrix and E is an m × m elementary matrix, then
rank(EA) = rank(A)
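Left-multiplying by an elementary matrix really does perform the row operation (4.2.8) and preserve rank (4.2.11); a NumPy sketch with an arbitrary matrix:

```python
import numpy as np

# Arbitrary 3x2 matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Elementary matrix for the row operation R1 + 2*R3 (0-indexed: row 0 + 2*row 2)
E = np.eye(3)
E[0, 2] = 2.0

B = E @ A
assert np.allclose(B[0], A[0] + 2 * A[2])   # row operation applied to row 0
assert np.allclose(B[1:], A[1:])            # other rows untouched
assert np.linalg.matrix_rank(E @ A) == np.linalg.matrix_rank(A)
```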
4.2.12: If A is an m × n matrix with reduced echelon form R, then there exists a sequence E₁, ..., Eₖ of m × m elementary matrices such that Eₖ...E₂E₁A = R. In particular,
A = E₁⁻¹E₂⁻¹...Eₖ⁻¹R
4.2.17: If A is an n × n invertible matrix, then A and A⁻¹ can be written as products of elementary matrices.
4.2.20: If E is an m × m elementary matrix, then Eᵀ is an elementary matrix.
4.3.11: Let A be an n × n matrix. For any i with 1 ≤ i ≤ n,
det A = Σₖ₌₁ⁿ aᵢₖCᵢₖ
called the cofactor expansion across the i-th row. Or, for any j with 1 ≤ j ≤ n,
det A = Σₖ₌₁ⁿ aₖⱼCₖⱼ
called the cofactor expansion across the j-th column.
4.3.16: If an n × n matrix A is upper or lower triangular, then
det A = a 11 a 22 ...a nn
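The cofactor expansion in 4.3.11 can be implemented directly and compared against NumPy's determinant; this recursive sketch expands across the first row, and the matrix is an arbitrary example:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion across the first row (4.3.11)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 0 and column k, then sign gives the cofactor C_1k
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))
```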
4.3.18: If A is an n × n matrix and B is the matrix obtained from A by multiplying one row of A by c ∈ R, then det B = c det A.
4.3.19: If A is an n × n matrix and B is the matrix obtained from A by swapping two rows of A, then det B = − det A.
4.3.20: If an n × n matrix A has two identical rows, then det A = 0.
4.3.21: If A is an n × n matrix and B is the matrix obtained from A by adding a multiple of one row to another row, then det B = det A.
4.3.24: If A is an n × n matrix and E is an n × n elementary matrix, then det EA = det E det A.
4.3.25 (Addition to the Invertible Matrix Theorem): An n × n matrix A is invertible if and only if det A ≠ 0.
4.3.26: If A and B are n × n matrices, then det AB = det A det B.
4.3.27: If A is an invertible matrix, then det A⁻¹ = 1 / det A.
4.3.28: If A is an n × n matrix, then det A = det Aᵀ.
4.4.1: If A is an n × n matrix with cofactors Cᵢⱼ and i ≠ j, then Σₖ₌₁ⁿ aᵢₖCⱼₖ = 0.
4.4.2: If A is an invertible n × n matrix, then (A⁻¹)ᵢⱼ = (1/det A) Cⱼᵢ.
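The adjugate formula in 4.4.2 says the inverse is the transposed cofactor matrix divided by the determinant; a 2 × 2 sketch checked against NumPy's inverse, with an arbitrary example matrix:

```python
import numpy as np

# Arbitrary invertible 2x2 matrix
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Cofactor matrix of a 2x2: C = [[ a22, -a21], [-a12, a11]]
C = np.array([[A[1, 1], -A[1, 0]],
              [-A[0, 1], A[0, 0]]])

# 4.4.2: (A^{-1})_ij = (1/det A) * C_ji, i.e. A^{-1} = C^T / det A
A_inv = C.T / np.linalg.det(A)
assert np.allclose(A_inv, np.linalg.inv(A))
```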
5.1.3: Every subspace S of Rⁿ has a basis.
5.1.7: Let S be a subspace of Rⁿ, let B = {v⃗₁, ..., v⃗ₖ} be a basis for S, and let C = {w⃗₁, ..., w⃗ₘ} be a set of vectors in S. If m > k, then C is linearly dependent.
5.1.8: If B = {v⃗₁, ..., v⃗ₖ} and C = {w⃗₁, ..., w⃗ₘ} are bases for S, then k = m.
5.1.12: If dim S = k, then:
5.1.15: If S is a k-dimensional subspace of Rⁿ and {v⃗₁, ..., v⃗ₗ} is a linearly independent set in S with l < k, then there exist vectors w⃗ₗ₊₁, ..., w⃗ₖ in S such that {v⃗₁, ..., v⃗ₗ, w⃗ₗ₊₁, ..., w⃗ₖ} is a basis for S.
5.1.16: Let S₁ and S₂ be subspaces of Rⁿ such that S₁ ⊆ S₂. Then dim S₁ ≤ dim S₂. Moreover, S₁ = S₂ if and only if dim S₁ = dim S₂.
5.2.1: If B = {v⃗₁, ..., v⃗ₖ} is a basis for the subspace S of Rⁿ, then every v⃗ ∈ S can be written as a unique linear combination of the vectors in B.
5.2.10: If S is a subspace of Rⁿ with basis B = {v⃗₁, ..., v⃗ₖ}, then for any v⃗, w⃗ ∈ S and s, t ∈ R we have
[sv⃗ + tw⃗]_B = s[v⃗]_B + t[w⃗]_B
5.2.16: If B and C are bases for a k-dimensional subspace S, then the change of coordinates matrices P_{C←B} and P_{B←C} satisfy
P_{C←B} P_{B←C} = I = P_{B←C} P_{C←B}
6.1.7: If A and B are n × n matrices such that P⁻¹AP = B for some invertible matrix P, then tr(A) = tr(B), where tr(A) = Σᵢ₌₁ⁿ aᵢᵢ is called the trace of the matrix A.
6.2.8: A scalar λ is an eigenvalue of an n × n matrix A if and only if C_A(λ) = 0.
6.2.13: If A is an n × n upper or lower triangular matrix, then the eigenvalues of A are the diagonal entries of A.
6.2.20: If A and B are similar matrices, then A and B have the same characteristic polynomial, and hence the same eigenvalues.
6.2.21: If A is an n × n matrix with eigenvalue λ₁, then 1 ≤ g_λ₁ ≤ a_λ₁.
6.3.2 (Diagonalization Theorem): An n × n matrix A is diagonalizable if and only if there exists a basis {v⃗₁, ..., v⃗ₙ} for Rⁿ consisting of eigenvectors of A.
6.3.3: If A is an n × n matrix with eigenpairs (λ₁, v⃗₁), (λ₂, v⃗₂), ..., (λₖ, v⃗ₖ) where λᵢ ≠ λⱼ for i ≠ j, then {v⃗₁, ..., v⃗ₖ} is linearly independent.
6.3.4: If A is an n × n matrix with distinct eigenvalues λ₁, ..., λₖ and Bᵢ = {v⃗ᵢ,₁, ..., v⃗ᵢ,g_λᵢ} is a basis for the eigenspace of λᵢ for 1 ≤ i ≤ k, then B₁ ∪ B₂ ∪ ... ∪ Bₖ is a linearly independent set.
6.3.5 (Diagonalizability Test): If A is an n × n matrix whose characteristic polynomial factors as
C_A(λ) = (λ − λ₁)^{a_λ₁} ... (λ − λₖ)^{a_λₖ}
where λ₁, ..., λₖ are distinct eigenvalues of A, then A is diagonalizable if and only if g_λᵢ = a_λᵢ for 1 ≤ i ≤ k.
6.3.6: If A is an n × n matrix with n distinct eigenvalues, then A is diagonalizable.
6.3.13: If λ₁, ..., λₙ are all the n eigenvalues of an n × n matrix A, then
det A = λ 1 ...λ n and tr(A) = λ 1 + ... + λ n
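Theorem 6.3.13 is easy to spot-check with NumPy's eigenvalue routine; the matrix A is an arbitrary example:

```python
import numpy as np

# Arbitrary symmetric 2x2 matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigvals = np.linalg.eigvals(A)

# det A is the product, and tr A the sum, of the eigenvalues
assert np.isclose(np.prod(eigvals), np.linalg.det(A))
assert np.isclose(np.sum(eigvals), np.trace(A))
```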
6.4.1: Let A be an n × n matrix. If there exists an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D, then
Aᵏ = PDᵏP⁻¹
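This is the main computational payoff of diagonalization: Dᵏ is computed entrywise, so powers of A become cheap. A NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

# Arbitrary diagonalizable (symmetric) matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors; P^-1 A P = D
k = 5

# A^k = P D^k P^{-1}, where D^k just raises the diagonal entries to the k-th power
A_k = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```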