Linear Algebra Cheat Sheet

An overview of the definitions and theorems encountered in a first-year linear algebra course.

Typology: Cheat Sheet
Academic year: 2024/2025
Uploaded on 07/02/2025 by john-brown-nm4

1 Chapter 1

1.1.7: If x, y, w ∈ R^n and c, d ∈ R:

  • V1: x + y ∈ R^n
  • V2: (x + y) + w = x + (y + w)
  • V3: y + x = x + y
  • V4: There exists a vector 0 ∈ R^n such that x + 0 = x for all x ∈ R^n
  • V5: For every vector x ∈ R^n, there exists −x such that x + (−x) = 0
  • V6: cx ∈ R^n
  • V7: c(dx) = (cd)x
  • V8: (c + d)x = cx + dx
  • V9: c(x + y) = cx + cy
  • V10: 1x = x
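These axioms can be spot-checked numerically. A minimal sketch with NumPy (the vectors, scalars, and seed are arbitrary choices for illustration, not from the text):

```python
import numpy as np

# Spot-check axioms V2, V7, and V9 of 1.1.7 on random vectors in R^3.
rng = np.random.default_rng(0)
x, y, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
c, d = 2.0, -3.0

assert np.allclose((x + y) + w, x + (y + w))    # V2: associativity of addition
assert np.allclose(c * (d * x), (c * d) * x)    # V7: compatibility of scalars
assert np.allclose(c * (x + y), c * x + c * y)  # V9: distributivity over vectors
print("axioms V2, V7, V9 hold numerically")
```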

1.2.13: A set of vectors {v1, ..., vk} is linearly dependent if and only if vi ∈ Span{v1, ..., vi−1, vi+1, ..., vk} for some i, 1 ≤ i ≤ k.

1.2.20: If a set of vectors {v1, ..., vk} contains the zero vector, then it is linearly dependent.

1.2.28: If β = {v1, ..., vk} is a basis for a subspace S of R^n, then every vector x ∈ S can be written as a unique linear combination of the vectors in β.

1.3.2 (Subspace Test): Let S be a non-empty subset of R^n. If x + y ∈ S and cx ∈ S for all x, y ∈ S and c ∈ R, then S is a subspace of R^n.

1.3.9: If v1, ..., vk ∈ R^n, then S = Span{v1, ..., vk} is a subspace of R^n.

1.4.1: If x, y ∈ R^2 and θ is the angle between x and y, then x · y = ||x|| ||y|| cos θ.
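The formula in 1.4.1 gives a direct way to compute the angle between two vectors; a small NumPy sketch (the example vectors are chosen for illustration):

```python
import numpy as np

# From 1.4.1: x . y = ||x|| ||y|| cos(theta), so
# theta = arccos( x . y / (||x|| ||y||) ).
x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
cos_theta = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)
print(np.degrees(theta))  # approximately 45.0
```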

1.4.4: If x, y, z ∈ R^n and s, t ∈ R, then:

  1. x · x ≥ 0, and x · x = 0 if and only if x = 0
  2. x · y = y · x
  3. x · (sy + tz) = s(x · y) + t(x · z)

1.4.8: If x, y ∈ R^n and c ∈ R, then:

  1. ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0
  2. ||cx|| = |c| ||x||
  3. |x · y| ≤ ||x|| ||y|| (Cauchy-Schwarz Inequality)
  4. ||x + y|| ≤ ||x|| + ||y|| (Triangle Inequality)

1.4.17: Suppose that v, w, x ∈ R^3 and c ∈ R:

  1. If n = v × w, then for any y ∈ Span{v, w} we have y · n = 0
  2. v × w = −w × v
  3. v × v = 0
  4. v × w = 0 if and only if either v = 0 or w is a scalar multiple of v
  5. v × (w + x) = v × w + v × x
  6. (cv) × w = c(v × w)
  7. ||v × w|| = ||v|| ||w|| |sin θ|, where θ is the angle between v and w
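Several of the identities in 1.4.17 can be verified numerically with NumPy's `np.cross` (random test vectors, arbitrary seed):

```python
import numpy as np

# Check properties 2, 5, 6, and 7 of 1.4.17 on random vectors in R^3.
rng = np.random.default_rng(1)
v, w, x = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

assert np.allclose(np.cross(v, w), -np.cross(w, v))              # (2) anticommutativity
assert np.allclose(np.cross(v, w + x),
                   np.cross(v, w) + np.cross(v, x))              # (5) distributivity
assert np.allclose(np.cross(c * v, w), c * np.cross(v, w))       # (6) scalar pull-out

# (7): ||v x w|| = ||v|| ||w|| |sin theta|, with theta from the dot product.
cos_t = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
sin_t = np.sqrt(1 - cos_t**2)
assert np.isclose(np.linalg.norm(np.cross(v, w)),
                  np.linalg.norm(v) * np.linalg.norm(w) * sin_t)
print("cross product identities hold")
```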

1.4.18: Let v, w, b ∈ R^3 with {v, w} linearly independent, and let P be a plane in R^3 with vector equation x = sv + tw + b, s, t ∈ R. If n = v × w, then an equation for the plane is (x − b) · n = 0.

2 Chapter 2

Theorem 2.1.11: If the system of linear equations

  a11 x1 + ... + a1n xn = b1
  a21 x1 + ... + a2n xn = b2
  ...
  am1 x1 + ... + amn xn = bm

has two distinct solutions s = (s1, ..., sn) and t = (t1, ..., tn), then for every c ∈ R, s + c(s − t) is a solution, and furthermore these solutions are all distinct.

2.2.11: If the augmented matrices [A1 | b1] and [A2 | b2] are row equivalent, then the systems of linear equations associated with each augmented matrix are equivalent.

2.2.13: If A is a matrix, then A has a unique reduced row echelon form R.

2.2.27: The solution set of a homogeneous system of m linear equations in n variables is a subspace of R^n.

2.2.30: Let Sb be the solution set of the system [A | b] and let S0 be the solution set of the associated homogeneous system [A | 0]. Then if xp is any particular solution in Sb,

  Sb = {xp + s | s ∈ S0}
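Theorem 2.2.30 says the solution set of a consistent system is a particular solution translated by the homogeneous solution set. A small NumPy illustration (the matrix, particular solution, and homogeneous generator are hand-picked for this example):

```python
import numpy as np

# A rank-1 system with infinitely many solutions: S_b = { xp + t*s : t in R }.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # second row is twice the first
b = np.array([3.0, 6.0])

xp = np.array([3.0, 0.0])           # a particular solution: A @ xp == b
s = np.array([-2.0, 1.0])           # generates the homogeneous solution set

assert np.allclose(A @ xp, b)       # xp solves [A | b]
assert np.allclose(A @ s, 0)        # s solves [A | 0]
for t in (-1.0, 0.0, 2.5):          # xp + t*s is a solution for every t
    assert np.allclose(A @ (xp + t * s), b)
print("S_b = { xp + t*s }")
```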

2.3.4: For any m × n matrix A we have

rank A ≤ min(m, n)

2.3.5 (System Rank Theorem): Let A be the coefficient matrix of a system of m linear equations in n unknowns [A | b].

  1. The system [A | b] is inconsistent if and only if rank A < rank [A | b]
  2. If the system [A | b] is consistent, then the system contains (n − rank A) free variables

2.4.3: Let {v1, ..., vk} be a set of vectors in R^n and let A = [v1, ..., vk]. Then {v1, ..., vk} is linearly independent if and only if rank A = k.

2.4.8: Let {v1, ..., vk} be a set of vectors in R^n and let A = [v1, ..., vk]. Then {v1, ..., vk} spans R^n if and only if rank A = n.

2.4.11: A set of n vectors {v1, ..., vn} in R^n is linearly independent if and only if it spans R^n.
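Theorems 2.4.3 and 2.4.8 turn independence and spanning into rank computations; a sketch using `np.linalg.matrix_rank` (the example vectors are chosen so that v3 = v1 + v2):

```python
import numpy as np

# Put the vectors as columns of A, then compare rank A with k and n.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]   # v3 = v1 + v2, so dependent
A = np.column_stack([v1, v2, v3])

n, k = A.shape                      # n = dimension, k = number of vectors
r = np.linalg.matrix_rank(A)        # rank is 2 here

print(r == k)   # False: the vectors are linearly dependent (2.4.3)
print(r == n)   # False: they do not span R^3 (2.4.8)
```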

3 Chapter 3

3.1.4: If A, B, C ∈ M_{m×n}(R) and c, d ∈ R, then

  • A + B ∈ M_{m×n}(R)
  • (A + B) + C = A + (B + C)
  • A + B = B + A
  • There exists a matrix O_{m,n} ∈ M_{m×n}(R) such that A + O_{m,n} = A for all A
  • For every A ∈ M_{m×n}(R) there exists (−A) ∈ M_{m×n}(R) such that A + (−A) = O_{m,n}
  • cA ∈ M_{m×n}(R)
  • c(dA) = (cd)A
  • (c + d)A = cA + dA
  • c(A + B) = cA + cB
  • 1A = A

3.1.7: Let A, B ∈ M_{m×n}(R) and c ∈ R, then

  • (A^T)^T = A
  • (A + B)^T = A^T + B^T
  • (cA)^T = cA^T

3.1.19 (Column Extraction Theorem): If e_i is the i-th standard basis vector and A = [a1 ... an], then A e_i = a_i.

3.1.20: If x, y ∈ R^n, then x^T y = x · y.

3.1.22: If A ∈ M_{m×n}(R), x, y ∈ R^n and c ∈ R, then:

  1. A(x + y) = Ax + Ay
  2. A(cx) = c(Ax)

3.1.29: If A, B and C are matrices of the correct size so that the required product is defined and t ∈ R, then

  1. A(B + C) = AB + AC
  2. (A + B)C = AC + BC
  3. t(AB) = (tA)B = A(tB)
  4. A(BC) = (AB)C
  5. (AB)^T = B^T A^T

3.1.32 (Matrix Equality Theorem): If A and B are m × n matrices such that Ax = Bx for every x ∈ R^n, then A = B.

3.1.32: If I = [e1 ... en], then for any n × n matrix A, we have

  AI = A = IA

3.2.2: If A is an m × n matrix and f : R^n → R^m is defined by f(x) = Ax, then for all x, y ∈ R^n and b, c ∈ R we have

  f(bx + cy) = b f(x) + c f(y)
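The linearity property in 3.2.2 can be checked directly (random matrix, vectors, and seed are arbitrary choices for illustration):

```python
import numpy as np

# f(x) = A x satisfies f(bx + cy) = b f(x) + c f(y).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
x, y = rng.standard_normal(4), rng.standard_normal(4)
b, c = 1.5, -0.5

f = lambda v: A @ v
assert np.allclose(f(b * x + c * y), b * f(x) + c * f(y))
print("f(x) = Ax is linear")
```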

3.2.8: If L : R^n → R^m is a linear mapping, then L(0) = 0.

3.2.9: Every linear mapping L : R^n → R^m can be represented as a matrix mapping whose i-th column is the image of the i-th standard basis vector of R^n under L, for all 1 ≤ i ≤ n. That is, L(x) = [L]x where

  [L] = [L(e1) ... L(en)]

3.2.15: If R_θ : R^2 → R^2 is a rotation with matrix A = [R_θ], then the columns of A are orthogonal unit vectors.

3.3.6: If L : R^n → R^m is a linear mapping, then Range(L) is a subspace of R^m.

3.3.14: If L : R^n → R^m is a linear mapping, then Ker(L) is a subspace of R^n.

3.3.16: Let L : R^n → R^m be a linear mapping. L is one-to-one if and only if for every u, v ∈ R^n such that L(u) = L(v), we must have u = v.

3.3.18: Let L : R^n → R^m be a linear mapping with standard matrix [L]. Then x ∈ Ker(L) if and only if [L]x = 0.

3.3.19: Let A ∈ M_{m×n}(R). The set {x ∈ R^n | Ax = 0} is a subspace of R^n.

3.3.23: Let A be an m × n matrix. Suppose the vector equation of the solution set of Ax = 0, as determined by the Gauss-Jordan algorithm, is given by

  x = t1 v1 + ... + tk vk,  t1, ..., tk ∈ R

Then {v1, ..., vk} is a basis for Null(A).

3.3.24: If A is an m × n matrix, then

  dim Null(A) = n − rank(A)
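Theorem 3.3.24 (the rank-nullity relation) on a concrete matrix; the null space vectors below are worked out by hand for this particular A:

```python
import numpy as np

# dim Null(A) = n - rank(A), checked on a rank-1 matrix with n = 3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # second row is twice the first
n = A.shape[1]
r = np.linalg.matrix_rank(A)        # rank 1
print(n - r)                        # 2: the null space is a plane in R^3

# Two independent null space vectors found from x1 = -2*x2 - 3*x3:
for s in (np.array([-2.0, 1.0, 0.0]), np.array([-3.0, 0.0, 1.0])):
    assert np.allclose(A @ s, 0)
```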

3.3.27: If L : R^n → R^m is a linear mapping with standard matrix [L] = A = [a1 ... an], then

  Range(L) = Span{a1, ..., an}

4 Chapter 4

4.2.5: If E is an elementary matrix, then E is invertible and E^{−1} is also an elementary matrix.

4.2.8: If A is an m × n matrix and E is an m × m elementary matrix corresponding to the row operation R_i + cR_j for i ≠ j, then EA is the matrix obtained from A by performing the row operation R_i + cR_j on A.

4.2.9: If A is an m × n matrix and E is an m × m elementary matrix corresponding to the row operation cR_i, then EA is the matrix obtained from A by performing the row operation cR_i on A.

4.2.10: If A is an m × n matrix and E is an m × m elementary matrix corresponding to the row operation R_i ↔ R_j for i ≠ j, then EA is the matrix obtained from A by performing the row operation R_i ↔ R_j on A.

4.2.11: If A is an m × n matrix and E is an m × m elementary matrix, then

  rank(EA) = rank(A)
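A sketch of 4.2.8 and 4.2.11 in NumPy: building the elementary matrix for R1 ← R1 + cR2 and checking that left-multiplication performs the row operation (the example matrix is chosen arbitrarily):

```python
import numpy as np

# The elementary matrix E for R1 <- R1 + c*R2 is the identity with c
# placed in position (0, 1); then EA applies the row operation to A.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
c = 2.0

E = np.eye(2)
E[0, 1] = c
EA = E @ A

row_op = A.copy()
row_op[0] += c * row_op[1]          # the same operation done directly
assert np.allclose(EA, row_op)
assert np.linalg.matrix_rank(EA) == np.linalg.matrix_rank(A)  # 4.2.11
print(EA)   # first row becomes [7, 10]
```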

4.2.12: If A is an m × n matrix with reduced echelon form R, then there exists a sequence E1, ..., Ek of m × m elementary matrices such that Ek ... E2 E1 A = R. In particular,

  A = E1^{−1} E2^{−1} ... Ek^{−1} R

4.2.17: If A is an n × n invertible matrix, then A and A^{−1} can be written as a product of elementary matrices.

4.2.20: If E is an m × m elementary matrix, then E^T is an elementary matrix.

4.3.11: Let A be an n × n matrix. For any i with 1 ≤ i ≤ n,

  det A = Σ_{k=1}^{n} a_ik C_ik

is called the cofactor expansion across the i-th row. Or, for any j with 1 ≤ j ≤ n,

  det A = Σ_{k=1}^{n} a_kj C_kj

is called the cofactor expansion across the j-th column.

4.3.16: If an n × n matrix A is upper or lower triangular, then

  det A = a_11 a_22 ... a_nn
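The cofactor expansion in 4.3.11 can be implemented directly as a recursive determinant; a minimal (and deliberately slow, O(n!)) sketch checked against `np.linalg.det`:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion across the first row (4.3.11)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 0 and column k, then sign the sub-determinant.
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        C = (-1) ** k * det_cofactor(minor)   # cofactor C_{1,k+1}
        total += A[0, k] * C
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))
print(det_cofactor(A))   # 16.0
```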

4.3.18: If A is an n × n matrix and B is the matrix obtained from A by multiplying one row of A by c ∈ R, then det(B) = c det A.

4.3.19: If A is an n × n matrix and B is the matrix obtained from A by swapping two rows of A, then det(B) = − det A.

4.3.20: If an n × n matrix A has two identical rows, then det A = 0.

4.3.21: If A is an n × n matrix and B is the matrix obtained from A by adding a multiple of one row to another row, then det B = det A.

4.3.24: If A is an n × n matrix and E is an n × n elementary matrix, then det(EA) = det E det A.

4.3.25 (Addition to the Invertible Matrix Theorem): An n × n matrix A is invertible if and only if det A ≠ 0.

4.3.26: If A and B are n × n matrices, then det(AB) = det A det B.

4.3.27: If A is an invertible matrix, then det A^{−1} = 1 / det A.

4.3.28: If A is an n × n matrix, then det A = det A^T.

4.4.1: If A is an n × n matrix with cofactors C_ij and i ≠ j, then Σ_{k=1}^{n} (A)_ik C_jk = 0.

4.4.2: If A is an invertible n × n matrix, then (A^{−1})_ij = (1 / det A) C_ji.
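Formula 4.4.2 says the inverse is the transposed cofactor matrix (the adjugate) scaled by 1/det A; a sketch (the helper `cofactor_matrix` is an illustration written for this example, not from the text):

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C_ij = (-1)^(i+j) det(minor_ij)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
# (A^{-1})_ij = (1/det A) C_ji  <=>  A^{-1} = C^T / det A
inv = cofactor_matrix(A).T / np.linalg.det(A)
assert np.allclose(inv, np.linalg.inv(A))
print(inv)   # approximately [[0.6, -0.7], [-0.2, 0.4]]
```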

5 Chapter 5

5.1.3: Every subspace S of R^n has a basis.

5.1.7: Let S be a subspace of R^n. Let B = {v1, ..., vk} be a basis for S and let C = {w1, ..., wm} be a set in S. If m > k, then C is linearly dependent.

5.1.8: If B = {v1, ..., vk} and C = {w1, ..., wm} are bases for S, then k = m.

5.1.12: If dim S = k, then:

  1. A set of more than k vectors in S must be linearly dependent.
  2. A set of fewer than k vectors in S cannot span S.
  3. A set of k vectors in S is linearly independent if and only if it spans S.

5.1.15: If S is a k-dimensional subspace of R^n and {v1, ..., vl} is a linearly independent set in S with l < k, then there exist vectors w(l+1), ..., wk in S such that {v1, ..., vl, w(l+1), ..., wk} is a basis for S.

5.1.16: Let S1 and S2 be subspaces of R^n such that S1 ⊆ S2. Then dim S1 ≤ dim S2. Moreover, dim S1 = dim S2 if and only if S1 = S2.

5.2.1: If B = {v1, ..., vk} is a basis for the subspace S of R^n, then every v ∈ S can be written as a unique linear combination of the vectors in B.

5.2.10: If S is a subspace of R^n with basis B = {v1, ..., vk}, then for any v, w ∈ S and s, t ∈ R we have

  [sv + tw]_B = s[v]_B + t[w]_B

5.2.16: If B and C are bases for a k-dimensional subspace S, then the change of coordinates matrices P_{C←B} and P_{B←C} satisfy

  P_{C←B} P_{B←C} = I = P_{B←C} P_{C←B}
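A sketch of 5.2.16 for S = R^2: with each basis stored as the columns of a matrix, the change of coordinates matrices are computable and are inverses of each other (the bases below are arbitrary examples):

```python
import numpy as np

# x = B_mat @ [x]_B = C_mat @ [x]_C, so [x]_C = C_mat^{-1} B_mat [x]_B.
B_mat = np.column_stack([[1.0, 1.0], [1.0, -1.0]])   # basis B as columns
C_mat = np.column_stack([[2.0, 0.0], [1.0, 1.0]])    # basis C as columns

P_CB = np.linalg.inv(C_mat) @ B_mat   # B-coordinates -> C-coordinates
P_BC = np.linalg.inv(B_mat) @ C_mat   # C-coordinates -> B-coordinates

assert np.allclose(P_CB @ P_BC, np.eye(2))
assert np.allclose(P_BC @ P_CB, np.eye(2))
print("the two change of coordinates matrices are inverses of each other")
```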

6 Chapter 6

6.1.7: If A and B are n × n matrices such that P^{−1} A P = B for some invertible matrix P, then

  1. rank A = rank B
  2. det A = det B
  3. tr A = tr B, where tr A = Σ_{i=1}^{n} a_ii is called the trace of the matrix.

6.2.8: A scalar λ is an eigenvalue of an n × n matrix A if and only if C_A(λ) = 0.

6.2.13: If A is an n × n upper or lower triangular matrix, then the eigenvalues of A are the diagonal entries of A.

6.2.20: If A and B are similar matrices, then A and B have the same characteristic polynomial, and hence the same eigenvalues.

6.2.21: If A is an n × n matrix with eigenvalue λ1, then 1 ≤ g_{λ1} ≤ a_{λ1}.

6.3.2 (Diagonalization Theorem): An n × n matrix A is diagonalizable if and only if there exists a basis {v1, ..., vn} for R^n of eigenvectors of A.

6.3.3: If A is an n × n matrix with eigenpairs (λ1, v1), (λ2, v2), ..., (λk, vk), where λi ≠ λj for i ≠ j, then {v1, ..., vk} is linearly independent.

6.3.4: If A is an n × n matrix with distinct eigenvalues λ1, ..., λk and B_i = {v_{i,1}, ..., v_{i,g_{λi}}} is a basis for the eigenspace of λi for 1 ≤ i ≤ k, then B1 ∪ B2 ∪ ... ∪ Bk is a linearly independent set.

6.3.5 (Diagonalizability Test): If A is an n × n matrix whose characteristic polynomial factors as

  C_A(λ) = (λ − λ1)^{a_{λ1}} ... (λ − λk)^{a_{λk}}

where λ1, ..., λk are the distinct eigenvalues of A, then A is diagonalizable if and only if g_{λi} = a_{λi} for 1 ≤ i ≤ k.

6.3.6: If A is an n × n matrix with n distinct eigenvalues, then A is diagonalizable.

6.3.13: If λ1, ..., λn are all the n eigenvalues of an n × n matrix A, then

  det A = λ1 ... λn  and  tr(A) = λ1 + ... + λn
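Theorem 6.3.13 checked numerically on a small symmetric matrix (chosen so its eigenvalues, 1 and 3, are easy to verify by hand):

```python
import numpy as np

# det A is the product of the eigenvalues; tr(A) is their sum.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3
eigvals = np.linalg.eigvals(A)

assert np.isclose(np.prod(eigvals).real, np.linalg.det(A))   # 1 * 3 = det A
assert np.isclose(np.sum(eigvals).real, np.trace(A))         # 1 + 3 = tr A
print(sorted(eigvals.real))   # approximately [1.0, 3.0]
```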

6.4.1: Let A be an n × n matrix. If there exists an invertible matrix P and a diagonal matrix D such that P^{−1} A P = D, then

  A^k = P D^k P^{−1}
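Formula 6.4.1 gives a fast way to compute matrix powers, since D^k just raises the diagonal entries to the k-th power; a NumPy sketch checked against `np.linalg.matrix_power`:

```python
import numpy as np

# Diagonalize A, then compute A^5 as P D^5 P^{-1}.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
D = np.diag(eigvals)

assert np.allclose(P @ D @ np.linalg.inv(P), A)   # A = P D P^{-1}

k = 5
A_k = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
print("A^5 via diagonalization matches direct computation")
```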