Matrix Equations, Inverse Matrices, and Vector Spaces, Study notes of Linear Algebra

These notes present two lemmas about invertible matrices and the theorem derived from them. They also introduce the concept of a vector space, providing definitions and examples of various vector spaces, including spaces of polynomials, matrices, and functions, as well as examples of sets that are not vector spaces.



Lecture 8

Andrei Antonenko

February 19, 2003

1 Matrix equations and the inverse

1.1 Discussion of the algorithm - Part 2

Last time we proved the following result:

Lemma 1.1. 1. If the square matrix A is invertible, then its RREF is the identity matrix.

  2. If we can reduce the matrix A to the identity matrix by elementary row operations, i.e. if its RREF is the identity matrix, then this algorithm gives us A⁻¹B in the right half of the augmented matrix.

Actually, the converse assertion of this lemma is also true.

Lemma 1.2. Let RREF of a square matrix A be the identity matrix. Then A is invertible.

Proof. Consider the process of reducing A to its RREF. It can be done by elementary row operations with matrices E1, E2, ..., Es, i.e. (Es Es−1 ··· E1)A = I. From this equality we see that the product Es Es−1 ··· E1 satisfies the definition of the inverse of A.

From these two lemmas we get the main result about invertible matrices so far:

Theorem 1.3. The matrix A is invertible if and only if its RREF is the identity matrix.
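Theorem 1.3 suggests an algorithm: row-reduce the augmented matrix [A | I], and if the left half becomes the identity, the right half is A⁻¹; if the reduction stalls, A is not invertible. A minimal sketch (the function name and the use of exact fractions are our own choices, not from the lecture):

```python
from fractions import Fraction

def inverse_by_rref(A):
    """Try to invert a square matrix by row-reducing the augmented
    matrix [A | I].  Returns the inverse, or None when the RREF of A
    is not the identity matrix (i.e. A is not invertible, per Theorem 1.3)."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a pivot row for this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None          # no pivot: the RREF cannot be the identity
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry equals 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column's entries in every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half of [I | A^(-1)] now holds the inverse.
    return [row[n:] for row in M]

print(inverse_by_rref([[2, 1], [1, 1]]))  # entries are Fractions; equals [[1, -1], [-1, 2]]
print(inverse_by_rref([[1, 2], [2, 4]]))  # None: this matrix is not invertible
```

Each step is one of the elementary row operations E1, ..., Es from the proof of Lemma 1.2, applied to the whole augmented matrix at once.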

2 Vector spaces

In this lecture we will introduce a new algebraic structure, one of the most important structures in linear algebra. This is a set with two operations: addition of its elements, and multiplication of its elements by numbers.

Definition 2.1. Let k be any field. (We have not studied fields so far, so those who are not familiar with them can treat the letter k as another notation for R.) A set V is called a vector space if there is an operation of addition of elements of V such that ∀v, w ∈ V, v + w ∈ V, and an operation of multiplication of elements of k by elements of V (often called scalar multiplication) such that ∀a ∈ k ∀v ∈ V, av ∈ V, and the following axioms are satisfied.

Axioms of addition:

(A1) ∀v, u ∈ V v + u = u + v

(A2) ∀v, u, w ∈ V v + (u + w) = (v + u) + w

(A3) ∃0 ∈ V such that ∀v ∈ V, v + 0 = v

(A4) ∀v ∈ V ∃(−v) ∈ V such that v + (−v) = 0

Axioms of multiplication:

(M1) ∀a ∈ k ∀u, v ∈ V a(u + v) = au + av

(M2) ∀a, b ∈ k ∀v ∈ V (a + b)v = av + bv

(M3) ∀a, b ∈ k ∀v ∈ V a(bv) = (ab)v

(M4) ∀u ∈ V, 1u = u

Elements of the vector space are called vectors.

Now we'll give a number of examples of vector spaces.

Example 2.2 (Space Rn). Let V be a set of n-tuples of elements of R. We can define operations as follows:

Addition: (a1, a2, ..., an) + (b1, b2, ..., bn) = (a1 + b1, a2 + b2, ..., an + bn)

Scalar multiplication: k(a1, a2, ..., an) = (ka1, ka2, ..., kan).

The zero vector is 0 = (0, 0, ..., 0) and the negative vector is −(a1, a2, ..., an) = (−a1, −a2, ..., −an).
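The componentwise operations of Example 2.2 can be written down directly; a small sketch (the helper names add and scale are our own):

```python
# Componentwise operations on n-tuples, mirroring Example 2.2.
def add(u, v):
    """Vector addition in R^n: (a1, ..., an) + (b1, ..., bn)."""
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):
    """Scalar multiplication: k(a1, ..., an) = (k a1, ..., k an)."""
    return tuple(k * a for a in v)

u, v = (1, 2, 3), (4, 5, 6)
print(add(u, v))    # (5, 7, 9)
print(scale(2, u))  # (2, 4, 6)
# Commutativity (A1) and distributivity (M1) hold on these samples:
assert add(u, v) == add(v, u)
assert scale(2, add(u, v)) == add(scale(2, u), scale(2, v))
```

The asserts only confirm the axioms on particular tuples; the axioms themselves are proved by computing with general components, as in the definitions above.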

Example 2.3 (Space P(t)). Let V be the set of all polynomials of the form

p(t) = a0 + a1 t + a2 t^2 + ··· + as t^s, s ∈ N.

We can define operations as follows:

Addition: Usual addition of polynomials.

Scalar multiplication: Multiplication of a polynomial by a number.
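Representing a polynomial a0 + a1 t + ··· + as t^s by its coefficient list [a0, a1, ..., as], the operations of Example 2.3 can be sketched as follows (the helper names are our own):

```python
from itertools import zip_longest

def poly_add(p, q):
    """Add polynomials given as coefficient lists [a0, a1, ..., as];
    zip_longest pads the shorter polynomial with zero coefficients."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(k, p):
    """Multiply a polynomial by the number k, coefficient by coefficient."""
    return [k * a for a in p]

# p(t) = 1 + 2t,  q(t) = 3 + t^2
p, q = [1, 2], [3, 0, 1]
print(poly_add(p, q))    # [4, 2, 1]  i.e. 4 + 2t + t^2
print(poly_scale(3, p))  # [3, 6]     i.e. 3 + 6t
```

The zero vector of this space is the zero polynomial, the empty coefficient list [] in this encoding.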

The following simple properties hold in any vector space:

  • If u + w = v + w then u = v (cancellation).
  • ∀k ∈ k, k0 = 0.

Proof. k0 = k(0 + 0) = k0 + k0, and so by the cancellation property k0 = 0.

  • ∀u ∈ V, 0u = 0.

Proof. 0u = (0 + 0)u = 0u + 0u, and so by the cancellation property 0u = 0.

  • If k ≠ 0 and ku = 0 then u = 0.

Proof. u = 1u = (k⁻¹k)u = k⁻¹(ku) = k⁻¹0 = 0.

  • ∀k ∈ k ∀u ∈ V, (−k)u = k(−u).

Proof. 0 = k0 = k(u + (−u)) = ku + k(−u), and 0 = 0u = (k + (−k))u = ku + (−k)u. So ku + k(−u) = ku + (−k)u, and by the cancellation property k(−u) = (−k)u.

3 Subspaces

Definition 3.1. Let V be a vector space. The subset W ⊂ V is called a subspace of V if W itself is a vector space under the same operations.

To check that W is a subspace it suffices to check the following properties:

  1. 0 ∈ W
  2. ∀v, w ∈ W v + w ∈ W
  3. ∀k ∈ k ∀u ∈ W ku ∈ W
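These three conditions can be spot-checked numerically on sample points: a failure is a genuine counterexample, while success is only evidence, not a proof. A sketch (all names here are our own, not from the notes):

```python
def looks_like_subspace(points, in_W, add, scale, zero, scalars):
    """Spot-check the three subspace conditions on sample data:
    (1) zero is in W, (2) sums of sample points stay in W,
    (3) scalar multiples of sample points stay in W."""
    if not in_W(zero):
        return False
    for u in points:
        for v in points:
            if not in_W(add(u, v)):
                return False
        for k in scalars:
            if not in_W(scale(k, u)):
                return False
    return True

# Operations of R^2, as in Example 2.2 with n = 2.
add = lambda u, v: (u[0] + v[0], u[1] + v[1])
scale = lambda k, u: (k * u[0], k * u[1])

# W = {(0, y)}: the spot check passes (and the set is in fact a subspace).
print(looks_like_subspace([(0, 1), (0, -2)], lambda p: p[0] == 0,
                          add, scale, (0, 0), [2, -3]))   # True
# W = {(x, x^2)}: the check fails, e.g. (1, 1) + (2, 4) = (3, 5) with 5 != 3^2.
print(looks_like_subspace([(1, 1), (2, 4)], lambda p: p[1] == p[0] ** 2,
                          add, scale, (0, 0), [2, -3]))   # False
```

A proof that a set is a subspace must still argue with general elements, as the examples below do.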

Example 3.2. Consider the vector space R^2. Its subset W = {(0, y) | y ∈ R}, the set of pairs whose first element equals 0, is a subspace. Let's prove it. First, (0, 0) ∈ W, since its first element is 0. Next, let u = (0, a) ∈ W and v = (0, b) ∈ W. Their sum u + v = (0, a + b) ∈ W, since it has zero in the first place. Finally, multiplying any vector u = (0, a) ∈ W by any number k gives ku = (0, ka), which belongs to W, since it has 0 in the first place. So W is a subspace.

Example 3.3. Consider the vector space R^2. Its subset W = {(1, y) | y ∈ R}, the set of pairs whose first element equals 1, is NOT a subspace. Here the first property fails: (0, 0) does not belong to W. The other properties fail as well: (1, a) ∈ W and (1, b) ∈ W, but their sum (2, a + b) ∉ W, since it has 2 in the first place.

Example 3.4. Consider the vector space R^2. Its subset W = {(x, y) | x, y ∈ R, x = y}, the set of pairs whose first element equals the second (geometrically, a line in the plane), is a subspace.

[Figure: the line y = x in the (x, y)-plane, with sample points marked along it.]

Let's check it. First of all, if a = (a, a) ∈ W and b = (b, b) ∈ W, then a + b = (a + b, a + b) ∈ W. Also, (0, 0) ∈ W. Moreover, for each k ∈ R we have k(a, a) = (ka, ka) ∈ W. So W is a subspace.

One can prove that any line in the plane R^2 which goes through the origin is a subspace. Moreover, any plane in the space R^3 which contains the origin (0, 0, 0) is a subspace.

Example 3.5. Consider the vector space R^2. Its subset W = {(x, x^2) | x ∈ R}, the set of pairs whose second element equals the square of the first, is NOT a subspace. Let's prove it. True, (0, 0) = (0, 0^2) ∈ W. But consider two elements of this set: (1, 1) ∈ W and (2, 4) ∈ W. Their sum (3, 5) does not belong to W, since 5 ≠ 3^2. We have found two elements whose sum does not belong to the set, so W is not a subspace.