Orthogonal Bases-Linear Algebra-Lecture 28 Notes-Applied Math and Statistics, Study notes of Linear Algebra

Orthogonal Bases, Fourier, Coefficients, Decomposition, Euclidean Space, Projections, Gram Schmidt, Orthogonalization, Process, Vector, Subspace, Linear Algebra, Lecture Notes, Andrei Antonenko, Department of Applied Math and Statistics, Stony Brook University, New York, United States of America.

Lecture 27

Andrei Antonenko

April 14, 2003

1 Orthogonal bases

In this section we will generalize the example from the previous lecture. Let {v1, v2, . . . , vn} be an orthogonal basis of the Euclidean space V. Our goal is to find the coordinates of a vector u in this basis, i.e. such numbers a1, a2, . . . , an that

u = a1 v1 + a2 v2 + · · · + an vn.

The familiar way is to write down a linear system and solve it. But since the basis vectors are orthogonal, we can do better. First, let’s take the inner product of the expression above with v1. We get:

〈u, v1〉 = a1〈v1, v1〉 + a2〈v1, v2〉 + · · · + an〈v1, vn〉.

But all the products 〈v1, v2〉, . . . , 〈v1, vn〉 are equal to 0, so we’ll have

〈u, v1〉 = a1〈v1, v1〉,

and thus a1 = 〈u, v1〉/〈v1, v1〉.

Multiplying in the same way by v2, v3, . . . , vn, we get formulae for the other coefficients:

a2 = 〈u, v2〉/〈v2, v2〉, . . . , an = 〈u, vn〉/〈vn, vn〉.

Definition 1.1. The coefficients defined as

a1 = 〈u, v1〉/〈v1, v1〉, . . . , an = 〈u, vn〉/〈vn, vn〉

are called the Fourier coefficients of the vector u with respect to the basis {v1, v2, . . . , vn}.

Moreover, we proved the following theorem:

Theorem 1.2. Let {v1, v2, . . . , vn} be an orthogonal basis of the Euclidean space V. Then for any vector u,

u = (〈u, v1〉/〈v1, v1〉) v1 + (〈u, v2〉/〈v2, v2〉) v2 + · · · + (〈u, vn〉/〈vn, vn〉) vn.

This expression is called the Fourier decomposition, and it can be obtained in any Euclidean space, e.g. the space of continuous functions C[a, b].
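As a quick numerical check of Theorem 1.2, the sketch below (plain Python; the helper names are my own, not from the lecture) computes the Fourier coefficients of a vector with respect to an orthogonal basis of R^3 and reconstructs the vector from them:

```python
def inner(x, y):
    # Standard dot product on R^n, playing the role of <x, y>.
    return sum(a * b for a, b in zip(x, y))

def fourier_coefficients(u, basis):
    # a_i = <u, v_i> / <v_i, v_i> for each orthogonal basis vector v_i.
    return [inner(u, v) / inner(v, v) for v in basis]

# An orthogonal (not orthonormal) basis of R^3.
basis = [(1, 1, 0), (1, -1, 0), (0, 0, 2)]
u = (3, 1, 4)

coeffs = fourier_coefficients(u, basis)
# Reconstruct u as a_1 v_1 + a_2 v_2 + a_3 v_3.
reconstruction = [sum(a * v[i] for a, v in zip(coeffs, basis)) for i in range(3)]
print(coeffs)          # [2.0, 1.0, 2.0]
print(reconstruction)  # [3.0, 1.0, 4.0]
```

Note that no linear system is solved: each coefficient comes from two inner products, which is exactly the advantage of an orthogonal basis.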

2 Projections

In this lecture we will continue studying orthogonality. We start with the projection of a vector onto another vector.

[Figure: the vector v, its projection cw = projw v along w, and the orthogonal component u = v − cw.]

The projection of the vector v along the vector w is the vector projw v = cw, proportional to w, such that u = v − cw is orthogonal to w. So, to find the projection, we have to determine the number c; then we can simply multiply it by the vector w. After that we will be able to find the perpendicular from v onto w, i.e. u. Since u is orthogonal to w, we can write

〈u, w〉 = 0.

But

u = v − cw,

so

〈v − cw, w〉 = 0 ⇔ 〈v, w〉 − c〈w, w〉 = 0.

From the last equality we can find c:

c = 〈v, w〉/〈w, w〉.

So, the projection of the vector v along the vector w is given by the following formula:

projw v = (〈v, w〉/〈w, w〉) w.

The orthogonal component u is equal to

u = v − projw v = v − (〈v, w〉/〈w, w〉) w.

The length of this perpendicular u is the distance between the point corresponding to the vector v and the line through 0 with direction vector w.

Example 2.1. Let’s find the distance from the point (1, 3) to the line y = x. The direction vector of this line is (1, 1). So, in our terms, we have the following data:

v = (1, 3), w = (1, 1).

Let’s compute the projection of v along w. Here 〈v, w〉 = 1 · 1 + 3 · 1 = 4 and 〈w, w〉 = 2, so

projw v = (〈v, w〉/〈w, w〉) w = (4/2) w = 2w = 2(1, 1) = (2, 2).

Then u = v − projw v = (1, 3) − (2, 2) = (−1, 1), so the distance from the point (1, 3) to the line y = x is |u| = √2.
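Example 2.1 can be checked numerically. The sketch below (plain Python; the function names are my own) implements the projection formula and the distance computation:

```python
import math

def inner(x, y):
    # Standard dot product on R^n.
    return sum(a * b for a, b in zip(x, y))

def project(v, w):
    # proj_w v = (<v, w> / <w, w>) w
    c = inner(v, w) / inner(w, w)
    return [c * wi for wi in w]

def distance_to_line(v, w):
    # Length of the orthogonal component u = v - proj_w v.
    p = project(v, w)
    u = [vi - pi for vi, pi in zip(v, p)]
    return math.sqrt(inner(u, u))

print(project((1, 3), (1, 1)))           # [2.0, 2.0]
print(distance_to_line((1, 3), (1, 1)))  # 1.414... = sqrt(2)
```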

3 Gram-Schmidt orthogonalization process

Suppose we have any basis {v1, v2, . . . , vn} of a Euclidean space. We want to construct an orthogonal basis {w1, w2, . . . , wn} of this space. We do it as follows.

w1 = v1

w2 = v2 − (〈v2, w1〉/〈w1, w1〉) w1

w3 = v3 − (〈v3, w1〉/〈w1, w1〉) w1 − (〈v3, w2〉/〈w2, w2〉) w2

...

wn = vn − (〈vn, w1〉/〈w1, w1〉) w1 − (〈vn, w2〉/〈w2, w2〉) w2 − (〈vn, w3〉/〈w3, w3〉) w3 − · · · − (〈vn, wn−1〉/〈wn−1, wn−1〉) wn−1

In effect, at each step we subtract the projection onto the space spanned by the vectors already orthogonalized. After this algorithm we have an orthogonal basis {w1, w2, . . . , wn}.
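The algorithm above can be sketched directly in code (plain Python; the names are my own). Applied to the vectors of Example 3.1 below, it reproduces the stated results, dropping the zero vector that signals linear dependence:

```python
def inner(x, y):
    # Standard dot product on R^n.
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    # Orthogonalize, keeping only nonzero results (a zero vector means
    # the input vector was linearly dependent on the previous ones).
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = inner(v, b) / inner(b, b)   # Fourier coefficient of v along b
            w = [wi - c * bi for wi, bi in zip(w, b)]
        if any(abs(wi) > 1e-12 for wi in w):
            basis.append(w)
    return basis

vs = [(1, 1, -1, -2), (5, 8, -2, -3), (3, 9, 3, 8)]
print(gram_schmidt(vs))  # [[1, 1, -1, -2], [2.0, 5.0, 1.0, 3.0]]
```

Because each wi is orthogonal to all earlier wj, projecting the original vector v (rather than the running remainder) onto each wj gives the same result, which is why the formula uses 〈vn, wj〉.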

Example 3.1. Let

v1 = (1, 1, −1, −2); v2 = (5, 8, −2, −3); v3 = (3, 9, 3, 8).

Let’s apply the Gram-Schmidt orthogonalization process to these vectors.

w1 = v1 = (1, 1, −1, −2).

Now, let’s find w2. Here 〈v2, w1〉 = 21 and 〈w1, w1〉 = 7, so

w2 = v2 − (〈v2, w1〉/〈w1, w1〉) w1 = (5, 8, −2, −3) − (21/7)(1, 1, −1, −2) = (2, 5, 1, 3).

Now, we can find w3, using 〈v3, w1〉 = −7, 〈v3, w2〉 = 78 and 〈w2, w2〉 = 39:

w3 = v3 − (〈v3, w1〉/〈w1, w1〉) w1 − (〈v3, w2〉/〈w2, w2〉) w2 = (3, 9, 3, 8) − (−7/7)(1, 1, −1, −2) − (78/39)(2, 5, 1, 3) = (0, 0, 0, 0).

Finally, we got:

w1 = (1, 1, −1, −2); w2 = (2, 5, 1, 3); w3 = (0, 0, 0, 0).

The third vector is the zero vector, so we don’t need it. In fact, this means that the vectors v1, v2 and v3 lie in the same plane, so a basis of this plane consists of 2 vectors, and the orthogonal basis consists of w1 and w2.

Again, this process is very general and can be used in any Euclidean space, e.g. the space of continuous functions C[a, b].

4 Distance between a vector and a subspace

Now that we know how to find orthogonal bases of a subspace, we can find the distance between a vector (or the point corresponding to this vector) and a subspace, for example a plane through the origin. Suppose we want to find the distance between a vector v and a subspace given by some basis. We should first orthogonalize the basis of the subspace using the Gram-Schmidt orthogonalization process, and then compute the projections of v along the vectors of this orthogonal basis. Subtracting these projections from v, we get a vector which is orthogonal to the subspace. Its length is equal to the required distance.

Example 4.1. Suppose we have a plane P in 3-dimensional space with the following basis: v1 = (1, 0, −1) and v2 = (−1, 1, 0). Let’s find the distance between the point (1, 2, 3) and this plane.
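The text breaks off here, but the recipe of Section 4 can be sketched in code for the data of Example 4.1 (plain Python; the names are my own, and the computed distance is my own working, not taken from the lecture): orthogonalize the basis, subtract the projections of v, and take the length of what remains.

```python
import math

def inner(x, y):
    # Standard dot product on R^n.
    return sum(a * b for a, b in zip(x, y))

def distance_to_subspace(v, basis):
    # Step 1: Gram-Schmidt on the subspace basis.
    ortho = []
    for b in basis:
        w = list(b)
        for o in ortho:
            c = inner(b, o) / inner(o, o)
            w = [wi - c * oi for wi, oi in zip(w, o)]
        ortho.append(w)
    # Step 2: subtract from v its projection along each orthogonal basis vector.
    u = list(v)
    for o in ortho:
        c = inner(v, o) / inner(o, o)
        u = [ui - c * oi for ui, oi in zip(u, o)]
    # Step 3: the length of the remainder is the distance.
    return math.sqrt(inner(u, u))

# Data of Example 4.1: plane spanned by (1, 0, -1) and (-1, 1, 0), point (1, 2, 3).
print(distance_to_subspace((1, 2, 3), [(1, 0, -1), (-1, 1, 0)]))  # 3.464... = 2*sqrt(3)
```

Working by hand under the same recipe: w1 = (1, 0, −1), w2 = (−1/2, 1, −1/2), the remainder is (2, 2, 2), and its length is 2√3.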