
Matrix Analysis and Applied Linear Algebra

Solutions for Chapter 1

Solutions for exercises in section 1.2

1.2.6. Every row operation is reversible. In particular, the “inverse” of any row operation is again a row operation of the same type.
1.2.7. π/2, π, 0
1.2.8. The third equation in the triangularized form is 0x3 = 1, which is impossible to solve.
1.2.9. The third equation in the triangularized form is 0x3 = 0, and all numbers are solutions. This means that you can start the back substitution with any value whatsoever and consequently produce infinitely many solutions for the system.
1.2.10. α = −3, β = 11/2, and γ = −3/2
1.2.11. (a) If xi = the number initially in chamber #i, then

.4x1 + 0x2 + 0x3 + .2x4 = 12
0x1 + .4x2 + .3x3 + .2x4 = 25
0x1 + .3x2 + .4x3 + .2x4 = 26
.6x1 + .3x2 + .3x3 + .4x4 = 37

and the solution is x1 = 10, x2 = 20, x3 = 30, and x4 = 40. (b) 16, 22, 22, 40
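A quick numerical cross-check of the answer to part (a) — a NumPy sketch whose coefficient matrix is just the system displayed above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the mixing system in 1.2.11(a).
A = np.array([[0.4, 0.0, 0.0, 0.2],
              [0.0, 0.4, 0.3, 0.2],
              [0.0, 0.3, 0.4, 0.2],
              [0.6, 0.3, 0.3, 0.4]])
b = np.array([12.0, 25.0, 26.0, 37.0])

x = np.linalg.solve(A, b)
print(x)   # -> approximately [10. 20. 30. 40.]
```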

1.2.12. To interchange rows i and j, perform the following sequence of Type II and Type III operations.

Rj ← Rj + Ri  (replace row j by the sum of rows j and i)
Ri ← Ri − Rj  (replace row i by the difference of rows i and j)
Rj ← Rj + Ri  (replace row j by the sum of rows j and i)
Ri ← −Ri      (replace row i by its negative)
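A short sketch (NumPy, with an arbitrary 3 × 3 example) confirming that this sequence of operations really does interchange two rows:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
i, j = 0, 2              # rows to interchange (0-based indices)

B = A.copy()
B[j] = B[j] + B[i]       # Rj <- Rj + Ri
B[i] = B[i] - B[j]       # Ri <- Ri - Rj   (row i now holds minus the old row j)
B[j] = B[j] + B[i]       # Rj <- Rj + Ri   (row j now holds the old row i)
B[i] = -B[i]             # Ri <- -Ri       (row i now holds the old row j)

expected = A[[2, 1, 0]]          # A with rows 0 and 2 swapped
print(np.allclose(B, expected))  # -> True
```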

1.2.13. (a) This has the effect of interchanging the order of the unknowns—xj and xk are permuted. (b) The solution to the new system is the same as the solution of the original system except that the values of xj and xk are interchanged.

Solutions for exercises in section 1.5

1.5.1. (a) (0, −1) (c) (1, −1) (e) (1/1.001)(−1, 1.001)
1.5.2. (a) (0, 1) (b) (2, 1) (c) (2, 1) (d) (1/1.0001)(1.0003, 2.0001)

1.5.3. Without PP: (1.01, 1.03) With PP: (1, 1) Exact: (1, 1)

1.5.4. (a) z = −.077/.006 = −12.8, y = (.166 − .083z)/.083 = 14.8, x = .333 − (.5y + .333z) = −2.81
(b) z = −.156/.01 = −15.6, y = (.268 − .268z)/.251 = 17.7, x = .333 − (.5y + .333z) = −3.33
(c) z = −.88/.057 = −15.4, y = (1.99 − z)/.994 = 17.5, x = .333 − (.5y + .333z) = −3.29
(d) x = −3, y = 16, z = −14
1.5.5. (a)

.0055x + .095y + 960z = 5000
.0011x + .01y + 112z = 600
.0093x + .025y + 560z = 3000

(b) 3-digit solution = (55,900 lbs. silica, 8,600 lbs. iron, 4.04 lbs. gold). Exact solution (to 10 digits) = (56753.68899, 8626.560726, 4.029511918). The relative error (rounded to 3 digits) is er = 1.49 × 10^(−2). (c) Let u = x/2000, v = y/1000, and w = 12z to obtain the system

11u + 95v + 80w = 5000
2.2u + 10v + 9.33w = 600
18.6u + 25v + 46.7w = 3000.

(d) 3-digit solution = (28.5 tons silica, 8.85 half-tons iron, 48.1 troy oz. gold). Exact solution (to 10 digits) = (28.82648317, 8.859282804, 48.01596023). The relative error (rounded to 3 digits) is er = 5.95 × 10^(−3). So partial pivoting applied to the column-scaled system yields higher relative accuracy than partial pivoting applied to the unscaled system.
1.5.6. (a) (−8.1, −6.09) = 3-digit solution with partial pivoting but no scaling. (b) No! Scaled partial pivoting produces the exact solution—the same as with complete pivoting.
1.5.7. (a) 2^(n−1) (b) 2 (c) This is a famous example that shows that there are indeed cases where partial pivoting will fail due to the large growth of some elements during elimination, but complete pivoting will be successful because all elements remain relatively small and of the same order of magnitude.
1.5.8. Use the fact that with partial pivoting no multiplier can exceed 1 together with the triangle inequality |α + β| ≤ |α| + |β|, and proceed inductively.
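The effect partial pivoting has under limited precision is easy to reproduce in code. The sketch below models 3-digit arithmetic by rounding every intermediate result to 3 significant digits; the 2 × 2 system used here is a stock ill-scaled example, not one of the exercise systems (which are not reproduced in this manual):

```python
import math

def fl3(x):
    """Round a number to 3 significant digits (a crude model of 3-digit arithmetic)."""
    if x == 0:
        return 0.0
    return round(x, 2 - int(math.floor(math.log10(abs(x)))))

def solve2(a11, a12, a21, a22, b1, b2, pivot):
    """Eliminate and back-substitute on a 2x2 system, rounding each intermediate result."""
    if pivot and abs(a21) > abs(a11):      # partial pivoting: put the larger entry on the pivot
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    m   = fl3(a21 / a11)                   # multiplier
    a22 = fl3(a22 - fl3(m * a12))
    b2  = fl3(b2  - fl3(m * b1))
    y = fl3(b2 / a22)                      # back substitution
    x = fl3(fl3(b1 - fl3(a12 * y)) / a11)
    return x, y

# A hypothetical ill-scaled system (exact solution is roughly (1.0001, 0.9999)):
#   .0001 x + y = 1
#        x + y = 2
print(solve2(.0001, 1, 1, 1, 1, 2, pivot=False))   # -> (0.0, 1.0)  badly wrong
print(solve2(.0001, 1, 1, 1, 1, 2, pivot=True))    # -> (1.0, 1.0)  accurate to 3 digits
```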

Solutions for exercises in section 1.6

1.6.1. (a) There are no 5-digit solutions. (b) This doesn’t help—there are now infinitely many 5-digit solutions. (c) 6-digit solution = (1.23964, −1.3) and exact solution = (1, −1) (d) r1 = r2 = 0 (e) r1 = −10^(−6) and r2 = 10^(−7) (f) Even if computed residuals are 0, you can’t be sure you have the exact solution.
1.6.2. (a) (1, −1.0015) (b) Ill-conditioning guarantees that the solution will be very sensitive to some small perturbation but not necessarily to every small perturbation. It is usually difficult to determine beforehand those perturbations for which an ill-conditioned system will not be sensitive, so one is forced to be pessimistic whenever ill-conditioning is suspected.
1.6.3. (a) m1(5) = m2(5) = −1.2519, m1(6) = −1.25187, and m2(6) = −1.25188 (c) An optimally well-conditioned system represents orthogonal (i.e., perpendicular) lines, planes, etc.
1.6.4. They rank as (b) = almost optimally well-conditioned, (a) = moderately well-conditioned, (c) = badly ill-conditioned.
1.6.5. Original solution = (1, 1, 1). Perturbed solution = (−238, 490, −266). The system is ill-conditioned.
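The kind of sensitivity described in 1.6.2 and 1.6.5 is easy to exhibit numerically. The system below is an assumed example chosen only because its two equations are nearly parallel; it is not the system from the exercise:

```python
import numpy as np

# An assumed nearly singular system (not the one from the exercise):
A  = np.array([[1.000, 1.000],
               [1.000, 1.001]])
b  = np.array([2.000, 2.001])
bp = np.array([2.000, 2.002])          # right-hand side perturbed by 0.001

x  = np.linalg.solve(A, b)             # -> [1. 1.]
xp = np.linalg.solve(A, bp)            # -> [0. 2.]  a tiny perturbation moves the answer a lot

print(x, xp)
print(np.linalg.cond(A))               # a large condition number flags the ill-conditioning
```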

Solutions for exercises in section 2.2

2.2.1. (b) … and A∗2 = (1/2)A∗1, A∗4 = 2A∗1 − A∗3, A∗6 = 2A∗1 − 3A∗5, A∗7 = A∗3 + A∗5

2.2.2. No.
2.2.3. The same would have to hold in EA, and there you can see that this means not all columns can be basic. Remember, rank(A) = number of basic columns.

2.2.4. (a) (…) (b) (…) A∗3 is almost a combination of A∗1 and A∗2. In particular, A∗3 ≈ −A∗1 + 2A∗2.
2.2.5. E∗1 = 2E∗2 − E∗3 and E∗2 = (1/2)E∗1 + (1/2)E∗3

Solutions for exercises in section 2.3

2.3.1. (a), (b)—There is no need to do any arithmetic for this one because the right-hand side is entirely zero, so you know that (0, 0, 0) is automatically one solution. (d), (f)
2.3.3. It is always true that rank(A) ≤ rank[A|b] ≤ m. Since rank(A) = m, it follows that rank[A|b] = rank(A).
2.3.4. Yes—consistency implies that b and c are each combinations of the basic columns in A. If b = Σi βi A∗bi and c = Σi γi A∗bi, where the A∗bi’s are the basic columns, then b + c = Σi (βi + γi)A∗bi = Σi ξi A∗bi, where ξi = βi + γi, so that b + c is also a combination of the basic columns in A.
2.3.5. Yes—because the 4 × 3 system α + βxi + γxi^2 = yi obtained by using the four given points (xi, yi) is consistent.
2.3.6. The system is inconsistent using 5 digits but consistent when 6 digits are used.
2.3.7. If x, y, and z denote the number of pounds of the respective brands applied, then the following constraints must be met.

total # units of phosphorus = 2x + y + z = 10
total # units of potassium = 3x + 3y = 9
total # units of nitrogen = 5x + 4y + z = 19

Since this is a consistent system, the recommendation can be satisfied exactly. Of course, the solution tells how much of each brand to apply.
2.3.8. No—if one or more such rows were ever present, how could you possibly eliminate all of them with row operations? You could eliminate all but one, but then there is no way to eliminate the last remaining one, and hence it would have to appear in the final form.
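A sketch applying the consistency test from 2.3.3 (rank(A) = rank[A|b]) to the fertilizer system of 2.3.7. This particular coefficient matrix happens to be rank deficient (its third row is the sum of the first two), so lstsq, rather than solve, is used to produce one exact solution:

```python
import numpy as np

A = np.array([[2., 1., 1.],    # phosphorus
              [3., 3., 0.],    # potassium
              [5., 4., 1.]])   # nitrogen
b = np.array([10., 9., 19.])

Ab = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab))  # equal ranks => consistent

# rank(A) = 2 here, so use least squares; for a consistent system this returns an exact solution.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, np.allclose(A @ x, b))
```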

Solutions for exercises in section 2.4

2.4.1. (a) x2(…) + x4(…) (b) y(…) (c) x3(…) + x4(…) (d) The trivial solution is the only solution.
2.4.2. (…) and (…)
2.4.3. x2(…) and x4(…)

2.4.4. rank(A) = 3
2.4.5. (a) 2—because the maximum rank is 4. (b) 5—because the minimum rank is 1.

2.4.6. Because r = rank(A) ≤ m < n =⇒ n − r > 0.
2.4.7. There are many different correct answers. One approach is to answer the question “What must EA look like?” The form of the general solution tells you that rank(A) = 2 and that the first and third columns are basic. Consequently,

EA = [ 1  α  0  β ]
     [ 0  0  1  γ ]
     [ 0  0  0  0 ]

so that x1 = −αx2 − βx4 and x3 = −γx4 gives rise to the general solution

x2 (−α, 1, 0, 0)^T + x4 (−β, 0, −γ, 1)^T.

Therefore, α = 2, β = 3, and γ = −2. Any matrix A obtained by performing row operations to EA will be the coefficient matrix for a homogeneous system with the desired general solution.
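A small numerical check (NumPy) that any matrix row-equivalent to this EA has the stated general solution. The matrix L below is just one arbitrary nonsingular choice used to manufacture such an A:

```python
import numpy as np

EA = np.array([[1., 2., 0., 3.],
               [0., 0., 1., -2.],
               [0., 0., 0., 0.]])

# The two columns spanning the general solution (alpha = 2, beta = 3, gamma = -2):
h1 = np.array([-2., 1., 0., 0.])   # coefficient of x2
h2 = np.array([-3., 0., 2., 1.])   # coefficient of x4

# One arbitrary A obtained from EA by row operations (L is any nonsingular matrix):
L = np.array([[2., 1., 0.],
              [1., 1., 1.],
              [0., 3., 1.]])
A = L @ EA

for M in (EA, A):
    print(np.allclose(M @ h1, 0), np.allclose(M @ h2, 0))   # -> True True for both matrices
```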

2.4.8. If Σi xfi hi is the general solution, then there must exist scalars αi and βi such that c1 = Σi αi hi and c2 = Σi βi hi. Therefore, c1 + c2 = Σi (αi + βi) hi, and this shows that c1 + c2 is the solution obtained when the free variables xfi assume the values xfi = αi + βi.

Solutions for exercises in section 2.5

2.5.1. (a) (…) + x2(…) + x4(…) (b) (…) + y(…)

Solutions for exercises in section 2.6

2.6.1. (a) (1/575)(383, 533, 261, 644, −150, −111)
2.6.2. (1/211)(179, 452, 36)
2.6.3. (18, 10)
2.6.4. (a) 4 (b) 6 (c) 7 loops but only 3 simple loops. (d) Show that rank([A|b]) = 3. (g) 5/6

I fear explanations explanatory of things explained. — Abraham Lincoln (1809–1865)

Solutions for exercises in section 3.3

3.3.1. … f(αA) = αf(A). Do so by writing

f(A + B) = f([ a1+b1  a2+b2 ; a2+b2  a1+b1 ]) = (a2+b2, a1+b1) = (a2, a1) + (b2, b1) = f(A) + f(B),

f(αA) = f([ αa1  αa2 ; αa2  αa1 ]) = α(a2, a1) = αf(A).

3.3.2. Write f(x) = Σ_{i=1}^n ξi xi. For all points x = (x1, x2, …, xn)^T and y = (y1, y2, …, yn)^T, and for all scalars α, it is true that

f(αx + y) = Σ_{i=1}^n ξi(αxi + yi) = Σ_{i=1}^n ξi αxi + Σ_{i=1}^n ξi yi = α Σ_{i=1}^n ξi xi + Σ_{i=1}^n ξi yi = αf(x) + f(y).

3.3.3. There are many possibilities. Two of the simplest and most common are Hooke’s law for springs that says that F = kx (see Example 3.2.1) and Newton’s second law that says that F = ma (i.e., force = mass × acceleration).

3.3.4. They are all linear. To see that rotation is linear, use trigonometry to deduce that if p = (x1, x2)^T, then f(p) = u = (u1, u2)^T, where

u1 = (cos θ)x1 − (sin θ)x2
u2 = (sin θ)x1 + (cos θ)x2.

f is linear because this is a special case of Example 3.3.2. To see that reflection is linear, write p = (x1, x2)^T and f(p) = (x1, −x2)^T. Verification of linearity is straightforward. For the projection function, use the Pythagorean theorem to conclude that if p = (x1, x2)^T, then f(p) = ( (x1 + x2)/2, (x1 + x2)/2 )^T. Linearity is now easily verified.
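A sketch checking the linearity condition f(αx + y) = αf(x) + f(y) numerically for the three functions as written above (the rotation angle and test points are arbitrary):

```python
import numpy as np

theta = 0.7
def rotate(p):  return np.array([np.cos(theta)*p[0] - np.sin(theta)*p[1],
                                 np.sin(theta)*p[0] + np.cos(theta)*p[1]])
def reflect(p): return np.array([p[0], -p[1]])
def project(p): return np.array([(p[0] + p[1])/2, (p[0] + p[1])/2])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
alpha = 2.5

for f in (rotate, reflect, project):
    print(np.allclose(f(alpha*x + y), alpha*f(x) + f(y)))   # -> True for all three
```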

Solutions for exercises in section 3.4

3.4.1. Refer to the solution for Exercise 3.3.4. If Q, R, and P denote the matrices associated with the rotation, reflection, and projection, respectively, then

Q = [ cos θ  −sin θ ]      R = [ 1   0 ]      P = [ 1/2  1/2 ]
    [ sin θ   cos θ ] ,        [ 0  −1 ] ,        [ 1/2  1/2 ] .

3.4.2. Refer to the solution for Exercise 3.4.1 and write

RQ = [ 1   0 ] [ cos θ  −sin θ ]   [  cos θ  −sin θ ]
     [ 0  −1 ] [ sin θ   cos θ ] = [ −sin θ  −cos θ ] .

If Q(x) is the rotation function and R(x) is the reflection function, then the composition is

R(Q(x)) = ( (cos θ)x1 − (sin θ)x2, −(sin θ)x1 − (cos θ)x2 )^T.

3.4.3. Refer to the solution for Exercise 3.4.1 and write

PQR = [ 1/2  1/2 ] [ cos θ  −sin θ ] [ 1   0 ]          [ cos θ + sin θ   sin θ − cos θ ]
      [ 1/2  1/2 ] [ sin θ   cos θ ] [ 0  −1 ]  = (1/2) [ cos θ + sin θ   sin θ − cos θ ] .

Therefore, the composition of the three functions in the order asked for is

P(Q(R(x))) = (1/2)( (cos θ + sin θ)x1 + (sin θ − cos θ)x2, (cos θ + sin θ)x1 + (sin θ − cos θ)x2 )^T.
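A sketch confirming the two matrix formulas above, i.e., that the products RQ and PQR really are the matrices just derived (θ is arbitrary):

```python
import numpy as np

theta = 0.7                                        # arbitrary angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # rotation
R = np.array([[1., 0.],
              [0., -1.]])                          # reflection about the x-axis
P = 0.5 * np.array([[1., 1.],
                    [1., 1.]])                     # projection onto the line y = x

RQ_formula = np.array([[ np.cos(theta), -np.sin(theta)],
                       [-np.sin(theta), -np.cos(theta)]])
PQR_formula = 0.5 * np.array([[np.cos(theta) + np.sin(theta), np.sin(theta) - np.cos(theta)],
                              [np.cos(theta) + np.sin(theta), np.sin(theta) - np.cos(theta)]])

print(np.allclose(R @ Q, RQ_formula))        # 3.4.2 -> True
print(np.allclose(P @ Q @ R, PQR_formula))   # 3.4.3 -> True
```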

Solutions for exercises in section 3.5

3.5.1. (a) AB = (…) (b) BA does not exist (c) CB does not exist (d) C^T B = ( 10  31 ) (e) A^2 = (…) (f) B^2 does not exist (g) C^T C = 14 (h) CC^T = (…) (i) BB^T = (…) (j) B^T B = (…) (k) C^T AC = 76

3.5.11. At time t, the concentration of salt in tank i is xi(t)/V lbs/gal. For tank 1,

dx1/dt = (lbs/sec coming in) − (lbs/sec going out)
       = 0 lbs/sec − (r gal/sec) × (x1(t)/V lbs/gal)
       = −(r/V) x1(t) lbs/sec.

For tank 2,

dx2/dt = (lbs/sec coming in) − (lbs/sec going out)
       = (r/V) x1(t) lbs/sec − (r gal/sec) × (x2(t)/V lbs/gal)
       = (r/V) x1(t) lbs/sec − (r/V) x2(t) lbs/sec
       = (r/V) ( x1(t) − x2(t) ),

and for tank 3,

dx3/dt = (lbs/sec coming in) − (lbs/sec going out)
       = (r/V) x2(t) lbs/sec − (r gal/sec) × (x3(t)/V lbs/gal)
       = (r/V) x2(t) lbs/sec − (r/V) x3(t) lbs/sec
       = (r/V) ( x2(t) − x3(t) ).

This is a system of three linear first-order differential equations

dx1/dt = (r/V) ( −x1(t) )
dx2/dt = (r/V) ( x1(t) − x2(t) )
dx3/dt = (r/V) ( x2(t) − x3(t) )

that can be written as a single matrix differential equation

[ dx1/dt ]          [ −1   0   0 ] [ x1(t) ]
[ dx2/dt ]  = (r/V) [  1  −1   0 ] [ x2(t) ]
[ dx3/dt ]          [  0   1  −1 ] [ x3(t) ] .
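A quick check that the matrix form reproduces the three scalar equations, with arbitrary values assumed for r, V, and the salt amounts:

```python
import numpy as np

r, V = 2.0, 50.0                      # assumed flow rate (gal/sec) and tank volume (gal)
A = (r / V) * np.array([[-1.,  0.,  0.],
                        [ 1., -1.,  0.],
                        [ 0.,  1., -1.]])

x = np.array([12., 7., 3.])           # assumed amounts of salt (lbs) in the three tanks

scalar_form = (r / V) * np.array([-x[0], x[0] - x[1], x[1] - x[2]])
print(np.allclose(A @ x, scalar_form))   # -> True: the matrix ODE matches the three equations
```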

Solutions for exercises in section 3.6

3.6.1. AB = [ A11  A12  A13 ] [ B1 ]    [ A11 B1 + A12 B2 + A13 B3 ]
            [ A21  A22  A23 ] [ B2 ]  = [ A21 B1 + A22 B2 + A23 B3 ]
                              [ B3 ]

3.6.2. Use block multiplication to verify L^2 = I —be careful not to commute any of the terms when forming the various products.

3.6.3. Partition the matrix as A = [ I  C ; 0  C ], where C = (1/3)(…), and observe that C^2 = C. Use this together with block multiplication to conclude that

A^k = [ I   C + C^2 + C^3 + ··· + C^k ]   [ I   kC ]
      [ 0   C^k                       ] = [ 0   C  ] .

Therefore, A^300 = [ I  300C ; 0  C ].
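The specific matrix C is not reproduced in this extraction, so the sketch below assumes one convenient idempotent choice, C = (1/3) × (the 3 × 3 matrix of all ones), which satisfies C² = C, just to confirm the block formula A^k = [ I  kC ; 0  C ]:

```python
import numpy as np

# Assumed idempotent block: C = (1/3) * (3x3 all-ones) satisfies C @ C = C.
C = np.ones((3, 3)) / 3
I = np.eye(3)
Z = np.zeros((3, 3))

A = np.block([[I, C],
              [Z, C]])

k = 300
Ak = np.linalg.matrix_power(A, k)
predicted = np.block([[I, k * C],
                      [Z, C]])
print(np.allclose(Ak, predicted))   # -> True
```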

3.6.4. (A∗A)∗ = A∗A∗∗ = A∗A and (AA∗)∗ = A∗∗A∗ = AA∗.
3.6.5. (AB)^T = B^T A^T = BA = AB. It is easy to construct a 2 × 2 example to show that this need not be true when AB ≠ BA.
3.6.6.

[(D + E)F]ij = (D + E)i∗ F∗j = Σk [D + E]ik [F]kj = Σk ([D]ik + [E]ik)[F]kj
             = Σk ([D]ik [F]kj + [E]ik [F]kj) = Σk [D]ik [F]kj + Σk [E]ik [F]kj
             = Di∗ F∗j + Ei∗ F∗j = [DF]ij + [EF]ij = [DF + EF]ij.

3.6.7. If a matrix X did indeed exist, then

I = AX − XA =⇒ trace(I) = trace(AX − XA) =⇒ n = trace(AX) − trace(XA) = 0,

which is impossible, so no such X can exist.

Solutions for exercises in section 3.7

3.7.1. (a) (…) (b) Singular (c) (…) (d) Singular (e) (…)

3.7.2. Write the equation as (I − A)X = B and compute X = (I − A)^(−1) B = (…)

3.7.3. In each case, the given information implies that rank(A) < n—see the solution for Exercise 2.1.3.
3.7.4. (a) If D is diagonal, then D^(−1) exists if and only if each dii ≠ 0, in which case

[ d11   0   ···   0  ]^(−1)    [ 1/d11    0    ···    0    ]
[  0   d22  ···   0  ]       = [   0    1/d22  ···    0    ]
[  ⋮    ⋮    ⋱    ⋮  ]         [   ⋮      ⋮     ⋱     ⋮    ]
[  0    0   ···  dnn ]         [   0      0    ···  1/dnn  ]

(b) If T is triangular, then T^(−1) exists if and only if each tii ≠ 0. If T is upper (lower) triangular, then T^(−1) is also upper (lower) triangular with [T^(−1)]ii = 1/tii.
3.7.5. (A^(−1))^T = (A^T)^(−1) = A^(−1).

3.7.6. Start with A(I − A) = (I − A)A and apply (I − A)^(−1) to both sides, first on one side and then on the other.
3.7.7. Use the result of Example 3.6.5 that says that trace(AB) = trace(BA) to write

m = trace (Im) = trace (AB) = trace (BA) = trace (In) = n.

3.7.8. Use the reverse order law for inversion to write

[ A(A + B)^(−1) B ]^(−1) = B^(−1) (A + B) A^(−1) = B^(−1) + A^(−1)

and

[ B(A + B)^(−1) A ]^(−1) = A^(−1) (A + B) B^(−1) = B^(−1) + A^(−1).
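A numerical spot-check of this identity with randomly generated A and B (shifted by a multiple of I only so that A, B, and A + B are safely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shift keeps the matrices well away from singular
B = rng.standard_normal((n, n)) + n * np.eye(n)

lhs = np.linalg.inv(A @ np.linalg.inv(A + B) @ B)
rhs = np.linalg.inv(A) + np.linalg.inv(B)
print(np.allclose(lhs, rhs))   # -> True
```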

3.7.9. (a) (I − S)x = 0 =⇒ x^T (I − S)x = 0 =⇒ x^T x = x^T Sx. Taking transposes on both sides yields x^T x = −x^T Sx, so that x^T x = 0, and thus x = 0 (recall Exercise 3.6.12). The conclusion follows from property (3.7.8). (b) First notice that Exercise 3.7.6 implies that A = (I + S)(I − S)^(−1) = (I − S)^(−1)(I + S). By using the reverse order laws, transposing both sides yields exactly the same thing as inverting both sides.
3.7.10. Use block multiplication to verify that the product of the matrix with its inverse is the identity matrix.
3.7.11. Use block multiplication to verify that the product of the matrix with its inverse is the identity matrix.
3.7.12. Let M = [ A  B ; C  D ] and X = [ D^T  −B^T ; −C^T  A^T ]. The hypothesis implies that MX = I, and hence (from the discussion in Example 3.7.2) it must also be true that XM = I, from which the conclusion follows. Note: This problem appeared on a past Putnam Exam—a national mathematics competition for undergraduate students that is considered to be quite challenging. This means that you can be proud of yourself if you solved it before looking at this solution.

Solutions for exercises in section 3.8

3.8.1. (a) B^(−1) = (…) (b) Let c = (…) and d^T = ( 0  2  1 ) to obtain C^(−1) = (…)

3.8.2. A∗j needs to be removed, and b needs to be inserted in its place. This is accomplished by writing B = A + (b − A∗j) ej^T. Applying the Sherman–Morrison formula with c = b − A∗j and d^T = ej^T yields

B^(−1) = A^(−1) − A^(−1)(b − A∗j) ej^T A^(−1) / ( 1 + ej^T A^(−1)(b − A∗j) )
       = A^(−1) − ( A^(−1) b ej^T A^(−1) − ej ej^T A^(−1) ) / ( 1 + ej^T A^(−1) b − ej^T ej )
       = A^(−1) − ( A^(−1) b [A^(−1)]j∗ − ej [A^(−1)]j∗ ) / ( [A^(−1)]j∗ b )
       = A^(−1) − ( A^(−1) b − ej ) [A^(−1)]j∗ / ( [A^(−1)]j∗ b ).

3.8.3. Use the Sherman–Morrison formula to write

z = (A + cd^T)^(−1) b = ( A^(−1) − A^(−1) c d^T A^(−1) / (1 + d^T A^(−1) c) ) b
  = A^(−1) b − A^(−1) c d^T A^(−1) b / (1 + d^T A^(−1) c)
  = x − y d^T x / (1 + d^T y).
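A numerical check of this update formula, where x = A^(−1)b and y = A^(−1)c come from two solves with the same A, against a direct solve of the modified system (the matrices and vectors below are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shift keeps A safely nonsingular
b, c, d = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)

x = np.linalg.solve(A, b)          # x = A^(-1) b
y = np.linalg.solve(A, c)          # y = A^(-1) c
z = x - y * (d @ x) / (1 + d @ y)  # Sherman-Morrison update, reusing x and y

z_direct = np.linalg.solve(A + np.outer(c, d), b)
print(np.allclose(z, z_direct))    # -> True
```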

3.8.4. (a) For a nonsingular matrix A, the Sherman–Morrison formula guarantees that A + α ei ej^T is also nonsingular when 1 + α [A^(−1)]ji ≠ 0, and this certainly will be true if α is sufficiently small.