
Math 55a - Honors Abstract Algebra

Taught by Yum-Tong Siu

Notes by Dongryul Kim

Fall 2015

The course was taught by Professor Yum-Tong Siu. We met twice a week, on Tuesdays and Thursdays from 2:30 to 4:00. At the first lecture there were over 30 people, but at the end of the add-drop period, the class consisted of 11 students. There was an in-class midterm exam and a short take-home final. The course assistants were Calvin Deng and Vikram Sundar.

Contents

  • 1 September 3, 2015
    • 1.1 Overview
    • 1.2 Things we will cover
    • 1.3 Peano’s axioms
    • 1.4 Rational numbers
  • 2 September 8, 2015
    • 2.1 Non-rigorous proof of the fundamental theorem of algebra
    • 2.2 Order relations
    • 2.3 Dedekind cuts
  • 3 September 10, 2015
    • 3.1 Scipione del Ferro’s solution of the cubic equation
    • 3.2 Lagrange’s idea
    • 3.3 Schematics for solving a polynomial equation
  • 4 September 15, 2015
    • 4.1 More on solving polynomial equations
    • 4.2 Basic linear algebra
    • 4.3 Determinant of a matrix
  • 5 September 17, 2015
    • 5.1 Review of basic matrix theory
    • 5.2 Determinants again
    • 5.3 Cramer’s rule

Last Update: August 27, 2018

  • 6 September 22, 2015
    • 6.1 Groups, rings, and fields
    • 6.2 Vector spaces
    • 6.3 Linear maps and the dual space
    • 6.4 Tensor products
  • 7 September 24, 2015
    • 7.1 More explanation on tensor products
    • 7.2 Wedge products and some differential geometry
    • 7.3 Polarization of a polynomial
    • 7.4 Binet-Cauchy formula from wedge products
  • 8 September 29, 2015
    • 8.1 Historical background
    • 8.2 Evaluation tensor and the contraction map
    • 8.3 Exterior product of two different vector spaces
    • 8.4 Hodge decomposition
  • 9 October 1, 2015
    • 9.1 Philosophy of the Lefschetz theorem
    • 9.2 Hodge star operator
    • 9.3 Normal form of a matrix
  • 10 October 6, 2015
    • 10.1 F[λ]-module structure of a vector space
    • 10.2 Kernel of the map induced by T
    • 10.3 Decomposition of the module structure on V
  • 11 October 8, 2015
    • 11.1 Review of the decomposition of V as an F[λ]-module
    • 11.2 Chinese remainder theorem
    • 11.3 Jordan normal form
  • 12 October 13, 2015
    • 12.1 Justifying complex multiplication on real vector spaces
    • 12.2 Field extensions
    • 12.3 The rise of Galois theory
  • 13 October 15, 2015
    • 13.1 Galois theory
    • 13.2 Normal groups and solvability
    • 13.3 Bounding theorems for Galois extensions
  • 14 October 20, 2015
    • 14.1 Separability of a polynomial
    • 14.2 The second counting argument
    • 14.3 Galois extension
  • 15 October 22, 2015
    • 15.1 Three equivalent definitions of Galois extensions
    • 15.2 Some comments about normality
    • 15.3 Fundamental theorem of Galois theory
  • 16 October 27, 2015
    • 16.1 Wrapping up Galois theory
    • 16.2 Solvability of the polynomial with degree n
    • 16.3 Digression: Primitive element theorem
  • 17 October 29, 2015
    • 17.1 Insolvability of Sn
    • 17.2 Galois group of x^(p+1) − sx − t
    • 17.3 Constructing a regular polygon
  • 18 November 3, 2015
    • 18.1 Midterm
  • 19 November 5, 2015
    • 19.1 Gauss’s straightedge-and-compass construction of a regular polygon of 17 sides
    • 19.2 Lefschetz decomposition
  • 20 November 10, 2015
    • 20.1 Setting of the Lefschetz decomposition
    • 20.2 Inner product on the complexified vector space
    • 20.3 Lefschetz operator and Hodge star operator
    • 20.4 Statement of the Lefschetz decomposition
  • 21 November 12, 2015
    • 21.1 Overview of Lefschetz decomposition
    • 21.2 Notations and basic formulas
    • 21.3 Relations between L, Λ, and ∗
    • 21.4 Commutator of powers of Λ and L
  • 22 November 17, 2015
    • 22.1 Proof of the Lefschetz decomposition
    • 22.2 Prelude to our next topic
  • 23 November 19, 2015
    • 23.1 Rotations of R^3
    • 23.2 Representation of rotation by quaternions and SU(2)
    • 23.3 Hypercomplex number systems
  • 24 November 24, 2015
    • 24.1 Decomposing a function into symmetric parts
    • 24.2 Young diagrams and Young symmetrizers
    • 24.3 Representation of a finite group
    • 24.4 Results of Schur’s theory
  • 25 December 1, 2015
    • 25.1 Decomposition of the regular representation
    • 25.2 Intertwining operator and Schur’s lemma
  • 26 December 3, 2015
    • 26.1 Representations of Sn

1 September 3, 2015

1.1 Overview

My name is Siu ([see-you]), and there are no textbooks for this course. The website for this course is http://math.harvard.edu/~siu/math55a. There will be no clear division between abstract algebra and analysis. These are the things I will tell you during the lectures.

  • Motivation, background, and history for the material
  • Techniques, methods, ideas, and structures
  • “Rigorous” presentation

I will emphasize the last one, but it is useless to only know rigorous things. There will be weekly problem sets, and we encourage discussions. And of course, you need to write the solutions down in your own words. The actual level of difficulty will depend on the feedback I get from your assignments.

1.2 Things we will cover

We focus on solving equations in a number system. There are two kinds of equations:

  • polynomial equations - This is algebra, and will be the A part
  • differential equations - This is real and complex analysis and will be covered in the B part

We start with Peano’s five axioms, and from this, we can define N, Q, R, and C. You can choose what number system you would like to work in, and this is why number systems are important. For instance, the fundamental theorem of algebra holds in C, but does not hold in R or Q. Historically, the whole of algebra came from solving polynomial equations. There is symmetry involved in solving equations. For instance, if

(x − a_1) · · · (x − a_n) = x^n − σ_1 x^(n−1) + σ_2 x^(n−2) − · · · ,

we get σ_1 = a_1 + · · · + a_n and σ_2 = a_1 a_2 + · · · + a_(n−1) a_n. The coefficients have symmetry between the a_i’s. So basically solving a polynomial equation is bringing all-symmetry down to no-symmetry. This is basically what Galois did, but by going down steps of partial symmetry.
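As a quick computational check of this relation between roots and coefficients (not part of the lecture; the helper names below are ours), one can expand (x − a_1) · · · (x − a_n) and compare the coefficients against the elementary symmetric functions:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(roots, k):
    # sigma_k: the sum of products of k distinct roots
    return sum(prod(c) for c in combinations(roots, k))

def expand_from_roots(roots):
    # Coefficients of (x - a_1)...(x - a_n), highest degree first:
    # repeatedly multiply the current polynomial by (x - a).
    coeffs = [1.0]
    for a in roots:
        coeffs = [hi - a * lo for hi, lo in zip(coeffs + [0.0], [0.0] + coeffs)]
    return coeffs
```

The coefficient of x^(n−k) comes out as (−1)^k σ_k, which is exactly the alternating-sign pattern in the displayed expansion.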

1.3 Peano’s axioms

I want to start from Peano’s axioms and the definition of addition, x + 1 = x′ and x + y′ = (x + y)′, and first prove that addition is associative.

Proof. Let us prove (x + y) + z = x + (y + z). First fix x and y. Let

Ax,y = {z ∈ N : (x + y) + z = x + (y + z)}.

First 1 ∈ Ax,y since

(x + y) + 1 = (x + y)′ = x + (y′) = x + (y + 1).

Also if z ∈ Ax,y , then

(x + y) + z′ = ((x + y) + z)′ = (x + (y + z))′ = x + (y + z)′ = x + (y + z′).

Thus z ∈ Ax,y implies z′ ∈ Ax,y, and it follows that Ax,y = N.

Theorem 1.5. Addition is commutative.

Proof. We want to prove x + y = y + x. Fix y ∈ N. Let

Ay = {x ∈ N : x + y = y + x}.

The first thing we need to prove is 1 ∈ Ay , which is 1 + y = y + 1. We use another induction inside this induction. Let B = {y ∈ N : 1 + y = y + 1}.

Obviously 1 ∈ B, and y ∈ B implies

1 + y′ = (1 + y)′ = (y + 1)′ = y + (1 + 1) = (y + 1) + 1 = y′ + 1,

which in turn implies y′ ∈ B. Thus 1 + y = y + 1 for all y ∈ N. Now suppose that x ∈ Ay. For x′, we have

y + x′ = (y + x)′ = (x + y)′ = x + y′ = x + (1 + y) = (x + 1) + y = x′ + y.

Thus Ay = N.

Now let us define multiplication.

Definition 1.6 (Multiplication). Let x · 1 = x and x · y′ = x · y + x. This defines multiplication in general because of the induction axiom.

Theorem 1.7. For any x, y, z ∈ N, we have the following:
(a) x · y = y · x
(b) (x · y) · z = x · (y · z)
(c) x · (y + z) = x · y + x · z

Proof. Homework.
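The recursive definitions of addition and multiplication translate directly into code. Below is a minimal sketch (our own function names; ordinary integers stand in for the Peano naturals, with succ playing the role of ′):

```python
def succ(x):
    # the successor operation x'
    return x + 1

def add(x, y):
    # x + 1 = x' and x + y' = (x + y)'
    if y == 1:
        return succ(x)
    return succ(add(x, y - 1))

def mul(x, y):
    # Definition 1.6: x * 1 = x and x * y' = x * y + x
    if y == 1:
        return x
    return add(mul(x, y - 1), x)
```

Checking small cases of Theorem 1.7 (commutativity, associativity, distributivity) is then just a loop over small x, y, z.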

1.4 Rational numbers

Now we begin to handle division. We construct the set Q+ of all positive fractions (or positive rational numbers). But first we need the concept of equivalence relations, because we need to say a/b = c/d if ad = bc.

Definition 1.8. Let X be a set. A relation in X is a subset R ⊂ X × X. We use the notation a ∼ b to mean (a, b) ∈ R. The relation R (also denoted by ∼) is an equivalence relation if
(Reflexivity) x ∼ x for all x ∈ X,
(Symmetry) x ∼ y if and only if y ∼ x,
(Transitivity) x ∼ y and y ∼ z imply x ∼ z.

Theorem 1.9 (Decomposition statement). An equivalence relation divides up X into a disjoint union of subsets.

Proof. For x ∈ X, let Xx = {y ∈ X : y ∼ x}, known as the equivalence class which contains x. It is clear that

X = ⋃_{x∈X} Xx.

We also need to show that what we have is a disjoint union in the following sense: Xx ∩ Xy ≠ ∅ implies Xx = Xy.

Because of symmetry, it is sufficient to show Xx ⊂ Xy. By assumption there exists an element z ∈ Xx ∩ Xy , and we get z ∼ x and z ∼ y. Take any u ∈ Xx. Because u ∼ x, x ∼ z and z ∼ y, we have u ∼ y. This shows u ∈ Xy.

Now we finally define Q+, the set of positive rational numbers.

Definition 1.10. Introduce ∼ in X = N × N such that (a, b) ∼ (c, d) if and only if ad = bc. We call the set of equivalence classes Q+.
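A small computational sketch of Definition 1.10 (names are ours): pairs (a, b) of naturals grouped by the relation ad = bc, with Fraction used only to double-check the classes.

```python
from fractions import Fraction

def related(p, q):
    # (a, b) ~ (c, d) iff ad = bc  (Definition 1.10)
    a, b = p
    c, d = q
    return a * d == b * c

def equivalence_class(p, universe):
    # X_p = { q in universe : q ~ p }, the class containing p
    return {q for q in universe if related(q, p)}

universe = {(a, b) for a in range(1, 9) for b in range(1, 9)}
```

Theorem 1.9 predicts that two such classes are either equal or disjoint.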

You can check that it actually is an equivalence relation. Next class, we will define R+ by Dedekind cuts. We have to go into the realm of analysis to define the reals, because we need the mean-value property. For instance, let me sketch a proof of the fundamental theorem of algebra. Let

P(z) = z^n + ∑_{j=0}^{n−1} a_j z^j

be a monic polynomial in the complex variable z with no roots. Let f(z) = 1/P(z). Then by certain facts in complex analysis,

f(c) = (1/2π) ∫_{θ=0}^{2π} f(c + re^{iθ}) dθ

and

|f(c)| ≤ (1/2π) ∫_{θ=0}^{2π} |f(c + re^{iθ})| dθ.

Sending r → ∞, we get a contradiction.

Because f′(z_0) is independent of the curve, we get a degree of freedom. As we have set f(z) = 1/P(z), we have

f′(z_0) = −(n z_0^{n−1} + ∑_{j=1}^{n−1} j a_j z_0^{j−1}) / P(z_0)^2.

There exists a derivative of f under the assumption that P has no zeros, although we have not proved it yet. Now the mean value property states that

f (z 0 ) = average of f (z) at the circle centered at z 0 of radius r > 0.

This can be deduced from the chain rule.

Proof of the mean value property. Analytically it can be written down as

f(z_0) = (1/2π) ∫_{θ=0}^{2π} f(z_0 + re^{iθ}) dθ.

I have not defined e^{iθ} yet, but e^{iθ} = cos θ + i sin θ. We consider the map

r ↦ (1/2π) ∫_{θ=0}^{2π} f(z_0 + re^{iθ}) dθ.

If we prove that the derivative is always 0, and that the limit as r → 0+ is f(z_0), we have proven the formula. The latter is immediate since f(z_0 + re^{iθ}) → f(z_0) as r → 0+. Note that this is possible because any curve in the complex plane can be retracted to a point. So we prove the former. We will apply the chain rule to two different curves: the line going through the origin, and the circle. First, we have

(d/dr) ∫_{θ=0}^{2π} f(z_0 + re^{iθ}) dθ = ∫_{θ=0}^{2π} (∂/∂r) f(z_0 + re^{iθ}) dθ.

Looking in the radial direction, we obtain

∂ ∂r

f (z 0 + reiθ^ )

r=r 0

= lim r→r 0

f (z 0 + reiθ^ ) − f (z 0 + r 0 eiθ^ ) r − r 0

= lim r→r 0

f (z 0 + reiθ^ ) − f (z 0 + r 0 eiθ^ ) (z 0 + reiθ^ ) − (z 0 + r 0 eiθ^ )

(z 0 + reiθ^ ) − (z 0 + r 0 eiθ^ ) r − r 0 = f ′(z 0 + r 0 eiθ^ )eiθ^ ,

and thus

∫_{θ=0}^{2π} (∂/∂r) f(z_0 + re^{iθ}) dθ = ∫_{θ=0}^{2π} e^{iθ} f′(z_0 + re^{iθ}) dθ.

To calculate f′(z_0 + re^{iθ}), we do the same thing over again. Looking at the circle, we get

(∂/∂θ) f(z_0 + re^{iθ}) |_{θ=θ_0} = lim_{θ→θ_0} [f(z_0 + re^{iθ}) − f(z_0 + re^{iθ_0})] / (θ − θ_0)
= lim_{θ→θ_0} [f(z_0 + re^{iθ}) − f(z_0 + re^{iθ_0})] / [(z_0 + re^{iθ}) − (z_0 + re^{iθ_0})] · [(z_0 + re^{iθ}) − (z_0 + re^{iθ_0})] / (θ − θ_0)
= f′(z_0 + re^{iθ_0}) · ri e^{iθ_0}.

Therefore

∫_{θ=0}^{2π} e^{iθ} f′(z_0 + re^{iθ}) dθ = ∫_{θ=0}^{2π} (1/(ri)) (∂/∂θ) f(z_0 + re^{iθ}) dθ = (1/(ri)) [f(z_0 + re^{iθ})]_{θ=0}^{2π} = 0,

since f(z_0 + re^{iθ}) takes the same value at θ = 0 and θ = 2π.
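The computation above can also be observed numerically: for a holomorphic function, the circle average is independent of r and equals f(z_0). A sketch (the sample function and all names here are our own choices, not from the lecture):

```python
import cmath

def circle_average(f, z0, r, n=20000):
    # Riemann sum for (1/2π) ∫ f(z0 + r e^{iθ}) dθ over θ in [0, 2π)
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        total += f(z0 + r * cmath.exp(1j * theta))
    return total / n

def sample(z):
    # an entire function: exp(z) + 3z^2 - iz
    return cmath.exp(z) + 3 * z ** 2 - 1j * z
```

The average comes out the same for different radii, which is exactly the statement that the derivative in r vanishes.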

Proof of the fundamental theorem of algebra. Take any z_0 ∈ C. Since

f(z_0) = (1/2π) ∫_{θ=0}^{2π} f(z_0 + re^{iθ}) dθ

and f(z_0 + re^{iθ}) goes to 0 as r → ∞, we get

|f(z_0)| ≤ (1/2π) ∫_{θ=0}^{2π} |f(z_0 + re^{iθ})| dθ → 0 as r → ∞,

which contradicts f(z_0) = 1/P(z_0) ≠ 0.
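The decay of f = 1/P on large circles, the other ingredient of the proof, is also easy to see numerically (the polynomial below is our own sample choice):

```python
import cmath

def P(z):
    # a sample monic polynomial with no roots on the circles sampled below
    return z ** 3 + 2 * z + 5

def max_abs_f_on_circle(r, n=720):
    # maximum of |1/P| over sample points of the circle |z| = r
    return max(abs(1 / P(r * cmath.exp(2j * cmath.pi * k / n)))
               for k in range(n))
```

As r grows, the maximum shrinks like 1/r^3, so the circle average of |f| goes to 0.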

As you can see, analysis is needed to prove a theorem in algebra. We needed two things: first, the notion of averaging, which is the same as integration; and second, the two-dimensional situation, which makes it possible to consider multiple directions.

2.2 Order relations

Back to rigorous presentations. Let us define upper bounds, and the least upper bound. But first we need to define what x < y or x ≤ y means.

Definition 2.2. Let x, y ∈ N. We say that x > y if and only if there exists a u ∈ N such that x = y + u. Let x < y if and only if y > x.

Theorem 2.3 (Trichotomy). For any x, y ∈ N, precisely one of the following three statements holds.

x = y, x > y, x < y

The key point in the proof is that there are no fixed points in the addition operation. In other words, for any fixed x ∈ N, we have y ≠ x + y for any y ∈ N.

Definition 2.6. A Dedekind cut is a proper subset ξ of Q+ such that

  1. (containing all numbers less than some non-member) For any x ∈ ξ and y ∈ Q+ \ ξ, we have x < y.
  2. (containing no upper bound) There does not exist an x ∈ ξ such that x ≥ y for all y ∈ ξ.

Definition 2.7. The (positive) real numbers are defined as

R+ = {all Dedekind cuts}.

We can embed Q+ into R+ according to the map

r ∈ Q+ ↦ ξr = {s ∈ Q+ : s < r}.

We can also easily define ordering, addition, and multiplication on the real numbers.

Definition 2.8. Let ξ and η be two distinct Dedekind cuts. Define ξ > η if and only if ξ ⊃ η, and ξ < η if and only if ξ ⊂ η. Also, define

ξ + η = {x + y : x ∈ ξ, y ∈ η}

and ξ · η = {xy : x ∈ ξ, y ∈ η}. Now we can define Q as

Q = (−Q+) ∪ { 0 } ∪ Q+

and also R = (−R+) ∪ { 0 } ∪ R+.

You can define addition, multiplication, ordering on these sets, but I am not going to do this, because I do not want to write a whole book.
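Dedekind cuts can be sketched in code by representing a cut through its membership predicate on positive rationals (a rough sketch with our own names; a faithful construction would also have to verify the cut axioms of Definition 2.6):

```python
from fractions import Fraction

def cut_of_rational(r):
    # the embedded cut ξ_r = { s in Q+ : s < r }
    return lambda s: s < r

def cut_sqrt(n):
    # the cut { s in Q+ : s^2 < n }, e.g. representing √2 when n = 2
    return lambda s: s * s < n

def contains_on(samples, xi, eta):
    # check ξ ⊇ η over a finite sample set (ξ > η is defined as ξ ⊃ η)
    return all(xi(s) for s in samples if eta(s))
```

On a grid of positive rationals this recovers 7/5 < √2 < 3/2 purely through set containment, as in Definition 2.8.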

Definition 2.9. The complex numbers are defined as the product C = R × R. The operations on the set are given as

(a, b)(c, d) = (ac − bd, ad + bc),

(a, b) + (c, d) = (a + c, b + d).

Letting i = (0, 1), we get the notation we are used to. In the first class, I said that we will be studying polynomial equations. There are two kinds of things we want to do.

  • Single polynomial of a single variable - This is mainly Galois theory.
  • System of linear equations in several variables - We will be doing this to do Stokes’ theorem.

Next time, we will discuss how to solve a polynomial equation with one variable.
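The pair arithmetic of Definition 2.9 can be checked directly (the function names are ours):

```python
def cadd(p, q):
    # (a, b) + (c, d) = (a + c, b + d)
    a, b = p
    c, d = q
    return (a + c, b + d)

def cmul(p, q):
    # (a, b)(c, d) = (ac - bd, ad + bc)
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)  # the pair playing the role of i
```

In particular i · i = (−1, 0), recovering i^2 = −1.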

3 September 10, 2015

Solving quadratic equations is easy. Given an equation ax^2 + bx + c = 0, we can solve it by “completing the square”.

a(x + b/(2a))^2 = (b^2 − 4ac)/(4a),

and now if we take roots, we get the solution.
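Completing the square gives the familiar quadratic formula; a small sketch (names are ours; cmath.sqrt handles a negative discriminant):

```python
import cmath

def solve_quadratic(a, b, c):
    # a(x + b/(2a))^2 = (b^2 - 4ac)/(4a), then take square roots
    root_disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root_disc) / (2 * a), (-b - root_disc) / (2 * a)
```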

3.1 Scipione del Ferro’s solution of the cubic equation

A general method of solving the cubic equation was first discovered by del Ferro. Let F(x) = ax^3 + bx^2 + cx + d. Imitating the quadratic case, one can translate the variable x by letting x = t + α. For a good α, one can eliminate the second degree term and obtain t^3 + pt + q = 0.

But this does not solve the equation. So we try some other translation. Let t = u + v. Then

t^3 + pt + q = (u^3 + v^3 ) + (u + v)(3uv + p) + q.

Note that this is a polynomial of degree 3 over u. But we don’t want to just see this as a polynomial over u, because it destroys the symmetry between u and v. Instead, we set 3uv + p = 0. Then it is the same as the system

u^3 + v^3 + q = 0,
3uv + p = 0.

Note that 3uv + p = 0 is the artificial relation, and u^3 + v^3 + q = 0 is the original equation. Cubing the second equation, we get u^3 v^3 = −p^3 /27, and then we get a quadratic equation

X^2 + qX − p^3/27 = 0

whose zeroes are u^3 and v^3. Then you get three solutions for each variable, and plugging in each of the solutions, you finally get three solution pairs. This quadratic polynomial is called the resolvent.
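del Ferro’s procedure can be carried out numerically for the depressed cubic t^3 + pt + q = 0 (a sketch with our own names; it does not handle the degenerate case u = 0):

```python
import cmath

def solve_depressed_cubic(p, q):
    # u^3 is a root of the resolvent X^2 + qX - p^3/27 = 0
    u3 = (-q + cmath.sqrt(q * q + 4 * p ** 3 / 27)) / 2
    u = u3 ** (1 / 3)                     # one complex cube root
    omega = cmath.exp(2j * cmath.pi / 3)  # rotate through all three cube roots
    roots = []
    for k in range(3):
        uk = u * omega ** k
        vk = -p / (3 * uk)                # the artificial relation 3uv + p = 0
        roots.append(uk + vk)             # t = u + v
    return roots
```

For t^3 − 7t + 6 = 0 this recovers the roots 1, 2, and −3.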

3.2 Lagrange’s idea

Lagrange saw this solution of del Ferro’s and realized that what del Ferro had done was the same as this. Let ω = (−1 + √3 i)/2 be a primitive cube root of unity. The main trick is just setting

x_1 = u + v,  x_2 = ωu + ω^2 v,  x_3 = ω^2 u + ωv.

Actually the quartic formula was first discovered by Ferrari. But this is not relevant to our topic, so I will go over it quickly. Starting with the equation x^4 + ax^3 + bx^2 + cx + d = 0, we change it to

x^2(x^2 + ax) = −bx^2 − cx − d,
(x(x + a/2))^2 = (a^2/4)x^2 − bx^2 − cx − d,
(x^2 + (a/2)x)^2 = (a^2/4)x^2 − bx^2 − cx − d.

In the cubic formula, we introduced a generic translation t = u + v and imposed an additional condition. We do this again. Translating x^2 + (a/2)x by y/2, we get

(x^2 + (a/2)x + y/2)^2 = (a^2/4 − b + y)x^2 + (ay/2 − c)x + (y^2/4 − d).

Ferrari wanted to make the right-hand side a square of a polynomial, or in other words, make its discriminant zero. This condition in terms of y is a cubic equation. So it is possible to calculate y, and thus x by solving the corresponding quadratic equation.
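The identity behind this step can be spot-checked numerically: once the quartic x^4 + ax^3 + bx^2 + cx + d (which vanishes at a root) is added back to the right-hand side, the equality holds for all x. A sketch (names are ours):

```python
import random

def lhs(x, a, y):
    # (x^2 + (a/2)x + y/2)^2
    return (x * x + a * x / 2 + y / 2) ** 2

def rhs(x, a, b, c, d, y):
    # (a^2/4 - b + y)x^2 + (ay/2 - c)x + (y^2/4 - d), plus the quartic itself
    return ((a * a / 4 - b + y) * x * x + (a * y / 2 - c) * x
            + (y * y / 4 - d) + (x ** 4 + a * x ** 3 + b * x * x + c * x + d))
```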

3.3 Schematics for solving a polynomial equation

As I have said, solving a polynomial equation is performing on the coefficients of the polynomial equation (or the symmetric functions σ_1, ..., σ_n) operations of the form of rational functions and roots (radicals). Actually, taking the roots is what destroys the symmetry, because you need to choose which roots you will use. We can draw the schematic as:

σ_1, σ_2, ..., σ_n
  ↓ root-taking
τ_1^(1), τ_2^(1), ..., τ_{n_1}^(1)
  ↓ root-taking (repeated)
τ_1^(l), τ_2^(l), ..., τ_{n_l}^(l)
  ↓ root-taking
x_1, ..., x_n

Each “layer” actually represents the field of functions which share some specific symmetry. For instance the first layer is C(σ 1 ,... , σn) which is the set of rational symmetric functions. In each step, we take roots to extend the set of functions. Let us represent the process of solving a quadratic equation in this way.

C(σ_1, σ_2)
  ↓ root-taking
C(x_1, x_2) = C(τ_1^(1), τ_2^(1))

Writing down the symmetry of each layer in terms of groups (you can just think of this as a set of permutations for now), this is

{ 1 } = G 1 ⊂ G 0 = S 2 ,

where Sn is the set of permutations on {1, 2, ..., n} and 1 is the identity permutation. The cubic equation has two steps.

{ 1 } = G 2 ⊂ G 1 ⊂ G 0 = S 3

where G_1 = {1, (123), (132)} is the alternating group. It can be drawn as

C(σ_1, σ_2, σ_3)
  ↓ root-taking
C(τ_1^(1), τ_2^(1), τ_3^(1), τ_4^(1))
  ↓ root-taking
C(x_1, x_2, x_3)

where τ_1^(1) = σ_1, τ_2^(1) = y_2 y_3, τ_3^(1) = y_2^3, τ_4^(1) = y_3^3. The schematic for solving the quartic equation can be drawn as

{ 1 } ⊂ K 4 ⊂ A 4 ⊂ S 4

where A 4 is the alternating group and K 4 = { 1 , (12)(34), (13)(24), (14)(23)} is the Klein four-group. This diagram is not the solution itself; it is more of a reverse engineering kind of thing that shows us how complete symmetry was brought down to no symmetry in each of the cases.

[Figure: a configuration of four points P_1, P_2, P_3, P_4 together with the three associated points Q_1, Q_2, Q_3.]

Because the set {Q_1, Q_2, Q_3} is only permuted by permutations of {P_1, P_2, P_3, P_4}, we can represent the elementary symmetric polynomials of Q_1, Q_2, Q_3 in terms of the elementary symmetric polynomials of P_1, P_2, P_3, P_4. But it is not as simple as it looks, because the formula for Q_1, Q_2, Q_3 involves complex conjugates. What you need to do is just write down the formula and take the part which does not involve any complex conjugates. Then, because permutations do not change where the conjugates are, you get a polynomial with no conjugates which does not change after permutation.

4.2 Basic linear algebra

When we talked about groups, they were finite groups which lay in a symmetric group. In Lagrange’s resolvent, the y_i’s were represented by linear combinations of the x_i’s. These are the things we are going to do now.

  • Solution of a system of linear equations
  • Change of variables as a matrix multiplication
  • Inverse of a matrix
  • Determinant of a matrix
  • Cramer’s rule and the adjoint matrix

We are actually doing determinants to do higher-dimensional analysis. When we have a curve, we calculate the length of the curve by projecting it to an axis, and then adding up the lengths. In other words, it is

∫ √(dx^2 + dy^2) = ∫ √(1 + (dy/dx)^2) dx.

In calculating higher-dimensional objects, such as the area of a surface, we do the same thing with a higher-dimensional analogue of the Pythagorean theorem.
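The arc-length formula above is easy to check numerically against a closed form (the helper below is our own; it uses a midpoint Riemann sum):

```python
import math

def arc_length(dydx, x0, x1, n=100000):
    # midpoint Riemann sum for the integral of sqrt(1 + (dy/dx)^2) dx
    h = (x1 - x0) / n
    return sum(math.sqrt(1 + dydx(x0 + (k + 0.5) * h) ** 2) * h
               for k in range(n))
```

For y = x^2 on [0, 1] the exact length is √5/2 + asinh(2)/4.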

We will do some review. A system of linear equations

y_1 = a_11 x_1 + · · · + a_1n x_n
  ⋮
y_m = a_m1 x_1 + · · · + a_mn x_n

can be represented by the matrix equation ~y = A~x, where A = (a_ij) is the m × n matrix of coefficients, or

~y = ∑_{j=1}^{n} x_j A_j

where A_j is the jth column of the matrix. Gauss came up with a procedure to solve the equation. Using the following elementary row operations, we can bring the matrix into row echelon form.

  • Multiply a row by a nonzero number.
  • Switch two rows.
  • Replace the ith row by adding a constant times the jth row.

Everyone knows this. The important observation is that applying an elementary row operation E to A to get A′ is the same as applying the operation to I_m to get I′_m and then left-multiplying A by I′_m to get A′. Why is this? The jth column of the m × n matrix A is

(a_1j, ..., a_mj)^T = a_1j ~e_1 + · · · + a_mj ~e_m.

Looking at each vector separately, applying a row operation is actually manipulating the coefficients of the vector expansion correspondingly. Thus it is the same as left-multiplying by a matrix. Now consider the equation A~x = ~b

where ~b is a column m-vector, and ~x is a column n-vector to be solved for as the coefficients of the n column m-vectors of A. We look at the augmented matrix

(A|~b)

and apply k elementary row operations. Let E_1, E_2, ..., E_k be those elementary row operations applied to the identity matrix. Then after the operations, we will get

(E_k · · · E_1 A | E_k · · · E_1 ~b).
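The observation that a row operation equals left multiplication by the correspondingly operated identity matrix can be verified directly (a sketch with our own names, using one operation type):

```python
def identity(m):
    return [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def row_add(A, i, j, c):
    # elementary row operation: replace row i by (row i) + c * (row j)
    B = [row[:] for row in A]
    B[i] = [x + c * y for x, y in zip(A[i], A[j])]
    return B
```

Applying row_add to the identity first and then multiplying gives the same matrix as applying it to A directly.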