Homework 2 Solutions: Maximum Likelihood and Bayesian Estimation, PCA and LDA, Assignments of Pattern Classification and Recognition

Homework solution reference for CPE646

Typology: Assignments

2019/2020

Uploaded on 10/15/2020

Uploaded by little_cute



Homework 2 solutions

Problem 1.

P1.1 Maximum likelihood estimation

$$p(x \mid \theta) = \begin{cases} \theta^2 x e^{-\theta x}, & x \ge 0 \\ 0, & \text{otherwise} \end{cases}$$

The log-likelihood of the i.i.d. samples $D = \{x_1, \ldots, x_n\}$ is

$$l(\theta) = \ln p(D \mid \theta) = \ln \prod_{k=1}^{n} p(x_k \mid \theta) = \sum_{k=1}^{n} \ln p(x_k \mid \theta) = \sum_{k=1}^{n} \left( 2\ln\theta + \ln x_k - \theta x_k \right) = 2n\ln\theta + \sum_{k=1}^{n} \ln x_k - \theta \sum_{k=1}^{n} x_k.$$

Setting the derivative to zero,

$$\frac{\partial l(\theta)}{\partial \theta} = \frac{2n}{\theta} - \sum_{k=1}^{n} x_k = 0,$$

we have

$$\hat\theta = \frac{2n}{\sum_{k=1}^{n} x_k} = \frac{2}{\bar{x}}.$$

P1.2 Bayesian estimation. Given

$$p(x \mid \theta) = \begin{cases} \theta^2 x e^{-\theta x}, & x \ge 0 \\ 0, & \text{otherwise} \end{cases} \qquad p(\theta) \sim U(0, \theta_{\max}), \quad \theta_{\max} > 0 \text{ and fixed,}$$

we estimate $\theta$ from the posterior

$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta)\, p(\theta)\, d\theta}.$$

Let $\alpha = \int p(D \mid \theta)\, p(\theta)\, d\theta$, which is a normalization factor independent of $\theta$. We try to find the $\hat\theta$ which maximizes $p(\theta \mid D)$ as our estimate of $\theta$. Since

$$p(D \mid \theta) = \prod_{k=1}^{n} p(x_k \mid \theta) = \theta^{2n} \Bigl( \prod_{k=1}^{n} x_k \Bigr) e^{-\theta \sum_{k=1}^{n} x_k},$$

for $0 < \theta \le \theta_{\max}$ we have

$$\ln p(\theta \mid D) = \ln p(D \mid \theta) + \ln p(\theta) - \ln \alpha = 2n\ln\theta + \sum_{k=1}^{n} \ln x_k - \theta \sum_{k=1}^{n} x_k - \ln \theta_{\max} - \ln \alpha.$$

Setting the derivative to zero,

$$\frac{\partial}{\partial \theta} \ln p(\theta \mid D) = \frac{2n}{\theta} - \sum_{k=1}^{n} x_k = 0,$$

we have $\theta = 2/\bar{x}$, valid for $0 < 2/\bar{x} \le \theta_{\max}$. Because $\ln p(\theta \mid D)$ is unimodal and increases before its maximum, if $2/\bar{x} > \theta_{\max}$ we let $\hat\theta = \theta_{\max}$. Therefore

$$\hat\theta = \min\!\left( \frac{2}{\bar{x}},\ \theta_{\max} \right).$$
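The clipped MAP rule can be sketched in a few lines. The helper name `map_estimate` and the data and bound values below are illustrative, not from the assignment:

```python
import numpy as np

def map_estimate(x, theta_max):
    """MAP estimate of theta under the uniform prior U(0, theta_max).

    The unconstrained posterior mode is 2/mean(x); because the log-posterior
    is unimodal and increasing before its peak, the mode is clipped to
    theta_max when 2/mean(x) falls outside the prior's support.
    """
    theta_unconstrained = 2.0 / np.mean(x)
    return min(theta_unconstrained, theta_max)

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.0 / 4.0, size=5_000)  # true theta = 4

print(map_estimate(x, theta_max=10.0))  # bound inactive: approx 2/mean(x)
print(map_estimate(x, theta_max=2.5))   # bound active: clipped to 2.5
```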

P2.2 LDA

We can see that if we use $y_k = w^t x_k$ for classification, which is the result of LDA for dimension reduction, the classification performance will be good.

The class means and scatter matrices are

$$m_i = \frac{1}{n_i} \sum_{x \in D_i} x, \quad i = 1, 2,$$

$$S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^t, \qquad S_W = S_1 + S_2 \quad \text{(the within-class scatter matrix)},$$

so the LDA projection direction and the projected samples are

$$w = S_W^{-1}(m_1 - m_2), \qquad y_k = w^t x_k,$$

and the reconstructed data are $\hat{x}_k = y_k w$.
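A minimal sketch of this LDA computation, assuming NumPy and synthetic two-class Gaussian data (the means, spread, and sample sizes are made up for illustration, not the assignment's numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two illustrative Gaussian classes in 2-D.
D1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
D2 = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(200, 2))

# Class means m_i = (1/n_i) * sum of x in D_i
m1, m2 = D1.mean(axis=0), D2.mean(axis=0)

# Scatter matrices S_i = sum (x - m_i)(x - m_i)^t, S_W = S_1 + S_2
S1 = (D1 - m1).T @ (D1 - m1)
S2 = (D2 - m2).T @ (D2 - m2)
S_W = S1 + S2

# Fisher direction w = S_W^{-1} (m_1 - m_2); solve avoids an explicit inverse
w = np.linalg.solve(S_W, m1 - m2)

# 1-D projections y_k = w^t x_k for each class
y1, y2 = D1 @ w, D2 @ w

# Thresholding at the midpoint of the projected class means separates
# the classes almost perfectly on this well-separated data.
threshold = (y1.mean() + y2.mean()) / 2.0
accuracy = (np.sum(y1 > threshold) + np.sum(y2 < threshold)) / 400.0
print(accuracy)
```

Projecting onto the single direction $w$ and thresholding illustrates the claim above: a 1-D LDA projection preserves essentially all of the class-discriminative information for well-separated classes.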