Lecture 19: Information and learning
TTIC 31020: Introduction to Machine Learning
Instructor: Greg Shakhnarovich
TTI–Chicago
November 8, 2010

Review

The entropy of a RV A ∈ {a_1, …, a_m},

H(A) ≜ ∑_{i=1}^{m} p(a_i) log (1 / p(a_i)) = −∑_{i=1}^{m} p(a_i) log p(a_i),

is the (asymptotically) optimal codelength for a sequence of outcomes of that RV.

Minimum Description Length (MDL) principle:

argmin_θ̂ DL(X, θ̂) ≈ argmin_θ̂ [ −∑_{i=1}^{N} log p(x_i | θ̂) + (k/2) log N ]

BIC is an approximation of MDL.
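
Not part of the lecture, but as a quick illustration of how such a score is used, here is a minimal sketch (made-up data and candidate models, assuming the (k/2) log N penalty form above) that compares two densities by negative log-likelihood plus the complexity penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)   # made-up 1-D data
N = len(x)

def gauss_loglik(x, mu, sigma):
    # Sum of log N(x_i; mu, sigma^2).
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

def description_length(loglik, k):
    # MDL-style score: data cost plus (k/2) log N model cost (the BIC form).
    return -loglik + 0.5 * k * np.log(N)

# Candidate 1: Gaussian with fixed unit variance (k = 1 fitted parameter).
dl1 = description_length(gauss_loglik(x, x.mean(), 1.0), k=1)
# Candidate 2: Gaussian with fitted mean and variance (k = 2 fitted parameters).
dl2 = description_length(gauss_loglik(x, x.mean(), x.std()), k=2)

print("DL, fixed-variance Gaussian:", dl1)
print("DL, full Gaussian          :", dl2)   # lower description length wins
```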

Learning and coding

Suppose we have a random discrete variable X with distribution p, p_i ≜ Pr(X = i), i = 1, …, m. The optimal code (knowing p) has expected length per observation

L(p) = −∑_{i=1}^{m} p_i log p_i.

Suppose now we think (estimate) the distribution is p̂ = q.

  • We build a code with codeword lengths −log q_i;
  • The expected length is

L(q) = −∑_{i=1}^{m} p_i log q_i.
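
A minimal numeric sketch of these two quantities (illustrative; the distributions are the ones used in the example that follows), using the idealized codeword lengths −log₂ q_i:

```python
import numpy as np

def expected_codelength(p, q):
    # Expected bits per symbol when symbols occur with probabilities p
    # but codeword lengths are the idealized -log2 q_i.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log2(q))

p = [0.5, 0.2, 0.3]    # true distribution
q = [0.35, 0.25, 0.4]  # estimated distribution

print("L(p) =", expected_codelength(p, p))  # entropy of p
print("L(q) =", expected_codelength(p, q))  # cross-entropy, always >= L(p)
```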

Example

3-letter alphabet, true probabilities p(a) = 0.5, p(b) = 0.2, p(c) = 0.3.
Estimate from a (small) sample text: q(a) = 0.35, q(b) = 0.25, q(c) = 0.4.

Huffman code:

assumed distribution   a    b    c    L(P)
p                      0    10   11   1.5 bits
q                      10   11   0    1.7 bits
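
The two expected lengths in the table can be checked with a short script; this is a sketch (not from the slides) that builds a Huffman code from an assumed distribution and measures its expected length under the true p:

```python
import heapq

def huffman_lengths(probs):
    """Return Huffman codeword lengths for each symbol given assumed probabilities."""
    # Each heap entry: (probability, tiebreak id, symbols in this subtree).
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    counter = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:   # every merge adds one bit to these symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

p = {"a": 0.5, "b": 0.2, "c": 0.3}    # true distribution
q = {"a": 0.35, "b": 0.25, "c": 0.4}  # estimated distribution

for name, assumed in [("p", p), ("q", q)]:
    lengths = huffman_lengths(assumed)
    expected = sum(p[s] * lengths[s] for s in p)  # expected length under true p
    print(f"code built from {name}: expected length = {expected:.1f} bits")
```

Running this prints 1.5 bits for the code built from p and 1.7 bits for the code built from q, matching the table.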

KL divergence

The cost of estimating p by q:

DKL(p || q) ≜ L(q) − L(p) = −∑_{i=1}^{m} p_i log q_i + ∑_{i=1}^{m} p_i log p_i
                          = ∑_{i=1}^{m} p_i (log p_i − log q_i)
                          = ∑_{i=1}^{m} p_i log (p_i / q_i)

is called the Kullback-Leibler divergence between p and q.

A result from information theory:

  • For any p, q, DKL(p || q) ≥ 0, with DKL(p || p) = 0.
  • I.e., the cost is lowest (zero) when we estimate p̂ = p.
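
A minimal sketch (illustrative) checking the identity DKL(p || q) = L(q) − L(p) on the running example, with idealized −log₂ q_i codelengths:

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log2(p_i / q_i), in bits.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log2(p / q))

p = np.array([0.5, 0.2, 0.3])
q = np.array([0.35, 0.25, 0.4])

L_p = -np.sum(p * np.log2(p))   # entropy: optimal expected codelength
L_q = -np.sum(p * np.log2(q))   # expected codelength using the code built for q

print("D_KL(p || q) =", kl_divergence(p, q))
print("L(q) - L(p)  =", L_q - L_p)            # same value, by the identity above
print("D_KL(p || p) =", kl_divergence(p, p))  # zero
```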

Properties of KL-divergence

DKL(p || q) = ∑_{i=1}^{m} p_i log (p_i / q_i)

  • DKL(p || q) ≥ 0 for any p, q.
  • DKL(p || q) = 0 if and only if p ≡ q.
  • It’s asymmetric:
    • If p_i = 0, q_i ≥ 0 ⇒ 0 · log(0) → 0.
    • If q_i = 0, p_i ≥ 0 ⇒ p_i · log(p_i/0) → ∞.
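
A small illustration of the asymmetry and of the zero-probability cases above (made-up distributions; the kl helper below is an assumption, not library code):

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) in bits, with the 0*log(0) = 0 convention."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # terms with p_i = 0 contribute 0
    with np.errstate(divide="ignore"):
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

p = np.array([0.5, 0.2, 0.3])
q = np.array([0.35, 0.25, 0.4])
print(kl(p, q), kl(q, p))             # different values: KL is asymmetric

r = np.array([0.7, 0.3, 0.0])         # r puts zero mass on the third symbol
print(kl(r, p))                       # finite: the p_i = 0 term is dropped
print(kl(p, r))                       # inf: q_i = 0 where p_i > 0
```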

Back to EM

Recall: X are observed, Z are hidden; by the chain rule,

p(X, Z | θ) = p(Z | X, θ) p(X | θ)

log p(X, Z | θ) − log p(Z | X, θ) = log p(X | θ)

Now take the expectation w.r.t. p(Z | X, θ^old):

E_{p(Z | X, θ^old)}[log p(X, Z | θ)] − E_{p(Z | X, θ^old)}[log p(Z | X, θ)] = log p(X | θ),

where the first term is denoted Q(θ; θ^old).
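
A tiny numeric sanity check of the chain-rule identity above, on a made-up joint distribution of one observed and one hidden binary variable:

```python
import numpy as np

# Hypothetical joint p(x, z) over binary X (rows) and binary Z (columns).
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])

p_x = joint.sum(axis=1)                  # p(x)
p_z_given_x = joint / p_x[:, None]       # p(z | x)

x, z = 1, 0                              # pick any outcome pair
lhs = np.log(joint[x, z]) - np.log(p_z_given_x[x, z])
rhs = np.log(p_x[x])
print(lhs, rhs)                          # equal: log p(x,z) - log p(z|x) = log p(x)
```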

Likelihood of EM solution

log p(X | θ) = Q(θ; θ^old) − E_{p(Z | X, θ^old)}[log p(Z | X, θ)]

Since θ^new = argmax_θ Q(θ; θ^old), we have Q(θ^new; θ^old) ≥ Q(θ^old; θ^old). Also,

E_{p(Z | X, θ^old)}[log p(Z | X, θ^old)] − E_{p(Z | X, θ^old)}[log p(Z | X, θ^new)]
    = ∑_Z p(Z | X, θ^old) log [ p(Z | X, θ^old) / p(Z | X, θ^new) ]
    = DKL( p(Z | X, θ^old) || p(Z | X, θ^new) ) ≥ 0.

Adding these two non-negative differences gives

log p(X | θ^new) − log p(X | θ^old) = [Q(θ^new; θ^old) − Q(θ^old; θ^old)] + DKL( p(Z | X, θ^old) || p(Z | X, θ^new) ) ≥ 0,

so p(X | θ^new) ≥ p(X | θ^old): an EM iteration can never decrease the likelihood.
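
To see this monotonicity numerically, here is a minimal EM sketch (not from the slides; made-up 1-D data) for a two-component Gaussian mixture, printing the log-likelihood at each iteration; it should never decrease:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up data: two Gaussian clusters in 1-D.
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 0.7, 100)])

def log_gauss(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Initial parameters (deliberately poor).
pi_ = np.array([0.5, 0.5])
mu = np.array([-0.5, 0.5])
var = np.array([1.0, 1.0])

for it in range(20):
    # E-step: responsibilities p(z = k | x_i, theta_old).
    log_r = np.log(pi_) + log_gauss(x[:, None], mu, var)     # shape (N, 2)
    log_lik = np.logaddexp.reduce(log_r, axis=1)             # log p(x_i | theta_old)
    r = np.exp(log_r - log_lik[:, None])

    # M-step: maximize Q(theta; theta_old) in closed form.
    Nk = r.sum(axis=0)
    pi_ = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

    print(f"iter {it:2d}: log-likelihood = {log_lik.sum():.3f}")  # non-decreasing
```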

Mixture model for regression

Example:

[Figure: scatter plot of an example data set (x roughly in (−3, 3), y roughly in (1, 6)) motivating a mixture model for regression.]
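
As a rough sense of what such a data set might look like (purely illustrative; the figure's actual data are not reproduced here, and all parameters below are made up), one can sample from a two-component mixture of linear regressions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
x = rng.uniform(-3, 3, size=N)

# Hypothetical two-component mixture of linear regressions:
# each point picks a component z, then y = w0_z + w1_z * x + noise.
z = rng.integers(0, 2, size=N)
w0 = np.array([4.0, 2.5])      # made-up intercepts
w1 = np.array([-0.5, 0.6])     # made-up slopes
y = w0[z] + w1[z] * x + rng.normal(0.0, 0.3, size=N)

# A single least-squares line summarizes such data poorly; this is the kind
# of data a mixture model for regression is meant to handle.
w_single = np.polyfit(x, y, deg=1)
print("single-line fit (slope, intercept):", w_single)
```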