
STAT 9220

Lecture 9

Information Inequality

Greg Rempala

Department of Biostatistics

Medical College of Georgia

Mar 10, 2009

9.1 Fisher information and Cramér-Rao lower bound

Suppose that we have a lower bound for the variances of all unbiased estimators of ϑ, and that there is an unbiased estimator T of ϑ whose variance always attains this lower bound. Then T is a UMVUE of ϑ.

Although this is not an effective way to find UMVUEs, it provides a way of assessing the performance of UMVUEs.

Theorem 9.1.1 (Cramér-Rao lower bound). Let X = (X₁, . . . , Xₙ) be a sample from P ∈ P = {Pθ : θ ∈ Θ}, where Θ is an open set in ℝᵏ. Suppose that T(X) is an estimator with E[T(X)] = g(θ) being a differentiable function of θ; Pθ has a p.d.f. fθ w.r.t. a measure ν for all θ ∈ Θ; and fθ is differentiable as a function of θ and satisfies

$$\frac{\partial}{\partial\theta}\int h(x)\,f_\theta(x)\,d\nu = \int h(x)\,\frac{\partial}{\partial\theta} f_\theta(x)\,d\nu, \qquad \theta \in \Theta, \qquad (9.1)$$

for h(x) ≡ 1 and h(x) = T(x). Then

$$\mathrm{Var}(T(X)) \ge \left[\frac{\partial}{\partial\theta} g(\theta)\right]^{\top} [I(\theta)]^{-1}\,\frac{\partial}{\partial\theta} g(\theta), \qquad (9.2)$$

where

$$I(\theta) = E\left\{ \frac{\partial}{\partial\theta}\log f_\theta(X) \left[\frac{\partial}{\partial\theta}\log f_\theta(X)\right]^{\top} \right\} \qquad (9.3)$$

is assumed to be positive definite for any θ ∈ Θ.

The k × k matrix I(θ) in (9.3) is called the Fisher information matrix.

The greater I(θ) is, the easier it is to distinguish θ from neighboring values and, therefore, the more accurately θ can be estimated. Thus, I(θ) is a measure of the information that X contains about the unknown θ.

The inequalities in (9.2) and (9.4) are called information inequalities.
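As a quick numerical sanity check (not part of the original notes; NumPy and the arbitrary values μ = 2, σ = 1.5, n = 50 are assumptions), the following sketch estimates I(μ) for the N(μ, σ²) mean via the score in (9.3) and compares Var(X̄) with the bound in (9.2):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 1.5, 50

# Score of one observation from N(mu, sigma^2) w.r.t. mu: (x - mu) / sigma^2.
x = rng.normal(mu, sigma, size=200_000)
score = (x - mu) / sigma**2

# Fisher information in a single observation, estimated via (9.3):
I1 = np.mean(score**2)
print(I1, 1 / sigma**2)            # both approximately 1/sigma^2

# The bound (9.2) for unbiased estimation of mu from n observations is
# 1/(n*I1); Var(X-bar) = sigma^2/n attains it.
xbar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
print(1 / (n * I1), xbar.var())
```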

The following result is helpful in finding the Fisher information matrix.

Proposition 9.1.1. (i) Let X and Y be independent with the Fisher information matrices IX(θ) and IY(θ), respectively. Then, the Fisher information about θ contained in (X, Y) is IX(θ) + IY(θ). In particular, if X₁, . . . , Xₙ are i.i.d. and I₁(θ) is the Fisher information about θ contained in a single Xᵢ, then the Fisher information about θ contained in X₁, . . . , Xₙ is nI₁(θ).

(ii) Suppose that X has the p.d.f. fθ that is twice differentiable in θ and that (9.1) holds with h(x) ≡ 1 and fθ replaced by ∂fθ/∂θ. Then

$$I(\theta) = -E\left[\frac{\partial^2}{\partial\theta\,\partial\theta^{\top}} \log f_\theta(X)\right].$$

Proof. Result (i) follows from the independence of X and Y and the definition of the Fisher information. Result (ii) follows from the equality

$$\frac{\partial^2}{\partial\theta\,\partial\theta^{\top}} \log f_\theta(X) = \frac{1}{f_\theta(X)}\,\frac{\partial^2}{\partial\theta\,\partial\theta^{\top}} f_\theta(X) - \frac{\partial}{\partial\theta}\log f_\theta(X) \left[\frac{\partial}{\partial\theta}\log f_\theta(X)\right]^{\top}.$$
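Both expressions for the Fisher information, and the additivity in (i), can be checked numerically. The following sketch (not from the notes; a Poisson model and NumPy are assumed) estimates the outer-product form (9.3) and the negative-Hessian form of (ii) by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0
x = rng.poisson(lam, size=500_000)

# For Poisson(lam): log f(x) = x*log(lam) - lam - log(x!), so the score is
# x/lam - 1 and the second derivative of log f in lam is -x/lam**2.
outer = np.mean((x / lam - 1) ** 2)   # E[(score)^2], definition (9.3)
neg_hess = np.mean(x / lam**2)        # -E[second derivative], part (ii)
print(outer, neg_hess, 1 / lam)       # all approximately 1/lam

# Part (i): a sample of n i.i.d. copies carries n * I1(lam).
n = 10
print(n * outer)                      # approximately n/lam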

Example 9.1.1. Let X₁, . . . , Xₙ be i.i.d. with the Lebesgue p.d.f. σ⁻¹f((x − μ)/σ), where f(x) > 0 and f′(x) exists for all x ∈ ℝ, μ ∈ ℝ, and σ > 0 (a location-scale family). Let θ = (μ, σ). Then, the Fisher information about θ contained in X₁, . . . , Xₙ is (discussion)

$$I(\theta) = \frac{n}{\sigma^2} \begin{pmatrix} \int \frac{[f'(x)]^2}{f(x)}\,dx & \int \frac{f'(x)[x f'(x) + f(x)]}{f(x)}\,dx \\ \int \frac{f'(x)[x f'(x) + f(x)]}{f(x)}\,dx & \int \frac{[x f'(x) + f(x)]^2}{f(x)}\,dx \end{pmatrix}.$$
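For a concrete case (an added illustration, not in the notes; SciPy is assumed), taking f to be the standard normal density should recover the familiar information matrix (n/σ²) diag(1, 2). The entries above can be evaluated by numerical integration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

f = norm.pdf                       # standard normal density
fp = lambda x: -x * norm.pdf(x)    # its derivative f'(x)

a = quad(lambda x: fp(x) ** 2 / f(x), -np.inf, np.inf)[0]
b = quad(lambda x: fp(x) * (x * fp(x) + f(x)) / f(x), -np.inf, np.inf)[0]
c = quad(lambda x: (x * fp(x) + f(x)) ** 2 / f(x), -np.inf, np.inf)[0]

n, sigma = 1, 1.0
print((n / sigma**2) * np.array([[a, b], [b, c]]))
# approximately [[1, 0], [0, 2]]: the normal location-scale information
```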

Note that I(θ) depends on the particular parameterization. If θ = ψ(η) and ψ is differentiable, then the Fisher information that X contains about η is

$$\frac{\partial}{\partial\eta}\psi(\eta)\; I(\psi(\eta)) \left[\frac{\partial}{\partial\eta}\psi(\eta)\right]^{\top}.$$

However, the Cramér-Rao lower bound in (9.2) or (9.4) is not affected by any one-to-one reparameterization. If we use inequality (9.2) or (9.4) to find a UMVUE T(X), then we obtain a formula for Var(T(X)) at the same time. On the other hand, the Cramér-Rao lower bound in (9.2) or (9.4) is typically not sharp.
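A scalar illustration of these two facts (an assumed example, not from the notes): parameterize an Exponential distribution by its rate θ or its mean η, with θ = ψ(η) = 1/η. The information transforms as above, while the bound for estimating the mean is the same η² in either parameterization:

```python
# Exponential distribution: rate theta vs. mean eta, theta = psi(eta) = 1/eta.
eta = 2.0
theta = 1.0 / eta

I_theta = 1.0 / theta**2            # Fisher information about the rate
dpsi = -1.0 / eta**2                # psi'(eta)
I_eta = dpsi * I_theta * dpsi       # transformation rule; equals 1/eta^2 directly
print(I_eta, 1 / eta**2)

# CRLB for estimating the mean: g(theta) = 1/theta vs. g(eta) = eta.
crlb_theta = (-1.0 / theta**2) ** 2 / I_theta   # [g'(theta)]^2 / I(theta)
crlb_eta = 1.0 / I_eta                          # [g'(eta)]^2 / I(eta), g' = 1
print(crlb_theta, crlb_eta)         # identical: eta^2 = 4.0
```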

Under some regularity conditions, the Cramér-Rao lower bound is attained if and only if fθ is in an exponential family; see Propositions 9.1.2 and 9.1.3 and the discussion in Lehmann (1983, p. 123).

Some improved information inequalities are available (see, e.g., Lehmann (1983, Sections 2.6 and 2.7)).

(iii) Since ϑ = E[T(X)] = ∂ξ(η)/∂η,

$$I(\eta) = \frac{\partial\vartheta}{\partial\eta}\, I(\vartheta) \left[\frac{\partial\vartheta}{\partial\eta}\right]^{\top} = \frac{\partial^2}{\partial\eta\,\partial\eta^{\top}}\,\xi(\eta)\; I(\vartheta) \left[\frac{\partial^2}{\partial\eta\,\partial\eta^{\top}}\,\xi(\eta)\right]^{\top}.$$

By the theorem on differentiation in exponential families and the result in (ii), ∂²ξ(η)/(∂η ∂η⊤) = Var(T) = I(η). Hence

$$I(\vartheta) = [I(\eta)]^{-1} I(\eta) [I(\eta)]^{-1} = [I(\eta)]^{-1} = [\mathrm{Var}(T)]^{-1}.$$

A direct consequence of Proposition 9.1.2(ii) is that the variance of any linear function of T in (9.6) attains the Cramér-Rao lower bound. The following result gives a necessary condition for Var(U(X)) of an estimator U(X) to attain the Cramér-Rao lower bound.
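For instance (an assumed Poisson example, not from the notes), the sample mean is a linear function of the natural sufficient statistic T(X) = ΣXᵢ, and its variance attains the bound:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 3.0, 40

# I(lam) = n/lam for a Poisson sample, so the CRLB for estimating lam is lam/n.
xbar = rng.poisson(lam, size=(200_000, n)).mean(axis=1)
print(xbar.var(), lam / n)   # both approximately 0.075
```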

Proposition 9.1.3. Assume that the conditions in Theorem 9.1.1 hold with T(X) replaced by U(X) and that Θ ⊂ ℝ. (i) If Var(U(X)) attains the Cramér-Rao lower bound in (9.4), then

$$a(\theta)\,[U(X) - g(\theta)] = g'(\theta)\,\frac{\partial}{\partial\theta} \log f_\theta(X) \quad \text{a.s. } P_\theta$$

for some function a(θ), θ ∈ Θ. (ii) Let fθ and T be given by (9.6). If Var(U(X)) attains the Cramér-Rao lower bound, then U(X) is a linear function of T(X) a.s. Pθ, θ ∈ Θ.
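A symbolic sketch of condition (i) (SymPy is assumed; the normal-mean setting is that of Example 9.1.2 below): with U(X) = X̄, g(μ) = μ, and score n(X̄ − μ)/σ², the choice a(μ) = n/σ² satisfies the condition identically.

```python
import sympy as sp

mu, xbar = sp.symbols('mu xbar', real=True)
n, sigma = sp.symbols('n sigma', positive=True)

score = n * (xbar - mu) / sigma**2      # d/d mu log f_mu(X), written via X-bar
a = n / sigma**2                        # candidate a(mu)
g_prime = 1                             # g(mu) = mu, so g'(mu) = 1

# a(mu)*[U - g(mu)] - g'(mu)*score should simplify to zero.
print(sp.simplify(a * (xbar - mu) - g_prime * score))   # 0
```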

Example 9.1.2. Let X₁, . . . , Xₙ be i.i.d. from the N(μ, σ²) distribution with an unknown μ ∈ ℝ and a known σ². Let fμ be the joint p.d.f. of X = (X₁, . . . , Xₙ). Then

$$\frac{\partial}{\partial\mu} \log f_\mu(X) = \sum_{i=1}^{n} \frac{X_i - \mu}{\sigma^2}.$$

Thus, I(μ) = n/σ². It is obvious that Var(X̄) attains the Cramér-Rao lower bound in (9.4). Consider now the estimation of ϑ = μ². Since E(X̄²) = μ² + σ²/n, the UMVUE of ϑ is h(X̄) = X̄² − σ²/n. A straightforward calculation (check) shows that

$$\mathrm{Var}(h(\bar X)) = \frac{4\mu^2\sigma^2}{n} + \frac{2\sigma^4}{n^2}.$$

On the other hand, the Cramér-Rao lower bound in this case is 4μ²σ²/n. Hence Var(h(X̄)) does not attain the Cramér-Rao lower bound. The difference is 2σ⁴/n².
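A Monte Carlo check of this example (not in the notes; NumPy and arbitrary parameter values are assumed), using the fact that X̄ ~ N(μ, σ²/n):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n = 1.0, 2.0, 25

xbar = rng.normal(mu, sigma / np.sqrt(n), size=500_000)  # X-bar ~ N(mu, sigma^2/n)
h = xbar**2 - sigma**2 / n                               # UMVUE of mu^2

var_exact = 4 * mu**2 * sigma**2 / n + 2 * sigma**4 / n**2
crlb = 4 * mu**2 * sigma**2 / n
print(h.var(), var_exact, crlb)   # simulated variance matches the formula
                                  # and exceeds the bound by 2*sigma^4/n^2
```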

Condition (9.1) is a key regularity condition for the results in Theorem 9.1.1 and Proposition 9.1.3.

If fθ is not in an exponential family, then (9.1) has to be checked. Typically, it does not hold if the set {x : fθ(x) > 0} depends on θ (text, Chapter 2, Exercise 37). More discussion can be found in Pitman (1979).