LSE and UMVUE in Linear Regression: Proof of Theorem 11.1.1 - Prof. Grzegorz A. Rempala, Study notes of Statistics

The proof of Theorem 11.1.1 in Greg Rempala's STAT 9220 Lecture 11, which covers the LSE (least squares estimator) and the UMVUE (uniformly minimum variance unbiased estimator) in the context of a linear regression model. The theorem states that the LSE is the UMVUE for any estimable parameter and that both UMVUEs attain the CRLB (Cramér-Rao lower bound).

STAT 9220

Lecture 11

LSE and UMVUE

Greg Rempala

Department of Biostatistics

Medical College of Georgia

Mar 24, 2009

11.1 BLUEs

Theorem 11.1.1. Consider the model

$$X = Z\beta + \varepsilon \qquad (11.1)$$

with assumption (A1) ($\varepsilon$ is distributed as $N_n(0, \sigma^2 I_n)$ with an unknown $\sigma^2 > 0$).

(i) The LSE $l'\hat\beta$ is the UMVUE of $l'\beta$ for any estimable $l'\beta$.

(ii) The UMVUE of $\sigma^2$ is $\hat\sigma^2 = (n - r)^{-1}\|X - Z\hat\beta\|^2$, where $r$ is the rank of $Z$.

(iii) The UMVUEs in (i) and (ii) both attain the CRLB.
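As a concrete companion to the statement, here is a minimal NumPy sketch; everything in it (the design matrix, parameter values, and the use of the Moore-Penrose pseudoinverse as one particular choice of $(Z'Z)^-$) is an illustrative assumption rather than part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data from model (11.1): X = Z beta + eps, eps ~ N_n(0, sigma^2 I_n).
n, p, sigma2 = 25, 3, 0.5
Z = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.3])
X = Z @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# An LSE: any solution of the normal equation Z'Z beta_hat = Z'X.
# The Moore-Penrose pseudoinverse is one particular generalized inverse (Z'Z)^-.
beta_hat = np.linalg.pinv(Z.T @ Z) @ Z.T @ X
r = np.linalg.matrix_rank(Z)

# UMVUE of sigma^2 from part (ii): ||X - Z beta_hat||^2 / (n - r).
sigma2_hat = np.sum((X - Z @ beta_hat) ** 2) / (n - r)

# UMVUE of an estimable l'beta (Z has full column rank here, so every l'beta is estimable).
l = np.array([1.0, 1.0, 1.0])
print(l @ beta_hat, sigma2_hat)
```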

Proof. Let $\hat\beta$ be an LSE of $\beta$. By the normal equation $Z'Z\hat\beta = Z'X$,

$$(X - Z\hat\beta)'Z(\hat\beta - \beta) = (X'Z - X'Z)(\hat\beta - \beta) = 0$$

and, hence,

$$
\begin{aligned}
\|X - Z\beta\|^2
&= \|X - Z\hat\beta + Z\hat\beta - Z\beta\|^2 \\
&= \|X - Z\hat\beta\|^2 + \|Z\hat\beta - Z\beta\|^2 \\
&= \|X - Z\hat\beta\|^2 - 2\beta'Z'Z\hat\beta + \|Z\beta\|^2 + \|Z\hat\beta\|^2 \\
&= \|X - Z\hat\beta\|^2 - 2\beta'Z'X + \|Z\beta\|^2 + \|Z\hat\beta\|^2.
\end{aligned}
$$

In particular, the likelihood under (A1) depends on $X$ only through $(Z'X, \|X - Z\hat\beta\|^2)$, which is complete and sufficient for $(\beta, \sigma^2)$; since $l'\hat\beta$ is an unbiased function of $Z'X$, part (i) follows.
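The orthogonal decomposition $\|X - Z\beta\|^2 = \|X - Z\hat\beta\|^2 + \|Z\hat\beta - Z\beta\|^2$ can be confirmed numerically; a small sketch with an arbitrary design and data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 15, 3
Z = rng.normal(size=(n, p))
beta = np.array([1.0, 0.0, -2.0])
X = Z @ beta + rng.normal(size=n)

beta_hat = np.linalg.pinv(Z.T @ Z) @ Z.T @ X   # an LSE of beta

# ||X - Z beta||^2 = ||X - Z beta_hat||^2 + ||Z beta_hat - Z beta||^2
lhs = np.sum((X - Z @ beta) ** 2)
rhs = np.sum((X - Z @ beta_hat) ** 2) + np.sum((Z @ (beta_hat - beta)) ** 2)
print(np.isclose(lhs, rhs))
```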

Therefore,

$$
\begin{aligned}
E\|X - Z\hat\beta\|^2
&= E(X - Z\hat\beta)'(X - Z\hat\beta) = EX'\bigl(X - Z\beta + Z(\beta - \hat\beta)\bigr) \\
&= EX'(X - Z\beta) - EX'Z(\hat\beta - \beta)
 = E(X - Z\beta)'(X - Z\beta) - E(Z'Z\hat\beta)'(\hat\beta - \beta) \\
&= E\|X - Z\beta\|^2 - E(\hat\beta - \beta)'Z'Z(\hat\beta - \beta) \\
&= \operatorname{tr}\bigl(\operatorname{Var}(X - Z\beta)\bigr) - \operatorname{tr}\bigl(\operatorname{Var}(Z(\hat\beta - \beta))\bigr) \\
&= \operatorname{tr}\bigl(\operatorname{Var}(X)\bigr) - \operatorname{tr}\bigl(\operatorname{Var}(Z\hat\beta)\bigr) \\
&= \sigma^2\Bigl[n - \operatorname{tr}\bigl(\operatorname{Var}(Z(Z'Z)^-Z'X)\bigr)/\sigma^2\Bigr] \\
&= \sigma^2\Bigl[n - \operatorname{tr}\bigl(Z(Z'Z)^-Z'Z(Z'Z)^-Z'\bigr)\Bigr] \\
&= \sigma^2\Bigl[n - \operatorname{tr}\bigl((Z'Z)^-Z'Z\bigr)\Bigr],
\end{aligned}
$$

since $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ (together with $Z'Z(Z'Z)^-Z'Z = Z'Z$). We evaluate $\operatorname{tr}\bigl((Z'Z)^-Z'Z\bigr)$ using any choice of $(Z'Z)^-$.

From the theory of linear algebra, there exists a $p \times p$ matrix $C$ such that $CC' = I_p$ and

$$C'(Z'Z)C = \begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix},$$

where $\Lambda$ is an $r \times r$ diagonal matrix whose diagonal elements are positive. Then a particular choice of $(Z'Z)^-$ is

$$(Z'Z)^- = C \begin{pmatrix} \Lambda^{-1} & 0 \\ 0 & 0 \end{pmatrix} C'$$

(check: $(Z'Z)(Z'Z)^-(Z'Z) = Z'Z$). It follows that

$$
\operatorname{tr}\bigl((Z'Z)^-Z'Z\bigr)
= \operatorname{tr}\left(C \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} C'\right)
= \operatorname{tr}\left(\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} C'C\right)
= \operatorname{tr}\left(\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} I_p\right)
= r.
$$

Hence,

$$E\hat\sigma^2 = \frac{E\|X - Z\hat\beta\|^2}{n - r} = \sigma^2,$$

and $\hat\sigma^2$ is the UMVUE of $\sigma^2$, being an unbiased function of the complete sufficient statistic.

(iii) It follows from the result on unbiased estimators in exponential families.
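Both the trace identity $\operatorname{tr}\bigl((Z'Z)^-Z'Z\bigr) = r$ and the unbiasedness of $\hat\sigma^2$ are easy to check by simulation. A minimal sketch, assuming a rank-deficient design chosen purely for illustration and the pseudoinverse as the generalized inverse:

```python
import numpy as np

rng = np.random.default_rng(2)

n, p, sigma2 = 50, 4, 2.0
# A rank-deficient design (last column duplicates the first), so r = 3 < p.
Z = rng.normal(size=(n, p))
Z[:, 3] = Z[:, 0]
r = np.linalg.matrix_rank(Z)

# One choice of generalized inverse (Z'Z)^-: the Moore-Penrose pseudoinverse.
ZtZ_ginv = np.linalg.pinv(Z.T @ Z)

# tr((Z'Z)^- Z'Z) equals the rank r of Z.
print(np.trace(ZtZ_ginv @ Z.T @ Z), r)

# Monte Carlo check that E sigma_hat^2 = sigma^2.
beta = np.array([1.0, -2.0, 0.5, 3.0])
est = []
for _ in range(2000):
    X = Z @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    beta_hat = ZtZ_ginv @ Z.T @ X          # an LSE of beta
    est.append(np.sum((X - Z @ beta_hat) ** 2) / (n - r))
print(np.mean(est))   # close to sigma2 = 2.0
```

Any other generalized inverse of $Z'Z$ would give the same fitted values $Z\hat\beta$ and hence the same $\hat\sigma^2$.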

Theorem 11.1.2. Consider model (11.1) with assumption (A1). For any estimable parameter $l'\beta$, the UMVUEs $l'\hat\beta$ and $\hat\sigma^2$ are independent. Moreover, the distribution of $l'\hat\beta$ is $N\bigl(l'\beta,\ \sigma^2\, l'(Z'Z)^- l\bigr)$, and $(n - r)\hat\sigma^2/\sigma^2 \sim \chi^2_{n-r}$.
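A simulation can illustrate the claims of Theorem 11.1.2: the mean and variance of $l'\hat\beta$, the mean $n - r$ of the $\chi^2_{n-r}$ statistic, and the near-zero correlation between $l'\hat\beta$ and $\hat\sigma^2$. The design, $\beta$, and $l$ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, sigma2 = 40, 3, 1.5
Z = rng.normal(size=(n, p))              # full-rank design, so r = p
beta = np.array([0.5, -1.0, 2.0])
l = np.array([1.0, 1.0, 0.0])            # any l gives an estimable l'beta here
r = np.linalg.matrix_rank(Z)
ZtZ_inv = np.linalg.inv(Z.T @ Z)

lb_hat, chi_stat = [], []
for _ in range(5000):
    X = Z @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    beta_hat = ZtZ_inv @ Z.T @ X
    s2 = np.sum((X - Z @ beta_hat) ** 2) / (n - r)
    lb_hat.append(l @ beta_hat)
    chi_stat.append((n - r) * s2 / sigma2)

print(np.mean(lb_hat), l @ beta)                    # mean of l'beta_hat ~ l'beta
print(np.var(lb_hat), sigma2 * l @ ZtZ_inv @ l)     # variance ~ sigma^2 l'(Z'Z)^- l
print(np.mean(chi_stat), n - r)                     # chi^2_{n-r} has mean n - r
print(np.corrcoef(lb_hat, chi_stat)[0, 1])          # ~ 0, consistent with independence
```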

Linear algebra refreshment.

(1) $\operatorname{tr}(A) = \sum_{i=1}^n a_{ii}$, and $\operatorname{tr}(AB) = \operatorname{tr}(BA)$;

(2) $P$ is called a projection matrix if $P^2 = P$;

(3) if $P$ is a symmetric projection matrix, then the only possible eigenvalues of $P$ are 0 and 1 (see the numerical check after this list);

(4) if $A$ is symmetric, then there exists an orthogonal matrix $C$ with $CC' = I$ such that $C'AC = \Lambda$, where $\Lambda$ is a diagonal matrix;

(5) if $EX = 0$, then $EX'X = \operatorname{tr}(\operatorname{Var}(X))$.
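A quick numerical check of facts (1)-(3) and (5), using the hat matrix $Z(Z'Z)^{-1}Z'$ of an arbitrary full-rank design as the projection; this is only an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10, 3
Z = rng.normal(size=(n, p))

# (2) The hat matrix Z (Z'Z)^{-1} Z' is a symmetric projection: P^2 = P.
P = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
print(np.allclose(P @ P, P))

# (3) Its eigenvalues are all 0 or 1 (here p ones and n - p zeros).
print(np.allclose(np.linalg.eigvalsh(P), np.r_[np.zeros(n - p), np.ones(p)]))

# (1) tr(AB) = tr(BA).
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))

# (5) If EX = 0 then E X'X = tr(Var(X)); here X ~ N_n(0, I_n), so both are n.
X = rng.normal(size=(100000, n))
print(np.mean(np.sum(X ** 2, axis=1)), n)
```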

Note: Let $Y = (Y_1, \ldots, Y_{n-r}) = GP_nX$, where $P_n = I_n - Z(Z'Z)^-Z'$. Under (A1), $Y$ is normal and $\operatorname{Var}(Y) = \sigma^2 I_{n-r}$, since

$$
\operatorname{Var}(Y) = EYY' = E(GP_nX)(GP_nX)' = GP_n(EXX')P_n'G' = GP_n\,\sigma^2 I_n\,P_n'G' = \sigma^2\,GP_nP_n'G' = \sigma^2\,GP_nG' = \sigma^2 I_{n-r}
$$

(the $Z\beta\beta'Z'$ part of $EXX'$ drops out because $GP_nZ = 0$, shown below).

We need to show that $EY = 0$. But $EY_j = E(G_j'P_nX) = G_j'P_nEX = G_j'P_nZ\beta$, where $G_j'$ is the $j$-th row of $G$, so it suffices to show that $G_j'P_nZ = 0$ for every $j \le n - r$, i.e., that $GP_nZ = 0$. But

$$(GP_nZ)'(GP_nZ) = Z'P_n'G'GP_nZ = Z'P_nZ = Z'\bigl(I_n - Z(Z'Z)^-Z'\bigr)Z = Z'Z - Z'Z(Z'Z)^-Z'Z = 0.$$
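The key identity used above, $Z'P_nZ = 0$ (equivalently $P_nZ = 0$) for $P_n = I_n - Z(Z'Z)^-Z'$, can be verified numerically; the rank-deficient design below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 12, 4
Z = rng.normal(size=(n, p))
Z[:, 3] = Z[:, 1] - Z[:, 2]          # rank-deficient Z, to exercise the generalized inverse

# P_n = I_n - Z (Z'Z)^- Z', with the pseudoinverse as the generalized inverse.
Pn = np.eye(n) - Z @ np.linalg.pinv(Z.T @ Z) @ Z.T

print(np.allclose(Pn @ Z, 0))        # P_n Z = 0
print(np.allclose(Z.T @ Pn @ Z, 0))  # hence Z' P_n Z = 0
print(np.allclose(Pn @ Pn, Pn))      # P_n is a (symmetric) projection
print(int(round(np.trace(Pn))), n - np.linalg.matrix_rank(Z))   # tr(P_n) = n - r
```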

Example 11.1.1. In one-way ANOVA,

$$SSR = \sum_{i=1}^{m}\sum_{j=1}^{n_i}\bigl(X_{ij} - \bar X_{i\cdot}\bigr)^2,$$

and the UMVUEs of estimable $l'\beta$ in polynomial regression and in ANOVA are the LSEs.
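A short sketch computing this $SSR$ and the corresponding $\hat\sigma^2$ for a small made-up one-way ANOVA data set (the group sizes and values are illustrative only):

```python
import numpy as np

# Made-up one-way ANOVA data: m = 3 groups with sizes n_i = 4, 3, 5.
groups = [
    np.array([4.1, 3.9, 4.4, 4.0]),
    np.array([5.2, 5.0, 5.5]),
    np.array([3.2, 3.6, 3.4, 3.1, 3.3]),
]

# SSR = sum_i sum_j (X_ij - Xbar_i.)^2
ssr = sum(np.sum((x - x.mean()) ** 2) for x in groups)

n = sum(len(x) for x in groups)
m = len(groups)                      # the rank r of the one-way ANOVA design matrix
sigma2_hat = ssr / (n - m)           # UMVUE of sigma^2
print(ssr, sigma2_hat)
```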

Theorem 11.1.3 (Gauss-Markov). Consider model (11.1) with assumption (A2).

(i) A necessary and sufficient condition for the existence of a linear unbiased estimator of $l'\beta$ (i.e., an unbiased estimator that is linear in $X$) is $l \in \operatorname{lin}(Z)$.

(ii) If $l \in \operatorname{lin}(Z)$, then the LSE $l'\hat\beta$ is the best linear unbiased estimator (BLUE) of $l'\beta$, in the sense that it has the minimum variance in the class of linear unbiased estimators of $l'\beta$ of the form $T = c'X$.

Proof. (i) The sufficiency has been established already. To show necessity, suppose that a linear function of $X$, say $c'X$ with $c \in \mathbb{R}^n$, is unbiased for $l'\beta$. Then

$$l'\beta = E(c'X) = c'EX = c'Z\beta$$

for all $\beta$; hence $l = Z'c \in \operatorname{lin}(Z)$.

(ii) Let $l \in \operatorname{lin}(Z) = \operatorname{lin}(Z'Z)$. Then $l = (Z'Z)\alpha$ for some $\alpha$, and $l'\hat\beta = \alpha'(Z'Z)\hat\beta = \alpha'Z'X$ by $Z'Z\hat\beta = Z'X$.

Let $c'X$ be any linear unbiased estimator of $l'\beta$. From the proof of (i), $Z'c = l$.
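To illustrate the BLUE property in part (ii) numerically, here is a simulation sketch comparing the variance of the LSE $l'\hat\beta$ with that of another linear unbiased estimator $c'X$ satisfying $Z'c = l$; the competing estimator is an arbitrary illustrative construction, not something from the lecture:

```python
import numpy as np

rng = np.random.default_rng(6)

n, p, sigma2 = 30, 3, 1.0
Z = rng.normal(size=(n, p))          # full-rank design, purely illustrative
beta = np.array([1.0, 2.0, -0.5])
l = np.array([1.0, -1.0, 0.0])

ZtZ_inv = np.linalg.inv(Z.T @ Z)
c_lse = Z @ ZtZ_inv @ l              # l'beta_hat = c_lse'X, the LSE written as a linear estimator

# A competing linear unbiased estimator: add to c_lse a vector orthogonal to col(Z),
# so that Z'c = l still holds and unbiasedness is preserved.
Pn = np.eye(n) - Z @ ZtZ_inv @ Z.T
c_other = c_lse + Pn @ rng.normal(size=n)
print(np.allclose(Z.T @ c_lse, l), np.allclose(Z.T @ c_other, l))

lse_vals, other_vals = [], []
for _ in range(20000):
    X = Z @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    lse_vals.append(c_lse @ X)
    other_vals.append(c_other @ X)

# The LSE has the smaller sampling variance, as (ii) asserts.
print(np.var(lse_vals), np.var(other_vals))
```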