Economics 215, 2015 Allin Cottrell
OLS cheat sheet
Here are some basics that you should know about Ordinary Least Squares. Note that several of the points that
are simply asserted here are proved and/or explained more fully in the notes titled “Regression Basics in Matrix
Terms”. Key assumptions are marked as, for example, “[A1]”.

1. The linear multiple regression model can be written compactly in vector–matrix form as

   y = Xβ + u    (1)

where the dependent variable y is n × 1; the regressor matrix X is n × k; the parameter vector β is k × 1; and the error term u is n × 1.
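
As a concrete illustration, here is a minimal NumPy sketch that simulates data from model (1); the sample size, number of regressors, and parameter values are arbitrary example choices, not anything specified in the notes:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 100, 3                        # example sample size and number of regressors
    X = np.column_stack([np.ones(n),     # first column: intercept
                         rng.normal(size=(n, k - 1))])
    beta = np.array([1.0, 2.0, -0.5])    # example "true" parameter vector (k x 1)
    u = rng.normal(size=n)               # error term, n x 1
    y = X @ beta + u                     # the model: y = X beta + u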

2. The OLS estimator of β, which we write as β̂, is given by

   β̂ = (X′X)⁻¹X′y    (2)

This exists provided that X′X is non-singular, which requires that the X matrix is of full column rank (no exact collinearity among the columns of X, [A1]).

Assuming β̂ exists, two useful additional vectors may be formed: fitted values, ŷ = Xβ̂, and residuals, û = y − ŷ = y − Xβ̂.
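
Continuing the sketch above, equation (2) and the fitted values and residuals translate directly into code (np.linalg.solve is used instead of an explicit inverse; this is algebraically equivalent but numerically preferable):

    # OLS estimator: beta_hat solves (X'X) beta_hat = X'y, i.e. beta_hat = (X'X)^(-1) X'y
    XtX = X.T @ X
    beta_hat = np.linalg.solve(XtX, X.T @ y)

    y_hat = X @ beta_hat    # fitted values
    u_hat = y - y_hat       # residuals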

3. If the data-generating process conforms to (1) [A2] then the expectation of β̂ is given by

   E(β̂) = β + E[(X′X)⁻¹X′u]    (3)

On condition [A3] that E(u|X) = 0 the second term above disappears and we have E(β̂) = β, or in other words the OLS estimator is unbiased.
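
One way to illustrate the unbiasedness claim, still within the sketch above, is a small Monte Carlo exercise: redraw the error term many times with X held fixed and average the resulting estimates.

    # Monte Carlo check of E(beta_hat) = beta under E(u|X) = 0
    reps = 5000
    estimates = np.empty((reps, k))
    for r in range(reps):
        u_r = rng.normal(size=n)          # fresh errors each replication
        estimates[r] = np.linalg.solve(XtX, X.T @ (X @ beta + u_r))

    print(estimates.mean(axis=0))         # should be close to beta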

4. The variance of β̂ (a k × k matrix) is, from first principles,

   Var(β̂) = E[(β̂ − E(β̂))(β̂ − E(β̂))′]

If the condition for the estimator to be unbiased is met, then

   Var(β̂) = (X′X)⁻¹X′E(uu′)X(X′X)⁻¹    (4)

If the error term has a constant variance, σ²ᵤ [A4], and the drawings from the error distribution are independent, such that E(uᵢuⱼ) = 0 for all i ≠ j [A5], then E(uu′) = σ²ᵤIₙ and the OLS variance simplifies to the “classical” formula,

   Var(β̂) = σ²ᵤ(X′X)⁻¹    (5)

which can be estimated by using

   s²ᵤ = ∑ᵢ₌₁ⁿ ûᵢ² / (n − k)

in place of the unknown σ²ᵤ.
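
A sketch of the classical formula (5), with s²ᵤ in place of the unknown σ²ᵤ, continuing the example above:

    # Classical covariance matrix and standard errors
    s2_u = (u_hat @ u_hat) / (n - k)              # s^2_u = sum of squared residuals / (n - k)
    var_beta_hat = s2_u * np.linalg.inv(XtX)      # s^2_u (X'X)^(-1), a k x k matrix
    std_errors = np.sqrt(np.diag(var_beta_hat))   # square roots of the diagonal elements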

5. Note the role of the various assumptions: [A1] is required for β̂ to exist; in addition, [A2] and [A3] are required for OLS to be unbiased; and in addition [A4] and [A5] are needed for “classical” standard errors to be valid. (The standard errors routinely reported alongside OLS estimates are just the square roots of the diagonal elements of Var(β̂)).
