
Point Estimation: Obtaining Sensible Guesses for Unknown Parameters

Point estimation is a statistical method for producing a sensible guess (estimate) of an unknown parameter θ from sample data. Point estimators are formulas that take sample data and produce an estimate. Different samples may yield different estimates, but the goal is to keep estimation error small. Measures of estimator quality include mean squared error (MSE), bias, and variance; among unbiased estimators, those with smaller variance are preferred. The document also covers the minimum variance unbiased estimator (MVUE) and reporting a point estimate together with its standard error.


6 Point Estimation
Stat 4570/5570
Material from Devore's book (8th ed.) and Cengage



Point Estimation

Statistical inference is directed toward drawing conclusions about one or more parameters of a population. We will use the generic Greek letter θ for the parameter of interest.

Process:

  • Obtain sample data from each population under study.
  • Based on the sample data, estimate θ.
  • Draw conclusions based on the sample estimates.

The objective of point estimation is to estimate θ.

Example

20 observations on breakdown voltage for some material:

24.46 25.61 26.25 26.42 26.66 27.15 27.31 27.54 27.74 27.
27.98 28.04 28.28 28.49 28.50 28.87 29.11 29.13 29.50 30.

Assume that, after looking at the histogram, we think that the distribution of breakdown voltage is normal with mean value μ. What are some point estimators for μ?
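
As a sketch of what candidate estimators look like in practice (this code is not from the slides), the snippet below computes three natural point estimates of μ: the sample mean, the sample median, and a trimmed mean. The data vector is an assumption in that it omits the two observations whose decimals are cut off above, leaving n = 18.

    import numpy as np

    # Breakdown-voltage observations listed on the slide (two truncated values omitted).
    x = np.array([24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54,
                  27.74, 27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11,
                  29.13, 29.50])

    k = max(1, int(0.10 * len(x)))         # drop ~10% of points from each tail
    print("sample mean  :", x.mean())      # X-bar
    print("sample median:", np.median(x))  # middle order statistic
    print("trimmed mean :", np.sort(x)[k:-k].mean())

Each of these is a reasonable estimator of μ for a symmetric distribution such as the normal; the criteria below help choose among them.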

Estimator “quality”

“Which estimator is the best?” What does “best” mean?

Measures of estimator quality

A sensible way to quantify the idea of θ̂ being close to θ is to consider the squared error (θ̂ − θ)² and the mean squared error MSE = E[(θ̂ − θ)²]. If, among two estimators, one has a smaller MSE than the other, the first estimator is usually the better one.

Another good quality is unbiasedness: E[θ̂] = θ.

Another good quality is small variance: Var[θ̂].
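
As an illustrative sketch with arbitrary parameters (none of the numbers below come from the slides), the simulation approximates the bias and MSE of two estimators of a normal population variance σ²: the sample variance with divisor n − 1 and the version with divisor n.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 0.0, 2.0, 10, 100_000
    true_var = sigma**2

    # Draw `reps` samples of size n and compute both variance estimators on each.
    samples = rng.normal(mu, sigma, size=(reps, n))
    est_unbiased = samples.var(axis=1, ddof=1)  # divisor n - 1
    est_biased = samples.var(axis=1, ddof=0)    # divisor n

    for name, est in [("divisor n-1", est_unbiased), ("divisor n", est_biased)]:
        bias = est.mean() - true_var
        mse = ((est - true_var) ** 2).mean()
        print(f"{name}: bias ~ {bias:+.3f}, MSE ~ {mse:.3f}")

For normal data the divisor-n estimator is biased yet has the smaller MSE, which is why unbiasedness, variance, and MSE are listed here as separate criteria.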

Unbiased Estimators

  • Suppose we have two measuring instruments; one instrument is accurately calibrated, and the other systematically gives readings smaller than the true value.
  • When each instrument is used repeatedly on the same object, because of measurement error, the observed measurements will not be identical.
  • The measurements produced by the first instrument will be distributed symmetrically about the true value, so it is called an unbiased instrument.
  • The second one has a systematic bias, and the measurements are centered around the wrong value.

Estimators with Minimum Variance

Suppose θ̂₁ and θ̂₂ are two estimators of θ that are both unbiased. Then, although the distribution of each estimator is centered at the true value of θ, the spreads of the distributions about the true value may be different.

Among all estimators of θ that are unbiased, we will always choose the one that has minimum variance. WHY?

The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.

Estimators with Minimum Variance

The figure below pictures the pdf's of two unbiased estimators, with θ̂₁ having smaller variance than θ̂₂. Then θ̂₁ is more likely than θ̂₂ to produce an estimate close to the true θ. The MVUE is, in a certain sense, the most likely among all unbiased estimators to produce an estimate close to the true θ.

[Figure: graphs of the pdf's of two different unbiased estimators]
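
A minimal simulation sketch of the same picture (distribution and parameters are arbitrary assumptions): both the sample mean X̄ and the single observation X₁ are unbiased estimators of μ, but Var(X̄) = σ²/n while Var(X₁) = σ², so X̄ plays the role of the tighter pdf in the figure.

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n, reps = 5.0, 3.0, 25, 50_000

    samples = rng.normal(mu, sigma, size=(reps, n))
    xbar = samples.mean(axis=1)  # unbiased, Var = sigma^2 / n
    x1 = samples[:, 0]           # also unbiased, Var = sigma^2

    # Both averages are close to mu = 5; the variances differ by a factor of ~n.
    print("X-bar: mean", xbar.mean(), " var", xbar.var())
    print("X_1  : mean", x1.mean(), " var", x1.var())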

Note that the following result shows that the arithmetic average is unbiased:

Proposition. Let X₁, X₂, …, Xₙ be a random sample from a distribution with mean μ and standard deviation σ. Then E(X̄) = μ.

Thus the arithmetic average X̄ is an unbiased estimator of the mean for any random sample of any size from any distribution.

The Mean is unbiased

  1. E(X̄) = μ
  2. V(X̄) = σ²/n and σ_X̄ = σ/√n

General methods for constructing estimators

We have:

  • a sample from a probability distribution ("the model")
  • we don't know the parameters of that distribution

How do we find the parameter values that best match our sample data?

Method 1: Method of Moments (MoM):

  1. equate sample characteristics (e.g., mean or variance) to the corresponding population values
  2. solve these equations for the unknown parameter values
  3. the solution formula is the estimator (need to check bias)

Method 2: Maximum Likelihood Estimation (MLE)

The Method of Moments

Let X₁, X₂, …, Xₙ be a random sample from a distribution with pmf or pdf f(x; θ₁, …, θₘ), where θ₁, …, θₘ are parameters whose values are unknown.

Then the moment estimators θ̂₁, …, θ̂ₘ are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ₁, …, θₘ. The kth sample moment is Mₖ = (1/n) Σᵢ Xᵢᵏ, and the kth population moment is E(Xᵏ).

If, for example, m = 2, then E(X) and E(X²) will be functions of θ₁ and θ₂. Setting E(X) = M₁ and E(X²) = M₂ gives two equations in θ₁ and θ₂. The solution then defines the estimators θ̂₁ and θ̂₂.
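
As a concrete sketch of the m = 2 case (this worked example is not from the slides): for a normal model, E(X) = μ and E(X²) = μ² + σ², so setting E(X) = M₁ and E(X²) = M₂ yields μ̂ = M₁ and σ̂² = M₂ − M₁².

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(10.0, 2.0, size=1_000)  # hypothetical sample: true mu = 10, sigma = 2

    m1 = x.mean()        # first sample moment  M1 = (1/n) * sum(X_i)
    m2 = (x**2).mean()   # second sample moment M2 = (1/n) * sum(X_i^2)

    mu_hat = m1           # solves E(X) = M1
    var_hat = m2 - m1**2  # solves E(X^2) = M2
    print("MoM estimates: mu_hat =", mu_hat, " sigma2_hat =", var_hat)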

Example for MoM

Let X₁, X₂, …, Xₙ represent a random sample of the service times of n customers at a certain facility, where the underlying distribution is assumed exponential with parameter λ. What is the MoM estimate for λ?
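
For the exponential distribution E(X) = 1/λ, so equating the first population moment to the first sample moment gives 1/λ = X̄ and hence λ̂ = 1/X̄. A minimal simulation sketch with a hypothetical true rate:

    import numpy as np

    rng = np.random.default_rng(3)
    lam_true = 0.5                                     # hypothetical service rate
    x = rng.exponential(scale=1 / lam_true, size=500)  # simulated service times

    lam_hat = 1 / x.mean()  # MoM: solve E(X) = 1/lambda = X-bar
    print("MoM estimate of lambda:", lam_hat)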

MLE

Method 2: Maximum likelihood estimation (MLE). The method of maximum likelihood was first introduced by R. A. Fisher, a geneticist and statistician, in the 1920s. Most statisticians recommend this method, at least when the sample size is large, since the resulting estimators have many desirable mathematical properties.

Example for MLE

A sample of ten independent bike helmets, just made in factory A, was put up for testing; 3 helmets turned out to be flawed. Let p = P(flawed helmet). The probability of observing X = 3 flawed helmets is

P(X = 3) = C(10, 3) p³(1 − p)⁷

but the likelihood function is given as

L(p | sample data) = p³(1 − p)⁷

The likelihood function is a function of the parameter only. For what value of p is the obtained sample most likely to have occurred, i.e., what value of p maximizes the likelihood?
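
Setting the derivative of the log-likelihood log L(p) = 3 log p + 7 log(1 − p) to zero gives p̂ = 3/10, the sample proportion of flawed helmets. The sketch below confirms this numerically with a simple grid search (the grid itself is an arbitrary choice):

    import numpy as np

    # Likelihood of observing 3 flawed helmets out of 10, as a function of p.
    def likelihood(p):
        return p**3 * (1 - p)**7

    p_grid = np.linspace(0.001, 0.999, 9_999)
    p_hat = p_grid[np.argmax(likelihood(p_grid))]
    print("MLE of p:", p_hat)  # ~0.3, i.e. 3/10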