Point estimation is a statistical method for producing a sensible guess (an estimate) of an unknown parameter θ from sample data. Point estimators are formulas that take sample data and produce an estimate; different samples may yield different estimates, but the goal is to keep estimation error small. Measures of estimator quality include mean squared error (MSE), variance, and bias. Among unbiased estimators, those with smaller variance (hence smaller MSE) are preferred. These notes also cover the minimum variance unbiased estimator (MVUE) and reporting a point estimate together with its standard error.
Stat 4570. Material from Devore's book (Ed 8) and Cengage.
Statistical inference: directed toward drawing conclusions about one or more population parameters. We will use the generic Greek letter θ to denote the parameter of interest.

Process: collect sample data, choose a statistic (the point estimator) whose value can be computed from the data, and report that computed value as the point estimate of θ.
Example: 20 observations on breakdown voltage for some material:

24.46 25.61 26.25 26.42 26.66 27.15 27.31 27.54 27.74 27.94
27.98 28.04 28.28 28.49 28.50 28.87 29.11 29.13 29.50 30.88

Assume that, after looking at the histogram, we think that the distribution of breakdown voltage is normal with mean value μ. A sensible point estimate of μ could be the sample mean, the sample median, or some other measure of center.
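As a quick illustration (a minimal Python sketch, not part of the original notes), two candidate point estimates of μ can be computed directly from these data:

```python
import numpy as np

# The 20 breakdown-voltage observations listed above.
x = np.array([24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54,
              27.74, 27.94, 27.98, 28.04, 28.28, 28.49, 28.50, 28.87,
              29.11, 29.13, 29.50, 30.88])

print("sample mean  :", x.mean())      # 27.793
print("sample median:", np.median(x))  # 27.96
```

The two estimates disagree, which raises the question addressed next.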
“Which estimator is the best?” What does “best” mean?
One way to quantify accuracy is to consider the squared error (θ̂ − θ)² and the mean squared error MSE = E[(θ̂ − θ)²]. If, of two estimators, one has a smaller MSE than the other, the first estimator is usually the better one.

Another good quality is small variance, Var(θ̂) = E[(θ̂ − E[θ̂])²], along with small bias, E[θ̂] − θ; an estimator whose bias is zero, i.e., E[θ̂] = θ, is called unbiased. The two qualities are tied together by the decomposition MSE = Var(θ̂) + (bias)².
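To make this concrete, here is a minimal simulation sketch (the parameter values and sample size are assumed for the demo, not taken from the notes) that approximates the MSE of two estimators of a normal mean, the sample mean and the sample median, by averaging squared errors over many replications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed demo values: normal population with mean theta, sd sigma;
# draw `reps` samples of size n and evaluate both estimators on each.
theta, sigma, n, reps = 27.8, 1.0, 20, 10_000

samples = rng.normal(theta, sigma, size=(reps, n))
means = samples.mean(axis=1)           # estimator 1: sample mean
medians = np.median(samples, axis=1)   # estimator 2: sample median

# MSE = E[(theta_hat - theta)^2], approximated by the average over replications
print("MSE(sample mean)   ~", np.mean((means - theta) ** 2))    # about sigma^2/n = 0.05
print("MSE(sample median) ~", np.mean((medians - theta) ** 2))  # larger, roughly 1.57*sigma^2/n
```

For normal data both estimators are unbiased, but the sample mean has the smaller MSE, which is why it is usually preferred in this setting.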
Suppose we restrict attention to estimators that are unbiased. Then, although the distribution of each estimator is centered at the true value of θ, the spreads of the distributions about the true value may be different. Principle: among all unbiased estimators, choose the one that has minimum variance. WHY? The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.

The figure below pictures the pdf's of two unbiased estimators θ̂1 and θ̂2, with θ̂1 having smaller variance than θ̂2. Then θ̂1 is more likely than θ̂2 to produce an estimate close to the true θ. The MVUE is, in this sense, the estimator most likely among all unbiased estimators to produce an estimate close to the true θ.
Graphs of the pdf’s of two different unbiased estimators
Note that the following result shows that the arithmetic average is unbiased. Proposition: Let X1, X2, …, Xn be a random sample from a distribution with mean μ and standard deviation σ. Then E(X̄) = μ and Var(X̄) = σ²/n, so the standard error of X̄ is σ/√n. Thus we see that the arithmetic average X̄ is an unbiased estimator of the mean for a random sample of any size from any distribution.
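A short simulation sketch (the values of μ, n, and the number of replications are assumed for the demo) illustrates the proposition: even for a strongly skewed exponential population, the average of X̄ over many samples settles near μ, and its variance near σ²/n.

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential population with mean mu (so sigma = mu as well);
# small samples of size n, repeated many times.
mu, n, reps = 2.0, 5, 100_000
xbars = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)

print("average of x-bar :", xbars.mean())  # close to mu = 2.0 (unbiasedness)
print("variance of x-bar:", xbars.var())   # close to sigma^2/n = mu^2/n = 0.8
```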
General methods for constructing estimators. We have two: the method of moments (MOM) and the method of maximum likelihood (MLE).

Method 1: Method of moments (MOM)

Let X1, X2, …, Xn be a random sample from a distribution with pmf or pdf f(x; θ1, …, θm), where θ1, …, θm are parameters whose values are unknown. The moment estimators θ̂1, θ̂2, …, θ̂m are obtained by equating the first m sample moments, Mk = (1/n) Σ Xi^k for k = 1, …, m, to the corresponding population moments E(X^k), and solving for θ1, …, θm.

If, for example, m = 2, then E(X) and E(X²) will be functions of θ1 and θ2. Setting E(X) = M1 and E(X²) = M2 gives two equations in the two unknowns θ1 and θ2. The solution then defines the estimators θ̂1 and θ̂2.
Example: Let X1, X2, …, Xn represent a random sample of service times of n customers at a certain facility, where the underlying distribution is assumed exponential with parameter λ. What is the MOM estimator for λ? Since E(X) = 1/λ for an exponential distribution, equating the first population moment to the first sample moment gives 1/λ = X̄, so the moment estimator is λ̂ = 1/X̄.
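A brief sketch of this estimator on simulated service times (the true rate and sample size are assumed demo values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed demo values: true rate lam_true, so mean service time 1/lam_true.
lam_true, n = 0.5, 200
times = rng.exponential(scale=1 / lam_true, size=n)  # numpy parameterizes by the mean

lam_hat = 1 / times.mean()                 # from E(X) = 1/lambda and M1 = x-bar
print("MOM estimate of lambda:", lam_hat)  # close to 0.5 for large n
```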
Method 2: Maximum likelihood estimation (MLE)

The method of maximum likelihood was first introduced by R. A. Fisher, a geneticist and statistician, in the 1920s. Most statisticians recommend this method, at least when the sample size is large, since the resulting estimators have many desirable mathematical properties.
Example: A sample of ten independent bike helmets just made in factory A was put up for testing, and 3 helmets were found to be flawed. Let p = P(flawed helmet). The probability of observing X = 3 flawed helmets is P(X = 3) = C(10, 3) p³(1 − p)⁷. But the likelihood function of the observed sequence of flawed and non-flawed helmets is L(p | sample data) = p³(1 − p)⁷; the likelihood function is a function of the parameter only. For what value of p is the obtained sample most likely to have occurred, i.e., what value of p maximizes the likelihood? Maximizing the log-likelihood ln L(p) = 3 ln p + 7 ln(1 − p) by setting its derivative 3/p − 7/(1 − p) equal to zero gives p̂ = 3/10, the observed proportion of flawed helmets.
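A quick numerical check of this maximization (a grid search over p, chosen here only so the sketch stays self-contained) recovers the same answer:

```python
import numpy as np

# Log-likelihood for the helmet example: ln L(p) = 3 ln p + 7 ln(1 - p).
# Evaluate it on a fine grid of p values strictly inside (0, 1).
p = np.linspace(0.001, 0.999, 9999)
log_L = 3 * np.log(p) + 7 * np.log(1 - p)  # same maximizer as L(p)

p_hat = p[np.argmax(log_L)]
print("MLE of p:", round(p_hat, 3))  # 0.3, i.e. 3/10, the sample proportion
```

Working with ln L rather than L itself is standard practice: the logarithm is monotone, so both have the same maximizer, and sums are easier to differentiate and more numerically stable than products.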