Handout 6 (Chapter 6) : Point Estimation: Unbiased Estimator: A Point Estimator
A point estimate of a parameter θ is a single number that can be regarded as the most
plausible value of θ. Writing θ̂ for the estimator, θ̂ = θ + error of estimation.

Unbiased Estimator: A point estimator θ̂ is an unbiased estimator of θ if
E(θ̂) = θ for every possible value of θ. Otherwise it is biased, and Bias = E(θ̂) − θ.
Read Example 6.2 in your textbook.
Example 1: When X is a binomial r.v. with parameters n and p, the sample proportion X/n is an unbiased
estimator of p.
To prove this, you need to show E(X/n) = p, where p̂ = X/n:
E(X/n) = E(X)/n        using the rules of expected value
       = np/n = p      since X ~ Binomial(n, p) implies E(X) = np (Chapter 3)
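A quick simulation (not part of the handout) can illustrate this unbiasedness; n, p, and the replication count below are arbitrary illustrative choices:

```python
import random

# Monte Carlo check that the sample proportion X/n is unbiased for p.
# n, p, and reps are illustrative choices, not values from the text.
random.seed(1)
n, p, reps = 20, 0.3, 20000

estimates = []
for _ in range(reps):
    x = sum(1 for _ in range(n) if random.random() < p)  # X ~ Binomial(n, p)
    estimates.append(x / n)                              # sample proportion X/n

avg = sum(estimates) / reps
print(f"average of X/n over {reps} replications: {avg:.4f} (true p = {p})")
```

The average of the 20,000 sample proportions lands very close to the true p, as unbiasedness predicts.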
Example 2: A sample of 15 students who had taken calculus class yielded the following information on
brand of calculator owned: T H C T H H C T T C C H S S S (T: Texas Instruments, H: Hewlett Packard,
C=Casio, S=Sharp).
(a) Estimate the true proportion of all such students who own a Texas Instruments calculator.
Answer=0.2667
(b) Only Hewlett Packard calculators use reverse Polish logic, and three out of four HP
calculators do. Estimate the true proportion of all such students who own a calculator
that does not use reverse Polish logic.
Answer=0.80
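Both answers can be recomputed directly from the listed sample; a minimal sketch:

```python
# Recomputing the estimates in Example 2 from the sample of 15 calculators.
data = list("THCTHHCTTCCHSSS")  # T: Texas Instruments, H: HP, C: Casio, S: Sharp
n = len(data)

p_ti = data.count("T") / n          # (a) proportion owning a TI calculator
# (b) only HP calculators use reverse Polish logic, and 3/4 of HP models do,
# so the proportion NOT using it is 1 - (3/4) * (proportion owning HP)
p_no_rpn = 1 - 0.75 * data.count("H") / n

print(round(p_ti, 4), round(p_no_rpn, 4))  # 0.2667 0.8
```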
Example 3 (Exercise 6.8): In a random sample of 80 components of a certain type, 12 are found to be
defective.
(a) Give a point estimate of the proportion of all such components that are not defective.
Answer=0.85
(b) Suppose 5 of these components are randomly selected and connected in series in a system.
Estimate the proportion of all such systems that work properly.
Answer=0.4437
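The arithmetic behind both answers, as a short sketch:

```python
# Example 3 computations: point estimates from 12 defectives in 80 components.
n, defective = 80, 12

p_good = 1 - defective / n   # (a) estimated proportion not defective
p_system = p_good ** 5       # (b) a 5-component series system works only if
                             #     all 5 components work (independence assumed)

print(round(p_good, 2), round(p_system, 4))  # 0.85 0.4437
```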
Example 4: Let S1² and S2² be the sample variances from two independent random samples of sizes
n1 and n2, drawn from populations with a common variance σ². The pooled estimator

σ̂² = [(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2)
   = (n1 − 1)/(n1 + n2 − 2) · S1² + (n2 − 1)/(n1 + n2 − 2) · S2²

is an unbiased estimator of σ².
STAT 211 2
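A quick simulation check (not in the handout) that the pooled variance estimator ((n1 − 1)S1² + (n2 − 1)S2²)/(n1 + n2 − 2) is unbiased for the common variance σ²; the sample sizes, σ, and replication count are illustrative choices:

```python
import random, statistics

# Simulation check that the pooled variance
# ((n1-1)*S1^2 + (n2-1)*S2^2) / (n1 + n2 - 2) is unbiased for sigma^2.
# n1, n2, sigma, and reps are illustrative choices, not from the handout.
random.seed(2)
n1, n2, sigma, reps = 5, 7, 2.0, 4000

pooled = []
for _ in range(reps):
    s1 = statistics.variance([random.gauss(0, sigma) for _ in range(n1)])
    s2 = statistics.variance([random.gauss(0, sigma) for _ in range(n2)])
    pooled.append(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))

print(f"mean pooled variance: {sum(pooled)/reps:.3f} (true sigma^2 = {sigma**2})")
```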
Example 5 (Exercise 6.13): Let X1, X2, …, Xn be a random sample from the pdf f(x) = 0.5(1 + θx),
−1 ≤ x ≤ 1, −1 ≤ θ ≤ 1. Show that θ̂ = 3X̄ is an unbiased estimator for θ.

This means you need to show E(θ̂) = θ:

E(θ̂) = E(3X̄) = 3E(X̄) = 3E(X) = θ,   using E(X̄) = E(X) from Chapter 5, where

E(X) = ∫₋₁¹ x · 0.5(1 + θx) dx = 0.5 [x²/2 + θx³/3]₋₁¹
     = 0.5 [(1/2 + θ/3) − (1/2 − θ/3)] = 0.5 (2θ/3) = θ/3.
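The integral above can be verified numerically; the following sketch (not from the handout) approximates E(X) with a midpoint rule for a few values of θ and confirms that 3E(X) recovers θ:

```python
# Numeric check that E(X) = theta/3 under f(x) = 0.5*(1 + theta*x) on [-1, 1],
# so that E(3*Xbar) = 3*E(X) = theta.
def mean_of_X(theta, steps=200000):
    # midpoint rule for the integral of x * 0.5*(1 + theta*x) over [-1, 1]
    h = 2.0 / steps
    total = 0.0
    for k in range(steps):
        x = -1.0 + (k + 0.5) * h
        total += x * 0.5 * (1 + theta * x) * h
    return total

for theta in (-0.9, 0.0, 0.5):
    print(theta, round(3 * mean_of_X(theta), 4))  # 3*E(X) recovers theta
```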
The standard error: The standard error of an estimator θ̂ is its standard deviation σ_θ̂.

The estimated standard error: The estimated standard error of an estimator θ̂ is its estimated standard
deviation σ̂_θ̂ = s_θ̂.

The minimum variance unbiased estimator (MVUE): The best point estimate. Among all estimators
of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is the MVUE.
Example 6: If we go back to Example 1, the standard error of p̂ is

σ_p̂ = √Var(p̂) = √(p(1 − p)/n),  where
Var(p̂) = Var(X/n) = Var(X)/n² = np(1 − p)/n² = p(1 − p)/n  and  Var(X) = np(1 − p).
Example 7: If we go back to Example 5, the standard error of θ̂ = 3X̄ is

σ_θ̂ = √Var(θ̂) = √((3 − θ²)/n),  where
Var(θ̂) = Var(3X̄) = 9 Var(X̄) = 9 Var(X)/n = 9(1/3 − θ²/9)/n = (3 − θ²)/n,

using Var(X̄) = Var(X)/n from Chapter 5,
Var(X) = E(X²) − [E(X)]² = 1/3 − (θ/3)² = 1/3 − θ²/9,  and

E(X²) = ∫₋₁¹ x² · 0.5(1 + θx) dx = 0.5 [x³/3 + θx⁴/4]₋₁¹
      = 0.5 [(1/3 + θ/4) − (−1/3 + θ/4)] = 1/3.
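These moment calculations can be checked numerically; the sketch below (not in the handout) integrates against f(x) = 0.5(1 + θx) with a midpoint rule, using an arbitrary θ = 0.6:

```python
# Numeric check of the moments used in Example 7: under
# f(x) = 0.5*(1 + theta*x) on [-1, 1], E(X^2) = 1/3 and
# Var(X) = 1/3 - theta^2/9, so Var(3*Xbar) = (3 - theta^2)/n.
def moment(g, theta, steps=200000):
    # midpoint rule for the integral of g(x) * 0.5*(1 + theta*x) over [-1, 1]
    h = 2.0 / steps
    total = 0.0
    for k in range(steps):
        x = -1.0 + (k + 0.5) * h
        total += g(x) * 0.5 * (1 + theta * x) * h
    return total

theta = 0.6  # any value in [-1, 1] works
ex2 = moment(lambda x: x * x, theta)
var_x = ex2 - moment(lambda x: x, theta) ** 2
print(round(ex2, 4), round(var_x, 4))  # E(X^2) = 1/3, Var(X) = 1/3 - theta^2/9
```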
Example 8: For the normal distribution, x̄ is the MVUE for μ.
The following graphs are generated by creating 500 samples with size 5 from N(0,1) and calculating the
sample mean and the sample median for each sample.
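The experiment described above can be replicated with a short script (a sketch using Python's random and statistics modules; the seed is an arbitrary choice):

```python
import random, statistics

# Replicating the handout's experiment: 500 samples of size 5 from N(0,1);
# compare the spread of the sample mean with that of the sample median.
random.seed(3)
means, medians = [], []
for _ in range(500):
    sample = [random.gauss(0, 1) for _ in range(5)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

print(f"variance of sample means:   {statistics.variance(means):.3f}")
print(f"variance of sample medians: {statistics.variance(medians):.3f}")
# the sample mean shows the smaller variance, consistent with the MVUE claim
```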
Example 9 (Exercise 6.3): Normally distributed data yield the following summary statistics.
Variable n Mean Median TrMean StDev SE Mean
thickness 16 1.3481 1.3950 1.3507 0.3385 0.0846
(c) Give a point estimate of the value that separates the largest 10% of all values in the coating thickness
distribution from the remaining 90%.
Answer = x̄ + 1.28s (the 90th percentile of the fitted normal) = 1.3481 + 1.28(0.3385) = 1.78138
(d) Estimate P(X < 1.5), the proportion of all thickness values less than 1.5.
Answer = Φ((1.5 − 1.3481)/0.3385) = Φ(0.45) = 0.6736
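Both answers can be reproduced with Python's statistics.NormalDist. Note that NormalDist uses exact z-values, so the results differ slightly from the handout's table-based answers:

```python
from statistics import NormalDist

# Example 9 computations, treating thickness as normal with the summary
# statistics as plug-in estimates of the mean and standard deviation.
thickness = NormalDist(mu=1.3481, sigma=0.3385)

q90 = thickness.inv_cdf(0.90)   # (c) cutoff for the largest 10% of thicknesses
p_less = thickness.cdf(1.5)     # (d) estimated P(X < 1.5)

print(round(q90, 4), round(p_less, 4))
```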
[Figure: boxplot and normal probability plot of thickness. ML estimates: mean = 1.34812,
StDev = 0.327781; goodness of fit: AD* = 1.074.]
Example 11: Find the MME for the parameters α and β in the gamma distribution.
There are two unknown parameters.
The 1st population moment of the distribution is E(X) = αβ.
The 1st sample moment is x̄.
Then αβ = x̄, but this alone does not determine either unknown parameter. We need to
continue the steps.
The 2nd population moment of the distribution is E(X²) = β²α(1 + α).
The 2nd sample moment is (1/n) Σᵢ₌₁ⁿ xᵢ².
Then β²α(1 + α) = (1/n) Σᵢ₌₁ⁿ xᵢ².
Since we have 2 unknown parameters and two equations, we can solve for the unknown
parameters. Subtracting the square of the first equation from the second gives
αβ² = (1/n) Σ xᵢ² − x̄² = (1/n) Σ (xᵢ − x̄)², so the MME for α and β are

α̃ = x̄² / [(1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)²]   and   β̃ = [(1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)²] / x̄,  respectively.
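The estimators can be exercised on simulated data; the sketch below (not in the handout) draws a gamma sample with Python's random.gammavariate and applies the moment formulas α̃ = x̄²/m and β̃ = m/x̄, where m = (1/n)Σ(xᵢ − x̄)²:

```python
import random

# Method-of-moments estimates for the gamma parameters on simulated data.
# The true parameter values and the sample size are illustrative choices.
random.seed(4)
alpha_true, beta_true = 2.0, 3.0
data = [random.gammavariate(alpha_true, beta_true) for _ in range(2000)]

n = len(data)
xbar = sum(data) / n
m = sum((x - xbar) ** 2 for x in data) / n   # (1/n) * sum of squared deviations

alpha_hat = xbar ** 2 / m
beta_hat = m / xbar
print(f"alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")
# note alpha_hat * beta_hat = xbar exactly, matching the first moment equation
```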
Example 12: Find the MME for the parameters μ and σ² in the normal distribution.
There are two unknown parameters.
The 1st population moment of the distribution is E(X) = μ.
The 1st sample moment is x̄.
Then μ̃ = x̄, but we still need to solve for the second unknown parameter. We need to
continue the steps.
The 2nd population moment is E(X²) = σ² + μ², and the 2nd sample moment is (1/n) Σᵢ₌₁ⁿ xᵢ².
Setting σ² + μ² = (1/n) Σ xᵢ² and substituting μ̃ = x̄ gives σ̃² = (1/n) Σ xᵢ² − x̄² = (1/n) Σ (xᵢ − x̄)².
For the normal distribution, maximizing the log likelihood ln(L) gives the MLE's:

∂ ln(L)/∂μ = (1/σ²) Σᵢ₌₁ⁿ (xᵢ − μ) = 0,  then  μ̂ = x̄;
∂ ln(L)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σᵢ₌₁ⁿ (xᵢ − μ)² = 0,  then the MLE of σ² is
σ̂² = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)².
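A small numerical check (not in the handout) that x̄ and (1/n)Σ(xᵢ − x̄)² jointly maximize the normal log likelihood, using a made-up six-point sample:

```python
import math

# Check that mu_hat = xbar and sigma2_hat = (1/n)*sum((xi - xbar)^2)
# maximize the normal log likelihood. The data values are made up.
data = [1.2, 0.7, 2.1, 1.5, 0.9, 1.8]
n = len(data)
xbar = sum(data) / n
s2_mle = sum((x - xbar) ** 2 for x in data) / n

def loglik(mu, s2):
    return -0.5 * n * math.log(2 * math.pi * s2) \
           - sum((x - mu) ** 2 for x in data) / (2 * s2)

best = loglik(xbar, s2_mle)
# the MLE should beat every nearby (mu, sigma^2) pair on a small grid
worse = max(loglik(xbar + dm, s2_mle + ds)
            for dm in (-0.1, 0.1) for ds in (-0.05, 0.05))
print(best > worse)  # True
```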
The Invariance Principle: Let θ̂₁, θ̂₂, …, θ̂ₘ be the MLE's of the parameters θ₁, θ₂, …, θₘ. Then the
MLE of any function h(θ₁, θ₂, …, θₘ) of these parameters is the function h(θ̂₁, θ̂₂, …, θ̂ₘ) of the
MLE's.
Example 14:
(1) Let X1, …, Xn be a random sample of normally distributed random variables with mean μ and
standard deviation σ.
The method of moments estimates and the maximum likelihood estimates of μ and σ² are x̄ and
(1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)², respectively.
(3) Let X1, …, Xn be a random sample of binomially distributed random variables with parameter p.
The method of moments estimate and the maximum likelihood estimate of p are both X/n.
(4) Let X1, …, Xn be a random sample of Poisson distributed random variables with parameter λ.
The method of moments estimate and the maximum likelihood estimate of λ are both x̄.
Are all the estimates above unbiased? Some are, but others are not; for instance, (1/n) Σ (xᵢ − x̄)² is
a biased estimator of σ². (This will be discussed in class.)
(c) What is the maximum likelihood estimate of (1 − p)⁵, the probability that none of the next five
helmets examined is flawed?
(d) Instead of selecting 20 helmets to examine, suppose helmets are examined in succession until 3
flawed ones are found. How would X and the estimator of p differ?
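For part (c), the invariance principle says the MLE of (1 − p)⁵ is (1 − p̂)⁵. The observed count of flawed helmets is not given in this excerpt, so x = 3 out of n = 20 below is a hypothetical value used for illustration only:

```python
# Illustration of the invariance principle for part (c).
# x = 3 flawed out of n = 20 is a HYPOTHETICAL count for illustration;
# the actual observed count is not given in this excerpt.
n, x = 20, 3

p_hat = x / n                 # MLE of p for a binomial sample
est = (1 - p_hat) ** 5        # by invariance, the MLE of (1-p)^5 is (1-p_hat)^5

print(round(p_hat, 2), round(est, 4))  # 0.15 0.4437
```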
(a) Obtain the MME of θ and compute the estimate using the data.

E(X) = ∫₀¹ x (θ + 1) x^θ dx = (θ + 1) [x^(θ+2)/(θ + 2)]₀¹ = (θ + 1)/(θ + 2)

Set E(X) = x̄ and then solve for θ:  θ̃ = (2x̄ − 1)/(1 − x̄).

The given data yield x̄ = 0.80; then the method of moments estimate for θ is
θ̃ = (2(0.8) − 1)/(1 − 0.8) = 3.
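As a quick check of the arithmetic for the moment formula θ̃ = (2x̄ − 1)/(1 − x̄):

```python
# Part (a) computation: the MME of theta for f(x; theta) = (theta + 1) * x**theta
# on [0, 1] solves (theta + 1)/(theta + 2) = xbar,
# giving theta = (2*xbar - 1)/(1 - xbar).
xbar = 0.80
theta_mme = (2 * xbar - 1) / (1 - xbar)
print(round(theta_mme, 4))  # 3.0
```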
(b) Obtain the MLE of θ and compute the estimate using the data.

L = likelihood = ∏ᵢ₌₁ⁿ f(xᵢ) = (θ + 1)ⁿ ∏ᵢ₌₁ⁿ xᵢ^θ

ln(L) = n ln(θ + 1) + θ Σᵢ₌₁ⁿ ln(xᵢ)

d ln(L)/dθ = n/(θ + 1) + Σᵢ₌₁ⁿ ln(xᵢ) = 0; then solve for θ:
θ̂ = −[n + Σ ln(xᵢ)] / Σ ln(xᵢ)

The given data yield Σ ln(xᵢ) = −2.4295; then the maximum likelihood estimate for θ is
θ̂ = −(10 + (−2.4295)) / (−2.4295) = 3.1161.
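The arithmetic in the last step can be reproduced directly; S below stands for Σ ln(xᵢ) = −2.4295 with n = 10:

```python
# Part (b) computation: the MLE solves n/(theta + 1) + sum(ln(xi)) = 0,
# so theta_hat = -(n + S)/S where S = sum(ln(xi)).
n, S = 10, -2.4295
theta_mle = -(n + S) / S
print(round(theta_mle, 4))  # 3.1161
```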
Proposition: Under very general conditions on the joint distribution of the sample, when the sample size
is large, the MLE of any parameter θ is approximately unbiased (E(θ̂) ≈ θ) and has a variance that is
nearly as small as can be achieved by any estimator.