STA 303 Lec 1


STA 303 Theory of Estimation

September 28, 2021

1 Lecture 1
Lecture Hours: 3

Purpose of the Course
The purpose of this course is to introduce students to the theory of inferential statistics.

Expected Learning Outcomes
At the end of the course, the students should be able to:
1. Estimate a parameter using an estimator.
2. Differentiate a parameter from an estimator.
3. Determine the best estimator.
4. Differentiate a Bayesian estimator, a point estimator and an interval
estimator.
5. Determine and differentiate MVUE and UMVUE together with their
properties.
6. Determine the Cramer-Rao inequality and the Fisher information.

Course Content
Population parameters and sample statistics. Point estimation: method of moments; method of least squares; method of maximum likelihood and Bayesian estimation. Properties of point estimators: unbiasedness and consistency. Sufficiency and completeness: sufficient and jointly sufficient statistics; complete and jointly complete statistics. Minimum variance unbiased estimation: uniformly minimum variance unbiased estimators; Rao-Blackwell Theorem; Lehmann-Scheffe Theorem. Cramer-Rao inequality; Fisher information; efficient estimators. Interval Estimation: pivot quantity method of determining confidence intervals; shortest confidence intervals.

Mode of Delivery
• Lectures
• Tutorials
• Directed reading
• Group discussion
Instructional Material and/or Equipment
Textbooks, whiteboards, whiteboard markers, handouts, computer-based tools, LCD and overhead projectors.

Course Assessment
Type                          Weighting
Continuous Assessment         30 %
End of Semester examination   70 %
Total                         100 %

Recommended References
1. Mood, A.M., Graybill, F.A. and Boes, D.C., Introduction to the Theory of Statistics.
2. Hogg, R.V. and Craig, A.T., Introduction to Mathematical Statistics.
3. Jay L. Devore (2004), Probability and Statistics for Engineering and the Sciences, Brooks/Cole Publishing, Belmont, USA. ISBN 0-534-39933-9.
4. Mario F. Triola (2001), Elementary Statistics, Addison-Wesley Publishing Company. ISBN 0-201-61477-4.
5. Morris H. DeGroot (1989), Probability and Statistics, Addison-Wesley Publishing Company, Reading, USA. ISBN 0-201-11366-X.

Other References
1. Grinstead C.M. & Snell J.L. (1997). Introduction to Probability. American Mathematical Society.
2. Hacking I. (2011). An Introduction to Probability and Inductive Logic. Cambridge University Press.

Relevant Website;

2 Lecture One


2.1 Introduction
2.1.1 Estimation of Parameters
Let X ∼ N(µ, σ²), where µ and σ² are the population parameters.
µ = E(X) = ∫_{−∞}^{∞} x · f(x) dx

σ² = Var(X) = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx

Therefore, taking a random sample from the above distribution, we are then interested in estimating the population parameters (µ and σ²). These can be estimated from the sample observations.
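As a quick illustration, here is a minimal Python/NumPy sketch (the values µ = 5, σ = 2, the sample size and the seed are illustrative choices, not from the notes) of estimating µ and σ² from sample observations:

```python
import numpy as np

mu, sigma = 5.0, 2.0                        # illustrative population parameters of N(mu, sigma^2)
rng = np.random.default_rng(0)
x = rng.normal(loc=mu, scale=sigma, size=1_000)   # a random sample x1, ..., xn

mu_hat = x.mean()                           # sample mean as an estimate of mu
sigma2_hat = np.mean((x - mu_hat) ** 2)     # sample variance (divisor n, as in these notes) as an estimate of sigma^2

print(mu_hat, sigma2_hat)                   # should be close to 5.0 and 4.0
```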

Definition 1: The problem of estimation of parameters is the one in which, given the form of the distribution (i.e. the probability mass function (pmf) in the discrete case or the probability density function (pdf) in the continuous case) depending on some unknown parameter(s), it is required to estimate the unknown parameters from a random sample taken from the given distribution.
e.g. the population mean, µ, can be estimated with the sample mean; it can also be estimated with the help of the sample median.

Definition 2: Any function of the sample observations proposed for estimating an unknown parameter is known as a statistic or an estimator. E.g. if µ is the population mean and X̄ the sample mean, we say X̄ is the statistic/estimator for estimating µ.
An estimator T = t(x1, · · · , xn) is a function of the sample observations, and the value t of T for a particular sample is known as an estimate.
The distinction between the estimator and the estimate is the same as that between a function and the value of that function.
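To make this distinction concrete, a small Python sketch (the function name t, the parameter values and the sample size are purely illustrative):

```python
import numpy as np

def t(sample):
    """Estimator T = t(x1, ..., xn): a rule that can be applied to any sample."""
    return np.mean(sample)

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=3.0, size=25)  # one particular random sample

estimate = t(sample)   # the estimate: the numerical value of T for this particular sample
print(estimate)        # a single number close to the population mean 10.0
```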

Definition 3: Parameter Space
Consider a r.v. X with pdf f(x, θ). In most applications, though not always, the functional form of the population distribution is assumed to be known except for the value of some unknown parameter(s) θ, which may take any value in a set Θ (or Ω).
This is expressed by writing the pdf in the form

f(x, θ), θ ∈ Θ (or Ω).

The set Θ, which is the set of all possible values of θ, is called the parameter space.
Such a situation gives rise not to one probability distribution but to a family of probability distributions, which we write as

{f(x, θ), θ ∈ Θ}.

e.g. if X ∼ N(µ, σ²), then the parameter space is

Θ = {(µ, σ²) : −∞ < µ < ∞, 0 < σ² < ∞}.

In particular, for σ² = 1, the family of probability distributions is given by

{N(µ, 1); µ ∈ Θ}, where Θ = {µ : −∞ < µ < ∞}.
More generally, we may consider a family of distributions

{f(x; θ1, · · · , θk) : θi ∈ Θ, i = 1, · · · , k}.

Now, consider the Poisson distribution with parameter θ, i.e.

f(x, θ) = (θ^x · e^{−θ}) / x!,   x = 0, 1, 2, · · · ;  θ > 0,

which gives E(X) = θ and Var(X) = θ. Thus θ can be estimated with the help of

X̄ = (1/n) Σ_{i=1}^{n} xi    or    S² = (1/n) Σ_{i=1}^{n} (xi − X̄)²
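As a quick numerical illustration (a Python sketch; θ = 4, the sample size and the seed are arbitrary choices, not from the notes), both statistics land near θ:

```python
import numpy as np

theta = 4.0                          # illustrative true Poisson parameter
rng = np.random.default_rng(2)
x = rng.poisson(lam=theta, size=10_000)

x_bar = x.mean()                     # sample mean: (1/n) * sum(x_i)
s2 = np.mean((x - x_bar) ** 2)       # sample variance with divisor n: (1/n) * sum((x_i - x_bar)^2)

print(x_bar, s2)                     # both approximately 4, since E(X) = Var(X) = theta
```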
Thus, in general, there exist several estimators of the same unknown parameter.
We therefore wish to determine functions of the sample observations (such estimating functions are known as estimators), T1 = t1(x1, · · · , xn), T2 = t2(x1, · · · , xn), · · · , Tn = tn(x1, · · · , xn), such that their distribution is concentrated as closely as possible near the true value of the parameter.
This implies we have to select a good estimator from the several estimators.

3 Characteristics of a good estimator

The following are some of the criteria that should be satisfied by a good estimator:
1. Unbiasedness
2. Efficiency
3. Consistency
4. Sufficiency

1. Unbiasedness
Definition: An estimator T = t(x1, · · · , xn) is said to be an unbiased estimator of the unknown parameter θ if

E(T) = θ for all θ ∈ Ω (the parameter space).

If, however, E(T) ≠ θ, we say T is a biased estimator of θ.
In particular, if E(T) > θ, then T is a positively biased estimator of θ; and if E(T) < θ, then it is a negatively biased estimator of θ.
E(T) − θ is called the amount of bias in T as an estimator of θ. Further, if E(T) ≠ θ for finite n but E(T) → θ as n → ∞, we say that T is an asymptotically unbiased estimator of θ, i.e.

E(T) ≠ θ and lim_{n→∞} E(T) = θ.

Examples
1. The sample mean X̄ is an unbiased estimator of the population mean µ, provided both exist; i.e. we have to show that E(X̄) = µ.
Solution:
Let E(xi) = µ, i = 1, · · · , n, where x1, x2, · · · , xn represent a random sample taken from a given population with mean µ. Therefore,
E(X̄) = E((x1 + · · · + xn)/n) = (1/n) Σ_{i=1}^{n} E(xi) = (1/n) Σ_{i=1}^{n} µ = µ
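A Monte Carlo sketch of this result (Python; µ = 3, σ = 2, n = 10, the number of replications and the seed are illustrative): averaging X̄ over many repeated samples approximates E(X̄), which should be close to µ.

```python
import numpy as np

mu, sigma, n, reps = 3.0, 2.0, 10, 200_000   # illustrative values
rng = np.random.default_rng(3)

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))   # reps independent samples of size n
x_bars = samples.mean(axis=1)                               # one sample mean per replication

print(x_bars.mean())   # Monte Carlo approximation of E(X-bar); close to mu = 3.0
```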
i=1

2. The sample variance S² is a biased estimator of the population variance σ², i.e. we have to show that

E(S²) ≠ σ², where S² = (1/n) Σ_{i=1}^{n} (xi − X̄)².
Solution:
For E(xi) = µ and Var(xi) = σ², i = 1, · · · , n, and S² = (1/n) Σ_{i=1}^{n} (xi − X̄)², introduce the population mean µ:

S² = (1/n) Σ_{i=1}^{n} (xi − µ + µ − X̄)²
   = (1/n) Σ_{i=1}^{n} ((xi − µ) − (X̄ − µ))²
   = (1/n) Σ_{i=1}^{n} (xi − µ)² − (X̄ − µ)²,

since the cross term equals −2(X̄ − µ) · (1/n) Σ_{i=1}^{n} (xi − µ) = −2(X̄ − µ)².

Taking expectations on both sides, and noting that E[(1/n) Σ_{i=1}^{n} (xi − µ)²] = σ², we have

E(S²) = σ² − E(X̄ − µ)²
      = σ² − Var(X̄)
      = σ² − σ²/n ≠ σ²

Note:
Var(X̄) = σ²/n
Thus S² is a biased estimator of σ².
The amount of bias is equal to:

Bias = E(S²) − σ²
     = (σ² − σ²/n) − σ²
     = −σ²/n < 0.

Therefore, S² is a negatively biased estimator of σ².
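A Monte Carlo check of this bias (a Python sketch; the parameter values, n = 10 and the seed are illustrative). It also hints at the standard remedy, which is not derived above: rescaling by n/(n − 1), i.e. dividing by n − 1 instead of n, removes the bias.

```python
import numpy as np

mu, sigma, n, reps = 0.0, 1.0, 10, 200_000    # illustrative values; sigma^2 = 1
rng = np.random.default_rng(4)

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
x_bars = samples.mean(axis=1, keepdims=True)
s2 = np.mean((samples - x_bars) ** 2, axis=1)   # S^2 with divisor n, as defined above

print(s2.mean())                  # approx sigma^2 - sigma^2/n = 1 - 0.1 = 0.9 (negatively biased)
print((n / (n - 1)) * s2.mean())  # rescaled estimator; approximately unbiased, ~1.0
```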
