Inference Presentation
Method of Moments (MOM)
• It is difficult to trace back who introduced the MOM, but Johann Bernoulli (1667-1748) was one of the first to use the method in his work.
• With the MOM, the moments of a distribution function, expressed in terms of its parameters, are set equal to the moments of the observed sample.
• Analytical expressions can be derived quite easily, but the estimators can be biased and inefficient. The moment estimators are, however, very useful as starting values in an iterative estimation process.
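As a minimal sketch of the idea, assume the data come from an exponential distribution with rate λ, whose first moment is E[X] = 1/λ; equating it to the sample mean gives the moment estimator (the distribution choice here is illustrative, not from the text):

```python
# Method-of-moments sketch (assumed example: exponential data).
# E[X] = 1/lambda, so equating to the sample mean gives lambda_hat = 1/x_bar.
import random

random.seed(42)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(10_000)]

sample_mean = sum(sample) / len(sample)   # first sample moment m1
rate_mom = 1.0 / sample_mean              # solve E[X] = m1 for lambda

print(f"MOM estimate of lambda: {rate_mom:.3f} (true value {true_rate})")
```

For models with more parameters, one equates as many moments as there are parameters and solves the resulting system.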
Method of moments estimator
mᵣ = μᵣ, where mᵣ = (1/n) Σᵢ xᵢʳ is the r-th sample moment and μᵣ the corresponding population moment.
The sample mean is a natural estimator for μ, and the higher sample moments mᵣ are reasonable estimators of the μᵣ, but they are not unbiased. Unbiased estimators are therefore often used. In particular, the variance and the fourth cumulant are unbiasedly estimated by the k-statistics:
k₂ = n m₂ / (n − 1)
k₄ = n² [ (n + 1) m₄ − 3 (n − 1) m₂² ] / [ (n − 1)(n − 2)(n − 3) ]
where mᵣ here denotes the r-th central sample moment.
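A small sketch contrasting the biased plug-in second central moment with the unbiased (n − 1)-divisor variance estimator, using made-up data:

```python
# Raw sample moments m_r versus the unbiased variance estimator.
# The plug-in central second moment divides by n and is biased;
# dividing by (n - 1) removes the bias (this is k2 above).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

def raw_moment(xs, r):
    """r-th raw sample moment m_r = (1/n) * sum(x_i**r)."""
    return sum(x ** r for x in xs) / len(xs)

mean = raw_moment(data, 1)
m2_central = sum((x - mean) ** 2 for x in data) / n         # biased
s2_unbiased = sum((x - mean) ** 2 for x in data) / (n - 1)  # unbiased (k2)

print(mean, m2_central, s2_unbiased)
```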
Method of Maximum Likelihood (ML)
• With this method one chooses the value of θ for which the likelihood function is maximized.
• The ML method gives asymptotically unbiased parameter estimates, and among all unbiased estimators it has the smallest mean squared error.
• The variances approach asymptotically:
Var(θ̂) → 1 / (n I(θ)), with I(θ) = −E[∂² ln f(X; θ) / ∂θ²] the Fisher information per observation.
• The maximum likelihood estimator is unbiased, fully efficient (it attains the Cramér-Rao bound under regularity conditions), and normally distributed, all in the asymptotic sense.
• Regularity conditions are not fulfilled if the range of the random variable X depends on
unknown parameters.
• The MML is extremely useful, since it is often quite straightforward to evaluate from the MLE and the observed information. Nonetheless it is an approximation and should only be trusted for large n (the quality of the approximation varies from model to model).
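As an illustrative sketch (again using an assumed exponential model, not one from the text), the log-likelihood is log L(λ) = n log λ − λ Σxᵢ, which gives the closed-form MLE λ̂ = 1/x̄, with asymptotic variance λ²/n from the inverse Fisher information:

```python
# ML sketch for exponential data: lam_hat = n / sum(x) = 1 / x_bar,
# asymptotic variance lam^2 / n from the inverse Fisher information.
import math
import random

random.seed(1)
true_rate = 3.0
n = 20_000
x = [random.expovariate(true_rate) for _ in range(n)]

lam_hat = n / sum(x)             # closed-form MLE
avar = lam_hat ** 2 / n          # estimated asymptotic variance
se = math.sqrt(avar)             # asymptotic standard error

def loglik(lam):
    return n * math.log(lam) - lam * sum(x)

# Sanity check: the closed form beats nearby candidate values.
grid = [lam_hat - 0.05, lam_hat, lam_hat + 0.05]
assert max(grid, key=loglik) == lam_hat

print(f"lam_hat = {lam_hat:.3f}, asymptotic s.e. ~ {se:.4f}")
```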
Method of Least Squares (MLS)
• Least squares was introduced by Gauss (1777-1855). Given the observations x = (x₁, x₂, ..., xₙ) and y = (y₁, y₂, ..., yₙ), a regression model can be fitted. For the simple linear case:
Yᵢ = α + β xᵢ + εᵢ
With the assumed constant variance of Y around its regression line, the parameter estimates are:
β̂ = Σ (xᵢ − x̄)(Yᵢ − Ȳ) / Σ (xᵢ − x̄)²,  α̂ = Ȳ − β̂ x̄
The estimators are linear functions of the Yᵢ's and they are unbiased.
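The closed-form estimates above can be sketched directly (the data here are made up for illustration):

```python
# Ordinary least squares for the straight-line model y_i = a + b*x_i + e_i:
# b_hat = S_xy / S_xx and a_hat = y_bar - b_hat * x_bar.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
s_xx = sum((x - x_bar) ** 2 for x in xs)

b_hat = s_xy / s_xx            # slope estimate
a_hat = y_bar - b_hat * x_bar  # intercept estimate

print(f"y ~ {a_hat:.2f} + {b_hat:.2f} x")
```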
Method of L-Moments
• In statistics, L-moments are a sequence of statistics used to summarize the shape of a probability distribution.
• They are linear combinations of order statistics (L-statistics) and can be used
to calculate quantities analogous to standard deviation, skewness
and kurtosis, termed the L-scale, L-skewness and L-kurtosis respectively (the
L-mean is identical to the conventional mean).
• Standardized L-moments are called L-moment ratios.
• Sample L-moments can be defined for a sample from the population and can
be used as estimators of the population L-moments.
• Since L-moment estimators are linear functions of the ordered data
values, they are virtually unbiased and have relatively small sampling
variance.
• L-moment ratio estimators also have small bias and variance,
especially in comparison with the classical coefficients of skewness
and kurtosis.
• Moreover, estimators of L-moments are relatively insensitive to
outliers.
• Finally, L-moments are simply summary statistics for probability distributions and data samples. Analogous to ordinary moments, they provide measures of location, dispersion, skewness, kurtosis, and other aspects of the shape of probability distributions or data samples.
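The sample L-moments above can be computed from the ordered data via probability-weighted moments; the construction below follows the standard definitions (it is an illustrative implementation, not code from the text):

```python
# Sample L-moments via probability-weighted moments b_r:
# b_r = (1/n) * sum over sorted x of [C(i, r) / C(n-1, r)] * x_(i+1),
# then l1 = b0, l2 = 2b1 - b0, l3 = 6b2 - 6b1 + b0, l4 = 20b3 - 30b2 + 12b1 - b0.
from math import comb

def sample_l_moments(xs):
    x = sorted(xs)
    n = len(x)

    def b(r):
        return sum(comb(i, r) * x[i] for i in range(r, n)) / (n * comb(n - 1, r))

    b0, b1, b2, b3 = b(0), b(1), b(2), b(3)
    l1 = b0                               # L-mean (equals the ordinary mean)
    l2 = 2 * b1 - b0                      # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2       # L-mean, L-scale, L-skew, L-kurt

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
l1, l2, t3, t4 = sample_l_moments(data)
print(l1, l2, t3, t4)
```

The ratios t3 and t4 are the L-moment ratios (L-skewness and L-kurtosis) mentioned above.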