Asymptotic Statistics
Changliang Zou
Prologue
Why asymptotic statistics? The use of asymptotic approximations is two-fold. First, they
enable us to find approximate tests and confidence regions. Second, approximations can be
used theoretically to study the quality (efficiency) of statistical procedures. — Van der Vaart
To carry out a statistical test, we need to know the critical value of the test statistic.
Roughly speaking, this means we must know the distribution of the test statistic under the
null hypothesis. Because such distributions are often analytically intractable, only
approximations are available in practice.
Consider for instance the classical t-test for location. Given a sample of iid observations
X1, . . . , Xn, we wish to test H0 : µ = µ0. If the observations arise from a normal distribution
with mean µ0, then the distribution of the t-test statistic, √n(X̄n − µ0)/Sn, is exactly known,
namely t(n − 1). However, we may have doubts regarding the normality. If the number of
observations is not too small, this does not matter too much: we may act as if
√n(X̄n − µ0)/Sn ∼ N(0, 1). The theoretical justification is the limiting result, as n → ∞,

    sup_x | P( √n(X̄n − µ)/Sn ≤ x ) − Φ(x) | → 0,

provided that the variables Xi have a finite second moment. Then a "large-sample" or
"asymptotic" level-α test rejects H0 if |√n(X̄n − µ0)/Sn| > z_{α/2}. When the underlying
distribution is exponential, the approximation is satisfactory once n ≥ 100. Thus, one aim of
asymptotic statistics is to derive the asymptotic distribution of many types of statistics.
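To see how such a normal approximation is checked in practice, here is a minimal Python sketch (not from the notes; the Exp(1) data, sample sizes, seed and replication count are arbitrary illustrative choices) that estimates the actual size of the nominal 5% two-sided t-test when the underlying distribution is exponential rather than normal:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def t_stat(x, mu0=1.0):
        # sqrt(n) * (xbar - mu0) / S_n, the studentized mean
        n = len(x)
        return np.sqrt(n) * (x.mean() - mu0) / x.std(ddof=1)

    for n in (10, 30, 100):
        # Exp(1) data: the null mean is mu0 = 1, but the data are skewed
        T = np.array([t_stat(rng.exponential(1.0, n)) for _ in range(20000)])
        # empirical size of the nominal 5% test based on |T| > z_{alpha/2}
        size = np.mean(np.abs(T) > stats.norm.ppf(0.975))
        print(n, round(size, 3))

The printed rejection rates should approach 0.05 as n grows, illustrating the limiting result above.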
There are similar benefits when obtaining confidence intervals. For instance, consider the
maximum likelihood estimator θ̂n of a parameter of dimension p based on a sample of size n
from a density f(x; θ). A major result in asymptotic statistics is that in many situations
√n(θ̂n − θ) is asymptotically normally distributed with zero mean and covariance matrix
I_θ^{-1}, where

    I_θ = E_θ[ (∂ log f(X; θ)/∂θ) (∂ log f(X; θ)/∂θ)^T ]

is the Fisher information matrix. Thus, acting as if √n(θ̂n − θ) ∼ N_p(0, I_θ^{-1}), we find that
the ellipsoid

    { θ : (θ − θ̂n)^T I_θ (θ − θ̂n) ≤ χ^2_{p,α}/n }

is an approximate 1 − α confidence region.
For a relatively small number of statistical problems, there exists an exact, optimal
solution: for example, the Neyman-Pearson lemma for finding UMP tests, the Rao-Blackwell
theory for finding MVUEs, and the Cramér-Rao theorem.
However, an exact optimality theory or procedure is not always available, and then asymptotic
optimality theory may help. For instance, to compare two tests, we might compare approximations
to their power functions. Consider the foregoing hypothesis problem for location. A well-known
nonparametric test statistic is the sign statistic Tn = n^{-1} Σ_{i=1}^n I{Xi > θ0}, where the
null hypothesis is H0 : θ = θ0 and θ denotes the median associated with the distribution of X. To
compare the efficiency of the sign test and the t-test is rather difficult because the exact power
functions of the two tests are intractable. However, by the definitions and methods introduced later,
we can show that the asymptotic relative efficiency of the sign test versus the t-test is equal to

    4 f(0)^2 ∫ x^2 f(x) dx,

where f denotes the density of X − θ0 (so f has median 0).
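As a concrete check of this formula (a small Python sketch; the normal and double-exponential densities, both with median 0, are just two standard illustrative choices), one can evaluate 4 f(0)^2 ∫ x^2 f(x) dx numerically; the classical values are 2/π ≈ 0.64 for the normal and 2 for the double exponential:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def are_sign_vs_t(pdf):
        # ARE = 4 * f(0)^2 * integral of x^2 f(x) dx, for a density with median 0
        second_moment, _ = quad(lambda x: x**2 * pdf(x), -np.inf, np.inf)
        return 4.0 * pdf(0.0)**2 * second_moment

    print(are_sign_vs_t(stats.norm.pdf))      # normal: 2/pi ~ 0.637
    print(are_sign_vs_t(stats.laplace.pdf))   # double exponential: 2.0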
To compare estimators, we might compare asymptotic variances rather than exact variances.
A major result in this area is that for smooth parametric models maximum likelihood
estimators are asymptotically optimal. This roughly means the following. First, MLEs are
consistent; second, the rate at which MLEs converge to the true value is the fastest possible,
typically √n; third, their asymptotic variance attains the Cramér-Rao bound. Thus, asymptotics
justify the use of the MLE in certain situations. (Even though in general it does not lead to
the best estimator for finite samples, it is rarely a poor one and always leads to a reasonable
estimator.)
Contents
• The basic sample statistics: distribution functions, moments, quantiles, and order statistics (3)
• Asymptotic theory in parametric inference: MLE, likelihood ratio test, etc (6)
Text books
Billingsley, P. (1995). Probability and Measure, 3rd edition, John Wiley, New York.
DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability, Springer.
Serfling, R. (1980). Approximation Theorems of Mathematical Statistics, John Wiley, New
York.
Shao, J. (2003). Mathematical Statistics, 2nd ed. Springer, New York.
Van der Vaart, A. W. (2000). Asymptotic Statistics, Cambridge University Press.
Chapter 1
Throughout this course, there will usually be an underlying probability space (Ω, F, P),
where Ω is a set of points, F is a σ-field of subsets of Ω, and P is a probability distribution
or measure defined on the elements of F. A random variable X(ω) is a transformation of
Ω into the real line R such that images X^{-1}(B) of Borel sets B are elements of F. A
collection of random variables X1(ω), X2(ω), . . . on a given (Ω, F) will typically be denoted
by X1, X2, . . ..
This is usually written as Xn →p X. Extensions to the vector case: for random p-vectors
X1, X2, . . . and X, we say Xn →p X if ||Xn − X|| →p 0, where ||z|| = (Σ_{i=1}^p z_i^2)^{1/2}
denotes the Euclidean distance (L2-norm) for z ∈ R^p. It is easily seen that Xn →p X iff the
corresponding component-wise convergence holds.
Example 1.1.1 For iid Bernoulli trials with a success probability p = 1/2, let Xn denote the
number of times in the first n trials that a success is followed by a failure. Denoting
Ti = I{i-th trial is a success and (i+1)-st trial is a failure}, we have Xn = Σ_{i=1}^{n−1} Ti,
and therefore E[Xn] = (n − 1)/4 and

    Var[Xn] = Σ_{i=1}^{n−1} Var[Ti] + 2 Σ_{i=1}^{n−2} Cov[Ti, Ti+1] = 3(n − 1)/16 − 2(n − 2)/16 = (n + 1)/16.

It then follows by an application of Chebyshev's inequality that Xn/n →p 1/4.
[Chebyshev: P(|X − µ| ≥ ε) ≤ σ^2/ε^2.]
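A short simulation (an illustrative sketch, not part of the notes; n, the seed and the number of replications are arbitrary) confirms the mean and variance formulas above and the convergence Xn/n →p 1/4:

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 200, 50000
    counts = np.empty(reps)
    for r in range(reps):
        trials = rng.integers(0, 2, n)             # Bernoulli(1/2) trials
        # success at position i followed by failure at position i+1
        counts[r] = np.sum((trials[:-1] == 1) & (trials[1:] == 0))
    print(counts.mean(), (n - 1) / 4)              # E[X_n] = (n-1)/4
    print(counts.var(), (n + 1) / 16)              # Var[X_n] = (n+1)/16
    print(np.mean(np.abs(counts / n - 0.25) > 0.05))  # P(|X_n/n - 1/4| > 0.05) is small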
This expresses that the sequence Xn converges in probability to zero or is bounded in
probability "at the rate Rn". For deterministic sequences Xn and Rn, Op(·) and op(·) reduce to
the usual o(·) and O(·) from calculus. Obviously, Xn = op(Rn) implies that Xn = Op(Rn).
An expression we will often use is: for some sequence an, if anXn →p 0, then we write
Xn = op(a_n^{-1}); if anXn = Op(1), then we write Xn = Op(a_n^{-1}).
Definition 1.1.3 (convergence with probability one) Let {Xn, X} be random variables
defined on a common probability space. We say Xn converges to X with probability 1 (or
almost surely, strongly, almost everywhere) if

    P( lim_{n→∞} Xn = X ) = 1.

It is clear from this equivalent condition that convergence wp1 is stronger than convergence
in probability. Its proof can be found on page 7 in Serfling (1980).

    P(|X_(n) − 1| ≤ ε, ∀ n ≥ m) = P(X_(n) ≥ 1 − ε, ∀ n ≥ m)
                                = P(X_(m) ≥ 1 − ε) = 1 − (1 − ε)^m → 1, as m → ∞.
Definition 1.1.4 (convergence in rth mean) Let {Xn, X} be random variables defined
on a common probability space. For r > 0, we say Xn converges to X in rth mean if
E|Xn − X|^r → 0 as n → ∞. This is written Xn →rth X. It is easily shown that

    Xn →rth X  ⇒  Xn →sth X,  0 < s < r,

by Jensen's inequality (if g(·) is a convex function on R, and X and g(X) are integrable
r.v.'s, then g(E[X]) ≤ E[g(X)]).
Definition 1.1.5 (convergence in distribution) Let {Xn, X} be random variables. Consider
their distribution functions FXn(·) and FX(·). We say that Xn converges in distribution
(in law) to X if lim_{n→∞} FXn(t) = FX(t) at every point t that is a continuity point of FX.
This is written as Xn →d X or FXn ⇒ FX.
Taking the limit of the distribution function of Xn as n → ∞ yields lim_n FXn(x) = Φ(x) for
all x ∈ R. Thus, Xn →d N(0, 1).
According to the assertion below the definition of →p, we know that Xn →p X is equivalent
to convergence in probability of every one of the sequences of components. The analogous
statement for convergence in distribution is false: convergence in distribution of the sequence
Xn is stronger than convergence of every one of the sequences of components Xni. The point is
that the distributions of the components Xni separately do not determine their joint distribution
(they might be independent or dependent in many ways). We speak of joint convergence in
law versus marginal convergence.
Example 1.1.5 If X ∼ U[0, 1] and Xn = X for all n, and Yn = X for n odd and Yn = 1 − X
for n even, then Xn →d X and Yn →d U[0, 1], yet (Xn, Yn) does not converge in law.
Suppose {Xn, X} are integer-valued random variables. It is not hard to show that

    Xn →d X  ⇔  P(Xn = k) → P(X = k)

for every integer k. This is a useful characterization of convergence in law for integer-valued
random variables.
1.2 Fundamental results and theorems on convergence
1.2.1 Relationship
The relationships among the four modes of convergence are summarized as follows.
Theorem 1.2.1
(i) If Xn →wp1 X, then Xn →p X.
(ii) If Xn →rth X for some r > 0, then Xn →p X.
(iii) If Xn →p X, then Xn →d X.
(iv) If, for every ε > 0, Σ_{n=1}^∞ P(|Xn − X| > ε) < ∞, then Xn →wp1 X.
Proof. (i) is an obvious consequence of the equivalent characterization (1.1). (ii) For any
ε > 0, by Markov's inequality,

    P(|Xn − X| ≥ ε) ≤ E|Xn − X|^r / ε^r,

and thus the right-hand side tends to zero under rth mean convergence. (iii) This is a direct
application of Slutsky's theorem. (iv) Let ε > 0 be given. We have

    P(|Xm − X| ≥ ε for some m ≥ n) = P( ∪_{m=n}^∞ {|Xm − X| ≥ ε} ) ≤ Σ_{m=n}^∞ P(|Xm − X| ≥ ε).

The last term is the tail of a convergent series and hence goes to zero as n → ∞.
Example 1.2.1 Consider iid N(0, 1) random variables X1, X2, . . ., and suppose X̄n is the
mean of the first n observations. For any ε > 0, consider Σ_{n=1}^∞ P(|X̄n| > ε). By Markov's
inequality, P(|X̄n| > ε) ≤ E[X̄n^4]/ε^4 = 3/(ε^4 n^2). Since Σ_{n=1}^∞ n^{-2} < ∞, it follows
from Theorem 1.2.1-(iv) that X̄n →wp1 0.
1.2.2 Transformation
It turns out that continuous transformations preserve many types of convergence, and this
fact is useful in many applications. We record it next. Its proof can be found on page 24 in
Serfling (1980).
Example 1.2.2 (i) If Xn →d N(0, 1), then Xn^2 →d χ^2_1; (ii) if (Xn, Yn) →d N2(0, I2), then

    max{Xn, Yn} →d max{X, Y},

where (X, Y) ∼ N2(0, I2).
The most commonly considered functions of vectors converging in some stochastic sense
are linear and quadratic forms, which are summarized in the following result.
Corollary 1.2.1 Suppose that the p-vectors Xn converge to the p-vector X in probability,
almost surely, or in law. Let A_{q×p} and B_{p×p} be matrices. Then AXn → AX and
Xn^T B Xn → X^T B X in the given mode of convergence.
Example 1.2.3 (i) If Xn →d Np(µ, Σ), then CXn →d Nq(Cµ, CΣC^T), where C_{q×p} is a matrix;
also, (Xn − µ)^T Σ^{-1} (Xn − µ) →d χ^2_p. (ii) (Sums and products of random variables converging
wp1 or in probability) If Xn →wp1 X and Yn →wp1 Y, then Xn + Yn →wp1 X + Y and
Xn Yn →wp1 XY. The same conclusions hold with wp1 replaced by convergence in probability.
Remark 1.2.1 The condition that g(·) is a continuous function in Theorem 1.2.2 can be
relaxed further to the requirement that g(·) is continuous a.s., i.e., P(X ∈ C(g)) = 1, where
C(g) = {x : g is continuous at x} is called the continuity set of g.
Example 1.2.4 (i) If Xn →d X ∼ N(0, 1), then 1/Xn →d Z, where Z has the distribution of
1/X, even though the function g(x) = 1/x is not continuous at 0. This is because P(X = 0) = 0.
However, if Xn = 1/n (a degenerate distribution) and

    g(x) = 1 for x > 0,  g(x) = 0 for x ≤ 0,

then Xn →d 0 but g(Xn) →d 1 ≠ g(0); (ii) if (Xn, Yn) →d N2(0, I2), then Xn/Yn →d Cauchy.
In Example 1.2.2, the condition that (Xn, Yn) →d N2(0, I2) cannot be relaxed to Xn →d X
and Yn →d Y with X and Y independent; i.e., we need the convergence of the joint CDF
of (Xn, Yn). The situation is different when →d is replaced by →p or →wp1, as in
Example 1.2.3-(ii). The following result, which plays an important role in probability and
statistics, establishes the convergence in distribution of Xn + Yn or Xn Yn when no
information regarding the joint CDF of (Xn, Yn) is provided.
Theorem 1.2.3 (Slutsky's Theorem) Let Xn →d X and Yn →p c, where c is a finite constant.
Then,
(i) Xn + Yn →d X + c;
(ii) Xn Yn →d cX;
(iii) Xn/Yn →d X/c if c ≠ 0.
Proof. The method of proof of the theorem is demonstrated sufficiently by proving (i).
Choose and fix t such that t − c is a continuity point of FX. Let ε > 0 be such that t − c + ε
and t − c − ε are also continuity points of FX. Then

    F_{Xn+Yn}(t) = P(Xn + Yn ≤ t) ≤ P(Xn ≤ t − c + ε) + P(|Yn − c| ≥ ε),

and, similarly,

    F_{Xn+Yn}(t) ≥ P(Xn ≤ t − c − ε) − P(|Yn − c| ≥ ε).

It follows from the previous two inequalities and the hypotheses of the theorem that

    FX(t − c − ε) ≤ lim inf_n F_{Xn+Yn}(t) ≤ lim sup_n F_{Xn+Yn}(t) ≤ FX(t − c + ε).

Since t − c is a continuity point of FX, and since ε can be taken arbitrarily small, the above
yields F_{Xn+Yn}(t) → FX(t − c), which is (i).
Example 1.2.6 (i) Theorem 1.2.1-(iii). Furthermore, convergence in probability to a constant
is equivalent to convergence in law to that constant. "⇒" follows from (i). "⇐" can be proved
by definition: because the degenerate distribution function of the constant c is continuous
everywhere except at the point c, for any ε > 0,

    P(|Xn − c| > ε) ≤ 1 − FXn(c + ε) + FXn(c − ε) → 1 − FX(c + ε) + FX(c − ε) = 0.

Example 1.2.7 Suppose Xn ∼ Gamma(αn, βn), where αn and βn are sequences of positive real
numbers such that αn → α and βn → β for some positive real numbers α and β. Also, let β̂n
be a consistent estimator of β. We can conclude that Xn/β̂n →d Gamma(α, 1).
Example 1.2.8 (t-statistic) Let X1, X2, . . . be iid random variables with EX1 = 0 and
EX1^2 < ∞. Then the t-statistic √n X̄n/Sn, where Sn^2 = (n − 1)^{-1} Σ_{i=1}^n (Xi − X̄n)^2 is
the sample variance, is asymptotically standard normal. To see this, first note that by two
applications of the WLLN and the CMT,

    Sn^2 = (n/(n − 1)) ( n^{-1} Σ_{i=1}^n Xi^2 − X̄n^2 ) →p 1 · (EX1^2 − (EX1)^2) = Var(X1).

Again by the CMT, Sn →p √Var(X1). By the CLT, √n X̄n →d N(0, Var(X1)). Finally, Slutsky's
theorem gives that the sequence of t-statistics converges in law to N(0, Var(X1))/√Var(X1) = N(0, 1).
We next state some theorems known as the laws of large numbers. They concern the limiting
behavior of sums of independent random variables. The weak law of large numbers (WLLN)
refers to convergence in probability, whereas the strong law of large numbers (SLLN) refers to
a.s. convergence. Our first result gives the WLLN and SLLN for a sequence of iid random
variables.
Theorem 1.2.4 Let X1, X2, . . . be iid random variables having a CDF F. If E|X1| < ∞ and
µ = EX1, then X̄n →p µ (the WLLN) and, in fact, X̄n →wp1 µ (the SLLN).
Example 1.2.9 Suppose X1, X2, . . . are iid from the t-distribution with 2 degrees of freedom,
t(2). The variance of Xi does not exist, but Theorem 1.2.4 still applies to this case, and we
can therefore conclude that X̄n →p 0 as n → ∞.
The next result is for sequences of independent but not necessarily identically distributed
random variables.
Theorem 1.2.5
(i) (The WLLN) Let X1, X2, . . . be uncorrelated with means µ1, µ2, . . . and variances
σ1^2, σ2^2, . . .. If lim_{n→∞} n^{-2} Σ_{i=1}^n σi^2 = 0, then

    n^{-1} Σ_{i=1}^n Xi − n^{-1} Σ_{i=1}^n µi →p 0.

(ii) (The SLLN) Let X1, X2, . . . be independent with means µ1, µ2, . . . and variances
σ1^2, σ2^2, . . .. If Σ_{i=1}^∞ σi^2/ci^2 < ∞, where cn is ultimately monotone and cn → ∞, then

    c_n^{-1} Σ_{i=1}^n (Xi − µi) →wp1 0.

(iii) (The SLLN with common mean) Let X1, X2, . . . be independent with common mean µ
and variances σ1^2, σ2^2, . . .. If Σ_{i=1}^∞ σi^{-2} = ∞, then

    Σ_{i=1}^n (Xi/σi^2) / Σ_{i=1}^n σi^{-2} →wp1 µ.
The proof of Theorems 1.2.4 and 1.2.5 can be found in Billingsley (1995).
Example 1.2.10 Suppose the Xi are independent with Xi ∼ (µ, σi^2). Then, by simple calculus,
the BLUE (best linear unbiased estimate) of µ is Σ_{i=1}^n σi^{-2} Xi / Σ_{i=1}^n σi^{-2}. Suppose
now that the σi^2 do not grow at a rate faster than i; i.e., for some constant K, σi^2 ≤ iK. Then
Σ_{i=1}^n σi^{-2} ≥ K^{-1} Σ_{i=1}^n i^{-1} clearly diverges as n → ∞, so by Theorem 1.2.5-(iii)
the BLUE converges to µ with probability 1.
Example 1.2.11 Suppose (Xi, Yi), i = 1, . . . , n, are iid bivariate samples from some distribution
with E(X1) = µ1, E(Y1) = µ2, Var(X1) = σ1^2, Var(Y1) = σ2^2, and corr(X1, Y1) = ρ.
Let rn denote the sample correlation coefficient. The almost sure convergence of rn to ρ
follows very easily. We write

    rn = ( n^{-1} Σ Xi Yi − X̄ Ȳ ) / √( (n^{-1} Σ Xi^2 − X̄^2)(n^{-1} Σ Yi^2 − Ȳ^2) );

then from the SLLN for iid random variables (Theorem 1.2.4) and the continuous mapping
theorem (Theorem 1.2.2; Example 1.2.3-(ii)),

    rn →wp1 (E(X1 Y1) − µ1 µ2) / (σ1 σ2) = ρ.
Next we provide a collection of basic facts about convergence in distribution. The following
theorems provide methodology for establishing convergence in distribution.
Theorem 1.2.6 Let X, X1, X2, . . . be random p-vectors.
(i) (The Portmanteau theorem) Xn →d X is equivalent to the following condition:
E[g(Xn)] → E[g(X)] for every bounded continuous function g.
(ii) (Levy-Cramer continuity theorem) Let ΦX, ΦX1, ΦX2, . . . be the characteristic functions
of X, X1, X2, . . ., respectively. Xn →d X iff lim_{n→∞} ΦXn(t) = ΦX(t) for all t ∈ R^p.
(iii) (Cramer-Wold device) Xn →d X iff c^T Xn →d c^T X for every c ∈ R^p.
Proof. (i) See Serfling (1980), page 16; (ii) Shao (2003), page 57; (iii) assume c^T Xn →d c^T X
for any c. Then by Theorem 1.2.6-(ii), Φ_{c^T Xn}(t) → Φ_{c^T X}(t) for all t ∈ R. With t = 1,
and since c is arbitrary, it follows by Theorem 1.2.6-(ii) again that Xn →d X. The converse can
be proved by a similar argument. [Note Φ_{c^T Xn}(t) = ΦXn(tc) and Φ_{c^T X}(t) = ΦX(tc)
for any t ∈ R and any c ∈ R^p.]
A straightforward application of Theorem 1.2.6 is that if Xn →d X and Yn →d c for a
constant vector c, then (Xn, Yn) →d (X, c).
Example 1.2.12 Example 1.1.3 revisited. Consider now the function g(x) = x^{10}, 0 ≤ x ≤ 1.
Note that g is continuous and bounded. Therefore, by the Portmanteau theorem,

    E(g(Xn)) = Σ_{i=1}^n (i/n)^{10} (1/n) → E(g(X)) = ∫_0^1 x^{10} dx = 1/11.
which is the so-called Bernstein polynomial. Note that Bn(p) = E[g(X/n)], where X ∼ Bin(n, p).
As n → ∞, X/n →p p (WLLN), and it follows that X/n →d δp, the point mass at p. Since g is
continuous and hence bounded (on a compact interval), it follows from the Portmanteau theorem
that Bn(p) → g(p).
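The convergence Bn(p) → g(p) is easy to observe numerically. The sketch below (illustrative only; it uses the representation Bn(p) = Σ_{k=0}^n g(k/n) C(n,k) p^k (1−p)^{n−k}, which is just E[g(X/n)] with X ∼ Bin(n, p), and g(x) = sin(πx) is an arbitrary continuous choice) evaluates Bn(0.3) for increasing n:

    import numpy as np
    from scipy.stats import binom

    def bernstein(g, n, p):
        # B_n(p) = E[g(X/n)] with X ~ Bin(n, p)
        k = np.arange(n + 1)
        return np.sum(g(k / n) * binom.pmf(k, n, p))

    g = lambda x: np.sin(np.pi * x)          # a continuous g on [0, 1]
    for n in (10, 100, 1000):
        print(n, bernstein(g, n, 0.3), g(0.3))   # B_n(0.3) -> g(0.3)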
Example 1.2.14 (i) Let X1, . . . , Xn be independent random variables having a common CDF
and Tn = X1 + · · · + Xn, n = 1, 2, . . .. Suppose that E|X1| < ∞. It follows from the properties
of CHFs and a Taylor expansion that the CHF of X1 satisfies [∂ΦX(t)/∂t]|_{t=0} = √−1 EX and
[∂^2 ΦX(t)/∂t^2]|_{t=0} = −EX^2, so that

    ΦX1(t) = ΦX1(0) + √−1 µ t + o(|t|)  as t → 0.
Theorem 1.2.7 (i) (Prohorov's theorem) If Xn →d X for some X, then Xn = Op(1).
(ii) (Polya's theorem) If Xn →d X and FX is continuous, then

    sup_{−∞<x<∞} |FXn(x) − FX(x)| → 0.
Proof. (i) For any given ε > 0, fix a constant M such that P(|X| ≥ M) < ε. By the definition
of convergence in law, P(|Xn| ≥ M) exceeds P(|X| ≥ M) by an arbitrarily small amount for
sufficiently large n. Thus, there exists N such that P(|Xn| ≥ M) < 2ε for all n ≥ N. The result
follows from the definition of Op(1). (ii) First, fix k ∈ N. By the continuity of FX there exist
points −∞ = x0 < x1 < · · · < xk = ∞ with FX(xi) = i/k. By monotonicity, we have, for
x_{i−1} ≤ x ≤ xi,

    FXn(x) − FX(x) ≤ FXn(xi) − FX(x_{i−1}) = FXn(xi) − FX(xi) + 1/k,

and similarly FXn(x) − FX(x) ≥ FXn(x_{i−1}) − FX(x_{i−1}) − 1/k. Thus, |FXn(x) − FX(x)| is
bounded above by sup_i |FXn(xi) − FX(xi)| + 1/k, for every x. The latter, finite supremum
converges to zero, because each term converges to zero by the assumed convergence in law, for
each fixed k. Because k is arbitrary, the result follows.
The following result can be used to check whether Xn →d X when X has a PDF f and Xn has
a PDF fn.
Theorem 1.2.8 (Scheffé's theorem) Suppose fn(x) → f(x) for (almost) all x. Then
∫ |fn(x) − f(x)| dx → 0, and consequently Xn →d X.
Proof. Put gn(x) = [f(x) − fn(x)] I{f(x) ≥ fn(x)}. By noting that ∫ [fn(x) − f(x)] dx = 0,

    ∫ |fn(x) − f(x)| dx = 2 ∫ gn(x) dx.

Since 0 ≤ gn(x) ≤ f(x) for all x and gn(x) → 0, by dominated convergence, lim_n ∫ gn(x) dx = 0.
[Dominated convergence theorem: if lim_{n→∞} fn = f a.e. and there exists an integrable
function g such that |fn| ≤ g, then lim_n ∫ fn(x) dx = ∫ lim_n fn(x) dx.]
The following result provides a convergence-of-moments criterion for convergence in law.
Theorem 1.2.9 (Frechet-Shohat theorem) Let the distribution functions Fn possess finite
moments α_{nk} = ∫ t^k dFn(t) for k = 1, 2, . . . and n = 1, 2, . . .. Assume that the limits
αk = lim_n α_{nk} exist (and are finite) for each k. Then,
(i) the limits αk are the moments of some distribution function F;
(ii) if F is uniquely determined by its moments {αk}, then Fn →d F.
There are many rules of calculus with op and Op symbols, which we will apply without comment.
For instance, op(1) + op(1) = op(1), op(1) + Op(1) = Op(1), and Op(1) op(1) = op(1).
Lemma 1.2.1 Let g be a function defined on R^p such that g(0) = 0. Let Xn be a sequence of
random vectors with values in R^p that converges in probability to zero. Then, for every r > 0,
(i) if g(t) = o(||t||^r) as t → 0, then g(Xn) = op(||Xn||^r);
(ii) if g(t) = O(||t||^r) as t → 0, then g(Xn) = Op(||Xn||^r).
Proof. Define f(t) = g(t)/||t||^r for t ≠ 0 and f(0) = 0. Then g(Xn) = f(Xn)||Xn||^r.
(i) Because the function f is continuous at zero by assumption, f(Xn) →p f(0) = 0 by
Theorem 1.2.2.
(ii) By assumption there exist M and δ > 0 such that |f(t)| ≤ M whenever ||t|| ≤ δ. Thus
P(|f(Xn)| > M) ≤ P(||Xn|| > δ) → 0, so f(Xn) = Op(1) and hence g(Xn) = Op(||Xn||^r).
1.3 The central limit theorem
The most fundamental result on convergence in law is the central limit theorem (CLT) for
sums of random variables. We first state the case of chief importance, iid summands.
Theorem 1.3.1 (Lindeberg-Levy) Let Xi be iid with mean µ and finite variance σ^2. Then

    √n(X̄ − µ)/σ →d N(0, 1).

By Slutsky's theorem, we can also write √n(X̄ − µ) →d N(0, σ^2); equivalently, X̄ is
AN(µ, σ^2/n). See Billingsley (1995) for a proof.
Example 1.3.1 (Confidence intervals) This theorem can be used to approximate
P(X̄ ≤ µ + kσ/√n) by Φ(k). This is very useful because the sampling distribution of X̄ is not
available except in some special cases. Then, setting k = Φ^{-1}(1 − α) = zα,
[X̄n − zα σ/√n, X̄n + zα σ/√n] is a confidence interval for µ of asymptotic level 1 − 2α. More
precisely, the probability that µ is contained in this interval converges to 1 − 2α (how accurate?).
Example 1.3.2 (Sample variance) Suppose X1, . . . , Xn are iid with mean µ, variance σ^2
and E(X1^4) < ∞. Consider the asymptotic distribution of Sn^2 = (n − 1)^{-1} Σ_{i=1}^n (Xi − X̄n)^2.
Write

    √n(Sn^2 − σ^2) = √n ( (n − 1)^{-1} Σ_{i=1}^n (Xi − µ)^2 − σ^2 ) − (n/(n − 1)) √n (X̄n − µ)^2.

The second term converges to zero in probability and the first term is asymptotically normal
by the CLT. The whole expression is asymptotically normal by Slutsky's theorem, i.e.,

    √n(Sn^2 − σ^2) →d N(0, µ4 − σ^4),

where µ4 denotes the centered fourth moment of X1, and µ4 − σ^4 comes from computing the
variance of (X1 − µ)^2.
Example 1.3.3 (Level of the chi-square test) Normal theory prescribes rejecting the null
hypothesis H0 : σ^2 ≤ 1 for values of nSn^2 exceeding the upper α point χ^2_{n−1,α} of the χ^2_{n−1}
distribution. If the observations are sampled from a normal distribution, the test has exactly
level α. However, this is not even approximately the case if the underlying distribution is not
normal. The CLT and Example 1.3.2 yield the following two statements:

    (χ^2_{n−1} − (n − 1)) / √(2(n − 1)) →d N(0, 1),    √n (Sn^2/σ^2 − 1) →d N(0, κ + 2),

where κ is the kurtosis of the underlying distribution. A short calculation then shows that the
asymptotic level of the test is 1 − Φ(zα √(2/(κ + 2))), which reduces to 1 − Φ(zα) = α iff the
kurtosis of the underlying distribution is 0. If the kurtosis goes to infinity, then the asymptotic
level approaches 1 − Φ(0) = 1/2. We conclude that the level of the chi-square test is nonrobust
against departures from normality that affect the value of the kurtosis. If, instead, we used a
normal approximation to the distribution of √n(Sn^2/σ^2 − 1), the problem would not arise,
provided that the asymptotic variance κ + 2 is estimated accurately.
Theorem 1.3.2 (Multivariate CLT for the iid case) Let Xi be iid random p-vectors with
mean µ and covariance matrix Σ. Then

    √n(X̄ − µ) →d Np(0, Σ).

Proof. By the Cramer-Wold device, this can be proved by finding the limit distribution of the
sequence of real variables

    c^T ( n^{-1/2} Σ_{i=1}^n (Xi − µ) ) = n^{-1/2} Σ_{i=1}^n (c^T Xi − c^T µ).

Because the random variables c^T Xi − c^T µ are iid with zero mean and variance c^T Σ c, this
sequence is AN(0, c^T Σ c) by Theorem 1.3.1. This is exactly the distribution of c^T X if X
possesses the Np(0, Σ) distribution.
Example 1.3.4 Suppose that X1, . . . , Xn is a random sample from the Poisson distribution
with mean θ. Let Zn be the proportion of zeros observed, i.e., Zn = n^{-1} Σ_{i=1}^n I{Xi = 0}.
Let us find the joint asymptotic distribution of (X̄n, Zn). Note that E(X1) = θ, E I{X1=0} = e^{−θ},
Var(X1) = θ, Var(I{X1=0}) = e^{−θ}(1 − e^{−θ}), and E X1 I{X1=0} = 0. So
Cov(X1, I{X1=0}) = −θe^{−θ}. Hence, √n ( (X̄n, Zn) − (θ, e^{−θ}) ) →d N2(0, Σ), where

    Σ = [ θ            −θe^{−θ}
          −θe^{−θ}     e^{−θ}(1 − e^{−θ}) ].
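A quick Monte Carlo check of this joint CLT (an illustrative sketch; θ = 2, n = 500, the seed and the replication count are arbitrary choices) compares the sample covariance matrix of √n((X̄n, Zn) − (θ, e^{−θ})) with the matrix Σ above:

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 2.0, 500, 20000
    xbar = np.empty(reps); zn = np.empty(reps)
    for r in range(reps):
        x = rng.poisson(theta, n)
        xbar[r] = x.mean()
        zn[r] = np.mean(x == 0)
    # centered and scaled pair; its sample covariance should approach Sigma
    pair = np.sqrt(n) * np.column_stack([xbar - theta, zn - np.exp(-theta)])
    print(np.cov(pair, rowvar=False))
    print(np.array([[theta, -theta * np.exp(-theta)],
                    [-theta * np.exp(-theta), np.exp(-theta) * (1 - np.exp(-theta))]]))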
It is not as widely known that existence of a variance is not necessary for asymptotic
normality of partial sums of iid random variables. A CLT without a finite variance can
sometimes be useful. We present the general result below and then give an illustrative
example. Feller (1966) contains detailed information on the availability of CLTs without the
existence of a variance, along with proofs. First, we need a definition.
Definition 1.3.2 A function g : R → R is called slowly varying at ∞ if, for every t > 0,
limx→∞ g(tx)/g(x) = 1.
Examples of slowly varying functions are log x, x/(1 + x), and indeed any function with a
finite limit as x → ∞. But, for example, x or e−x are not slowly varying.
Theorem 1.3.3 Let X1, X2, . . . be iid from a CDF F on R. Let v(x) = ∫_{−x}^{x} y^2 dF(y).
Then there exist constants {an}, {bn} such that

    ( Σ_{i=1}^n Xi − an ) / bn →d N(0, 1)

if and only if v(x) is slowly varying at ∞.
If F has a finite second moment, then automatically v(x) is slowly varying at ∞. We present
an example below where asymptotic normality of the partial sums still holds, although the
summands do not have a finite variance.
Example 1.3.5 Suppose X1, X2, . . . are iid from a t-distribution with 2 degrees of freedom
(t(2)), which has a finite mean but not a finite variance. The density is given by
f(y) = c/(2 + y^2)^{3/2} for some positive c. Hence, by direct integration, for some other
constant k,

    v(x) = k [ arcsinh(x/√2) − x/√(2 + x^2) ].

Therefore, using the fact that arcsinh(x) = log(2x) + O(x^{-2}) as x → ∞, we get, for any
t > 0, v(tx)/v(x) → 1 after some algebra. It follows that for iid observations from a t(2)
distribution, on suitable centering and normalizing, the partial sums Σ_{i=1}^n Xi converge to
a normal distribution, although the Xi's do not have a finite variance. The centering can be
taken to be zero for the centered t-distribution; it can be shown that the normalizing required
is bn = √(n log n) (why?).
1.3.2 The CLT for the independent not necessarily iid case
Theorem 1.3.4 (Lindeberg-Feller) Let X1, X2, . . . be independent with means µj, finite
variances σj^2, and sn^2 = Σ_{j=1}^n σj^2. If, for every ε > 0,

    (1/sn^2) Σ_{j=1}^n ∫_{|x−µj| > ε sn} (x − µj)^2 dFj(x) → 0,                      (1.2)

then Σ_{i=1}^n (Xi − µi)/sn →d N(0, 1).
A proof can be seen on page 67 in Shao (2003). The condition (1.2) is called the
Lindeberg-Feller condition.
Example 1.3.6 Let X1, X2, . . . be independent variables such that Xj has the uniform
distribution on [−j, j], j = 1, 2, . . .. Let us verify that the conditions of Theorem 1.3.4 are
satisfied. Note that EXj = 0 and σj^2 = (1/(2j)) ∫_{−j}^{j} x^2 dx = j^2/3 for all j. Hence,

    sn^2 = Σ_{j=1}^n σj^2 = (1/3) Σ_{j=1}^n j^2 = n(n + 1)(2n + 1)/18.

For any ε > 0, n < ε sn for sufficiently large n, since lim_n n/sn = 0. Because |Xj| ≤ j ≤ n,
when n is sufficiently large,

    E(Xj^2 I{|Xj| > ε sn}) = 0 for every j ≤ n.

Consequently, Σ_{j=1}^n E(Xj^2 I{|Xj| > ε sn}) = 0 for all large n, and since sn → ∞, Lindeberg's
condition holds.
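The conclusion of Theorem 1.3.4 for this example can also be checked by simulation. The following sketch (illustrative; n = 300 and the number of replications are arbitrary) draws Xj ∼ U[−j, j] independently and compares quantiles of Σ_{j=1}^n Xj / sn with those of N(0, 1):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, reps = 300, 20000
    j = np.arange(1, n + 1)
    sn = np.sqrt(n * (n + 1) * (2 * n + 1) / 18.0)   # s_n^2 = sum j^2 / 3
    z = np.array([np.sum(rng.uniform(-j, j)) / sn for _ in range(reps)])
    # compare a few quantiles of sum X_j / s_n with the standard normal
    for q in (0.05, 0.5, 0.95):
        print(q, np.quantile(z, q), stats.norm.ppf(q))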
The Lindeberg-Feller theorem is a landmark theorem in probability and statistics. Generally,
however, it is hard to verify the Lindeberg-Feller condition. A simpler theorem is the following.
Theorem 1.3.5 (Liapounov) Let X1, X2, . . . be independent with means µj, variances σj^2, and
sn^2 = Σ_{j=1}^n σj^2. If for some δ > 0,

    (1/sn^{2+δ}) Σ_{j=1}^n E|Xj − µj|^{2+δ} → 0                                      (1.3)

as n → ∞, then

    Σ_{i=1}^n (Xi − µi) / sn →d N(0, 1).

A proof is given in Sen and Singer (1993). For instance, if sn → ∞, sup_{j≥1} E|Xj − µj|^{2+δ} < ∞
and n^{-1} sn^2 is bounded away from zero, then the condition of Liapounov's theorem is satisfied.
In practice, usually one tries to work with δ = 1 or 2 for algebraic convenience. It can be easily
checked that if the Xi are uniformly bounded and sn → ∞, the condition is immediately satisfied
with δ = 1.
Example 1.3.7 Let X1, X2, . . . be independent random variables. Suppose that Xi has the
binomial distribution BIN(pi, 1), i = 1, 2, . . .. For each i, EXi = pi and
E|Xi − EXi|^3 = (1 − pi)^3 pi + pi^3 (1 − pi) ≤ 2 pi(1 − pi). Hence
Σ_{i=1}^n E|Xi − EXi|^3 ≤ 2 Σ_{i=1}^n pi(1 − pi) = 2 sn^2, so the Liapounov condition (1.3) holds
with δ = 1 provided sn → ∞, i.e., provided Σ_i pi(1 − pi) diverges.
Theorem 1.3.6 (Hajek-Sidak) Suppose X1, X2, . . . are iid random variables with mean µ and
variance σ^2 < ∞. Let cn = (c_{n1}, c_{n2}, . . . , c_{nn}) be a vector of constants such that

    max_{1≤i≤n} c_{ni}^2 / Σ_{j=1}^n c_{nj}^2 → 0                                     (1.4)

as n → ∞. Then

    Σ_{i=1}^n c_{ni}(Xi − µ) / ( σ √(Σ_{j=1}^n c_{nj}^2) ) →d N(0, 1).

The condition (1.4) ensures that no coefficient dominates the vector cn, and it is referred to
as the Hajek-Sidak condition in the literature. For example, if cn = (1, 0, . . . , 0), then the
condition fails and so does the theorem. The Hajek-Sidak theorem has many applications,
including in the regression problem. Here is an important example.
Example 1.3.8 (Simple linear regression) Consider the simple linear regression model
yi = β0 + β1 xi + εi, where the εi's are iid with mean 0 and variance σ^2 but are not necessarily
normally distributed. The least squares estimate of β1 based on n observations is

    β̂1 = Σ_{i=1}^n (yi − ȳn)(xi − x̄n) / Σ_{i=1}^n (xi − x̄n)^2
        = β1 + Σ_{i=1}^n εi (xi − x̄n) / Σ_{i=1}^n (xi − x̄n)^2.

So β̂1 = β1 + Σ_{i=1}^n εi c_{ni} / Σ_{j=1}^n c_{nj}^2, where c_{ni} = xi − x̄n. Hence, by the
Hajek-Sidak theorem,

    √(Σ_{j=1}^n c_{nj}^2) (β̂1 − β1) / σ = Σ_{i=1}^n εi c_{ni} / ( σ √(Σ_{j=1}^n c_{nj}^2) ) →d N(0, 1),

provided

    max_{1≤i≤n} (xi − x̄n)^2 / Σ_{j=1}^n (xj − x̄n)^2 → 0

as n → ∞. For most reasonable designs, this condition is satisfied. Thus, the asymptotic
normality of the LSE (least squares estimate) is established under some conditions on the
design variables, an important result.
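The following sketch (illustrative only; the uniform design on [0, 10], the centered exponential errors, and all sample sizes are arbitrary assumptions) checks the Hajek-Sidak condition for one particular design and then verifies by simulation that the standardized LSE is close to N(0, 1) even with non-normal errors:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, reps, beta0, beta1, sigma = 200, 20000, 1.0, 2.0, 1.5
    x = rng.uniform(0, 10, n)                      # a fixed, "reasonable" design
    c = x - x.mean()
    print(np.max(c**2) / np.sum(c**2))             # Hajek-Sidak condition: should be small
    stat = np.empty(reps)
    for r in range(reps):
        y = beta0 + beta1 * x + sigma * (rng.exponential(1.0, n) - 1.0)  # non-normal errors
        b1 = np.sum((y - y.mean()) * c) / np.sum(c**2)
        stat[r] = np.sqrt(np.sum(c**2)) * (b1 - beta1) / sigma
    print(stats.kstest(stat, "norm").statistic)    # small value => close to N(0,1)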
then

    n^{-1/2} Σ_{i=1}^n (Xi − µi) →d N(0, Σ).
where a_{n1}, . . . , a_{nn} are the columns of the (p × n) matrix (X^T X)^{-1/2} X^T =: A. This
sequence is asymptotically normal if the vectors a_{n1}ε1, . . . , a_{nn}εn satisfy the Lindeberg
conditions. The norming matrix (X^T X)^{1/2} has been chosen to ensure that the vectors in the
display have covariance matrix σ^2 Ip for every n. The remaining condition is

    Σ_{i=1}^n ||a_{ni}||^2 E εi^2 I{||a_{ni}|| |εi| > ε} → 0.

This can be simplified to other conditions in several ways. Because Σ ||a_{ni}||^2 = tr(AA^T) = p,
it suffices that max_i E εi^2 I{||a_{ni}|| |εi| > ε} → 0, which is also equivalent to
max_i ||a_{ni}|| → 0. Alternatively, the expectation E εi^2 I{||a_{ni}|| |εi| > ε} can be bounded
by ε^{-k} E|εi|^{k+2} ||a_{ni}||^k, and a second set of sufficient conditions is

    Σ_{i=1}^n ||a_{ni}||^k → 0;  E|ε1|^k < ∞, k > 2.
The canonical CLT for the iid case says that if X1, X2, . . . are iid with mean zero and a finite
variance σ^2, then the sequence of partial sums Tn = Σ_{i=1}^n Xi obeys the central limit theorem
in the sense Tn/(σ√n) →d N(0, 1). There are some practical problems that arise in applications,
for example in sequential statistical analysis, where the number of terms present in a partial sum
is a random variable. Precisely, {N(t)}, t ≥ 0, is a family of (nonnegative) integer-valued random
variables, and we want to approximate the distribution of T_{N(t)}, where for each fixed n, Tn is
still the sum of n iid variables as above. The question is whether a CLT still holds under
appropriate conditions. Here is the Anscombe-Renyi theorem.
Theorem 1.3.8 (Anscombe-Renyi) Let Xi be iid with mean µ and a finite variance σ^2, and let
{Nn} be a sequence of (nonnegative) integer-valued random variables and {an} a sequence of
positive constants tending to ∞ such that Nn/an →p c, 0 < c < ∞, as n → ∞. Then,

    (T_{Nn} − Nn µ) / (σ √Nn) →d N(0, 1) as n → ∞.
complete set of coupons is T_{Nn} = X1 + · · · + X_{Nn}. By the Anscombe-Renyi theorem and
Slutsky's theorem, we have that (T_{Nn} − Nn µ)/(σ √(n ln n)) is approximately N(0, 1).
[On the distribution of Nn: let ti be the number of boxes needed to collect the i-th coupon after
i − 1 coupons have been collected. Observe that the probability of collecting a new coupon given
i − 1 coupons is pi = (n − i + 1)/n. Therefore, ti has a geometric distribution with expectation
1/pi, and Nn = Σ_{i=1}^n ti. By Theorem 1.2.5, we know

    Nn/(n ln n) →p (1/(n ln n)) Σ_{i=1}^n pi^{-1} = (1/(n ln n)) Σ_{i=1}^n n/(n − i + 1)
                = (1/ln n) Σ_{i=1}^n 1/i =: Hn/ln n.

Note that Hn is the harmonic number, and hence by using the asymptotics of the harmonic
numbers (Hn = ln n + γ + o(1); γ is the Euler constant), we obtain Nn/(n ln n) →p 1.]
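A simulation of this coupon-collector setting (an illustrative sketch; it assumes, as a hypothetical concrete model, that each purchased box carries an iid Exp(1) cost Xi, so µ = σ = 1, with n = 200 coupon types) checks that (T_{Nn} − Nn µ)/(σ √(n ln n)) is approximately standard normal:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n, reps = 200, 5000
    mu = sigma = 1.0                               # X_i ~ Exp(1): cost per box
    vals = np.empty(reps)
    for r in range(reps):
        # number of boxes N_n needed to collect all n coupon types
        Nn = sum(rng.geometric((n - i + 1) / n) for i in range(1, n + 1))
        cost = rng.exponential(mu, Nn)             # iid costs attached to the boxes
        vals[r] = (cost.sum() - Nn * mu) / (sigma * np.sqrt(n * np.log(n)))
    for q in (0.05, 0.5, 0.95):
        print(q, np.quantile(vals, q), stats.norm.ppf(q))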
The assumption that observed data X1 , X2 , . . . form an independent sequence is often one
of technical convenience. Real data frequently exhibit some dependence and at the least
some correlation at small lags. Exact sampling distributions for fixed n are even more
complicated for dependent data than in the independent case, and so asymptotics remain
useful. In this subsection, we present CLTs for some important dependence structures. The
cases of stationary m-dependence and without replacement sampling are considered.
Stationary m-dependence
We start with an example to illustrate that a CLT for sample means can hold even if the
summands are not independent.
where γi = Cov(X1, X_{1+i}). Therefore, τ^2 < ∞ if and only if (1/n) Σ_{i=1}^n (n − i)γi has a
finite limit, say ρ, in which case √n(X̄n − µ) →d N(0, σ^2 + ρ).
What is going on qualitatively is that (1/n) Σ_{i=1}^n (n − i)γi has a finite limit when |γi| → 0
adequately fast. Instances of this are when only a fixed finite number of the γi are nonzero, or
when γi is damped exponentially, i.e., γi = O(a^i) for some |a| < 1. It turns out that there are
general CLTs for sample averages under such conditions. The case of m-dependence is provided
below.
Definition 1.3.3 A stationary sequence {Xn } is called m-dependent for a given fixed m if
(X1 , . . . , Xi ) and (Xj , Xj+1 , . . .) are independent whenever j − i > m.
See Lehmann (1999) for a proof; m-dependent data arise either as standard time series
models or as models in their own right. For example, if {Zi } are i.i.d. random variables
and Xi = a1 Zi−1 + a2 Zi−2 , i ≥ 3, then {Xi } is 1-dependent. This is a simple moving
average process of use in time series analysis. A more general m-dependent sequence is
Xi = h(Zi , Zi+1 , . . . , Zi+m ) for some function h.
Example 1.3.12 Suppose the Zi are iid with a finite variance σ^2, and let Xi = (Zi + Zi+1)/2.
Then, obviously, Σ_{i=1}^n Xi = (Z1 + Z_{n+1})/2 + Σ_{i=2}^n Zi. Then, by Slutsky's theorem,
√n(X̄n − µ) →d N(0, σ^2). Notice that we write √n(X̄n − µ) as two parts, of which one part is
dominant and produces the CLT, and the other part is asymptotically negligible. This is
essentially the method of proof of the CLT for more general m-dependent sequences.
Dependent data also naturally arise in sampling without replacement from a finite population.
Central limit theorems are available and we will present them shortly. But let us start with an
illustrative example.
Theorem 1.3.10 Suppose X_{N1}, . . . , X_{Nn} is a sample drawn without replacement from the
finite population {X1, . . . , XN}, and suppose that one of the following conditions holds:
(a)

    max_{1≤i≤N} (Xi − X̄N)^2 / Σ_{i=1}^N (Xi − X̄N)^2 → 0,

and n/N → τ with 0 < τ < 1 as N → ∞;
(b)

    N max_{1≤i≤N} (Xi − X̄N)^2 / Σ_{i=1}^N (Xi − X̄N)^2 = O(1), as N → ∞.

Then,

    (X̄n − E(X̄n)) / √Var(X̄n) →d N(0, 1).
Example 1.3.14 Suppose X_{N1}, . . . , X_{Nn} is a sample without replacement from the set
{1, 2, . . . , N}, and let X̄n = Σ_{i=1}^n X_{Ni}/n. Then, by a direct calculation,

    E(X̄n) = (N + 1)/2,    Var(X̄n) = (N − n)(N + 1)/(12n).

Furthermore,

    N max_{1≤i≤N} (Xi − X̄N)^2 / Σ_{i=1}^N (Xi − X̄N)^2 = 3(N − 1)/(N + 1) = O(1).

Hence, by Theorem 1.3.10, (X̄n − E(X̄n))/√Var(X̄n) →d N(0, 1).
Suppose a sequence of CDFs FXn converges weakly to FX (FXn ⇒ FX) for some FX. Such a weak
convergence result is usually used to approximate the true value of FXn(x) at some fixed n and x
by FX(x). However, the weak convergence result by itself says absolutely nothing about the
accuracy of approximating FXn(x) by FX(x) for that particular value of n. To approximate
FXn(x) by FX(x) for a given finite n is a leap of faith unless we have some idea of the error
committed, i.e., |FXn(x) − FX(x)|. More specifically, if for a sequence of random variables
X1, . . . , Xn,

    (X̄n − E(X̄n)) / √Var(X̄n) →d Z ∼ N(0, 1),

then we need some idea of the error

    | P( (X̄n − E(X̄n)) / √Var(X̄n) ≤ x ) − Φ(x) |

in order to use the central limit theorem for a practical approximation with some degree of
confidence. The first result for the iid case in this direction is the classic Berry-Esseen theorem.
Typically, these accuracy measures give bounds on the error in the appropriate CLT for any
fixed n, making assumptions about the moments of Xi.
In the canonical iid case with a finite variance, the CLT says that √n(X̄ − µ)/σ converges in
law to N(0, 1). By Polya's theorem, the uniform error
∆n = sup_{−∞<x<∞} |P(√n(X̄ − µ)/σ ≤ x) − Φ(x)| → 0 as n → ∞. Bounds on ∆n for any given n
are called uniform bounds.
The following results are the classic Berry-Esseen uniform bound and an extension of the
Berry-Esseen inequality to the case of independent but not iid variables; a proof can be seen in
Petrov (1975). Under a higher-order (third) moment assumption, the Berry-Esseen inequality
asserts the rate O(n^{-1/2}) for this convergence.
Theorem 1.3.11 (i) (Berry-Esseen; iid case) Let X1, . . . , Xn be iid with E(X1) = µ,
Var(X1) = σ^2, and β3 = E|X1 − µ|^3 < ∞. Then there exists a universal constant C, not
depending on n or the distribution of the Xi, such that

    sup_x | P( √n(X̄n − µ)/σ ≤ x ) − Φ(x) | ≤ C β3 / (σ^3 √n).

(ii) (independent but not iid case) Let X1, . . . , Xn be independent with E(Xi) = µi,
Var(Xi) = σi^2, and β_{3i} = E|Xi − µi|^3 < ∞. Then there exists a universal constant C*, not
depending on n or the distributions of the Xi, such that

    sup_x | P( (X̄n − E(X̄n)) / √Var(X̄n) ≤ x ) − Φ(x) | ≤ C* Σ_{i=1}^n β_{3i} / ( Σ_{i=1}^n σi^2 )^{3/2}.
It is the best possible rate in the sense of not being subject to improvement without narrowing
the class of distribution functions considered. For some specific underlying CDFs FX, better
rates of convergence in the CLT may be possible. This issue will become clearer when we discuss
asymptotic expansions for P(√n(X̄n − µ)/σ ≤ x). In Theorem 1.3.11-(i), the universal constant
C may be taken as C = 0.8.
Example 1.3.15 The Berry-Esseen bound is uniform in x, and it is valid for any n ≥ 1. While
these are positive features of the theorem, it may not be possible to establish that ∆n ≤ ε for
some preassigned ε > 0 by using the Berry-Esseen theorem unless n is very large. Let us see an
illustrative example. Suppose X1, . . . , Xn are iid BIN(p, 1) and n = 100. Suppose we want the
CLT approximation to be accurate to within an error of ∆n = 0.005. In the Bernoulli case,
β3 = pq(1 − 2pq), where q = 1 − p. Using C = 0.8, the uniform Berry-Esseen bound is

    ∆n ≤ 0.8 pq(1 − 2pq) / ( (pq)^{3/2} √n ).

This is less than the prescribed ∆n = 0.005 iff pq > 0.4784, which does not hold for any
0 < p < 1. Even for p = 0.5, the bound is less than or equal to 0.005 only when n > 25,000,
which is a very large sample size. Of course, this is not necessarily a flaw of the Berry-Esseen
inequality itself, because the desire to have a uniform error of at most 0.005 is a tough demand,
and a fairly large value of n is probably needed to have such a small error in the CLT.
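The arithmetic in this example is easy to reproduce (a small sketch; the helper name be_bound_bernoulli and the particular values of n are illustrative):

    import numpy as np

    def be_bound_bernoulli(p, n, C=0.8):
        # C * beta_3 / (sigma^3 * sqrt(n)), with beta_3 = p*q*(1 - 2*p*q) and sigma^2 = p*q
        q = 1.0 - p
        return C * p * q * (1 - 2 * p * q) / ((p * q) ** 1.5 * np.sqrt(n))

    print(be_bound_bernoulli(0.5, 100))    # ~0.08, far above the target 0.005
    print(be_bound_bernoulli(0.5, 25600))  # ~0.005: tens of thousands of observations needed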
Example 1.3.16 As an example of independent variables that are not iid, consider
Xi ∼ BIN(i^{-1}, 1), i ≥ 1, and let Sn = Σ_{i=1}^n Xi. Then E(Sn) = Σ_{i=1}^n i^{-1} and
Var(Sn) = Σ_{i=1}^n (i − 1)/i^2. Observe now that Σ_{i=1}^n (i − 1)/i^2 = log n + O(1) and
Σ_{i=1}^n (i − 1)(i^2 − 2i + 2)/i^4 = log n + O(1). Substituting these back into the Berry-Esseen
bound, one obtains with some minor algebra that ∆n = O((log n)^{-1/2}).
For x sufficiently large, while n remains fixed, the quantities FXn(x) and FX(x) each become so
close to 1 that the bound given in Theorem 1.3.11 is too crude. There has been a parallel
development of bounds on the error in the CLT at a particular x, as opposed to bounds on the
uniform error. Such bounds are called local Berry-Esseen bounds. Many different types of local
bounds are available. We present here just one.
Such local bounds are useful in proving convergence of global error criteria such as
∫ |FXn(x) − Φ(x)|^p dx, or for establishing approximations to the moments of FXn. Uniform
error bounds would be useless for these purposes. If the third absolute moments are finite, an
explicit value for the universal constant D can be chosen to be 31. A good reference for local
bounds is Serfling (1980).
Error bounds for normal approximations to many other types of statistics besides sample means
are known, such as the result for statistics that are smooth functions of means. The order of the
error depends on the conditions one assumes on the nature of the function. We will discuss this
problem in Chapter 2 after we introduce the Delta method.
We now consider the important topic of writing asymptotic expansions for the CDFs of centered
and normalized statistics. When the statistic is a sample mean, let Zn = √n(X̄n − µ)/σ and
FZn(x) = P(Zn ≤ x), where X1, . . . , Xn are iid with a CDF F having mean µ and variance
σ^2 < ∞.
The CLT says that FZn(x) → Φ(x) for every x, and the Berry-Esseen theorem says
|FZn(x) − Φ(x)| = O(n^{-1/2}) uniformly in x if X has three moments. If we change the
approximation Φ(x) to Φ(x) + C1(F) p1(x) φ(x)/√n for some suitable constant C1(F) and a
suitable polynomial p1(x), we can assert that

    | FZn(x) − Φ(x) − C1(F) p1(x) φ(x)/√n | = O(n^{-1}),

uniformly in x. Expansions of the form

    FZn(x) = Φ(x) + Σ_{s=1}^k qs(x)/n^{s/2} + o(n^{-k/2}) uniformly in x,
are known as Edgeworth expansions for Zn. One needs some conditions on F and enough moments
of X to carry the expansion to k terms for a given k. An excellent reference for the main results
on Edgeworth expansions is Hall (1992). The coefficients in the Edgeworth expansion for means
depend on the cumulants of F, which share a functional relationship with the sequence of moments
of F. Cumulants are also useful in many other contexts, for example, the saddlepoint approximation.
We start with the definition and recursive representations of the sequence of cumulants
of a distribution. The term cumulant was coined by Fisher (1931).
Definition 1.3.4 Let X ∼ F have a finite m.g.f. ψ(t) in some neighborhood of zero, and let
K(t) = log ψ(t) when it exists. The rth cumulant of X (or of F) is defined as
κr = (d^r/dt^r) K(t)|_{t=0}.
Equivalently, the cumulants of X are the coefficients in the power series expansion
K(t) = Σ_{n=1}^∞ κn t^n/n! within the radius of convergence of K(t). By equating coefficients in
e^{K(t)} with those in ψ(t), it is easy to express the first few moments (and therefore the first
few central moments) in terms of the cumulants. Indeed, letting ci = E(X^i) and µi = E(X − µ)^i,
one obtains the expressions

    c1 = µ = κ1,  µ2 = σ^2 = κ2,  µ3 = κ3,  µ4 = κ4 + 3κ2^2,

which result in

    κ1 = µ,  κ2 = σ^2,  κ3 = µ3,  κ4 = µ4 − 3µ2^2.

The higher-order ones are quite complex but can be found in Kendall's Advanced Theory of
Statistics.
Now let us consider the expansion for (functions of) means. To illustrate the idea, let us consider
Zn. Assume that the m.g.f. of W = (X1 − µ)/σ is finite and positive in a neighborhood of 0. The
m.g.f. of Zn is equal to

    ψn(t) = exp{ n K(t/√n) } = exp{ t^2/2 + Σ_{j=3}^∞ κj t^j / (j! n^{(j−2)/2}) },

where K(t) is the cumulant generating function of W and the κj's are the corresponding cumulants
(κ1 = 0, κ2 = 1, κ3 = EW^3 and κ4 = EW^4 − 3). Using the series expansion for e^x, we obtain
that

    ψn(t) = e^{t^2/2} + n^{-1/2} r1(t) e^{t^2/2} + · · · + n^{-j/2} rj(t) e^{t^2/2} + · · · ,   (1.5)
The CLT for means fails to capture possible skewness in the distribution of the mean
for a given finite n because all normal distributions are symmetric. By expanding the CDF
to the next term, the skewness can be captured. Expansion to another term also adjusts for
the kurtosis. Although expansions to any number of terms are available under existence of
enough moments, usually an expansion to two terms after the leading term is of the most
practical importance. Indeed, expansions to three terms or more can be unstable due to the
presence of the polynomials in the expansions. We present the two-term expansion next. A
rigorous statement of the Edgeworth expansion for a more general Zn will be introduced in
the next chapter, after the multivariate Delta theorem is developed. The proof can be found in
Hall (1992).
FZn(x) = Φ(x) + C1(F) p1(x) φ(x)/√n + [ C2(F) p2(x) + C3(F) p3(x) ] φ(x)/n + o(n^{-1})

uniformly in x, where

    C1(F) = E(X − µ)^3 / (6σ^3),  C2(F) = ( E(X − µ)^4/σ^4 − 3 ) / 24,
    C3(F) = [E(X − µ)^3]^2 / (72σ^6),
    p1(x) = 1 − x^2,  p2(x) = 3x − x^3,  p3(x) = 10x^3 − 15x − x^5.

Note that the terms C1(F) and C2(F) can be viewed as skewness and kurtosis corrections for the
departure of FZn(x) from normality, respectively. It is useful to mention here that the
corresponding formal two-term expansion for the density of Zn is given by

    φ(z) + n^{-1/2} C1(F)(z^3 − 3z)φ(z)
         + n^{-1} [ C3(F)(z^6 − 15z^4 + 45z^2 − 15) + C2(F)(z^4 − 6z^2 + 3) ] φ(z).
Example 1.3.18 Suppose X1, . . . , Xn are iid Exp(λ) and we wish to test H0 : λ = 1 vs.
H1 : λ > 1. The UMP test rejects H0 for large values of Σ_{i=1}^n Xi. If the cutoff value is found
by using the CLT, then the test rejects H0 for X̄n > 1 + k/√n, where k = zα. The power at an
alternative λ equals

    Power = Pλ( X̄n > 1 + k/√n ) = Pλ( (X̄n − λ)/(λ/√n) > (1 + k/√n − λ)/(λ/√n) )
          = 1 − Pλ( (X̄n − λ)/(λ/√n) ≤ (√n(1 − λ) + k)/λ ) → 1.

For a more useful approximation, the Edgeworth expansion is used. For example, the general
one-term Edgeworth expansion for sample means,

    FZn(x) = Φ(x) + C1(F)(1 − x^2)φ(x)/√n + O(n^{-1}),

can be used to approximate the power expression above. Algebra reduces the one-term Edgeworth
expression to the formal approximation

    Power ≈ Φ( (√n(λ − 1) − k)/λ )
          + (1/(3√n)) ( (√n(λ − 1) − k)^2/λ^2 − 1 ) φ( (√n(λ − 1) − k)/λ ).
This is a much more useful approximation than simply saying that for large n the power is
close to 1.
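As a numerical illustration (a sketch with arbitrary choices n = 50, α = 0.05, λ = 1.3 and seed; none of these values are from the notes), one can compare the simulated power with the plain CLT approximation Φ(u) and the one-term Edgeworth-corrected approximation above:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n, alpha, lam = 50, 0.05, 1.3
    k = stats.norm.ppf(1 - alpha)
    # simulated power of the test "reject if xbar > 1 + k/sqrt(n)" under Exp(lam)
    xbar = rng.exponential(lam, (100000, n)).mean(axis=1)
    print(np.mean(xbar > 1 + k / np.sqrt(n)))
    # CLT-only and one-term Edgeworth approximations to the power
    u = (np.sqrt(n) * (lam - 1) - k) / lam
    print(stats.norm.cdf(u))
    print(stats.norm.cdf(u) + (u**2 - 1) * stats.norm.pdf(u) / (3 * np.sqrt(n)))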
For constructing asymptotically correct confidence intervals for a parameter on the basis
of an asymptotically normal statistic, the first-order approximation to the quantiles of the
statistic (suitably centered and normalized) comes from using the central limit theorem. Just
as Edgeworth expansions produce more accurate expansions for the CDF of the statistic than
does just the central limit theorem, higher-order expansions for the quantiles produce more
accurate approximations than does just the normal quantile. These higher-order expansions
for quantiles are essentially obtained from recursively inverted Edgeworth expansions, start-
ing with the normal quantile as the initial approximation. They are called Cornish-Fisher
expansions. We briefly present the case of sample means. The standardized cumulants are the
quantities ρr = κr/σ^r.
Theorem 1.3.14 Let X1, . . . , Xn be iid with absolutely continuous CDF F having a finite m.g.f.
in some open neighborhood of zero. Let Zn = √n(X̄n − µ)/σ and Hn(x) = PF(Zn ≤ x). Then the
upper αth quantile w_{nα} = Hn^{-1}(1 − α) satisfies

    w_{nα} = zα + ρ3(zα^2 − 1)/(6√n) + [ ρ4(zα^3 − 3zα)/24 − ρ3^2(2zα^3 − 5zα)/36 ]/n + o(n^{-1}).

Using Taylor expansions at zα for Φ(w_{nα}), p1(w_{nα})φ(w_{nα}) and p2(w_{nα})φ(w_{nα}), and
the fact that φ'(x) = −xφ(x), this theorem is obtained by inverting the Edgeworth expansion.
Example 1.3.19 Let Wn ∼ χ^2_n and Zn = (Wn − n)/√(2n) →d N(0, 1) as n → ∞, so a
first-order approximation to the upper αth quantile of Wn is just n + zα √(2n). The
Cornish-Fisher expansion should produce a more accurate approximation. To verify this, we will
need the standardized cumulants, which are ρ3 = 2√2 and ρ4 = 12. Now substituting into the
theorem above, we get the two-term Cornish-Fisher expansion

    χ^2_{n,α} ≈ n + zα √(2n) + (2/3)(zα^2 − 1) + (zα^3 − 7zα)/(9√(2n)).
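The improvement is easy to verify numerically (an illustrative sketch comparing the exact χ^2_{n,α} quantile with the first-order and Cornish-Fisher approximations; the values of n and α are arbitrary):

    import numpy as np
    from scipy import stats

    alpha = 0.05
    z = stats.norm.ppf(1 - alpha)
    for n in (10, 50, 200):
        exact = stats.chi2.ppf(1 - alpha, n)           # exact upper-alpha quantile
        first = n + z * np.sqrt(2 * n)                 # first-order (normal) approximation
        cf = first + (2.0 / 3.0) * (z**2 - 1) + (z**3 - 7 * z) / (9 * np.sqrt(2 * n))
        print(n, exact, first, cf)                     # cf should be much closer to exact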
1.3.7 The law of the iterated logarithm
The law of the iterated logarithm (LIL) complements the CLT by describing the precise extremes
of the fluctuations of the sequence of random variables

    Σ_{i=1}^n (Xi − µ) / (σ n^{1/2}), n = 1, 2, . . . .

The CLT states that this sequence converges in law to N(0, 1), but does not otherwise provide
information about the fluctuations of these random variables about the expected value 0. The LIL
asserts that the extreme fluctuations of this sequence are essentially of the exact order of
magnitude (2 log log n)^{1/2}. The classical iid case is covered by the following theorem.
Theorem 1.3.15 (Hartman-Wintner) Let {Xi} be iid with mean µ and finite variance σ^2. Then

    lim sup_{n→∞} Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} = 1 wp1;
    lim inf_{n→∞} Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} = −1 wp1.
In other words: with probability 1, for any ε > 0, only finitely many of the events

    Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} > 1 + ε,   n = 1, 2, . . . ;
    Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} < −1 − ε,  n = 1, 2, . . .

are realized, whereas infinitely many of the events

    Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} > 1 − ε,   n = 1, 2, . . . ;
    Σ_{i=1}^n (Xi − µ) / (2σ^2 n log log n)^{1/2} < −1 + ε,  n = 1, 2, . . .

occur. That is, with probability 1, for any ε > 0, all but finitely many of these fluctuations fall
within the boundaries ±(1 + ε)(2 log log n)^{1/2}, and moreover the boundaries
±(1 − ε)(2 log log n)^{1/2} are reached infinitely often.
In the LIL, what is going on is that, for a given n, there is some collection of sample points ω
for which the partial sum Sn − nµ stays in a specific √n-neighborhood of zero. But this
collection keeps changing with changing n, and any particular ω is sometimes in the collection and
at other times out of it. Such unlucky values of n are unbounded, giving rise to the LIL phenomenon.
The exact rate √(n log log n) is a technical aspect and cannot be explained intuitively.
The LIL also complements, and indeed refines, the SLLN (assuming existence of second moments).
In terms of the average dealt with in the SLLN, n^{-1} Σ_{i=1}^n Xi − µ, the LIL asserts that the
extreme fluctuations are essentially of the exact order of magnitude (2σ^2 log log n / n)^{1/2}.
Thus, with probability 1, for any ε > 0, the interval X̄n ± (1 + ε)(2σ^2 log log n / n)^{1/2}
contains µ with only finitely many exceptions. In this asymptotic fashion, the LIL provides the
basis for concepts of 100% confidence intervals. The LIL also provides an example of almost sure
convergence being truly stronger than convergence in probability: since (Sn − nµ)/(σ√n) = Op(1),
we have (Sn − nµ)/√(2n log log n) →p 0. But, by the LIL, (Sn − nµ)/√(2n log log n) does not
converge a.s. to zero. Hence, convergence in probability is weaker than almost sure convergence,
in general.
References
Billingsley, P. (1995). Probability and Measure, 3rd edition, John Wiley, New York.
Petrov, V. (1975). Limit Theorems for Sums of Independent Random Variables (translation from Russian),
Springer-Verlag, New York.
Serfling, R. (1980). Approximation Theorems of Mathematical Statistics, John Wiley, New York.
Shao, J. (2003). Mathematical Statistics, 2nd ed. Springer, New York.
Van der Vaart, A. W. (2000). Asymptotic Statistics, Cambridge University Press.
Chapter 2
The delta theorem says how to approximate the distribution of a transformation of a statistic
in large samples if we can approximate the distribution of the statistic itself. We first treat
the univariate case and present the basic delta theorem as follows.
Theorem 2.1.1 (Delta Theorem) Let Tn be a sequence of statistics such that

    √n(Tn − θ) →d N(0, σ^2(θ)).                                                      (2.1)

Let g be a real-valued function that is differentiable at θ. Then

    √n[ g(Tn) − g(θ) ] →d N(0, [g'(θ)]^2 σ^2(θ)).
Proof. First note that it follows from the assumed CLT for Tn that Tn converges in probability
to θ, and hence Tn − θ = op(1). The proof of the theorem now follows from a simple application
of Taylor's theorem, which says that

    g(x) = g(x0) + g'(x0)(x − x0) + o(|x − x0|) as x → x0,

if g is differentiable at x0. Therefore

    g(Tn) = g(θ) + g'(θ)(Tn − θ) + op(Tn − θ).

That the remainder term is op(Tn − θ) follows from our observation that Tn − θ = op(1) and
Lemma 1.2.1. Taking g(θ) to the left and multiplying both sides by √n, we obtain

    √n [ g(Tn) − g(θ) ] = √n(Tn − θ) g'(θ) + √n op(Tn − θ).

Observing that √n(Tn − θ) = Op(1) by the assumption of the theorem, we see that the last term
on the right-hand side is √n op(Tn − θ) = op(1). Hence, an application of Slutsky's theorem
gives √n[ g(Tn) − g(θ) ] →d N(0, [g'(θ)]^2 σ^2(θ)).
Remark 2.1.2 In fact, the Delta Theorem does not require the asymptotic distribution of Tn to be
normal. From the foregoing proof, we see that if an(Tn − θ) →d Y, where an is a sequence of
positive numbers with lim_{n→∞} an = ∞, and the other conditions of the Delta Theorem hold, then

    an [ g(Tn) − g(θ) ] →d g'(θ) Y.
Example 2.1.1 Suppose X1, . . . , Xn are iid with mean µ and variance σ^2. By taking Tn = X̄n,
θ = µ, σ^2(θ) = σ^2, and g(x) = x^2, one gets, for µ ≠ 0,

    √n(X̄n^2 − µ^2) →d N(0, 4µ^2 σ^2).

For µ = 0, nX̄n^2/σ^2 →d χ^2_1 by the continuous mapping theorem.
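A short simulation (illustrative; µ = 2, σ = 1, n = 400 and the seed are arbitrary) checks both regimes of this example, the delta-method variance 4µ^2σ^2 for µ ≠ 0 and the χ^2_1 limit for µ = 0:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    mu, sigma, n, reps = 2.0, 1.0, 400, 20000
    xbar = mu + sigma * rng.standard_normal((reps, n)).mean(axis=1)
    stat = np.sqrt(n) * (xbar**2 - mu**2)
    print(stat.var(), 4 * mu**2 * sigma**2)        # delta-method variance 4*mu^2*sigma^2
    # at mu = 0 the right scaling is n, and n*xbar^2/sigma^2 is approximately chi^2_1
    xbar0 = sigma * rng.standard_normal((reps, n)).mean(axis=1)
    print(stats.kstest(n * xbar0**2 / sigma**2, "chi2", args=(1,)).statistic)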
Example 2.1.2 For estimating p^2, suppose that we have the choice between (a) X ∼ Bin(n, p^2)
and (b) Y ∼ Bin(n, p), and that as estimators of p^2 in the two cases we would use, respectively,
X/n and (Y/n)^2. Then we have

    √n( X/n − p^2 ) →d N(0, p^2(1 − p^2));
    √n( (Y/n)^2 − p^2 ) →d N(0, 4p^3 q),  where q = 1 − p.

At least for large n, X/n will thus be more accurate than (Y/n)^2 provided
p^2(1 − p^2) < 4p^3 q, i.e., provided 1 + p < 4p, i.e., p > 1/3.
Example 2.1.3 Suppose Tn is a sequence of statistics satisfying (2.1) and that we are interested
in the limiting behavior of |Tn|. Since g(θ) = |θ| is differentiable with derivative g'(θ) = ±1
at all values of θ ≠ 0, it follows from Theorem 2.1.1 that

    √n(|Tn| − |θ|) →d N(0, σ^2(θ)) for all θ ≠ 0.

When θ = 0, Theorem 2.1.1 does not apply, but it is easy to determine the limit behavior of |Tn|
directly. With |Tn| − |θ| = |Tn|, we have

    P(√n |Tn| < a) = P(−a < √n Tn < a) → Φ(a/σ) − Φ(−a/σ) = P(σ χ1 < a),

where χ1 = √(χ^2_1) has the distribution of the absolute value of a standard normal variable. The
convergence rate of |Tn| therefore continues to be 1/√n, but the form of the limit distribution is
χ1 rather than normal.
2.2 Higher-order expansions
There are instances in which g'(θ) = 0 (at least for some special values of θ), in which case the
limiting distribution of g(Tn) is determined by the third term in the Taylor expansion. Thus, if
g'(θ) = 0, then

    g(Tn) = g(θ) + ((Tn − θ)^2 / 2) g''(θ) + op((Tn − θ)^2)                           (2.2)

and hence

    n( g(Tn) − g(θ) ) = n ((Tn − θ)^2 / 2) g''(θ) + op(1) →d ( g''(θ) σ^2(θ) / 2 ) χ^2_1.

Formally, the following result generalizes Theorem 2.1.1 to include this case.
Formally, the following result generalizes Theorem 2.1.1 to include this case.
√ d
n(Tn − θ) → N (0, σ 2 (θ)).
Let g be a real-valued function differentiable k(≥ 1) at θ with g (k) (θ) 6= 0 but g (j) (θ) = 0 for
j < k. Then
√ d 1
( n)k [g(Tn ) − g(θ)] → [g (k) (θ)][N (0, σ 2 (θ))]k .
k!
Proof. The argument is similar to that for Theorem 2.1.1, this time using the higher-order
Taylor expansions as in (2.2). The remaining details are left as an exercise.
Example 2.2.1 (i) Example 2.1.1 revisited. For µ = 0, nX̄n^2/σ^2 →d (1/2) · 2 · [N(0, 1)]^2 = χ^2_1.
(ii) Suppose that √n X̄n converges in law to a standard normal distribution. Now consider the
limiting behavior of cos(X̄n). Because the derivative of cos(x) is zero at x = 0, the proof of
Theorem 2.1.1 yields that √n(cos(X̄n) − 1) converges to zero in probability (or, equivalently, in
law). Thus, √n is not the right norming rate for the random sequence cos(X̄n) − 1. A more
informative statement is that −2n(cos(X̄n) − 1) converges in law to χ^2_1.
2.3 Multivariate version of the delta theorem
Next we state the multivariate delta theorem, which is similar to the univariate case.
Theorem 2.3.1 Suppose {Tn} is a sequence of k-dimensional random vectors such that
√n(Tn − θ) →d Nk(0, Σ(θ)). Let g : R^k → R^m be once differentiable at θ with gradient matrix
∇g(θ). Then

    √n( g(Tn) − g(θ) ) →d Nm( 0, ∇^T g(θ) Σ(θ) ∇g(θ) ).

Proof. This theorem can be proved by using the Cramer-Wold device. It suffices to show that for
every c ∈ R^m,

    √n c^T ( g(Tn) − g(θ) ) →d N( 0, c^T ∇^T g(θ) Σ(θ) ∇g(θ) c ).

The remaining steps are similar to the univariate case, using Corollary 1.2.1, and are left as an
exercise.
The multivariate delta theorem is useful in finding the limiting distribution of sample moments.
We next give some of the most frequently used examples.
Example 2.3.1 (Sample variance revisited) Suppose X1, . . . , Xn are iid with mean µ, variance
σ^2, and E(X1^4) < ∞. Take

    Tn = ( X̄n, n^{-1} Σ_{i=1}^n Xi^2 )^T,   θ = ( EX1, EX1^2 )^T,
    Σ = [ Var(X1)        Cov(X1, X1^2)
          Cov(X1^2, X1)  Var(X1^2) ],

and the function g(u, v) = v − u^2, which is obviously differentiable at the point θ with derivative
g'(u, v) = (−2u, 1). It follows that

    √n ( n^{-1} Σ_{i=1}^n (Xi − X̄n)^2 − Var(X1) ) →d N( 0, (−2µ, 1) Σ (−2µ, 1)^T ).

Because the sample variance does not depend on location, we may as well assume µ = 0 (or,
equivalently, work with Xi − µ). Thus, it is readily seen that

    √n(Sn^2 − σ^2) →d N(0, µ4 − σ^4),

where µ4 denotes the centered fourth moment of X1. If the parent distribution is normal, then
µ4 = 3σ^4 and √n(Sn^2 − σ^2) →d N(0, 2σ^4). In view of Slutsky's theorem, the same result is
valid for the unbiased version (n/(n − 1)) Sn^2 of the sample variance. From here, by another use
of the univariate delta theorem, one sees that

    √n(Sn − σ) →d N( 0, (µ4 − σ^4)/(4σ^2) ).
In the previous example the asymptotic distribution of √n(Sn^2 − σ^2) was obtained by the delta
method. Actually, it can also, and more easily, be derived by a direct application of the CLT and
Slutsky's theorem, as we illustrated in Example 1.3.2. Thus, it is not always a good idea to apply
the general theorems. However, in many cases the delta method is a good way to package the
mechanics of Taylor expansions in a transparent way. The following are more examples.
Example 2.3.2 (The joint limit distribution) (i) Consider the joint limit distribution
of the sample variance Sn2 and the t-statistic X̄n /Sn . Again for the limit distribution it does
not make a difference whether we use a factor n or n − 1 to standardize Sn2 . For simplicity
we use n. Then (Sn2 , X̄n /Sn ) can be written as g(X̄n , Xn2 ) for the map g : R2 → R2 given by
2 u
g(u, v) = v − u , .
(v − u2 )1/2
The joint limit distribution of √n(X̄n − α1, (1/n)ΣᵢXi² − α2) is derived in the preceding example, where αk denotes the kth moment of X1. The function g is differentiable at θ = (EX1, EX1²) provided that σ² is positive, with derivative
[g′_(α1,α2)]^T =
[ −2α1                                                 1
  (α2 − α1²)^{−1/2} + α1²(α2 − α1²)^{−3/2}             −α1/[2(α2 − α1²)^{3/2}] ].
It follows that the sequence √n(Sn² − σ², X̄n/Sn − α1/σ) is asymptotically bivariate normally distributed, with mean zero and covariance matrix
[g′_(α1,α2)]^T [ α2 − α1²     α3 − α1α2
                 α3 − α1α2    α4 − α2²  ] g′_(α1,α2).
It is easy but uninteresting to compute this explicitly. A direct application of this result is to analyze the so-called effect size θ = µ/σ; a natural estimator of θ is X̄n/Sn.
(ii) A more commonly seen case is to derive the joint limit distribution of X̄n and Sn2 .
Then, by using the multivariate delta theorem and some algebra,
√n( X̄n − µ, Sn² − σ² )^T →d N2( 0, [ σ²    µ3
                                      µ3    µ4 − σ⁴ ] ).
Thus X̄n and Sn² are asymptotically independent if the population skewness is 0 (i.e., µ3 = 0).
2.4 Variance-stabilizing transformations

Suppose √n(Tn − θ) →d N(0, σ²(θ)), so that by the delta theorem √n(g(Tn) − g(θ)) →d N(0, [g′(θ)]²σ²(θ)) for smooth g. Much effort has centered on finding transformations, say g(θ̂), that (i) have an asymptotic variance function free of θ, eliminating the annoying need to use a plug-in estimate, (ii) have skewness ≈ 0 in some precise sense, and (iii) have bias ≈ 0 as an estimate of g(θ), again in some precise sense. A transformation g achieving (i) is called a variance-stabilizing transformation (VST); it is characterized by the equation
[g′(θ)]² σ²(θ) = k²
for some constant k > 0.
As long as there is an analytical formula for the asymptotic variance function in the
limiting normal distribution for Tn , and as long as the reciprocal of its square root can be
integrated in closed form, a VST can be written down. Next, we work out some examples of
VSTs and show how they are used to construct asymptotically correct confidence intervals
for an original parameter of interest.
Example 2.4.1 Suppose X1, X2, . . . are iid Poisson(θ). Then √n(X̄n − θ) →d N(0, θ). Thus σ(θ) = √θ, and so a variance-stabilizing transformation is
g(θ) = ∫ k/√θ dθ = 2k√θ.
Taking k = 1/2 gives that g(θ) = √θ is a variance-stabilizing transformation for the Poisson case. Indeed, √n(√X̄n − √θ) →d N(0, 1/4). Thus, an asymptotically correct confidence interval for √θ is √X̄n ± zα/(2√n). This implies that an asymptotically correct confidence interval for θ is
{ (√X̄n − zα/(2√n))², (√X̄n + zα/(2√n))² }.
Of course, if √X̄n − zα/(2√n) < 0, that expression should be replaced by 0. This confidence interval is different from the more traditional interval, namely X̄n ± (zα/√n)√X̄n, which goes by the name of the Wald interval. In fact, the actual coverage properties of the interval based on the VST are significantly better than those of the Wald interval.
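A small coverage simulation along the following lines (a sketch; θ, n, the confidence level, and the use of the two-sided quantile Φ^{−1}(1 − α/2) are my illustrative choices) can be used to compare the two intervals.

```python
import numpy as np
from scipy.stats import norm

# Coverage of the VST interval (sqrt(Xbar) +- z/(2*sqrt(n)))^2 versus the
# Wald interval Xbar +- z*sqrt(Xbar/n) for Poisson(theta) data.
rng = np.random.default_rng(1)
theta, n, reps, alpha = 2.0, 15, 50000, 0.05
z = norm.ppf(1 - alpha / 2)
xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)

lo_v = np.maximum(np.sqrt(xbar) - z / (2 * np.sqrt(n)), 0) ** 2
hi_v = (np.sqrt(xbar) + z / (2 * np.sqrt(n))) ** 2
lo_w = xbar - z * np.sqrt(xbar / n)
hi_w = xbar + z * np.sqrt(xbar / n)

print("VST coverage :", np.mean((lo_v <= theta) & (theta <= hi_v)))
print("Wald coverage:", np.mean((lo_w <= theta) & (theta <= hi_w)))
```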
Example 2.4.2 (Sample correlation revisited) Consider the same assumption in Ex-
ample 1.2.11. Firstly, by using the multivariate delta theorem, we can derive the limiting
distribution of the sample correlation coefficient rn . By taking
Tn = ( X̄n, Ȳn, (1/n)ΣᵢXi², (1/n)ΣᵢYi², (1/n)ΣᵢXiYi )^T
and applying the multivariate delta theorem with an appropriate function, one obtains √n(rn − ρ) →d N(0, v²) for some v > 0, provided that the fourth moments of (X, Y) exist. It is not possible to write a clean formula for v² in general. If the (Xi, Yi) are iid N2(µX, µY, σX², σY², ρ), then the calculation can be done in closed form and
√n(rn − ρ) →d N(0, (1 − ρ²)²).
However, it does not work well to base an asymptotic confidence interval directly on this
result. The transformation
g(ρ) = ∫ 1/(1 − ρ²) dρ = (1/2) log[(1 + ρ)/(1 − ρ)] = arctanh(ρ)
is a VST for rn. This is the famous arctanh transformation of Fisher, popularly known as Fisher's z. Thus, the sequence √n(arctanh(rn) − arctanh(ρ)) converges in law to the N(0, 1) distribution. Confidence intervals for ρ are computed from the arctanh transformation as
( tanh(arctanh(rn) − zα/√n), tanh(arctanh(rn) + zα/√n) ),
rather than by using the asymptotic distribution of rn itself. The arctanh transformation of rn attains normality much more quickly than rn itself. (Interested students may run a small simulation in R to verify this.)
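A sketch of such a simulation is given below, written in Python rather than R; the correlation ρ, sample size, and the use of sample skewness as a rough normality diagnostic are my choices.

```python
import numpy as np
from scipy.stats import skew

# Compare the sampling distributions of r_n and arctanh(r_n) (Fisher's z)
# for bivariate normal data with correlation rho.
rng = np.random.default_rng(2)
rho, n, reps = 0.7, 30, 5000
cov = [[1.0, rho], [rho, 1.0]]
r = np.empty(reps)
for b in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    r[b] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

print("skewness of r_n         :", skew(r))              # noticeably skewed
print("skewness of arctanh(r_n):", skew(np.arctanh(r)))  # much closer to 0
```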
The delta theorem is proved by an ordinary Taylor expansion of Tn around θ. The same
method also produces approximations, with error bounds, on the moments of g(Tn ). The
order of the error can be made smaller the more moments Tn has. To keep notation simple,
we give approximations to the mean and variance of a function g(Tn ) below when Tn is a
sample mean.
Before proceeding, we need to address the so-called moment convergence problem. Some-
times we need to establish that moments of some sequence {Xn}, or at least some lower-order moments, converge to moments of X when Xn →d X. Convergence in distribution by itself
simply cannot ensure convergence of any moments. An extra condition that ensures con-
vergence of appropriate moments is uniform integrability. However, direct verification of
its definition is usually cumbersome. Thus, here we choose to introduce some sufficient
conditions which could ensure convergence of moments.
Theorem 2.5.1 Suppose Xn →d X for some X. If supn E|Xn|^{k+δ} < ∞ for some δ > 0, then E(Xn^r) → E(X^r) for every 1 ≤ r ≤ k.
Another common question is the convergence of moments in the canonical CLT for iid
random variables, which is stated in the following theorem.
Theorem 2.5.2 (von Bahr) Suppose X1 , . . . , Xn are i.i.d. with mean µ and finite variance
σ², and suppose that, for some specific k, E|X1|^k < ∞. Suppose Z ∼ N(0, 1). Then,
E[ ( √n(X̄n − µ)/σ )^r ] = E(Z^r) + O(1/√n)
for every r ≤ k.
By arguments similar to those in the proof of the delta theorem, a direct application of this theorem yields the following approximations to the mean and variance of a function g(Tn).
Proposition 2.5.1 Suppose X1 , X2 , . . . are iid observations with a finite fourth moment.
Let E(X1 ) = µ and Var(X1 ) = σ 2 . Let g be a scalar function with four uniformly bounded
derivatives. Then
(i) E(g(X̄n)) = g(µ) + g″(µ)σ²/(2n) + O(n^{−2});
(ii) Var(g(X̄n)) = [g′(µ)]²σ²/n + O(n^{−2}).
The variance approximation above is simply what the delta theorem says. With more deriva-
tives of g that are uniformly bounded, higher-order approximations can be given.
Example 2.5.1 Suppose X1 , X2 , . . . are iid Poi(µ) and we wish to estimate P (X1 = 0) =
e−µ . The MLE is e−X̄n , and suppose we want to find an approximation to the bias and
variance of e^{−X̄n}. We apply Proposition 2.5.1 with the function g(x) = e^{−x}, so that g′(x) = −g″(x) = −e^{−x}. Plugging into the proposition, we get the approximations
Bias(e^{−X̄n}) = µe^{−µ}/(2n) + O(n^{−2}),   Var(e^{−X̄n}) = µe^{−2µ}/n + O(n^{−2}).
Note that it is in fact possible to derive exact expressions for the mean and variance of e^{−X̄n} in this case, as Σᵢ₌₁ⁿ Xi has a Poi(nµ) distribution and therefore its mgf (moment generating function) equals ψn(t) = E(e^{tX̄n}) = (e^{µ(e^{t/n}−1)})^n. In particular, the mean of e^{−X̄n} is (e^{µ(e^{−1/n}−1)})^n. It is possible to recover the approximation for the bias given above from this exact expression. Indeed,
(e^{µ(e^{−1/n}−1)})^n = e^{nµ(−1/n + 1/(2n²) + O(n^{−3}))} = e^{−µ} e^{µ/(2n) + O(n^{−2})} = e^{−µ}( 1 + µ/(2n) ) + O(n^{−2}),
on collecting the terms of the exponentials together. On subtracting e^{−µ}, this reproduces the bias approximation given above. The delta theorem produces it more easily than the direct calculation.
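The agreement between the exact mean and the approximation can be verified numerically; the sketch below (µ and the grid of n values are arbitrary choices) prints both quantities.

```python
import numpy as np

# Exact mean of exp(-Xbar_n) for Poisson(mu) data versus the delta-method
# approximation exp(-mu) * (1 + mu/(2n)) from Proposition 2.5.1.
mu = 1.5
for n in (10, 50, 200):
    exact = np.exp(n * mu * (np.exp(-1.0 / n) - 1.0))
    approx = np.exp(-mu) * (1.0 + mu / (2.0 * n))
    print(f"n={n:4d}  exact={exact:.6f}  approx={approx:.6f}  diff={exact - approx:.2e}")
```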
In this section, we present a more general result regarding Edgeworth expansions, which can be applied to many useful cases.
Theorem 2.6.1 Let X1, X2, . . . be iid random k-vectors with E‖X1‖^{m+2} < ∞, and let Wn = √n h(X̄n)/σh, where h is m + 2 times continuously differentiable in a neighborhood of µ = EX1, h(µ) = 0, and σh² = [∇h(µ)]^T Var(X1) ∇h(µ) > 0. Assume the C.D.F. of X1 is absolutely continuous. Then FWn admits the Edgeworth expansion
sup_x | FWn(x) − Φ(x) − Σⱼ₌₁^m pⱼ(x)φ(x)/n^{j/2} | = o(n^{−m/2}),
where pj (x) is a polynomial of degree at most 3j − 1, with coefficients depending on the first
m + 2 moments of X1 . In particular,
p1(x) = −c1 σh^{−1} + (1/6) c2 σh^{−3} (x² − 1),
with
c1 = (1/2) Σ_{i=1}^k Σ_{j=1}^k a_{ij} µ_{ij}
and
c2 = Σ_{i=1}^k Σ_{j=1}^k Σ_{l=1}^k a_i a_j a_l µ_{ijl} + 3 Σ_{i=1}^k Σ_{j=1}^k Σ_{l=1}^k Σ_{q=1}^k a_i a_j a_{lq} µ_{il} µ_{jq},
where ai is the ith component of ∇h(µ), aij is the (i,j)th element of the Hessian matrix
∇2 h(µ), µij = E(Yi Yj ), µijl = E(Yi Yj Yl ), and Yi is the ith component of X1 − µ.
Example 2.6.1 The t-test and the t confidence interval are among the most used tools of
statistical methodology. As such, an Edgeworth expansion for the C.D.F. of the t-statistic for
general populations is interesting and useful, and we can derive it according to Theorem 2.6.1.
Consider the studentized random variable Wn = √n(X̄n − µ)/σ̂, where σ̂² = (1/n)Σᵢ₌₁ⁿ(Xi − X̄n)². Assuming that E X1^{2m+4} < ∞ and applying the multivariate delta theorem to the random vectors (Xi, Xi²), i = 1, 2, . . ., and h(x, y) = (x − µ)/√(y − x²), we obtain the Edgeworth expansion with σh = 1 and
p1(x) = (1/6) κ3 (2x² + 1),
where κ3 is the third standardized cumulant (skewness) of X1.
Furthermore, it can be found in Hall (1992, p. 73) that
p2(x) = (1/12) κ4 x(x² − 3) − (1/18) κ3² x(x⁴ + 2x² − 3) − (1/4) x(x² + 3).
Chapter 3
Let X1, X2, . . . be iid with distribution function F. For each sample of size n, a corresponding sample (empirical) distribution function Fn is constructed by placing at each observation Xi a mass 1/n. Thus Fn can be represented as
Fn(x) = (1/n) Σᵢ₌₁ⁿ I{Xi ≤ x}.
The simplest aspect of Fn is that, for each fixed x, Fn (x) serves as an estimator of F (x).
Proposition 3.1.1 For each fixed x,
(i) Fn(x) is unbiased and has variance
Var[Fn(x)] = F(x)[1 − F(x)]/n;
(ii) Fn(x) is consistent in mean square, i.e., Fn(x) converges to F(x) in second mean;
(iii) Fn(x) →wp1 F(x);
(iv) Fn(x) is AN( F(x), F(x)[1 − F(x)]/n ).
Proof. Note that the exact distribution of nFn(x) is BIN(F(x), n). Thus, (i)-(ii) follow immediately; the third part is a direct application of the SLLN; (iv) is a consequence of the Lindeberg-Levy CLT and (i).
The ECDF is quite useful for estimation of the population distribution function F. Besides pointwise estimation of F(x), it is also of interest to characterize globally how well Fn estimates F. To this end, a popular and useful measure of closeness of Fn to F is the Kolmogorov-Smirnov distance
Dn = sup_{−∞<x<∞} |Fn(x) − F(x)|.
This measure is also known as the sup-norm distance between Fn and F, and is denoted ||Fn − F||_∞. Metrics such as Dn have many applications: (1) goodness-of-fit tests; (2) confidence bands; (3) theoretical investigation of many other statistics of interest, which can often be carried out advantageously by representing them exactly or approximately as functions of the ECDF. In this respect, the following results concerning the sup-norm distance are of interest in their own right and also provide a useful starting tool for the asymptotic analysis of other statistics, such as quantiles, order statistics, and ranks.
The next results give useful explicit bounds on probabilities of large values for the devi-
ation of Fn from F .
Theorem 3.1.1 (DKW’s inequality) Let Fn be the ECDF based on iid X1 , . . . , Xn from
a CDF F defined on R. There exists a positive constant C (not depending on F) such that
P(Dn > z) ≤ C e^{−2nz²},  z > 0, for all n = 1, 2, . . . .
The following results useful in statistics are direct consequences of Theorem 3.1.1.
Corollary 3.1.1 Let F and C be as in Theorem 3.1.1. Then for every ε > 0,
P( sup_{m≥n} Dm > ε ) ≤ C h^n/(1 − h),
where h = exp(−2ε²).
Proof.
P( sup_{m≥n} Dm > ε ) ≤ Σ_{m=n}^∞ P(Dm > ε) ≤ C Σ_{m=n}^∞ h^m = C h^n/(1 − h).
Theorem 3.1.3 (Glivenko-Cantelli) Dn →wp1 0.
Proof. Note that, for every fixed ε > 0, Σ_{n=1}^∞ P(Dn > ε) < ∞ by the DKW inequality. Hence, the result follows from Theorem 1.2.1-(iv).
From the Glivenko-Cantelli theorem, we know that Dn = op(1). However, the statistic √n Dn may have a nondegenerate limit distribution, as suggested by the DKW inequality, and this is indeed the case, as revealed by the following result.
Theorem 3.1.4 (Kolmogorov) Let F be continuous. Then
lim_{n→∞} P(√n Dn ≤ z) = 1 − 2 Σ_{j=1}^∞ (−1)^{j+1} e^{−2j²z²},  z > 0.
A convenient feature of this asymptotic distribution is that it does not depend upon F . In
fact, for every n, if the true CDF F is continuous, then Dn has the remarkable property that
its exact distribution is completely independent of F which is stated as follows.
Proposition 3.1.2 Let F be continuous. Then √n Dn is distribution-free in the sense that its exact distribution does not depend on F for every fixed n.
Proof. The quickest way to see this property is to notice the identity
√n Dn =d √n max_{1≤i≤n} max{ i/n − U(i), U(i) − (i − 1)/n },
where U(1) ≤ . . . ≤ U(n) are the order statistics of an independent sample from U[0, 1] and the relation =d denotes equality in law.
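This identity can be used directly to simulate the exact null distribution of √n Dn without reference to any particular F; the sketch below (n, the replication count, and α are arbitrary choices) compares a simulated quantile with the limiting Kolmogorov quantile.

```python
import numpy as np

# Simulate sqrt(n)*D_n via the distribution-free representation using
# uniform order statistics, and estimate its upper (1 - alpha) quantile.
rng = np.random.default_rng(3)
n, reps, alpha = 50, 20000, 0.05
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
i = np.arange(1, n + 1)
dn = np.maximum(i / n - u, u - (i - 1) / n).max(axis=1)
print("simulated quantile:", np.quantile(np.sqrt(n) * dn, 1 - alpha))
print("limiting Kolmogorov quantile is about 1.358")
```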
3.1.3 Applications: Kolmogorov-Smirnov and other ECDF-based
GOF tests
For testing H0 : F = F0 with F0 a completely specified continuous CDF, three classical ECDF-based statistics are
Dn = sup_x |Fn(x) − F0(x)|,   Cn = n ∫ [Fn(x) − F0(x)]² dF0(x),   An = n ∫ [Fn(x) − F0(x)]²/{F0(x)[1 − F0(x)]} dF0(x),
which are respectively known as the Kolmogorov-Smirnov, the Cramér-von Mises, and the Anderson-Darling test statistics.
Similar to Proposition 3.1.2, there are simple computational expressions for Cn and An in terms of the values F0(X(i)) at the order statistics. It is clear from these computational formulas that, for every fixed n, the sampling distributions of Cn and An under H0 do not depend on F0, provided F0 is continuous. For small n, the true sampling distributions can be worked out exactly by discrete enumeration.
The tests introduced above based on the ECDF Fn all have the pleasant property that
they are consistent against any alternative F ≠ F0. For example, the Kolmogorov-Smirnov statistic Dn has the property that PF(√n Dn > G_n^{−1}(1 − α)) → 1 for every F ≠ F0, where G_n^{−1}(1 − α) is the (1 − α)th quantile of the distribution of √n Dn under F0. To explain heuristically
why this should be the case, consider a CDF F1 ≠ F0, so that there exists η with F1(η) ≠ F0(η). Let us suppose that F1(η) > F0(η). First note that G_n^{−1}(1 − α) → λ for some finite λ, by Theorem 3.1.4. Then,
P_{F1}( √n Dn > G_n^{−1}(1 − α) )
≥ P_{F1}( sup_t √n(Fn(t) − F0(t)) > G_n^{−1}(1 − α) )
= P_{F1}( sup_t [ √n(Fn(t) − F1(t)) + √n(F1(t) − F0(t)) ] > G_n^{−1}(1 − α) )
≥ P_{F1}( √n(Fn(η) − F1(η)) + √n(F1(η) − F0(η)) > G_n^{−1}(1 − α) ) → 1
as n → ∞, since √n(Fn(η) − F1(η)) = Op(1) under F1 and √n(F1(η) − F0(η)) → ∞.
The same argument establishes the consistency of the other ECDF-based tests against all
alternatives. In contrast, we will later see that chi-square goodness-of-fit tests cannot be
consistent against all alternatives.
Example 3.1.2 (The Berk-Jones procedure) Berk and Jones (1979) proposed an intuitively appealing ECDF-based method of testing the simple goodness-of-fit null hypothesis F = F0 for some specified continuous F0 in the one-dimensional iid situation. It has also led to subsequent developments of other tests for the simple goodness-of-fit problem as generalizations of the Berk-Jones idea.
The Berk-Jones method is to transform the simple goodness-of-fit problem into a family of binomial testing problems. More specifically, if the true underlying CDF is F, then for any given x, as stated above, nFn(x) ∼ Bin(n, F(x)). Suppressing the x and writing p for F(x) and p0 for F0(x), for the given x we want to test p = p0. We can use a likelihood ratio test corresponding to a two-sided alternative to test this hypothesis. It requires maximization of the binomial likelihood function over all values of p, which corresponds to maximization over F(x), with x being fixed, while F is an arbitrary CDF. The likelihood is maximized at F(x) = Fn(x), resulting in the likelihood ratio statistic
λn(x) = [ Fn(x)/F0(x) ]^{nFn(x)} [ (1 − Fn(x))/(1 − F0(x)) ]^{n(1 − Fn(x))}.
But, of course, the original problem is to test that F(x) = F0(x) for all x. So, it would make sense to take a supremum of the log-likelihood ratio statistics over x; the Berk-Jones statistic is sup_x n^{−1} log λn(x).
In the recent literature, some authors have found that an analog of the traditional Anderson-Darling rank test based on log λn(x), namely
∫ log λn(t) / {Fn(t)[1 − Fn(t)]} dFn(t),
is much more powerful than the Anderson-Darling test and the foregoing Berk-Jones statistic.
Example 3.1.3 (The two-sample case) Suppose Xi, i = 1, . . . , n, are iid samples from some continuous CDF F1 and Yi, i = 1, . . . , m, are iid samples from some continuous CDF F2, and all random variables are mutually independent. Let Fn1 and Fm2 denote the empirical CDFs of the Xi's and the Yi's, respectively. Analogous to the one-sample case, one can define the two-sided Kolmogorov-Smirnov statistic Dm,n = sup_x |Fn1(x) − Fm2(x)| and other ECDF-based GOF tests, such as a two-sample Anderson-Darling statistic Am,n, in which the weighting involves Fn,m(x), the ECDF of the pooled sample X1, . . . , Xn, Y1, . . . , Ym. Similar to Proposition 3.1.2, one can also show that, under F1 = F2, neither the null distribution of Dm,n nor that of Am,n depends on the common underlying CDF.
Chi-square tests are well-known competitors to ECDF-based statistics. They discretize the
null distribution in some way and assess the agreement of observed counts to the postulated
counts, so there is obviously some loss of information and hence a loss in power. But they are
versatile. Unlike ECDF-based tests, a chi-square test can be used for continuous as well as
discrete data and in one dimension as well as many dimensions. Thus, a loss of information
is being exchanged for versatility of the principle and ease of computation.
Suppose X1, . . . , Xn are iid observations from some distribution F and that we want to test H0 : F = F0, F0 being a completely specified distribution. Let S be the support of F0 and, for some given k ≥ 1, let Aki, i = 1, . . . , k, form a partition of S. Let p0i = PF0(Aki) and ni = #{j : Xj ∈ Aki}, i.e., the observed frequency of the partition set Aki. Therefore,
under H0 , E(ni ) = np0i . K. Pearson suggested that as a measure of discrepancy between
the observed sample and the null hypothesis, one compare (n1 , . . . , nk ) with (np01 , . . . , np0k ).
The Pearson chi-square statistic is defined as
K² = Σ_{i=1}^k (ni − np0i)²/(np0i).
For fixed n, certainly K 2 is not distributed as a chi-square, for it is just a quadratic form
in a multinomial random vector. However, the asymptotic distribution of K 2 is χ2k−1 if H0
holds, which is stated in the following result.
Theorem 3.1.5 (The asymptotic null distribution) Suppose X1, X2, . . . , Xn are iid observations from some distribution F, and consider testing H0 : F = F0 (specified). Then K² →d χ²_{k−1} under H0.
Proof. Define
Y = (Y1, . . . , Yk)^T = ( (n1 − np01)/√(np01), . . . , (nk − np0k)/√(np0k) )^T.
By the multivariate CLT, Y →d Nk(0, Σ), where Σ = Ik − µµ^T and µ = (√p01, . . . , √p0k)^T. This can be easily seen by writing n = (n1, . . . , nk)^T = Σ_{i=1}^n Zi, where Zi = (0, . . . , 0, 1, 0, . . . , 0)^T with a single nonzero component 1 located in the jth position if the ith trial yields the jth outcome. The Zi's are iid with mean p0 = (p01, . . . , p0k)^T and covariance matrix diag(p0) − p0p0^T. By the multivariate CLT, (n − np0)/√n →d Nk(0, diag(p0) − p0p0^T). Thus,
Y = [diag(√p01, . . . , √p0k)]^{−1} (n − np0)/√n
  →d Nk( 0, [diag(√p01, . . . , √p0k)]^{−1} (diag(p0) − p0p0^T) [diag(√p01, . . . , √p0k)]^{−1} )
  = Nk(0, Σ).
Note that tr(Σ) = k − 1. Notice now that Pearson's K² = Y^T Y, and if Y ∼ Nk(0, Σ) for any general Σ, then Y^T Y =d X^T P^T P X = X^T X, where X ∼ Nk(0, diag(λ1, . . . , λk)), the λi are the eigenvalues of Σ, and P^T Σ P = diag(λ1, . . . , λk) is the spectral decomposition of Σ. Note that X has the same distribution as the vector (√λ1 η1, . . . , √λk ηk)^T, where the ηj's are iid standard normal variates. So, it follows that X^T X =d Σ_{i=1}^k λi wi with wi iid ∼ χ²_1. Because the eigenvalues of a symmetric and idempotent matrix (such as Σ) are either 0 or 1, for our Σ, k − 1 of the λi's are 1 and the remaining one is zero. Since a sum of independent chi-squares is again a chi-square, it follows that K² →d χ²_{k−1} under H0.
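A minimal implementation of Pearson's K² and its asymptotic p-value might look as follows (a sketch; the observed counts and the uniform null probabilities are made up for illustration).

```python
import numpy as np
from scipy.stats import chi2

def pearson_chisq(counts, p0):
    """Pearson's K^2 statistic and its chi^2_{k-1} asymptotic p-value."""
    counts, p0 = np.asarray(counts, float), np.asarray(p0, float)
    n = counts.sum()
    k2 = np.sum((counts - n * p0) ** 2 / (n * p0))
    return k2, chi2.sf(k2, df=len(counts) - 1)

# Hypothetical counts over k = 4 cells with a uniform null.
print(pearson_chisq([18, 25, 30, 27], [0.25, 0.25, 0.25, 0.25]))
```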
Example 3.1.4 (The Hellinger statistic) We may consider a transformation g(x) that makes the denominator in Pearson's χ² a constant. Specifically, consider a differentiable function of the form g(x) = (g1(x1), . . . , gk(xk))^T, such that the jth component of the transformation is a function only of the jth component of x. As a consequence, the gradient is ∇g(x) = diag{g1′(x1), . . . , gk′(xk)}. As in the proof of the delta theorem, √n(g(Z̄n) − g(p0)) is asymptotically equivalent to √n ∇g(p0)(Z̄n − p0), so that in Pearson's χ² we may replace √n(Z̄n − p0) by √n [∇g(p0)]^{−1}(g(Z̄n) − g(p0)) and obtain the transformed χ²
χ²_g = n ( g(Z̄n) − g(p0) )^T [∇g(p0)]^{−1} [diag(p0)]^{−1} [∇g(p0)]^{−1} ( g(Z̄n) − g(p0) )
     = n Σ_{i=1}^k ( gi(ni/n) − gi(p0i) )² / ( p0i [gi′(p0i)]² ) →d χ²_{k−1}.
Naturally, we are led to investigate the transformed χ² with g(x) = (√x1, . . . , √xk)^T. The transformed χ², with gi′(p0i) = 1/(2√p0i), becomes
χ²_H = 4n Σ_{i=1}^k ( √(ni/n) − √p0i )².
This is known as the Hellinger χ2 because of its relation to Hellinger distance. The Hellinger
distance between two densities, f(x) and g(x), is d(f, g), where
d²(f, g) = ∫ ( √f(x) − √g(x) )² dx.
Let F1 be a distribution different from F0 and let p1i = PF1 (Aki ). Clearly, if by chance
p1i = p0i ∀i = 1, . . . k (which is certainly possible), then a test based on the empirical
frequencies of Aki cannot distinguish F0 from F1 , even asymptotically. In such a case, the
χ2 test cannot be consistent against F1 . However, otherwise it will be consistent, as can be
seen easily from the following result, in which the observations are assumed to be iid from F1:
(i) K²/n →p Σ_{i=1}^k (p1i − p0i)²/p0i;
(ii) if Σ_{i=1}^k (p1i − p0i)²/p0i > 0, then K² →p ∞ and hence the Pearson χ² test is consistent against F1.
Proof. Recall the definitions in the proof of Theorem 3.1.5, and consider local alternatives of the form p1i = p0i + n^{−1/2}δi, with δ = (δ1, . . . , δk)^T. It can easily be seen that Y →d Nk( [diag(√p01, . . . , √p0k)]^{−1} δ, Σ ) by using Slutsky's Theorem and the CLT. Since Σ is symmetric and idempotent,
K² = Y^T Y →d χ²_{k−1}( Σ_{i=1}^k δi²/p0i )
by the Cochran Theorem (or by arguments similar to those in the proof of Theorem 3.1.5).
Let X1 , X2 . . . be iid with distribution function F . For k ∈ N+ , the kth moment and central
moment of F are defined as
αk = ∫_{−∞}^{∞} x^k dF(x) = E X1^k,
µk = ∫_{−∞}^{∞} (x − α1)^k dF(x) = E[(X1 − α1)^k],
respectively. α1 and µ2 are certainly the mean and variance of F respectively. Also, µ1 = 0.
αk and µk represent important characteristics for describing F . Natural estimators of these
parameters are given by the corresponding moments of the sample distribution function Fn(x) = (1/n)Σᵢ₌₁ⁿ I{Xi ≤ x}, namely
ak = ∫_{−∞}^{∞} x^k dFn(x) = (1/n) Σᵢ₌₁ⁿ Xi^k,  k = 1, 2, . . . ,
mk = ∫_{−∞}^{∞} (x − a1)^k dFn(x) = (1/n) Σᵢ₌₁ⁿ (Xi − a1)^k,  k = 2, 3, . . . .
Proposition 3.2.1 (i) ak →wp1 αk; (ii) E(ak) = αk; (iii) Var(ak) = (α2k − αk²)/n.
By noting that ak is a mean of iid random variables having mean αk and variance α2k − αk², the result follows immediately from the SLLN. Furthermore, because the vector (a1, . . . , ak)^T is the mean of the iid vectors (Xi, Xi², . . . , Xi^k)^T, 1 ≤ i ≤ n, we have the following asymptotic normality result.
Proposition 3.2.2 √n(a1 − α1, . . . , ak − αk)^T is ANk(0, Σ), where Σ = (σij)k×k with σij = αi+j − αi αj.
Certainly, it is implicitly assumed that all stated moments are finite. This proposition is a
direct application of the multivariate CLT Theorem 1.3.2.
The following result concerns the consistency and asymptotic normality of the vector (m2, . . . , mk):
(i) mk →wp1 µk;
(ii) the random vector √n(m2 − µ2, . . . , mk − µk)^T is ANk−1(0, Σ*), where Σ* = (σij*)(k−1)×(k−1) with
σij* = µi+j+2 − µi+1 µj+1 − (i + 1) µi µj+2 − (j + 1) µi+2 µj + (i + 1)(j + 1) µi µj µ2.
Proof. Instead of dealing with mk directly, we exploit the connection between mk and the bj's, where bj = (1/n)Σᵢ₌₁ⁿ(Xi − α1)^j. Writing
mk = (1/n) Σᵢ₌₁ⁿ (Xi − a1)^k = (1/n) Σᵢ₌₁ⁿ Σⱼ₌₀ᵏ C_k^j (Xi − α1)^j (α1 − a1)^{k−j},
we have
mk = Σⱼ₌₀ᵏ C_k^j (−1)^{k−j} bj b1^{k−j},
where we define b0 = 1. (i). By noting that µ1 = 0, this result follows from (i) of Proposition
3.2.3 and the CMT; (ii) This is again an application of the multivariate Delta Theorem.
Consider the map g : R^k → R^{k−1} given by
g(t1, . . . , tk) = ( Σⱼ₌₀² C_2^j (−1)^{2−j} tj t1^{2−j}, . . . , Σⱼ₌₀ᵏ C_k^j (−1)^{k−j} tj t1^{k−j} )^T,
with t0 ≡ 1. Then
Σ* = ∇^T g|θ Σ̃ ∇g|θ.
The assertion follows immediately from some simple algebra on ∇^T g|θ Σ̃ ∇g|θ.
An immediate consequence of this theorem is the asymptotic normality of the sample variance (take k = 2 in (ii)), which was studied in detail in Example 1.3.2.
A few selected sample percentiles provide useful diagnostic summaries of the full ECDF. For
example, the three quartiles of the sample already provide some information about symmetry
of the underlying population, and extreme percentiles give information about the tail. So
asymptotic theory of sample percentiles is of great interest in statistics. In this section, we
present a selection of the fundamental results on the asymptotic theory for percentiles. The
iid case and then an extension to the regression setup are discussed.
Suppose X1, . . . , Xn are iid real-valued random variables with CDF F. We denote the order statistics of X1, . . . , Xn by X(1), . . . , X(n). For 0 < p < 1, the pth quantile of F is defined as F^{−1}(p) ≡ ξp = inf{x : F(x) ≥ p}. Note that ξp satisfies F(ξp−) ≤ p ≤ F(ξp). Correspondingly, the sample quantile is defined as the pth quantile of the ECDF Fn, that is, Fn^{−1}(p) ≡ ξ̂p = inf{x : Fn(x) ≥ p}. The sample quantile can also be expressed as X(⌈np⌉), where ⌈k⌉ denotes the smallest integer greater than or equal to k. Thus, the discussion of quantiles can be carried out formally in terms of order statistics.
The first result is a probability inequality for |ξ̂p − ξp| which implies that ξ̂p is strongly consistent, i.e., ξ̂p →wp1 ξp.
Theorem 3.3.1 Let X1, . . . , Xn be iid random variables from a CDF F satisfying p < F(ξp + ε) for every ε > 0. Then, for every ε > 0 and n = 1, 2, . . . ,
P(|ξ̂p − ξp| > ε) ≤ 2C e^{−2nδε²},
where δε = min{F(ξp + ε) − p, p − F(ξp − ε)} and C is the same constant as in the DKW inequality.
Proof. Let ε > 0 be fixed. Note that G(x) ≥ t iff x ≥ G^{−1}(t) for any CDF G on R. Hence
P(ξ̂p > ξp + ε) = P(Fn(ξp + ε) < p) ≤ P(Dn > δε) ≤ C e^{−2nδε²},
where the last inequality follows from the DKW inequality (Theorem 3.1.1). Similarly,
P(ξ̂p < ξp − ε) ≤ C e^{−2nδε²}.
By this inequality, the strong consistency of ξbp can be established easily from Theorem
1.2.1-(iv).
Remark 3.3.1 The exact distribution of ξ̂p can be obtained as follows. Since nFn(t) has the binomial distribution BIN(F(t), n) for any t ∈ R, P(ξ̂p ≤ t) = P(nFn(t) ≥ lp), where lp = ⌈np⌉. In particular, if F has a density f, then ξ̂p = X(lp) has density
φn(t) = n C_{n−1}^{lp−1} [F(t)]^{lp−1} [1 − F(t)]^{n−lp} f(t).
The following result provides an asymptotic distribution for √n(ξ̂p − ξp).
Theorem 3.3.2 Let X1, . . . , Xn be iid random variables from a CDF F that is continuous at ξp, and suppose that F′(ξp−) and F′(ξp+) exist and are positive. Then, for every t > 0,
P(√n(ξ̂p − ξp) ≤ tσF+) → Φ(t)  and  P(√n(ξ̂p − ξp) ≤ −tσF−) → Φ(−t),
where σF± = √(p(1 − p))/F′(ξp±). In particular, if F′(ξp) exists and is positive, then √n(ξ̂p − ξp) →d N(0, p(1 − p)/[F′(ξp)]²).
Proof. Let t > 0, pnt = F(ξp + tσF+ n^{−1/2}), cnt = √n(pnt − p)/√(pnt(1 − pnt)), and Znt = [Bn(pnt) − npnt]/√(npnt(1 − pnt)), where Bn(q) denotes a random variable having the binomial distribution BIN(q, n). Then,
P(ξ̂p ≤ ξp + tσF+ n^{−1/2}) = P(p ≤ Fn(ξp + tσF+ n^{−1/2})) = P(Znt ≥ −cnt).
Under the assumed conditions on F, pnt → p and cnt → t. Hence, the result for t > 0 follows if P(Znt ≥ −cnt) → Φ(t). But this follows from the CLT and Polya's theorem (Theorem 1.2.7-(ii)). The case t < 0 is handled analogously.
Remark 3.3.2 If both F′(ξp+) and F′(ξp−) exist and are positive, but F′(ξp+) ≠ F′(ξp−), then the asymptotic distribution of √n(ξ̂p − ξp) has the CDF Φ(t/σF−)I{−∞<t<0} + Φ(t/σF+)I{0≤t<∞}, a mixture of two normal distributions, where σF− = √(p(1 − p))/F′(ξp−). An example of such a case, with p = 1/2, is
F(x) = x I{0≤x<1/2} + (2x − 1/2) I{1/2≤x<3/4} + I{3/4≤x<∞}.
Example 3.3.1 Suppose X1, X2, . . . are iid N(µ, 1). Let Mn = ξ̂_{1/2} denote the sample median. Since the standard normal density φ(x) at zero equals 1/√(2π), it follows from Theorem 3.3.2 that √n(Mn − µ) →d N(0, π/2). On the other hand, √n(X̄n − µ) →d N(0, 1). The
ratio of the variances in the two asymptotic distributions, 2/π, is called the ARE (asymptotic
relative efficiency) of Mn relative to X̄n . Thus, for normal data, Mn is less efficient than X̄n .
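The ARE statement can be illustrated numerically; in the sketch below (n and the replication count are arbitrary), the simulated variance ratio should be close to 2/π ≈ 0.64.

```python
import numpy as np

# Variance of the sample mean versus the sample median for N(0,1) data.
rng = np.random.default_rng(4)
n, reps = 200, 20000
x = rng.standard_normal((reps, n))
var_mean = x.mean(axis=1).var()
var_med = np.median(x, axis=1).var()
print("Var(mean)/Var(median):", var_mean / var_med, "  (2/pi =", 2 / np.pi, ")")
```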
The sample median of an iid sample from some CDF F is clearly not a linear statistic; i.e., it is not a function of the form Σᵢ₌₁ⁿ hi(Xi). In 1966, Bahadur proved that the sample
median, and more generally any fixed sample percentile, is almost a linear statistic. The
result in Bahadur (1966) not only led to an understanding of the probabilistic structure of
percentiles but also turned out to be an extremely useful technical tool. For example, as
we shall shortly see, it follows from Bahadur's result that, for iid samples from a CDF F, under suitable conditions not only are X̄n and ξ̂_{1/2} marginally asymptotically normal but they are jointly asymptotically bivariate normal. The result derived in Bahadur (1966) is known as the Bahadur representation of quantiles.
Theorem 3.3.3 (Bahadur representation) Let X1, . . . , Xn be iid random variables from a CDF F such that F′(ξp) exists and is positive. Then
ξ̂p = ξp + [p − Fn(ξp)]/F′(ξp) + op(n^{−1/2}).
Proof. Let t ∈ R, ξnt = ξp + tn^{−1/2}, ηn = √n(ξ̂p − ξp), Zn(t) = √n[F(ξnt) − Fn(ξnt)]/F′(ξp), and Un(t) = √n[F(ξnt) − Fn(ξ̂p)]/F′(ξp). It can be shown that, for every t and ε > 0,
P(ηn ≥ t + ε, Zn(0) ≤ t) + P(ηn ≤ t, Zn(0) ≥ t + ε) → 0.
It follows that ηn − Zn(0) = op(1) by Lemma 3.3.1 given below, which is the same as the assertion.
Lemma 3.3.1 Let {Xn} and {Yn} be two sequences of random variables such that Xn is bounded in probability and, for any real number t and ε > 0, limn [P(Xn ≤ t, Yn ≥ t + ε) + P(Xn ≥ t + ε, Yn ≤ t)] = 0. Then Xn − Yn →p 0.
Proof. For any ε > 0, there exists an M > 0 such that P(|Xn| > M) ≤ ε for every n, since Xn is bounded in probability. For this fixed M, there exists an N such that 2M/N < ε/2. Let ti = −M + 2Mi/N, i = 0, 1, . . . , N. Then,
P(|Xn − Yn| > ε) ≤ P(|Xn| > M) + Σ_{i=0}^{N−1} [ P(Xn ≤ ti+1, Yn ≥ ti+1 + ε/2) + P(Xn ≥ ti, Yn ≤ ti − ε/2) ],
and each term in the sum tends to zero by hypothesis, so that lim sup_n P(|Xn − Yn| > ε) ≤ ε. Since ε is arbitrary, we conclude that Xn − Yn →p 0.
Remark 3.3.3 Actually, Bahadur gave an a.s. order for op (n−1/2 ) under the stronger as-
sumption that F is twice differentiable at ξp with F 0 (ξp ) > 0. The theorem stated here is in
the form later given in Ghosh (1971). The exact a.s. order was shown to be n−3/4 (log log n)3/4
by Kiefer (1967) in a landmark paper. However, the weaker version presented here suffices
for proving the following CLTs.
The Bahadur representation easily leads to the following two joint asymptotic distributions.
Corollary 3.3.1 Let X1 , . . . , Xn be iid random variables from a CDF F having positive
derivatives at ξpj , where 0 < p1 < · · · < pm < 1 are fixed constants. Then
√n[ (ξ̂p1, . . . , ξ̂pm) − (ξp1, . . . , ξpm) ] →d Nm(0, D),
where D is the m × m matrix with (i, j)th entry (pi ∧ pj − pi pj)/[F′(ξpi) F′(ξpj)].
Proof. By Theorem 3.3.3, it suffices to derive the joint asymptotic distribution of √n[F(ξpi) − Fn(ξpi)]/F′(ξpi), i = 1, . . . , m. By the definition of the ECDF, the vector [Fn(ξp1), . . . , Fn(ξpm)]^T can be represented as the average of iid random vectors,
(1/n) Σᵢ₌₁ⁿ [ I{Xi ≤ ξp1}, . . . , I{Xi ≤ ξpm} ]^T.
Thus, the result immediately follows from the multivariate CLT by using the fact that Cov(I{X1 ≤ ξpi}, I{X1 ≤ ξpj}) = pi ∧ pj − pi pj.
Example 3.3.2 (Interquartile range; IQR) One application of Corollary 3.3.1 is the derivation of the asymptotic distribution of the interquartile range ξ̂0.75 − ξ̂0.25. It is widely used as a measure of the variability among the Xi's, and its use is quite common when normality is suspect. It can be shown that
√n[ (ξ̂0.75 − ξ̂0.25) − (ξ0.75 − ξ0.25) ] →d N(0, σF²)
with
σF² = 3/(16[F′(ξ0.75)]²) + 3/(16[F′(ξ0.25)]²) − 1/(8 F′(ξ0.75) F′(ξ0.25)).
In particular, if X1, . . . , Xn are iid N(0, σ²), then, by using the general result above and some algebra, √n(IQR − 1.35σ) →d N(0, 2.48σ²). Consequently, for normal data, IQR/1.35 is a consistent estimate of σ (the 1.35 value of course is an approximation) with asymptotic variance 2.48σ²/1.35² = 1.36σ². On the other hand, √n(Sn − σ) →d N(0, 0.5σ²). The ratio
of the asymptotic variances, namely 0.5/1.36 = 0.37, is the ARE of the IQR-based estimate
relative to Sn . Thus, for normal data, one is better off using Sn . For populations with thicker
tails, IQR-based estimates can be more efficient.
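In practice the IQR-based scale estimate is computed as in the sketch below (the constant 1.349 ≈ ξ0.75 − ξ0.25 for a standard normal, and the contaminated sample is an illustrative choice).

```python
import numpy as np

def iqr_scale(x):
    """Robust estimate of sigma: sample IQR divided by the N(0,1) IQR (about 1.349)."""
    q75, q25 = np.percentile(x, [75, 25])
    return (q75 - q25) / 1.349

rng = np.random.default_rng(5)
x = rng.normal(0.0, 2.0, size=1000)
x[:20] = 50.0                               # a few gross outliers
print("sample SD      :", x.std(ddof=1))    # ruined by the outliers
print("IQR-based scale:", iqr_scale(x))     # close to the true sigma = 2
```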
Statistics that are linear combinations of order statistics are called L-statistics. A particular L-statistic that was found to have attractive and versatile performance is the Gastwirth estimate
0.3 X(⌈n/3⌉) + 0.4 X(⌈n/2⌉) + 0.3 X(⌈2n/3⌉).
This estimate is asymptotically normal with an explicitly available variance formula, since we know from our general theorem that [X(⌈n/3⌉), X(⌈n/2⌉), X(⌈2n/3⌉)]^T is jointly asymptotically trivariate normal.
Corollary 3.3.2 Let X1 , . . . Xn be iid from a CDF F . Let 0 < p < 1 and suppose VarF (X1 ) <
∞. If F is differentiable at ξp with F 0 (ξp ) = f (ξp ) > 0, then
√n( X̄n − µ, Fn^{−1}(p) − ξp ) →d N2(0, Σ),
where
Σ = [ Var(X1)                                            (p/f(ξp)) EF(X1) − (1/f(ξp)) ∫_{x≤ξp} x dF(x)
      (p/f(ξp)) EF(X1) − (1/f(ξp)) ∫_{x≤ξp} x dF(x)      p(1 − p)/f²(ξp)                               ].
The proof of this corollary is very similar to Corollary 3.3.1 and hence left as an exercise.
Example 3.3.4 As an application of this result, consider iid N(µ, 1) data. Take p = 1/2, so that Corollary 3.3.2 gives the joint asymptotic distribution of the sample mean and the sample median. The covariance entry in the matrix Σ equals (assuming without any loss of generality that µ = 0) −√(2π) ∫_{−∞}^0 x φ(x) dx = 1. Therefore, the asymptotic correlation between the sample mean and median in the normal case is √(2/π) = 0.7979, a fairly strong
correlation.
Since the population median and more generally population percentiles provide useful sum-
maries of the population CDF, inference for them is of clear interest. Confidence intervals
for population percentiles are therefore of interest in inference. Suppose X1, X2, . . . , Xn ∼iid F and we wish to estimate ξp = F^{−1}(p) for some 0 < p < 1. The corresponding sample percentile ξ̂p = Fn^{−1}(p) is typically a fine point estimate of ξp. But how does one find a confidence interval of guaranteed coverage?
One possibility is to use the quantile transformation and observe that
d
(F (X(1) ), F (X(2) ), . . . , F (X(n) )) =(U(1) , U(2) , . . . , U(n) ),
where U(i) is the ith order statistic of a U [0, 1] random sample, provided F is continuous.
Therefore, for given 1 ≤ i1 < i2 ≤ n,
PF X(i1 ) ≤ ξp ≤ X(i2 ) = PF F (X(i1 ) ) ≤ p ≤ F (X(i2 ) )
= P U(i1 ) ≤ p ≤ U(i2 ) ≥ 1 − α
if i1 , i2 are appropriately chosen. The pair (i1 , i2 ) can be chosen by studying the joint density
of (U(i1 ) , U(i2 ) ), which has an explicit formula. However, the formula involves incomplete Beta
functions, and for certain n and α, the actual coverage can be substantially larger than 1−α.
This is because no pair (i1 , i2 ) may exist such that the event involving the two uniform order
statistics has exactly or almost exactly 1 − α probability. This will make the confidence
interval [X(i1 ) , X(i2 ) ] larger than one wishes and therefore less useful.
Alternatively, Theorem 3.3.2 suggests the interval ξ̂p ± zα/2 √(p(1 − p))/(√n F′(ξp)). However, an obvious drawback of this procedure is that F′(ξp) must be known in advance; that is, this method is not asymptotically distribution-free. A remedy is given as follows. Before proceeding, we need a refinement of the Bahadur representation.
Theorem 3.3.4 Let X1 , . . . , Xn be iid random variables from a continuous CDF F . Suppose
that for 0 < p < 1, F 0 (ξp ) exists and is positive. Let kn be a sequence of integers satisfying
1 ≤ kn ≤ n and kn/n = p + cn^{−1/2} + o(n^{−1/2}) for a constant c. Then
√n( X(kn) − ξ̂p ) →p c/F′(ξp).
Proof. Let t ∈ R, ξnt = ξp + tn^{−1/2}, ηn = √n(X(kn) − ξp), Zn(t) = √n[F(ξnt) − Fn(ξnt)]/F′(ξp), and Un(t) = √n[F(ξnt) − Fn(X(kn))]/F′(ξp). By using arguments similar to those in the proof of Theorem 3.3.3,
X(kn) − ξp = [kn/n − Fn(ξp)]/F′(ξp) + op(n^{−1/2}),
ξ̂p − ξp = [p − Fn(ξp)]/F′(ξp) + op(n^{−1/2}).
The result follows by taking the difference of the two previous equations.
The result follows by taking the difference of the two previous equations.
Corollary 3.3.3 Assume the conditions in Theorem 3.3.4. Let {k1n} and {k2n} be two sequences of integers satisfying 1 ≤ k1n < k2n ≤ n,
k1n/n = p − zα/2 √(p(1 − p)/n) + o(n^{−1/2}),
k2n/n = p + zα/2 √(p(1 − p)/n) + o(n^{−1/2}),
where zα = Φ^{−1}(1 − α). Then, the confidence interval C(X) = [X(k1n), X(k2n)] has the property that P(ξp ∈ C(X)) does not depend on F and limn P(ξp ∈ C(X)) = 1 − α.
Proof.
By Theorems 3.3.4, 3.3.2 and Slutsky’s Theorem,
P(X(k1n) > ξp) = P( ξ̂p − zα/2 √(p(1 − p))/(F′(ξp)√n) + op(n^{−1/2}) > ξp )
             = P( √n(ξ̂p − ξp)/[ √(p(1 − p))/F′(ξp) ] + op(1) > zα/2 )
             → 1 − Φ(zα/2) = α/2.
Least squares estimates in regression minimize the sum of squared deviations of the observed
and the expected values of the dependent variable. In the location-parameter problem, this
principle would result in the sample mean as the estimate. If instead one minimizes the
sum of the absolute values of the deviations, one would obtain the median as the estimate.
Likewise, one can estimate the regression parameters by minimizing the sum of the absolute
deviations between the observed values and the regression function.
For example, if the model says yi = xi^T β + εi, then one can estimate the regression vector β by minimizing Σᵢ₌₁ⁿ |yi − xi^T β|, a very natural idea. This estimate is called the least
absolute deviation (LAD) regression estimate. While it is not as good as the least squares
estimate when the errors are exactly normal, it outperforms the least squares estimate for
a variety of error distributions that are heavy-tailed. Generalizations of the LAD estimate,
analogous to sample percentiles, are called quantile regression estimate. A good reference
for the material in this section and proofs of theorems below is Koenker (2005).
Definition 3.3.1 For 0 < p < 1, the pth quantile regression estimate is defined as
β̂QR = arg min_β Σᵢ₌₁ⁿ [ p|yi − xi^Tβ| I{yi ≥ xi^Tβ} + (1 − p)|yi − xi^Tβ| I{yi < xi^Tβ} ],
or, equivalently, β̂QR = arg min_β Σᵢ₌₁ⁿ ρp(yi − xi^Tβ), where ρp(t) = p t+ + (1 − p) t− is the so-called check function, the subscripts + and − standing for the positive and negative parts, respectively. The following theorem describes the limiting distribution of the quantile regression estimate. There are some neat analogies in this result to the limiting distribution of the sample quantile for iid data.
Theorem 3.3.5 Let yi = xi^Tβ + εi, where εi ∼iid F, with F having median zero and the first component of each xi equal to 1. Let 0 < p < 1, and let β̂QR be any pth quantile regression estimate. Suppose F has a strictly positive density f in a neighborhood of ξp = F^{−1}(p). Then
√n( β̂QR − β − ξp e1 ) →d N(0, ν Σ^{−1}),
where e1 = (1, 0, . . . , 0)^T, Σ = limn (1/n) X^T X (assumed to exist), and ν = p(1 − p)/f²(ξp).
References
Bahadur, R. R. (1966). A note on quantiles in large samples, Ann. Math. Stat., 37, 577–580.
Ghosh, J. K. (1971). A new proof of the Bahadur representation of quantiles and an application, Ann.
Math. Stat., 42, 1957–1961.
Kiefer, J. (1967). On Bahadur's representation of sample quantiles, Ann. Math. Stat., 38, 1323–1342.
Koenker, R. (2005). Quantile Regression. Cambridge Univ. Press.
Chapter 4
In this chapter, we treat asymptotic methods that arise in connection with estimation or hypothesis testing relative to a parametric family of possible distributions for the data. In this respect, maximum likelihood inference is one of the most popular methods. Many think that maximum likelihood is the greatest conceptual invention in the history of statistics. Although in some high- or infinite-dimensional problems, computation and performance of maximum likelihood estimates (MLEs) are problematic, in a vast majority of
models in practical use, MLEs are about the best that one can do. They have many asymp-
totic optimality properties that translate into fine performance in finite samples. Before
elaborating on maximum likelihood estimates and testings, we first consider the concept of
asymptotic optimality of point estimators in parametric models.
Suppose that an estimator θ̂n of θ ∈ Θ ⊆ R^k satisfies
θ̂n is ANk(θ, Vn(θ)),   (4.1)
where for each n, Vn(θ) is a k × k positive definite matrix depending on θ. If θ is one-dimensional (k = 1), then Vn(θ) is the asymptotic variance as well as the asymptotic MSE
of θbn . When k > 1, Vn (θ) is called the asymptotic covariance matrix of θ
bn and can be used
as a measure of asymptotic performance of estimators.
bjn satisfies (4.1) with asymptotic covariance matrix Vjn (θ), j = 1, 2, and V1n (θ) ≤
If θ
V2n (θ) (in the sense that V2n (θ) − V1n (θ) is nonnegative definite) for all θ ∈ Θ, then θ
b1n is
said to be asymptotically more efficient than θ
b2n . When Xi ’s are iid, Vn (θ) is usually of the
form n−δ V(θ) for some δ > 0 (=1 in the majority of cases) and a positive definite matrix
V(θ) that does not depend on n.
where Ĩn (β) is the Fisher information matrix about β. If p = k and g is one-to-one, then
and, therefore, β
b is asymptotically efficient iff θ
n
bn is asymptotically efficient. For this reason,
we can focus on the estimation of θ only.
It was first believed as folklore that the MLE under regularity conditions on the un-
derlying distribution is asymptotically the best for every value of θ0 ∈ Θ; i.e., if an MLE
θ̂n exists and √n(θ̂n − θ0) →d N(0, I^{−1}(θ0)), and if another competing sequence Tn satisfies √n(Tn − θ0) →d N(0, V(θ0)), then for every θ0, V(θ0) ≥ I^{−1}(θ0).
It was a major shock when in 1952 Hodges gave an example that destroyed this belief and
proved it to be false even in the normal case. Hodges, in a private communication to LeCam,
produced an estimate Tn that beats the MLE X̄n locally at some θ0 , say θ0 = 0. Later, in
a very insightful result, LeCam (1953) showed that this can happen only on Lebesgue-null
sets of θ. An excellent reference for this topic is van der Vaart (1998).
Let X1, . . . , Xn ∼iid N(θ, 1). Define the estimate Tn = θ̃ as
θ̃ = X̄n if |X̄n| ≥ n^{−1/4},   θ̃ = tX̄n if |X̄n| < n^{−1/4},
where we choose 0 < t < 1. We are interested in estimating the population mean. If X̄n is
where we choose 0 < t < 1. We are interested in estimating the population mean. If X̄n is
not close to 0, we simply take the sample mean as the estimator. If we know that it is pretty
close to 0, we can shrink it further to make it closer to 0. Thus, the resulting estimator
should be more efficient than the sample mean X̄n at 0. Of course, we can take other values
than 0, the same thing will happen too. Now, let’s find the asymptotic distribution of θ̃. If
θ = 0, then we can write
√n θ̃ = √n [ X̄n I{|X̄n| ≥ n^{−1/4}} + tX̄n I{|X̄n| < n^{−1/4}} ]
      = √n [ tX̄n + (1 − t) X̄n I{|X̄n| ≥ n^{−1/4}} ]
      = tYn + (1 − t) Wn,
where Yn = √n X̄n ∼ N(0, 1), so that tYn ∼ N(0, t²), and Wn = Yn I{|Yn| ≥ n^{1/4}}. Since, by the Cauchy-Schwarz inequality,
(E|Wn|)² ≤ E(Yn²) E(I{|Yn| ≥ n^{1/4}}) = P(|Yn| ≥ n^{1/4}) → 0,
it follows that Wn →p 0. By Slutsky's theorem, we get
√n θ̃ →d N(0, t²),  if θ = 0.
Similarly, when θ ≠ 0, we can write
√n θ̃ = Yn + (t − 1) Yn I{|Yn| < n^{1/4}},
where now Yn = √n X̄n, so that Yn − √n θ = √n(X̄n − θ) ∼ N(0, 1). It remains to show that Yn I{|Yn| < n^{1/4}} →p 0. For any ε > 0,
P( |Yn I{|Yn| < n^{1/4}}| > ε ) ≤ P(|Yn| < n^{1/4}) = P(|X̄n| < n^{−1/4}) → 0,
since X̄n →p θ ≠ 0 while n^{−1/4} → 0. By Slutsky's theorem again, we get √n(θ̃ − θ) →d N(0, 1). Combining the above two cases, we get
we get
√ N (0, t2 ), θ = 0,
d
n(θ̃ − θ) →
N (0, 1), θ 6= 0,
In the case of θ = 0, the usual asymptotic Cramer-Rao theorem does not hold, since t2 <
1 = I −1 (θ). It is clear, however, that Tn has certain undesirable features. First, as a function
of X1 , ..., Xn , Tn is not smooth. Second, V (θ) is not continuous in θ.
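The superefficiency at θ = 0, and the price paid near the shrinkage threshold, can be seen in a small simulation such as the one sketched below (t, n, and the grid of θ values are arbitrary choices).

```python
import numpy as np

# Hodges' estimator: shrink the sample mean by t when |Xbar| < n^{-1/4}.
rng = np.random.default_rng(6)
t, n, reps = 0.5, 400, 20000

def hodges(xbar):
    return np.where(np.abs(xbar) >= n ** (-0.25), xbar, t * xbar)

for theta in (0.0, 0.2, 1.0):
    xbar = theta + rng.standard_normal((reps, n)).mean(axis=1)
    mse = np.mean((hodges(xbar) - theta) ** 2)
    # n*MSE of the sample mean is 1; Hodges beats it at theta = 0 (about t^2 = 0.25)
    # but can be much worse for theta of the order n^{-1/4}.
    print(f"theta={theta}: n*MSE(Hodges) = {n * mse:.3f}   (n*MSE of the mean = 1)")
```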
The maximum likelihood estimate (MLE) is given by θ̂ = arg max_{θ∈Θ} log L(θ; X). Often, the estimate θ̂ may be obtained by solving the system of likelihood equations (based on the score function),
∂ log L/∂θi |_{θ=θ̂} = 0,  i = 1, . . . , k.
Next, we will show that under regularity conditions on F, the MLE (RLE) is strongly consistent, asymptotically normal, and asymptotically efficient. For simplicity, we focus on the case k = 1. The multivariate version will be discussed without proof.
(C1) The third derivative with respect to θ, ∂³ log fθ(x)/∂θ³, exists for all x, and for each θ0 ∈ Θ there exists a function H(x) ≥ 0 (possibly depending on θ0) such that for θ ∈ N(θ0, ε) = {θ : |θ − θ0| < ε},
|∂³ log fθ(x)/∂θ³| ≤ H(x),   Eθ0 H(X1) < ∞;
(C2) For gθ(x) = fθ(x) or gθ(x) = ∂fθ(x)/∂θ, we have
∂/∂θ ∫ gθ(x) dx = ∫ ∂gθ(x)/∂θ dx;
(C3) For each θ ∈ Θ, 0 < I(θ) = Eθ[ (∂ log fθ(X1)/∂θ)² ] < ∞.
Remark 4.2.1 Condition (C1) ensures that ∂ log fθ(x)/∂θ, for any x, has a Taylor expansion as a function of θ. Condition (C2) means that fθ(x) and ∂fθ(x)/∂θ can be differentiated with respect to θ under the integral sign; that is, integration and differentiation can be interchanged. A sufficient condition for (C2) is the following:
For each θ0 ∈ Θ, there exist functions g(x), h(x), and H(x) (possibly depending on θ0) such that for θ ∈ N(θ0, ε) = {θ : |θ − θ0| < ε}, |fθ(x)| ≤ g(x), |∂fθ(x)/∂θ| ≤ h(x), and |∂²fθ(x)/∂θ²| ≤ H(x), with ∫ g(x) dx, ∫ h(x) dx, and ∫ H(x) dx all finite.
Condition (C3) ensures that the variance of ∂ log fθ(x)/∂θ is finite.
Theorem 4.2.1 Assume regularity conditions (C1)-(C3) on the family F, and consider iid observations from Fθ0, θ0 an element of Θ. Then, with probability 1, the likelihood equations admit a sequence of solutions {θ̂n} satisfying
(i) θ̂n →wp1 θ0;
(ii) √n(θ̂n − θ0) →d N(0, I^{−1}(θ0)).
Proof. (i) Write s(X, θ) = (1/n) Σᵢ₌₁ⁿ ∂ log fθ(Xi)/∂θ for the (normalized) score function. Then,
s′(X, θ) = (1/n) Σᵢ₌₁ⁿ ∂² log fθ(Xi)/∂θ²,   s″(X, θ) = (1/n) Σᵢ₌₁ⁿ ∂³ log fθ(Xi)/∂θ³.
Note that, for θ ∈ N(θ0, ε),
|s″(X, θ)| ≤ (1/n) Σᵢ₌₁ⁿ |∂³ log fθ(Xi)/∂θ³| ≤ (1/n) Σᵢ₌₁ⁿ |H(Xi)| ≡ H̄(X),
and hence, by a Taylor expansion,
s(X, θ) = s(X, θ0) + s′(X, θ0)(θ − θ0) + (1/2) s″(X, ξ)(θ − θ0)²
        = s(X, θ0) + s′(X, θ0)(θ − θ0) + (1/2) H̄(X) η* (θ − θ0)²,
where |η*| = |s″(X, ξ)|/H̄(X) ≤ 1. By the SLLN, we have
s(X, θ0) →wp1 Eθ0 s(X, θ0) = 0,
s′(X, θ0) →wp1 Eθ0 s′(X, θ0) = −I(θ0),
H̄(X) →wp1 Eθ0 H(X1) < ∞.
It can be shown that the resulting solution θ̂n,ε is a proper random variable, and also that one can obtain a sequence θ̂n not depending on the choice of ε. The details are omitted here but can be found in Serfling (1980). This proves (i).
(ii) For large n, we have seen that
0 = s(X, θ̂n) = s(X, θ0) + s′(X, θ0)(θ̂n − θ0) + (1/2) H̄(X) η* (θ̂n − θ0)².
Thus,
√n s(X, θ0) = √n(θ̂n − θ0) [ −s′(X, θ0) − (1/2) H̄(X) η* (θ̂n − θ0) ].
Since √n s(X, θ0) →d N(0, I(θ0)) by the CLT, and −s′(X, θ0) − (1/2) H̄(X) η* (θ̂n − θ0) →wp1 I(θ0), it follows from Slutsky's theorem that
√n(θ̂n − θ0) = √n s(X, θ0) / [ −s′(X, θ0) − (1/2) H̄(X) η* (θ̂n − θ0) ] →d N(0, I(θ0))/I(θ0) = N(0, I^{−1}(θ0)).
Remark 4.2.2 This theorem does not say which sequence of roots of s(X; θ) = 0 should be
chosen to ensure consistency in the case of multiple roots. It does not even guarantee that
for any given n, however large, the likelihood function log L(θ; X) has any local maxima at
all. This specific theorem is useful in only those cases where s(X; θ) = 0 has a unique root
for all n.
One good strategy is to use Newton's method, with a simply computed estimate based on the method of moments or sample quantiles as the initial guess. This method takes the initial guess, θ̂(0), and inductively generates a sequence of hopefully better and better estimates by
θ̂(k+1) = θ̂(k) − [s′(X, θ̂(k))]^{−1} s(X, θ̂(k)),  k = 0, 1, 2, . . . .
One simplification of this strategy can be made if the Fisher information is available. Ordi-
narily, s0 (X, θb(k) ) will converge as n → ∞ to −I(θ0 ) and so can be replaced by −I(θb(k) ) in
the iterations,
θb(k+1) = θb(k) + [I(θb(k) )]−1 s(X, θb(k) ), k = 0, 1, 2, . . . .
As we know, this method is the method of scoring. The scores, [I(θb(k) )]−1 s(X, θb(k) ) are
increments added to an estimate to improve it.
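As a concrete illustration of the scoring iteration (a sketch not taken from the text: a Cauchy location model, for which the Fisher information is I(θ) = 1/2, with the sample median as a √n-consistent initial guess).

```python
import numpy as np

# Method of scoring for the Cauchy location model:
#   theta_(k+1) = theta_(k) + I(theta_(k))^{-1} * s(X, theta_(k)),
# where s is the average score and I(theta) = 1/2 for the Cauchy.
rng = np.random.default_rng(7)
true_theta, n = 3.0, 500
x = true_theta + rng.standard_cauchy(n)

theta = np.median(x)                 # sqrt(n)-consistent initial guess
for k in range(3):                   # the k = 0 update is the "one-step MLE"
    score = np.mean(2 * (x - theta) / (1 + (x - theta) ** 2))
    theta = theta + score / 0.5
    print(f"iteration {k + 1}: theta = {theta:.4f}")
```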
Since for the MLE, θbn ,
√ d
n(θbn − θ) → N (0, I(θ)−1 ) = N (0, 3),
is the first iteration in improving the estimator as discussed above. In fact, this is just the first iteration in computing an MLE using the Newton-Raphson iteration method with θ̂(0) as the initial value and, hence, is often called the one-step MLE. Under some conditions, θ̂(1) is asymptotically efficient, as the following result shows.
Theorem 4.3.1 Assume the conditions in Theorem 4.2.1 hold and that θ̂(0) is √n-consistent for θ. Then
(i) the one-step MLE θ̂(1) = θ̂(0) − [s′(X, θ̂(0))]^{−1} s(X, θ̂(0)) is asymptotically efficient;
(ii) the one-step MLE obtained by replacing s′(X, θ̂(0)) with its expected value, −I(θ̂(0)), is also asymptotically efficient.
Proof. Let θ̂n be a √n-consistent sequence satisfying s(X; θ̂n) = 0. In what follows, we suppress X for simplicity. Expanding s(θ̂(0)) around θ̂n,
s(θ̂(0)) = s(θ̂n) + s′(θ̂n)(θ̂(0) − θ̂n) + (1/2) s″(ξ)(θ̂(0) − θ̂n)²,   (4.2)
and using
(θ̂(1) − θ̂n) = (θ̂(0) − θ̂n) − [s′(θ̂(0))]^{−1} s(θ̂(0)),   s(θ̂n) = 0,
we find
√n(θ̂(1) − θ̂n) = √n s(θ̂(0)) { [s′(θ̂n)]^{−1} − [s′(θ̂(0))]^{−1} } − (√n/2) s″(ξ)(θ̂(0) − θ̂n)² [s′(θ̂n)]^{−1}.   (4.3)
Now, we need to study the right-hand side of (4.3). Firstly, note that the term √n(θ̂(0) − θ̂n) = √n(θ̂(0) − θ0) − √n(θ̂n − θ0) is bounded in probability, because the second term is asymptotically normal by Theorem 4.2.1-(ii) and the first term is Op(1) by the assumption. By
|s″(ξ)| ≤ H̄(X) →wp1 Eθ0 H(X1) < ∞,
we have s″(ξ) = Op(1). Also, by the CMT, we know s′(θ̂n) = s′(θ0) + op(1) = −I(θ0) + op(1). Thus, the last term in (4.3) is of order Op(n^{−1/2}). Similarly, [s′(θ̂n)]^{−1} − [s′(θ̂(0))]^{−1} = op(1). Finally, by (4.2) again, we obtain s(θ̂(0)) = Op(n^{−1/2}), which leads to
√n(θ̂(1) − θ̂n) = √n Op(n^{−1/2}) op(1) + Op(n^{−1/2}) = op(1).
Hence, √n(θ̂(1) − θ̂n) →p 0 as n → ∞; that is, √n(θ̂(1) − θ̂n) = √n(θ̂(1) − θ0) − √n(θ̂n − θ0) = op(1). Therefore √n(θ̂(1) − θ0) is asymptotically equivalent to √n(θ̂n − θ0), which is asymptotically efficient according to Theorem 4.2.1. It follows that √n(θ̂(1) − θ0) →d N(0, [I(θ0)]^{−1}), and thus θ̂(1) is asymptotically efficient. The argument for the estimate based on the scoring method (part (ii)) is identical.
As we know, UMP and UMPU tests often do not exist in a particular problem. In this
chapter, we shall introduce other tests. These tests may not be optimal, but they are very
general methods, easy to use, and have intuitive appeal. They often coincide with optimal
tests (UMP, UMPU tests). They play a role similar to that of the MLE in estimation theory. For
all these reasons, a treatment of testing is essential. We discuss the asymptotic theory of
likelihood ratio, Wald, and Rao score tests in the remainder of this chapter.
testing problem is
H0 : θ ∈ Θ0 versus H1 : θ ∈ Θ1,
where Θ0 ∪ Θ1 = Θ and Θ0 ∩ Θ1 = ∅. The likelihood ratio test (LRT) statistic is
Λn = sup_{θ∈Θ0} L(θ; X) / sup_{θ∈Θ} L(θ; X).
Equivalently, the test may be carried out in terms of the commonly used statistic
λn = −2 log Λn ,
which turns out to be more convenient for asymptotic derivation. The motivation for Λn
comes from two sources: (a) The case where H0 , and H1 are each simple, for which a UMP
test is found from Λn by the Neyman-Pearson lemma; (b) The intuitive explanation that,
for small values of Λn , we can better match the observed data with some value of θ outside
of Θ0 .
In general, Θ0 is determined by a set of r (≤ k) constraints of the form Ri(θ) = 0, 1 ≤ i ≤ r.
In the case of a simple hypothesis θ = θ 0 , the set Θ0 = {θ 0 }, and the function Ri (θ) may
be taken to be
Ri (θ) = θi − θ0i , 1 ≤ i ≤ k.
In the case of a composite hypothesis, the set Θ0 contains more than one element and we must
have r < k. For instance, k = 3, we might have H0 : θ 0 ∈ Θ0 = {θ = (θ1 , θ2 , θ3 ) : θ1 = θ01 }.
In this case, r = 1 and the function R1 (θ) may be taken to be R1 (θ) = θ1 − θ01 . We start
with a well-known but intuitive example that illustrates important aspects of the likelihood ratio method.
iid
Example 4.4.1 Let X1 , . . . , Xn ∼ N (µ, σ 2 ), and consider testing H0 : µ = 0 versus H1 :
µ ≠ 0. Let θ = (µ, σ²)^T. Then k = 2, r = 1. Apparently,
Λn = sup_{θ∈Θ0} (1/σ^n) exp{ −(1/(2σ²)) Σᵢ(Xi − µ)² } / sup_{θ∈Θ} (1/σ^n) exp{ −(1/(2σ²)) Σᵢ(Xi − µ)² }
   = [ Σᵢ(Xi − X̄n)² / ΣᵢXi² ]^{n/2},
and tn = √n X̄n/Sn, with Sn² = (n − 1)^{−1}Σᵢ(Xi − X̄n)², is the t-statistic. In other words, the t-test is equivalent to the LRT. Also, observe that ΣᵢXi² = Σᵢ(Xi − X̄n)² + nX̄n². This implies
Λn = [ (n − 1)/(tn² + n − 1) ]^{n/2}
⇒ λn = −2 log Λn = n log( 1 + tn²/(n − 1) ) = n tn²/(n − 1) + op( tn²/(n − 1) ) →d χ²_1
under H0, since tn →d N(0, 1), as illustrated in Example 1.2.8.
As seen earlier, sometimes it is very difficult or impossible to find the exact distribution
of λn. So approximations in these cases become necessary. The next celebrated theorem, originally stated by Wilks (1938), establishes the asymptotic chi-square distribution of λn under H0. The degree of freedom is just the number of independent constraints specified by H0; it is useful to remember this as a general rule. Before proceeding, to better derive the result, we need a representation of Θ0 given above. Since we have r constraints on the parameter θ, only k − r components of θ = (θ1, . . . , θk)^T are free to change, and so it has k − r degrees of freedom. Without loss of generality, we denote these k − r free parameters by
ϑ = (ϑ1 , . . . , ϑk−r ). So, this specification of Θ0 may equivalently be given as a transformation
H0 : θ = g(ϑ), (4.4)
where g is a continuously differentiable function from Rk−r to Rk with a full rank ∂g(ϑ)/∂ϑ.
For example, consider again H0 : θ 0 ∈ Θ0 = {θ = (θ1 , θ2 , θ3 ) : θ1 = θ01 }. Then, we can set
ϑ1 = θ2 , ϑ2 = θ3 , g1 (ϑ) = θ01 , g2 (ϑ) = θ2 , g3 (ϑ) = θ3 ; Also, suppose θ = (θ1 , θ2 , θ3 )T and
H0 : θ1 = θ2 . Here, Θ = R3 , k = 3 and r = 1, and θ2 and θ3 are the two free changing
parameters. Then we can take ϑ = (θ2 , θ3 )T ∈ Rk−r = R2 , and g1 (ϑ) = θ2 , g2 (ϑ) = θ2 ,
g3 (ϑ) = θ3 .
Theorem 4.4.1 Assume the conditions in Theorem 4.2.1 hold and H0 is determined by (4.4). Under H0, λn →d χ²_r.
Proof. Expanding log L(θ0) around the unrestricted MLE θ̂n, using s(X, θ̂n) = 0 and Theorem 4.2.1, we obtain
2[ log L(θ̂n) − log L(θ0) ] = n(θ̂n − θ0)^T I(θ0)(θ̂n − θ0) + op(1).
Then, since √n(θ̂n − θ0) = √n[I(θ0)]^{−1}s(X, θ0) + op(1),
2[ log L(θ̂n) − log L(θ0) ] = n s^T(θ0) [I(θ0)]^{−1} s(θ0) + op(1).
Similarly, under H0,
2[ log L(g(ϑ̂n)) − log L(g(ϑ0)) ] = n s̃^T(ϑ0) [Ĩ(ϑ0)]^{−1} s̃(ϑ0) + op(1),
where
s̃(ϑ) = (1/n) ∂ log L(g(ϑ))/∂ϑ = D(ϑ) s(g(ϑ)),   D(ϑ) = ∂g(ϑ)/∂ϑ,
and Ĩ(ϑ) is the Fisher information matrix about ϑ. Combining these results and writing θ0 = g(ϑ0) under H0, we obtain
λn = −2 log Λn = 2[ log L(θ̂n) − log L(g(ϑ̂n)) ] = n s^T(θ0) B(ϑ0) s(θ0) + op(1),
where
B(ϑ) = [I(g(ϑ))]^{−1} − [D(ϑ)]^T [Ĩ(ϑ)]^{−1} D(ϑ).
By the CLT, √n [I(θ0)]^{−1/2} s(θ0) →d Z, where Z ∼ Nk(0, Ik). Then, it follows from Slutsky's Theorem that, under H0,
λn →d Z^T [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2} Z.
Finally, it remains to investigate the properties of the matrix [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2}. For notational convenience, let D = D(ϑ), B = B(ϑ), A = I(g(ϑ)), and C = Ĩ(ϑ). Then,
(A^{1/2} B A^{1/2})² = A^{1/2}(A^{−1} − D^T C^{−1} D) A (A^{−1} − D^T C^{−1} D) A^{1/2}
 = (Ik − A^{1/2} D^T C^{−1} D A^{1/2})(Ik − A^{1/2} D^T C^{−1} D A^{1/2})
 = Ik − 2 A^{1/2} D^T C^{−1} D A^{1/2} + A^{1/2} D^T C^{−1} D A D^T C^{−1} D A^{1/2}
 = Ik − A^{1/2} D^T C^{−1} D A^{1/2}
 = A^{1/2} B A^{1/2},
where the fourth equality follows from the fact that C = D A D^T. This shows that A^{1/2} B A^{1/2} is a projection matrix. The rank of A^{1/2} B A^{1/2} is
tr(A^{1/2} B A^{1/2}) = k − tr(C^{−1} D A D^T) = k − (k − r) = r.
Thus, by using arguments similar to those in the proof of Theorem 3.1.5 (or, more directly, by the Cochran Theorem), Z^T [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2} Z =d χ²_r.
Consequently, the LRT with rejection region Λn < e^{−χ²_{r,α}/2} has asymptotic significance level α, where χ²_{r,α} is the (1 − α)th quantile of the chi-square distribution χ²_r.
Under the first type of null hypothesis, namely H0 : θ = θ0, the same result holds with the degrees of freedom equal to k. This result can be derived in a fashion similar to Theorem 4.4.1, but with less algebra; we do not elaborate here and leave it as an exercise.
To find the power of the test that rejects H0 when λn > χ2r,α , for some r, one would need
to know the distribution of λn at the particular θ = θ 1 value where we want to know the
power. But the distribution under θ 1 of λn for a fixed n is also generally impossible to find, so
we may appeal to asymptotics. However, there cannot be a nondegenerate limit distribution
on [0, ∞) for λn under a fixed θ 1 in the alternative. The following simple example illustrates
this difficulty.
Example 4.4.2 Consider the testing problem in Example 4.4.1 again. We saw earlier that
λn = n log( 1 + X̄n² / [ (1/n) Σᵢ(Xi − X̄n)² ] ).
Consider now a value µ ≠ 0. Then X̄n² →wp1 µ² (> 0) and (1/n)Σᵢ(Xi − X̄n)² →wp1 σ². Therefore, clearly λn →wp1 ∞ under each fixed µ ≠ 0. Thus, there cannot be a non-degenerate limit distribution for λn under a fixed alternative µ.
Instead, similar to the Pearson’s Chi-square test discussed earlier, we may also consider
the behavior of λn under “local” alternative, that is, for a sequence θ 1n = θ 0 + n−1/2 δ, where
δ = (δ1 , . . . , δk )T . In this case, a non-central χ2 approximation under the alternative could
be achieved.
Two competitors to the LRT are available in the literature; see Wald (1943) and Rao (1948) for the first introduction of these procedures, respectively. Both of them are general and can
be applied to a wide selection of problems. Typically, the three procedures are asymptotically
first-order equivalent. Recall the null hypothesis
H0 : R(θ) = 0, (4.5)
where R(θ) is a continuously differentiable function from R^k to R^r. The Wald test statistic is defined as
Wn = [R(θ̂n)]^T { [C(θ̂n)]^T [In(θ̂n)]^{−1} C(θ̂n) }^{−1} R(θ̂n),
where θ̂n is an MLE or RLE of θ, In(θ̂n) is the Fisher information matrix based on X, and C(θ) = ∂R(θ)/∂θ. For testing a simple null hypothesis H0 : θ = θ0, R(θ) becomes θ − θ0 and Wn simplifies to
Wn = n(θ̂n − θ0)^T I(θ̂n)(θ̂n − θ0).
Rao (1948) introduced a score test that rejects H0 when the value of
Rn = n [s(X, θ̃n)]^T [I(θ̃n)]^{−1} s(X, θ̃n)
is large, where θ̃n is an MLE or RLE of θ under H0 and s(X, θ) is the normalized score function defined earlier. For testing a simple null hypothesis H0 : θ = θ0, Rn simplifies to
Rn = n [s(X, θ0)]^T [I(θ0)]^{−1} s(X, θ0).
Here are the asymptotic chi-square results for these two statistics.
Theorem 4.5.1 Assume the conditions in Theorem 4.2.1 hold. Under H0 given by (4.5), (i) Wn →d χ²_r and (ii) Rn →d χ²_r.
Proof. (i) Since R(θ0) = 0 under H0, the multivariate delta theorem together with Theorem 4.2.1 yields √n R(θ̂n) →d Nr(0, [C(θ0)]^T [I(θ0)]^{−1} C(θ0)), and hence
n [R(θ̂n)]^T { [C(θ0)]^T [I(θ0)]^{−1} C(θ0) }^{−1} R(θ̂n) →d χ²_r.
Moreover, [C(θ̂n)]^T [I(θ̂n)]^{−1} C(θ̂n) →p [C(θ0)]^T [I(θ0)]^{−1} C(θ0) by the CMT. Then the result follows from Slutsky's theorem and the fact that θ̂n →p θ0 and that I(θ) and C(θ) are continuous at θ0.
(ii) From the Lagrange multiplier method, the RLE θ̃n and the associated multiplier ηn satisfy a pair of first-order conditions, (4.6) and (4.7).
Multiplying [C(θ 0 )]T [nI(θ 0 )]−1 to the left-hand side of (4.7) and using (4.6), we obtain that
[C(θ 0 )]T [nI(θ 0 )]−1 C(θ 0 )ηn = −n[C(θ 0 )]T [nI(θ 0 )]−1 s(θ 0 ) + op (n−1/2 ),
which implies
d
ηnT [C(θ 0 )]T [nI(θ 0 )]−1 C(θ 0 )ηn → χ2r .
Then, the result follows from the above equation and the fact that C(θ̃ n )ηn = −ns(θ̃ n ), and
I(θ) is continuous at θ 0 .
Thus, Wald's test, Rao's score test, and the LRT are asymptotically equivalent. Note that Wald's
test requires computing θbn , whereas Rao’s score test requires computing θ̃ n , not θ
bn . On
the other hand, the LRT requires computing both θ
bn and θ̃ n (or solving two maximization
problems). Hence, one may choose one of these tests that is easy to compute in a particular
application.
The usual duality between testing and confidence intervals says that the acceptance region
of a test with size α can be inverted to give a confidence set of coverage probability (1 − α).
In other words, suppose A(θ 0 ) is the acceptance region of a size α test for H0 : θ = θ 0 ,
and define C(X) = {θ : X ∈ A(θ)}. Then Pθ0(θ0 ∈ C(X)) = 1 − α and hence C(X) is
a 100(1 − α)% confidence set for θ. For example, the acceptance region of the LRT with Θ0 = {θ0} is
A(θ0) = { x : L(θ0; x) ≥ e^{−χ²_{k,α}/2} L(θ̂n; x) }.
Consequently,
C(X) = { θ : L(θ; X) ≥ e^{−χ²_{k,α}/2} L(θ̂n; X) }.
This method is often called the inversion of a test. In particular, the LRT, the Wald test, and the Rao score test can all be inverted to construct confidence sets that have asymptotically a 100(1 − α)% coverage probability. The confidence sets constructed from the LRT, the Wald test, and the score test are respectively called the likelihood ratio, Wald, and score confidence sets. Of these, the Wald and the score confidence sets are ellipsoids because of how the corresponding test statistics are defined. The likelihood ratio confidence set is typically more complicated, although from an asymptotic viewpoint it too is approximately an ellipsoid. Here is an example.
Example 4.6.1 Suppose $X_i \stackrel{iid}{\sim} \mathrm{BIN}(1, p)$, $1 \le i \le n$. For testing $H_0: p = p_0$ versus $H_1: p \neq p_0$, the LRT statistic is
\[
\Lambda_n = \frac{p_0^Y (1-p_0)^{n-Y}}{\sup_p\, p^Y (1-p)^{n-Y}}
          = \frac{p_0^Y (1-p_0)^{n-Y}}{\hat{p}^Y (1-\hat{p})^{n-Y}}
          = \frac{p_0^Y (1-p_0)^{n-Y}}{\left(\frac{Y}{n}\right)^Y \left(1-\frac{Y}{n}\right)^{n-Y}},
\]
where $Y = \sum_{i=1}^n X_i$ and $\hat{p} = Y/n$. Thus, the likelihood ratio confidence set is of the form
\[
C_1(X) = \left\{ p : p^Y (1-p)^{n-Y} \ge e^{-\chi^2_{1,\alpha}/2}\, \hat{p}^Y (1-\hat{p})^{n-Y} \right\}.
\]
The confidence set obtained by inverting acceptance regions of Wald's test is simply
\[
C_2(X) = \left\{ p : |\hat{p} - p| \le \frac{z_{\alpha/2}}{\sqrt{n}} \sqrt{\hat{p}(1-\hat{p})} \right\}
       = \left[ \hat{p} - z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n},\ \hat{p} + z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n} \right],
\]
since $(\chi^2_{1,\alpha})^{1/2} = z_{\alpha/2}$ and $W_n = n(\hat{p}-p_0)^2 I(\hat{p})$, where $I(p) = \frac{1}{p(1-p)}$. This is the textbook confidence interval for $p$. Similarly, inverting the score test $R_n = n(\hat{p}-p_0)^2/[p_0(1-p_0)]$ yields the score confidence set $C_3(X) = [l_C, u_C]$, where $l_C, u_C$ are the roots of the quadratic equation $p(1-p)\chi^2_{1,\alpha} - n(\hat{p}-p)^2 = 0$.
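The three sets are easy to evaluate numerically. The sketch below (an illustration added here, not part of the notes; scipy is assumed available) computes the Wald interval in closed form, the score interval by solving the quadratic above, and the likelihood ratio set by a grid search over $p$.

import numpy as np
from scipy.stats import chi2

def binomial_confidence_sets(y, n, alpha=0.05):
    """Wald, score (Wilson), and likelihood-ratio intervals for a Binomial proportion.
    A sketch following Example 4.6.1; not an official implementation."""
    phat = y / n
    c = chi2.ppf(1 - alpha, df=1)          # chi^2_{1,alpha}
    z = np.sqrt(c)                         # equals z_{alpha/2}

    # Wald interval
    half = z * np.sqrt(phat * (1 - phat) / n)
    wald = (phat - half, phat + half)

    # Score (Wilson) interval: roots of p(1-p)c - n(phat - p)^2 = 0
    a2, a1, a0 = n + c, -(2 * n * phat + c), n * phat ** 2
    disc = np.sqrt(a1 ** 2 - 4 * a2 * a0)
    score = ((-a1 - disc) / (2 * a2), (-a1 + disc) / (2 * a2))

    # Likelihood ratio set {p : loglik(p) >= loglik(phat) - c/2}, by grid search
    grid = np.linspace(1e-6, 1 - 1e-6, 100000)
    loglik = y * np.log(grid) + (n - y) * np.log(1 - grid)
    loglik_hat = y * np.log(phat) + (n - y) * np.log(1 - phat) if 0 < phat < 1 else 0.0
    keep = grid[loglik >= loglik_hat - c / 2]
    lrt = (keep.min(), keep.max())
    return wald, score, lrt

print(binomial_confidence_sets(y=18, n=50))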
Chapter 5
Asymptotics in nonparametric
inference
5.1 Sign test

This is perhaps the earliest example of a nonparametric testing procedure. In fact, the test was apparently discussed by Laplace in the 1700s. The sign test is a test for the median of any continuous distribution without requiring any other assumptions.

Hypothesis The null hypothesis of interest here is that of zero shift in location due to the treatment, namely, $H_0: \theta = 0$ versus $H_1: \theta > 0$. This null hypothesis asserts that each of the distributions (not necessarily the same) for the difference (post-treatment minus pre-treatment observations) has median 0, corresponding to no shift in location due to treatment. This is essentially equivalent to considering the null hypothesis $H_0: \theta = \theta_0$, because we can simply use $H_0: \theta - \theta_0 = 0$.
Procedure The test statistic is given by the total number of $X_1, X_2, \ldots, X_n$ that are greater than $\theta_0$, say
\[
S_n = \sum_{i=1}^n I(X_i > \theta_0),
\]
where $I(\cdot)$ is the indicator function. Then, a large value of $S_n$ leads to rejection of $H_0$. We now need to know the distribution of $S_n$. Obviously, under $H_0$,
\[
S_n \sim \mathrm{BIN}(n, 1/2), \qquad P(S_n = k) = C_n^k \left(\frac{1}{2}\right)^n.
\]
Thus, the p-value is
\[
P\big(\mathrm{BIN}(n, 1/2) \ge S_n\big) = \sum_{k=S_n}^{n} C_n^k \left(\frac{1}{2}\right)^n.
\]
For simplicity, we may use the following large-sample approximation to obtain an approximate p-value. Note that
\[
E_{H_0}(S_n) = \sum_{i=1}^n \frac{1}{2} = n/2, \qquad
\mathrm{Var}_{H_0}(S_n) = \sum_{i=1}^n \frac{1}{4} = n/4.
\]
The asymptotic normality of $(S_n - n/2)/\sqrt{n/4}$ under $H_0$ follows from standard central limit theory for sums of mutually independent, identically distributed random variables, so the approximate p-value is $1 - \Phi\big((S_n - n/2)/\sqrt{n/4}\big)$.
For large sample sizes, we can make use of the standard central limit theorem for sums of i.i.d. random variables to conclude that
\[
\frac{S_n - np_\theta}{[np_\theta(1-p_\theta)]^{1/2}} = \frac{S_n - n(1 - F(0))}{[n(1-F(0))F(0)]^{1/2}}
\]
has an asymptotic $N(0, 1)$ distribution, where $p_\theta = P_\theta(X_1 > 0) = 1 - F(0)$ under the alternative. Thus, for large $n$, we can approximate the exact power by
\[
\mathrm{Power}_\theta \approx 1 - \Phi\left(\frac{b_{\alpha,1/2} - np_\theta}{[np_\theta(1-p_\theta)]^{1/2}}\right),
\]
where $b_{\alpha,1/2}$ is the critical value of the exact size-$\alpha$ test based on $\mathrm{BIN}(n, 1/2)$.
We note that both the exact power and the approximate power against an alternative θ > 0
depend on the common distribution only through the value of its distribution F (z) at z = 0.
Thus, if two distributions F1 and F2 have a common median θ > 0 and F1 (0) = F2 (0), then
the exact power of the sign test against the alternative θ > 0 will be the same for both F1
and F2 .
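As an illustration (not part of the notes; data, seeds, and function names are assumptions), the exact binomial p-value, its normal approximation, and the approximate power formula above can be computed as follows.

import numpy as np
from scipy.stats import binom, norm

def sign_test(x, theta0=0.0):
    """Exact and normal-approximation p-values for H0: median = theta0 vs H1: median > theta0."""
    x = np.asarray(x)
    n, s = len(x), int(np.sum(x > theta0))
    p_exact = binom.sf(s - 1, n, 0.5)                 # P(BIN(n, 1/2) >= s)
    p_approx = norm.sf((s - n / 2) / np.sqrt(n / 4))  # CLT approximation
    return s, p_exact, p_approx

def sign_test_power(n, p_theta, alpha=0.05):
    """Approximate power of the level-alpha sign test when P_theta(X > theta0) = p_theta."""
    b = binom.isf(alpha, n, 0.5)                      # critical value b_{alpha,1/2}
    return norm.sf((b - n * p_theta) / np.sqrt(n * p_theta * (1 - p_theta)))

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=40)
print(sign_test(data))
print(sign_test_power(n=40, p_theta=norm.cdf(0.3)))   # normal shift alternative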
A sequence of tests $\phi_n$ is said to be consistent against a class of alternatives $\Omega_1$ if (i) $E_F(\phi_n) \to \alpha$, $\forall F \in \Omega_0$ (asymptotic level $\alpha$), and (ii) $E_F(\phi_n) \to 1$, $\forall F \in \Omega_1$ (power tending to one at every alternative).
Example 5.1.1 For a parametric example, let $X_1, \ldots, X_n$ be an i.i.d. sample from the Cauchy distribution $C(\theta, 1)$. For all $n \ge 1$, we know that $\bar{X}_n$ also has the $C(\theta, 1)$ distribution. Consider testing the hypothesis $H_0: \theta = 0$ versus $H_1: \theta > 0$ by using a test that rejects for large $\bar{X}_n$. The cutoff point, $k$, is found by making $P_{H_0}(\bar{X}_n > k) = \alpha$, so $k$ is simply the upper $\alpha$th quantile of the $C(0, 1)$ distribution. Then the power of this test is given by $P_\theta(\bar{X}_n > k) = P(C(0, 1) > k - \theta)$, which is a fixed number not depending on $n$. Therefore, the power does not approach 1 as $n \to \infty$, and so the test is not consistent even against parametric alternatives. In contrast, a test based on the median would be consistent in the $C(\theta, 1)$ case (why?).
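A quick simulation (illustrative only; sample sizes and seeds are arbitrary assumptions) makes the contrast visible: the power of the mean-based test stays flat in $n$, while the power of a median-based test climbs toward one.

import numpy as np
from scipy.stats import cauchy, norm

def power_mc(n, theta, reps=20000, alpha=0.05, seed=2):
    """Monte Carlo power of two tests of H0: theta = 0 for C(theta, 1) samples:
    one rejects for a large sample mean, the other for a large sample median (a sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_cauchy((reps, n)) + theta
    k_mean = cauchy.isf(alpha)                           # X-bar is exactly C(0,1) under H0
    k_med = norm.isf(alpha) * (np.pi / 2) / np.sqrt(n)   # median is approx N(0, (pi/2)^2/n) under H0
    return (x.mean(axis=1) > k_mean).mean(), (np.median(x, axis=1) > k_med).mean()

for n in (20, 100, 500):
    print(n, power_mc(n, theta=0.5))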
Theorem 5.1.1 If $F$ is a continuous C.D.F. with a unique median $\theta$, then the sign test is consistent for tests on $\theta$.

Proof. Recall that the sign test rejects $H_0$ if $S_n = \sum_i I(X_i > \theta_0) \ge k_n$. If we choose $k_n = \frac{n}{2} + z_\alpha \sqrt{\frac{n}{4}}$, then, by the ordinary central limit theorem, we have
\[
P_{H_0}(S_n \ge k_n) \to \alpha.
\]
Under an alternative with median $\theta > \theta_0$, the power is
\[
Q_n = P_\theta(S_n \ge k_n) = P_\theta\!\left(\tfrac{1}{n}S_n - p_\theta \ge \tfrac{1}{n}k_n - p_\theta\right),
\]
where $p_\theta = P_\theta(X_1 > \theta_0)$. Since we assume $\theta > \theta_0$, it follows that $\frac{1}{n}k_n - p_\theta < 0$ for all large $n$. Also, $\frac{1}{n}S_n - p_\theta$ converges in probability to 0 under any $F$ (WLLN), and so $Q_n \to 1$. Since the power goes to 1, the test is consistent against any alternative $F$ satisfying $\theta > \theta_0$.
We wish to compare the sign test with the t-test in terms of asymptotic relative efficiency. The point is that, at a fixed alternative $\theta$, if $\alpha$ remains fixed, then, for large $n$, the power of both tests is approximately 1 (since both are consistent) and there is no practical way to compare the two tests. Instead, we can see how the powers compare for $\theta \approx \theta_0$. The idea is to take $\theta = \theta_n \to \theta_0$ at such a rate that the limiting power of the tests is strictly between $\alpha$ and 1. If the two powers converge to different values, then we can take the ratio of the limits as a measure of efficiency. The idea is due to E.J.G. Pitman (Pitman 1948). We first give a brief introduction to the concept of ARE for tests.
In estimation, an agreed-on basis for comparing two sequences of estimates whose mean squared errors each converge to zero as $n \to \infty$ is to compare the variances in their limit distributions. Thus, if $\sqrt{n}(\hat{\theta}_{1n} - \theta) \stackrel{d}{\to} N(0, \sigma_1^2(\theta))$ and $\sqrt{n}(\hat{\theta}_{2n} - \theta) \stackrel{d}{\to} N(0, \sigma_2^2(\theta))$, then the asymptotic relative efficiency (ARE) of $\hat{\theta}_{2n}$ with respect to $\hat{\theta}_{1n}$ is defined as $\sigma_1^2(\theta)/\sigma_2^2(\theta)$.
One can similarly ask what should be a basis for comparing two sequences of tests based on statistics $T_{1n}$ and $T_{2n}$ of a hypothesis $H_0: \theta = \theta_0$. Suppose we use statistics such that large values correspond to rejection of $H_0$; i.e., $H_0$ is rejected if $T_n > c_n$. Let $\alpha$, $\beta$ denote the type I error probability and the power of the test, and let $\theta$ denote a specific alternative. Suppose $n(\alpha, \beta, \theta, T)$ is the smallest sample size such that the size-$\alpha$ test based on $T$ has power at least $\beta$ at the alternative $\theta$. Two tests based on $T_{1n}$ and $T_{2n}$ can be compared through the ratio
\[
\frac{n(\alpha, \beta, \theta, T_1)}{n(\alpha, \beta, \theta, T_2)},
\]
and $T_{1n}$ is preferred if this ratio is less than 1. The threshold sample size $n(\alpha, \beta, \theta, T)$ is difficult or impossible to calculate even in the simplest examples. Furthermore, the ratio can depend on the particular choices of $\alpha$, $\beta$, $\theta$.
Theorem 5.1.2 Let $-\infty < h < \infty$ and $\theta_n = \theta_0 + \frac{h}{\sqrt{n}}$. Consider the following conditions: (i) there exist functions $\mu(\theta)$, $\sigma(\theta)$ such that, for all $h$,
\[
\frac{\sqrt{n}\,(T_n - \mu(\theta_n))}{\sigma(\theta_n)} \stackrel{d}{\to} N(0, 1);
\]
(ii) $\mu'(\theta_0) > 0$; (iii) $\sigma(\theta_0) > 0$ and $\sigma(\theta)$ is continuous at $\theta_0$. Suppose $T_{1n}$ and $T_{2n}$ each satisfy conditions (i)-(iii). Then the Pitman ARE of the test based on $T_{2n}$ with respect to that based on $T_{1n}$ is
\[
e(T_2, T_1) = \frac{\sigma_1^2(\theta_0)}{\sigma_2^2(\theta_0)} \left(\frac{\mu_2'(\theta_0)}{\mu_1'(\theta_0)}\right)^2.
\]
See Serfling (1980) for a detailed proof. By this theorem, we are now ready to derive the
ARE of the sign test with respect to the t-test.
Corollary 5.1.1 Let $X_1, \ldots, X_n$ be i.i.d. observations from any symmetric continuous distribution function $F(x - \theta)$ with density $f(\cdot)$, where $f(0) > 0$, $f$ is continuous at 0 and $F(0) = \frac{1}{2}$. The Pitman asymptotic relative efficiency of the one-sample test procedure (one- or two-sided) based on the sign test statistic $S_n$ with respect to the corresponding normal theory test based on $\bar{X}_n$ is
\[
e(S_n, \bar{X}_n) = 4\sigma_F^2 f^2(0).
\]
Proof. For $T_{2n} = \frac{1}{n}S_n$, first notice that $E_\theta(T_{2n}) = P_\theta(X_1 > 0) = 1 - F(-\theta)$. Also, $\mathrm{Var}_\theta(T_{2n}) = F(-\theta)(1 - F(-\theta))/n$. We choose $\mu_n(\theta) = 1 - F(-\theta)$ and $\sigma_n^2(\theta) = F(-\theta)(1 - F(-\theta))/n$. Therefore, $\mu_n'(\theta) = f(-\theta)$ and $\mu_n'(\theta_0) = f(0) > 0$. For $T_{1n} = \bar{X}_n$, choose $\mu_n(\theta) = \theta$ and $\sigma_n^2(\theta) = \sigma_F^2/n$. Conditions (i)-(iii) are easily verified here, too, with these choices of $\mu_n(\theta)$ and $\sigma_n(\theta)$. Therefore, by Theorem 5.1.2, the result follows immediately.
Although the t-test can be arbitrarily bad relative to the sign test, the sign test cannot be arbitrarily bad relative to the t-test under some restrictions on the C.D.F. $F$, as the following result shows. Hodges and Lehmann (1956) found that within a certain class of populations, $e(S_n, \bar{X}_n)$ is always at least $1/3$, and the bound is attained when $F$ is any symmetric uniform distribution. Of course, this minimum efficiency is not very good. We will later discuss alternative nonparametric tests for the location-parameter problem that have much better asymptotic efficiencies.
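For concreteness, the formula $e(S_n, \bar{X}_n) = 4\sigma_F^2 f^2(0)$ can be evaluated for a few familiar symmetric densities; the snippet below (added as an illustration, not from the notes) recovers the well-known values, including $1/3$ for the uniform.

import numpy as np

# e(S_n, Xbar_n) = 4 * sigma_F^2 * f(0)^2 for several symmetric densities
cases = {
    # name: (variance, density at 0)
    "normal(0,1)": (1.0, 1 / np.sqrt(2 * np.pi)),   # ARE = 2/pi ~ 0.637
    "Laplace(0,1)": (2.0, 0.5),                     # ARE = 2
    "uniform(-1,1)": (1.0 / 3.0, 0.5),              # ARE = 1/3 (the lower bound)
}
for name, (var, f0) in cases.items():
    print(f"{name:>14s}: ARE(sign, t) = {4 * var * f0 ** 2:.3f}")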
5.2 Signed rank test (Wilcoxon)
5.2.1 Procedure
Recall that Hodges and Lehmann proved that the sign test has only a small positive lower bound of $1/3$ on the Pitman efficiency with respect to the t-test in a certain class of populations, which is not satisfactory. The problem with the sign test is that it uses only the signs of $X_i - \theta_0$, not the magnitudes of $X_i - \theta_0$. A nonparametric test that incorporates the magnitudes as well as the signs, under a slightly stronger assumption on the population distribution, is the Wilcoxon signed-rank test; see Wilcoxon (1945).

Suppose that $X_1, \ldots, X_n$ are the observed data from some location-parameter distribution $F(x - \theta)$, where $F$ is symmetric, so that $\theta$ is the median (point of symmetry) of the distribution. We want to test $H_0: \theta = 0$ against $H_1: \theta > 0$. We start by ranking $|X_1|, \ldots, |X_n|$ from the smallest to the largest, giving ranks $R_1, \ldots, R_n$ and order statistics $|X|_{(1)}, \ldots, |X|_{(n)}$.
Then, the Wilcoxon signed-rank statistic is defined to be the sum of those ranks that correspond to originally positive observations. That is,
\[
T_n = \sum_{i=1}^n R_i I(X_i > 0),
\]
where the term $R_i I(X_i > 0)$ is known as the positive signed rank of $X_i$. When $\theta$ is greater than 0, there will tend to be a large proportion of positive observations, and they will tend to have the larger absolute values. Hence, we would expect a higher proportion of positive signed ranks with relatively large values. At the $\alpha$ level of significance, reject $H_0$ if $T_n \ge t_\alpha$, where the constant $t_\alpha$ is chosen to make the type I error probability equal to $\alpha$. Lower-sided and two-sided tests can be constructed similarly.
Remark 5.2.1 It may appear that some of the information in the ranking of the sample is lost by using only the positive signed ranks to compute $T_n$. Such is not the case. If we define $\tilde{T}_n$ to be the sum of ranks (of the absolute values) corresponding to the negative observations, then $\tilde{T}_n = \sum_{i=1}^n (1 - I(X_i > 0))R_i$. It follows that $T_n + \tilde{T}_n = \sum_{i=1}^n R_i = n(n+1)/2$. Thus, the test procedures defined above could equivalently be based on $\tilde{T}_n = n(n+1)/2 - T_n$.
For $k = 1, \ldots, n$, let $W_k$ denote the indicator that the observation whose absolute value has rank $k$ is positive. It turns out that, under $H_0$, the $\{W_k\}$ have a relatively simple joint distribution.

Proposition 5.2.1 Under $H_0$, $W_1, \ldots, W_n$ are i.i.d. $\mathrm{BIN}(1, 1/2)$ random variables.

Proof. By the symmetry assumption, $W_i \sim \mathrm{BIN}(1, 1/2)$ is obvious. To show the independence, we define the so-called anti-rank,
\[
D_k = \{i : R_i = k,\ 1 \le i \le n\},
\]
that is, the index of the observation whose absolute rank is $k$. Thus, $W_k = I(X_{D_k} > 0)$. Let $D = (D_1, \ldots, D_n)$ and $d = (d_1, \ldots, d_n)$; then we have
\begin{align*}
P(W_1 = w_1, \ldots, W_n = w_n)
&= \sum_d P\big(I(X_{D_1} > 0) = w_1, \ldots, I(X_{D_n} > 0) = w_n \mid D = d\big)\, P(D = d) \\
&= \sum_d P\big(I(X_{d_1} > 0) = w_1, \ldots, I(X_{d_n} > 0) = w_n\big)\, P(D = d) \\
&= \left(\frac{1}{2}\right)^n \sum_d P(D = d) = \left(\frac{1}{2}\right)^n,
\end{align*}
where the second equality comes from the fact that $I(X_1 > 0), \ldots, I(X_n > 0)$ are independent of $(D_1, \ldots, D_n)$. The independence is therefore immediately obtained by noting that $P(W_i = w_i) = \frac{1}{2}$. The independence between $I(X_1 > 0), \ldots, I(X_n > 0)$ and $(D_1, \ldots, D_n)$ can be easily established as follows. $(D_1, \ldots, D_n)$ is a function of $|X_1|, \ldots, |X_n|$, and the pairs $(I(X_i > 0), |X_i|)$, $i = 1, \ldots, n$, are mutually independent. Thus, it suffices to show that $I(X_i > 0)$ is independent of $|X_i|$. In fact,
\begin{align*}
P(I(X_i > 0) = 1, |X_i| \le x) &= P(0 < X_i \le x) = F(x) - F(0) = F(x) - \frac{1}{2} \\
&= \frac{2F(x) - 1}{2} = P(I(X_i > 0) = 1)\, P(|X_i| \le x).
\end{align*}
By Proposition 5.2.1, under $H_0$ we can write $T_n = \sum_{k=1}^n k\,W_k$ with the $W_k$ i.i.d. $\mathrm{BIN}(1, 1/2)$, so that $E_{H_0}(T_n) = n(n+1)/4$ and $\mathrm{Var}_{H_0}(T_n) = n(n+1)(2n+1)/24$, and a central limit theorem for weighted sums of i.i.d. variables gives the asymptotic normality of $T_n$ under $H_0$ (Theorem 5.2.1). Therefore, the signed-rank test can be implemented by rejecting the null hypothesis $H_0: \theta = 0$ if
\[
T_n > \frac{n(n+1)}{4} + z_\alpha \sqrt{\frac{n(n+1)(2n+1)}{24}}.
\]
The other option would be to find the exact finite-sample distribution of $T_n$ under the null, as illustrated above. This can be done in principle, but the CLT approximation works quite well.
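Here is a small sketch (illustrative, not part of the notes; the data are simulated) of the signed-rank test with the normal approximation above, cross-checked against scipy's implementation.

import numpy as np
from scipy.stats import norm, rankdata, wilcoxon

def signed_rank_test(x, alpha=0.05):
    """One-sided Wilcoxon signed-rank test of H0: theta = 0 vs H1: theta > 0,
    using the normal approximation to the null distribution of T_n."""
    x = np.asarray(x)
    n = len(x)
    ranks = rankdata(np.abs(x))              # ranks of |X_i|
    t = np.sum(ranks[x > 0])                 # T_n = sum of positive signed ranks
    mean = n * (n + 1) / 4
    sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (t - mean) / sd
    return t, norm.sf(z), t > mean + norm.isf(alpha) * sd

rng = np.random.default_rng(3)
x = rng.laplace(loc=0.4, size=30)
print(signed_rank_test(x))
print(wilcoxon(x, alternative="greater"))    # cross-check (uses a continuity correction)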
Unlike the null case, the Wilcoxon signed-rank statistic Tn does not have a representation
as a sum of independent random variables under the alternative. So the asymptotic non-null
distribution of Tn , which is very useful for approximating the power and establishing the
consistency of the test, does not follow from the CLT for independent summands. However,
Tn still belongs to the class of U -statistics, and hence the CLTs for U -statistics can be used
to derive the asymptotic nonnull distribution of Tn and thereby get an approximation to the
power of the Wilcoxon signed-rank test. The following proposition is useful for deriving its
non-null distribution.
Proposition 5.2.2 We have the following equivalent expression for $T_n$:
\[
T_n = \sum_{i \le j} I\!\left(\frac{X_i + X_j}{2} > 0\right). \qquad (5.1)
\]
Proof. Note that, for $i < j$, $X_{D_i} + X_{D_j} > 0$ if and only if $X_{D_j} > 0$, since $|X_{D_i}| < |X_{D_j}|$. Using this, we have that the right side of expression (5.1) is equal to
\begin{align*}
\sum_{i=1}^n I(X_i > 0) + \sum_{i < j} I\!\left(\frac{X_{D_i} + X_{D_j}}{2} > 0\right)
&= \sum_{j=1}^n I(X_{D_j} > 0) + \sum_{j=1}^n (j-1) I(X_{D_j} > 0) \\
&= \sum_{j=1}^n j\, I(X_{D_j} > 0),
\end{align*}
which equals $\sum_{i=1}^n R_i I(X_i > 0) = T_n$.
In general, let $X_1, \ldots, X_n$ be i.i.d. and let $h(x_1, \ldots, x_r)$ be a real-valued function of $r$ arguments. Define
\[
U = \frac{1}{C_n^r} \sum_{1 \le i_1 < \cdots < i_r \le n} h(X_{i_1}, \ldots, X_{i_r}).
\]
Statistics of this form are called $U$-statistics ($U$ for unbiased); $h$ is called the kernel and $r$ its order. We will assume that $h$ is permutation symmetric so that $U$ has that property as well.
Example 5.2.1 Suppose $r = 1$. Then the linear statistic $\frac{1}{n}\sum_{i=1}^n h(X_i)$ is clearly a $U$-statistic. In particular, $\frac{1}{n}\sum_{i=1}^n X_i^k$ is a $U$-statistic for any $k$. Let $r = 2$ and $h(x_1, x_2) = \frac{1}{2}(x_1 - x_2)^2$. Then, on calculation,
\[
\frac{1}{C_n^2} \sum_{i<j} \frac{1}{2}(X_i - X_j)^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.
\]
Thus, the sample variance is a $U$-statistic. Let $x_0$ be a fixed real number, $r = 1$, and $h(x) = I(x \le x_0)$. Then $U = \frac{1}{n}\sum_{i=1}^n I(X_i \le x_0) = F_n(x_0)$, the empirical C.D.F. at $x_0$. Thus $F_n(x_0)$ for any specified $x_0$ is a $U$-statistic.
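The identity for the variance kernel is easy to check numerically; the following throwaway snippet (added for illustration only) compares the brute-force $U$-statistic with the usual unbiased sample variance.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
x = rng.normal(size=50)

# U-statistic with kernel h(x1, x2) = (x1 - x2)^2 / 2, averaged over all C(n,2) pairs
u = np.mean([(a - b) ** 2 / 2 for a, b in combinations(x, 2)])
s2 = x.var(ddof=1)   # unbiased sample variance
print(u, s2, np.isclose(u, s2))   # the two agree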
Example 5.2.2 Let $r = 2$ and $h(x_1, x_2) = I(x_1 + x_2 > 0)$. The corresponding $U$-statistic is $U = \frac{1}{C_n^2}\sum_{i<j} I(X_i + X_j > 0)$. By Proposition 5.2.2, $U$ is related to the one-sample Wilcoxon signed-rank statistic $T_n$.

The summands in the definition of a $U$-statistic are not independent. Hence, neither the exact distribution theory nor the asymptotics are straightforward. Hajek had the brilliant idea of projecting $U$ onto the class of linear statistics of the form $\frac{1}{n}\sum_{i=1}^n g(X_i)$ for some function $g$. It turns out that the projection is the dominant part and determines the limiting distribution of $U$. The main theorems can be found in Serfling (1980).
For $k = 1, \ldots, r$, let
\[
h_k(x_1, \ldots, x_k) = E[h(X_1, \ldots, X_r) \mid X_1 = x_1, \ldots, X_k = x_k]
\]
and $\zeta_k = \mathrm{Var}(h_k(X_1, \ldots, X_k))$.

Theorem 5.2.2 Suppose that the kernel $h$ satisfies $Eh^2(X_1, \ldots, X_r) < \infty$ and that $0 < \zeta_1 < \infty$. Then,
\[
\frac{U - \theta}{\sqrt{\mathrm{Var}(U)}} \stackrel{d}{\to} N(0, 1),
\]
where $\theta = Eh(X_1, \ldots, X_r)$ and $\mathrm{Var}(U) = \frac{1}{n} r^2 \zeta_1 + O(n^{-2})$.
With these results, we are ready to present the asymptotic normality of $T_n$.

Theorem 5.2.3 If $F$ is continuous, then $(T_n - E(T_n))/\sqrt{\mathrm{Var}(T_n)} \stackrel{d}{\to} N(0, 1)$, whether or not $H_0$ holds.

Proof. By Proposition 5.2.2, write
\[
\frac{1}{C_n^2} T_n = \frac{1}{C_n^2}\sum_{i=1}^n I(X_i > 0) + \frac{1}{C_n^2}\sum_{i<j} I(X_i + X_j > 0).
\]
Note that the first term is of smaller order ($O_p(n^{-1})$) and we need only consider the second term ($O_p(1)$). The second term, denoted by $U_n$, is a $U$-statistic as defined above. Thus, by Theorem 5.2.2, $(U_n - E(U_n))/\sqrt{\mathrm{Var}(U_n)} \stackrel{d}{\to} N(0, 1)$, and the result immediately follows from Slutsky's theorem.
With the help of this theorem, we can easily establish the consistency of the test based on $T_n$.

Theorem 5.2.4 If $F$ is a continuous symmetric C.D.F. with a unique median $\theta$, then the signed-rank test is consistent for tests on $\theta$.

Proof. Recall that the signed-rank test rejects $H_0$ if $T_n = \sum_{i \le j} I\big(\frac{X_i + X_j}{2} > 0\big) \ge t_n$. If we choose
\[
t_n = \frac{n(n+1)}{4} + z_\alpha \sqrt{\frac{n(n+1)(2n+1)}{24}},
\]
then, by Theorem 5.2.1, we have
\[
P_{H_0}(T_n \ge t_n) \to \alpha.
\]
Under an alternative with $\theta > 0$, the power is
\[
Q_n = P_\theta(T_n \ge t_n) = P_\theta\!\left(\tfrac{1}{C_n^2} T_n - p_\theta \ge \tfrac{1}{C_n^2} t_n - p_\theta\right),
\]
where $p_\theta = P_\theta(X_1 + X_2 > 0)$. Since we assume $\theta > 0$ under the alternative, it follows that $\frac{1}{C_n^2} t_n - p_\theta < 0$ for all large $n$. Also, $\frac{1}{C_n^2} T_n - p_\theta$ converges in probability to 0 under any $F$ (Theorem 5.2.2), and so $Q_n \to 1$. Since the power goes to 1, the test is consistent against any alternative $F$ satisfying $\theta > 0$.
Furthermore, Theorem 5.2.2 allows us to derive the relative efficiency of $T_n$ with respect to other tests. Since $T_n$ takes into account the magnitude as well as the sign of the sample observations, we expect that overall it may have better efficiency properties than the sign test. The following striking result was proved by Hodges and Lehmann (1956).

Theorem 5.2.5 Let $X_1, \ldots, X_n$ be i.i.d. observations from a symmetric continuous distribution function $F(x - \theta)$ with density $f(x - \theta)$.
(i) The Pitman asymptotic relative efficiency of the one-sample test procedure based on $T_n$ with respect to the test based on $\bar{X}_n$ is
\[
e(T_n, \bar{X}_n) = 12\sigma_F^2 \left(\int_{-\infty}^{\infty} f^2(u)\,du\right)^2.
\]
(ii) $\inf_{F \in \mathcal{F}} e(T_n, \bar{X}_n) = \frac{108}{125} \approx 0.864$, where $\mathcal{F}$ is the family of continuous, symmetric C.D.F.s with $\sigma_F^2 < \infty$. The infimum is attained at $F$ with density $f(x) = b(a^2 - x^2)$, $|x| < a$, where $a = \sqrt{5}$ and $b = 3\sqrt{5}/100$.
Proof. (i) Similar to the proof of Corollary 5.1.1, we need to verify the conditions in Theorem 5.1.2. Let $T_{2n} = \frac{1}{C_n^2} T_n$. By Theorem 5.2.3, $T_{2n}$ is asymptotically normally distributed, so it suffices to study its expectation and variance. It is easy to see that
\begin{align*}
E(T_{2n}) &= \frac{1}{C_n^2}\left[ n(1 - F(-\theta)) + \frac{n(n-1)}{2} P_\theta(X_1 + X_2 > 0) \right] \\
&= P_\theta(X_1 + X_2 > 0) + O(n^{-1}) \approx \int [1 - F(-x-\theta)]f(x-\theta)\,dx.
\end{align*}
The variance is more complicated; however, by using Theorem 5.2.2,
\begin{align*}
\mathrm{Var}(T_{2n}) &= \frac{2^2}{n}\mathrm{Var}(h_1(X_1)) + O(n^{-2}) \\
&\approx \frac{4}{n}\left\{ E\big(E^2(h(X_1, X_2) \mid X_1)\big) - \big(E(E(h(X_1, X_2) \mid X_1))\big)^2 \right\} \\
&= \frac{4}{n}\left\{ E\big([1 - F(-X_1-\theta)]^2\big) - E^2 h(X_1, X_2) \right\} \\
&= \frac{4}{n}\left\{ \int [1 - F(-x-\theta)]^2 f(x-\theta)\,dx - \left(\int [1 - F(-x-\theta)]f(x-\theta)\,dx\right)^2 \right\}.
\end{align*}
Thus, to apply the Pitman efficiency theorem, we choose $\mu_n(\theta) = \int F(x+\theta)f(x-\theta)\,dx$ and
\[
\sigma_n^2(\theta) = \frac{4}{n}\left\{ \int F^2(x+\theta)f(x-\theta)\,dx - \left(\int F(x+\theta)f(x-\theta)\,dx\right)^2 \right\}.
\]
Some calculation yields $\mu_n'(\theta) = 2\int f(x+\theta)f(x-\theta)\,dx$ and $\mu_n'(0) = 2\int f^2(u)\,du > 0$. At $\theta_0 = 0$, since $F(X_1) \sim U(0, 1)$, we get $\sigma_n^2(0) = \frac{4}{n}\left(\frac{1}{3} - \frac{1}{4}\right) = \frac{1}{3n}$. For $T_{1n} = \bar{X}_n$, take $\mu_n(\theta) = \theta$ and $\sigma_n^2(\theta) = \sigma_F^2/n$. Therefore, by Theorem 5.1.2,
\[
e(T_n, \bar{X}_n) = \frac{\sigma_F^2/n}{1/(3n)}\left(\frac{2\int f^2(u)\,du}{1}\right)^2\cdot\frac{1}{4}\cdot 4
= 12\sigma_F^2\left(\int f^2(u)\,du\right)^2.
\]
(ii) It can be shown that $e(T_n, \bar{X}_n)$ is location and scale invariant, so we can assume that $f$ is symmetric about 0 and $\sigma_F^2 = 1$. The problem, then, is to minimize $\int f^2(u)\,du$ subject to $\int f(u)\,du = 1$, $\int u^2 f(u)\,du = 1$, and $\int u f(u)\,du = 0$ (by symmetry). This is equivalent to minimizing
\[
\int f^2 + 2b \int u^2 f - 2ba^2 \int f, \qquad (5.2)
\]
where $a$ and $b$ are positive constants to be determined later. We now write (5.2) as
\[
\int [f^2 + 2b(x^2 - a^2)f] = \int_{|x| \le a} [f^2 + 2b(x^2 - a^2)f] + \int_{|x| > a} [f^2 + 2b(x^2 - a^2)f]. \qquad (5.3)
\]
First complete the square in the first term on the right side of (5.3) to get
\[
\int_{|x| \le a} [f + b(x^2 - a^2)]^2 - \int_{|x| \le a} b^2 (x^2 - a^2)^2. \qquad (5.4)
\]
Now (5.3) is equal to the two terms of (5.4) plus the second term on the right side of (5.3). We can now write down the density that minimizes (5.3).
If $|x| > a$, take $f(x) = 0$, since $x^2 > a^2$; and if $|x| \le a$, take $f(x) = b(a^2 - x^2)$, since the integral in the first term of (5.4) is nonnegative. We can now determine the values of $a$ and $b$ from the side conditions. From $\int f = 1$, we have
\[
\int_{-a}^{a} b(a^2 - x^2)\,dx = 1,
\]
which implies that $a^3 b = 3/4$. Further, from $\int x^2 f = 1$, we have $\int_{-a}^{a} x^2 b(a^2 - x^2)\,dx = 1$, from which $a^5 b = 15/4$. Hence solving for $a$ and $b$ yields $a = \sqrt{5}$ and $b = 3\sqrt{5}/100$. Now,
\[
\int f^2 = \int_{-\sqrt{5}}^{\sqrt{5}} \left[\frac{3\sqrt{5}}{100}(5 - x^2)\right]^2 dx = \frac{3\sqrt{5}}{25},
\]
which leads to the result, $\inf_{F \in \mathcal{F}} e(T_n, \bar{X}_n) = 12\left(\frac{3\sqrt{5}}{25}\right)^2 = \frac{108}{125} \approx 0.864$.
Remark 5.2.2 Notice that the worst-case density f is not one of heavy tails but one with
no tails at all (i.e., it has a compact support). Also note that the minimum Pitman efficiency
is 0.864 in the class of symmetric densities with a finite variance, a very respectable lower
bound.
The value of the Pitman efficiency can be obtained by direct calculation from the formula above for several distributions belonging to the family $\mathcal{F}$ defined in Theorem 5.2.5. It is interesting that, even in the normal case, the Wilcoxon test is about 95% efficient with respect to the t-test ($e = 3/\pi \approx 0.955$).
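The efficiencies are straightforward to evaluate numerically from $e(T_n, \bar{X}_n) = 12\sigma_F^2(\int f^2)^2$; the snippet below (added as an illustration, not from the notes; scipy is assumed) does this for a few standard symmetric densities.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def wilcoxon_vs_t_are(dist):
    """Compute 12 * sigma_F^2 * (int f^2 du)^2 for a scipy continuous distribution."""
    var = dist.var()
    int_f2, _ = quad(lambda u: dist.pdf(u) ** 2, *dist.support())
    return 12 * var * int_f2 ** 2

for name, dist in [("normal", stats.norm()),
                   ("uniform(-1,1)", stats.uniform(loc=-1, scale=2)),
                   ("Laplace", stats.laplace()),
                   ("logistic", stats.logistic())]:
    print(f"{name:>14s}: {wilcoxon_vs_t_are(dist):.3f}")
# normal -> 3/pi ~ 0.955, uniform -> 1.0, Laplace -> 1.5, logistic -> pi^2/9 ~ 1.097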
The Wilcoxon signed-rank statistic Tn can be used to construct a point estimate for the point
of symmetry of a symmetric density, and from it one can construct a confidence interval.
Recall that, under $H_0: \theta = 0$, the distribution of $T_n$ is symmetric about its mean $n(n+1)/4$. A natural estimator of $\theta$ is the amount $\hat{\theta}$ that should be subtracted from each $X_i$ so that the value of $T_n$, when applied to the shifted sample $X_1 - \hat{\theta}, \ldots, X_n - \hat{\theta}$, is as close to $n(n+1)/4$ as possible. Intuitively, we estimate $\theta$ by the amount $\hat{\theta}$ by which the sample should be shifted so that $X_1 - \hat{\theta}, \ldots, X_n - \hat{\theta}$ looks like a sample from a population with median 0.

For any pair $i, j$ with $i \le j$, define the Walsh average $W_{ij} = \frac{1}{2}(X_i + X_j)$ (see Walsh (1959)). Then the Hodges-Lehmann estimate $\hat{\theta}$ is defined as
\[
\hat{\theta} = \mathrm{Median}\{W_{ij} : 1 \le i \le j \le n\}.
\]
Theorem 5.2.6 For the Hodges-Lehmann estimator $\hat{\theta}$,
\[
\sqrt{n}(\hat{\theta} - \theta) \stackrel{d}{\to} N\!\left(0,\ \frac{1}{12\left[\int_{-\infty}^{\infty} f^2(u)\,du\right]^2}\right).
\]
The proof of this theorem can be found in Hettmansperger and McKean (1998). For symmetric distributions, by the CLT, $\sqrt{n}(\bar{X} - \theta) \stackrel{d}{\to} N(0, \sigma_F^2)$. The ratio of the variances in the two asymptotic distributions, $12\sigma_F^2\left(\int_{-\infty}^{\infty} f^2(u)\,du\right)^2$, is the ARE of $\hat{\theta}$ relative to $\bar{X}_n$. This ARE equals the asymptotic relative efficiency of the Wilcoxon signed-rank test with respect to the t-test in the testing problem (Theorem 5.2.5).
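Computing the Hodges-Lehmann estimate is a one-liner once all Walsh averages are formed; the sketch below (illustrative only, $O(n^2)$ memory) also prints the sample mean and median for comparison.

import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann estimate: median of the Walsh averages (X_i + X_j)/2, i <= j."""
    x = np.asarray(x)
    iu = np.triu_indices(len(x))              # all pairs with i <= j
    walsh = (x[iu[0]] + x[iu[1]]) / 2
    return np.median(walsh)

rng = np.random.default_rng(5)
x = rng.standard_t(df=3, size=200) + 1.0      # heavy-tailed sample centered at 1
print(hodges_lehmann(x), np.mean(x), np.median(x))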
A confidence interval for $\theta$ can be constructed using the distribution of $T_n$. The interval is found from the following connection with the null distribution of $T_n$. Let $M = \frac{n(n+1)}{2}$ be the total number of Walsh averages, with order statistics $W_{(1)} \le \cdots \le W_{(M)}$.

Theorem 5.2.7 (Tukey's method of confidence interval) Let $k_\alpha$ denote the positive integer such that $P(T_n < k_\alpha) = \alpha/2$. Then $[W_{(k_\alpha)}, W_{(M - k_\alpha + 1)}]$ is a confidence interval for $\theta$ at confidence level $1 - \alpha$ $(0 < \alpha < 1/2)$.
Proof. Write
\begin{align*}
P_\theta\big(W_{(k_\alpha)} \le \theta \le W_{(M - k_\alpha + 1)}\big)
&= 1 - P_\theta\big(\theta < W_{(k_\alpha)}\big) - P_\theta\big(\theta > W_{(M - k_\alpha + 1)}\big) \\
&= 1 - P(T_n \ge M - k_\alpha + 1) - P(T_n \le k_\alpha - 1) \\
&= 1 - 2P(T_n < k_\alpha) = 1 - \alpha,
\end{align*}
where the second equality uses the fact that the number of Walsh averages exceeding the true $\theta$ has the null distribution of $T_n$, and the last equality uses the fact that $T_n$ follows a symmetric distribution about $n(n+1)/4$ under the null (Proposition 5.2.1 and Remark 5.2.1).
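Putting the pieces together, the interval can be computed exactly: under the null, $T_n$ has the distribution of $\sum_k k W_k$ with $W_k$ i.i.d. $\mathrm{BIN}(1, 1/2)$, which a simple dynamic-programming convolution delivers. The following sketch (illustrative; a conservative choice of $k_\alpha$ is used since exact equality $P(T_n < k_\alpha) = \alpha/2$ is generally unattainable) finds $k_\alpha$ and returns $[W_{(k_\alpha)}, W_{(M-k_\alpha+1)}]$.

import numpy as np

def signrank_null_cdf(n):
    """Exact null cdf of T_n = sum_k k*W_k with W_k i.i.d. BIN(1, 1/2), via convolution."""
    m = n * (n + 1) // 2
    pmf = np.zeros(m + 1)
    pmf[0] = 1.0
    for k in range(1, n + 1):                 # include rank k with probability 1/2
        new = 0.5 * pmf.copy()
        new[k:] += 0.5 * pmf[:m + 1 - k]
        pmf = new
    return np.cumsum(pmf)                     # cdf over 0, 1, ..., M

def tukey_interval(x, alpha=0.05):
    """Distribution-free confidence interval [W_(k_alpha), W_(M - k_alpha + 1)] for theta."""
    x = np.asarray(x)
    n = len(x)
    m = n * (n + 1) // 2
    cdf = signrank_null_cdf(n)
    # largest k_alpha with P(T_n < k_alpha) <= alpha/2 (conservative coverage >= 1 - alpha)
    k_alpha = max(int(np.searchsorted(cdf, alpha / 2, side="right")), 1)
    iu = np.triu_indices(n)
    walsh = np.sort((x[iu[0]] + x[iu[1]]) / 2)
    return walsh[k_alpha - 1], walsh[m - k_alpha]   # 1-based order statistics

rng = np.random.default_rng(6)
x = rng.logistic(loc=0.7, size=25)
print(tukey_interval(x))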