Asymptotic Statistics (By Changliang ZOU)


Lecture Notes on Asymptotic Statistics

Changliang Zou
Prologue
Why asymptotic statistics? The use of asymptotic approximations is twofold. First, they
enable us to find approximate tests and confidence regions. Second, approximations can be
used theoretically to study the quality (efficiency) of statistical procedures. — van der Vaart

Approximate statistical procedures

To carry out a statistical test, we need to know the critical value of the test statistic.
Roughly speaking, this means we must know the distribution of the test statistic under the
null hypothesis. Because such distributions are often analytically intractable, only approxi-
mations are available in practice.

Consider for instance the classical t-test for location. Given a sample of iid observations
X1 , . . . , Xn , we wish to test H0 : µ = µ0 . If the observations arise from a normal distribution

with mean µ0, then the distribution of the t-test statistic, √n(X̄n − µ0)/Sn, is exactly known,
namely t(n − 1). However, we may have doubts regarding normality. If the number of
observations is not too small, this does not matter too much, and we may act as if
√n(X̄n − µ0)/Sn ∼ N(0, 1). The theoretical justification is the limiting result, as n → ∞,
$$\sup_x \left| P\left( \frac{\sqrt{n}(\bar{X}_n - \mu)}{S_n} \le x \right) - \Phi(x) \right| \to 0,$$

provided that the variables Xi have a finite second moment. Then a “large-sample” or
“asymptotic” level-α test rejects H0 if |√n(X̄n − µ0)/Sn| > zα/2. When the underlying
distribution is exponential, the approximation is satisfactory if n ≥ 100. Thus, one aim of
asymptotic statistics is to derive the asymptotic distributions of many types of statistics.
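As a quick numerical illustration, the following Python sketch simulates the rejection rate of the large-sample test |√n(X̄n − µ0)/Sn| > zα/2 when the data are exponential with mean µ0 = 1; the sample size, seed, and replication count are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

def large_sample_t_level(n=100, alpha=0.05, reps=20000, seed=1):
    """Monte Carlo estimate of the level of the asymptotic t-test
    when the data are Exp(1), so the null mean is mu0 = 1."""
    rng = np.random.default_rng(seed)
    mu0 = 1.0
    z = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(reps):
        x = rng.exponential(scale=1.0, size=n)        # Exp(1) sample, true mean 1
        t_stat = np.sqrt(n) * (x.mean() - mu0) / x.std(ddof=1)
        rejections += abs(t_stat) > z
    return rejections / reps

if __name__ == "__main__":
    print("empirical level (nominal 0.05):", large_sample_t_level())
```

For n around 100 the empirical level typically comes out close to, though not exactly at, the nominal 0.05, in line with the remark above.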

There are similar benefits when constructing confidence intervals. For instance, consider
the maximum likelihood estimator θ̂n of a p-dimensional parameter θ based on a sample of size n from a
density f(x; θ). A major result in asymptotic statistics is that in many situations √n(θ̂n − θ) is
asymptotically normally distributed with zero mean and covariance matrix I_θ^{-1}, where
$$I_\theta = E_\theta\!\left[ \frac{\partial \log f(X;\theta)}{\partial \theta} \left( \frac{\partial \log f(X;\theta)}{\partial \theta} \right)^{T} \right]$$
is the Fisher information matrix. Thus, acting as if √n(θ̂n − θ) ∼ Np(0, I_θ^{-1}), we find that the ellipsoid
$$\left\{ \theta : (\theta - \hat{\theta}_n)^T I_\theta (\theta - \hat{\theta}_n) \le \frac{\chi^2_{p,\alpha}}{n} \right\}$$
is an approximate 1 − α confidence region.

Efficiency of statistical procedures

For a relatively small number of statistical problems there exists an exact, optimal
solution: for example, the Neyman-Pearson lemma for finding UMP tests, the Rao-Blackwell
theory for finding MVUEs, and the Cramer-Rao theorem.

However, an exact optimality theory or procedure is not always available, and then asymptotic
optimality theory may help. For instance, to compare two tests, we might compare approximations
to their power functions. Consider the foregoing hypothesis problem for location.
A well-known nonparametric test statistic is the sign statistic Tn = n^{-1} Σ_{i=1}^n I_{Xi > θ0}, where the
null hypothesis is H0 : θ = θ0 and θ denotes the median of the distribution of X. Comparing
the efficiency of the sign test and the t-test is rather difficult because the exact power functions
of the two tests are intractable. However, by the definitions and methods introduced later, we
can show that the asymptotic relative efficiency of the sign test versus the t-test is equal to
$$4 f^2(0) \int x^2 f(x)\, dx.$$
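For a concrete number, the sketch below evaluates this quantity by a simple Riemann sum; the normal and Laplace densities used here are just illustrative choices, and the integration grid is arbitrary.

```python
import numpy as np
from scipy.stats import norm

def are_sign_vs_t(pdf, lo=-10.0, hi=10.0, m=200001):
    """Asymptotic relative efficiency 4 f(0)^2 * integral of x^2 f(x) dx,
    approximated by a simple Riemann sum over [lo, hi]."""
    x, dx = np.linspace(lo, hi, m, retstep=True)
    second_moment = np.sum(x**2 * pdf(x)) * dx
    return 4.0 * pdf(0.0)**2 * second_moment

print("ARE(sign, t) under N(0,1):", are_sign_vs_t(norm.pdf))          # about 2/pi = 0.637
print("ARE(sign, t) under Laplace:",
      are_sign_vs_t(lambda x: 0.5 * np.exp(-np.abs(x))))              # about 2.0
```

So the sign test loses efficiency for normal data but can beat the t-test for heavier-tailed distributions.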

To compare estimators, we might compare asymptotic variances rather than exact variances.
A major result in this area is that for smooth parametric models maximum likelihood es-
timators are asymptotically optimal. This roughly means the following. First, MLEs are
consistent; second, the rate at which MLEs converge to the true value is the fastest possible,
typically √n; third, their asymptotic variance attains the Cramer-Rao bound. Thus, asymptotics
justify the use of the MLE in certain situations. (Even though it does not in general lead to
the best estimator for finite samples, it is never a disastrous choice and always leads to a
reasonable estimator.)

Contents

• Basic convergence concepts and preliminary theorems (8)

• Transformations of given statistics: The Delta method (4)

• The basic sample statistics: distribution function, moments, quantiles, and order statistics (3)

• Asymptotic theory in parametric inference: MLE, likelihood ratio test, etc (6)

• U -statistic, M -estimates and R-estimates (6)

• Asymptotic relative efficiency (6)

• Asymptotic theory in nonparametric inference: rank and sign tests (6)

• Goodness of fit (3)

• Nonparametric regression and density estimation (4)

• Advanced topic selected: bootstrap and empirical likelihood (4)

Text books
Billingsley, P. (1995). Probability and Measure, 3rd edition, John Wiley, New York.
DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability, Springer.
Serfling, R. (1980). Approximation Theorems of Mathematical Statistics, John Wiley, New
York.
Shao, J. (2003). Mathematical Statistics, 2nd ed. Springer, New York.
Van der Vaart, A. W. (2000). Asymptotic Statistics, Cambridge University Press.

Chapter 1

Basic convergence concepts and preliminary theorems

Throughout this course, there will usually be an underlying probability space (Ω, F, P),
where Ω is a set of points, F is a σ-field of subsets of Ω, and P is a probability distribution
or measure defined on the elements of F. A random variable X(ω) is a transformation of
Ω into the real line R such that the inverse images X−1(B) of Borel sets B are elements of F. A
collection of random variables X1(ω), X2(ω), . . . on a given (Ω, F) will typically be denoted
by X1, X2, . . ..

1.1 Modes of convergence of a sequence of random variables

Definition 1.1.1 (convergence in probability) Let {Xn, X} be random variables defined
on a common probability space. We say Xn converges to X in probability if, for any ε > 0,
P(|Xn − X| > ε) → 0 as n → ∞, or equivalently
$$\lim_{n\to\infty} P(|X_n - X| < \varepsilon) = 1, \quad \text{for every } \varepsilon > 0.$$

This is usually written as Xn →p X. Extension to the vector case: for random p-vectors
X1, X2, . . . and X, we say Xn →p X if ||Xn − X|| →p 0, where ||z|| = (Σ_{i=1}^p z_i^2)^{1/2} denotes
the Euclidean distance (L2-norm) for z ∈ R^p. It is easily seen that Xn →p X iff the
corresponding component-wise convergence holds.

Example 1.1.1 For iid Bernoulli trials with success probability p = 1/2, let Xn denote the
number of times in the first n trials that a success is followed by a failure. Writing Ti = I{ith
trial is a success and (i+1)st trial is a failure}, we have Xn = Σ_{i=1}^{n−1} Ti, and therefore E[Xn] = (n−1)/4
and Var[Xn] = Σ_{i=1}^{n−1} Var[Ti] + 2Σ_{i=1}^{n−2} Cov[Ti, Ti+1] = 3(n−1)/16 − 2(n−2)/16 = (n+1)/16.
It then follows by an application of Chebyshev's inequality [P(|X − µ| ≥ ε) ≤ σ²/ε²] that Xn/n →p 1/4.
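A small simulation makes this convergence visible; the trial counts below are arbitrary.

```python
import numpy as np

def success_then_failure_rate(n, rng):
    """Fraction X_n / n, where X_n counts indices i for which trial i is a
    success and trial i+1 a failure, for iid Bernoulli(1/2) trials."""
    trials = rng.integers(0, 2, size=n)             # 1 = success, 0 = failure
    x_n = np.sum((trials[:-1] == 1) & (trials[1:] == 0))
    return x_n / n

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    print(n, success_then_failure_rate(n, rng))     # settles near 1/4
```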

Definition 1.1.2 (bounded in probability) A sequence of random variables Xn is said
to be bounded in probability if, for any ε > 0, there exists a constant k such that P(|Xn| >
k) ≤ ε for all n.

Any random variable (vector) is bounded in probability. It is convenient to have short
expressions for terms that converge to zero or are bounded in probability. If Xn →p 0, then we write
Xn = op(1), pronounced “small oh-P-one”; the expression Op(1) (“big oh-P-one”) denotes
a sequence that is bounded in probability, and we write Xn = Op(1). These are the so-called
stochastic o(·) and O(·). More generally, for a given sequence of random variables Rn,

Xn = op(Rn) means Xn = Yn Rn with Yn →p 0;

Xn = Op(Rn) means Xn = Yn Rn with Yn = Op(1).

This expresses that the sequence Xn converges in probability to zero or is bounded in prob-
ability “at the rate Rn”. For deterministic sequences Xn and Rn, Op(·) and op(·) reduce to
the usual o(·) and O(·) from calculus. Obviously, Xn = op(Rn) implies Xn = Op(Rn).
An expression we will often use is: for some sequence an, if an Xn →p 0, then we write
Xn = op(an^{-1}); if an Xn = Op(1), then we write Xn = Op(an^{-1}).

Definition 1.1.3 (convergence with probability one) Let {Xn, X} be random variables
defined on a common probability space. We say Xn converges to X with probability 1 (or al-
most surely, strongly, almost everywhere) if
$$P\left( \lim_{n\to\infty} X_n = X \right) = 1.$$
This can be written as P(ω : Xn(ω) → X(ω)) = 1. We denote this mode of convergence as
Xn →wp1 X or Xn →a.s. X. The extension to the random vector case is straightforward.

Almost sure convergence is a stronger mode of convergence than convergence in proba-
bility. In fact, a characterization of convergence wp1 is that
$$\lim_{n\to\infty} P(|X_m - X| < \varepsilon, \ \text{all } m \ge n) = 1, \quad \text{every } \varepsilon > 0. \tag{1.1}$$
It is clear from this equivalent condition that convergence wp1 is stronger than convergence in probability.
Its proof can be found on page 7 of Serfling (1980).

Example 1.1.2 Suppose X1, X2, . . . is an infinite sequence of iid U[0, 1] random variables,
and let X(n) = max{X1, . . . , Xn}. We show that X(n) →wp1 1. Note that
$$P(|X_{(n)} - 1| \le \varepsilon, \ \forall n \ge m) = P(X_{(n)} \ge 1-\varepsilon, \ \forall n \ge m) = P(X_{(m)} \ge 1-\varepsilon) = 1-(1-\varepsilon)^m \to 1$$
as m → ∞.

Definition 1.1.4 (convergence in rth mean) Let {Xn , X} be random variables defined
on a common probability space. For r > 0, we say Xn converges to X in rth mean if

$$\lim_{n\to\infty} E|X_n - X|^r = 0.$$
This is written Xn →rth X. It is easily shown that
$$X_n \xrightarrow{r\text{th}} X \ \Rightarrow\ X_n \xrightarrow{s\text{th}} X, \quad 0 < s < r,$$
by Jensen's inequality (if g(·) is a convex function on R, and X and g(X) are integrable
r.v.'s, then g(E[X]) ≤ E[g(X)]).

Definition 1.1.5 (convergence in distribution) Let {Xn , X} be random variables. Con-
sider their distribution functions FXn (·) and FX (·). We say that Xn converges in distribution
(in law) to X if limn→∞ FXn (t) = FX (t) at every point that is a continuity point of FX .

This is written as Xn →d X or FXn ⇒ FX.

Example 1.1.3 Consider Xn ∼ Uniform{1/n, 2/n, . . . , (n−1)/n, 1}. Then it can be shown easily
that the sequence Xn converges in law to U[0, 1]. Indeed, for any t ∈ [i/n, (i+1)/n), the
difference between FXn(t) = i/n and FX(t) = t can be made arbitrarily small if n is sufficiently large
(|i/n − t| < n^{-1}). The result follows from the definition of →d.

Example 1.1.4 Let {Xn}∞n=1 be a sequence of random variables where Xn ∼ N(0, 1 + n^{-1}).
Taking the limit of the distribution function of Xn as n → ∞ yields limn FXn(x) = Φ(x) for
all x ∈ R. Thus, Xn →d N(0, 1).

According to the assertion below the definition of →p, we know that Xn →p X is equivalent
to convergence in probability of every one of the sequences of components. The analogous statement
for convergence in distribution is false: convergence in distribution of the sequence Xn is
stronger than convergence of every one of the sequences of components Xni. The point is
that the distributions of the components Xni separately do not determine their joint distribution
(they might be independent or dependent in many ways). We speak of joint convergence in
law versus marginal convergence.

Example 1.1.5 If X ∼ U[0, 1] and Xn = X for all n, while Yn = X for n odd and Yn = 1 − X
for n even, then Xn →d X and Yn →d U[0, 1], yet (Xn, Yn) does not converge in law.

Suppose {Xn, X} are integer-valued random variables. It is not hard to show that
$$X_n \xrightarrow{d} X \iff P(X_n = k) \to P(X = k) \ \text{ for every integer } k.$$
This is a useful characterization of convergence in law for integer-valued random variables.

1.2 Fundamental results and theorems on convergence

1.2.1 Relationship

The relationships among the four convergence modes are summarized as follows.

Theorem 1.2.1 Let {Xn, X} be random variables (vectors).

(i) If Xn →wp1 X, then Xn →p X.

(ii) If Xn →rth X for some r > 0, then Xn →p X.

(iii) If Xn →p X, then Xn →d X.

(iv) If, for every ε > 0, Σ_{n=1}^∞ P(|Xn − X| > ε) < ∞, then Xn →wp1 X.

Proof. (i) is an obvious consequence of the equivalent characterization (1.1). (ii) For any ε > 0,
$$E|X_n - X|^r \ge E\left[|X_n - X|^r I(|X_n - X| > \varepsilon)\right] \ge \varepsilon^r P(|X_n - X| > \varepsilon),$$
and thus
$$P(|X_n - X| > \varepsilon) \le \varepsilon^{-r} E|X_n - X|^r \to 0, \quad \text{as } n \to \infty.$$
(iii) This is a direct application of Slutsky's Theorem. (iv) Let ε > 0 be given. We have
$$P(|X_m - X| \ge \varepsilon \ \text{for some } m \ge n) = P\left( \bigcup_{m=n}^{\infty} \{|X_m - X| \ge \varepsilon\} \right) \le \sum_{m=n}^{\infty} P(|X_m - X| \ge \varepsilon).$$
The last term is the tail of a convergent series and hence goes to zero as n → ∞. □

Example 1.2.1 Consider iid N(0, 1) random variables X1, X2, . . ., and let X̄n be the
mean of the first n observations. For an ε > 0, consider Σ_{n=1}^∞ P(|X̄n| > ε). By Markov's
inequality, P(|X̄n| > ε) ≤ E[X̄n⁴]/ε⁴ = 3/(ε⁴ n²). Since Σ_{n=1}^∞ n^{-2} < ∞, it follows from Theorem 1.2.1-(iv)
that X̄n →wp1 0.

1.2.2 Transformation

It turns out that continuous transformations preserve many types of convergence, and this
fact is useful in many applications. We record it next. Its proof can be found on page 24 in
Serfling (1980).

Theorem 1.2.2 (Continuous Mapping Theorem) Let X1, X2, . . . and X be random p-
vectors defined on a probability space, and let g(·) be a vector-valued (including real-valued)
continuous function defined on R^p. If Xn converges to X in probability, almost surely, or in
law, then g(Xn) converges to g(X) in probability, almost surely, or in law, respectively.

Example 1.2.2 (i) If Xn →d N(0, 1), then Xn² →d χ²1; (ii) if (Xn, Yn) →d N2(0, I2), then
$$\max\{X_n, Y_n\} \xrightarrow{d} \max\{X, Y\},$$
which has the CDF [Φ(x)]², where (X, Y) ∼ N2(0, I2).

The most commonly considered functions of vectors converging in some stochastic sense
are linear and quadratic forms; this is summarized in the following result.

Corollary 1.2.1 Suppose that the p-vectors Xn converge to the p-vector X in probability,
almost surely, or in law. Let Aq×p and Bp×p be matrices. Then AXn → AX and Xn^T B Xn →
X^T B X in the given mode of convergence.

Proof. The vector-valued function
$$Ax = \left( \sum_{i=1}^p a_{1i} x_i,\ \ldots,\ \sum_{i=1}^p a_{qi} x_i \right)^{T}$$
and the real-valued function
$$x^T B x = \sum_{i=1}^p \sum_{j=1}^p b_{ij} x_i x_j$$
are continuous functions of x = (x1, . . . , xp)^T. □

Example 1.2.3 (i) If Xn →d Np(µ, Σ), then CXn →d N(Cµ, CΣC^T), where Cq×p is a matrix;
also, (Xn − µ)^T Σ^{-1} (Xn − µ) →d χ²p. (ii) (Sums and products of random variables converging
wp1 or in probability) If Xn →wp1 X and Yn →wp1 Y, then Xn + Yn →wp1 X + Y and Xn Yn →wp1 XY.
The same statements hold with wp1 replaced by convergence in probability.

Remark 1.2.1 The condition in Theorem 1.2.2 that g(·) is continuous can be relaxed
to g(·) being continuous a.s., i.e., P(X ∈ C(g)) = 1, where C(g) = {x :
g is continuous at x} is called the continuity set of g.

Example 1.2.4 (i) If Xn →d X ∼ N(0, 1), then 1/Xn →d Z, where Z has the distribution of
1/X, even though the function g(x) = 1/x is not continuous at 0. This is because P(X =
0) = 0. However, if Xn = 1/n (degenerate distribution) and
$$g(x) = \begin{cases} 1, & x > 0, \\ 0, & x \le 0, \end{cases}$$
then Xn →d 0 but g(Xn) →d 1 ≠ g(0). (ii) If (Xn, Yn) →d N2(0, I2), then Xn/Yn →d Cauchy.

Example 1.2.5 Let {Xn}∞n=1 be a sequence of independent random variables where each Xn has
a Poi(θ) distribution, and let X̄n be the sample mean computed from X1, . . . , Xn. By the WLLN,
X̄n →p θ as n → ∞. If we wish to find a consistent estimator of the standard
deviation of Xn, which is θ^{1/2}, we can consider X̄n^{1/2}. Since the square root
transformation is continuous at θ for θ > 0, the CMT implies that X̄n^{1/2} →p θ^{1/2} as n → ∞.

In Example 1.2.2, the condition that (Xn, Yn) →d N2(0, I2) cannot be relaxed to Xn →d X
and Yn →d Y with X and Y independent; i.e., we need convergence of the joint CDF
of (Xn, Yn). The situation is different when →d is replaced by →p or →wp1, as in Example 1.2.3-(ii).
The following result, which plays an important role in probability and statistics, establishes
the convergence in distribution of Xn + Yn or Xn Yn when no information regarding the joint
CDF of (Xn, Yn) is provided.

Theorem 1.2.3 (Slutsky's Theorem) Let Xn →d X and Yn →p c, where c is a finite con-
stant. Then,

(i) Xn + Yn →d X + c;

(ii) Xn Yn →d cX;

(iii) Xn/Yn →d X/c if c ≠ 0.

Proof. The method of proof of the theorem is demonstrated sufficiently by proving (i).
Choose and fix t such that t − c is a continuity point of FX . Let ε > 0 be such that t − c + ε
and t − c − ε are also continuity points of FX . Then

FXn +Yn (t) = P (Xn + Yn ≤ t)

≤ P (Xn + Yn ≤ t, |Yn − c| < ε) + P (|Yn − c| ≥ ε)

≤ P (Xn ≤ t − c + ε) + P (|Yn − c| ≥ ε)

and, similarly

FXn +Yn (t) ≥ P (Xn ≤ t − c − ε) − P (|Yn − c| ≥ ε).

It follows from the previous two inequalities and the hypotheses of the theorem that
$$F_X(t-c-\varepsilon) \le \liminf_n F_{X_n+Y_n}(t) \le \limsup_n F_{X_n+Y_n}(t) \le F_X(t-c+\varepsilon).$$
Since t − c is a continuity point of FX, and since ε can be taken arbitrarily small, the above
inequalities yield
$$\lim_n F_{X_n+Y_n}(t) = F_X(t-c).$$
The result follows from FX(t − c) = FX+c(t). □

Extension to the vector case is straightforward, where (iii) is valid provided C ≠ 0 is understood
as C being invertible. A straightforward but often used consequence of this theorem is that if Xn →d X
and Xn − Yn →p 0, then Yn →d X. In asymptotic practice, we often first derive a result such as
Yn = Xn + op(1) and then investigate the asymptotic distribution of Xn.

Example 1.2.6 (i) Theorem 1.2.1-(iii). Furthermore, convergence in probability to a con-
stant is equivalent to convergence in law to that constant. “⇒” follows from part (i).
“⇐” can be proved by definition: because the degenerate distribution function of the constant
c is continuous everywhere except at the point c, for any ε > 0,
$$P(|X_n - c| \ge \varepsilon) = P(X_n \ge c + \varepsilon) + P(X_n \le c - \varepsilon) \to 1 - F_X(c+\varepsilon) + F_X(c-\varepsilon) = 0.$$
The result follows from the definition of convergence in probability.

Example 1.2.7 Let {Xn}∞n=1 be a sequence of independent random variables with Xn ∼
Gamma(αn, βn), where αn and βn are sequences of positive real numbers such that αn → α
and βn → β for some positive real numbers α and β. Also, let β̂n be a consistent estimator
of β. We can conclude by Slutsky's Theorem that Xn/β̂n →d Gamma(α, 1).

Example 1.2.8 (t-statistic) Let X1, X2, . . . be iid random variables with EX1 = 0 and
EX1² < ∞. Then the t-statistic √n X̄n/Sn, where Sn² = (n − 1)^{-1} Σ_{i=1}^n (Xi − X̄n)² is the
sample variance, is asymptotically standard normal. To see this, first note that by two
applications of the WLLN and the CMT,
$$S_n^2 = \frac{n}{n-1}\left( \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}_n^2 \right) \xrightarrow{p} 1\cdot\left(EX_1^2 - (EX_1)^2\right) = \mathrm{Var}(X_1).$$
Again by the CMT, Sn →p √Var(X1). By the CLT, √n X̄n →d N(0, Var(X1)). Finally, Slutsky's
Theorem gives that the sequence of t-statistics converges in law to N(0, Var(X1))/√Var(X1) =
N(0, 1).

1.2.3 WLLN and SLLN

We next state some theorems known as the laws of large numbers, which concern the limiting
behavior of sums of independent random variables. The weak law of large numbers (WLLN)
refers to convergence in probability, whereas the strong law of large numbers (SLLN) refers to
a.s. convergence. Our first result gives the WLLN and SLLN for a sequence of iid random
variables.

Theorem 1.2.4 Let X1, X2, . . . be iid random variables having a CDF F.

(i) (The WLLN) The existence of constants an for which
$$\frac{1}{n}\sum_{i=1}^n X_i - a_n \xrightarrow{p} 0$$
holds iff limx→∞ x[1 − F(x) + F(−x)] = 0, in which case we may choose an = ∫_{-n}^{n} x dF(x).

(ii) (The SLLN) The existence of a constant c for which
$$\frac{1}{n}\sum_{i=1}^n X_i \xrightarrow{wp1} c$$
holds iff E[X1] is finite and equals c.

Example 1.2.9 Suppose {Xi}∞i=1 is a sequence of iid random variables where Xi ∼
t(2). The variance of Xi does not exist, but Theorem 1.2.4 still applies to this case, and we
can therefore conclude that X̄n →p 0 as n → ∞.

The next result is for sequences of independent but not necessarily identically distributed
random variables.

Theorem 1.2.5 Let X1 , X2 , . . ., be random variables with finite expectations.

(i) (The WLLN) Let X1, X2, . . . be uncorrelated with means µ1, µ2, . . . and variances σ1², σ2², . . ..
If limn→∞ n^{-2} Σ_{i=1}^n σi² = 0, then
$$\frac{1}{n}\sum_{i=1}^n X_i - \frac{1}{n}\sum_{i=1}^n \mu_i \xrightarrow{p} 0.$$

(ii) (The SLLN) Let X1, X2, . . . be independent with means µ1, µ2, . . . and variances σ1², σ2², . . ..
If Σ_{i=1}^∞ σi²/ci² < ∞, where cn is ultimately monotone and cn → ∞, then
$$c_n^{-1}\sum_{i=1}^n (X_i - \mu_i) \xrightarrow{wp1} 0.$$

(iii) (The SLLN with common mean) Let X1, X2, . . . be independent with common mean
µ and variances σ1², σ2², . . .. If Σ_{i=1}^∞ σi^{-2} = ∞, then
$$\sum_{i=1}^n \frac{X_i}{\sigma_i^2} \Big/ \sum_{i=1}^n \sigma_i^{-2} \xrightarrow{wp1} \mu.$$

A special case of Theorem 1.2.5-(ii) is obtained by setting ci = i, in which case
$$\frac{1}{n}\sum_{i=1}^n X_i - \frac{1}{n}\sum_{i=1}^n \mu_i \xrightarrow{wp1} 0.$$
The proofs of Theorems 1.2.4 and 1.2.5 can be found in Billingsley (1995).

Example 1.2.10 Suppose the Xi are independent with Xi ∼ (µ, σi²). Then, by simple calculus, the BLUE (best linear
unbiased estimate) of µ is Σ_{i=1}^n σi^{-2} Xi / Σ_{i=1}^n σi^{-2}. Suppose now that the σi² do not grow at
a rate faster than i; i.e., for some constant K, σi² ≤ iK. Then Σ_{i=1}^n σi^{-2} clearly diverges as
n → ∞, and so by Theorem 1.2.5-(iii) the BLUE of µ is strongly consistent.

Example 1.2.11 Suppose (Xi, Yi), i = 1, . . . , n, are iid bivariate samples from some distri-
bution with E(X1) = µ1, E(Y1) = µ2, Var(X1) = σ1², Var(Y1) = σ2², and corr(X1, Y1) = ρ.
Let rn denote the sample correlation coefficient. The almost sure convergence of rn to ρ
follows very easily. We write
$$r_n = \frac{\frac{1}{n}\sum X_i Y_i - \bar{X}\bar{Y}}{\sqrt{\left( \frac{\sum X_i^2}{n} - \bar{X}^2 \right)\left( \frac{\sum Y_i^2}{n} - \bar{Y}^2 \right)}},$$
and then from the SLLN for iid random variables (Theorem 1.2.4) and the continuous mapping
theorem (Theorem 1.2.2; Example 1.2.3-(ii)),
$$r_n \xrightarrow{wp1} \frac{E(X_1 Y_1) - \mu_1\mu_2}{\sigma_1\sigma_2} = \rho.$$

1.2.4 Characterization of convergence in law

Next we provide a collection of basic facts about convergence in distribution. The following
theorems provide methodology for establishing convergence in distribution.

Theorem 1.2.6 Let X, X1, X2, . . . be random p-vectors.

(i) (The Portmanteau Theorem) Xn →d X is equivalent to the following condition:
E[g(Xn)] → E[g(X)] for every bounded continuous function g.

(ii) (Levy-Cramer continuity theorem) Let ΦX, ΦX1, ΦX2, . . . be the characteristic func-
tions of X, X1, X2, . . ., respectively. Then Xn →d X iff limn→∞ ΦXn(t) = ΦX(t) for all t ∈ R^p.

(iii) (Cramer-Wold device) Xn →d X iff c^T Xn →d c^T X for every c ∈ R^p.

Proof. (i) See Serfling (1980), page 16; (ii) Shao (2003), page 57; (iii) assume c^T Xn →d c^T X
for every c. Then by Theorem 1.2.6-(ii),
$$\lim_{n\to\infty} \Phi_{X_n}(tc_1, \ldots, tc_p) = \Phi_X(tc_1, \ldots, tc_p), \quad \text{for all } t.$$
With t = 1, and since c is arbitrary, it follows by Theorem 1.2.6-(ii) again that Xn →d X. The
converse can be proved by a similar argument. [Note that ΦcTXn(t) = ΦXn(tc) and ΦcTX(t) = ΦX(tc)
for any t ∈ R and any c ∈ R^p.] □

A straightforward application of Theorem 1.2.6 is that if Xn →d X and Yn →d c for a con-
stant vector c, then (Xn, Yn) →d (X, c).

Example 1.2.12 (Example 1.1.3 revisited) Consider now the function g(x) = x^{10}, 0 ≤
x ≤ 1. Note that g is continuous and bounded. Therefore, by the Portmanteau theorem,
$$E(g(X_n)) = \sum_{i=1}^n \left( \frac{i}{n} \right)^{10} \frac{1}{n} \to E(g(X)) = \int_0^1 x^{10}\, dx = \frac{1}{11}.$$

Example 1.2.13 For n ≥ 1, 0 ≤ p ≤ 1, and a given continuous function g : [0, 1] → R,
define the sequence
$$B_n(p) = \sum_{k=0}^n g\!\left(\frac{k}{n}\right) C_n^k\, p^k (1-p)^{n-k},$$
the so-called Bernstein polynomial. Note that Bn(p) = E[g(X/n)] with X ∼ Bin(n, p). As
n → ∞, X/n →p p (WLLN), and it follows that X/n →d δp, the point mass at p. Since g is
continuous and hence bounded (compact interval), it follows from the Portmanteau theorem
that Bn(p) → g(p).
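This argument is constructive enough to check numerically; the sketch below evaluates Bn(p) directly from the binomial pmf. The choice g(x) = |x − 1/2| and the values of n and p are arbitrary.

```python
import numpy as np
from scipy.stats import binom

def bernstein(g, n, p):
    """Bernstein polynomial B_n(p) = E[g(X/n)] with X ~ Bin(n, p)."""
    k = np.arange(n + 1)
    return np.sum(g(k / n) * binom.pmf(k, n, p))

g = lambda x: np.abs(x - 0.5)          # continuous but not smooth at 1/2
p = 0.3
for n in (10, 100, 1000, 10000):
    print(n, bernstein(g, n, p))       # converges to g(0.3) = 0.2
```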

Example 1.2.14 (i) Let X1, . . . , Xn be independent random variables having a common
CDF and Tn = X1 + · · · + Xn, n = 1, 2, . . .. Suppose that E|X1| < ∞. It follows from
the properties of the CHF and a Taylor expansion [∂ΦX(t)/∂t |t=0 = √−1 EX, ∂²ΦX(t)/∂t² |t=0 = −EX²]
that the CHF of X1 satisfies
$$\Phi_{X_1}(t) = \Phi_{X_1}(0) + \sqrt{-1}\,\mu t + o(|t|)$$
as |t| → 0, where µ = EX1. Then it follows that the CHF of Tn/n is
$$\Phi_{T_n/n}(t) = \left[ \Phi_{X_1}\!\left(\frac{t}{n}\right) \right]^n = \left( 1 + \frac{\sqrt{-1}\,\mu t}{n} + o(|t| n^{-1}) \right)^n$$
for any t ∈ R as n → ∞. Since (1 + cn/n)^n → exp{c} for any complex sequence cn satisfying
cn → c, we obtain that ΦTn/n(t) → exp{√−1 µt}, which is the CHF of the distribution
degenerate at µ. By Theorem 1.2.6-(ii), Tn/n →d µ. From Example 1.2.6-(i), this also shows that
Tn/n →p µ (an informal proof of the WLLN). (ii) Similarly, µ = 0 and σ² = Var(X1) < ∞ imply
[second-order Taylor expansion]
$$\Phi_{T_n/\sqrt{n}}(t) = \left( 1 - \frac{\sigma^2 t^2}{2n} + o(t^2 n^{-1}) \right)^n$$
for any t ∈ R as n → ∞, which implies that ΦTn/√n(t) → exp{−σ²t²/2}, the CHF of
N(0, σ²). Hence, Tn/√n →d N(0, σ²). (iii) Suppose now that X1, . . . , Xn are random p-vectors
and µ = EX1 and Σ = Cov(X1) are finite. For any fixed c ∈ R^p, it follows from the previous
discussion that (c^T Tn − n c^T µ)/√n →d N(0, c^T Σ c). From Theorem 1.2.6-(iii), we conclude
that (Tn − nµ)/√n →d Np(0, Σ).

The following two simple results are frequently useful in calculations.

Theorem 1.2.7 (i) (Prohorov's Theorem) If Xn →d X for some X, then Xn = Op(1).

(ii) (Polya's Theorem) If FXn ⇒ FX and FX is continuous, then as n → ∞,
$$\sup_{-\infty < x < \infty} |F_{X_n}(x) - F_X(x)| \to 0.$$

Proof. (i) For any given ε > 0, fix a constant M such that P(|X| ≥ M) < ε. By the
definition of convergence in law, P(|Xn| ≥ M) exceeds P(|X| ≥ M) by an arbitrarily small amount for
sufficiently large n. Thus, there exists N such that P(|Xn| ≥ M) < 2ε for all n ≥ N. The
result follows from the definition of Op(1). (ii) First, fix k ∈ N. By the continuity of FX
there exist points −∞ = x0 < x1 < · · · < xk = ∞ with FX(xi) = i/k. By monotonicity, we
have, for xi−1 ≤ x ≤ xi,
$$F_{X_n}(x) - F_X(x) \le F_{X_n}(x_i) - F_X(x_{i-1}) = F_{X_n}(x_i) - F_X(x_i) + 1/k,$$
$$F_{X_n}(x) - F_X(x) \ge F_{X_n}(x_{i-1}) - F_X(x_i) = F_{X_n}(x_{i-1}) - F_X(x_{i-1}) - 1/k.$$
Thus, |FXn(x) − FX(x)| is bounded above by supi |FXn(xi) − FX(xi)| + 1/k, for every x. The
latter finite supremum converges to zero for each fixed k, because each term converges to zero by the
hypothesis. Because k is arbitrary, the result follows. □
The following result can be used to check whether Xn →d X when X has a PDF f and
each Xn has a PDF fn.

Theorem 1.2.8 (Scheffe's Theorem) Let fn be a sequence of densities of absolutely con-
tinuous distributions, with limn fn(x) = f(x) for each x ∈ R^p. If f is a density function, then
limn ∫ |fn(x) − f(x)| dx = 0.

Proof. Put gn(x) = [f(x) − fn(x)] I{f(x) ≥ fn(x)}. By noting that ∫ [fn(x) − f(x)] dx = 0,
$$\int |f_n(x) - f(x)|\, dx = 2 \int g_n(x)\, dx.$$
Since 0 ≤ gn(x) ≤ f(x) for all x, by dominated convergence, limn ∫ gn(x) dx = 0.
[Dominated convergence theorem: if limn→∞ fn = f and there exists an integrable function
g such that |fn| ≤ g, then limn ∫ fn(x) dx = ∫ limn fn(x) dx.] □

As an example, consider the PDF fn of the t-distribution tn, n = 1, 2, . . .. One can show
(exercise) that fn → f, where f is the standard normal PDF.
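As a numerical check of this exercise, one can approximate the L1 distance ∫ |fn − f| and watch it shrink; by Scheffe's theorem this also controls the convergence in distribution. The grid truncation below is an arbitrary choice.

```python
import numpy as np
from scipy.stats import t, norm

def l1_distance_t_vs_normal(df, lo=-50.0, hi=50.0, m=400001):
    """Riemann-sum approximation of the L1 distance between the t(df)
    density and the standard normal density."""
    x, dx = np.linspace(lo, hi, m, retstep=True)
    return np.sum(np.abs(t.pdf(x, df) - norm.pdf(x))) * dx

for df in (1, 2, 5, 30, 100):
    print(df, l1_distance_t_vs_normal(df))   # decreases toward 0 as df grows
```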

The following result provides a convergence-of-moments criterion for convergence in law.

Theorem 1.2.9 (Frechet and Shohat Theorem) Let the distribution functions Fn possess
finite moments αnk = ∫ t^k dFn(t) for k = 1, 2, . . . and n = 1, 2, . . .. Assume that the limits
αk = limn αnk exist (finite) for each k. Then,

(i) the limits αk are the moments of some distribution function F;

(ii) if the F given by (i) is unique, then Fn ⇒ F.

[A sufficient condition: the moment sequence {αk} determines the distribution F uniquely if
the Carleman condition Σ_{i=1}^∞ α_{2i}^{-1/(2i)} = ∞ holds.]

1.2.5 Results on op and Op

There are many rules of calculus with o and O symbols, which we will apply without com-
ment. For instance,

op (1) + op (1) = op (1), op (1) + Op (1) = Op (1), Op (1)op (1) = op (1)

(1 + op (1))−1 = Op (1), op (Rn ) = Rn op (1), Op (Rn ) = Rn Op (1), op (Op (1)) = op (1).

Two more complicated rules are given by the following lemma.

Lemma 1.2.1 Let g be a function defined on R^p such that g(0) = 0. Let Xn be a sequence
of random vectors with values in R^p that converges in probability to zero. Then, for every
r > 0,

(i) if g(t) = o(||t||r ) as t → 0, then g(Xn ) = op (||Xn ||r );

(ii) if g(t) = O(||t||r ) as t → 0, then g(Xn ) = Op (||Xn ||r ).

Proof. Define f(t) = g(t)/||t||^r for t ≠ 0 and f(0) = 0. Then g(Xn) = f(Xn)||Xn||^r.
(i) Because the function f is continuous at zero by assumption, f(Xn) →p f(0) = 0 by
Theorem 1.2.2.

(ii) By assumption there exist M and δ > 0 such that |f(t)| ≤ M whenever ||t|| ≤ δ.
Thus
$$P(|f(X_n)| > M) \le P(\|X_n\| > \delta) \to 0,$$
and the sequence f(Xn) is bounded in probability. □

1.3 The central limit theorem

The most fundamental result on convergence in law is the central limit theorem (CLT) for
sums of random variables. We first state the case of chief importance: iid summands.

Definition 1.3.1 A sequence of random variables Xn is asymptotically normal with mean µn and
variance σn² if (Xn − µn)/σn →d N(0, 1), written as: Xn is AN(µn, σn²).

1.3.1 The CLT for the iid case

Theorem 1.3.1 (Lindeberg-Levy) Let Xi be iid with mean µ and finite variance σ². Then
$$\frac{\sqrt{n}(\bar{X} - \mu)}{\sigma} \xrightarrow{d} N(0, 1).$$
By Slutsky's Theorem, we can equivalently write √n(X̄ − µ) →d N(0, σ²); also, X̄ is AN(µ, σ²/n). See
Billingsley (1995) for a proof.

Example 1.3.1 (Confidence intervals) The theorem can be used to approximate P(X̄ ≤
µ + kσ/√n) by Φ(k). This is very useful because the sampling distribution of X̄ is not available
except in some special cases. Then, setting k = Φ^{-1}(1 − α) = zα, the interval [X̄n − zα σ/√n, X̄n +
zα σ/√n] is a confidence interval for µ of asymptotic level 1 − 2α. More precisely, we have
that the probability that µ is contained in this interval converges to 1 − 2α (how accurate?).

Example 1.3.2 (Sample variance) Suppose X1, . . . , Xn are iid with mean µ, variance σ²,
and E(X1⁴) < ∞. Consider the asymptotic distribution of Sn² = (n−1)^{-1} Σ_{i=1}^n (Xi − X̄n)². Write
$$\sqrt{n}(S_n^2 - \sigma^2) = \sqrt{n}\left( \frac{1}{n-1}\sum_{i=1}^n (X_i - \mu)^2 - \sigma^2 \right) - \sqrt{n}\,\frac{n}{n-1}(\bar{X}_n - \mu)^2.$$
The second term converges to zero in probability and the first term is asymptotically normal
by the CLT. The whole expression is asymptotically normal by Slutsky's Theorem; i.e.,
$$\sqrt{n}(S_n^2 - \sigma^2) \xrightarrow{d} N(0, \mu_4 - \sigma^4),$$
where µ4 denotes the centered fourth moment of X1, and µ4 − σ⁴ comes from
computing the variance of (X1 − µ)².
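The limit N(0, µ4 − σ⁴) is easy to check by simulation; in the sketch below the exponential example is an arbitrary choice (for Exp(1), σ² = 1 and µ4 = 9, so µ4 − σ⁴ = 8).

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 2000, 5000
vals = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0, size=n)        # Exp(1): sigma^2 = 1, mu_4 = 9
    vals[r] = np.sqrt(n) * (x.var(ddof=1) - 1.0)  # sqrt(n) (S_n^2 - sigma^2)

print("simulated variance:", vals.var())          # should be near mu_4 - sigma^4
print("theoretical mu_4 - sigma^4:", 9.0 - 1.0)
```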

Example 1.3.3 (Level of the chi-square test) Normal theory prescribes rejecting the
null hypothesis H0 : σ² ≤ 1 for values of nSn² exceeding the upper α point χ²_{n−1,α} of the χ²_{n−1}
distribution. If the observations are sampled from a normal distribution, the test has exactly
level α. However, this is not even approximately the case if the underlying distribution is not
normal. The CLT and Example 1.3.2 yield the following two statements:
$$\frac{\chi^2_{n-1} - (n-1)}{\sqrt{2(n-1)}} \xrightarrow{d} N(0,1), \qquad \sqrt{n}\left(\frac{S_n^2}{\sigma^2} - 1\right) \xrightarrow{d} N(0, \kappa + 2),$$
where κ = µ4/σ⁴ − 3 is the kurtosis of the underlying distribution. The first statement
implies that (χ²_{n−1,α} − (n−1))/√(2(n−1)) converges to the upper α point zα of N(0, 1).
Thus, the level of the chi-square test satisfies
$$P_{H_0}(nS_n^2 > \chi^2_{n-1,\alpha}) = P\left(\sqrt{n}\left(\frac{S_n^2}{\sigma^2} - 1\right) > \frac{\chi^2_{n-1,\alpha} - n}{\sqrt{n}}\right) \to 1 - \Phi\left(\frac{z_\alpha\sqrt{2}}{\sqrt{\kappa + 2}}\right).$$
So the asymptotic level reduces to 1 − Φ(zα) = α iff the kurtosis of the underlying distribution
is 0. If the kurtosis goes to infinity, then the asymptotic level approaches 1 − Φ(0) = 1/2.
We conclude that the level of the chi-square test is nonrobust against departures from normality
that affect the value of the kurtosis. If, instead, we were to use a normal approximation to
the distribution of √n(Sn²/σ² − 1), the problem would not arise, provided that the asymptotic
variance κ + 2 is estimated accurately.
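The nonrobustness is easy to see numerically. The sketch below estimates the rejection probability for normal data and for a heavier-tailed Laplace distribution with the same variance, and prints the asymptotic value for comparison; n, α, and the number of replications are arbitrary choices, and the simulated levels at finite n only roughly match the limit.

```python
import numpy as np
from scipy.stats import chi2, norm

def chisq_test_level(sampler, n=50, alpha=0.05, reps=20000, seed=3):
    """Monte Carlo level of the chi-square variance test of H0: sigma^2 <= 1.
    The statistic is the sum of squared deviations, which is exactly
    chi-square(n-1) under normal data with sigma^2 = 1."""
    rng = np.random.default_rng(seed)
    cutoff = chi2.ppf(1 - alpha, df=n - 1)
    rej = 0
    for _ in range(reps):
        x = sampler(rng, n)
        rej += np.sum((x - x.mean()) ** 2) > cutoff
    return rej / reps

normal = lambda rng, n: rng.normal(0.0, 1.0, n)
laplace = lambda rng, n: rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)  # variance 1, kurtosis kappa = 3

z, kappa = norm.ppf(0.95), 3.0
print("normal data:  ", chisq_test_level(normal))     # close to the nominal 0.05
print("Laplace data: ", chisq_test_level(laplace))    # noticeably above 0.05
print("asymptotic level for kappa = 3:",
      1 - norm.cdf(z * np.sqrt(2.0 / (kappa + 2.0))))
```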

Theorem 1.3.2 (Multivariate CLT for the iid case) Let Xi be iid random p-vectors with
mean µ and covariance matrix Σ. Then
$$\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N_p(0, \Sigma).$$
Proof. By the Cramer-Wold device, this can be proved by finding the limit distribution of
the sequence of real variables
$$c^T\left( \frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i - \mu) \right) = \frac{1}{\sqrt{n}}\sum_{i=1}^n (c^T X_i - c^T \mu).$$
Because the random variables c^T Xi − c^T µ are iid with zero mean and variance c^T Σ c, this
sequence is AN(0, c^T Σ c) by Theorem 1.3.1. This is exactly the distribution of c^T X if X
possesses the Np(0, Σ) distribution. □

Example 1.3.4 Suppose that X1, . . . , Xn is a random sample from the Poisson distribution
with mean θ. Let Zn be the proportion of zeros observed, i.e., Zn = n^{-1} Σ_{i=1}^n I{Xi=0}. Let
us find the joint asymptotic distribution of (X̄n, Zn). Note that E(X1) = θ, E I{X1=0} = e^{−θ},
Var(X1) = θ, Var(I{X1=0}) = e^{−θ}(1 − e^{−θ}), and E X1 I{X1=0} = 0, so Cov(X1, I{X1=0}) =
−θe^{−θ}. Hence, √n[(X̄n, Zn) − (θ, e^{−θ})] →d N2(0, Σ), where
$$\Sigma = \begin{pmatrix} \theta & -\theta e^{-\theta} \\ -\theta e^{-\theta} & e^{-\theta}(1 - e^{-\theta}) \end{pmatrix}.$$

It is not as widely known that existence of a variance is not necessary for asymptotic
normality of partial sums of iid random variables. A CLT without a finite variance can
sometimes be useful. We present the general result below and then give an illustrative
example. Feller (1966) contains detailed information on the availability of CLTs without the
existence of a variance, along with proofs. First, we need a definition.

Definition 1.3.2 A function g : R → R is called slowly varying at ∞ if, for every t > 0,
limx→∞ g(tx)/g(x) = 1.

Examples of slowly varying functions are log x, x/(1 + x), and indeed any function with a
finite, strictly positive limit as x → ∞. But, for example, x and e^{−x} are not slowly varying.

Theorem 1.3.3 Let X1, X2, . . . be iid from a CDF F on R. Let v(x) = ∫_{−x}^{x} y² dF(y). Then
there exist constants {an}, {bn} such that
$$\frac{\sum_{i=1}^n X_i - a_n}{b_n} \xrightarrow{d} N(0, 1)$$
if and only if v(x) is slowly varying at ∞.

If F has a finite second moment, then automatically v(x) is slowly varying at ∞. We
present an example below where asymptotic normality of the sample partial sums still holds,
although the summands do not have a finite variance.

Example 1.3.5 Suppose X1, X2, . . . are iid from a t-distribution with 2 degrees of freedom
(t(2)), which has a finite mean but not a finite variance. The density is given by f(y) =
c/(2 + y²)^{3/2} for some positive c. Hence, by a direct integration, for some other constant k,
$$v(x) = k\left[ \operatorname{arcsinh}\!\left(\frac{x}{\sqrt{2}}\right) - \frac{x}{\sqrt{2+x^2}} \right].$$
Therefore, using the fact that arcsinh(x) = log(2x) + O(x^{-2}) as x → ∞, we get, for
any t > 0, v(tx)/v(x) → 1 after some algebra. It follows that for iid observations from a t(2)
distribution, with suitable centering and normalizing, the partial sums Σ_{i=1}^n Xi converge to
a normal distribution, although the Xi's do not have a finite variance. The centering can
be taken to be zero for the centered t-distribution; it can be shown that the normalizing
required is bn = √(n log n) (why?).
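A simulation illustrates the claim; the sample sizes and replication count below are arbitrary, and because the convergence is only logarithmic the agreement with N(0, 1) is rough.

```python
import numpy as np

rng = np.random.default_rng(11)
reps = 5000
for n in (1000, 50000):
    z = np.empty(reps)
    for r in range(reps):
        s = rng.standard_t(df=2, size=n).sum()   # t(2): finite mean 0, infinite variance
        z[r] = s / np.sqrt(n * np.log(n))        # normalizing b_n = sqrt(n log n)
    print(n, "var of normalized sums:", round(z.var(), 3),
          " P(|Z| <= 1.96):", round(float(np.mean(np.abs(z) <= 1.96)), 3))
```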

1.3.2 The CLT for the independent, not necessarily iid case

Theorem 1.3.4 (Lindeberg-Feller) Suppose Xn is a sequence of independent variables
with means µn and variances σn² < ∞. Let sn² = Σ_{i=1}^n σi². If for any ε > 0
$$\frac{1}{s_n^2}\sum_{j=1}^n \int_{|x-\mu_j| > \varepsilon s_n} (x - \mu_j)^2\, dF_j(x) \to 0, \tag{1.2}$$
where Fj is the CDF of Xj, then
$$\frac{\sum_{i=1}^n (X_i - \mu_i)}{s_n} \xrightarrow{d} N(0, 1).$$
A proof can be found on page 67 of Shao (2003). Condition (1.2) is called the Lindeberg-Feller
condition.

Example 1.3.6 Let X1, X2, . . . be independent variables such that Xj has the uniform
distribution on [−j, j], j = 1, 2, . . .. Let us verify that the conditions of Theorem 1.3.4 are satisfied.
Note that EXj = 0 and σj² = (2j)^{-1} ∫_{−j}^{j} x² dx = j²/3 for all j. Hence,
$$s_n^2 = \sum_{j=1}^n \sigma_j^2 = \frac{1}{3}\sum_{j=1}^n j^2 = \frac{n(n+1)(2n+1)}{18}.$$
For any ε > 0, n < εsn for sufficiently large n, since limn n/sn = 0. Because |Xj| ≤ j ≤ n,
when n is sufficiently large,
$$E\!\left(X_j^2 I_{\{|X_j| > \varepsilon s_n\}}\right) = 0 \quad \text{for every } j \le n.$$
Consequently, limn→∞ Σ_{j=1}^n E(Xj² I{|Xj|>εsn}) < ∞, and since sn → ∞, Lindeberg's con-
dition holds.

The Lindeberg-Feller theorem is a landmark theorem in probability and statistics. Gen-
erally, it is hard to verify the Lindeberg-Feller condition. A simpler theorem is the following.

Theorem 1.3.5 (Liapounov) Suppose Xn is a sequence of independent variables with
means µn and variances σn² < ∞. Let sn² = Σ_{i=1}^n σi². If for some δ > 0
$$\frac{1}{s_n^{2+\delta}}\sum_{j=1}^n E|X_j - \mu_j|^{2+\delta} \to 0 \tag{1.3}$$
as n → ∞, then
$$\frac{\sum_{i=1}^n (X_i - \mu_i)}{s_n} \xrightarrow{d} N(0, 1).$$
A proof is given in Sen and Singer (1993). For instance, if sn → ∞, supj≥1 E|Xj − µj|^{2+δ} < ∞,
and sn²/n is bounded away from zero, then the condition of Liapounov's theorem is satisfied. In practice,
usually one tries to work with δ = 1 or 2 for algebraic convenience. It can be easily checked
that if Xi is uniformly bounded and sn → ∞, the condition is immediately satisfied with
δ = 1.

Example 1.3.7 Let X1, X2, . . . be independent random variables, where Xi has the
binomial distribution BIN(pi, 1), i = 1, 2, . . .. For each i, EXi = pi and E|Xi − EXi|³ =
(1 − pi)³pi + pi³(1 − pi) ≤ 2pi(1 − pi). Hence,
$$\sum_{i=1}^n E|X_i - EX_i|^3 \le 2\sum_{i=1}^n p_i(1-p_i) = 2\sum_{i=1}^n E|X_i - EX_i|^2 = 2 s_n^2.$$
Then Liapounov's condition (1.3) holds with δ = 1 if sn → ∞. For example, if pi = 1/i, or if
M1 ≤ pi ≤ M2 for two constants M1, M2 ∈ (0, 1), then sn → ∞ holds. Accordingly, by
Liapounov's theorem, Σ_{i=1}^n (Xi − pi)/sn →d N(0, 1).

A consequence especially useful in regression is the following theorem, which is also
proved in Sen and Singer (1993).

Theorem 1.3.6 (Hajek-Sidak) Suppose X1, X2, . . . are iid random variables with mean µ
and variance σ² < ∞. Let cn = (cn1, cn2, . . . , cnn) be a vector of constants such that
$$\max_{1\le i\le n} \frac{c_{ni}^2}{\sum_{j=1}^n c_{nj}^2} \to 0 \tag{1.4}$$
as n → ∞. Then
$$\frac{\sum_{i=1}^n c_{ni}(X_i - \mu)}{\sigma\sqrt{\sum_{j=1}^n c_{nj}^2}} \xrightarrow{d} N(0, 1).$$
Condition (1.4) ensures that no coefficient dominates the vector cn; it is
referred to as the Hajek-Sidak condition in the literature. For example, if cn = (1, 0, . . . , 0), then
the condition fails and so does the conclusion of the theorem. The Hajek-Sidak theorem has many
applications, including in the regression problem. Here is an important example.

Example 1.3.8 (Simple linear regression) Consider the simple linear regression model
yi = β0 + β1 xi + εi, where the εi's are iid with mean 0 and variance σ² but are not necessarily
normally distributed. The least squares estimate of β1 based on n observations is
$$\hat{\beta}_1 = \frac{\sum_{i=1}^n (y_i - \bar{y}_n)(x_i - \bar{x}_n)}{\sum_{i=1}^n (x_i - \bar{x}_n)^2} = \beta_1 + \frac{\sum_{i=1}^n \varepsilon_i (x_i - \bar{x}_n)}{\sum_{i=1}^n (x_i - \bar{x}_n)^2}.$$
So β̂1 = β1 + Σ_{i=1}^n εi cni / Σ_{j=1}^n cnj², where cni = xi − x̄n. Hence, by the Hajek-Sidak
theorem,
$$\sqrt{\sum_{j=1}^n c_{nj}^2}\; \frac{\hat{\beta}_1 - \beta_1}{\sigma} = \frac{\sum_{i=1}^n \varepsilon_i c_{ni}}{\sigma \sqrt{\sum_{j=1}^n c_{nj}^2}} \xrightarrow{d} N(0, 1),$$
provided
$$\frac{\max_{1\le i\le n} (x_i - \bar{x}_n)^2}{\sum_{j=1}^n (x_j - \bar{x}_n)^2} \to 0$$
as n → ∞. For most reasonable designs, this condition is satisfied. Thus, the asymptotic
normality of the LSE (least squares estimate) is established under some conditions on the
design variables, an important result.
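The following sketch simulates the standardized LSE of β1 with non-normal (centered exponential) errors and a uniform design; the particular design, error law, and simulation sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 5000
beta0, beta1, sigma = 1.0, 2.0, 1.0
x = rng.uniform(0.0, 10.0, size=n)               # a fixed, reasonable design
c = x - x.mean()

stats = np.empty(reps)
for r in range(reps):
    eps = rng.exponential(1.0, size=n) - 1.0     # iid errors: mean 0, variance 1, skewed
    y = beta0 + beta1 * x + eps
    b1 = np.sum((y - y.mean()) * c) / np.sum(c ** 2)
    stats[r] = np.sqrt(np.sum(c ** 2)) * (b1 - beta1) / sigma

print("mean, var of standardized LSE:", stats.mean(), stats.var())   # near 0 and 1
print("P(stat <= 1.645):", np.mean(stats <= 1.645))                  # near 0.95
```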

Theorem 1.3.7 (Lindeberg-Feller, multivariate) Suppose Xi is a sequence of indepen-
dent random vectors with means µi, covariances Σi, and distribution functions Fi. Suppose that
n^{-1} Σ_{i=1}^n Σi → Σ as n → ∞, and that for any ε > 0
$$\frac{1}{n}\sum_{j=1}^n \int_{\|x-\mu_j\| > \varepsilon\sqrt{n}} \|x - \mu_j\|^2\, dF_j(x) \to 0.$$
Then
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i - \mu_i) \xrightarrow{d} N(0, \Sigma).$$

Example 1.3.9 (Multiple regression) In the linear regression problem, we observe a
vector y = Xβ + ε for a fixed or random matrix X of full rank and an error vector ε
with iid components with mean zero and variance σ². The least squares estimator of β is
β̂ = (X^T X)^{-1} X^T y. This estimator is unbiased and has covariance matrix σ²(X^T X)^{-1}. If
the error vector ε is normally distributed, then β̂ is exactly normally distributed. Under
reasonable conditions on the design matrix, β̂ is asymptotically normally distributed for a
large range of error distributions. Here we fix p and let n tend to infinity. This follows from
the representation
$$(X^T X)^{1/2}(\hat{\beta} - \beta) = (X^T X)^{-1/2} X^T \varepsilon = \sum_{i=1}^n a_{ni}\varepsilon_i,$$
where an1, . . . , ann are the columns of the (p × n) matrix (X^T X)^{-1/2} X^T =: A. This sequence
is asymptotically normal if the vectors an1 ε1, . . . , ann εn satisfy the Lindeberg conditions.
The norming matrix (X^T X)^{1/2} has been chosen to ensure that the vectors in the display
have covariance matrix σ² Ip for every n. The remaining condition is
$$\sum_{i=1}^n \|a_{ni}\|^2\, E\varepsilon_i^2 I_{\{\|a_{ni}\|\,|\varepsilon_i| > \epsilon\}} \to 0.$$
This can be simplified to other conditions in several ways. Because Σ ||ani||² = tr(AA^T) = p,
it suffices that maxi Eεi² I{||ani|| |εi| > ε} → 0, which is also equivalent to maxi ||ani|| → 0. Al-
ternatively, the expectation Eεi² I{||ani|| |εi| > ε} can be bounded by ε^{-k} E|εi|^{k+2} ||ani||^k, and a second
set of sufficient conditions is
$$\sum_{i=1}^n \|a_{ni}\|^k \to 0; \qquad E|\varepsilon_1|^k < \infty, \ k > 2.$$

1.3.3 CLT for a random number of summands

The canonical CLT for the iid case says that if X1, X2, . . . are iid with mean zero and a finite
variance σ², then the sequence of partial sums Tn = Σ_{i=1}^n Xi obeys the central limit theorem
in the sense that Tn/(σ√n) →d N(0, 1). There are some practical problems that arise in applications, for
example in sequential statistical analysis, where the number of terms present in a partial sum
is a random variable. Precisely, {N(t)}, t ≥ 0, is a family of (nonnegative) integer-valued
random variables, and we want to approximate the distribution of TN(t), where for each fixed
n, Tn is still the sum of n iid variables as above. The question is whether a CLT still holds
under appropriate conditions. Here is the Anscombe-Renyi theorem.

Theorem 1.3.8 (Anscombe-Renyi) Let Xi be iid with mean µ and a finite variance σ²,
and let {Nn} be a sequence of (nonnegative) integer-valued random variables and {an} a
sequence of positive constants tending to ∞ such that Nn/an →p c, 0 < c < ∞, as n → ∞.
Then,
$$\frac{T_{N_n} - N_n\mu}{\sigma\sqrt{N_n}} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty.$$

Example 1.3.10 (Coupon collection problem) Consider a problem in which a person
keeps purchasing boxes of cereal until she obtains a full set of some n coupons. The as-
sumptions are that the boxes have an equal probability of containing any of the n coupons,
mutually independently. Suppose that the costs of buying the cereal boxes are iid with
some mean µ and some variance σ². If it takes Nn boxes to obtain the complete set of all
n coupons, then Nn/(n ln n) →p 1 as n → ∞. The total cost to the customer of obtaining the
complete set of coupons is TNn = X1 + · · · + XNn. By the Anscombe-Renyi theorem and
Slutsky's theorem, we have that (TNn − Nnµ)/(σ√(n ln n)) is approximately N(0, 1).

[On the distribution of Nn: let ti be the number of boxes needed to collect the i-th new coupon after i − 1
coupons have been collected. Observe that the probability of collecting a new coupon given
i − 1 distinct coupons is pi = (n − i + 1)/n. Therefore, ti has a geometric distribution with expectation
1/pi, and Nn = Σ_{i=1}^n ti. By Theorem 1.2.5, we know
$$\frac{1}{n\ln n} N_n \xrightarrow{p} \frac{1}{n\ln n}\sum_{i=1}^n p_i^{-1} = \frac{1}{n\ln n}\sum_{i=1}^n \frac{n}{n-i+1} = \frac{1}{\ln n}\sum_{i=1}^n \frac{1}{i} = \frac{H_n}{\ln n}.$$
Note that Hn = Σ_{i=1}^n 1/i is the nth harmonic number, and hence by using the asymptotics of the harmonic
numbers (Hn = ln n + γ + o(1), where γ is Euler's constant), we obtain Nn/(n ln n) → 1.]

1.3.4 Central limit theorems for dependent sequences

The assumption that observed data X1 , X2 , . . . form an independent sequence is often one
of technical convenience. Real data frequently exhibit some dependence, at least
some correlation at small lags. Exact sampling distributions for fixed n are even more
complicated for dependent data than in the independent case, and so asymptotics remain
useful. In this subsection, we present CLTs for some important dependence structures. The
cases of stationary m-dependence and without replacement sampling are considered.

Stationary m-dependence

We start with an example to illustrate that a CLT for sample means can hold even if the
summands are not independent.

Example 1.3.11 Suppose X1, X2, . . . is a stationary Gaussian sequence with E(Xi) = µ and
Var(Xi) = σ² < ∞. Then, for each n, √n(X̄n − µ) is normally distributed, and so √n(X̄n −
µ) →d N(0, τ²), provided τ² = limn→∞ Var(√n(X̄n − µ)) < ∞. But
$$\mathrm{Var}\!\left(\sqrt{n}(\bar{X}_n - \mu)\right) = \sigma^2 + \frac{1}{n}\sum_{i\ne j} \mathrm{Cov}(X_i, X_j) = \sigma^2 + \frac{2}{n}\sum_{i=1}^n (n-i)\gamma_i,$$
where γi = Cov(X1, Xi+1). Therefore, τ² < ∞ if and only if (1/n)Σ_{i=1}^n (n − i)γi has a finite limit,
say ρ, in which case √n(X̄n − µ) →d N(0, σ² + ρ).

What is going on qualitatively is that (1/n)Σ_{i=1}^n (n − i)γi has a finite limit when |γi| → 0 adequately
fast. Instances of this are when only a fixed finite number of the γi are nonzero or when γi is
damped exponentially; i.e., γi = O(a^i) for some |a| < 1. It turns out that there are general
CLTs for sample averages under such conditions. The case of m-dependence is provided
below.

Definition 1.3.3 A stationary sequence {Xn } is called m-dependent for a given fixed m if
(X1 , . . . , Xi ) and (Xj , Xj+1 , . . .) are independent whenever j − i > m.

Theorem 1.3.9 (m-dependent sequence) Let {Xi} be a stationary m-dependent se-
quence with E(Xi) = µ and Var(Xi) = σ² < ∞. Then √n(X̄n − µ) →d N(0, τ²), where
τ² = σ² + 2Σ_{i=2}^{m+1} Cov(X1, Xi).

See Lehmann (1999) for a proof; m-dependent data arise either as standard time series
models or as models in their own right. For example, if {Zi } are i.i.d. random variables
and Xi = a1 Zi−1 + a2 Zi−2 , i ≥ 3, then {Xi } is 1-dependent. This is a simple moving
average process of use in time series analysis. A more general m-dependent sequence is
Xi = h(Zi , Zi+1 , . . . , Zi+m ) for some function h.

Example 1.3.12 Suppose the Zi are i.i.d. with mean µ and a finite variance σ², and let Xi = (Zi + Zi+1)/2.
Then, obviously, Σ_{i=1}^n Xi = (Z1 + Zn+1)/2 + Σ_{i=2}^n Zi. Then, by Slutsky's theorem, √n(X̄n −
µ) →d N(0, σ²). Notice that we write √n(X̄n − µ) as a sum of two parts, in which one part is dominant
and produces the CLT, and the other part is asymptotically negligible. This is essentially
the method of proof of the CLT for more general m-dependent sequences.
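The variance formula of Theorem 1.3.9 is easy to check by simulation; below is a sketch for this 1-dependent moving average, for which τ² = σ²/2 + 2(σ²/4) = σ². The normal choice for Zi and the simulation sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, sigma2 = 5000, 4000, 1.0
vals = np.empty(reps)
for r in range(reps):
    z = rng.normal(0.0, np.sqrt(sigma2), size=n + 1)
    x = 0.5 * (z[:-1] + z[1:])                  # 1-dependent moving average, mean 0
    vals[r] = np.sqrt(n) * x.mean()

# Theorem 1.3.9: tau^2 = Var(X_1) + 2 Cov(X_1, X_2) = sigma2/2 + 2 * sigma2/4 = sigma2
print("simulated var of sqrt(n) * Xbar:", vals.var())    # close to sigma2 = 1
```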

Sampling without replacement

Dependent data also naturally arise in sampling without replacement from a finite popula-
tion. Central limit theorems are available and we will present them shortly. But let us start

with an illustrative example.

Example 1.3.13 Suppose that, among N objects in a population, D are of type 1 and N − D
of type 2. A sample without replacement of size n is taken, and let X be the number of
sampled units of type 1. We can regard the D type-1 units as having numerical values
X1 = · · · = XD = 1 and the rest as having values XD+1 = · · · = XN = 0, so that X = Σ_{i=1}^n XNi, where
XN1, . . . , XNn correspond to the sampled units.

Of course, X has the hypergeometric distribution
$$P(X = x) = \frac{C_D^x\, C_{N-D}^{n-x}}{C_N^n}, \quad 0 \le x \le D.$$
Two configurations can be considered: (a) n is fixed, and D/N → p, 0 < p < 1, with N → ∞.
In this case, by applying Stirling's approximation to N! and D!, P(X = x) → Cn^x p^x (1 − p)^{n−x},
and so X →d Bin(n, p). (b) n, N, N − n → ∞, D/N → p, 0 < p < 1. This is the case where
convergence of X to normality holds.

Here is a general result; again, see Lehmann (1999) for a proof.

Theorem 1.3.10 For N ≥ 1, let πN be a finite population with numerical values X1, X2, . . . , XN.
Let XN1, XN2, . . . , XNn be the values of the units of a sample without replacement of size n.
Let X̄n = Σ_{i=1}^n XNi/n and X̄N = Σ_{i=1}^N Xi/N. Suppose n, N − n → ∞, and

(a)
$$\frac{\max_{1\le i\le N}(X_i - \bar{X}_N)^2}{\sum_{i=1}^N (X_i - \bar{X}_N)^2} \to 0, \quad \text{and} \quad n/N \to \tau,\ 0 < \tau < 1, \ \text{as } N \to \infty;$$

(b)
$$\frac{N \max_{1\le i\le N}(X_i - \bar{X}_N)^2}{\sum_{i=1}^N (X_i - \bar{X}_N)^2} = O(1), \quad \text{as } N \to \infty.$$

Then,
$$\frac{\bar{X}_n - E(\bar{X}_n)}{\sqrt{\mathrm{Var}(\bar{X}_n)}} \xrightarrow{d} N(0, 1).$$
Example 1.3.14 Suppose XN1, . . . , XNn is a sample without replacement from the set
{1, 2, . . . , N}, and let X̄n = Σ_{i=1}^n XNi/n. Then, by a direct calculation,
$$E(\bar{X}_n) = \frac{N+1}{2}, \qquad \mathrm{Var}(\bar{X}_n) = \frac{(N-n)(N+1)}{12n}.$$
Furthermore,
$$\frac{N\max_{1\le i\le N}(X_i - \bar{X}_N)^2}{\sum_{i=1}^N (X_i - \bar{X}_N)^2} = \frac{3(N-1)}{N+1} = O(1).$$
Hence, by Theorem 1.3.10, (X̄n − E(X̄n))/√Var(X̄n) →d N(0, 1).

1.3.5 Accuracy of the CLT

Suppose a sequence of CDFs FXn →d FX for some FX. Such a weak convergence result is
usually used to approximate the true value of FXn(x) at some fixed n and x by FX(x).
However, the weak convergence result by itself says absolutely nothing about the accuracy
of approximating FXn(x) by FX(x) for that particular value of n. To approximate FXn(x) by
FX(x) for a given finite n is a leap of faith unless we have some idea of the error committed,
i.e., |FXn(x) − FX(x)|. More specifically, if for a sequence of random variables X1, . . . , Xn,
$$\frac{\bar{X}_n - E(\bar{X}_n)}{\sqrt{\mathrm{Var}(\bar{X}_n)}} \xrightarrow{d} Z \sim N(0, 1),$$
then we need some idea of the error
$$\left| P\left( \frac{\bar{X}_n - E(\bar{X}_n)}{\sqrt{\mathrm{Var}(\bar{X}_n)}} \le x \right) - \Phi(x) \right|$$
in order to use the central limit theorem for a practical approximation with some degree
of confidence. The first result in this direction for the iid case is the classic Berry-Esseen
theorem. Typically, these accuracy measures give bounds on the error in the appropriate
CLT for any fixed n, making assumptions about moments of the Xi.

In the canonical iid case with a finite variance, the CLT says that √n(X̄ − µ)/σ converges
in law to N(0, 1). By Polya's theorem, the uniform error ∆n = sup−∞<x<∞ |P(√n(X̄ −
µ)/σ ≤ x) − Φ(x)| → 0 as n → ∞. Bounds on ∆n for any given n are called uniform bounds.
The following results are the classic Berry-Esseen uniform bound and an extension of the
Berry-Esseen inequality to the case of independent but not iid variables; a proof can be found
in Petrov (1975). Introducing a higher-order (third) moment assumption, the Berry-Esseen
inequality asserts the rate O(n^{-1/2}) for this convergence.

Theorem 1.3.11 (i) (Berry-Esseen; iid case) Let X1, . . . , Xn be iid with E(X1) = µ,
Var(X1) = σ², and β3 = E|X1 − µ|³ < ∞. Then there exists a universal constant C, not
depending on n or the distribution of the Xi, such that
$$\sup_x \left| P\left( \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \le x \right) - \Phi(x) \right| \le \frac{C\beta_3}{\sigma^3\sqrt{n}}.$$

(ii) (independent but not iid case) Let X1, . . . , Xn be independent with E(Xi) = µi,
Var(Xi) = σi², and β3i = E|Xi − µi|³ < ∞. Then there exists a universal constant C*, not
depending on n or the distributions of the Xi, such that
$$\sup_x \left| P\left( \frac{\bar{X}_n - E(\bar{X}_n)}{\sqrt{\mathrm{Var}(\bar{X}_n)}} \le x \right) - \Phi(x) \right| \le \frac{C^* \sum_{i=1}^n \beta_{3i}}{\left( \sum_{i=1}^n \sigma_i^2 \right)^{3/2}}.$$

This is the best possible rate in the sense of not being subject to improvement without narrowing
the class of distribution functions considered. For some specific underlying CDFs FX, better
rates of convergence in the CLT may be possible. This issue will become clearer when we discuss
asymptotic expansions for P(√n(X̄n − µ)/σ ≤ x). In Theorem 1.3.11-(i), the universal
constant C may be taken as C = 0.8.

Example 1.3.15 The Berry-Esseen bound is uniform in x, and it is valid for any n ≥ 1.
While these are positive features of the theorem, it may not be possible to establish that
∆n ≤ ε for some preassigned ε > 0 by using the Berry-Esseen theorem unless n is very large.
Let us see an illustrative example. Suppose X1, . . . , Xn are iid BIN(p, 1) and n = 100, and suppose
we want the CLT approximation to be accurate to within an error of 0.005. In the
Bernoulli case, β3 = pq(1 − 2pq), where q = 1 − p. Using C = 0.8, the uniform Berry-Esseen
bound is
$$\Delta_n \le \frac{0.8\, pq(1 - 2pq)}{(pq)^{3/2}\sqrt{n}}.$$
This is less than the prescribed 0.005 iff pq > 0.4784, which does not hold for any
0 < p < 1. Even for p = 0.5, the bound is less than or equal to 0.005 only when
n > 25,000, which is a very large sample size. Of course, this is not necessarily a flaw
of the Berry-Esseen inequality itself, because the desire to have a uniform error of at most
0.005 is a tough demand, and a fairly large value of n is probably needed to have such
a small error in the CLT.
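The arithmetic in this example is easy to reproduce; the helper below evaluates the uniform Berry-Esseen bound for Bernoulli data using the C = 0.8 constant quoted above. The particular (p, n) values printed are arbitrary.

```python
import numpy as np

def berry_esseen_bernoulli(p, n, C=0.8):
    """Uniform Berry-Esseen bound C * beta_3 / (sigma^3 sqrt(n)) for iid BIN(p, 1),
    where beta_3 = p q (1 - 2 p q) and sigma^2 = p q."""
    q = 1.0 - p
    beta3 = p * q * (1.0 - 2.0 * p * q)
    return C * beta3 / ((p * q) ** 1.5 * np.sqrt(n))

print(berry_esseen_bernoulli(0.5, 100))      # 0.08: far above the target 0.005
print(berry_esseen_bernoulli(0.5, 25600))    # about 0.005
print(berry_esseen_bernoulli(0.1, 100))      # even larger for a skewed p
```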

Example 1.3.16 As an example of independent variables that are not iid, consider Xi ∼
BIN(i^{-1}, 1), i ≥ 1, and let Sn = Σ_{i=1}^n Xi. Then E(Sn) = Σ_{i=1}^n i^{-1}, Var(Sn) = Σ_{i=1}^n (i − 1)/i²,
and β3i = (i − 1)(i² − 2i + 2)/i⁴. Therefore, from Theorem 1.3.11-(ii),
$$\Delta_n \le C^*\, \frac{\sum_{i=1}^n (i-1)(i^2 - 2i + 2)/i^4}{\left[ \sum_{i=1}^n (i-1)/i^2 \right]^{3/2}}.$$
Observe now that Σ_{i=1}^n (i − 1)/i² = log n + O(1) and Σ_{i=1}^n (i − 1)(i² − 2i + 2)/i⁴ = log n + O(1).
Substituting these back into the Berry-Esseen bound, one obtains with some minor algebra
that ∆n = O((log n)^{-1/2}).

For x sufficiently large while n remains fixed, the quantities FXn(x) and FX(x) each
become so close to 1 that the bound given in Theorem 1.3.11 is too crude. There has been
a parallel development on bounds on the error in the CLT at a particular x, as
opposed to bounds on the uniform error. Such bounds are called local Berry-Esseen bounds.
Many different types of local bounds are available. We present here just one.

Theorem 1.3.12 Let X1, . . . , Xn be independent with E(Xi) = µi, Var(Xi) = σi², and
E|Xi − µi|^{2+δ} < ∞ for some 0 < δ ≤ 1. Then
$$\left| P\left( \frac{\bar{X}_n - E(\bar{X}_n)}{\sqrt{\mathrm{Var}(\bar{X}_n)}} \le x \right) - \Phi(x) \right| \le \frac{D}{1 + |x|^{2+\delta}}\; \frac{\sum_{i=1}^n E|X_i - \mu_i|^{2+\delta}}{\left( \sum_{i=1}^n \sigma_i^2 \right)^{1+\delta/2}}$$
for some universal constant 0 < D < ∞.

Such local bounds are useful in proving convergence of global error criteria such as
∫ |FXn(x) − Φ(x)|^p dx, or for establishing approximations to the moments of FXn. Uniform
error bounds would be useless for these purposes. If the third absolute moments are finite,
an explicit value for the universal constant D can be chosen to be 31. A good reference for
local bounds is Serfling (1980).

Error bounds for normal approximations to many other types of statistics besides sample
means are known, such as results for statistics that are smooth functions of means. The
order of the error depends on the conditions one assumes on the nature of the function. We
will discuss this problem in Section 2 after we introduce the Delta method.

1.3.6 Edgeworth and Cornish-Fisher expansions

We now consider the important topic of writing asymptotic expansions for the CDFs of
centered and normalized statistics. When the statistic is a sample mean, let Zn = √n(X̄n −
µ)/σ and FZn(x) = P(Zn ≤ x), where X1, . . . , Xn are i.i.d. with a CDF F having mean µ
and variance σ² < ∞.

The CLT says that FZn(x) → Φ(x) for every x, and the Berry-Esseen theorem says
|FZn(x) − Φ(x)| = O(n^{-1/2}) uniformly in x if X has three moments. If we change the
approximation Φ(x) to Φ(x) + C1(F)p1(x)φ(x)/√n for some suitable constant C1(F) and a
suitable polynomial p1(x), we can assert that
$$\left| F_{Z_n}(x) - \Phi(x) - \frac{C_1(F)p_1(x)\phi(x)}{\sqrt{n}} \right| = O(n^{-1}),$$
uniformly in x. Expansions of the form
$$F_{Z_n}(x) = \Phi(x) + \sum_{s=1}^k \frac{q_s(x)}{n^{s/2}} + o(n^{-k/2}) \quad \text{uniformly in } x$$
are known as Edgeworth expansions for Zn. One needs some conditions on F and enough
moments of X to carry the expansion to k terms for a given k. An excellent reference for the
main results on Edgeworth expansions is Hall (1992). The coefficients in the Edgeworth
expansion for means depend on the cumulants of F, which share a functional relationship
with the sequence of moments of F. Cumulants are also useful in many other contexts, for
example, the saddlepoint approximation.

We start with the definition and recursive representations of the sequence of cumulants
of a distribution. The term cumulant was coined by Fisher (1931).

Definition 1.3.4 Let X ∼ F have a finite m.g.f. ψ(t) in some neighborhood of zero,
and let K(t) = log ψ(t) where it exists. The rth cumulant of X (or of F) is defined as
κr = (d^r/dt^r) K(t)|_{t=0}.

Equivalently, the cumulants of X are the coefficients in the power series expansion K(t) =
Σ_{n=1}^∞ κn t^n/n! within the radius of convergence of K(t). By equating coefficients in e^{K(t)} with
those in ψ(t), it is easy to express the first few moments (and therefore the first few central
moments) in terms of the cumulants. Indeed, letting ci = E(X^i) and µi = E(X − µ)^i, one
obtains the expressions
$$c_1 = \kappa_1, \quad c_2 = \kappa_2 + \kappa_1^2, \quad c_3 = \kappa_3 + 3\kappa_1\kappa_2 + \kappa_1^3, \quad c_4 = \kappa_4 + 4\kappa_1\kappa_3 + 3\kappa_2^2 + 6\kappa_1^2\kappa_2 + \kappa_1^4,$$
$$\mu_2 = \sigma^2 = \kappa_2, \quad \mu_3 = \kappa_3, \quad \mu_4 = \kappa_4 + 3\kappa_2^2.$$
In general, the cumulants satisfy the recursion relations
$$\kappa_n = c_n - \sum_{j=1}^{n-1} C_{n-1}^{j-1}\, c_{n-j}\,\kappa_j,$$
which result in
$$\kappa_1 = \mu, \quad \kappa_2 = \sigma^2, \quad \kappa_3 = \mu_3, \quad \kappa_4 = \mu_4 - 3\mu_2^2.$$
The higher-order cumulants are quite complex but can be found in Kendall's Advanced Theory
of Statistics.

Example 1.3.17 Suppose X ∼ N(µ, σ²). Of course, κ1 = µ and κ2 = σ². Since K(t) =
tµ + t²σ²/2 is a quadratic, all derivatives of K(t) of order higher than 2 vanish. Consequently,
κr = 0 for r > 2. If X ∼ Poisson(λ), then K(t) = λ(e^t − 1), and therefore all derivatives of
K(t) are equal to λe^t. It follows that κr = λ for r ≥ 1. These are two interesting special
cases with neat structure, and they have served as the basis for stochastic modeling.
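The recursion κn = cn − Σ C_{n−1}^{j−1} c_{n−j} κj is easy to implement; the sketch below recovers the cumulants of a Poisson(λ) distribution, which should all equal λ, from its raw moments (listed here using the moment-cumulant relations above).

```python
from math import comb

def cumulants_from_raw_moments(c):
    """Given raw moments c[1..m] (c[0] unused), return cumulants k[1..m] via
    k_n = c_n - sum_{j=1}^{n-1} C(n-1, j-1) c_{n-j} k_j."""
    m = len(c) - 1
    k = [0.0] * (m + 1)
    for n in range(1, m + 1):
        k[n] = c[n] - sum(comb(n - 1, j - 1) * c[n - j] * k[j] for j in range(1, n))
    return k

lam = 2.5   # Poisson(lambda): first four raw moments written via the cumulants
c = [0.0, lam, lam + lam**2, lam + 3*lam**2 + lam**3,
     lam + 7*lam**2 + 6*lam**3 + lam**4]
print(cumulants_from_raw_moments(c)[1:])    # [2.5, 2.5, 2.5, 2.5]
```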

Now let us consider the expansion for (functions of) means. To illustrate the idea, let
us consider Zn. Assume that the m.g.f. of W = (X1 − µ)/σ is finite and positive in a
neighborhood of 0. The m.g.f. of Zn is equal to
$$\psi_n(t) = \left[ \exp\{K(t/\sqrt{n})\} \right]^n = \exp\left\{ \frac{t^2}{2} + \sum_{j=3}^{\infty} \frac{\kappa_j t^j}{j!\, n^{(j-2)/2}} \right\},$$
where K(t) is the cumulant generating function of W and the κj's are the corresponding cumu-
lants (κ1 = 0, κ2 = 1, κ3 = EW³, and κ4 = EW⁴ − 3). Using the series expansion of the exponential,
we obtain that
$$\psi_n(t) = e^{t^2/2} + n^{-1/2} r_1(t) e^{t^2/2} + \cdots + n^{-j/2} r_j(t) e^{t^2/2} + \cdots, \tag{1.5}$$
where rj is a polynomial of degree 3j depending on κ3, . . . , κj+2 but not on n, j = 1, 2, . . ..
For example, it can be shown that
$$r_1(t) = \frac{1}{6}\kappa_3 t^3, \qquad r_2(t) = \frac{1}{24}\kappa_4 t^4 + \frac{1}{72}\kappa_3^2 t^6.$$
Since ψn(t) = ∫ e^{tx} dFZn(x) and e^{t²/2} = ∫ e^{tx} dΦ(x), expansion (1.5) suggests the inverse
expansion
$$F_{Z_n}(x) = \Phi(x) + n^{-1/2} R_1(x) + \cdots + n^{-j/2} R_j(x) + \cdots,$$
where Rj(x) is a function satisfying ∫ e^{tx} dRj(x) = rj(t)e^{t²/2}, j = 1, 2, . . .. Thus, the Rj's can be
obtained once the rj's are derived. For example,
$$R_1(x) = -\frac{1}{6}\kappa_3 (x^2 - 1)\phi(x),$$
$$R_2(x) = -\left[ \frac{1}{24}\kappa_4\, x(x^2 - 3) + \frac{1}{72}\kappa_3^2\, x(x^4 - 10x^2 + 15) \right]\phi(x).$$

The CLT for means fails to capture possible skewness in the distribution of the mean
for a given finite n because all normal distributions are symmetric. By expanding the CDF
to the next term, the skewness can be captured. Expansion to another term also adjusts for
the kurtosis. Although expansions to any number of terms are available under existence of
enough moments, usually an expansion to two terms after the leading term is of the most
practical importance. Indeed, expansions to three terms or more can be unstable due to the
presence of the polynomials in the expansions. We present the two-term expansion next. A
rigorous statement of the Edgeworth expansion for a more general $Z_n$ will be introduced in the next chapter, after the multivariate Delta theorem has been established. The proof can be found in
Hall (1992).

Theorem 1.3.13 (Two-term Edgeworth expansion) Suppose $F$ is an absolutely continuous distribution and $E_F(X^4) < \infty$. Then
$$F_{Z_n}(x) = \Phi(x) + \frac{C_1(F)p_1(x)\phi(x)}{\sqrt{n}} + \frac{[C_2(F)p_2(x) + C_3(F)p_3(x)]\phi(x)}{n} + O(n^{-3/2}),$$
uniformly in $x$, where
$$C_1(F) = \frac{E(X-\mu)^3}{6\sigma^3}, \quad C_2(F) = \frac{E(X-\mu)^4/\sigma^4 - 3}{24}, \quad C_3(F) = \frac{[E(X-\mu)^3]^2}{72\sigma^6},$$
$$p_1(x) = 1 - x^2, \quad p_2(x) = 3x - x^3, \quad p_3(x) = 10x^3 - 15x - x^5.$$

Note that the terms $C_1(F)$ and $C_2(F)$ can be viewed as corrections for skewness and kurtosis departures from normality in $F_{Z_n}(x)$, respectively. It is useful to mention here that the corresponding formal two-term expansion for the density of $Z_n$ is given by
$$\phi(z) + n^{-1/2}C_1(F)(z^3 - 3z)\phi(z) + n^{-1}\left[C_3(F)(z^6 - 15z^4 + 45z^2 - 15) + C_2(F)(z^4 - 6z^2 + 3)\right]\phi(z).$$

One of the uses of an Edgeworth expansion in statistics is approximation of the power of


a test. In the one-parameter regular exponential family, the natural sufficient statistic is a
sample mean, and standard tests are based on this statistic. So the Edgeworth expansion for
sample means of iid random variables can be used to approximate the power of such tests.
Here is an example.

Example 1.3.18 Suppose $X_1, \ldots, X_n \stackrel{iid}{\sim} \text{Exp}(\lambda)$ and we wish to test $H_0: \lambda = 1$ vs. $H_1: \lambda > 1$. The UMP test rejects $H_0$ for large values of $\sum_{i=1}^n X_i$. If the cutoff value is found by using the CLT, then the test rejects $H_0$ for $\bar{X}_n > 1 + k/\sqrt{n}$, where $k = z_\alpha$. The power at an alternative $\lambda$ equals
$$\text{Power} = P_\lambda\left(\bar{X}_n > 1 + k/\sqrt{n}\right) = P_\lambda\left(\frac{\bar{X}_n - \lambda}{\lambda/\sqrt{n}} > \frac{1 + k/\sqrt{n} - \lambda}{\lambda/\sqrt{n}}\right) = 1 - P_\lambda\left(\frac{\bar{X}_n - \lambda}{\lambda/\sqrt{n}} \le \frac{\sqrt{n}(1-\lambda)}{\lambda} + \frac{k}{\lambda}\right) \to 1.$$
For a more useful approximation, the Edgeworth expansion is used. For example, the general one-term Edgeworth expansion for sample means,
$$F_{Z_n}(x) = \Phi(x) + \frac{C_1(F)(1 - x^2)\phi(x)}{\sqrt{n}} + O(n^{-1}),$$
can be used to approximate the power expression above. Algebra reduces the one-term Edgeworth expression to the formal approximation
$$\text{Power} \approx \Phi\left(\frac{\sqrt{n}(\lambda-1) - k}{\lambda}\right) + \frac{1}{3\sqrt{n}}\left(\frac{(\sqrt{n}(\lambda-1) - k)^2}{\lambda^2} - 1\right)\phi\left(\frac{\sqrt{n}(\lambda-1) - k}{\lambda}\right).$$

This is a much more useful approximation than simply saying that for large n the power is
close to 1.
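As a numerical illustration of the example above, the following Python sketch (assuming numpy and scipy are available; the choices of $n$, $\lambda$ and $\alpha$ are arbitrary) compares the plain CLT power approximation, the one-term Edgeworth-corrected approximation, and a Monte Carlo estimate of the exact power.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, lam, alpha = 20, 1.5, 0.05
k = norm.ppf(1 - alpha)                       # z_alpha
u = (np.sqrt(n) * (lam - 1) - k) / lam

clt_power = norm.cdf(u)
edg_power = norm.cdf(u) + (u**2 - 1) * norm.pdf(u) / (3 * np.sqrt(n))

# Monte Carlo power: reject H0 when the sample mean exceeds 1 + k/sqrt(n)
xbar = rng.exponential(scale=lam, size=(100_000, n)).mean(axis=1)
mc_power = np.mean(xbar > 1 + k / np.sqrt(n))

print(clt_power, edg_power, mc_power)         # Edgeworth is much closer to MC
```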

For constructing asymptotically correct confidence intervals for a parameter on the basis
of an asymptotically normal statistic, the first-order approximation to the quantiles of the
statistic (suitably centered and normalized) comes from using the central limit theorem. Just
as Edgeworth expansions produce more accurate expansions for the CDF of the statistic than
does just the central limit theorem, higher-order expansions for the quantiles produce more
accurate approximations than does just the normal quantile. These higher-order expansions
for quantiles are essentially obtained from recursively inverted Edgeworth expansions, start-
ing with the normal quantile as the initial approximation. They are called Cornish-Fisher
expansions. We briefly present the case of sample means. Let the standardized cumulants be the quantities $\rho_r = \kappa_r/\sigma^r$.

Theorem 1.3.14 Let $X_1, \ldots, X_n$ be i.i.d. with absolutely continuous CDF $F$ having a finite m.g.f. in some open neighborhood of zero. Let $Z_n = \sqrt{n}(\bar{X}_n - \mu)/\sigma$ and $H_n(x) = P_F(Z_n \leq x)$. Then,
$$H_n^{-1}(\alpha) = z_\alpha + \frac{(z_\alpha^2 - 1)\rho_3}{6\sqrt{n}} + \frac{(z_\alpha^3 - 3z_\alpha)\rho_4}{24n} - \frac{(2z_\alpha^3 - 5z_\alpha)\rho_3^2}{36n} + O(n^{-3/2}).$$

Using Taylor’s expansions at zα for Φ(wnα ), p1 (wnα )φ(wnα ) and p2 (wnα )φ(wnα ), and the fact
that φ0 (x) = −xφ(x), we can obtain this theorem by inverting the Edgeworth expansion.

Example 1.3.19 Let $W_n \sim \chi^2_n$ and $Z_n = (W_n - n)/\sqrt{2n} \stackrel{d}{\to} N(0,1)$ as $n \to \infty$, so a first-order approximation to the upper $\alpha$th quantile of $W_n$ is just $n + z_\alpha\sqrt{2n}$. The Cornish-Fisher expansion should produce a more accurate approximation. To verify this, we will need the standardized cumulants, which are $\rho_3 = 2\sqrt{2}$ and $\rho_4 = 12$. Now substituting into the theorem above, we get the two-term Cornish-Fisher expansion
$$\chi^2_{n,\alpha} = n + z_\alpha\sqrt{2n} + \frac{2}{3}(z_\alpha^2 - 1) + \frac{z_\alpha^3 - 7z_\alpha}{9\sqrt{2n}}.$$
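A quick numerical check of this expansion, sketched in Python with scipy (the values of $n$ and $\alpha$ are arbitrary), compares the first-order normal approximation, the two-term Cornish-Fisher approximation, and the exact $\chi^2_n$ quantile.

```python
import numpy as np
from scipy.stats import norm, chi2

n, alpha = 10, 0.05
z = norm.ppf(1 - alpha)                  # upper-alpha standard normal quantile

first_order = n + z * np.sqrt(2 * n)
cornish_fisher = (n + z * np.sqrt(2 * n) + 2 * (z**2 - 1) / 3
                  + (z**3 - 7 * z) / (9 * np.sqrt(2 * n)))
exact = chi2.ppf(1 - alpha, df=n)

print(first_order, cornish_fisher, exact)   # about 17.36, 18.32, 18.31
```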

1.3.7 The law of the iterated logarithm

The law of the iterated logarithm (LIL) complements the CLT by describing the precise extremes of the fluctuations of the sequence of random variables
$$\frac{\sum_{i=1}^n (X_i - \mu)}{\sigma n^{1/2}}, \qquad n = 1, 2, \ldots.$$
The CLT states that this sequence converges in law to $N(0,1)$, but does not otherwise provide information about the fluctuations of these random variables about the expected value 0. The LIL asserts that the extreme fluctuations of this sequence are essentially of the exact order of magnitude $(2\log\log n)^{1/2}$. The classical iid case is covered by

Theorem 1.3.15 (Hartman and Wintner) Let $\{X_i\}$ be iid with mean $\mu$ and finite variance $\sigma^2$. Then
$$\limsup_{n\to\infty}\frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} = 1 \quad \text{wp1};$$
$$\liminf_{n\to\infty}\frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} = -1 \quad \text{wp1}.$$

In other words: with probability 1, for any $\epsilon > 0$, only finitely many of the events
$$\frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} > 1 + \epsilon, \qquad \frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} < -1 - \epsilon, \qquad n = 1, 2, \ldots,$$
are realized, whereas infinitely many of the events
$$\frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} > 1 - \epsilon, \qquad \frac{\sum_{i=1}^n (X_i - \mu)}{(2\sigma^2 n\log\log n)^{1/2}} < -1 + \epsilon, \qquad n = 1, 2, \ldots,$$
occur. That is, with probability 1, for any $\epsilon > 0$, all but finitely many of these fluctuations fall within the boundaries $\pm(1+\epsilon)(2\log\log n)^{1/2}$ and, moreover, the boundaries $\pm(1-\epsilon)(2\log\log n)^{1/2}$ are reached infinitely often.

In the LIL theorem, what is going on is that, for a given $n$, there is some collection of sample points $\omega$ for which the partial sum $S_n - n\mu$ stays in a specific $\sqrt{n}$-neighborhood of zero. But this collection keeps changing with changing $n$, and any particular $\omega$ is sometimes in the collection and at other times out of it. Such unlucky values of $n$ are unbounded, giving rise to the LIL phenomenon. The exact rate $\sqrt{n\log\log n}$ is a technical aspect and cannot be explained intuitively.

The LIL also complements, and indeed refines, the SLLN (assuming existence of second moments). In terms of the average dealt with in the SLLN, $\frac{1}{n}\sum_{i=1}^n X_i - \mu$, the LIL asserts that the extreme fluctuations are essentially of the exact order of magnitude
$$\frac{\sigma(2\log\log n)^{1/2}}{n^{1/2}}.$$
Thus, with probability 1, for any $\epsilon > 0$, the infinite sequence of "confidence intervals"
$$\left\{\frac{1}{n}\sum_{i=1}^n X_i \pm (1+\epsilon)\frac{\sigma(2\log\log n)^{1/2}}{n^{1/2}}\right\}$$
contains $\mu$ with only finitely many exceptions. In this asymptotic fashion, the LIL provides a basis for the concept of 100% confidence intervals. The LIL also provides an example of almost sure convergence being truly stronger than convergence in probability.

Example 1.3.20 Let $X_1, X_2, \ldots$ be iid with a finite variance. Then,
$$\frac{S_n - n\mu}{\sqrt{2n\log\log n}} = \frac{S_n - n\mu}{\sqrt{n}}\cdot\frac{1}{\sqrt{2\log\log n}} = O_p(1)\cdot o(1) = o_p(1).$$
But, by the LIL, $\frac{S_n - n\mu}{\sqrt{2n\log\log n}}$ does not converge a.s. to zero. Hence, convergence in probability is weaker than almost sure convergence, in general.
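The contrast can be illustrated numerically, although only roughly, since the $\log\log n$ rate makes the convergence extremely slow; the following Python sketch (one simulated path of standard normal variables, with an arbitrary path length) tracks the normalized partial sums appearing in the LIL.

```python
import numpy as np

rng = np.random.default_rng(0)
n_max = 10**6
x = rng.standard_normal(n_max)                   # mu = 0, sigma = 1

n = np.arange(3, n_max + 1)                      # start at 3 so loglog(n) > 0
s = np.cumsum(x)[2:]                             # partial sums S_n
ratio = s / np.sqrt(2 * n * np.log(np.log(n)))   # the LIL-normalized sums

# The running maximum creeps up toward 1, and excursions beyond 1 + eps are rare.
print(ratio.max(), np.mean(np.abs(ratio) > 1.1))
```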

References
Billingsley, P. (1995). Probability and Measure, 3rd edition, John Wiley, New York.
Petrov, V. (1975). Limit Theorems for Sums of Independent Random Variables (translation from Russian),
Springer-Verlag, New York.
Serfling, R. (1980). Approximation Theorems of Mathematical Statistics, John Wiley, New York.
Shao, J. (2003). Mathematical Statistics, 2nd ed. Springer, New York.
Van der Vaart, A. W. (2000). Asymptotic Statistics, Cambridge University Press.

Chapter 2

Transformations of given statistics:


The delta method

Distributions of transformations of a statistic are of importance in applications. Suppose an


estimator Tn for a parameter θ is available, but the quantity of interest is g(θ) for some known
function g. A natural estimator is g(Tn ). The aim is to deduce the asymptotic behavior of
g(Tn ) based on those of Tn .

A first result is an immediate consequence of the continuous-mapping theorem. Of greater interest is a similar question concerning limit distributions. In particular, if $\sqrt{n}(T_n - \theta)$ converges in law to a limit distribution, is the same true for $\sqrt{n}[g(T_n) - g(\theta)]$? If $g$ is differentiable, then the answer is affirmative.

2.1 Basic result

The delta theorem says how to approximate the distribution of a transformation of a statistic
in large samples if we can approximate the distribution of the statistic itself. We firstly treat
the univariate case and present the basic delta theorem as follows.

Theorem 2.1.1 (Delta Theorem) Let $T_n$ be a sequence of statistics such that
$$\sqrt{n}(T_n - \theta) \stackrel{d}{\to} N(0, \sigma^2(\theta)). \qquad (2.1)$$
Let $g: \mathbb{R} \to \mathbb{R}$ be once differentiable at $\theta$ with $g'(\theta) \neq 0$. Then
$$\sqrt{n}[g(T_n) - g(\theta)] \stackrel{d}{\to} N(0, [g'(\theta)]^2\sigma^2(\theta)).$$

Proof. First note that it follows from the assumed CLT for $T_n$ that $T_n$ converges in probability to $\theta$ and hence $T_n - \theta = o_p(1)$. The proof of the theorem now follows from a simple application of Taylor's theorem, which says that
$$g(x_0 + h) = g(x_0) + hg'(x_0) + o(h)$$
if $g$ is differentiable at $x_0$. Therefore
$$g(T_n) = g(\theta) + (T_n - \theta)g'(\theta) + o_p(T_n - \theta).$$
That the remainder term is $o_p(T_n - \theta)$ follows from our observation that $T_n - \theta = o_p(1)$ and Lemma 1.2.1. Taking $g(\theta)$ to the left and multiplying both sides by $\sqrt{n}$, we obtain
$$\sqrt{n}[g(T_n) - g(\theta)] = \sqrt{n}(T_n - \theta)g'(\theta) + \sqrt{n}\,o_p(T_n - \theta).$$
Observing that $\sqrt{n}(T_n - \theta) = O_p(1)$ by the assumption of the theorem, we see that the last term on the right-hand side is $\sqrt{n}\,o_p(T_n - \theta) = o_p(1)$. Hence, an application of Slutsky's theorem to the above gives $\sqrt{n}[g(T_n) - g(\theta)] \stackrel{d}{\to} N(0, [g'(\theta)]^2\sigma^2(\theta))$. $\square$

Remark 2.1.1 Assume that $g$ is differentiable in a neighborhood of $\theta$, and $g'(x)$ is continuous at $\theta$. Further, if $\sigma(\theta)$ is a continuous function of $\theta$, then we have the modified conclusion, replacing $g'(\theta)$ and $\sigma(\theta)$ with $g'(T_n)$ and $\sigma(T_n)$:
$$\frac{\sqrt{n}[g(T_n) - g(\theta)]}{\{[g'(T_n)]^2\sigma^2(T_n)\}^{1/2}} \stackrel{d}{\to} N(0, 1).$$

Remark 2.1.2 In fact, the Delta Theorem does not require the asymptotic distribution of $T_n$ to be normal. By the foregoing proof, we see that if $a_n(T_n - \theta) \stackrel{d}{\to} Y$, where $a_n$ is a sequence of positive numbers with $\lim_{n\to\infty} a_n = \infty$, and the conditions in the Delta Theorem hold, then
$$a_n[g(T_n) - g(\theta)] \stackrel{d}{\to} g'(\theta)\,Y.$$

Example 2.1.1 Suppose $X_1, \ldots, X_n$ are iid with mean $\mu$ and variance $\sigma^2$. By taking $T_n = \bar{X}_n$, $\theta = \mu$, $\sigma^2(\theta) = \sigma^2$, and $g(x) = x^2$, one gets for $\mu \neq 0$
$$\sqrt{n}(\bar{X}_n^2 - \mu^2) \stackrel{d}{\to} N(0, 4\mu^2\sigma^2).$$
For $\mu = 0$, $n\bar{X}_n^2/\sigma^2 \stackrel{d}{\to} \chi^2_1$ by the continuous mapping theorem.
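A small simulation, sketched in Python (sample size, mean and number of replications are arbitrary choices), confirms that the variance of $\sqrt{n}(\bar{X}_n^2 - \mu^2)$ is close to the delta-method value $4\mu^2\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma, reps = 50, 2.0, 1.0, 100_000

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
stat = np.sqrt(n) * (xbar**2 - mu**2)

print(stat.var(), 4 * mu**2 * sigma**2)   # empirical vs. delta-method variance
```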

Example 2.1.2 For estimating $p^2$, suppose that we have the choice between (a) $X \sim \text{Bin}(n, p^2)$; (b) $Y \sim \text{Bin}(n, p)$, and that as estimators of $p^2$ in the two cases we would use respectively $X/n$ and $(Y/n)^2$. Then we have
$$\sqrt{n}\left(\frac{X}{n} - p^2\right) \stackrel{d}{\to} N(0, p^2(1 - p^2)),$$
$$\sqrt{n}\left(\left(\frac{Y}{n}\right)^2 - p^2\right) \stackrel{d}{\to} N(0, pq\cdot 4p^2).$$
At least for large $n$, $X/n$ will thus be more accurate than $(Y/n)^2$ provided
$$p^2(1 - p^2) < pq\cdot 4p^2,$$
that is, $X/n$ is preferable when $p > 1/3$ and $(Y/n)^2$ when $p < 1/3$.

Let us finally consider an example in which g 0 (·) does not exist.

Example 2.1.3 Suppose $T_n$ is a sequence of statistics satisfying (2.1) and that we are interested in the limiting behavior of $|T_n|$. Since $g(\theta) = |\theta|$ is differentiable with derivative $g'(\theta) = \pm 1$ at all values of $\theta \neq 0$, it follows from Theorem 2.1.1 that
$$\sqrt{n}(|T_n| - |\theta|) \stackrel{d}{\to} N(0, \sigma^2) \quad \text{for all } \theta \neq 0.$$
When $\theta = 0$, Theorem 2.1.1 does not apply, but it is easy to determine the limit behavior of $|T_n|$ directly. With $|T_n| - |\theta| = |T_n|$, we have
$$P(\sqrt{n}|T_n| < a) = P(-a < \sqrt{n}T_n < a) \to \Phi\left(\frac{a}{\sigma}\right) - \Phi\left(-\frac{a}{\sigma}\right) = P(\sigma\chi_1 < a),$$
where $\chi_1 = \sqrt{\chi^2_1}$ denotes the distribution of the absolute value of a standard normal variable. The convergence rate of $|T_n|$ therefore continues to be $1/\sqrt{n}$, but the form of the limit distribution is $\chi_1$ rather than normal.

2.2 Higher-order expansions

There are instances in which $g'(\theta) = 0$ (at least for some special value of $\theta$), in which case the limiting distribution of $g(T_n)$ is determined by the third term in the Taylor expansion. Thus, if $g'(\theta) = 0$, then
$$g(T_n) = g(\theta) + \frac{(T_n - \theta)^2}{2}g''(\theta) + o_p\big((T_n - \theta)^2\big) \qquad (2.2)$$
and hence
$$n(g(T_n) - g(\theta)) = n\frac{(T_n - \theta)^2}{2}g''(\theta) + o_p(1) \stackrel{d}{\to} \frac{g''(\theta)\sigma^2(\theta)}{2}\chi^2_1.$$

Formally, the following result generalizes Theorem 2.1.1 to include this case.

Theorem 2.2.1 Let $T_n$ be a sequence of statistics such that
$$\sqrt{n}(T_n - \theta) \stackrel{d}{\to} N(0, \sigma^2(\theta)).$$
Let $g$ be a real-valued function that is $k\ (\geq 1)$ times differentiable at $\theta$ with $g^{(k)}(\theta) \neq 0$ but $g^{(j)}(\theta) = 0$ for $j < k$. Then
$$(\sqrt{n})^k[g(T_n) - g(\theta)] \stackrel{d}{\to} \frac{1}{k!}g^{(k)}(\theta)\,[N(0, \sigma^2(\theta))]^k.$$

Proof. The argument is similar to that for Theorem 2.1.1, this time using the higher-order Taylor expansion as in (2.2). The remaining details are left as an exercise. $\square$

Example 2.2.1 (i) Example 2.1.1 revisited. For $\mu = 0$, $n\bar{X}_n^2/\sigma^2 \stackrel{d}{\to} \frac{1}{2}\cdot 2\cdot[N(0,1)]^2 = \chi^2_1$; (ii) Suppose that $\sqrt{n}\bar{X}_n$ converges in law to a standard normal distribution. Now consider the limiting behavior of $\cos(\bar{X}_n)$. Because the derivative of $\cos(x)$ is zero at $x = 0$, the proof of Theorem 2.1.1 yields that $\sqrt{n}(\cos(\bar{X}_n) - 1)$ converges to zero in probability (or equivalently in law). Thus, it should be concluded that $\sqrt{n}$ is not the right norming rate for the random sequence $\cos(\bar{X}_n) - 1$. A more informative statement is that $-2n(\cos(\bar{X}_n) - 1)$ converges in law to $\chi^2_1$.
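The claim in (ii) is easy to check by simulation; this Python sketch (with arbitrary $n$ and number of replications) compares empirical quantiles of $-2n(\cos(\bar{X}_n) - 1)$ with $\chi^2_1$ quantiles.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, reps = 100, 50_000
xbar = rng.standard_normal((reps, n)).mean(axis=1)   # sqrt(n)*xbar is approx N(0,1)
w = -2 * n * (np.cos(xbar) - 1)                      # should be approx chi^2_1

for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(w, q), chi2.ppf(q, df=1))
```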

2.3 Multivariate version of delta theorem

Next we state the multivariate delta theorem, which is similar to the univariate case.

Theorem 2.3.1 Suppose $\{T_n\}$ is a sequence of $k$-dimensional random vectors such that $\sqrt{n}(T_n - \theta) \stackrel{d}{\to} N_k(0, \Sigma(\theta))$. Let $g: \mathbb{R}^k \to \mathbb{R}^m$ be once differentiable at $\theta$ with the gradient matrix $\nabla g(\theta)$. Then
$$\sqrt{n}(g(T_n) - g(\theta)) \stackrel{d}{\to} N_m\big(0, \nabla^T g(\theta)\Sigma(\theta)\nabla g(\theta)\big)$$
provided $\nabla^T g(\theta)\Sigma(\theta)\nabla g(\theta)$ is positive definite.

Proof. This theorem can be proved by using the Cramer-Wold device. It suffices to show that for every $c \in \mathbb{R}^m$, we have
$$\sqrt{n}\,c^T(g(T_n) - g(\theta)) \stackrel{d}{\to} N\big(0, c^T\nabla^T g(\theta)\Sigma(\theta)\nabla g(\theta)c\big).$$
The first-order Taylor expansion gives
$$g(T_n) = g(\theta) + \nabla^T g(\theta)(T_n - \theta) + o_p(\|T_n - \theta\|).$$
The remaining steps are similar to the univariate case by an application of Corollary 1.2.1 and are left as an exercise. $\square$

The multivariate delta theorem is useful in finding the limiting distribution of sample
moments. We state next some examples most often used.

Example 2.3.1 (Sample variance revisited) Suppose $X_1, \ldots, X_n$ are iid with mean $\mu$, variance $\sigma^2$ and $E(X_1^4) < \infty$. Then by taking
$$T_n = (\bar{X}_n, \overline{X_n^2})^T, \quad \theta = (EX_1, EX_1^2)^T, \quad \Sigma = \begin{pmatrix} \mathrm{Var}(X_1) & \mathrm{Cov}(X_1, X_1^2) \\ \mathrm{Cov}(X_1^2, X_1) & \mathrm{Var}(X_1^2) \end{pmatrix},$$
and using the multivariate CLT (Theorem 1.3.2), we have
$$\sqrt{n}(T_n - \theta) \stackrel{d}{\to} N_2(0, \Sigma).$$
Taking the function $g(u, v) = v - u^2$, which is obviously differentiable at the point $\theta$ with derivative $\nabla^T g(u, v) = (-2u, 1)$, it follows that
$$\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X}_n)^2 - \mathrm{Var}(X_1)\right) \stackrel{d}{\to} N\big(0, (-2\mu, 1)\Sigma(-2\mu, 1)^T\big).$$
Because the sample variance does not depend on location, we may as well assume $\mu = 0$ (or equivalently work with $X_i - \mu$). Thus, it is readily seen that
$$\sqrt{n}(S_n^2 - \sigma^2) \stackrel{d}{\to} N(0, \mu_4 - \sigma^4),$$
where $\mu_4$ denotes the centered fourth moment of $X_1$. If the parent distribution is normal, then $\mu_4 = 3\sigma^4$ and $\sqrt{n}(S_n^2 - \sigma^2) \stackrel{d}{\to} N(0, 2\sigma^4)$. In view of Slutsky's Theorem, the same result is valid for the unbiased version $\frac{n}{n-1}S_n^2$ of the sample variance. From here, by another use of the univariate delta theorem, one sees that
$$\sqrt{n}(S_n - \sigma) \stackrel{d}{\to} N\left(0, \frac{\mu_4 - \sigma^4}{4\sigma^2}\right).$$


In the previous example the asymptotic distribution of $\sqrt{n}(S_n^2 - \sigma^2)$ was obtained by the delta method. Actually, it can also, and more easily, be derived by a direct application of the CLT and Slutsky's theorem, as we illustrated in Example 1.3.2. Thus, it is not always a good idea to apply the general theorems. However, in many cases the delta method is a good way to package the mechanics of Taylor expansions in a transparent way. The following are more examples.

Example 2.3.2 (The joint limit distribution) (i) Consider the joint limit distribution of the sample variance $S_n^2$ and the t-statistic $\bar{X}_n/S_n$. Again, for the limit distribution it does not make a difference whether we use a factor $n$ or $n-1$ to standardize $S_n^2$. For simplicity we use $n$. Then $(S_n^2, \bar{X}_n/S_n)$ can be written as $g(\bar{X}_n, \overline{X_n^2})$ for the map $g: \mathbb{R}^2 \to \mathbb{R}^2$ given by
$$g(u, v) = \left(v - u^2,\ \frac{u}{(v - u^2)^{1/2}}\right).$$
The joint limit distribution of $\sqrt{n}(\bar{X}_n - \alpha_1, \overline{X_n^2} - \alpha_2)$ is derived in the preceding example, where $\alpha_k$ denotes the $k$th moment of $X_1$. The function $g$ is differentiable at $\theta = (EX_1, EX_1^2)$ provided that $\sigma^2$ is positive, with derivative
$$[g'_{(\alpha_1,\alpha_2)}]^T = \begin{pmatrix} -2\alpha_1 & 1 \\[1mm] \dfrac{\alpha_1^2}{(\alpha_2 - \alpha_1^2)^{3/2}} + \dfrac{1}{(\alpha_2 - \alpha_1^2)^{1/2}} & \dfrac{-\alpha_1}{2(\alpha_2 - \alpha_1^2)^{3/2}} \end{pmatrix}.$$
It follows that the sequence $\sqrt{n}(S_n^2 - \sigma^2, \bar{X}_n/S_n - \alpha_1/\sigma)$ is asymptotically bivariate normally distributed, with mean zero and covariance matrix
$$[g'_{(\alpha_1,\alpha_2)}]^T\begin{pmatrix} \alpha_2 - \alpha_1^2 & \alpha_3 - \alpha_1\alpha_2 \\ \alpha_3 - \alpha_1\alpha_2 & \alpha_4 - \alpha_2^2 \end{pmatrix} g'_{(\alpha_1,\alpha_2)}.$$
It is easy but uninteresting to compute this explicitly. A direct application of this result is to analyze the so-called effect size $\theta = \mu/\sigma$. A natural estimator of $\theta$ is $\bar{X}_n/S_n$.

(ii) A more commonly seen case is to derive the joint limit distribution of $\bar{X}_n$ and $S_n^2$. Then, by using the multivariate delta theorem and some algebra,
$$\sqrt{n}\begin{pmatrix} \bar{X}_n - \mu \\ S_n^2 - \sigma^2 \end{pmatrix} \stackrel{d}{\to} N_2\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma^2 & \mu_3 \\ \mu_3 & \mu_4 - \sigma^4 \end{pmatrix}\right).$$
Thus $\bar{X}_n$ and $S_n^2$ are asymptotically independent if the population skewness is 0 (i.e., $\mu_3 = 0$).

2.4 Variance-stabilizing transformations

A principal use of parametric asymptotic theory is to construct asymptotically correct confidence intervals. More precisely, suppose $\hat{\theta}$ is a reasonable estimate of some parameter $\theta$. Suppose it is consistent and even asymptotically normal; i.e., $\sqrt{n}(\hat{\theta} - \theta) \stackrel{d}{\to} N(0, \sigma^2(\theta))$ for some function $\sigma(\theta) > 0$. Then, a simple calculation shows that the confidence interval $\hat{\theta} \pm \sigma(\hat{\theta})z_{\alpha/2}/\sqrt{n}$ is asymptotically correct; i.e., its limiting coverage is $1 - \alpha$ under every $\theta$. A number of approximations have been made in using this interval. The exact distribution of $\hat{\theta}$ has been replaced by a normal; the correct standard deviation has been replaced by the plug-in estimate $\sigma(\hat{\theta})$; and the true mean of $\hat{\theta}$ has been replaced by $\theta$. The plug-in standard deviation estimate is quite often an underestimate of the true standard deviation. And depending on the situation, $\hat{\theta}$ may have a nontrivial bias as an estimate of $\theta$. Interest

has centered on finding transformations, say $g(\hat{\theta})$, that (i) have an asymptotic variance function free of $\theta$, eliminating the annoying need to use a plug-in estimate, (ii) have skewness $\approx 0$ in some precise sense, and (iii) have bias $\approx 0$ as an estimate of $g(\theta)$, again in some precise sense.

Transformations of the first type are known as variance-stabilizing transformations (VSTs), those of the second type are known as symmetrizing transformations (STs), and those of
the third type are known as bias-corrected transformations (BCTs). In this course, we only
elaborate on the first one, i.e., VSTs since it is of greatest interest in practice and also a
major use of the delta theorem. Unfortunately, the concept does not generalize to multipa-
rameter cases, i.e., it is generally infeasible to find a dispersion-stabilizing transformation.
It is, however, a useful tool in one-parameter problems.
Suppose $T_n$ is a sequence of statistics such that $\sqrt{n}(T_n - \theta) \stackrel{d}{\to} N(0, \sigma^2(\theta))$, $\sigma(\theta) > 0$. By the delta theorem, if $g(\cdot)$ is once differentiable at $\theta$ with $g'(\theta) \neq 0$, then
$$\sqrt{n}(g(T_n) - g(\theta)) \stackrel{d}{\to} N(0, [g'(\theta)]^2\sigma^2(\theta)).$$
Therefore, if we want the variance in the asymptotic distribution of $g(T_n)$ to be constant, we set
$$[g'(\theta)]^2\sigma^2(\theta) = k^2$$

for some constant k. Thus, a way of deriving g(·) from σ(·) is


$$g(\theta) = \int \frac{k}{\sigma(\theta)}\,d\theta$$
if $\sigma(\theta)$ is continuous in $\theta$. Here $k$ can obviously be chosen as any nonzero real number. In the
above, the integral is to be interpreted as a primitive. For such a g(·), g(Tn ) has an asymptotic
distribution with a variance that is free of θ. Such a statistic or transformation of Tn is called
a variance-stabilizing transformation. Note that the transformation is monotone. So, if we
use g(Tn ) to make an inference for g(θ), then we can automatically retransform to make an
inference for θ, which is the parameter of interest.

As long as there is an analytical formula for the asymptotic variance function in the
limiting normal distribution for Tn , and as long as the reciprocal of its square root can be

integrated in closed form, a VST can be written down. Next, we work out some examples of
VSTs and show how they are used to construct asymptotically correct confidence intervals
for an original parameter of interest.

Example 2.4.1 Suppose $X_1, X_2, \ldots$ are iid Poisson($\theta$). Then $\sqrt{n}(\bar{X}_n - \theta) \stackrel{d}{\to} N(0, \theta)$. Thus $\sigma(\theta) = \sqrt{\theta}$ and so a variance-stabilizing transformation is
$$g(\theta) = \int \frac{k}{\sqrt{\theta}}\,d\theta = 2k\sqrt{\theta}.$$
Taking $k = 1/2$ gives that $g(\theta) = \sqrt{\theta}$ is a variance-stabilizing transformation for the Poisson case. Indeed, $\sqrt{n}(\sqrt{\bar{X}_n} - \sqrt{\theta}) \stackrel{d}{\to} N(0, 1/4)$. Thus, an asymptotically correct confidence interval for $\sqrt{\theta}$ is $\sqrt{\bar{X}_n} \pm \frac{z_\alpha}{2\sqrt{n}}$. This implies that an asymptotically correct confidence interval for $\theta$ is
$$\left\{\left(\sqrt{\bar{X}_n} - \frac{z_\alpha}{2\sqrt{n}}\right)^2,\ \left(\sqrt{\bar{X}_n} + \frac{z_\alpha}{2\sqrt{n}}\right)^2\right\}.$$
Of course, if $\sqrt{\bar{X}_n} - \frac{z_\alpha}{2\sqrt{n}} < 0$, that expression should be replaced by 0. This confidence interval is different from the more traditional interval, namely $\bar{X}_n \pm \frac{z_\alpha}{\sqrt{n}}\sqrt{\bar{X}_n}$, which goes by the name of the Wald interval. In fact, the actual coverage properties of the interval based on the VST are significantly better than those of the Wald interval.
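The coverage comparison is easy to check with a short simulation; the following Python sketch (arbitrary $n$, $\theta$ and $\alpha$; here $z_{\alpha/2}$ is used as the two-sided normal critical value) estimates the coverage of the VST-based interval and the Wald interval for Poisson data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, theta, alpha, reps = 30, 1.0, 0.05, 50_000
z = norm.ppf(1 - alpha / 2)

xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)

# VST interval: square the interval for sqrt(theta), truncating the lower end at 0
lo = np.maximum(np.sqrt(xbar) - z / (2 * np.sqrt(n)), 0) ** 2
hi = (np.sqrt(xbar) + z / (2 * np.sqrt(n))) ** 2
vst_cover = np.mean((lo <= theta) & (theta <= hi))

# Wald interval: xbar +- z * sqrt(xbar / n)
half = z * np.sqrt(xbar / n)
wald_cover = np.mean((xbar - half <= theta) & (theta <= xbar + half))

print(vst_cover, wald_cover)   # VST coverage is typically closer to 1 - alpha
```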

Example 2.4.2 (Sample correlation revisited) Consider the same assumptions as in Example 1.2.11. First, by using the multivariate delta theorem, we can derive the limiting distribution of the sample correlation coefficient $r_n$. By taking
$$T_n = \left(\bar{X}_n, \bar{Y}_n, \frac{1}{n}\sum_{i=1}^n X_i^2, \frac{1}{n}\sum_{i=1}^n Y_i^2, \frac{1}{n}\sum_{i=1}^n X_iY_i\right)^T,$$
$$\theta = (EX_1, EY_1, EX_1^2, EY_1^2, EX_1Y_1)^T, \qquad \Sigma = \mathrm{Cov}\big((X_1, Y_1, X_1^2, Y_1^2, X_1Y_1)\big),$$
and using the transformation $g(u_1, u_2, u_3, u_4, u_5) = (u_5 - u_1u_2)/\sqrt{(u_3 - u_1^2)(u_4 - u_2^2)}$, it follows that
$$\sqrt{n}(r_n - \rho) \stackrel{d}{\to} N(0, v^2)$$
for some $v > 0$, provided that the fourth moments of $(X, Y)$ exist. It is not possible to write a clean formula for $v^2$ in general. If the $(X_i, Y_i)$ are iid $N_2(\mu_X, \mu_Y, \sigma_X^2, \sigma_Y^2, \rho)$, then the calculation can be done in closed form and
$$\sqrt{n}(r_n - \rho) \stackrel{d}{\to} N(0, (1 - \rho^2)^2).$$
However, it does not work well to base an asymptotic confidence interval directly on this result. The transformation
$$g(\rho) = \int \frac{1}{1 - \rho^2}\,d\rho = \frac{1}{2}\log\frac{1 + \rho}{1 - \rho} = \mathrm{arctanh}(\rho)$$
is a VST for $r_n$. This is the famous arctanh transformation of Fisher, popularly known as Fisher's $z$. Thus, the sequence $\sqrt{n}(\mathrm{arctanh}(r_n) - \mathrm{arctanh}(\rho))$ converges in law to the $N(0,1)$ distribution. Confidence intervals for $\rho$ are computed from the arctanh transformation as
$$\left(\tanh(\mathrm{arctanh}(r_n) - z_\alpha/\sqrt{n}),\ \tanh(\mathrm{arctanh}(r_n) + z_\alpha/\sqrt{n})\right),$$
rather than by using the asymptotic distribution of $r_n$ itself. The arctanh transformation of $r_n$ attains normality much more quickly than $r_n$ itself. (Interested students may run a small simulation to verify this.)
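The suggested simulation can be sketched as follows in Python (bivariate normal data with arbitrary $n$ and $\rho$); the tail quantiles of the standardized $\mathrm{arctanh}(r_n)$ should match the normal quantiles noticeably better than those of the standardized $r_n$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, rho, reps = 30, 0.6, 50_000
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=(reps, n))
x, y = xy[..., 0], xy[..., 1]

# vectorized sample correlations, one per replication
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

z1 = np.sqrt(n) * (r - rho) / (1 - rho**2)            # standardized r_n
z2 = np.sqrt(n) * (np.arctanh(r) - np.arctanh(rho))   # Fisher's z

for q in (0.05, 0.95):
    print(q, np.quantile(z1, q), np.quantile(z2, q), norm.ppf(q))
```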

2.5 Approximation of moments

The delta theorem is proved by an ordinary Taylor expansion of Tn around θ. The same
method also produces approximations, with error bounds, on the moments of g(Tn ). The
order of the error can be made smaller the more moments Tn has. To keep notation simple,
we give approximations to the mean and variance of a function g(Tn ) below when Tn is a
sample mean.

Before proceeding, we need to address the so-called moment convergence problem. Some-
times we need to establish that moments of some sequence {Xn }, or at least some lower-order
d
moments, converge to moments of X when Xn → X. Convergence in distribution by itself
simply cannot ensure convergence of any moments. An extra condition that ensures con-
vergence of appropriate moments is uniform integrability. However, direct verification of

its definition is usually cumbersome. Thus, here we choose to introduce some sufficient
conditions which could ensure convergence of moments.

Theorem 2.5.1 Suppose $X_n \stackrel{d}{\to} X$ for some $X$. If $\sup_n E|X_n|^{k+\delta} < \infty$ for some $\delta > 0$, then $E(X_n^r) \to E(X^r)$ for every $1 \leq r \leq k$.

Another common question is the convergence of moments in the canonical CLT for iid random variables, which is stated in the following theorem.

Theorem 2.5.2 (von Bahr) Suppose $X_1, \ldots, X_n$ are i.i.d. with mean $\mu$ and finite variance $\sigma^2$, and suppose that, for some specific $k$, $E|X_1|^k < \infty$. Suppose $Z \sim N(0,1)$. Then,
$$E\left(\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma}\right)^r = E(Z^r) + O\left(\frac{1}{\sqrt{n}}\right),$$
for every $r \leq k$.

By arguments similar to those in the proof of the Delta theorem, a direct application of this theorem gives the following approximations to the mean and variance of a function $g(\bar{X}_n)$.

Proposition 2.5.1 Suppose $X_1, X_2, \ldots$ are iid observations with a finite fourth moment. Let $E(X_1) = \mu$ and $\mathrm{Var}(X_1) = \sigma^2$. Let $g$ be a scalar function with four uniformly bounded derivatives. Then

(i) $E(g(\bar{X}_n)) = g(\mu) + \dfrac{g''(\mu)\sigma^2}{2n} + O(n^{-2})$;

(ii) $\mathrm{Var}(g(\bar{X}_n)) = \dfrac{(g'(\mu))^2\sigma^2}{n} + O(n^{-2})$.

The variance approximation above is simply what the delta theorem says. With more deriva-
tives of g that are uniformly bounded, higher-order approximations can be given.

Example 2.5.1 Suppose $X_1, X_2, \ldots$ are iid Poi($\mu$) and we wish to estimate $P(X_1 = 0) = e^{-\mu}$. The MLE is $e^{-\bar{X}_n}$, and suppose we want to find an approximation to the bias and variance of $e^{-\bar{X}_n}$. We apply Proposition 2.5.1 with the function $g(x) = e^{-x}$, so that $g'(x) = -g''(x) = -e^{-x}$. Plugging into the proposition, we get the approximations
$$\mathrm{Bias}(e^{-\bar{X}_n}) = \frac{\mu e^{-\mu}}{2n} + O(n^{-2}), \qquad \mathrm{Var}(e^{-\bar{X}_n}) = \frac{\mu e^{-2\mu}}{n} + O(n^{-2}).$$
Note that it is in fact possible to derive exact expressions for the mean and variance of $e^{-\bar{X}_n}$ in this case, as $\sum_{i=1}^n X_i$ has a Poi($n\mu$) distribution and therefore the mgf (moment generating function) of $\bar{X}_n$ equals $\psi_n(t) = E(e^{t\bar{X}_n}) = (e^{\mu(e^{t/n} - 1)})^n$. In particular, the mean of $e^{-\bar{X}_n}$ is $(e^{\mu(e^{-1/n} - 1)})^n$. It is possible to recover the approximation for the bias given above from this exact expression. Indeed,
$$(e^{\mu(e^{-1/n} - 1)})^n = e^{n\mu(e^{-1/n} - 1)} = e^{n\mu\sum_{k=1}^\infty \frac{(-1)^k}{k! n^k}} = e^{-\mu}\left(1 + \frac{\mu}{2n} + O(n^{-2})\right)$$
on collecting the terms of the exponentials together. On subtracting $e^{-\mu}$, this reproduces the bias approximation given above. The delta theorem produces it more easily than the direct calculation.
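The comparison can be carried out numerically; in the Python sketch below (arbitrary $\mu$ and $n$), the exact bias and variance are computed from the mgf formula above (the variance uses the same mgf evaluated at $t = -2$, an easy extension) and compared with the delta-method approximations of Proposition 2.5.1.

```python
import numpy as np

mu, n = 2.0, 25

# Exact moments of exp(-Xbar_n) from E exp(t*Xbar_n) = exp(n*mu*(exp(t/n) - 1))
m1 = np.exp(n * mu * (np.exp(-1 / n) - 1))      # t = -1
m2 = np.exp(n * mu * (np.exp(-2 / n) - 1))      # t = -2
exact_bias = m1 - np.exp(-mu)
exact_var = m2 - m1**2

# Delta-method approximations from Proposition 2.5.1
approx_bias = mu * np.exp(-mu) / (2 * n)
approx_var = mu * np.exp(-2 * mu) / n

print(exact_bias, approx_bias)
print(exact_var, approx_var)
```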

2.6 Multivariate-version Edgeworth expansion

In this section, we present a more general result regarding Edgeworth expansions, which can be applied to many useful cases.

Theorem 2.6.1 (Edgeworth expansions) Let $m$ be a positive integer and $X_1, X_2, \ldots$ be i.i.d. random $k$-vectors having finite moments of order $m + 2$. Consider $W_n = \sqrt{n}\,h(\bar{X}_n)/\sigma_h$, where $\bar{X}_n = n^{-1}\sum_i X_i$, $h$ is a function that is $m + 2$ times continuously differentiable in a neighborhood of $\mu = EX_1$, $h(\mu) = 0$, and $\sigma_h^2 = [\nabla h(\mu)]^T\mathrm{Var}(X_1)\nabla h(\mu) > 0$. Assume the C.D.F. of $X_1$ is absolutely continuous. Then $F_{W_n}$ admits the Edgeworth expansion
$$\sup_x\left|F_{W_n}(x) - \Phi(x) - \sum_{j=1}^m \frac{p_j(x)\phi(x)}{n^{j/2}}\right| = o\left(\frac{1}{n^{m/2}}\right),$$
where $p_j(x)$ is a polynomial of degree at most $3j - 1$, with coefficients depending on the first $m + 2$ moments of $X_1$. In particular,
$$p_1(x) = -c_1\sigma_h^{-1} - \frac{1}{6}c_2\sigma_h^{-3}(x^2 - 1),$$
with $c_1 = \frac{1}{2}\sum_{i=1}^k\sum_{j=1}^k a_{ij}\mu_{ij}$ and
$$c_2 = \sum_{i=1}^k\sum_{j=1}^k\sum_{l=1}^k a_i a_j a_l\mu_{ijl} + 3\sum_{i=1}^k\sum_{j=1}^k\sum_{l=1}^k\sum_{q=1}^k a_i a_j a_{lq}\mu_{il}\mu_{jq},$$
where $a_i$ is the $i$th component of $\nabla h(\mu)$, $a_{ij}$ is the $(i,j)$th element of the Hessian matrix $\nabla^2 h(\mu)$, $\mu_{ij} = E(Y_iY_j)$, $\mu_{ijl} = E(Y_iY_jY_l)$, and $Y_i$ is the $i$th component of $X_1 - \mu$.

Example 2.6.1 The t-test and the t confidence interval are among the most used tools of statistical methodology. As such, an Edgeworth expansion for the C.D.F. of the t-statistic for general populations is interesting and useful, and we can derive it according to Theorem 2.6.1. Consider the studentized random variable $W_n = \sqrt{n}(\bar{X}_n - \mu)/\hat{\sigma}$, where $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X}_n)^2$. Assuming that $EX_1^{2m+4} < \infty$ and applying Theorem 2.6.1 with the random vectors $(X_i, X_i^2)$, $i = 1, 2, \ldots$, and $h(x, y) = (x - \mu)/\sqrt{y - x^2}$, we obtain the Edgeworth expansion with $\sigma_h = 1$ and
$$p_1(x) = \frac{1}{6}\kappa_3(2x^2 + 1).$$
Furthermore, it can be found in Hall (1992, p. 73) that
$$p_2(x) = \frac{1}{12}\kappa_4 x(x^2 - 3) - \frac{1}{18}\kappa_3^2 x(x^4 + 2x^2 - 3) - \frac{1}{4}x(x^2 + 3).$$

Chapter 3

The basic sample statistics

3.1 The sample distribution function

Let $X_1, X_2, \ldots$ be iid with distribution function $F$. For each sample of size $n$, a corresponding sample distribution function $F_n$ is constructed by placing at each observation $X_i$ a mass $1/n$. Thus $F_n$ can be represented as
$$F_n(x) = \frac{1}{n}\sum_{i=1}^n I_{\{X_i \leq x\}},$$
which is called the empirical cumulative distribution function (ECDF). When $X_i \in \mathbb{R}^p$, the inequality above is understood componentwise. For simplicity, here we only consider the case $p = 1$. $F_n$ can and does play a fundamental role in statistical inference. In this subsection, we discuss several aspects of the properties and applications of $F_n$.

3.1.1 Basic properties

The simplest aspect of $F_n$ is that, for each fixed $x$, $F_n(x)$ serves as an estimator of $F(x)$.

Proposition 3.1.1 For fixed $x \in (-\infty, \infty)$,

(i) $F_n(x)$ is unbiased and has variance
$$\mathrm{Var}[F_n(x)] = \frac{F(x)[1 - F(x)]}{n};$$

(ii) $F_n(x)$ is consistent in mean square, i.e., $F_n(x) \stackrel{\text{2nd}}{\to} F(x)$;

(iii) $F_n(x) \stackrel{wp1}{\to} F(x)$;

(iv) $F_n(x)$ is $AN\left(F(x), \frac{F(x)[1 - F(x)]}{n}\right)$.

Proof. Note that the exact distribution of $nF_n(x)$ is BIN($F(x), n$). Thus (i)-(ii) follow immediately; the third part is a direct application of the SLLN; (iv) is a consequence of the Lindeberg-Levy CLT and (i).

3.1.2 Kolmogorov-Smirnov distance

The ECDF is quite useful for estimation of the population distribution function $F$. Besides pointwise estimation of $F(x)$, it is also of interest to characterize globally the estimation of $F$ by $F_n$. To this end, a popular and useful measure of closeness of $F_n$ to $F$ is the Kolmogorov-Smirnov distance
$$D_n = \sup_{-\infty<x<\infty}|F_n(x) - F(x)|.$$
This measure is also known as the sup-norm distance between $F_n$ and $F$, denoted $\|F_n - F\|_\infty$. Metrics such as $D_n$ have many applications: (1) goodness-of-fit tests; (2) confidence bands; (3) theoretical investigation of many other statistics of interest, which can be advantageously carried out by representing them exactly or approximately as functions of the ECDF. In this respect, the following results concerning the sup-norm distance are of interest in their own right but also provide a useful starting tool for the asymptotic analysis of other statistics, such as quantiles, order statistics and ranks.

The next results give useful explicit bounds on probabilities of large values for the devi-
ation of Fn from F .

Theorem 3.1.1 (DKW's inequality) Let $F_n$ be the ECDF based on iid $X_1, \ldots, X_n$ from a CDF $F$ defined on $\mathbb{R}$. There exists a positive constant $C$ (not depending on $F$) such that
$$P(D_n > z) \leq Ce^{-2nz^2}, \quad z > 0, \text{ for all } n = 1, 2, \ldots.$$
Note that this inequality may be expressed in the form
$$P(\sqrt{n}D_n > z) \leq Ce^{-2z^2},$$
which clearly demonstrates that $\sqrt{n}D_n = O_p(1)$. The original DKW inequality did not specify the constant $C$; however, Massart (1990) found that one may take $C = 2$, which cannot be improved. This is stated next.

Theorem 3.1.2 (Massart) For every $z$ with $z^2 \geq \frac{1}{2}\log 2$,
$$P(\sqrt{n}D_n > z) \leq 2e^{-2z^2} \quad \text{for all } n = 1, 2, \ldots.$$

The following results, useful in statistics, are direct consequences of Theorem 3.1.1.

Corollary 3.1.1 Let $F$ and $C$ be as in Theorem 3.1.1. Then for every $\epsilon > 0$,
$$P\left(\sup_{m\geq n} D_m > \epsilon\right) \leq \frac{C}{1 - h_\epsilon}h_\epsilon^n,$$
where $h_\epsilon = \exp(-2\epsilon^2)$.

Proof.
$$P\left(\sup_{m\geq n} D_m > \epsilon\right) \leq \sum_{m=n}^\infty P(D_m > \epsilon) \leq C\sum_{m=n}^\infty h_\epsilon^m = \frac{C}{1 - h_\epsilon}h_\epsilon^n. \qquad \square$$

Theorem 3.1.3 (Glivenko-Cantelli) $D_n \stackrel{wp1}{\to} 0$.

Proof. Note that $\sum_{n=1}^\infty P(D_n > z) < \infty$ for every $z > 0$ by DKW's inequality. Hence, the result follows from Theorem 1.2.1-(iv). $\square$

From the Glivenko-Cantelli theorem, we know that $D_n = o_p(1)$. However, the statistic $\sqrt{n}D_n$ may have a nondegenerate limit distribution, as suggested by DKW's inequality, and this is true, as revealed by the following result.

Theorem 3.1.4 (Kolmogorov) Let $F$ be continuous. Then
$$\lim_{n\to\infty} P(\sqrt{n}D_n \leq z) = 1 - 2\sum_{j=1}^\infty (-1)^{j+1}e^{-2j^2z^2}, \qquad z > 0.$$
A convenient feature of this asymptotic distribution is that it does not depend upon $F$. In fact, for every $n$, if the true CDF $F$ is continuous, then $D_n$ has the remarkable property that its exact distribution is completely independent of $F$, as stated below.

Proposition 3.1.2 Let $F$ be continuous. Then $\sqrt{n}D_n$ is distribution-free in the sense that its exact distribution does not depend on $F$ for every fixed $n$.

Proof. The quickest way to see this property is to notice the identity
$$\sqrt{n}D_n \stackrel{d}{=} \sqrt{n}\max_{1\leq i\leq n}\max\left(\frac{i}{n} - U_{(i)},\ U_{(i)} - \frac{i-1}{n}\right),$$
where $U_{(1)} \leq \cdots \leq U_{(n)}$ are the order statistics of an iid sample from $U[0,1]$ and the relation $\stackrel{d}{=}$ denotes "equality in law". $\square$

Example 3.1.1 (Kolmogorov-Smirnov confidence intervals) A method for constructing asymptotically valid intervals for a mean is due to T. W. Anderson. The construction depends on the classical $D_n$ distance due to Kolmogorov and Smirnov, summarized below. By Proposition 3.1.2, given $\alpha \in (0,1)$, there is a well-defined $d = d_{\alpha,n}$ such that, for any continuous CDF $F$, $P_F(\sqrt{n}D_n > d) = \alpha$. Thus,
$$1 - \alpha = P_F(\sqrt{n}D_n \leq d) = P_F(\sqrt{n}\|F_n - F\|_\infty \leq d) = P_F\left(|F_n(x) - F(x)| \leq \frac{d}{\sqrt{n}},\ \forall x\right) = P_F\left(F_n(x) - \frac{d}{\sqrt{n}} \leq F(x) \leq F_n(x) + \frac{d}{\sqrt{n}},\ \forall x\right).$$
This gives us a "confidence band" for the true CDF $F$. More precisely, the $1 - \alpha$ Kolmogorov-Smirnov confidence band for the CDF $F$ is
$$KS_{n,\alpha}: \quad \max\left(0, F_n(x) - \frac{d}{\sqrt{n}}\right) \leq F(x) \leq \min\left(1, F_n(x) + \frac{d}{\sqrt{n}}\right).$$
The computation of $d = d_{\alpha,n}$ is quite nontrivial, but tables are available; this will be discussed later.
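In practice the band is easy to compute; here is a minimal Python sketch in which $d_{\alpha,n}$ is approximated by the quantile of the limiting Kolmogorov distribution (available in scipy as `kstwobign`), so the band is only asymptotically exact; the helper name `ks_band` is ours.

```python
import numpy as np
from scipy.stats import kstwobign

def ks_band(x, alpha=0.05):
    """Asymptotic 1-alpha Kolmogorov-Smirnov band for F, evaluated at the
    sorted sample points; d_{alpha,n} is approximated by the limiting
    Kolmogorov quantile."""
    x = np.sort(x)
    n = len(x)
    d = kstwobign.ppf(1 - alpha) / np.sqrt(n)
    ecdf = np.arange(1, n + 1) / n
    return x, np.clip(ecdf - d, 0, 1), np.clip(ecdf + d, 0, 1)

rng = np.random.default_rng(5)
grid, lower, upper = ks_band(rng.standard_normal(200))
```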

3.1.3 Applications: Kolmogorov-Smirnov and other ECDF-based
GOF tests

We know that, for large $n$, $F_n$ is "close" to the true $F$. So if $H_0: F = F_0$ holds, then we should be able to test $H_0$ by studying the deviation between $F_n$ and $F_0$. Any choice of a discrepancy measure between $F_n$ and $F_0$ would result in a test. The utility of the test would depend on whether one can work out the distribution theory of the test statistic. The three most well-known discrepancy measures that have been proposed are the following:
$$D_n = \max(D_n^+, D_n^-) \equiv \max\left(\sup_{-\infty<x<\infty}(F_n(x) - F_0(x)),\ \sup_{-\infty<x<\infty}(F_0(x) - F_n(x))\right),$$
$$C_n = n\int (F_n(t) - F_0(t))^2\,dF_0(t),$$
$$A_n = n\int \frac{(F_n(t) - F_0(t))^2}{F_0(t)(1 - F_0(t))}\,dF_0(t),$$
which are respectively known as the Kolmogorov-Smirnov, the Cramer-von Mises, and the Anderson-Darling test statistics.
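For reference, routine implementations of the first two statistics (with p-values) are available in scipy, as in the following sketch (assuming a reasonably recent scipy; `cramervonmises` requires scipy 1.6 or later), here testing standard normal data against $H_0: F = N(0,1)$.

```python
import numpy as np
from scipy.stats import kstest, cramervonmises

rng = np.random.default_rng(6)
x = rng.standard_normal(200)

print(kstest(x, "norm"))          # Kolmogorov-Smirnov test of H0: F = N(0,1)
print(cramervonmises(x, "norm"))  # Cramer-von Mises test of the same H0
```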

Similar to Proposition 3.1.2, we have the following simple expressions for $C_n$ and $A_n$.

Proposition 3.1.3 Let $F_0$ be continuous, and let $U_{(i)} = F_0(X_{(i)})$ denote the ordered values of $F_0(X_i)$. Then
$$C_n = \frac{1}{12n} + \sum_{i=1}^n\left(U_{(i)} - \frac{i - \frac{1}{2}}{n}\right)^2,$$
$$A_n = -n - \frac{2}{n}\sum_{i=1}^n\left(i - \frac{1}{2}\right)\left[\log U_{(i)} + \log(1 - U_{(n-i+1)})\right].$$
It is clear from these computational formulas that, for every fixed $n$, the sampling distributions of $C_n$ and $A_n$ under $H_0$ do not depend on $F_0$, provided $F_0$ is continuous. For small $n$, the true sampling distributions can be worked out exactly by discrete enumeration.

The tests introduced above based on the ECDF $F_n$ all have the pleasant property that they are consistent against any alternative $F \neq F_0$. For example, the Kolmogorov-Smirnov statistic $D_n$ has the property that $P_F(\sqrt{n}D_n > G_n^{-1}(1-\alpha)) \to 1$ for all $F \neq F_0$, where $G_n^{-1}(1-\alpha)$ is the $(1-\alpha)$th quantile of the distribution of $\sqrt{n}D_n$ under $F_0$. To explain heuristically why this should be the case, consider a CDF $F_1 \neq F_0$, so that there exists $\eta$ such that $F_1(\eta) \neq F_0(\eta)$. Let us suppose that $F_1(\eta) > F_0(\eta)$. First note that $G_n^{-1}(1-\alpha) \to \lambda$ for some $\lambda$ by Theorem 3.1.4; in particular, $G_n^{-1}(1-\alpha) = O(1)$. So,
$$P_{F_1}\left(\sqrt{n}D_n > G_n^{-1}(1-\alpha)\right) \geq P_{F_1}\left(\sup_t \sqrt{n}(F_n(t) - F_0(t)) > G_n^{-1}(1-\alpha)\right)$$
$$= P_{F_1}\left(\sup_t\left[\sqrt{n}(F_n(t) - F_1(t)) + \sqrt{n}(F_1(t) - F_0(t))\right] > G_n^{-1}(1-\alpha)\right)$$
$$\geq P_{F_1}\left(\sqrt{n}(F_n(\eta) - F_1(\eta)) + \sqrt{n}(F_1(\eta) - F_0(\eta)) > G_n^{-1}(1-\alpha)\right) \to 1,$$
as $n \to \infty$, since $\sqrt{n}(F_n(\eta) - F_1(\eta)) = O_p(1)$ under $F_1$ and $\sqrt{n}(F_1(\eta) - F_0(\eta)) \to \infty$. The same argument establishes the consistency of the other ECDF-based tests against all alternatives. In contrast, we will later see that chi-square goodness-of-fit tests cannot be consistent against all alternatives.

Example 3.1.2 (The Berk-Jones procedure) Berk and Jones (1979) proposed an intuitively appealing ECDF-based method of testing the simple goodness-of-fit null hypothesis $F = F_0$ for some specified continuous $F_0$ in the one-dimensional iid situation. It has also led to subsequent developments of other tests for the simple goodness-of-fit problem as generalizations of the Berk-Jones idea.

The Berk-Jones method is to transform the simple goodness-of-fit problem into a family of binomial testing problems. More specifically, if the true underlying CDF is $F$, then for any given $x$, as stated above, $nF_n(x) \sim \mathrm{Bin}(n, F(x))$. Suppressing the $x$ and writing $p$ for $F(x)$ and $p_0$ for $F_0(x)$, for the given $x$ we want to test $p = p_0$. We can use a likelihood ratio test corresponding to a two-sided alternative to test this hypothesis. It requires maximization of the binomial likelihood function over all values of $p$, which corresponds to maximization over $F(x)$, with $x$ being fixed, while $F$ is an arbitrary CDF. The likelihood is maximized at $F(x) = F_n(x)$, resulting in the likelihood ratio statistic
$$\lambda_n(x) = \frac{F_n(x)^{nF_n(x)}(1 - F_n(x))^{n - nF_n(x)}}{F_0(x)^{nF_n(x)}(1 - F_0(x))^{n - nF_n(x)}}.$$
But, of course, the original problem is to test that $F(x) = F_0(x)$ for all $x$. So, it would make sense to take a supremum of the log-likelihood ratio statistics over $x$. The Berk-Jones statistic is
$$R_n = n^{-1}\sup_x \log\lambda_n(x).$$
In the recent literature, some authors have found that an analog of the traditional Anderson-Darling rank test based on $\log\lambda_n(x)$,
$$\int \frac{\log\lambda_n(x)}{F_n(x)(1 - F_n(x))}\,dF_n(x),$$
is much more powerful than the Anderson-Darling test and the foregoing Berk-Jones statistic.

Example 3.1.3 (The two-sample case) Suppose $X_i$, $i = 1, \ldots, n$, are iid samples from some continuous CDF $F_1$ and $Y_i$, $i = 1, \ldots, m$, are iid samples from some continuous CDF $F_2$, and all random variables are mutually independent. Let $F_{n1}$ and $F_{m2}$ denote the empirical CDFs of the $X_i$'s and the $Y_i$'s, respectively. Analogous to the one-sample case, one can define two-sided Kolmogorov-Smirnov test statistics and other ECDF-based GOF tests, such as
$$D_{m,n} = \sup_{-\infty<x<\infty}|F_{n1}(x) - F_{m2}(x)|,$$
$$A_{m,n} = \frac{nm}{n+m}\int \frac{(F_{n1}(x) - F_{m2}(x))^2}{F_{n,m}(x)(1 - F_{n,m}(x))}\,dF_{n,m}(x),$$
where $F_{n,m}(x)$ is the ECDF of the pooled sample $X_1, \ldots, X_n, Y_1, \ldots, Y_m$. Similar to Proposition 3.1.3, one can also show that neither the null distribution of $D_{m,n}$ nor that of $A_{m,n}$ depends on $F_1$ or $F_2$.

3.1.4 The Chi-square test

Chi-square tests are well-known competitors to ECDF-based statistics. They discretize the
null distribution in some way and assess the agreement of observed counts to the postulated
counts, so there is obviously some loss of information and hence a loss in power. But they are
versatile. Unlike ECDF-based tests, a chi-square test can be used for continuous as well as
discrete data and in one dimension as well as many dimensions. Thus, a loss of information
is being exchanged for versatility of the principle and ease of computation.

Suppose $X_1, \ldots, X_n$ are iid observations from some distribution $F$ and that we want to test $H_0: F = F_0$, $F_0$ being a completely specified distribution. Let $S$ be the support of $F_0$ and, for some given $k \geq 1$, let $A_{ki}$, $i = 1, \ldots, k$, form a partition of $S$. Let $p_{0i} = P_{F_0}(A_{ki})$ and $n_i = \#\{j : X_j \in A_{ki}\}$, i.e., the observed frequency of the partition set $A_{ki}$. Therefore, under $H_0$, $E(n_i) = np_{0i}$. K. Pearson suggested that, as a measure of discrepancy between the observed sample and the null hypothesis, one compare $(n_1, \ldots, n_k)$ with $(np_{01}, \ldots, np_{0k})$. The Pearson chi-square statistic is defined as
$$K^2 = \sum_{i=1}^k \frac{(n_i - np_{0i})^2}{np_{0i}}.$$
For fixed $n$, $K^2$ is certainly not distributed as a chi-square, for it is just a quadratic form in a multinomial random vector. However, the asymptotic distribution of $K^2$ is $\chi^2_{k-1}$ if $H_0$ holds, which is stated in the following result.

Theorem 3.1.5 (The asymptotic null distribution) Suppose $X_1, X_2, \ldots, X_n$ are iid observations from some distribution $F$. Consider testing $H_0: F = F_0$ (specified). Then $K^2 \stackrel{d}{\to} \chi^2_{k-1}$ under $H_0$.

Proof. Define
$$\mathbf{Y} = (Y_1, \ldots, Y_k)^T = \left(\frac{n_1 - np_{01}}{\sqrt{np_{01}}}, \ldots, \frac{n_k - np_{0k}}{\sqrt{np_{0k}}}\right)^T.$$
By the multivariate CLT, we know $\mathbf{Y} \stackrel{d}{\to} N_k(0, \Sigma)$, where $\Sigma = I_k - \mu\mu^T$ and $\mu = (\sqrt{p_{01}}, \ldots, \sqrt{p_{0k}})^T$. This can be easily seen by writing $\mathbf{n} = (n_1, \ldots, n_k)^T = \sum_{i=1}^n Z_i$, where $Z_i = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ with a single nonzero component 1 located in the $j$th position if the $i$th trial yields the $j$th outcome. Note that the $Z_i$'s are iid with mean $p_0 = (p_{01}, \ldots, p_{0k})^T$ and covariance matrix $\mathrm{diag}(p_0) - p_0p_0^T$. By the multivariate CLT, $\frac{\mathbf{n} - np_0}{\sqrt{n}} \stackrel{d}{\to} N_k(0, \mathrm{diag}(p_0) - p_0p_0^T)$. Thus,
$$\mathbf{Y} = \mathrm{diag}^{-1}(\sqrt{p_{01}}, \ldots, \sqrt{p_{0k}})\frac{\mathbf{n} - np_0}{\sqrt{n}} \stackrel{d}{\to} N_k\Big(0, \mathrm{diag}^{-1}(\sqrt{p_{01}}, \ldots, \sqrt{p_{0k}})\big[\mathrm{diag}(p_0) - p_0p_0^T\big]\mathrm{diag}^{-1}(\sqrt{p_{01}}, \ldots, \sqrt{p_{0k}})\Big) \stackrel{d}{=} N_k(0, \Sigma).$$
Note that $\mathrm{tr}(\Sigma) = k - 1$. Notice now that Pearson's $K^2 = \mathbf{Y}^T\mathbf{Y}$, and if $\mathbf{Y} \sim N_k(0, \Sigma)$ for any general $\Sigma$, then $\mathbf{Y}^T\mathbf{Y} \stackrel{d}{=} \mathbf{X}^TP^TP\mathbf{X} = \mathbf{X}^T\mathbf{X}$, where $\mathbf{X} \sim N_k(0, \mathrm{diag}(\lambda_1, \ldots, \lambda_k))$, the $\lambda_i$ are the eigenvalues of $\Sigma$, and $P^T\Sigma P = \mathrm{diag}(\lambda_1, \ldots, \lambda_k)$ is the spectral decomposition of $\Sigma$. Note that $\mathbf{X}$ has the same distribution as the vector $(\sqrt{\lambda_1}\eta_1, \ldots, \sqrt{\lambda_k}\eta_k)^T$, where the $\eta_j$'s are iid standard normal variates. So, it follows that $\mathbf{X}^T\mathbf{X} \stackrel{d}{=} \sum_{i=1}^k \lambda_i w_i$ with $w_i \stackrel{iid}{\sim} \chi^2_1$. Because the eigenvalues of a symmetric and idempotent matrix ($\Sigma$) are either 0 or 1, for our $\Sigma$, $k - 1$ of the $\lambda_i$'s are 1 and the remaining one is zero. Since a sum of independent chi-squares is again a chi-square, it follows that $K^2 \stackrel{d}{\to} \chi^2_{k-1}$ under $H_0$. $\square$
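In practice the test is one line with scipy; the sketch below (arbitrary sample size and number of cells) bins Exp(1) data into $k$ equiprobable cells under $F_0$ and applies Pearson's statistic via `scipy.stats.chisquare`, whose reference distribution is $\chi^2_{k-1}$.

```python
import numpy as np
from scipy.stats import chisquare, expon

rng = np.random.default_rng(7)
n, k = 500, 5
x = rng.exponential(scale=1.0, size=n)

# k equiprobable cells under F0 = Exp(1): bin the values F0(X_i), which are
# uniform on (0, 1) under H0, into k equal-width cells
u = expon.cdf(x)
counts, _ = np.histogram(u, bins=np.linspace(0, 1, k + 1))
stat, pval = chisquare(counts, f_exp=np.full(k, n / k))   # reference: chi^2_{k-1}
print(stat, pval)
```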

Example 3.1.4 (The Hellinger statistic) We may consider a transformation $g(x)$ that makes the denominator in Pearson's $\chi^2$ a constant. Specifically, consider a differentiable function of the form $g(x) = (g_1(x_1), \ldots, g_k(x_k))^T$, such that the $j$th component of the transformation is a function only of the $j$th component of $x$. As a consequence, the gradient is $\nabla g(x) = \mathrm{diag}\{g_1'(x_1), \ldots, g_k'(x_k)\}$. As in the proof of the Delta Theorem, $\sqrt{n}(g(\bar{Z}_n) - g(p_0))$ is asymptotically equivalent to $\sqrt{n}\nabla g(p_0)(\bar{Z}_n - p_0)$, so that in Pearson's $\chi^2$ we may replace $\sqrt{n}(\bar{Z}_n - p_0)$ by $\sqrt{n}\nabla^{-1}g(p_0)(g(\bar{Z}_n) - g(p_0))$ and obtain the transformed $\chi^2$
$$\chi^2_g = n\left(g(\bar{Z}_n) - g(p_0)\right)^T\nabla^{-1}g(p_0)\,\mathrm{diag}^{-1}(p_0)\,\nabla^{-1}g(p_0)\left(g(\bar{Z}_n) - g(p_0)\right) = n\sum_{i=1}^k \frac{(g_i(n_i/n) - g_i(p_{0i}))^2}{p_{0i}[g_i'(p_{0i})]^2} \stackrel{d}{\to} \chi^2_{k-1}.$$
Naturally, we are led to investigate the transformed $\chi^2$ with $g(x) = (\sqrt{x_1}, \ldots, \sqrt{x_k})^T$. The transformed $\chi^2$, with $g_i'(p_{0i}) = \frac{1}{2\sqrt{p_{0i}}}$, becomes
$$\chi^2_H = 4n\sum_{i=1}^k\left(\sqrt{n_i/n} - \sqrt{p_{0i}}\right)^2.$$
This is known as the Hellinger $\chi^2$ because of its relation to the Hellinger distance. The Hellinger distance between two densities, $f(x)$ and $g(x)$, is $d(f, g)$, where
$$d^2(f, g) = \int\left(\sqrt{f(x)} - \sqrt{g(x)}\right)^2 dx.$$

Let F1 be a distribution different from F0 and let p1i = PF1 (Aki ). Clearly, if by chance
p1i = p0i ∀i = 1, . . . k (which is certainly possible), then a test based on the empirical

frequencies of Aki cannot distinguish F0 from F1 , even asymptotically. In such a case, the
χ2 test cannot be consistent against F1 . However, otherwise it will be consistent, as can be
seen easily from the following result.

Proposition 3.1.4 Under $F_1$,

(i) $\dfrac{K^2}{n} \stackrel{p}{\to} \displaystyle\sum_{i=1}^k \frac{(p_{1i} - p_{0i})^2}{p_{0i}}$;

(ii) If $\displaystyle\sum_{i=1}^k \frac{(p_{1i} - p_{0i})^2}{p_{0i}} > 0$, then $K^2 \stackrel{p}{\to} \infty$ and hence the Pearson $\chi^2$ test is consistent against $F_1$.

This is evident since $K^2 = \sum_{i=1}^k \frac{(n_i - np_{0i})^2}{np_{0i}} = n\sum_{i=1}^k \frac{(n_i/n - p_{0i})^2}{p_{0i}}$, and $\mathbf{n}/n \stackrel{p}{\to} (p_{11}, \ldots, p_{1k}) \equiv p_1$ under $F_1$. Therefore, by the continuous mapping theorem, $K^2/n \stackrel{p}{\to} \sum_{i=1}^k \frac{(p_{1i} - p_{0i})^2}{p_{0i}}$. Thus, for a fixed alternative $F_1$ such that the vector $p_1 \neq p_0$, Pearson's $\chi^2$ cannot have a nondegenerate limit distribution under $F_1$. However, if the alternative is very close to the null, in the sense of being a Pitman alternative, there is a nondegenerate limit distribution.

To obtain an approximation to the power, we consider the behavior of $K^2$ under a sequence of local alternatives to the null hypothesis. In particular, take
$$p_{1i} = p_{0i} + \delta_i n^{-1/2}, \qquad 1 \leq i \leq k.$$
Note that because both $p_1$ and $p_0$ are probability vectors, $\mathbf{1}^T\delta = \sum_i \delta_i = 0$. Then we have the following result, which allows us to approximate the power of the $\chi^2$ test at a close alternative by using the noncentral $\chi^2$ CDF as an approximation to the exact CDF of $K^2$ under the alternative.

Theorem 3.1.6 (The asymptotic alternative distribution) Under $H_1$, say $p = p_1 = p_0 + \delta n^{-1/2}$. Then $K^2 \stackrel{d}{\to} \chi^2_{k-1}(\lambda)$, where $\lambda = \sum_{i=1}^k \delta_i^2/p_{0i}$ is the noncentrality parameter.

Proof. Recall the definition of $\mathbf{Y}$ in the proof of Theorem 3.1.5. It can be easily seen that $\mathbf{Y} \stackrel{d}{\to} N_k(\mathrm{diag}^{-1}(\sqrt{p_{01}}, \ldots, \sqrt{p_{0k}})\delta, \Sigma)$ by using Slutsky's Theorem and the CLT. Since $\Sigma$ is symmetric and idempotent,
$$K^2 = \mathbf{Y}^T\mathbf{Y} \stackrel{d}{\to} \chi^2_{k-1}\left(\sum_{i=1}^k \delta_i^2/p_{0i}\right)$$
by using the Cochran Theorem (or by similar arguments as in the proof of Theorem 3.1.5). $\square$

A direct application of this theorem is to calculate the approximate power of the $K^2$ test. Suppose that the critical region is $\{K^2 > c\}$, where the choice of $c$ for a level $\alpha$ test would be based on the null-hypothesis asymptotic $\chi^2_{k-1}$ distribution of $K^2$. Then the approximate power of $K^2$ at the alternative $H_1$ is given by calculating the probability that a random variable having the distribution $\chi^2_{k-1}\left(n\sum_{i=1}^k \frac{1}{p_{0i}}(n_i/n - p_{0i})^2\right)$ exceeds the value $c$, in which the expected value $p_{1i}$ in the noncentrality parameter is replaced by the observed frequencies $n_i/n$.
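The approximation is straightforward to evaluate with the noncentral chi-square distribution in scipy; the following sketch (arbitrary $k$, $n$, $\alpha$ and alternative $p_1$) computes the approximate power of the level-$\alpha$ Pearson test.

```python
import numpy as np
from scipy.stats import chi2, ncx2

k, n, alpha = 4, 200, 0.05
p0 = np.full(k, 1 / k)
p1 = np.array([0.30, 0.27, 0.23, 0.20])        # a nearby alternative

c = chi2.ppf(1 - alpha, df=k - 1)              # level-alpha cutoff under H0
nc = n * np.sum((p1 - p0) ** 2 / p0)           # noncentrality parameter
print(ncx2.sf(c, df=k - 1, nc=nc))             # approximate power at p1
```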

3.2 The sample moments

Let $X_1, X_2, \ldots$ be iid with distribution function $F$. For $k \in \mathbb{N}^+$, the $k$th moment and central moment of $F$ are defined as
$$\alpha_k = \int_{-\infty}^{\infty} x^k\,dF(x) = EX_1^k, \qquad \mu_k = \int_{-\infty}^{\infty} (x - \alpha_1)^k\,dF(x) = E[(X_1 - \alpha_1)^k],$$
respectively. $\alpha_1$ and $\mu_2$ are of course the mean and variance of $F$, respectively. Also, $\mu_1 = 0$. The $\alpha_k$ and $\mu_k$ represent important characteristics for describing $F$. Natural estimators of these parameters are given by the corresponding moments of the sample distribution function $F_n(x) = \frac{1}{n}\sum_{i=1}^n I_{\{X_i \leq x\}}$, namely
$$a_k = \int_{-\infty}^{\infty} x^k\,dF_n(x) = \frac{1}{n}\sum_{i=1}^n X_i^k, \quad k = 1, 2, \ldots,$$
$$m_k = \int_{-\infty}^{\infty} (x - a_1)^k\,dF_n(x) = \frac{1}{n}\sum_{i=1}^n (X_i - a_1)^k, \quad k = 2, 3, \ldots.$$
Since $F_n$ possesses desirable properties as an estimator of $F$, it can be expected that the sample moments $a_k$ and $m_k$ possess desirable features as estimators of $\alpha_k$ and $\mu_k$. The first result concerns the (strong and mean-square) consistency of the $a_k$.

Proposition 3.2.1 (i) $a_k \stackrel{wp1}{\to} \alpha_k$; (ii) $E(a_k) = \alpha_k$; (iii) $\mathrm{Var}(a_k) = \dfrac{\alpha_{2k} - \alpha_k^2}{n}$.

By noting that $a_k$ is a mean of iid random variables having mean $\alpha_k$ and variance $\alpha_{2k} - \alpha_k^2$, the result follows immediately from the SLLN. Furthermore, because the vector $(a_1, \ldots, a_k)^T$ is the mean of the iid vectors $(X_i, \ldots, X_i^k)^T$, $1 \leq i \leq n$, we have the following asymptotic normality result.

Proposition 3.2.2 $\sqrt{n}(a_1 - \alpha_1, \ldots, a_k - \alpha_k)^T$ is $AN_k(0, \Sigma)$, where $\Sigma = (\sigma_{ij})_{k\times k}$ with $\sigma_{ij} = \alpha_{i+j} - \alpha_i\alpha_j$.

Certainly, it is implicitly assumed that all stated moments are finite. This proposition is a direct application of the multivariate CLT (Theorem 1.3.2).

To deduce the properties of $m_k$, as seen in Example 1.3.2, it is advantageous to consider the closely related random variables $b_k = \frac{1}{n}\sum_{i=1}^n (X_i - \alpha_1)^k$, $k = 1, 2, \ldots$. The same arguments employed in dealing with the $a_k$'s immediately yield

Proposition 3.2.3 (i) $b_k \stackrel{wp1}{\to} \mu_k$; (ii) $E(b_k) = \mu_k$; (iii) $\mathrm{Var}(b_k) = \dfrac{\mu_{2k} - \mu_k^2}{n}$; (iv) $\sqrt{n}(b_1 - \mu_1, \ldots, b_k - \mu_k)^T$ is $AN_k(0, \tilde{\Sigma})$, where $\tilde{\Sigma} = (\tilde{\sigma}_{ij})_{k\times k}$ with $\tilde{\sigma}_{ij} = \mu_{i+j} - \mu_i\mu_j$.

The following result concerns the consistency and the asymptotic normality of the vector of sample central moments $(m_2, \ldots, m_k)$.

Theorem 3.2.1 Suppose that $\mu_{2k} < \infty$.

(i) $m_k \stackrel{wp1}{\to} \mu_k$;

(ii) The random vector $\sqrt{n}(m_2 - \mu_2, \ldots, m_k - \mu_k)^T$ is $AN_{k-1}(0, \Sigma^*)$, where $\Sigma^* = (\sigma^*_{ij})_{(k-1)\times(k-1)}$ with
$$\sigma^*_{ij} = \mu_{i+j+2} - \mu_{i+1}\mu_{j+1} - (i+1)\mu_i\mu_{j+2} - (j+1)\mu_{i+2}\mu_j + (i+1)(j+1)\mu_i\mu_j\mu_2.$$

Proof. Instead of dealing with $m_k$ directly, we exploit the connection between $m_k$ and the $b_j$'s. Writing
$$m_k = \frac{1}{n}\sum_{i=1}^n (X_i - a_1)^k = \frac{1}{n}\sum_{i=1}^n\sum_{j=0}^k C_k^j(X_i - \alpha_1)^j(\alpha_1 - a_1)^{k-j},$$
we have
$$m_k = \sum_{j=0}^k C_k^j(-1)^{k-j}b_j b_1^{k-j},$$
where we define $b_0 = 1$. (i) By noting that $\mu_1 = 0$, this result follows from (i) of Proposition 3.2.3 and the CMT. (ii) This is again an application of the multivariate Delta Theorem. Consider the map $g: \mathbb{R}^k \to \mathbb{R}^{k-1}$ given by
$$g(t_1, \ldots, t_k) = \left(\sum_{j=0}^2 C_2^j(-1)^{2-j}t_j t_1^{2-j}, \ \ldots, \ \sum_{j=0}^k C_k^j(-1)^{k-j}t_j t_1^{k-j}\right)^T.$$
Let $\theta = (0, \mu_2, \ldots, \mu_k)^T$, so that $g(\theta) = (\mu_2, \ldots, \mu_k)^T$. A direct evaluation of $\nabla g$ at $\theta$ yields
$$\nabla^T g|_\theta = \begin{pmatrix} -2\mu_1 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ -(i+1)\mu_i & 0 & \cdots & 1 & \cdots \\ \vdots & & & & \vdots \\ -k\mu_{k-1} & 0 & \cdots & & 1 \end{pmatrix}.$$
It follows that the sequence $\sqrt{n}(m_2 - \mu_2, \ldots, m_k - \mu_k)^T$ is asymptotically normally distributed, with mean zero and covariance matrix
$$\Sigma^* = \nabla^T g|_\theta\,\tilde{\Sigma}\,\nabla g|_\theta.$$
The assertion follows immediately from some simple algebra on $\nabla^T g|_\theta\tilde{\Sigma}\nabla g|_\theta$. $\square$

A direct consequence of this theorem is the asymptotic normality of the sample variance (obtained by choosing $k = 2$ in (ii)), which was studied in detail in Example 1.3.2.

3.3 The sample quantiles

A few selected sample percentiles provide useful diagnostic summaries of the full ECDF. For
example, the three quartiles of the sample already provide some information about symmetry
of the underlying population, and extreme percentiles give information about the tail. So
asymptotic theory of sample percentiles is of great interest in statistics. In this section, we present a selection of the fundamental results on the asymptotic theory for percentiles. The
iid case and then an extension to the regression setup are discussed.

Suppose X1 , . . . , Xn are iid real-valued random variables with CDF F . We denote the
order statistics of X1 , . . . , Xn by X(1) , . . . , X(n) . For 0 < p < 1, the pth quantile of F is
defined as F −1 (p) ≡ ξp = inf{x : F (x) ≥ p}. Note that ξp satisfies F (ξp −) ≤ p ≤ F (ξp ).
Correspondingly, the sample quantile is defined as the pth quantile of the ECDF Fn , that
is, Fn−1 (p) ≡ ξbp = inf{x : Fn (x) ≥ p}. Also, the sample quantile can be expressed as X(dnpe)
where dke denotes the smallest integer greater than or equal to k. Thus, the discussion of
quantile could be carried out formally in terms of order statistics.

3.3.1 Basic results

The first result is a probability inequality for $|\hat{\xi}_p - \xi_p|$ which implies that $\hat{\xi}_p$ is strongly consistent, i.e., $\hat{\xi}_p \stackrel{wp1}{\to} \xi_p$.

Theorem 3.3.1 Let $X_1, \ldots, X_n$ be iid random variables from a CDF $F$ satisfying $p < F(\xi_p + \epsilon)$ for any $\epsilon > 0$. Then, for every $\epsilon > 0$ and $n = 1, 2, \ldots$,
$$P(|\hat{\xi}_p - \xi_p| > \epsilon) \leq 2Ce^{-2n\delta_\epsilon^2},$$
where $\delta_\epsilon = \min\{F(\xi_p + \epsilon) - p,\ p - F(\xi_p - \epsilon)\}$ and $C$ is the same constant as in the DKW inequality.

Proof. Let $\epsilon > 0$ be fixed. Note that $G(x) \geq t$ iff $x \geq G^{-1}(t)$ for any CDF $G$ on $\mathbb{R}$. Hence
$$P(\hat{\xi}_p > \xi_p + \epsilon) = P(p > F_n(\xi_p + \epsilon)) = P(F(\xi_p + \epsilon) - F_n(\xi_p + \epsilon) > F(\xi_p + \epsilon) - p) \leq P(D_n > \delta_\epsilon) \leq Ce^{-2n\delta_\epsilon^2},$$
where the last inequality follows from DKW's inequality (Theorem 3.1.1). Similarly,
$$P(\hat{\xi}_p < \xi_p - \epsilon) \leq Ce^{-2n\delta_\epsilon^2}.$$
This proves the assertion. $\square$

By this inequality, the strong consistency of ξbp can be established easily from Theorem
1.2.1-(iv).

Remark 3.3.1 The exact distribution of $\hat{\xi}_p$ can be obtained as follows. Since $nF_n(t)$ has the binomial distribution BIN($F(t), n$) for any $t \in \mathbb{R}$,
$$P(\hat{\xi}_p \leq t) = P(F_n(t) \geq p) = \sum_{i=l_p}^n C_n^i[F(t)]^i[1 - F(t)]^{n-i},$$
where $l_p = \lceil np\rceil$. If $F$ has a PDF $f$, then $\hat{\xi}_p$ has the PDF
$$\varphi_n(t) = nC_{n-1}^{l_p-1}[F(t)]^{l_p-1}[1 - F(t)]^{n-l_p}f(t).$$


The following result provides an asymptotic distribution for $\sqrt{n}(\hat{\xi}_p - \xi_p)$.

Theorem 3.3.2 Let $X_1, \ldots, X_n$ be iid random variables from a CDF $F$. Suppose that $F$ is continuous at $\xi_p$.

(i) If $F'(\xi_p-)$ exists and is positive, then for any $t < 0$,
$$\lim_{n\to\infty} P\left(\frac{\sqrt{n}(\hat{\xi}_p - \xi_p)}{\sqrt{p(1-p)}/F'(\xi_p-)} \leq t\right) = \Phi(t);$$

(ii) If $F'(\xi_p+)$ exists and is positive, then for any $t > 0$,
$$\lim_{n\to\infty} P\left(\frac{\sqrt{n}(\hat{\xi}_p - \xi_p)}{\sqrt{p(1-p)}/F'(\xi_p+)} \leq t\right) = \Phi(t);$$

(iii) If $F'(\xi_p)$ exists and is positive, then
$$\sqrt{n}(\hat{\xi}_p - \xi_p) \stackrel{d}{\to} N\left(0, \frac{p(1-p)}{[F'(\xi_p)]^2}\right).$$

Proof. If $F$ is differentiable at $\xi_p$, then $F'(\xi_p-) = F'(\xi_p+) = F'(\xi_p)$. Thus, part (iii) is a direct consequence of (i) and (ii). The proofs of (i) and (ii) are similar; thus, we only give a proof for (ii).

Let $t > 0$, $p_{nt} = F(\xi_p + t\sigma_F^+ n^{-1/2})$, $c_{nt} = \sqrt{n}(p_{nt} - p)/\sqrt{p_{nt}(1 - p_{nt})}$, and $Z_{nt} = [B_n(p_{nt}) - np_{nt}]/\sqrt{np_{nt}(1 - p_{nt})}$, where $\sigma_F^+ = \sqrt{p(1-p)}/F'(\xi_p+)$ and $B_n(q)$ denotes a random variable having the binomial distribution BIN($q, n$). Then,
$$P\left(\hat{\xi}_p \leq \xi_p + t\sigma_F^+ n^{-1/2}\right) = P\left(p \leq F_n(\xi_p + t\sigma_F^+ n^{-1/2})\right) = P(Z_{nt} \geq -c_{nt}).$$
Under the assumed conditions on $F$, $p_{nt} \to p$ and $c_{nt} \to t$. Hence, the result follows from
$$P(Z_{nt} < -c_{nt}) - \Phi(-c_{nt}) \to 0.$$
But this follows from the CLT and Polya's theorem (Theorem 1.2.7-(ii)). $\square$

Remark 3.3.2 If both $F'(\xi_p+)$ and $F'(\xi_p-)$ exist and are positive, but $F'(\xi_p+) \neq F'(\xi_p-)$, then the asymptotic distribution of $\sqrt{n}(\hat{\xi}_p - \xi_p)$ has the CDF $\Phi(t/\sigma_F^-)I_{\{-\infty<t<0\}} + \Phi(t/\sigma_F^+)I_{\{0\leq t<\infty\}}$, a mixture of two normal distributions, where $\sigma_F^- = \sqrt{p(1-p)}/F'(\xi_p-)$. An example of such a case, with $p = \frac{1}{2}$, is
$$F(x) = xI_{\{0\leq x<\frac{1}{2}\}} + \left(2x - \tfrac{1}{2}\right)I_{\{\frac{1}{2}\leq x<\frac{3}{4}\}} + I_{\{\frac{3}{4}\leq x<\infty\}}.$$

Example 3.3.1 Suppose $X_1, X_2, \ldots$ are iid $N(\mu, 1)$. Let $M_n = \hat{\xi}_{1/2}$ denote the sample median. Since the standard normal density $\phi(x)$ at zero equals $1/\sqrt{2\pi}$, it follows from Theorem 3.3.2 that $\sqrt{n}(M_n - \mu) \stackrel{d}{\to} N(0, \pi/2)$. On the other hand, $\sqrt{n}(\bar{X}_n - \mu) \stackrel{d}{\to} N(0, 1)$. The ratio of the variances in the two asymptotic distributions, $2/\pi$, is called the ARE (asymptotic relative efficiency) of $M_n$ relative to $\bar{X}_n$. Thus, for normal data, $M_n$ is less efficient than $\bar{X}_n$.
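The ARE is easy to see in a simulation; the Python sketch below (arbitrary $n$ and number of replications) estimates $n\cdot\mathrm{Var}$ for the sample mean and the sample median of normal data, whose ratio should be close to $2/\pi \approx 0.64$.

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps = 100, 50_000
x = rng.standard_normal((reps, n))

var_mean = n * x.mean(axis=1).var()
var_med = n * np.median(x, axis=1).var()
print(var_mean, var_med, var_mean / var_med)   # ratio approx 2/pi ~ 0.64
```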

3.3.2 Bahadur’s representation

The sample median of an iid sample from some CDF $F$ is clearly not a linear statistic; i.e., it is not a function of the form $\sum_{i=1}^n h_i(X_i)$. In 1966, Bahadur proved that the sample median, and more generally any fixed sample percentile, is almost a linear statistic. The result in Bahadur (1966) not only led to an understanding of the probabilistic structure of percentiles but also turned out to be an extremely useful technical tool. For example, as we shall shortly see, it follows from Bahadur's result that, for iid samples from a CDF $F$, under suitable conditions not only are $\bar{X}_n$ and $\hat{\xi}_{1/2}$ marginally asymptotically normal, but they are jointly asymptotically bivariate normal. The result derived in Bahadur (1966) is known as the Bahadur representation of quantiles.

Theorem 3.3.3 (Bahadur's representation) Let $X_1, \ldots, X_n$ be iid random variables from a CDF $F$. Suppose that $F'(\xi_p)$ exists and is positive. Then
$$\hat{\xi}_p = \xi_p + \frac{F(\xi_p) - F_n(\xi_p)}{F'(\xi_p)} + o_p\left(\frac{1}{\sqrt{n}}\right).$$

Proof. Let $t \in \mathbb{R}$, $\xi_{nt} = \xi_p + tn^{-1/2}$, $Z_n(t) = \sqrt{n}[F(\xi_{nt}) - F_n(\xi_{nt})]/F'(\xi_p)$, and $U_n(t) = \sqrt{n}[F(\xi_{nt}) - F_n(\hat{\xi}_p)]/F'(\xi_p)$. It can be shown that
$$Z_n(t) - Z_n(0) = o_p(1). \qquad (3.1)$$
Note that $|p - F_n(\hat{\xi}_p)| \leq n^{-1}$. Then,
$$U_n(t) = \sqrt{n}[F(\xi_{nt}) - p + p - F_n(\hat{\xi}_p)]/F'(\xi_p) = \sqrt{n}[F(\xi_{nt}) - p]/F'(\xi_p) + O(n^{-1/2}) \to t. \qquad (3.2)$$
Let $\eta_n = \sqrt{n}(\hat{\xi}_p - \xi_p)$. Then for any $t \in \mathbb{R}$ and $\epsilon > 0$,
$$P(\eta_n \leq t,\ Z_n(0) \geq t + \epsilon) = P(Z_n(t) \leq U_n(t),\ Z_n(0) \geq t + \epsilon) \leq P(|Z_n(t) - Z_n(0)| \geq \epsilon/2) + P(|U_n(t) - t| \geq \epsilon/2) \to 0$$
by (3.1) and (3.2). Similarly,
$$P(\eta_n \geq t + \epsilon,\ Z_n(0) \leq t) \to 0.$$
It follows that $\eta_n - Z_n(0) = o_p(1)$ by Lemma 3.3.1 given below, which is the same as the assertion. $\square$

Lemma 3.3.1 Let $\{X_n\}$ and $\{Y_n\}$ be two sequences of random variables such that $X_n$ is bounded in probability and, for any real number $t$ and $\epsilon > 0$, $\lim_n[P(X_n \leq t, Y_n \geq t + \epsilon) + P(X_n \geq t + \epsilon, Y_n \leq t)] = 0$. Then $X_n - Y_n \stackrel{p}{\to} 0$.

Proof. For any $\epsilon > 0$, there exists an $M > 0$ such that $P(|X_n| > M) \leq \epsilon$ for any $n$, since $X_n$ is bounded in probability. For this fixed $M$, there exists an $N$ such that $2M/N < \epsilon/2$. Let $t_i = -M + 2Mi/N$, $i = 0, 1, \ldots, N$. Then,
$$P(|X_n - Y_n| \geq \epsilon) \leq P(|X_n| \geq M) + P(|X_n| < M,\ |X_n - Y_n| \geq \epsilon)$$
$$\leq \epsilon + \sum_{i=1}^N P(t_{i-1} \leq X_n \leq t_i,\ |X_n - Y_n| \geq \epsilon)$$
$$\leq \epsilon + \sum_{i=1}^N\left[P(Y_n \leq t_{i-1} - \epsilon/2,\ t_{i-1} \leq X_n) + P(Y_n \geq t_i + \epsilon/2,\ X_n \leq t_i)\right].$$
This, together with the given condition, implies that
$$\limsup_n P(|X_n - Y_n| \geq \epsilon) \leq \epsilon.$$
Since $\epsilon$ is arbitrary, we conclude that $X_n - Y_n \stackrel{p}{\to} 0$. $\square$

Remark 3.3.3 Actually, Bahadur gave an a.s. order for op (n−1/2 ) under the stronger as-
sumption that F is twice differentiable at ξp with F 0 (ξp ) > 0. The theorem stated here is in
the form later given in Ghosh (1971). The exact a.s. order was shown to be n−3/4 (log log n)3/4
by Kiefer (1967) in a landmark paper. However, the weaker version presented here suffices
for proving the following CLTs.
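The representation is easy to examine numerically. The following Python sketch (the normal model, sample size, and replication count are illustrative choices) compares the sample median with its Bahadur linearization; the scaled remainder is small, in line with the op(n^{−1/2}) statement.

    import numpy as np
    from scipy.stats import norm

    # Bahadur representation check for the median of standard normal data.
    rng = np.random.default_rng(1)
    p, n, reps = 0.5, 2000, 5000
    xi_p = norm.ppf(p)                       # true quantile
    fp = norm.pdf(xi_p)                      # F'(xi_p)
    X = rng.normal(size=(reps, n))
    xi_hat = np.quantile(X, p, axis=1)       # sample quantile
    Fn_xi = (X <= xi_p).mean(axis=1)         # Fn(xi_p)
    linear = xi_p + (p - Fn_xi) / fp         # Bahadur's linear approximation
    remainder = np.sqrt(n) * (xi_hat - linear)
    print(np.mean(np.abs(remainder)))        # small: the remainder is op(n^{-1/2})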

The Bahadur representation easily leads to the following two joint asymptotic distributions.

Corollary 3.3.1 Let X1, . . . , Xn be iid random variables from a CDF F having positive derivatives at ξpj, where 0 < p1 < · · · < pm < 1 are fixed constants. Then

√n[(ξ̂p1, . . . , ξ̂pm) − (ξp1, . . . , ξpm)] →d Nm(0, D),

where D is the m × m symmetric matrix with elements

Dij = pi(1 − pj)/[F′(ξpi)F′(ξpj)],   i ≤ j.



Proof. By Theorem 3.3.3, we know that √n[(ξ̂p1, . . . , ξ̂pm) − (ξp1, . . . , ξpm)]ᵀ is asymptotically equivalent to √n[(F(ξp1) − Fn(ξp1))/F′(ξp1), . . . , (F(ξpm) − Fn(ξpm))/F′(ξpm)]ᵀ, and thus we only need to derive the joint asymptotic distribution of √n[F(ξpi) − Fn(ξpi)]/F′(ξpi), i = 1, . . . , m. By the definition of the ECDF, [Fn(ξp1), . . . , Fn(ξpm)]ᵀ can be represented as the sum of iid random vectors

(1/n) Σ_{i=1}^n [I{Xi ≤ ξp1}, . . . , I{Xi ≤ ξpm}]ᵀ.

Thus, the result immediately follows from the multivariate CLT by using the facts that

E(I{Xi ≤ ξpk}) = F(ξpk),   Cov(I{Xi ≤ ξpk}, I{Xi ≤ ξpl}) = pk(1 − pl),   k ≤ l. □

Example 3.3.2 (Interquartile range; IQR) One application of Corollary 3.3.1 is the derivation of the asymptotic distribution of the interquartile range ξ̂0.75 − ξ̂0.25. It is widely used as a measure of the variability among the Xi's, and such an estimate is quite common when normality is suspect. It can be shown that

√n[(ξ̂0.75 − ξ̂0.25) − (ξ0.75 − ξ0.25)] →d N(0, σF²)

with

σF² = 3/(16[F′(ξ0.75)]²) + 3/(16[F′(ξ0.25)]²) − 1/(8F′(ξ0.75)F′(ξ0.25)).

In particular, if X1, . . . , Xn are iid N(0, σ²), then, by using the general result above and after some algebra, √n(IQR − 1.35σ) →d N(0, 2.48σ²). Consequently, for normal data, IQR/1.35 is a consistent estimate of σ (the value 1.35 is of course an approximation) with asymptotic variance 2.48σ²/1.35² = 1.36σ². On the other hand, √n(Sn − σ) →d N(0, 0.5σ²). The ratio of the asymptotic variances, namely 0.5/1.36 = 0.37, is the ARE of the IQR-based estimate relative to Sn. Thus, for normal data, one is better off using Sn. For populations with thicker tails, IQR-based estimates can be more efficient.
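The comparison of the two scale estimates is easy to reproduce by simulation. The sketch below estimates the two asymptotic variances for normal data; the sample size, replication count, and the rounded constant 1.349 are our own illustrative choices.

    import numpy as np

    # IQR/1.349 versus S_n as estimators of sigma for normal data.
    rng = np.random.default_rng(2)
    sigma, n, reps = 2.0, 500, 20000
    X = rng.normal(scale=sigma, size=(reps, n))
    q = np.quantile(X, [0.25, 0.75], axis=1)
    iqr_est = (q[1] - q[0]) / 1.349                  # IQR-based estimate of sigma
    s_est = X.std(axis=1, ddof=1)                    # usual S_n
    print(n * np.var(iqr_est), n * np.var(s_est))    # roughly 1.36*sigma^2 and 0.5*sigma^2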

Example 3.3.3 (Gastwirth estimate) Suppose X1, . . . , Xn are continuous and distributed as iid F(x − µ), where F(−x) = 1 − F(x), and we wish to estimate the location parameter µ. An obvious idea is to use a convex combination of order statistics Σ_{i=1}^n cni X(i). Such statistics are called L-statistics. A particular L-statistic that was found to have attractive, versatile performance is the Gastwirth estimate

µ̂ = 0.3X(n/3) + 0.4X(n/2) + 0.3X(2n/3).

This estimate is asymptotically normal with an explicitly available variance formula, since we know from our general theorem that [X(n/3), X(n/2), X(2n/3)]ᵀ is jointly asymptotically trivariate normal under mild conditions.
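For concreteness, a minimal Python implementation of the Gastwirth estimate might look as follows; the rule used to round n/3, n/2 and 2n/3 to integer ranks, as well as the test data, are our own choices.

    import numpy as np

    # Gastwirth estimate: 0.3*X_(n/3) + 0.4*X_(n/2) + 0.3*X_(2n/3).
    def gastwirth(x):
        x = np.sort(np.asarray(x))
        n = len(x)
        i1 = int(np.ceil(n / 3)) - 1
        i2 = int(np.ceil(n / 2)) - 1
        i3 = int(np.ceil(2 * n / 3)) - 1
        return 0.3 * x[i1] + 0.4 * x[i2] + 0.3 * x[i3]

    rng = np.random.default_rng(3)
    print(gastwirth(rng.standard_t(df=3, size=1001) + 10.0))   # close to 10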

Corollary 3.3.2 Let X1, . . . , Xn be iid from a CDF F. Let 0 < p < 1 and suppose VarF(X1) < ∞. If F is differentiable at ξp with F′(ξp) = f(ξp) > 0, then

√n( X̄n − µ, Fn^{−1}(p) − ξp ) →d N2(0, Σ),

where

Σ = [ Var(X1)                                                  (p/f(ξp)) EF(X1) − (1/f(ξp)) ∫_{x≤ξp} x dF(x) ]
    [ (p/f(ξp)) EF(X1) − (1/f(ξp)) ∫_{x≤ξp} x dF(x)            p(1 − p)/f²(ξp)                                ].

The proof of this corollary is very similar to that of Corollary 3.3.1 and hence is left as an exercise.

Example 3.3.4 As an application of this result, consider iid N(µ, 1) data. Take p = 1/2, so that Corollary 3.3.2 gives the joint asymptotic distribution of the sample mean and the sample median. The covariance entry in the matrix Σ equals (assuming without any loss of generality that µ = 0) −√(2π) ∫_{−∞}^0 xφ(x)dx = 1. Therefore, the asymptotic correlation between the sample mean and median in the normal case is √(2/π) = 0.7979, a fairly strong correlation.
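This correlation is easy to verify by simulation; the following sketch (with arbitrary sample size and replication count) estimates it for standard normal data.

    import numpy as np

    # Correlation between sample mean and sample median for N(0, 1) data.
    rng = np.random.default_rng(4)
    n, reps = 500, 20000
    X = rng.normal(size=(reps, n))
    means, medians = X.mean(axis=1), np.median(X, axis=1)
    print(np.corrcoef(means, medians)[0, 1])   # close to sqrt(2/pi) = 0.798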

3.3.3 Confidence intervals for quantiles

Since the population median and, more generally, population percentiles provide useful summaries of the population CDF, confidence intervals for them are of clear interest. Suppose X1, X2, . . . , Xn ~iid F and we wish to estimate ξp = F^{−1}(p) for some 0 < p < 1. The corresponding sample percentile ξ̂p = Fn^{−1}(p) is typically a fine point estimate of ξp. But how does one find a confidence interval of guaranteed coverage?

One possibility is to use the quantile transformation and observe that

(F(X(1)), F(X(2)), . . . , F(X(n))) =d (U(1), U(2), . . . , U(n)),

where U(i) is the ith order statistic of a U[0, 1] random sample, provided F is continuous. Therefore, for given 1 ≤ i1 < i2 ≤ n,

PF(X(i1) ≤ ξp ≤ X(i2)) = PF(F(X(i1)) ≤ p ≤ F(X(i2))) = P(U(i1) ≤ p ≤ U(i2)) ≥ 1 − α

if i1, i2 are appropriately chosen. The pair (i1, i2) can be chosen by studying the joint density of (U(i1), U(i2)), which has an explicit formula. However, the formula involves incomplete Beta functions, and for certain n and α, the actual coverage can be substantially larger than 1 − α. This is because no pair (i1, i2) may exist such that the event involving the two uniform order statistics has exactly or almost exactly 1 − α probability. This can make the confidence interval [X(i1), X(i2)] larger than one wishes and therefore less useful.

Alternatively, under the previously stated conditions,

√n(ξ̂p − ξp) →d N(0, p(1 − p)/[F′(ξp)]²).

Hence, an asymptotically correct 1 − α confidence interval for ξp is ξ̂p ± zα/2 √(p(1 − p))/(√n F′(ξp)). This interval has a simple appeal and is computed much more easily than the interval based on order statistics.

However, an obvious drawback of this procedure is that F′(ξp) must be known in advance; that is, the method is not asymptotically distribution-free. A remedy is given as follows. Before proceeding, we need a refinement of the Bahadur representation.

Theorem 3.3.4 Let X1, . . . , Xn be iid random variables from a continuous CDF F. Suppose that for 0 < p < 1, F′(ξp) exists and is positive. Let kn be a sequence of integers satisfying 1 ≤ kn ≤ n and kn/n = p + cn^{−1/2} + o(n^{−1/2}) for a constant c. Then

√n(X(kn) − ξ̂p) →p c/F′(ξp).

Proof. Let t ∈ R, ξnt = ξp + tn^{−1/2}, ηn = √n(ξ̂_{kn/n} − ξp), Zn(t) = √n[F(ξnt) − Fn(ξnt)]/F′(ξp), and Un(t) = √n[F(ξnt) − Fn(ξ̂_{kn/n})]/F′(ξp), where ξ̂_{kn/n} = X(kn). By arguments similar to those in the proof of Theorem 3.3.3, it is not difficult to show that for any t ∈ R and ε > 0,

P(ηn ≤ t, Zn(0) + c/F′(ξp) ≥ t + ε) → 0.

It follows that ηn = Zn(0) + c/F′(ξp) + op(1). Thus, we have

X(kn) − ξp = [kn/n − Fn(ξp)]/F′(ξp) + op(n^{−1/2}).

By Theorem 3.3.3 again, we know

ξ̂p − ξp = [p − Fn(ξp)]/F′(ξp) + op(n^{−1/2}).

The result follows by taking the difference of the two previous equations. □

Using this theorem, we can obtain an asymptotic 1 − α confidence interval for ξp .

Corollary 3.3.3 Assume the conditions in Theorem 3.3.4. Let {k1n} and {k2n} be two sequences of integers satisfying 1 ≤ k1n < k2n ≤ n,

k1n/n = p − zα/2 √(p(1 − p)/n) + o(n^{−1/2}),
k2n/n = p + zα/2 √(p(1 − p)/n) + o(n^{−1/2}),

where zα = Φ^{−1}(1 − α). Then the confidence interval C(X) = [X(k1n), X(k2n)] has the property that P(ξp ∈ C(X)) does not depend on F and

lim_{n→∞} P(ξp ∈ C(X)) = 1 − α.

Proof. Note that

PF(X(k1n) ≤ ξp ≤ X(k2n)) = PF(F(X(k1n)) ≤ p ≤ F(X(k2n))) = P(U(k1n) ≤ p ≤ U(k2n)),

and thus P(ξp ∈ C(X)) does not depend on F.

By Theorems 3.3.4 and 3.3.2 and Slutsky's theorem,

P(X(k1n) > ξp) = P( ξ̂p − zα/2 √(p(1 − p))/(F′(ξp)√n) + op(n^{−1/2}) > ξp )
               = P( √n(ξ̂p − ξp)/[√(p(1 − p))/F′(ξp)] + op(1) > zα/2 )
               → 1 − Φ(zα/2) = α/2.

Similarly, P(X(k2n) < ξp) → α/2, which completes the proof. □
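A minimal Python sketch of this interval is given below. The rounding used to turn n(p ± zα/2√(p(1 − p)/n)) into the integer indices k1n and k2n is our own choice (any choice respecting the o(n^{−1/2}) condition works), and the exponential data are only an illustration; the exact coverage is computed from the binomial distribution, as in the order-statistic argument above.

    import numpy as np
    from scipy.stats import binom, norm

    # Distribution-free CI [X_(k1n), X_(k2n)] for the p-th quantile.
    def quantile_ci(x, p, alpha=0.05):
        x = np.sort(np.asarray(x))
        n = len(x)
        half = norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
        k1 = max(int(np.floor(n * (p - half))), 1)
        k2 = min(int(np.ceil(n * (p + half))), n)
        # exact coverage: P(X_(k1) <= xi_p <= X_(k2)) = P(k1 <= Bin(n, p) <= k2 - 1)
        coverage = binom.cdf(k2 - 1, n, p) - binom.cdf(k1 - 1, n, p)
        return x[k1 - 1], x[k2 - 1], coverage

    rng = np.random.default_rng(5)
    print(quantile_ci(rng.exponential(size=400), p=0.5))   # CI for the median, exact coverage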

3.3.4 Quantile regression

Least squares estimates in regression minimize the sum of squared deviations of the observed
and the expected values of the dependent variable. In the location-parameter problem, this
principle would result in the sample mean as the estimate. If instead one minimizes the
sum of the absolute values of the deviations, one would obtain the median as the estimate.
Likewise, one can estimate the regression parameters by minimizing the sum of the absolute
deviations between the observed values and the regression function.

For example, if the model says yi = xiᵀβ + εi, then one can estimate the regression vector β by minimizing Σ_{i=1}^n |yi − xiᵀβ|, a very natural idea. This estimate is called the least absolute deviation (LAD) regression estimate. While it is not as good as the least squares estimate when the errors are exactly normal, it outperforms the least squares estimate for a variety of error distributions that are heavy-tailed. Generalizations of the LAD estimate, analogous to sample percentiles, are called quantile regression estimates. A good reference for the material in this section and proofs of the theorems below is Koenker (2005).

Definition 3.3.1 For 0 < p < 1, the pth quantile regression estimate is defined as

β̂QR = arg min_β Σ_{i=1}^n [ p|yi − xiᵀβ| I{yi ≥ xiᵀβ} + (1 − p)|yi − xiᵀβ| I{yi < xiᵀβ} ].

We can also write the equivalent definition

β̂QR = arg min_β Σ_{i=1}^n ρp(yi − xiᵀβ),

where ρp(t) = p t+ + (1 − p) t− is the so-called check function, with the subscripts + and − standing for the positive and negative parts, respectively. The following theorem describes the limiting distribution of the quantile regression estimate. There are some neat analogies in this result to the limiting distribution of the sample quantile for iid data.

Theorem 3.3.5 Let yi = xiᵀβ + εi, where the εi are iid from F, with F having median zero. Let 0 < p < 1, and let β̂QR be any pth quantile regression estimate. Suppose F has a strictly positive derivative f(ξp) at ξp. Then,

√n(β̂QR − β − ξp e1) →d Np(0, νΣ^{−1}),

where e1 = (1, 0, . . . , 0)ᵀ, Σ = lim_n (1/n)XᵀX (assumed to exist), and ν = p(1 − p)/f²(ξp).
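The estimate in Definition 3.3.1 can be computed by directly minimizing the check loss. The following Python sketch does this with a generic optimizer; the simulated design, the least-squares starting value, and the use of the Nelder–Mead method are illustrative choices rather than part of the theory (dedicated linear-programming formulations are preferable in practice).

    import numpy as np
    from scipy.optimize import minimize

    # p-th quantile regression by minimizing the check loss rho_p.
    def check_loss(beta, X, y, p):
        r = y - X @ beta
        return np.sum(np.where(r >= 0, p * r, (p - 1) * r))   # p*r_+ + (1-p)*r_-

    rng = np.random.default_rng(6)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])     # first column = intercept
    y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]              # least-squares start
    fit = minimize(check_loss, beta0, args=(X, y, 0.5), method="Nelder-Mead")
    print(fit.x)                                              # LAD (median regression) estimate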

References
Bahadur, R. R. (1966). A note on quantiles in large samples, Ann. Math. Stat., 37, 577–580.
Ghosh, J. K. (1971). A new proof of the Bahadur representation of quantiles and an application, Ann.
Math. Stat., 42, 1957–1961.
Kiefer, J. (1967). On Bahadur's representation of sample quantiles, Ann. Math. Stat., 38, 1323–1342.
Koenker, R. (2005). Quantile Regression. Cambridge Univ. Press.

Chapter 4

Asymptotics in parametric inference

In this chapter, we treat asymptotic methods that arise in connection with estimation or hypothesis testing relative to a parametric family of possible distributions for the data. In this respect, maximum likelihood inference is probably the most popular method. Many think that maximum likelihood is the greatest conceptual invention in the history of statistics. Although in some high- or infinite-dimensional problems the computation and performance of maximum likelihood estimates (MLEs) are problematic, in the vast majority of models in practical use, MLEs are about the best that one can do. They have many asymptotic optimality properties that translate into fine performance in finite samples. Before elaborating on maximum likelihood estimation and testing, we first consider the concept of asymptotic optimality of point estimators in parametric models.

4.1 Asymptotic efficient estimation

Let θ̂n be a sequence of estimators of θ based on samples X = {X1, . . . , Xn} whose distributions are in a parametric family indexed by θ. Suppose that, as n → ∞,

(θ̂n − θ) ∼ ANk(0, Vn(θ)),                                (4.1)
where for each n, Vn(θ) is a k × k positive definite matrix depending on θ. If θ is one-dimensional (k = 1), then Vn(θ) is the asymptotic variance as well as the asymptotic MSE of θ̂n. When k > 1, Vn(θ) is called the asymptotic covariance matrix of θ̂n and can be used as a measure of asymptotic performance of estimators.

If θ̂jn satisfies (4.1) with asymptotic covariance matrix Vjn(θ), j = 1, 2, and V1n(θ) ≤ V2n(θ) (in the sense that V2n(θ) − V1n(θ) is nonnegative definite) for all θ ∈ Θ, then θ̂1n is said to be asymptotically more efficient than θ̂2n. When the Xi's are iid, Vn(θ) is usually of the form n^{−δ}V(θ) for some δ > 0 (δ = 1 in the majority of cases) and a positive definite matrix V(θ) that does not depend on n.

Definition 4.1.1 Assume that the Fisher information matrix

In(θ) = E{ [∂/∂θ Σ_i log fθ(Xi)] [∂/∂θ Σ_i log fθ(Xi)]ᵀ }

is well defined and positive definite for every n. A sequence of estimators θ̂n satisfying (4.1) is said to be asymptotically efficient or asymptotically optimal iff Vn(θ) = [In(θ)]^{−1}.

Suppose that we are interested in estimating β = g(θ), where g is a differentiable function from R^k to R^p, 1 ≤ p ≤ k. If θ̂n satisfies (4.1), then, by the Delta Theorem, β̂n = g(θ̂n) is asymptotically distributed as Np(β, [∇g(θ)]ᵀVn(θ)∇g(θ)). Thus, the information inequality becomes

[∇g(θ)]ᵀVn(θ)∇g(θ) ≥ [Ĩn(β)]^{−1},

where Ĩn(β) is the Fisher information matrix about β. If p = k and g is one-to-one, then

[Ĩn(β)]^{−1} = [∇g(θ)]ᵀ[In(θ)]^{−1}∇g(θ),

and, therefore, β̂n is asymptotically efficient iff θ̂n is asymptotically efficient. For this reason, we can focus on the estimation of θ only.

Remark 4.1.1 (The super-efficiency and Hodges estimator)

It was first believed as folklore that the MLE under regularity conditions on the underlying distribution is asymptotically the best for every value of θ0 ∈ Θ; i.e., if an MLE θ̂n exists and √n(θ̂n − θ0) →d N(0, I^{−1}(θ0)), and if another competing sequence Tn satisfies √n(Tn − θ0) →d N(0, V(θ0)), then for every θ0, V(θ0) ≥ I^{−1}(θ0).

It was a major shock when in 1952 Hodges gave an example that destroyed this belief and
proved it to be false even in the normal case. Hodges, in a private communication to LeCam,
produced an estimate Tn that beats the MLE X̄n locally at some θ0 , say θ0 = 0. Later, in
a very insightful result, LeCam (1953) showed that this can happen only on Lebesgue-null
sets of θ. An excellent reference for this topic is van der Vaart (1998).
Let X1, . . . , Xn be iid N(θ, 1). Define an estimator θ̃ (the Hodges estimator) as

θ̃ = X̄n if |X̄n| ≥ n^{−1/4},   and   θ̃ = tX̄n if |X̄n| < n^{−1/4},

where we choose 0 < t < 1. We are interested in estimating the population mean. If X̄n is
not close to 0, we simply take the sample mean as the estimator. If we know that it is pretty
close to 0, we can shrink it further to make it closer to 0. Thus, the resulting estimator
should be more efficient than the sample mean X̄n at 0. Of course, the same phenomenon occurs if we shrink toward any other fixed value instead of 0. Now, let us find the asymptotic distribution of θ̃. If
θ = 0, then we can write

√n θ̃ = √n [ X̄n I{|X̄n| ≥ n^{−1/4}} + tX̄n I{|X̄n| < n^{−1/4}} ]
      = √n [ tX̄n + (1 − t)X̄n I{|X̄n| ≥ n^{−1/4}} ]
      = tYn + (1 − t)Yn I{|Yn| ≥ n^{1/4}},

where Yn = √n X̄n ∼ N(0, 1), and hence tYn ∼ N(0, t²). Now let us look at the second term Wn = Yn I{|Yn| ≥ n^{1/4}}. Since

(E|Wn|)² ≤ E(Yn²) E(I²{|Yn| ≥ n^{1/4}}) = P(|Yn| ≥ n^{1/4}) ≤ E|Yn|²/n^{1/2} = n^{−1/2} → 0,

we have Wn →p 0. By Slutsky's theorem, we get

√n θ̃ →d N(0, t²), if θ = 0.
Similarly, when θ ≠ 0, we can write

√n θ̃ = Yn + (t − 1)Yn I{|Yn| < n^{1/4}},

where again Yn = √n X̄n, so that Yn − √n θ = √n(X̄n − θ) ∼ N(0, 1). Now it remains to show that Yn I{|Yn| < n^{1/4}} →p 0. For any ε > 0 (and n large enough that ε < n^{1/4}),

P(|Yn I{|Yn| < n^{1/4}}| > ε) ≤ P(n^{1/4} I{|Yn| < n^{1/4}} > ε)
                              = P(I{|Yn| < n^{1/4}} > ε/n^{1/4})
                              = P(|Yn| < n^{1/4})
                              = P(−n^{1/4} < Yn < n^{1/4})
                              = Φ(−√n θ + n^{1/4}) − Φ(−√n θ − n^{1/4}) → 0.

By Slutsky's theorem again, we get √n(θ̃ − θ) →d N(0, 1). Combining the above two cases, we get

√n(θ̃ − θ) →d N(0, t²) if θ = 0,   and   √n(θ̃ − θ) →d N(0, 1) if θ ≠ 0.

In the case θ = 0, the usual asymptotic Cramer–Rao bound does not hold, since t² < 1 = I^{−1}(θ). It is clear, however, that θ̃ has certain undesirable features. First, as a function of X1, . . . , Xn, θ̃ is not smooth. Second, its asymptotic variance V(θ) is not continuous in θ.
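The two limiting variances are easy to see in a simulation. In the sketch below, the choices of t, n, the replication count, and the values of θ are arbitrary.

    import numpy as np

    # Hodges estimator: super-efficient at theta = 0, ordinary elsewhere.
    rng = np.random.default_rng(7)
    t, n, reps = 0.5, 10000, 20000

    def hodges(xbar, n, t):
        return np.where(np.abs(xbar) >= n ** (-0.25), xbar, t * xbar)

    for theta in (0.0, 1.0):
        xbar = rng.normal(loc=theta, scale=1.0 / np.sqrt(n), size=reps)  # Xbar_n ~ N(theta, 1/n)
        est = hodges(xbar, n, t)
        print(theta, n * np.var(est))   # about t^2 = 0.25 at theta = 0, about 1 at theta = 1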

4.2 Maximum likelihood estimation

Let X = {X1, . . . , Xn} be iid with distribution Fθ belonging to a family F = {Fθ : θ = (θ1, . . . , θk)ᵀ ∈ Θ}, and suppose that the distributions Fθ possess densities fθ(x). The likelihood function of the sample X is defined as

L(θ; X) = Π_{i=1}^n fθ(Xi).

The maximum likelihood estimate (MLE) is given by θ̂ = arg max_{θ∈Θ} log L(θ; X). Often, the estimate θ̂ may be obtained by solving the system of likelihood equations (based on the score function),

∂ log L/∂θi |_{θ=θ̂} = 0,

and checking that the solution θ̂ indeed maximizes L. Since a solution of the likelihood equations need not be the MLE, we will also refer to a root of the likelihood equations as an RLE.

Next, we will show that under regularity conditions on F, the MLE (RLE) is strongly consistent, asymptotically normal, and asymptotically efficient. For simplicity, we focus on the case k = 1. The multivariate version will be discussed without proof.

Regularity Conditions on F Consider Θ to be an open interval in R. Assume:

(C1) The third derivative with respect to θ, ∂³ log fθ(x)/∂θ³, exists for all x, and for each θ0 ∈ Θ there exists a function H(x) ≥ 0 (possibly depending on θ0) such that for θ ∈ N(θ0, ε) = {θ : |θ − θ0| < ε},

|∂³ log fθ(x)/∂θ³| ≤ H(x),   Eθ0 H(X1) < ∞;

(C2) For gθ(x) = fθ(x) or gθ(x) = ∂fθ(x)/∂θ, we have

∂/∂θ ∫ gθ(x)dx = ∫ ∂gθ(x)/∂θ dx;

(C3) For each θ ∈ Θ, we have

0 < I(θ) = Eθ[ (∂ log fθ(x)/∂θ)² ] < ∞.

Remark 4.2.1 Condition (C1) ensures that ∂ log fθ(x)/∂θ, for any x, has a Taylor expansion as a function of θ. Condition (C2) means that fθ(x) or ∂fθ(x)/∂θ can be differentiated with respect to θ under the integral sign; that is, the integration and differentiation can be interchanged. A sufficient condition for (C2) is the following:

For each θ0 ∈ Θ, there exist functions g(x), h(x), and H(x) (possibly depending on θ0) such that for θ ∈ N(θ0, ε) = {θ : |θ − θ0| < ε},

|∂fθ(x)/∂θ| ≤ g(x),   |∂²fθ(x)/∂θ²| ≤ h(x),   |∂³ log fθ(x)/∂θ³| ≤ H(x)

hold for all x and

∫ g(x)dx < ∞,   ∫ h(x)dx < ∞,   Eθ0 H(X1) < ∞.

Condition (C3) ensures that the variance of ∂ log fθ(x)/∂θ is finite.

Theorem 4.2.1 Assume regularity conditions (C1)-(C3) on the family F. Consider iid
observations on Fθ0 , for θ0 an element of Θ. Then with probability 1, the likelihood equations
admit a sequence of solutions {θbn } satisfying

(i) strong consistency: θbn → θ0 , as n → ∞;

(ii) asymptotic normality and efficiency: θbn is AN (θ0 , [nI(θ0 )]−1 ).

Proof. (i) Denote the (normalized) score function by

s(X, θ) = (1/n) ∂ log L(θ; X)/∂θ = (1/n) Σ_{i=1}^n ∂ log fθ(Xi)/∂θ.

Then,

s′(X, θ) = (1/n) Σ_{i=1}^n ∂² log fθ(Xi)/∂θ²,   s″(X, θ) = (1/n) Σ_{i=1}^n ∂³ log fθ(Xi)/∂θ³.

Note that

|s″(X, θ)| ≤ (1/n) Σ_{i=1}^n |∂³ log fθ(Xi)/∂θ³| ≤ (1/n) Σ_{i=1}^n H(Xi) ≡ H̄(X),

where H̄(X) = n^{−1} Σ_{i=1}^n H(Xi). By Taylor's expansion,

s(X, θ) = s(X, θ0) + s′(X, θ0)(θ − θ0) + (1/2) s″(X, ξ)(θ − θ0)²
        = s(X, θ0) + s′(X, θ0)(θ − θ0) + (1/2) H̄(X) η*(θ − θ0)²,

where |η*| = |s″(X, ξ)|/H̄(X) ≤ 1. By the SLLN, we have

s(X, θ0) → Eθ0 s(X, θ0) = 0  wp1,
s′(X, θ0) → Eθ0 s′(X, θ0) = −I(θ0)  wp1,
H̄(X) → Eθ0 H(X1) < ∞  wp1,

where we use the facts that

Eθ[∂ log fθ(x)/∂θ] = ∫ (1/fθ(x)) (∂fθ(x)/∂θ) fθ(x)dx = ∂/∂θ ∫ fθ(x)dx = 0,
Eθ[∂² log fθ(x)/∂θ²] = ∫ [ (1/fθ(x)) ∂²fθ(x)/∂θ² − (1/fθ(x))² (∂fθ(x)/∂θ)² ] fθ(x)dx = −Eθ[ (∂ log fθ(x)/∂θ)² ],

provided Condition (C2) holds.

Clearly, for ε > 0, we have with probability one,

s(X, θ0 ± ε) = s(X, θ0) + s′(X, θ0)(±ε) + (1/2) H̄(X) η*(±ε)²
             ≈ ∓ε I(θ0) + (1/2) Eθ0 H(X1) c ε²,   |c| < 1.

In particular, we choose 0 < ε < I(θ0)/Eθ0 H(X1). Then for large enough n, we have, with probability 1,

s(X, θ0 + ε) = s(X, θ0) + s′(X, θ0)ε + (1/2) H̄(X) η* ε² ≤ −ε I(θ0) + (1/2) Eθ0 H(X1) c ε² < 0,
s(X, θ0 − ε) = s(X, θ0) − s′(X, θ0)ε + (1/2) H̄(X) η* ε² ≥ ε I(θ0) − (1/2) Eθ0 H(X1) c ε² > 0.

Therefore, by the continuity of s(X, θ) in θ, for such n, the interval [θ0 − ε, θ0 + ε] contains a solution of the likelihood equation s(X, θ) = 0. In particular, it contains the solution

θ̂n,ε = inf{θ : θ0 − ε ≤ θ ≤ θ0 + ε, and s(X, θ) = 0}.

It can be shown that θ̂n,ε is a proper random variable, and also that we can obtain a sequence θ̂n not depending on the choice of ε. The details are omitted here but can be found in Serfling (1980). This proves (i).

(ii) For large n, we have seen that

0 = s(X, θ̂n) = s(X, θ0) + s′(X, θ0)(θ̂n − θ0) + (1/2) H̄(X) η*(θ̂n − θ0)².

Thus,

√n s(X, θ0) = √n(θ̂n − θ0) [ −s′(X, θ0) − (1/2) H̄(X) η*(θ̂n − θ0) ].

Since √n s(X, θ0) →d N(0, I(θ0)) by the CLT, and −s′(X, θ0) − (1/2) H̄(X) η*(θ̂n − θ0) → I(θ0) wp1, it follows from Slutsky's theorem that

√n(θ̂n − θ0) = √n s(X, θ0) / [ −s′(X, θ0) − (1/2) H̄(X) η*(θ̂n − θ0) ] →d N(0, I(θ0)/[I(θ0)]²) = N(0, I^{−1}(θ0)). □

For the case θ ∈ R^k, under an appropriate generalization of Conditions (C1)–(C3), there exists a sequence θ̂n of solutions to s(X, θ) = 0 such that θ̂n → θ0 wp1 and θ̂n is AN(θ0, In^{−1}(θ0)), where In(θ0) is the information matrix defined in Definition 4.1.1. A succinct proof can be found on page 290 of Shao (2003).

Remark 4.2.2 This theorem does not say which sequence of roots of s(X; θ) = 0 should be
chosen to ensure consistency in the case of multiple roots. It does not even guarantee that
for any given n, however large, the likelihood function log L(θ; X) has any local maxima at
all. This specific theorem is useful in only those cases where s(X; θ) = 0 has a unique root
for all n.

4.3 Improving the sub-efficient estimates

The method of moments ordinarily provides asymptotically normal estimates, and sometimes these estimates are asymptotically efficient. For example, in estimating (µ, σ²) in N(µ, σ²) by (X̄n, Sn²), the method of moments and the MLE essentially coincide, but usually they do not. One would like to use the MLE, but it has the disadvantage of being difficult to evaluate in general. The likelihood equations, s(X, θ) = 0, are generally highly nonlinear, and one must resort to numerical approximation methods to solve them.

One good strategy is to use Newton's method with a simply computed estimate, based on the method of moments or sample quantiles, as the initial guess. This method takes the initial guess, θ̂(0), and inductively generates a sequence of hopefully better and better estimates by

θ̂(k+1) = θ̂(k) − [s′(X, θ̂(k))]^{−1} s(X, θ̂(k)),   k = 0, 1, 2, . . . .

One simplification of this strategy can be made if the Fisher information is available. Ordinarily, s′(X, θ̂(k)) will converge as n → ∞ to −I(θ0), and so it can be replaced by −I(θ̂(k)) in the iterations,

θ̂(k+1) = θ̂(k) + [I(θ̂(k))]^{−1} s(X, θ̂(k)),   k = 0, 1, 2, . . . .

This is known as the method of scoring. The scores, [I(θ̂(k))]^{−1} s(X, θ̂(k)), are increments added to an estimate to improve it.

Example 4.3.1 (Logistic distribution) Let X1, . . . , Xn be a sample from the density

fθ(x) = exp{−(x − θ)} / (1 + exp{−(x − θ)})².

The log-likelihood function is given by

ln(θ) = −Σ_{j=1}^n (Xj − θ) − 2 Σ_{j=1}^n log(1 + exp{−(Xj − θ)}),

and the likelihood equation is

l′n(θ) = n − 2 Σ_{j=1}^n 1/(1 + exp{Xj − θ}) = 0.

Newton's method is easy to apply here because

l″n(θ) = −2 Σ_{j=1}^n exp{Xj − θ}/(1 + exp{Xj − θ})².

Even easier is the method of scoring, since I(θ) = 1/3 [I(θ) is a constant for location-parameter families of distributions]. As an initial guess we may use the sample median, mn, or the sample mean, X̄n. The asymptotic distributions are

√n(mn − θ) →d N(0, 1/(4fθ(θ)²)) = N(0, 4),
√n(X̄n − θ) →d N(0, π²/3) ≈ N(0, 3.2899).
Since for the MLE, θ̂n,

√n(θ̂n − θ) →d N(0, I(θ)^{−1}) = N(0, 3),

it would seem worthwhile to improve mn and X̄n by one iteration or two of

θ̂(k+1) = θ̂(k) + 3 [ 1 − (2/n) Σ_{j=1}^n 1/(1 + exp{Xj − θ̂(k)}) ].
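For concreteness, the scoring iteration above can be coded in a few lines; in the following Python sketch the simulated data, the seed, and the choice of the sample median as the starting value are illustrative assumptions.

    import numpy as np

    # One scoring step for the logistic location model, I(theta) = 1/3.
    rng = np.random.default_rng(8)
    theta_true, n = 2.0, 1000
    x = rng.logistic(loc=theta_true, size=n)

    def score_step(theta, x):
        # theta_{k+1} = theta_k + 3 * [1 - (2/n) * sum_j 1/(1 + exp(x_j - theta_k))]
        return theta + 3.0 * (1.0 - 2.0 * np.mean(1.0 / (1.0 + np.exp(x - theta))))

    theta0 = np.median(x)                 # sub-efficient starting value
    theta1 = score_step(theta0, x)        # one-step estimate
    print(theta0, theta1, score_step(theta1, x))   # a second step changes very little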

Once is enough. In improving asymptotically normal estimates by scoring, one iteration is generally enough to achieve asymptotic efficiency! Let θ̂(0) be an estimator that is sub-efficient. The estimator

θ̂(1) = θ̂(0) − [s′(X, θ̂(0))]^{−1} s(X, θ̂(0))

is the first iteration in improving the estimator as discussed above. In fact, this is just the first iteration in computing an MLE using the Newton–Raphson method with θ̂(0) as the initial value and, hence, is often called the one-step MLE. Under some conditions, θ̂(1) is asymptotically efficient, as the following result shows.


Theorem 4.3.1 Assume the conditions in Theorem 4.2.1 hold and that θ̂(0) is √n-consistent for θ. Then

(i) the one-step MLE θ̂(1) is asymptotically efficient;

(ii) the one-step MLE obtained by replacing s′(X, θ̂(0)) with its expected value, −I(θ̂(0)), is asymptotically efficient.


Proof. Let θ̂n be a √n-consistent sequence satisfying s(X; θ̂n) = 0. In what follows, we suppress "X" for simplicity. Expanding s(θ̂(0)) around θ̂n,

s(θ̂(0)) = s(θ̂n) + s′(θ̂n)(θ̂(0) − θ̂n) + (1/2) s″(ξ)(θ̂(0) − θ̂n)²,          (4.2)

and using

(θ̂(1) − θ̂n) = (θ̂(0) − θ̂n) − [s′(θ̂(0))]^{−1} s(θ̂(0)),   s(θ̂n) = 0,

we find

√n(θ̂(1) − θ̂n) = √n s(θ̂(0)) { [s′(θ̂n)]^{−1} − [s′(θ̂(0))]^{−1} } − (√n/2) s″(ξ)(θ̂(0) − θ̂n)² [s′(θ̂n)]^{−1}.      (4.3)

Now we need to study the right-hand side of (4.3). First, note that √n(θ̂(0) − θ̂n) = √n(θ̂(0) − θ0) − √n(θ̂n − θ0) is bounded in probability, because the second term is asymptotically normal by Theorem 4.2.1(ii) and the first term is Op(1) by assumption. By

|s″(ξ)| ≤ H̄(X) → Eθ0 H(X1) < ∞  wp1,

we have s″(ξ) = Op(1). Also, by the CMT, we know s′(θ̂n) = s′(θ0) + op(1) = −I(θ0) + op(1). Thus, the last term in (4.3) is of order Op(n^{−1/2}). Similarly, [s′(θ̂n)]^{−1} − [s′(θ̂(0))]^{−1} = op(1). Finally, by (4.2) again, we obtain s(θ̂(0)) = Op(n^{−1/2}), which leads to

√n(θ̂(1) − θ̂n) = √n Op(n^{−1/2}) op(1) + Op(n^{−1/2}) = op(1).

Hence √n(θ̂(1) − θ̂n) →p 0 as n → ∞; that is, √n(θ̂(1) − θ̂n) = √n(θ̂(1) − θ0) − √n(θ̂n − θ0) = op(1), so √n(θ̂(1) − θ0) is asymptotically equivalent to √n(θ̂n − θ0), which is asymptotically efficient according to Theorem 4.2.1. It follows that θ̂(1) is AN(θ0, [nI(θ0)]^{−1}) and thus asymptotically efficient. The argument for the estimator based on the scoring method is identical. □

4.4 Hypothesis testing by likelihood method

As we know, UMP and UMPU tests often do not exist in a particular problem. In this chapter, we shall introduce other tests. These tests may not be optimal, but they are very general methods, easy to use, and have intuitive appeal. They often coincide with optimal tests (UMP, UMPU tests) and play a role in testing similar to that of the MLE in estimation theory. For all these reasons, a treatment of testing is essential. We discuss the asymptotic theory of the likelihood ratio, Wald, and Rao score tests in the remainder of this chapter.

Let X = {X1 , . . . , Xn } be iid with distribution Fθ belonging to a family F = {Fθ : θ =


(θ1 , . . . , θk )T ∈ Θ ⊂ Rk } and suppose that the distribution Fθ possess densities fθ (x). The

testing problem is

H0 : θ ∈ Θ0 versus H1 : θ ∈ Θ1,

where Θ0 ∪ Θ1 = Θ and Θ0 ∩ Θ1 = ∅.

The likelihood ratio test (LRT) rejects H0 for small values of

Λn = supθ∈Θ0 L(θ; X) / supθ∈Θ L(θ; X).

Equivalently, the test may be carried out in terms of the commonly used statistic

λn = −2 log Λn ,

which turns out to be more convenient for asymptotic derivation. The motivation for Λn
comes from two sources: (a) The case where H0 , and H1 are each simple, for which a UMP
test is found from Λn by the Neyman-Pearson lemma; (b) The intuitive explanation that,
for small values of Λn , we can better match the observed data with some value of θ outside
of Θ0 .

A null hypothesis H0 will be specified as a subset Θ0 of Θ, where Θ0 is determined by a


set of r ≤ k restrictions given by equations

Ri (θ) = 0, 1 ≤ i ≤ r.

In the case of a simple hypothesis θ = θ 0 , the set Θ0 = {θ 0 }, and the function Ri (θ) may
be taken to be
Ri (θ) = θi − θ0i , 1 ≤ i ≤ k.

In the case of a composite hypothesis, the set Θ0 contains more than one element and we must have r < k. For instance, if k = 3, we might have H0 : θ ∈ Θ0 = {θ = (θ1, θ2, θ3) : θ1 = θ01}. In this case, r = 1 and the function R1(θ) may be taken to be R1(θ) = θ1 − θ01. We start with a well-known but intuitive example that illustrates important aspects of the likelihood ratio method.

Example 4.4.1 Let X1, . . . , Xn be iid N(µ, σ²), and consider testing H0 : µ = 0 versus H1 : µ ≠ 0. Let θ = (µ, σ²)ᵀ. Then k = 2 and r = 1. Apparently,

Λn = sup_{θ∈Θ0} (1/σⁿ) exp{−(1/2σ²) Σ_i (Xi − µ)²} / sup_{θ∈Θ} (1/σⁿ) exp{−(1/2σ²) Σ_i (Xi − µ)²}
   = [ Σ_i (Xi − X̄n)² / Σ_i Xi² ]^{n/2}

by an elementary calculation of the MLEs of θ under H0 and in the general parameter space. By another elementary calculation, Λn < c is seen to be equivalent to tn² > k, where

tn = √n X̄n / √[ (1/(n − 1)) Σ_i (Xi − X̄n)² ]

is the t-statistic. In other words, the LRT is equivalent to the t-test. Also, observe that

tn² = (n − 1)Λn^{−2/n} − (n − 1).

This implies

Λn = [ (n − 1)/(tn² + n − 1) ]^{n/2}
⇒ λn = −2 log Λn = n log(1 + tn²/(n − 1)) = n tn²/(n − 1) + op(tn²/(n − 1)) →d χ²1

under H0, since tn →d N(0, 1) as illustrated in Example 1.2.8.

As seen earlier, it is sometimes very difficult or impossible to find the exact distribution of λn, so approximations become necessary. The next celebrated theorem, originally stated by Wilks (1938), establishes the asymptotic chi-square distribution of λn under H0. The degrees of freedom are just the number of independent constraints specified by H0; it is useful to remember this as a general rule. Before proceeding, we need a more convenient representation of the set Θ0 given above. Since H0 imposes r constraints on the parameter θ, only k − r components of θ = (θ1, . . . , θk)ᵀ are free to vary, so Θ0 has k − r degrees of freedom. Without loss of generality, we denote this (k − r)-dimensional free parameter by ϑ = (ϑ1, . . . , ϑk−r). The specification of Θ0 may then equivalently be given as a transformation

H0 : θ = g(ϑ),                                      (4.4)

where g is a continuously differentiable function from Rk−r to Rk with a full rank ∂g(ϑ)/∂ϑ.
For example, consider again H0 : θ 0 ∈ Θ0 = {θ = (θ1 , θ2 , θ3 ) : θ1 = θ01 }. Then, we can set
ϑ1 = θ2 , ϑ2 = θ3 , g1 (ϑ) = θ01 , g2 (ϑ) = θ2 , g3 (ϑ) = θ3 ; Also, suppose θ = (θ1 , θ2 , θ3 )T and
H0 : θ1 = θ2 . Here, Θ = R3 , k = 3 and r = 1, and θ2 and θ3 are the two free changing
parameters. Then we can take ϑ = (θ2 , θ3 )T ∈ Rk−r = R2 , and g1 (ϑ) = θ2 , g2 (ϑ) = θ2 ,
g3 (ϑ) = θ3 .

Theorem 4.4.1 Assume the conditions in Theorem 4.2.1 hold and H0 is determined by
d
(4.4). Under H0 , λn → χ2r .

Proof. Without loss of generality, we assume that there exist an MLE θ̂n and an MLE ϑ̂n under H0 such that

Λn = supθ∈Θ0 L(θ; X) / supθ∈Θ L(θ; X) = L(g(ϑ̂n); X) / L(θ̂n; X).

Following the proof of Theorem 4.2.1(ii), we can obtain

√n I(θ0)(θ̂n − θ0) = √n s(θ0) + op(1),

and also, by Taylor's expansion,

2[log L(θ̂n) − log L(θ0)] = 2n(θ̂n − θ0)ᵀ s(θ0) + n(θ̂n − θ0)ᵀ s′(θ0)(θ̂n − θ0) + op(1)
                          = n(θ̂n − θ0)ᵀ I(θ0)(θ̂n − θ0) + op(1).

Then,

2[log L(θ̂n) − log L(θ0)] = n sᵀ(θ0)[I(θ0)]^{−1} s(θ0) + op(1).

Similarly, under H0,

2[log L(g(ϑ̂n)) − log L(g(ϑ0))] = n s̃ᵀ(ϑ0)[Ĩ(ϑ0)]^{−1} s̃(ϑ0) + op(1),

where

s̃(ϑ) = (1/n) ∂ log L(g(ϑ))/∂ϑ = D(ϑ) s(g(ϑ)),   D(ϑ) = ∂g(ϑ)/∂ϑ,

and Ĩ(ϑ) is the Fisher information matrix about ϑ. Combining these results, we obtain, under H0,

λn = −2 log Λn = 2[log L(θ̂n) − log L(g(ϑ̂n))]
   = n [s(g(ϑ0))]ᵀ B(ϑ0) s(g(ϑ0)) + op(1),

where

B(ϑ) = [I(g(ϑ))]^{−1} − [D(ϑ)]ᵀ[Ĩ(ϑ)]^{−1} D(ϑ).

By the CLT, √n [I(θ0)]^{−1/2} s(θ0) →d Z, where Z ∼ Nk(0, Ik). Then it follows from Slutsky's theorem that, under H0,

λn →d Zᵀ [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2} Z.

Finally, it remains to investigate the properties of the matrix [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2}. For notational convenience, let D = D(ϑ), B = B(ϑ), A = I(g(ϑ)), and C = Ĩ(ϑ). Then,

(A^{1/2} B A^{1/2})² = A^{1/2} B A B A^{1/2}
 = A^{1/2}(A^{−1} − DᵀC^{−1}D) A (A^{−1} − DᵀC^{−1}D) A^{1/2}
 = (Ik − A^{1/2} Dᵀ C^{−1} D A^{1/2})(Ik − A^{1/2} Dᵀ C^{−1} D A^{1/2})
 = Ik − 2 A^{1/2} Dᵀ C^{−1} D A^{1/2} + A^{1/2} Dᵀ C^{−1} (D A Dᵀ) C^{−1} D A^{1/2}
 = Ik − A^{1/2} Dᵀ C^{−1} D A^{1/2}
 = A^{1/2} B A^{1/2},

where the second-to-last equality follows from the fact that C = D A Dᵀ. This shows that A^{1/2} B A^{1/2} is a projection matrix. The rank of A^{1/2} B A^{1/2} is

tr(A^{1/2} B A^{1/2}) = tr(Ik − Dᵀ C^{−1} D A) = k − tr(C^{−1} D A Dᵀ) = k − tr(C^{−1} C) = k − (k − r) = r.

Thus, by arguments similar to those in the proof of Theorem 3.1.5 (or, more directly, by the Cochran theorem), Zᵀ [I(g(ϑ0))]^{1/2} B(ϑ0) [I(g(ϑ0))]^{1/2} Z ∼ χ²r. □
Consequently, the LRT with rejection region Λn < e^{−χ²_{r,α}/2} has asymptotic significance level α, where χ²_{r,α} is the (1 − α)th quantile of the chi-square distribution χ²r.

Under the first type of null hypothesis, say H0 : θ = θ0, the same result holds with degrees of freedom k. This result can be derived in a fashion similar to Theorem 4.4.1 but with less algebra. We do not elaborate here but leave it as an exercise.

To find the power of the test that rejects H0 when λn > χ²_{r,α}, one would need to know the distribution of λn at the particular value θ = θ1 where we want to know the power. But the distribution of λn under θ1 for fixed n is also generally impossible to find, so we may appeal to asymptotics. However, there cannot be a nondegenerate limit distribution on [0, ∞) for λn under a fixed θ1 in the alternative. The following simple example illustrates this difficulty.

Example 4.4.2 Consider the testing problem in Example 4.4.1 again. We saw earlier that

λn = n log( 1 + X̄n² / [(1/n) Σ_i (Xi − X̄n)²] ).

Consider now a value µ ≠ 0. Then X̄n² → µ² (> 0) wp1 and (1/n) Σ_i (Xi − X̄n)² → σ² wp1. Therefore, clearly λn → ∞ wp1 under each fixed µ ≠ 0. Thus, there cannot be a non-degenerate limit distribution for λn under a fixed alternative µ.

Instead, similar to the Pearson’s Chi-square test discussed earlier, we may also consider
the behavior of λn under “local” alternative, that is, for a sequence θ 1n = θ 0 + n−1/2 δ, where
δ = (δ1 , . . . , δk )T . In this case, a non-central χ2 approximation under the alternative could
be achieved.

4.5 The Wald and Rao score tests

Two competitors to the LRT are available in the literature; see Wald (1943) and Rao (1948) for the original introduction of these procedures. Both are general and can be applied to a wide range of problems. Typically, the three procedures are asymptotically first-order equivalent. Recall the null hypothesis

H0 : R(θ) = 0, (4.5)

where R(θ) is a continuously differentiable function from R^k to R^r. The Wald test statistic is defined as

Wn = [R(θ̂n)]ᵀ { [C(θ̂n)]ᵀ [In(θ̂n)]^{−1} C(θ̂n) }^{−1} R(θ̂n),

where θ̂n is an MLE or RLE of θ, In(θ̂n) is the Fisher information matrix based on X, and C(θ) = ∂R(θ)/∂θ. For testing a simple null hypothesis H0 : θ = θ0, R(θ) becomes θ − θ0 and Wn simplifies to

Wn = n(θ̂n − θ0)ᵀ I(θ̂n)(θ̂n − θ0).

Rao (1948) introduced a score test that rejects H0 when the value of

Rn = n [s(θ̃n)]ᵀ [I(θ̃n)]^{−1} s(θ̃n)

is large, where θ̃n is an MLE or RLE of θ under H0. For testing a simple null hypothesis H0 : θ = θ0, Rn simplifies to

Rn = n [s(θ0)]ᵀ [I(θ0)]^{−1} s(θ0).

Here are the asymptotic chi-square results for these two statistics.

Here are the asymptotic chi-square results for these two statistics.

Theorem 4.5.1 Assume the conditions in Theorem 4.2.1 hold. Under H0 given by (4.5), (i) Wn →d χ²r and (ii) Rn →d χ²r.

Proof. (i) Using Theorem 4.2.1 and the Delta Theorem,

√n(R(θ̂n) − R(θ0)) →d Nr(0, [C(θ0)]ᵀ[I(θ0)]^{−1}C(θ0)).

Under H0, R(θ0) = 0 and, therefore,

n [R(θ̂n)]ᵀ { [C(θ0)]ᵀ[I(θ0)]^{−1}C(θ0) }^{−1} R(θ̂n) →d χ²r

by the CMT. Then the result follows from Slutsky's theorem and the facts that θ̂n →p θ0 and that I(θ) and C(θ) are continuous at θ0.

(ii) From the Lagrange multiplier method, θ̃n satisfies

n s(θ̃n) + C(θ̃n) ηn = 0 and R(θ̃n) = 0.

Using Taylor's expansion, one can show that under H0

[C(θ0)]ᵀ(θ̃n − θ0) = op(n^{−1/2})                        (4.6)

and

n s(θ0) − n I(θ0)(θ̃n − θ0) + C(θ0) ηn = op(n^{1/2}).     (4.7)

Multiplying the left-hand side of (4.7) by [C(θ0)]ᵀ[nI(θ0)]^{−1} and using (4.6), we obtain

[C(θ0)]ᵀ[nI(θ0)]^{−1}C(θ0) ηn = −n [C(θ0)]ᵀ[nI(θ0)]^{−1} s(θ0) + op(n^{−1/2}),

which implies

ηnᵀ [C(θ0)]ᵀ[nI(θ0)]^{−1}C(θ0) ηn →d χ²r.

Then the result follows from the above equation and the facts that C(θ̃n)ηn = −n s(θ̃n) and that I(θ) is continuous at θ0. □

Thus, Wald’s test, Rao’s tests and LRT are asymptotically equivalent. Note that Wald’s
test requires computing θbn , whereas Rao’s score test requires computing θ̃ n , not θ
bn . On
the other hand, the LRT requires computing both θ
bn and θ̃ n (or solving two maximization
problems). Hence, one may choose one of these tests that is easy to compute in a particular
application.

4.6 Confidence sets based on likelihoods

The usual duality between testing and confidence intervals says that the acceptance region of a test with size α can be inverted to give a confidence set of coverage probability 1 − α. In other words, suppose A(θ0) is the acceptance region of a size-α test for H0 : θ = θ0, and define C(X) = {θ : X ∈ A(θ)}. Then Pθ0(θ0 ∈ C(X)) = 1 − α, and hence C(X) is a 100(1 − α)% confidence set for θ. For example, the acceptance region of the LRT with Θ0 = {θ : θ = θ0} is

A(θ0) = {x : L(θ0; x) ≥ e^{−χ²_{k,α}/2} L(θ̂n; x)}.

Consequently,

C(X) = {θ : L(θ; X) ≥ e^{−χ²_{k,α}/2} L(θ̂n; X)}

is a 1 − α asymptotically correct confidence set.

This method is often called the inversion of a test. In particular, the LRT, the Wald test, and the Rao score test can all be inverted to construct confidence sets that asymptotically have a 100(1 − α)% coverage probability. The confidence sets constructed from the LRT, the Wald test, and the score test are respectively called the likelihood ratio, Wald, and score confidence sets. Of these, the Wald and the score confidence sets are ellipsoids because of how the corresponding test statistics are defined. The likelihood ratio confidence set is typically more complicated, but it too is approximately an ellipsoid from an asymptotic viewpoint. Here is an example.

Example 4.6.1 Suppose the Xi are iid BIN(1, p), 1 ≤ i ≤ n. For testing H0 : p = p0 versus H1 : p ≠ p0, the LRT statistic is

Λn = p0^Y (1 − p0)^{n−Y} / sup_p p^Y (1 − p)^{n−Y} = p0^Y (1 − p0)^{n−Y} / [ p̂^Y (1 − p̂)^{n−Y} ],

where Y = Σ_{i=1}^n Xi and p̂ = Y/n. Thus, the likelihood ratio confidence set is of the form

C1(X) = { p : p^Y (1 − p)^{n−Y} ≥ e^{−χ²_{1,α}/2} p̂^Y (1 − p̂)^{n−Y} }.

The confidence set obtained by inverting acceptance regions of Wald's test is simply

C2(X) = { p : |p̂ − p| ≤ (zα/2/√n) √(p̂(1 − p̂)) }
      = [ p̂ − zα/2 √(p̂(1 − p̂)/n), p̂ + zα/2 √(p̂(1 − p̂)/n) ],

since (χ²_{1,α})^{1/2} = zα/2 and Wn = n(p̂ − p0)² I(p̂), where I(p) = 1/[p(1 − p)]. This is the textbook confidence interval for p.

For the score test statistic, we need

s(p) = p̂/p − (1 − p̂)/(1 − p) = (p̂ − p)/[p(1 − p)]

and

n [s(p)]² [I(p)]^{−1} = n (p̂ − p)²/[p²(1 − p)²] · p(1 − p) = n(p̂ − p)²/[p(1 − p)].

Hence, the confidence set obtained by inverting acceptance regions of Rao's score test is

C3(X) = { p : n(p̂ − p)² ≤ p(1 − p)χ²_{1,α} } ≡ [lC, uC],

where lC and uC are the roots of the quadratic equation p(1 − p)χ²_{1,α} − n(p̂ − p)² = 0.

Chapter 5

Asymptotics in nonparametric
inference

5.1 Sign test (Fisher)

5.1.1 Test procedure

This is perhaps the earliest example of a nonparametric testing procedure. In fact, the test
was apparently discussed by Laplace in the 1700s. The sign test is a test for the median of
any continuous distribution without requiring any other assumptions.

Hypothesis The null hypothesis of interest here is that of zero shift in location due to the treatment, namely, H0 : θ = 0 versus H1 : θ > 0. This null hypothesis asserts that each of the distributions (not necessarily the same) for the differences (post-treatment minus pre-treatment observations) has median 0, corresponding to no shift in location due to treatment. Certainly, this is essentially equivalent to considering the null hypothesis H0 : θ = θ0, because we can simply use H0 : θ − θ0 = 0.

Procedure The test statistic is given by the total number of X1, X2, . . . , Xn that are greater than θ0, say

Sn = Σ_{i=1}^n I(Xi > θ0),

where I(·) is the indicator function. Large values of Sn lead to rejection of H0. We now need to know the distribution of Sn. Obviously, under H0,

Sn ∼ BIN(n, 1/2),   P(Sn = k) = C_n^k (1/2)^n.

Thus, the p-value is

P(BIN(n, 1/2) ≥ Sn) = Σ_{k=Sn}^n C_n^k (1/2)^n.

For simplicity, we may use the following large-sample approximation to obtain an approximate p-value. Note that

E_{H0}(Sn) = Σ_{i=1}^n (1/2) = n/2,   Var_{H0}(Sn) = Σ_{i=1}^n (1/4) = n/4.

The asymptotic normality of the standardized form

Sn* = [Sn − E_{H0}(Sn)] / Var_{H0}(Sn)^{1/2} = (Sn − n/2)/(n/4)^{1/2}

follows from standard central limit theory for sums of mutually independent, identically distributed random variables.

For large sample sizes, we can also make use of the standard central limit theorem for sums of i.i.d. random variables under the alternative to conclude that

(Sn − n pθ)/[n pθ(1 − pθ)]^{1/2} = (Sn − n(1 − F(0)))/[n(1 − F(0))F(0)]^{1/2}

has an asymptotic N(0, 1) distribution. Thus, for large n, we can approximate the exact power by

Power_θ = 1 − Φ( (bα,1/2 − n pθ)/[n pθ(1 − pθ)]^{1/2} ),

where bα,1/2 denotes the critical value of the level-α sign test, i.e., the upper α point of the BIN(n, 1/2) distribution. We note that both the exact power and the approximate power against an alternative θ > 0 depend on the common distribution only through the value of its CDF F(z) at z = 0. Thus, if two distributions F1 and F2 have a common median θ > 0 and F1(0) = F2(0), then the exact power of the sign test against the alternative θ > 0 will be the same for both F1 and F2.
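A minimal implementation of the sign test, reporting both the exact binomial p-value and the large-sample approximation above, might look as follows; the shifted Cauchy data are only an illustration, and no continuity correction is applied.

    import numpy as np
    from scipy.stats import binom, norm

    # Sign test of H0: theta = theta0 against theta > theta0.
    def sign_test(x, theta0=0.0):
        x = np.asarray(x)
        n = len(x)
        s = int(np.sum(x > theta0))
        p_exact = binom.sf(s - 1, n, 0.5)                  # P(Bin(n, 1/2) >= s)
        p_approx = norm.sf((s - n / 2) / np.sqrt(n / 4))   # normal approximation
        return s, p_exact, p_approx

    rng = np.random.default_rng(9)
    print(sign_test(rng.standard_cauchy(200) + 0.3))       # shifted Cauchy, theta0 = 0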

5.1.2 Asymptotic Properties

Consistency of the sign test

Definition 5.1.1 Let {φn } be a sequence of tests for H0 : F ∈ Ω0 versus H1 : F ∈ Ω1 .


Then, {φn } is consistent against the alternatives Ω1 if

(i) EF (φn ) → α ∈ (0, 1), ∀F ∈ Ω0 ;

(ii) EF (φn ) → 1, ∀F ∈ Ω1 .

As in estimation, consistency is a rather weak property of a sequence of tests. However,


something must be fundamentally wrong with the test for it not to be consistent. If a test
is inconsistent against a large class of alternatives, then it is considered an undesirable test.

Example 5.1.1 For a parametric example, let X1, . . . , Xn be an i.i.d. sample from the Cauchy distribution C(θ, 1). For all n ≥ 1, we know that X̄n also has the C(θ, 1) distribution. Consider testing the hypothesis H0 : θ = 0 versus H1 : θ > 0 by using a test that rejects for large X̄n. The cutoff point, k, is found by making PH0(X̄n > k) = α, so k is simply the upper α quantile (the (1 − α)th quantile) of the C(0, 1) distribution. Then the power of this test is given by

Pθ(X̄n > k) = P(C(θ, 1) > k) = P(θ + C(0, 1) > k) = P(C(0, 1) > k − θ).

This is a fixed number not depending on n. Therefore, the power does not approach 1 as n → ∞, and so the test is not consistent even against parametric alternatives. In contrast, a test based on the median would be consistent in the C(θ, 1) case (why?).

Theorem 5.1.1 If F is a continuous C.D.F. with unique median θ, then the sign test is consistent for tests on θ.

Proof. Recall that the sign test rejects H0 if Sn = Σ_i I(Xi > θ0) ≥ kn. If we choose kn = n/2 + zα √(n/4), then, by the ordinary central limit theorem, we have

PH0(Sn ≥ kn) → α.

The power of the test is

Qn = PF(Sn ≥ kn) = PF( (1/n)Sn − pθ ≥ (1/n)kn − pθ ),

where pθ = Pθ(X1 > θ0). Since we assume θ > θ0, it follows that (1/n)kn − pθ < 0 for all large n. Also, (1/n)Sn − pθ converges in probability to 0 under any F (WLLN), and so Qn → 1. Since the power goes to 1, the test is consistent against any alternative F satisfying θ > θ0. □

Asymptotic relative efficiency (ARE)

We wish to compare the sign test with the t-test in terms of asymptotic relative efficiency. The point is that, at a fixed alternative θ with α fixed, the power of both tests is approximately 1 for large n (both tests are consistent), so there would be no practical way to compare the two tests. Perhaps we can see how the powers compare for θ ≈ θ0. The idea is to take θ = θn → θ0 at such a rate that the limiting power of the tests is strictly between α and 1. If the two powers converge to different values, then we can take the ratio of the limits as a measure of efficiency. The idea is due to E.J.G. Pitman (Pitman 1948). We first give a brief introduction to the concept of ARE for tests.

In estimation, an agreed-on basis for comparing two sequences of estimates whose mean squared errors each converge to zero as n → ∞ is to compare the variances in their limit distributions. Thus, if √n(θ̂1n − θ) →d N(0, σ1²(θ)) and √n(θ̂2n − θ) →d N(0, σ2²(θ)), then the asymptotic relative efficiency (ARE) of θ̂2n with respect to θ̂1n is defined as σ1²(θ)/σ2²(θ).

One can similarly ask what should be a basis for comparison of two sequences of tests
based on statistics T1n and T2n of a hypothesis H0 : θ = θ0 . Suppose we use statistics such
that large values of them correspond to rejection of H0 ; i.e., H0 is rejected if Tn > cn . Let α,

β denote the type 1 error probability and the power of the test, and let θ denote a specific
alternative. Suppose n(α, β, θ, T ) is the smallest sample size such that

PH0 (Tn ≥ cn ) ≤ α, Pθ (Tn ≥ cn ) ≥ β,

Two tests based on T1n and T2n can be compared through the ratio

e(T2 , T1 ) = n(α, β, θ, T1 )/n(α, β, θ, T2 ),

and T1n is preferred if this ratio is less than 1. The threshold sample size n(α, β, θ, T ) is
difficult or impossible to calculate even in the simplest examples. Furthermore, the ratio can
depend on particular choices of α, β, θ.

Fortunately, if α → 0, β → 1, or θ → θ0 (an element of the boundary of Θ0), then the ratio (generally) converges to something that depends on θ alone or is just a constant.
The three respective measures of efficiency correspond to approaches by Bahadur, Hodges
and Lehmann, and Pitman; see Pitman (1948), Hodges and Lehmann (1956), and Bahadur
(1960). Typically, of these, Pitman ARE is the easiest to calculate in most applications by a
fixed recipe under frequently satisfied conditions that we present below. It is also important
to note that the Pitman efficiency works out to just the asymptotic efficiency in the point
estimation problem, with T1n and T2n being considered as the respective estimates. Testing
and estimation come together in the Pitman approach. We state a theorem describing the
calculation of the Pitman efficiency, which is a simple one in form and suffices for many
applications.

Theorem 5.1.2 Let −∞ < h < ∞ and θn = θ0 + h/√n. Consider the following conditions: (i) there exist functions µn(θ) and σn(θ) such that, for all h,

(Tn − µn(θn))/σn(θn) →d N(0, 1);

(ii) µn′(θ0) > 0; (iii) σn(θ0) > 0 and σn(θ) is continuous at θ0. Suppose T1n and T2n each satisfy conditions (i)–(iii), with (µ1n, σ1n) and (µ2n, σ2n) respectively. Then

e(T2, T1) = [σ1n²(θ0)/σ2n²(θ0)] · [µ2n′(θ0)/µ1n′(θ0)]².

See Serfling (1980) for a detailed proof. By this theorem, we are now ready to derive the
ARE of the sign test with respect to the t-test.

Corollary 5.1.1 Let X1, . . . , Xn be i.i.d. observations from any symmetric continuous distribution function F(x − θ) with density f(·), where f(0) > 0, f is continuous at 0, and F(0) = 1/2. The Pitman asymptotic relative efficiency of the one-sample test procedure (one-
or two-sided) based on the sign test statistic Sn with respect to the corresponding normal
theory test based on X̄n is

e(Sn , X̄n ) = 4σF2 f 2 (0),

where σF2 = VarF (X) < ∞.

Proof. For T2n = (1/n)Sn, first notice that Eθ(T2n) = Pθ(X1 > 0) = 1 − F(−θ) and Varθ(T2n) = F(−θ)(1 − F(−θ))/n. We choose µ2n(θ) = 1 − F(−θ) and σ2n²(θ) = F(−θ)(1 − F(−θ))/n. Therefore, µ2n′(θ) = f(−θ) and µ2n′(θ0) = f(0) > 0. For T1n = X̄n, choose µ1n(θ) = θ and σ1n²(θ) = σF²/n. Conditions (i)–(iii) are easily verified here, too, with these choices. Therefore, by Theorem 5.1.2, the result follows immediately. □

Some values of this ARE for selected F(·) are:

F :            Normal   Uniform   Logistic   DE      Cauchy   t3      t5
e(Sn, X̄n) :   0.637    0.333     0.822      2.000   ∞        1.620   0.961
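The entries with finite variance can be reproduced directly from the formula e(Sn, X̄n) = 4σF²f²(0); in the sketch below the particular parametrizations (each centered so that the median is 0) are our own choices, and the Cauchy case is omitted because σF² = ∞.

    from scipy import stats

    # ARE of the sign test versus the t-test: 4 * Var_F(X) * f(0)^2.
    dists = {
        "Normal":   stats.norm(),
        "Uniform":  stats.uniform(loc=-0.5, scale=1.0),
        "Logistic": stats.logistic(),
        "DE":       stats.laplace(),
        "t3":       stats.t(3),
        "t5":       stats.t(5),
    }
    for name, d in dists.items():
        print(name, 4 * d.var() * d.pdf(0.0) ** 2)   # 0.637, 0.333, 0.822, 2.0, 1.62, 0.961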

The sign test, however, cannot get arbitrarily bad with respect to the t-test under some
restrictions on the C.D.F. F , as is shown by the following result, although the t-test can be
arbitrarily bad with respect to the sign test. Hodges and Lehmann (1956) found that within
a certain class of populations, e(Sn , X̄n ) is always at least 1/3 and the bound is attained
when F is any symmetric uniform distribution. Of course, the minimum efficiency is not
very good. We will later discuss alternative nonparametric tests for the location-parameter
problem that have much better asymptotic efficiencies.

5.2 Signed rank test (Wilcoxon)

5.2.1 Procedure

Recall that Hodges and Lehmann proved that the sign test has a small positive lower bound of 1/3 on the Pitman efficiency with respect to the t-test in the class of densities with a finite variance, which is not satisfactory. The problem with the sign test is that it uses only the signs of the Xi − θ0, not their magnitudes. A nonparametric test that incorporates the magnitudes as well as the signs is the Wilcoxon signed-rank test, which requires slightly stronger assumptions on the population distribution; see Wilcoxon (1945).

Suppose that X1, . . . , Xn are observed data from a location-parameter distribution F(x − θ), where F is symmetric about 0, so that θ is the median of the distribution of the Xi. We want to test H0 : θ = 0 against H1 : θ > 0. We start by ranking |X1|, . . . , |Xn| from the smallest to the largest, giving ranks R1, . . . , Rn and order statistics |X|(1), . . . , |X|(n).

Then, the Wilcoxon signed-rank statistic is defined to be the sum of these ranks that
correspond to originally positive observations. That is,
Tn = Σ_{i=1}^n Ri I(Xi > 0),

where the term Ri I(Xi > 0) is known as the positive signed rank of Xi .

When θ is greater than 0, there will tend to be a large proportion of positive X and they
will tend to have the larger absolute values. Hence, we would expect a higher proportion of
positive signed ranks with relatively large sizes. At the α level of significance, reject H0 if
Tn ≥ tα , where the constant tα is chosen to make the type I error probability equal to α.
Lower-sided and two-sided tests can be constructed similarly.

Remark 5.2.1 It may appear that some of the information in the ranking of the sample is being lost by using only the positive signed ranks to compute Tn. Such is not the case. If we define T̃n to be the sum of ranks (of the absolute values) corresponding to the negative X observations, then T̃n = Σ_{i=1}^n (1 − I(Xi > 0))Ri. It follows that Tn + T̃n = Σ_{i=1}^n Ri = n(n + 1)/2. Thus, the test procedures defined above could equivalently be constructed based on T̃n = n(n + 1)/2 − Tn.

To do a test, we need the null distribution of Tn. If we define

Wi = I(|X|(i) corresponds to some positive Xj),

then we have an alternative expression for Tn, namely

Tn = Σ_{i=1}^n i Wi.

It turns out that, under H0, the {Wi} have a relatively simple joint distribution.

Proposition 5.2.1 Under H0 , W1 , . . . , Wn are i.i.d. BIN(1, 1/2) variables.

Proof. By the symmetry assumption, Wi ∼ BIN(1, 1/2) is obvious. To show the independence, we define the so-called anti-rank,

Dk = {i : Ri = k, 1 ≤ i ≤ n},

that is, the index of the observation whose absolute rank is k. Thus, Wk = I(XDk > 0). Let D = (D1, . . . , Dn) and d = (d1, . . . , dn); then we have

P(W1 = w1, . . . , Wn = wn)
 = Σ_d P(I(XD1 > 0) = w1, . . . , I(XDn > 0) = wn | D = d) P(D = d)
 = Σ_d P(I(Xd1 > 0) = w1, . . . , I(Xdn > 0) = wn) P(D = d)
 = (1/2)^n Σ_d P(D = d) = (1/2)^n,

where the second equality comes from the fact that (I(X1 > 0), . . . , I(Xn > 0)) is independent of (D1, . . . , Dn). The independence of the Wi is therefore immediately obtained by noting that P(Wi = wi) = 1/2. The independence between (I(X1 > 0), . . . , I(Xn > 0)) and (D1, . . . , Dn) can be easily established as follows. Note that (D1, . . . , Dn) is a function of |X1|, . . . , |Xn|, and the pairs (I(Xi > 0), |Xi|), i = 1, . . . , n, are mutually independent. Thus, it suffices to show that I(Xi > 0) is independent of |Xi|. In fact,

P(I(Xi > 0) = 1, |Xi| ≤ x) = P(0 < Xi ≤ x) = F(x) − F(0) = F(x) − 1/2
                           = (2F(x) − 1)/2 = P(I(Xi > 0) = 1) P(|Xi| ≤ x). □

When n is large, a large-sample approximation is sufficient to obtain an approximately correct signed-rank test. Proposition 5.2.1, together with the representation of Tn above and the Hajek–Sidak CLT, leads to the asymptotic null distribution of Tn. Clearly,

E_{H0}(Tn) = n(n + 1)/4,   and   Var_{H0}(Tn) = n(n + 1)(2n + 1)/24.

The results above imply the following theorem.

Theorem 5.2.1 Let X1, . . . , Xn be i.i.d. observations from F(x − θ), where F is continuous and symmetric. Under H0 : θ = 0,

[Tn − n(n + 1)/4] / √[n(n + 1)(2n + 1)/24] →d N(0, 1).

Therefore, the signed-rank test can be implemented by rejecting the null hypothesis H0 : θ = 0 if

Tn > n(n + 1)/4 + zα √[n(n + 1)(2n + 1)/24].

The other option would be to find the exact finite-sample distribution of Tn under the null as illustrated above. This can be done in principle, but the CLT approximation works pretty well.
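A minimal Python implementation of this large-sample test might look as follows; the logistic data with shift 0.3 are purely illustrative, and ties among the |Xi| are ignored, which is harmless for continuous data.

    import numpy as np
    from scipy.stats import norm

    # Wilcoxon signed-rank test of H0: theta = 0 against theta > 0,
    # using the normal approximation of Theorem 5.2.1.
    def signed_rank_test(x):
        x = np.asarray(x)
        n = len(x)
        ranks = np.argsort(np.argsort(np.abs(x))) + 1     # ranks of |x_i|
        T = np.sum(ranks[x > 0])                          # positive signed-rank sum
        mean = n * (n + 1) / 4
        sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
        return T, norm.sf((T - mean) / sd)                # statistic, upper-tail p-value

    rng = np.random.default_rng(10)
    print(signed_rank_test(rng.logistic(size=200) + 0.3))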

Unlike the null case, the Wilcoxon signed-rank statistic Tn does not have a representation
as a sum of independent random variables under the alternative. So the asymptotic non-null
distribution of Tn , which is very useful for approximating the power and establishing the
consistency of the test, does not follow from the CLT for independent summands. However,
Tn still belongs to the class of U -statistics, and hence the CLTs for U -statistics can be used
to derive the asymptotic nonnull distribution of Tn and thereby get an approximation to the
power of the Wilcoxon signed-rank test. The following proposition is useful for deriving its
non-null distribution.

Proposition 5.2.2 We have the following equivalent expression for Tn:

Tn = Σ_{i≤j} I( (Xi + Xj)/2 > 0 ).

Proof. Consider the anti-ranks Dk again. Note that

Σ_{i≤j} I( (Xi + Xj)/2 > 0 ) = Σ_{i=1}^n I(Xi > 0) + Σ_{i<j} I( (XDi + XDj)/2 > 0 ).      (5.1)

For i < j, we have |XDi| ≤ |XDj|; consider the expression I((XDi + XDj)/2 > 0). There are four cases to consider: XDi and XDj both positive; both negative; and the two cases where they have mixed signs. In all these cases, though, it is easy to see that

I( (XDi + XDj)/2 > 0 ) = I(XDj > 0).

Using this, we have that the right-hand side of (5.1) is equal to

Σ_{j=1}^n I(XDj > 0) + Σ_{j=1}^n (j − 1) I(XDj > 0) = Σ_{j=1}^n j I(XDj > 0) = Tn,

and we are finished. □

To establish the asymptotic normality of Tn under the alternative, we present the basic results about U-statistics here. Suppose that h(x1, x2, . . . , xr) is some real-valued function of r arguments x1, x2, . . . , xr. The arguments can be real or vector valued. Now suppose X1, . . . , Xn are i.i.d. observations from some C.D.F. F, and for a given r ≥ 1 we want to estimate or make inferences about the parameter θ = θ(F) = EF h(X1, X2, . . . , Xr). We assume n ≥ r. Of course, one unbiased estimate is h(X1, X2, . . . , Xr) itself. But one should be able to find a better unbiased estimate if n > r, because h(X1, X2, . . . , Xr) does not use all of the sample data. Indeed, in this case

(1/C_n^r) Σ_{1≤i1<i2<···<ir≤n} h(Xi1, Xi2, . . . , Xir)

may be a better unbiased estimate than h(X1, X2, . . . , Xr).

Statistics of this form are called U -statistics (U for unbiased), and h is called the kernel
and r its order. We will assume that h is permutation symmetric in order that U has that
property as well.

Example 5.2.1 Suppose r = 1. Then the linear statistic (1/n) Σ_{i=1}^n h(Xi) is clearly a U-statistic. In particular, (1/n) Σ_{i=1}^n Xi^k is a U-statistic for any k. Let r = 2 and h(x1, x2) = (1/2)(x1 − x2)². Then, on calculation,

(1/C_n^2) Σ_{i<j} (1/2)(Xi − Xj)² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)².

Thus, the sample variance is a U-statistic. Let x0 be a fixed real number, r = 1, and h(x) = I(x ≤ x0). Then U = (1/n) Σ_{i=1}^n I(Xi ≤ x0) = Fn(x0), the empirical C.D.F. at x0. Thus Fn(x0) for any specified x0 is a U-statistic.

Example 5.2.2 Let r = 2 and h(X1, X2) = I(X1 + X2 > 0). The corresponding U-statistic is U = (1/C_n^2) Σ_{i<j} I(Xi + Xj > 0). As we will see, U is closely related to the one-sample Wilcoxon statistic Tn.

The summands in the definition of a U-statistic are not independent. Hence, neither the exact distribution theory nor the asymptotics are straightforward. Hajek had the brilliant idea of projecting U onto the class of linear statistics of the form (1/n) Σ_{i=1}^n h(Xi). It turns out that the projection is the dominant part and determines the limiting distribution of U. The main theorems can be found in Serfling (1980).

For k = 1, . . . , r, let

hk (x1 , . . . , xk ) = E[h(X1 , . . . , Xr ) | X1 = x1 , . . . , Xk = xk ]

= E[h(x1 , . . . , xk , Xk+1 , . . . , Xr )].

Define ζk = Var(hk (X1 , . . . , Xk )).

Theorem 5.2.2 Suppose that the kernel h satisfies Eh²(X1, . . . , Xr) < ∞. Assume that
0 < ζ1 < ∞. Then,

(U − θ)/√Var(U) → N(0, 1)   in distribution,

where Var(U) = (r²/n) ζ1 + O(n⁻²).
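To see Theorem 5.2.2 in action, one can simulate the kernel h(x1, x2) = I(x1 + x2 > 0) of Example 5.2.2 under the standard normal: there h1(x) = P(x + X2 > 0) = Φ(x), and Φ(X1) is uniform on (0, 1), so ζ1 = 1/12 and Var(U) ≈ 4ζ1/n = 1/(3n). The following Monte Carlo sketch (Python with NumPy; the sample size and replication number are arbitrary choices) compares the simulated mean and standard deviation of U with θ = 1/2 and √(1/(3n)).

    import numpy as np

    rng = np.random.default_rng(2)
    n, n_rep = 200, 2000

    def wilcoxon_u(x):
        """U-statistic with kernel h(x1, x2) = I(x1 + x2 > 0), averaged over pairs i < j."""
        s = np.add.outer(x, x) > 0
        m = len(x)
        return (s.sum() - np.diag(s).sum()) / (m * (m - 1))   # off-diagonal pairs, each counted twice

    u_vals = np.array([wilcoxon_u(rng.normal(size=n)) for _ in range(n_rep)])

    print(u_vals.mean(), 0.5)                        # theta = P(X1 + X2 > 0) = 1/2
    print(u_vals.std(ddof=1), np.sqrt(1 / (3 * n)))  # sd implied by Var(U) ~ (r^2/n) zeta_1 = 1/(3n)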

With these results, we are ready to present the asymptotic normality of Tn .

Theorem 5.2.3 The Wilcoxon signed-rank statistic Tn is asymptotically normally distributed:

(Tn − E(Tn))/√Var(Tn) → N(0, 1)   in distribution.

Proof. By Proposition 5.2.2,

(1/C_n^2) Tn = (1/C_n^2) Σ_{i≤j} I( (Xi + Xj)/2 > 0 )
             = (1/C_n^2) Σ_{i=1}^{n} I(Xi > 0) + (1/C_n^2) Σ_{i<j} I( (Xi + Xj)/2 > 0 ).

Note that the first term is of smaller order (Op(n⁻¹)) and we need only consider the second
term (Op(1)). The second term, denoted Un, is a U-statistic as defined above. Thus, by
Theorem 5.2.2, (Un − E(Un))/√Var(Un) → N(0, 1) in distribution. The result then follows
immediately from Slutsky's theorem.
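A small simulation makes Theorem 5.2.3 concrete under an alternative. The sketch below (Python with NumPy; the shift θ = 0.3 and the sample size are arbitrary illustrative choices) simulates Tn for shifted normal data, standardizes it by its simulated mean and standard deviation, and checks that a few empirical quantiles are close to the standard normal ones.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_rep, theta = 100, 3000, 0.3

    def t_wilcoxon(x):
        """T_n = number of pairs i <= j with (X_i + X_j)/2 > 0 (Proposition 5.2.2)."""
        s = np.add.outer(x, x) > 0
        return (s.sum() + (x > 0).sum()) / 2    # off-diagonal pairs counted twice, plus the diagonal

    t_vals = np.array([t_wilcoxon(rng.normal(loc=theta, size=n)) for _ in range(n_rep)])
    z = (t_vals - t_vals.mean()) / t_vals.std(ddof=1)

    print(np.quantile(z, [0.05, 0.5, 0.95]))    # roughly -1.645, 0, 1.645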

With the help of this theorem, we can easily establish the consistency of the Tn test.

Theorem 5.2.4 If F is a continuous symmetric C.D.F. with unique median θ, then the
signed-rank test of H0 : θ = 0 is consistent against alternatives with θ > 0.

Proof. Recall that the signed-rank test rejects H0 if Tn = Σ_{i≤j} I( (Xi + Xj)/2 > 0 ) ≥ tn. If we choose

tn = n(n + 1)/4 + zα √( n(n + 1)(2n + 1)/24 ),

then, by Theorem 5.2.1, we have

PH0 (Tn ≥ tn ) → α.

The power of the test is

Qn = PF(Tn ≥ tn) = PF( (1/C_n^2) Tn − pθ ≥ (1/C_n^2) tn − pθ ),

where pθ = Pθ(X1 + X2 > 0). Since θ > 0 under the alternative, pθ > 1/2, whereas
(1/C_n^2) tn → 1/2; hence (1/C_n^2) tn − pθ < 0 for all large n. Also, (1/C_n^2) Tn − pθ
converges in probability to 0 under any F

(Theorem 5.2.2), and so Qn → 1. Since the power goes to 1, the test is consistent against
any alternative F satisfying θ > 0. 
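The consistency can also be seen in a simulation. The following sketch (Python with NumPy and SciPy; the shift θ = 0.25 is an arbitrary alternative) implements the large-sample test with the critical value tn from the proof and shows the rejection rate increasing toward 1 as n grows.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    alpha, theta, n_rep = 0.05, 0.25, 2000

    def t_wilcoxon(x):
        s = np.add.outer(x, x) > 0
        return (s.sum() + (x > 0).sum()) / 2    # T_n as in Proposition 5.2.2

    for n in (25, 50, 100, 200):
        t_crit = n * (n + 1) / 4 + norm.ppf(1 - alpha) * np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
        power = np.mean([t_wilcoxon(rng.normal(loc=theta, size=n)) >= t_crit
                         for _ in range(n_rep)])
        print(n, power)                          # empirical power approaches 1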

Furthermore, Theorem 5.2.2 allows us to derive the relative efficiency of Tn with respect
to other tests. Since Tn takes into account the magnitude as well as the sign of the sample
observations, we expect that overall it may have better efficiency properties than the sign
test. The following striking result was proved by Hodges and Lehmann (1956).

Theorem 5.2.5 Let X1, . . . , Xn be i.i.d. observations from a symmetric continuous distribution
function F(x − θ) with density f(x − θ).

(i) The Pitman asymptotic relative efficiency of the one-sample test procedure based on
Tn with respect to the test based on X̄n is

e(Tn, X̄n) = 12 σF² ( ∫_{−∞}^{∞} f²(u) du )²,

where σF² = VarF(X) < ∞.

(ii) inf_{F∈F} e(Tn, X̄n) = 108/125 ≈ 0.864, where F is the family of continuous, symmetric
C.D.F.s with σF² < ∞. The infimum is attained at the F with density f(x) = b(a² − x²),
|x| < a, where a = √5 and b = 3√5/100.

Proof. (i) Similar to the proof of Corollary 5.1.1, we need to verify the conditions in
Theorem 5.1.2. Let T2n = (1/C_n^2) Tn. By Theorem 5.2.3, T2n is asymptotically normally
distributed, so it suffices to study its expectation and variance. It is easy to see that

E(T2n) = (1/C_n^2) [ n(1 − F(−θ)) + (n(n − 1)/2) Pθ(X1 + X2 > 0) ]
       = Pθ(X1 + X2 > 0) + O(n⁻¹) ≈ ∫ [1 − F(−x − θ)] f(x − θ) dx.

The variance is more complicated; however, by using Theorem 5.2.2,

Var(T2n) = (2²/n) Var(h1(X1)) + O(n⁻²)
         ≈ (4/n) { E[E²(h(X1, X2) | X1)] − (E[E(h(X1, X2) | X1)])² }
         = (4/n) { E{[1 − F(−X1 − θ)]²} − [E h(X1, X2)]² }
         = (4/n) { ∫ [1 − F(−x − θ)]² f(x − θ) dx − ( ∫ [1 − F(−x − θ)] f(x − θ) dx )² }.

Since F is symmetric about zero, 1 − F(−u) = F(u). Thus, to apply the Pitman efficiency
theorem, we choose µn(θ) = ∫ F(x + θ) f(x − θ) dx and

σn²(θ) = (4/n) { ∫ F²(x + θ) f(x − θ) dx − ( ∫ F(x + θ) f(x − θ) dx )² }.
n

Therefore, some calculation yields µ0n (θ) = 2 f (x+θ)f (x−θ)dx and µ0n (0) = 2 f 2 (u)du >
R R

0, while σn2 (0) = n4 VarF [F (X)] = 4 1


n 12
= 1
3n
. For T1n = X̄n , choose µn (θ) = θ and σn2 (θ) =
σF2 /n. With these choices of µn (θ) and σn (θ), the results are immediately follows from
Theorem 5.1.2.

(ii) It can be shown that e(Tn, X̄n) is location and scale invariant, so we can assume that
f is symmetric about 0 and σF² = 1. The problem, then, is to minimize ∫ f²(u) du subject to
∫ f(u) du = ∫ u² f(u) du = 1 and ∫ u f(u) du = 0 (by symmetry). This is equivalent to minimizing

∫ f² + 2b ∫ u² f − 2ba² ∫ f,   (5.2)

where a and b are positive constants to be determined later. We now write (5.2) as

∫ [f² + 2b(x² − a²) f] = ∫_{|x|≤a} [f² + 2b(x² − a²) f] + ∫_{|x|>a} [f² + 2b(x² − a²) f].   (5.3)

First complete the square in the first term on the right side of (5.3) to get

∫_{|x|≤a} [f + b(x² − a²)]² − ∫_{|x|≤a} b²(x² − a²)².   (5.4)

Now (5.3) is equal to the two terms of (5.4) plus the second term on the right side of (5.3).
We can now write the density that minimizes (5.3).

If |x| > a take f(x) = 0, since x² > a², and if |x| ≤ a take f(x) = b(a² − x²), since the
integral in the first term of (5.4) is nonnegative. We can now determine the values of a and

b from the side conditions. From ∫ f = 1, we have

∫_{−a}^{a} b(a² − x²) dx = 1,

which implies that a³b = 3/4. Further, from ∫ x² f = 1, we have ∫_{−a}^{a} x² b(a² − x²) dx = 1,
from which a⁵b = 15/4. Hence solving for a and b yields a = √5 and b = 3√5/100. Now,
∫ f² = ∫_{−√5}^{√5} [ (3√5/100)(5 − x²) ]² dx = 3√5/25,

which leads to the result, inf_{F∈F} e(Tn, X̄n) = 12 (3√5/25)² = 108/125 ≈ 0.864.
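The constants in part (ii) are easy to verify numerically; the following sketch (Python with SciPy, purely as an illustration) checks that the minimizing density integrates to one, has unit variance, and yields 12(∫f²)² = 108/125.

    import numpy as np
    from scipy.integrate import quad

    a = np.sqrt(5.0)
    b = 3 * np.sqrt(5.0) / 100
    f = lambda x: b * (a ** 2 - x ** 2)               # the minimizing density on [-a, a]

    print(quad(f, -a, a)[0])                          # total mass: 1
    print(quad(lambda x: x ** 2 * f(x), -a, a)[0])    # variance: 1
    int_f2 = quad(lambda x: f(x) ** 2, -a, a)[0]
    print(12 * int_f2 ** 2, 108 / 125)                # minimum ARE: 0.864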

Remark 5.2.2 Notice that the worst-case density f is not heavy-tailed but rather has no tails
at all (i.e., it has compact support). Also note that the minimum Pitman efficiency is 0.864
in the class of symmetric densities with a finite variance, a very respectable lower bound.

The following table shows the value of the Pitman efficiency e(Tn, X̄n) for several distributions;
apart from the Cauchy, for which σF² = ∞ and the entry is to be read as a limiting value, they
belong to the family of C.D.F.s F defined in Theorem 5.2.5. The values are obtained by direct
calculation using the formula given above. It is interesting that, even in the normal case, the
Wilcoxon test is 95% efficient with respect to the t-test.

F:             Normal   Uniform   Logistic   DE      Cauchy   t3      t5
e(Tn, X̄n):     0.955    1.000     1.097      1.500   ∞        1.900   1.240
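The entries with finite variance can be reproduced by numerical integration of the formula in Theorem 5.2.5(i); the sketch below (Python with SciPy; the Cauchy is omitted because σF² = ∞ there, and the helper name are_wilcoxon_vs_t is illustrative) does this for the normal, uniform, logistic, double exponential, t3, and t5 distributions, each scaled to have the variance of the standard SciPy parameterization.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def are_wilcoxon_vs_t(dist):
        """12 * sigma_F^2 * (integral of f^2)^2 for a scipy distribution with finite variance."""
        lo, hi = dist.support()
        int_f2, _ = quad(lambda u: dist.pdf(u) ** 2, lo, hi)
        return 12 * dist.var() * int_f2 ** 2

    for name, dist in [("Normal", stats.norm()),
                       ("Uniform", stats.uniform(loc=-np.sqrt(3), scale=2 * np.sqrt(3))),
                       ("Logistic", stats.logistic()),
                       ("DE", stats.laplace()),
                       ("t3", stats.t(3)),
                       ("t5", stats.t(5))]:
        print(name, round(are_wilcoxon_vs_t(dist), 3))
    # approximately 0.955, 1.000, 1.097, 1.500, 1.900, 1.240 -- matching the table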

5.2.2 Point estimator and confidence interval associated with the Wilcoxon signed rank statistic

The Wilcoxon signed-rank statistic Tn can be used to construct a point estimate for the point
of symmetry of a symmetric density, and from it one can construct a confidence interval.

Suppose X1, . . . , Xn ∼ F, where F has a symmetric density centered at θ, and consider
estimating θ. When θ = 0, the distribution of the statistic Tn = Σ_{i≤j} I( (Xi + Xj)/2 > 0 ) is

symmetric about its mean n(n + 1)/4. A natural estimator of θ is the amount θ̂ that should
be subtracted from each Xi so that the value of Tn, when applied to the shifted sample
X1 − θ̂, . . . , Xn − θ̂, is as close to n(n + 1)/4 as possible. Intuitively, we estimate θ by the
amount θ̂ by which the sample should be shifted so that X1 − θ̂, . . . , Xn − θ̂ behaves like a
sample from a population with median 0.

For any pair i, j with i ≤ j, define the Walsh average Wij = (Xi + Xj)/2 (see Walsh
(1959)). Then the Hodges-Lehmann estimate θ̂ is defined as

θ̂ = Median{ Wij : 1 ≤ i ≤ j ≤ n }.
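A direct implementation is straightforward; the sketch below (Python with NumPy; the function name and the Cauchy example are illustrative) forms all Walsh averages with i ≤ j, takes their median, and contrasts the result with the sample median and sample mean on heavy-tailed data. On such data the sample mean is unreliable, while the Hodges-Lehmann estimate and the sample median stay close to θ.

    import numpy as np

    def hodges_lehmann(x):
        """Median of the Walsh averages (X_i + X_j)/2 over all pairs i <= j."""
        x = np.asarray(x)
        i, j = np.triu_indices(len(x))           # all index pairs with i <= j (diagonal included)
        return np.median((x[i] + x[j]) / 2)

    rng = np.random.default_rng(5)
    sample = rng.standard_cauchy(size=99) + 1.0  # symmetric about theta = 1, very heavy tails
    print(hodges_lehmann(sample), np.median(sample), sample.mean())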

Theorem 5.2.6 Let X1, . . . , Xn ∼ F(x − θ), where f, the density of F, is symmetric around
zero. Let θ̂ be the Hodges-Lehmann estimator of θ. Then, if ∫_{−∞}^{∞} f²(u) du < ∞,

√n(θ̂ − θ) → N( 0, 1 / [ 12 ( ∫_{−∞}^{∞} f²(u) du )² ] )   in distribution.

The proof of this theorem can be found in Hettmansperger and McKean (1998). For symmetric
distributions with finite variance, the CLT gives √n(X̄n − θ) → N(0, σF²) in distribution. The
ratio of the variances in the two asymptotic distributions, 12 σF² ( ∫_{−∞}^{∞} f²(u) du )², is the
ARE of θ̂ relative to X̄n. This ARE equals the asymptotic relative efficiency of the Wilcoxon
signed rank test with respect to the t-test in the testing problem (Theorem 5.2.5).

A confidence interval for θ can be constructed using the distribution of Tn. The interval
is found from the following connection with the null distribution of Tn. Let M = n(n + 1)/2 be
the total number of Walsh averages, with order statistics W(1) ≤ · · · ≤ W(M).

Theorem 5.2.7 (Tukey's method of confidence interval) Let kα denote the positive
integer such that P(Tn < kα) = α/2. Then [W(kα), W(M−kα+1)] is a confidence interval for
θ at confidence level 1 − α (0 < α < 1/2).

Proof. Write

P( θ ∈ [W(kα), W(M−kα+1)] ) = 1 − P( θ < W(kα) ) − P( θ > W(M−kα+1) )
                            = 1 − P(Tn ≥ M − kα + 1) − P(Tn ≤ kα − 1)
                            = 1 − 2P(Tn < kα) = 1 − α,

where Tn is evaluated at the shifted sample X1 − θ, . . . , Xn − θ (so that it has its null
distribution), and we use the fact that Tn follows a symmetric distribution about n(n + 1)/4
(Remark ??).

In practice, we can approximate kα by using the asymptotic normality of Tn:

kα ≈ n(n + 1)/4 − zα/2 √( n(n + 1)(2n + 1)/24 ).
This confidence interval is valid for any continuous symmetric distribution. Hence, we can
control the coverage probability at 1 − α without any more specific knowledge about the
form of the underlying distribution of X. Thus, it is a distribution-free confidence interval
for θ over a very large class of populations.
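Putting the pieces together, the interval [W(kα), W(M−kα+1)] with the normal approximation to kα can be computed as follows (Python with NumPy and SciPy; a sketch rather than a polished routine, with kα rounded down, which errs on the conservative side).

    import numpy as np
    from scipy.stats import norm

    def signed_rank_ci(x, alpha=0.05):
        """Distribution-free CI for the center of symmetry based on ordered Walsh averages."""
        x = np.asarray(x)
        n = len(x)
        i, j = np.triu_indices(n)
        walsh = np.sort((x[i] + x[j]) / 2)       # W_(1) <= ... <= W_(M), M = n(n+1)/2
        m = n * (n + 1) // 2
        k = int(np.floor(n * (n + 1) / 4
                         - norm.ppf(1 - alpha / 2) * np.sqrt(n * (n + 1) * (2 * n + 1) / 24)))
        k = max(k, 1)
        return walsh[k - 1], walsh[m - k]        # [W_(k), W_(M - k + 1)], 1-based order statistics

    rng = np.random.default_rng(6)
    sample = rng.logistic(loc=0.5, size=60)
    print(signed_rank_ci(sample))                # should cover theta = 0.5 about 95% of the time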

