Universités de Paris 6 & Paris 7 - CNRS (UMR 7599)
PRÉPUBLICATIONS DU LABORATOIRE
DE PROBABILITÉS & MODÈLES ALÉATOIRES
4, place Jussieu - Case 188 - 75 252 Paris cedex 05
http://www.proba.jussieu.fr
Gaussian limits for vector-valued
multiple stochastic integrals
G. PECCATI & C.A. TUDOR
NOVEMBRE 2003
Prépublication no 861
G. Peccati : Laboratoire de Statistique Théorique et Appliquée, Université Paris VI,
Case 158, 4 Place Jussieu, F-75252 Paris Cedex 05.
C.A. Tudor : Laboratoire de Probabilités et Modèles Aléatoires, CNRS-UMR 7599,
Université Paris VI & Université Paris VII, 4 place Jussieu, Case 188, F-75252 Paris
Cedex 05.
Gaussian limits for vector-valued multiple stochastic integrals
Giovanni PECCATI
Laboratoire de Statistique Théorique et Appliquée
Université de Paris VI
175, rue du Chevaleret
75013 Paris, France
email:
[email protected]
Ciprian A. TUDOR
Laboratoire de Probabilités et Modèles Aléatoires
Universités de Paris VI & VII
175, rue du Chevaleret
75013 Paris, France
email:
[email protected]
October 31, 2003
Abstract
We establish necessary and sufficient conditions for a sequence of $d$-dimensional vectors of multiple stochastic integrals $F_k^d = \left(F_1^k, \ldots, F_d^k\right)$, $k \geq 1$, to converge in distribution to a $d$-dimensional Gaussian vector $N_d = (N_1, \ldots, N_d)$. In particular, we show that if the covariance structure of $F_k^d$ converges to that of $N_d$, then componentwise convergence implies joint convergence. These results extend to the multidimensional case the main theorem of [9].
Keywords – Multiple stochastic integrals; Limit theorems; Weak convergence; Brownian motion.
AMS Subject classification – 60F05; 60H05
1 Introduction
For $d \geq 2$, fix $d$ natural numbers $1 \leq n_1 \leq \ldots \leq n_d$ and, for every $k \geq 1$, let $F_k^d = \left(F_1^k, \ldots, F_d^k\right)$ be a vector of $d$ random variables such that, for each $j = 1, \ldots, d$, $F_j^k$ belongs to the $n_j$th Wiener chaos associated to a real-valued Gaussian process. The aim of this paper is to prove necessary and sufficient conditions for the sequence $F_k^d$ to converge in distribution to a given $d$-dimensional Gaussian vector, as $k$ tends to infinity. In particular, our main result states that, if $\lim_{k \to +\infty} \mathbb{E}\left[F_i^k F_j^k\right] = \delta_{ij}$ for every $1 \leq i, j \leq d$, where $\delta_{ij}$ is the Kronecker symbol, then the following two conditions are equivalent: (i) $F_k^d$ converges in distribution to a standard centered Gaussian vector $N_d(0, I_d)$ ($I_d$ is the $d \times d$ identity matrix); (ii) for every $j = 1, \ldots, d$, $F_j^k$ converges in distribution to a standard Gaussian random variable. Now suppose that, for every $k \geq 1$ and every $j = 1, \ldots, d$, the random variable $F_j^k$ is the multiple Wiener-Itô stochastic integral of a square integrable kernel $f_j^{(k)}$, for instance on $[0,1]^{n_j}$. We recall that, according to the main result of [9], condition (ii) above is equivalent to either one of the following: (iii) $\lim_{k \to +\infty} \mathbb{E}\left[\left(F_j^k\right)^4\right] = 3$ for every $j$; (iv) for every $j$ and every $p = 1, \ldots, n_j - 1$, the contraction $f_j^{(k)} \otimes_p f_j^{(k)}$ converges to zero in $L^2\left([0,1]^{2(n_j - p)}\right)$. Some other necessary and sufficient conditions for (ii) to hold are stated in the subsequent sections, and an extension is provided to deal with the case of a Gaussian vector $N_d$ with a more general covariance structure.
Besides [9], our results should be compared with other central limit theorems (CLTs) for nonlinear functionals of Gaussian processes. The reader is referred to [2], [5], [6], [7], [14] and the references therein for several results in this direction. As in [9], the main tool in the proof of our results is a well-known time-change formula for continuous local martingales, due to Dambis, Dubins and Schwarz (see e.g. [12, Chapter V]). In particular, this technique enables us to obtain our CLTs by estimating and controlling expressions that are related solely to the fourth moments of the components of each vector $F_k^d$.
The paper is organized as follows. In Section 2 we introduce some notation and discuss preliminary results; in Section 3 our main theorem is stated and proved; finally, in Section 4 we present some applications, to the weak convergence of chaotic martingales (that is, martingales admitting a multiple Wiener integral representation), and to the convergence in law of random variables with a finite chaotic decomposition.
2 Notation and preliminary results
Let $H$ be a separable Hilbert space. For every $n \geq 1$, we define $H^{\otimes n}$ to be the $n$th tensor product of $H$, and write $H^{\odot n}$ for the $n$th symmetric tensor product of $H$, endowed with the modified norm $\sqrt{n!}\, \left\| \cdot \right\|_{H^{\otimes n}}$. We denote by $X = \{X(h) : h \in H\}$ an isonormal process on $H$, that is, $X$ is a centered $H$-indexed Gaussian family, defined on some probability space $(\Omega, \mathcal{F}, P)$ and such that
\[
\mathbb{E}\left[ X(h)\, X(k) \right] = \langle h, k \rangle_H, \quad \text{for every } h, k \in H.
\]
For $n \geq 1$, let $\mathcal{H}_n$ be the $n$th Wiener chaos associated to $X$ (see for instance [8, Chapter 1]): we denote by $I_n^X$ the isometry between $\mathcal{H}_n$ and $H^{\odot n}$. For simplicity, in this paper we consider uniquely spaces of the form $H = L^2(T, \mathcal{A}, \mu)$, where $(T, \mathcal{A})$ is a measurable space and $\mu$ is a $\sigma$-finite and atomless measure. In this case, $I_n^X$ can be identified with the multiple Wiener-Itô integral with respect to the process $X$, as defined e.g. in [8, Chapter 1]. We also note that, by a standard Hilbert space argument, our results can be immediately extended to a general $H$. The reader is referred to [9, Section 3.3] for a discussion of this fact.
Let $H = L^2(T, \mathcal{A}, \mu)$; for any $n, m \geq 1$, every $f \in H^{\odot n}$, $g \in H^{\odot m}$, and $p = 1, \ldots, n \wedge m$, the $p$th contraction between $f$ and $g$, noted $f \otimes_p g$, is defined to be the element of $H^{\otimes (m+n-2p)}$ given by
\[
f \otimes_p g\,(t_1, \ldots, t_{n+m-2p}) = \int_{T^p} f(t_1, \ldots, t_{n-p}, s_1, \ldots, s_p)\, g(t_{n-p+1}, \ldots, t_{m+n-2p}, s_1, \ldots, s_p)\, d\mu(s_1) \cdots d\mu(s_p);
\]
by convention, $f \otimes_0 g = f \otimes g$ denotes the tensor product of $f$ and $g$. Given $\phi \in H^{\otimes n}$, we write $(\phi)_s$ for its canonical symmetrization. In the special case $T = [0,1]$, $\mathcal{A} = \mathcal{B}([0,1])$ and $\mu = \lambda$, where $\lambda$ is Lebesgue measure, some specific notation is needed. For any $0 < t \leq 1$, $\Delta_t^n$ stands for the simplex contained in $[0,t]^n$, i.e. $\Delta_t^n := \{(t_1, \ldots, t_n) : 0 < t_n < \ldots < t_1 < t\}$. Given a function $f$ on $[0,1]^n$ and $t \in [0,1]$, $f_t$ denotes the mapping on $[0,1]^{n-1}$ given by
\[
(s_1, \ldots, s_{n-1}) \mapsto f(t, s_1, \ldots, s_{n-1}).
\]
For any $n, m \geq 1$, for any pair of functions $f, g$ such that $f \in L^2([0,1]^n, \mathcal{B}([0,1]^n), d\lambda^{\otimes n}) := L^2([0,1]^n)$ and $g \in L^2([0,1]^m)$, and for every $0 < t \leq 1$ and $p = 1, \ldots, n \wedge m$, we write $f \otimes_p^t g$ for the $p$th contraction of $f$ and $g$ on $[0,t]$, defined as
\[
f \otimes_p^t g\,(t_1, \ldots, t_{n+m-2p}) = \int_{[0,t]^p} f(t_1, \ldots, t_{n-p}, s_1, \ldots, s_p)\, g(t_{n-p+1}, \ldots, t_{m+n-2p}, s_1, \ldots, s_p)\, d\lambda(s_1) \cdots d\lambda(s_p);
\]
as before, $f \otimes_0^t g = f \otimes g$. Finally, we recall that if $H = L^2([0,1], \mathcal{B}([0,1]), d\lambda)$, then $X$ coincides with the Gaussian space generated by the standard Brownian motion
\[
t \mapsto W_t := X\left(\mathbf{1}_{[0,t]}\right), \quad t \in [0,1],
\]
and this implies in particular that, for every $n \geq 2$, the multiple Wiener-Itô integral $I_n^X(f)$, $f \in L^2([0,1]^n)$, can be rewritten in terms of an iterated stochastic integral with respect to $W$, that is: $I_n^X(f) = I_n^1((f)_s) = n! J_n^1((f)_s)$, where
\[
J_n^t((f)_s) = \int_0^t \cdots \int_0^{u_{n-1}} \left( f(u_1, \ldots, u_n) \right)_s dW_{u_n} \cdots dW_{u_1}, \qquad I_n^t((f)_s) = n!\, J_n^t((f)_s), \quad t \in [0,1].
\]
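To fix ideas, the $p$th contraction can be illustrated numerically by replacing $(T, \mathcal{A}, \mu)$ with a finite grid carrying uniform weights. The sketch below is our own toy illustration (the grid size `m` and the unit vector `e` are hypothetical choices, not objects from the paper): for a rank-one product kernel $f = e^{\otimes 3}$ with $\|e\|_H = 1$, the contraction $f \otimes_1 f$ has norm one, so "product" kernels can never satisfy the asymptotically vanishing contraction conditions appearing in Theorem 1 below.

```python
import numpy as np

# Discrete stand-in for H = L^2(T, mu): T = {1,...,m}, mu uniform with weight 1/m,
# so <f, g>_H = sum_i f_i g_i / m.  (A toy model, not the paper's setting.)
m = 40
rng = np.random.default_rng(0)
e = rng.standard_normal(m)
e /= np.sqrt((e ** 2).sum() / m)          # normalize so that ||e||_H = 1

# f = e (x) e (x) e, an element of the symmetric tensor product H^{odot 3}
f = np.einsum('i,j,k->ijk', e, e, e)

# 1st contraction f (x)_1 f: integrate one shared argument against mu
g = np.tensordot(f, f, axes=([2], [2])) / m   # shape (m, m, m, m)

# norm in H^{(x)4}: here f (x)_1 f = <e, e>_H e^{(x)4} = e^{(x)4}, of norm 1
norm_sq = (g ** 2).sum() / m ** 4
print(norm_sq)   # 1.0 up to rounding: the contraction norm does not vanish
```

Since $e^{\otimes 3} \otimes_1 e^{\otimes 3} = \langle e, e \rangle_H\, e^{\otimes 4}$, the printed norm stays at one no matter how `e` is chosen, which is consistent with the fact that $I_3^X(e^{\otimes 3})$ is a fixed non-Gaussian polynomial of the Gaussian variable $X(e)$.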
3 $d$-dimensional CLT
The following facts will be used to prove our main results. Let $H = L^2(T, \mathcal{A}, \mu)$, $f \in H^{\odot n}$ and $g \in H^{\odot m}$. Then:

F1: (see [1, p. 211] or [8, Proposition 1.1.3])
\[
I_n^X(f)\, I_m^X(g) = \sum_{p=0}^{n \wedge m} p!\, \binom{n}{p} \binom{m}{p}\, I_{n+m-2p}^X\left( f \otimes_p g \right); \tag{1}
\]

F2: (see [15, Proposition 1])
\[
(n+m)!\, \left\| (f \otimes_0 g)_s \right\|_{H^{\otimes (n+m)}}^2 = n!\, m!\, \left\| f \right\|_{H^{\otimes n}}^2 \left\| g \right\|_{H^{\otimes m}}^2 + \sum_{q=1}^{n \wedge m} \binom{n}{q} \binom{m}{q}\, n!\, m!\, \left\| f \otimes_q g \right\|_{H^{\otimes (n+m-2q)}}^2; \tag{2}
\]

F3: (see [9])
\[
\mathbb{E}\left[ I_n^X(f)^4 \right] = 3\, (n!)^2 \left\| f \right\|_{H^{\otimes n}}^4 + \sum_{p=1}^{n-1} \frac{(n!)^4}{\left( p!\, (n-p)! \right)^2} \left[ \left\| f \otimes_p f \right\|_{H^{\otimes 2(n-p)}}^2 + \binom{2n-2p}{n-p} \left\| (f \otimes_p f)_s \right\|_{H^{\otimes 2(n-p)}}^2 \right]. \tag{3}
\]
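In the one-dimensional case $f = e^{\otimes n}$, $g = e^{\otimes m}$ with $\|e\|_H = 1$, one has $I_n^X(e^{\otimes n}) = He_n(X(e))$, where $He_n$ is the $n$th (probabilists') Hermite polynomial, and the multiplication formula (1) reduces to the classical linearization identity $He_n\, He_m = \sum_p p! \binom{n}{p}\binom{m}{p} He_{n+m-2p}$. This special case can be checked exactly with NumPy's HermiteE utilities; the sketch below is our own illustration, not part of the paper:

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import hermite_e as He

n, m = 3, 2

# coefficient vectors, in the HermiteE basis, of He_n and He_m
en = np.zeros(n + 1); en[n] = 1.0
em = np.zeros(m + 1); em[m] = 1.0
product = He.hermemul(en, em)          # He_n * He_m, expanded in the He basis

# right-hand side of formula (1) specialised to Hermite polynomials
rhs = np.zeros(n + m + 1)
for p in range(min(n, m) + 1):
    rhs[n + m - 2 * p] += factorial(p) * comb(n, p) * comb(m, p)

print(np.allclose(product, rhs))   # True: He_3 * He_2 = He_5 + 6 He_3 + 6 He_1
```

The same computation with any other pair $(n, m)$ reproduces the corresponding coefficients $p!\binom{n}{p}\binom{m}{p}$ of (1).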
Let $V_d$ be the set of all $(i_1, i_2, i_3, i_4) \in \{1, \ldots, d\}^4$ such that one of the following conditions is satisfied: (a) $i_1 \neq i_2 = i_3 = i_4$; (b) $i_1 \neq i_2 = i_3 \neq i_4$ and $i_4 \neq i_1$; (c) the elements of $(i_1, \ldots, i_4)$ are all distinct. Our main result is the following.
Theorem 1 Let $d \geq 2$, and consider a collection $1 \leq n_1 \leq \ldots \leq n_d < +\infty$ of natural numbers, as well as a collection of kernels
\[
\left\{ f_1^{(k)}, \ldots, f_d^{(k)} : k \geq 1 \right\}
\]
such that $f_j^{(k)} \in H^{\odot n_j}$ for every $k \geq 1$ and every $j = 1, \ldots, d$, and
\[
\lim_{k \to \infty} n_j!\, \left\| f_j^{(k)} \right\|_{H^{\otimes n_j}}^2 = 1, \quad \forall j = 1, \ldots, d, \qquad
\lim_{k \to \infty} \mathbb{E}\left[ I_{n_i}^X\left(f_i^{(k)}\right) I_{n_l}^X\left(f_l^{(k)}\right) \right] = 0, \quad \forall\, 1 \leq i < l \leq d. \tag{4}
\]
Then, the following conditions are equivalent:

(i) for every $j = 1, \ldots, d$,
\[
\lim_{k \to \infty} \left\| f_j^{(k)} \otimes_p f_j^{(k)} \right\|_{H^{\otimes 2(n_j - p)}} = 0 \quad \text{for every } p = 1, \ldots, n_j - 1;
\]

(ii) $\lim_{k \to \infty} \mathbb{E}\left[ \left( \sum_{i=1,\ldots,d} I_{n_i}^X\left(f_i^{(k)}\right) \right)^4 \right] = 3 d^2$, and
\[
\lim_{k \to \infty} \mathbb{E}\left[ \prod_{l=1}^4 I_{n_{i_l}}^X\left(f_{i_l}^{(k)}\right) \right] = 0 \quad \text{for every } (i_1, i_2, i_3, i_4) \in V_d;
\]

(iii) as $k$ goes to infinity, the vector $\left( I_{n_1}^X\left(f_1^{(k)}\right), \ldots, I_{n_d}^X\left(f_d^{(k)}\right) \right)$ converges in distribution to a $d$-dimensional standard Gaussian vector $N_d(0, I_d)$;

(iv) for every $j = 1, \ldots, d$, $I_{n_j}^X\left(f_j^{(k)}\right)$ converges in distribution to a standard Gaussian random variable;

(v) for every $j = 1, \ldots, d$,
\[
\lim_{k \to \infty} \mathbb{E}\left[ I_{n_j}^X\left(f_j^{(k)}\right)^4 \right] = 3.
\]
Proof. We show the implications
\[
\text{(iii)} \Rightarrow \text{(ii)} \Rightarrow \text{(i)} \Rightarrow \text{(iii)} \quad \text{and} \quad \text{(iv)} \Leftrightarrow \text{(v)} \Leftrightarrow \text{(i)}.
\]
[(iii) $\Rightarrow$ (ii)] First notice that, for every $k \geq 1$, the multiple integrals $I_{n_1}^X\left(f_1^{(k)}\right), \ldots, I_{n_d}^X\left(f_d^{(k)}\right)$ are contained in the sum of the first $n_d$ chaoses associated to the Gaussian measure $X$. As a consequence, condition (4) implies (see e.g. [3, Chapter V]) that for every $M \geq 2$ and for every $j = 1, \ldots, d$,
\[
\sup_{k \geq 1} \mathbb{E}\left[ \left| I_{n_j}^X\left(f_j^{(k)}\right) \right|^M \right] < +\infty,
\]
and the conclusion is obtained by standard arguments.
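The "standard arguments" can be sketched as follows. Hypercontractivity on a fixed finite sum of chaoses (see again [3, Chapter V]) provides, for every $M \geq 2$, a constant $c_{M, n_d}$ (the notation is ours) such that
\[
\mathbb{E}\left[ |F|^M \right]^{1/M} \leq c_{M, n_d}\, \mathbb{E}\left[ F^2 \right]^{1/2} \quad \text{for every } F \in \bigoplus_{m=0}^{n_d} \mathcal{H}_m,
\]
while the second moments are bounded uniformly in $k$ by (4). The fourth powers and fourfold products appearing in (ii) are therefore uniformly integrable, so that the convergence in distribution stated in (iii) implies the convergence of the corresponding expectations.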
[(ii) $\Rightarrow$ (i)] The key of the proof is the following simple equality:
\[
\mathbb{E}\left[ \left( \sum_{i=1}^d I_{n_i}^X\left(f_i^{(k)}\right) \right)^4 \right]
= \sum_{i=1}^d \mathbb{E}\left[ I_{n_i}^X\left(f_i^{(k)}\right)^4 \right]
+ 6 \sum_{1 \leq i < j \leq d} \mathbb{E}\left[ I_{n_i}^X\left(f_i^{(k)}\right)^2 I_{n_j}^X\left(f_j^{(k)}\right)^2 \right]
+ \sum_{(i_1, \ldots, i_4) \in V_d} \mathbb{E}\left[ \prod_{l=1}^4 I_{n_{i_l}}^X\left(f_{i_l}^{(k)}\right) \right].
\]
By the multiplication formula (1), for every $1 \leq i < j \leq d$,
\[
I_{n_i}^X\left(f_i^{(k)}\right) I_{n_j}^X\left(f_j^{(k)}\right) = \sum_{q=0}^{n_i} q!\, \binom{n_i}{q} \binom{n_j}{q}\, I_{n_i + n_j - 2q}\left( f_i^{(k)} \otimes_q f_j^{(k)} \right),
\]
and therefore
\[
\mathbb{E}\left[ I_{n_i}^X\left(f_i^{(k)}\right)^2 I_{n_j}^X\left(f_j^{(k)}\right)^2 \right]
= \sum_{q=0}^{n_i} \left[ q!\, \binom{n_i}{q} \binom{n_j}{q} \right]^2 (n_i + n_j - 2q)!\, \left\| \left( f_i^{(k)} \otimes_q f_j^{(k)} \right)_s \right\|_{H^{\otimes (n_i + n_j - 2q)}}^2.
\]
Now, relations (2) and (3) imply that
\[
\mathbb{E}\left[ \left( \sum_{i=1}^d I_{n_i}^X\left(f_i^{(k)}\right) \right)^4 \right] = T_1(k) + T_2(k) + T_3(k),
\]
where
\[
T_1(k) = \sum_{i=1}^d \left\{ 3\, (n_i!)^2 \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^4 + \sum_{p=1}^{n_i - 1} \frac{(n_i!)^4}{\left( p!\, (n_i - p)! \right)^2} \left[ \left\| f_i^{(k)} \otimes_p f_i^{(k)} \right\|_{H^{\otimes 2(n_i - p)}}^2 + \binom{2n_i - 2p}{n_i - p} \left\| \left( f_i^{(k)} \otimes_p f_i^{(k)} \right)_s \right\|_{H^{\otimes 2(n_i - p)}}^2 \right] \right\},
\]
\[
T_2(k) = 6 \sum_{1 \leq i < j \leq d} \left\{ n_i!\, n_j!\, \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^2 \left\| f_j^{(k)} \right\|_{H^{\otimes n_j}}^2 + \sum_{q=1}^{n_i} \left[ \left( q!\, \binom{n_i}{q} \binom{n_j}{q} \right)^2 (n_i + n_j - 2q)!\, \left\| \left( f_i^{(k)} \otimes_q f_j^{(k)} \right)_s \right\|_{H^{\otimes (n_i + n_j - 2q)}}^2 + \binom{n_i}{q} \binom{n_j}{q}\, n_i!\, n_j!\, \left\| f_i^{(k)} \otimes_q f_j^{(k)} \right\|_{H^{\otimes (n_i + n_j - 2q)}}^2 \right] \right\},
\]
and
\[
T_3(k) = \sum_{(i_1, \ldots, i_4) \in V_d} \mathbb{E}\left[ \prod_{l=1}^4 I_{n_{i_l}}^X\left(f_{i_l}^{(k)}\right) \right].
\]
But
\[
3 \sum_{i=1}^d (n_i!)^2 \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^4 + 6 \sum_{1 \leq i < j \leq d} n_i!\, n_j!\, \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^2 \left\| f_j^{(k)} \right\|_{H^{\otimes n_j}}^2 = 3 \left[ \sum_{i=1}^d n_i!\, \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^2 \right]^2,
\]
and the desired conclusion is immediately obtained, since condition (4) ensures that the right side of the above expression converges to $3d^2$ as $k$ goes to infinity.
[(i) $\Rightarrow$ (iii)] We will consider the case
\[
H = L^2([0,1], \mathcal{B}([0,1]), dx), \tag{5}
\]
where $dx$ stands for Lebesgue measure, and use the notation introduced at the end of Section 2. We stress again that the extension to a general separable Hilbert space $H$ can be done by following the line of reasoning presented in [9, Section 3.3], and it is not detailed here. Now suppose (i) and (5) hold. The result is completely proved once the asymptotic relation
\[
\sum_{i=1}^d \lambda_i\, I_{n_i}^X\left(f_i^{(k)}\right) = \sum_{i=1}^d \lambda_i\, n_i!\, J_{n_i}^1\left(f_i^{(k)}\right) \overset{\text{Law}}{\underset{k \uparrow +\infty}{\Longrightarrow}} \left\| \lambda_d \right\|_{\mathbb{R}^d} \times N(0,1)
\]
is verified for every vector $\lambda_d = (\lambda_1, \ldots, \lambda_d) \in \mathbb{R}^d$. Thanks to the Dambis-Dubins-Schwarz Theorem (see [12, Chapter V]), we know that for every $k$ there exists a standard Brownian motion $W^{(k)}$ (which depends also on $\lambda_d$) such that
\[
\sum_{i=1}^d \lambda_i\, n_i!\, J_{n_i}^1\left(f_i^{(k)}\right) = W^{(k)}\left( \int_0^1 \left( \sum_{i=1}^d \lambda_i\, n_i!\, J_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) \right)^2 dt \right)
\]
\[
= W^{(k)}\left[ \sum_{i=1}^d \lambda_i^2 \int_0^1 \left( n_i!\, J_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) \right)^2 dt + 2 \sum_{1 \leq i < j \leq d} \lambda_i \lambda_j\, n_i!\, n_j! \int_0^1 J_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) J_{n_j - 1}^t\left(f_{j,t}^{(k)}\right) dt \right].
\]
Now, since (4) implies
\[
\mathbb{E}\left[ \left( n_i!\, J_{n_i}^1\left(f_i^{(k)}\right) \right)^2 \right] \underset{k \uparrow +\infty}{\longrightarrow} 1
\]
for every $i$, condition (i) yields – thanks to Proposition 3 in [9] – that
\[
\sum_{i=1}^d \lambda_i^2 \int_0^1 \left( n_i!\, J_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) \right)^2 dt \overset{L^2}{\underset{k \uparrow +\infty}{\longrightarrow}} \left\| \lambda_d \right\|_{\mathbb{R}^d}^2.
\]
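The way the convergence of the Brownian clock forces the CLT can be spelled out (a standard argument, sketched here for the reader's convenience): if $C_k \geq 0$ converges in probability to a deterministic $c > 0$, then, writing
\[
W^{(k)}\left( C_k \right) = W^{(k)}(c) + \left( W^{(k)}\left( C_k \right) - W^{(k)}(c) \right),
\]
the first term has the law of $\sqrt{c}\, N(0,1)$ for every $k$, while the second tends to zero in probability, because for every $\eta, \delta > 0$
\[
P\left( \left| W^{(k)}\left( C_k \right) - W^{(k)}(c) \right| > \eta \right) \leq P\left( |C_k - c| > \delta \right) + P\left( \sup_{|s - c| \leq \delta} \left| W^{(k)}(s) - W^{(k)}(c) \right| > \eta \right),
\]
and the last probability does not depend on $k$ and vanishes as $\delta \to 0$.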
To conclude, we shall verify that (i) also implies that, for every $i < j$,
\[
\int_0^1 J_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) J_{n_j - 1}^t\left(f_{j,t}^{(k)}\right) dt = \int_0^1 \frac{I_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) I_{n_j - 1}^t\left(f_{j,t}^{(k)}\right)}{(n_i - 1)!\, (n_j - 1)!}\, dt \overset{L^2}{\underset{k \uparrow +\infty}{\longrightarrow}} 0.
\]
To see this, use once again the multiplication formula (1) to write
\[
\int_0^1 dt\, I_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) I_{n_j - 1}^t\left(f_{j,t}^{(k)}\right)
= \sum_{q=0}^{n_i - 1} (n_i + n_j - 2(q+1))!\, q!\, \binom{n_i - 1}{q} \binom{n_j - 1}{q} \times
\]
\[
\times \int_{\Delta_1^{n_i + n_j - 2(q+1)}} \left[ \int_{s_1}^1 dt\, \left( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \right)_s \left( s_1, \ldots, s_{n_i + n_j - 2(q+1)} \right) \right] dW_{s_1} \cdots dW_{s_{n_i + n_j - 2(q+1)}}
\]
when $n_i < n_j$, or, when $n_i = n_j$,
\[
\int_0^1 dt\, I_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) I_{n_j - 1}^t\left(f_{j,t}^{(k)}\right)
= \int_0^1 dt\, \mathbb{E}\left[ I_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) I_{n_i - 1}^t\left(f_{j,t}^{(k)}\right) \right]
+ \sum_{q=0}^{n_i - 2} (2n_i - 2(q+1))!\, q!\, \binom{n_i - 1}{q}^2 \times
\]
\[
\times \int_{\Delta_1^{n_i + n_j - 2(q+1)}} \left[ \int_{s_1}^1 dt\, \left( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \right)_s \left( s_1, \ldots, s_{n_i + n_j - 2(q+1)} \right) \right] dW_{s_1} \cdots dW_{s_{n_i + n_j - 2(q+1)}}.
\]
In what follows, for every $m \geq 2$, we write $t_m$ to indicate a vector $(t_1, \ldots, t_m) \in \mathbb{R}^m$, whereas $dt_m$ stands for Lebesgue measure on $\mathbb{R}^m$; we shall also use the symbol $\widehat{t}_m = \max_i (t_i)$. Now fix $q < n_i - 1 \leq n_j - 1$ and observe that, by writing $p = q + 1$,
\[
\int_{\Delta_1^{n_i + n_j - 2(q+1)}} \left[ \int_{s_1}^1 dt\, \left( f_{i,t}^{(k)} \otimes_q^t f_{j,t}^{(k)} \right)_s \left( s_1, \ldots, s_{n_i + n_j - 2(q+1)} \right) \right]^2 ds_1 \cdots ds_{n_i + n_j - 2(q+1)}
\]
\[
\leq \int_{[0,1]^{n_i - p}} ds_{n_i - p} \int_{[0,1]^{n_j - p}} d\tau_{n_j - p} \left[ \int_{\widehat{s}_{n_i - p} \vee \widehat{\tau}_{n_j - p}}^1 dt \int_{[0,t]^{p-1}} du_{p-1}\, f_j^{(k)}\left( t, \tau_{n_j - p}, u_{p-1} \right) f_i^{(k)}\left( t, s_{n_i - p}, u_{p-1} \right) \right]^2 = C(k),
\]
and moreover
\[
C(k)^2 = \left\{ \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{p-1}} du_{p-1} \int_{[0,1]^{p-1}} dv_{p-1}\, \mathbf{1}_{\left( \widehat{u}_{p-1} \leq t,\ \widehat{v}_{p-1} \leq t' \right)} \left[ \int_{[0, t \wedge t']^{n_i - p}} ds_{n_i - p}\, f_i^{(k)}\left( t, s_{n_i - p}, u_{p-1} \right) f_i^{(k)}\left( t', s_{n_i - p}, v_{p-1} \right) \right] \times \right.
\]
\[
\left. \times \left[ \int_{[0, t \wedge t']^{n_j - p}} d\tau_{n_j - p}\, f_j^{(k)}\left( t, \tau_{n_j - p}, u_{p-1} \right) f_j^{(k)}\left( t', \tau_{n_j - p}, v_{p-1} \right) \right] \right\}^2 \leq C_i(k) \times C_j(k),
\]
where, for $\gamma = i, j$,
\[
C_\gamma(k) = \int_0^1 dt \int_0^1 dt' \int_{[0,1]^{p-1}} du_{p-1} \int_{[0,1]^{p-1}} dv_{p-1} \left[ \int_{[0, t \wedge t']^{n_\gamma - p}} ds_{n_\gamma - p}\, f_\gamma^{(k)}\left( t, s_{n_\gamma - p}, u_{p-1} \right) f_\gamma^{(k)}\left( t', s_{n_\gamma - p}, v_{p-1} \right) \right]^2,
\]
and the calculations contained in [9] imply immediately that both $C_i(k)$ and $C_j(k)$ converge to zero whenever (i) is verified. On the other hand, when $q = n_i - 1 < n_j - 1$,
\[
\int_{\Delta_1^{n_j - n_i}} \left[ \int_{s_1}^1 dt\, \left( f_{i,t}^{(k)} \otimes_{n_i - 1}^t f_{j,t}^{(k)} \right)_s \left( s_1, \ldots, s_{n_j - n_i} \right) \right]^2 ds_1 \cdots ds_{n_j - n_i}
\]
\[
\leq \int_{[0,1]^{n_j - n_i}} d\tau_{n_j - n_i} \left[ \int_{\widehat{\tau}_{n_j - n_i}}^1 dt \int_{[0,t]^{n_i - 1}} du_{n_i - 1}\, f_j^{(k)}\left( t, \tau_{n_j - n_i}, u_{n_i - 1} \right) f_i^{(k)}\left( t, u_{n_i - 1} \right) \right]^2 = D(k),
\]
and also
\[
D(k)^2 \leq D_1(k) \times D_2(k),
\]
where
\[
D_1(k) = \int_0^1 dt \int_{[0,1]^{n_i - 1}} du_{n_i - 1} \int_0^1 dt' \int_{[0,1]^{n_i - 1}} dv_{n_i - 1} \left[ \int_{[0, t \wedge t']^{n_j - n_i}} d\tau_{n_j - n_i}\, f_j^{(k)}\left( t, \tau_{n_j - n_i}, u_{n_i - 1} \right) f_j^{(k)}\left( t', \tau_{n_j - n_i}, v_{n_i - 1} \right) \right]^2
\]
and
\[
D_2(k) = \int_0^1 dt \int_{[0,1]^{n_i - 1}} du_{n_i - 1} \int_0^1 dt' \int_{[0,1]^{n_i - 1}} dv_{n_i - 1} \left[ f_i^{(k)}\left( t, u_{n_i - 1} \right) f_i^{(k)}\left( t', v_{n_i - 1} \right) \right]^2 = \left\| f_i^{(k)} \right\|_{H^{\otimes n_i}}^4,
\]
so that the conclusion is immediately achieved, due to (4). Finally, recall that for $n_i = n_j$,
\[
\int_0^1 dt\, \mathbb{E}\left[ I_{n_i - 1}^t\left(f_{i,t}^{(k)}\right) I_{n_i - 1}^t\left(f_{j,t}^{(k)}\right) \right]
= (n_i - 1)! \int_0^1 dt \int_{[0,t]^{n_i - 1}} du_{n_i - 1}\, f_j^{(k)}\left( t, u_{n_i - 1} \right) f_i^{(k)}\left( t, u_{n_i - 1} \right)
\]
\[
= \left( (n_i - 1)! \right)^2 \int_{\Delta_1^{n_i}} dt\, du_{n_i - 1}\, f_j^{(k)}\left( t, u_{n_i - 1} \right) f_i^{(k)}\left( t, u_{n_i - 1} \right)
= \left[ \frac{(n_i - 1)!}{n_i!} \right]^2 \mathbb{E}\left[ I_{n_i}^X\left(f_i^{(k)}\right) I_{n_i}^X\left(f_j^{(k)}\right) \right] \underset{k \uparrow +\infty}{\longrightarrow} 0,
\]
again by assumption (4). The proof of the implication is concluded.
[(iv) ⇔ (v) ⇔ (i)] This is a consequence of Theorem 1 in [9].
In what follows, $C_d = \{C_{ij} : 1 \leq i, j \leq d\}$ indicates a $d \times d$ positive definite symmetric matrix. In the case of multiple Wiener integrals of the same order, a useful extension of Theorem 1 is the following.
Proposition 2 Let $d \geq 2$, and fix $n \geq 2$ as well as a collection of kernels
\[
\left\{ f_1^{(k)}, \ldots, f_d^{(k)} : k \geq 1 \right\}
\]
such that $f_j^{(k)} \in H^{\odot n}$ for every $k \geq 1$ and every $j = 1, \ldots, d$, and
\[
\lim_{k \to \infty} n!\, \left\| f_j^{(k)} \right\|_{H^{\otimes n}}^2 = C_{jj}, \quad \forall j = 1, \ldots, d, \qquad
\lim_{k \to \infty} \mathbb{E}\left[ I_n^X\left(f_i^{(k)}\right) I_n^X\left(f_j^{(k)}\right) \right] = C_{ij}, \quad \forall\, 1 \leq i < j \leq d. \tag{6}
\]
Then, the following conditions are equivalent:

(i) as $k$ goes to infinity, the vector $\left( I_n^X\left(f_1^{(k)}\right), \ldots, I_n^X\left(f_d^{(k)}\right) \right)$ converges in distribution to a $d$-dimensional Gaussian vector $N_d(0, C_d) = (N_1, \ldots, N_d)$ with covariance matrix $C_d$;

(ii)
\[
\lim_{k \to \infty} \mathbb{E}\left[ \left( \sum_{i=1,\ldots,d} I_n^X\left(f_i^{(k)}\right) \right)^4 \right] = 3 \left( \sum_{i=1}^d C_{ii} + 2 \sum_{1 \leq i < j \leq d} C_{ij} \right)^2 = \mathbb{E}\left[ \left( \sum_{i=1}^d N_i \right)^4 \right],
\]
and
\[
\lim_{k \to \infty} \mathbb{E}\left[ \prod_{l=1}^4 I_n^X\left(f_{i_l}^{(k)}\right) \right] = \mathbb{E}\left[ \prod_{l=1}^4 N_{i_l} \right] \quad \text{for every } (i_1, i_2, i_3, i_4) \in V_d;
\]

(iii) for every $j = 1, \ldots, d$, $I_n^X\left(f_j^{(k)}\right)$ converges in distribution to $N_j$, that is, to a centered Gaussian random variable with variance $C_{jj}$;

(iv) for every $j = 1, \ldots, d$,
\[
\lim_{k \to \infty} \mathbb{E}\left[ I_n^X\left(f_j^{(k)}\right)^4 \right] = 3 C_{jj}^2;
\]

(v) for every $j = 1, \ldots, d$,
\[
\lim_{k \to \infty} \left\| f_j^{(k)} \otimes_p f_j^{(k)} \right\|_{H^{\otimes 2(n-p)}} = 0, \quad \text{for every } p = 1, \ldots, n-1.
\]
Sketch of the proof – The main idea is contained in the proof of Theorem 1. We shall discuss only the implications (ii) $\Rightarrow$ (v) and (v) $\Rightarrow$ (i). In particular, one can show that (ii) implies (v) by adapting the arguments in the proof of Theorem 1 to show that
\[
\mathbb{E}\left[ \left( \sum_{i=1}^d I_n^X\left(f_i^{(k)}\right) \right)^4 \right] = V_1(k) + V_2(k) + V_3(k),
\]
where
\[
V_1(k) = \sum_{i=1}^d \left\{ 3\, (n!)^2 \left\| f_i^{(k)} \right\|_{H^{\otimes n}}^4 + \sum_{p=1}^{n-1} \frac{(n!)^4}{\left( p!\, (n-p)! \right)^2} \left[ \left\| f_i^{(k)} \otimes_p f_i^{(k)} \right\|_{H^{\otimes 2(n-p)}}^2 + \binom{2n-2p}{n-p} \left\| \left( f_i^{(k)} \otimes_p f_i^{(k)} \right)_s \right\|_{H^{\otimes 2(n-p)}}^2 \right] \right\},
\]
\[
V_2(k) = 6 \sum_{1 \leq i < j \leq d} \left\{ (n!)^2 \left\| f_i^{(k)} \right\|_{H^{\otimes n}}^2 \left\| f_j^{(k)} \right\|_{H^{\otimes n}}^2 + \sum_{q=1}^{n-1} \left[ \left( q!\, \binom{n}{q}^2 \right)^2 (2n - 2q)!\, \left\| \left( f_i^{(k)} \otimes_q f_j^{(k)} \right)_s \right\|_{H^{\otimes (2n-2q)}}^2 + \binom{n}{q}^2 (n!)^2 \left\| f_i^{(k)} \otimes_q f_j^{(k)} \right\|_{H^{\otimes (2n-2q)}}^2 \right] \right\} + 12\, (n!)^2 \sum_{1 \leq i < j \leq d} \left\langle f_i^{(k)}, f_j^{(k)} \right\rangle_{H^{\otimes n}}^2,
\]
and
\[
V_3(k) = \sum_{(i_1, \ldots, i_4) \in V_d} \mathbb{E}\left[ \prod_{l=1}^4 I_n^X\left(f_{i_l}^{(k)}\right) \right].
\]
But (6) yields
\[
3\, (n!)^2 \sum_{i=1}^d \left\| f_i^{(k)} \right\|_{H^{\otimes n}}^4 + 6 \sum_{1 \leq i < j \leq d} \left[ (n!)^2 \left\| f_i^{(k)} \right\|_{H^{\otimes n}}^2 \left\| f_j^{(k)} \right\|_{H^{\otimes n}}^2 + 2\, (n!)^2 \left\langle f_i^{(k)}, f_j^{(k)} \right\rangle_{H^{\otimes n}}^2 \right]
\underset{k \uparrow +\infty}{\longrightarrow} 3 \sum_{i=1}^d C_{ii}^2 + 6 \sum_{1 \leq i < j \leq d} \left( C_{ii} C_{jj} + 2 C_{ij}^2 \right),
\]
and the conclusion is obtained, since
\[
\mathbb{E}\left[ \left( \sum_{i=1}^d N_i \right)^4 \right] = 3 \sum_{i=1}^d C_{ii}^2 + 6 \sum_{1 \leq i < j \leq d} \left( C_{ii} C_{jj} + 2 C_{ij}^2 \right) + \sum_{(i_1, \ldots, i_4) \in V_d} \mathbb{E}\left[ \prod_{l=1}^4 N_{i_l} \right].
\]
Now keep the notation of the last part of the proof of Theorem 1. The implication (v) $\Rightarrow$ (i) follows from the calculations contained therein, implying, thanks to (6), that the quantity
\[
\int_0^1 \left( \sum_{i=1}^d \lambda_i\, n!\, J_{n-1}^t\left(f_{i,t}^{(k)}\right) \right)^2 dt
\]
converges in $L^2$ to $\sum_{i=1,\ldots,d} \lambda_i^2 C_{ii} + 2 \sum_{1 \leq i < j \leq d} \lambda_i \lambda_j C_{ij}$, and therefore the desired conclusion. The remaining details can easily be provided by the reader.
4 Applications
In this section, we will present some consequences of our results. We mention that our list of applications is
by no means exhaustive; for instance, the weak convergence results for quadratic functionals of (fractional)
Brownian motion given in [9], [10] and [11] can be immediately extended to the multidimensional case.
An example is given in the following generalization of the results contained in [11].
Proposition 3 Let $W$ be a standard Brownian motion on $[0,1]$ and, for every $d \geq 2$, define the process
\[
t \mapsto W_t^{\otimes d} := \int_0^t \int_0^{s_1} \cdots \int_0^{s_{d-1}} dW_{s_d} \cdots dW_{s_1}, \quad t \in [0,1].
\]
Then: (a) for every $d \geq 1$ the vector
\[
\frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \left( \int_\varepsilon^1 \frac{da}{a^2}\, W_a^{\otimes 2},\ \int_\varepsilon^1 \frac{da}{a^3}\, W_a^{\otimes 4},\ \ldots,\ \int_\varepsilon^1 \frac{da}{a^{d+1}}\, W_a^{\otimes 2d} \right)
\]
converges in distribution, as $\varepsilon \to 0$, to
\[
\left( N_1(0,1),\ 2\sqrt{3!}\, N_2(0,1),\ \ldots,\ d\sqrt{(2d-1)!}\, N_d(0,1) \right),
\]
where the $N_j(0,1)$, $j = 1, \ldots, d$, are standard, independent Gaussian random variables; (b) by defining, for every $d \geq 1$ and for every $j = 0, \ldots, d$, the positive constant
\[
c(d, j) = \frac{(2d)!}{(d-j)!\, 2^{d-j}},
\]
for every $d \geq 1$ the vector
\[
\frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \left( \int_\varepsilon^1 \frac{da}{a^2}\, W_a^2 - c(1,0) \log \frac{1}{\varepsilon},\ \int_\varepsilon^1 \frac{da}{a^3}\, W_a^4 - c(2,0) \log \frac{1}{\varepsilon},\ \ldots,\ \int_\varepsilon^1 \frac{da}{a^{d+1}}\, W_a^{2d} - c(d,0) \log \frac{1}{\varepsilon} \right)
\]
converges in distribution to a Gaussian vector $(G_1, \ldots, G_d)$ with the following covariance structure:
\[
\mathbb{E}\left[ G_{k'} G_k \right] = \sum_{j=1}^{k'} c(k, j)\, c(k', j)\, j^2\, (2j-1)!
\]
for every $1 \leq k' \leq k \leq d$.
Proof. From Proposition 4.1 in [11], we obtain immediately that, for every $j = 1, \ldots, d$,
\[
\frac{1}{\sqrt{\log \frac{1}{\varepsilon}}} \int_\varepsilon^1 \frac{da}{a^{j+1}}\, W_a^{\otimes 2j} \overset{(d)}{\longrightarrow} j \sqrt{(2j-1)!}\, N_j(0,1),
\]
and the asymptotic independence follows from Theorem 1, since for every $i \neq j$
\[
\mathbb{E}\left[ \int_\varepsilon^1 \frac{da}{a^{i+1}}\, W_a^{\otimes 2i} \int_\varepsilon^1 \frac{db}{b^{j+1}}\, W_b^{\otimes 2j} \right] = \int_\varepsilon^1 \frac{da}{a^{i+1}} \int_\varepsilon^1 \frac{db}{b^{j+1}}\, \mathbb{E}\left[ W_a^{\otimes 2i}\, W_b^{\otimes 2j} \right] = 0.
\]
To prove point (b), use for instance Stroock's formula (see [13]) to obtain that, for every $k = 1, \ldots, d$,
\[
\int_\varepsilon^1 \frac{da}{a^{k+1}}\, W_a^{2k} = \sum_{j=1}^k c(k, j) \int_\varepsilon^1 \frac{da}{a^{j+1}}\, W_a^{\otimes 2j} + c(k, 0) \log \frac{1}{\varepsilon},
\]
so that the result derives immediately from point (a).
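The constants $c(k,j)$ can be read off the Hermite expansion of the monomial $x^{2k}$: under the usual convention $W_a^{\otimes 2j} = a^j\, He_{2j}(W_a/\sqrt{a})/(2j)!$, the identity used above amounts to $x^{2k} = \sum_{j=0}^k \frac{c(k,j)}{(2j)!}\, He_{2j}(x)$, i.e. $W_a^{2k} = \sum_{j=0}^k c(k,j)\, a^{k-j}\, W_a^{\otimes 2j}$. The expansion coefficients can be recovered exactly with NumPy's HermiteE conversion; the following quick check of the constants is our own illustration (this rewriting of Stroock's formula is an assumption about conventions, not a quotation of [13]):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def c(k, j):
    # the constant of Proposition 3: c(k, j) = (2k)! / ((k - j)! 2^(k - j))
    return factorial(2 * k) // (factorial(k - j) * 2 ** (k - j))

for k in (1, 2, 3, 4):
    # HermiteE-basis coefficients of the monomial x^(2k)
    mono = np.zeros(2 * k + 1); mono[2 * k] = 1.0
    herm = He.poly2herme(mono)
    # the coefficient of He_{2j} should equal c(k, j) / (2j)!
    for j in range(k + 1):
        assert np.isclose(herm[2 * j], c(k, j) / factorial(2 * j))
print("Hermite expansion matches c(k, j)")
```

For instance $c(1,0) = 1$, $c(2,0) = 3$ and $c(3,0) = 15$, recovering the even moments $\mathbb{E}\left[W_a^{2k}\right] = c(k,0)\, a^k$ that produce the centering terms of point (b).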
In what follows, we prove a new asymptotic version of Knight's theorem – of the kind discussed e.g. in [12, Chapter XIII] – and a necessary and sufficient condition for a class of random variables living in a finite sum of chaoses – and satisfying some asymptotic property – to have a Gaussian weak limit. Further applications will be explored in a subsequent paper.

More specifically, we are interested in an asymptotic Knight theorem for chaotic martingales, that is, martingales having a multiple Wiener integral representation (we stress that there is no relation with normal martingales with the chaotic representation property, as discussed e.g. in [1, Chapter XXI]). To this end, take $d \geq 2$ integers
\[
1 \leq n_1 \leq n_2 \leq \ldots \leq n_d,
\]
and, for $j = 1, \ldots, d$ and $k \geq 1$, take a class
\[
\left\{ \phi_{j,k}^t : t \in [0,1] \right\}
\]
of elements of $H^{\odot n_j}$, such that there exists a filtration $\{\mathcal{F}_t : t \in [0,1]\}$, satisfying the usual conditions and such that, for every $k$ and every $j$, the process
\[
t \mapsto M_{j,k}(t) = I_{n_j}^X\left( \phi_{j,k}^t \right), \quad t \in [0,1],
\]
is a continuous $\mathcal{F}_t$-martingale on $[0,1]$, vanishing at zero. We denote by $\langle M_{j,k}, M_{j,k} \rangle$ and $\langle M_{j,k}, M_{i,k} \rangle$, $1 \leq i, j \leq d$, the corresponding quadratic variation and covariation processes, whereas $\beta_{j,k}$ is the Dambis-Dubins-Schwarz Brownian motion associated to $M_{j,k}$. Then, we have the following.
Proposition 4 (Asymptotic Knight theorem for chaotic martingales) Under the above assumptions and notation, suppose that, for every $j = 1, \ldots, d$,
\[
\langle M_{j,k}, M_{j,k} \rangle \overset{(d)}{\underset{k \to +\infty}{\longrightarrow}} T_j, \tag{7}
\]
where $t \mapsto T_j(t)$ is a deterministic, continuous and non-decreasing process. If in addition
\[
\lim_{k \to +\infty} \mathbb{E}\left[ \langle M_{i,k}, M_{j,k} \rangle_t \right] = 0 \tag{8}
\]
for every $i \neq j$ and every $t$, then $\{M_{j,k} : 1 \leq j \leq d\}$ converges in distribution to
\[
\{B_j \circ T_j : 1 \leq j \leq d\},
\]
where $\{B_j : 1 \leq j \leq d\}$ is a $d$-dimensional standard Brownian motion.
Proof. Since
\[
M_{j,k}(t) = \beta_{j,k}\left( \langle M_{j,k}, M_{j,k} \rangle_t \right), \quad t \in [0,1],
\]
and $\langle M_{j,k}, M_{j,k} \rangle$ converges weakly to $T_j$, we immediately obtain that $M_{j,k}$ converges in distribution to the Gaussian process $B_j \circ T_j$. Thanks to Theorem 1, it is now sufficient to prove that, for every $i \neq j$ and every $s, t \in [0,1]$, the quantity $\mathbb{E}\left[ M_{j,k}(s)\, M_{i,k}(t) \right]$ converges to zero. But
\[
\mathbb{E}\left[ M_{j,k}(s)\, M_{i,k}(t) \right] = \mathbb{E}\left[ \langle M_{i,k}, M_{j,k} \rangle_{t \wedge s} \right],
\]
and assumption (8) yields the result.
Remark – An analogue of Proposition 4 for general martingales verifying (7) can be found in [12, Exercise XIII.1.16], but in this case (8) has to be replaced by
\[
\langle M_{j,k}, M_{i,k} \rangle \overset{(d)}{\underset{k \to +\infty}{\longrightarrow}} 0
\]
for every $i \neq j$. Since chaotic martingales have a very explicit covariance structure (due to the isometric properties of multiple integrals), condition (8) is usually quite easy to verify. We also recall that – according e.g. to [12, Theorem XIII.2.3] – if condition (7) is dropped, then to prove the asymptotic independence of the Brownian motions $\{\beta_{j,k} : 1 \leq j \leq d\}$ one has to check the condition
\[
\lim_{k \to +\infty} \langle M_{i,k}, M_{j,k} \rangle_{\tau_j^k(t)} = \lim_{k \to +\infty} \langle M_{i,k}, M_{j,k} \rangle_{\tau_i^k(t)} = 0
\]
in probability, for every $i \neq j$ and every $t$, where $\tau_j^k$ and $\tau_i^k$ are the stochastic time-changes associated respectively to $\langle M_{j,k}, M_{j,k} \rangle$ and $\langle M_{i,k}, M_{i,k} \rangle$.
We conclude the paper by stating a result on the weak convergence of random variables, belonging to a finite sum of Wiener chaoses, to a standard normal random variable (the proof is a direct consequence of the arguments contained in the proof of Theorem 1).
Proposition 5 Let $1 \leq n_1 < \ldots < n_d$, $d \geq 2$, and let $f_j^{(k)} \in H^{\odot n_j}$, for every $k \geq 1$ and $1 \leq j \leq d$. Assume that
\[
\lim_{k \uparrow +\infty} n_j!\, \left\| f_j^{(k)} \right\|_{H^{\otimes n_j}}^2 = 1, \quad j = 1, \ldots, d, \tag{9}
\]
and
\[
\lim_{k \uparrow +\infty} \sum_{(i_1, \ldots, i_4) \in V_d} \mathbb{E}\left[ \prod_{l=1}^4 I_{n_{i_l}}^X\left(f_{i_l}^{(k)}\right) \right] \geq 0. \tag{10}
\]
Define moreover $S_d^{(k)} = \sum_{j=1,\ldots,d} I_{n_j}^X\left(f_j^{(k)}\right)$. Then, the following conditions are equivalent:

(i) the sequence $d^{-1/2} S_d^{(k)}$ converges in distribution to a standard Gaussian random variable, as $k$ tends to infinity;

(ii) for every $j = 1, \ldots, d$,
\[
\lim_{k \uparrow +\infty} \left\| f_j^{(k)} \otimes_p f_j^{(k)} \right\|_{H^{\otimes 2(n_j - p)}}^2 = 0, \quad p = 1, \ldots, n_j - 1;
\]

(iii) for every $j = 1, \ldots, d$, $I_{n_j}^X\left(f_j^{(k)}\right)$ converges in law to a standard Gaussian random variable, as $k$ goes to infinity.
An interesting consequence of the above result is the following.

Corollary 6 Let $1 \leq n_1 < \ldots < n_d$, $d \geq 2$, and $f_j^{(k)} \in H^{\odot n_j}$, $k \geq 1$ and $1 \leq j \leq d$. Assume moreover that (9) is verified and that, for every $k$, the random variables $I_{n_j}^X\left(f_j^{(k)}\right)$, $j = 1, \ldots, d$, are pairwise independent. Then, the sequence $d^{-1/2} S_d^{(k)}$, $k \geq 1$, defined as before, converges in law to a standard Gaussian random variable $N(0,1)$ if, and only if, for every $j$, $I_{n_j}^X\left(f_j^{(k)}\right)$ converges in law to $N(0,1)$.

Proof. We know from [15] that, in the case of multiple stochastic integrals, pairwise independence implies mutual independence, so that condition (10) is clearly verified.
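For completeness, we recall the criterion of [15] that underlies this proof: two multiple integrals $I_n^X(f)$ and $I_m^X(g)$, with $f \in H^{\odot n}$ and $g \in H^{\odot m}$, are independent if, and only if,
\[
f \otimes_1 g = 0 \quad \text{a.e. on } T^{n+m-2},
\]
so that pairwise independence can be read directly off the first contractions of the kernels.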
Remarks – (i) If we add the assumption that, for every $j$, the sequence $I_{n_j}^X\left(f_j^{(k)}\right)$, $k \geq 1$, admits a weak limit, say $\mu_j$, then the conclusion of Corollary 6 can be directly deduced from [4, p. 248]. As a matter of fact, in that reference the following implication is proved: if the $d$ probability measures $\mu_j$, $j = 1, \ldots, d$, are such that (a) $\int x\, d\mu_j(x) = 0$ for every $j$, and (b) $\mu_1 \star \cdots \star \mu_d$, where $\star$ indicates convolution, is Gaussian, then each $\mu_j$ is necessarily Gaussian.

(ii) Condition (10) is also satisfied when $d = 2$ and $n_1 + n_2$ is odd.
References
[1] Dellacherie, C., Maisonneuve, B. and Meyer, P.A. (1992), Probabilités et Potentiel, Chapitres XVII
à XXIV, Hermann, Paris.
[2] Giraitis, L. and Surgailis, D. (1985), “CLT and Other Limit Theorems for Functionals of Gaussian
Processes”, Z. Wahr. verw. Gebiete 70(2), 191-212.
[3] Janson, S. (1997), Gaussian Hilbert spaces, Cambridge University Press, Cambridge.
[4] Lukacs, E. (1983), Developments in characteristic functions, Macmillan Co., New York.
[5] Major, P. (1981), Multiple Wiener-Itô Integrals, Lecture Notes in Mathematics 849, Springer Verlag,
New York.
[6] Maruyama, G. (1982), “Applications of the multiplication of the Ito-Wiener expansions to limit
theorems”, Proc. Japan Acad. 58, 388-390.
[7] Maruyama, G. (1985), “Wiener functionals and probability limit theorems, I: the central limit theorem”, Osaka Journal of Mathematics 22, 697-732.
[8] Nualart, D. (1995), The Malliavin Calculus and Related Topics, Springer, Berlin Heidelberg New
York.
[9] Nualart, D. and Peccati, G. (2003), “Central limit theorems for sequences of multiple stochastic
integrals”, to appear in The Annals of Probability.
[10] Peccati, G. and Yor, M. (2003a), “Four limit theorems for quadratic functionals of Brownian motion
and Brownian bridge”, to appear in the volume: Asymptotic Methods in Stochastics, American
Mathematical Society, Communication Series.
[11] Peccati, G. and Yor, M. (2003b), “Hardy's inequality in $L^2([0,1])$ and principal values of Brownian local times”, to appear in the volume: Asymptotic Methods in Stochastics, American Mathematical Society, Communication Series.
[12] Revuz, D. and Yor, M. (1999), Continuous Martingales and Brownian Motion, Springer, Berlin
Heidelberg New York.
[13] Stroock, D.W. (1987), “Homogeneous chaos revisited”, in: Séminaire de Probabilités XXI, Springer, Berlin, LNM 1247, 1-8.
[14] Surgailis, D. (2003), “CLTs for Polynomials of Linear Sequences: Diagram Formula with illustrations”, in: Theory and Applications of Long Range Dependence, Birkhäuser, Boston.
[15] Üstünel, A. S. and Zakai, M. (1989), “Independence and conditioning on Wiener space”, The Annals
of Probability 17 (4), 1441-1453.