
Time reversal for infinite-dimensional diffusions

1989, Probability Theory and Related Fields

In this paper we analyze the reversibility of the diffusion property for the solution of certain infinite-dimensional systems of stochastic differential equations. Necessary and sufficient conditions ensuring this reversibility are given. The proofs use the techniques of the stochastic calculus of variations.

Probability Theory and Related Fields 82, 315-347 (1989). © Springer-Verlag 1989

Time Reversal for Infinite-Dimensional Diffusions

Annie Millet¹*, David Nualart² and Marta Sanz²

¹ Université d'Angers, Faculté des Sciences, 2 Bd Lavoisier, F-49045 Angers, France
² Universitat de Barcelona, Facultat de Matemàtiques, Gran Via 585, E-08007 Barcelona, Spain

Summary. In this paper we analyze the reversibility of the diffusion property for the solution of certain infinite-dimensional systems of stochastic differential equations. Necessary and sufficient conditions ensuring this reversibility are given. The proofs use the techniques of the stochastic calculus of variations.

0. Introduction

Consider a diffusion process $X=\{X^i_t,\ i\in\mathbb{Z};\ t\in[0,1]\}$, solution of the infinite-dimensional system of stochastic differential equations

$$dX^i_t=\sum_{\beta\in\mathbb{Z}}\sigma^i_\beta(t,X_t)\,dW^\beta_t+b^i(t,X_t)\,dt, \qquad (0.1)$$

where $\{W^\beta_t,\ \beta\in\mathbb{Z};\ t\in[0,1]\}$ is an infinite-dimensional Brownian motion with variance $\gamma_\beta t$, the $\gamma_\beta$ being positive real numbers such that $\sum_\beta\gamma_\beta<\infty$. These equations have been considered by several authors (cf. e.g. [3, 17, 8]); they are related to certain continuous-state Ising-type models in statistical mechanics, and also to models arising in population genetics.

Our aim is to study the time-reversed process $\bar X_t=X_{1-t}$. More precisely, we ask whether $\bar X_t$ is again a diffusion process, and we want to compute its diffusion and drift coefficients. In the finite-dimensional case this problem has been studied by different authors (see [14, 1, 7]). Using an integration by parts formula we have proved in [12] that $\bar X_t$ is the solution of a martingale problem if and only if the following condition holds:

(C) The sums of distributional derivatives $\sum_{j=1}^d\nabla_j(a^{ij}(t,x)\,p_t(x))$ are locally integrable functions, $i=1,\dots,d$, where $d$ is the dimension of $X_t$ and $a=\sigma\sigma^*$.

* This work was partly done when the first author was visiting the "Centre de Recerca Matemàtica" at Barcelona.
Furthermore, the diffusion and drift coefficients of the reversed process are given by

$$\bar a^{ij}(1-t,x)=a^{ij}(t,x),\qquad \bar b^i(1-t,x)=-b^i(t,x)+[p_t(x)]^{-1}\sum_{j=1}^d\nabla_j\bigl(a^{ij}(t,x)\,p_t(x)\bigr). \qquad (0.2)$$

In this paper we extend these results to the infinite-dimensional case, using the same kind of techniques. It should be mentioned that time reversal of infinite-dimensional diffusions has also been studied by Föllmer and Wakolbinger in [4], using the notion of entropy. In their paper $\{W^\beta_t,\ \beta\in\mathbb{Z},\ t\in[0,1]\}$ is a sequence of independent standard Wiener processes, and $\sigma=\mathrm{Id}$. Consequently their situation can be considered as a particular case of our setting, with $\sigma^i_\beta(t,x)=\delta^i_\beta\,\gamma_\beta^{-1/2}$.

The natural infinite-dimensional analogue of (0.2) is

$$\bar a^{ij}(1-t,x)=a^{ij}(t,x),\qquad \bar b^i(1-t,x)=-b^i(t,x)+[p_t(x_i\mid\hat x_i)]^{-1}\sum_{j\in I(i)}\nabla_j\bigl(a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)\bigr), \qquad (0.3)$$

where $p_t(x_i\mid\hat x_i)$ is the conditional density of $X_i(t)=(X^j(t),\ j\in I(i))$ given $\hat X_i(t)=(X^j(t),\ j\notin I(i))$, and for every $i\in\mathbb{Z}$, $I(i)$ is a finite set of indices. We also assume that $a^{ij}(t,x)=0$ if $j\notin I(i)$.

The paper is organized as follows. In Sect. 1 we present some preliminary results about derivation on the Wiener space that will be needed in the sequel. Section 2 deals with infinite systems of stochastic differential equations with Lipschitz coefficients. Under suitable conditions on the coefficients we show that the solution of (0.1) belongs to the Sobolev space $\mathbb{D}_{2,1}$, and we obtain a linear stochastic differential equation for the gradient. Similar results for finite-dimensional diffusions have been proved by Bouleau and Hirsch in [2]. In Sect. 3 we state a martingale problem for the reversed process, and we obtain necessary conditions, analogous to condition (C), for this problem to hold. We identify the diffusion and drift coefficients. As in the finite-dimensional case, the proof of this fact is based on Itô's formula, and does not require the techniques of derivation on the Wiener space.
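The finite-dimensional formula (0.2) can be checked numerically in the simplest case: a one-dimensional stationary Ornstein-Uhlenbeck process, for which the reversed drift computed from (0.2) coincides with the forward drift, i.e. the process is reversible. The process, its parameters, and the finite-difference evaluation below are our own illustration, not taken from the paper.

```python
import math

# Stationary Ornstein-Uhlenbeck process: dX = -theta*X dt + sigma dW, with
# diffusion coefficient a = sigma^2 and stationary density
# p(x) proportional to exp(-theta*x^2/sigma^2).
theta, sigma = 1.5, 0.8

def p(x):
    return math.exp(-theta * x * x / sigma**2)

def reversed_drift(x, h=1e-5):
    """Formula (0.2): b_rev(x) = -b(x) + p(x)^{-1} d/dx( a * p(x) )."""
    b = -theta * x
    d_ap = sigma**2 * (p(x + h) - p(x - h)) / (2 * h)  # central difference
    return -b + d_ap / p(x)

# For the stationary OU process the reversed drift equals the forward drift
# -theta*x: time reversal leaves the dynamics unchanged.
for x in [-2.0, -0.5, 0.0, 1.0, 2.0]:
    assert abs(reversed_drift(x) - (-theta * x)) < 1e-6
```

The same cancellation is what makes reversible invariant measures (discussed at the end of Sect. 5) special: the divergence term in (0.2) exactly compensates the sign change of the drift.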
Section 4 is devoted to proving the sufficiency of the preceding integrability conditions for the martingale problem considered in Sect. 3, with coefficients given by (0.3). The above results need the hypothesis of existence of the conditional densities $p_t(x_i\mid\hat x_i)$. In Sect. 5, sufficient conditions are given that ensure the absolute continuity of these conditional laws in particular cases. The proof is based again on the techniques of the Malliavin calculus.

The problem of time reversal is related to the structure of invariant measures. More precisely, in some particular situations invariant measures that are reversible with respect to diffusion operators are characterized as Gibbs measures (see [3] and [8]). At the end of Sect. 5 we discuss this problem in the general setting of diffusion processes solving (0.1).

Throughout the paper we usually denote by the letter $K$ a generic positive constant, whose value can change from one formula to another. Also, in most cases we use the convention of summation over repeated indices; however, we sometimes distinguish the constants and write the sums over repeated indices explicitly, for clarity.

1. Some Elements of the Stochastic Calculus of Variations

In this section we recall some basic facts about the stochastic calculus of variations. For a more detailed exposition of this subject we refer to Malliavin [11], Ikeda-Watanabe [9], Watanabe [20], Zakai [21], Nualart-Zakai [16] and Nualart-Pardoux [15].

Let $(T,\mathcal{T},\mu)$ be a $\sigma$-finite separable measure space. Consider the Hilbert space $H=L^2(T,\mathcal{T},\mu)$. Suppose that $\{W(h),\ h\in H\}$ is a zero-mean Gaussian process with covariance function given by $E(W(h_1)W(h_2))=\langle h_1,h_2\rangle_H$, defined on some probability space $(\Omega,\mathcal{F},P)$. We will write $W(A)$ for $W(1_A)$ if $A\in\mathcal{T}$ and $\mu(A)<\infty$. We assume that the $\sigma$-algebra $\mathcal{F}$ is generated by the random variables $\{W(h),\ h\in H\}$. Let $E$ be a real separable Hilbert space.
An $E$-valued random variable $F:\Omega\to E$ will be called smooth if

$$F=\sum_{i=1}^M f_i(W(h_1),\dots,W(h_n))\,v_i,$$

where $f_i\in\mathcal{C}_b^\infty(\mathbb{R}^n)$, $h_1,\dots,h_n\in H$, and $v_1,\dots,v_M\in E$. Here $\mathcal{C}_b^\infty(\mathbb{R}^n)$ denotes the set of $\mathcal{C}^\infty$ functions $f:\mathbb{R}^n\to\mathbb{R}$ which are bounded, together with all their derivatives. The derivative of a smooth random variable is the random variable taking values in the Hilbert space $H\otimes E\cong L^2(T,E)$ given by

$$D_rF=\sum_{i=1}^M\sum_{k=1}^n\partial_kf_i(W(h_1),\dots,W(h_n))\,h_k(r)\,v_i,$$

for $r\in T$. For any real number $p>1$, $\mathbb{D}_{p,1}(E)$ will denote the Banach space obtained as the completion of the set of smooth random variables with respect to the norm $\|F\|_{L^p(\Omega,E)}+\|DF\|_{L^p(\Omega,H\otimes E)}$. Set $\mathbb{D}_{\infty,1}(E)=\bigcap_{p>1}\mathbb{D}_{p,1}(E)$; if $E=\mathbb{R}$, we simply write $\mathbb{D}_{p,1}$ for $\mathbb{D}_{p,1}(\mathbb{R})$.

We will mainly use functionals of $W$ belonging to the Hilbert space $\mathbb{D}_{2,1}$. Consider the orthogonal Wiener-chaos decomposition $L^2(\Omega,E)=\bigoplus_{n=0}^\infty\mathcal{H}_n$, and denote by $J_n$ the projection operator on $\mathcal{H}_n$. Then the space $\mathbb{D}_{2,1}$ coincides with the set of random variables $F\in L^2(\Omega,E)$ such that

$$E(\|DF\|^2_{H\otimes E})=\sum_{n=1}^\infty n\,E(\|J_nF\|^2)<\infty.$$

We recall the following basic properties of the derivative operator.

Proposition 1.1 (chain rule). Let $g:\mathbb{R}^d\to\mathbb{R}$ be a continuously differentiable function with bounded partial derivatives. Suppose that $F=(F^1,\dots,F^d)$ is a random variable in $\mathbb{D}_{p,1}(\mathbb{R}^d)$ for $d\ge1$, $p>1$. Then $g(F)\in\mathbb{D}_{p,1}$ and

$$D[g(F)]=\nabla_ig(F)\,DF^i. \qquad (1.1)$$

Proposition 1.2. Let $F\in L^2(\Omega,\mathcal{F}_A,P)$, where $A\in\mathcal{T}$ and $\mathcal{F}_A$ is the $\sigma$-algebra generated by the random variables $\{W(B):\ B\subset A,\ B\in\mathcal{T}\}$. Assume $F\in\mathbb{D}_{2,1}$. Then $D_tF=0$ for all $(t,\omega)$ in $A^c\times\Omega$ a.e.

Denote by $\delta$ the dual of the operator $D$ defined on $\mathbb{D}_{2,1}(H\otimes E)\subset L^2(\Omega\times T,E)$. The domain of $\delta$ (denoted by $\mathrm{Dom}\,\delta(E)$) is the set of square integrable processes $u\in L^2(\Omega\times T,E)\cong L^2(\Omega,H\otimes E)$ such that there exists a constant $C$ with

$$|E(\langle u,DF\rangle_{H\otimes E})|\le C\,\|F\|_{L^2(\Omega,E)},$$

for any $F\in\mathbb{D}_{2,1}(E)$. Then, if $F\in\mathbb{D}_{2,1}(E)$ and $u\in\mathrm{Dom}\,\delta(E)$, we have the duality relation

$$E(\langle u,DF\rangle_{H\otimes E})=E(\langle\delta(u),F\rangle_E). \qquad (1.2)$$
Actually (1.2) can be regarded as an integration by parts formula, which plays a basic role in the applications of the Malliavin calculus. The operator $\delta$ coincides with the Skorohod stochastic integral (see Skorohod [18], Gaveau-Trauber [5]). This stochastic integral allows one to integrate nonadapted processes, and is an extension of the Itô integral (see Nualart-Zakai [16], and Nualart-Pardoux [15]). By means of the Wiener-chaos expansion it can be proved that $\mathbb{D}_{2,1}(H\otimes E)\subset\mathrm{Dom}\,\delta(E)$.

Let us mention the following relation between the operators $D$ and $\delta$.

Proposition 1.3. Let $u\in\mathbb{D}_{2,1}(H\otimes E)$ be such that $D_tu\in\mathrm{Dom}\,\delta(E)$ for any $t\in T$, $\mu$-a.e., and suppose that $\int_TE(|\delta(D_tu)|^2)\,\mu(dt)<\infty$. Then $\delta(u)\in\mathbb{D}_{2,1}(E)$, and

$$D_t(\delta(u))=\delta(D_tu)+u_t. \qquad (1.3)$$

We will need the following proposition, which extends the chain rule to Lipschitz functions; see also [12] for a similar weaker result. Fix $d\ge1$, and let $\alpha_n(x)$ be a sequence of regularization kernels on $\mathbb{R}^d$ of the form

$$\alpha_n(x)=n^d\,\alpha(nx), \qquad (1.4)$$

where $\alpha$ is a non-negative function of $\mathcal{C}_0^\infty(\mathbb{R}^d)$ whose compact support contains 0, and such that $\int\alpha(x)\,dx=1$.

Proposition 1.4. Let $g:\mathbb{R}^d\to\mathbb{R}$ be a globally Lipschitz function. Suppose that $F=(F^1,\dots,F^d)$ belongs to $\mathbb{D}_{p,1}(\mathbb{R}^d)$ for some $p>2$. Then $g(F)\in\mathbb{D}_{p,1}$, and there exist $\sigma(F)$-measurable random variables $U_k$ such that $|U_k|\le\|\nabla_kg\|_\infty$, and

$$D[g(F)]=U_k\,DF^k.$$

If $F$ has a density, then $U_k=\nabla_kg(F)$.

Remark. The random variables $U_k$ are the weak $\sigma(L^\infty,L^1)$-limits of a subsequence of $\nabla_kg_n(F)$, where $g_n=g*\alpha_n$, for regularization kernels $\alpha_n$ on $\mathbb{R}^d$.

Proof. Let $\alpha_n$ be a sequence of regularization kernels on $\mathbb{R}^d$ of the form (1.4). For each $n$, set $g_n=g*\alpha_n$. Then, by Proposition 1.1, $g_n(F)\in\mathbb{D}_{p,1}$, and $D[g_n(F)]=\nabla_kg_n(F)\,DF^k$. Since the Lipschitz constants of the $g_n$ are bounded by that of $g$, the $\sigma(F)$-measurable sequences $\nabla_kg_n(F)$ are bounded by $\|\nabla_kg\|_\infty$.
Hence there exists a subsequence, still denoted by $(n)$, such that $\nabla_kg_n(F)$ converges to $U_k$ in the weak topology of $L^\infty$, for each $k=1,\dots,d$. Also, as $n\to\infty$, the sequence $g_n(F)$ converges to $g(F)$ in $L^p(\Omega)$, and $\sup_nE(\|D[g_n(F)]\|_H^p)\le K_g\,E(\|DF\|_H^p)$. Consequently one can extract a further subsequence, still denoted by $(n)$, such that the sequence $D[g_n(F)]$ converges to some $G$ in the weak topology of $L^p(\Omega,H)$. The projections of $G$ on the Wiener chaos of $L^2(\Omega\times T)$ are the same as those of the series which formally defines $D[g(F)]$, and hence $g(F)\in\mathbb{D}_{p,1}$. Also, for each $Z\in L^2(\Omega\times T)$,

$$\langle D[g(F)],Z\rangle=\lim_n\langle D[g_n(F)],Z\rangle=\lim_nE\Bigl[(\nabla_kg_n)(F)\int_TD_rF^k\,Z_r\,\mu(dr)\Bigr]=E\Bigl[U_k\int_TD_rF^k\,Z_r\,\mu(dr)\Bigr]=\langle U_k\,DF^k,Z\rangle. \qquad\square$$

Remark. Sometimes we need to introduce random elements which are independent of the Gaussian process $W$. In that case our reference probability space will be the product of the Gaussian space $(\Omega,\mathcal{F},P)$ with some separable probability space $(\Omega_0,\mathcal{F}_0,P_0)$. Then we can take $E=L^2(\Omega_0,\mathcal{F}_0,P_0)$ and identify $L^2(\Omega,E)$ with $L^2(\Omega\times\Omega_0,\mathcal{F}\otimes\mathcal{F}_0,P\times P_0)$. In this way we can define the operator $D$ on square integrable random variables defined on the product space $\Omega\times\Omega_0$. The preceding results concerning the operators $D$ and $\delta$ can be properly translated to this more general situation.

2. Preliminary Results on Infinite Systems of Stochastic Differential Equations

Let $\gamma=\{\gamma_i;\ i\in\mathbb{Z}\}$ be a summable sequence of strictly positive real numbers. Set

$$L^2(\gamma)=\Bigl\{x=(x^i)\in\mathbb{R}^{\mathbb{Z}}:\ \|x\|_\gamma^2=\sum_i\gamma_i\,|x^i|^2<\infty\Bigr\},$$
$$L^2(\gamma\times\gamma)=\Bigl\{x=(x^{i\beta})\in\mathbb{R}^{\mathbb{Z}\times\mathbb{Z}}:\ \|x\|_{\gamma\times\gamma}^2=\sum_{i,\beta}\gamma_i\,\gamma_\beta\,|x^{i\beta}|^2<\infty\Bigr\}.$$

Denote by $W=\{W^i_t;\ i\in\mathbb{Z},\ t\in[0,1]\}$ an independent family of Brownian motions with variance $\gamma_it$. We can apply the results of Sect. 1 to the measurable space $T=[0,1]\times\mathbb{Z}$ endowed with the product measure $\mu=\lambda\times\gamma$, where $\lambda$ denotes the Lebesgue measure on $[0,1]$.
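The role of the summable weights $\gamma_i$ can be made concrete: the constant sequence $x^i\equiv1$, which is not square-summable, still has finite $\|\cdot\|_\gamma$-norm. A minimal sketch with the hypothetical choice $\gamma_i=2^{-|i|}$ (so $\sum_i\gamma_i=3$), truncated to finitely many coordinates:

```python
# Weighted l^2 norm on R^Z, truncated to |i| <= N. With gamma_i = 2^{-|i|}
# the weights are summable (sum over Z equals 3), so the constant sequence
# x^i = 1 belongs to L^2(gamma) even though it is not square-summable.
N = 60
idx = range(-N, N + 1)
gamma = {i: 2.0 ** (-abs(i)) for i in idx}

def norm2_gamma(x):
    # ||x||_gamma^2 = sum_i gamma_i |x^i|^2
    return sum(gamma[i] * x[i] ** 2 for i in idx)

x = {i: 1.0 for i in idx}
# ||x||_gamma^2 = sum_i gamma_i = 3 - 2^{-59} after truncation at |i| = 60
assert abs(norm2_gamma(x) - 3.0) < 1e-12
```

The same weights give the Brownian motions $W^i$ variance $\gamma_it$, so that $\sum_i\gamma_i\,E|W^i_t|^2$ is finite; this is what makes the state space $L^2(\gamma)$ the natural one for the system (0.1).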
Now $H$ is the Hilbert space $L^2([0,1];L^2(\gamma))\cong L^2([0,1]\times\mathbb{Z},\lambda\times\gamma)$, and for any $h\in H$ the Gaussian random variable $W(h)$ is given by the stochastic integral $\sum_i\int_0^1h^i(t)\,dW^i_t$. If $F$ belongs to $\mathbb{D}_{2,1}$, we denote its derivative by $DF=\{D^i_tF;\ t\in[0,1],\ i\in\mathbb{Z}\}$, which is a random variable taking values in $L^2([0,1]\times\mathbb{Z})$.

Suppose that $u=\{u^i_t;\ t\in[0,1],\ i\in\mathbb{Z}\}$ is an $L^2(\gamma)$-valued square integrable, adapted process. That means $E\bigl(\int_0^1\sum_i\gamma_i|u^i_t|^2\,dt\bigr)<\infty$, and $u_t$ is $\mathcal{F}_t$-measurable for any $t\in[0,1]$, where $\{\mathcal{F}_t;\ t\in[0,1]\}$ denotes the filtration generated by $\{W_t\}$. Then, as in the finite-dimensional case (see [15]), it holds that $u\in\mathrm{Dom}\,\delta$, and $\delta(u)$ coincides with the sum of Itô integrals $\sum_i\int_0^1u^i_t\,dW^i_t$. Furthermore we have the following isometry property:

$$E[\delta(u)^2]=E\Bigl(\int_0^1\sum_i\gamma_i\,|u^i_t|^2\,dt\Bigr). \qquad (2.1)$$

Let $b:[0,1]\times\mathbb{R}^{\mathbb{Z}}\to\mathbb{R}^{\mathbb{Z}}$ and $\sigma:[0,1]\times\mathbb{R}^{\mathbb{Z}}\to\mathbb{R}^{\mathbb{Z}\times\mathbb{Z}}$ be maps such that there exists a constant $K>0$ for which:

(H1) $\forall x\in L^2(\gamma)$: $\sup_t\{\|b(t,x)\|_\gamma^2+\|\sigma(t,x)\|_{\gamma\times\gamma}^2\}\le K(1+\|x\|_\gamma^2)$;
$\forall x,y\in L^2(\gamma)$: $\sup_t\{\|b(t,x)-b(t,y)\|_\gamma^2+\|\sigma(t,x)-\sigma(t,y)\|_{\gamma\times\gamma}^2\}\le K\|x-y\|_\gamma^2$.

Remark. All the results in this section remain true if the first (usual growth) condition on $b$ is replaced by the following weaker one:

$$\forall x\in L^2(\gamma),\qquad \sup_t\sum_i\gamma_i\,x^i\,b^i(t,x)\le K(1+\|x\|_\gamma^2)$$

(cf. e.g. [10]). However, we will need the stronger assumption (H1) in the next sections.

The following stochastic differential equation is well-defined for processes $X$ in $L^2(\Omega\times[0,1];L^2(\gamma))$:

$$X^i_t=X^i_0+\int_0^t\sigma^i_\beta(s,X_s)\,dW^\beta_s+\int_0^tb^i(s,X_s)\,ds,\qquad i\in\mathbb{Z}. \qquad (2.2)$$

The following theorem ensures that (2.2) has a strong solution; it is essentially proved in [10].

Theorem 2.1. Let $b$ and $\sigma$ satisfy (H1). Then for any $L^2(\gamma)$-valued random variable $X_0$ independent of the Brownian motion $\{W_t;\ t\in[0,1]\}$, the equation (2.2) has a unique continuous $L^2(\gamma)$-valued strong solution $\{X_t\}$ such that $E(\sup_t\|X_t\|_\gamma^2)<\infty$.
If $\{X_t\}$ and $\{Y_t\}$ denote the solutions of (2.2) associated with the random variables $X_0$ and $Y_0$, respectively, then

$$E(\sup_t\|X_t-Y_t\|_\gamma^2)\le K\,E(\|X_0-Y_0\|_\gamma^2).$$

Remark. The solution $\{X_t\}$ is the strong limit of the sequence of processes $\{X_t(n);\ n\ge0,\ t\in[0,1]\}$ defined by Picard's iteration method.

Sketch of Proof. Set $X_t(0)=X_0$ for every $t\in[0,1]$, and for each $n\ge0$ set by induction

$$X^i_t(n+1)=X^i_0+\int_0^t\sigma^i_\beta(s,X_s(n))\,dW^\beta_s+\int_0^tb^i(s,X_s(n))\,ds. \qquad (2.3)$$

A maximal inequality for Hilbert-valued martingales, the isometry (2.1), and the boundedness assumption on $b$ and $\sigma$ show, by induction on $n$, that $X(n)$ belongs to $L^2(\Omega\times[0,1];L^2(\gamma))$, and that the following inequality holds:

$$E(\sup_{t\le1}\|X_t(n+1)\|_\gamma^2)\le K_1+K_2\int_0^1E(\sup_{s\le t}\|X_s(n)\|_\gamma^2)\,dt,$$

for some constants $K_1$ and $K_2$ independent of $n$. The Lipschitz condition on $b$ and $\sigma$ shows that for each $t\le1$,

$$E(\sup_{s\le t}\|X_s(n+1)-X_s(n)\|_\gamma^2)\le K\int_0^tE(\sup_{u\le s}\|X_u(n)-X_u(n-1)\|_\gamma^2)\,ds.$$

Hence the sequence $\{X_t(n);\ n\ge0\}$ converges to $X_t$ in $L^2(\Omega\times[0,1];L^2(\gamma))$, which is a strong solution of (2.2). Since each process $\{X_t(n)\}$ is clearly continuous, the continuity of $X_t$ follows immediately. Finally, standard arguments show that, given $L^2(\gamma)$-valued random variables $X_0$ and $Y_0$,

$$E(\sup_{t\le1}\|X_t-Y_t\|_\gamma^2)\le K_1\,E(\|X_0-Y_0\|_\gamma^2)+K_2\int_0^1E(\sup_{s\le t}\|X_s-Y_s\|_\gamma^2)\,ds\le K\,E(\|X_0-Y_0\|_\gamma^2).$$

This clearly implies the pathwise uniqueness of the solution of (2.2). □

We want to prove that $X$ belongs to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$. To do this we first generalize Proposition 1.4 to Lipschitz functions depending on infinitely many coordinates. For $p\ge1$, let

$$L^p(\gamma^p)=\Bigl\{x=(x^i)\in\mathbb{R}^{\mathbb{Z}}:\ \|x\|_{\gamma,p}^p=\sum_i\gamma_i^p\,|x^i|^p<+\infty\Bigr\}.$$

Clearly, the convergence of the series $\sum_i\gamma_i$ implies that $L^2(\gamma)\subset L^p(\gamma^p)$, for every $p\ge1$.

Proposition 2.2. Let $\varphi:L^2(\gamma)\to\mathbb{R}$ be Lipschitz for the norm $\|\cdot\|_{\gamma,p}$ for some $p\in[1,2)$, i.e., such that $|\varphi(x)-\varphi(y)|\le K\|x-y\|_{\gamma,p}$, for all $x,y\in L^2(\gamma)$. Let $F$ belong to $\mathbb{D}_{2,1}(L^2(\gamma))$.
Then $\varphi(F)\in\mathbb{D}_{2,1}$, and there exist $\sigma(F)$-measurable random variables $U_k$ such that $|U_k|\le K\gamma_k$, and

$$D^i_r[\varphi(F)]=\sum_kU_k\,D^i_rF^k. \qquad (2.4)$$

Proof. The Lipschitz condition on $\varphi$ implies that $\|\nabla_k\varphi\|_\infty\le K\gamma_k$ for each $k\in\mathbb{Z}$. For every $n\ge1$, let $\varphi_n:L^2(\gamma)\to\mathbb{R}$ be defined by $\varphi_n(x)=\varphi(x_n)$, where $x_n^i=1_{\{|i|\le n\}}x^i$. Also denote by $F_n$ the $L^2(\gamma)$-valued random variable defined by $F_n^i=1_{\{|i|\le n\}}F^i$. Then $\varphi_n(F)=\varphi(F_n)$, and

$$E[|\varphi_n(F)-\varphi(F)|^2]=E[|\varphi(F_n)-\varphi(F)|^2]\le K^2\,E\Bigl[\Bigl(\sum_{|i|>n}\gamma_i^p|F^i|^p\Bigr)^{2/p}\Bigr]\le K^2\,E\Bigl[\sum_{|i|>n}\gamma_i|F^i|^2\Bigr]\Bigl(\sum_{|i|>n}\gamma_i^{p/(2-p)}\Bigr)^{(2-p)/p}\longrightarrow0,$$

as $n\to\infty$. Proposition 1.4 implies the existence of random variables $U_{k,n}$, with $|U_{k,n}|\le K\gamma_k$, such that for each $n$,

$$D^i_r[\varphi_n(F)]=\sum_{|k|\le n}U_{k,n}\,D^i_rF^k.$$

We prove that $\sup_n\|D[\varphi_n(F)]\|_2<+\infty$. In fact, by Cauchy-Schwarz,

$$E\|D[\varphi_n(F)]\|^2_{L^2([0,1];L^2(\gamma))}\le K^2\Bigl(\sum_{|k|\le n}\gamma_k\Bigr)\|DF\|^2_{L^2(\Omega\times[0,1];L^2(\gamma\times\gamma))}<+\infty.$$

Then, considering the Wiener-chaos decomposition of $F$ in $L^2(\Omega;L^2(\gamma))$, we show that $\varphi(F)\in\mathbb{D}_{2,1}$ by an argument similar to that of Proposition 1.4.

Set $U_{k,n}=0$ if $|k|>n$. Then $\{(\gamma_k^{-1}U_{k,n})_k,\ n\ge1\}$ is a bounded sequence in $L^2(\Omega;L^2(\gamma))$. Therefore, one can extract a subsequence, denoted again by $(n)$, converging in the weak topology of $L^2(\Omega;L^2(\gamma))$. Denote the limit by $(V_k)$, and let $U_k=\gamma_kV_k$. To conclude the proof of the proposition it suffices to check that $\sum_kU_{k,n}D^i_rF^k$ converges to $\sum_kU_kD^i_rF^k$ in the weak topology of $L^2(\Omega\times[0,1];L^2(\gamma))$. In fact, for any $G\in L^2(\Omega\times[0,1];L^2(\gamma))$,

$$E\Bigl[\sum_k\gamma_k\bigl(\gamma_k^{-1}U_{k,n}-V_k\bigr)\int_0^1\sum_i\gamma_i\,G^i_r\,D^i_rF^k\,dr\Bigr]\longrightarrow0,$$

as $n\to\infty$. □

Note that the Lipschitz conditions in (H1) on $\sigma$ and $b$ imply that for almost all $(t,x)\in[0,1]\times\mathbb{R}^{\mathbb{Z}}$,

$$\sum_i\gamma_i\,|\nabla_kb^i(t,x)|^2\le K\gamma_k \quad\text{and}\quad \sum_{i,\beta}\gamma_i\,\gamma_\beta\,|\nabla_k\sigma^i_\beta(t,x)|^2\le K\gamma_k. \qquad (2.5)$$

Indeed, for each $i,k\in\mathbb{Z}$ and each fixed $\hat x_k=(x^j;\ j\ne k)$, the real-valued function $x^k\mapsto b^i(t,(x^k,\hat x_k))$ is Lipschitz, and hence a.e. differentiable. Therefore,

$$\sum_i\gamma_i\,|\nabla_kb^i(t,x)|^2=\sum_i\gamma_i\lim_{h\to0}h^{-2}\,[b^i(t,(x^k+h,\hat x_k))-b^i(t,(x^k,\hat x_k))]^2\le K\liminf_{h\to0}h^{-2}\,\|(x^k+h,\hat x_k)-(x^k,\hat x_k)\|_\gamma^2=K\gamma_k,$$

and a similar computation completes the proof of (2.5).
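The Picard scheme (2.3) behind Theorem 2.1 can be illustrated numerically for a single scalar equation with Lipschitz coefficients, iterating the map $X(n)\mapsto X(n+1)$ on one fixed discretized Brownian path. The coefficients, the step size and the iteration count below are our own choices for the sketch, not taken from the paper.

```python
import math, random

# Picard iteration (2.3) for a scalar SDE dX = b(X) dt + sigma(X) dW,
# run on one fixed discretized Brownian path; b and sigma are globally
# Lipschitz, as required by (H1).
random.seed(0)
m = 1000                      # time steps on [0, 1]
dt = 1.0 / m
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(m)]

def b(x):
    return -x                 # Lipschitz drift

def sigma(x):
    return 0.1 * math.cos(x)  # Lipschitz diffusion coefficient

x0 = 1.0
X = [x0] * (m + 1)            # X(0): the constant initial guess

def picard_step(X):
    # X(n+1)_t = x0 + sum_s sigma(X(n)_s) dW_s + sum_s b(X(n)_s) ds
    Y = [x0]
    for k in range(m):
        Y.append(Y[-1] + sigma(X[k]) * dW[k] + b(X[k]) * dt)
    return Y

gaps = []                     # sup-distance between successive iterates
for n in range(25):
    Xn = picard_step(X)
    gaps.append(max(abs(u - v) for u, v in zip(X, Xn)))
    X = Xn

# The iterates converge: the sup-distance between successive iterates
# decays factorially, as in the Gronwall-type estimate of the proof.
assert gaps[-1] < 1e-9 and gaps[-1] < gaps[0]
```

This is exactly the mechanism of the proof: each iteration integrates the previous iterate against a fixed measure of finite mass, and iterating $n$ times produces the factorial decay $M^n/n!$.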
Under additional assumptions on $b$ and $\sigma$, we will prove that the solution $X$ of (2.2) belongs to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$. Consider the following hypotheses on $b$ and $\sigma$:

(H2) There exists $M$ such that for each $i,\beta\in\mathbb{Z}$ and $t\in[0,1]$, the coefficients $b^i(t,x)$ and $\sigma^i_\beta(t,x)$ are functions of at most $M$ coordinates (which may depend on $i$ and $\beta$).

(H3) $\sup_t\{\|b(t,x)-b(t,y)\|^2+\|\sigma(t,x)-\sigma(t,y)\|_{\gamma\times\gamma}^2\}\le K\|x-y\|_{\gamma^2}^2$, for all $x,y\in L^2(\gamma)$, where $\|\cdot\|$ denotes the unweighted $\ell^2$-norm and $\|x\|_{\gamma^2}^2=\sum_i\gamma_i^2|x^i|^2$.

Remark. The assumption (H3) is stronger than the Lipschitz condition in (H1), and implies that

$$\sum_i|\nabla_kb^i(t,x)|^2+\sum_{i,\beta}\gamma_i\,\gamma_\beta\,|\nabla_k\sigma^i_\beta(t,x)|^2\le K\gamma_k^2, \qquad (2.6)$$

for almost all $(t,x)\in[0,1]\times\mathbb{R}^{\mathbb{Z}}$.

Theorem 2.3. Suppose that $b$ and $\sigma$ satisfy (H1), and either (H2) or (H3). Then the solution $X_t$ of (2.2) belongs to $\mathbb{D}_{2,1}(L^2(\gamma))$ for every $t$, and satisfies $\sup_rE(\sup_t\|D_rX_t\|^2_{\gamma\times\gamma})<\infty$. Furthermore, there exist a constant $K$ and processes $B^i_k(u)$ and $S^i_{k,\beta}(u)$, progressively measurable for the filtration $\sigma\{W_v,\ v\le u\}$, such that

$$\sup_u\{\|B_k(u)\|_\gamma^2+\|S_k(u)\|_{\gamma\times\gamma}^2\}\le K\gamma_k,\qquad\forall k\in\mathbb{Z}, \qquad (2.7)$$

for almost all $(u,\omega)$, and the derivative $D^\alpha_rX^i_t$ is the solution of the system

$$D^\alpha_rX^i_t=\sigma^i_\alpha(r,X_r)+\int_r^tS^i_{k,\beta}(u)\,D^\alpha_rX^k_u\,dW^\beta_u+\int_r^tB^i_k(u)\,D^\alpha_rX^k_u\,du \qquad (2.8)$$

for $r\le t$, and $D^\alpha_rX^i_t=0$ if $r>t$.

Remarks. (1) When the coefficients $b$ and $\sigma$ satisfy (H2) and are $\mathcal{C}^1$ functions of $x$, then $B^i_k(t)=\nabla_kb^i(t,X_t)$ and $S^i_{k,\beta}(t)=\nabla_k\sigma^i_\beta(t,X_t)$.
(2) The condition (H3) yields a stronger inequality on $B^i_k$, $S^i_{k,\cdot}$, namely

$$\sup_u\{\|B_k(u)\|^2+\|S_k(u)\|_{\gamma\times\gamma}^2\}\le K\gamma_k^2,\qquad\forall k\in\mathbb{Z}. \qquad (2.9)$$

Proof. 1) We suppose at first that $b$ and $\sigma$ satisfy (H1) and (H2). Let $X_t(0)=X_0$, and let $X_t(n)$ be defined inductively by (2.3). We show by induction on $n$ that $X(n)\in\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$, and that

$$\sup_n\sup_rE\bigl[\sup_{t\le1}\|D_rX_t(n)\|^2_{\gamma\times\gamma}\bigr]<+\infty. \qquad (2.10)$$

This implies that $X$ belongs to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$.
Indeed, if (2.10) is proved, then let $X=\sum_{k=0}^\infty J_k(X)$ denote the decomposition of $X$ on the Wiener chaos of $L^2(\Omega;L^2([0,1];L^2(\gamma)))\cong L^2(\Omega\times[0,1];L^2(\gamma))$. Let $Y=\{Y^{\alpha,i}_r(t)\}$ denote the weak limit in $L^2(\Omega\times[0,1]^2;L^2(\gamma\times\gamma))$ of a subsequence of $\{D^\alpha_rX^i_t(n),\ n\ge0\}$. Since $X(n)$ converges to $X$ in $L^2(\Omega\times[0,1];L^2(\gamma))$, the projection of $Y$ on the $k$-th Wiener chaos is the same as that of the series $\sum_kD_r(J_kX_t)$, which formally defines $DX_t$. The norm of $DX$ is then dominated by the supremum in (2.10), and $X\in\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$.

To prove (2.10), suppose that $\sup_rE[\sup_t\|D_rX_t(n)\|^2_{\gamma\times\gamma}]<+\infty$. Since $b^i(s,x)$ and $\sigma^i_\beta(s,x)$ are functions of a fixed number of coordinates, it will follow from Proposition 1.4 that the processes $b(s,X_s(n))$ and $\sigma(s,X_s(n))$ belong to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$ and $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma\times\gamma)))$, respectively. Furthermore, there exist processes $B^i_k(n,s)$ and $S^i_{k,\beta}(n,s)$, progressively measurable for the filtration $\sigma(W_u;\ u\le s)$, such that

$$D^\alpha_r[b^i(s,X_s(n))]=1_{\{r\le s\}}\,B^i_k(n,s)\,D^\alpha_rX^k_s(n),$$
$$D^\alpha_r[\sigma^i_\beta(s,X_s(n))]=1_{\{r\le s\}}\,S^i_{k,\beta}(n,s)\,D^\alpha_rX^k_s(n). \qquad (2.11)$$

Indeed, for any $s\in[0,1]$, $i,\beta\in\mathbb{Z}$, we know from Proposition 1.4 that $b^i(s,X_s(n))$ and $\sigma^i_\beta(s,X_s(n))$ belong to $\mathbb{D}_{2,1}$, and (2.11) holds with $|B^i_k(n,s)|\le\|\nabla_kb^i\|_\infty$ and $|S^i_{k,\beta}(n,s)|\le\|\nabla_k\sigma^i_\beta\|_\infty$. Observe that the processes $B^i_k(n,s)$ and $S^i_{k,\beta}(n,s)$ are obtained as weak limits in $L^\infty(\Omega)$ of subsequences of $\{\nabla_k(b^i*\alpha_m)(X_s(n)),\ m\ge1\}$ and $\{\nabla_k(\sigma^i_\beta*\alpha_m)(X_s(n)),\ m\ge1\}$, respectively. So we can choose progressively measurable versions of these processes. Furthermore, the summation over $k$ in (2.11) is taken over at most $M$ terms. Then the fact that $b(s,X_s(n))$ belongs to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$ is a consequence of the following inequalities:

$$E\int_0^1\int_0^1\sum_{i,\alpha}|D^\alpha_r[b^i(s,X_s(n))]|^2\,\gamma_i\gamma_\alpha\,dr\,ds\le K\,E\int_0^1\int_0^1\sum_{\alpha,i,k}|B^i_k(n,s)|^2\,|D^\alpha_rX^k_s(n)|^2\,\gamma_i\gamma_\alpha\,ds\,dr$$
$$\le K\,E\int_0^1\int_0^1\sum_{k,\alpha}\gamma_k\gamma_\alpha\,|D^\alpha_rX^k_s(n)|^2\,ds\,dr=K\,\|DX(n)\|^2_{L^2(\Omega\times[0,1]^2;L^2(\gamma\times\gamma))}<\infty.$$
Similar computations show that $\sigma(s,X_s(n))$ belongs to $\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma\times\gamma)))$.

Fix $i$, and let $Z(s)$ denote the adapted $L^2(\gamma)$-valued process $Z^\beta_s=1_{\{s\le t\}}\,\sigma^i_\beta(s,X_s(n))$. Then Proposition 1.3 implies that

$$D^\alpha_r[\delta(Z)]=Z^\alpha(r)+\int_r^tD^\alpha_rZ^\beta(s)\,dW^\beta_s,\qquad r\le t.$$

Therefore, each component $X^i_t(n+1)\in\mathbb{D}_{2,1}$, and

$$D^\alpha_rX^i_t(n+1)=\sigma^i_\alpha(r,X_r(n))+\int_r^tS^i_{k,\beta}(n,s)\,D^\alpha_rX^k_s(n)\,dW^\beta_s+\int_r^tB^i_k(n,s)\,D^\alpha_rX^k_s(n)\,ds,$$

where the summations with respect to $k$ are taken over subsets of at most $M$ terms. Thus, the maximal inequality and the isometry property (2.1) yield

$$E(\sup_{t\le1}\|D_rX_t(n+1)\|^2_{\gamma\times\gamma})\le K_1+K_2\int_0^1E(\sup_{u\le s}\|D_rX_u(n)\|^2_{\gamma\times\gamma})\,ds,$$

by Theorem 2.1 and the inequalities (2.5). Hence (2.10) follows easily. By a diagonal procedure, choose a subsequence (still denoted by $(n)$) such that $B^i_k(n,t)$, $S^i_{k,\beta}(n,t)$ converge weakly in $L^\infty(\Omega\times[0,1])$, say to $B^i_k(t)$ and $S^i_{k,\beta}(t)$, and such that the subsequence of processes $DX(n)$ converges weakly in $L^2(\Omega\times[0,1]^2;L^2(\gamma\times\gamma))$ to $DX$. Then (2.7) is an easy consequence of (2.5), and similar computations for $X$ instead of $X(n)$ conclude the proof.

2) Suppose finally that $b$ and $\sigma$ satisfy (H1) and (H3). Let $X(n)$ be the approximating sequence of processes defined by Picard's iterations (2.3). Suppose that $X(n)\in\mathbb{D}_{2,1}(L^2([0,1];L^2(\gamma)))$. As in the first part of the proof, it suffices to check (2.10). Applying Proposition 2.2 and using condition (H3), we obtain the existence of progressively measurable processes $B^i_k(n,s)$ and $S^i_{k,\beta}(n,s)$ such that (2.9) holds for all $k\in\mathbb{Z}$, and for $r\le t$

$$D^\alpha_rX^i_t(n+1)=\sigma^i_\alpha(r,X_r(n))+\int_r^tS^i_{k,\beta}(n,s)\,D^\alpha_rX^k_s(n)\,dW^\beta_s+\int_r^tB^i_k(n,s)\,D^\alpha_rX^k_s(n)\,ds.$$

Hence,

$$E(\sup_{t\le1}\|D_rX_t(n+1)\|^2_{\gamma\times\gamma})\le K\bigl[1+\sup_tE(\|X_t(n)\|_\gamma^2)\bigr]+K_2\int_0^1E(\sup_{u\le s}\|D_rX_u(n)\|^2_{\gamma\times\gamma})\,ds\le K_1+K_2\int_0^1E(\sup_{u\le s}\|D_rX_u(n)\|^2_{\gamma\times\gamma})\,ds,$$

where $K_1$ and $K_2$ are constants independent of $n$.
(The last estimate follows from (2.9) and the summability of $\sum_k\gamma_k$.) Thus (2.10) is true, and a similar argument applied to $X$ instead of $X(n)$ concludes the proof. □

3. Necessary Conditions for Time Reversal

In this section we suppose that the reversed process $\bar X_t=X_{1-t}$ is the solution of a martingale problem with generator $\bar L_t$. To set this martingale problem we will need some integrability assumptions. We identify $\bar L_t$, and deduce integrability conditions on the coefficients.

Let $b$ and $\sigma$ satisfy (H1), and set $a=\sigma\sigma^*$. Note that $a(t,x)$ is locally Lipschitz in the following sense: given any constant $A$, there exists a constant $K_A$ such that

$$\sup_t\|a(t,x)-a(t,y)\|^2_{\gamma\times\gamma}\le K_A\,\|x-y\|_\gamma^2,$$

for any $x,y\in L^2(\gamma)$ with $\|x\|_\gamma$ and $\|y\|_\gamma$ less than $A$.

For $k\ge1$ we say that a function $f:\mathbb{R}^{\mathbb{Z}}\to\mathbb{R}$ is a $\mathcal{C}^k$-function of $M$ coordinates if there exist a subset $\{i_1,\dots,i_M\}\subset\mathbb{Z}$ and a $\mathcal{C}^k$-function $\tilde f:\mathbb{R}^M\to\mathbb{R}$ such that $f(x)=\tilde f(x^{i_1},\dots,x^{i_M})$, for all $x\in\mathbb{R}^{\mathbb{Z}}$. Fix a finite subset $J\subset\mathbb{Z}$, and let $\mathcal{D}_J$ denote the set of functions $f:\mathbb{R}^{\mathbb{Z}}\to\mathbb{R}$ which are $\mathcal{C}_b^\infty$-functions of a finite set of coordinates which contains $J$. We also denote by $\mathcal{D}$ the set of functions $f:\mathbb{R}^{\mathbb{Z}}\to\mathbb{R}$ which are $\mathcal{C}_b^\infty$-functions of a finite number of coordinates.

The process $\{X_t,\ t\in[0,1]\}$, solution of (2.2), is a Markov diffusion process with generator $L_t$ defined by

$$\forall f\in\mathcal{D},\qquad L_tf(x)=\nabla_if(x)\,b^i(t,x)+\tfrac12\,\nabla_{ij}f(x)\,a^{ij}(t,x)$$

(cf. [8, 10]). Suppose that the reversed process $\bar X_t=X_{1-t}$ is the solution of the following martingale problem:

$$\forall f\in\mathcal{D}_J,\ \forall\,0\le s<t\le1,\qquad E\Bigl[f(\bar X_t)-f(\bar X_s)-\int_s^t(\bar L_uf)(\bar X_u)\,du\;\Big|\;\bar X_s\Bigr]=0, \qquad (3.1)_J$$

where $\bar L_uf(x)=\nabla_if(x)\,\bar b^i(u,x)+\tfrac12\,\nabla_{ij}f(x)\,\bar a^{ij}(u,x)$, $f\in\mathcal{D}_J$.

To ensure that the martingale problem $(3.1)_J$ is well-defined, we require some integrability assumptions on $\bar a$ and $\bar b$. For fixed $J$, set

$$\mathcal{K}_J=\Bigl\{\bigl(\prod_{j\in J}K_j\bigr)\times\bigl(\prod_{j\notin J}\mathbb{R}\bigr);\ K_j\ \text{compact subsets of}\ \mathbb{R}\Bigr\}\cap L^2(\gamma).$$

Denote by $\mu_t(dx)$ the law of $X_t$ on $L^2(\gamma)$.
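For $f\in\mathcal{D}$, which depends on finitely many coordinates, $L_tf$ involves only finitely many terms and can be evaluated directly. As a sanity check, the sketch below computes $L_tf$ by finite differences on a three-coordinate truncation and compares it with the closed form for a quadratic $f$; the coefficients are our own toy choices, not from the paper.

```python
import itertools

# Generator L_t f(x) = sum_i grad_i f(x) b^i(t,x)
#                    + (1/2) sum_{i,j} grad_ij f(x) a^{ij}(t,x),
# evaluated by finite differences on a 3-coordinate truncation.
n = 3

def b(x):                       # a smooth drift on the truncation
    return [-x[0] + 0.5 * x[1], -x[1], 0.25 * x[0] - x[2]]

a = [[1.0, 0.3, 0.0],           # a = sigma sigma^*: symmetric, here constant
     [0.3, 2.0, 0.1],
     [0.0, 0.1, 0.5]]

def f(x):                       # f depends on the coordinates x^0, x^1 only
    return x[0] * x[1]

def L(f, x, h=1e-3):
    bx, val = b(x), 0.0
    for i in range(n):          # first-order part: grad_i f * b^i
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        val += (f(xp) - f(xm)) / (2 * h) * bx[i]
    for i, j in itertools.product(range(n), range(n)):
        xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
        xpp[i] += h; xpp[j] += h
        xpm[i] += h; xpm[j] -= h
        xmp[i] -= h; xmp[j] += h
        xmm[i] -= h; xmm[j] -= h
        # second-order part: (1/2) grad_ij f * a^{ij}
        val += 0.5 * (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h) * a[i][j]
    return val

x = [0.7, -1.2, 0.4]
# Closed form for f = x^0 x^1:  L f = b^0 x^1 + b^1 x^0 + a^{01}
exact = b(x)[0] * x[1] + b(x)[1] * x[0] + a[0][1]
assert abs(L(f, x) - exact) < 1e-8
```

Since $f$ is quadratic, the central differences are exact up to rounding, which is why the agreement is tight.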
Consider the following integrability assumption (sufficient to give a meaning to $(3.1)_J$):

$$\forall t_0>0,\ \forall D\in\mathcal{K}_J,\ \forall i,j:\qquad \int_{t_0}^1\int_D\{|\bar b^i(1-t,x)|+|\bar a^{ij}(1-t,x)|\}\,\mu_t(dx)\,dt<\infty. \qquad (3.2)_J$$

Note that the growth condition of (H1) on $b$ and $\sigma$ implies that, for each fixed $J$, $(3.2)_J$ holds when $\bar a(1-t,x)$ and $\bar b(1-t,x)$ are replaced by $a(t,x)$ and $b(t,x)$, respectively. Changing $u$ into $1-u$ and defining $\hat L_u=-\bar L_{1-u}$, we change the martingale problem $(3.1)_J$ into the following equivalent one:

$$\forall f\in\mathcal{D}_J,\ \forall g\in\mathcal{D},\ \forall\,0\le s<t\le1,\qquad E\{[f(X_t)-f(X_s)]\,g(X_t)\}=E\Bigl\{g(X_t)\int_s^t(\hat L_uf)(X_u)\,du\Bigr\}. \qquad (3.3)_J$$

We prove the following result, which extends Theorem 2.2 in [12] to the infinite-dimensional case.

Theorem 3.1. Let $b$ and $\sigma$ satisfy (H1), and let $X$ be the solution of (2.2). Suppose that for each $i$ there exists a finite subset $I(i)\subset\mathbb{Z}$ such that $a^{ij}(t,x)=0$ for $j\notin I(i)$, and that the conditional law of the vector $X_i(t)=(X^j_t,\ j\in I(i))$ given $\hat X_i(t)=(X^j_t,\ j\notin I(i))=\hat x_i$ has a density $p_t(x_i\mid\hat x_i)$ with respect to the Lebesgue measure on $\mathbb{R}^{I(i)}$, for $t>0$. Assume that, for some $J$, the integrability condition $(3.2)_J$ is satisfied, and that the reversed process $\bar X_t$ is the solution of the martingale problem $(3.1)_J$. Let $\hat\mu_t(d\hat x_i)$ denote the law of $\hat X_i(t)$. Then the generator $\bar L_t$ is defined by

$$\bar a^{ij}(1-t,x)=a^{ij}(t,x),$$
$$\bar b^i(1-t,x)\,p_t(x_i\mid\hat x_i)=-b^i(t,x)\,p_t(x_i\mid\hat x_i)+\sum_{k\in I(i)}\nabla_k[a^{ik}(t,x)\,p_t(x_i\mid\hat x_i)], \qquad (3.4)$$

for all $i,j\in\mathbb{Z}$, for all $t\in[0,1]$ and $x_i\in\mathbb{R}^{I(i)}$ a.e., and for $\hat\mu_t(d\hat x_i)$-almost all $\hat x_i$. Furthermore, the sums of distributional derivatives appearing in (3.4) are locally integrable functions, in the sense that for each $i\in\mathbb{Z}$, $t_0>0$ and $D\in\mathcal{K}_J$,

$$\int_{t_0}^1\int_D\Bigl|\sum_{j\in I(i)}\nabla_j[a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)]\Bigr|\,dx_i\,\hat\mu_t(d\hat x_i)\,dt<\infty. \qquad (3.5)_J$$

Proof. Fix $f\in\mathcal{D}_J$, $g\in\mathcal{D}$ and $t\in[0,1]$; the validity of $(3.2)_J$ implies the integrability of the map $u\mapsto E[|(\hat L_uf)(X_u)|]$. We will show that

$$h^{-1}\,E\Bigl[g(X_t)\int_{t-h}^t\hat L_uf(X_u)\,du\Bigr]\longrightarrow E[g(X_t)\,\hat L_tf(X_t)], \qquad (3.6)$$

for almost every $t$, as $h\downarrow0$.
Notice that

$$\Bigl|E\Bigl\{g(X_t)\Bigl[h^{-1}\int_{t-h}^t\hat L_uf(X_u)\,du-\hat L_tf(X_t)\Bigr]\Bigr\}\Bigr|\le\|g\|_\infty\,h^{-1}\int_{t-h}^tE|\hat L_uf(X_u)-\hat L_tf(X_t)|\,du.$$

Since $L^1(\Omega)$ is separable, we can consider a countable dense subset $\{F_n,\ n\ge1\}$ in $L^1(\Omega)$. Then Lebesgue's derivation theorem yields

$$h^{-1}\int_{t-h}^tE|\hat L_uf(X_u)-F_n|\,du\longrightarrow E|\hat L_tf(X_t)-F_n|, \qquad (3.7)$$

for almost every $t$, as $h\downarrow0$, and each $n\ge1$. Fix $t$ such that (3.7) holds. For any fixed $\varepsilon>0$ there exists $k\ge1$ such that $E|\hat L_tf(X_t)-F_k|\le\varepsilon$. Hence

$$\limsup_{h\to0}h^{-1}\int_{t-h}^tE|\hat L_uf(X_u)-\hat L_tf(X_t)|\,du\le\limsup_{h\to0}h^{-1}\int_{t-h}^tE|\hat L_uf(X_u)-F_k|\,du+E|F_k-\hat L_tf(X_t)|\le2\varepsilon.$$

Since $\varepsilon$ is arbitrary, (3.6) is proved.

Itô's formula and the martingale property of stochastic integrals show that

$$h^{-1}\,E\{g(X_t)\,[f(X_t)-f(X_{t-h})]\}=I_1+I_2,$$

with

$$I_1=h^{-1}\int_{t-h}^tE\{[\nabla_if(X_u)\,b^i(u,X_u)+\tfrac12\,\nabla_{ij}f(X_u)\,a^{ij}(u,X_u)]\,g(X_t)\}\,du,$$
$$I_2=h^{-1}\,E\Bigl[(g(X_t)-g(X_{t-h}))\int_{t-h}^t\nabla_if(X_u)\,\sigma^i_\beta(u,X_u)\,dW^\beta_u\Bigr]=I_3+I_4,$$

where

$$I_3=h^{-1}\int_{t-h}^tE[\nabla_if(X_u)\,\nabla_jg(X_u)\,a^{ij}(u,X_u)]\,du,$$

and

$$I_4=h^{-1}\,E\Bigl\{\Bigl[\int_{t-h}^t\varphi(u)\,du\Bigr]\Bigl[\int_{t-h}^t\nabla_if(X_u)\,\sigma^i_\beta(u,X_u)\,dW^\beta_u\Bigr]\Bigr\},$$

for $\varphi(u)=\nabla_jg(X_u)\,b^j(u,X_u)+\tfrac12\,\nabla_{jk}g(X_u)\,a^{jk}(u,X_u)$.

The boundedness condition on $b$ and $\sigma$ implies that the functions of $u$ considered in $I_1$ and $I_3$ are integrable. Thus Lebesgue's derivation theorem implies that

$$I_1\longrightarrow E\{[\nabla_if(X_t)\,b^i(t,X_t)+\tfrac12\,\nabla_{ij}f(X_t)\,a^{ij}(t,X_t)]\,g(X_t)\},\qquad I_3\longrightarrow E[\nabla_if(X_t)\,\nabla_jg(X_t)\,a^{ij}(t,X_t)],$$

$t$-a.e. as $h\to0$. Schwarz's inequality and the isometry of stochastic integrals imply that

$$I_4\le h^{-1}\Bigl\{h\int_{t-h}^tE[\varphi^2(u)]\,du\Bigr\}^{1/2}\Bigl\{E\Bigl[\int_{t-h}^t\nabla_if(X_u)\,\nabla_jf(X_u)\,a^{ij}(u,X_u)\,du\Bigr]\Bigr\}^{1/2}\le Kh^{-1}h^{1/2}\Bigl\{\int_{t-h}^tE[1+\sup_s\|X_s\|_\gamma^2]\,du\Bigr\}^{1/2}\le Kh^{1/2}\longrightarrow0,$$

as $h\to0$.

Fix $i\in\mathbb{Z}$, $j\in I(i)$, and express the corresponding term of the sum equal to the limit of $I_3$ in terms of the conditional density of $X_i(t)$ given $\hat X_i(t)=\hat x_i$. Then

$$E[\nabla_if(X_t)\,\nabla_jg(X_t)\,a^{ij}(t,X_t)]=\int M_{ij}(\hat x_i)\,\hat\mu_t(d\hat x_i),$$

with

$$|M_{ij}(\hat x_i)|=\Bigl|\int_{\mathbb{R}^{I(i)}}\nabla_if(x_i,\hat x_i)\,\nabla_jg(x_i,\hat x_i)\,a^{ij}(t,(x_i,\hat x_i))\,p_t(x_i\mid\hat x_i)\,dx_i\Bigr|<\infty.$$
Let $F$ denote the finite subset of coordinates on which $f$ depends, and suppose that $g$ is a $\mathcal{C}_b^\infty$-function of a finite subset of coordinates containing $\{k:\ i\in F,\ k\in I(i)\}$. For fixed $\hat x_i$, the functions $x_i\mapsto a^{ij}(t,(x_i,\hat x_i))$ are locally Lipschitz on $\mathbb{R}^{I(i)}$. Hence, taking the distributional derivatives with respect to each coordinate $x^j$, with $j\in I(i)$, we obtain that

$$M_{ij}(\hat x_i)=-\int_{\mathbb{R}^{I(i)}}g(x)\,\nabla_j[\nabla_if(x)\,a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)]\,dx_i=-\int_{\mathbb{R}^{I(i)}}g(x)\,\bigl\{\nabla_{ij}f(x)\,a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)+\nabla_if(x)\,\nabla_j[a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)]\bigr\}\,dx_i.$$

The identification of the $t$-a.e. limits of the martingale problem $(3.3)_J$ for $s=t-h$, as $h\to0$, yields

$$\sum_{i\in F}\int\int_{\mathbb{R}^{I(i)}}g(x)\Bigl[-\nabla_if(x)\,\bar b^i(1-t,x)-\tfrac12\sum_{j\in I(i)}\nabla_{ij}f(x)\,\bar a^{ij}(1-t,x)\Bigr]p_t(x_i\mid\hat x_i)\,dx_i\,\hat\mu_t(d\hat x_i)$$
$$=\sum_{i\in F}\int\int_{\mathbb{R}^{I(i)}}g(x)\Bigl[\nabla_if(x)\,b^i(t,x)-\tfrac12\sum_{j\in I(i)}\nabla_{ij}f(x)\,a^{ij}(t,x)\Bigr]p_t(x_i\mid\hat x_i)\,dx_i\,\hat\mu_t(d\hat x_i)$$
$$\qquad-\sum_{i\in F}\int\int_{\mathbb{R}^{I(i)}}g(x)\,\nabla_if(x)\sum_{j\in I(i)}\nabla_j[a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)]\,dx_i\,\hat\mu_t(d\hat x_i). \qquad (3.8)$$

Choosing a smooth approximation of $f(x)=x^i$, we obtain

$$\int\int_{\mathbb{R}^{I(i)}}g(x)\,[-\bar b^i(1-t,x)]\,p_t(x_i\mid\hat x_i)\,dx_i\,\hat\mu_t(d\hat x_i)=\int\int_{\mathbb{R}^{I(i)}}g(x)\Bigl[b^i(t,x)\,p_t(x_i\mid\hat x_i)-\sum_{j\in I(i)}\nabla_j[a^{ij}(t,x)\,p_t(x_i\mid\hat x_i)]\Bigr]\,dx_i\,\hat\mu_t(d\hat x_i).$$

Take $g(x)=\varphi(x_i)\,\psi(\hat x_i)$, with $\varphi$ a $\mathcal{C}_b^\infty$-function of $x_i=(x^j,\ j\in I(i))$ and $\psi$ a $\mathcal{C}_b^\infty$-function of finitely many coordinates $(x^j,\ j\notin I(i))$. Since $\varphi$ and $\psi$ are arbitrary (with $\varphi(x_i)\psi(\hat x_i)\in\mathcal{D}$), we obtain the identification of $\bar b^i(1-t,x)$. Once we know $\bar b^i$, we insert its value in (3.8), and choose a smooth approximation of $f(x)=x^ix^j$ in order to identify $\bar a^{ij}(1-t,x)$. Then the integrability condition $(3.2)_J$, expressed in terms of $a$ and $b$ by (3.4), implies the validity of $(3.5)_J$, and concludes the proof. □

4. Sufficient Conditions for Time Reversal

In this section we give sufficient conditions on the coefficients $\sigma$ and $b$ which imply that the reversed process $\bar X$ is the solution of a martingale problem. For each $x\in L^2(\gamma)$ and $s\ge0$, let $X_{s,t}(x)$ denote the strong solution of the stochastic differential equation

$$X^i_{s,t}(x)=x^i+\int_s^t\sigma^i_\beta(u,X_{s,u}(x))\,dW^\beta_u+\int_s^tb^i(u,X_{s,u}(x))\,du. \qquad (4.1)$$
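The flow $x\mapsto X_{s,t}(x)$ and its Lipschitz dependence on the initial condition (Lemma 4.2(i) below) can be seen on a truncated linear system driven by one common noise path: with constant (additive) diffusion, the difference of two solutions evolves deterministically and contracts. The drift, the diagonal diffusion and the weights below are our own illustrative choices.

```python
import math, random

# Flow X_{s,t}(x) of (4.1), truncated to |i| <= N, with drift b^i(x) = -x^i
# and constant diagonal diffusion sigma^i_i = sqrt(gamma_i). Both solutions
# use the same Brownian increments, so the noise cancels in their difference.
random.seed(1)
N, m = 5, 400
idx = list(range(-N, N + 1))
gamma = {i: 2.0 ** (-abs(i)) for i in idx}
dt = 1.0 / m

def flow(x, dW):
    x = dict(x)
    for k in range(m):
        for i in idx:
            # with b^i(x) = -x^i each coordinate is autonomous, so updating
            # in place is harmless here
            x[i] += -x[i] * dt + math.sqrt(gamma[i]) * dW[k][i]
    return x

dW = [{i: random.gauss(0.0, math.sqrt(dt)) for i in idx} for _ in range(m)]
x = {i: 1.0 for i in idx}
y = {i: 1.0 + 0.1 * i for i in idx}

def dist2(u, v):
    # squared || . ||_{gamma^2} distance: sum_i gamma_i^2 (u^i - v^i)^2
    return sum(gamma[i] ** 2 * (u[i] - v[i]) ** 2 for i in idx)

d0 = dist2(x, y)
d1 = dist2(flow(x, dW), flow(y, dW))
# The difference satisfies diff(k+1) = (1 - dt) * diff(k), independently of
# the noise, so d1 = (1 - dt)^{2m} d0, roughly e^{-2} d0 here.
assert d1 <= d0 and d1 < 0.2 * d0
```

For non-constant $\sigma$ the noise no longer cancels exactly, and the Gronwall argument in the proof of Lemma 4.2(i) is what replaces this elementary cancellation.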
As in [12], we prove that the map $\varphi(x)=E[g(X_{s,t}(x))]$, for $g$ in some subset of $\mathcal{D}$, satisfies Lipschitz properties which imply that $\varphi(X_s)\in\mathbb{D}_{2,1}$. This requires a Lipschitz condition with respect to $\gamma^2$ on the coefficients. More precisely, consider the hypothesis

(H4) $\displaystyle\sup_t\{\|b(t,x)-b(t,y)\|_{\gamma^2}^2+\|\sigma(t,x)-\sigma(t,y)\|_{\gamma^2\times\gamma}^2\}=\sup_t\Bigl\{\sum_i\gamma_i^2\Bigl(|b^i(t,x)-b^i(t,y)|^2+\sum_\beta\gamma_\beta\,|\sigma^i_\beta(t,x)-\sigma^i_\beta(t,y)|^2\Bigr)\Bigr\}\le K\,\|x-y\|_{\gamma^2}^2$.

Remark 4.1. The summability of the sequence $(\gamma_i)$ clearly implies that (H4) is weaker than (H3). If the coefficients $b^i(t,x)$ and $\sigma^i_\beta(t,x)$ are functions of $(x^j:\ |j-i|\le M)$ only, and if $\gamma_i\gamma_j^{-1}\le c$ for $|j-i|\le M$, then the Lipschitz condition in (H1) implies (H4). Indeed, given $x,y\in L^2(\gamma)$ and $z\in\mathbb{R}$, for $|j-i|\le M$ let $Z_{i,j}(x,y,z)$ denote the $L^2(\gamma)$-valued vector whose coordinates interpolate between those of $x$ and those of $y$ one at a time:

$$Z_{i,j}(x,y,z)^{k(n)}=\begin{cases}x^{k(n)} & \text{for } i-M\le k<j,\\ z & \text{for } k=j,\\ y^{k(n)} & \text{for } j<k\le i+M,\end{cases}$$

where $k(n)=k+n(2M+1)$, $|k-i|\le M$, $n\in\mathbb{Z}$. Writing each difference $b^{i(n)}(s,x)-b^{i(n)}(s,y)$ as a telescoping sum of one-coordinate increments along the vectors $Z_{i,j}(x,y,z)$, and applying Cauchy-Schwarz together with the comparability $\gamma_{i(n)}\gamma_{j(n)}^{-1}\le c$ and the gradient bounds implied by (H1) (cf. (2.5)), one obtains, for each $i$ with $|i|\le M$,

$$\sum_n\gamma_{i(n)}^2\,[b^{i(n)}(s,x)-b^{i(n)}(s,y)]^2\le K\sum_{|j-i|\le M}\sum_n\gamma_{j(n)}^2\,|y^{j(n)}-x^{j(n)}|^2\le K\,\|y-x\|_{\gamma^2}^2.$$

A similar computation for $\sigma^i_\beta$ shows that (H4) holds. □

We also consider the following hypothesis:

(H5) There exists an increasing sequence $(J(n),\ n\ge0)$ of finite subsets of $\mathbb{Z}$ with $\bigcup_nJ(n)=\mathbb{Z}$, such that for each $n\ge0$, $M>0$, there exists a constant $K(n,M)$ for which

$$\sup_t\ \sup_{i\in J(n)}\ \sup\Bigl\{|b^i(t,x)|+\sum_\beta|\sigma^i_\beta(t,x)|^2:\ \sup_{j\in J(n)}|x^j|\le M\Bigr\}\le K(n,M).$$

Note that (H5) clearly holds if $b$ and $\sigma$ do not depend on $t$, satisfy (H1), and for $i\in J(n)$ the functions $b^i(t,x)$ and $\sigma^i_\beta(t,x)$ only depend on $(x^j,\ j\in J(n))$.

Lemma 4.2. Let $b$ and $\sigma$ satisfy (H1) and (H4). Then, if $X_{s,t}(x)$ denotes the solution of (4.1), we have:
(i) $\sup_s E\big(\sup_{t\ge s}\|X_{s,t}(x)-X_{s,t}(y)\|^2_{\gamma^2}\big)\le K\,\|x-y\|^2_{\gamma^2}$, for all $x,y\in L^2(\gamma)$.

(ii) Suppose furthermore that (H5) holds. Then for every function $g$ which is a $\mathscr{C}_b^\infty$-function of the coordinates $(x^j:\ j\in J(n))$ for some $n$, the function $\varphi: L^2(\gamma)\to\mathbb{R}$ defined by $\varphi(x)=E[g(X_{s,t}(x))]$ is globally Lipschitz for the norm $\|\cdot\|_{\gamma^2}$, uniformly in $s$, $t$.

Proof. (i) For each $x,y\in L^2(\gamma)$, the maximal inequality, the isometry property of stochastic integrals and (H4) imply that

$$E\big[\sup_{t\ge s}\|X_{s,t}(x)-X_{s,t}(y)\|^2_{\gamma^2}\big]\le K\,\|x-y\|^2_{\gamma^2}
+K\int_s^1 E\Big(\sum_i\gamma_i^2\big\{|b^i(u,X_{s,u}(x))-b^i(u,X_{s,u}(y))|^2
+\sum_\beta\gamma_\beta\,|\sigma^i_\beta(u,X_{s,u}(x))-\sigma^i_\beta(u,X_{s,u}(y))|^2\big\}\Big)\,du$$
$$\le K\,\|x-y\|^2_{\gamma^2}+K\int_s^1 E\big\{\sup_{s\le u\le t}\|X_{s,u}(x)-X_{s,u}(y)\|^2_{\gamma^2}\big\}\,dt.$$

Hence, Gronwall's lemma concludes the proof.

(ii) Let $g$ be a $\mathscr{C}_b^\infty$-function of $(x^j:\ j\in J(n))$. Then, by Itô's formula, we have that $|\varphi(x)-\varphi(y)|^2\le C(I_1+I_2+I_3)$, where

$$I_1=|g(x)-g(y)|^2\le K\,\|x-y\|^2_{\gamma^2},$$
$$I_2=E\Big[\int_s^t\Big|\sum_{i\in J(n)}\big\{\nabla_i g(X_{s,u}(x))\,b^i(u,X_{s,u}(x))-\nabla_i g(X_{s,u}(y))\,b^i(u,X_{s,u}(y))\big\}\Big|^2\,du\Big],$$
$$I_3=\tfrac12\,E\Big[\int_s^t\Big|\sum_{i,j\in J(n)}\big\{\nabla_{ij} g(X_{s,u}(x))\,a^{ij}(u,X_{s,u}(x))-\nabla_{ij} g(X_{s,u}(y))\,a^{ij}(u,X_{s,u}(y))\big\}\Big|^2\,du\Big].$$

The functions $\nabla_i g(x)$ and $\nabla_{ij} g(x)$ are Lipschitz for the norm $\|\cdot\|_{\gamma^2}$ and bounded. They vanish outside a set of the form $\{\sup_{i\in J(n)}|x^i|\le M\}$. Consequently, hypothesis (H5) shows that for each $i\in J(n)$

$$|\nabla_i g(x)\,b^i(u,x)-\nabla_i g(y)\,b^i(u,y)|^2\le K(n,M)\,\|x-y\|^2_{\gamma^2}.$$

Also by (H5), $\sup_t\sup\{|a^{ij}(t,x)|:\ \sup_{k\in J(n)}|x^k|\le M\}\le K(n,M)$ for all $i,j\in J(n)$; therefore

$$I_2+I_3\le C\int_s^t E\big[\sup_{u\le t}\|X_{s,u}(x)-X_{s,u}(y)\|^2_{\gamma^2}\big]\,du\le K\,\|x-y\|^2_{\gamma^2},$$

by part (i) of the proof. This completes the argument. []

We now prove the main result of this section.

Theorem 4.3. Fix a finite subset $J\subset\mathbb{Z}$. Let $X_0\in L^2(\gamma)$ be a random variable independent of $W$, and let $\{X_t,\ t\in[0,1]\}$ be the solution of the stochastic differential equation (2.1), where the coefficients $b$ and $\sigma$ satisfy (H1) and (H5). Suppose that:

(i) The coefficients $b$ and $\sigma$ satisfy either (H2) and (H4), or (H3).

(ii) For each $i$ there exists a finite subset $I(i)\subset\mathbb{Z}$ such that $a^{ij}(t,x)=0$ for $j\notin I(i)$.
(iii) For every $i\in\mathbb{Z}$ and every $t>0$, the conditional law of the vector $X_i(t)=(X^j_t:\ j\in I(i))$ given $\tilde X_i(t)=(X^j_t:\ j\notin I(i))=\xi_i$ has a density $p_t(x_i\mid\xi_i)$ with respect to Lebesgue measure on $\mathbb{R}^{I(i)}$.

(iv) For every $i\in\mathbb{Z}$, every $t_0>0$, and every compact subset $D\subset\mathbb{R}^{I(i)}$,

$$\int_{t_0}^1\int\!\!\int_D\Big|\sum_{j\in I(i)}\nabla_j\big[a^{ij}(t,x)\,p_t(x_i\mid\xi_i)\big]\Big|\,dx_i\,\tilde\mu_t(d\xi_i)\,dt<\infty,\qquad(4.2)_J$$

where $x=(x_i,\xi_i)$, $x_i=(x^j:\ j\in I(i))$, $\xi_i=(x^j:\ j\notin I(i))$, and $\tilde\mu_t(d\xi_i)$ denotes the law of $\tilde X_i(t)$.

Then the reversed process $\bar X_t=X_{1-t}$ is a solution of the martingale problem $(3.1)_J$ with an operator $\bar L_t$ whose coefficients are given by

$$\bar a^{ij}(1-t,x)=a^{ij}(t,x),\qquad
\bar b^i(1-t,x)=-b^i(t,x)+p_t(x_i\mid\xi_i)^{-1}\sum_{j\in I(i)}\nabla_j\big[a^{ij}(t,x)\,p_t(x_i\mid\xi_i)\big],\qquad(4.3)$$

with the convention that the term involving $p_t(x_i\mid\xi_i)^{-1}$ is null on the set $\{p_t(x_i\mid\xi_i)=0\}$.

Remark. Condition $(4.2)_J$ implies that for $\tilde\mu_t$-almost all $\xi_i$ we have $\sum_{j\in I(i)}\nabla_j[a^{ij}(t,x)\,p_t(x_i\mid\xi_i)]=0$ a.e. on the set $\{x_i\in\mathbb{R}^{I(i)}:\ p_t(x_i\mid\xi_i)=0\}$. In fact, this follows from Lemma A2 of [7] (see also [12]). Consequently, the coefficients $\bar b^i$ and $\bar a^{ij}$ verify the integrability condition $(3.2)_J$ and the martingale problem is well-defined.

Proof. Let $f\in\mathscr{D}_J$, $g\in\mathscr{D}$, and $0<s<t\le 1$. We want to prove the equality $(3.3)_J$. Using Itô's formula it is clear that the left-hand side of $(3.3)_J$ is absolutely continuous as a function of $s\in(0,t]$. Hence, in order to establish $(3.3)_J$ it will suffice to prove that for almost every $s\in(0,t]$

$$\lim_{h\downarrow 0}\,h^{-1}E\big[(f(X_s)-f(X_{s-h}))\,g(X_t)\big]=E\big[L_sf(X_s)\,g(X_t)\big].\qquad(4.4)$$

We have $h^{-1}E[(f(X_s)-f(X_{s-h}))\,g(X_t)]=I_1+I_2$, with

$$I_1=h^{-1}\int_{s-h}^s E\big[\{\nabla_i f(X_u)\,b^i(u,X_u)+\tfrac12\nabla_{ij}f(X_u)\,a^{ij}(u,X_u)\}\,g(X_t)\big]\,du,$$

and

$$I_2=h^{-1}E\Big[g(X_t)\int_{s-h}^s\nabla_i f(X_u)\,\sigma^i_\beta(u,X_u)\,dW^\beta_u\Big].$$

By Lebesgue's differentiation theorem, as $h\downarrow 0$,

$$I_1\to E\big\{g(X_t)\,\big[\nabla_i f(X_s)\,b^i(s,X_s)+\tfrac12\nabla_{ij}f(X_s)\,a^{ij}(s,X_s)\big]\big\},$$

for almost every $s$. The Markov property of $\{X_t,\ t\in[0,1]\}$ implies that in the expression of $I_2$ we may replace $g(X_t)$ by $E[g(X_t)\mid X_s]$. Let $X_{s,t}(x)$ denote the strong solution of (4.1), and set $\varphi(x)=E[g(X_{s,t}(x))]$.
We will show that

$$\varphi(X_s)=E[g(X_t)\mid X_s],\quad\text{a.s.}\qquad(4.5)$$

To verify (4.5) it suffices to check that $X_{s,t}(X_s)=X_t$ a.s., i.e., that $X_{s,t}(X_s)$ satisfies the stochastic differential equation (2.2) for all $t\ge s$. Clearly

$$\Big[\int_s^t b^i(u,X_{s,u}(x))\,du\Big]_{x=X_s}=\int_s^t b^i(u,X_{s,u}(X_s))\,du,\quad\text{a.s.},$$

for each $i\in\mathbb{Z}$. We check that similarly

$$\Big[\int_s^t\sigma^i_\beta(u,X_{s,u}(x))\,dW^\beta_u\Big]_{x=X_s}=\int_s^t\sigma^i_\beta(u,X_{s,u}(X_s))\,dW^\beta_u$$

for each $i\in\mathbb{Z}$. Fix $i\in\mathbb{Z}$, $T\in[s,1]$, and an $L^2(\gamma)$-valued process $\{\alpha(u):\ u\ge s\}$ adapted to the filtration generated by $\{W_v-W_s;\ s\le v\le u\}$. Then, for every bounded $\sigma(X_s)$-measurable random variable $\xi(X_s)$,

$$E\Big[\xi(X_s)\int_s^T\alpha_\beta(u)\,dW^\beta_u\int_s^t\sigma^i_\beta(u,X_{s,u}(X_s))\,dW^\beta_u\Big]
=E\Big[\xi(X_s)\,E\Big(\int_s^{T\wedge t}\gamma_\beta\,\alpha_\beta(u)\,\sigma^i_\beta(u,X_{s,u}(x))\,du\Big)\Big|_{x=X_s}\Big],$$

and the same identity holds for the substituted stochastic integral on the left-hand side of the preceding equation; this concludes the proof of (4.5), since $T$ and $\alpha$ are arbitrary.

We assume that $g$ depends on the coordinates of $J(n)$ for some $n$. Then Lemma 4.2, Proposition 2.2 and Theorem 2.3 imply that $\varphi(X_s)$ belongs to $\mathbb{D}_{2,1}$. Hence, the integration by parts formula shows that

$$I_2=h^{-1}\int_{s-h}^s E\Big(\sum_{i,\beta}\gamma_\beta\,\nabla_i f(X_u)\,\sigma^i_\beta(u,X_u)\,D^\beta_u[\varphi(X_s)]\Big)\,du.$$

By Proposition 2.2, there exist random variables $U_k(s)$, measurable with respect to $\sigma(X_s)$, with $|U_k(s)|\le C\gamma_k$, and

$$D^\beta_u[\varphi(X_s)]=\sum_k U_k(s)\,D^\beta_u X^k_s,\quad\text{for } u\le s.$$

Hence Theorem 2.3 implies that $I_2=J_1+J_2+J_3$, where

$$J_1=h^{-1}\int_{s-h}^s E\Big(\sum_\beta\gamma_\beta\,\nabla_i f(X_u)\,\sigma^i_\beta(u,X_u)\,U_j(s)\,\sigma^j_\beta(u,X_u)\Big)\,du,$$
$$J_2=h^{-1}\int_{s-h}^s E\Big(\sum_\beta\gamma_\beta\,\nabla_i f(X_u)\,\sigma^i_\beta(u,X_u)\,U_j(s)\int_u^s S^j_{k,\alpha}(r)\,D^\beta_u X^k_r\,dW^\alpha_r\Big)\,du,$$
$$J_3=h^{-1}\int_{s-h}^s E\Big(\sum_\beta\gamma_\beta\,\nabla_i f(X_u)\,\sigma^i_\beta(u,X_u)\,U_j(s)\int_u^s B^j_k(r)\,D^\beta_u X^k_r\,dr\Big)\,du.$$

Schwarz's inequality, the integrability of $\sup_t\|X_t\|^2_{\gamma^2}$ and of $\sup_r\|D_uX_r\|^2_{\gamma^2\times\gamma}$ uniformly in $u$, and the boundedness conditions (H1) and (2.7) imply that the map $u\mapsto E(\sum_\beta\gamma_\beta\,\nabla_if(X_u)\,\sigma^i_\beta(u,X_u)\,U_j(s)\,\sigma^j_\beta(u,X_u))$ is integrable, and that

$$|J_2|\le K\,h^{-1}\int_{s-h}^s\big\{E\big(1+\sup_t\|X_t\|^2_{\gamma^2}\big)\big\}^{1/2}\Big\{\int_u^s E\big(\sup_r\|D_uX_r\|^2_{\gamma^2\times\gamma}\big)\,dr\Big\}^{1/2}du
\le K\,h^{-1}\int_{s-h}^s(s-u)^{1/2}\,du\le K\,h^{1/2},$$

and, by the same argument without the Burkholder step, $|J_3|\le K\,h^{-1}\int_{s-h}^s(s-u)\,du\le K\,h$. Therefore, Lebesgue's differentiation theorem implies that $I_2\to A_2$ as $h\downarrow 0$, for almost every $s$, where

$$A_2=E\big(\nabla_i f(X_s)\,U_j(s)\,a^{ij}(s,X_s)\big).$$
Each random variable $U_j(s)$ is the weak limit in $L^\infty(\Omega)$ of a subsequence of $\nabla_j\varphi_n(X_s)$, where $\varphi_n(x)=(\varphi*\psi_n)(x)$ and $\{\psi_n,\ n\ge 1\}$ is a sequence of regularization kernels on $\mathbb{R}^{2n+1}$. Therefore $A_2=\lim_n A_2(n)$, where

$$A_2(n)=E\big(\nabla_i f(X_s)\,\nabla_j\varphi_n(X_s)\,a^{ij}(s,X_s)\big).\qquad(4.6)$$

The assumption (iii) implies that for each $n$, if $x=(x_i,\xi_i)$,

$$A_2(n)=\sum_i\int\!\!\int_{\mathbb{R}^{I(i)}}\sum_{j\in I(i)}\nabla_i f(x)\,\nabla_j\varphi_n(x)\,a^{ij}(s,x)\,p_s(x_i\mid\xi_i)\,dx_i\,\tilde\mu_s(d\xi_i).$$

For every $i$ and $\tilde\mu_s$-almost every $x=(x_i,\xi_i)$, define

$$\Phi_i(x)=-\sum_{j\in I(i)}\nabla_{ij}f(x)\,a^{ij}(s,x)-\nabla_i f(x)\,p_s(x_i\mid\xi_i)^{-1}\sum_{j\in I(i)}\nabla_j\big[a^{ij}(s,x)\,p_s(x_i\mid\xi_i)\big],$$

where the term involving $p_s(x_i\mid\xi_i)^{-1}$ is equal to zero on the set $\{x_i:\ p_s(x_i\mid\xi_i)=0\}$. Then an integration by parts on $\mathbb{R}^{I(i)}$ yields that

$$A_2(n)=\sum_i\int\Big[\int_{\mathbb{R}^{I(i)}}\varphi_n(x)\,\Phi_i(x)\,p_s(x_i\mid\xi_i)\,dx_i\Big]\,\tilde\mu_s(d\xi_i)=\sum_i E\big[\varphi_n(X_s)\,\Phi_i(X_s)\big].$$

The assumption $(4.2)_J$ implies that $\Phi_i(X_s)$ is integrable for almost every $s$. Moreover $\varphi_n(X_s)$ converges to $\varphi(X_s)$ weakly in $L^\infty(\Omega)$, for some subsequence. Therefore, letting $n\to\infty$, and since only finitely many indices $i$ satisfy $\Phi_i\not\equiv 0$, we obtain that

$$A_2=\sum_i E\big[\varphi(X_s)\,\Phi_i(X_s)\big]=\sum_i E\big[g(X_t)\,\Phi_i(X_s)\big],\quad\text{for almost every } s.$$

Hence for any $f\in\mathscr{D}_J$ and any $g\in\mathscr{D}$ which is a $\mathscr{C}_b^\infty$-function of $(x^i,\ i\in J(n))$ for some $n$, we obtain that $h^{-1}E[(f(X_s)-f(X_{s-h}))\,g(X_t)]$ converges to

$$E\Big[g(X_t)\Big\{-\tfrac12\nabla_{ij}f(X_s)\,a^{ij}(s,X_s)+\nabla_i f(X_s)\,b^i(s,X_s)
-\sum_i\nabla_i f(X_s)\Big[p_s(x_i\mid\xi_i)^{-1}\sum_{j\in I(i)}\nabla_j\big[a^{ij}(s,x)\,p_s(x_i\mid\xi_i)\big]\Big](X_s)\Big\}\Big]$$

as $h\downarrow 0$, for almost every $s\in[0,t]$. Therefore (4.4) holds and this completes the proof of the theorem. []

5. Existence of Conditional Densities. Connection with Gibbs Measures

The purpose of the first part of this section is to give examples of infinite-dimensional diffusions $\{X^i_t,\ i\in\mathbb{Z},\ t\in[0,1]\}$ satisfying the following condition:

(C) For every finite subset $I\subset\mathbb{Z}$, and every $t>0$, the conditional law of the vector $X_I(t)=(X^j_t,\ j\in I)$ given $\tilde X_I(t)=(X^j_t,\ j\notin I)$ has a density with respect to the Lebesgue measure on $\mathbb{R}^I$.
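Before turning to conditional densities, the reversed-drift formula (4.3) of Theorem 4.3 can be sanity-checked numerically in the simplest scalar situation (an illustration added here, not part of the original computations): take the one-dimensional diffusion $X_t=W_t$, so that $b=0$, $a=1$ and $p_t$ is the $N(0,t)$ density; then (4.3) reduces to $\bar b(1-t,x)=p_t(x)^{-1}\partial_x p_t(x)=-x/t$, the classical drift of reversed Brownian motion. A minimal finite-difference check of this identity:

```python
import math

def p(t, x):
    # N(0, t) density: the law at time t of Brownian motion started at 0
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def reversed_drift(t, x, h=1e-6):
    # Formula (4.3) in the scalar case b = 0, a = 1:
    #   b_bar(1 - t, x) = -b(t, x) + p_t(x)^{-1} d/dx [a(t, x) p_t(x)],
    # with the x-derivative approximated by a central difference.
    dp = (p(t, x + h) - p(t, x - h)) / (2.0 * h)
    return dp / p(t, x)

# Closed form for reversed Brownian motion: b_bar(1 - t, x) = -x / t.
for t, x in [(0.5, 1.0), (0.25, -0.7), (0.9, 0.3)]:
    assert abs(reversed_drift(t, x) - (-x / t)) < 1e-4
```

The same finite-difference recipe applies coordinatewise to the conditional densities $p_t(x_i\mid\xi_i)$ of assumption (iii), whenever these are explicitly available.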
Condition (C) implies the hypothesis of absolute continuity for the conditional laws assumed in Theorems 3.1 and 4.3. We first show the existence of a density for the $N$-marginal distributions of $X_t$ for any $N>0$; then, following the ideas contained in Sect. 3 of [8] and using the logarithmic Sobolev inequality of Gross ([6]), we prove that (C) holds under some restrictive conditions.

The infinite-dimensional stochastic systems we consider along this section are those defined by Eq. (2.2). We suppose that the coefficients $\sigma$ and $b$ satisfy hypotheses (H1) and

(H2') There exists an integer $L>0$ such that for each $i,\beta\in\mathbb{Z}$ and $t\in[0,1]$, the coefficients $b^i(t,x)$ and $\sigma^i_\beta(t,x)$ are functions of the coordinates $x^j$ with $|j-i|\le L$.

Since (H2') implies (H2), our process $X_t$ belongs to $\mathbb{D}_{2,1}(L^2(\gamma))$ (see Theorem 2.3). Consider the stochastic differential system

$$Y^i_j(t,r)=\delta^i_j+\int_r^t S^i_{k,\beta}(u)\,Y^k_j(u,r)\,dW^\beta_u+\int_r^t B^i_k(u)\,Y^k_j(u,r)\,du,\quad t\ge r,\qquad(5.1)$$

and $Y^i_j(t,r)=0$ if $t<r$, with coefficients $S^i_{k,\beta}(u)$ and $B^i_k(u)$ given in Theorem 2.3 and satisfying (2.7). By means of the usual Picard iteration scheme we can prove that the system (5.1) has a unique solution $Y=\{Y^i_j(t,r),\ t\ge r,\ (i,j)\in\mathbb{Z}^2\}$, and that $Y$ belongs to $L^2(\gamma^2\times\gamma^{-1})$. The derivative of $X_t$ can be written as

$$D^\beta_r X^i_t=\sum_j\sigma^j_\beta(r,X_r)\,Y^i_j(t,r),\qquad(5.2)$$

where the series converges in $L^2(\gamma^2\times\gamma)$. The next statement provides sufficient conditions for the continuity of $r\mapsto D^\beta_r X^i_t$, for $\beta$, $i$ and $t$ fixed.

Lemma 5.1. Suppose that the coefficients $\sigma$ and $b$ satisfy (H1) and (H2'). Assume also that $\sigma(t,x)$ is jointly continuous from $[0,1]\times L^2(\gamma)$ into $L^2(\gamma\times\gamma)$, and that the following holds:

(H6) $\sigma^j_\beta(t,x)=0$, except for $j$ in a finite set, which may depend on $\beta$.

Then, for any $\beta,i\in\mathbb{Z}$ and $t\in[0,1]$, $\{D^\beta_r X^i_t,\ r\le t\}$ has a continuous version.

Proof. In view of (5.2) and the hypotheses on $\sigma$ it suffices to prove that for any $i,j\in\mathbb{Z}$ and $t\in[0,1]$, $\{Y^i_j(t,r),\ r\le t\}$ has a continuous version.
We will show that

$$E\big[\gamma_i^2\,|Y^i_j(t,r)-Y^i_j(t,r')|^4\big]\le K\,|r-r'|^2,\qquad(5.3)$$

for all $r,r'\le t$, and the result will follow from Kolmogorov's continuity criterion. We first establish that

$$\sup_j\sup_{r\le t}E\Big\{\sup_{r\le u\le t}\sum_i\gamma_i^2\,|Y^i_j(u,r)|^4\Big\}<\infty.\qquad(5.4)$$

In fact, condition (2.7) implies that

$$\sup_u\sum_\beta\gamma_\beta\,|S^i_{k,\beta}(u)|^2\le K\,\frac{\gamma_k}{\gamma_i},\qquad
\sup_u|B^i_k(u)|^2\le K\,\frac{\gamma_k}{\gamma_i}.$$

On the other hand, by (H2'), the coefficients $B^i_k(u)$ and $S^i_{k,\beta}(u)$ are null for $|k-i|>L$. By Burkholder's and Hölder's inequalities we obtain

$$E\Big(\sup_{r\le u\le t}\sum_i\gamma_i^2\,|Y^i_j(u,r)|^4\Big)
\le K\Big\{\gamma_j^2+E\Big(\sum_i\gamma_i^2\Big[\int_r^t\sum_\beta\gamma_\beta\Big(\sum_{|k-i|\le L}S^i_{k,\beta}(s)\,Y^k_j(s,r)\Big)^2ds\Big]^2\Big)
+E\Big(\sum_i\gamma_i^2\Big[\int_r^t\Big(\sum_{|k-i|\le L}B^i_k(s)\,Y^k_j(s,r)\Big)^2ds\Big]^2\Big)\Big\}$$
$$\le K\Big\{\gamma_j^2+E\Big(\sum_i\gamma_i^2\Big[\sum_{|k-i|\le L}\frac{\gamma_k}{\gamma_i}\int_r^t|Y^k_j(s,r)|^2\,ds\Big]^2\Big)\Big\}
\le K\Big\{\gamma_j^2+E\Big(\sum_k\gamma_k^2\int_r^t|Y^k_j(s,r)|^4\,ds\Big)\Big\}$$
$$\le K\Big\{\gamma_j^2+\int_r^t E\Big(\sup_{r\le u\le s}\sum_k\gamma_k^2\,|Y^k_j(u,r)|^4\Big)\,ds\Big\}.$$

Thus (5.4) follows from Gronwall's lemma. Let us now prove (5.3). For $r\le r'\le t$ we have

$$E\big[\gamma_i^2\,|Y^i_j(t,r)-Y^i_j(t,r')|^4\big]\le K\,(T_1+T_2+T_3+T_4),$$

where

$$T_1=E\Big[\gamma_i^2\Big|\int_r^{r'}S^i_{k,\beta}(u)\,Y^k_j(u,r)\,dW^\beta_u\Big|^4\Big],\qquad
T_2=E\Big[\gamma_i^2\Big|\int_{r'}^t S^i_{k,\beta}(u)\,\big(Y^k_j(u,r)-Y^k_j(u,r')\big)\,dW^\beta_u\Big|^4\Big],$$
$$T_3=E\Big[\gamma_i^2\Big|\int_r^{r'}B^i_k(u)\,Y^k_j(u,r)\,du\Big|^4\Big],\qquad
T_4=E\Big[\gamma_i^2\Big|\int_{r'}^t B^i_k(u)\,\big(Y^k_j(u,r)-Y^k_j(u,r')\big)\,du\Big|^4\Big],$$

and, in all the sums, $|k-i|\le L$. The same arguments used to obtain (5.4) lead us to the following bounds:

$$T_1\le K\,E\Big[\gamma_i^2\Big(\int_r^{r'}\sum_\beta\gamma_\beta\Big(\sum_{|k-i|\le L}S^i_{k,\beta}(u)\,Y^k_j(u,r)\Big)^2du\Big)^2\Big]
\le K\,|r-r'|\int_r^{r'}E\Big\{\sup_{r\le u\le s}\sum_k\gamma_k^2\,|Y^k_j(u,r)|^4\Big\}\,ds\le K\,|r-r'|^2,$$

and similarly

$$T_2\le K\int_{r'}^t\sum_{|k-i|\le L}E\big[\gamma_k^2\,|Y^k_j(u,r)-Y^k_j(u,r')|^4\big]\,du.$$

Notice that the same kind of estimates obtained for $T_1$ and $T_2$ also hold for $T_3$ and $T_4$ respectively. Consequently

$$E\big[\gamma_i^2\,|Y^i_j(t,r)-Y^i_j(t,r')|^4\big]\le K\Big\{|r-r'|^2+\int_0^t\sum_k E\big[\gamma_k^2\,|Y^k_j(u,r)-Y^k_j(u,r')|^4\big]\,du\Big\},$$

and by Gronwall's lemma we obtain (5.3). []

Proposition 5.2. Let $\{X_t,\ t\in[0,1]\}$ be the solution of (2.2) with initial condition $X_0\in L^2(\gamma)$ and coefficients $\sigma$, $b$ satisfying the hypotheses of Lemma 5.1. Assume further that, for fixed $i\in\mathbb{Z}$ and $N>0$, the matrix $A_{i,N}=(\sigma^k_j(t,x),\ |k-i|\le N,\ |j-i|\le N)$ is non-singular for any $t\in[0,1]$ and $x\in\mathbb{R}^{\mathbb{Z}}$. Then the $(2N+1)$-dimensional distribution of the vector $(X^j_t,\ |j-i|\le N)$ has a density.

Proof. Fix $t>0$, $i\in\mathbb{Z}$ and $N>0$. Let

$$\Gamma(t)=\Big(\langle DX^k_t,\,DX^l_t\rangle_{L^2([0,1];\,L^2(\gamma))},\ k,l=i-N,\dots,i+N\Big).$$

By Theorem 9 of [2], if $\det\Gamma(t)>0$ a.s., then the vector $(X^j_t,\ |j-i|\le N)$ has a density. We suppose that $G=\{\det\Gamma(t)=0\}$ is such that $P(G)>0$, and we will show that this yields a contradiction with the assumption made on $\sigma$. Fix $\omega\in G$; there exists $v\in\mathbb{R}^{2N+1}$, $v=(v_{i-N},\dots,v_{i+N})$, $|v|=1$, such that $v^T\Gamma(t)(\omega)\,v=0$. That means

$$v^T\Gamma(t)\,v=\sum_\beta\gamma_\beta\int_0^t\Big(\sum_{|k-i|\le N}v_k\,D^\beta_r X^k_t\Big)^2dr=0.$$

Hence

$$\sum_{|k-i|\le N}v_k\,D^\beta_r X^k_t=0\qquad(5.5)$$

for any $\beta\in\mathbb{Z}$ and $r\in[0,t]$. In particular, letting $r\uparrow t$ and using the continuity provided by Lemma 5.1, $\sum_{|k-i|\le N}v_k\,\sigma^k_\beta(t,X_t)=0$ for any $\beta\in\mathbb{Z}$. Therefore $v\in\operatorname{Ker}A_{i,N}$, and this contradicts the hypothesis we made on $A_{i,N}$. []

Remark. The conclusion of Proposition 5.2 can also be obtained under less restrictive hypotheses on the coefficient $\sigma$, but an additional condition on $b$. More precisely, we can replace (H6) by (H4). The proof of this fact requires establishing the following alternative to Lemma 5.1.

Lemma 5.1'. Let $X$ be the solution of (2.2) with initial condition $X_0\in L^2(\gamma)$, and with coefficients $\sigma$ and $b$ which satisfy (H1), (H2') and (H4), such that $\sigma$ is jointly continuous from $[0,1]\times L^2(\gamma)$ into $L^2(\gamma\times\gamma)$. Then, for any given $t\in[0,1]$, there exists a version of $DX_t=\{D^\beta_r X^i_t;\ (i,\beta)\in\mathbb{Z}^2,\ r\in[0,t]\}$ which is continuous from $[0,t]$ into $L^2(\gamma^n\times\gamma)$ for any $n\ge 2$.

Proof. For each $r\le t$ we have, by the Cauchy-Schwarz inequality,

$$\sum_{i,\beta}\gamma_i^n\gamma_\beta\Big\{\sum_j|\sigma^j_\beta(r,X_r)|\,|Y^i_j(t,r)|\Big\}^2
\le\sum_{i,\beta}\gamma_i^n\gamma_\beta\Big\{\sum_j\gamma_j\,|\sigma^j_\beta(r,X_r)|^2\Big\}\Big\{\sum_j\gamma_j^{-1}\,|Y^i_j(t,r)|^2\Big\}
=\Big(\sum_{j,\beta}\gamma_j\gamma_\beta\,|\sigma^j_\beta(r,X_r)|^2\Big)\Big(\sum_{i,j}\gamma_i^n\gamma_j^{-1}\,|Y^i_j(t,r)|^2\Big).$$

From (H1) we get $E\{\sup_r\|\sigma(r,X_r)\|^2_{\gamma\times\gamma}\}<\infty$. On the other hand, hypothesis (H4) implies that the processes $B$ and $S$ given in Theorem 2.3 satisfy

$$\sup_u\Big\{\sum_i\gamma_i^2\big[|B^i_k(u)|^2+\sum_\beta\gamma_\beta\,|S^i_{k,\beta}(u)|^2\big]\Big\}\le K\gamma_k^2,\qquad\forall k\in\mathbb{Z}.\qquad(5.6)$$

We now want to prove that (5.6) implies the existence of an $L^2(\gamma^n\times\gamma^{-1})$-valued version of $Y(t,\cdot)=\{Y^i_j(t,r);\ 0\le r\le t,\ (i,j)\in\mathbb{Z}^2\}$, continuous on $[0,t]$, for each $n\ge 2$. Indeed, the existence and uniqueness of an $L^2(\gamma^n\times\gamma^{-1})$-valued strong solution of the stochastic differential system (5.1) can easily be proven by means of the usual Picard iteration scheme. In order to prove the continuity we will show that

$$E\Big\{\Big(\sum_{i,j}\gamma_i^n\gamma_j^{-1}\,|Y^i_j(t,r)-Y^i_j(t,r')|^2\Big)^\alpha\Big\}\le K\,|r-r'|^\alpha,\qquad(5.7)$$

for $\alpha=n/2$ and all $r,r'\le t$. Consequently, Kolmogorov's continuity criterion concludes the proof. We first establish that

$$\sup_{r\le t}E\Big\{\sup_{r\le u\le t}\Big(\sum_{i,j}\gamma_i^n\gamma_j^{-1}\,|Y^i_j(u,r)|^2\Big)^\alpha\Big\}<\infty.\qquad(5.8)$$

In fact, from condition (5.6) it follows immediately that

$$\sup_u\sum_\beta\gamma_\beta\,|S^i_{k,\beta}(u)|^2\le K\Big(\frac{\gamma_k}{\gamma_i}\Big)^2,\qquad
\sup_u|B^i_k(u)|^2\le K\Big(\frac{\gamma_k}{\gamma_i}\Big)^2.$$

Then, using Burkholder's and Hölder's inequalities we obtain

$$E\Big\{\sup_{r\le u\le t}\Big(\sum_{i,j}\gamma_i^n\gamma_j^{-1}\,|Y^i_j(u,r)|^2\Big)^\alpha\Big\}
\le K\Big\{1+\int_r^t E\Big(\sup_{r\le u\le s}\Big(\sum_{k,j}\gamma_k^n\gamma_j^{-1}\,|Y^k_j(u,r)|^2\Big)^\alpha\Big)\,ds\Big\}.$$

Thus (5.8) follows from Gronwall's lemma. Let us now prove (5.7). By Hölder's inequality,

$$E\Big\{\Big(\sum_{i,j}\gamma_i^n\gamma_j^{-1}\,|Y^i_j(t,r)-Y^i_j(t,r')|^2\Big)^\alpha\Big\}
\le\Big(\sum_{i,j}\gamma_i^n\gamma_j\Big)^{\alpha-1}E\Big(\sum_{i,j}\gamma_i^n\gamma_j^{1-2\alpha}\,|Y^i_j(t,r)-Y^i_j(t,r')|^{2\alpha}\Big).$$

Therefore it suffices to check that

$$E\Big(\sum_{i,j}\gamma_i^n\gamma_j^{1-2\alpha}\,|Y^i_j(t,r)-Y^i_j(t,r')|^{2\alpha}\Big)\le K\,|r-r'|^\alpha.$$

The proof of this fact is similar to that of (5.3), using the arguments which led to (5.8). In conclusion, the series $\sum_j\sigma^j_\beta(r,X_r)\,Y^i_j(t,r)$ is, almost surely, normally convergent as a function of $r$ into $L^2(\gamma^n\times\gamma)$, and this completes the proof of the continuity property of the derivative of $X$. []

Theorem 5.3. Let $\{X_t,\ t\in[0,1]\}$ be the solution of the stochastic differential system

$$X^i_t=x^i_0+\int_0^t\sigma^i(s,X^i_s)\,dW^i_s+\int_0^t b^i(s,X_s)\,ds,\qquad i\in\mathbb{Z},\ x_0\in L^2(\gamma).$$

Assume that the coefficients satisfy the following properties:

(a) $\sigma^i_\beta(t,x)=\delta^i_\beta\,\sigma^i(t,x^i)$ satisfies (H1). Furthermore, $\sigma^i(t,y)$ is jointly continuous in $(t,y)$ and is a $\mathscr{C}^2$-function of $y$ with bounded derivatives.
(b) $\sigma^i(t,y)\ge\varepsilon_i$, for all $t\ge 0$, $y\in\mathbb{R}$, and for some $\varepsilon_i>0$, $i\in\mathbb{Z}$.

(c) $b(t,x)$ satisfies (H1) and (H2'), and $b^i$ is bounded for any $i\in\mathbb{Z}$.

Then, under these hypotheses, condition (C) is satisfied.

Proof. Fix an integer $N$ and let $X_N(t)=\{X^i_t,\ |i|\le N\}$, $\tilde X_N(t)=\{X^i_t,\ |i|>N\}$, and $\tilde X_{N,M}(t)=\{X^i_t,\ N<|i|\le M\}$ for any $M>N$. We want to show that, for any $t>0$, the conditional law of $X_N(t)$ given $\tilde X_N(t)=\xi$ has a density with respect to the Lebesgue measure on $\mathbb{R}^{2N+1}$.

Denote by $\nu(dx)$ the standard normal law on $\mathbb{R}^{2N+1}$. By Proposition 5.2 we know that for any $M$ the random vector $X_M(t)$ has an absolutely continuous distribution. Let $p_{N,M}(t,x,\xi^M)$ be the conditional density (with respect to $\nu$) of $X_N(t)$ given $\tilde X_{N,M}(t)=\xi^M$. We are going to prove that $\{q_M(x,\omega)=p_{N,M}(t,x,\tilde X_{N,M}(t,\omega)),\ M>N\}$ is a uniformly integrable martingale on the product space $(\mathbb{R}^{2N+1}\times\Omega,\ \mathscr{B}(\mathbb{R}^{2N+1})\otimes\mathscr{F},\ \nu\times P)$ with respect to the discrete filtration generated by $\{\tilde X_{N,M}(t),\ M>N\}$. This suffices to prove the theorem, since the $L^1$-limit of $q_M$ will provide a suitable version of the desired conditional density. Therefore we need to show that

$$\sup_M\int_{\mathbb{R}^{2N+1}}E\big[q_M(x,\omega)\,\log q_M(x,\omega)\big]\,\nu(dx)<\infty.$$

By the logarithmic Sobolev inequality applied to $f=q_M^{1/2}$ we have

$$E\int_{\mathbb{R}^{2N+1}}\big[q_M(x,\omega)\,\log q_M(x,\omega)\big]\,\nu(dx)
\le\tfrac12\,E\int_{\mathbb{R}^{2N+1}}q_M(x,\omega)^{-1}\,|\nabla q_M(x,\omega)|^2\,\nu(dx).\qquad(5.9)$$

Notice that

$$E\int_{\mathbb{R}^{2N+1}}q_M(x,\omega)^{-1}\,|\nabla q_M(x,\omega)|^2\,\nu(dx)
=E\Big(\Big|\frac{\nabla p_{N,M}}{p_{N,M}}\big(t,X_N(t),\tilde X_{N,M}(t)\big)\Big|^2\Big),\qquad(5.10)$$

and therefore we have to identify $\nabla q_M(x,\omega)/q_M(x,\omega)$.

We fix an integer $k$, $|k|\le N$, and denote by $\{Y_t,\ t\in[0,1]\}$ the solution to the system

$$Y^i_t=x^i_0+\int_0^t\sigma^i(s,Y^i_s)\,dW^i_s+\mathbf{1}_{\Gamma^c}(i)\int_0^t b^i(s,Y_s)\,ds,$$

where $\Gamma=\{l\in\mathbb{Z},\ |l-k|\le L\}$. By construction $Y^k$ only depends on the Brownian motion $W^k$, while $\{Y^l,\ l\ne k\}$ depends on $\{W^l,\ l\ne k\}$; therefore $D^l Y^k_t=D^k Y^l_t=0$ for $l\ne k$. Set $\sigma^i(x)=\sigma^i(x^i)$ for any $x\in L^2(\gamma)$, $i\in\mathbb{Z}$, and define

$$R(t)=\exp\Big\{\sum_{j\in\Gamma}\Big[\int_0^t\gamma_j^{-1}\Big(\frac{b^j}{\sigma^j}\Big)(s,Y_s)\,dW^j_s-\frac12\int_0^t\gamma_j^{-1}\Big(\frac{b^j}{\sigma^j}\Big)^2(s,Y_s)\,ds\Big]\Big\}.$$
Let $\tilde P$ be the probability measure which has the density $R(1)$ with respect to $P$. By an extension of Girsanov's theorem (see Lemma 3.7 of [8]) the law of $X$ under $P$ is the same as the law of $Y$ under $\tilde P$.

Consider test functions $f\in\mathscr{C}_0^\infty(\mathbb{R}^{2N+1})$ and $g\in\mathscr{C}_b^\infty(\mathbb{R}^{2(M-N)})$. We are going to prove that there exists a measurable function $\psi_{k,M}(x,\xi^M)$, $x\in\mathbb{R}^{2N+1}$, $\xi^M\in\mathbb{R}^{2(M-N)}$ (independent of $f$, $g$), such that

$$E\Big[\int_{\mathbb{R}^{2N+1}}f(x)\,g(\tilde X_{N,M}(t))\,\nabla_k q_M(x,\omega)\,\nu(dx)\Big]
=E\big[f(X_N(t))\,g(\tilde X_{N,M}(t))\,\psi_{k,M}(X_N(t),\tilde X_{N,M}(t))\big],\qquad(5.11)$$

and

$$\sup_M\sum_{|k|\le N}E\big[\psi_{k,M}(X_N(t),\tilde X_{N,M}(t))^2\big]<\infty.\qquad(5.12)$$

In fact, an integration by parts yields

$$E\Big[\int_{\mathbb{R}^{2N+1}}f(x)\,g(\tilde X_{N,M}(t))\,\nabla_k q_M(x,\omega)\,\nu(dx)\Big]=-I_1+I_2,$$

with

$$I_1=E\Big[\int_{\mathbb{R}^{2N+1}}\nabla_k f(x)\,g(\tilde X_{N,M}(t))\,q_M(x,\omega)\,\nu(dx)\Big],\qquad
I_2=E\Big[\int_{\mathbb{R}^{2N+1}}f(x)\,g(\tilde X_{N,M}(t))\,q_M(x,\omega)\,x^k\,\nu(dx)\Big].$$

Notice that

$$I_1=E\big[\nabla_k f(X_N(t))\,g(\tilde X_{N,M}(t))\big]=E\big[\nabla_k f(Y_N(t))\,g(\tilde Y_{N,M}(t))\,R(t)\big],$$

where $Y_N$ and $\tilde Y_{N,M}$ are the analogues of $X_N$ and $\tilde X_{N,M}$, and

$$I_2=E\big[f(X_N(t))\,g(\tilde X_{N,M}(t))\,X^k_t\big].$$

By the chain rule

$$Df(Y_N(t))=\sum_{j=-N}^N\nabla_j f(Y_N(t))\,DY^j_t.$$

Set $\Gamma_k=\langle DY^k_t,\,DY^k_t\rangle$. The above remarks on the derivatives of $Y_N(t)$ imply that

$$\langle DY^k_t,\ Df(Y_N(t))\rangle=\nabla_k f(Y_N(t))\,\Gamma_k,$$

and thus

$$I_1=E\big[\langle DY^k_t,\ Df(Y_N(t))\rangle\,\Gamma_k^{-1}\,g(\tilde Y_{N,M}(t))\,R(t)\big],$$

where $\langle DY^k_t,\ Df(Y_N(t))\rangle=\gamma_k\int_0^t D^k_r Y^k_t\ D^k_r f(Y_N(t))\,dr$, and $D^k(g(\tilde Y_{N,M}(t)))=0$.

The hypotheses on the coefficients $\sigma$ and $b$ imply that for any $p\ge 2$

$$\sup_{0\le r\le t}E\big[|D^k_r Y^k_t|^p\big]<\infty,\qquad E\big[(\Gamma_k)^{-p}\big]\le K\,\gamma_k^{-p},\qquad E\big[(\Gamma_k)^p\big]\le K\,\gamma_k^p,$$
$$E\big[(R(t))^p\big]<\infty,\qquad\sup_{0\le r\le t}E\big[|D^k_r R(t)|^p\big]<\infty,\qquad\sup_{0\le r\le t}E\big[|D^k_r\Gamma_k|^p\big]<\infty,$$
$$\text{and}\qquad\sup_{0\le s\le r\le t}E\big[|D^k_s D^k_r Y^k_t|^p\big]<\infty;\qquad(5.13)$$

this implies that $D^k_\cdot Y^k_t\,\Gamma_k^{-1}R(t)\in\operatorname{Dom}\delta_k$, where $\delta_k$ denotes the Skorohod integral with respect to the Brownian motion $W^k$. Then the integration by parts formula (1.2) yields

$$I_1=E\Big[\gamma_k\int_0^t D^k_r Y^k_t\,D^k_r f(Y_N(t))\,\Gamma_k^{-1}\,g(\tilde Y_{N,M}(t))\,R(t)\,dr\Big]
=E\big[f(Y_N(t))\,g(\tilde Y_{N,M}(t))\,\delta_k\{D^k_\cdot Y^k_t\,\Gamma_k^{-1}R(t)\}\big]$$
$$=E_{\tilde P}\big[f(Y_N(t))\,g(\tilde Y_{N,M}(t))\,\delta_k\{D^k_\cdot Y^k_t\,\Gamma_k^{-1}R(t)\}\,R(1)^{-1}\big],$$

where $E_{\tilde P}$ denotes the mathematical expectation with respect to the probability $\tilde P$. Let $\Lambda_{k,M}(x,\xi^M)$ be a measurable function defined on $\mathbb{R}^{2M+1}$ such that

$$E_{\tilde P}\big[\delta_k\{D^k_\cdot Y^k_t\,\Gamma_k^{-1}R(t)\}\,R(1)^{-1}\ \big|\ Y_M(t)\big]=\Lambda_{k,M}(Y_N(t),\tilde Y_{N,M}(t)).$$

Then we have

$$I_1=E_{\tilde P}\big[f(Y_N(t))\,g(\tilde Y_{N,M}(t))\,\Lambda_{k,M}(Y_N(t),\tilde Y_{N,M}(t))\big]
=E\big[f(X_N(t))\,g(\tilde X_{N,M}(t))\,\Lambda_{k,M}(X_N(t),\tilde X_{N,M}(t))\big].$$

Hence, taking $\psi_{k,M}(x,\xi^M)=x^k-\Lambda_{k,M}(x,\xi^M)$ we get (5.11). Finally, (5.12) follows easily from the definition of $\Lambda_{k,M}(x,\xi^M)$ and the properties (5.13). The proof of the theorem is now complete. []

The second part of this section is devoted to some final remarks on the law of the process $X$ given by (2.2) with initial condition $X_0\in L^2(\gamma)$. We suppose that the coefficients $\sigma^i_\beta$, $b^i$ do not depend on $t$, and that the matrix $a$ is diagonal, $a^{ij}(x)=\delta^{ij}a^i(x)$, with $a^i(x)=\sum_\beta\gamma_\beta(\sigma^i_\beta(x))^2>0$. Furthermore, assume that:

(a) The distribution $\mu$ of $X_t$ does not depend on $t$ ($\mu$ is an invariant measure).

(b) For some fixed subset $J\subset\mathbb{Z}$, $\bar X_t$ is a solution of the martingale problem $(3.1)_J$.

(c) The coefficients $b^i$ and $\bar b^i$ coincide. (This is true if the laws of $X$ and $\bar X$ are the same and the martingale problem $(3.1)_J$ has a unique solution.)

Then $\mu$ is a Gibbs measure with respect to $(b^i)$; that means, for any $i\in\mathbb{Z}$, the conditional density $p(x^i\mid\xi_i)$ of the random variable $X^i_t$ given $\tilde X_i(t)=(X^j_t,\ j\ne i)=\xi_i$ has the following expression:

$$p(x^i\mid\xi_i)=K\exp\Big[\int_0^{x^i}\frac{2\,b^i(\theta,\xi_i)-\nabla_i a^i(\theta,\xi_i)}{a^i(\theta,\xi_i)}\,d\theta\Big]\qquad(5.14)$$

(see [4] for related results). In fact, from (3.4) and Lemma A2 of [7], we can write

$$b^i(x)=\tfrac12\,p(x^i\mid\xi_i)^{-1}\,\nabla_i\big[a^i(x)\,p(x^i\mid\xi_i)\big].\qquad(5.15)$$

It is easy to check that the distributional gradient $\nabla_i p(x^i\mid\xi_i)$ is the function given by

$$\nabla_i p(x^i\mid\xi_i)=\frac{1}{a^i(x)}\,\big\{\nabla_i\big[a^i(x)\,p(x^i\mid\xi_i)\big]-p(x^i\mid\xi_i)\,\nabla_i a^i(x)\big\}.$$

Taking account of this remark we obtain from (5.15)

$$\big[\log p(x^i\mid\xi_i)\big]'=\frac{1}{a^i(x)}\,\big[2\,b^i(x)-\nabla_i a^i(x)\big],$$

and hence (5.14) is proved.
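Formula (5.14) can be checked against a classical scalar example (an illustration with hypothetical coefficients, not taken from the text): for the Ornstein-Uhlenbeck drift $b(x)=-x$ and $a(x)\equiv 1$, (5.14) gives $p(x)=K\exp(-x^2)$, i.e. the $N(0,\tfrac12)$ law, which is indeed the reversible invariant measure of $dX_t=-X_t\,dt+dW_t$. A short quadrature check, assuming nothing beyond (5.14) itself:

```python
import math

def unnormalized_density(x, b=lambda y: -y, a=lambda y: 1.0, da=lambda y: 0.0):
    # Formula (5.14): p(x) = K exp( int_0^x (2 b - a') / a dtheta ),
    # with the integral computed by the trapezoidal rule.
    n = 400
    s = 0.0
    for k in range(n):
        t0 = x * k / n
        t1 = x * (k + 1) / n
        f0 = (2.0 * b(t0) - da(t0)) / a(t0)
        f1 = (2.0 * b(t1) - da(t1)) / a(t1)
        s += 0.5 * (f0 + f1) * (t1 - t0)
    return math.exp(s)

# Normalize on a grid and compare with the N(0, 1/2) density exp(-x^2)/sqrt(pi).
xs = [i * 0.01 for i in range(-500, 501)]
vals = [unnormalized_density(x) for x in xs]
Z = sum(v * 0.01 for v in vals)
for x, v in zip(xs[::100], vals[::100]):
    assert abs(v / Z - math.exp(-x * x) / math.sqrt(math.pi)) < 1e-3
```

Replacing the arguments `b`, `a` and `da` by other scalar coefficients gives the corresponding one-site Gibbs density, provided `a` stays bounded away from zero, as in hypothesis (b) of Theorem 5.3.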
Note also that, if we drop hypothesis (c), we obtain that the invariant measure $\mu$ is a Gibbs measure with respect to $(b^i+\bar b^i)$, i.e.,

$$p(x^i\mid\xi_i)=K\exp\Big[\int_0^{x^i}\frac{(b^i+\bar b^i)(\theta,\xi_i)-\nabla_i a^i(\theta,\xi_i)}{a^i(\theta,\xi_i)}\,d\theta\Big].$$

Therefore $\mu$ is a Gibbs measure with respect to $b^i$ if and only if $b^i=\bar b^i$. This condition is equivalent to the reversibility of $\mu$, if the martingale problem $(3.1)_J$ has a unique solution.

References

1. Anderson, B.D.O.: Reverse-time diffusion equation models. Stochastic Processes Appl. 12, 313-326 (1982)
2. Bouleau, N., Hirsch, F.: Propriétés d'absolue continuité dans les espaces de Dirichlet et applications aux équations différentielles stochastiques. Séminaire de Probabilités XX. Lect. Notes Math. 1204, 131-161. Berlin Heidelberg New York: Springer 1986
3. Doss, H., Royer, G.: Processus de diffusion associé aux mesures de Gibbs. Z. Wahrscheinlichkeitstheor. Verw. Geb. 46, 107-124 (1978)
4. Föllmer, H., Wakolbinger, A.: Time reversal of infinite-dimensional diffusions. Stochastic Processes Appl. 22, 59-77 (1986)
5. Gaveau, B., Trauber, P.: L'intégrale stochastique comme opérateur de divergence dans l'espace fonctionnel. J. Funct. Anal. 46, 230-238 (1982)
6. Gross, L.: Logarithmic Sobolev inequalities. Am. J. Math. 97, 1061-1083 (1976)
7. Haussmann, U., Pardoux, E.: Time reversal of diffusions. Ann. Probab. 14, 1188-1205 (1986)
8. Holley, R., Stroock, D.: Diffusions on an infinite dimensional torus. J. Funct. Anal. 42, 29-63 (1981)
9. Ikeda, N., Watanabe, S.: Stochastic differential equations and diffusion processes. Amsterdam Oxford New York: North-Holland; Tokyo: Kodansha 1981
10. Leha, G., Ritter, G.: On diffusion processes and their semigroups in Hilbert spaces with an application to interacting stochastic systems. Ann. Probab. 12, 1077-1112 (1984)
11. Malliavin, P.: Stochastic calculus of variations and hypoelliptic operators. Proceedings of the International Conference on Stochastic Differential Equations, Kyoto 1976, 195-263. Tokyo: Kinokuniya; New York: Wiley 1978
12. Millet, A., Nualart, D., Sanz, M.: Integration by parts and time reversal for diffusion processes. Ann. Probab. 17, 208-238 (1989)
13. Moulinier, J.M.: Absolue continuité de probabilités de transition par rapport à une mesure gaussienne dans un espace de Hilbert. J. Funct. Anal. 64, 275-295 (1985)
14. Nelson, E.: Dynamical theories of Brownian motion. Princeton: Princeton University Press 1967
15. Nualart, D., Pardoux, E.: Stochastic calculus with anticipating integrands. Probab. Th. Rel. Fields 78, 535-581 (1988)
16. Nualart, D., Zakai, M.: Generalized stochastic integrals and the Malliavin calculus. Probab. Th. Rel. Fields 73, 255-280 (1986)
17. Shiga, T., Shimizu, A.: Infinite dimensional stochastic differential equations and their applications. J. Math. Kyoto Univ. 20, 395-416 (1980)
18. Skorohod, A.V.: On a generalization of a stochastic integral. Theory Probab. Appl. XX, 219-233 (1975)
19. Üstünel, A.S.: Some applications of the Malliavin calculus to stochastic analysis. Lect. Notes Math. 1236, 230-238. Berlin Heidelberg New York: Springer 1987
20. Watanabe, S.: Lectures on stochastic differential equations and Malliavin calculus. Tata Institute of Fundamental Research. Berlin Heidelberg New York: Springer 1984
21. Zakai, M.: The Malliavin calculus. Acta Appl. Math. 3, 175-207 (1985)

Received February 22, 1988