
MS&E 322 Winter 2023

Stochastic Calculus and Control January 7, 2023


Prof. Peter W. Glynn Page 1 of 12

Section 4: Stochastic Calculus

Contents

4.1 The Itô Integral
4.2 Itô's Formula
4.3 Itô's Formula in Higher Dimensions
4.4 Existence and Uniqueness for Stochastic Differential Equations

4.1 The Itô Integral

In Section 1, we discussed the need to develop a rigorous interpretation of stochastic integrals in which the integrator is Brownian motion. The obvious way to define
$$\int_0^t \psi(s)\,dB(s)$$
is to approximate the integral by a finite sum of the form
$$\sum_{j=0}^{n-1} \psi(\xi_{jn})(B(t_{j+1}) - B(t_j)),$$
where $t_j = jt/n$, and $\xi_{jn}$ is a point chosen from $[t_j, t_{j+1}]$. The sum is called a "Riemann-Stieltjes" approximation.
Unfortunately, we cannot approximate the integral by a path-by-path Stieltjes approximation because $B$ does not have finite total variation on $[0, t]$. Indeed, note that
$$\sum_{j=0}^{n-1} |B(t_{j+1}) - B(t_j)| \overset{D}{=} \sqrt{\frac{t}{n}} \sum_{i=0}^{n-1} |N_i(0,1)|,$$
where $N_1(0,1), N_2(0,1), \ldots$ is a sequence of iid standard normal random variables. Hence the law of large numbers implies that
$$\frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} |B(t_{j+1}) - B(t_j)|$$
converges weakly to $\sqrt{t}\,E[|N(0,1)|] > 0$ as $n \to \infty$, so $B$ cannot have finite variation.
The fundamental reason for the failure of this approach is the highly irregular nature of Brow-
nian sample paths. Therefore, we need some entirely different approach to defining the stochastic
integral. One idea is to use the fact that Brownian motions are random functions, and so we can
essentially make use of “weaker” forms of limits.
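The divergence of Brownian total variation is easy to see numerically. Below is a minimal sketch (an illustration, not part of the notes; the seed and grid sizes are arbitrary choices): the grid variation, scaled by $1/\sqrt{n}$, settles near $\sqrt{t}\,E|N(0,1)| = \sqrt{2t/\pi}$, so the unscaled variation diverges like $\sqrt{n}$.

```python
import numpy as np

# Sketch (not from the notes): the total variation of Brownian motion sampled
# on the grid t_j = j*t/n diverges like sqrt(n) as the mesh is refined.
rng = np.random.default_rng(0)
t = 1.0

def sampled_variation(n: int) -> float:
    """Sum |B(t_{j+1}) - B(t_j)| over one simulated path on an n-point grid."""
    increments = rng.normal(0.0, np.sqrt(t / n), size=n)
    return float(np.abs(increments).sum())

# Scaled by 1/sqrt(n), the variation approaches sqrt(t) * E|N(0,1)| = sqrt(2t/pi).
for n in [10_000, 1_000_000]:
    print(n, sampled_variation(n) / np.sqrt(n))
```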

To begin, we restrict the class of integrands that we are willing to consider. Let $H^2$ be the set of processes $(\psi(t) : t \ge 0)$ which are adapted to $(B(t) : t \ge 0)$ and satisfy
$$\int_0^t E[\psi^2(s)]\,ds < \infty$$
for each $t \ge 0$. Suppose $\psi \in H^2$ is of the form
$$\psi(t) = \sum_{j=0}^{k-1} \psi(\tau_j)\,I(\tau_j \le t < \tau_{j+1}), \tag{4.1.1}$$
with $k, \ell \ge 1$, $\tau_j = j 2^{-\ell}$ (the $\tau_j$ here are deterministic), and $(\psi(\tau_j) : j \ge 0)$ adapted to $(B(\tau_j) : j \ge 0)$. Note that (4.1.1) defines a process that is constant over intervals of length $2^{-\ell}$. We call a process having this form elementary.
For such an elementary $\psi$, define
$$I(\psi, t) = \int_0^t \psi(s)\,dB(s) = \sum_{j=0}^{\lfloor 2^\ell t\rfloor - 1} \psi(\tau_j)(B(\tau_{j+1}) - B(\tau_j)) + \psi(\tau_{\lfloor 2^\ell t\rfloor})(B(t) - B(\tau_{\lfloor 2^\ell t\rfloor})). \tag{4.1.2}$$
Note that $I(\psi, \cdot)$ is continuous, with $(I(\psi, t) : t \ge 0)$ adapted to $(B(t) : t \ge 0)$. Furthermore, $(I(\psi, t) : t \ge 0)$ is a martingale with respect to $(B(t) : t \ge 0)$.
Suppose now that $\psi \in H^2$ is a bounded process with continuous sample paths. We can then approximate $\psi$ by a sequence of elementary processes $(\psi_n : n \ge 1)$ for which
$$\psi_n(t) \to \psi(t)$$
a.s. for each $t \ge 0$, as $n \to \infty$. Therefore, it seems reasonable to define $I(\psi, t)$ by "taking limits":
$$I(\psi, t) = \lim_{n\to\infty} I(\psi_n, t). \tag{4.1.3}$$
The question is: can we show that the "limit" in (4.1.3) exists? (And in what sense?)
We will consider the $I(\psi_n, t)$ as elements of $L^2$, i.e., $E[I(\psi_n, t)^2]^{1/2} < \infty$ for each $n$. The key to establishing the existence of the $L^2$ limit (4.1.3) is the identity
$$E[I(\varphi, t)^2] = \int_0^t E[\varphi^2(s)]\,ds, \tag{4.1.4}$$
which is easily validated for all elementary processes $\varphi$. To show that $(I(\psi_n, t) : n \ge 1)$ converges in $L^2$, because $L^2$ is complete, it is sufficient to show that $(I(\psi_n, t) : n \ge 1)$ is Cauchy in $L^2$. Indeed, note that for all $n$ and $m$, $\psi_n - \psi_m$ is elementary and $I(\psi_n, t) - I(\psi_m, t) = I(\psi_n - \psi_m, t)$, and thus
$$E[(I(\psi_n, t) - I(\psi_m, t))^2] = \int_0^t E[(\psi_n(s) - \psi_m(s))^2]\,ds.$$

Because $\psi$ and the $\psi_n$ are all uniformly bounded, it is evident that
$$\int_0^t E[(\psi_n(s) - \psi(s))^2]\,ds \to 0$$
as $n \to \infty$. So, with $\epsilon > 0$, for $n \ge n(\epsilon)$,
$$\int_0^t E[(\psi_n(s) - \psi(s))^2]\,ds < \epsilon^2/4.$$
Consequently, for $n, m \ge n(\epsilon)$,
$$E[(I(\psi_n, t) - I(\psi_m, t))^2]^{1/2} = \left(\int_0^t E[(\psi_n(s) - \psi_m(s))^2]\,ds\right)^{1/2} < \epsilon.$$

Therefore, $(I(\psi_n, t) : n \ge 1)$ is indeed Cauchy in $L^2$, and by completeness of $L^2$, the limit in (4.1.3) does exist in the $L^2$ sense. To show that $I(\psi, t)$, defined as $\lim_{n\to\infty} I(\psi_n, t)$ in the $L^2$ sense, is invariant to the particular elementary sequence $(\psi_n : n \ge 1)$ chosen, let $(\psi_n : n \ge 1)$ and $(\psi'_n : n \ge 1)$ be two elementary sequences such that a.s., for every $t \ge 0$,
$$\psi_n(t) \to \psi(t)$$
and
$$\psi'_n(t) \to \psi(t).$$
Let $(\tilde\psi_n : n \ge 1)$ be the elementary sequence formed by alternating the two sequences, i.e.,
$$\tilde\psi_1 = \psi_1,\ \tilde\psi_2 = \psi'_1,\ \tilde\psi_3 = \psi_2,\ \tilde\psi_4 = \psi'_2,\ \tilde\psi_5 = \psi_3,\ \tilde\psi_6 = \psi'_3, \ldots$$
It suffices to show that $(I(\tilde\psi_n, t) : n \ge 1)$ is Cauchy in $L^2$, from which we can obtain the desired invariance in the limit definition of $I(\psi, t)$. The verification is straightforward. As before, with $\epsilon > 0$, for $n \ge n(\epsilon)$,
$$\int_0^t E[(\psi_n(s) - \psi(s))^2]\,ds < \epsilon^2/4,$$
and for $n \ge n'(\epsilon)$,
$$\int_0^t E[(\psi'_n(s) - \psi(s))^2]\,ds < \epsilon^2/4.$$
Consequently, for $n, m \ge \tilde n(\epsilon) = 2\max(n(\epsilon), n'(\epsilon))$,
$$E[(I(\tilde\psi_n, t) - I(\tilde\psi_m, t))^2]^{1/2} = \left(\int_0^t E[(\tilde\psi_n(s) - \tilde\psi_m(s))^2]\,ds\right)^{1/2} < \epsilon.$$

Therefore, $(I(\tilde\psi_n, t) : n \ge 1)$ is indeed Cauchy in $L^2$, as desired. As a consequence, we can define the stochastic integral $I(\psi, t)$ for all bounded continuous $\psi \in H^2$.
It turns out that any $\psi \in H^2$ can be approximated by bounded continuous processes that are elements of $H^2$. One can thereby extend the definition of $I(\psi, t)$ to all processes $\psi \in H^2$.
The following proposition is easily established:

Proposition 4.1.1 Suppose that $\psi, \chi \in H^2$. Then, for $s, t \ge 0$ and $a \in \mathbb{R}$,

1. $\displaystyle\int_0^{t+s} \psi(u)\,dB(u) = \int_0^s \psi(u)\,dB(u) + \int_s^{t+s} \psi(u)\,dB(u)$.

2. $I(\psi + a\chi, t) = I(\psi, t) + aI(\chi, t)$.

3. $(I(\psi, t) : t \ge 0)$ is a martingale that is adapted to $(B(t) : t \ge 0)$.

4. $E[I(\psi, t)^2] = \displaystyle\int_0^t E[\psi^2(s)]\,ds$.

Remark 4.1.1 For the most part, we have suppressed the discussion of measurability issues in this course. For those of you who are familiar with $\sigma$-algebras and their uses in probability, it is worth realizing that the definition of the stochastic integral given above is an area in which one needs to be careful about measurability. Specifically, let $\mathcal{F}_t = \sigma(B(s) : 0 \le s \le t)$ be the $\sigma$-algebra generated by Brownian motion up to time $t$. Note that $I(\psi, t)$ is defined by a limiting operation involving the probability $P$, and there is no guarantee that $I(\psi, t)$ will be $\mathcal{F}_t$-measurable. One solution is to throw the null sets of $P$ into $\mathcal{F}_t$, thereby enlarging $\mathcal{F}_t$ to $\mathcal{G}_t$, where $\mathcal{G}_t$ is the smallest $\sigma$-algebra containing $\mathcal{F}_t$ and the sets having $P$-probability zero. (And we assume that the original probability space is complete, in the sense that every subset of a zero-probability event is assumed to be in $\mathcal{F}$.)

Remark 4.1.2 The stochastic integral that we have constructed above was first developed by K. Itô, and is therefore known as the Itô integral. The fundamental property given in part 4 of Proposition 4.1.1 is the isometry property.
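The martingale and isometry properties can be checked by Monte Carlo. The sketch below is an illustration, not part of the notes (sample sizes, seed, and tolerances are arbitrary choices): it takes $\psi(s) = B(s)$ on $[0,1]$, for which the isometry gives $E[I(\psi,1)^2] = \int_0^1 s\,ds = 1/2$.

```python
import numpy as np

# Monte Carlo sketch of Proposition 4.1.1, parts 3 and 4, for psi(s) = B(s):
# E[I(psi,1)] = 0 (martingale started at 0) and E[I(psi,1)^2] = 1/2 (isometry).
rng = np.random.default_rng(1)
n_paths, n_steps, t = 10_000, 500, 1.0
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B_left = np.cumsum(dB, axis=1) - dB   # B(t_j): left endpoint of each interval
I = np.sum(B_left * dB, axis=1)       # sum_j psi(t_j) (B(t_{j+1}) - B(t_j))

print(I.mean())        # should be near 0
print((I**2).mean())   # should be near 1/2
```

Note that the integrand is evaluated at the *left* endpoint of each interval; this choice is exactly what makes the resulting integral a martingale.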

In view of the continuity of $I(\psi, \cdot)$ for elementary processes $\psi \in H^2$, it seems reasonable to expect that $I(\psi, \cdot)$ ought to be continuous for arbitrary $\psi \in H^2$. Indeed, we have the following theorem (whose proof requires the maximal inequality for martingales):

Theorem 4.1.1 Suppose that ψ ∈ H 2 . Then there exists a version of (I(ψ, t) : t ≥ 0) with
continuous sample paths.

4.2 Itô’s Formula

Itô's formula generalizes the fundamental theorem of calculus to processes that are defined as stochastic integrals. To get a sense of the complications that can arise, set $g(x) = x^2$. Then
$$
\begin{aligned}
g(B(t)) - g(B(0)) &= \lim_{n\to\infty} \sum_{i=1}^n \left(B\left(\tfrac{it}{n}\right) - B\left(\tfrac{(i-1)t}{n}\right)\right)\left(B\left(\tfrac{it}{n}\right) + B\left(\tfrac{(i-1)t}{n}\right)\right) \\
&= \lim_{n\to\infty} \sum_{i=1}^n \left(B\left(\tfrac{it}{n}\right) - B\left(\tfrac{(i-1)t}{n}\right)\right)^2 + 2\lim_{n\to\infty} \sum_{i=1}^n B\left(\tfrac{(i-1)t}{n}\right)\left(B\left(\tfrac{it}{n}\right) - B\left(\tfrac{(i-1)t}{n}\right)\right).
\end{aligned}
$$
Clearly, the second limit converges to
$$2\int_0^t B(s)\,dB(s).$$
As for the first term, observe that
$$\sum_{i=1}^n \left(B\left(\tfrac{it}{n}\right) - B\left(\tfrac{(i-1)t}{n}\right)\right)^2 \overset{D}{=} \frac{t}{n}\sum_{i=1}^n N_i(0,1)^2 \to t$$
a.s. as $n \to \infty$, by the law of large numbers. We conclude that
$$B^2(t) - B^2(0) = t + 2\int_0^t B(s)\,dB(s).$$
In other words,
$$g(B(t)) - g(B(0)) = \int_0^t g'(B(s))\,dB(s) + t. \tag{4.2.1}$$
The mysterious additional term appears because of the irregularity of Brownian paths.
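Relation (4.2.1) can be checked pathwise. The following sketch is illustrative only (grid size and seed are arbitrary choices): it compares the left-endpoint Riemann-Itô sum against $B(t)^2 - t$ on one fine grid.

```python
import numpy as np

# Pathwise sketch of (4.2.1) with g(x) = x^2: the left-endpoint sum
# 2 * sum_j B(t_j) (B(t_{j+1}) - B(t_j)) should be close to B(t)^2 - t.
rng = np.random.default_rng(2)
t, n = 1.0, 200_000
dB = rng.normal(0.0, np.sqrt(t / n), size=n)
B_right = np.cumsum(dB)        # B(t_1), ..., B(t_n)
B_left = B_right - dB          # B(t_0), ..., B(t_{n-1})

ito_sum = 2.0 * np.sum(B_left * dB)
print(ito_sum, B_right[-1]**2 - t)
```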

Definition 4.2.1 Let $x = (x(t) : t \ge 0)$ be a continuous function. Suppose that
$$\lim_{n\to\infty} \sum_{i=1}^n \left(x\left(\tfrac{it}{n}\right) - x\left(\tfrac{(i-1)t}{n}\right)\right)^2$$
exists for each $t \ge 0$. The above limit is denoted by $[x](t)$ and is called the quadratic variation of $x$.

Exercise 4.2.1 Show that if x is continuously differentiable, then [x](t) = 0 for all t ≥ 0.
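Both halves of this picture are easy to see numerically. The sketch below is illustrative (grid sizes and the smooth test function $x(s) = \sin s$ are arbitrary choices): the quadratic variation of a simulated Brownian path over $[0, t]$ approaches $t$, while that of the smooth path shrinks toward zero as the grid is refined.

```python
import numpy as np

# Sketch: [B](t) ~ t for Brownian motion, while [x](t) = 0 for the smooth
# path x(s) = sin(s), as the grid is refined.
rng = np.random.default_rng(3)
t = 2.0
for n in [1_000, 100_000]:
    grid = np.linspace(0.0, t, n + 1)
    dB = rng.normal(0.0, np.sqrt(t / n), size=n)
    qv_brownian = float(np.sum(dB**2))
    qv_smooth = float(np.sum(np.diff(np.sin(grid))**2))
    print(n, qv_brownian, qv_smooth)
```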

We showed above that $[B](t) = t$. The nonzero quadratic variation of $B$ is the basic reason that the additional term appears in (4.2.1). Suppose that $g(x) = a_0 + a_1 x + a_2 x^2$. Then (4.2.1) implies that
$$
\begin{aligned}
g(B(t)) - g(B(0)) &= a_1(B(t) - B(0)) + 2a_2\int_0^t B(s)\,dB(s) + a_2 t \\
&= \int_0^t g'(B(s))\,dB(s) + \int_0^t \frac{g''(B(s))}{2}\,ds.
\end{aligned}
$$

Let $C^2$ be the set of functions $g : \mathbb{R} \to \mathbb{R}$ that are twice continuously differentiable. For $g \in C^2$, note that $g$ is locally quadratic. So, if $h$ is small, then
$$g(B(t+h)) - g(B(t)) \approx \int_t^{t+h} g'(B(s))\,dB(s) + \int_t^{t+h} \frac{g''(B(s))}{2}\,ds,$$
suggesting that
$$g(B(t)) - g(B(0)) = \int_0^t g'(B(s))\,dB(s) + \int_0^t \frac{g''(B(s))}{2}\,ds.$$
In differential notation, we write this as
$$dg(B(t)) = g'(B(t))\,dB(t) + \frac{1}{2} g''(B(t))\,dt.$$
In fact, this formula generalizes from Brownian motion $B$ to processes $X$ that satisfy
$$dX(t) = \phi(t)\,dt + \psi(t)\,dB(t),$$
where $\phi$ and $\psi$ are adapted to $B$. Specifically, we get
$$dg(X(t)) = g'(X(t))\,dX(t) + \frac{1}{2} g''(X(t))\psi^2(t)\,dt. \tag{4.2.2}$$
Relation (4.2.2) is known as Itô's formula. Now we turn to proving this result in a special case.

Proposition 4.2.1 Suppose that $\phi$ and $\psi$ are bounded and continuous processes that are adapted to $B$. If
$$X(t) = x_0 + \int_0^t \phi(s)\,ds + \int_0^t \psi(s)\,dB(s),$$
and $g$, $g'$, and $g''$ are bounded and uniformly continuous, then
$$g(X(t)) = g(x_0) + \int_0^t g'(X(s))\,dX(s) + \frac{1}{2}\int_0^t g''(X(s))\psi^2(s)\,ds. \tag{4.2.3}$$
Remark 4.2.1 Note that the first integral can be written as
$$\int_0^t g'(X(s))\phi(s)\,ds + \int_0^t g'(X(s))\psi(s)\,dB(s).$$
We interpret the first integral as a Stieltjes integral (defined path-by-path), and the second integral as an Itô integral.

Proof: Note that both sides of (4.2.3) are continuous processes, so it is sufficient to show equality for each fixed $t \ge 0$. (In that case, we get agreement on the rationals with probability one; the continuity argument then gives agreement everywhere.) Without loss of generality, we may consider $t = 1$. Then
$$g(X(1)) - g(X(0)) = \sum_{j=1}^n \left(g(X(t_j)) - g(X(t_{j-1}))\right),$$
where $t_j = j/n$, $0 \le j \le n$. But
$$g(X(t_j)) - g(X(t_{j-1})) = g'(X(t_{j-1}))(X(t_j) - X(t_{j-1})) + \frac{1}{2} g''(X(\xi_{jn}))(X(t_j) - X(t_{j-1}))^2, \tag{4.2.4}$$
where $\xi_{jn} \in [t_{j-1}, t_j]$. We approximate the increment $\delta_{jn} = X(t_j) - X(t_{j-1})$ by
$$W_{jn} = \phi(t_{j-1})(t_j - t_{j-1}) + \psi(t_{j-1})(B(t_j) - B(t_{j-1})) = \frac{1}{n}\phi(t_{j-1}) + \psi(t_{j-1})(B(t_j) - B(t_{j-1})).$$
Note that by definition of the Itô integral,
$$\sum_{j=1}^n g'(X(t_{j-1}))\,W_{jn} \Rightarrow \int_0^t g'(X(s))\,dX(s)$$
as $n \to \infty$. Also,
$$\sum_{j=1}^n g'(X(t_{j-1}))(\delta_{jn} - W_{jn}) = \sum_{j=1}^n g'(X(t_{j-1})) \int_{t_{j-1}}^{t_j} (\phi(s) - \phi(t_{j-1}))\,ds + \sum_{j=1}^n g'(X(t_{j-1})) \int_{t_{j-1}}^{t_j} (\psi(s) - \psi(t_{j-1}))\,dB(s).$$
The first term converges to zero a.s. as $n \to \infty$. For the second term, note that its squared $L^2$ norm satisfies
$$\sum_{j=1}^n E[g'(X(t_{j-1}))^2] \int_{t_{j-1}}^{t_j} E[(\psi(s) - \psi(t_{j-1}))^2]\,ds \to 0$$
as $n \to \infty$, so the second term goes to zero in probability. We conclude that
$$\sum_{j=1}^n g'(X(t_{j-1}))(X(t_j) - X(t_{j-1})) \Rightarrow \int_0^t g'(X(s))\,dX(s)$$

as $n \to \infty$. We now focus on the second term in (4.2.4). Note that
$$
\begin{aligned}
\sum_{j=1}^n g''(X(t_{j-1}))\,W_{jn}^2 &= \frac{1}{n^2}\sum_{j=1}^n g''(X(t_{j-1}))\phi^2(t_{j-1}) \\
&\quad + \frac{2}{n}\sum_{j=1}^n g''(X(t_{j-1}))\phi(t_{j-1})\psi(t_{j-1})(B(t_j) - B(t_{j-1})) \\
&\quad + \sum_{j=1}^n g''(X(t_{j-1}))\psi^2(t_{j-1})(B(t_j) - B(t_{j-1}))^2.
\end{aligned}
$$
The first and second terms converge to zero in probability. Observe further that $((B(t_j) - B(t_{j-1}))^2 - 1/n : j \ge 1)$ is a sequence of independent stationary martingale differences, so that
$$E\left[\left(\sum_{j=1}^n g''(X(t_{j-1}))\psi^2(t_{j-1})\left((B(t_j) - B(t_{j-1}))^2 - \frac{1}{n}\right)\right)^2\right] = \sum_{j=1}^n E[g''(X(t_{j-1}))^2\psi^4(t_{j-1})]\,\frac{2}{n^2} \to 0$$

as $n \to \infty$. But
$$\sum_{j=1}^n g''(X(t_{j-1}))\psi^2(t_{j-1})\,\frac{1}{n} \to \int_0^1 g''(X(s))\psi^2(s)\,ds$$
a.s. as $n \to \infty$, so evidently
$$\sum_{j=1}^n g''(X(t_{j-1}))\,W_{jn}^2 \Rightarrow \int_0^1 g''(X(s))\psi^2(s)\,ds \tag{4.2.5}$$

as $n \to \infty$. A similar argument shows that
$$\sum_{j=1}^n g''(X(t_{j-1}))(W_{jn}^2 - \delta_{jn}^2) \Rightarrow 0 \tag{4.2.6}$$
as $n \to \infty$. Finally,
$$\left|\sum_{j=1}^n (g''(X(t_{j-1})) - g''(X(\xi_{jn})))\,\delta_{jn}^2\right| \le \max_{0\le s\le 1/n,\ 0\le u\le 1} |g''(X(u+s)) - g''(X(u))| \sum_{j=1}^n \delta_{jn}^2. \tag{4.2.7}$$
The first factor converges to zero a.s. while the second factor converges to
$$\int_0^1 \psi^2(s)\,ds$$
as $n \to \infty$. Relations (4.2.5), (4.2.6), and (4.2.7) together imply that
$$\sum_{j=1}^n g''(X(\xi_{jn}))(X(t_j) - X(t_{j-1}))^2 \Rightarrow \int_0^1 g''(X(s))\psi^2(s)\,ds$$


as $n \to \infty$. Combining all of the previous results yields the desired claim. □


Itô's formula is perhaps the single most useful result in stochastic calculus. Consequently, it is worth stating some generalizations. The basic formula asserts that if
$$dX(t) = \phi(t)\,dt + \psi(t)\,dB(t),$$
then
$$dg(X(t)) = g'(X(t))\,dX(t) + \frac{1}{2} g''(X(t))\psi^2(t)\,dt.$$
Note that the formula involves the stochastic integral
$$\int_0^t g'(X(s))\psi(s)\,dB(s).$$

Without imposing significant restrictions, the integrand will typically not be in $H^2$. Nevertheless, it turns out that the stochastic integral can be extended beyond $H^2$, and this extension is very useful in stating general forms of Itô's formula.
While the extended stochastic integral can be defined unambiguously, there is one property of the integral on $H^2$ that is not inherited by more general integrands. In particular, it is no longer the case that the stochastic integral is a square-integrable martingale. In fact, there is no longer even any guarantee that the stochastic integral is a martingale; the difficulty is that the stochastic integral may no longer be integrable. Of course, if one can show that the integrand is in $H^2$, then the stochastic integral is square-integrable, and this issue does not arise.
So, suppose that
$$X(t) = x + \int_0^t \phi(s)\,ds + \int_0^t \psi(s)\,dB(s), \tag{4.2.8}$$
where $\phi, \psi$ are adapted to $B$,
$$\int_0^t |\phi(s)|\,ds < \infty,$$
and
$$\int_0^t \psi^2(s)\,ds < \infty,$$
both almost surely. Then the stochastic integral defining $X$ exists. With these assumptions, we state the following theorem without proof.

Theorem 4.2.1 Suppose that $X$ satisfies (4.2.8). If $g : \mathbb{R} \to \mathbb{R}$ is twice continuously differentiable, then
$$g(X(t)) = g(x) + \int_0^t g'(X(s))\,dX(s) + \frac{1}{2}\int_0^t g''(X(s))\psi^2(s)\,ds.$$

In some applications, it is convenient to permit $g$ to depend explicitly on $t$.

Theorem 4.2.2 Given the same assumptions as in the preceding, let $g : [0,\infty) \times \mathbb{R} \to \mathbb{R}$ be continuously differentiable in $t$ and twice continuously differentiable in $x$. Then
$$g(t, X(t)) = g(0, x) + \int_0^t \frac{\partial g(s, X(s))}{\partial t}\,ds + \int_0^t \frac{\partial g(s, X(s))}{\partial x}\,dX(s) + \frac{1}{2}\int_0^t \frac{\partial^2 g(s, X(s))}{\partial x^2}\,\psi^2(s)\,ds.$$

We close this section by noting that some care needs to be taken in considering applications that require relaxing the smoothness conditions on $g$. Generally, Itô's formula continues to hold if there are finitely many points at which the second derivative of $g$ fails to exist, provided that the first derivative is continuous there. However, if the first derivative is not continuous, then the differential of $g(X(t))$ will have new (and unexpected) features. For instance, if $g(x) = |x|$, then
$$g(B(t)) = g(B(0)) + \int_0^t g'(B(s))\,dB(s) + \frac{1}{2}\int_0^t g''(B(s))\,ds + \beta(t),$$
where $\beta$ involves the so-called "local time" of Brownian motion.

4.3 Itô’s Formula in Higher Dimensions

There are some models that demand the consideration of multi-dimensional processes. This occurs, for example, in finance models in which the volatility is stochastic. In this case,
$$X_i(t) = x_i + \int_0^t \phi_i(s)\,ds + \int_0^t \psi_i(s)\,dB_i(s), \quad 1 \le i \le d. \tag{4.3.1}$$
Here, the driving "noise processes" are $d$-dimensional Brownian motions, typically correlated. To be more specific, we define $\tilde B = (\tilde B(t) : t \ge 0)$ to be $d$-dimensional standard Brownian motion if $\tilde B = (\tilde B_1, \ldots, \tilde B_d)$ has $d$ independent components, each of which is a standard one-dimensional Brownian motion. To construct a vector $B$ of $d$ correlated Brownian motions, let $C$ be the $d \times d$ matrix in which
$$C(i,j) = \mathrm{Corr}(B_i(1), B_j(1))$$
is the desired coefficient of correlation between $B_i$ and $B_j$. Then there exists a lower triangular matrix $L$ (the Cholesky factorization of $C$) such that $LL^T = C$. Set
$$B(t) = L\tilde B(t)$$
for $t \ge 0$. Then $B$ has stationary independent increments and continuous sample paths, and
$$B(t) \overset{D}{=} N(0, tC).$$
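The Cholesky construction is a one-liner in practice. The sketch below is illustrative (the correlation matrix, sample size, and tolerance are arbitrary choices): it builds samples of $B(1) = L\tilde B(1)$ for $d = 2$ and checks the empirical correlation against the target $C(1,2) = 0.6$.

```python
import numpy as np

# Sketch: correlated Brownian motion at t = 1 via B(1) = L * Btilde(1),
# where L is the lower-triangular Cholesky factor of the correlation matrix C.
rng = np.random.default_rng(4)
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(C)   # lower triangular, L @ L.T == C

n_paths, t = 50_000, 1.0
B_tilde = rng.normal(0.0, np.sqrt(t), size=(n_paths, 2))  # independent Btilde(1)
B = B_tilde @ L.T           # rows are samples of B(1) = L Btilde(1)
print(np.corrcoef(B[:, 0], B[:, 1])[0, 1])  # near 0.6
```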
It seems natural in (4.3.1) to permit $\phi_i$ and $\psi_i$ to be adapted to $B$ rather than to $B_i$. Extending adaptedness in this way does not add any complications whatsoever to the construction of the Itô integral. All our previous results extend easily to this more general setting, as long as both
$$\int_0^t |\phi_i(s)|\,ds < \infty$$
and
$$\int_0^t \psi_i^2(s)\,ds < \infty$$
hold a.s. for each $1 \le i \le d$. The vector analogue of Itô's formula is given by the following theorem:

Theorem 4.3.1 Suppose that $X$ satisfies the previous assumptions. If $g : [0,\infty) \times \mathbb{R}^d \to \mathbb{R}$ is continuously differentiable in $t$ and twice continuously differentiable in $x$, then
$$g(t, X(t)) = g(0, x) + \int_0^t \frac{\partial g(s, X(s))}{\partial t}\,ds + \sum_{i=1}^d \int_0^t \frac{\partial g(s, X(s))}{\partial x_i}\,dX_i(s) + \frac{1}{2}\sum_{i,j=1}^d \int_0^t \frac{\partial^2 g(s, X(s))}{\partial x_i \partial x_j}\,\psi_i(s)\psi_j(s)\,C(i,j)\,ds,$$
where $x = (x_1, \ldots, x_d)$.

4.4 Existence and Uniqueness for Stochastic Differential Equations

Our goal here is to study the stochastic differential equation
$$dX(t) = \mu(X(t))\,dt + \sigma(X(t))\,dB(t) \tag{4.4.1}$$
subject to $X(0) = x_0$. Specifically, the issue that arises is: for what specifications of $\mu$ and $\sigma$ does (4.4.1) lead to a well-defined model? Mathematically, this corresponds to the question of when one can find a unique continuous process $X$ satisfying
$$X(t) = x_0 + \int_0^t \mu(X(s))\,ds + \int_0^t \sigma(X(s))\,dB(s).$$
Before addressing this question, let us gain further insight into the modeling interpretation of $\mu$ and $\sigma$. Note that
$$E_x(X(h) - X(0)) \approx E_x \int_0^h \mu(X(s))\,ds \approx h\mu(x),$$
so $\mu(x)$ is the infinitesimal rate of drift of $X$, given that the process is currently at $x$. As a result, $\mu$ is called the infinitesimal drift of $X$. Also, $\mathrm{var}_x(X(h) - X(0)) \approx h\sigma^2(x)$, so $\sigma^2(x)$ is the infinitesimal variability of $X$, conditional on the process currently occupying $x$. Thus, $\sigma^2$ is called the infinitesimal variance of $X$.
In general, the model (4.4.1) need not exhibit any solutions. Furthermore, if solutions indeed exist, they need not be unique. These problems already arise in the deterministic setting in which $\sigma(x) \equiv 0$.

Example 4.4.1 Consider a population $(x(t) : t \ge 0)$ that obeys the dynamics
$$\frac{dx}{dt} = x(t)^p, \tag{4.4.2}$$
subject to $x(0) = 1$ and with $p > 1$. Then
$$x(t) = \frac{1}{(1 - (p-1)t)^{1/(p-1)}},$$
which explodes to $\infty$ as $t \nearrow 1/(p-1)$. In other words, there exists no solution of (4.4.2) that can be defined for all $t \ge 0$. Note, however, that if $p = 1$, then (4.4.2) makes perfect sense; we thus suspect that when the growth in $x$ is "linear" or less, then this makes for good mathematics (and good models).
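The blow-up is visible even in a crude numerical integration. Below is a sketch for the case $p = 2$, so the exact solution is $x(t) = 1/(1-t)$, exploding at $t = 1$; the step size and stopping point are arbitrary choices.

```python
# Sketch of Example 4.4.1 with p = 2: Euler integration of dx/dt = x^2,
# x(0) = 1, tracks the exact solution x(t) = 1/(1 - t) toward its blow-up.
p, dt = 2, 1e-5
x, t = 1.0, 0.0
while t < 0.99:
    x += dt * x**p   # Euler step
    t += dt
print(x, 1.0 / (1.0 - t))   # both of order 100 near the explosion
```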

Example 4.4.2 Consider the equation
$$\frac{dx}{dt} = 3x(t)^{2/3}, \tag{4.4.3}$$
subject to $x(0) = 0$. Then, for any $a > 0$,
$$x(t) = \begin{cases} 0, & t \le a \\ (t-a)^3, & t > a \end{cases}$$
solves (4.4.3). So, (4.4.3) does not uniquely specify $x$. Note that $f(x) = x^{2/3}$ has no derivative at $x = 0$. In other words, $f$ is not locally linear at $x = 0$. As in the previous example, one wishes that the growth in $x$ exhibit some form of linearity.

With these examples in mind, consider the following definition:

Definition 4.4.1 A function $f : \mathbb{R} \to \mathbb{R}$ is said to be globally Lipschitz if there exists $c < \infty$ such that
$$|f(x) - f(y)| \le c|x - y|$$
for all $x, y \in \mathbb{R}$. Note that $f(x) = x^p$ for $p > 0$ is Lipschitz on $[0, \infty)$ if and only if $p = 1$.

Exercise 4.4.1 Show that if $f : \mathbb{R} \to \mathbb{R}$ is continuously differentiable, then $f$ is globally Lipschitz if and only if
$$\sup_{x \in \mathbb{R}} |f'(x)| < \infty.$$

In any case, Lipschitz functions exhibit the type of linearity necessary to establish existence and uniqueness results for SDEs. Indeed, let $\mu_i : [0,\infty) \times \mathbb{R}^d \to \mathbb{R}$ and $\sigma_{ij} : [0,\infty) \times \mathbb{R}^d \to \mathbb{R}$ for $1 \le i \le d$, $1 \le j \le r$, and suppose that $B = (B_1, \ldots, B_r)$ is an $r$-dimensional Brownian motion.

Theorem 4.4.1 Suppose that $\mu_i$ and $\sigma_{ij}$ satisfy Lipschitz-type conditions: there exists $c < \infty$ such that
$$|\mu_i(t,x) - \mu_i(t,y)| \le c\,\|x - y\|, \quad 1 \le i \le d,$$
$$|\sigma_{ij}(t,x) - \sigma_{ij}(t,y)| \le c\,\|x - y\|, \quad 1 \le i \le d,\ 1 \le j \le r,$$
for $x, y \in \mathbb{R}^d$, $t \ge 0$. Suppose, in addition, that there exists $\tilde c < \infty$ for which
$$|\mu_i(t,x)| \le \tilde c\,(1 + \|x\|), \quad 1 \le i \le d,$$
$$|\sigma_{ij}(t,x)| \le \tilde c\,(1 + \|x\|), \quad 1 \le i \le d,\ 1 \le j \le r,$$
for $t \ge 0$, $x \in \mathbb{R}^d$. Then there exists a unique continuous solution $X$, adapted to $B$, satisfying
$$X(t) = x + \int_0^t \mu(s, X(s))\,ds + \int_0^t \sigma(s, X(s))\,dB(s)$$
and with the additional property that
$$\int_0^t E\|X(s)\|^2\,ds < \infty.$$

Example 4.4.3 In the finance context, a natural model to consider is the solution $X$ to
$$dX(t) = rX(t)\,dt + \sigma X(t)\,dB(t).$$
This is a model in which growth in the asset occurs in proportion to its value. In addition, the random fluctuations are assumed to be in proportion to the state. Since these are also properties exhibited by geometric Brownian motion, we expect the solution $X$ to be related somehow to geometric Brownian motion.
Indeed, the previous theorem ensures that the solution $X$ in this setting exists and is suitably unique. Since the logarithm of geometric Brownian motion is Brownian motion, this suggests considering $Y(t) = \log X(t)$. Then Itô's formula asserts that
$$dY(t) = (r - \sigma^2/2)\,dt + \sigma\,dB(t).$$
In other words,
$$Y(t) = Y(0) + (r - \sigma^2/2)t + \sigma B(t),$$
and hence
$$X(t) = X(0)\,e^{(r - \sigma^2/2)t + \sigma B(t)}.$$
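This explicit solution gives a convenient test of numerical SDE schemes. The sketch below is illustrative (parameters, grid, and seed are arbitrary choices): it runs an Euler-Maruyama recursion for $dX = rX\,dt + \sigma X\,dB$ and compares it against the exact geometric-Brownian-motion solution driven by the same simulated path.

```python
import numpy as np

# Sketch: Euler-Maruyama for dX = r X dt + sigma X dB versus the exact
# solution X(t) = X(0) exp((r - sigma^2/2) t + sigma B(t)), same noise path.
rng = np.random.default_rng(5)
r, sigma, x0, t, n = 0.05, 0.2, 1.0, 1.0, 20_000
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

# Euler-Maruyama recursion X_{k+1} = X_k (1 + r dt + sigma dB_k), as a product
x_em = x0 * np.prod(1.0 + r * dt + sigma * dB)
x_exact = x0 * np.exp((r - 0.5 * sigma**2) * t + sigma * dB.sum())
print(x_em, x_exact)
```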

Example 4.4.4 As a second example of an SDE that one can explicitly solve, consider
$$dX(t) = -\mu X(t)\,dt + \sigma\,dB(t).$$
If
$$X(t) = X(0)e^{-\mu t} + \sigma\int_0^t e^{-\mu(t-s)}\,dB(s),$$
then a simple application of Itô's formula establishes that this is indeed the unique solution. Since $X$ involves integrating a deterministic function against Brownian motion, it is evident that $X$ is Gaussian. This process is called the one-dimensional Ornstein-Uhlenbeck process.
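Since the Ornstein-Uhlenbeck process is Gaussian, its moments give a sharp numerical check. The sketch below is illustrative (parameters, grid, and tolerances are arbitrary choices): it simulates many Euler-Maruyama paths of $dX = -\mu X\,dt + \sigma\,dB$ from $X(0) = 0$ and compares the sample variance at a large $t$ with $\mathrm{Var}\,X(t) = \sigma^2(1 - e^{-2\mu t})/(2\mu)$, which follows from the explicit solution above.

```python
import numpy as np

# Sketch: Euler-Maruyama for the Ornstein-Uhlenbeck SDE dX = -mu X dt + sigma dB.
# With X(0) = 0, Var X(t) = sigma^2 (1 - exp(-2 mu t)) / (2 mu) ~ 0.25 here.
rng = np.random.default_rng(6)
mu, sigma, t = 2.0, 1.0, 5.0
n_steps, n_paths = 2_000, 10_000
dt = t / n_steps

x = np.zeros(n_paths)
for _ in range(n_steps):
    db = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += -mu * x * dt + sigma * db   # Euler-Maruyama step

print(x.mean(), x.var())   # mean near 0, variance near 0.25
```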
