Chapter 6
Models Based on Lévy Processes
For $\Psi_t(\theta) = -\ln E[e^{i\theta X_t}]$, $\theta \in \mathbb{R}$, we have
\[
\Psi_m(\theta) = m\,\Psi_1(\theta) = n\,\Psi_{m/n}(\theta),
\]
or
\[
\Psi_{m/n}(\theta) = \frac{m}{n}\,\Psi_1(\theta).
\]
So for any rational number $t$,
\[
\Psi_t(\theta) = t\,\Psi_1(\theta).
\]
For any irrational number $t$, we find a sequence $\{t_n\} \downarrow t$ as $n \to \infty$; then
\[
\Psi_{t_n}(\theta) \to \Psi_t(\theta)
\]
due to right continuity. So we conclude that any Lévy process has the property
\[
E\big[e^{i\theta X_t}\big] = e^{-t\Psi(\theta)},
\]
where $\Psi(\theta) = \Psi_1(\theta)$ is the characteristic exponent of $X_1$, and the Lévy process is infinitely divisible.
The above discussion implies that
\[
\text{Lévy processes} \;\Longrightarrow\; \text{infinitely divisible processes}.
\]
For infinitely divisible processes, we have
Theorem 6.1.2 (Lévy–Khintchine formula) A probability law of a real-valued random variable is infinitely divisible if and only if its characteristic exponent is given by
\[
\Psi(\theta) = ia\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}}\big(1 - e^{i\theta x} + i\theta x\,\mathbf{1}_{\{|x|<1\}}\big)\,\nu(dx), \quad \theta \in \mathbb{R},
\]
where $a \in \mathbb{R}$, $\sigma \ge 0$, and $\nu$ is a measure defined on $\mathbb{R}\setminus\{0\}$ satisfying
\[
\int_{\mathbb{R}} (1 \wedge x^2)\,\nu(dx) < \infty.
\]
The next result says that an infinitely divisible distribution also generates a Lévy process:
\[
\text{infinitely divisible processes} \;\Longrightarrow\; \text{Lévy processes}.
\]
Theorem 6.1.3 (Lévy–Khintchine formula for Lévy processes) Suppose $a \in \mathbb{R}$, $\sigma \ge 0$, and $\nu$ is a measure on $\mathbb{R}\setminus\{0\}$ s.t. $\int_{\mathbb{R}}(1 \wedge x^2)\,\nu(dx) < \infty$. Define
\[
\Psi(\theta) = ia\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}}\big(1 - e^{i\theta x} + i\theta x\,\mathbf{1}_{\{|x|<1\}}\big)\,\nu(dx).
\]
Then there exists a probability space on which a Lévy process is defined having characteristic exponent $\Psi(\theta)$.
Define, in addition,
\[
\nu(dx) = \text{expected number of jumps over } [0,1] \text{ such that } \Delta X \in [x, x+dx],
\]
or simply
\[
\nu(dx)\,dt = E[\mu(dt, dx)],
\]
and call $\nu(dx)dt$ the compensator of $\mu(dt, dx)$. In the literature, $\nu(dx)$ is more often called the Lévy measure of a Lévy process, which gives the intensity (i.e., probability per unit time) of jumps of size in $[x, x+dx]$.
According to the definition,
\[
\int_{[0,t]\times\mathbb{R}} x\,\mu(ds, dx) = \int_0^t\int_{\mathbb{R}} x\,\mu(ds, dx) = \sum_{0\le s\le t,\,\Delta X_s\neq 0}\Delta X_s,
\]
i.e., the aggregated jump size over $[0,t]$, while
\[
\int_{[0,t]\times\mathbb{R}} x\,\nu(dx)\,ds = E\bigg[\sum_{0\le s\le t,\,\Delta X_s\neq 0}\Delta X_s\bigg],
\]
i.e., the expected aggregated jump size over $[0,t]$. For the compensated jump measure,
\[
\tilde\mu(dt, dx) = \mu(dt, dx) - \nu(dx)\,dt,
\]
there is always
\[
E\left[\int_{[0,t]\times\mathbb{R}} f(x)\,\tilde\mu(dt, dx)\right] = 0
\]
for $f(x)$ of a rather general class of functions.
6.1.3 Three examples as stepping stones
Poisson Processes
Consider a random variable $N$ with probability distribution
\[
P(N = k) = \frac{e^{-\lambda}\lambda^k}{k!} =: \pi_\lambda(k)
\]
for some $\lambda > 0$. The characteristic function is
\[
\sum_{k\ge 0} e^{i\theta k}\,\pi_\lambda(k) = \sum_{k\ge 0} e^{i\theta k}\,\frac{e^{-\lambda}\lambda^k}{k!} = e^{-\lambda(1 - e^{i\theta})}.
\]
For a compound Poisson random variable $\sum_{i=1}^{N}\xi_i$, where $\{\xi_i\}$ are i.i.d. with common law $F$ and independent of $N$, there is
\[
E\Big[e^{i\theta\sum_{i=1}^{N}\xi_i}\Big]
= \sum_{n\ge 0} E\Big[e^{i\theta\sum_{k=1}^{n}\xi_k}\Big]\frac{e^{-\lambda}\lambda^n}{n!}
= \sum_{n\ge 0}\left(\int_{\mathbb{R}} e^{i\theta x}F(dx)\right)^n\frac{e^{-\lambda}\lambda^n}{n!}
= e^{-\lambda\left(1 - \int_{\mathbb{R}} e^{i\theta x}F(dx)\right)}
= e^{-\lambda\int_{\mathbb{R}}(1 - e^{i\theta x})F(dx)}.
\]
When $F(dx) = \delta(x-1)\,dx$, we recover the Poisson process.
A compound Poisson process $\{X_t,\ t\ge 0\}$ is defined by
\[
X_t = \sum_{i=1}^{N_t}\xi_i, \quad t\ge 0.
\]
Note that for $0\le s < t < \infty$,
\[
X_t = X_s + \sum_{i=N_s+1}^{N_t}\xi_i.
\]
The last summation is an independent copy of $X_{t-s}$. The characteristic exponent of $X_t$ is
\[
\Psi_t(\theta) = t\lambda\int_{\mathbb{R}}\big(1 - e^{i\theta x}\big)F(dx),
\]
and $E[X_t] = \lambda t\int_{\mathbb{R}} xF(dx)$.
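The moment identity $E[X_t] = \lambda t\int_{\mathbb{R}} xF(dx)$ can be checked by simulation. Below is a small Monte Carlo sketch with $\mathrm{Exp}(1)$ jumps, so that $\int xF(dx) = 1$ (all names and parameter choices are ours):

```python
import random

def compound_poisson(lam, t, rng):
    """One sample of X_t = sum_{i=1}^{N_t} xi_i, N_t ~ Poisson(lam*t), xi_i ~ Exp(1)."""
    n, s = 0, rng.expovariate(lam)      # Poisson count via exponential inter-arrival times
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return sum(rng.expovariate(1.0) for _ in range(n))

rng = random.Random(0)
lam, t, n_paths = 2.0, 1.0, 200_000
mean = sum(compound_poisson(lam, t, rng) for _ in range(n_paths)) / n_paths
assert abs(mean - lam * t * 1.0) < 0.05   # E[X_t] = lam * t * int x F(dx) = 2
```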
A compound Poisson process with a drift is defined as
\[
X_t = \sum_{i=1}^{N_t}\xi_i - ct, \quad t\ge 0,
\]
with $c\in\mathbb{R}$. The Lévy–Khintchine exponent is
\[
\Psi(\theta) = \lambda\int_{\mathbb{R}}\big(1 - e^{i\theta x}\big)F(dx) + ic\theta.
\]
If we choose $c = \lambda\int_{\mathbb{R}} xF(dx)$, then
\[
E[X_t] = 0,
\]
and $X_t$ is called a centered compound Poisson process, with Lévy–Khintchine exponent
\[
\Psi(\theta) = \lambda\int_{\mathbb{R}}\big(1 - e^{i\theta x} + i\theta x\big)F(dx).
\]
Linear Brownian motion

Take the probability law
\[
\mu_{\sigma,a}(dx) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(x+a)^2/2\sigma^2}\,dx,
\]
which is the normal distribution $N(-a, \sigma^2)$, with characteristic function
\[
\int_{\mathbb{R}} e^{i\theta x}\mu_{\sigma,a}(dx) = e^{-\frac{1}{2}\sigma^2\theta^2 - ia\theta},
\]
so that the characteristic exponent is
\[
\Psi(\theta) = \frac{1}{2}\sigma^2\theta^2 + ia\theta,
\]
which is that of a Brownian motion with a drift:
\[
X_t = \sigma B_t - at.
\]
6.1.4 The Lévy–Itô Decomposition

We will prove Theorem 6.1.3 (given $\Psi(\theta)$, there exists a Lévy process with this characteristic exponent) through the Lévy–Itô decomposition.
We first rewrite the characteristic exponent as
\[
\Psi(\theta) = \left\{ia\theta + \frac{1}{2}\sigma^2\theta^2\right\}
+ \left\{\nu\big(\mathbb{R}\setminus(-1,1)\big)\int_{|x|\ge 1}\big(1 - e^{i\theta x}\big)\frac{\nu(dx)}{\nu\big(\mathbb{R}\setminus(-1,1)\big)}\right\}
+ \left\{\int_{0<|x|<1}\big(1 - e^{i\theta x} + i\theta x\big)\nu(dx)\right\}
=: \Psi^{(1)}(\theta) + \Psi^{(2)}(\theta) + \Psi^{(3)}(\theta).
\]
There are the correspondences
\[
\Psi^{(1)}(\theta) \;\longleftrightarrow\; X^{(1)}_t = \sigma B_t - at,
\]
\[
\Psi^{(2)}(\theta) \;\longleftrightarrow\; X^{(2)}_t = \sum_{i=1}^{N_t}\xi_i, \quad t\ge 0,
\]
while for $\Psi^{(3)}(\theta)$, we have
\[
\Psi^{(3)}(\theta) = \int_{0<|x|<1}\big(1 - e^{i\theta x} + i\theta x\big)\nu(dx)
= \sum_{n\ge 0}\lambda_n\left\{\int_{2^{-(n+1)}\le|x|<2^{-n}}\big(1 - e^{i\theta x} + i\theta x\big)F_n(dx)\right\}
=: \sum_{n\ge 0}\Psi^{(3)}_n(\theta),
\]
where
\[
\lambda_n = \nu\big(2^{-(n+1)}\le|x|<2^{-n}\big), \qquad
F_n(dx) = \frac{1}{\lambda_n}\,\nu(dx)\,\mathbf{1}_{\{2^{-(n+1)}\le|x|<2^{-n}\}}.
\]
In case $\lambda_n = 0$, we let $\Psi^{(3)}_n = 0$. There is the correspondence
\[
\Psi^{(3)}_n(\theta) \;\longleftrightarrow\; M^{(n)}_t = \sum_{i=1}^{N^{(n)}_t}\xi^{(n)}_i - \lambda_n t\int_{\mathbb{R}} xF_n(dx).
\]
The question is whether
\[
X^{(3,k)}_t = \sum_{n=0}^{k} M^{(n)}_t
\]
1. has a limit with characteristic exponent $\Psi^{(3)}$ as $k\to\infty$, and
2. the limit is right-continuous with left limits.
The answers are positive and attributed to Lévy and Khintchine. The proof of the following theorem was given by Lévy and Itô.
Theorem 6.1.4 (Lévy–Itô decomposition) Given any $a\in\mathbb{R}$, $\sigma\ge 0$ and a measure $\nu$ on $\mathbb{R}\setminus\{0\}$ satisfying
\[
\int_{\mathbb{R}}(1\wedge x^2)\,\nu(dx) < \infty,
\]
then $(a, \sigma, \nu)$ is the Lévy triplet of a Lévy process given by
\[
X_t = X^{(1)}_t + X^{(2)}_t + X^{(3)}_t,
\]
where $X^{(i)}$, $i = 1, 2, 3$, are independent processes such that
\[
X^{(1)}_t = \sigma B_t - at, \qquad X^{(2)}_t = \sum_{i=1}^{N_t}\xi_i,
\]
and $X^{(3)}_t$ is a square-integrable martingale with characteristic exponent $\Psi^{(3)}(\theta)$.
6.1.5 The Proof

Definition 6.1.5 Fix $T > 0$. Define $\mathcal{M}^2_T = \mathcal{M}^2_T(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ to be the space of real-valued, zero-mean, right-continuous, square-integrable $P$-martingales with respect to $\mathcal{F}_t$ over the finite time period $[0, T]$.

Note that any zero-mean square-integrable martingale with respect to $\{\mathcal{F}_t : t\ge 0\}$ has a right-continuous version belonging to $\mathcal{M}^2_T$.

We will show that $\mathcal{M}^2_T$ is a Hilbert space with respect to the inner product
\[
\langle M, N\rangle = E[M_T N_T]
\]
for any $M, N \in \mathcal{M}^2_T$. It is obvious that, for any $M, N, Q \in \mathcal{M}^2_T$,
1. $\langle aM + bN, Q\rangle = a\langle M, Q\rangle + b\langle N, Q\rangle$ for any $a, b\in\mathbb{R}$.
2. $\langle M, N\rangle = \langle N, M\rangle$.
3. $\langle M, M\rangle \ge 0$.
4. When $\langle M, M\rangle = 0$, by Doob's maximal inequality,
\[
E\Big[\sup_{0\le s\le T} M_s^2\Big] \le 4E[M_T^2] = 4\langle M, M\rangle = 0,
\]
so $\sup_{0\le t\le T}|M_t| = 0$ almost surely.
It remains to verify completeness. Given a Cauchy sequence in $\mathcal{M}^2_T$, the terminal values converge in $L^2(\Omega, \mathcal{F}_T, P)$ to some $M_T$; setting $M_t = E[M_T \mid \mathcal{F}_t]$ and using the tower property,
\[
E\big[E[M_T^2 \mid \mathcal{F}_t]\big] = E[M_T^2].
\]
Hence $M \in \mathcal{M}^2_T$ and $\mathcal{M}^2_T$ is a Hilbert space.
Suppose that $\{\xi_i : i\ge 1\}$ is a sequence of i.i.d. random variables with common law $F$ (with no mass at the origin) and that $N = \{N_t : t\ge 0\}$ is a Poisson process with rate $\lambda > 0$. We have

Lemma 6.1.6 Suppose that $\int_{\mathbb{R}}|x|F(dx) < \infty$.
1. The process $M = \{M_t : t\ge 0\}$ defined by
\[
M_t = \sum_{i=1}^{N_t}\xi_i - \lambda t\int_{\mathbb{R}} xF(dx)
\]
is a zero-mean martingale with respect to its natural filtration.
2. If, moreover, $\int_{\mathbb{R}} x^2F(dx) < \infty$, then
\[
E[M_t^2] = \lambda t\int_{\mathbb{R}} x^2F(dx),
\]
so $M$ is a square-integrable martingale.
Proof: The proof consists of two steps.
1. By definition, $M$ has stationary and independent increments, so it is a Lévy process. Define $\mathcal{F}_t = \sigma(M_s : s\le t)$; then for $t\ge s\ge 0$,
\[
E[M_t \mid \mathcal{F}_s] = M_s + E[M_t - M_s \mid \mathcal{F}_s] = M_s + E[M_{t-s}].
\]
What is left to show is that
\[
E[M_u] = 0 \quad \text{for all } u\ge 0.
\]
In fact, for all $u\ge 0$,
\[
E[M_u] = E\bigg[\sum_{i=1}^{N_u}\xi_i\bigg] - \lambda u\int_{\mathbb{R}} xF(dx)
= \lambda u E[\xi_1] - \lambda u\int_{\mathbb{R}} xF(dx) = 0.
\]
In addition,
\[
E[|M_u|] \le E\bigg[\sum_{i=1}^{N_u}|\xi_i|\bigg] + \lambda u\bigg|\int_{\mathbb{R}} xF(dx)\bigg|
\le \lambda u E[|\xi_1|] + \lambda u\int_{\mathbb{R}}|x|F(dx)
= 2\lambda u\int_{\mathbb{R}}|x|F(dx) < \infty.
\]
2. For the second moment, since $E\big[\sum_{i=1}^{N_t}\xi_i\big] = \lambda t\int_{\mathbb{R}} xF(dx)$,
\[
E[M_t^2] = E\bigg[\Big(\sum_{i=1}^{N_t}\xi_i\Big)^2\bigg] - \lambda^2t^2\left(\int_{\mathbb{R}} xF(dx)\right)^2
\]
\[
= E\bigg[\sum_{i=1}^{N_t}\xi_i^2\bigg] + E\bigg[\sum_{i=1}^{N_t}\sum_{j=1}^{N_t}\mathbf{1}_{\{i\neq j\}}\xi_i\xi_j\bigg] - \lambda^2t^2\left(\int_{\mathbb{R}} xF(dx)\right)^2
\]
\[
= \lambda t\int_{\mathbb{R}} x^2F(dx) + E\big[N_t^2 - N_t\big]\left(\int_{\mathbb{R}} xF(dx)\right)^2 - \lambda^2t^2\left(\int_{\mathbb{R}} xF(dx)\right)^2
\]
\[
= \lambda t\int_{\mathbb{R}} x^2F(dx) + \lambda^2t^2\left(\int_{\mathbb{R}} xF(dx)\right)^2 - \lambda^2t^2\left(\int_{\mathbb{R}} xF(dx)\right)^2
= \lambda t\int_{\mathbb{R}} x^2F(dx).
\]
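Lemma 6.1.6 can be illustrated by simulation. The sketch below uses uniform jumps on $(0,1)$ and checks both the zero mean and the second-moment formula (a toy example with our own parameter choices, not part of the notes):

```python
import random

lam, t = 2.0, 1.0
m1, m2 = 0.5, 1.0 / 3.0                 # E[xi], E[xi^2] for xi ~ Uniform(0, 1)
rng = random.Random(4)

def sample_M():
    """One sample of M_t = sum_{i=1}^{N_t} xi_i - lam * t * E[xi]."""
    n, s = 0, rng.expovariate(lam)      # N_t via exponential inter-arrival times
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return sum(rng.random() for _ in range(n)) - lam * t * m1

n_paths = 100_000
vals = [sample_M() for _ in range(n_paths)]
mean = sum(vals) / n_paths
second = sum(v * v for v in vals) / n_paths
assert abs(mean) < 0.02                    # zero mean
assert abs(second - lam * t * m2) < 0.05   # E[M_t^2] = lam * t * int x^2 F(dx)
```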
Recall that
\[
\lambda_n = \nu\big(2^{-(n+1)}\le|x|<2^{-n}\big), \qquad
F_n(dx) = \frac{1}{\lambda_n}\,\nu(dx)\,\mathbf{1}_{\{2^{-(n+1)}\le|x|<2^{-n}\}}.
\]
We now define
\[
N^{(n)} = \{N^{(n)}_t : t\ge 0\},\ \text{a Poisson process with rate } \lambda_n,
\]
\[
\{\xi^{(n)}_i : i = 1, 2, \ldots\},\ \text{i.i.d. random variables with law } F_n,
\]
and $M^{(n)} = \{M^{(n)}_t : t\ge 0\}$ such that
\[
M^{(n)}_t = \sum_{i=1}^{N^{(n)}_t}\xi^{(n)}_i - \lambda_n t\int_{\mathbb{R}} xF_n(dx),
\qquad
\mathcal{F}^{(n)}_t = \sigma\big(M^{(n)}_s : s\le t\big).
\]
We finally put $\{M^{(n)} : n\ge 1\}$ on the same probability space with respect to the common filtration
\[
\mathcal{F}_t = \sigma\Big(\bigcup_{n\ge 1}\mathcal{F}^{(n)}_t\Big).
\]
Theorem 6.1.7 If
\[
\sum_{n\ge 1}\lambda_n\int_{\mathbb{R}} x^2F_n(dx) < \infty,
\]
then there is a Lévy process $X = \{X_t, t\ge 0\}$, which is also a square-integrable martingale, with characteristic exponent
\[
\Psi(\theta) = \int_{\mathbb{R}}\big(1 - e^{i\theta x} + i\theta x\big)\sum_{n\ge 1}\lambda_nF_n(dx)
\]
for all $\theta\in\mathbb{R}$, such that for each fixed $T > 0$,
\[
\lim_{k\to\infty} E\bigg[\sup_{t\le T}\Big(X_t - \sum_{n=1}^{k}M^{(n)}_t\Big)^2\bigg] = 0.
\]
Proof: We first show that $\sum_{n=1}^{k}M^{(n)}$ is a square-integrable martingale. In fact, due to independence and zero means,
\[
E\bigg[\Big(\sum_{n=1}^{k}M^{(n)}_t\Big)^2\bigg] = \sum_{n=1}^{k}E\Big[\big(M^{(n)}_t\big)^2\Big]
= t\sum_{n=1}^{k}\lambda_n\int_{\mathbb{R}} x^2F_n(dx) < \infty.
\]
Fix $T > 0$. We now claim that $X^{(k)} = \{X^{(k)}_t, 0\le t\le T\}$ with
\[
X^{(k)}_t = \sum_{n=1}^{k}M^{(n)}_t
\]
is a Cauchy sequence with respect to $\|\cdot\|$, where $\|M\| = \sqrt{E[M_T^2]}$. Note that for $k\ge l$,
\[
\big\|X^{(k)} - X^{(l)}\big\|^2 = E\Big[\big(X^{(k)}_T - X^{(l)}_T\big)^2\Big]
= T\sum_{n=l+1}^{k}\lambda_n\int_{\mathbb{R}} x^2F_n(dx) \to 0 \quad \text{as } k, l\to\infty.
\]
Then, there is $X_T$ in $L^2(\Omega, \mathcal{F}_T, P)$ such that
\[
\big\|X^{(k)}_T - X_T\big\| \to 0 \quad \text{as } k\to\infty,
\]
due to the completeness of $L^2(\Omega, \mathcal{F}_T, P)$. Define
\[
X_t = E[X_T \mid \mathcal{F}_t]
\]
and $X = \{X_t, 0\le t\le T\}$; then there is also
\[
\big\|X^{(k)} - X\big\| \to 0 \quad \text{as } k\to\infty.
\]
Moreover, for $0\le s\le t\le T$,
\[
E\big[e^{i\theta(X_t - X_s)}\big]
= \lim_{k\to\infty} E\big[e^{i\theta(X^{(k)}_t - X^{(k)}_s)}\big]
= \lim_{k\to\infty} E\big[e^{i\theta X^{(k)}_{t-s}}\big]
= E\big[e^{i\theta X_{t-s}}\big],
\]
showing that $X$ has stationary and independent increments. Due to the condition of the theorem, we readily have
\[
E\big[e^{i\theta X_t}\big] = \lim_{k\to\infty} E\big[e^{i\theta X^{(k)}_t}\big]
= \lim_{k\to\infty}\prod_{n=1}^{k}E\big[e^{i\theta M^{(n)}_t}\big]
= \exp\bigg(-t\int_{\mathbb{R}}\big(1 - e^{i\theta x} + i\theta x\big)\sum_{n\ge 1}\lambda_nF_n(dx)\bigg)
= e^{-t\Psi^{(3)}(\theta)}.
\]
There are two more minor issues. The first is to show the right-continuity of $X$. This comes from the fact that the space of right-continuous functions over $[0, T]$ is closed under the metric $d(f, g) = \sup_{0\le t\le T}|f(t) - g(t)|$. The second is the dependence of $X$ on $T$, which should be dismissed.
Suppose we index $X$ by $T$, say $X^T$. Using
\[
\sup_n a_n^2 = \Big(\sup_n|a_n|\Big)^2, \qquad
\sup_n|a_n + b_n| \le \sup_n|a_n| + \sup_n|b_n|,
\]
and Minkowski's inequality, we have for $T_1\le T_2$,
\[
E\bigg[\sup_{t\le T_1}\big(X^{T_1}_t - X^{T_2}_t\big)^2\bigg]^{1/2}
\le E\bigg[\Big(\sup_{t\le T_1}\big|X^{T_1}_t - X^{(k)}_t\big| + \sup_{t\le T_1}\big|X^{T_2}_t - X^{(k)}_t\big|\Big)^2\bigg]^{1/2}
\]
\[
\le E\bigg[\sup_{t\le T_1}\big(X^{T_1}_t - X^{(k)}_t\big)^2\bigg]^{1/2}
+ E\bigg[\sup_{t\le T_1}\big(X^{T_2}_t - X^{(k)}_t\big)^2\bigg]^{1/2}
\to 0 \quad \text{as } k\to\infty.
\]
So $X^{T_1}_t = X^{T_2}_t$ for any $t\in[0, T_1]$; thus $X_t$ does not depend on $T$.
Proof of the Lévy–Itô decomposition: The limit $X$ established in Theorem 6.1.7 is just $X^{(3)}$, which has a countable number of discontinuities.
6.1.6 More examples of Lévy processes

Gamma processes

For $\alpha, \beta > 0$, define the probability measure
\[
\mu_{\alpha,\beta}(dx) = \frac{\beta^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x}\,dx, \quad x\in(0,\infty),
\]
as the gamma-$(\alpha, \beta)$ distribution, where
\[
\Gamma(\alpha) = \int_0^\infty e^{-u}u^{\alpha-1}\,du.
\]
When $\alpha = 1$ it reduces to the exponential distribution. We have the MGF for the gamma-distributed random variable as follows:
\[
E\big[e^{uX}\big] = \int_0^\infty e^{ux}\mu_{\alpha,\beta}(dx)
= \frac{\beta^\alpha}{\Gamma(\alpha)}\int_0^\infty x^{\alpha-1}e^{-(\beta-u)x}\,dx
= \frac{\beta^\alpha}{(\beta-u)^\alpha}\cdot\frac{(\beta-u)^\alpha}{\Gamma(\alpha)}\int_0^\infty x^{\alpha-1}e^{-(\beta-u)x}\,dx
= \frac{1}{(1 - u/\beta)^\alpha}, \quad u < \beta.
\]
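The MGF computation can be confirmed by numerical quadrature. An illustrative sketch (the integration scheme and parameters are ours):

```python
import math

def gamma_mgf_numeric(u, alpha, beta, upper=60.0, n=200_000):
    """Trapezoidal quadrature of int_0^inf e^{ux} mu_{alpha,beta}(dx); endpoints vanish."""
    h = upper / n
    total = 0.0
    for i in range(1, n):
        x = i * h
        total += math.exp(u * x) * beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x)
    return total * h / math.gamma(alpha)

alpha, beta, u = 2.5, 1.5, 0.5
closed = (1 - u / beta) ** (-alpha)
assert abs(gamma_mgf_numeric(u, alpha, beta) - closed) < 1e-4
```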
It is, however, a bit challenging to find the Lévy triplet for the gamma process. We begin with
\[
\frac{d}{dt}f(tx) = f'(tx)\,x;
\]
therefore,
\[
\int_0^\infty \frac{1}{x}\big(f(bx) - f(ax)\big)\,dx
= \int_0^\infty\frac{1}{x}\int_a^b f'(tx)\,x\,dt\,dx
= \int_a^b\int_0^\infty f'(tx)\,dx\,dt
= \int_a^b\frac{f(\infty) - f(0)}{t}\,dt
= \big[f(\infty) - f(0)\big]\ln\frac{b}{a},
\]
which leads to
\[
\frac{1}{(1 - u/\beta)^\alpha} = e^{-\alpha\int_0^\infty(1 - e^{ux})x^{-1}e^{-\beta x}\,dx}.
\]
Proof: There is
\[
\int_0^\infty(1 - e^{ux})x^{-1}e^{-\beta x}\,dx
= \int_0^\infty\frac{e^{-\beta x} - e^{-(\beta-u)x}}{x}\,dx
= (-1)\ln\frac{\beta}{\beta - u}
= -\ln\frac{1}{1 - u/\beta},
\]
so that
\[
e^{-\alpha\int_0^\infty(1 - e^{ux})x^{-1}e^{-\beta x}\,dx} = \frac{1}{(1 - u/\beta)^\alpha}.
\]
Comparing with the Lévy–Khintchine formula, we identify
\[
\nu(dx) = \alpha x^{-1}e^{-\beta x}\,dx, \qquad
a = -\int_0^1 x\,\nu(dx).
\]
Let $\sigma = 0$. The Lévy triplet of the gamma distribution is $(a, 0, \nu)$.
Properties of a gamma process $\{X_t, t\ge 0\}$:
1. For $0\le s < t < \infty$, $X_t > X_s$ almost surely. This is because
\[
X_t = X_s + \widetilde{X}_{t-s},
\]
where $\widetilde{X}_{t-s}$ is an independent copy of $X_{t-s}$, which is strictly positive almost surely.
2. $\int_0^1 \nu(dx) = +\infty$: the gamma process has infinite activity.
3. The mean and variance of the gamma process are
\[
E[X_t] = \frac{\alpha t}{\beta}, \qquad \mathrm{Var}(X_t) = \frac{\alpha t}{\beta^2}.
\]
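The mean and variance formulas can be checked against sampled gamma increments; recall that $X_t$ has the gamma-$(\alpha t, \beta)$ law. A quick Monte Carlo sketch, with our own parameter choices:

```python
import random

alpha, beta, t, n = 2.0, 1.0, 1.0, 100_000
rng = random.Random(1)
# X_t ~ gamma-(alpha*t, beta); random.gammavariate takes (shape, scale), scale = 1/beta
samples = [rng.gammavariate(alpha * t, 1.0 / beta) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
assert abs(mean - alpha * t / beta) < 0.05       # E[X_t] = alpha t / beta
assert abs(var - alpha * t / beta ** 2) < 0.1    # Var(X_t) = alpha t / beta^2
```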
Definition 6.1.12 Lévy processes whose paths are almost surely non-decreasing are called subordinators.
Inverse Gaussian Processes

Consider a Brownian motion with a drift, $\{B_u + bu, u\ge 0\}$. Define the first passage time for a fixed level $t$:
\[
\tau_t = \inf\{u > 0 : B_u + bu > t\}.
\]
The process $\{\tau_t, t\ge 0\}$ has the following properties.
1. $B_{\tau_t} + b\tau_t = t$ almost surely.
2. $\{\tau_t, t\ge 0\}$ has stationary independent increments. From the strong Markov property, $\{B_{\tau_s+u} + b(\tau_s + u) - s,\ u\ge 0\}$ is equal in law to $\{B_u + bu,\ u\ge 0\}$. Hence, for $0\le s < t$,
\[
\tau_t = \tau_s + \widetilde\tau_{t-s},
\]
where $\widetilde\tau_{t-s}$ is an independent copy of $\tau_{t-s}$.
Recall Doob's optional sampling theorem: for a supermartingale and a stopping time $\tau$, $E[X_\tau] \le E[X_0]$; if $\{(X_t, \mathcal{F}_t)\}$ is a martingale, then $E[X_\tau] = E[X_0]$.
Apparently, for all $\lambda > 0$,
\[
e^{\lambda B_t - \frac{1}{2}\lambda^2 t}, \quad t\ge 0,
\]
is a martingale. Using Doob's optional sampling theorem we obtain
\[
E\Big[e^{\lambda B_{\tau_t} - \frac{1}{2}\lambda^2\tau_t}\Big] = 1,
\]
or
\[
E\Big[e^{\lambda(B_{\tau_t} + b\tau_t) - (\frac{1}{2}\lambda^2 + \lambda b)\tau_t}\Big]
= E\Big[e^{\lambda t - (\frac{1}{2}\lambda^2 + \lambda b)\tau_t}\Big] = 1.
\]
It follows that
\[
E\Big[e^{-(\frac{1}{2}\lambda^2 + \lambda b)\tau_t}\Big] = e^{-\lambda t}.
\]
Using analytical extension, we know the above equality holds for $\lambda\in\mathbb{C}$. Now let
\[
-\Big(\frac{1}{2}\lambda^2 + \lambda b\Big) = i\theta, \quad \theta\in\mathbb{R};
\]
we then have
\[
\lambda = -b + \sqrt{b^2 - 2i\theta} \quad (\text{the root } -b - \sqrt{b^2 - 2i\theta}\ \text{is rejected}),
\]
and it follows that
\[
E\big[e^{i\theta\tau_t}\big] = e^{-t(\sqrt{b^2 - 2i\theta} - b)} = e^{-t\Psi(\theta)},
\]
with
\[
\Psi(\theta) = \sqrt{b^2 - 2i\theta} - b.
\]
To derive the Lévy triplet for the inverse Gaussian process, we also need

Proposition 6.1.14 Let $0 < \alpha < 1$. For any $u\in\mathbb{C}$ with $\operatorname{Re} u \le 0$, there is
\[
\int_0^\infty\big(e^{ux} - 1\big)x^{-\alpha-1}\,dx = \Gamma(-\alpha)\,(-u)^\alpha.
\]
Note that
\[
\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt, \qquad \Gamma\Big(\frac{1}{2}\Big) = \sqrt{\pi}.
\]
We can now verify that, for the Lévy measure
\[
\nu(dx) = (2\pi x^3)^{-\frac{1}{2}}\,e^{-xb^2/2}\,dx, \quad x > 0,
\]
there is
\[
\int_0^\infty\big(1 - e^{i\theta x}\big)\,\nu(dx) = \Psi(\theta).
\]
The other two elements of the triplet are
\[
\sigma = 0, \qquad
a = -\int_0^1 x\,\nu(dx) = -\frac{2}{b}\int_0^b\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy.
\]
Finally, take
\[
\mu_t(dx) = \frac{t}{\sqrt{2\pi x^3}}\,e^{tb}\,e^{-\frac{1}{2}(t^2/x + b^2x)}\,dx, \quad x > 0.
\]
Completing the square in the exponent,
\[
\int_0^\infty e^{-\lambda x}\mu_t(dx)
= e^{t(b - \sqrt{b^2+2\lambda})}\int_0^\infty\frac{t}{\sqrt{2\pi x^3}}\,
e^{-\frac{1}{2}\big(t/\sqrt{x} - \sqrt{b^2+2\lambda}\,\sqrt{x}\big)^2}\,dx,
\]
and, after the substitution $x\mapsto t^2/\big((b^2+2\lambda)x\big)$, also
\[
\int_0^\infty e^{-\lambda x}\mu_t(dx)
= e^{t(b - \sqrt{b^2+2\lambda})}\int_0^\infty\sqrt{\frac{2\lambda + b^2}{2\pi x}}\,
e^{-\frac{1}{2}\big(t/\sqrt{x} - \sqrt{b^2+2\lambda}\,\sqrt{x}\big)^2}\,dx.
\]
Adding up the last two equations yields
\[
\int_0^\infty e^{-\lambda x}\mu_t(dx) = e^{-t(\sqrt{b^2+2\lambda} - b)},
\]
confirming that $\mu_t(dx)$ is the law of $\tau_t$.
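The Laplace-transform identity can be verified by quadrature against the stated density. A sketch with our own discretization choices:

```python
import math

b, t, lam = 0.8, 1.0, 0.5

def ig_density(x):
    """mu_t density: t / sqrt(2 pi x^3) * exp(b t - (t^2/x + b^2 x)/2)."""
    return t / math.sqrt(2 * math.pi * x ** 3) * math.exp(b * t - 0.5 * (t ** 2 / x + b ** 2 * x))

lo, hi, n = 1e-6, 60.0, 200_000
h = (hi - lo) / n
total = sum(math.exp(-lam * (lo + i * h)) * ig_density(lo + i * h) for i in range(n + 1))
total -= 0.5 * (math.exp(-lam * lo) * ig_density(lo) + math.exp(-lam * hi) * ig_density(hi))
closed = math.exp(-t * (math.sqrt(b ** 2 + 2 * lam) - b))
assert abs(total * h - closed) < 1e-4
```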
6.1.7 Tempered Stable Processes

Consider the tempered stable subordinator $\{X_t(\alpha, \lambda, c)\}_{t\ge 0}$, a three-parameter process with Lévy measure
\[
\rho(x) = \frac{c\,e^{-\lambda x}}{x^{\alpha+1}}\,\mathbf{1}_{\{x>0\}},
\]
where
- $c > 0$ alters the intensity of jumps of all sizes simultaneously, i.e., it changes the time scale of the process;
- $\lambda > 0$ fixes the decay rate of big jumps;
- $\alpha\in[0, 1)$ determines the relative importance of small jumps.
The Laplace exponent of the tempered stable subordinator is
\[
l(u) =
\begin{cases}
c\,\Gamma(-\alpha)\big((\lambda - u)^\alpha - \lambda^\alpha\big), & 0 < \alpha < 1,\\[4pt]
-c\ln(1 - u/\lambda), & \alpha = 0.
\end{cases}
\]
We are particularly interested in the tempered stable subordinator that has mean value $t$:
\[
E[X_t] = t, \qquad \mathrm{Var}[X_t] = \kappa t,
\]
which corresponds to
\[
\lambda = \frac{1-\alpha}{\kappa}, \qquad
c = \frac{1}{\Gamma(1-\alpha)}\Big(\frac{1-\alpha}{\kappa}\Big)^{1-\alpha}.
\qquad (6.1.1)
\]
The tempered stable process then becomes a two-parameter process with Lévy measure
\[
\rho(x) = \frac{1}{\Gamma(1-\alpha)}\Big(\frac{1-\alpha}{\kappa}\Big)^{1-\alpha}\frac{e^{-(1-\alpha)x/\kappa}}{x^{1+\alpha}}.
\]
The two-parameter tempered stable process has two interesting special cases.
1. When $\alpha = 0$, it reduces to a gamma process with Lévy measure
\[
\rho(x) = \frac{1}{\kappa}\,\frac{e^{-x/\kappa}}{x}.
\]
2. When $\alpha = 1/2$, it reduces to an inverse Gaussian process with Lévy measure
\[
\rho(x) = \frac{1}{\sqrt{2\pi\kappa}}\,\frac{e^{-x/2\kappa}}{x^{3/2}}.
\]
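The reduction of the two-parameter Lévy measure to the gamma and inverse Gaussian cases can be verified pointwise. An illustrative sketch (function names are ours):

```python
import math

def ts_levy_density(x, alpha, kappa):
    """Two-parameter tempered stable Lévy density (mean-t parametrization)."""
    lam = (1 - alpha) / kappa
    c = lam ** (1 - alpha) / math.gamma(1 - alpha)
    return c * math.exp(-lam * x) / x ** (1 + alpha)

kappa, x = 0.8, 0.37
# alpha = 0: gamma case, rho(x) = e^{-x/kappa} / (kappa x)
assert abs(ts_levy_density(x, 0.0, kappa) - math.exp(-x / kappa) / (kappa * x)) < 1e-12
# alpha = 1/2: inverse Gaussian case, rho(x) = e^{-x/(2 kappa)} / sqrt(2 pi kappa x^3)
ig = math.exp(-x / (2 * kappa)) / math.sqrt(2 * math.pi * kappa * x ** 3)
assert abs(ts_levy_density(x, 0.5, kappa) - ig) < 1e-12
```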
The tempered stable subordinators have the following scaling property: for $k > 0$,
\[
k\,X_t(\alpha, \lambda, c) \stackrel{d}{=} X_t\big(\alpha, \lambda/k, k^\alpha c\big).
\]
6.1.8 CGMY Process

The Lévy measure for the CGMY process is given by
\[
\nu_{CGMY}(dx) =
\begin{cases}
C\exp(Gx)\,(-x)^{-1-Y}\,dx, & x < 0,\\[4pt]
C\exp(-Mx)\,x^{-1-Y}\,dx, & x > 0,
\end{cases}
\]
with restrictions
\[
C, G, M > 0 \quad \text{and} \quad Y < 2.
\]
The first two parameters of the Lévy triplet of the CGMY distribution are
\[
b = C\left(\int_0^1\exp(-Mx)\,x^{-Y}\,dx - \int_{-1}^0\exp(Gx)\,|x|^{-Y}\,dx\right)
\]
and $\sigma = 0$, so the CGMY process is a pure jump process.
When $Y = 0$, the CGMY process reduces to a VG process, such that
\[
CGMY(C, G, M, 0) = \Gamma_p(C, M) - \Gamma_n(C, G),
\]
a difference of two gamma processes.
When $Y < 0$, the paths have finitely many jumps in any finite interval; when $Y \ge 0$, the paths have infinitely many jumps in any finite time interval. Moreover, when $Y\in[1, 2)$, the process is of infinite variation. The roles of the parameters can be summarized below.
The characteristic function of the CGMY distribution is
\[
\Phi_{CGMY}(u; C, G, M, Y) = \exp\Big(C\,\Gamma(-Y)\big[(M - iu)^Y - M^Y + (G + iu)^Y - G^Y\big]\Big).
\]
Proof: According to the Lévy–Khintchine theorem, the characteristic function satisfies
\[
\Phi_{CGMY}(u) = \exp\left(\int_{\mathbb{R}}\big(e^{iux} - 1\big)\,\nu_{CGMY}(dx)\right)
= \exp\left(C\int_{-\infty}^0\big(e^{iux} - 1\big)\frac{e^{Gx}}{|x|^{1+Y}}\,dx\right)
\exp\left(C\int_0^\infty\big(e^{iux} - 1\big)\frac{e^{-Mx}}{x^{1+Y}}\,dx\right).
\]
The two integrals above can be evaluated as follows:
\[
\int_0^\infty\big(e^{iux} - 1\big)\frac{e^{-Mx}}{x^{1+Y}}\,dx
= \int_0^\infty\frac{e^{-(M - iu)x} - e^{-Mx}}{x^{1+Y}}\,dx
= (M - iu)^Y\int_0^\infty\frac{e^{-w}}{w^{1+Y}}\,dw - M^Y\int_0^\infty\frac{e^{-w}}{w^{1+Y}}\,dw
= \Gamma(-Y)\big[(M - iu)^Y - M^Y\big],
\]
and the other integral, over $(-\infty, 0)$, can be obtained similarly.
The CGMY Lévy process is defined as the process that starts at zero, has independent and stationarily distributed increments, and in which the increment over a time interval of length $s$ follows a $CGMY(sC, G, M, Y)$ distribution. Denoting such a process by $X^{(CGMY)}_t$, $t\ge 0$, we have
\[
E\Big[\exp\big(uX^{(CGMY)}_t\big)\Big]
= \exp\Big(tC\,\Gamma(-Y)\big[(M - u)^Y - M^Y + (G + u)^Y - G^Y\big]\Big).
\]
The following characteristics of the CGMY process can be obtained by direct evaluation:
\[
\begin{aligned}
\text{mean} &\quad C\big(M^{Y-1} - G^{Y-1}\big)\Gamma(1 - Y),\\
\text{variance} &\quad C\big(M^{Y-2} + G^{Y-2}\big)\Gamma(2 - Y),\\
\text{skewness} &\quad \frac{C\big(M^{Y-3} - G^{Y-3}\big)\Gamma(3 - Y)}{\big(C(M^{Y-2} + G^{Y-2})\Gamma(2 - Y)\big)^{3/2}},\\
\text{kurtosis} &\quad 3 + \frac{C\big(M^{Y-4} + G^{Y-4}\big)\Gamma(4 - Y)}{\big(C(M^{Y-2} + G^{Y-2})\Gamma(2 - Y)\big)^{2}}.
\end{aligned}
\]
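Since the cumulant function of $X^{(CGMY)}_1$ is $k(u) = C\Gamma(-Y)\big[(M-u)^Y - M^Y + (G+u)^Y - G^Y\big]$, the mean and variance above can be cross-checked by numerically differentiating $k$ at $u = 0$. A sketch with our own step size and parameters:

```python
import math

C, G, M, Y = 1.0, 5.0, 7.0, 0.5

def cumulant(u):
    """k(u) = ln E[exp(u X_1)] for a CGMY(C, G, M, Y) random variable."""
    return C * math.gamma(-Y) * ((M - u) ** Y - M ** Y + (G + u) ** Y - G ** Y)

h = 1e-4
mean_num = (cumulant(h) - cumulant(-h)) / (2 * h)                     # k'(0)
var_num = (cumulant(h) - 2 * cumulant(0.0) + cumulant(-h)) / h ** 2   # k''(0)
mean_cf = C * (M ** (Y - 1) - G ** (Y - 1)) * math.gamma(1 - Y)
var_cf = C * (M ** (Y - 2) + G ** (Y - 2)) * math.gamma(2 - Y)
assert abs(mean_num - mean_cf) < 1e-6
assert abs(var_num - var_cf) < 1e-4
```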
The motivation is to obtain a process more flexible than VG, one that allows finite activity, infinite activity, and infinite variation.
6.2 Stochastic Clocks and Subordination

A subordinator is an increasing stochastic process $C(t)$ such that $C(t) > 0$ for $t > 0$. So a subordinator maps a time to another time, and effectively serves as a clock changing time stochastically. If the clock runs faster than usual, $C(t) > t$, a larger variance can be generated in a Gaussian model. Let the moment generating function of $C(T)$ be
\[
E\big[e^{uC(T)}\big] = e^{\psi_C(u)}.
\]
Generating a subordinated process in a stochastic-clock model may be equivalent to a pure-jump Lévy process with infinite activity.

In the Black–Scholes world, the MGF of the log return $X_T = bT + \sigma W_T$ is given by
\[
f(u) = E\big[e^{uX_T}\big] = e^{(bu + \frac{1}{2}\sigma^2u^2)T}.
\]
Let $\psi_B(u) = bu + \frac{1}{2}\sigma^2u^2$, the exponent of $X_1$. Now we randomize time by replacing $t$ with $C(t)$. Denote $\widetilde T = C(T)$. The MGF of $X(C(T))$ can be obtained through conditioning:
\[
f(u) = E\Big[e^{ub\widetilde T + \frac{1}{2}u^2\sigma^2\widetilde T}\Big]
= E\Big[e^{\psi_B(u)\widetilde T}\Big]
= e^{\psi_C(\psi_B(u))},
\]
which is a nested moment generating function. If both $\psi_B(u)$ and $\psi_C(u)$ have analytical expressions, so does the MGF $f(u)$. The procedure of subordinating a Brownian motion $B(t)$ to $C(t)$ and deriving the corresponding MGF is called a Bochner procedure. When we replace $u \to iu$, we obtain the characteristic function.
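The nested-MGF (Bochner) identity can be illustrated with a gamma clock, which is exactly the VG construction of Section 6.3.1: sample $C(T)$, set $X = b\,C(T) + \sigma\sqrt{C(T)}\,Z$, and compare the sample MGF with $(1 - \psi_B(u)\kappa)^{-T/\kappa}$. A Monte Carlo sketch with our own parameters:

```python
import math, random

b, sigma, kappa, T, u = 0.1, 0.3, 0.5, 1.0, 0.5
psi_B = b * u + 0.5 * sigma ** 2 * u ** 2        # exponent of X_1 = b + sigma * W_1
f_closed = (1 - psi_B * kappa) ** (-T / kappa)   # nested MGF with a gamma clock

rng = random.Random(2)
n = 200_000
acc = 0.0
for _ in range(n):
    c = rng.gammavariate(T / kappa, kappa)       # gamma clock with E[C(T)] = T
    x = b * c + sigma * math.sqrt(c) * rng.gauss(0.0, 1.0)
    acc += math.exp(u * x)
assert abs(acc / n - f_closed) < 0.01
```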
6.3 Derivation of Lévy Measures

6.3.1 Variance-Gamma model

The idea behind a variance-gamma model is to randomize the deterministic time of a Brownian motion by a gamma distribution. Since the original Brownian motion has a variance of 1 per time unit, the variance of the time-changed Brownian motion is then gamma distributed.

Denote by $B(t; b, \sigma)$ a Brownian motion with a drift:
\[
B(t; b, \sigma) = bt + \sigma W_t;
\]
then the variance-gamma process is defined by
\[
VG(t; \kappa, b, \sigma) = B\big(\Gamma(t; 1/\kappa, 1/\kappa); b, \sigma\big),
\]
where $\Gamma(t; 1/\kappa, 1/\kappa)$ is a gamma process with mean $t$ and variance $\kappa t$. Since the gamma process has a mean of one per unit of time, the time-changed BM still has an expected time length of $t$.

The characteristic function for VG is
\[
\Phi_{X(t)}(u) = E[\exp(iuX(t))]
= \Big(1 - \big(ibu - \tfrac{1}{2}\sigma^2u^2\big)\kappa\Big)^{-t/\kappa}
= \left(\Big(1 - \frac{iu}{\eta_p}\Big)\Big(1 + \frac{iu}{\eta_n}\Big)\right)^{-t/\kappa}
= \Phi_{\gamma_p(t)}(u)\,\Phi_{\gamma_n(t)}(u),
\]
where the $\eta$'s satisfy
\[
\frac{1}{\eta_p} - \frac{1}{\eta_n} = b\kappa, \qquad
\frac{1}{\eta_p}\cdot\frac{1}{\eta_n} = \frac{\sigma^2\kappa}{2}.
\]
Solving the above equations we obtain
\[
\eta_{p,n} = \frac{\sqrt{b^2 + 2\sigma^2/\kappa} \mp b}{\sigma^2}.
\]
The above results suggest that the VG process can be expressed as the difference of two independent gamma processes,
\[
X(t; \kappa, b, \sigma) = \gamma_p(t; \eta_p, \kappa) - \gamma_n(t; \eta_n, \kappa).
\]
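The defining relations for $\eta_{p,n}$, and the resulting factorization of the VG characteristic function into two gamma factors, can be checked directly. An illustrative sketch:

```python
import math, cmath

b, sigma, kappa, u = 0.15, 0.3, 0.6, 1.7
q = math.sqrt(b ** 2 + 2 * sigma ** 2 / kappa)
eta_p, eta_n = (q - b) / sigma ** 2, (q + b) / sigma ** 2
# the two defining constraints
assert abs(1 / eta_p - 1 / eta_n - b * kappa) < 1e-12
assert abs(1 / (eta_p * eta_n) - sigma ** 2 * kappa / 2) < 1e-12
# hence the VG characteristic function factorizes into two gamma factors
lhs = 1 - (1j * b * u - 0.5 * sigma ** 2 * u ** 2) * kappa
rhs = (1 - 1j * u / eta_p) * (1 + 1j * u / eta_n)
assert abs(lhs - rhs) < 1e-12
```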
The Lévy measure for $X(t)$ then follows:
\[
\nu_{VG}(x) =
\begin{cases}
\dfrac{\exp(-\eta_n|x|)}{\kappa|x|}, & x < 0,\\[8pt]
\dfrac{\exp(-\eta_p x)}{\kappa x}, & x > 0.
\end{cases}
\]
In terms of $(\kappa, b, \sigma)$ we can rewrite the Lévy measure as
\[
\nu_{VG}(x) = \frac{\exp(bx/\sigma^2)}{\kappa|x|}\exp\left(-|x|\,\frac{\sqrt{b^2 + 2\sigma^2/\kappa}}{\sigma^2}\right).
\]
From the Lévy measure, we can derive:
1. $\int_{\mathbb{R}}\nu_{VG}(x)\,dx = \infty$, i.e., infinite activity;
2. $\int_{|x|<1}|x|\,\nu_{VG}(x)\,dx < \infty$, i.e., finite variation.
The VG model for option pricing is defined by the following asset price dynamics,
\[
S(t) = S_0\exp\big(rt - mt + VG(t; \kappa, b, \sigma)\big),
\]
under the risk-neutral measure, where
\[
m = \ln E\big[\exp VG(1; \kappa, b, \sigma)\big].
\]
Let $x(t) = \ln S(t)$; then
\[
dx(t) = (r - m)\,dt + dVG(t; \kappa, b, \sigma).
\]
Through the Bochner procedure we can derive the MGF for the asset price under the VG option pricing model. The MGF of $\Gamma(t; 1/\kappa, 1/\kappa)$ is known to be
\[
f_\Gamma(u) = (1 - u\kappa)^{-t/\kappa}.
\]
It then follows that the MGF of the VG process is
\[
f_{VG}(u) = f_\Gamma\big(\psi_B(u)\big) = \Big(1 - \big(ub + \tfrac{1}{2}u^2\sigma^2\big)\kappa\Big)^{-t/\kappa}.
\]
Note that
\[
m = \ln E\big[\exp VG(1; \kappa, b, \sigma)\big] = \ln f_{VG}(u = 1, t = 1)
= -\frac{1}{\kappa}\ln\Big(1 - \big(b + \tfrac{1}{2}\sigma^2\big)\kappa\Big).
\]
A popular alternative parametrization of gamma processes takes the following form. The marginal distribution is given by
\[
p_t(x) = \Big(\frac{\mu}{\kappa}\Big)^{\mu^2t/\kappa}\frac{x^{\mu^2t/\kappa - 1}\,e^{-\mu x/\kappa}}{\Gamma(\mu^2t/\kappa)},
\]
where
\[
\Gamma(x) = \int_0^\infty e^{-u}u^{x-1}\,du.
\]
There are
\[
E[X_t] = \mu t, \qquad \mathrm{Var}[X_t] = \kappa t,
\]
and
\[
E\big[e^{uX_t}\big] = \Big(1 - \frac{u\kappa}{\mu}\Big)^{-\mu^2t/\kappa}.
\]
The corresponding Lévy measure is
\[
\nu(x) =
\begin{cases}
\dfrac{\mu^2}{\kappa}\,\dfrac{\exp(-\mu x/\kappa)}{x}, & x > 0,\\[6pt]
0, & x \le 0.
\end{cases}
\qquad (6.3.2)
\]
6.3.2 Normal Inverse Gaussian Model

An inverse Gaussian process is defined by
\[
\tau_t = \inf\{u > 0 : W_u + \nu u = t\}.
\]
There are
\[
E[\tau_t] = \frac{t}{\nu}, \qquad \mathrm{Var}(\tau_t) = \frac{t}{\nu^3}.
\]
We denote by $IG(t; t/\nu, t/\nu^3)$ an inverse Gaussian process with mean $t/\nu$ and variance $t/\nu^3$.
A normal inverse Gaussian (NIG) process is defined by
\[
NIG(t; \nu, b, \sigma) = B(\tau_t; b, \sigma) = B\big(IG(t; t/\nu, t/\nu^3); b, \sigma\big).
\]
The risk-neutral process of an asset price is
\[
S(t) = S_0\exp\big((r - m)t + NIG(t; \nu, b, \sigma)\big)
\]
with
\[
m = \ln E\big[\exp NIG(1; \nu, b, \sigma)\big].
\]
According to the Bochner procedure, the MGF of NIG is
\[
f_{NIG}(u) = \exp\Big(t\big[\nu - \sqrt{\nu^2 - \sigma^2u^2 - 2bu}\big]\Big).
\]
It follows that
\[
m = \ln f_{NIG}(u = 1; t = 1) = \nu - \sqrt{\nu^2 - \sigma^2 - 2b}.
\]
The Lévy measure for the NIG process is
\[
\nu_{NIG}(x) = \frac{\sigma}{\pi}\sqrt{\nu^2\sigma^{-2} + c^2}\;e^{cx}\,
\frac{K_1\big(\sqrt{\nu^2\sigma^{-2} + c^2}\,|x|\big)}{|x|},
\]
where
\[
c = \frac{b}{\sigma^2}
\]
and $K_n(x)$ is the modified Bessel function of the second kind, defined by
\[
K_n(x) = \frac{1}{2}\Big(\frac{x}{2}\Big)^n\int_0^\infty e^{-t - \frac{x^2}{4t}}\,t^{-n-1}\,dt.
\]
It can be shown that
1. $\int_{\mathbb{R}}\nu_{NIG}(x)\,dx = \infty$: infinite activity;
2. $\int_{|x|<1}|x|\,\nu_{NIG}(x)\,dx = \infty$: infinite variation.
6.3.3 Tempered Stable Subordinators

A normal tempered stable process is a Brownian motion with a drift subordinated to a tempered stable process:
\[
Y_t = B\big(X_t(\alpha, 1, \kappa); b, \sigma\big).
\]
Its characteristic exponent is
\[
\Psi(u) =
\begin{cases}
\dfrac{1-\alpha}{\alpha\kappa}\left[\Big(1 + \dfrac{\kappa\,(u^2\sigma^2/2 - ibu)}{1-\alpha}\Big)^{\alpha} - 1\right], & 0 < \alpha < 1,\\[10pt]
\dfrac{1}{\kappa}\ln\Big(1 + \kappa\Big(\dfrac{u^2\sigma^2}{2} - ibu\Big)\Big), & \alpha = 0.
\end{cases}
\]
To derive the Lévy measure of a normal tempered stable process, we will need a general result. Let $\{C_t\}_{t\ge 0}$ be a subordinator (i.e., a positive increasing process) with triplet $(b, 0, \rho)$. The Laplace exponent of $C$, $l(u)$, is defined through
\[
E\big[e^{uC_t}\big] = e^{t\,l(u)}, \quad u \le 0,
\]
or
\[
l(u) = \frac{1}{t}\ln E\big[e^{uC_t}\big] = bu + \int_0^\infty\big(e^{ux} - 1\big)\,\rho(dx).
\]
The next theorem gives a general expression for the Lévy triplet of a Lévy process obtained by subordination.

Theorem 6.3.1 (Cont and Tankov) Subordination of a Lévy process. Fix a probability space $(\Omega, \mathcal{F}, P)$. Let $(X_t)_{t\ge 0}$ be a Lévy process on $\mathbb{R}^d$ with characteristic exponent $\Psi(u)$ (normalized so that $E[e^{iuX_t}] = e^{t\Psi(u)}$) and triplet $(\gamma, A, \nu)$, and let $(C_t)_{t\ge 0}$ be a subordinator with Laplace exponent $l(u)$ and triplet $(b, 0, \rho)$. Then the process $(Y_t)_{t\ge 0}$ defined for each $\omega$ by $Y(t, \omega) = X(C(t, \omega), \omega)$ is a Lévy process with characteristic function
\[
E\big[e^{iuY_t}\big] = e^{t\,l(\Psi(u))}.
\]
The Lévy triplet $(\gamma_Y, A_Y, \nu_Y)$ of $Y$ is
\[
\gamma_Y = b\gamma + \int_0^\infty\rho(ds)\int_{|x|\le 1} x\,p^X_s(dx), \qquad
A_Y = bA, \qquad
\nu_Y(B) = b\,\nu(B) + \int_0^\infty p^X_s(B)\,\rho(ds), \quad B\in\mathcal{B}(\mathbb{R}^d),
\]
where $p^X_t$ is the probability distribution of $X_t$.
According to the last theorem, the Lévy measure of a normal tempered stable process can be obtained as follows:
\[
\nu(x) = \frac{c}{\sigma\sqrt{2\pi}}\int_0^\infty e^{-\frac{(x - by)^2}{2\sigma^2y}}\,e^{-\lambda y}\,\frac{dy}{y^{\alpha+3/2}}
= \frac{c}{\sigma}\sqrt{\frac{2}{\pi}}\left(\frac{\sqrt{b^2 + 2\sigma^2\lambda}}{|x|}\right)^{\alpha+1/2}
e^{bx/\sigma^2}\,K_{\alpha+1/2}\left(\frac{|x|\sqrt{b^2 + 2\sigma^2\lambda}}{\sigma^2}\right).
\]
Substituting $\lambda$ and $c$ using (6.1.1), we obtain
\[
\nu(x) = \frac{c(\alpha, \kappa, b, \sigma)}{|x|^{\alpha+1/2}}\,e^{bx/\sigma^2}\,
K_{\alpha+1/2}\left(\frac{|x|}{\sigma^2}\sqrt{b^2 + \frac{2\sigma^2}{\kappa}(1-\alpha)}\right),
\]
where
\[
c(\alpha; \kappa, b, \sigma) = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sigma\,\Gamma(1-\alpha)}
\Big(\frac{1-\alpha}{\kappa}\Big)^{1-\alpha}
\left(b^2 + \frac{2\sigma^2}{\kappa}(1-\alpha)\right)^{\frac{\alpha}{2}+\frac{1}{4}}.
\]
$K_n(\cdot)$ is the modified Bessel function of the second kind, which has the Sommerfeld integral representation
\[
K_n(x) = \frac{1}{2}\Big(\frac{x}{2}\Big)^n\int_0^\infty e^{-t - \frac{x^2}{4t}}\,t^{-n-1}\,dt.
\]
Introducing the tail decay rates
\[
\lambda_{\pm} = \frac{1}{\sigma^2}\left(\sqrt{b^2 + \frac{2\sigma^2}{\kappa}(1-\alpha)} \mp b\right),
\]
we can rewrite the Lévy measure as
\[
\nu(x) = \frac{c}{|x|^{\alpha+1/2}}\,e^{x(\lambda_- - \lambda_+)/2}\,
K_{\alpha+1/2}\big(|x|(\lambda_+ + \lambda_-)/2\big).
\]
From the asymptotic behaviour formulae of $K_n$, we can deduce that
\[
\nu(x) \sim \frac{1}{|x|^{1+2\alpha}} \quad \text{as } x\to 0,
\]
\[
\nu(x) \sim \frac{1}{|x|^{1+\alpha}}\,e^{-\lambda_+x} \quad \text{as } x\to+\infty,
\]
\[
\nu(x) \sim \frac{1}{|x|^{1+\alpha}}\,e^{-\lambda_-|x|} \quad \text{as } x\to-\infty.
\]
From these behaviours we can see that the tempered stable processes have infinite activity and fat tails.
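A concrete consistency check: at $\alpha = 0$ the Bessel order is $1/2$, where $K_{1/2}(z) = \sqrt{\pi/2z}\,e^{-z}$ in closed form, and the normal tempered stable Lévy measure above should collapse to the VG Lévy measure of Section 6.3.1. An illustrative sketch (function names are ours):

```python
import math

def k_half(z):
    """Modified Bessel function K_{1/2}(z) in closed form."""
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z)

def nts_levy(x, kappa, b, sigma):
    """Normal tempered stable Lévy density evaluated at alpha = 0."""
    alpha = 0.0
    q = math.sqrt(b ** 2 + 2 * sigma ** 2 * (1 - alpha) / kappa)
    c = (math.sqrt(2 / math.pi) / (sigma * math.gamma(1 - alpha))
         * ((1 - alpha) / kappa) ** (1 - alpha) * q ** (alpha + 0.5))
    return (c / abs(x) ** (alpha + 0.5) * math.exp(b * x / sigma ** 2)
            * k_half(abs(x) * q / sigma ** 2))

def vg_levy(x, kappa, b, sigma):
    """Variance-gamma Lévy density from Section 6.3.1."""
    q = math.sqrt(b ** 2 + 2 * sigma ** 2 / kappa)
    return math.exp(b * x / sigma ** 2 - abs(x) * q / sigma ** 2) / (kappa * abs(x))

kappa, b, sigma = 0.4, 0.1, 0.25
for x in (-0.8, -0.05, 0.03, 1.2):
    assert abs(nts_levy(x, kappa, b, sigma) - vg_levy(x, kappa, b, sigma)) \
        < 1e-9 * vg_levy(x, kappa, b, sigma)
```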
We have two additional remarks.
1. In the family of Lévy processes obtained by subordinating to tempered stable processes, the VG process (for $\alpha = 0$) and the NIG process (for $\alpha = 1/2$) are particularly useful because their probability density functions are available in closed form.
Finally, we summarize the VG and NIG models in the following tables (Cont and Tankov, 2007).

Other Lévy processes not included in this course:
1. Generalized Hyperbolic Processes
2. The Generalized Inverse Gaussian Process
3. The Meixner Process
6.4 Itô's Lemma for Lévy Processes

We start with a jump-diffusion process where the jumps are modeled by a compound Poisson process,
\[
X_t = X_0 + \int_0^t b_s\,ds + \int_0^t\sigma_s\,dW_s + \sum_{i=1}^{N_t}\Delta X_i,
\]
where $b_t$ and $\sigma_t$ are continuous non-anticipating processes with
\[
E\left[\int_0^T\sigma_t^2\,dt\right] < \infty.
\]
Define $Y_t = f(t, X_t)$, where $f\in C^{1,2}([0,T]\times\mathbb{R})$, and denote by $T_i$, $i = 1, \ldots, N_T$, the jump times of $X$. Over $(T_i, T_{i+1})$, $X$ evolves according to
\[
dX_t = b_t\,dt + \sigma_t\,dW_t = dX^c_t.
\]
So,
\[
Y_{T_{i+1}-} - Y_{T_i} = \int_{T_i}^{T_{i+1}}\left(\frac{\partial f}{\partial t}\,dt
+ \frac{\partial f}{\partial X}\,dX^c_t + \frac{1}{2}\frac{\partial^2f}{\partial X^2}\,(dX^c_t)^2\right).
\]
With jumps, the increment over $(0, t)$ can be written as
\[
Y_t - Y_0 = \int_0^t\frac{\partial f}{\partial s}\,ds
+ \frac{\partial f}{\partial X}\,dX^c_s + \frac{1}{2}\frac{\partial^2f}{\partial X^2}\,(dX^c_s)^2
+ \sum_{0\le s\le t,\,\Delta X_s\neq 0}\big[f(s, X_{s-} + \Delta X_s) - f(s, X_{s-})\big]
\]
\[
= \int_0^t\left(\frac{\partial f}{\partial s} + \frac{1}{2}\sigma_s^2\frac{\partial^2f}{\partial X^2}\right)ds
+ \frac{\partial f}{\partial X}\,dX_s
+ \sum_{0\le s\le t,\,\Delta X_s\neq 0}\left[f(s, X_{s-} + \Delta X_s) - f(s, X_{s-})
- \frac{\partial f}{\partial X}(s, X_{s-})\,\Delta X_s\right].
\]
A similar result holds for Lévy processes.
Theorem 6.4.1 Let $(X_t)_{t\ge 0}$ be a Lévy process with Lévy triplet $(\gamma, \sigma, \nu)$ and $f\in C^{1,2}([0,T]\times\mathbb{R})$. Then
\[
f(t, X_t) = f(0, X_0)
+ \int_0^t\left(\frac{\partial f}{\partial s} + \frac{1}{2}\sigma^2\frac{\partial^2f}{\partial X^2}\right)ds
+ \frac{\partial f}{\partial X}\,dX_s
+ \sum_{0\le s\le t,\,\Delta X_s\neq 0}\left[f(s, X_{s-} + \Delta X_s) - f(s, X_{s-})
- \Delta X_s\frac{\partial f}{\partial X}(s, X_{s-})\right].
\]
Proof: First, the summation is meaningful because $f\in C^{1,2}([0,T]\times\mathbb{R})$, so there is
\[
\Big|f(s, X_{s-} + \Delta X_s) - f(s, X_{s-}) - \Delta X_s\frac{\partial f}{\partial X}(s, X_{s-})\Big|
\le c\,\Delta X_s^2
\]
for a constant $c > 0$. Let $S_t$ denote the summation. Then,
\[
E[S_t] \le ct\int_{\mathbb{R}} x^2\,\nu(dx) < \infty
\]
due to the finiteness of the quadratic variation of the Lévy jumps. In addition, there is also $\mathrm{Var}(S_t) < \infty$ because $\int_{\mathbb{R}}(1\wedge x^2)\,\nu(dx) < \infty$. Next, we write
\[
X_t = X^\epsilon_t + R^\epsilon_t,
\]
where $X^\epsilon_t$ is the jump-diffusion part of $X_t$ with jump sizes bigger than $\epsilon$, while, recalling the proof of the Lévy–Itô decomposition, for $\epsilon = 2^{-i}$ there is
\[
R^{2^{-i}}_t = \sum_{n=i}^\infty M^{(n)}_t
= \sum_{n=i}^\infty\left(\sum_{j=1}^{N^{(n)}_t}\xi^{(n)}_j - \lambda_n t\int_{\mathbb{R}} zF_n(dz)\right),
\]
and it has been proven that $E[R^{2^{-i}}_t] = 0$ and $\mathrm{Var}[R^{2^{-i}}_t]\to 0$ as $i\to\infty$. Since $X^\epsilon_t$ is a jump-diffusion process, we know $f(t, X^\epsilon_t)$ satisfies the equation of the theorem. The limit for $\epsilon\to 0$ exists in $L^2(\Omega, \mathcal{F}, P)$, and it also satisfies the same equation because
\[
\big|f(t, X_t) - f(t, X^\epsilon_t)\big|^2 \le c\,(R^\epsilon_t)^2,
\]
where $c$ is another constant.
The differential form of the Itô formula is
\[
df(t, X_t) = \left(\frac{\partial f}{\partial t} + \frac{1}{2}\sigma_t^2\frac{\partial^2f}{\partial X^2}\right)dt
+ \frac{\partial f}{\partial X}\,dX_t
+ f(t, X_{t-} + \Delta X_t) - f(t, X_{t-}) - \Delta X_t\frac{\partial f}{\partial X}(t, X_{t-}).
\]
For computing expectations, we rewrite the last theorem as follows.
Theorem 6.4.2 (Martingale-drift decomposition) Let $(X_t)_{t\ge 0}$ be a Lévy process with Lévy triplet $(b, \sigma, \nu)$ and $f\in C^{1,2}([0,T]\times\mathbb{R})$. Then $Y_t = f(t, X_t) = M_t + V_t$, where $M_t$ is the martingale part,
\[
M_t = f(0, X_0) + \int_0^t\sigma\frac{\partial f}{\partial X}\,dW_s
+ \int_{[0,t]\times\mathbb{R}}\tilde\mu(ds, dz)\,\big[f(s, X_{s-} + z) - f(s, X_{s-})\big],
\]
and $V_t$ is a continuous finite-variation process,
\[
V_t = \int_0^t\left(\frac{\partial f}{\partial s} + b\frac{\partial f}{\partial X}
+ \frac{1}{2}\sigma^2\frac{\partial^2f}{\partial X^2}\right)ds
+ \int_{[0,t]\times\mathbb{R}} ds\,\nu(dz)\left[f(s, X_s + z) - f(s, X_s)
- z\frac{\partial f}{\partial X}(s, X_s)\,\mathbf{1}_{\{|z|\le 1\}}\right].
\]
For a Lévy process with finite variation, there is no need to subtract the term $\Delta X_s\frac{\partial f}{\partial X}(s, X_{s-})$.
6.5 Exponentials of Lévy Processes

Let $(X_t)_{t\ge 0}$ be a Lévy process with jump measure $\mu$, such that
\[
dX_s = \gamma\,ds + \sigma\,dW_s + \int_{\mathbb{R}} z\big(\mu(ds, dz) - \mathbf{1}_{\{|z|\le 1\}}\nu(dz)\,ds\big).
\]
Applying Itô's formula to $Y_t = \exp(X_t)$ yields
\[
Y_t = 1 + \int_0^t Y_{s-}\,dX_s + \frac{\sigma^2}{2}\int_0^t Y_{s-}\,ds
+ \sum_{0\le s\le t,\,\Delta X_s\neq 0}\Big(e^{X_{s-} + \Delta X_s} - e^{X_{s-}} - \Delta X_s e^{X_{s-}}\Big)
\]
\[
= 1 + \int_0^t Y_{s-}\,dX_s + \frac{\sigma^2}{2}\int_0^t Y_{s-}\,ds
+ \int_{[0,t]\times\mathbb{R}} Y_{s-}\big(e^z - 1 - z\big)\,\mu(dz, ds)
\]
\[
= 1 + \int_0^t Y_{s-}\left(\gamma\,ds + \sigma\,dW_s
+ \int_{\mathbb{R}}\big(e^z - 1 - z\mathbf{1}_{\{|z|\le 1\}}\big)\nu(dz)\,ds\right)
+ \frac{\sigma^2}{2}\int_0^t Y_{s-}\,ds
+ \int_{[0,t]\times\mathbb{R}} Y_{s-}\big(e^z - 1\big)\big(\mu(dz, ds) - \nu(dz)\,ds\big).
\]
Denote the compensated jump measure by $\tilde\mu(dz, ds) = \mu(dz, ds) - \nu(dz)\,ds$. By the theorem of martingale-drift decomposition, there is
\[
Y_t = M_t + A_t,
\]
with
\[
M_t = 1 + \int_0^t Y_{s-}\,\sigma\,dW_s + \int_{[0,t]\times\mathbb{R}} Y_{s-}\big(e^z - 1\big)\,\tilde\mu(dz, ds),
\]
\[
A_t = \int_0^t Y_{s-}\left(\gamma + \frac{\sigma^2}{2}
+ \int_{\mathbb{R}}\big(e^z - 1 - z\mathbf{1}_{\{|z|\le 1\}}\big)\nu(dz)\right)ds.
\]
Therefore, $Y_t = \exp X_t$ is a martingale iff the drift term vanishes:
\[
\gamma + \frac{\sigma^2}{2} + \int_{\mathbb{R}}\big(e^z - 1 - z\mathbf{1}_{\{|z|\le 1\}}\big)\nu(dz) = 0.
\]
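The martingale condition can be illustrated with a jump-diffusion having $N(0, s^2)$ jumps, for which the correction integral equals $\lambda(e^{s^2/2} - 1)$ exactly (the small-jump compensator vanishes by symmetry). A Monte Carlo sketch with our own parameters:

```python
import math, random

sigma, s, lam, t = 0.2, 0.3, 1.0, 1.0
# For symmetric N(0, s^2) jumps the small-jump compensator integral vanishes, and
# int (e^z - 1 - z 1_{|z|<=1}) nu(dz) = lam * (exp(s^2/2) - 1) exactly.
gamma = -sigma ** 2 / 2 - lam * (math.exp(s ** 2 / 2) - 1)

def poisson_sample(rate, rng):
    """Knuth's multiplication method, adequate for small rates."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(3)
n = 200_000
acc = 0.0
for _ in range(n):
    x = gamma * t + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0)
    x += sum(rng.gauss(0.0, s) for _ in range(poisson_sample(lam * t, rng)))
    acc += math.exp(x)
assert abs(acc / n - 1.0) < 0.01     # E[exp(X_t)] = 1 under the drift condition
```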
6.6 Equivalent Measures for Lévy Processes

Equivalence between two measures, $P$ and $Q$, means that
\[
P(A) = 1 \iff Q(A) = 1.
\]

6.6.1 Poisson Processes

Theorem 6.6.1 (Equivalence of measures for Poisson processes) Let $(N, P_1)$ and $(N, P_2)$ be Poisson processes on $(\Omega, \mathcal{F}_T)$ with intensities $\lambda_1$ and $\lambda_2$ and jump sizes $a_1$ and $a_2$.
1. If $a_1 = a_2$, then $P_1 \sim P_2$, with
\[
\frac{dP_1}{dP_2}\bigg|_{\mathcal{F}_t}
= \exp\left((\lambda_2 - \lambda_1)t - N_t\ln\frac{\lambda_2}{\lambda_1}\right)
= e^{(\lambda_2 - \lambda_1)t}\Big(\frac{\lambda_1}{\lambda_2}\Big)^{N_t}.
\]
2. If $a_1 \neq a_2$, then $P_1 \perp P_2$.
Proof: We prove statement 1 by direct verification. Let $B\in\mathcal{F}_t$; then
\[
P_1(B) = E^{P_2}\left[\mathbf{1}_B\frac{dP_1}{dP_2}\right]
= \sum_{k=0}^\infty e^{-\lambda_2t}\frac{(\lambda_2t)^k}{k!}\Big(\frac{\lambda_1}{\lambda_2}\Big)^k
e^{(\lambda_2 - \lambda_1)t}\,E^{P_2}[\mathbf{1}_B \mid N_t = k]
\]
\[
= \sum_{k=0}^\infty e^{-\lambda_1t}\frac{(\lambda_1t)^k}{k!}\,E^{P_2}[\mathbf{1}_B \mid N_t = k]
= \sum_{k=0}^\infty e^{-\lambda_1t}\frac{(\lambda_1t)^k}{k!}\,E^{P_1}[\mathbf{1}_B \mid N_t = k].
\]
Here, we make use of the fact that $E^{P}[\mathbf{1}_B \mid N_t = k]$ is independent of $\lambda$.
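The density formula can be verified term by term against the Poisson weights: summing $E^{P_2}[\mathbf{1}_{\{N_t = k\}}\,dP_1/dP_2]$ over $k$ recovers $1$, and each term recovers $P_1(N_t = k)$. An illustrative sketch:

```python
import math

lam1, lam2, t = 1.3, 2.1, 0.7

def dP1_dP2(k):
    """Radon-Nikodym density on the event {N_t = k}."""
    return math.exp((lam2 - lam1) * t) * (lam1 / lam2) ** k

total = sum(math.exp(-lam2 * t) * (lam2 * t) ** k / math.factorial(k) * dP1_dP2(k)
            for k in range(60))
assert abs(total - 1.0) < 1e-12          # E^{P2}[dP1/dP2] = 1
k = 3
p1_k = math.exp(-lam1 * t) * (lam1 * t) ** k / math.factorial(k)
reweighted = math.exp(-lam2 * t) * (lam2 * t) ** k / math.factorial(k) * dP1_dP2(k)
assert abs(reweighted - p1_k) < 1e-14    # reweighting P2 recovers P1
```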
For statement 2, we observe that all non-constant paths possible under $P_1$ are impossible under $P_2$.
6.6.2 Compound Poisson Processes

Theorem 6.6.2 (Equivalence of measures for compound Poisson processes) Let $(X, P)$ and $(X, Q)$ be compound Poisson processes on $(\Omega, \mathcal{F}_T)$ with Lévy measures $\nu_P$ and $\nu_Q$. $P$ and $Q$ are equivalent if and only if $\nu_P$ and $\nu_Q$ are equivalent, and in this case,
\[
\frac{dQ}{dP}\bigg|_{\mathcal{F}_t}
= \exp\left((\lambda_P - \lambda_Q)t + \sum_{s\le t}\phi(\Delta X_s)\right),
\]
where $\lambda_P = \nu_P(\mathbb{R})$ and $\lambda_Q = \nu_Q(\mathbb{R})$ are the jump intensities and
\[
\phi = \ln\left(\frac{d\nu_Q}{d\nu_P}\right), \quad \text{i.e.,}\quad
\phi(x) = \ln\left(\frac{\nu_Q(dx)}{\nu_P(dx)}\right).
\]
Proof: Suppose $\nu_P$ and $\nu_Q$ are equivalent. Then, conditioning on the number of jumps of $X$ up to time $t$, we have
\[
E^P\left[\frac{dQ}{dP}\bigg|_{\mathcal{F}_t}\right]
= E^P\Big[e^{t(\lambda_P - \lambda_Q) + \sum_{s\le t}\phi(\Delta X_s)}\Big]
= e^{t(\lambda_P - \lambda_Q)}\,E^P\Big[e^{\sum_{s\le t}\phi(\Delta X_s)}\Big]
\]
\[
= e^{t(\lambda_P - \lambda_Q)}\sum_{k=0}^\infty e^{-\lambda_Pt}\frac{(\lambda_Pt)^k}{k!}\,
E^P\Big[e^{\sum_{s\le t}\phi(\Delta X_s)}\,\Big|\ k\ \text{jumps}\Big]
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}\Big(E^P\big[e^{\phi(\Delta X)}\big]\Big)^k
\]
\[
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}
\left(E^P\left[\frac{d\nu_Q}{d\nu_P}(\Delta X)\right]\right)^k
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}\Big(\frac{\lambda_Q}{\lambda_P}\Big)^k
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Qt)^k}{k!} = 1.
\]
Note that a single jump has law $\nu_P(dx)/\lambda_P = f_P(x)\,dx$ under $P$, with $\int_{\mathbb{R}} f_P(x)\,dx = 1$, so that
\[
E^P\left[\frac{d\nu_Q}{d\nu_P}(\Delta X)\right]
= \int_{\mathbb{R}}\frac{d\nu_Q}{d\nu_P}(x)\,\frac{\nu_P(dx)}{\lambda_P} = \frac{\lambda_Q}{\lambda_P}.
\]
Next, we show that:
1. $X_t$ has $Q$-independent increments.
Let $f$ and $g$ be two bounded measurable functions and let $s < t \le T$. Using the fact that $X_t$ and $\ln\frac{dQ}{dP}$ are $P$-Lévy processes, and that $D_t := \frac{dQ}{dP}\big|_{\mathcal{F}_t}$ is a $P$-martingale, we have
\[
E^Q\big[f(X_s)g(X_t - X_s)\big]
= E^P\left[f(X_s)g(X_t - X_s)\frac{dQ}{dP}\right]
= E^P\big[f(X_s)D_s\big]\,E^P\left[g(X_t - X_s)\frac{D_t}{D_s}\right]
= E^Q\big[f(X_s)\big]\,E^Q\big[g(X_t - X_s)\big].
\]
2. The law of $X_t$ under $Q$ is that of a compound Poisson process with Lévy measure $\nu_Q$.
By conditioning on the number of jumps of $X$ again,
\[
E^Q\big[e^{iuX_t}\big]
= E^P\Big[e^{iuX_t}\,e^{t(\lambda_P - \lambda_Q) + \sum_{s\le t}\phi(\Delta X_s)}\Big]
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}
\Big(E^P\big[e^{iu\Delta X + \phi(\Delta X)}\big]\Big)^k
\]
\[
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}
\left(E^P\left[e^{iu\Delta X}\frac{d\nu_Q}{d\nu_P}(\Delta X)\right]\right)^k
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{(\lambda_Pt)^k}{k!}
\left(\int e^{iux}\frac{d\nu_Q}{d\nu_P}(x)\,\frac{\nu_P(dx)}{\lambda_P}\right)^k
\]
\[
= e^{-\lambda_Qt}\sum_{k=0}^\infty\frac{1}{k!}\left(t\int e^{iux}\,d\nu_Q(x)\right)^k
= \exp\left(t\int\big(e^{iux} - 1\big)\,\nu_Q(dx)\right).
\]
Suppose, conversely, that $\nu_P$ and $\nu_Q$ are not equivalent, so that, e.g., there is a set $B$ s.t. $\nu_P(B) > 0$ and $\nu_Q(B) = 0$. Then the set of paths having at least one jump with size in $B$ has positive $P$-probability and zero $Q$-probability, meaning the two measures are not equivalent.
6.6.3 Diffusion Processes

Theorem 6.6.3 (Equivalence of measures for Brownian motion with drift) Let $(X, P)$ and $(X, Q)$ be two Brownian motions on $(\Omega, \mathcal{F}_t)$ with volatilities $\sigma_P > 0$ and $\sigma_Q > 0$ and drifts $\mu_P$ and $\mu_Q$. $P$ and $Q$ are equivalent if $\sigma_P = \sigma_Q = \sigma$, and singular otherwise. When they are equivalent,
\[
\frac{dQ}{dP}\bigg|_{\mathcal{F}_T}
= \exp\left(\frac{\mu_Q - \mu_P}{\sigma^2}X_T - \frac{(\mu_Q)^2 - (\mu_P)^2}{2\sigma^2}T\right).
\]
Proof: We need to show 1) $Q$-independent increments, and 2) that the characteristic function of $X_T$ under $Q$ is that of a Gaussian with mean and variance $(\mu_QT, \sigma^2T)$. Let $\alpha = (\mu_Q - \mu_P)/\sigma$ and $X_t = \sigma W_t + \mu_Pt$, where $W_t$ is a $P$-Brownian motion. It is well known that
\[
\frac{dQ}{dP}\bigg|_{\mathcal{F}_T}
= \exp\left(\alpha W_T - \frac{1}{2}\alpha^2T\right)
= \exp\left(\frac{\mu_Q - \mu_P}{\sigma^2}X_T - \frac{(\mu_Q)^2 - (\mu_P)^2}{2\sigma^2}T\right).
\]
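That the stated density integrates to one under $P$ (i.e., $E^P[dQ/dP] = 1$) can be confirmed by quadrature against the $N(\mu_PT, \sigma^2T)$ law of $X_T$. A sketch with our own grid choices:

```python
import math

muP, muQ, sigma, T = 0.05, 0.12, 0.3, 2.0
a = (muQ - muP) / sigma ** 2

def integrand(x):
    """N(muP*T, sigma^2*T) density times the candidate dQ/dP density."""
    pdf = math.exp(-(x - muP * T) ** 2 / (2 * sigma ** 2 * T)) / math.sqrt(2 * math.pi * sigma ** 2 * T)
    dQdP = math.exp(a * x - (muQ ** 2 - muP ** 2) / (2 * sigma ** 2) * T)
    return pdf * dQdP

lo, hi, n = -6.0, 6.0, 100_000
h = (hi - lo) / n
total = sum(integrand(lo + i * h) for i in range(n + 1))
total -= 0.5 * (integrand(lo) + integrand(hi))
assert abs(total * h - 1.0) < 1e-6
```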
6.6.4 General Lévy Processes

Theorem 6.6.4 (Equivalence of measures for Lévy processes) Let $(X_t, P)$ and $(X_t, Q)$ be two Lévy processes on $\mathbb{R}$ with characteristic triplets $(b, \sigma, \nu)$ and $(b', \sigma', \nu')$. Then $P|_{\mathcal{F}_t}$ and $Q|_{\mathcal{F}_t}$ are equivalent for all $t$ if and only if $\sigma = \sigma'$, the Lévy measures are equivalent with
\[
\int\Big(e^{\phi(z)/2} - 1\Big)^2\nu(dz) < \infty, \qquad (6.6.3)
\]
where $\phi(z) = \ln(\nu'(dz)/\nu(dz))$, and, in case $\sigma = 0$, in addition
\[
b' - b = \int_{|z|\le 1} z\,(\nu' - \nu)(dz).
\]
The Radon–Nikodym derivative is
\[
\frac{dQ}{dP}\bigg|_{\mathcal{F}_t} = e^{U_t},
\]
with
\[
U_t = \eta X^c_t - \frac{\eta^2\sigma^2t}{2} - \eta bt
+ \lim_{\epsilon\to 0}\left(\sum_{s\le t,\,|\Delta X_s|>\epsilon}\phi(\Delta X_s)
- t\int_{|z|>\epsilon}\big(e^{\phi(z)} - 1\big)\nu(dz)\right),
\]
where $X^c_t$ is the continuous part of $X_t$ and $\eta$ is given by
\[
\eta =
\begin{cases}
\dfrac{b' - b - \int_{|z|\le 1}z\,(\nu' - \nu)(dz)}{\sigma^2}, & \sigma > 0,\\[8pt]
0, & \sigma = 0.
\end{cases}
\]
Several comments are in order.
1. There is no constraint on $b'$ when $\sigma > 0$.
2. $E^P[e^{U_t}] = 1$.
3. $U_t$ is a Lévy process with triplet given by
\[
b_U = -\frac{\sigma^2\eta^2}{2}
- \int_{\mathbb{R}}\big(e^z - 1 - z\mathbf{1}_{\{|z|\le 1\}}\big)\,(\nu\circ\phi^{-1})(dz),
\qquad
\sigma_U = \sigma\eta,
\qquad
\nu_U = \nu\circ\phi^{-1}.
\]
Example: change of measure for CGMY or an extended tempered stable process. The extended tempered stable process has Lévy measure
\[
\nu(x) = \frac{c_+e^{-\lambda_+x}}{x^{1+\alpha_+}}\,\mathbf{1}_{\{x>0\}}
+ \frac{c_-e^{-\lambda_-|x|}}{|x|^{1+\alpha_-}}\,\mathbf{1}_{\{x<0\}},
\]
where $\lambda_\pm, c_\pm > 0$ and $\alpha_\pm < 2$. Suppose that the measures before and after the change are both of tempered stable type; then the left-hand side of (6.6.3) over $(0, \infty)$ reads
\[
c_{+,1}\int_0^\infty\left(e^{-\frac{1}{2}(\lambda_{+,2} - \lambda_{+,1})x}
\sqrt{c_{+,2}/c_{+,1}}\;x^{\frac{\alpha_{+,1} - \alpha_{+,2}}{2}} - 1\right)^2
\frac{e^{-\lambda_{+,1}x}}{x^{1+\alpha_{+,1}}}\,dx.
\]
When
1.
+
2
=
+
1
2.
+
2
=
+
1
but c
+
2
= c
+
1
the integrand is proportional to 1/x
1+
+
1
and the integral is innite, so the two measure
is not equivalent. When
+
2
=
+
1
and c
+
2
= c
+
1
the integrand is proportional to 1/(x
+
1
1
) and is integrable.
This example shows that one can change freely the distribution of large jumps, but
one has to be careful with the distribution of small jumps. This is a good property because
many large jumps affect tails.
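The dichotomy above can be seen numerically: when $\alpha$ mismatches, the truncated version of the integral blows up as the lower cutoff shrinks. A minimal sketch, with hypothetical parameter values $(c, \lambda, \alpha)$:

```python
import numpy as np

# Numerical check of condition (6.6.3) on (0, infinity) for two
# tempered stable Levy densities.
def nu(x, c, lam, alpha):
    return c * np.exp(-lam * x) / x**(1 + alpha)

def lhs_663(p1, p2, lower):
    # integral of (e^{phi/2} - 1)^2 nu_1(dx) over (lower, 1), log grid
    x = np.logspace(np.log10(lower), 0.0, 4000)
    phi = np.log(nu(x, *p2) / nu(x, *p1))
    y = (np.exp(phi / 2) - 1.0)**2 * nu(x, *p1)
    return float(0.5 * ((y[1:] + y[:-1]) * np.diff(x)).sum())

p1 = (1.0, 3.0, 0.5)

# same c and alpha, different lambda: stable as the cutoff shrinks
ok_coarse = lhs_663(p1, (1.0, 5.0, 0.5), 1e-4)
ok_fine = lhs_663(p1, (1.0, 5.0, 0.5), 1e-8)

# different alpha: integrand ~ 1/x^{1+alpha} near 0, so the truncated
# integral grows without bound as the cutoff shrinks
bad_coarse = lhs_663(p1, (1.0, 5.0, 0.8), 1e-4)
bad_fine = lhs_663(p1, (1.0, 5.0, 0.8), 1e-8)
print(ok_coarse, ok_fine, bad_coarse, bad_fine)
```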
6.7 Arbitrage Pricing and Martingale Measures

Theorem 6.7.1 (Fundamental theorem of asset pricing) The market model defined by $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ and the asset prices $S_t$ is arbitrage-free if and only if there exists a probability measure $Q \sim P$ such that the discounted prices, $\hat S_t = S_t/B_t$, are martingales with respect to $Q$.

For proofs, see Harrison and Kreps (1979), Harrison and Pliska (1981) and Delbaen and Schachermayer (1998).
Theorem 6.7.2 (Absence of arbitrage in exp-Lévy models) Let $(X, P)$ be a Lévy process. If the paths of $X_t$ are neither a.s. increasing nor a.s. decreasing, then the exp-Lévy model $S_t = e^{rt+X_t}$ is arbitrage-free: there exists $Q \sim P$ such that $(e^{-rt}S_t)_{t\ge0}$ is a $Q$-martingale.

Proof: Let $X$ have characteristic triplet $(\sigma^2, \nu, \gamma)$. If $\sigma > 0$, an equivalent martingale measure can be obtained by changing the drift, as in the Black–Scholes model, without changing the Lévy measure. Hence we only need to consider the case $\sigma = 0$. Furthermore, by applying a measure transformation with the function $\phi(x)$ in Proposition 9.8 given by $\phi(x) = -x^2$, we obtain an equivalent probability under which $X$ is a Lévy process with zero Gaussian component, the same drift coefficient and Lévy measure $\tilde\nu(dx) = e^{-x^2}\nu(dx)$, which has exponential moments of all orders. Therefore we can suppose that $\nu$ has exponential moments of all orders to begin with.

We will now show that an equivalent martingale measure can be obtained from $P$ by an Esscher transform under the conditions of the theorem. After such a transform with parameter $\theta$ the characteristic triplet of $X$ becomes $(0, \tilde\nu, \tilde\gamma)$ with $\tilde\nu(dx) = e^{\theta x}\nu(dx)$ and $\tilde\gamma = \gamma + \int_{-1}^1 x(e^{\theta x} - 1)\nu(dx)$. For $\exp(X)$ to be a martingale under the new probability, the new triplet must satisfy

$$ \tilde\gamma + \int \left( e^x - 1 - x 1_{|x|\le 1} \right)\tilde\nu(dx) = 0. $$

To prove the theorem we must now show that the equation $\gamma = f(\theta)$ has a solution under the conditions of the theorem, where

$$ f(\theta) = -\int \left( e^x - 1 - x1_{|x|\le1} \right) e^{\theta x}\nu(dx) - \int_{-1}^{1} x\left( e^{\theta x}-1 \right)\nu(dx). $$

By dominated convergence we have that $f$ is continuous and that $f'(\theta) = -\int x(e^x - 1)e^{\theta x}\nu(dx) \le 0$; therefore $f(\theta)$ is a decreasing function. Moreover, if $\nu((0,\infty)) > 0$ and $\nu((-\infty,0)) > 0$, then $f(\theta) \to \mp\infty$ as $\theta \to \pm\infty$, so the equation $\gamma = f(\theta)$ has a solution $\theta^*$, and the Esscher transform with parameter $\theta^*$ yields an equivalent martingale measure. $\square$

Recall the characteristic function of a Lévy process:

$$ E[e^{iuX_t}] = \exp\left\{ t\left( -\frac{1}{2}\sigma^2u^2 + ibu + \int_{\mathbb{R}}\left(e^{iuz} - 1 - iuz1_{|z|\le1}\right)\nu(dz) \right) \right\}. $$

Replacing $iu$ by $u$, we obtain

$$ E[e^{uX_t}] = \exp\left\{ t\left( \frac{1}{2}\sigma^2u^2 + bu + \int_{\mathbb{R}}\left(e^{uz} - 1 - uz1_{|z|\le1}\right)\nu(dz) \right) \right\}. $$
If $\exp(X_t)$ is a martingale, then there must be

$$ \frac{1}{2}\sigma^2 + b + \int_{\mathbb{R}}\left(e^{z} - 1 - z1_{|z|\le1}\right)\nu(dz) = 0. $$

By the change of measure theorem, under the new measure the condition becomes

$$ \frac{1}{2}\sigma^2 + b + \eta\sigma^2 + \int_{-1}^{1} z\left(e^{\phi(z)} - 1\right)\nu(dz) + \int_{\mathbb{R}}\left(e^z - 1 - z1_{|z|\le1}\right)e^{\phi(z)}\nu(dz) = 0. $$

Note that $\eta$ and $\phi(z)$ give us plenty of freedom to choose the martingale measure. The question is: what are the criteria for choosing the martingale measure?

We can restrict ourselves to the Esscher transform for the change of Lévy measure. Let $X_t$ be a Lévy process with triplet $(b, \sigma, \nu)$, and let $\theta$ be a real number such that

$$ \int_{|z|\ge1} e^{\theta z}\nu(dz) < \infty. $$

The measure change given by $\eta = 0$ and $\phi(z) = \theta z$ is equivalent to

$$ \frac{dQ}{dP}\bigg|_{\mathcal{F}_t} = \frac{e^{\theta X_t}}{E[e^{\theta X_t}]} = \exp\left(\theta X_t - \kappa(\theta)t\right), $$

where $\kappa(\theta) = \ln E[\exp(\theta X_1)]$. After the change,

$$ \tilde\nu(dz) = e^{\theta z}\nu(dz), \qquad \tilde b = b + \int_{|z|<1} z\left(e^{\theta z}-1\right)\nu(dz), $$

corresponding to choosing either $\sigma = 0$ or $\eta = 0$ in Theorem 5.11.1.
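The Esscher density $\exp(\theta X_t - \kappa(\theta)t)$ can be sanity-checked by Monte Carlo: it should have unit mean, and reweighting by it should shift the mean of $X_t$ to $\kappa'(\theta)t$, the mean under the tilted measure. A sketch for a Merton-type process; all parameter values are hypothetical.

```python
import numpy as np

# Esscher transform check for X_t = b t + sigma W_t + sum_{i<=N_t} Y_i,
# with Y ~ N(m, d^2) and N ~ Poisson(lam t).
rng = np.random.default_rng(1)
b, sigma, lam, m, d, theta, t = 0.05, 0.2, 1.0, -0.1, 0.15, 0.8, 1.0
n = 300_000

N = rng.poisson(lam * t, n)
jump_sum = m * N + d * np.sqrt(N) * rng.standard_normal(n)
X = b * t + sigma * np.sqrt(t) * rng.standard_normal(n) + jump_sum

# kappa(theta) = ln E[exp(theta X_1)]
kappa = (b * theta + 0.5 * sigma**2 * theta**2
         + lam * (np.exp(m * theta + 0.5 * d**2 * theta**2) - 1.0))
w = np.exp(theta * X - kappa * t)      # dQ/dP on F_t

mean_Q = (w * X).mean()                # E_Q[X_t] by reweighting
# kappa'(theta) * t: the mean of X_t under the tilted measure
mean_theory = (b + sigma**2 * theta
               + lam * (m + d**2 * theta)
               * np.exp(m * theta + 0.5 * d**2 * theta**2)) * t
print(w.mean(), mean_Q, mean_theory)
```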
6.8 Relative Entropy for Lévy Processes

The notion of relative entropy, or Kullback–Leibler distance, is defined by

$$ \varepsilon(Q, P) = E^Q\left[\ln\frac{dQ}{dP}\right] = E^P\left[\frac{dQ}{dP}\ln\frac{dQ}{dP}\right]. $$

It is a convex function of $Q$. By Jensen's inequality, $\varepsilon(Q, P) \ge 0$, with $\varepsilon(Q, P) = 0$ iff $dQ/dP = 1$ a.s.

Theorem 6.8.1 (Relative entropy of Lévy processes) Let $P$ and $Q$ be equivalent measures on $(\Omega, \mathcal{F})$ generated by exponential-Lévy models with Lévy triplets $(b_P, \sigma, \nu_P)$ and $(b_Q, \sigma, \nu_Q)$. Assume $\sigma > 0$. Then

$$ \varepsilon(Q, P) = \frac{T}{2\sigma^2}\left( b_Q - b_P - \int_{-1}^{1} x(\nu_Q-\nu_P)(dx) \right)^2 + T\int\left( \frac{d\nu_Q}{d\nu_P}\ln\frac{d\nu_Q}{d\nu_P} + 1 - \frac{d\nu_Q}{d\nu_P} \right)\nu_P(dx). \qquad (6.8.4) $$

If both $P$ and $Q$ are martingale measures for the exponential-Lévy model, then the relative entropy reduces to

$$ \varepsilon(Q\,\|\,P) = \frac{T}{2\sigma^2}\left( \int (e^x-1)(\nu_Q-\nu_P)(dx) \right)^2 + T\int\left( \frac{d\nu_Q}{d\nu_P}\ln\frac{d\nu_Q}{d\nu_P} + 1 - \frac{d\nu_Q}{d\nu_P} \right)\nu_P(dx). $$
Proof: Let $(X_t)$ be a Lévy process and $S_t = \exp(X_t)$. From the bijectivity of the exponential it is clear that the histories generated by $(X_t)$ and $(S_t)$ coincide. We can therefore equivalently compute the relative entropy of the log-price processes (which are Lévy processes). To compute the relative entropy of two Lévy processes we use expression (9.22) for the Radon–Nikodym derivative:

$$ \varepsilon = \int \frac{dQ}{dP}\ln\frac{dQ}{dP}\,dP = E^P\left[U_T e^{U_T}\right], $$

where $(U_t)$ is a Lévy process with characteristic triplet given by formulae (9.23)–(9.25). Let $\Phi_t(z)$ denote its characteristic function and $\psi(z)$ its characteristic exponent, that is, $\Phi_t(z) = E^P[e^{izU_t}] = e^{t\psi(z)}$. Then, using $E^P[e^{U_T}] = 1$, we can write

$$ E^P[U_T e^{U_T}] = -i\frac{d\Phi_T}{dz}(-i) = -iT e^{T\psi(-i)}\psi'(-i) = -iT\psi'(-i)E^P[e^{U_T}] = -iT\psi'(-i). $$

From the Lévy–Khintchine formula we know that

$$ \psi'(z) = -\sigma_U^2 z + i\gamma_U + \int \left( ixe^{izx} - ix1_{|x|\le1} \right)\nu_U(dx). $$

We can now compute the relative entropy as follows:

$$ \varepsilon = \sigma_U^2 T + \gamma_U T + T\int \left( xe^x - x1_{|x|\le1} \right)\nu_U(dx) = \frac{\sigma^2\eta^2 T}{2} + T\int \left( ye^y - e^y + 1 \right)(\nu_P\circ\phi^{-1})(dy) = \frac{\sigma^2\eta^2 T}{2} + T\int\left( \frac{d\nu_Q}{d\nu_P}\ln\frac{d\nu_Q}{d\nu_P} + 1 - \frac{d\nu_Q}{d\nu_P} \right)\nu_P(dx), $$

where $\eta$ is chosen such that

$$ b_Q - b_P - \int_{-1}^{1} x(\nu_Q - \nu_P)(dx) = \eta\sigma^2. $$

Since we have assumed $\sigma > 0$, we can write

$$ \frac{1}{2}\sigma^2\eta^2 = \frac{1}{2\sigma^2}\left( b_Q - b_P - \int_{-1}^1 x(\nu_Q-\nu_P)(dx) \right)^2, $$

which leads to (6.8.4). If $P$ and $Q$ are martingale measures, we can express the drifts using $\sigma$ and $\nu$, so that

$$ \frac{\sigma^2\eta^2}{2} = \frac{1}{2\sigma^2}\left( \int (e^x - 1)(\nu_Q-\nu_P)(dx) \right)^2. \qquad \square $$
Example: relative entropy for tempered stable processes. Let the Lévy densities be

$$ \nu_Q(x) = \frac{c e^{-(\lambda_1-1)x}}{x^{1+\alpha}}1_{x\ge0} + \frac{c e^{-(\lambda_1+1)|x|}}{|x|^{1+\alpha}}1_{x<0}, \qquad \nu_P(x) = \frac{c e^{-(\lambda_2-1)x}}{x^{1+\alpha}}1_{x\ge0} + \frac{c e^{-(\lambda_2+1)|x|}}{|x|^{1+\alpha}}1_{x<0}, $$

with $\lambda_1 > 1$ and $\lambda_2 > 1$ imposed by the no-arbitrage property. Then the first term in (6.8.4) involves

$$ \int_0^\infty x(\nu_Q - \nu_P)(dx) = c\int_0^\infty \frac{e^{-(\lambda_1-1)x} - e^{-(\lambda_2-1)x}}{x^{\alpha}}\,dx < \infty. $$

For the second term, the contribution of the negative half-line (writing $x$ for $|x|$) is

$$ c\int_0^\infty \frac{e^{-(\lambda_2+1)x} - e^{-(\lambda_1+1)x} - x(\lambda_1-\lambda_2)e^{-(\lambda_1+1)x}}{x^{\alpha+1}}\,dx < \infty, $$

since the numerator is $O(x^2)$ near zero; the positive half-line is treated in the same way. Hence the relative entropy is finite.
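The finiteness of both integrals can be confirmed numerically; the first even has a closed form through the Gamma function. A sketch with hypothetical values of $c$, $\lambda_1$, $\lambda_2$, $\alpha$ (with $\lambda_i > 1$):

```python
import numpy as np
from math import gamma

# Numerical evaluation of the two integrals in the tempered stable example.
c, lam1, lam2, alpha = 1.0, 3.0, 4.0, 0.7

x = np.logspace(-10, 2, 6000)
dx = np.diff(x)
trap = lambda y: float(0.5 * ((y[1:] + y[:-1]) * dx).sum())

# first term: c * int (e^{-(lam1-1)x} - e^{-(lam2-1)x}) x^{-alpha} dx
first = trap(c * (np.exp(-(lam1 - 1) * x) - np.exp(-(lam2 - 1) * x)) / x**alpha)
# closed form: c * Gamma(1-alpha) * ((lam1-1)^{alpha-1} - (lam2-1)^{alpha-1})
first_exact = c * gamma(1 - alpha) * ((lam1 - 1)**(alpha - 1)
                                      - (lam2 - 1)**(alpha - 1))

# second term: integrand is O(x^{1-alpha}) near zero, hence integrable
second = trap(c * (np.exp(-(lam2 + 1) * x) - np.exp(-(lam1 + 1) * x)
                   - x * (lam1 - lam2) * np.exp(-(lam1 + 1) * x)) / x**(alpha + 1))
print(first, first_exact, second)
```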
6.9 Minimal Entropy Martingale Measure

Given a stochastic model $(S_t)_{t\ge0}$ under a measure $P$, we want to find a martingale measure $Q^*$ for $(S_t)$ such that

$$ \varepsilon(Q^*|P) = \inf_{Q\in\mathcal{M}^a(S)} \varepsilon(Q|P), $$

where $\mathcal{M}^a(S)$ denotes the set of martingale measures for $(S_t)_{t\ge0}$. Financially, this means finding the martingale measure that adds the least amount of information to the prior model. Finding such a measure can also be achieved by utility indifference pricing with an exponential utility function, letting the risk aversion tend to zero.

For an exponential Lévy model, the minimal entropy martingale measure, if it exists, is also given by an exponential Lévy model. The proof of the next theorem is given by Miyahara and Fujiwara (2003).

Theorem 6.9.1 (Minimal entropy measure in exponential Lévy models) Let $S_t = S_0\exp(rt + X_t)$, where $(X_t)_{t\ge0}$ is a Lévy process with Lévy triplet $(b, \sigma, \nu)$. If there is a solution $\beta\in\mathbb{R}$ to the equation

$$ b + \left(\beta + \frac{1}{2}\right)\sigma^2 + \int_{-1}^{+1} \nu(dx)\left( (e^x-1)e^{\beta(e^x-1)} - x \right) + \int_{|x|>1} (e^x-1)e^{\beta(e^x-1)}\,\nu(dx) = 0, \qquad (6.9.5) $$

then under the minimal entropy martingale measure, $S_t = S_0\exp(rt + X^*_t)$ is again an exp-Lévy process, where $(X^*_t)_{t\ge0}$ is a Lévy process with Lévy triplet $(b^*, \sigma, \nu^*)$ given by

$$ b^* = b + \beta\sigma^2 + \int_{|x|<1} \nu(dx)\left( e^{\beta(e^x-1)} - 1 \right)x, \qquad \nu^*(dx) = e^{\beta(e^x-1)}\,\nu(dx). $$

Example 1: a jump-diffusion with a single jump size,

$$ S_t = S_0\exp\left[ bt + \sigma W_t + aN_t \right], $$

where $N_t$ is a Poisson process with intensity $\lambda$. The risk-neutral jump intensity is

$$ \lambda^* = \lambda\exp\left[\beta(e^a-1)\right] \approx 1.15. $$
Example 2: the Merton model. Under Merton's model,

$$ S_t = S_0\exp\left( bt + \sigma W_t + \sum_{i=1}^{N_t} Y_i \right), $$

where $N$ is a Poisson process with intensity $\lambda = 1$, $b = 10\%$, $\sigma = 30\%$, and the jump sizes $Y_i \sim N(a, \delta^2)$ with $a = -10\%$ and $\delta = 0.1$. Equation (6.9.5) can be solved numerically to obtain $\beta \approx -2.73$. The minimal entropy martingale model is again a jump-diffusion model with compound Poisson jumps,

$$ S_t = S_0\exp\left( b^*t + \sigma W_t + \sum_{i=1}^{N^*_t} Y^*_i \right), $$

where the risk-neutral Lévy measure is

$$ \nu^*(x) = \frac{\lambda}{\sqrt{2\pi\delta^2}}\exp\left( -\frac{(x-a)^2}{2\delta^2} + \beta(e^x-1) \right), $$

with intensity

$$ \lambda^* = \int_{\mathbb{R}} \exp\left[\beta(e^x-1)\right]\nu(dx) = \int_{\mathbb{R}} \frac{\lambda}{\sqrt{2\pi\delta^2}}\exp\left( -\frac{(x-a)^2}{2\delta^2} + \beta(e^x-1) \right) dx \approx 0.4. $$

The jump size densities before and after the change of measure are $\nu(x)/\lambda$ and $\nu^*(x)/\lambda^*$, respectively. The initial and minimal entropy risk-neutral measures are depicted in Figure 10.1.

Remarks: In many models of interest, the parameter $\beta$ is found to be negative, so the right tail of the minimal entropy martingale process is strongly damped and contributes little value to the option prices. This is consistent with the downward-sloping pattern of volatility smiles.
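Equation (6.9.5) for the Merton model can be solved by a standard root finder, since the left-hand side is increasing in $\beta$. A sketch with the parameters of Example 2; note that the normalization of the drift $b$ is a convention, so the root produced here need not match the value quoted above.

```python
import numpy as np
from scipy.optimize import brentq

# Numerically solving the minimal entropy equation (6.9.5), Merton model.
lam, b, sigma, a, delta = 1.0, 0.10, 0.30, -0.10, 0.1

x = np.linspace(-1.5, 1.5, 30001)       # jump sizes are well inside this range
dx = x[1] - x[0]
nu = lam * np.exp(-(x - a)**2 / (2 * delta**2)) / (delta * np.sqrt(2 * np.pi))

def F(beta):
    g = (np.exp(x) - 1.0) * np.exp(beta * (np.exp(x) - 1.0))
    integrand = np.where(np.abs(x) <= 1.0, g - x, g) * nu
    return b + (beta + 0.5) * sigma**2 + integrand.sum() * dx

beta = brentq(F, -50.0, 10.0)           # F is increasing, so the root is unique
print(beta, F(beta))
```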
6.10 Partial Integro-Differential Equations for Options

Consider a market where the risk-neutral dynamics of the asset is given by

$$ S_t = S_0\exp(rt + X_t), $$

where $X_t$ is a Lévy process with characteristic triplet $(b, \sigma, \nu)$ under some risk-neutral measure $Q$ such that $\exp(X_t)$ is a martingale.

The value of a European option with terminal payoff $H(S_T)$ is defined as the discounted conditional expectation of its terminal payoff under the risk-neutral measure $Q$:

$$ f(t, S_t) = E^Q_t\left[ e^{-r(T-t)}H(S_T) \right]. $$

Define

$$ \hat S_t = e^{-rt}S_t = S_0\exp X_t, $$
$$ \hat f(t, X_t) = e^{-rt}f(t, S_t) = E^Q_t\left[ e^{-rT}H(S_T) \right] = e^{-rT}E^Q_t\left[ H\!\left(S_t e^{X_{T-t}+r(T-t)}\right) \right]. $$
By the martingale-drift decomposition theorem, we know

$$ Y_t = \hat f(t, X_t) = M_t + V_t, $$

with

$$ V_t = \int_0^t \left( \frac{\partial\hat f}{\partial s} + \frac{\sigma^2}{2}\frac{\partial^2\hat f}{\partial x^2} + b\frac{\partial\hat f}{\partial x} \right) ds + \int_0^t ds \int_{\mathbb{R}} \nu(dy)\left( \hat f(s, x+y) - \hat f(s,x) - y\frac{\partial\hat f}{\partial x}(s,x)1_{|y|\le1} \right). $$

Due to the martingale property of $e^{X_t}$, we know there must be

$$ b = -\frac{\sigma^2}{2} - \int_{\mathbb{R}} \nu(dy)\left( e^y - 1 - y1_{|y|\le1} \right). $$

Plugging this into the expression for $V_t$, we obtain

$$ V_t = \int_0^t \left( \frac{\partial\hat f}{\partial s} + \frac{\sigma^2}{2}\left( \frac{\partial^2\hat f}{\partial x^2} - \frac{\partial\hat f}{\partial x} \right) \right) ds + \int_0^t ds \int_{\mathbb{R}} \nu(dy)\left( \hat f(s, x+y) - \hat f(s,x) - (e^y-1)\frac{\partial\hat f}{\partial x}(s,x) \right). $$

By the martingale property of $Y_t$, we must have $V_t = 0$ for all $t$, yielding

Theorem 6.10.1 (PIDE in log price) Under some technical conditions, there is a solution $\hat f \in C^{1,2}([0,T]\times\mathbb{R})$ to the PIDE

$$ \frac{\partial\hat f}{\partial t} + \frac{\sigma^2}{2}\left( \frac{\partial^2\hat f}{\partial x^2} - \frac{\partial\hat f}{\partial x} \right) + \int_{\mathbb{R}} \nu(dy)\left( \hat f(t, x+y) - \hat f(t,x) - (e^y-1)\frac{\partial\hat f}{\partial x}(t,x) \right) = 0, \qquad (t,x)\in[0,T)\times\mathbb{R}, $$

with terminal condition

$$ \hat f(T, x) = e^{-rT}H\!\left(S_0 e^{x+rT}\right). $$
If we solve the PIDE with this terminal condition, we only obtain the value of one option. By a proper scaling, we can obtain option values for many options at once. Suppose we want to value call options across strikes; we may normalize the payoff function as follows:

$$ z = x + rT + \ln\frac{S_0}{K} = z(x), \qquad u(t, z) = e^{rT}\hat f(t, x)/K. $$

Then

$$ f(0, S_0) = \hat f(0, 0) = Ke^{-rT}u(0, z(0)) = Ke^{-rT}u\!\left(0, \ln\frac{S_0 e^{rT}}{K}\right), $$

and $u(t, z)$ satisfies

$$ \frac{\partial u}{\partial t} + \frac{\sigma^2}{2}\left( \frac{\partial^2 u}{\partial z^2} - \frac{\partial u}{\partial z} \right) + \int_{\mathbb{R}} \nu(dy)\left( u(t, z+y) - u(t,z) - (e^y-1)\frac{\partial u}{\partial z}(t,z) \right) = 0, $$

with terminal condition

$$ u(T, z) = (e^z - 1)^+ $$

for call options.
We finish the section with the following remarks.

1. Let the Lévy triplet of $X_t$ be $(b, \sigma, \nu)$; then the infinitesimal generator of $X_t$ is

$$ \mathcal{L}^X g(x) = b\frac{\partial g}{\partial x} + \frac{\sigma^2}{2}\frac{\partial^2 g}{\partial x^2} + \int_{\mathbb{R}} \nu(dy)\left( g(x+y) - g(x) - y1_{|y|\le1}\frac{\partial g}{\partial x} \right). $$

When $\exp X_t$ is a martingale,

$$ b = -\frac{\sigma^2}{2} - \int_{\mathbb{R}} \left( e^y - 1 - y1_{|y|\le1} \right)\nu(dy). $$

The equation for the option price is

$$ \begin{cases} \dfrac{\partial\hat f}{\partial t} + \mathcal{L}^X\hat f(t,x) = 0,\\[4pt] \hat f(T, x) = e^{-rT}H\!\left(S_0 e^{x+rT}\right), \end{cases} $$

which reproduces our result.

2. Let $S = e^{x+rt}$, $C(t, S) = e^{rt}\hat f(t, x)$. In terms of $S$, we have the PIDE

$$ \begin{cases} \dfrac{\partial C}{\partial t} + (r - E[Y])S\dfrac{\partial C}{\partial S} + \dfrac{1}{2}\sigma^2S^2\dfrac{\partial^2 C}{\partial S^2} - rC + E\left[ C(t, S(1+Y)) - C(t,S) \right] = 0,\\[4pt] C(T, S) = H(S), \end{cases} $$

where

$$ E[Y] = \int_{\mathbb{R}} \nu(dy)\left[ e^y - 1 \right], \qquad E\left[ C(t, S(1+Y)) - C(t,S) \right] = \int_{\mathbb{R}} \nu(dy)\left[ C(t, Se^y) - C(t,S) \right]. $$
6.11 Finite Difference Methods

In finite difference methods for PIDEs, we approximate derivatives by difference quotients and integrals by Riemann sums. The approximation consists of four steps.

1. Truncation. Reduce the unbounded domain to a bounded one, for both the differential and the integral terms. In doing so we introduce truncation errors and need artificial boundary conditions.

2. Approximation of small jumps (for infinite activity). When the Lévy measure diverges at zero, the contribution of this singularity to the integral can be approximated by a second-derivative term.

3. Discretization in space. The spatial domain is replaced by a discrete grid and the integro-differential operator $\mathcal{L}^X$ is replaced by a matrix.

4. Discretization in time. Adoption of time-stepping schemes.

6.11.1 Truncation to a Bounded Domain

For a general European option, we truncate the domain by

$$ \begin{cases} 0 = \dfrac{\partial u_A}{\partial t} + \mathcal{L}^X u_A, & [0, T)\times(-A, A),\\[4pt] u_A(T, x) = h(x). \end{cases} $$

Beyond the domain the time effect is eliminated and the option value behaves asymptotically like the payoff function, yielding the boundary conditions

$$ u_A(t, x) = h(x), \qquad x\notin(-A, A). \qquad (6.11.6) $$

The integral term in $\mathcal{L}^X$ has to be truncated to some bounded interval $(B_l, B_u)$ as well. After truncation, $x$ takes values in $(-A + B_l, A + B_u)$; beyond $(-A, A)$ we use the boundary conditions (6.11.6).
6.11.2 Spatial Discretization

The next step is to replace the domain $[-A, A]$ by a discrete grid

$$ x_i = -A + i\Delta x, \qquad i = 0, 1, \ldots, N, \qquad \Delta x = 2A/N. $$

Consider first the finite activity case, where $\lambda = \nu(\mathbb{R}) < \infty$. Then the PIDE can be written as

$$ \mathcal{L}u = \frac{\sigma^2}{2}\frac{\partial^2 u}{\partial x^2} - \left( \frac{\sigma^2}{2} + \alpha \right)\frac{\partial u}{\partial x} + \int_{B_l}^{B_u} \nu(dy)\,u(t, x+y) - \lambda u(t,x), $$

where

$$ \alpha = \int_{B_l}^{B_u} (e^y - 1)\,\nu(dy). $$

To approximate the integral term we consider the trapezoidal quadrature rule. Let $K_l$ and $K_u$ be the indices such that $[B_l, B_u] \subset \left[ \left(K_l - \frac{1}{2}\right)\Delta x, \left(K_u + \frac{1}{2}\right)\Delta x \right]$. Then

$$ \int_{B_u}^{B_l} \nu(dy)\,u(t, x_i + y) \approx \sum_{j=K_l}^{K_u} \nu_j u_{i+j}, \qquad \nu_j = \int_{(j-1/2)\Delta x}^{(j+1/2)\Delta x} \nu(dy). $$

Accordingly, $\lambda$ and $\alpha$ are approximated by

$$ \hat\lambda = \sum_{j=K_l}^{K_u} \nu_j, \qquad \hat\alpha = \sum_{j=K_l}^{K_u} (e^{y_j} - 1)\nu_j, \qquad y_j = j\Delta x. $$

Note that the actual grid extends from $i = K_l$ to $i = N + K_u$, with $u(t, x_i) = h(x_i)$ for $i\notin[0, N]$.
The space derivatives can be approximated by (upwind scheme)

$$ \left( \frac{\partial u}{\partial x} \right)_i \approx \begin{cases} \dfrac{u_{i+1} - u_i}{\Delta x}, & \text{if } \dfrac{\sigma^2}{2} + \hat\alpha < 0,\\[6pt] \dfrac{u_i - u_{i-1}}{\Delta x}, & \text{if } \dfrac{\sigma^2}{2} + \hat\alpha \ge 0, \end{cases} \qquad\qquad \left( \frac{\partial^2 u}{\partial x^2} \right)_i \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{(\Delta x)^2}. $$
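The cell weights $\nu_j$, $\hat\lambda$ and $\hat\alpha$ are easy to build in practice. A sketch for a Merton jump measure $\nu(dy) = \lambda\, N(a, \delta^2)(dy)$, whose Gaussian cell integrals come from the normal CDF; the values of $\lambda$, $a$, $\delta$ and the grid spacing are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Quadrature weights nu_j over cells ((j-1/2)dx, (j+1/2)dx).
lam, a, delta = 0.5, -0.1, 0.2
dx, Kl, Ku = 0.05, -60, 60

j = np.arange(Kl, Ku + 1)
nu_j = lam * (norm.cdf((j + 0.5) * dx, a, delta)
              - norm.cdf((j - 0.5) * dx, a, delta))

lam_hat = nu_j.sum()                               # approximates lam = nu(R)
alpha_hat = ((np.exp(j * dx) - 1.0) * nu_j).sum()  # approximates alpha
alpha_exact = lam * (np.exp(a + delta**2 / 2) - 1.0)
print(lam_hat, alpha_hat, alpha_exact)
```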
6.11.3 Time Discretization

Let $D$ and $J$ denote the matrices representing the discretization of the differential and integral operators. We have the following usual schemes for time stepping (in reversed time $\tau = T - t$).

1. Explicit scheme:

$$ u^{n+1} - u^n = \Delta t\left( Du^n + Ju^n \right). $$

Stability condition:

$$ \Delta t \le \inf\left\{ \frac{1}{\hat\lambda},\ \frac{\Delta x^2}{\sigma^2} \right\}. $$

2. Implicit scheme:

$$ u^{n+1} - u^n = \Delta t(D + J)u^{n+1}, \quad\text{or}\quad \left[ I - \Delta t(D + J) \right]u^{n+1} = u^n, $$

which is a very stable yet expensive scheme.

3. Explicit-implicit scheme:

$$ u^{n+1} - u^n = \Delta t\left( Du^{n+1} + Ju^n \right), \quad\text{or}\quad (I - \Delta t D)u^{n+1} = (I + \Delta t J)u^n. $$
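The explicit-implicit scheme, combined with the quadrature weights and the payoff boundary condition of the previous subsections, can be sketched as follows for the normalized call PIDE in $z$ under a Merton jump measure. All market and grid parameters here are hypothetical choices.

```python
import numpy as np
from scipy.stats import norm

# (I - dt*D) u^{n+1} = (I + dt*J) u^n for the normalized call PIDE.
sigma, lam, a, delta, T = 0.2, 0.3, -0.1, 0.15, 0.5
A, N, M = 3.0, 300, 100
dz, dt = 2 * A / N, T / M

# jump quadrature weights on (-1, 1); Gaussian tails beyond are negligible
Kj = int(round(1.0 / dz))
j = np.arange(-Kj, Kj + 1)
nu_j = lam * (norm.cdf((j + 0.5) * dz, a, delta)
              - norm.cdf((j - 0.5) * dz, a, delta))
lam_hat = nu_j.sum()
alpha_hat = ((np.exp(j * dz) - 1.0) * nu_j).sum()
assert dt < 1.0 / lam_hat            # stability of the explicit jump part

# grids: solution grid plus the extension used by the jump integral
z_ext = -A + dz * np.arange(-Kj, N + Kj + 1)
h_ext = np.maximum(np.exp(z_ext) - 1.0, 0.0)   # payoff = boundary values
h = h_ext[Kj:Kj + N + 1]

# implicit part D: central second difference, upwind first difference
c = sigma**2 / 2 + alpha_hat                   # u_tau = ... - c u_z + ...
D = np.zeros((N + 1, N + 1))
for i in range(1, N):
    D[i, i - 1] = sigma**2 / (2 * dz**2) + (c / dz if c >= 0 else 0.0)
    D[i, i + 1] = sigma**2 / (2 * dz**2) + (-c / dz if c < 0 else 0.0)
    D[i, i] = -sigma**2 / dz**2 - abs(c) / dz
lhs = np.eye(N + 1) - dt * D                   # rows 0, N pin u to the payoff

u = h.copy()
for n in range(M):                             # march in tau = T - t
    u_ext = h_ext.copy()
    u_ext[Kj:Kj + N + 1] = u
    conv = np.array([nu_j @ u_ext[i:i + 2 * Kj + 1] for i in range(N + 1)])
    rhs = u + dt * (conv - lam_hat * u)        # explicit jump part J
    rhs[0], rhs[-1] = h[0], h[-1]
    u = np.linalg.solve(lhs, rhs)
print(u[N // 2])                               # normalized at-the-money value
```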
6.11.4 Treatment of Small Jumps with Infinite Activity

Let $(X_t)_{t\ge0}$ be an infinite activity Lévy process with Lévy triplet $(\gamma, 0, \nu)$. Splitting the jump integral at $|x| = \varepsilon$,

$$ X_t = \gamma t + \int_{[0,t]\times\mathbb{R}} x\left( J(ds, dx) - 1_{|x|\le1}\nu(dx)ds \right) = X^{(\varepsilon)}_t + R^{(\varepsilon)}_t, \qquad R^{(\varepsilon)}_t = \int_{[0,t],\,|x|\le\varepsilon} x\left( J(ds, dx) - \nu(dx)ds \right), $$

where

$$ E[R^{(\varepsilon)}_t] = 0, \qquad \operatorname{Var} R^{(\varepsilon)}_t = t\int_{|x|\le\varepsilon} x^2\nu(dx) \triangleq t\sigma^2(\varepsilon). $$

Theorem 6.11.1 (Asmussen and Rosiński (2001)) Suppose

$$ \frac{\sigma(\varepsilon)}{\varepsilon} \to \infty \quad\text{as } \varepsilon\to0. \qquad (*) $$

Then $Y^{(\varepsilon)}_t = R^{(\varepsilon)}_t/\sigma(\varepsilon)$ converges to a Brownian motion $W_t$.

Since the jumps of $R^{(\varepsilon)}_t$ are bounded by $\varepsilon$, condition (*) means that the jumps of $Y^{(\varepsilon)}_t$ are bounded by a number, $\varepsilon/\sigma(\varepsilon)$, that converges to zero, so the limiting process has no jumps. Since

$$ E[Y^{(\varepsilon)}_t] = 0, \qquad \operatorname{Var}[Y^{(\varepsilon)}_t] = t, $$

the limit $W_t = \lim_{\varepsilon\to0} Y^{(\varepsilon)}_t$ exists and is a continuous Lévy process: a Brownian motion.

Based on the last theorem, we have the following approximation of a Lévy triplet:

$$ X_t \approx X^{(\varepsilon)}_t: \qquad (\gamma, \sigma, \nu) \to \left( \gamma(\varepsilon),\ \sqrt{\sigma^2 + \sigma^2(\varepsilon)},\ 1_{|x|>\varepsilon}\nu \right), $$

where

$$ \gamma(\varepsilon) = -\frac{\sigma^2 + \sigma^2(\varepsilon)}{2} - \int_{|x|>\varepsilon} \left( e^x - 1 - x1_{|x|\le1} \right)\nu(dx). $$

We approximate $u(t, x)$ by $u^\varepsilon$, the solution (in reversed time $\tau = T - t$) of

$$ \frac{\partial u^\varepsilon}{\partial\tau} = \mathcal{L}^\varepsilon u^\varepsilon, \qquad u^\varepsilon(0, x) = h(x), $$

where

$$ \mathcal{L}^\varepsilon f = \frac{\sigma^2 + \sigma^2(\varepsilon)}{2}\frac{\partial^2 f}{\partial x^2} - \left( \frac{\sigma^2 + \sigma^2(\varepsilon)}{2} + \alpha(\varepsilon) \right)\frac{\partial f}{\partial x} + \int_{|y|>\varepsilon} \nu(dy)f(x+y) - \lambda(\varepsilon)f(x), $$

with

$$ \alpha(\varepsilon) = \int_{|y|>\varepsilon} (e^y - 1)\,\nu(dy), \qquad \lambda(\varepsilon) = \int_{|y|>\varepsilon} \nu(dy). $$

Note that for the stability of the explicit-implicit scheme there should be

$$ \Delta t < 1/\lambda(\varepsilon), $$

which prohibits us from taking $\varepsilon$ very small.
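The quantity $\sigma(\varepsilon)$ and the Asmussen–Rosiński condition (*) are easy to probe numerically. A sketch for a symmetric tempered stable density $c\,e^{-\lambda|x|}/|x|^{1+\alpha}$, where the ratio $\sigma(\varepsilon)/\varepsilon$ should grow as $\varepsilon$ shrinks whenever $\alpha > 0$; the values of $c$, $\lambda$ and $\alpha$ are hypothetical.

```python
import numpy as np

# sigma(eps) = sqrt( int_{|x|<eps} x^2 nu(dx) ) by a Riemann sum.
c, lam = 1.0, 5.0

def sigma_eps(eps, alpha, n=200_000):
    x = np.linspace(eps / n, eps, n)      # positive half; density symmetric
    nu = c * np.exp(-lam * x) / x**(1 + alpha)
    return np.sqrt(2.0 * (x**2 * nu).sum() * (x[1] - x[0]))

for alpha in (0.5, 1.5):
    ratios = [sigma_eps(eps, alpha) / eps for eps in (1e-2, 1e-3, 1e-4)]
    print(alpha, ratios)  # increasing ratios: condition (*) holds for alpha > 0
```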
6.11.5 Convergence of the Finite Difference Methods

Theorem 6.11.2 (Consistency, stability, and monotonicity) If the time step satisfies $\Delta t < 1/\lambda(\varepsilon)$, the explicit-implicit scheme is stable and monotone, and its solution converges to the solution of the PIDE.

6.12 Quadratic Hedging

6.12.1 The Itô–Lévy Isometry

Theorem 6.12.1 (Itô–Lévy isometry) For a predictable function $\psi(s, y)$ with $E\left[ \int_0^t\int_{\mathbb{R}^d} |\psi(s, y)|^2\nu(dy)ds \right] < \infty$,

$$ E\left[ \left( \int_0^t\int_{\mathbb{R}^d} \psi(s, y)\,\tilde J(ds, dy) \right)^2 \right] = E\left[ \int_0^t\int_{\mathbb{R}^d} |\psi(s, y)|^2\,\nu(dy)ds \right]. $$
Proof: The proof consists of three steps.

1. We first establish

$$ E\left[ \left( \int_{t_1}^{t_2}\int_{A\setminus\{|x|\le\varepsilon\}} \left[ J(ds, dy) - \nu(dy)ds \right] \right)^2 \right] = \int_{t_1}^{t_2}\int_{A\setminus\{|x|\le\varepsilon\}} \nu(dy)ds = (t_2 - t_1)\,\nu\!\left(A\setminus\{|x|\le\varepsilon\}\right). $$

In fact,

$$ Y \triangleq \int_{t_1}^{t_2}\int_{A\setminus\{|x|\le\varepsilon\}} J(ds, dy) - \nu(dy)ds = N_{t_2-t_1} - (t_2-t_1)\int_{A\setminus\{|x|\le\varepsilon\}}\nu(dy), $$

where $N_{t_2-t_1}$ is a Poisson random variable with intensity $\lambda = \int_{A\setminus\{|x|\le\varepsilon\}}\nu(dy) < \infty$. It follows that

$$ E[Y^2] = E\!\left[N^2_{t_2-t_1}\right] - (t_2-t_1)^2\left( \int_{A\setminus\{|x|\le\varepsilon\}}\nu(dy) \right)^2 = \lambda^2(t_2-t_1)^2 + \lambda(t_2-t_1) - \lambda^2(t_2-t_1)^2 = (t_2-t_1)\,\nu\!\left(A\setminus\{|x|\le\varepsilon\}\right). $$
2. For a simple predictable function

$$ \psi(t, x) = \sum_{i=1}^{n}\sum_{j=1}^{m} \psi_{ij}\,1_{(T_{i-1}, T_i]}(t)\,1_{A_j}(x), $$

define

$$ X_t = \int_0^t\int_{\mathbb{R}} \psi(s, x)\left( J(ds, dx) - \nu(dx)ds \right). $$

Then

$$ E\left[ |X_t|^2 \right] = E\left[ \int_0^t\int_{\mathbb{R}} |\psi(s, x)|^2\,\nu(dx)ds \right]. $$

Let $\Delta T_i = T_i - T_{i-1}$ and $Y^j_i = \tilde J\!\left( (T_{i-1}, T_i]\times A_j \right)$. Then

$$ X_T = \sum_{i=1}^{n}\sum_{j=1}^{m} \psi_{ij}Y^j_i. $$

Since the $Y^j_i$ are independent with zero mean, there is

$$ E[X^2_T] = \operatorname{Var}\left( \int_0^T\int_{\mathbb{R}} \psi(s, y)\,\tilde J(ds, dy) \right) = \sum_{i,j} E\left[ |\psi_{ij}|^2(Y^j_i)^2 \right] = \sum_{i,j} E\left[ |\psi_{ij}|^2 \right]\Delta T_i\,\nu(A_j) = E\left[ \int_0^T\int_{\mathbb{R}} |\psi(s, y)|^2\,\nu(dy)ds \right]. $$
3. Given a square-integrable predictable function $\psi(t, y)$ such that

$$ E\left[ \int_0^T\int_{\mathbb{R}} |\psi(t, y)|^2\,\nu(dy)dt \right] < \infty, $$

there exists a sequence of simple predictable functions $\psi^{(n)}$ such that

$$ E\left[ \int_0^T\int_{\mathbb{R}} \left|\psi^{(n)}(t, y) - \psi(t, y)\right|^2\,\nu(dy)dt \right] \to 0 \quad\text{as } n\to\infty. $$

Let

$$ X^{(n)} = \int_0^T\int_{\mathbb{R}} \psi^{(n)}(t, y)\,\tilde J(dt, dy); $$

there is

$$ E\left[ \left|X^{(n)}\right|^2 \right] = E\left[ \int_0^T\int_{\mathbb{R}} \left|\psi^{(n)}(t, y)\right|^2\,\nu(dy)dt \right]. $$

Letting $n \to \infty$, we arrive at the Itô–Lévy isometry. $\square$
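The isometry can be illustrated by Monte Carlo with $\psi(s, y) = y$, for which the compensated integral is the centered sum of jumps of a compound Poisson process. A sketch; the jump measure $\nu = \lambda\, N(m, s^2)$ and all parameter values are hypothetical.

```python
import numpy as np

# Monte Carlo check of the Ito-Levy isometry with psi(s, y) = y.
rng = np.random.default_rng(7)
lam, m, s, t, n = 2.0, 0.3, 0.5, 1.0, 400_000

N = rng.poisson(lam * t, n)
# sum of the jumps: conditionally on N it is N(N*m, N*s^2)
jump_sum = m * N + s * np.sqrt(N) * rng.standard_normal(n)
compensated = jump_sum - lam * t * m   # int_0^t int y (J - nu dy ds)

lhs = (compensated**2).mean()
rhs = t * lam * (m**2 + s**2)          # t * int y^2 nu(dy)
print(lhs, rhs)
```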
6.12.2 Stochastic Exponentials

Let $(X_t)_{t\ge0}$ be a Lévy process with Lévy triplet $(b, \sigma, \nu)$ and $Y_t = \exp X_t$. By Itô's lemma,

$$ Y_t = 1 + \int_0^t Y_{s-}\,dX_s + \frac{\sigma^2}{2}\int_0^t Y_s\,ds + \sum_{0\le s\le t,\,\Delta X_s\ne0} Y_{s-}\left( e^{\Delta X_s} - 1 - \Delta X_s \right) = 1 + \int_0^t Y_{s-}\,dX_s + \frac{\sigma^2}{2}\int_0^t Y_s\,ds + \int_{[0,t]\times\mathbb{R}} Y_{s-}\left( e^x - 1 - x \right) J(ds, dx). $$

In differential form,

$$ \begin{aligned} \frac{dY_t}{Y_{t-}} &= dX_t + \frac{\sigma^2}{2}dt + \int_{\mathbb{R}} (e^x - 1 - x)\,J(dt, dx)\\ &= \left( b + \frac{\sigma^2}{2} \right)dt + \sigma dW_t + \int_{\mathbb{R}} x\left( J(dt, dx) - 1_{|x|\le1}\nu(dx)dt \right) + \int_{\mathbb{R}} (e^x - 1 - x)\,J(dt, dx)\\ &= \left( b + \frac{\sigma^2}{2} \right)dt + \sigma dW_t + \int_{\mathbb{R}} (e^x - 1)\,J(dt, dx) - x1_{|x|\le1}\nu(dx)dt\\ &= \left( b + \frac{\sigma^2}{2} \right)dt + \int_{\mathbb{R}} \left( e^x - 1 - x1_{|x|\le1} \right)\nu(dx)dt + \sigma dW_t + \int_{\mathbb{R}} (e^x - 1)\left[ J(dt, dx) - \nu(dx)dt \right]. \end{aligned} $$

Note that the last two terms constitute a martingale. When

$$ b + \frac{\sigma^2}{2} + \int_{\mathbb{R}} \left( e^x - 1 - x1_{|x|\le1} \right)\nu(dx) = 0, $$

$Y_t$ is a martingale. Let

$$ \frac{dY_t}{Y_{t-}} = dZ_t; $$

then $Z_t$ is also a Lévy process. To figure out its Lévy triplet, we rewrite $dY_t/Y_{t-}$ as

$$ \begin{aligned} \frac{dY_t}{Y_{t-}} &= \left( b + \frac{\sigma^2}{2} \right)dt + \int_{\mathbb{R}} \left[ (e^x-1)1_{|e^x-1|\le1} - x1_{|x|\le1} \right]\nu(dx)dt + \sigma dW_t + \int_{\mathbb{R}} (e^x-1)\left[ J(dt, dx) - 1_{|e^x-1|\le1}\nu(dx)dt \right]\\ &= \left( b + \frac{\sigma^2}{2} \right)dt + \int_{\mathbb{R}} \left[ (e^x-1)1_{|e^x-1|\le1} - x1_{|x|\le1} \right]\nu(dx)dt + \sigma dW_t + \int_{\mathbb{R}} z\left[ J_Z(dt, dz) - 1_{|z|\le1}\nu_Z(dz)dt \right]. \end{aligned} $$

Hence, the Lévy triplet for $Z_t$ is

$$ b_Z = b + \frac{\sigma^2}{2} + \int_{\mathbb{R}} \nu(dx)\left[ (e^x-1)1_{|e^x-1|\le1} - x1_{|x|\le1} \right], \qquad \sigma_Z = \sigma, \qquad \nu_Z(dx) = \frac{\nu(\ln(1+x))}{1+x}\,dx, \quad x > -1. $$

We call $Y_t$ the stochastic exponential of $Z_t$ and denote $Y_t = \mathcal{E}(Z_t)$. In terms of $Z_t$, we have the integral representation of $Y_t$:

$$ Y_t = Y_0 + \int_0^t Y_{s-}\,dZ_s. $$
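The image-measure formula $\nu_Z(dz) = \nu(\ln(1+z))/(1+z)\,dz$ is just the change of variables $z = e^x - 1$, which can be verified by integrating a test function against both measures. A sketch; the Gaussian Lévy density and the test function $f$ are hypothetical.

```python
import numpy as np

# Change-of-variables check: int f(e^x - 1) nu(x) dx = int f(z) nu_Z(z) dz.
def nu(x):
    return 2.0 * np.exp(-(x + 0.1)**2 / 0.08) / np.sqrt(0.08 * np.pi)

f = lambda z: z**2 / (1.0 + z**2)

x = np.linspace(-3.0, 3.0, 400_001)
lhs = (f(np.exp(x) - 1.0) * nu(x)).sum() * (x[1] - x[0])

z = np.linspace(np.exp(-3.0) - 1.0, np.exp(3.0) - 1.0, 400_001)
nu_Z = nu(np.log1p(z)) / (1.0 + z)     # nu(ln(1+z)) / (1+z)
rhs = (f(z) * nu_Z).sum() * (z[1] - z[0])
print(lhs, rhs)
```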
6.12.3 Mean Variance Hedging: Martingale Case

We look for a self-financing strategy, given by an initial capital $V_0$ and a portfolio $(\phi_t, \psi_t)$ over the lifetime of the option, which minimizes the terminal hedging error in a mean-square sense,

$$ \inf_{\phi} E\left[ |V_T(\phi) - H|^2 \right], $$

under a risk-neutral measure $Q$, where

$$ V_T(\phi) = V_0 e^{rT} + \int_0^T r\psi_t\,dt + \int_0^T \phi_t\,dS_t, $$

with $\psi_t$ the money market position. To simplify our discussion, we consider discounted prices

$$ \hat S_t = e^{-rt}S_t, \qquad \hat V_t = e^{-rt}V_t. $$

Then the problem can be written as

$$ \inf_{V_0,\,\phi} E^Q\left[ \varepsilon^2(V_0, \phi) \right], $$

where

$$ \varepsilon(V_0, \phi) = \hat H - \hat V_T = \hat H - V_0 - \int_0^T \phi_t\,d\hat S_t, $$

with $\hat H = e^{-rT}H$. Assume $H$ has finite variance, i.e., $H \in L^2(\Omega, \mathcal{F}, Q)$. Define the set of admissible strategies

$$ \mathcal{S} = \left\{ \phi \text{ caglad predictable with } E\left[ \left( \int_0^T \phi_t\,d\hat S_t \right)^2 \right] < \infty \right\}, $$

and

$$ \mathcal{A} = \left\{ V_0 + \int_0^T \phi_t\,d\hat S_t : V_0\in\mathbb{R},\ \phi\in\mathcal{S} \right\}, $$

and define the inner product $(X, Y)_{L^2} = E[XY]$. Then we can cast the problem as

$$ \inf_{V_0,\,\phi} E^Q\left| \varepsilon(V_0, \phi) \right|^2 = \inf_{A\in\mathcal{A}} \left\| \hat H - A \right\|^2_{L^2(Q)}. $$
The solution of this minimization problem is given by

Theorem 6.12.2 (Galtchouk–Kunita–Watanabe decomposition) Let $(\hat S_t)_{t\in[0,T]}$ be a square-integrable martingale with respect to $Q$. Any random variable $\hat H$ with finite variance depending on the history $(\mathcal{F}^{\hat S}_t)_{t\in[0,T]}$ of $\hat S_t$ can be represented as the sum of a stochastic integral with respect to $\hat S_t$ and a random variable orthogonal to $\mathcal{A}$: there exists $(\phi^H_t)_{t\in[0,T]}\in\mathcal{S}$ such that

$$ \hat H = E^Q[\hat H] + \int_0^T \phi^H_t\,d\hat S_t + N^H, $$

where

1. $N^H \perp \mathcal{A}$;

2. $N^H$ is a martingale, $N^H_t = E^Q\left[ N^H_T \,\middle|\, \mathcal{F}_t \right]$.

Write $d\hat S_t = \hat S_{t-}\,dZ_t$. It follows that the gain process satisfies

$$ G_T(\phi) = \int_0^T \phi_t\,d\hat S_t = \int_0^T \phi_t\hat S_{t-}\,dZ_t = \int_0^T \phi_t\hat S_t\sigma\,dW_t + \int_0^T\!\!\int_{\mathbb{R}} \phi_t\hat S_{t-}z\,\tilde J_Z(dt, dz) = \int_0^T \phi_t\hat S_t\sigma\,dW_t + \int_0^T\!\!\int_{\mathbb{R}} \phi_t\hat S_{t-}(e^x - 1)\,\tilde J_X(dt, dx). $$
The quadratic hedging problem can now be written as

$$ \begin{aligned} \inf_{V_0,\,\phi\in L^2(\hat S)} E^Q\left[ G_T(\phi) + V_0 - \hat H \right]^2 &= \inf_{V_0,\,\phi\in L^2(\hat S)} E^Q\left[ G_T(\phi) - \left( \hat H - E^Q[\hat H] \right) + V_0 - E^Q[\hat H] \right]^2\\ &= \inf_{V_0,\,\phi\in L^2(\hat S)} \left\{ E^Q\left[ G_T(\phi) - \left( \hat H - E^Q[\hat H] \right) \right]^2 + \left( V_0 - E^Q[\hat H] \right)^2 \right\}\\ &= \inf_{V_0,\,\phi\in L^2(\hat S)} \left\{ \left( V_0 - E^Q[\hat H] \right)^2 + \operatorname{Var}^Q\left[ G_T(\phi) - \hat H \right] \right\}, \end{aligned} $$

and the expectation of the hedging error is

$$ V_0 - E^Q\left[ \hat H \right], $$

implying an optimal value for the initial capital

$$ V_0 = E^Q\left[ \hat H \right]. \qquad (6.12.7) $$

The function $\phi$, meanwhile, is chosen to minimize the variance of the hedging error.
Theorem 6.12.3 (Quadratic hedge in exponential-Lévy models) Consider the risk-neutral dynamics

$$ d\hat S_t = \hat S_{t-}\,dZ_t, $$

where $Z_t$ is a martingale Lévy process with diffusion volatility $\sigma$ and Lévy measure $\nu_Z$. For a European option with payoff satisfying the Lipschitz condition

$$ |H(x) - H(y)| \le K|x - y| \quad\text{for some } K > 0, $$

the risk-minimizing hedge is given by $\phi_t = \phi(t, \hat S_{t-})$, where

$$ \phi(t, \hat S) = \frac{ \sigma^2\dfrac{\partial C}{\partial S} + \dfrac{1}{\hat S}\displaystyle\int \nu_Z(dz)\,z\left[ C\!\left(t, \hat S(1+z)\right) - C(t, \hat S) \right] }{ \sigma^2 + \displaystyle\int z^2\,\nu_Z(dz) }, $$

with

$$ C(t, \hat S) = e^{-r(T-t)}E^Q_t[H]. $$

Proof: Under $Q$, $\hat S_t$ is an exponential martingale. Consider a strategy $\phi_t \in L^2(\hat S)$ such that

$$ \hat V_T = \int_0^T \phi_t\,d\hat S_t = \int_0^T \phi_t\hat S_{t-}\,dZ_t = \int_0^T \phi_t\hat S_t\sigma\,dW_t + \int_0^T\!\!\int_{\mathbb{R}} \phi_t\hat S_{t-}z\,\tilde J_Z(dt, dz). \qquad (6.12.8) $$

Define now the function

$$ C(t, \hat S) = e^{-r(T-t)}E^Q_t[H], $$

which is also a square-integrable martingale. By Itô's lemma,

$$ C(t, \hat S_t) - C(0, \hat S_0) = \int_0^t \frac{\partial C}{\partial S}\!\left( u, \hat S_u \right)\hat S_u\sigma\,dW_u + \int_0^t\!\!\int_{\mathbb{R}} \left[ C\!\left( u, \hat S_{u-}(1+z) \right) - C(u, \hat S_{u-}) \right] \tilde J_Z(du, dz). \qquad (6.12.9) $$

The Lipschitz condition on $H$ ensures that the integrands in (6.12.9) are square-integrable. Subtracting (6.12.9), evaluated at $t = T$, from (6.12.8), we obtain the hedging error:

$$ \varepsilon(V_0, \phi) = \int_0^T \left[ \phi_t - \frac{\partial C}{\partial S}(t, \hat S_t) \right]\hat S_t\sigma\,dW_t + \int_0^T\!\!\int_{\mathbb{R}} \tilde J_Z(dt, dz)\left[ \phi_t\hat S_{t-}z - \left( C\!\left(t, \hat S_{t-}(1+z)\right) - C(t, \hat S_{t-}) \right) \right]. $$

By the isometry formula,

$$ E\left| \varepsilon(V_0, \phi) \right|^2 = E\left[ \int_0^T dt\int_{\mathbb{R}} \nu_Z(dz)\left( C\!\left(t, \hat S_{t-}(1+z)\right) - C(t, \hat S_{t-}) - \hat S_{t-}\phi_t z \right)^2 \right] + E\left[ \int_0^T \hat S^2_t\left( \phi_t - \frac{\partial C}{\partial S}(t, \hat S_t) \right)^2\sigma^2\,dt \right]. $$

Now we can choose $\phi_t$ to minimize $E\left|\varepsilon(V_0, \phi)\right|^2$. Differentiating the quadratic expression, we obtain the first-order condition

$$ \hat S^2_t\sigma^2\left( \phi_t - \frac{\partial C}{\partial S}(t, \hat S_t) \right) + \int_{\mathbb{R}} \nu_Z(dz)\,\hat S_{t-}z\left( \hat S_{t-}\phi_t z - C\!\left(t, \hat S_{t-}(1+z)\right) + C(t, \hat S_{t-}) \right) = 0, $$

whose solution is given by the statement. For the initial capital given by (6.12.7), the residual hedging error is zero, i.e., the market is complete, in the following two cases.

1. No jumps ($\nu \equiv 0$): the model reduces to the Black–Scholes model and

$$ \phi_t = \frac{\partial C}{\partial S}. $$

2. No diffusion ($\sigma = 0$) and only one jump size, $a$:

$$ \phi_t = \frac{ C(\hat S_t(1+a)) - C(\hat S_t) }{ \hat S_t a }. $$
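The hedge ratio of Theorem 6.12.3 can be computed numerically for a Merton model, using the classical Merton series for the call price $C$ and rewriting the $\nu_Z$ integrals in terms of log-jumps $x$ via $z = e^x - 1$. A sketch; all market parameters are hypothetical, and the derivative and integrals are numerical.

```python
import numpy as np
from math import factorial
from scipy.stats import norm

def bs_call(S, K, T, r, sig):
    d1 = (np.log(S / K) + (r + sig**2 / 2) * T) / (sig * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sig * np.sqrt(T))

def merton_call(S, K, T, r, sig, lam, a, d, kmax=40):
    # classical Merton (1976) series of Black-Scholes prices
    kbar = np.exp(a + d**2 / 2) - 1.0
    lam2 = lam * (1.0 + kbar)
    out = 0.0
    for k in range(kmax):
        sig_k = np.sqrt(sig**2 + k * d**2 / T)
        r_k = r - lam * kbar + k * np.log(1.0 + kbar) / T
        w = np.exp(-lam2 * T) * (lam2 * T)**k / factorial(k)
        out = out + w * bs_call(S, K, T, r_k, sig_k)
    return out

S0, K, T, r, sig = 100.0, 100.0, 0.5, 0.05, 0.2
lam, a, d = 0.5, -0.1, 0.15

x = np.linspace(-1.2, 1.2, 2001)       # log-price jump sizes
dx = x[1] - x[0]
nu = lam * np.exp(-(x - a)**2 / (2 * d**2)) / (d * np.sqrt(2 * np.pi))

C0 = merton_call(S0, K, T, r, sig, lam, a, d)
hS = 1e-3 * S0
delta = (merton_call(S0 + hS, K, T, r, sig, lam, a, d)
         - merton_call(S0 - hS, K, T, r, sig, lam, a, d)) / (2 * hS)

z = np.exp(x) - 1.0                    # relative jump sizes
Cjump = merton_call(S0 * (1.0 + z), K, T, r, sig, lam, a, d)
num = sig**2 * delta + (z * (Cjump - C0) * nu).sum() * dx / S0
den = sig**2 + (z**2 * nu).sum() * dx
phi = num / den
print(delta, phi)                      # delta hedge vs risk-minimizing hedge
```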
Finally, we give several reasons for using the risk-neutral measure to minimize the hedging error.

1. Under a risk-neutral measure, the discounted asset price is a martingale, and the Galtchouk–Kunita–Watanabe theorem guarantees a solution. Otherwise, the option price produced by the minimization can be negative, or may not exist.

2. The risk-neutral measure obtained through calibration naturally reflects the market's anticipation of future scenarios.

3. The risk-neutral measure obtained through calibration or through estimation using option data contains the risk premiums the market charges for derivatives, and hence is more natural for measuring risk.
6.13 Combining Stochastic Volatility with Jumps

Almost any stochastic volatility model can be superimposed on a jump model to form a so-called stochastic volatility with jumps model. The most famous of such models is the Bates model (1996), which combines the Heston model with the Merton model, taking the form

$$ \frac{dS_t}{S_t} = \mu_t\,dt + \sqrt{V_t}\,dW_t + dJ_t, \qquad dV_t = \kappa(\theta - V_t)\,dt + \sigma_v\sqrt{V_t}\,dZ_t, $$

where $W_t$ and $Z_t$ are Brownian motions with correlation $\rho$, and $J_t$ is a compound Poisson process with intensity $\lambda$ and log-normal distribution of jump sizes, $Y$, such that

$$ \ln(1+Y) \sim N\!\left( \ln(1+\bar k) - \frac{1}{2}\delta^2,\ \delta^2 \right). $$

Under a risk-neutral measure, the log-price $X_t = \ln S_t$ satisfies

$$ dX_t = \left( r - \lambda\bar k - \frac{1}{2}V_t \right)dt + \sqrt{V_t}\,dW_t + d\tilde J_t, $$

where $\tilde J_t$ is a compound Poisson process with intensity $\lambda$ and Gaussian distribution of jumps,

$$ \ln(1+Y) \sim N\!\left( \ln(1+\bar k) - \frac{1}{2}\delta^2,\ \delta^2 \right). $$

Due to the independence between the diffusion component and the jump component, the moment generating function of the Bates model is given by

$$ \Phi_B(X_t, V_t, t; u) \triangleq E^Q\left[ e^{uX_T} \,\middle|\, \mathcal{F}_t \right] = \Phi_H(X_t, V_t, t; u)\,\Phi_M(t; u), $$

where, with $\tau = T - t$,

$$ \Phi_M(t; u) = \exp\left\{ -u\lambda\tau\left( e^{\alpha + \frac{1}{2}\delta^2} - 1 \right) + \lambda\tau\left( e^{\frac{1}{2}\delta^2u^2 + \alpha u} - 1 \right) \right\}, \qquad \alpha = \ln(1+\bar k) - \frac{1}{2}\delta^2, $$

and $\Phi_H$ is the MGF of Heston's model,

$$ \Phi_H(x, v, t; u) = \exp\left( A(T-t, u) + B(T-t, u)v + ux \right). $$

We have a few remarks.

1. With the MGF, options can be evaluated via FFT.

2. The log price, $X_t$, is not a Lévy process.
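The product structure of the Bates transform is straightforward to implement. The sketch below is written for the characteristic function (the $iu$ version of the MGF above); the Heston $A$, $B$ functions follow the standard textbook form, and all parameter values are hypothetical.

```python
import numpy as np

# Bates characteristic function = Heston factor * Merton jump factor.
def heston_cf(u, tau, v0, kappa, theta, eta, rho):
    # E[exp(i u X_tau)] for the Heston log-return part (x0 = 0, drift omitted)
    d = np.sqrt((kappa - 1j * rho * eta * u)**2 + eta**2 * (1j * u + u**2))
    g = (kappa - 1j * rho * eta * u - d) / (kappa - 1j * rho * eta * u + d)
    B = ((kappa - 1j * rho * eta * u - d) / eta**2
         * (1 - np.exp(-d * tau)) / (1 - g * np.exp(-d * tau)))
    A = (kappa * theta / eta**2
         * ((kappa - 1j * rho * eta * u - d) * tau
            - 2 * np.log((1 - g * np.exp(-d * tau)) / (1 - g))))
    return np.exp(A + B * v0)

def bates_cf(u, tau, v0, kappa, theta, eta, rho, lam, kbar, delta):
    alpha = np.log(1 + kbar) - delta**2 / 2
    jump = np.exp(-1j * u * lam * tau * kbar
                  + lam * tau * (np.exp(1j * u * alpha - delta**2 * u**2 / 2) - 1))
    return heston_cf(u, tau, v0, kappa, theta, eta, rho) * jump

phi0 = bates_cf(0.0, 1.0, 0.04, 2.0, 0.04, 0.3, -0.7, 0.5, -0.05, 0.1)
print(phi0)    # a characteristic function equals 1 at u = 0
```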
6.14 Empirical Performances of Various Models

We study numerically the calibration and hedging performance of three models: Heston, Merton and Bates. The data and calibration procedure are taken from Detlefsen (2005) and Bakshi, Cao and Chen (1997), and are described below.

1. Data: call and put options on the DAX.

2. Objective function:

$$ \mathrm{fit}(\theta) = \frac{1}{n}\sum_{K,\tau} V(K, \tau)\left( \sigma_{\mathrm{mod}}(K, \tau, \theta) - \sigma_{\mathrm{obs}}(K, \tau) \right)^2, $$

where $V(K, \tau)$ is the Black–Scholes vega,

$$ V(K, \tau) = S\sqrt{\tau}\; n\!\left( \frac{ \ln\frac{S}{K} + \left( r + \frac{1}{2}\sigma^2 \right)\tau }{ \sigma\sqrt{\tau} } \right), $$

and $n$ is the density function of the standard normal distribution,

$$ n(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}. $$

The surface plot of $V(K, \tau)$ is given in Fig 4.4 of Detlefsen.

Fitting results:

1. Goodness of fit for Bates, Heston and Merton: Fig 4.5 of Detlefsen.
2. Goodness of fit for Bates and Heston: Fig 4.6 of Detlefsen.
3. Fit to the implied volatility curves: Figs 4.7-4.9 of Detlefsen.
4. Fit to prices by the Merton model: Fig 4.10 of Detlefsen.
5. Stability of the parameters of the three models: Figs 4.11-4.12 of Detlefsen.

Next we look at the hedging performance of the three models. We consider three kinds of hedging:

1. Delta hedging.
2. Vega-delta hedging.
3. Minimum variance hedging with only the underlying.

The results are shown in Figs 6.1-6.6 of Detlefsen. The finding is that the performances are all similarly good.
Next, let us look at the empirical results of Bakshi, Cao and Chen (1997) on stochastic volatility, stochastic interest rate and jump models (SV-SI-J). BCC examined the models from three perspectives:

1. Internal consistency of implied parameters (between option-implied and time-series data). Note that for the Bates model, $\sigma_v$, $\rho$ and $\delta$ are the same under both the objective and risk-neutral measures, while $\kappa$, $\theta$, and