Financial Derivatives Class Notes


Financial Derivatives

1MA209 10105 2024/2025

Inês Mesquita

Based on the notes by Professor Erik Ekström

1 Currency Risk and Options
Motivating Example: Currency Risk
A Swedish company has signed a contract to buy a machine from a US company. The price is
$100,000 USD, to be paid upon delivery in 6 months (T = 1/2). The current exchange rate is
11 SEK/USD. This situation presents a significant currency risk for the Swedish company.

Strategies for Managing Currency Risk

There are three possible strategies to manage this risk:

Strategy 1: Buy USD Today — The company could buy $100,000 USD today and deposit it in a
bank account until payment is due.

– Advantage: Eliminates currency risk entirely.

– Disadvantages: Requires tying up a significant amount of capital for a prolonged
period. The company may not have $100,000 USD readily available.

Strategy 2: Forward Contract — The company could enter into a forward contract with a bank. This
contract would obligate the bank to sell the company $100,000 USD at a predetermined
exchange rate (K) on the delivery date (T ).

– Advantage: Eliminates the currency risk.

– Disadvantage: If the exchange rate drops below K at time T , the company will have
paid more than the prevailing market price for the USD.

Strategy 3: European Call Option — The company could buy a European call option. This option
would give the company the right, but not the obligation, to buy $100,000 USD at a
predetermined exchange rate (K), called the strike price, on the delivery date (T ).

– Advantage: Flexibility – the company can benefit from favourable exchange rate
movements while being protected from unfavourable ones.

– Disadvantage: Determining the fair price of the option is complex. The seller of the
option needs to hedge against potential losses.

∗ If the exchange rate at time T is above K, the company will exercise the option
and buy the USD at the lower strike price.

∗ If the exchange rate at time T is below K, the company will not exercise the
option and will buy USD at the prevailing market price.

Two main problems arise with this approach:
1. What is the fair price of an option?
2. If you are the seller of an option, how can you protect yourself (hedge) against risk?

Definition 1.1. An option is a financial derivative that gives the holder the right, but not the
obligation, to buy or sell an asset at a specified price (the strike price) on or before a certain
date.

Understanding Options in Discrete Time


Consider a market with two assets at time t = 0:

• Bank Account (Non-Risky Asset): B0 = 1, B1 = 1.

• Stock (Risky Asset): S0 = 100; S1 = 120 with probability p = 0.6, and S1 = 80 with
probability 1 − p = 0.4.

A call option is a contract that gives its holder the right (but not the obligation) to buy the
stock at t = 1 at a certain pre-determined price, K. Assume K = 110.

Thus, at time t = 1, the option is worth:

• 120 − 110 = 10 if the stock price is 120 (p = 0.6)

• 0 if the stock price is 80 (1 − p = 0.4)

The key question is: What is a fair price for this option?

The idea is to replicate the option’s payoff using a trading strategy that involves the bank
account (B) and the stock (S).

Replicating the Option

Assume we invest x amount in the bank account (B) and buy y shares of the stock (S).

  
x + 120y = 10            (120 − 80)y = 10            y = 1/4
                 ⇐⇒                         ⇐⇒
x + 80y = 0              x = −80y                    x = −20

One possible strategy is to:

1. Borrow 20 from the bank at t = 0.

2. Buy 1/4 of a share of stock at t = 0.

3. The cost of this transaction is 100 × 1/4 − 20 = 5.

At time t = 1:

1. If the stock price is 120, our holdings are worth −20 (from the bank account) + 30 (from
the stock) = 10.

2. If the stock price is 80, our holdings are worth −20 (from the bank account) + 20 (from
the stock) = 0.

To ensure no arbitrage opportunities (risk-free profit), the option price at t = 0 must be 5!
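The replication argument is easy to check numerically. Below is a minimal Python sketch (the helper `one_period_price` is our own name, not from the notes) that solves the 2×2 replication system for an arbitrary payoff; by the no-arbitrage argument, the setup cost of the replicating portfolio is the option price.

```python
import numpy as np

def one_period_price(B0, B1, S0, S_up, S_down, payoff):
    """Solve x*B1 + y*S1 = payoff(S1) in both scenarios;
    the setup cost x*B0 + y*S0 is then the no-arbitrage price."""
    A = np.array([[B1, S_up],
                  [B1, S_down]], dtype=float)
    b = np.array([payoff(S_up), payoff(S_down)], dtype=float)
    x, y = np.linalg.solve(A, b)      # bank holding x, stock holding y
    return x * B0 + y * S0, x, y

# Call option with strike K = 110 (the example above)
price, x, y = one_period_price(1, 1, 100, 120, 80, lambda s: max(s - 110, 0))
print(price, x, y)   # 5.0, x = -20 (borrow 20), y = 0.25

# Put option (Exercise 1.5 below)
price, x, y = one_period_price(1, 1, 100, 120, 80, lambda s: max(110 - s, 0))
print(price, x, y)   # 15.0, x = 90, y = -0.75
```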

Remark 1.2. The value of p does not influence the option value.

Remark 1.3. Let us change p into a value q such that EQ[S1] = S0 = 100 (i.e. q = 0.5).

Then EQ[(S1 − K)+] = (1/2) · 10 + (1/2) · 0 = 5. (The option value!)

In general, the option price is EQ[(B0/B1)(S1 − K)+], where q is chosen so that
EQ[(B0/B1) S1] = S0. (We do not prove this.)
Remark 1.4. (Notation) We write a+ = max{a, 0}. In particular,

(s − K)+ = { s − K   if s > K,
           { 0       if s ≤ K.

Exercise 1.5.

1. In the example above, find a replicating strategy for a put option (the right, but not the
obligation, to sell the stock at K = 110).

2. Find the value of the put option at t = 0.

Answer: {x = 90, y = −3/4}; the option value is 15.

Let x be the amount invested in the bank account (B) and let y be the number of shares of the
stock (S). The put option allows the holder to sell the stock at K = 110, regardless of whether
the actual stock price is higher or lower.

• If S1 = 120: The put option would be worthless (why sell for 110 when you can get 120 in
the market?). So, our replicating portfolio should also have a value of 0 in this scenario:
x + 120y = 0.

• If S1 = 80: The put option would allow the holder to sell the stock for 110, generating
a profit of 110 − 80 = 30. So, our replicating portfolio should have a value of 30 in this
scenario: x + 80y = 30.
 
x + 120y = 0             x = −120y                   y = −3/4
                 ⇐⇒                         ⇐⇒
x + 80y = 30             (80 − 120)y = 30            x = 90

Then, to replicate the put option, we need to:

1. Deposit 90 in the bank account (x = 90).

2. Short-sell 3/4 of a share of the stock (y = −3/4).

3. Cost of the transactions: 90 − (3/4) × 100 = 15.

Therefore, the put option value at t = 0 is 15.

2 Constructing Stochastic Processes in Continuous Time
Remark 2.1. pages 43 to 49

Simple Random Walk


Let X1, X2, X3, . . . be independent, identically distributed random variables with

P(Xk = 1) = P(Xk = −1) = 1/2,   and let   Sn = X1 + X2 + · · · + Xn.

Then
E[Sn] = E[X1 + · · · + Xn] = E[X1] + E[X2] + · · · + E[Xn] = 0

and
Var(Sn) = E[Sn²] − (E[Sn])² = E[Sn²].

Since Var(X + Y) = Var(X) + Var(Y) if X, Y are independent,

Var(Sn) = Var(X1 + · · · + Xn) = Var(X1) + · · · + Var(Xn) = n.

The standard deviation of Sn is thus √n (compare the central limit theorem).

Motivation

We now want to make a proper scaling of this process (a finer time grid).

Consider a stock with S0 = 100. How much do we expect it to fluctuate? Consider the
reasonable range for S over different time periods:

Time period               1 day        1 month       1 year
Reasonable range for S    100 ± 1      100 ± 5       100 ± 15
∆S                        1            5             15
∆t (trading days)         1            20            250
∆S/√∆t                    1/√1 = 1     5/√20 ≈ 1     15/√250 ≈ 1

Below, a stochastic process with variance t (standard deviation √t) is constructed.

Construction: Fix a time interval [0, T ].

Stage 1: Let X^1_0 = 0. At t0 = 0, toss a coin.

– If heads, let X^1_T = √T.
– If tails, let X^1_T = −√T.

Then E[X^1_T] = 0 and Var(X^1_T) = E[(X^1_T)²] − (E[X^1_T])² = E[(X^1_T)²] = T.

Stage 2: Let X^2_0 = 0. Toss a coin at t = 0.

– Heads ⇒ X^2_{T/2} = √(T/2)
– Tails ⇒ X^2_{T/2} = −√(T/2)

Repeat at t = T/2, adding/subtracting √(T/2).

Stage n: Let X^n_0 = 0. At each time t_k = (k/n)T, toss a coin:

X^n_{t_{k+1}} = X^n_{t_k} + Y_k,   where   Y_k = { +√(T/n)  with probability 1/2,
                                                 { −√(T/n)  with probability 1/2.

Clearly, E[X^n_{t_k}] = E[Y_0 + Y_1 + · · · + Y_{k−1}] = 0.

Also, Var(X^n_{t_k}) = Var(Y_0) + · · · + Var(Y_{k−1}) = (T/n) · k = t_k.

When n → ∞, we obtain Brownian motion (or Wiener process).
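A minimal NumPy sketch of the Stage-n construction (illustrative only; parameters chosen arbitrarily): each path takes n steps of size ±√(T/n), and the sample mean and variance at t_k should be close to 0 and t_k.

```python
import numpy as np

# Stage-n construction: n coin tosses of size ±sqrt(T/n) on [0, T]
T, n, n_paths = 1.0, 1000, 5000
rng = np.random.default_rng(0)

steps = rng.choice([1.0, -1.0], size=(n_paths, n)) * np.sqrt(T / n)
X = np.cumsum(steps, axis=1)          # X[:, k] approximates W at t_{k+1}

k = n // 2                            # check at t_k = T/2
print(X[:, k - 1].mean())             # ≈ 0
print(X[:, k - 1].var())              # ≈ t_k = 0.5
```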

Definition 2.2. A stochastic process W is a Wiener process (Brownian motion) if:


1. W0 = 0
2. It has independent increments, i.e., Wt2 − Wt1 and Wt4 − Wt3 are independent if
t1 < t2 ≤ t3 < t4 .

3. Wt − Ws is N (0, t − s).
4. t 7→ Wt is continuous.

Theorem 2.3. t ↦ Wt is of infinite variation and nowhere differentiable:

lim_{n→∞} Σ_k |W_{t_{k+1}} − W_{t_k}| = ∞.

Stochastic Integrals
The next goal is to define integrals ∫₀ᵗ g_s dW_s, where g_t is a stochastic process "determined by"
{W_s : 0 ≤ s ≤ t}.

First, define some key terms:

Definition 2.4. Let Xt be a stochastic process.

1. An event A is F_t^X-measurable (A ∈ F_t^X) if it is possible to determine whether A has
happened or not based on observations of {X_s : 0 ≤ s ≤ t}.

2. If a random variable Z can be determined given observations of {X_s : 0 ≤ s ≤ t}, then we
also write Z ∈ F_t^X.

3. A stochastic process Y_t with Y_t ∈ F_t^X for all t ≥ 0 is adapted to the filtration F_t^X.

Examples:

1. A = {X_s ≤ 7 for all s ≤ 9} ∈ F_9^X.

2. Z = ∫₀⁵ X_s ds ∈ F_5^X, but Z ∉ F_4^X.

3. Y_t = ∫₀ᵗ W_s ds is adapted to F_t^W.

4. Y_t = sup_{0≤s≤T} W_s (for a fixed T) is not adapted to F_t^W for t < T.

Now we can define what it means for a process to belong to a set L2 .

Definition 2.5. A process g belongs to L² if:

1. g is adapted to F_t^W.

2. ∫₀ᵀ E[g_s²] ds < ∞.

Example: W_s ~ N(0, s), so

∫₀ᵗ E[W_s²] ds = ∫₀ᵗ s ds = t²/2 < ∞,   so W ∈ L².

Stochastic Integration

Assume g ∈ L².

1. If g is simple, i.e., g_s = g_{t_k} for s ∈ [t_k, t_{k+1}), where 0 = t_0 < t_1 < · · · < t_n = t, define:

∫₀ᵗ g_s dW_s := Σ_{k=0}^{n−1} g_{t_k}(W_{t_{k+1}} − W_{t_k}).

2. For a general g ∈ L², approximate g with simple g^n such that ∫₀ᵗ E[(g_s^n − g_s)²] ds → 0 as
n → ∞. Define:

∫₀ᵗ g_s dW_s := lim_{n→∞} ∫₀ᵗ g_s^n dW_s   (limit in L²).

Thus, ∫₀ᵗ g_s dW_s is a well-defined L² random variable.
Remark 2.6. 1. It can be shown that the limit exists and does not depend on the approxi-
mating sequence used.

2. Forward increments are used.

3. Riemann-Stieltjes integration is not possible since Wt has paths of infinite variation.

Properties of Stochastic Integrals

Proposition 2.7. Assume g ∈ L². Then:

(i) E[∫₀ᵗ g_s dW_s] = 0.

(ii) E[(∫₀ᵗ g_s dW_s)²] = ∫₀ᵗ E[g_s²] ds (Itô isometry).

(iii) X_t = ∫₀ᵗ g_s dW_s is F_t^W-adapted.

Proof. 1. Assuming g is simple (the general case follows by approximation):

E[∫₀ᵗ g_s dW_s] = E[Σ_{k=0}^{n−1} g_{t_k}(W_{t_{k+1}} − W_{t_k})]
= Σ_{k=0}^{n−1} E[g_{t_k}(W_{t_{k+1}} − W_{t_k})]
= Σ_{k=0}^{n−1} E[g_{t_k}] E[W_{t_{k+1}} − W_{t_k}] = 0,

where the last step uses that g_{t_k} ∈ F_{t_k}^W is independent of the increment W_{t_{k+1}} − W_{t_k}.

2.

E[(∫₀ᵗ g_s dW_s)²] = E[(Σ_{k=0}^{n−1} g_{t_k}(W_{t_{k+1}} − W_{t_k}))²]
= Σ_{k=0}^{n−1} E[g_{t_k}²(W_{t_{k+1}} − W_{t_k})²] + 2 Σ_{j<k} E[g_{t_j} g_{t_k}(W_{t_{j+1}} − W_{t_j})(W_{t_{k+1}} − W_{t_k})]
= Σ_{k=0}^{n−1} E[g_{t_k}²] E[(W_{t_{k+1}} − W_{t_k})²] + 2 Σ_{j<k} E[g_{t_j} g_{t_k}(W_{t_{j+1}} − W_{t_j})] E[W_{t_{k+1}} − W_{t_k}]
= Σ_{k=0}^{n−1} E[g_{t_k}²](t_{k+1} − t_k) = ∫₀ᵗ E[g_s²] ds.

Example: Approximating the Integrand

Calculate ∫₀ᵗ g_s^n dW_s, where g_s^n = Σ_{k=0}^{n−1} W_{t_k} 1_{[t_k, t_{k+1})}(s).

∫₀ᵗ g_s^n dW_s = Σ_{k=0}^{n−1} ∫_{t_k}^{t_{k+1}} W_{t_k} 1_{[t_k, t_{k+1})}(s) dW_s
= Σ_{k=0}^{n−1} W_{t_k}(W_{t_{k+1}} − W_{t_k})
= (1/2) Σ_{k=0}^{n−1} [W_{t_{k+1}}² − W_{t_k}² − (W_{t_{k+1}} − W_{t_k})²]
= (1/2) W_t² − (1/2) Σ_{k=0}^{n−1} (W_{t_{k+1}} − W_{t_k})².

(As n → ∞, g^n approximates the integrand W, so the limit below is ∫₀ᵗ W_s dW_s.)

Let
S_n = Σ_{k=0}^{n−1} (W_{t_{k+1}} − W_{t_k})²   and   ∆W = W_{t_{k+1}} − W_{t_k}.

Then:
E[S_n] = Σ_{k=0}^{n−1} E[(∆W)²] = Σ_{k=0}^{n−1} t/n = t

and, since the increments are independent,

Var(S_n) = Σ_{k=0}^{n−1} Var((∆W)²) = n · Var((∆W)²) = n · 2(t/n)² = 2t²/n → 0,

so S_n → t as ∆t → 0.

This implies that the second term in the expression for ∫₀ᵗ g_s^n dW_s converges to t/2 as ∆t → 0.
Therefore:

lim_{∆t→0} ∫₀ᵗ g_s^n dW_s = (1/2) W_t² − t/2.
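A quick Monte Carlo sanity check (a sketch, not from the notes): on a fine grid, Σ W_{t_k}(W_{t_{k+1}} − W_{t_k}) should be close to W_t²/2 − t/2 pathwise, and the quadratic variation Σ(∆W)² close to t.

```python
import numpy as np

t, n, n_paths = 1.0, 1000, 5000
rng = np.random.default_rng(1)

dW = rng.normal(0.0, np.sqrt(t / n), size=(n_paths, n))
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # W_{t_k}

ito_sum = (W_prev * dW).sum(axis=1)       # Σ W_{t_k} ΔW
quad_var = (dW ** 2).sum(axis=1)          # Σ (ΔW)^2

print(np.mean(np.abs(ito_sum - (W[:, -1]**2 / 2 - t / 2))))  # small, O(1/√n)
print(quad_var.mean(), quad_var.std())                       # ≈ t, ≈ √(2t²/n)
```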

3 Martingales and Itô’s Formula
Remark 3.1. pages 49 to 59

Martingales

Definition 3.2. Let {Ft }t≥0 be a filtration ("flow of information"). Formally, Ft is


a σ-algebra of events, and Fs ⊂ Ft for s ≤ t. If Y is a random variable, then E[Y |Ft ]
denotes the conditional expectation of Y given Ft , i.e., the expected value of Y given
all information up to t.

Example:

• E[Wt |Fs ] = E[Ws |Fs ] + E[Wt − Ws |Fs ] = Ws

Definition 3.3. Xt is an Ft -martingale if:


• Xt is Ft -adapted
• E[|Xt |] < ∞ for all t ≥ 0
• E[Xt |Fs ] = Xs for s ≤ t

Examples:

1. Brownian motion W is a martingale.

2. Y_t = W_t² − t is a martingale:

E[Y_t|F_s] = E[W_t² − t | F_s]
= E[(W_t − W_s)² + 2W_s W_t − W_s² | F_s] − t
= (t − s) + 2W_s E[W_t|F_s] − W_s² − t
= W_s² − s = Y_s

3. Y_t = ∫₀ᵗ g_u dW_u is a martingale:

E[Y_t|F_s] = E[∫₀ᵗ g_u dW_u | F_s]
= E[∫₀ˢ g_u dW_u | F_s] + E[∫ₛᵗ g_u dW_u | F_s] = ∫₀ˢ g_u dW_u + 0 = Y_s

4. W_t³ is not a martingale:

(s < t)   E[W_t³|F_s] = E[(W_t − W_s)³ + 3W_s W_t² − 3W_s² W_t + W_s³ | F_s]
= 0 + 3W_s E[W_t²|F_s] − 3W_s² E[W_t|F_s] + W_s³
= W_s³ + 3(t − s)W_s ≠ W_s³

Remark 3.4. A martingale is a "fair game."
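These martingale checks can be illustrated by simulation (an illustrative sketch only): conditioning on F_s is approximated by fixing an observed value W_s and averaging over fresh increments of W_t − W_s.

```python
import numpy as np

# Check E[W_t^2 - t | F_s] = W_s^2 - s, while E[W_t^3 | F_s] != W_s^3
s, t, n_inner = 1.0, 2.0, 200000
rng = np.random.default_rng(2)

Ws = 0.7                                   # a fixed observed value of W_s
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), n_inner)

print((Wt**2 - t).mean(), Ws**2 - s)       # both ≈ -0.51 (martingale)
print((Wt**3).mean(), Ws**3)               # ≈ Ws^3 + 3(t-s)Ws = 2.443 vs 0.343
```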

Itô’s Formula
Assume

X_t = a + ∫₀ᵗ µ_s ds + ∫₀ᵗ σ_s dW_s

for some adapted processes µ_t and σ_t.

Remark 3.5. Shorthand notation: dX_t = µ_t dt + σ_t dW_t,   X_0 = a.

Let f (t, x) be a C 2 function, and define Zt := f (t, Xt ).

Question: What does dZt look like?

Remark 3.6. Recall:

• ∫₀ᵗ W_s dW_s = W_t²/2 − t/2,
• so Z_t := W_t² = t + 2∫₀ᵗ W_s dW_s.

Thus, dZ_t = d(W_t²) = dt + 2W_t dW_t and Z_0 = 0.

We will argue for the following identity: (dW_t)² = dt. Fix n and let t_k = kt/n.

Let ∆W_{t_k} = W_{t_{k+1}} − W_{t_k} and consider:

S_n := Σ_{k=0}^{n−1} (∆W_{t_k})²

We have:

E[S_n] = Σ_{k=0}^{n−1} E[(∆W_{t_k})²] = Σ_{k=0}^{n−1} t/n = t

and

Var(S_n) = Σ_{k=0}^{n−1} Var((∆W_{t_k})²) = n · Var((∆W_{t_k})²) = n · 2t²/n² = 2t²/n → 0 as n → ∞.

Thus, S_n → t as n → ∞ in L². This motivates us to write

∫₀ᵗ (dW_s)² = t   or   (dW_t)² = dt.

(Taylor Expansion)

dZ_t = (∂f/∂t) dt + (∂f/∂x) dX_t + (1/2)(∂²f/∂x²)(dX_t)² + (∂²f/∂t∂x) dt dX_t + higher order terms
= (∂f/∂t + µ_t (∂f/∂x) + (1/2)σ_t² (∂²f/∂x²)) dt + σ_t (∂f/∂x) dW_t + higher order terms

Theorem 3.7. (Itô's Formula, 1-d)

If dX_t = µ_t dt + σ_t dW_t and Z_t := f(t, X_t), then

dZ_t = (∂f/∂t + µ_t (∂f/∂x) + (1/2)σ_t² (∂²f/∂x²)) dt + σ_t (∂f/∂x) dW_t

(here ∂f/∂t = (∂f/∂t)(t, X_t), and similarly for the other derivatives of f).

Theorem 3.8. (Alternative Formulation)

dZ = (∂f/∂t) dt + (∂f/∂x) dX + (1/2)(∂²f/∂x²)(dX)²

where (dX)² is calculated using:

• (dt)² = 0
• dt dW_t = 0
• (dW_t)² = dt

Example: Compute ∫₀ᵗ W_s dW_s (again).

Solution:

Let X_t = W_t, so dX_t = dW_t. Let Z_t = f(t, X_t) where f(t, x) = x².

By Itô's formula:

dZ_t = (∂f/∂t) dt + (∂f/∂x) dX_t + (1/2)(∂²f/∂x²)(dX_t)²
= 2X_t dX_t + (1/2) · 2 (dX_t)²
= dt + 2W_t dW_t.

Thus, W_t² = Z_t = t + 2∫₀ᵗ W_s dW_s, so ∫₀ᵗ W_s dW_s = W_t²/2 − t/2.

Example: Compute E[W_t⁴].

Solution: Let Z_t = W_t⁴. By Itô's formula:

dZ_t = 4W_t³ dW_t + (1/2) · 12W_t² (dW_t)²
= 6W_t² dt + 4W_t³ dW_t.

Thus, W_t⁴ = Z_t = 6∫₀ᵗ W_s² ds + 4∫₀ᵗ W_s³ dW_s.

Taking expectations (the stochastic integral has expectation zero) gives:

E[W_t⁴] = 6∫₀ᵗ E[W_s²] ds = 6∫₀ᵗ s ds = 3t².

[Alternatively, without using Itô's formula]

E[W_t⁴] = ∫_R (x⁴/√(2πt)) e^{−x²/(2t)} dx = {integration by parts}
= [−t x³ (1/√(2πt)) e^{−x²/(2t)}]_{−∞}^{∞} + 3t ∫_R (x²/√(2πt)) e^{−x²/(2t)} dx = 3t · Var(W_t) = 3t².

Example: Compute E[e^{αW_t}].

Solution: Let Z_t = f(W_t) where f(x) = e^{αx}. Itô's formula gives:

dZ_t = (∂f/∂x) dW_t + (1/2)(∂²f/∂x²)(dW_t)²
= αe^{αW_t} dW_t + (1/2)α² e^{αW_t} (dW_t)²
= (α²/2) e^{αW_t} dt + αe^{αW_t} dW_t.

Integrating gives:

Z_t = 1 + (α²/2)∫₀ᵗ Z_s ds + α∫₀ᵗ Z_s dW_s,

so

E[Z_t] = 1 + (α²/2) E[∫₀ᵗ Z_s ds] + E[α∫₀ᵗ Z_s dW_s]
= 1 + (α²/2)∫₀ᵗ E[Z_s] ds.

Denote m(t) := E[Z_t]. Then:

m′(t) = (α²/2) m(t),   m(0) = 1,

which has the solution m(t) = e^{α²t/2}. So, E[Z_t] = E[e^{αW_t}] = e^{α²t/2}.

[Alternatively, without using Itô's formula]

Using W_t ~ N(0, t) we have

E[e^{αW_t}] = ∫_{−∞}^{∞} e^{αx} (1/√(2πt)) e^{−x²/(2t)} dx = e^{α²t/2} ∫_{−∞}^{∞} (1/√(2πt)) e^{−(x−αt)²/(2t)} dx = e^{α²t/2}.
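Both moment formulas are easy to verify by simulation (a sketch; α and t are chosen arbitrarily):

```python
import numpy as np

t, alpha, n_paths = 2.0, 0.5, 1_000_000
rng = np.random.default_rng(3)
Wt = rng.normal(0.0, np.sqrt(t), n_paths)   # W_t ~ N(0, t)

print((Wt**4).mean(), 3 * t**2)                             # both ≈ 12
print(np.exp(alpha * Wt).mean(), np.exp(alpha**2 * t / 2))  # both ≈ e^{0.25}
```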

Multi-Dimensional Itô's Formula

We will assume dX_t^i = µ_t^i dt + Σ_{j=1}^d σ_t^{ij} dW_t^j, i = 1, . . . , n (here W¹, . . . , W^d are independent
1-dim Brownian motions).

On a matrix form,
dX_t (n×1) = µ_t (n×1) dt + σ_t (n×d) dW_t (d×1).

Let Zt = f (t, Xt ) where f is C 2 .

Theorem 3.9. (Multi-Dimensional Itô's Formula)

dZ_t = (∂f/∂t) dt + Σ_{i=1}^n (∂f/∂x_i) dX_t^i + (1/2) Σ_{i,j=1}^n (∂²f/∂x_i∂x_j) dX_t^i dX_t^j

where
• (dt)² = 0
• dt dW_t^i = 0
• dW_t^i dW_t^j = 0 if i ≠ j
• (dW_t^i)² = dt

Example:

dX_t = αX_t dt + σX_t dW_t
dY_t = ϕY_t dt + δY_t dV_t

where W and V are two independent Brownian motions, and Z_t = X_t Y_t. Determine dZ_t.

Solution: Itô's formula gives (f(x, y) = xy):

dZ_t = Y_t dX_t + X_t dY_t + (1/2) · 2 · dX_t dY_t = Y_t dX_t + X_t dY_t + dX_t dY_t.

Since W and V are independent, dW_t dV_t = 0, so dX_t dY_t = 0 and

dZ_t = (α + ϕ)Z_t dt + σZ_t dW_t + δZ_t dV_t

(cf. the exercise in the next section).

4 Multi-dimensional Itô Formula and Correlated BMs
Multi-dimensional Itô formula
Assume:
• dX_t^i = µ_t^i dt + Σ_{j=1}^d σ_t^{ij} dW_t^j, i = 1, . . . , n

• Where Wt1 , . . . , Wtd are independent Brownian motions.

On a matrix form:
dXt = µt dt + σt dWt

Let: Zt = f (t, Xt ), where f : [0, ∞) × Rn → R is C 1,2 .

Theorem 4.1. (Itô's Formula, Multi-Dimensional)

dZ_t = (∂f/∂t) dt + Σ_{i=1}^n (∂f/∂x_i) dX_t^i + (1/2) Σ_{i,j=1}^n (∂²f/∂x_i∂x_j) dX_t^i dX_t^j

where:
• dW_t^i dW_t^j = 0 if i ≠ j
• (dW_t^i)² = dt
• (dt)² = dt dW_t^i = 0.

Alternatively:

dZ_t = (∂f/∂t + Σ_{i=1}^n µ_t^i (∂f/∂x_i) + (1/2) Σ_{i,j=1}^n C_t^{ij} (∂²f/∂x_i∂x_j)) dt + Σ_{i=1}^n (∂f/∂x_i) σ_t^i dW_t,

where C_t = σ_t σ_t* (σ* denotes the transpose) and σ^i is the i-th row of σ, i.e. σ_t^i = (σ_t^{i1}, . . . , σ_t^{id}).


Indeed:

dX^i dX^j = (Σ_{k=1}^d σ^{ik} dW^k)(Σ_{l=1}^d σ^{jl} dW^l) = Σ_{k=1}^d σ^{ik} σ^{jk} (dW^k)²
= (Σ_{k=1}^d σ^{ik} σ^{jk}) dt = (σσ*)^{ij} dt = C^{ij} dt.

Exercise: If

dX_t = αX_t dt + σX_t dW_t
dY_t = βY_t dt + τY_t dW̄_t

where W_t and W̄_t are independent Brownian motions, and Z_t = X_t Y_t, find dZ_t.

Solution: Itô's formula gives:

dZ_t = Y_t dX_t + X_t dY_t + dX_t dY_t = (α + β)Z_t dt + σZ_t dW_t + τZ_t dW̄_t.

Now

Ŵ_t = (σW_t + τW̄_t)/√(σ² + τ²)

is a Brownian motion (why?), and

dZ_t = (α + β)Z_t dt + √(σ² + τ²) Z_t dŴ_t.
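A short simulation sketch of this exercise (parameters chosen arbitrarily): simulate X and Y with independent driving noises via Euler-Maruyama and check that E[Z_t] = x_0 y_0 e^{(α+β)t}, as the dt-term predicts.

```python
import numpy as np

alpha, beta, sigma, tau = 0.05, 0.03, 0.2, 0.3
T, n, n_paths = 1.0, 500, 50000
dt = T / n
rng = np.random.default_rng(4)

X = np.full(n_paths, 1.0)
Y = np.full(n_paths, 1.0)
for _ in range(n):
    dW = rng.normal(0, np.sqrt(dt), n_paths)
    dWbar = rng.normal(0, np.sqrt(dt), n_paths)   # independent of dW
    X += alpha * X * dt + sigma * X * dW          # Euler-Maruyama steps
    Y += beta * Y * dt + tau * Y * dWbar

print((X * Y).mean(), np.exp((alpha + beta) * T))  # both ≈ 1.083
```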

Correlated Brownian motions


 
Let W̄_t = (W̄_t¹, . . . , W̄_t^d)ᵀ, where W̄_t¹, . . . , W̄_t^d are independent Brownian motions.

Consider W_t = δW̄_t, where

δ = [ δ_{11} · · · δ_{1d}
      ...
      δ_{d1} · · · δ_{dd} ]

with rows δ_1 = (δ_{11}, . . . , δ_{1d}), . . . , δ_d = (δ_{d1}, . . . , δ_{dd}).

We also assume that ||δ_i|| = √(δ_{i1}² + · · · + δ_{id}²) = 1. Then, using the independence of the W̄^j
(the cross terms have expectation zero),

E[(W_t^i)²] = E[(Σ_{j=1}^d δ^{ij} W̄_t^j)²] = Σ_{j≠k} δ^{ij} δ^{ik} E[W̄_t^j W̄_t^k] + Σ_{j=1}^d (δ^{ij})² E[(W̄_t^j)²]
= (Σ_{j=1}^d (δ^{ij})²) t = t.

So, W_t^i is a Brownian motion.

Moreover,

dW_t^i dW_t^j = (Σ_{k=1}^d δ^{ik} dW̄^k)(Σ_{l=1}^d δ^{jl} dW̄^l) = Σ_{k=1}^d δ^{ik} δ^{jk} dt = (δδ*)^{ij} dt.

Definition 4.2. W_t constructed above is a d-dimensional correlated Wiener process
with correlation matrix ρ = δδ*.

Proposition 4.3. (Itô's Formula, Correlated Wiener Version)

If W_t is a correlated Wiener process as above with correlation matrix ρ, and dX_t = µ_t dt + σ_t dW_t, then Z_t = f(t, X_t) satisfies:

dZ_t = (∂f/∂t) dt + Σ_{i=1}^n (∂f/∂x_i) dX_t^i + (1/2) Σ_{i,j=1}^n (∂²f/∂x_i∂x_j) dX^i dX^j

where:
• (dW_t^i)² = dt
• dW_t^i dW_t^j = ρ^{ij} dt.

Example: Given

W̄_t = [ W̄¹
        W̄² ]   (where W̄¹, W̄² are independent),

construct a correlated Wiener process

W = [ W¹
      W² ]   with correlation matrix   g = [ 1     δ_{21}
                                             δ_{21}   1   ].

Note that

δ = [ 1       0
      δ_{21}   √(1 − δ_{21}²) ]   satisfies   δδ* = g.

Thus

W = [ W̄¹
      δ_{21}W̄¹ + √(1 − δ_{21}²) W̄² ]

is a correlated Wiener process with correlation matrix g.

What other choices for δ are possible? Any

δ = [ b   √(1 − b²)
      a   √(1 − a²) ]

would also work, as long as

δδ* = [ 1                           ab + √(1 − b²)√(1 − a²)
        ab + √(1 − b²)√(1 − a²)     1                      ] = [ 1    δ_0
                                                                 δ_0  1   ],

i.e. as long as ab + √(1 − b²)√(1 − a²) = δ_0.
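A sketch of this construction in code: mix two independent Brownian paths with the rows (1, 0) and (ρ, √(1 − ρ²)) of δ, and check the empirical correlation of the increments.

```python
import numpy as np

rho = 0.6                      # target correlation (the δ21 above)
T, n, n_paths = 1.0, 500, 50000
rng = np.random.default_rng(5)

dWbar1 = rng.normal(0, np.sqrt(T / n), size=(n_paths, n))
dWbar2 = rng.normal(0, np.sqrt(T / n), size=(n_paths, n))

dW1 = dWbar1                                        # row (1, 0) of δ
dW2 = rho * dWbar1 + np.sqrt(1 - rho**2) * dWbar2   # row (ρ, √(1-ρ²))

print(np.corrcoef(dW1.ravel(), dW2.ravel())[0, 1])  # ≈ 0.6
```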

5 Stochastic Differential Equations (SDEs)
Let

• W : a d-dimensional Brownian motion

• µ : [0, ∞) × Rn → Rn

• σ : [0, ∞) × Rn → Rn×d

• x 0 ∈ Rn

Definition 5.1. A stochastic differential equation is an equation of the form

(∗)   dX_t = µ(t, X_t) dt + σ(t, X_t) dW_t,   X_0 = x_0,

or, equivalently:

X_t = x_0 + ∫₀ᵗ µ(s, X_s) ds + ∫₀ᵗ σ(s, X_s) dW_s.

Proposition 5.2. Assume:


1. ||µ(t, x) − µ(t, y)|| + ||σ(t, x) − σ(t, y)|| ≤ K||x − y|| (Lipschitz Condition)
2. ||µ(t, x)|| + ||σ(t, x)|| ≤ K(1 + ||x||) for some K.
Then there exists a unique solution X to the SDE (*). Moreover:
i) X is FtW -adapted;
ii) Xt is continuous in t;
iii) X is a Markov process.

5.1 Geometric Brownian Motion (n = 1)


Consider

dX_t = αX_t dt + σX_t dW_t,   X_0 = x_0,

where α and σ are constants.

Remark 5.3. If σ = 0, then dXt = αXt dt so Xt = x0 eαt .

Let Z_t = ln X_t. Then (Itô's formula)

dZ_t = (1/X_t) dX_t + (1/2)(−1/X_t²)(dX_t)² = α dt + σ dW_t − (1/(2X_t²)) σ²X_t² (dW_t)²
= (α − σ²/2) dt + σ dW_t,

so:
Z_t = ln x_0 + (α − σ²/2)t + σW_t

and
X_t = e^{Z_t} = x_0 e^{(α − σ²/2)t + σW_t}.

Moreover:

E[X_t] = x_0 + E[∫₀ᵗ αX_s ds] + E[∫₀ᵗ σX_s dW_s] = x_0 + ∫₀ᵗ αE[X_s] ds.

So if m(t) := E[X_t], we find that

ṁ(t) = αm(t),   m(0) = x_0   =⇒   m(t) = x_0 e^{αt}.

Result:

The solution of
dX_t = αX_t dt + σX_t dW_t,   X_0 = x_0

is
X_t = x_0 exp{(α − σ²/2)t + σW_t}.

Moreover, E[X_t] = x_0 e^{αt}.
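A sketch verifying the closed-form solution by simulation: sample W_T directly, form X_T from the formula above, and compare the sample mean with x_0 e^{αT}.

```python
import numpy as np

x0, alpha, sigma, T = 100.0, 0.1, 0.3, 1.0
n_paths = 1_000_000
rng = np.random.default_rng(6)

WT = rng.normal(0.0, np.sqrt(T), n_paths)
XT = x0 * np.exp((alpha - sigma**2 / 2) * T + sigma * WT)

print(XT.mean(), x0 * np.exp(alpha * T))   # both ≈ 110.52
```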

5.2 Mean-Reverting Ornstein-Uhlenbeck Process


Example: Ornstein-Uhlenbeck Process

Consider the SDE:

dX_t = −λX_t dt + dW_t,   X_0 = x

(this is a mean-reverting Ornstein-Uhlenbeck process).

Trick: Let Yt := eλt Xt . Then

dYt = eλt dXt + λeλt Xt dt = eλt dWt .

So
Y_t = x + ∫₀ᵗ e^{λs} dW_s.

Thus
X_t = e^{−λt} Y_t = xe^{−λt} + e^{−λt} ∫₀ᵗ e^{λs} dW_s.

Moreover, E[Xt ] = xe−λt .
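A simulation sketch using this exact solution: by the Itô isometry, ∫₀ᵗ e^{λs} dW_s is Gaussian with variance (e^{2λt} − 1)/(2λ), so X_t can be sampled without time-stepping.

```python
import numpy as np

x, lam, t, n_paths = 2.0, 0.8, 1.5, 500000
rng = np.random.default_rng(7)

# ∫₀ᵗ e^{λs} dW_s ~ N(0, (e^{2λt} - 1) / (2λ))   (Itô isometry)
var_I = (np.exp(2 * lam * t) - 1) / (2 * lam)
I = rng.normal(0.0, np.sqrt(var_I), n_paths)
Xt = x * np.exp(-lam * t) + np.exp(-lam * t) * I

print(Xt.mean(), x * np.exp(-lam * t))   # both ≈ 0.602
```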

Definition 5.4. The solution X of an SDE

dX_t = µ(t, X_t) dt + σ(t, X_t) dW_t,   X_0 = x_0

is called a diffusion process. µ is the drift, and σ is the diffusion coefficient.

Remark 5.5. A comment on gBM. Let

X_t = x exp{(α − σ²/2)t + σW_t}.

The expected value is

E[X_t] = xe^{αt}.

Recall Jensen's inequality: E[ϕ(Y)] ≥ ϕ(E[Y]) if ϕ is convex.

Let Y = (α − σ²/2)t + σW_t and ϕ(y) = xe^y:

E[X_t] = E[ϕ(Y)] ≥ ϕ(E[Y]) = xe^{(α − σ²/2)t}.

6 Partial Differential Equations
Consider the following terminal value problem: given functions σ, µ, ϕ, find a function
F(t, x) such that

(∗)   ∂F/∂t (t, x) + (σ²(t, x)/2) ∂²F/∂x² (t, x) + µ(t, x) ∂F/∂x (t, x) = 0,   t < T, x ∈ R,
      F(T, x) = ϕ(x).

If F(t, x) satisfies (∗), define X_s for (t, x) ∈ [0, T] × R (where the PDE holds) by:

dX_s = µ(s, X_s) ds + σ(s, X_s) dW_s,   X_t = x,

and let Zs = F (s, Xs ). Then:

dZ_s = (∂F/∂s) ds + (∂F/∂x) dX_s + (1/2)(∂²F/∂x²)(dX_s)²   (Itô's formula)
= (∂F/∂s + µ (∂F/∂x) + (σ²/2)(∂²F/∂x²)) ds + σ (∂F/∂x) dW_s = 0 + σ (∂F/∂x) dW_s = σ (∂F/∂x) dW_s.

Integrate:

ϕ(X_T) = F(T, X_T) = Z_T = Z_t + ∫ₜᵀ . . . dW_s.

Take expectations:

E[ϕ(X_T)] = E[Z_T] = E[Z_t + ∫ₜᵀ . . . dW_s] = Z_t = F(t, x).

We write

F(t, x) = E[ϕ(X_T)]   where   dX_s = µ(s, X_s) ds + σ(s, X_s) dW_s,   X_t = x.

We have thus proved the following:

Proposition 6.1. (Feynman-Kac)

If F(t, x) satisfies

∂F/∂t + (σ²(t, x)/2) ∂²F/∂x² + µ(t, x) ∂F/∂x = 0,   (t < T),
F(T, x) = ϕ(x),

then F(t, x) = E[ϕ(X_T)], where

dX_s = µ(s, X_s) ds + σ(s, X_s) dW_s,   X_t = x.

Remark 6.2. We write


F (t, x) = Et,x [ϕ (XT )]

to indicate that Xt = x.

Example:

∂F/∂t + (σ²/2) ∂²F/∂x² = 0,   F(T, x) = x²,

where σ is constant.

Let X_s be the solution of

dX_s = σ dW_s,   X_t = x,   i.e.   X_s = x + σ(W_s − W_t).

By Feynman-Kac,

F(t, x) = E_{t,x}[X_T²] = E[(x + σ(W_T − W_t))²] = E[x² + 2xσ(W_T − W_t) + σ²(W_T − W_t)²].

Since E[(W_T − W_t)²] = T − t, we have

F(t, x) = x² + σ²(T − t).

Remark 6.3. Check that F(T, x) = x² and ∂F/∂t + (σ²/2) ∂²F/∂x² = −σ² + (σ²/2) · 2 = 0!

Remark 6.4. A time change t =⇒ T − t gives

∂F/∂t = (σ²/2) ∂²F/∂x²,   F(0, x) = ϕ(x).

This is the heat equation!


Remark 6.5. The terminal value problem (∗) can be solved numerically using methods from
PDEs (for example, using finite differences). Feynman-Kac gives an alternative, namely Monte-
Carlo methods.
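A minimal Monte Carlo sketch of the Feynman-Kac representation for the example above: simulate X_T = x + σ(W_T − W_t), average ϕ(X_T) = X_T², and compare with F(t, x) = x² + σ²(T − t).

```python
import numpy as np

x, sigma, t, T = 1.5, 0.4, 0.0, 2.0
n_paths = 1_000_000
rng = np.random.default_rng(8)

XT = x + sigma * rng.normal(0.0, np.sqrt(T - t), n_paths)
F_mc = (XT**2).mean()                    # E_{t,x}[phi(X_T)], phi(x) = x^2

print(F_mc, x**2 + sigma**2 * (T - t))   # both ≈ 2.57
```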

6.1 Feynman-Kac in Higher Dimensions + Discounting

Proposition 6.6. Assume that F : [0, T] × Rⁿ → R satisfies

∂F/∂t + (1/2) Σ_{i,j=1}^n c_{ij}(t, x) ∂²F/∂x_i∂x_j + Σ_{i=1}^n µ_i(t, x) ∂F/∂x_i − rF(t, x) = 0,
F(T, x) = Φ(x),

where c(t, x) = σ(t, x)σ*(t, x) for some matrix σ (n × d).

Then
F(t, x) = e^{−r(T−t)} E[Φ(X_T) | X_t = x],

where
dX_s = µ(s, X_s) ds + σ(s, X_s) dW_s,   X_t = x.

Proof. Let Z_s = e^{−r(s−t)} F(s, X_s). Then

dZ_s = e^{−r(s−t)} (∂F/∂s + (1/2) Σ_{i,j=1}^n c_{ij} ∂²F/∂x_i∂x_j + Σ_{i=1}^n µ_i ∂F/∂x_i − rF) ds + e^{−r(s−t)} Σ_{i=1}^n (∂F/∂x_i) σ_i · dW_s.

The ds-term vanishes by the PDE, so

Z_T = Z_t + ∫ₜᵀ . . . dW_s.

Integrate and take expectations:

E[Z_T − Z_t] = E[∫ₜᵀ . . . dW_s] = 0,

so, using the terminal condition Z_T = e^{−r(T−t)} F(T, X_T) = e^{−r(T−t)} Φ(X_T),

F(t, x) = Z_t = E[Z_T] = e^{−r(T−t)} E[Φ(X_T) | X_t = x].

Exercise: (deluxe; in the book r = 0)

∂F/∂t + (σ²/2) ∂²F/∂x² + (δ²/2) ∂²F/∂y² − rF = 0,
F(T, x, y) = xy.

Solution: Here

C = [ σ²  0
      0   δ² ],   so   σ = [ σ  0
                             0  δ ]   satisfies C = σσ*.

The dynamics are given by

d[ X       [ σ  0     [ dW¹
   Y ]  =    0  δ ]     dW² ],   so   X_T = x + σ(W_T¹ − W_t¹),   Y_T = y + δ(W_T² − W_t²).

Feynman-Kac gives

F(t, x, y) = e^{−r(T−t)} E[X_T Y_T],

where

E[X_T Y_T] = E[(x + σ(W_T¹ − W_t¹))(y + δ(W_T² − W_t²))].

Expanding this, we get

F(t, x, y) = e^{−r(T−t)} E[xy + xδ(W_T² − W_t²) + yσ(W_T¹ − W_t¹) + σδ(W_T¹ − W_t¹)(W_T² − W_t²)].

By independence of W¹ and W², and the fact that E[W_T^i − W_t^i] = 0, this simplifies to

F(t, x, y) = e^{−r(T−t)} xy.
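The same Monte Carlo idea works here (a sketch, with arbitrary parameters): simulate the two independent increments and average the discounted payoff X_T Y_T.

```python
import numpy as np

x, y, sigma, delta, r, t, T = 2.0, 3.0, 0.5, 0.7, 0.05, 0.0, 1.0
n_paths = 1_000_000
rng = np.random.default_rng(9)

dW1 = rng.normal(0.0, np.sqrt(T - t), n_paths)
dW2 = rng.normal(0.0, np.sqrt(T - t), n_paths)   # independent of dW1
XT, YT = x + sigma * dW1, y + delta * dW2

F_mc = np.exp(-r * (T - t)) * (XT * YT).mean()
print(F_mc, np.exp(-r * (T - t)) * x * y)        # both ≈ 5.71
```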

Remark 6.7. (Notation) The differential operator

A := (1/2) Σ_{i,j=1}^n c_{ij} ∂²/∂x_i∂x_j + Σ_{i=1}^n µ_i ∂/∂x_i

is called the infinitesimal operator.

Itô’s Formula
Proposition 6.8. If Z_t = F(t, X_t), then

dZ_t = (∂F/∂t + AF) dt + Σ_{i=1}^n (∂F/∂x_i) σ_i · dW_t.
