Nearest Neighbor
\[
V_0 = \sup_{\tau \in \mathcal{T}} E\left[ g_{\tau}(X_{\tau}) \right] \tag{1}
\]
where $\tau$ is an $\mathcal{F}$-stopping time taking values in the set $\mathcal{T} = \{0, 1, \ldots, T\}$ and $g_i : \mathbb{R}^d \to \mathbb{R}_+$, $i \in \mathcal{T}$, is a set of measurable functions. It is well known that for this discrete-time pricing problem the value function $V_i$ satisfies the following dynamic programming equations:
\[
V_T(X_T) = g_T(X_T), \tag{2}
\]
\[
V_i(X_i) = \max\left( g_i(X_i),\, C_i(X_i) \right), \qquad i = 0, 1, \ldots, T-1, \tag{3}
\]
where $C_i(X_i) := E\left[ V_{i+1}(X_{i+1}) \mid X_i \right]$ is called the continuation value of the option at $t = i$.
In order to evaluate an estimator $\widehat{V}_0$ for the option price, we first employ the nearest neighbor method to estimate the continuation value function $C_i$ at each exercise opportunity $i$. A suboptimal stopping time is evaluated as
\[
\widehat{\tau} = \min\left\{ i : g_i(X_i) \geq \widehat{C}_i(X_i) \right\},
\]
which gives a lower bound to the true option price.
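As an illustration of how such a lower bound would be computed, the sketch below applies this stopping rule to freshly simulated paths; the array layout, the function names, and the convention that the estimated continuation values are passed in as callables are our own illustrative assumptions, not part of the method's specification.

import numpy as np

def lower_bound_price(paths, payoff, c_hat):
    """Low-biased price estimate obtained from a suboptimal stopping rule.

    paths  : array of shape (n_paths, T + 1, d) of fresh simulated paths X_0, ..., X_T
    payoff : payoff(i, x) -> array of g_i evaluated at the time-i states x
    c_hat  : list of callables, c_hat[i](x) estimates the continuation value C_i;
             exercise is forced at the terminal date T
    """
    n_paths, n_times, _ = paths.shape
    T = n_times - 1
    values = np.empty(n_paths)
    stopped = np.zeros(n_paths, dtype=bool)
    for i in range(T):
        x = paths[:, i, :]
        exercise = payoff(i, x)
        # stop the first time the immediate payoff reaches the estimated
        # continuation value: tau_hat = min{i : g_i(X_i) >= C_hat_i(X_i)}
        stop_now = (~stopped) & (exercise >= c_hat[i](x))
        values[stop_now] = exercise[stop_now]
        stopped |= stop_now
    # paths on which the rule never triggered are exercised at maturity
    values[~stopped] = payoff(T, paths[:, T, :])[~stopped]
    return values.mean()

Because the rule is suboptimal, the resulting estimate is biased low, which is why it is later complemented by the high-biased construction of Section 4.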
3 Main result
We first present the algorithm and then its analysis. In this paper we propose a method to estimate the continuation value function using the nearest neighbor method, as also suggested by Belomestny [2011]. The estimator is given as
\[
\widehat{C}_i(X) = \frac{\sum_{j=1}^{M} g_{i+1}(X_{j,i+1})\, I\left( |X - X_{j,i}| < h \right)}{\sum_{j=1}^{M} I\left( |X - X_{j,i}| < h \right)}. \tag{4}
\]
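A direct sketch of estimator (4) with the indicator (boxcar) kernel is given below; the vectorized NumPy layout and the function names are illustrative choices of ours.

import numpy as np

def continuation_estimate(x, x_curr, x_next, payoff_next, h):
    """Nearest-neighbor (Nadaraya-Watson, boxcar kernel) estimate of C_i(x).

    x           : query point, shape (d,)
    x_curr      : simulated states X_{j,i} at time i, shape (M, d)
    x_next      : simulated states X_{j,i+1} at time i+1, shape (M, d)
    payoff_next : function g_{i+1} mapping an (m, d) array to an (m,) array
    h           : bandwidth of the indicator kernel
    """
    # I(|x - X_{j,i}| < h): keep only sample points within distance h of x
    in_window = np.linalg.norm(x_curr - x, axis=1) < h
    if not in_window.any():
        return np.nan  # no neighbors; in practice a fallback value would be assigned
    # the ratio of local sums in (4) is just a local average of next-step payoffs
    return payoff_next(x_next[in_window]).mean()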
In the estimation literature, estimator (4) is commonly known as the Nadaraya-Watson estimator, and it satisfies the following property.
Lemma 1. Under the assumptions of the Nadaraya-Watson estimator we have
\[
E\left[ \widehat{C}_T(x) \right] - C_T(x) = B(x)\, h^2 + o(h^2).
\]
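For orientation, the classical one-dimensional form of the constant $B(x)$ (a standard textbook expansion for the Nadaraya-Watson estimator under the usual smoothness conditions, quoted here for illustration rather than taken from this paper) is
\[
B(x) = \mu_2(K) \left( \tfrac{1}{2}\, m''(x) + \frac{m'(x)\, p'(x)}{p(x)} \right), \qquad \mu_2(K) = \int u^2 K(u)\, du,
\]
where $m$ is the regression function being estimated (here the continuation value), $p$ is the design density of the covariate, and $K$ is the kernel (a boxcar kernel for the indicator weights used above).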
Using this lemma and a result from Belomestny [2011] we get the following asymptotic bias result.
3.1 Asymptotic bias
Using this continuation value estimator we can find the asymptotic bias of the option price estimator as follows.
3.2 Asymptotic variance
Similarly we can find the variance of the option price estimator.
4 Upper bound
Generate $N$ Stage 1 paths $\{ X_{i,j} : i = 1, \ldots, m,\ j = 1, \ldots, N \}$ to evaluate the estimate of the value function.
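For concreteness, Stage 1 paths could be generated as in the following sketch; geometric Brownian motion under the risk-neutral measure is only an illustrative choice of dynamics (the analysis does not depend on it), and all parameter and function names here are ours.

import numpy as np

def generate_paths(n_paths, n_steps, x0, r, sigma, dt, rng=None):
    """Simulate geometric Brownian motion paths X_{i,j}, i = 0, ..., n_steps.

    Returns an array of shape (n_paths, n_steps + 1, d) with d = len(x0).
    """
    rng = np.random.default_rng(rng)
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x0.size
    paths = np.empty((n_paths, n_steps + 1, d))
    paths[:, 0, :] = x0
    for i in range(1, n_steps + 1):
        z = rng.standard_normal((n_paths, d))
        # exact one-step GBM update under the risk-neutral drift r
        paths[:, i, :] = paths[:, i - 1, :] * np.exp(
            (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
        )
    return paths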
At $t = m$,
\[
\widehat{V}_m(X_{m,j}) = V_m(X_{m,j}) = g(X_{m,j}) \qquad \text{for all } j = 1, \ldots, N.
\]
At $t = m-1$,
\[
\widehat{V}_{m-1}(X_{m-1,j}) = \max\left\{ g(X_{m-1,j}),\, \widehat{C}_{m-1}(X_{m-1,j}) \right\} \qquad \text{for all } j = 1, \ldots, N,
\]
where
\[
\widehat{C}_{m-1}(X_{m-1,j}) = \frac{\sum_{k=1, k\neq j}^{N} \widehat{V}_m(X_{m,k})\, \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}\, \frac{f(X_{m-1,j},\, X_{m,k})}{f(X_{m-1,k},\, X_{m,k})}}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}}.
\]
Now by Jensen's inequality,
\[
\begin{aligned}
E\left[ \widehat{V}_{m-1}(X_{m-1,j}) \mid X_{m-1,j} \right]
&\geq \max\left\{ g(X_{m-1,j}),\, E\left[ \widehat{C}_{m-1}(X_{m-1,j}) \mid X_{m-1,j} \right] \right\} \\
&= \max\left\{ g(X_{m-1,j}),\, E\left[ E\left[ \widehat{C}_{m-1}(X_{m-1,j}) \mid X_{m-1,j}, \{X_{m-1,k}\}_{k=1, k\neq j}^{N} \right] \mid X_{m-1,j} \right] \right\}.
\end{aligned}
\]
We examine the conditional expectation on the right. We have
\[
\begin{aligned}
&E\left[ \widehat{C}_{m-1}(X_{m-1,j}) \mid X_{m-1,j}, \{X_{m-1,k}\}_{k=1, k\neq j}^{N} \right] \\
&\quad= E\left[ \frac{\sum_{k=1, k\neq j}^{N} \widehat{V}_m(X_{m,k})\, \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}\, \frac{f(X_{m-1,j},\, X_{m,k})}{f(X_{m-1,k},\, X_{m,k})}}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}} \;\middle|\; X_{m-1,j}, \{X_{m-1,k}\}_{k=1, k\neq j}^{N} \right] \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}\, E\left[ \widehat{V}_m(X_{m,k})\, \frac{f(X_{m-1,j},\, X_{m,k})}{f(X_{m-1,k},\, X_{m,k})} \;\middle|\; X_{m-1,j}, X_{m-1,k} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}} \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}\, C_{m-1}(X_{m-1,j})}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}}} \\
&\quad= C_{m-1}(X_{m-1,j}),
\end{aligned}
\]
where in the second-to-last equality we use the result
\[
E\left[ \widehat{V}_m(X_{m,k})\, \frac{f(X_{m-1,j},\, X_{m,k})}{f(X_{m-1,k},\, X_{m,k})} \;\middle|\; X_{m-1,j}, X_{m-1,k} \right] = C_{m-1}(X_{m-1,j}).
\]
This holds conditional on $\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}} \neq 0$. When $\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{m-1,k} - X_{m-1,j}| < h\}} = 0$, we assign some value which upper bounds $C_{m-1}(X_{m-1,j})$. Finally we get
\[
E\left[ \widehat{V}_{m-1}(X_{m-1,j}) \mid X_{m-1,j} \right] \geq \max\left\{ g(X_{m-1,j}),\, C_{m-1}(X_{m-1,j}) \right\} = V_{m-1}(X_{m-1,j}).
\]
Let us assume that $E\left[ \widehat{V}_{i+1}(X_{i+1,j}) \mid X_{i+1,j} \right] \geq V_{i+1}(X_{i+1,j})$ for all $j = 1, \ldots, N$. At $t = i$,
\[
\widehat{V}_i(X_{i,j}) = \max\left\{ g(X_{i,j}),\, \widehat{C}_i(X_{i,j}) \right\} \qquad \text{for all } j = 1, \ldots, N,
\]
where
\[
\widehat{C}_i(X_{i,j}) = \frac{\sum_{k=1, k\neq j}^{N} \widehat{V}_{i+1}(X_{i+1,k})\, \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, \frac{f(X_{i,j},\, X_{i+1,k})}{f(X_{i,k},\, X_{i+1,k})}}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}}.
\]
Now by Jensen's inequality,
\[
\begin{aligned}
E\left[ \widehat{V}_i(X_{i,j}) \mid X_{i,j} \right]
&\geq \max\left\{ g(X_{i,j}),\, E\left[ \widehat{C}_i(X_{i,j}) \mid X_{i,j} \right] \right\} \\
&= \max\left\{ g(X_{i,j}),\, E\left[ E\left[ \widehat{C}_i(X_{i,j}) \mid X_{i,j}, \{X_{i,k}\}_{k=1, k\neq j}^{N} \right] \mid X_{i,j} \right] \right\}.
\end{aligned}
\]
Then we have
\[
\begin{aligned}
&E\left[ \widehat{C}_i(X_{i,j}) \mid X_{i,j}, \{X_{i,k}\}_{k=1, k\neq j}^{N} \right] \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, E\left[ \widehat{V}_{i+1}(X_{i+1,k})\, \frac{f(X_{i,j},\, X_{i+1,k})}{f(X_{i,k},\, X_{i+1,k})} \;\middle|\; X_{i,j}, \{X_{i,k}\}_{k=1, k\neq j}^{N} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}} \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, E\left[ \widehat{V}_{i+1}(X_{i+1,k})\, \frac{f(X_{i,j},\, X_{i+1,k})}{f(X_{i,k},\, X_{i+1,k})} \;\middle|\; X_{i,j}, X_{i,k} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}} \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, E\left[ \widehat{V}_{i+1}(X_{i+1,j}) \mid X_{i,j} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}} \\
&\quad= \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, E\left[ E\left[ \widehat{V}_{i+1}(X_{i+1,j}) \mid X_{i+1,j} \right] \mid X_{i,j} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}} \\
&\quad\geq \frac{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}\, E\left[ V_{i+1}(X_{i+1,j}) \mid X_{i,j} \right]}{\sum_{k=1, k\neq j}^{N} \mathbf{1}_{\{|X_{i,k} - X_{i,j}| < h\}}} \\
&\quad= C_i(X_{i,j}),
\end{aligned}
\]
where in the inequality we make use of the induction step at $t = i+1$. In the end we get
\[
E\left[ \widehat{V}_i(X_{i,j}) \mid X_{i,j} \right] \geq \max\left\{ g(X_{i,j}),\, C_i(X_{i,j}) \right\} = V_i(X_{i,j}),
\]
which completes the induction argument.
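The leave-one-out, likelihood-ratio weighted construction used in this induction can be sketched as follows; the transition-density argument, the fallback value when a point has no neighbors, and the array conventions are illustrative assumptions on our part rather than prescriptions of the text.

import numpy as np

def loo_continuation_estimates(x_curr, x_next, v_next, trans_density, h):
    """Leave-one-out, likelihood-ratio weighted nearest-neighbor estimates.

    x_curr        : states X_{i,j} at time i for the N Stage 1 paths, shape (N, d)
    x_next        : states X_{i+1,j} at time i+1, shape (N, d)
    v_next        : current value estimates V_hat_{i+1}(X_{i+1,j}), shape (N,)
    trans_density : trans_density(x, y) -> one-step transition density f(x, y)
    h             : bandwidth
    Returns C_hat_i(X_{i,j}) for each path j, shape (N,).
    """
    N = x_curr.shape[0]
    c_hat = np.empty(N)
    for j in range(N):
        dist = np.linalg.norm(x_curr - x_curr[j], axis=1)
        nbrs = dist < h
        nbrs[j] = False                 # leave path j out of its own estimate
        if not nbrs.any():
            # the text assigns some value that upper bounds C_i; this is one placeholder choice
            c_hat[j] = v_next.max()
            continue
        # likelihood ratio f(X_{i,j}, X_{i+1,k}) / f(X_{i,k}, X_{i+1,k}) re-weights the
        # neighbors' transitions so that they are centered at X_{i,j}
        w = np.array([trans_density(x_curr[j], x_next[k]) /
                      trans_density(x_curr[k], x_next[k]) for k in np.where(nbrs)[0]])
        c_hat[j] = np.sum(w * v_next[nbrs]) / nbrs.sum()
    return c_hat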
5 Likelihood ratio weighted nearest neighbor estimator
We will need the following assumptions for our analysis.
Assumption 1. For $\delta > 0$,
\[
P\left( 0 < |C_l(X_l) - g_l(X_l)| < \delta \right) \leq B_l\, \delta,
\]
and
\[
\left\{ h^{-d}\left( g_l(X_{il})\, \frac{f(X_{l-1},\, X_{il})}{f(X_{i,l-1},\, X_{il})} - C_{l-1}(X_{l-1}) \right) \mathbf{1}_{\{|X_{i,l-1} - X_{l-1}| < h\}} \right\}_{i=1}^{M}
\]
are i.i.d. bounded random variables for all $l = 1, \ldots, L$.
Using Assumption 3 we can prove the following result.
Lemma 2.
\[
P\left( \inf_{x \in A} \frac{1}{M h^d} \sum_{i=1}^{M} \mathbf{1}_{\{|X_{i,l} - x| < h\}} \leq f_0/2 \right) \leq B_0 \exp\left( -B_1 f_0 M h^d \right).
\]
We have seen in the previous section that at $t = L-1$
\[
\widehat{C}_{L-1}(X_{L-1}) = \frac{\sum_{i=1}^{M} g_L(X_{iL})\, \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}}\, \frac{f(X_{L-1},\, X_{iL})}{f(X_{i,L-1},\, X_{iL})}}{\sum_{i=1}^{M} \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}}}
\]
satisfies the property $E\left[ \widehat{C}_{L-1}(X_{L-1}) \mid X_{L-1} \right] = C_{L-1}(X_{L-1})$, using the following result:
\[
E\left[ g_L(X_{iL})\, \frac{f(X_{L-1},\, X_{iL})}{f(X_{i,L-1},\, X_{iL})} \;\middle|\; X_{L-1}, \{X_{i,L-1}\}_{i=1}^{M} \right] = C_{L-1}(X_{L-1}).
\]
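This identity is a change of measure through the likelihood ratio: writing $f(x, \cdot)$ for the one-step transition density of the chain (which is how the weights appear to be used above), and using that, given $X_{i,L-1}$, the next state $X_{iL}$ has density $f(X_{i,L-1}, \cdot)$,
\[
E\left[ g_L(X_{iL})\, \frac{f(X_{L-1},\, X_{iL})}{f(X_{i,L-1},\, X_{iL})} \;\middle|\; X_{L-1}, X_{i,L-1} \right]
= \int g_L(y)\, \frac{f(X_{L-1}, y)}{f(X_{i,L-1}, y)}\, f(X_{i,L-1}, y)\, dy
= \int g_L(y)\, f(X_{L-1}, y)\, dy
= C_{L-1}(X_{L-1}),
\]
so averaging over the neighbors and taking the ratio in $\widehat{C}_{L-1}$ leaves the conditional mean unchanged.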
Next we want to prove the following result on the error in $\widehat{C}_{L-1}(X_{L-1})$.
Lemma 3. For $\epsilon > 0$,
\[
P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2} \right) \leq \exp(-\epsilon^2),
\]
where $\kappa_M = M h^d$.
Proof. Let
\[
\rho(x) := \frac{1}{M h^d} \sum_{i=1}^{M} \mathbf{1}_{\{|X_{i,L-1} - x| < h\}}.
\]
Consider
\[
\begin{aligned}
&P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2} \mid X_{L-1} \right) \\
&\quad\leq P\left( \rho(X_{L-1}) \leq f_0/2 \mid X_{L-1} \right)
+ P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2},\ \rho(X_{L-1}) > f_0/2 \mid X_{L-1} \right).
\end{aligned}
\]
We have
\[
\begin{aligned}
&P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2},\ \rho(X_{L-1}) > f_0/2 \mid X_{L-1} \right) \\
&\quad= P\left( \frac{\frac{1}{M h^d} \sum_{i=1}^{M} \left( g_L(X_{i,L})\, \frac{f(X_{L-1},\, X_{i,L})}{f(X_{i,L-1},\, X_{i,L})} - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}}}{\rho(X_{L-1})} \geq \epsilon\, \kappa_M^{-1/2},\ \rho(X_{L-1}) > f_0/2 \;\middle|\; X_{L-1} \right) \\
&\quad\leq P\left( \frac{1}{M h^d} \sum_{i=1}^{M} \left( g_L(X_{i,L})\, \frac{f(X_{L-1},\, X_{i,L})}{f(X_{i,L-1},\, X_{i,L})} - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}} \geq \epsilon\, \kappa_M^{-1/2}\, f_0/2 \;\middle|\; X_{L-1} \right).
\end{aligned}
\]
For a given value of $X_{L-1}$,
\[
E\left[ \left( g_L(X_{i,L})\, \frac{f(X_{L-1},\, X_{i,L})}{f(X_{i,L-1},\, X_{i,L})} - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}} \right] = 0.
\]
By Assumptions 2 and 4, for some $a, b \in \mathbb{R}$ we get
\[
a \leq \frac{1}{h^d} \left( g_L(X_{i,L})\, \frac{f(X_{L-1},\, X_{i,L})}{f(X_{i,L-1},\, X_{i,L})} - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}} \leq b \qquad \text{for all } i = 1, \ldots, M.
\]
We use Hoeffding's inequality, which states that for independent and identically distributed random variables $X_1, \ldots, X_n$ such that $P(X_i \in [a, b]) = 1$ we have
\[
P\left( \left| \frac{1}{n} \sum_{i=1}^{n} X_i - E[X_i] \right| \geq t \right) \leq 2 \exp\left( -\frac{2 t^2 n}{(b-a)^2} \right),
\]
to show
\[
\begin{aligned}
&P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2},\ \rho(X_{L-1}) > f_0/2 \mid X_{L-1} \right) \\
&\quad\leq P\left( \frac{1}{M h^d} \sum_{i=1}^{M} \left( g_L(X_{i,L})\, \frac{f(X_{L-1},\, X_{i,L})}{f(X_{i,L-1},\, X_{i,L})} - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{|X_{i,L-1} - X_{L-1}| < h\}} \geq \epsilon\, \kappa_M^{-1/2}\, f_0/2 \;\middle|\; X_{L-1} \right) \\
&\quad\leq \exp(-\epsilon^2 B / 4),
\end{aligned}
\]
where $B$ is some constant.
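As a quick illustrative sanity check of the concentration inequality just used (an example entirely of our own choosing, with arbitrary parameters), one can compare the empirical deviation probability of a bounded sample mean with the Hoeffding bound:

import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 200, 0.1, 20_000
a, b = 0.0, 1.0                      # uniform samples are bounded in [a, b]
means = rng.uniform(a, b, size=(trials, n)).mean(axis=1)
empirical = np.mean(np.abs(means - 0.5) >= t)
hoeffding = 2 * np.exp(-2 * t ** 2 * n / (b - a) ** 2)
print(f"empirical {empirical:.4f} <= bound {hoeffding:.4f}")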
Finally, after using Lemma 2 we get
\[
\begin{aligned}
P\left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \geq \epsilon\, \kappa_M^{-1/2} \mid X_{L-1} \right)
&\leq B_0 \exp(-B_1 f_0 M h^d) + \exp(-\epsilon^2 f_0^2 / 4) \\
&\leq B_2 \exp(-\epsilon^2 f_0^2 / 4).
\end{aligned}
\]
As the upper bound is independent of $X_{L-1}$, we have proved our claim.
5.1 Bias of the continuation value estimator
At $t = L-2$ we have
\[
\widehat{C}_{L-2}(X_{L-2}) = \frac{\sum_{i=1}^{M} \max\left\{ g_{L-1}(X_{i,L-1}),\, \widehat{C}_{L-1}(X_{i,L-1}) \right\} \mathbf{1}_{\{|X_{i,L-2} - X_{L-2}| < h\}}\, \frac{f(X_{L-2},\, X_{i,L-1})}{f(X_{i,L-2},\, X_{i,L-1})}}{\sum_{i=1}^{M} \mathbf{1}_{\{|X_{i,L-2} - X_{L-2}| < h\}}}.
\]
By conditioning and using the previous result we conclude that
\[
E\left[ \widehat{C}_{L-2}(X_{L-2}) \mid X_{L-2} \right] = E\left[ \max\left\{ g_{L-1}(X_{L-1}),\, \widehat{C}_{L-1}(X_{L-1}) \right\} \mid X_{L-2} \right].
\]
Also,
\[
C_{L-2}(X_{L-2}) = E\left[ \max\left\{ g_{L-1}(X_{L-1}),\, C_{L-1}(X_{L-1}) \right\} \mid X_{L-2} \right].
\]
Lemma 4. $E\left[ \widehat{C}_{L-2}(X_{L-2}) \mid X_{L-2} \right] - C_{L-2}(X_{L-2}) = O\!\left( \kappa_M^{-1} \right)$ where $\kappa_M = M h^d$.
Proof. We start by defining the sets
\[
E_1 = \left\{ x \in \mathbb{R}^d : g_{L-1}(x) > \widehat{C}_{L-1}(x) \right\}, \qquad
E_2 = \left\{ x \in \mathbb{R}^d : g_{L-1}(x) > C_{L-1}(x) \right\}.
\]
Then
\[
\begin{aligned}
&E\left[ \widehat{C}_{L-2}(X_{L-2}) \mid X_{L-2} \right] - C_{L-2}(X_{L-2}) \\
&\quad= E\left[ \max\left\{ g_{L-1}(X_{L-1}),\, \widehat{C}_{L-1}(X_{L-1}) \right\} - \max\left\{ g_{L-1}(X_{L-1}),\, C_{L-1}(X_{L-1}) \right\} \mid X_{L-2} \right] \\
&\quad= E_{\mathcal{F}_{L-2}}\left[ \left( g_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&\qquad+ E_{\mathcal{F}_{L-2}}\left[ \left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&\qquad+ E_{\mathcal{F}_{L-2}}\left[ \left( \widehat{C}_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{X_{L-1} \in E_2\}} \right].
\end{aligned}
\]
Then,
\[
\begin{aligned}
E_{\mathcal{F}_{L-2}}\left[ \left( g_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right]
&\leq E_{\mathcal{F}_{L-2}}\left[ \left| g_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right| \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&= E_{\mathcal{F}_{L-2}}\left[ \left( C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right].
\end{aligned}
\]
We express $E_2^c$ as follows:
\[
E_2^c = \left\{ x : 0 < C_{L-1}(x) - g_{L-1}(x) \leq \kappa_M^{-1/2} \right\} \cup \bigcup_{j \geq 1} \left\{ x : 2^{j-1} \kappa_M^{-1/2} < C_{L-1}(x) - g_{L-1}(x) \leq 2^{j} \kappa_M^{-1/2} \right\}.
\]
Using the fact that on $E_1 \cap E_2^c$
\[
\left| C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \right| \leq \left| C_{L-1}(X_{L-1}) - \widehat{C}_{L-1}(X_{L-1}) \right|,
\]
we get
\[
\begin{aligned}
&E_{\mathcal{F}_{L-2}}\left[ \left( C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&\quad\leq \kappa_M^{-1/2}\, E_{\mathcal{F}_{L-2}}\left[ \mathbf{1}_{\{0 < C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \leq \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{j \geq 1} 2^{j} \kappa_M^{-1/2}\, E_{\mathcal{F}_{L-2}}\left[ \mathbf{1}_{\{0 < C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \leq 2^{j} \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C_{L-1}(X_{L-1}) - \widehat{C}_{L-1}(X_{L-1})\}} \right] \\
&\quad\leq \frac{1}{\kappa_M} + \sum_{j \geq 1} 2^{j} \kappa_M^{-1/2}\, E_{\mathcal{F}_{L-2}}\left[ \mathbf{1}_{\{0 < C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \leq 2^{j} \kappa_M^{-1/2}\}}\, P_{L-1}\!\left( C_{L-1}(X_{L-1}) - \widehat{C}_{L-1}(X_{L-1}) > 2^{j-1} \kappa_M^{-1/2} \right) \right] \\
&\quad\leq \frac{1}{\kappa_M} + \sum_{j \geq 1} 2^{j} \kappa_M^{-1/2}\, e^{-2^{2j-2}}\, P_{L-2}\!\left( 0 < C_{L-1}(X_{L-1}) - g_{L-1}(X_{L-1}) \leq 2^{j} \kappa_M^{-1/2} \right) \\
&\quad\leq \frac{1}{\kappa_M}\, B,
\end{aligned}
\]
where $B > 0$ is some constant. For the last two inequalities we use the one-sided Hoeffding-type result from Lemma 3 and the boundary assumption in Assumption 1, respectively.
The second term can be split as
\[
\begin{aligned}
&E_{\mathcal{F}_{L-2}}\left[ \left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&\quad= E_{\mathcal{F}_{L-2}}\left[ \left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right]
+ E_{\mathcal{F}_{L-2}}\left[ \left( C_{L-1}(X_{L-1}) - \widehat{C}_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right]. \tag{5}
\end{aligned}
\]
By noting that $E\left[ \widehat{C}_{L-1}(X_{L-1}) \mid X_{L-1} \right] = C_{L-1}(X_{L-1})$, we get
\[
E_{\mathcal{F}_{L-2}}\left[ \left( \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right]
= E_{\mathcal{F}_{L-2}}\left[ E_{\mathcal{F}_{L-1}}\!\left[ \widehat{C}_{L-1}(X_{L-1}) - C_{L-1}(X_{L-1}) \right] \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] = 0.
\]
Next we consider
\[
E_{\mathcal{F}_{L-2}}\left[ \left( C_{L-1}(X_{L-1}) - \widehat{C}_{L-1}(X_{L-1}) \right) \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right].
\]
We can express $E_1$ as follows
\[
E_1 = \left\{ x : 0 < g_{L-1}(x) - \widehat{C}_{L-1}(x) \leq \kappa_M^{-1/2} \right\} \cup \bigcup_{r \geq 1} \left\{ x : 2^{r-1} \kappa_M^{-1/2} < g_{L-1}(x) - \widehat{C}_{L-1}(x) \leq 2^{r} \kappa_M^{-1/2} \right\},
\]
and $E_2^c$ as
\[
E_2^c = \left\{ x : 0 < C_{L-1}(x) - g_{L-1}(x) \leq \kappa_M^{-1/2} \right\} \cup \bigcup_{j \geq 1} \left\{ x : 2^{j-1} \kappa_M^{-1/2} < C_{L-1}(x) - g_{L-1}(x) \leq 2^{j} \kappa_M^{-1/2} \right\}.
\]
Then, writing for brevity $g = g_{L-1}(X_{L-1})$, $C = C_{L-1}(X_{L-1})$ and $\widehat{C} = \widehat{C}_{L-1}(X_{L-1})$, we can write
\[
\begin{aligned}
&E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{X_{L-1} \in E_1\}}\, \mathbf{1}_{\{X_{L-1} \in E_2^c\}} \right] \\
&\quad= E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{0 < g - \widehat{C} \leq \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{j \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{0 < g - \widehat{C} \leq \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C - g \leq 2^{j} \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{r \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{2^{r-1} \kappa_M^{-1/2} < g - \widehat{C} \leq 2^{r} \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{r, j \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{2^{r-1} \kappa_M^{-1/2} < g - \widehat{C} \leq 2^{r} \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C - g \leq 2^{j} \kappa_M^{-1/2}\}} \right] \\
&\quad\leq E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{0 < C - \widehat{C} \leq 2 \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{j \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C - \widehat{C} \leq (2^{j}+1) \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C - g \leq 2^{j} \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{r \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{2^{r-1} \kappa_M^{-1/2} < C - \widehat{C} \leq (2^{r}+1) \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\qquad+ \sum_{r, j \geq 1} E_{\mathcal{F}_{L-2}}\left[ (C - \widehat{C})\, \mathbf{1}_{\{(2^{r-1} + 2^{j-1}) \kappa_M^{-1/2} < C - \widehat{C} \leq (2^{r} + 2^{j}) \kappa_M^{-1/2}\}}\, \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < C - g \leq 2^{j} \kappa_M^{-1/2}\}} \right].
\end{aligned}
\]
Similarly,
\[
\begin{aligned}
E\left[ (\widehat{C} - C)\, \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right]
&\leq E\left[ \left| \widehat{C} - C \right| \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\leq E\left[ \left| \widehat{C} - C \right| \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\leq \kappa_M^{-1/2}\, P\left( 0 < C - g \leq \kappa_M^{-1/2} \right)
+ \sum_{j \geq 1} 2^{j} \kappa_M^{-1/2}\, E\left[ \mathbf{1}_{\{2^{j-1} \kappa_M^{-1/2} < |C - \widehat{C}|\}}\, \mathbf{1}_{\{0 < C - g \leq \kappa_M^{-1/2}\}} \right] \\
&\leq \frac{1}{\kappa_M} + \sum_{j \geq 1} 2^{j} \kappa_M^{-1/2}\, e^{-2^{2j-2}}\, P\left( 0 < C - g \leq \kappa_M^{-1/2} \right) \\
&\leq B\, \frac{1}{\kappa_M}.
\end{aligned}
\]
For the other term in equation (5) we proceed as follows:
\[
\begin{aligned}
E\left[ (\widehat{C} - C)\, \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{\kappa_M^{-1/2} < C - g\}} \right]
&\leq E\left[ (\widehat{C} - C)\, \mathbf{1}_{\{X_{L-1} \in E_1^c\}} \right]
+ E\left[ (\widehat{C} - C)\, \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{C - g < \kappa_M^{-1/2}\}} \right] \\
&\leq E\left[ (\widehat{C} - g)\, \mathbf{1}_{\{X_{L-1} \in E_1^c\}}\, \mathbf{1}_{\{X_{L-1} \in E_2\}} \right]
\leq B\, \frac{1}{\kappa_M}.
\end{aligned}
\]
Hence, we get
\[
E\left[ \widehat{C}_{L-2}(X_{L-2}) \mid X_{L-2} \right] - C_{L-2}(X_{L-2}) = O\!\left( \kappa_M^{-1} \right).
\]
Lemma 5. For $\epsilon > 0$,
\[
P\left( \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) \geq \epsilon\, \kappa_M^{-1/2} \right) \leq \exp(-\epsilon^2),
\]
where $\kappa_M = M h^d$.
Proof. We denote $E\left[ \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) \mid X_{L-2} \right] =: \mu_{L-2}(X_{L-2})$. Then
\[
\begin{aligned}
P\left( \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) \geq \epsilon\, \kappa_M^{-1/2} \mid X_{L-2} \right)
&= P\left( \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) - \mu_{L-2}(X_{L-2}) + \mu_{L-2}(X_{L-2}) \geq \epsilon\, \kappa_M^{-1/2} \mid X_{L-2} \right) \\
&\leq P\left( \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) - \mu_{L-2}(X_{L-2}) \geq \epsilon\, \kappa_M^{-1/2} - \mu_{L-2}(X_{L-2}) \mid X_{L-2} \right) \\
&\leq P\left( \widehat{C}_{L-2}(X_{L-2}) - C_{L-2}(X_{L-2}) - \mu_{L-2}(X_{L-2}) \geq \tfrac{\epsilon}{2}\, \kappa_M^{-1/2} \mid X_{L-2} \right),
\end{aligned}
\]
where in the last inequality we make use of the result in Lemma 4 that $\mu_{L-2}(X_{L-2}) = O\!\left( \kappa_M^{-1} \right)$. We can proceed as in Lemma 3 to obtain the final result.
Proposition 1. $V_0 - E\left[ \widehat{V}_0 \right] = O\!\left( \kappa_M^{-1} \right)$ where $\kappa_M = M h^d$.
Proof. The proof follows the steps in Belomestny [2011], with the continuation value error probability bounds replaced by our results.
6 Variance calculation
The low-bias option price estimator from the two-stage method is given as
\[
\widehat{V} = \frac{1}{N} \sum_{i=1}^{N} g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}).
\]
Then
\[
\mathrm{Var}\!\left( \widehat{V} \right) = \frac{1}{N}\, \mathrm{Var}\!\left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) \right) + \left( 1 - \frac{1}{N} \right) \mathrm{Cov}\!\left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}),\, g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right).
\]
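In practice both contributions (the per-path variance and the covariance term induced by sharing one set of estimated continuation values across the Stage 2 paths) can be captured by repeating the whole two-stage procedure; the sketch below, with hypothetical helper names, is one way to do this and is not prescribed by the text.

import numpy as np

def two_stage_estimate_with_se(run_two_stage, n_reps=30, rng=None):
    """Estimate the two-stage price and its standard error by macro-replication.

    run_two_stage : callable performing one full two-stage run (Stage 1 estimation of
                    the continuation values, then Stage 2 evaluation of V_hat) and
                    returning the resulting price estimate; assumed to draw fresh
                    randomness from the supplied generator on every call.
    """
    rng = np.random.default_rng(rng)
    estimates = np.array([run_two_stage(rng) for _ in range(n_reps)])
    # the replication variance automatically includes the covariance term above,
    # because each replication shares its Stage 1 estimate across its own Stage 2 paths
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_reps)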
It can be shown that
\[
\frac{1}{N}\, \mathrm{Var}\!\left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) \right) = \frac{1}{N}\, \mathrm{Var}\!\left( g_{\tau_i}(X_{\tau_i}) \right) + \frac{o(1)}{N}.
\]
Next we consider
\[
\begin{aligned}
\mathrm{Cov}\!\left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}),\, g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right)
&= E\left[ \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\tau_i}(X_{\tau_i}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \right] \\
&\quad+ E\left[ g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - E\left[ g_{\tau_i}(X_{\tau_i}) \right] \right] E\left[ g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right].
\end{aligned}
\]
The second term in the above expression can be calculated using the results of the bias calculations. We focus on the following term:
\[
E\left[ \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\tau_i}(X_{\tau_i}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \right].
\]
We first evaluate the expression at $t = L-1$:
\[
\begin{aligned}
&E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\tau_i}(X_{\tau_i}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \right] \\
&\quad= E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( g_{L-1}(X_{i,L-1}) - g_L(X_{iL}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \mathbf{1}_{\{\widehat{\tau}_i = L-1,\, \tau_i = L\}}\, \mathbf{1}_{\{\widehat{\tau}_j = \tau_j\}} \right] \\
&\qquad+ E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( g_L(X_{iL}) - g_{L-1}(X_{i,L-1}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \mathbf{1}_{\{\widehat{\tau}_i = L,\, \tau_i = L-1\}}\, \mathbf{1}_{\{\widehat{\tau}_j = \tau_j\}} \right].
\end{aligned}
\]
Furthermore,
\[
\begin{aligned}
&E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( g_{L-1}(X_{i,L-1}) - g_L(X_{iL}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \mathbf{1}_{\{\widehat{\tau}_i = L-1,\, \tau_i = L\}}\, \mathbf{1}_{\{\widehat{\tau}_j = \tau_j\}} \right] \\
&\quad= E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( g_{L-1}(X_{i,L-1}) - C_{L-1}(X_{i,L-1}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \mathbf{1}_{\{\widehat{\tau}_i = L-1,\, \tau_i = L\}}\, \mathbf{1}_{\{\widehat{\tau}_j = \tau_j\}} \right] \\
&\qquad+ E_{\mathcal{F}_{t_{L-1}}}\!\left[ \left( C_{L-1}(X_{i,L-1}) - g_L(X_{iL}) \right) \left( g_{\widehat{\tau}_i}(X_{\widehat{\tau}_i}) - g_{\widehat{\tau}_j}(X_{\widehat{\tau}_j}) \right) \mathbf{1}_{\{\widehat{\tau}_i = L-1,\, \tau_i = L\}}\, \mathbf{1}_{\{\widehat{\tau}_j = \tau_j\}} \right].
\end{aligned}
\]
7 Numerical experiments
8 Conclusion
References
L. Andersen. A simple approach to the pricing of Bermudan swaptions in the multi-factor LIBOR market model. Available at SSRN 155208, 1999.
L. Andersen and M. Broadie. Primal-dual simulation algorithm for pricing multidimensional American options. Management Science, pages 1222-1234, 2004.
D. Belomestny. Pricing Bermudan options by nonparametric regression: optimal rates of convergence for lower estimates. Finance and Stochastics, 15(4):655, 2011.
P. Boyle. Options: A Monte Carlo approach. Journal of Financial Economics, 4(3):323-338, 1977.
M. Broadie and P. Glasserman. Pricing American-style securities using simulation. Journal of Economic Dynamics and Control, 21(8-9):1323-1352, 1997.
J. Carriere. Valuation of the early-exercise price for options using simulations and nonparametric regression. Insurance: Mathematics and Economics, 19(1):19-30, 1996.
M. Haugh and L. Kogan. Pricing American options: a duality approach. Operations Research, pages 258-270, 2004.
M. Kohler, A. Krzyzak, and N. Todorovic. Pricing of high-dimensional American options by neural networks. Mathematical Finance, 20(3):383-410, 2010.
F. Longstaff and E. Schwartz. Valuing American options by simulation: A simple least-squares approach. Review of Financial Studies, 14(1):113-147, 2001.
J. Tsitsiklis and B. Van Roy. Regression methods for pricing complex American-style options. IEEE Transactions on Neural Networks, 12(4):694-703, 2001.
Appendix