\frac{r_{t+1} - r_t}{r_t} = \frac{a_1}{r_t} + b_1 \left( \frac{l_t}{r_t} - 1 \right) + e_{1t} ,

\frac{l_{t+1} - l_t}{l_t} = a_2 + b_2 r_t + c_2 l_t + e_{2t} ,   (7)
where e_{1t} and e_{2t} are normally distributed bivariate random variables with standard deviations σ_1 and σ_2, respectively, and correlation coefficient ρ. As discussed below, the above continuous model is only well defined in the differential limit of these discrete equations.
These equations imply that the short-term rate converges towards the long-term rate (b_1 > 0). A result found in part of the BS study is that a_1 < 0, i.e., the short-term rate could be negative if not subject to an additional external boundary condition. This is a potentially serious problem for this model. BS used the iterative Aitken procedure [13] to estimate the parameters of the model. They found a_1 to be negative (the long-term rate has to be larger than 2% to avoid negative short-term rates). The correlation coefficient ρ and the coefficients a_1 and b_1 were unstable. In addition, BS found a negative serial correlation in the error terms e_{1t} and e_{2t}, which led them to believe that additional state variables should be added to the model.
Using methods of stochastic calculus [5], BS further derived a partial differential equation for bond
prices as the maturity date is approached.
\frac{\partial B}{\partial \tau} = \left( -r + f^r \frac{\partial}{\partial r} + f^l \frac{\partial}{\partial l} + g^{rr} \frac{\partial^2}{\partial r^2} + g^{rl} \frac{\partial^2}{\partial r \, \partial l} + g^{ll} \frac{\partial^2}{\partial l^2} \right) B \equiv AB ,   (8)

where the coefficients {f, g} depend on r and l, τ = T − t for t calendar time and T the time of maturity, and A can be considered as a differential operator on B.
It may help to appreciate the importance of the BS methodology by discretizing the above partial differential equation for B in a mean-value limit. That is, at a given calendar time t indexed by s, noting that ∂/∂τ = −∂/∂t, take
0 \approx f^r \frac{\partial B_s}{\partial r} \approx f^l \frac{\partial B_s}{\partial l} ,

0 \approx g^{rr} \frac{\partial^2 B_s}{\partial r^2} \approx g^{rl} \frac{\partial^2 B_s}{\partial r \, \partial l} \approx g^{ll} \frac{\partial^2 B_s}{\partial l^2} ,

B_s - B_{s+1} \approx -r_s B_s .   (9)
This yields the popular expectations-hypothesis spot-interest estimate of bond prices, working backwards
from maturity,
B_s \approx (1 + r_s)^{-1} B_{s+1} .   (10)
The important generalization afforded by BS is to include information about r and l and treat them
as stochastic variables with drifts and diffusions. Then, this discretized treatment yields
B_{s;rl} \approx (1 - A_s)^{-1}_{rl;r'l'} B_{s+1;r'l'} ,   (11)
where the operator inverse of the differential operator A has been formally written, and its dependence on intermediate values of r and l has been explicitly portrayed. Their discretized calculation of their partial differential equation, and our discretized calculation of the path-integral representation of this model, essentially are mathematical and numerical methods of calculating this evolution of B_s.
In this paper we present an alternative methodology, very fast simulated re-annealing (VFSR) [14], to compute the parameters of the BS model. It is also shown that the VFSR methodology is capable of handling more complicated n-factor nonlinear models.
The advantages of using the simulated annealing methodology are: (1) Global minima in parameter space are relatively more certain than with regression fitting. (2) All parameters, including parameters in the noise, are treated simultaneously and equally in the fits, i.e., different statistical methods are not being used to estimate the deterministic parameters and then the noise parameters. (3) Boundary conditions on the variables can be explicitly included in the fitting process, a process not included in standard regression fits. (4) We can efficiently extend our methodology to develop 3-state and higher models, including higher-order nonlinearities.
We also present an alternative method of calculating the evolution of bond prices. Our particular non-Monte Carlo path-integral technique has proven to be extremely accurate and efficient for a variety of nonlinear systems [15,16]. The method of path integration is more accurate and efficient for calculating the evolution of B as a function of the stochastic variables r and l. To mention a few advantages: (1) A variable mesh is calculated in terms of the underlying nonlinearities. (2) Initial conditions and boundary conditions typically are more easily implemented with integral, rather than with differential, equations, e.g., by using the method of images. (3) Integration is inherently a "smoothing" process, whereas differentiation is a "sharpening" process. This means that we can handle stiff and nonlinear problems with more ease.
We also comment below on how our methodology can be applied to other future-price models under development [17-19].
In Section 2, we give a brief theoretical description of mathematically equivalent representations of multivariate stochastic systems. These methods, developed by mathematical physicists within the last decade in the context of statistical mechanics, permit the introduction of even more recently developed numerical algorithms. They were proposed for financial systems in a previous paper [20].
In Section 3, we apply this formalism to specific computations of the BS model. Section 4 gives our numerical results. Section 5 presents our conclusions. Appendix A gives a derivation of the stochastic calculus used.
2. DEVELOPMENT OF MATHEMATICAL METHODOLOGY
2.1. Background
"Aggregation" problems in nonlinear nonequilibrium systems typically are solved (accommodated) by having new entities/languages developed at these disparate scales in order to efficiently pass information back and forth [21,22]. This is quite different from the nature of quasi-equilibrium quasi-linear systems, where thermodynamic or cybernetic approaches are possible. These approaches typically fail for nonequilibrium nonlinear systems.
In the late 1970s, mathematical physicists discovered that they could develop statistical mechanical theories from algebraic functional forms

\frac{dr}{dt} = f^r(r, l) + \sum_i \hat{g}^r_i(r, l) \, \eta^i ,

\frac{dl}{dt} = f^l(r, l) + \sum_i \hat{g}^l_i(r, l) \, \eta^i ,   (12)
where the ĝ's and f's are general nonlinear algebraic functions of the variables r and l. These equations represent differential limits of discretized stochastic difference equations, e.g., with Wiener noise dW = η dt [23]. The resulting stochastic differential equations (s.d.e.'s) are referred to as Langevin equations [23-28]. The f's are referred to as the (deterministic) drifts, and the squares of the ĝ's are related to the diffusions (fluctuations or volatilities). In fact, the statistical mechanics can be developed for any number of variables, not just two. The η's are sources of Gaussian-Markovian noise, often referred to as "white" noise. The inclusion of the ĝ's, called "multiplicative" noise, recently has been shown to very well mathematically and physically model other forms of noise, e.g., shot noise, colored noise, dichotomic noise [29-32]. Finite-jump diffusions also can be included [33].
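To make the discretized meaning of Eq. (12) concrete, the following minimal sketch (ours, not code from the paper or its references) integrates a two-variable Langevin system with multiplicative noise by the prepoint (Euler-Maruyama) rule; the particular drift and diffusion functions are hypothetical placeholders for the general nonlinear f's and ĝ's.

import numpy as np

def f_r(r, l):            # deterministic drift of r (placeholder form)
    return 0.01 + 0.1 * (l - r)

def f_l(r, l):            # deterministic drift of l (placeholder form)
    return l * (0.01 - 0.02 * l)

def g_r(r, l):            # multiplicative diffusion coefficient of r
    return 0.05 * r

def g_l(r, l):            # multiplicative diffusion coefficient of l
    return 0.05 * l

def euler_maruyama(r0, l0, dt, n_steps, rng):
    """Prepoint-discretized stochastic difference equations whose
    differential limit is a Langevin system of the type of Eq. (12)."""
    r, l = np.empty(n_steps + 1), np.empty(n_steps + 1)
    r[0], l[0] = r0, l0
    for s in range(n_steps):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)   # Wiener increments
        r[s + 1] = r[s] + f_r(r[s], l[s]) * dt + g_r(r[s], l[s]) * dW1
        l[s + 1] = l[s] + f_l(r[s], l[s]) * dt + g_l(r[s], l[s]) * dW2
    return r, l

r_path, l_path = euler_maruyama(0.05, 0.07, dt=1.0, n_steps=250,
                                rng=np.random.default_rng(0))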
These new methods of nonlinear statistical mechanics only recently have been applied to complex large-scale physical problems, demonstrating that observed data can be described by the use of these algebraic functional forms. Success was gained for large-scale systems in neuroscience, in a series of papers on statistical mechanics of neocortical interactions [34-38], and in nuclear physics [39,40]. This methodology has been used for problems in combat analyses [16,22,41,42]. These methods were suggested for financial markets [20], and this paper is an application of that approach to estimating term structure models.
Thus, now we can investigate various choices of f's and ĝ's to test algebraic functional forms. In science, this is a standard phenomenological approach to discovering and encoding knowledge and observed data, i.e., fitting algebraic functional forms which lend themselves to empirical interpretation. This gives more confidence when extrapolating to new scenarios, exactly the issue in building confidence in financial models.
The utility of these algebraic functional forms goes further beyond their being able to fit sets of data. There is an equivalent representation to the Langevin equations, called a "path-integral" representation for the long-time probability distribution of the variables. This short-time probability distribution is driven by a "Lagrangian," which can be thought of as a dynamic algebraic "cost function." The path-integral representation for the long-time distribution possesses a variational principle, which means that simple graphs of the algebraic cost function give a correct intuitive view of the most likely states of the variables, and of their statistical moments, e.g., heights being first moments (likely states) and widths being second moments (uncertainties). Like a ball bouncing about a terrain of hills and valleys, one can quickly visualize the nature of dynamically unfolding r and l states.
Especially because we are trying to mathematically model sparse and poor data, different drift and diffusion algebraic functions can give approximately the same algebraic cost function when fitting short-time probability distributions to data. The calculation of long-time distributions permits a clear choice of the best algebraic functions, i.e., those which best follow the data through a predetermined long epoch of trading. Afterwards, if there are closely competitive algebraic functions, they can be more precisely assessed by calculating higher algebraic correlation functions from the probability distribution.
As discussed previously, the mathematical representation most familiar to other modelers is a system of stochastic rate equations, often referred to as Langevin equations. From the Langevin equations, other models may be derived, such as the time-series model and the Kalman filter method of control theory. However, in the process of this transformation, the Markovian description typically is lost by projection onto a smaller state space [43,44].
2.2. Fitting Parameters
For example, for Θ variables, these coupled stochastic differential equations can be represented equivalently by a short-time conditional probability distribution, P, in terms of the Lagrangian, L:

P \approx g^{1/2} (2\pi \Delta t)^{-\Theta/2} \exp(-L \Delta t) .   (13)

The form of the Lagrangian, L, and the determinant of the metric, g, is

L = \frac{1}{2} \left( \frac{dM^G}{dt} - g^G \right) g_{GG'} \left( \frac{dM^{G'}}{dt} - g^{G'} \right) ,

g = \det(g_{GG'}) , \quad (g_{GG'}) = (g^{GG'})^{-1} , \quad g^{GG'} = \sum_i \hat{g}^G_i \hat{g}^{G'}_i ,   (14)
where G and G' run over all variables. Here, the prepoint discretization is used, which hides the Riemannian corrections explicit in the midpoint-discretized Feynman Lagrangian; only the latter representation possesses a variational principle useful for arbitrary noise [20,24].
This defines a scalar "dynamic cost function," C, in terms of the parameters, e.g., generically identified by {α},

C(\alpha) = L \Delta t + \frac{\Theta}{2} \ln(2\pi \Delta t) - \frac{1}{2} \ln g ,   (15)

which can be used with the VFSR algorithm, discussed below [14], to find the (statistically) best fit of the parameters α to the data. The cost function for a given system is obtained by the product of P's over all data epochs, i.e., a sum of C's is obtained. In the actual VFSR code, C is normalized by dividing by the number of epochs. Then, since we essentially are performing a maximum-likelihood fit, the cost functions obtained from somewhat different theories or data can provide a relative statistical measure of their likelihood, e.g., P_{12} ≈ exp(C_2 − C_1).
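As a minimal sketch of how Eqs. (13)-(15) translate into a computable cost function, consider one variable with placeholder algebraic drift and covariance (the functional forms and parameter names below are our illustrative assumptions, not the BS model):

import numpy as np

def cost(params, M, dt):
    """Sum of C = L*dt + (1/2)ln(2*pi*dt) - (1/2)ln(g) over all short-time
    epochs of the series M, normalized by the number of epochs; the metric
    g is the inverse of the covariance, so -(1/2)ln(g) = +(1/2)ln(cov)."""
    a, b, sigma = params
    M0, M1 = M[:-1], M[1:]
    drift = a + b * M0                       # placeholder drift g^G
    cov = (sigma * M0) ** 2                  # placeholder covariance g^{GG'}
    dMdt = (M1 - M0) / dt
    L = 0.5 * (dMdt - drift) ** 2 / cov      # prepoint Lagrangian, Eq. (14)
    C = L * dt + 0.5 * np.log(2 * np.pi * dt) + 0.5 * np.log(cov)
    return np.mean(C)                        # normalized over epochs

Minimizing this scalar over (a, b, sigma) is the maximum-likelihood fit described above; note that the noise parameter enters the fit on the same footing as the drift parameters.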
If there are competing mathematical forms, then it is advantageous to utilize the path integral to calculate the long-time evolution of P [16,22]. Experience has demonstrated that the long-time correlations derived from theory, measured against the observed data, are a viable and expedient way of rejecting models not in accord with observed evidence.
Note that the use of the path integral is a posteriori to and independent of the short-time fitting process, and is a subsidiary physical constraint on the mathematical models to judge their internal soundness and suitability for attempts to extrapolate to other trading scenarios.
2.3. Algebraic Complexity Yields Simple Intuitive Results
Consider a multivariate system with a variance that is a general nonlinear function of the variables. The Einstein summation convention helps to compact the equations, whereby repeated indices in factors are to be summed over.
The Ito (prepoint) discretization for a system of stochastic differential equations is defined by

\bar{t}_s \in [t_s, t_s + \Delta t] = [t_s, t_{s+1}] ,

M(\bar{t}_s) = M(t_s) ,

\frac{dM(\bar{t}_s)}{dt} = \frac{M(t_{s+1}) - M(t_s)}{\Delta t} .   (16)
The stochastic equations are then written as

\frac{dM^G}{dt} = f^G + \hat{g}^G_i \eta^i ,

i = 1, \ldots, N ,

G = 1, \ldots, \Theta .   (17)
The operator ordering (of the ∂/∂M^G operators) in the Fokker-Planck equation corresponding to this discretization is

\frac{\partial P}{\partial t} = VP + \frac{\partial (-g^G P)}{\partial M^G} + \frac{1}{2} \frac{\partial^2 (g^{GG'} P)}{\partial M^G \, \partial M^{G'}} ,

g^G = f^G + \frac{1}{2} \hat{g}^{G'}_i \frac{\partial \hat{g}^G_i}{\partial M^{G'}} ,

g^{GG'} = \hat{g}^G_i \hat{g}^{G'}_i ,   (18)
where a potential V is present in some systems.
The Lagrangian corresponding to this Fokker-Planck equation and set of Langevin equations may be written in the Stratonovich (midpoint) representation, corresponding to

M(\bar{t}_s) = \frac{1}{2} [ M(t_{s+1}) + M(t_s) ] .   (19)

This discretization can be used to define a Feynman Lagrangian L which possesses a variational principle, and which explicitly portrays the underlying Riemannian geometry induced by the metric tensor g_{GG'}, calculated to be the inverse of the covariance matrix [20]. More details are given in Appendix A and in another paper on this subject [45].
P = \int \cdots \int DM \exp\left( -\sum_{s=0}^{u} \Delta t \, L_s \right) ,

DM = g^{1/2}_{0+} (2\pi \Delta t)^{-\Theta/2} \prod_{s=1}^{u} g^{1/2}_{s+} \prod_{G=1}^{\Theta} (2\pi \Delta t)^{-1/2} dM^G_s ,

\int dM^G_s \to \sum_{\iota=1}^{N^G} \Delta M^G_{\iota s} , \quad M^G_0 = M^G_{t_0} , \quad M^G_{u+1} = M^G_t ,

L = \frac{1}{2} \left( \frac{dM^G}{dt} - h^G \right) g_{GG'} \left( \frac{dM^{G'}}{dt} - h^{G'} \right) + \frac{1}{2} h^G_{;G} + \frac{R}{6} - V ,

[\ldots]_{,G} = \frac{\partial [\ldots]}{\partial M^G} ,

h^G = g^G - \frac{1}{2} g^{-1/2} ( g^{1/2} g^{GG'} )_{,G'} ,

g_{GG'} = (g^{GG'})^{-1} ,

g_s[M^G(\bar{t}_s), t_s] = \det(g_{GG'})_s , \quad g_{s+} = g_s[M^G_{s+1}, t_s] ,

h^G_{;G} = h^G_{,G} + \Gamma^F_{GF} h^G = g^{-1/2} ( g^{1/2} h^G )_{,G} ,

\Gamma^F_{JK} \equiv g^{LF} [JK, L] = g^{LF} ( g_{JL,K} + g_{KL,J} - g_{JK,L} ) ,

R = g^{JL} R_{JL} = g^{JL} g^{FK} R_{FJKL} ,

R_{FJKL} = \frac{1}{2} ( g_{FK,JL} - g_{JK,FL} - g_{FL,JK} + g_{JL,FK} ) + g_{MN} ( \Gamma^M_{FK} \Gamma^N_{JL} - \Gamma^M_{FL} \Gamma^N_{JK} ) ,   (20)
where R is the Riemannian curvature, and we also have explicitly noted the discretization in the mesh of M^G_{ιs} by ι, to be discussed further below.
A prepoint discretization for the same probability distribution P gives a much simpler algebraic form,

M(\bar{t}_s) = M(t_s) ,

L = \frac{1}{2} \left( \frac{dM^G}{dt} - g^G \right) g_{GG'} \left( \frac{dM^{G'}}{dt} - g^{G'} \right) - V ,   (21)

but the Lagrangian L so specified does not satisfy a variational principle useful for moderate to large noise. Still, this prepoint-discretized form has been quite useful in all systems examined thus far, simply requiring a somewhat finer numerical mesh.
It must be emphasized that the output need not be confined to complex algebraic forms or tables of numbers. Because L possesses a variational principle, sets of contour graphs, at different long-time epochs of the path-integral of P over its variables at all intermediate times, give a visually intuitive and accurate decision aid to view the dynamic evolution of the scenario. For example, this Lagrangian approach permits a quantitative assessment of concepts usually only loosely defined,

\text{Momentum} = \Pi^G = \frac{\partial L}{\partial (\partial M^G / \partial t)} ,

\text{Mass} = g_{GG'} = \frac{\partial^2 L}{\partial (\partial M^G / \partial t) \, \partial (\partial M^{G'} / \partial t)} ,

\text{Force} = \frac{\partial L}{\partial M^G} ,

F = ma{:}\quad \delta L = 0 = \frac{\partial L}{\partial M^G} - \frac{\partial}{\partial t} \frac{\partial L}{\partial (\partial M^G / \partial t)} ,   (22)

where M^G are the variables and L is the Lagrangian. These physical entities provide another form of intuitive, but quantitatively precise, presentation of these analyses. For example, daily newspapers use this terminology to discuss the movement of security prices.
2.4. Numerical Methodology
Recently, two major computer codes have been developed, which are key tools for the use of this approach to estimate model parameters and price bonds.
The first code, very fast simulated re-annealing (VFSR) [14], fits short-time probability distributions to observed data, using a maximum-likelihood technique on the Lagrangian. An algorithm of very fast simulated re-annealing has been developed to fit observed data to a theoretical cost function over a D-dimensional parameter space [14], adapting for varying sensitivities of parameters during the fit. The annealing schedule for the "temperatures" (artificial fluctuation parameters) T_i decreases exponentially in "time" (cycle number of the iterative process) k, i.e., T_i = T_{i0} exp(−c_i k^{1/D}).
Heuristic arguments have been developed to demonstrate that this algorithm is faster than the fast Cauchy annealing [46], T_i = T_0/k, and much faster than Boltzmann annealing [47], T_i = T_0/ln k. To be more specific, the kth estimate of parameter α^i,

\alpha^i_k \in [A_i, B_i] ,   (23)

is used with the random variable x^i to get the (k+1)th estimate,

\alpha^i_{k+1} = \alpha^i_k + x^i (B_i - A_i) ,

x^i \in [-1, 1] .   (24)
The generating function is defined as

g_T(x) = \prod_{i=1}^{D} \frac{1}{2 \ln(1 + 1/T_i) \, (|x^i| + T_i)} \equiv \prod_{i=1}^{D} g^i_T(x^i) ,

T_i = T_{i0} \exp(-c_i k^{1/D}) .   (25)
Note that the use of C, the cost function given above, is not equivalent to doing a simple least-squares fit on M(t + Δt).
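A compact sketch of the VFSR loop follows. The new trial point is drawn from the generating distribution of Eq. (25) by the inverse-CDF formula y = sgn(u − 1/2) T [(1 + 1/T)^{|2u−1|} − 1], u uniform; that sampling formula, the use of a single scalar temperature, and a Metropolis acceptance rule tied to the same schedule are simplifying assumptions of this sketch (the paper anneals each of the D parameter dimensions separately).

import numpy as np

def vfsr_minimize(cost, lo, hi, T0=1.0, c=1.0, n_cycles=1000, rng=None):
    """Minimize a scalar cost over the box [lo, hi]^D with a VFSR-style schedule."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    D = len(lo)
    x = lo + rng.random(D) * (hi - lo)               # current point
    fx = cost(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_cycles + 1):
        T = T0 * np.exp(-c * k ** (1.0 / D))         # schedule of Eq. (25)
        u = rng.random(D)
        y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        trial = np.clip(x + y * (hi - lo), lo, hi)   # update of Eq. (24)
        ft = cost(trial)
        if ft < fx or rng.random() < np.exp(-(ft - fx) / T):
            x, fx = trial, ft                        # accept the move
            if fx < fbest:
                best, fbest = x.copy(), fx           # track the global best
    return best, fbest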
The second code develops the long-time probability distribution from the Lagrangian fit by the first code. A robust and accurate histogram-based (non-Monte Carlo) path-integral algorithm to calculate the long-time probability distribution has been developed to handle nonlinear Lagrangians [15,16,48,49], including a two-variable code for additive and multiplicative cases.
The histogram procedure recognizes that the distribution can be numerically approximated to a high degree of accuracy as a sum of rectangles at points M_i of height P_i and width ΔM_i. For convenience, just consider a one-dimensional system. The above path-integral representation can be rewritten, for each of its intermediate integrals, as
P(M; t + \Delta t) = \int dM' \left[ g^{1/2}_s (2\pi \Delta t)^{-1/2} \exp(-L_s \Delta t) \right] P(M'; t) = \int dM' \, G(M, M'; \Delta t) \, P(M'; t) ,

P(M; t) = \sum_{i=1}^{N} \pi(M - M_i) \, P_i(t) ,

\pi(M - M_i) = \begin{cases} 1 , & ( M_i - \frac{1}{2} \Delta M_{i-1} ) \le M \le ( M_i + \frac{1}{2} \Delta M_i ) , \\ 0 , & \text{otherwise} , \end{cases}   (26)
which yields

P_i(t + \Delta t) = T_{ij}(\Delta t) \, P_j(t) ,

T_{ij}(\Delta t) = \frac{2}{\Delta M_{i-1} + \Delta M_i} \int_{M_i - \Delta M_{i-1}/2}^{M_i + \Delta M_i/2} dM \int_{M_j - \Delta M_{j-1}/2}^{M_j + \Delta M_j/2} dM' \, G(M, M'; \Delta t) .   (27)
T_{ij} is a banded matrix representing the Gaussian nature of the short-time probability centered about the (varying) drift.
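A one-dimensional sketch of Eqs. (26)-(27): the short-time kernel G is taken as a Gaussian centered about the prepoint drift, and the double cell integral of Eq. (27) is approximated by midpoint evaluation times cell width. The drift and variance functions are supplied by the fitted model; everything else below is our illustrative scaffolding.

import numpy as np

def propagator(M, mu, var, dt):
    """Build T_ij on a (possibly nonuniform) mesh M; banded in practice,
    since G decays a few standard deviations from the drifted mean."""
    dM = np.gradient(M)                      # approximate cell widths
    Tij = np.zeros((len(M), len(M)))
    for j, Mp in enumerate(M):               # source (prepoint) cell j
        mean = Mp + mu(Mp) * dt
        sd = np.sqrt(var(Mp) * dt)
        G = np.exp(-0.5 * ((M - mean) / sd) ** 2) / (np.sqrt(2 * np.pi) * sd)
        Tij[:, j] = G * dM                   # midpoint rule over cell i
    return Tij

def evolve(P, Tij, M, n_steps):
    """Fold the short-time distribution: P_i(t+dt) = sum_j T_ij P_j(t)."""
    dM = np.gradient(M)
    for _ in range(n_steps):
        P = Tij @ P
        P /= np.sum(P * dM)                  # renormalize against mesh error
    return P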
This histogram procedure has been extended to two dimensions, i.e., using a matrix T_{ijkl} [16], e.g., essentially similar to the use of the A matrix in the previous section. Explicit dependence of L on time t also can be included without complications. We see no problems in extending it to other dimensions, other than that care must be used in developing the mesh in M, which is dependent on the diffusion matrix.
Fitting data with the short-time probability distribution, effectively using an integral over this epoch, permits the use of coarser meshes than the corresponding stochastic differential equation. The coarser resolution is appropriate, typically required, for numerical solution of the time-dependent path integral: By considering the contributions to the first and second moments of ΔM^G for small time slices θ, conditions on the time and variable meshes can be derived [48]. The time slice essentially is determined by θ ≤ 1/L̄, where L̄ is the "static" Lagrangian with dM^G/dt = 0, throughout the ranges of M^G giving the most important contributions to the probability distribution P. The variable mesh, a function of M^G, is optimally chosen such that ΔM^G is measured by the covariance g^{GG'}, or ΔM^G ≈ (g^{GG} Δt)^{1/2}.
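These two mesh criteria can be computed directly once the drift and covariance are specified; a one-variable sketch, with placeholder functional forms of our own choosing:

import numpy as np

def mesh_scales(M, drift, cov):
    """dt from the static Lagrangian (dM/dt = 0) and dM from the covariance."""
    L_bar = 0.5 * drift(M) ** 2 / cov(M)     # static Lagrangian L-bar
    dt = 1.0 / L_bar                         # time-slice bound
    dM = np.sqrt(cov(M) * dt)                # variable mesh ~ (g^{GG} dt)^{1/2}
    return dt, dM

dt, dM = mesh_scales(0.05, lambda M: 0.01 - 0.1 * M,
                     lambda M: (0.05 * M) ** 2)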
2.5. Chaos or Noise?
Given the context of current studies in complex nonlinear systems [50,51], the question can be asked: What if markets have chaotic mechanisms that overshadow the above stochastic considerations? The real issue is whether the scatter in data can be distinguished between being due to noise or chaos. Several studies have been proposed with regard to comparing chaos to simple filtered (colored) noise [J. Theiler, private communication] [51-53].
The previous references must be generalized, such that we must investigate whether scatter in markets data can be distinguished from multiplicative noise. A previous application of this methodology follows:
One of us (LI) was principal investigator of a US Army Models Committee project, working with a team of Army and Lawrence Livermore National Laboratory personnel to compare, for the first time, large-scale high-fidelity combat computer-model data to exercise data [16,41]. The combat analysis was possible only because we now had recent data on combat exercises from the National Training Center (NTC) of sufficient temporal density to attempt dynamical mathematical modeling. The criterion used to (not) determine chaos in this dynamical system is the nature of propagation of uncertainty, i.e., the variance. For example, following by-now standard arguments [J. Yorke, seminar and private communication], propagation of uncertainty may be considered as (a) diminishing, (b) increasing additively, or (c) increasing multiplicatively. An example of (a) is the evolution of a system to an attractor, e.g., a book dropped onto the floor from various heights reaches the same point no matter what the spread in initial conditions. An example of (b) is the propagation of error in a clock, a cyclic system. Examples of (c) are chaotic systems, of which very few real systems have been shown to belong. An example of (c) is the scattering of a particle in a box whose center contains a sphere boundary: When a spread of initial conditions is considered for the particle to scatter from the sphere, with its trajectories aligned to strike the sphere at a distance from its center greater than the diameter, the spread in scattering is a factor of about three greater than the initial spread.
In our analysis of NTC data, we were able to fit the short-time attrition epochs (determined to be about 5 minutes from mesh considerations determined by the nature of the Lagrangian) with short-time nonlinear Gaussian-Markovian probability distributions with a resolution comparable to the spread in data. When we did the long-time path integral from some point at the beginning of the battle, we found that we could readily find a form of the Lagrangian that made physical sense and that also fit the multivariate variances as well as the means at each point in time of the rest of the combat interval. I.e., there was not any degree of hypersensitivity to initial conditions that prevented us from predicting the long-time means and variances of the system. Since the system is dissipative, there is a strong tendency for all moments to diminish in time, but in fact this combat is of sufficiently modest duration (typically 1 to 2 hours) that variances do increase somewhat during the middle of the battle. In summary, the battalion-regiment scale of this particular battle did not seem to possess chaos.
Similar considerations and calculations are planned for these studies of financial markets.
3. BRENNAN-SCHWARTZ MODELS
3.1. Interest Rates
The pioneering Brennan-Schwartz (BS) model [5,7] can be used to illustrate how this methodology
is to be implemented numerically.
The BS model is summarized by:
dr = [ a_1 + b_1 (l - r) ] \, dt + \sigma_1 r \, dz_1 ,

dl = [ l ( a_2 + b_2 r + c_2 l ) ] \, dt + \sigma_2 l \, dz_2 ,

\langle dz_i \rangle = 0 , \quad i \in \{1, 2\} ,

\langle dz_i(t) \, dz_j(t') \rangle = dt \, \delta(t - t') , \quad i = j ,

\langle dz_i(t) \, dz_j(t') \rangle = \rho \, dt \, \delta(t - t') , \quad i \ne j ,

\delta(t - t') = \begin{cases} 0 , & t \ne t' , \\ 1 , & t = t' , \end{cases}   (28)

where ⟨·⟩ denotes expectations.
These can be rewritten as Langevin equations (in the Ito prepoint discretization)

\frac{dr}{dt} = a_1 + b_1 (l - r) + \sigma_1 r ( \gamma^+ n_1 + \mathrm{sgn}\,\rho \; \gamma^- n_2 ) ,

\frac{dl}{dt} = l ( a_2 + b_2 r + c_2 l ) + \sigma_2 l ( \mathrm{sgn}\,\rho \; \gamma^- n_1 + \gamma^+ n_2 ) ,

\gamma^\pm = \frac{1}{\sqrt{2}} \left[ 1 \pm (1 - \rho^2)^{1/2} \right]^{1/2} ,

n_i = (dt)^{1/2} p_i ,   (29)

where p_1 and p_2 are independent [0,1] Gaussian distributions.
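A simulation sketch of Eq. (29) follows; the mixing by γ^± gives the r and l noises the correlation ρ while using only two independent unit Gaussians p_1, p_2. The parameter values in the usage line are arbitrary placeholders of our own, not the fitted values of Table 1.

import numpy as np

def simulate_bs(r0, l0, params, dt, n_steps, rng):
    a1, b1, s1, a2, b2, c2, s2, rho = params
    gp = np.sqrt(0.5 * (1 + np.sqrt(1 - rho ** 2)))   # gamma-plus
    gm = np.sqrt(0.5 * (1 - np.sqrt(1 - rho ** 2)))   # gamma-minus
    sgn = np.sign(rho) if rho != 0 else 1.0
    r, l = np.empty(n_steps + 1), np.empty(n_steps + 1)
    r[0], l[0] = r0, l0
    for s in range(n_steps):
        p1, p2 = rng.normal(size=2)
        n1, n2 = np.sqrt(dt) * p1, np.sqrt(dt) * p2   # n_i = (dt)^{1/2} p_i
        r[s + 1] = r[s] + (a1 + b1 * (l[s] - r[s])) * dt \
            + s1 * r[s] * (gp * n1 + sgn * gm * n2)
        l[s + 1] = l[s] + l[s] * (a2 + b2 * r[s] + c2 * l[s]) * dt \
            + s2 * l[s] * (sgn * gm * n1 + gp * n2)
    return r, l

r, l = simulate_bs(0.06, 0.08,
                   (1e-4, 0.05, 0.02, 1e-3, 0.01, -0.05, 0.01, 0.4),
                   dt=1.0, n_steps=500, rng=np.random.default_rng(1))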
The cost function C is defined from the equivalent short-time probability distribution, P, for the above set of equations,

P = g^{1/2} (2\pi \, dt)^{-\Theta/2} \exp(-L \, dt) \equiv \exp(-C) ,

C = L \, dt + \frac{\Theta}{2} \ln(2\pi \, dt) - \frac{1}{2} \ln(g) ,

L = \frac{1}{2} F^{T} g F ,

F = \begin{pmatrix} dr/dt - ( a_1 + b_1 (l - r) ) \\ dl/dt - l ( a_2 + b_2 r + c_2 l ) \end{pmatrix} ,

g = \det(\mathbf{g}) , \quad k = 1 - \rho^2 .   (30)
Here g, the metric in {r, l}-space, is the inverse of the covariance matrix,

\mathbf{g} = k^{-1} \begin{pmatrix} (r \sigma_1)^{-2} & -\rho (r l \sigma_1 \sigma_2)^{-1} \\ -\rho (r l \sigma_1 \sigma_2)^{-1} & (l \sigma_2)^{-2} \end{pmatrix} .   (31)
As discussed above, the correct mesh for time, dt, in order that P represent the Langevin equations (to order dt^{3/2}), is

dt \le 1/\bar{L} ,   (32)

where L̄ is L evaluated with dr/dt = dl/dt = 0. If dt is greater than 1/L̄, then it is inappropriate to use P, and instead the path integral over intermediate states of folded short-time distributions must be calculated. In this context, it should be noted that the correct time mesh for the corresponding differential equations must be at least as small, since typically differentiation is a "sharpening" process. This will be noted in any discipline requiring numerical calculation, when comparing differential and integral representations of the same system.
The VFSR code was checked with a rather stringent test: Data was generated by using the BS equations above as a simulation, with a set of parameters given in the 1982 BS paper. Then, the Lagrangian was used as a cost function to search for the parameters used in the generation of the data. A time mesh was established at each point in time using the criteria given above, which turns out to be dt ≈ 1 day. The code converged to within the statistical accuracy of the generated data. When a time mesh of dt = 1 month was used, as did BS, the results did not match the simulated data.
In light of the above, it should not be surprising that computer runs with real Treasury bills and bonds over the same epochs as BS, using end-of-month data, did not agree precisely with BS or with similar runs using daily data. Note that we are not using the exact data as BS. However, should we be concerned about the lack of excellent convergence with BS? As determined from C above, dt = 1 day, possibly 1 week, is appropriate to use to define the short-time probability distribution, data not available in the 1982 BS study.
However, dt = 1 day, not available for the BS work, shows marked deviations in the development of interest variables for selected trajectories using the BS equations as a simulation of data. This suggests that a two-state model is insufficient to capture faster movements in rates at the daily scale.
When the s.d.e.'s were permitted to evolve, trajectories developed increasing interest rates. It was noted that when the rates approached 100%, there suddenly was extremely wild growth of these variables. In the context of chaos discussed above, this region will be further investigated.
3.2. Security Prices
BS [5] present arguments recognizing that the stochastic price of a discount bond for a given maturity date T can utilize straightforward stochastic calculus to derive a form in terms of coefficients appearing in their coupled r-l stochastic equations. They use arbitrage arguments on portfolios of bonds with different maturity dates to derive zero-risk conditions for the market prices of risk, λ_1 and λ_2, for short-term and long-term interest rates, respectively. By considering l as related to a bond's price, they straightforwardly derive an arbitrage expression for λ_2. Their resulting partial differential equation (p.d.e.) is an equilibrium (mean-value) equation for a pure discount-bond price B, at a given time until maturity τ = T − t and continuous coupon payment of c.
The above formulation of interest rates is used by BS to determine the parameters needed to calculate their derived p.d.e. for securities, i.e., bond prices B. Using some notation developed above, with {M^G; G = r, l}, they obtain

\frac{\partial B}{\partial \tau} = VB + \frac{\partial (-\hat{g}^G B)}{\partial M^G} + \frac{1}{2} \frac{\partial^2 (g^{GG'} B)}{\partial M^G \, \partial M^{G'}} ,

\hat{g}^r = a_1 + b_1 (l - r) - \lambda_1 \sigma_1 r ,

\hat{g}^l = l ( \sigma_2^2 + l - r ) ,

(g^{GG'}) = (\mathbf{g})^{-1} ,

V = \frac{c}{B} - r ,   (33)

where c is the continuous coupon rate for bond B, and λ_1 is an additional parameter to be fit by the data.
The above equation represents a truly nonlinear Fokker-Planck equation because of the presence of B in V. However, if c/B is a smooth function, such that

V(M^G; \bar{\tau}) - V(M^G; \tau) \approx \frac{\partial V}{\partial B} \frac{\partial B(M^G)}{\partial M^G} \, dM^G \approx O(\Delta\tau^{\nu}) ,   (34)

for ν > 1, where τ̄ = τ + Δτ, then our numerical path-integral codes may be used here as well [15].
In this formulation, all the above algebraic and numerical methodology can be utilized to define a Lagrangian-like function, L_B, defining the evolution of B, subject to whatever initial conditions and boundary conditions are deemed appropriate, e.g., if considering bonds or options [7,9]. Care must be taken with the discretization in the forward direction.
In particular, it was tempting to use our simulated-annealing codes to fit L_B; then our path-integral codes can be used to evolve/predict B for long epochs. The use of L_B as a cost function for fitting data is still reasonable even though B is not a bona fide probability distribution. E.g., B is hypothesized to be determined by intense arbitrage, in the derivation of the p.d.e. This is equivalent to stating that the process is confined, within statistical fluctuations, to a maximum-likelihood path. This is mathematically articulated in our methodology as stating that the variational principle associated (not independently assumed) with L_B, including the specification of the boundary conditions and initial conditions, implies that we can fit the variables in L_B to data specified by observed values. This is numerically similar to the way we perform a maximum-likelihood fit of the Lagrangian associated with the s.d.e. As we show below, this assumption appears to be well supported in our fits of L_B to observed r-l data, in that we get a set of parameters very close to those obtained by fitting the L associated with the s.d.e. It should be noted that BS calculated their λ_1 by interpolating fits to bond prices, thereby incorporating much pricing information into λ_1.
However, independent of such arguments, once the parameters are established, it is mathematically correct to use our path-integral codes on the B equation, with its initial conditions and boundary conditions, to calculate the long-time evolution of B.
We note that our use of L_B to describe the evolution of B is similar in spirit to recent attempts to describe the evolution of bond prices as a functional of forward rates, for example by Heath, Jarrow and Morton (HJM) [18,19]. Our functional corresponding to theirs is simply the Lagrangian L_B. HJM develop arbitrage arguments to impose equilibrium evolution by randomly varying their interest-rate functional, somewhat similar to the numerical importance-sampling methodology employed when performing numerical Monte Carlo techniques to take advantage of an underlying variational principle [14,54]. The trade-off in developing a theory of stochastic forward pricing is that parabolic equations are unstable for the negative diffusion in calendar time t so defined for financial systems, and so HJM must impose additional constraints to prevent this explosion. The BS technique of developing B as a function of τ = T − t avoids these problems. The methodology of HJM does not require the separate estimation of market risk factors, e.g., λ_1. Our Lagrangian representation of the BS model, coupled with our arbitrage arguments, also permits us to fit all parameters directly to interest rate data. The methodology of HJM permits the introduction of colored (time-delayed) noise, which also can be included in our methodology, albeit with substantial effort. In principle, given a forward-rate HJM model for a Lagrangian, all our methodology could be used here as well.
In many cases it may be of interest to study the stochastic systems of interest rates, e.g., the coupled r-l equations above. However, if only the evolution of B is of interest, then it is more direct, and likely more numerically accurate, to fit the parameters in the cost function C_B (derived from L_B as C was derived from L above). I.e., we fit {a_1, b_1, σ_1, ρ, σ_2, λ_1} to {r, l, B, c} data. Since, as discussed below, we will be dealing with portfolios of pure discount bonds to represent coupon bonds, we consider c = 0, thereby requiring only {r, l} data. We do not need the parameters {a_2, b_2, c_2} because of the arbitrage arguments used to calculate ĝ^l above [3]. This greatly improves the statistical merit of our fits, requiring two fewer degrees of freedom for the B equation. This approach also may be viewed as an empirical test of the consistency of assumptions used to derive the bond equation. Our VFSR methodology also explicitly includes boundary conditions, a very important component of any model even if only implicitly described, since we use the same cost function, but as a function of its variables instead of its parameters, to calculate the evolution of bond prices with the path integral.
The ultimate test of any methodology is to compare theoretical predictions/descriptions with observed data [55]. Assume we already have fit our parameters for the entire epoch of interest. Actual bond prices with coupons may then be evaluated straightforwardly by considering a portfolio of n pure discount bonds with a series of maturity dates T_n equivalent to the dates of payment of coupons and the face value of the actual coupon bond to be modeled. This prescription requires that we integrate back such a portfolio of n pure discount bonds with maturity T_n, to various times t_i < T (including only those bonds in the portfolio with maturity T_n ≥ t_i). At each of these times, we use the observed values of r(t_i) and l(t_i) to calculate the bond prices B_n(t_i). This portfolio of {B_n(t_i)} is then compared to the observed coupon bond B(t_i), i.e., for many such times {t_i}. For each zero-coupon bond in this portfolio, we start at its time of maturity T_n, enforcing the initial condition B_n(r, l; T_n) = 1, and integrate back to a given time t < T_n. We then weight each zero-coupon bond by the actual coupon or face value paid on the coupon bond.
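The bookkeeping of this prescription is simple; the sketch below prices the coupon bond as the weighted portfolio of zeros, using the expectations-hypothesis recursion of Eq. (10) along an observed short-rate path as a stand-in for the full path-integral back-integration of B_n(r, l; t) advocated here. Dates are step indices; all names are ours.

def zero_price(r_path, s_now, s_maturity):
    """B_s = (1 + r_s)^{-1} B_{s+1}, worked back from B(T_n) = 1."""
    B = 1.0
    for s in range(s_maturity - 1, s_now - 1, -1):
        B /= 1.0 + r_path[s]
    return B

def coupon_bond_price(r_path, s_now, coupon_dates, coupon, face):
    """Weight each zero by its cash flow: coupons, plus face at the last date."""
    price = sum(coupon * zero_price(r_path, s_now, sn)
                for sn in coupon_dates if sn > s_now)
    price += face * zero_price(r_path, s_now, max(coupon_dates))
    return price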
Similarly, for the important purpose of theoretically pricing an actual coupon bond at time t, we use the same methodology, e.g., using the present day's best estimate of r(t) and l(t) to calculate B(r, l; t).
We are not in complete agreement with BS on their use of boundary conditions and numerical implementation [5]. Their use of "natural" boundary conditions, actually more general "unrestricted" or "singular" boundary conditions [56], is in part based on their own admittedly ad hoc choice of functional forms for the r and l diffusions, in both their s.d.e. and p.d.e., and in the r drift in their s.d.e. We believe that the appropriate boundary conditions must be determined by finance considerations as follows:
r = ∞ and l = ∞ turn out to be "natural" boundary conditions of their model, being inaccessible in any finite time, and therefore do not require or permit any additional specification. At r = 0, we should have B(t + Δt) = B(t), i.e., reflecting boundary conditions. This is the "regular" boundary condition of the model only if the r drift is greater than zero [49,56]. At l = 0, the BS model yields an "unrestricted" boundary condition, implying B = 0, and therefore does not require or permit any additional specification. Further examination shows that l = 0 is sometimes an "entrance" condition, sometimes an "exit" condition, depending on the value of r. Similar analyses apply at r = ∞ and l = ∞ [15,48,49].
Since the boundary conditions at r = 0 and l = 0 are mathematically unrestricted if the diffusions vanish at these boundaries, as they do for the BS model, we therefore also propose that the ad hoc functional form of the diffusions be relaxed to admit additive noise components, e.g., η_1 and η_2. This is especially important for r ≈ 0. This not only can be tested by so fitting data, but also mathematically permits the imposition of external boundary conditions more tightly constrained to the financial system under consideration, not being technically restricted by the actual functional forms chosen for the drifts and diffusions. Thus we also tested models for which

r \sigma_1 \to r \sigma_1 + \eta_1 ,

l \sigma_2 \to l \sigma_2 + \eta_2 ,   (35)

in the drifts and diffusions of the bond p.d.e. Because of the induced singular behavior in the l-drift, this transformation still requires unrestricted boundary conditions at l = 0, which turn out to be entrance boundary conditions.
We believe it is extremely important to gain this freedom over the functional forms of the drifts and diffusions. For example, our calculations with this model clearly demonstrate that the rather mild nonlinearities of the BS model only permit inflationary evolution, since those were the periods that were fit to data and since the functional forms likely cannot accommodate many swings and dips, on time scales of months or years, much longer than that of the fluctuations, yet shorter than the period of long-term bonds. This appears to require a higher degree of nonlinearity and/or an increase in the number of independent interest-rate variables.
For future calculations, we propose to include some additive components in the r and l diffusions. We intend to invoke external reflecting boundary conditions for r ≈ 0, using the method of images. Similar sets of boundary conditions were used in a previous project [16]. We checked the accuracy of the reflecting boundary conditions using MACSYMA, an algebraic manipulator.
BS procedures used a discretized form of their p.d.e. They compared their theoretical and observed values of B for several values of λ_1, and interpolated to find the best value of λ_1 which fit the data. Our methodology of calculating λ_1 from L_B, simultaneously with the other parameters in the p.d.e., avoids this additional step in their methodology.
4. NUMERICAL RESULTS
4.1. Fits to Interest Rates
Interest rates were developed from Treasury bill and bond yields during the period October 1974 through December 1979, the same period as one of the sets used by BS [7]. Short-term rates were determined from Treasury bills with a maturity of three months (BS used 30-day maturities), and long-term rates were determined from Treasury bonds with a maturity of twenty years (BS used at least 15-year maturities). For monthly runs, we used 63 points of data between 74-10-31 and 79-12-28. For daily runs, we used 1283 points of data between 74-10-31 and 79-12-31. We used yearly rates divided by 12 to fit the parameters.
For daily data, the actual number of days between successive trades was used; i.e., during this time period we had 1282 pieces of daily data and 62 pieces of end-of-month data. Although a rescaling in time simply scales the deterministic parameters linearly, since that is how they appear in this model, this is not true for the σ's. We therefore did all subsequent runs using the scale of one day. We used yearly rates divided by 365 to fit the parameters.
The BS parameters also were run through the data, calculating the cost function they give. The single cost function bears the weight of determining all parameters. Typically, three or four significant-figure stability is required to get even one or two significant-figure stability in the parameters. (All runs were performed using double precision for all floating-point variables.) The cost function calculated is the sum over all Lagrangians at each short-time epoch (divided by the number of epochs, which doesn't affect its minimum, but helps to compare cost functions over different sets of data). I.e., a maximum-probability fit is done by minimizing the cost functions (each the argument of the exponential representing the probability distribution of the variables) over all time epochs. The BS versus our fitted parameters are given in Table 1.
Table 1. BS parameters were fit to data using our Lagrangian representation for their coupled r-l equations, for both end-of-month and daily data between 74-10-31 and 79-12-31. The second column, designated BS Monthly, gives their published 1982 results, using somewhat different data during this period. The third column gives our monthly fits on somewhat different data during this same time period. The fourth column gives daily fits scaled to daily time.

Parameter   BS Monthly   L Monthly       L Daily
a_1         0.0361       3.02 x 10^-5    6.33 x 10^-9
b_1         0.0118       3.89 x 10^-4    0.0902
sigma_1     0.0777       0.0700          0.0132
rho         0.442        0.534           0.136
a_2         0.169        9.73 x 10^-3    2.43 x 10^-4
b_2         0.0089       0.0262          0.0320
c_2         0.271        0.707           0.492
sigma_2     0.0243       0.0278          4.01 x 10^-3
It should be noted that for all periods before October 1974, back through December 1958, using monthly data, BS found a_1 < 0, and for the period April 1964 through June 1969 they found c_2 > 0.
Fits were performed on a Hewlett-Packard 9000-835SE, a 12-MIPS computer. For example, the fit using the bond Lagrangian took approximately 100 CPU minutes for 1500 acceptance points, representing about 2000 generated points per 100 acceptance points at each re-annealing cycle, in this six-dimensional parameter space. It was found that once the VFSR code repeated the lowest cost function within two cycles of 100 acceptance points, e.g., typically achieving 3 or 4 significant-figure accuracy in the global minimum of the cost function, by shunting to a local fitting procedure, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [57], only several hundred acceptance points were required to achieve 7 or 8 significant-figure accuracy in the cost function. This also provided yet another test of the VFSR methodology.
4.2. Fits to Bond Lagrangian
We examined differences in fitting L_B under varying constraints: In Table 2, we compare the L_B Daily fit with those obtained by admitting some degree of additive noise η_1 and η_2.
Table 2. We compare the L_B fit in the second column with those in the third column obtained by admitting some degree of additive noise η_1 and η_2. (Absolute values of η_{1,2} were used to constrain them to positive values in the local fits.) Note that BS obtained 0.216 for λ_1 by interpolating among several λ_1's to minimize the deviation of their theoretical bond prices from the observed ones.

Parameter   L_B            L_B (additive noise)
a_1         2.02 x 10^-8   1.01 x 10^-7
b_1         3.02 x 10^-4   1.01 x 10^-3
sigma_1     0.0173         0.0166
rho         0.673          0.684
sigma_2     5.36 x 10^-3   3.84 x 10^-3
lambda_1    2.42 x 10^-3   0.0176
eta_1       -              6.61 x 10^-8
eta_2       -              3.51 x 10^-7
At this time, we are preparing fits to aggregate portfolios of bonds, to average over particulars of individual bonds, as performed by other investigators. We find that it is quite easy to fit λ_1 to sets of bond prices, after fitting interest-rate parameters using the BS s.d.e., as done by BS. However, we find it more difficult to fit λ_1 to these sets after using the bond Lagrangian to fit the interest-rate parameters, as we have presented here. These results will be reported in a future paper.
5. CONCLUSION
We have demonstrated how mathematical methodologies and numerical algorithms recently developed in the field of statistical mechanics can be brought to bear on term structure models. Specifically, methods of very fast simulated re-annealing can be used to statistically find best global fits of multivariate nonlinear stochastic term structure models, without requiring approximation of the basic models.
We also have argued that other numerical techniques, i.e., the path integral, can be brought to bear to calculate the evolution of asset prices, using the term structure models for proxy variables. Another paper in progress will report on more extensive comparisons with observed bond prices.
This new formalism also permits a fresh look at some of these models and affords comparison with other nonlinear stochastic systems.
APPENDIX A: STATISTICAL MECHANICS DERIVATION OF PATH INTEGRAL
This Appendix outlines the derivation of the path-integral representation of the nonlinear Langevin equations, via the Fokker-Planck representation. This serves to point out the importance of properly treating nonlinearities, and to emphasize the deceptive simplicity of the Langevin and Fokker-Planck representations of stochastic systems. There are a few derivations in the literature, but the following blend seems to be the most concise. All details may be found in the references given in this paper [23,58-60].
The Stratonovich (midpoint-discretized) Langevin equations can be analyzed in terms of the Wiener process dW^i, which can be rewritten in terms of Gaussian noise η^i = dW^i/dt if care is taken in the limit [23]:

dM^G = f^G[t, M(t)] \, dt + \hat{g}^G_i[t, M(t)] \, dW^i ,

\dot{M}^G(t) = f^G[t, M(t)] + \hat{g}^G_i[t, M(t)] \, \eta^i(t) ,

dW^i \to \eta^i \, dt ,

M = \{ M^G ; \; G = 1, \ldots, \Theta \} ,

\eta = \{ \eta^i ; \; i = 1, \ldots, N \} ,

\dot{M}^G = dM^G / dt ,

\langle \eta^j(t) \rangle_\eta = 0 ,

\langle \eta^j(t), \eta^{j'}(t') \rangle_\eta = \delta^{jj'} \delta(t - t') .   (A.1)
Here η^i represents Gaussian white noise, and moments of an arbitrary function F(η) over this stochastic space are defined by a path-type integral over η^i,

\langle F(\eta) \rangle_\eta = N^{-1} \int D\eta \, F(\eta) \exp\left( -\frac{1}{2} \int_{t_0}^{t} dt' \, \eta^i \eta^i \right) ,

N = \int D\eta \exp\left( -\frac{1}{2} \int_{t_0}^{t} dt' \, \eta^i \eta^i \right) ,

D\eta = \lim_{v \to \infty} \prod_{\alpha=0}^{v+1} \prod_{j=1}^{N} (2\pi\theta)^{-1/2} dW^j_\alpha ,

t_\alpha = t_0 + \alpha\theta ,

\frac{1}{2} \int dt' \, \eta^i \eta^i \to \frac{1}{2} \sum_\alpha ( W^i_\alpha - W^i_{\alpha-1} )^2 / \theta ,

\langle \eta^i \rangle_\eta = 0 ,

\langle \eta^i(t) \, \eta^j(t') \rangle_\eta = \delta^{ij} \delta(t - t') .   (A.2)
Non-Markovian sources, η̂, and their influence throughout this development, can be formally treated by expansions about the Markovian process by defining

\langle F(\hat\eta) \rangle_{\hat\eta} = \hat{N}^{-1} \int D\hat\eta \, F \exp\left[ -\frac{1}{2} \int dt \, dt' \, \hat\eta(t) \, \hat\sigma^{-1}(t - t') \, \hat\eta(t') \right] ,

\int dt'' \, \hat\sigma^{-1}(t - t'') \, \hat\sigma(t'' - t') = \delta(t - t') ,   (A.3)

with σ̂ defined over an interval centered about the argument of the δ function. Letting this interval → 0 is an unambiguous procedure to define the Stratonovich prescription used below.
In terms of a specific stochastic path η, a solution to Eq. (A.1), M^G_η(t; M_0, t_0) with M^G_η(t_0; M_0, t_0) = M_0, the initial condition on the probability distribution P_η of M_η is

P_\eta[M, t \,|\, M_0, t_0] = \delta[ M - M_\eta(t; M_0, t_0) ] .   (A.4)
Using the conservation-of-probability condition,

P_{\eta,t} + ( \dot{M}^G_\eta P_\eta )_{,G} = 0 ,

[\ldots]_{,G} = \partial [\ldots] / \partial M^G ,

[\ldots]_{,t} = \partial [\ldots] / \partial t ,   (A.5)
the evolution of P_η is written as

P_{\eta,t}[M, t \,|\, M_0, t_0] = -\{ [ f^G(t, M) + \hat{g}^G_i(t, M) \, \eta^i ] P_\eta \}_{,G} .   (A.6)
To perform the stochastic average of Eq. (A.6), the functional-integration-by-parts lemma [28] is used on an arbitrary function Z(η) [59],

\int D\eta \, \frac{\delta Z(\eta)}{\delta \eta^i} = 0 .   (A.7)

Applied to Z̃ = Z exp(−½ ∫_{t_0}^{t} dt' η^i η^i), this yields

\langle \eta^i Z \rangle_\eta = \langle \delta Z / \delta \eta^i \rangle_\eta .   (A.8)
Applying this to the average of F[M_η] = ∫ dM P_η F(M),

\int dM \, F(M) \langle \eta^i P_\eta \rangle_\eta = \left\langle \frac{\delta F[M_\eta]}{\delta M^G_\eta} \frac{\delta M^G_\eta}{\delta \eta^i} \right\rangle_\eta = -\frac{1}{2} \int dM \, F(M) \, ( \hat{g}^G_j \delta^{ij} P_\eta )_{,G} ,   (A.9)

where δ designates functional differentiation. The last equation has used the Stratonovich prescription,

M^G_\eta(t) = M^G_0 + \int dt' \, H(t - t') H(t' - t_0) [ f^G + \hat{g}^G_i \eta^i ] ,

\lim_{t' \to t} \frac{\delta M^G_\eta(t)}{\delta \eta^j(t')} = \frac{1}{2} \hat{g}^G_j[t, M_\eta(t)] ,

H(z) = \begin{cases} 1 , & z \ge 0 , \\ 0 , & z < 0 . \end{cases}   (A.10)
Taking the averages ⟨P_{η,t}⟩_η and ⟨η^i P_η⟩_η, the Fokker-Planck equation is obtained:

P_{,t} = \frac{1}{2} ( g^{GG'} P )_{,GG'} - ( g^G P )_{,G} ,

g^G = f^G + \frac{1}{2} \hat{g}^{G'}_i \hat{g}^G_{i,G'} ,

g^{GG'} = \hat{g}^G_i \hat{g}^{G'}_i ,

[\ldots]_{,G'} = \partial [\ldots] / \partial M^{G'} .   (A.11)

Note that g^G replaces f^G in Eq. (A.1) if the Ito (prepoint-discretized) calculus is used to define that equation.
To derive the path-integral representation of Eq. (A.11), define operators M̂^G, p̂_G and Ĥ,

[ \hat{M}^G, \hat{p}_{G'} ] = \hat{M}^G \hat{p}_{G'} - \hat{p}_{G'} \hat{M}^G = i \delta^G_{G'} ,

[ \hat{M}^G, \hat{M}^{G'} ] = 0 = [ \hat{p}_G, \hat{p}_{G'} ] ,

P_{,t} = -i \hat{H} P ,

\hat{H} = -\frac{i}{2} \hat{p}_G \hat{p}_{G'} g^{GG'} + \hat{p}_G g^G + iV ,   (A.12)
and define the evolution operator U(t, t') in terms of "bra" and "ket" probability states of M,

\hat{M}^G | M \rangle = M^G | M \rangle ,

\hat{p}_G | M \rangle = -i \, \partial / \partial M^G \, | M \rangle ,

\langle M | M' \rangle = \delta( M - M' ) ,

\langle M | p \rangle = (2\pi)^{-1} \exp( i p \cdot M ) ,

P[M, t \,|\, M_0, t_0] = \langle M | U(t, t_0) | M_0 \rangle ,

\hat{H}(t) U(t, t') = i U(t, t')_{,t} ,

U(t', t') = 1 ,

U(t_\alpha, t_{\alpha-1}) = 1 - i \theta \hat{H}(t_{\alpha-1}) ,   (A.13)
where α indexes units θ measuring the time evolution. This is formally integrated to give the path integral in the phase space (p, M),

P[M_t \,|\, M_0] = \int_{M(t_0)=M_0}^{M(t)=M_t} DM \, Dp \exp\left[ \int_{t_0}^{t} dt' \left( i p_G \dot{M}^G - \frac{1}{2} p_G p_{G'} g^{GG'} - i p_G g^G + V \right) \right] ,

DM = \lim_{u \to \infty} \prod_{\alpha=1}^{u} \prod_{G} dM^G_\alpha ,

Dp = \lim_{u \to \infty} \prod_{\alpha=1}^{u+1} \prod_{G} (2\pi)^{-1} dp_{G\alpha} ,

t_\alpha = t_0 + \alpha\theta .   (A.14)
The integral over each dp_G is a Gaussian and simply calculated. This gives the path integral in coordinate space M, in terms of the prepoint-discretized Lagrangian,
P[M_t \,|\, M_0] = \int DM \prod_{\alpha=0}^{u} (2\pi\theta)^{-\Theta/2} g(M_\alpha, t_\alpha)^{1/2} \exp[ -\theta L_I(M_\alpha, t_\alpha) ] ,

L_I( \dot{M}^G, M^G, t ) = \frac{1}{2} ( \dot{M}^G - g^G ) g_{GG'} ( \dot{M}^{G'} - g^{G'} ) - V , \quad \dot{M}^G \to \Delta^G_\alpha / \theta ,

g = \det( g_{GG'} ) , \quad g_{GG'} = ( g^{GG'} )^{-1} ,

\Delta^G_\alpha = M^G_{\alpha+1} - M^G_\alpha .   (A.15)
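For a single variable and one time slice θ, the momentum integration just completes a Gaussian square; as a check of the step from Eq. (A.14) to Eq. (A.15) (our worked example, with Ṁ = Δ/θ),

\int_{-\infty}^{\infty} \frac{dp}{2\pi} \exp\left[ i\theta p (\dot{M} - g) - \frac{1}{2} \theta p^2 g^{GG} \right] = ( 2\pi\theta \, g^{GG} )^{-1/2} \exp\left[ -\frac{\theta (\dot{M} - g)^2}{2 g^{GG}} \right] ,

which, with g_{GG} = 1/g^{GG} and g = det(g_{GG}) = 1/g^{GG}, reproduces both the prefactor g^{1/2}(2πθ)^{-1/2} and the quadratic form of L_I in Eq. (A.15).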
This can be transformed to the Stratonovich representation, in terms of the Feynman Lagrangian L possessing a covariant variational principle,

P[M_t \,|\, M_0] = \int DM \prod_{\alpha=0}^{u} (2\pi\theta)^{-\Theta/2} g(M_\alpha, t_\alpha + \theta/2)^{1/2} \exp\left\{ -\min \int_{t_\alpha}^{t_\alpha + \theta} dt' \, L[ M(t'), \dot{M}(t'), t' ] \right\} ,   (A.16)

where "min" specifies that Eq. (A.11) is obtained by constraining L to be expanded about that M̄(t) which makes the action stationary for M(t_α) = M_α and M(t_α + θ) = M_{α+1}.
One way of proceeding is to expand Eq. (A.15) and compare to Eq. (A.16), but it is somewhat easier to expand Eq. (A.16) and compare to Eq. (A.15) [60]. It can be shown that expansions to order θ suffice, and that Δ² ~ O(θ).
Write L in the general form

L = \frac{1}{2} g_{GG'} \dot{M}^G \dot{M}^{G'} - h_G \dot{M}^G + b \equiv L_0 + \Delta L ,

L_0 = \frac{1}{2} g_{GG'}[\bar{M}(t), t] \, \dot{M}^G \dot{M}^{G'} ,

g_{GG'}[\bar{M}(t), \bar{t}] = g_{GG'}[\bar{M}(t), t] + g_{GG',t}[\bar{M}(t), t] (\bar{t} - t) + O[(\bar{t} - t)^2] ,   (A.17)

where h_G and b must be determined by comparing expansions of Eq. (A.15) and Eq. (A.16). Only the L_0 term is dependent on the actual M̄(t) trajectory, and so

\int_{t}^{t+\theta} dt' \, \Delta L = \left( \frac{1}{4} g_{GG',t} \Delta^G \Delta^{G'} - h_G \Delta^G - \frac{1}{2} h_{G,G'} \Delta^G \Delta^{G'} + b\theta \right) \Big|_{(M,t)} ,   (A.18)

where |_{(M,t)} implies evaluation at (M, t).
The determinant g is expanded as

g(M + \Delta, t + \theta/2)^{1/2} = g^{1/2}(M, t) \exp\Big[ \frac{\theta}{4g} g_{,t} + \frac{\Delta^G}{2g} g_{,G} + \frac{\Delta^G \Delta^{G'}}{4g} ( g_{,GG'} - g^{-1} g_{,G} g_{,G'} ) \Big] \Big|_{(M,t)} .   (A.19)
The remaining integral over L_0 must be performed. This is accomplished using the variational principle applied to ∫ L_0 [58],

g_{GH} \ddot{M}^H = -\frac{1}{2} ( g_{GH,K} + g_{GK,H} - g_{KH,G} ) \dot{M}^K \dot{M}^H ,

\ddot{M}^F = -\Gamma^F_{JK} \dot{M}^J \dot{M}^K ,

\Gamma^F_{JK} \equiv g^{LF} [JK, L] = g^{LF} ( g_{JL,K} + g_{KL,J} - g_{JK,L} ) ,

\left( \frac{1}{2} g_{GH} \dot{M}^G \dot{M}^H \right)_{,t} = 0 ,

\int_{t}^{t+\theta} L_0 \, dt' = \frac{\theta}{2} g_{GH} \dot{M}^G \dot{M}^H \Big|_{(M,t+\theta)} .   (A.20)
Differentiating the second equation in Eq. (A.20) to obtain the third derivative of M, and expanding,

\theta \dot{M}^G(t + \theta) = \left[ \Delta^G - \frac{1}{2} \Gamma^G_{KL} \Delta^K \Delta^L + \frac{1}{6} ( \Gamma^G_{KL,N} + \Gamma^G_{AN} \Gamma^A_{KL} ) \Delta^K \Delta^L \Delta^N \right] \Big|_{(M,t)} .   (A.21)
Now Eq. (A.16) can be expanded as

P[M_t \,|\, M_0] \, dM(t) = \int DM' \prod_{\alpha=0}^{u} \exp\left[ -\frac{1}{2\theta} g_{GG'}(M, t) \Delta^G \Delta^{G'} + B \right] ,

DM' = \prod_{\alpha=1}^{u+1} g^{1/2} \prod_{G} (2\pi\theta)^{-1/2} dM^G_\alpha .   (A.22)
Expanding exp B to O(θ) requires keeping terms of order θ, Δ², Δ³/θ, Δ⁴/θ, and Δ⁶/θ². Under the path integral, evaluated at (M, t), and using the moments obtained from ∫ dΔ Δⁿ exp[−g Δ²/(2θ)], the replacements are

\Delta^G \Delta^H \to \theta \, g^{GH} ,

\Delta^G \Delta^H \Delta^K \to \theta \, ( \Delta^G g^{HK} + \Delta^H g^{GK} + \Delta^K g^{GH} ) ,

\Delta^G \Delta^H \Delta^A \Delta^B \to \theta^2 ( g^{GH} g^{AB} + g^{GA} g^{HB} + g^{GB} g^{HA} ) ,

\Delta^A \Delta^B \Delta^C \Delta^D \Delta^E \Delta^F \to \theta^3 ( g^{AB} g^{CD} g^{EF} + \text{14 permutations} ) .   (A.23)
This expansion of exp B is to be compared to Eq. (A.15), expanded as

P[M_t \,|\, M_0] \, dM(t) = \int DM' \prod_{\alpha=0}^{u} \exp\left( -\frac{1}{2\theta} g_{GG'} \Delta^G \Delta^{G'} \right) [ 1 + g_{GG'} g^{G'} \Delta^G + \theta V + O(\theta^{3/2}) ] ,   (A.24)
yielding the identification of h_G and b in Eq. (A.16),

h^G = g^{GG'} h_{G'} = g^G - \frac{1}{2} g^{-1/2} ( g^{1/2} g^{GG'} )_{,G'} ,

b = \frac{1}{2} h^G h_G + \frac{1}{2} h^G_{;G} + R/6 - V ,

h^G_{;G} = h^G_{,G} + \Gamma^F_{GF} h^G = g^{-1/2} ( g^{1/2} h^G )_{,G} ,

R = g^{JL} R_{JL} = g^{JL} g^{FK} R_{FJKL} .   (A.25)
The result is

P[M_t \,|\, M_{t_0}] \, dM(t) = \int \cdots \int DM \exp\left( -\min \int_{t_0}^{t} dt' \, L \right) \delta[ M(t_0) - M_0 ] \, \delta[ M(t) - M_t ] ,

DM = \lim_{u \to \infty} \prod_{\alpha=1}^{u+1} g^{1/2} \prod_{G} (2\pi\theta)^{-1/2} dM^G_\alpha ,

L( \dot{M}^G, M^G, t ) = \frac{1}{2} ( \dot{M}^G - h^G ) g_{GG'} ( \dot{M}^{G'} - h^{G'} ) + \frac{1}{2} h^G_{;G} + R/6 - V ,

h^G = g^G - \frac{1}{2} g^{-1/2} ( g^{1/2} g^{GG'} )_{,G'} ,

g_{GG'} = ( g^{GG'} )^{-1} ,

g = \det( g_{GG'} ) ,

h^G_{;G} = h^G_{,G} + \Gamma^F_{GF} h^G = g^{-1/2} ( g^{1/2} h^G )_{,G} ,

\Gamma^F_{JK} \equiv g^{LF} [JK, L] = g^{LF} ( g_{JL,K} + g_{KL,J} - g_{JK,L} ) ,

R = g^{JL} R_{JL} = g^{JL} g^{FK} R_{FJKL} ,

R_{FJKL} = \frac{1}{2} ( g_{FK,JL} - g_{JK,FL} - g_{FL,JK} + g_{JL,FK} ) + g_{MN} ( \Gamma^M_{FK} \Gamma^N_{JL} - \Gamma^M_{FL} \Gamma^N_{JK} ) .   (A.26)
In summary, because of the presence of multiplicative noise, the Langevin system differs in its Ito (prepoint) and Stratonovich (midpoint) discretizations. The midpoint-discretized covariant description, in terms of the Feynman Lagrangian, is defined such that (arbitrary) fluctuations occur about solutions to the Euler-Lagrange variational equations. In contrast, the usual Ito and corresponding Stratonovich discretizations are defined such that the path integral reduces to the Fokker-Planck equation in the weak-noise limit. The term R/6 in the Feynman Lagrangian includes a contribution of R/12 from the WKB approximation to the same order of (Δt)^{3/2} [23].
REFERENCES
1. J.C. Cox, J.E. Ingersoll, Jr., and S.A. Ross, A re-examination of traditional hypotheses about the
term structure of interest rates, J. Finance 36, 769-799 (1981).
2. J.C. Cox, J.E. Ingersoll, Jr., and S.A. Ross, An intertemporal general equilibrium model of asset
prices, Econometrica 53, 363-384 (1985).
3. J.C. Cox, J.E. Ingersoll, Jr., and S.A. Ross, A theory of the term structure of interest rates,
Econometrica 53, 385-407 (1985).
4. O. Vasicek, An equilibrium characterization of the term structure, J. Finan. Econ.
5, 177-188 (1977).
5. M.J. Brennan and E.S. Schwartz, A continuous time approach to the pricing of bonds, J. Banking
Finan. 3, 133-155 (1979).
6. M.J. Brennan and E.S. Schwartz, Bond pricing and market efficiency, Finan. Anal. J.
Sep-Oct, 49-56 (1982).
7. M.J. Brennan and E.S. Schwartz, An equilibrium model of bond pricing and a test of market
efficiency, J. Finan. Quant. Anal. 17, 301-329 (1982).
8. S.M. Schaefer and E.S. Schwartz, A two-factor model on the term structure: An approximate
analytical solution, J. Finan. Quant. Anal. 19, 413-424 (1984).
9. B. Dietrich-Campbell and E. Schwartz, Valuing debt options, J. Finan. Econ. 16, 321-343 (1986).
10. B. B. Mandelbrot, Comments on: A subordinated stochastic process model with finite variance for
speculative prices, by Peter K. Clark, Econometrica 41, 157-159 (1973).
11. M. C. Jensen, Some anomalous evidence regarding market efficiency, an editorial introduction, J.
Finan. Econ. 6, 95-101 (1978).
12. S. J. Taylor, Tests of the random walk hypothesis against a price-trend hypothesis, J. Finan. Quant.
Anal. 17, 37-61 (1982).
13. A.C. Aitken, On least squares and linear combinations of observations, Proc. Roy. Soc. Edinburgh
55, 42-48 (1935).
14. L. Ingber, Very fast simulated re-annealing, Mathl. Comput. Modelling 12 (8), 967-973 (1989).
15. M.F. Wehner and W.G. Wolfer, Numerical evaluation of path integral solutions to Fokker-Planck
equations. III. Time and functionally dependent coefficients, Phys. Rev. A 35, 1795-1801 (1987).
16. L. Ingber, H. Fujio, and M.F. Wehner, Mathematical comparison of combat computer models to
exercise data, Mathl. Comput. Modelling 15 (1), 65-90 (1991).
17. T.S.Y. Ho and S.-B. Lee, Term structure movements and pricing interest rate contingent claims, J.
Finance 41, 1011-1029 (1986).
18. D. Heath, R. Jarrow, and A. Morton, Bond pricing and the term structure of interest rates: A new
methodology for contingent claims valuation, (1987).
19. D. Heath, R. Jarrow, and A. Morton, Contingent claim valuation with a random evolution of interest
rates, (1989).
20. L. Ingber, Statistical mechanics of nonlinear nonequilibrium nancial markets, Math. Modelling
5 (6), 343-361 (1984).
21. L. Ingber, Mesoscales in neocortex and in command, control and communications (C^3) systems, in
Systems with Learning and Memory Abilities: Proceedings, University of Paris 15-19 June 1987,
(Edited by J. Delacour and J.C.S. Levy), pp. 387-409, Elsevier, Amsterdam, (1988).
22. L. Ingber, Mathematical comparison of JANUS(T) simulation to National Training Center, in The
Science of Command and Control: Part II, Coping With Complexity, (Edited by S.E. Johnson and
A.H. Levis), pp. 165-176, AFCEA International, Washington, DC, (1989).
23. F. Langouche, D. Roekaerts, and E. Tirapegui, Functional Integration and Semiclassical
Expansions, Reidel, Dordrecht, The Netherlands, (1982).
24. H. Dekker, Quantization in curved spaces, in Functional Integration: Theory and Applications,
(Edited by J.P. Antoine and E. Tirapegui), pp. 207-224, Plenum, New York, (1980).
25. H. Grabert and M.S. Green, Fluctuations and nonlinear irreversible processes, Phys. Rev. A
19, 1747-1756 (1979).
26. R. Graham, Covariant formulation of non-equilibrium statistical thermodynamics, Z. Physik
B26, 397-405 (1977).
27. R. Graham, Lagrangian for diffusion in curved phase space, Phys. Rev. Lett. 38, 51-53 (1977).
28. L.S. Schulman, Techniques and Applications of Path Integration, J. Wiley & Sons, New York,
(1981).
29. L. Ingber, Nonlinear nonequilibrium statistical mechanics approach to C^3 systems, in 9th MIT/ONR
Workshop on C^3 Systems: Naval Postgraduate School, Monterey, CA, 2-5 June 1986, pp. 237-244,
MIT, Cambridge, MA, (1986).
30. R.F. Fox, Uniform convergence to an effective Fokker-Planck equation for weakly colored noise,
Phys. Rev. A 34, 4525-4527 (1986).
31. C. Van den Broeck, On the relation between white shot noise, Gaussian white noise, and the
dichotomic Markov process, J. Stat. Phys. 31, 467-483 (1983).
32. P. Colet, H.S. Wio, and M. San Miguel, Colored noise: A perspective from a path-integral
formalism, Phys. Rev. A 39, 6094-6097 (1989).
33. C.W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences,
Springer-Verlag, Berlin, Germany, (1983).
34. L. Ingber, Statistical mechanics of neocortical interactions. Dynamics of synaptic modification,
Phys. Rev. A 28, 395-416 (1983).
35. L. Ingber, Statistical mechanics of neocortical interactions. Derivation of short-term-memory
capacity, Phys. Rev. A 29, 3346-3358 (1984).
36. L. Ingber, Statistical mechanics of neocortical interactions. EEG dispersion relations, IEEE Trans.
Biomed. Eng. 32, 91-94 (1985).
37. L. Ingber, Statistical mechanics of neocortical interactions: Stability and duration of the 7±2 rule
of short-term-memory capacity, Phys. Rev. A 31, 1183-1186 (1985).
38. L. Ingber and P.L. Nunez, Multiple scales of statistical physics of neocortex: Application to
electroencephalography, Mathl. Comput. Modelling 13 (7), 83-95 (1990).
39. L. Ingber, Path-integral Riemannian contributions to nuclear Schrödinger equation, Phys. Rev. D
29, 1171-1174 (1984).
40. L. Ingber, Riemannian contributions to short-ranged velocity-dependent nucleon-nucleon
interactions, Phys. Rev. D 33, 3781-3784 (1986).
41. L. Ingber, Mathematical comparison of computer models to exercise data, in 1989 JDL C^2
Symposium: National Defense University, Washington, DC, 27-29 June 1989, pp. 169-192, SAIC,
McLean, VA, (1989).
42. L. Ingber and D.D. Sworder, Statistical mechanics of combat with human factors, Mathl. Comput.
Modelling 15 (11), 99-127 (1991).
43. K. Kishida, Physical Langevin model and the time-series model in systems far from equilibrium,
Phys. Rev. A 25, 496-507 (1982).
44. K. Kishida, Equivalent random force and time-series model in systems far from equilibrium, J.
Math. Phys. 25, 1308-1313 (1984).
45. L. Ingber, Statistical mechanical aids to calculating term structure models, Phys. Rev. A
42 (12), 7057-7064 (1990).
46. H. Szu and R. Hartley, Fast simulated annealing, Phys. Lett. A 122 (3-4), 157-162 (1987).
47. S. Kirkpatrick, C.D. Gelatt, Jr., and M.P. Vecchi, Optimization by simulated annealing, Science
220 (4598), 671-680 (1983).
48. M.F. Wehner and W.G. Wolfer, Numerical evaluation of path-integral solutions to Fokker-Planck
equations. I., Phys. Rev. A 27, 2663-2670 (1983).
49. M.F. Wehner and W.G. Wolfer, Numerical evaluation of path-integral solutions to Fokker-Planck
equations. II. Restricted stochastic processes, Phys. Rev. A 28, 3003-3011 (1983).
50. C. Grebogi, E. Ott, and J.A. Yorke, Chaos, strange attractors, and fractal basin boundaries in
nonlinear dynamics, Science 238, 632-637 (1987).
51. R. Pool, Is it chaos, or is it just noise?, Science 243, 25-28 (1989).
52. J. Theiler, Correlation dimension of ltered noise, Report 6/29/1988, UC San Diego, La Jolla, CA,
(1988).
53. P. Grassberger, Do climatic attractors exist?, Nature 323, 609-612 (1986).
54. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, Equation of state
calculations by fast computing machines, J. Chem. Phys. 21 (6), 1087-1092 (1953).
55. S.G. Brush, Prediction and theory evaluation: The case of light bending, Science
246, 1124-1129 (1989).
56. N.S. Goel and N. Richter-Dyn, Stochastic Models in Biology, Academic Press, New York, NY,
(1974).
57. D.F. Shanno and K.H. Phua, Minimization of unconstrained multivariate functions, ACM Trans.
Mathl. Software 2, 87-94 (1976).
58. K.S. Cheng, Quantization of a general dynamical system by Feynman's path integration
formulation, J. Math. Phys. 13, 1723-1726 (1972).
59. F. Langouche, D. Roekaerts, and E. Tirapegui, Discretization problems of functional integrals in
phase space, Phys. Rev. D 20, 419-432 (1979).
60. F. Langouche, D. Roekaerts, and E. Tirapegui, Short derivation of Feynman Lagrangian for general
diffusion processes, J. Phys. A 13, 449-452 (1980).