A Practical Method For Calculating Largest Lyapunov Exponents From Small Data Sets
M. Rosenstein et al.
North-Holland
Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest
Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close state-space trajectories and
estimate the amount of chaos in a system. We present a new method for calculating the largest Lyapunov exponent from an
experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate
because it takes advantage of all the available data. We show that the algorithm is fast, easy to implement, and robust to
changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level.
Furthermore, one may use the algorithm to calculate simultaneously the correlation dimension. Thus, one sequence of
computations will yield an estimate of both the level of chaos and the system complexity.
For this reason, we have developed a new method for calculating the largest Lyapunov exponent. The method is reliable for small data sets, fast, and easy to implement. "Easy to implement" is largely a subjective quality, although we believe it has had a notable positive effect on the popularity of dimension estimates.

The remainder of this paper is organized as follows. Section 2 describes the Lyapunov spectrum and its relation to Kolmogorov entropy. A synopsis of previous methods for calculating Lyapunov exponents from both system equations and experimental time series is also given. In section 3 we describe the new approach for calculating λ₁ and show how it differs from previous methods. Section 4 presents the results of our algorithm for several chaotic dynamical systems as well as several non-chaotic systems. We show that the method is robust to variations in embedding dimension, number of data points, reconstruction delay, and noise level. Section 5 is a discussion that includes a description of the procedure for calculating λ₁ and the correlation dimension simultaneously. Finally, section 6 contains a summary of our conclusions.

2. Background

For a dynamical system, sensitivity to initial conditions is quantified by the Lyapunov exponents. For example, consider two trajectories with nearby initial conditions on an attracting manifold. When the attractor is chaotic, the trajectories diverge, on average, at an exponential rate characterized by the largest Lyapunov exponent [15]. This concept is also generalized for the spectrum of Lyapunov exponents, λ_i (i = 1, 2, ..., n), by considering a small n-dimensional sphere of initial conditions, where n is the number of state variables used to describe the system. As the system evolves, the sphere deforms into an ellipsoid whose principal axes expand or contract at rates given by the Lyapunov exponents. The presence of a positive exponent is sufficient for diagnosing chaos and represents local instability in a particular direction. Note that for the existence of an attractor, the overall dynamics must be dissipative, i.e., globally stable, and the total rate of contraction must outweigh the total rate of expansion. Thus, even when there are several positive Lyapunov exponents, the sum across the entire spectrum is negative.

Wolf et al. [39] explain the Lyapunov spectrum by providing the following geometrical interpretation. First, arrange the n principal axes of the ellipsoid in the order of most rapidly expanding to most rapidly contracting. It follows that the associated Lyapunov exponents will be arranged such that

	λ₁ ≥ λ₂ ≥ ... ≥ λ_n ,	(1)

where λ₁ and λ_n correspond to the most rapidly expanding and contracting principal axes, respectively. Next, recognize that the length of the first principal axis is proportional to e^(λ₁t); the area determined by the first two principal axes is proportional to e^((λ₁+λ₂)t); and the volume determined by the first k principal axes is proportional to e^((λ₁+λ₂+...+λ_k)t). Thus, the Lyapunov spectrum can be defined such that the exponential growth of a k-volume element is given by the sum of the k largest Lyapunov exponents. Note that information created by the system is represented as a change in the volume defined by the expanding principal axes. The sum of the corresponding exponents, i.e., the positive exponents, equals the Kolmogorov entropy (K) or mean rate of information gain [15]:

	K = Σ_{λ_i>0} λ_i .	(2)

When the defining equations of the system are known, the entire spectrum can be obtained by numerically solving the system's n equations for n + 1 nearby initial conditions. The growth of a corresponding set of vectors is measured, and as the system evolves, the vectors are repeatedly reorthonormalized using the Gram-Schmidt procedure. This guarantees that only one vector has a component in the direction of most rapid expansion, i.e., the vectors maintain a proper phase-space orientation. In experimental settings, however, the equations of motion are usually unknown and this approach is not applicable. Furthermore, experimental data often consist of time series from a single observable, and one must employ a technique for attractor reconstruction, e.g., method of delays [27,37] or singular value decomposition [8].

As suggested above, one cannot calculate the entire Lyapunov spectrum by choosing arbitrary directions for measuring the separation of nearby initial conditions. One must measure the separation along the Lyapunov directions, which correspond to the principal axes of the ellipsoid previously considered. These Lyapunov directions are dependent upon the system flow and are defined using the Jacobian matrix, i.e., the tangent map, at each point of interest along the flow [15]. Hence, one must preserve the proper phase-space orientation by using a suitable approximation of the tangent map. This requirement, however, becomes unnecessary when calculating only the largest Lyapunov exponent.

If we assume that there exists an ergodic measure of the system, then the multiplicative ergodic theorem of Oseledec [26] justifies the use of arbitrary phase-space directions when calculating the largest Lyapunov exponent with smooth dynamical systems. We can expect (with probability 1) that two randomly chosen initial conditions will diverge exponentially at a rate given by the largest Lyapunov exponent [6,15]. In other words, we can expect that a random vector of initial conditions will converge to the most unstable manifold, since exponential growth in this direction quickly dominates growth (or contraction) along the other Lyapunov directions. Thus, the largest Lyapunov exponent can be defined using the following equation, where d(t) is the average divergence at time t and C is a constant that normalizes the initial separation:

	d(t) = C e^(λ₁t) .	(3)

For experimental applications, a number of researchers have proposed algorithms that estimate the largest Lyapunov exponent [1,10,12,16,17,29,33,38-40], the positive Lyapunov spectrum, i.e., only the positive exponents [39], or the complete Lyapunov spectrum [7,9,13,15,32,35,41]. Each method can be considered a variation of one of several earlier approaches [15,17,32,39], and each suffers from at least one of the following drawbacks: (1) unreliable for small data sets, (2) computationally intensive, (3) relatively difficult to implement. These drawbacks motivated our search for an improved method of estimating the largest Lyapunov exponent.

3. Current approach

The first step of our approach involves reconstructing the attractor dynamics from a single time series. We use the method of delays [27,37] since one goal of our work is to develop a fast and easily implemented algorithm. The reconstructed trajectory, X, can be expressed as a matrix where each row is a phase-space vector. That is,

	X = (X₁ X₂ ... X_M)^T ,	(4)

where X_i is the state of the system at discrete time i. For an N-point time series, {x₁, x₂, ..., x_N}, each X_i is given by

	X_i = (x_i  x_{i+J}  ...  x_{i+(m−1)J}) ,	(5)

where J is the lag or reconstruction delay, and m is the embedding dimension. Thus, X is an M ×
m matrix, and the constants m, M, J, and N are related as

	M = N − (m − 1)J .	(6)

The embedding dimension is usually estimated in accordance with Takens' theorem, i.e., m > 2n, although our algorithm often works well when m is below the Takens criterion. A method used to choose the lag via the correlation sum was addressed by Liebert and Schuster [23] (based on [19]). Nevertheless, determining the proper lag is still an open problem [4]. We have found a good approximation of J to equal the lag where the autocorrelation function drops to 1 − 1/e of its initial value. Calculating this J can be accomplished using the fast Fourier transform (FFT), which requires far less computation than the approach of Liebert and Schuster. Note that our algorithm also works well for a wide range of lags, as shown in section 4.3.

After reconstructing the dynamics, the algorithm locates the nearest neighbor of each point on the trajectory. The nearest neighbor, X_ĵ, is found by searching for the point that minimizes the distance to the particular reference point, X_j. This is expressed as

	d_j(0) = min_{X_ĵ} ‖X_j − X_ĵ‖ ,	(7)

where d_j(0) is the initial distance from the jth point to its nearest neighbor, and ‖·‖ denotes the Euclidean norm. We impose the additional constraint that nearest neighbors have a temporal separation greater than the mean period of the time series #1:

	|j − ĵ| > mean period .	(8)

This allows us to consider each pair of neighbors as nearby initial conditions for different trajectories. The largest Lyapunov exponent is then estimated as the mean rate of separation of the nearest neighbors.

To this point, our approach for calculating λ₁ is similar to previous methods that track the exponential divergence of nearest neighbors. However, it is important to note some differences:

(1) The algorithm by Wolf et al. [39] fails to take advantage of all the available data because it focuses on one "fiducial" trajectory. A single nearest neighbor is followed and repeatedly replaced when its separation from the reference trajectory grows beyond a certain limit. Additional computation is also required because the method approximates the Gram-Schmidt procedure by replacing a neighbor with one that preserves its phase-space orientation. However, as shown in section 2, this preservation of phase-space orientation is unnecessary when calculating only the largest Lyapunov exponent.

(2) If a nearest neighbor precedes (temporally) its reference point, then our algorithm can be viewed as a "prediction" approach. (In such instances, the predictive model is a simple delay line, the prediction is the location of the nearest neighbor, and the prediction error equals the separation between the nearest neighbor and its reference point.) However, other prediction methods use more elaborate schemes, e.g., polynomial mappings, adaptive filters, neural networks, that require much more computation. The amount of computation for the Wales method [38] (based on [36]) is also greater, although it is comparable to the present approach. We have found the Wales algorithm to give excellent results for discrete systems derived from difference equations, e.g., logistic, Hénon, but poor results for continuous systems derived from differential equations, e.g., Lorenz, Rössler.

(3) The current approach is principally based on the work of Sato et al. [33], which estimates λ₁ as

	λ₁(i) = (1 / (iΔt)) · (1 / (M − i)) Σ_{j=1}^{M−i} ln[ d_j(i) / d_j(0) ] ,	(9)

#1 We estimated the mean period as the reciprocal of the mean frequency of the power spectrum, although we expect any comparable estimate, e.g., using the median frequency of the magnitude spectrum, to yield equivalent results.
where Δt is the sampling period of the time series, and d_j(i) is the distance between the jth pair of nearest neighbors after i discrete-time steps, i.e., iΔt seconds. (Recall that M is the number of reconstructed points, as given in eq. (6).) In order to improve convergence (with respect to i), Sato et al. [33] give an alternate form of eq. (9):

	λ₁(i, k) = (1 / (kΔt)) · (1 / (M − k)) Σ_{j=1}^{M−k} ln[ d_j(i + k) / d_j(i) ] .	(10)

In eq. (10), k is held constant, and λ₁ is extracted by locating the plateau of λ₁(i, k) with respect to i. We have found that locating this plateau is sometimes problematic, and the resulting estimates of λ₁ are unreliable. As discussed in section 5.3, this difficulty is due to the normalization by d_j(i).

The remainder of our method proceeds as follows. From the definition of λ₁ given in eq. (3), we assume the jth pair of nearest neighbors diverges approximately at a rate given by the largest Lyapunov exponent:

	d_j(i) ≈ C_j e^(λ₁ · iΔt) ,	(11)

where C_j is the initial separation. By taking the logarithm of both sides of eq. (11), we obtain

	ln d_j(i) ≈ ln C_j + λ₁ · iΔt .	(12)

Eq. (12) represents a set of approximately parallel lines (for j = 1, 2, ..., M), each with a slope roughly proportional to λ₁. The largest Lyapunov exponent is then calculated using a least-squares fit to the "average" line defined by

	y(i) = (1/Δt) ⟨ln d_j(i)⟩ ,	(13)

where ⟨·⟩ denotes the average over all values of j. This process of averaging is the key to calculating accurate values of λ₁ using small, noisy data sets. Note that in eq. (11), C_j performs the function of normalizing the separation of the neighbors, but as shown in eq. (12), this normalization is unnecessary for estimating λ₁. By avoiding the normalization, the current approach gains a slight computational advantage over the method by Sato et al.

The new algorithm for calculating largest Lyapunov exponents is outlined in fig. 1. This method is easy to implement and fast because it uses a simple measure of exponential divergence that circumvents the need to approximate the tangent map. The algorithm is also attractive from a practical standpoint because it does not require large data sets and it simultaneously yields the correlation dimension (discussed in section 5.5). Furthermore, the method is accurate for small data sets because it takes advantage of all the available data. In the next section, we present the results for several dynamical systems.
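The procedure just described can be sketched compactly in Python. This is a minimal illustration under our own simplifying choices, not the authors' original implementation: the brute-force O(M²) neighbor search stands in for any efficient search structure, and fit_range is a hypothetical user-chosen window that in practice must be picked by inspecting the linear region of the ⟨ln d_j(i)⟩ curve.

```python
import numpy as np

def delay_embed(x, m, J):
    """Build the M x m matrix of delay vectors, with M = N - (m - 1)J (eqs. (5), (6))."""
    N = len(x)
    M = N - (m - 1) * J
    return np.array([x[i : i + (m - 1) * J + 1 : J] for i in range(M)])

def largest_lyapunov(x, m, J, dt, mean_period, fit_range):
    """Estimate lambda_1 as the slope of <ln d_j(i)> versus i*dt."""
    X = delay_embed(np.asarray(x, dtype=float), m, J)
    M = len(X)
    # Distance matrix; forbid temporally close neighbors, per eq. (8).
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.arange(M)
    dist[np.abs(idx[:, None] - idx[None, :]) <= mean_period] = np.inf
    nn = np.argmin(dist, axis=1)  # nearest neighbor of each point, eq. (7)
    # Follow each pair of neighbors forward in time and average ln d_j(i).
    lo, hi = fit_range
    mean_log_d = np.empty(hi)
    for i in range(hi):
        keep = np.maximum(idx, nn) + i < M  # pairs still inside the data
        d = np.linalg.norm(X[idx[keep] + i] - X[nn[keep] + i], axis=1)
        mean_log_d[i] = np.mean(np.log(d[d > 0]))
    # Least-squares fit over the (user-chosen) linear region.
    t = np.arange(lo, hi) * dt
    slope, _intercept = np.polyfit(t, mean_log_d[lo:hi], 1)
    return slope, mean_log_d

# Usage: logistic map x_{n+1} = 4 x_n (1 - x_n), theoretical lambda_1 = ln 2 ~ 0.693.
x = np.empty(500)
x[0] = 0.1
for n in range(499):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])
lam, curve = largest_lyapunov(x, m=2, J=1, dt=1.0, mean_period=1, fit_range=(0, 5))
```

For flows rather than maps, mean_period would be set from the reciprocal of the mean frequency of the power spectrum, and the fit range must bracket the linear region that precedes saturation.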
4. Experimental results

A typical plot (solid curve) of ⟨ln d_j(i)⟩ versus iΔt #2 is shown in fig. 2; the dashed line has a slope equal to the theoretical value of λ₁. After a short transition, there is a long linear region that is used to extract the largest Lyapunov exponent. The curve saturates at longer times since the system is bounded in phase space and the average divergence cannot exceed the "length" of the attractor.

The remainder of this section contains tabulated results from our algorithm under different conditions. The corresponding plots are meant to give the reader qualitative information about the facility of extracting λ₁ from the data. That is, the more prominent the linear region, the easier one can extract the correct slope. (Repeatability is discussed in section 5.2.)

#2 In each figure, "⟨ln(divergence)⟩" and "Time (s)" are used to denote ⟨ln d_j(i)⟩ and iΔt, respectively.

Fig. 2. Typical plot of ⟨ln(divergence)⟩ versus time for the Lorenz attractor. The solid curve is the calculated result; the slope of the dashed curve is the expected result.

4.1. Embedding dimension

Since we normally have no a priori knowledge concerning the dimension of a system, it is imperative that we evaluate our method for different embedding dimensions. Table 2 and fig. 3 show our findings for several values of m. In all but three cases (m = 1 for the Hénon, Lorenz and Rössler systems), the error was less than ±10%, and most errors were less than ±5%. It is apparent that satisfactory results are obtained only when m is at least equal to the topological dimension of the system, i.e., m ≥ n. This is due to the fact that chaotic systems are effectively stochastic when embedded in a phase space that is too small to accommodate the true dynamics. Notice that the algorithm performs quite well
Table 1
Chaotic dynamical systems with theoretical values for the largest Lyapunov exponent, λ₁. The sampling period is denoted by Δt.

System         Theoretical λ₁
Logistic map   0.693
Hénon map      0.418
Lorenz         1.50
Rössler        0.090

Fig. 3. Effects of embedding dimension. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 2 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor.
when m is below the Takens criterion. Therefore, it seems one may choose the smallest embedding dimension that yields a convergence of the results.

4.2. Length of time series

Next we consider the performance of our algorithm for time series of various lengths. As shown in table 3 and fig. 4, the present method also works well when N is small (N = 100-1000 for the examined systems). Again, the error was less than ±10% in almost all cases. (The greatest difficulty occurs with the Rössler attractor. For this system, we also found a 20-25% negative bias in the results for N = 3000-5000.) To our knowledge, the lower limit of N used in each case is less than the smallest value reported in the literature. (The only exception is due to Briggs [7], who examined the Lorenz system with N = 600. However, Briggs reported errors for λ₁ that ranged from 54% to 132% for this particular time series length.) We also point out that the literature [1,9,13,15,35] contains results for values of N that are an order of magnitude greater than the largest values used here.

It is important to mention that quantitative analyses of chaotic systems are usually sensitive not only to the data size (in samples), but also to the observation time (in seconds). Hence, we examined the interdependence of N and NΔt for the Lorenz system. Fig. 5 shows the output of our algorithm for three different sampling conditions: (1) N = 5000, Δt = 0.01 s (NΔt = 50 s); (2) N = 1000, Δt = 0.01 s (NΔt = 10 s); and (3) N = 1000, Δt = 0.05 s (NΔt = 50 s). The latter two
Table 2
Experimental results for several embedding dimensions. The number of data points, reconstruction delay, and embedding dimension are denoted by N, J, and m, respectively. We were unable to extract λ₁ with m equal to one for the Lorenz and Rössler systems because the reconstructed attractors are extremely noisy in a one-dimensional embedding space.

System    N     J   m   Calculated λ₁   % error
Logistic  500   1   1   0.675           -2.6
                    2   0.681           -1.7
                    3   0.680           -1.9
                    4   0.680           -1.9
                    5   0.651           -6.1
Hénon     500   1   1   0.195           -53.3
                    2   0.409           -2.2
                    3   0.406           -2.9
                    4   0.399           -4.5
                    5   0.392           -6.2
Lorenz    5000  11  1   -               -
                    3   1.531           2.1
                    5   1.498           -0.1
                    7   1.562           4.1
                    9   1.560           4.0
Rössler   2000  8   1   -               -
                    3   0.0879          -2.3
                    5   0.0864          -4.0
                    7   0.0853          -5.2
                    9   0.0835          -7.2

Table 3
Experimental results for several time series lengths. The number of data points, reconstruction delay, and embedding dimension are denoted by N, J, and m, respectively.

System    N     J   m   Calculated λ₁   % error
Logistic  100   1   2   0.659           -4.9
          200           0.705           1.7
          300           0.695           0.3
          400           0.692           -0.1
          500           0.686           -1.0
Hénon     100   1   2   0.426           1.9
          200           0.416           -0.5
          300           0.421           0.7
          400           0.409           -2.2
          500           0.412           -1.4
Lorenz    1000  11  3   1.751           16.7
          2000          1.345           -10.3
          3000          1.372           -8.5
          4000          1.392           -7.2
          5000          1.523           1.5
Rössler   400   8   3   0.0351          -61.0
          800           0.0655          -27.2
          1200          0.0918          2.0
          1600          0.0984          9.3
          2000          0.0879          -2.3
Fig. 4. Effects of time series lengths. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 3 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor.
Fig. 6. Effects of reconstruction delay. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 4 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor.
It is also important to show that the algorithm is not "fooled" by difficult systems such as correlated noise. Hence, we further establish the utility of our method by examining its performance with the following non-chaotic systems: two-torus, white noise, bandlimited noise, and "scrambled" Lorenz.

For each system, a 2000-point time series was generated. The two-torus is an example of a quasiperiodic, deterministic system. The corresponding time series, x(i), was created by a superposition of two sinusoids with incommensurate frequencies:

	x(i) = sin(2πf₁ · iΔt) + sin(2πf₂ · iΔt) ,	(14)

where f₁ = 1.732051 ≈ √3 Hz, f₂ = 2.236068 ≈ √5 Hz, and the sampling period was Δt = 0.01 s. White noise and bandlimited noise are stochastic systems that are analogous to discrete and continuous chaotic systems, respectively. The "scrambled" Lorenz also represents a continuous stochastic system, and the data set was generated by randomizing the phase information from the Lorenz attractor. This procedure yields a time series of correlated noise with spectral characteristics identical to that of the Lorenz attractor.

For quasiperiodic and stochastic systems, we expect flat plots of ⟨ln d_j(i)⟩ versus iΔt. That is, on average the nearest neighbors should neither diverge nor converge. Additionally, with the stochastic systems we expect an initial "jump" from a small separation at t = 0. The results are shown in fig. 9, and as expected, the curves are mostly flat. However, notice the regions that could be mistaken as appropriate for extracting a positive Lyapunov exponent.
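The "scrambled" (phase-randomized) construction can be sketched as follows. The function name is ours, and the details (uniform random phases, with the zero-frequency and Nyquist terms kept real so the output is a real signal) are one standard way to realize the surrogate, not necessarily the authors' exact procedure; the two-torus signal of eq. (14) serves as the test input.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Correlated noise with the same magnitude spectrum as x, obtained by
    replacing the Fourier phases with uniform random ones."""
    x = np.asarray(x, dtype=float)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(spectrum))
    phases[0] = 0.0                  # keep the zero-frequency (mean) term real
    if len(x) % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist term real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
t = np.arange(2000) * 0.01                   # Delta t = 0.01 s
x = np.sin(2 * np.pi * 1.732051 * t) + np.sin(2 * np.pi * 2.236068 * t)  # eq. (14)
s = phase_randomized_surrogate(x, rng)
# s has (to floating-point accuracy) the same power spectrum as x,
# but its deterministic phase structure has been destroyed.
```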
Fig. 8. Results for systems with two positive Lyapunov exponents. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 7 for details. (a) Rössler-hyperchaos. (b) Mackey-Glass.
Table 5
Experimental results for several noise levels. The number of data points, reconstruction delay, and embedding dimension are denoted by N, J, and m, respectively. The signal-to-noise ratio (SNR) is the ratio of the power in the noise-free signal to that of the pure-noise signal.

System    N     J   m   SNR     Calculated λ₁   % error
Logistic  500   1   2   1       0.704           1.6
                        10      0.779           12.4
                        100     0.856           23.5
                        1000    0.621           -10.4
                        10000   0.628           -9.4
Hénon     500   1   2   1       0.643           53.8
                        10      0.631           51.0
                        100     0.522           24.9
                        1000    0.334           -20.1
                        10000   0.385           -7.9
Lorenz    5000  11  3   1       0.645           -57.0
                        10      1.184           -21.1
                        100     1.110           -26.0
                        1000    1.273           -15.1
                        10000   1.470           -2.0
Rössler   2000  8   3   1       0.0106          -88.2
                        10      0.0394          -56.2
                        100     0.0401          -55.4
                        1000    0.0659          -26.8
                        10000   0.0836          -7.1

Table 7
Experimental results for systems with two positive Lyapunov exponents. The number of data points, reconstruction delay, and embedding dimension are denoted by N, J, and m, respectively.

System              N     J   m    Calculated λ₁   % error
Rössler-hyperchaos  8000  9   3    0.048           -56.8
                              6    0.112           0.9
                              9    0.112           0.9
                              12   0.107           -3.6
                              15   0.102           -8.1
Mackey-Glass        8000  12  3    4.15E-3         -5.0
                              6    4.87E-3         11.4
                              9    4.74E-3         8.5
                              12   4.80E-3         9.7
                              15   4.85E-3         11.0

Fortunately, our empirical results suggest that one may still detect non-chaotic systems for the following reasons:
(1) The anomalous scaling region is not linear, since the divergence of nearest neighbors is not exponential.
(2) For stochastic systems, the anomalous scaling region flattens with increasing embedding dimension. Finite-dimensional systems exhibit a convergence once the embedding dimension is large enough to accommodate the dynamics, whereas stochastic systems fail to show a convergence because they appear more ordered in higher embedding spaces.
With the two-torus, we attribute the lack of convergence to the finite-precision "noise" in the data set. (Notice the small average divergence even at iΔt = 1.) Strictly speaking, we can only distinguish high-dimensional systems from low-dimensional ones, although in most applications a high-dimensional system may be considered random, i.e., infinite-dimensional.
Table 6
Chaotic systems with two positive Lyapunov exponents (λ₁, λ₂). To obtain a better representation of the dynamics, the numerical integrations were performed using a step size 100 times smaller than the sampling period, Δt. The resulting time series were then downsampled by a factor of 100 to achieve the desired Δt.

Fig. 9. Effects of embedding dimension for non-chaotic systems. (a) Two-torus. (b) White noise. (c) Bandlimited noise. (d) "Scrambled" Lorenz.
	Γ(r) ≈ N(r/L)^D .	(19)

Finally, eqs. (15) and (19) are combined to give the Eckmann-Ruelle requirement for Lyapunov exponents:

	log N > D log(1/ρ) .	(20)

For ρ = 0.1, eq. (20) directs us to choose

	N > 10^D .	(21)

Fig. 10. Plot of (d/dt)⟨ln d_j(i)⟩ versus iΔt using our algorithm with three 5000-point realizations of the Lorenz attractor.
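Eq. (21) gives a quick data-budget check. A small sketch of the arithmetic follows; the helper name and the strict-inequality rounding are our choices, not the authors':

```python
import math

def min_points(D, rho=0.1):
    """Smallest integer N satisfying eq. (20): log N > D log(1/rho)."""
    return math.floor((1.0 / rho) ** D) + 1

# For rho = 0.1 this reduces to N > 10**D (eq. (21)): a D = 2 attractor
# needs more than 100 points, while a D = 5 system needs more than 100 000.
```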
... Kolmogorov entropy from a chaotic signal, Phys. Rev. A 28 (1983) 2591.
[22] M. Hénon, A two-dimensional mapping with a strange attractor, Commun. Math. Phys. 50 (1976) 69.
[23] W. Liebert and H.G. Schuster, Proper choice of the time delay for the analysis of chaotic time series, Phys. Lett. A 142 (1989) 107.
[24] E.N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci. 20 (1963) 130.
[25] M.C. Mackey and L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977) 287.
[26] V.I. Oseledec, A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc. 19 (1968) 197.
[27] N.H. Packard, J.P. Crutchfield, J.D. Farmer and R.S. Shaw, Geometry from a time series, Phys. Rev. Lett. 45 (1980) 712.
[28] J.B. Ramsey and H.-J. Yuan, The statistical properties of dimension calculations using small data sets, Nonlinearity 3 (1990) 155.
[29] F. Rauf and H.M. Ahmed, Calculation of Lyapunov exponents through nonlinear adaptive filters, Proc. IEEE Int. Symp. on Circuits and Systems (Singapore, 1991).
[30] O.E. Rössler, An equation for hyperchaos, Phys. Lett. A 71 (1979) 155.
[31] O.E. Rössler, An equation for continuous chaos, Phys. Lett. A 57 (1976) 397.
[32] M. Sano and Y. Sawada, Measurement of the Lyapunov spectrum from a chaotic time series, Phys. Rev. Lett. 55 (1985) 1082.
[33] S. Sato, M. Sano and Y. Sawada, Practical methods of measuring the generalized dimension and the largest Lyapunov exponent in high dimensional chaotic systems, Prog. Theor. Phys. 77 (1987) 1.
[34] I. Shimada and T. Nagashima, A numerical approach to ergodic problem of dissipative dynamical systems, Prog. Theor. Phys. 61 (1979) 1605.
[35] R. Stoop and J. Parisi, Calculation of Lyapunov exponents avoiding spurious elements, Physica D 50 (1991) 89.
[36] G. Sugihara and R.M. May, Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series, Nature 344 (1990) 734.
[37] F. Takens, Detecting strange attractors in turbulence, Lecture Notes in Mathematics, Vol. 898 (1981) p. 366.
[38] D.J. Wales, Calculating the rate of loss of information from chaotic time series by forecasting, Nature 350 (1991) 485.
[39] A. Wolf, J.B. Swift, H.L. Swinney and J.A. Vastano, Determining Lyapunov exponents from a time series, Physica D 16 (1985) 285.
[40] J. Wright, Method for calculating a Lyapunov exponent, Phys. Rev. A 29 (1984) 2924.
[41] X. Zeng, R. Eykholt and R.A. Pielke, Estimating the Lyapunov-exponent spectrum from short time series of low precision, Phys. Rev. Lett. 66 (1991) 3229.