PHYSICAL REVIEW X 6, 011021 (2016)

Ensemble Kalman Filtering without a Model


Franz Hamilton,¹ Tyrus Berry,² and Timothy Sauer²,*
¹North Carolina State University, Raleigh, North Carolina 27695, USA
²George Mason University, Fairfax, Virginia 22030, USA
(Received 28 October 2015; revised manuscript received 22 December 2015; published 1 March 2016)
Methods of data assimilation are established in physical sciences and engineering for the merging of
observed data with dynamical models. When the model is nonlinear, methods such as the ensemble Kalman
filter have been developed for this purpose. At the other end of the spectrum, when a model is not known,
the delay coordinate method introduced by Takens has been used to reconstruct nonlinear dynamics. In this
article, we merge these two important lines of research. A model-free filter is introduced based on the
filtering equations of Kalman and the data-driven modeling of Takens. This procedure replaces the model
with dynamics reconstructed from delay coordinates, while using the Kalman update formulation to
reconcile new observations. We find that this combination of approaches results in comparable efficiency to
parametric methods in identifying underlying dynamics, and may actually be superior in cases of model error.

DOI: 10.1103/PhysRevX.6.011021

Subject Areas: Complex Systems, Geophysics, Nonlinear Dynamics

I. INTRODUCTION

Data assimilation plays an increasingly important role in nonlinear science, as a means of inferring unobserved model variables and constraining unknown parameters. Use of the extended Kalman filter and ensemble Kalman filter (EnKF) is now standard in a wide range of geophysical problems [1–5] and several areas of physical and biological sciences where spatiotemporal dynamics is involved [6–9].

When a physically motivated model is available, parametric forecasting methods can be used in a variety of applications in noise filtering, prediction, and control of systems. Nonlinear approaches to filtering [1,2,10,11] allow forecasting models to use the model equations to develop close to optimal predictions. Even if some variables are not observable, they may be reconstructed, provided that their model equations are known.

In other cases, a model may not be available, or available models may be poor. In geophysical processes, basic principles may constrain a variable in terms of other driving variables in a way that is well understood, but the driving variables may be unmodeled or modeled with large error [12–15]. In numerical weather prediction codes, physics on the large scale is typically sparsely modeled [16,17]. Some recent work has considered the case where only a partial model is known [18,19].

When a physical model is completely unavailable, Takens's method of attractor reconstruction [20–23] has become the foundation of nonparametric time series prediction methods. Under suitable genericity hypotheses, the dynamical attractor may be represented by delay coordinate vectors built from the time series observations, and methods of prediction, control, and other applications from chaotic time series have been developed [24–26]. In particular, time series prediction algorithms locate the current position in the delay coordinate representation and use analogs from previously recorded data to establish a predictive statistical model, which can be accomplished in several ways [27–40].

In this article, we propose a "Kalman-Takens" method of data analysis that is able to filter a noisy time series without access to a model. We replace the model equations governing the evolution of the system, which are assumed to be known in standard data assimilation methods, with advancement of the dynamics nonparametrically using delay-coordinate vectors. We find that the Kalman-Takens algorithm as proposed in this article is able in many cases to filter noisy data at a rate comparable to parametric filtering methods that have full access to the exact model. In one sense, this is no surprise: the content of Takens's theorem is that, in the large data limit, the equations can be replaced by the data. But particularly surprising is the fact that by implementing the Kalman update, the nonparametric representation of the dynamics is able to handle substantial noise in the observed data. Moreover, in cases where the parametric filter is subject to model error, we see that the Kalman-Takens approach may exhibit superior results.

*[email protected]

Published by the American Physical Society under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Throughout this paper we assume availability of a noisy set of historical data for which there is no model. The goal is to use this historical data set to predict future values using one of the many established nonparametric prediction methods. Our objective is to first filter these noisy observations using the Kalman-Takens algorithm and reconstruct




the underlying signal, which will allow for increased predictive capability. We compare this novel approach to filtering, which does not use any model, to the ideal scenario, which assumes that the perfect model is known, as well as to scenarios that include model error.

Nonparametric forecasting has received renewed interest recently [18,19,41,42]; however, all of these methods have impractically large data requirements for intrinsically high-dimensional dynamics. Indeed, the approaches of Refs. [18,19] are based on combining a partially known or imperfect model with nonparametric methods in order to overcome the high dimensionality that is present in many practical problems. While solving the forecasting problem requires representing the entire dynamical system, the filtering problem requires only the ability to perform a short-term forecast (until the next observation becomes available).

We demonstrate this fact in Fig. 1 by filtering the Lorenz-96 spatiotemporal dynamical system without any


FIG. 1. Results of filtering the full 40-dimensional Lorenz-96 system. (a) Spatiotemporal plot of the system dynamics. The horizontal
axis represents time, and the vertical axis represents the spatial position. From top to bottom, (1) the true system trajectory,
(2) observations perturbed by 100% noise, (3) Kalman-Takens output, and (4) parametric filter output. (b) Spatiotemporal plot of the
resulting absolute errors. For readability, only absolute errors greater than 5 are shown. From top to bottom, error in (1) the observed
signals, (2) Kalman-Takens output, and (3) parametric filter output.


knowledge of the underlying model. The Lorenz-96 system represents a simplified weather system on a midlatitude and is a common benchmark for spatiotemporal filtering methods due to its chaotic dynamics and moderate dimensionality. Notice that the Kalman-Takens filter does not equal the performance of the parametric filter; however, this should be expected, since the parametric method is given full knowledge of the true model, whereas the Kalman-Takens filter has only the short noisy training data set from which to learn the dynamics.

One key application of the Kalman-Takens filter is to data sets where one has no knowledge of the underlying dynamics. However, we also show that the proposed approach has significant advantages when the true model is known, but imperfectly. In Secs. III and IV, we show that the filter has only a small loss of performance compared to parametric methods that have the perfect model. In exchange for this small loss of performance compared to perfect knowledge, the Kalman-Takens method offers extreme robustness to model error. Since we make no model assumption at all, the results are free from the biases these strong assumptions introduce. In Sec. V, we demonstrate the advantage of the new approach by showing that introducing a small amount of model error (by simply perturbing parameters) leads to the parametric filtering methods being outperformed by the Kalman-Takens filter. We return to the Lorenz-96 example at the end of Sec. IV.

As a precursor to this article, we note that Kalman filtering was used in Ref. [43] to fit parameters of a global radial basis function model based at unstable fixed points in delay coordinates, which were in turn estimated by another method. In contrast, our approach is in the spirit of fully nonparametric statistics.

II. KALMAN-TAKENS FILTER

The standard Kalman filtering context assumes a nonlinear system with n-dimensional state vector x and m-dimensional observation vector y defined by

    x_{k+1} = f(x_k, t_k) + w_k,
    y_k = g(x_k, t_k) + v_k,                                    (1)

where f and g are known, and where w_k and v_k are white noise processes with covariance matrices Q and R, respectively. The ensemble Kalman filter approximates a nonlinear system by forming an ensemble, such as through the unscented transformation (see, for example, Ref. [44]). Here, we initialize the filter with state vector x_0^+ = 0_{n×1} and covariance matrix P_0^+ = I_{n×n}. At the kth step of the filter there is an estimate of the state x_{k-1}^+ and the covariance matrix P_{k-1}^+. In the unscented version of the EnKF, the singular value decomposition is used to find the symmetric positive definite square root S_{k-1}^+ of the matrix P_{k-1}^+, allowing us to form an ensemble of E state vectors, where the ith ensemble member is denoted x_{i,k-1}^+.

The model f is applied to the ensemble, advancing it one time step, and then observed with function g. The mean of the resulting state ensemble gives the prior state estimate x_k^-, and the mean of the observed ensemble is the predicted observation y_k^-. Denoting the covariance matrices P_k^- and P_k^y of the resulting state and observed ensembles, and the cross-covariance matrix P_k^{xy} between the state and observed ensembles, we define

    P_k^-    = \sum_{i=1}^{E} (x_{ik}^- - x_k^-)(x_{ik}^- - x_k^-)^T + Q,
    P_k^y    = \sum_{i=1}^{E} (y_{ik}^- - y_k^-)(y_{ik}^- - y_k^-)^T + R,
    P_k^{xy} = \sum_{i=1}^{E} (x_{ik}^- - x_k^-)(y_{ik}^- - y_k^-)^T,     (2)

and use the equations

    K_k   = P_k^{xy} (P_k^y)^{-1},
    P_k^+ = P_k^- - P_k^{xy} (P_k^y)^{-1} P_k^{yx},
    x_k^+ = x_k^- + K_k (y_k - y_k^-)                            (3)

to update the state and covariance estimates with the observation y_k. We refer to this throughout as the parametric filter, since a full set of model equations is assumed to be known. In some cases Q and R are not known; an algorithm was developed in Ref. [9] for adaptive estimation of Q and R. A brief description of this algorithm is included in the Appendix.
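The forecast-to-analysis step in Eqs. (2)-(3) can be sketched as follows. This is our own minimal reading of the update, not code from the paper: the function name `enkf_update` is illustrative, and the sample sums are left unnormalized exactly as printed in Eq. (2).

```python
# Minimal sketch of the Kalman update of Eqs. (2)-(3), assuming the ensemble
# has already been advanced by f and observed with g. Illustrative names only.
import numpy as np

def enkf_update(x_prior, y_prior, y_obs, Q, R):
    """One Kalman update from forecast ensembles.

    x_prior: (E, n) state ensemble after applying the model f
    y_prior: (E, m) observed ensemble after applying g
    y_obs:   (m,) new observation y_k
    Returns the analysis state x_k^+ and covariance P_k^+.
    """
    xm = x_prior.mean(axis=0)                  # prior state estimate x_k^-
    ym = y_prior.mean(axis=0)                  # predicted observation y_k^-
    dx = x_prior - xm                          # ensemble deviations
    dy = y_prior - ym
    P = dx.T @ dx + Q                          # P_k^-   (Eq. 2, as printed)
    Py = dy.T @ dy + R                         # P_k^y
    Pxy = dx.T @ dy                            # P_k^xy
    Py_inv = np.linalg.inv(Py)
    K = Pxy @ Py_inv                           # Kalman gain K_k (Eq. 3)
    x_plus = xm + K @ (y_obs - ym)             # state update
    P_plus = P - Pxy @ Py_inv @ Pxy.T          # covariance update
    return x_plus, P_plus
```

With an identity observation of the first coordinate and zero Q and R, the update collapses the observed direction onto the measurement while leaving the unobserved direction's covariance untouched, which is a quick sanity check on the formulas.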
Contrary to Eq. (1), our assumption in this article is that neither the model f nor the observation function g is known, making outright implementation of the EnKF impossible. Instead, the filter we describe here requires no model while still leveraging the Kalman update described in Eq. (3). The idea is to replace the system evolution, traditionally done through application of f, with advancement of the dynamics nonparametrically using delay-coordinate vectors. We describe this method under the assumption that a single variable is observable, say, y, but the algorithm is easily generalized to multiple system observables. In addition, we assume in our examples that the noise covariances Q and R are unknown and are updated adaptively as in Ref. [9].

Given the observable y_k, consider the delay-coordinate vector x_k = [y_k, y_{k-1}, ..., y_{k-d}], where d is the number of delays. This delay vector x_k represents the state of the system [20,22]. At each step of the filter an ensemble of delay vectors is formed. However, the advancement of the ensemble one time step forward requires a nonparametric


technique to serve as a local proxy \tilde{f} for the unknown model f.

The proxy is determined as follows. Given a delay-coordinate vector x_k = [y_k, y_{k-1}, ..., y_{k-d}], we locate its N nearest neighbors [y_{k'}, y_{k'-1}, ..., y_{k'-d}], [y_{k''}, y_{k''-1}, ..., y_{k''-d}], ..., [y_{k^N}, y_{k^N-1}, ..., y_{k^N-d}] within the set of noisy training data, with respect to Euclidean distance. Once the neighbors are found, the known values y_{k'+1}, y_{k''+1}, ..., y_{k^N+1} are used with a local model to predict y_{k+1}. In this article, we use a locally constant model that in its most basic form is an average of the nearest neighbors; namely,

    \tilde{f}(x_k) = [ (y_{k'+1} + y_{k''+1} + ... + y_{k^N+1}) / N ,  y_k, ..., y_{k-d+1} ].

This prediction can be further refined by considering a weighted average of the nearest neighbors, where the weights are assigned as a function of the neighbor's distance to the current delay-coordinate vector. Throughout the following examples, unless otherwise specified, 20 neighbors are used. This process is repeated for each member of the ensemble. Once the full ensemble is advanced forward, the remaining EnKF algorithm is applied as previously described, and our delay-coordinate vector is updated according to Eq. (3). We refer to this method as the Kalman-Takens filter.
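The locally constant proxy \tilde{f} described above can be sketched directly. This is an illustrative implementation under our own naming; the distance-weighted refinement mentioned above would simply replace the plain mean.

```python
# Sketch of the nonparametric proxy f~: find the N nearest delay vectors in
# the training series and average their one-step successors. Names are ours.
import numpy as np

def takens_step(delay_vec, train, N=20, d=4):
    """Advance a delay vector [y_k, ..., y_{k-d}] one step using analogs.

    train: 1D array of (noisy) training observations.
    Returns [y_{k+1}^pred, y_k, ..., y_{k-d+1}].
    """
    # indices i whose delay vector [y_i, ..., y_{i-d}] has a known successor
    idx = np.arange(d, len(train) - 1)
    lib = np.stack([train[i - d:i + 1][::-1] for i in idx])
    dists = np.linalg.norm(lib - delay_vec, axis=1)   # Euclidean distance
    nbrs = idx[np.argsort(dists)[:N]]                 # N nearest neighbors
    y_next = train[nbrs + 1].mean()                   # locally constant model
    return np.concatenate(([y_next], delay_vec[:-1])) # shift the delays down
```

Applying `takens_step` to every ensemble member replaces the application of the model f in the parametric filter; the rest of the Kalman machinery is unchanged.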
III. IMPROVED FORECASTING

As an introductory example, we examine the Lorenz-63 system [45],

    \dot{x} = \sigma (y - x),
    \dot{y} = x (\rho - z) - y,
    \dot{z} = x y - \beta z,                                     (4)

where \sigma = 10, \rho = 28, \beta = 8/3. Assume we have a noisy set of training data observed from the x variable. Using these data, we want to develop a nonparametric forecasting method to predict future x values of the system. However, due to the presence of noise, outright application of a prediction method leads to inaccurate forecasts. If knowledge of Eq. (4) were available, the standard parametric filter could be used to assimilate the noisy x observations to the model, generate a denoised estimate of the x variable, and simultaneously estimate the unobserved y and z variables. Without knowledge of Eq. (4), we are limited to methods of nonparametric filtering. Using the Kalman-Takens method, we can effectively filter the noisy x observations, reconstructing the underlying signal, without any knowledge of the underlying system.
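Training data of this kind can be generated by integrating Eq. (4). The paper does not specify the integrator or the exact noise convention, so the fourth-order Runge-Kutta step, the transient length, and the reading of "60% noise" as noise variance equal to 60% of the signal variance are assumptions made here for illustration.

```python
# Sketch: generate a noisy Lorenz-63 x-variable training series, sampled at
# h = 0.05. Integrator choice (RK4) and noise convention are our assumptions.
import numpy as np

def lorenz63_rhs(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, u, dt):
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def make_training_series(n=6000, h=0.05, noise_frac=0.6, seed=0):
    rng = np.random.default_rng(seed)
    u = np.array([1.0, 1.0, 1.0])
    for _ in range(500):                  # discard an initial transient
        u = rk4(lorenz63_rhs, u, h)
    xs = np.empty(n)
    for i in range(n):
        u = rk4(lorenz63_rhs, u, h)
        xs[i] = u[0]
    # additive Gaussian noise, variance = noise_frac * var(signal) (our reading)
    noise = rng.normal(0.0, np.sqrt(noise_frac * np.var(xs)), n)
    return xs, xs + noise
```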
For this example, we assume that 6000 noisy training points of the Lorenz-63 x variable are available, sampled at the rate h = 0.05. We build a delay-coordinate vector [x(t), x(t - h), ..., x(t - dh)], where d = 4. Given that we are filtering and searching for neighbors within the same training set, we implement a 600-sample lockout centered around the current delay-coordinate vector to prevent overfitting (effectively reducing the set from which to find neighbors to 5400 data points). The Kalman-Takens method can then be implemented, iteratively filtering the training data.

FIG. 2. Given noisy observations of the Lorenz-63 x variable (green circles) sampled at rate h = 0.05, the goal is to reduce the signal error (RMSE = 4.76) using an ensemble Kalman filter, where the true x trajectory (solid black curve) is desired. (a) If we have knowledge of the Lorenz-63 equations, the standard filtering methodology can be used, whereby the noisy x data are assimilated to the model and, using an EnKF, a noise-reduced estimate of the x variable as well as estimates of the unobserved y and z variables are provided. We refer to this as the parametric filter (solid blue curve, RMSE = 2.87). (b) The Kalman-Takens filter is able to significantly filter the noise in the observed signal (solid red curve, RMSE = 3.04) to a level comparable to the parametric filter.

Figure 2 shows a comparison of the parametric and Kalman-Takens filters in reconstructing the Lorenz-63 x time series given noisy observations of the variable. The observed time series is corrupted by 60% noise. The green circles denote the noisy observations, and the black solid line denotes the true trajectory of the variable [signal root mean square error (RMSE) = 4.76]. In Fig. 2(a), results of the parametric filter (solid blue curve) reconstruction are shown. The parametric filter, operating with full knowledge of the underlying dynamical equations, is able to significantly


reduce the signal error (RMSE = 2.87). Figure 2(b) shows the result of the Kalman-Takens filter reconstruction (solid red curve). Without knowledge of any model equations, the Kalman-Takens method is able to significantly reduce the signal error to a level comparable with the parametric filter (RMSE = 3.04).

The results of reconstructing the observed Lorenz-63 x variable over a range of noise levels are shown in Fig. 3. Error bars denote standard error over 10 realizations. Estimation of the noise covariances Q and R for both filter algorithms is done using the methodology of Ref. [9]. The Kalman-Takens filter (solid red curve) is able to substantially reduce the signal error (dotted black curve) to a level comparable to the parametric filter (solid blue curve). In some instances, the Kalman-Takens filter can be used to reprocess the filtered data, which can lead to a further reduction in error (dotted red curve).

Next, we show how Kalman-Takens filtering of the noisy data set can enhance forecasting. We utilize a simple nonparametric prediction method similar to the local reconstruction of \tilde{f} above. More sophisticated methods exist, but we want to avoid complicating the comparison of the filtering methods. Assume the current time is k, and we want to forecast the variable y ahead j time units into the future. Using the delay-coordinate vector x_k = [y_k, y_{k-1}, ..., y_{k-d}], the vector's N nearest neighbors [y_{k'}, y_{k'-1}, ..., y_{k'-d}], [y_{k''}, y_{k''-1}, ..., y_{k''-d}], ..., [y_{k^N}, y_{k^N-1}, ..., y_{k^N-d}] within the set of noisy training data are located. Then, the known values y_{k'+j}, y_{k''+j}, ..., y_{k^N+j} are averaged to predict y_{k+j}.
FIG. 3. Filter performance of both the parametric (solid blue curve) and nonparametric (solid red curve) filters when denoising the observed Lorenz-63 x variable at increasing levels of (a) large and (b) low observation noise (signal error, dotted black curve). The time series is sampled at rate h = 0.05, and error bars denote standard error over 10 realizations. The Q and R noise covariances are estimated adaptively. The nonparametric filter is able to significantly reduce the signal error to a level comparable to the parametric filter, which has full use of the model. In some instances, the performance of the nonparametric filter improves by reprocessing the data (dotted red curve). At higher noise levels, the discrepancy between the parametric and nonparametric filters becomes negligible.

FIG. 4. Results of forecasting the Lorenz-63 x variable when the training data are perturbed by 30% noise. Results are averaged over 3000 realizations. Forecast error is calculated with respect to (a) the non-noisy truth and (b) the noisy truth. When the training data are filtered by the Kalman-Takens filter (solid red curve), we gain improved forecasting capability as compared to use of the unfiltered training data (dotted black curve). The forecasting results (using identical forecast algorithms) when the training data are filtered by the parametric filter (solid blue curve) are included for a point of reference.

Figure 4 shows the error in predicting two time units of the Lorenz-63 x variable when the training set is influenced by 30% noise. Results are averaged over 3000 realizations. The prediction error is calculated with respect to the


non-noisy truth in Fig. 4(a) and the noisy truth in Fig. 4(b). When using the noisy training data without any filtering (dotted black curve) for the nonparametric forecast, the prediction error is highest, as would be expected. We observe a significant improvement in prediction error when the training data are filtered first by the Kalman-Takens method (solid red curve). This improvement in predictive capability when using the Kalman-Takens method compares favorably to the improvement gained when filtering with the full set of model equations (solid blue curve).

We emphasize that our purpose is to demonstrate the utility of the Kalman-Takens filter, not to compare different methods of forecasting. As such, all the forecast methods in this paper are model-free methods, even when the true model is used to filter the historical data, in order to focus on the filtering comparison between the Kalman-Takens and parametric approaches. In practice, if the correct model were available, the noisy initial conditions could be filtered and the model advanced in time to calculate the forecasted values.

IV. SPATIOTEMPORAL DYNAMICS

Given the success of the Kalman-Takens method in filtering noisy observations in the low-dimensional Lorenz-63 system, allowing for improved forecasting ability, we now examine the filter-forecast problem in a higher-dimensional system. Consider a coupled ring of N Lorenz-96 nodes [46],

    \dot{x}_i = (a x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,         (5)

where a = 1 and F = 8. The Lorenz-96 system is a convenient demonstrative example, since it allows for a range of higher-dimensional complex behavior by adjusting the number of nodes in the system. We first consider a 40-dimensional Lorenz-96 system in which three nodes are observable, namely, x_1, x_2, and x_40. We increase the number of observables in this example since filtering a 40-dimensional Lorenz-96 system with one observation, and the full set of model equations, is itself a difficult task. Therefore, we introduce the assumption that in this higher-dimensional system we are afforded additional observations. Even though we have additional observations, our goal is only to filter and predict the x_1 variable.
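The ring in Eq. (5) can be integrated in the same manner as the Lorenz-63 example. The step below is a sketch under our own choices (RK4 integrator, indices taken modulo the ring size via `np.roll`); the paper does not state its integration scheme.

```python
# Sketch: Lorenz-96 ring dynamics, Eq. (5), with periodic indexing.
import numpy as np

def lorenz96_rhs(x, a=1.0, F=8.0):
    # np.roll implements the ring: roll(-1) -> x_{i+1}, roll(2) -> x_{i-2},
    # roll(1) -> x_{i-1}
    return (a * np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
```

A quick consistency check: with a = 1, the uniform state x_i = F is a fixed point of Eq. (5), since (x_{i+1} - x_{i-2}) vanishes and -x_i + F = 0.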
FIG. 5. The filtering problem in a 40-dimensional Lorenz-96 system where noisy observations from 3 of the network nodes are available. Observations are perturbed by 60% noise. The goal is to filter the noisy observations of the x_1 node (black circles) and reduce the signal error (RMSE = 2.16). The black solid curve denotes the true trajectory of the variable. (a) The parametric filter (solid blue curve) with full knowledge of the system is able to filter the signal to RMSE = 1.17. This requires full estimation of the 40-dimensional state. (b) The nonparametric Kalman-Takens filter (solid red curve), with no knowledge of the underlying physical system and utilizing only delay coordinates of the observables, is able to filter the signal to RMSE = 1.36.

We assume that 10 000 noisy training data points from each of the three nodes, sampled at rate h = 0.05, are available. Since three nodes are observable in this example, our delay-coordinate vector takes the form [x_1(t), ..., x_1(t - dh), x_2(t), ..., x_2(t - dh), x_40(t), ..., x_40(t - dh)], where we set d = 3. Figure 5 shows the results of filtering the x_1 node in this system when the observations are perturbed by 60% noise (green circles). With knowledge of the full 40-dimensional system, a parametric filter can be implemented, solid blue curve in Fig. 5(a), to reduce the signal error (RMSE = 1.17). With no model available, the Kalman-Takens method can be utilized, solid red curve in Fig. 5(b), and even in this high-dimensional example it is able to reduce the signal error to a level comparable to the parametric filter (RMSE = 1.36).

Figure 6 shows the results of filtering the x_1 node at various noise levels. Figure 6(a) shows the results in a five-dimensional Lorenz-96 network, where only x_1 is observable, and Fig. 6(b) shows the results in a 40-dimensional network, where x_1, x_2, and x_40 are observable. In this example, the Q and R noise covariances are tuned off-line for each filter to ensure optimal performance. In both network sizes, the Kalman-Takens method (solid red curve) is able to significantly reduce the signal error (dotted black curve) to a level comparable with the parametric filter (solid blue curve), which operates with knowledge of the full system equations.
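The noise percentages and RMSE values quoted throughout can be made concrete with two small helpers. We read "p% noise" as additive Gaussian noise whose variance is p% of the signal variance, consistent with the description given for Fig. 1 at the 100% level; the helper names are ours.

```python
# Sketch of the noise and error conventions, under our reading of "p% noise".
import numpy as np

def add_noise(signal, percent, rng):
    """Additive Gaussian noise; at 100%, the noise variance equals the
    signal variance (our reading extended to other percentages)."""
    sd = np.sqrt(percent / 100.0 * np.var(signal))
    return signal + rng.normal(0.0, sd, size=signal.shape)

def rmse(estimate, truth):
    """Root mean square error of an estimated trajectory."""
    return np.sqrt(np.mean((estimate - truth) ** 2))
```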


FIG. 6. Filter performance of the parametric (solid blue curve) and Kalman-Takens (solid red curve) filters when denoising the observed Lorenz-96 x_1 variable at increasing levels of observation noise (signal error, dotted black curve). The variable is sampled at rate h = 0.05, and the error bars denote standard error over 10 realizations. The Q and R noise covariances are tuned off-line for optimal performance of the filters. Results are presented for both (a) a five-dimensional Lorenz-96 system with only x_1 observable and (b) a 40-dimensional Lorenz-96 system where x_1, x_2, and x_40 are observable. We assume additional nodes are observable in the 40-dimensional case due to the high dimensionality of the system, which inhibits both the parametric and Kalman-Takens filters. In both systems, the Kalman-Takens filter is able to significantly reduce the signal error to a level comparable to the parametric filter that has full knowledge of the system.

FIG. 7. Results of forecasting the x_1 node in a five-dimensional Lorenz-96 system when the training data are affected by 60% noise. Results are averaged over 2500 realizations. Forecast error is calculated with respect to (a) the non-noisy truth and (b) the noisy truth. Even in this higher-dimensional system, we see that use of the Kalman-Takens filter to filter the training data results in improved forecasting capability (solid red curve) as compared to forecasting without filtering (dotted black curve). The forecast based on Kalman-Takens initial conditions compares well to the same Takens-based forecast method using initial conditions from the parametric filter (solid blue curve).

Figure 7 shows the results of forecasting x_1 in a five-dimensional Lorenz-96 network, when the training data are corrupted by 60% noise, with respect to the non-noisy truth in Fig. 7(a) and the noisy truth in Fig. 7(b). Prediction with the unfiltered training data (dotted black curve) results in large errors. Once again, application of the Kalman-Takens method results in predictions with smaller error (solid red curve), comparable to those obtained when a parametric filter is used (solid blue curve). The success of the Kalman-Takens filter in these higher-dimensional systems is encouraging for its use in real-world physical systems where the system is complex, high dimensional, and a model is most likely unknown.

We emphasize that we do not expect to obtain long-term forecasts of intrinsically high-dimensional dynamics from a nonparametric technique (due to the curse of dimensionality). However, we now return to the example in Fig. 1, which shows that filtering such systems without a model is sometimes possible. Indeed, having shown that we can filter a single spatial location, we can easily filter the entire spatiotemporal system one location at a time. At each spatial node x_i of the 40-dimensional Lorenz-96 network, we build the short-term forecast model used by the Kalman-Takens filter from x_i along with its two spatial neighbors x_{i+1} and x_{i-1}.
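The locally built delay vectors just described, one per node using its two ring neighbors, might be assembled as follows. The function name, component ordering, and array layout are illustrative choices, not specified in the paper.

```python
# Sketch: delay-coordinate block for one spatial node of the ring, built from
# the node and its two neighbors. Ordering of the components is our choice.
import numpy as np

def local_delay_block(series, i, t, d=3):
    """Delay vector for node i at time index t from a (n_nodes, T) array:
    [x_{i-1}(t..t-d), x_i(t..t-d), x_{i+1}(t..t-d)], newest entries first."""
    n_nodes = series.shape[0]
    nodes = [(i - 1) % n_nodes, i, (i + 1) % n_nodes]   # ring neighbors
    return np.concatenate([series[j, t - d:t + 1][::-1] for j in nodes])
```

Running the Kalman-Takens filter with these blocks, node by node, reconstructs the full spatiotemporal field one location at a time.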
The full spatiotemporal filtering results are shown in Fig. 1. The Kalman-Takens filter is applied to the full 40-dimensional Lorenz-96 system when the observations are corrupted by 100% noise, i.e., additive Gaussian noise with mean zero and variance equal to the variance of the signal. Figure 1(a) shows the spatiotemporal plot of the


system dynamics along with the noisy signal and the filter outputs. Note that the Kalman-Takens filter output (third panel from the top) is able to filter a significant amount of noise and reconstruct the underlying system dynamics at a level comparable to the parametric filter (fourth panel from the top). Figure 1(b) shows the spatiotemporal plot of absolute errors that are greater than 5 for the (top) noisy signal, (middle) Kalman-Takens output, and (bottom) parametric filter output.

The key to this application of the Kalman-Takens filter is that the dynamical system is spatially localized on short time scales. This allows us to learn short-term forecasting models for each spatial location using only its two spatial neighbors. These short-term forecasting models appear to be low dimensional enough that they can be learned from small training data sets with Takens-based methods. Of course, not all dynamical systems will have this localization property; in particular, systems with an infinite speed of propagation of information may be problematic.

V. MODEL ERROR

Thus far, we have been working under the assumption that while a set of noisy data exists from a system, we have no knowledge of the dynamical equations. To that extent, we have shown the power of the Kalman-Takens method in filtering the noisy observations, allowing for better predictions. Next, we ask an even more provocative question: Are there circumstances in which a system with noisy observations can be predicted better without a model than with a model?

This may not be possible if a perfect model is available. However, in any typical filtering problem, the question of model error becomes relevant: How accurately do the dynamics of the model describe those of the observed system? Moreover, if the model is slightly imperfect, how is the filtering affected? Of course, model error is not relevant to the Kalman-Takens method, as it relies solely on the observed data to nonparametrically advance the dynamics.

We examine this question of model error when filtering the x variable from Lorenz-63 and the x_1 node from a 40-dimensional Lorenz-96 system, each corrupted by 60% noise. A mild form of model error is simulated in the corresponding parametric filters by perturbing the model parameters. Specifically, \sigma, \rho, \beta in the Lorenz-63 filter and the a parameter in the Lorenz-96 filter are perturbed from their correct values.
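The parameter-perturbation experiment can be sketched as follows. The paper does not specify how the perturbations are drawn or signed, so the multiplicative random-sign convention used here, along with the function name, is an assumption for illustration only.

```python
# Sketch of simulated model error: each parameter of the filter's model is
# multiplied by (1 + eps * sign), e.g. eps = 0.3 for a 30% perturbation.
# The random-sign convention is our assumption, not the paper's recipe.
import numpy as np

def perturb_params(params, eps, rng):
    """Return a copy of params with each value perturbed by a relative
    amount eps, with independently chosen random sign."""
    signs = rng.choice([-1.0, 1.0], size=len(params))
    return {k: v * (1.0 + eps * s) for (k, v), s in zip(params.items(), signs)}
```

The parametric filter is then run with the perturbed dictionary (e.g., sigma, rho, beta for Lorenz-63) in place of the true values, while the data are generated from the unperturbed system.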
Figure 8 shows the filtering results at increasing levels Of course, in a situation of incorrect parameters, param-
of model error in Lorenz-63 in Fig. 8(a) and Lorenz-96 in eter-fitting methods could be used to correct the parameters
Fig. 8(b). As the model error increases, the performance and reduce the error. For example, state-augmentation
of the parametric filter (solid blue curve) breaks down methods based on Kalman filtering [47] could be used.
and is unable to reduce the signal error (dotted black However, realistic model error often manifests itself as
curve). Of note, in Fig. 8(b) there are only three data missing variables in the model, a mismatch between the
points plotted for the parametric filter due to filter governing physics of the system and those described by the
divergence at higher levels of model error. The model or a completely unknown model. The consideration
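As a concrete illustration of the perturbation experiment described above, the following sketch (our own illustration, not the authors' code) uses the standard Lorenz-63 parameters σ = 10, ρ = 28, β = 8/3, corrupts the x observations at 60% of the signal spread, and measures how one-step forecasts degrade when the model's parameters are off by an assumed 10%:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Standard Lorenz-63 vector field (parameters as in Ref. [45]).
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4(f, s, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

rng = np.random.default_rng(0)
dt, n = 0.01, 3000

# "True" trajectory generated with the correct parameters.
truth = np.empty((n, 3))
truth[0] = (1.0, 1.0, 1.0)
for k in range(1, n):
    truth[k] = rk4(lorenz63, truth[k - 1], dt)

# Observations of x corrupted by noise at 60% of the signal spread.
obs = truth[:, 0] + 0.6 * np.std(truth[:, 0]) * rng.standard_normal(n)
signal_error = np.sqrt(np.mean((obs - truth[:, 0]) ** 2))

# An imperfect model: identical equations, parameters perturbed by 10%.
pert = 0.10
wrong = lambda s: lorenz63(s, sigma=10.0 * (1 + pert),
                           rho=28.0 * (1 - pert),
                           beta=(8.0 / 3.0) * (1 + pert))

# One-step forecasts launched from the true states: even this mild
# parameter error produces a systematic forecast error that a parametric
# filter re-incurs at every assimilation cycle.
pred = np.array([rk4(wrong, truth[k], dt) for k in range(n - 1)])
model_error = np.sqrt(np.mean((pred[:, 0] - truth[1:, 0]) ** 2))
print(signal_error, model_error)
```

The Kalman-Takens filter sidesteps this source of error entirely, since it never invokes the parametric model.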


The consideration of incorrect parameter values is an extremely mild form of model error, since the underlying equations used to generate the data match those of the model used by the parametric filter. Our purpose in this example is to compare the performance of the two filtering approaches in the mildest of model error situations; a more realistic comparison would cause the parametric approach to fail even more substantially.

In many complex modeling problems, empirical observation leads to ad hoc identification of model error phenomena. One example of this is the El Niño phenomenon, an irregular variation in sea surface temperatures (SST) in the eastern Pacific Ocean. It is commonly described by various univariate indices [48], which are time series that describe the SST field in a particular region. El Niño was shown in Ref. [49] to be more accurately forecast by a data-driven model than by reduced models used in operational forecasts at the time. While the method of Ref. [49] forecasts the entire SST field, a comparable forecast skill for the El Niño index was obtained in Ref. [41] using only the historical time series of the one-dimensional index.

The diffusion forecast of Ref. [41] uses training data to represent an unknown stochastic dynamical system on a custom basis of orthogonal functions, φ_j(x_i), which are defined on the attractor M of the dynamical system described by the data {x_i} ⊂ M ⊂ R^n. By representing an initial probability density in this basis as p(x, 0) = Σ_j c_j(0) φ_j(x), the diffusion forecast evolves the density using a Markov process on the coefficients, c_j(t + τ) = Σ_k A_{jk} c_k(τ). The key result of Ref. [41] is that A_{jk} = (1/N) Σ_i φ_k(x_i) φ_j(x_{i+1}) is an unbiased estimate of the forward operator e^{τL} for the unknown dynamical system, where τ is the time step from x_i to x_{i+1} and L is the generator of the stochastic dynamical system. We can then obtain the initial probability density from the noisy observations using an iterative Bayesian filter introduced in Refs. [19,42]. These initial densities are projected into the basis {φ_j}, the coefficients are evolved forward in time, and the diffusion forecast is the mean of the reconstructed forecast density.

The difficulty in applying data-driven methods such as Refs. [41,49] to empirically identified model error phenomena such as El Niño is that it is extremely difficult for the modeler to completely isolate the model error phenomena. A trivial example that illustrates this difficulty is that the El Niño indices contain seasonal variations that are not part of the model error (these are empirically removed by computing the monthly anomalies). Of course, even if the seasonality could be perfectly removed, many other aspects of the observed El Niño index are likely to be artifacts of the projection from a high-dimensional chaotic dynamical system to a one-dimensional time series [15]. As a result, we expect that the observed data contain a dominant dynamical component that is possible to probabilistically forecast, and this dominant component is the target that the modeler is hoping to explain. However, the observed data also contain unpredictable "noise" components that result from the projection of the high-dimensional dynamics, and are nonlinearly intertwined with the dominant dynamical component that we are interested in. This common scenario is the perfect context for application of the Kalman-Takens filter to remove the unpredictable noise component of the time series. This will allow a method such as the diffusion forecast to be trained on a cleaner time series that is more focused on the desired feature.

To demonstrate this, we examine the filter-forecast problem in the context of the El Niño 3.4 index, which was also used in Ref. [41]. We use historical data from January 1950 to December 1999 to build our nonparametric forecast. Prior to building the diffusion forecast model, we first filter the data using the Kalman-Takens method with d = 9, five neighbors, and Q and R noise covariance matrices tuned for optimal performance. Figure 9 compares the 14-month diffusion forecast using the filtered training data (solid red curve) to the true value of the El Niño index at the corresponding time (solid black curve). We note that our predictions follow the same general pattern of the true trajectory. However, our real interest is in determining whether use of the Kalman-Takens filter results in any improvement in forecasting capability. In Fig. 10(a), we show the resulting forecast error, and in Fig. 10(b) the forecast correlation of the diffusion forecast when the historical data are unfiltered (solid black curve) and filtered using the Kalman-Takens filter (solid red curve). We observe that denoising the time series using the Kalman-Takens filter improves the isolation of the predictable component of the El Niño 3.4 index, as shown by the improved forecast skill of the diffusion forecast when trained on the denoised data.

FIG. 9. The Kalman-Takens method is used to filter a set of historical data from the El Niño 3.4 index. The resulting 14-month lead nonparametric diffusion forecast using the filtered historical data is shown (solid red curve). The forecasted trajectory captures the general pattern of the true trajectory (solid black curve).

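The coefficient evolution behind the diffusion forecast can be sketched in a few lines. The example below is a schematic stand-in, not the construction of Ref. [41]: it replaces the data-driven diffusion-maps basis with an ordinary Fourier basis on the circle (orthonormal under the uniform sampling density of a toy noisy rotation), but the estimator A_{jk} = (1/N) Σ_i φ_k(x_i) φ_j(x_{i+1}) and the update c(t + τ) = A c(τ) are the ones described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate training data: a noisy rotation on the circle, standing in for
# time series data {x_i} on an attractor M.
N, omega, sigma = 5000, 0.3, 0.05
theta = np.mod(np.cumsum(omega + sigma * rng.standard_normal(N)), 2 * np.pi)

def basis(t, m=7):
    # Columns are phi_0, ..., phi_{m-1} evaluated at the points t:
    # Fourier functions, orthonormal under the uniform sampling density.
    cols = [np.ones_like(t)]
    j = 1
    while len(cols) < m:
        cols += [np.sqrt(2) * np.cos(j * t), np.sqrt(2) * np.sin(j * t)]
        j += 1
    return np.stack(cols[:m], axis=1)

Phi = basis(theta)                  # Phi[i, j] = phi_j(x_i)
A = Phi[1:].T @ Phi[:-1] / (N - 1)  # A_jk = (1/N) sum_i phi_k(x_i) phi_j(x_{i+1})

# Evolve an initial density concentrated at theta = 0 for ten steps,
# c(t + tau) = A c(t), then reconstruct the forecast density on a grid.
c = basis(np.array([0.0]))[0]       # coefficients of a (truncated) delta at 0
for _ in range(10):
    c = A @ c
grid = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
density = basis(grid) @ c
print(grid[np.argmax(density)])     # mode drifts toward 10 * omega
```

For this toy system the estimated Markov operator both transports the density at the drift rate ω and spreads it at the diffusion rate set by the dynamical noise, which is exactly the behavior the unbiased estimate of e^{τL} is meant to capture.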
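The two skill measures reported in Fig. 10, RMSE and correlation as functions of forecast lead, can be computed generically. In the sketch below the forecasts come from a hypothetical persistence baseline applied to a toy AR(1) series, not from the diffusion forecast; only the diagnostics themselves carry over:

```python
import numpy as np

def forecast_skill(true_future, pred_future):
    # true_future, pred_future: arrays of shape (num_forecasts, max_lead);
    # row n holds the true / predicted values at leads 1..max_lead for the
    # forecast launched at time n. Returns RMSE and correlation per lead.
    err = pred_future - true_future
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    corr = np.array([np.corrcoef(true_future[:, l], pred_future[:, l])[0, 1]
                     for l in range(true_future.shape[1])])
    return rmse, corr

# Toy check: a persistence forecast of an AR(1) series loses skill with
# lead time, so RMSE rises and correlation falls.
rng = np.random.default_rng(2)
n, max_lead, a = 4000, 16, 0.9
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.standard_normal()
starts = np.arange(n - max_lead)
true_f = np.stack([x[s + 1 : s + 1 + max_lead] for s in starts])
pred_f = np.tile(x[starts, None], (1, max_lead))  # persistence at every lead
rmse, corr = forecast_skill(true_f, pred_f)
print(rmse[0], rmse[-1], corr[0], corr[-1])
```

A filtered training series improves these curves only if the filter removes noise rather than signal, which is the comparison made in Fig. 10.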
FIG. 10. Results of the diffusion forecast when trained on the unfiltered historical data (solid black curve) and Kalman-Takens filtered data (solid red curve). (a) Both forecasts compare favorably to the climatological error (dotted gray curve). However, processing the historical data with the Kalman-Takens filter results in improved forecast error. (b) Similarly, forecast correlation is improved by filtering the data.

VI. DISCUSSION

The blending of the Takens embedding method with Kalman filtering is designed to exploit the complementary strengths of the two methodologies. Under favorable conditions, delay coordinate embedding can replace the underlying evolution equations with no loss in accuracy, and the Kalman update provides a maximum likelihood estimate of the reconstructed state in the presence of observational noise. While the Kalman update equations were proposed with the assumption that a known set of dynamical evolution equations was available, we show here that they can be used to effectively reduce observational noise and increase predictability when using nonparametric representations, such as those derived from delay coordinates.

Perhaps more surprisingly, our results show that the Kalman-Takens filter may outperform the standard, parametric Kalman filter when model error is present. The sensitivity of filter output to model error is difficult to quantify; we show that in certain cases, even when the model error is relatively small, the nonparametric approach may be a welcome alternative.

The Kalman-Takens filter shares the limitations of all Kalman filtering to the extent that it is limited to Gaussian noise assumptions. For more general, multimodal noise, a more complex type of data assimilation may be necessary. In this article, we restrict the computational examples to the EnKF form, but the fundamental strategy is not restricted to any particular version of Kalman filtering.

We expect this idea to be applicable to a wide range of nonlinear systems that are measured with observational noise. For forecasting, the data requirements are similar to other applications of Takens's theorem, in that enough observations must be available to reconstruct the underlying dynamical attractor to a sufficient extent. However, even if the system exhibits high-dimensional dynamics, we show that filtering may still be possible if some spatial localization can be done. Intuitively, this means that on short time scales, local information is sufficient to determine short-term dynamics. This may be used productively in geophysical applications where considerable preprocessing is needed, as a kind of nonparametric reanalysis. Such a reanalysis may reduce bias from incorrect priors such as insufficient models or poorly constrained parameters.

ACKNOWLEDGMENTS

We thank the anonymous referees for several helpful suggestions. This research was partially supported by Grants No. CMMI-1300007, No. DMS-1216568, No. DMS-1250936, and No. RTG/DMS-1246991 from the National Science Foundation.

APPENDIX: ADAPTIVE UPDATING OF COVARIANCE MATRICES

Since we cannot assume that the noise covariance matrices Q and R in Eq. (1) are known, we use a recently developed method for the adaptive fitting of these matrices [9] as part of the filtering algorithm. The method uses the innovations ε_k ≡ y_k − y_k^− in observation space from Eq. (2) to update the estimates Q_k and R_k of the covariances Q and R, respectively, at step k of the filter.

First, we construct linearizations of the dynamics and the observation function that are reconstructed from the ensembles used by the EnKF. Let x^+_{i,k−1} ∼ N(x̄^+_{k−1}, P^+_{k−1}) be the analysis ensemble at step k − 1, where the index 1 ≤ i ≤ E indicates the ensemble member, and let x^−_{i,k} = F(x^+_{i,k−1}) be the forecast ensemble that results from moving from t_{k−1} to t_k using the Takens embedding with initial condition x^+_{i,k−1}. Define x̄^−_k = (1/E) Σ_{i=1}^{E} x^−_{i,k}, the matrix of analysis perturbations,

X^+_{k−1} = [(x^+_{1,k−1} − x̄^+_{k−1})^T, …, (x^+_{E,k−1} − x̄^+_{k−1})^T],

and the matrix of forecast perturbations,


X^−_k = [(x^−_{1,k} − x̄^−_k)^T, …, (x^−_{E,k} − x̄^−_k)^T].

Then an approximation for the one-step dynamics is given by the matrix F_k = X^−_k (X^+_{k−1})^†, where † denotes the matrix pseudoinverse. Similarly, let x̃^−_{i,k} ∼ N(x̄^−_k, P^−_k + Q) be the inflated forecast ensemble and let z^−_{i,k} = h(x̃^−_{i,k}) be the projection of this ensemble into the observation space. Then we can define H_k = Z^−_k (X̃^−_k)^†, which we think of as a local linearization of the observation function h, where

X̃^−_k = [(x̃^−_{1,k} − x̄^−_k)^T, …, (x̃^−_{E,k} − x̄^−_k)^T],
Z^−_k = [(z^−_{1,k} − z̄^−_k)^T, …, (z^−_{E,k} − z̄^−_k)^T]

are the matrix of inflated forecast perturbations and the matrix of observed forecast perturbations, respectively, and where z̄^−_k = (1/E) Σ_{i=1}^{E} z^−_{i,k}.

After computing F_k and H_k, we use them to update covariances Q and R as follows. We produce empirical estimates Q^e_{k−1} and R^e_{k−1} of Q and R based on the innovations at time k and k − 1 using the formulas

P^e_{k−1} = F^{−1}_{k−1} H^{−1}_k ε_k ε^T_{k−1} H^{−T}_{k−1} + K_{k−1} ε_{k−1} ε^T_{k−1} H^{−T}_{k−1},
Q^e_{k−1} = P^e_{k−1} − F_{k−2} P^a_{k−2} F^T_{k−2},
R^e_{k−1} = ε_{k−1} ε^T_{k−1} − H_{k−1} P^f_{k−1} H^T_{k−1}.          (A1)

It was shown in Ref. [9] that P^e_{k−1} is an empirical estimate of the background covariance at time index k − 1. Notice that this procedure requires us to save the linearizations F_{k−2}, F_{k−1}, H_{k−1}, H_k, innovations ε_{k−1}, ε_k, and the analysis P^a_{k−2} and Kalman gain matrix K_{k−1} from the k − 1 and k − 2 steps of the EnKF.

The estimates Q^e_{k−1} and R^e_{k−1} are low-rank, noisy estimates of the covariance matrices Q and R that will make the posterior estimate statistics from the filter consistent with the empirical statistics in the sense of Eq. (A1). In order to form stable full-rank estimates of Q and R, we assimilate these estimates using an exponential moving average with window τ:

Q_k = Q_{k−1} + (Q^e_{k−1} − Q_{k−1})/τ,
R_k = R_{k−1} + (R^e_{k−1} − R_{k−1})/τ.          (A2)

We interpret the moving average in Eq. (A2) as a moving average filter that stabilizes the noisy empirical estimates Q_k and R_k. The stochastic nature of the estimate of Q_k can lead to excursions that fail to be symmetric and/or positive definite, leading to instability in the EnKF. While the matrix Q_k is not changed, the matrix used in the kth step of the filter is a modified version of Q_k, which is forced to be symmetric and positive definite by taking Q̃_k = (Q_k + Q^T_k)/2 and then taking the max of the eigenvalues of Q̃_k with zero. Again, we emphasize that Q̃_k is only used in the kth filter step and no ad hoc corrections are made to the matrix Q_k, which eventually stabilizes at a symmetric and positive definite matrix naturally via the moving average in Eq. (A2). These ad hoc corrections are only needed during the transient period of the estimation of Q, in particular, when the trivial initialization Q_1 = 0 is used.

[1] E. Kalnay, Atmospheric Modeling, Data Assimilation, and Predictability (Cambridge University Press, Cambridge, England, 2003).
[2] G. Evensen, Data Assimilation: The Ensemble Kalman Filter (Springer, Heidelberg, 2009).
[3] F. Rabier, Overview of Global Data Assimilation Developments in Numerical Weather-Prediction Centres, Q. J. R. Meteorol. Soc. 131, 3215 (2005).
[4] B. R. Hunt, E. Kalnay, E. Kostelich, E. Ott, D. J. Patil, T. Sauer, I. Szunyogh, J. A. Yorke, and A. V. Zimin, Four-Dimensional Ensemble Kalman Filtering, Tellus A 56, 273 (2004).
[5] J. A. Cummings, Operational Multivariate Ocean Data Assimilation, Q. J. R. Meteorol. Soc. 131, 3583 (2005).
[6] K. Yoshida, J. Yamaguchi, and Y. Kaneda, Regeneration of Small Eddies by Data Assimilation in Turbulence, Phys. Rev. Lett. 94, 014501 (2005).
[7] K. Law and A. Stuart, Evaluating Data Assimilation Algorithms, Mon. Weather Rev. 140, 3757 (2012).
[8] S. Schiff, Neural Control Engineering (MIT Press, Cambridge, MA, 2012).
[9] T. Berry and T. Sauer, Adaptive Ensemble Kalman Filtering of Non-Linear Systems, Tellus A 65, 20331 (2013).
[10] S. Julier, J. Uhlmann, and H. Durrant-Whyte, A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators, IEEE Trans. Autom. Control 45, 477 (2000).
[11] S. Julier, J. Uhlmann, and H. Durrant-Whyte, Unscented Filtering and Nonlinear Estimation, Proc. IEEE 92, 401 (2004).
[12] R. H. Reichle and R. D. Koster, Global Assimilation of Satellite Surface Soil Moisture Retrievals into the NASA Catchment Land Surface Model, Geophys. Res. Lett. 32, L02404 (2005).
[13] H. Hersbach, A. Stoffelen, and S. De Haan, An Improved C-Band Scatterometer Ocean Geophysical Model Function: CMOD5, J. Geophys. Res. Oceans 112, C03006 (2007).
[14] H. Arnold, I. Moroz, and T. Palmer, Stochastic Parametrizations and Model Uncertainty in the Lorenz '96 System, Phil. Trans. R. Soc. A 371, 20110479 (2013).
[15] T. Berry and J. Harlim, Linear Theory for Filtering Nonlinear Multiscale Systems with Model Error, Proc. R. Soc. A 470, 20140168 (2014).
[16] E. N. Lorenz and K. A. Emanuel, Optimal Sites for Supplementary Weather Observations: Simulation with a Small Model, J. Atmos. Sci. 55, 399 (1998).
[17] T. N. Palmer, A Nonlinear Dynamical Perspective on Model Error: A Proposal for Non-Local Stochastic-Dynamic Parametrization in Weather and Climate Prediction Models, Q. J. R. Meteorol. Soc. 127, 279 (2001).


[18] F. Hamilton, T. Berry, and T. Sauer, Predicting Chaotic Time Series with a Partial Model, Phys. Rev. E 92, 010902 (2015).
[19] T. Berry and J. Harlim, Variable Bandwidth Diffusion Kernels, Appl. Comput. Harmon. Anal. 40, 68 (2016).
[20] F. Takens, Dynamical Systems and Turbulence, Warwick 1980, Lect. Notes Math. 898, 366 (1981).
[21] N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw, Geometry from a Time Series, Phys. Rev. Lett. 45, 712 (1980).
[22] T. Sauer, J. Yorke, and M. Casdagli, Embedology, J. Stat. Phys. 65, 579 (1991).
[23] T. Sauer, Reconstruction of Shared Nonlinear Dynamics in a Network, Phys. Rev. Lett. 93, 198701 (2004).
[24] E. Ott, T. Sauer, and J. A. Yorke, Coping with Chaos: Analysis of Chaotic Data and the Exploitation of Chaotic Systems (Wiley, New York, 1994).
[25] H. Abarbanel, Analysis of Observed Chaotic Data (Springer-Verlag, New York, 1996).
[26] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge University Press, Cambridge, England, 2004).
[27] J. D. Farmer and J. J. Sidorowich, Predicting Chaotic Time Series, Phys. Rev. Lett. 59, 845 (1987).
[28] M. Casdagli, Nonlinear Prediction of Chaotic Time Series, Physica (Amsterdam) 35D, 335 (1989).
[29] G. Sugihara and R. M. May, Nonlinear Forecasting as a Way of Distinguishing Chaos from Measurement Error in Time Series, Nature (London) 344, 734 (1990).
[30] L. A. Smith, Identification and Prediction of Low Dimensional Dynamics, Physica (Amsterdam) 58D, 50 (1992).
[31] J. Jimenez, J. A. Moreno, and G. J. Ruggeri, Forecasting on Chaotic Time Series: A Local Optimal Linear-Reconstruction Method, Phys. Rev. A 45, 3553 (1992).
[32] T. Sauer, Time Series Prediction by Using Delay Coordinate Embedding, in Time Series Prediction: Forecasting the Future and Understanding the Past, edited by N. Gershenfeld and A. Weigend (Addison Wesley, Reading, MA, 1994), pp. 175–193.
[33] G. Sugihara, Nonlinear Forecasting for the Classification of Natural Time Series, Phil. Trans. R. Soc. A 348, 477 (1994).
[34] C. G. Schroer, T. Sauer, E. Ott, and J. A. Yorke, Predicting Chaos Most of the Time from Embeddings with Self-Intersections, Phys. Rev. Lett. 80, 1410 (1998).
[35] D. Kugiumtzis, O. Lingjærde, and N. Christophersen, Regularized Local Linear Prediction of Chaotic Time Series, Physica (Amsterdam) 112D, 344 (1998).
[36] G. Yuan, M. Lozier, L. Pratt, C. Jones, and K. Helfrich, Estimating the Predictability of an Oceanic Time Series Using Linear and Nonlinear Methods, J. Geophys. Res. 109, C08002 (2004).
[37] C.-H. Hsieh, S. M. Glaser, A. J. Lucas, and G. Sugihara, Distinguishing Random Environmental Fluctuations from Ecological Catastrophes for the North Pacific Ocean, Nature (London) 435, 336 (2005).
[38] C. C. Strelioff and A. W. Hübler, Medium-Term Prediction of Chaos, Phys. Rev. Lett. 96, 044101 (2006).
[39] S. Regonda, B. Rajagopalan, U. Lall, M. Clark, and Y.-I. Moon, Local Polynomial Method for Ensemble Forecast of Time Series, Nonlinear Processes Geophys. 12, 397 (2005).
[40] B. Schelter, M. Winterhalder, and J. Timmer, Handbook of Time Series Analysis: Recent Theoretical Developments and Applications (John Wiley and Sons, New York, 2006).
[41] T. Berry, D. Giannakis, and J. Harlim, Nonparametric Forecasting of Low-Dimensional Dynamical Systems, Phys. Rev. E 91, 032915 (2015).
[42] T. Berry and J. Harlim, Physica D (to be published).
[43] D. M. Walker, Parameter Estimation Using Kalman Filters with Constraints, Int. J. Bifurcation Chaos Appl. Sci. Eng. 16, 1067 (2006).
[44] D. Simon, Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches (John Wiley and Sons, New York, 2006).
[45] E. Lorenz, Deterministic Nonperiodic Flow, J. Atmos. Sci. 20, 130 (1963).
[46] E. N. Lorenz, in Proceedings of the Seminar on Predictability (AMS, Reading, UK, 1996), Vol. 1, pp. 1–18.
[47] A. Sitz, U. Schwarz, J. Kurths, and H. U. Voss, Estimation of Parameters and Unobserved Components for Nonlinear Systems from Noisy Time Series, Phys. Rev. E 66, 016210 (2002).
[48] NOAA, http://www.esrl.noaa.gov/psd/gcos_wgsp/Timeseries/.
[49] M. Chekroun, D. Kondrashov, and M. Ghil, Predicting Stochastic Systems by Noise Sampling, and Application to the El Niño-Southern Oscillation, Proc. Natl. Acad. Sci. U.S.A. 108, 11766 (2011).
