SINDy-PI: A Robust Algorithm for Parallel Implicit Sparse Identification of Nonlinear Dynamics
Abstract
1 Introduction
Discovering dynamical system models from data is critically important across science and
engineering. Traditionally, models are derived from first principles, although this approach may be
prohibitively challenging in many fields, such as climate science, finance, and biology. Fortunately,
data-driven model discovery (i.e., system identification) is a rapidly developing field [1], with
a range of techniques including classic linear approaches [2, 3], dynamic mode decomposition
(DMD) [4, 5] and Koopman theory [6–9], nonlinear autoregressive models [10, 11], neural net-
works [12–20], Gaussian process regression [21, 22], nonlinear Laplacian spectral analysis [23],
diffusion maps [24], genetic programming [25–27], and sparse regression [28–30], to highlight
some of the recent developments. Of particular note is a recent push towards parsimonious mod-
eling [25, 26, 28], which favors Pareto-optimal models with the lowest complexity required to
describe the observed data. These models benefit from being interpretable, and they tend to
generalize and prevent overfitting. The sparse identification of nonlinear dynamics (SINDy) algo-
rithm [28] discovers parsimonious models through a sparsity-promoting optimization to select only
a few model terms from a library of candidate functions. SINDy has been widely adopted in the
community [30–46], but it relies on the dynamics having a sparse representation in a pre-defined
library, making it difficult to discover implicit dynamics and rational functions. The implicit-SINDy
extension [47] makes it possible to identify these implicit functions, although this algorithm is
extremely sensitive to noise. In this work, we develop a robust, parallel algorithm for the sparse
identification of implicit dynamics, making it possible to explore entirely new classes of systems
that were previously inaccessible.
∗Corresponding author ([email protected]); code available at github.com/dynamicslab/SINDy-PI.
Parsimonious modeling has a rich history, with many scientific advances being argued on the
basis of Occam’s razor, that the simplest model is likely the correct one. SINDy exemplifies this
principle, identifying a potentially nonlinear model with the fewest terms required to describe
how the measurement data changes in time. The basic idea behind SINDy may be illustrated on a
one-dimensional system ẋ = f (x); the general formulation for multidimensional dynamics will
be described in the following sections. An interpretable form of the nonlinear dynamics may be
learned by writing the rate of change of the state of the system x as a sparse linear combination of a
few terms in a library of candidate functions, Θ(x) = [θ1(x) θ2(x) · · · θp(x)]:

ẋ = f(x) ≈ Θ(x)ξ, (1)

where each θj(x) is a prescribed candidate term (e.g., x, x², sin(x), · · · ). The derivative of the state
and the library of candidate functions may both be computed from measured trajectory data. It
then remains to solve for a sparse vector ξ with nonzero entries ξj indicating which functions θj (x)
are active in characterizing the dynamics. The resulting models strike a balance between accuracy
and efficiency, and they are highly interpretable by construction. In a short time, the SINDy
algorithm has been extended to include inputs and control [48], to identify partial differential
equations [29, 30], to incorporate physically relevant constraints [34], to include tensor bases [45],
and to incorporate integral terms for denoising [49, 50]. These extensions and its simple formulation
in terms of a generalized linear model in (1) have resulted in SINDy being adopted in the fields of
fluid mechanics [34, 37], nonlinear optics [31], plasma physics [32], chemical reactions [33, 36, 39],
numerical methods [41], and structural modeling [42].
The generalized linear model in (1) does not readily lend itself to representing implicit dynamics
and rational functions, which are not naturally expressible as a sum of a few basis functions. Instead,
the implicit-SINDy algorithm [47] reformulates the SINDy problem in an implicit form:

Θ(x, ẋ)ξ = 0. (2)
This formulation is flexible enough to handle a much broader class of dynamics with rational
function nonlinearities, such as ẋ = N(x)/D(x), which may be rewritten as ẋD(x) − N(x) = 0.
However, the sparsest vector ξ that satisfies (2) is the trivial solution ξ = 0. Thus, the implicit-
SINDy algorithm leverages a recent non-convex optimization procedure [51, 52] to find the sparsest
vector ξ in the null space of Θ(x, ẋ), which differs from other approaches [53, 54] that identify the
rational dynamics. For even small amounts of noise, the dimension of the null space will become
prohibitively large, making this approach extremely sensitive to noise and compromising the model
discovery process.
This work develops an optimization and model selection framework that recasts implicit-SINDy
as a convex problem, making it as noise robust as the original non-implicit SINDy algorithm and
enabling the identification of implicit ODEs and PDEs that were previously inaccessible. The key to
making the implicit-SINDy algorithm robust is the realization that if we know even a single term in
the dynamics, corresponding to a non-zero entry ξj, then we can rewrite (2) in a non-implicit form:

θj(x, ẋ) = Θ′(x, ẋ)ξ′, (3)

where Θ′ and ξ′ have the j-th element removed. Because none of these terms are known a priori,
we sweep through the library, term by term, testing (3) for a sparse model that fits the data. This
procedure is highly parallelizable and provides critical information for model selection. Our
approach is related to the recent work of Zhang et al. [46], which also makes the implicit problem
more robust by testing candidate functions individually. However, there are a number of key
differences in the present approach. Our work explicitly considers rational nonlinearities to discover
exceedingly complex implicit PDEs, such as a simplified model of the Belousov-Zhabotinsky (BZ)
reaction. Our framework also provides several new greedy algorithms, including parallel and
constrained formulations. We further extend this method to include the effect of control inputs,
making it applicable to robotic systems [55], and we use this procedure to discover Hamiltonians.
Finally, our approach provides guidance on model selection, a comprehensive comparison with
previous methods, and a careful analysis of noise robustness.
2 Background
We briefly introduce the full multidimensional SINDy and implicit-SINDy algorithms, which
will provide a foundation for our robust implicit identification algorithm in Sec. 3.
2.1 Sparse Identification of Nonlinear Dynamics

The SINDy algorithm [28] identifies nonlinear dynamical systems of the form

d/dt x(t) = f(x(t)), (4)

from time-series data of the state x(t) = [x1(t), . . . , xn(t)]T ∈ Rn. We assume that the dynamics,
encoded by the function f, admit a sparse representation in a library of candidate functions:

Θ(x) = [θ1(x) θ2(x) · · · θp(x)]. (5)

Each component of the dynamics is then approximated by a sparse linear combination of these candidates:

d/dt xk(t) = fk(x(t)) ≈ Θ(x)ξk, (6)

where ξk is a sparse vector, indicating which terms are active in the dynamics.
We determine the nonzero entries of ξk through sparse regression based on trajectory data. The
time-series data is arranged into a matrix X = [x(t1) x(t2) · · · x(tm)]T, and the associated time
derivative matrix Ẋ = [ẋ(t1) ẋ(t2) · · · ẋ(tm)]T is computed using an appropriate numerical
differentiation scheme [1, 28, 56]. It is then possible to evaluate the library Θ on trajectory data in
X so that each column of Θ(X) is a function θj evaluated on the m snapshots in X.
It is now possible to write the dynamical system in terms of a generalized linear model,
evaluated on trajectory data:
Ẋ = Θ(X)Ξ. (7)
There are several approaches to identify the sparse matrix of coefficients Ξ, including sequen-
tially thresholded least squares (STLSQ) [28, 57], LASSO [58], sparse relaxed regularized regres-
sion (SR3) [59, 60], stepwise sparse regression (SSR) [36], and Bayesian approaches [46, 61]. It
is possible to augment the library to include partial derivatives for the identification of partial
differential equations (PDEs) [29, 30]. Similarly, it is possible to include external forcing terms
in the library Θ, enabling the identification of forced and actively controlled systems [48]. To
alleviate the effect of noise, it is possible to reframe the SINDy problem in terms of an integral
formulation [49, 50]. There are a number of factors that affect the robustness of SINDy, some of
which are discussed in App. I.
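To make the regression concrete, the following is a minimal NumPy sketch of the STLSQ procedure applied to (7); the function and variable names are illustrative, not the paper's reference implementation.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, max_iter=10):
    """Sequentially thresholded least squares: solve dXdt ~ Theta @ Xi,
    repeatedly zeroing small coefficients and refitting the survivors."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]   # initial dense fit
    for _ in range(max_iter):
        small = np.abs(Xi) < threshold                 # coefficients to prune
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):                 # refit each state equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi                                          # sparse coefficient matrix
```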
Figure 1: Illustration of the SINDy-PI algorithm on the Michaelis-Menten dynamics. (a) The
Michaelis-Menten system is simulated, and measurement data is provided to SINDy-PI. (b) Multiple
possible left-hand side functions are tested at the same time. A correct guess, such as θ1 = xẋ, yields
a sparse model with low prediction error, here xẋ = −0.3ẋ + 0.18 − 0.9x, equivalent to
ẋ = 0.6 − 1.5x/(0.3 + x); an incorrect guess, such as θ2 = ẋ sin(x), yields a dense coefficient vector
and an inaccurate model. (c) The candidate model prediction error is calculated across the sparsity
parameter λ, and the best model is selected.
2.2 Implicit Sparse Identification of Nonlinear Dynamics

The implicit-SINDy algorithm [47] extends SINDy to identify implicit differential equations

f(x, ẋ) = 0, (8)
and in particular, systems that include rational functions in the dynamics, such as chemical reactions
and metabolic networks that have a separation of timescales.
The implicit-SINDy algorithm generalizes the library Θ(X) in (7) to include functions of x and ẋ:

Θ(X, Ẋ)Ξ = 0. (9)
However, this approach requires solving for a matrix Ξ whose columns ξ k are sparse vectors in the
null space of Θ(X, Ẋ). This approach is non-convex, relying on the alternating directions method
(ADM) [47, 52], and null space computations are highly ill-conditioned for noisy data [1, 47, 62],
thus inspiring the current work and mathematical innovations.
3 SINDy-PI

The key observation behind SINDy-PI is that if even a single term θj in the dynamics is known, the
implicit problem (9) may be rewritten in the non-implicit form

θj(X, Ẋ) = Θ(X, Ẋ|θj(X, Ẋ))ξj, (10)

where Θ(X, Ẋ|θj(X, Ẋ)) is the library Θ(X, Ẋ) with the θj column removed. Equation (10) is no
longer in implicit form, and the sparse coefficient matrix corresponding to the remaining terms
may be solved for using previously developed SINDy techniques [28–30, 36, 46, 49, 50, 59–61]. In
particular, we solve for a sparse coefficient vector ξj that minimizes the following loss function:

‖θj(X, Ẋ) − Θ(X, Ẋ|θj(X, Ẋ))ξj‖2 + β‖ξj‖0, (11)
where β is the sparsity promoting parameter. There are numerous relaxations of the non-convex
optimization problem in (11), for example the sequentially thresholded least-squares algorithm [28].
Because there is no null space calculation, the resulting algorithm is considerably more robust to
noise than the implicit-SINDy algorithm [47], i.e., we no longer have to deal with an ill-conditioned
null space problem.
In general, the entire point of SINDy is that the dynamics are not known ahead of time, and
so it is necessary to test each candidate function θj until one of the models in (10) admits a sparse
and accurate solution. When an incorrect candidate term is used, then the algorithm results in a
dense (non-sparse) model ξ j and an inaccurate model fit, and when a correct term is included, the
algorithm identifies a sparse model ξ j and an accurate model fit. In this way, it is clear when the
algorithm has identified the correct model. Moreover, there is a wealth of redundant information,
since each term in the correct model may be used as the candidate function on the left hand side,
and the resulting models may be cross-referenced. This approach is highly parallelizable, and each
candidate term may be tested simultaneously in parallel. The non-parallel formulation in (10) was
recently introduced by Zhang et al. [46] in the context of Bayesian regression, where they also make
the implicit problem more robust by testing candidate functions individually; however, they do not
consider dynamics with rational function nonlinearities or control inputs. In this work, we extend
the robust implicit formulation to identify several challenging implicit ODE and PDE systems with
rational function nonlinearities, which are ubiquitous in engineering and natural systems, and
systems with external forcing and control inputs. We also introduce the parallel formulation and
model selection frameworks. Further, we will introduce a constrained optimization framework to
simultaneously test all candidate functions.
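As an illustration of the parallel sweep, the sketch below tests each library column as the left-hand side of (10), using the hypothetical stlsq routine sketched above; each loop iteration is independent and may be dispatched to a separate worker.

```python
import numpy as np

def sindy_pi_sweep(Theta, threshold=0.1):
    """Test every candidate theta_j as the left-hand side of (10): fit it from
    the remaining columns and record the sparsity and residual of each model."""
    p = Theta.shape[1]
    models = []
    for j in range(p):                       # each iteration is independent
        lhs = Theta[:, j:j + 1]              # candidate left-hand side
        rhs = np.delete(Theta, j, axis=1)    # library with theta_j removed
        xi = stlsq(rhs, lhs, threshold)[:, 0]
        residual = (np.linalg.norm(lhs[:, 0] - rhs @ xi)
                    / np.linalg.norm(lhs[:, 0]))
        models.append((j, xi, residual))
    # correct candidates appear as sparse models with small residuals
    return sorted(models, key=lambda m: m[-1])
```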
Figure 2: Illustration of the constrained formulation of SINDy-PI. All p candidate functions are
tested simultaneously by solving min over Ξ of ‖Θ(X, Ẋ) − Θ(X, Ẋ)Ξ‖2 + β‖Ξ‖0 subject to
diag(Ξ) = 0, where Ξ is a p × p coefficient matrix whose diagonal is constrained to be zero. Each
column ξi of Ξ defines a candidate model θi = Θ(X, Ẋ)ξi; a sparse, accurate column corresponds
to a correct equation, while a dense, inaccurate column corresponds to an incorrect equation.

In practice, for rational dynamics, we select models based upon the predicted derivative Ẋt on
test data:

Error = ‖Ẋt − Ẋt,pred‖2 / ‖Ẋt‖2, (13)

where Ẋt is the measured derivative of the test data and Ẋt,pred is the derivative predicted by the
candidate model.

For implicit dynamics where each state derivative may be written as a rational function

ẋk = fk(x) = Nk(x)/Dk(x), (14)
then we restrict the candidate functions to θj (x, ẋ) = ẋk θj (x) for some θj (x) ∈ Θ(x) to identify a
separate sparse model for each ẋk. Several candidate functions may provide accurate and sparse
models. These different models may further be cross-referenced to check that the same terms are
being selected in each model, providing additional information for model selection and validation.
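Once a sparse implicit model has been identified from a candidate of the form ẋkθj(x), the rational form (14) follows by symbolically solving for ẋk. Below is a small SymPy sketch on the Michaelis-Menten relation from Fig. 1; the numeric coefficients are taken from that example.

```python
import sympy as sp

x, xdot = sp.symbols("x xdot")

# Identified implicit model from Fig. 1: x*xdot = -0.3*xdot + 0.18 - 0.9*x.
implicit = sp.Eq(x * xdot, -0.3 * xdot + 0.18 - 0.9 * x)

# Solve the implicit relation for xdot to recover the rational dynamics.
rhs = sp.simplify(sp.solve(implicit, xdot)[0])
print(rhs)   # (0.18 - 0.9*x)/(x + 0.3), i.e. xdot = 0.6 - 1.5*x/(0.3 + x)
```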
We constrain Ξ to have zero entries on the diagonal, as shown in Fig. 2, which is the same as
removing the candidate function from the library in the separate optimization problems in (10).
Without this constraint, the trivial solution Ξ = Ip×p will provide the sparsest Ξ and the most
accurate model. This may be written as a formal constrained optimization problem:

min over Ξ of ‖Θ(X, Ẋ) − Θ(X, Ẋ)Ξ‖2 + β‖Ξ‖0   s.t.   diag(Ξ) = 0. (16)
This optimization is non-convex, although there are many relaxations that result in accurate and
efficient proxy solutions [28, 58, 59]. In this work, we will use sequentially thresholded least squares,
Figure 3: SINDy-PI and implicit-SINDy are compared on the Michaelis-Menten kinetics, where the
structure error quantifies the number of terms in the model that are incorrectly added or deleted,
compared with the true model. The derivative is computed by the total-variation regularized
difference (TVRegDiff) [56] on noisy state measurements. The violin plots show the cross-validated
distribution of the number of incorrect terms across 30 models, for noise levels σ ranging from
10⁻⁷ to 0.5. The green region indicates no structural difference between the identified model and
the ground truth model. Details are provided in Appendix A.2.
so that any entry |Ξij| < λ will be set to zero; the sparsity parameter λ is a hyperparameter, and each
column equation may require a different parameter λj. The constrained formulation in (16) can be
solved efficiently in modern optimization packages, and we use CVX [67, 68]. After solving (16),
we have numerous candidate models, one for each column ξk of Ξ, given by θk(X, Ẋ) = Θ(X, Ẋ)ξk.
The sparse models that result in an accurate fit are candidate implicit models, and they may be
assessed using the model selection approaches outlined above. These various models may be
cross-referenced for consistency, as the same models will have the same sparsity pattern. This
information can then be used to refine the library Θ, for example to only include the nonzero
entries in the sparse columns of Ξ.
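For reference, the constrained problem (16) can be prototyped with an ℓ1 relaxation of the ℓ0 penalty in CVXPY, a Python analogue of the CVX package used in this work; this is a sketch under those assumptions, not the exact optimization solved in the paper.

```python
import cvxpy as cp
import numpy as np

def constrained_sindy_pi(Theta, beta=1e-3):
    """l1-relaxed version of (16): reconstruct every library column from the
    others simultaneously, with diag(Xi) = 0 to forbid the trivial Xi = I."""
    p = Theta.shape[1]
    Xi = cp.Variable((p, p))
    objective = cp.Minimize(cp.norm(Theta - Theta @ Xi, "fro")
                            + beta * cp.sum(cp.abs(Xi)))
    problem = cp.Problem(objective, [cp.diag(Xi) == 0])
    problem.solve()
    return Xi.value   # each column is a candidate implicit model
```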
As a first example, consider the Michaelis-Menten model of enzyme kinetics:

ẋ = jx − Vmax x / (Km + x), (18)
where x denotes the concentration of the substrate, jx denotes the influx of the substrate, Vmax
denotes the maximum reaction rate, and Km represents the substrate concentration at which the
reaction rate is half of Vmax. We use the same parameters as in [47], with jx = 0.6, Vmax = 1.5, and
Km = 0.3. Figure 3 shows the noise robustness of SINDy-PI and implicit-SINDy. In this example,
SINDy-PI is able to handle over 10⁵ times more measurement noise than implicit-SINDy, while still
accurately recovering the correct model. Details are provided in App. A, and key factors that limit
robustness are discussed in App. I.
Figure 4: Success rate of SINDy-PI and implicit-SINDy in identifying the yeast glycolysis equation
(19f) with different percentages of training data. SINDy-PI is tested with two left-hand side guesses,
ẋ6 and ẋ6x6⁴. Each data usage percentage is randomly sampled from the entire data set composed
of all trajectories. The success rate is calculated by averaging the results of 20 runs.
Figure 5: Comparison of SINDy-PI and PDE-FIND on an implicit PDE problem given by the
modified KdV equation (20), ut = −uxxx − 6uux − 0.1u + 2g0/(1 + u). As we increase g0, the
rational term begins to play a significant role in the system behavior. For small g0, PDE-FIND
compensates for the effect of the rational term by tuning the other coefficients. When g0 is large,
PDE-FIND overfits the library. SINDy-PI, on the other hand, correctly identifies the rational term.
4 Advanced Examples
We will now demonstrate the SINDy-PI framework on several challenging examples, including
the double pendulum, an actuated single pendulum on a cart, the Belousov-Zhabotinsky PDE, and
the identification of conserved quantities. All examples are characterized by rational nonlinearities,
and we were unable to identify them using SINDy or implicit-SINDy, even in the absence of noise.
Figure 6: SINDy-PI identified model performance for the double pendulum. (a) Training data is
generated by simulating the double pendulum. (b) SINDy-PI model predictions are compared
against the ground truth for increasing noise levels; at low noise the correct structure is recovered,
while at higher noise the fourth state acquires an extra term and the third state overfits.
We simulate a double pendulum and use SINDy-PI to re-discover the dynamics from noisy measurements of the trajectory data. The
governing equations and SINDy-PI models are provided in App. F. Because these dynamics have
rational nonlinearities, the original SINDy algorithm is unable to identify the dynamics, making
this a challenging test case. The state vector is given by x = [φ1 , φ2 , φ̇1 , φ̇2 ]T , and the parameters
of the simulation are given in App. D. The training data is generated from an initial condition
xtrain = [π + 1.2, π − 0.6, 0, 0]T , simulated for 10 seconds using a time step of dt = 0.001 seconds.
The validation data is generated from an initial condition xval = [π −1, π −0.4, 0.3, 0.4]T , simulated
for 3 seconds with time step dt = 0.001 seconds.
To test the robustness of SINDy-PI, we add Gaussian noise to both the training and validation
data. We test the resulting models using a new testing initial condition xtest = [π+0.3, π−0.5, 0, 0]T .
We construct our library Θ to include over 40 trigonometric and polynomial terms. The most
challenging part of this example is building a library with the necessary terms, without it growing
too large. The library cannot be too extensive, or else the matrix Θ becomes ill-conditioned,
making it sensitive to noise. To reduce the library size, we use one piece of expert knowledge: the
trigonometric terms should only consist of φ1 and φ2, the rotational angles of the pendula.
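A reduced library of this kind might be assembled as follows; this is an illustrative subset of the more than 40 terms used in practice, with hypothetical term names.

```python
import numpy as np

def pendulum_library(phi1, phi2, dphi1, dphi2):
    """Assemble a reduced candidate library for the double pendulum:
    trigonometric terms only in the angles phi1, phi2, and polynomial
    terms in the angular rates. Each argument is an array of samples."""
    terms = {
        "1": np.ones_like(phi1),
        "sin(phi1)": np.sin(phi1),
        "sin(phi2)": np.sin(phi2),
        "sin(phi1-phi2)": np.sin(phi1 - phi2),
        "cos(phi1-phi2)": np.cos(phi1 - phi2),
        "dphi1": dphi1,
        "dphi2": dphi2,
        "dphi1^2 sin(phi1-phi2)": dphi1**2 * np.sin(phi1 - phi2),
        "dphi2^2 sin(phi1-phi2)": dphi2**2 * np.sin(phi1 - phi2),
    }
    names = list(terms)
    return np.column_stack([terms[n] for n in names]), names
```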
The candidate functions are chosen as a combination of state derivatives and trigonometric
functions. Fig. 6 shows that SINDy-PI can identify the equations of motion for low noise. For larger
noise, SINDy-PI misidentifies the dynamics, although it still has short term prediction ability.
Figure 7: SINDy-PI is used to identify the single pendulum on a cart system. Control is applied to
the cart, and both the cart and pendulum states are measured. When the measurement noise is
small, SINDy-PI can identify the correct structure of the model; at higher noise, the third state is
missing the F cos(φ) term.

The governing equations of the single pendulum on a cart are
d/dt φ = φ̇, (21a)
d/dt s = ṡ, (21b)
d/dt φ̇ = −[(M + m)g sin(φ) + F L1 cos(φ) + mL1² sin(φ) cos(φ)φ̇²] / [L1²(M + m − m cos(φ)²)], (21c)
d/dt ṡ = [mL1² sin(φ)φ̇² + F L1 + mg sin(φ) cos(φ)] / [L1(M + m − m cos(φ)²)]. (21d)
Eq. (21) is simulated with a time step of dt = 0.001 to generate the training and testing data for
model selection. The training data is generated using an initial condition xtrain = [0.3, 0, 1, 0]T
with the control input chosen as Ftrain = −0.2 + 0.5 sin (6t), for time t = 0 to t = 16. Similarly, the
validation data is generated using an initial condition xval = [0.1, 0, 0.1, 0]T with the control input
chosen as Fval = −1 + sin (t) + 3 sin (2t), for time t = 0 to t = 2.
The library is constructed using a combination of trigonometric and polynomial terms. Around
50 different basis functions are used for the library, and around 10 terms are tested as candidate
functions. We add Gaussian noise to all system states. We then test the SINDy-PI model on a testing
initial condition xtest = [π, 0, 0, 0]T with control input Ftest = −0.5+0.2 sin (t)+0.3 sin (2t) for time
t = 0 to t = 2. Fig. 7 shows the resulting SINDy-PI models. The structure of the model is correctly
identified up to a noise magnitude of 0.01. Beyond this noise level, the SINDy-PI identified model
only has short term prediction ability.
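A data-generation script consistent with this setup might look like the following sketch, which integrates (21) under the training forcing; the parameter values M, m, L1 and the state ordering are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, m, L1, g = 1.0, 0.5, 1.0, 9.81                 # illustrative parameters

def F_train(t):
    return -0.2 + 0.5 * np.sin(6 * t)             # training forcing from the text

def cart_pendulum(t, state, F):
    phi, s, dphi, ds = state                      # assumed state ordering
    u, den = F(t), M + m - m * np.cos(phi)**2
    ddphi = -((M + m) * g * np.sin(phi) + u * L1 * np.cos(phi)
              + m * L1**2 * np.sin(phi) * np.cos(phi) * dphi**2) / (L1**2 * den)
    dds = (m * L1**2 * np.sin(phi) * dphi**2 + u * L1
           + m * g * np.sin(phi) * np.cos(phi)) / (L1 * den)
    return [dphi, ds, ddphi, dds]

t_eval = np.arange(0, 16, 0.001)                  # dt = 0.001, t in [0, 16]
sol = solve_ivp(cart_pendulum, (0, 16), [0.3, 0, 1, 0],
                t_eval=t_eval, args=(F_train,), rtol=1e-10)
X, dX = sol.y.T, np.gradient(sol.y, t_eval, axis=1).T   # states and derivatives
```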
Figure 8: SINDy-PI is able to identify the simplified Belousov–Zhabotinsky reaction model; the
SINDy-PI solutions for the states x, z, and s closely match the ground truth from the initial
condition at t = 0 to t = T.
Figure 9: SINDy-PI is used to extract the conserved quantity for a double pendulum. The physical
laws extracted by SINDy-PI are:

φ̈1 = 0.006005φ̇2 − 0.03235φ̇1 + 37.97 sin(φ1) − 0.3229φ̈2 cos(φ1 − φ2) − 0.3229φ̇2² sin(φ1 − φ2),
φ̈2 = 0.02525φ̇1 − 0.02525φ̇2 + 49.94 sin(φ2) − 1.358φ̈1 cos(φ1 − φ2) + 1.358φ̇1² sin(φ1 − φ2).
The simplified BZ reaction model (22) is simulated with a time step of dt = 0.001. We use n = 128
discretization points with a spatial domain ranging from −10 to 10. The initial condition is chosen
to be a mixture of Gaussian functions. 80% of the data is used for training, and the remaining 20%
is used for model selection. The right-hand side library is normalized during the sparse regression
process. A range of sparsity parameters λ is tested, from 0.1 to 1 with increments of 0.1. The other
system parameters in (22) are given in App. D, and the SINDy-PI model is given in App. G.
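The selection loop used here amounts to the following sketch; fit_sindy_pi, predict_derivative, Theta_train, U_test, and dU_test are hypothetical stand-ins for the fitting and prediction routines and data splits described above, and selection follows the normalized error (13).

```python
import numpy as np

lambdas = np.arange(0.1, 1.01, 0.1)   # sparsity parameters from 0.1 to 1
best = None
for lam in lambdas:
    # Hypothetical helpers: fit on the 80% training split,
    # evaluate on the held-out 20% model-selection split.
    model = fit_sindy_pi(Theta_train, threshold=lam)
    dU_pred = predict_derivative(model, U_test)
    error = np.linalg.norm(dU_test - dU_pred) / np.linalg.norm(dU_test)
    if best is None or error < best[0]:
        best = (error, lam, model)    # keep the model with minimum error (13)
```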
5 Conclusions and Future Work
In this paper, we develop SINDy-PI (parallel, implicit), a robust variant of the SINDy algorithm
to identify implicit dynamics and rational nonlinearities. SINDy-PI overcomes the sensitivity
of the previous implicit-SINDy approach, which is based on a null-space calculation, making it
highly sensitive to noise. Instead, we introduce both parallel and constrained optimizations to test
candidate terms in the dynamics, making the new SINDy-PI algorithm as robust as the original
SINDy algorithm. We also extend the algorithm to incorporate external forcing and actuation,
making it more applicable to real-world systems. We demonstrate this approach on several
challenging systems with implicit and rational dynamics, including ODEs, actuated systems, and
PDEs. In particular, we discover the implicit dynamics for a simplified model of the BZ chemical
reaction PDE, the double pendulum mechanical system, and the yeast glycolysis model, which
have all been challenging test cases for advanced identification techniques. Throughout these
examples, we demonstrate considerable noise robustness and reductions in the data required,
compared with the previous implicit-SINDy algorithm.
Despite the advances outlined here, there are still many important avenues of future work. One
limitation of this approach, and of SINDy in general, is in the design of the library of candidate
functions. The goal is a descriptive library, but the library size grows rapidly, which in turn
makes the sparse regression ill-conditioned; other issues affecting robustness are discussed in
App. I. Recently, tensor approaches have been introduced to alleviate this issue, making libraries
both descriptive and tractable [45], and this is a promising approach that may be incorporated in
SINDy-PI as well. More generally, automatic library generation, guided by expert knowledge, is
an important topic. Other research directions will involve parameterizing elements of the library,
so that the algorithm simultaneously identifies the model structure and the parameters of the
sparsely selected terms. Recent unified optimization frameworks, such as SR3 [59, 60], may make
this possible. Model selection is another key area that will require focused attention. Balancing
accuracy on test data, sparsity of the model, and the potential for overfitting are all serious concerns.
The sparse regression and optimization may also be improved for better noise robustness. Finally,
modifying SINDy-PI to incorporate prior physical knowledge and to only model the discrepancy
with an existing model [76] will be the focus of ongoing work.
Acknowledgments
SLB acknowledges support from the Army Research Office (ARO W911NF-19-1-0045) and the
Air Force Office of Scientific Research (AFOSR FA9550-18-1-0200). JNK acknowledges support from
the Air Force Office of Scientific Research (AFOSR FA9550-17-1-0329). We also acknowledge valu-
able discussions with Aditya Nair, Eurika Kaiser, Brian DeSilva, Tony Piaskowy, Jared Callaham,
and Benjamin Herrmann. We thank Ariana Mendible for reviewing the manuscript and providing
useful suggestions.
References
[1] S. L. Brunton and J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems,
and Control. Cambridge University Press, 2019.
[2] O. Nelles, Nonlinear system identification: from classical approaches to neural networks and fuzzy models.
Springer, 2013.
[3] L. Ljung, “Perspectives on system identification,” Annual Reviews in Control, vol. 34, no. 1, pp. 1–12,
2010.
[4] P. J. Schmid, “Dynamic mode decomposition of numerical and experimental data,” Journal of Fluid
Mechanics, vol. 656, pp. 5–28, Aug. 2010.
[5] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, Dynamic Mode Decomposition: Data-Driven
Modeling of Complex Systems. SIAM, 2016.
[6] M. Budišić, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos: An Interdisciplinary Journal of
Nonlinear Science, vol. 22, no. 4, p. 047510, 2012.
[7] I. Mezic, “Analysis of fluid flows via spectral properties of the Koopman operator,” Annual Review of
Fluid Mechanics, vol. 45, pp. 357–378, 2013.
[8] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, “A data-driven approximation of the Koopman
operator: extending dynamic mode decomposition,” Journal of Nonlinear Science, vol. 6, pp. 1307–1346,
2015.
[9] S. Klus, F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, and F. Noé, “Data-driven model reduction
and transfer operator approximation,” Journal of Nonlinear Science, 2018.
[10] H. Akaike, “Fitting autoregressive models for prediction,” Ann Inst Stat Math, vol. 21, no. 1, pp. 243–247,
1969.
[11] S. A. Billings, Nonlinear system identification: NARMAX methods in the time, frequency, and spatio-temporal
domains. John Wiley & Sons, 2013.
[12] L. Yang, D. Zhang, and G. E. Karniadakis, “Physics-informed generative adversarial networks for
stochastic differential equations,” arXiv preprint arXiv:1811.02033, 2018.
[13] C. Wehmeyer and F. Noé, “Time-lagged autoencoders: Deep learning of slow collective variables for
molecular kinetics,” The Journal of Chemical Physics, vol. 148, no. 241703, pp. 1–9, 2018.
[14] A. Mardt, L. Pasquali, H. Wu, and F. Noé, “VAMPnets: Deep learning of molecular kinetics,” Nature
Communications, vol. 9, no. 5, 2018.
[15] P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis, and P. Koumoutsakos, “Data-driven forecasting of
high-dimensional chaotic systems with long short-term memory networks,” Proc. R. Soc. A, vol. 474, no.
2213, p. 20170844, 2018.
[16] J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, “Model-free prediction of large spatiotemporally
chaotic systems from data: a reservoir computing approach,” Physical review letters, vol. 120, no. 2, p.
024102, 2018.
[17] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, “Deepxde: A deep learning library for solving differential
equations,” arXiv preprint arXiv:1907.04502, 2019.
[18] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics-informed neural networks: A deep learning
framework for solving forward and inverse problems involving nonlinear partial differential equations,”
Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
[19] K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton, “Data-driven discovery of coordinates and
governing equations,” Proceedings of the National Academy of Sciences, vol. 116, no. 45, pp. 22 445–22 451,
2019.
[20] M. Raissi, A. Yazdani, and G. E. Karniadakis, “Hidden fluid mechanics: Learning velocity and pressure
fields from flow visualizations,” Science, vol. 367, no. 6481, pp. 1026–1030, 2020.
[21] M. Raissi and G. E. Karniadakis, “Machine learning of linear differential equations using gaussian
processes,” arXiv preprint arXiv:1701.02440, 2017.
[22] ——, “Hidden physics models: Machine learning of nonlinear partial differential equations,” Journal of
Computational Physics, vol. 357, pp. 125–141, 2018.
[23] D. Giannakis and A. J. Majda, “Nonlinear laplacian spectral analysis for time series with intermittency
and low-frequency variability,” Proceedings of the National Academy of Sciences, vol. 109, no. 7, pp.
2222–2227, 2012.
[24] O. Yair, R. Talmon, R. R. Coifman, and I. G. Kevrekidis, “Reconstruction of normal forms by learning
informed observation geometries from data,” Proceedings of the National Academy of Sciences, p. 201620045,
2017.
[25] J. Bongard and H. Lipson, “Automated reverse engineering of nonlinear dynamical systems,” Proc. Natl.
Acad. Sciences, vol. 104, no. 24, pp. 9943–9948, 2007.
[26] M. Schmidt and H. Lipson, “Distilling free-form natural laws from experimental data,” Science, vol. 324,
no. 5923, pp. 81–85, 2009.
[27] B. C. Daniels and I. Nemenman, “Automated adaptive inference of phenomenological dynamical
models,” Nature communications, vol. 6, 2015.
[28] S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse
identification of nonlinear dynamical systems,” Proceedings of the national academy of sciences, vol. 113,
no. 15, pp. 3932–3937, 2016.
[29] S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential
equations,” Science Advances, vol. 3, no. 4, p. e1602614, 2017.
[30] H. Schaeffer, “Learning partial differential equations via data discovery and sparse optimization,” in
Proc. R. Soc. A, vol. 473, no. 2197. The Royal Society, 2017, p. 20160446.
[31] M. Sorokina, S. Sygletos, and S. Turitsyn, “Sparse identification for nonlinear optical communication
systems: SINO method,” Optics express, vol. 24, no. 26, pp. 30 433–30 443, 2016.
[32] M. Dam, M. Brøns, J. Juul Rasmussen, V. Naulin, and J. S. Hesthaven, “Sparse identification of a
predator-prey system from simulation data of a convection model,” Physics of Plasmas, vol. 24, no. 2, p.
022310, 2017.
[33] A. Narasingam and J. S.-I. Kwon, “Data-driven identification of interpretable reduced-order models
using sparse regression,” Computers & Chemical Engineering, vol. 119, pp. 101–111, 2018.
[34] J.-C. Loiseau and S. L. Brunton, “Constrained sparse galerkin regression,” Journal of Fluid Mechanics, vol.
838, pp. 42–67, 2018.
[35] M. Quade, M. Abel, J. N. Kutz, and S. L. Brunton, “Sparse identification of nonlinear dynamics for rapid
model recovery,” To appear in Chaos, 2018.
[36] L. Boninsegna, F. Nüske, and C. Clementi, “Sparse learning of stochastic dynamical equations,” The
Journal of Chemical Physics, vol. 148, no. 24, p. 241723, 2018.
[37] J.-C. Loiseau, B. R. Noack, and S. L. Brunton, “Sparse reduced-order modelling: sensor-based dynamics
to full-state estimation,” Journal of Fluid Mechanics, vol. 844, pp. 459–490, 2018.
[38] L. Zhang and H. Schaeffer, “On the convergence of the SINDy algorithm,” arXiv preprint
arXiv:1805.06445, 2018.
[39] M. Hoffmann, C. Fröhner, and F. Noé, “Reactive SINDy: Discovering governing reactions from concen-
tration data,” Journal of Chemical Physics, vol. 150, no. 025101, 2019.
[40] N. M. Mangan, T. Askham, S. L. Brunton, J. N. Kutz, and J. L. Proctor, “Model selection for hybrid
dynamical systems via sparse regression,” Proceedings of the Royal Society A, vol. 475, no. 2223, p.
20180534, 2019.
[41] S. Thaler, L. Paehler, and N. A. Adams, “Sparse identification of truncation errors,” Journal of Computa-
tional Physics, vol. 397, p. 108851, 2019.
[42] Z. Lai and S. Nagarajaiah, “Sparse structural system identification method for nonlinear dynamic
systems with hysteresis/inelastic behavior,” Mechanical Systems and Signal Processing, vol. 117, pp.
813–842, 2019.
[43] K. Wu and D. Xiu, “Numerical aspects for approximating governing equations using data,” Journal of
Computational Physics, vol. 384, pp. 200–221, 2019.
[44] B. de Silva, D. M. Higdon, S. L. Brunton, and J. N. Kutz, “Discovery of physics from data: Universal
laws and discrepancies,” arXiv preprint arXiv:1906.07906, 2019.
[45] P. Gelß, S. Klus, J. Eisert, and C. Schütte, “Multidimensional approximation of nonlinear dynamical
systems,” Journal of Computational and Nonlinear Dynamics, vol. 14, no. 6, 2019.
[46] S. Zhang and G. Lin, “Robust data-driven discovery of governing physical laws with error bars,”
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 474, no. 2217, p.
20180305, 2018.
[47] N. M. Mangan, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Inferring biological networks by sparse
identification of nonlinear dynamics,” IEEE Transactions on Molecular, Biological and Multi-Scale Commu-
nications, vol. 2, no. 1, pp. 52–63, 2016.
[48] E. Kaiser, J. N. Kutz, and S. L. Brunton, “Sparse identification of nonlinear dynamics for model predictive
control in the low-data limit,” Proceedings of the Royal Society of London A, vol. 474, no. 2219, 2018.
[49] H. Schaeffer and S. G. McCalla, “Sparse model selection via integral terms,” Physical Review E, vol. 96,
no. 2, p. 023302, 2017.
[50] P. A. Reinbold, D. R. Gurevich, and R. O. Grigoriev, “Using noisy or incomplete data to discover models
of spatiotemporal dynamics,” Physical Review E, vol. 101, no. 1, p. 010203, 2020.
[51] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,”
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 31, no. 2, pp. 210–227, 2009.
[52] Q. Qu, J. Sun, and J. Wright, “Finding a sparse vector in a subspace: Linear sparsity using alternating
directions,” in Proceedings of the 27th International Conference on Neural Information Processing Systems -
Volume 2, ser. NIPS’14. Cambridge, MA, USA: MIT Press, 2014, p. 3401–3409.
[53] Q. Zhu, Y. Wang, D. Zhao, S. Li, and S. A. Billings, “Review of rational (total) nonlinear dynamic
system modelling, identification, and control,” International Journal of Systems Science, vol. 46, no. 12, pp.
2122–2133, 2015.
[54] Q. Zhu, L. Liu, W. Zhang, and S. Li, “Control of complex nonlinear dynamic rational systems,” Complex-
ity, vol. 2018, 2018.
[55] I. Koryakovskiy, M. Kudruss, H. Vallery, R. Babuška, and W. Caarls, “Model-plant mismatch compensa-
tion using reinforcement learning,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2471–2477,
2018.
[56] R. Chartrand, “Numerical differentiation of noisy, nonsmooth data,” ISRN Applied Mathematics, vol.
2011, 2011.
[57] L. Zhang and H. Schaeffer, “On the convergence of the sindy algorithm,” Multiscale Modeling & Simula-
tion, vol. 17, no. 3, pp. 948–972, 2019.
[58] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society:
Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[59] P. Zheng, T. Askham, S. L. Brunton, J. N. Kutz, and A. Y. Aravkin, “A unified framework for sparse
relaxed regularized regression: SR3,” IEEE Access, vol. 7, pp. 1404–1423, 2018.
[60] K. Champion, P. Zheng, A. Y. Aravkin, S. L. Brunton, and J. N. Kutz, “A unified sparse optimization
framework to learn parsimonious physics-informed models from data,” arXiv preprint arXiv:1906.10612,
2019.
[61] W. Pan, Y. Yuan, J. Gonçalves, and G. Stan, “A sparse Bayesian approach to the identification of nonlinear
state-space systems,” IEEE Transactions on Automatic Control, vol. 61, no. 1, pp. 182–187, January 2016.
[62] M. Gavish and D. L. Donoho, “The optimal hard threshold for singular values is 4/√3,” IEEE Transactions
on Information Theory, vol. 60, no. 8, pp. 5040–5053, 2014.
[63] H. Akaike, “Information theory and an extension of the maximum likelihood principle,” in Selected
papers of hirotugu akaike. Springer, 1998, pp. 199–213.
[64] ——, “A new look at the statistical model identification,” IEEE transactions on automatic control, vol. 19,
no. 6, pp. 716–723, 1974.
[65] G. Schwarz, “Estimating the dimension of a model,” Ann. Statist., vol. 6, no. 2, pp. 461–464, 03 1978.
[Online]. Available: https://doi.org/10.1214/aos/1176344136
[66] N. M. Mangan, J. N. Kutz, S. L. Brunton, and J. L. Proctor, “Model selection for dynamical systems via
sparse regression and information criteria,” Proceedings of the Royal Society A: Mathematical, Physical and
Engineering Sciences, vol. 473, no. 2204, p. 20170009, 2017.
[67] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,”
http://cvxr.com/cvx, Mar. 2014.
[68] ——, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and
Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura,
Eds. Springer-Verlag Limited, 2008, pp. 95–110, http://stanford.edu/~boyd/graph_dcp.html.
[69] K. A. Johnson and R. S. Goody, “The original Michaelis constant: Translation of the 1913 Michaelis–
Menten paper,” Biochemistry, vol. 50, no. 39, pp. 8264–8269, 2011.
[70] A. Cornish-Bowden, “One hundred years of Michaelis–Menten kinetics,” Perspectives in Science, vol. 4,
pp. 3–9, 2015.
[71] M. D. Schmidt, R. R. Vallabhajosyula, J. W. Jenkins, J. E. Hood, A. S. Soni, J. P. Wikswo, and H. Lipson,
“Automated refinement and inference of analytical models for metabolic networks,” Physical biology,
vol. 8, no. 5, p. 055011, 2011.
[72] K. Graichen, M. Treuer, and M. Zeitz, “Swing-up of the double pendulum on a cart by feedforward and
feedback control with experimental validation,” Automatica, vol. 43, no. 1, pp. 63 – 71, 2007.
[73] V. K. Vanag, “Waves and patterns in reaction–diffusion systems. Belousov–Zhabotinsky reaction in
water-in-oil microemulsions,” Physics-Uspekhi, vol. 47, no. 9, p. 923, 2004.
[74] Spectral Methods in MATLAB. SIAM, 2000, vol. 10, ch. 8. Chebyshev Series and the FFT, pp. 75–86.
[Online]. Available: https://epubs.siam.org/doi/abs/10.1137/1.9780898719598.ch8
[75] J. N. Kutz, Data-driven modeling & scientific computation: methods for complex systems & big data. Oxford
University Press, 2013.
[76] K. Kaheman, E. Kaiser, B. Strom, J. N. Kutz, and S. L. Brunton, “Learning discrepancy models from
experimental data,” arXiv preprint arXiv:1909.08574, 2019.
[77] M. C. Kennedy and A. O’Hagan, “Bayesian calibration of computer models,” Journal of the Royal
Statistical Society: Series B (Statistical Methodology), vol. 63, no. 3, pp. 425–464, 2001.
[78] P. D. Arendt, D. W. Apley, and W. Chen, “Quantification of model uncertainty: Calibration, model
discrepancy, and identifiability,” Journal of Mechanical Design, vol. 134, no. 10, 2012.
Step 1: Randomly generate 2400 different initial conditions of different magnitudes ranging from 0
to 12.5. Simulate those initial conditions using a fourth-order Runge-Kutta method with
time step dt = 0.1 and time horizon T = 5. The testing data is generated using 600 random
initial conditions using the same method as the training data.
Step 2: Add Gaussian noise to the training and testing data. 23 different Gaussian noise levels
with magnitudes ranging from 10−7 to 5 × 10−1 are used. For each noise level, 30 different
random noise realizations are generated, resulting in 30 different noisy data sets for each
noise level.
Step 3: Compute the derivative of the noisy data. We investigate several approaches, including
finite-difference and total-variation regularized difference (TVRegDiff) [56] derivatives. In
all cases, SINDy-PI is several orders of magnitude more robust to noise than implicit-SINDy,
and only the result of using TVRegDiff is shown in this paper. TVRegDiff generates more
accurate derivatives, but also requires hyperparameter tuning and causes aliasing, so we
trim the ends of the time series generated by each initial condition (first and last 30%). It
is possible to add Gaussian noise to the clean derivative data to investigate robustness,
although this is less relevant for real-world scenarios, where only noisy state data is
available.
Step 4: Train SINDy-PI and implicit-SINDy models on the noisy training data. For each noise level,
we sweep through 68 different sparsity parameters λ for SINDy-PI, ranging from 0.01 to 5.
For implicit-SINDy, the λ used to calculate the null space [52] is varied by factors of 2.
The same library of candidate functions is used for both implicit-SINDy and SINDy-PI.
Step 5: Due to the various parameter values, we use model selection to choose a model. We use
the test data with the same noise magnitude to perform the model selection process. The
ratio of training data and testing data is 8 : 2.
Table 1: Parameters used for simulating Eq. (19).

Parameter: c1   c2    c3      d1  d2      d3 d4 e1 e2  e3 e4 f1 f2
Value:     2.5  -100  13.6769 200 13.6769 -6 -6 6  -64 6  16 64 -13
Parameter: f3   f4    f5      g1  g2      h1   h2      h3  h4    h5  j1 j2  j3
Value:     13   -16   -100    1.3 -3.1    -200 13.6769 128 -1.28 -32 6  -18 -100
Step 6: The best models generated by the two methods are compared. We use the prediction error,
the error in the model structure (i.e., the number of terms that are incorrectly present or missing
from the model; see the sketch below), and the parameter error as our model performance
evaluation criteria. We average the performance over 30 different noise realizations for each
noise level. We then plot the distribution of structure error in Fig. 3.
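A minimal implementation of the structure-error metric referenced in Step 6, assuming true and identified coefficient vectors over a shared library:

```python
import numpy as np

def structure_error(xi_true, xi_identified, tol=1e-6):
    """Count terms incorrectly present or missing: the size of the symmetric
    difference between the supports of the true and identified coefficients."""
    support_true = np.abs(xi_true) > tol
    support_found = np.abs(xi_identified) > tol
    return int(np.sum(support_true != support_found))
```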
Many parameters affect the performance of these methods: the length of training data, predic-
tion steps to calculate prediction error, the initial conditions for training data, choice of the library,
and the derivative computation. We have attempted to carefully optimize each method, although
an exhaustive parameter sweep is beyond the scope of the present work. However, in all cases
SINDy-PI outperforms implicit-SINDy.
Step 1: Generate training data by simulating Eq. (19) with parameters in Table. 1 and a time step
of dt = 0.1, with time horizon T = 5. We simulate the system using 900 random initial
conditions with magnitude ranging from 0 to 3.
Step 2: Shuffle the training data and select j percent of the entire training data set at random to
train the SINDy-PI and implicit-SINDy models. These training data are sampled from all
trajectories, and they are not necessarily consecutive in time. No noise is added, since we
only care about the effect of the data length in this case. The sparsity parameter λ is fixed
for both algorithms (at a different value for each); this value is selected by sweeping through
λ at a single percentage j at which both methods would otherwise fail to identify the correct
model.
Step 3: Run the numerical experiment 20 times for each data length and calculate the percentage
of runs in which the two algorithms yield the correct structure of Eq. (19f).

The final comparison is shown in Fig. 4. Data usage requirements for the other state equations
are given in Table 2; Fig. 4 shows results for the hardest equation to identify, and the other equations
require less data. Normalizing the SINDy-PI library further reduces the amount of data required,
as sketched below.
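Library normalization of this kind can be done per column, remembering the scales so that coefficients identified in the normalized basis can be mapped back; a minimal sketch:

```python
import numpy as np

def normalize_library(Theta):
    """Scale each library column to unit l2 norm; return the scales so that
    coefficients fit in the normalized basis can be converted back."""
    scales = np.linalg.norm(Theta, axis=0)
    scales[scales == 0] = 1.0          # guard against all-zero columns
    return Theta / scales, scales
```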
Table 2: Comparison of data usage of SINDy-PI and implicit-SINDy on the other states. The library
order for equations (19a)–(19g) is 6, 6, 3, 3, 3, 6, 3, respectively.

SINDy-PI (un-normalized):
  Left-hand side: ẋ1,  ẋ1x6⁴, ẋ2,  ẋ2x6⁴, ẋ3,   ẋ4,   ẋ5,   ẋ6x6⁴, ẋ7
  Threshold:      0.5, 0.05,  0.5, 0.05,  0.2,  0.5,  0.3,  0.01,  0.5
  Data usage:     50%, 7.5%,  55%, 8.5%,  0.5%, 0.5%, 0.3%, 40%,   0.5%

SINDy-PI (normalized):
  Left-hand side: ẋ1,  ẋ1x6⁴, ẋ2,  ẋ2x6⁴, ẋ3,    ẋ4,    ẋ5,     ẋ6,  ẋ6x6⁴, ẋ7
  Threshold:      0.5, 0.5,   0.5, 0.5,   0.6,   0.8,   0.4,    0.1, 0.1,   0.2
  Data usage:     18%, 3%,    10%, 3%,    0.45%, 0.45%, 0.275%, 35%, 8%,    0.4%

implicit-SINDy (normalized), one column per equation (19a)–(19g):
  Threshold:  5 × 10⁻³, 2 × 10⁻³, 8 × 10⁻³, 8 × 10⁻³, 8 × 10⁻³, 3 × 10⁻³, 8 × 10⁻³
  Data usage: 10%,      10%,      0.5%,     0.6%,     0.3%,     100%,     0.5%
Θ(U, Ux, Uxx, Uxxx) = [1 U Ux Uxx Uxxx Ux² Uxx² Uxxx² UUx UUxx UUxxx U²Ux² U²Uxx² U²Uxxx²], (24)

Θ(U, Ux, Uxx, Uxxx) = [1 U Ut Ux Uxx Uxxx U² Ut² Ux² Uxx² Uxxx² UUt UUx UUxx UUxxx U²Ux² U²Uxx² U²Uxxx²], (25)
For both SINDy-PI and PDE-FIND, we used 100 different values for the sparsity parameter λ
ranging from 0.1 to 10 with step size 0.1. We use 80% of the simulation data for training and 20%
for testing and model selection. We calculate the normalized prediction error for all models on the
state ut, and the model with the minimum prediction error is selected as the final model.
Table 5: Parameters Used in Eq. (22) for Simulating the Belousov-Zhabotinsky Reaction Model.
Parameter q f ε α β γ ε2 ε3 χ Dx Dz Ds Du
Value 1 1.5 0.3 0.3 0.26 0.4 0.15 0.03 0 0.01 0.01 1 1
Parameters identified by SINDy-PI under different noise magnitudes are presented in Tables 6
and 7.
Table 6: Parameters identified by SINDy-PI for Eq. (32c) under different noise magnitudes. The
numerator basis functions are sin(φ), F cos(φ), and sin(φ) cos(φ)φ̇²; the denominator basis
functions are a constant and cos(φ)².

Noise magnitude   sin(φ)    F cos(φ)        sin(φ) cos(φ)φ̇²   Constant   cos(φ)²
0                 19.62     −1              −1                 2          −1
0.001             19.618    −1.0005         −0.9999            2          −1
0.005             19.6135   −1.171          −0.9996            2          −0.9997
0.02              19.5881   Not identified  −0.4912            2          −1.0122

Table 7: Parameters identified by SINDy-PI for Eq. (32d) under different noise magnitudes. The
numerator basis functions are F, sin(2φ), and sin(φ)φ̇²; the denominator basis functions are a
constant and sin(φ)².

Noise magnitude   F        sin(2φ)   sin(φ)φ̇²   Constant   sin(φ)²
0                 2        −9.81     2           2          2
0.001             1.9992   −9.816    1.9992      2          1.9992
0.005             1.9982   −9.8015   1.9986      2          1.9986
0.02              2.0705   −9.8234   2.0041      2          2.0041
The symbolic form of Eq. (35a) is
If we set k1 = k2 = 0 and combine the equations, it is possible to solve for φ̈1 and φ̈2:
φ̈1 = (L1a2²gm2² sin(φ1) − 2L1a2³φ̇2²m2² sin(φ1 − φ2) + 2I2L1gm2 sin(φ1)
+ L1a2²gm2² sin(φ1 − 2φ2) + 2I2a1gm1 sin(φ1) − L1²a2²φ̇1²m2² sin(2φ1 − 2φ2)
− 2I2L1a2φ̇2²m2 sin(φ1 − φ2) + 2a1a2²gm1m2 sin(φ1)) / (2I1I2 + L1²a2²m2²
+ 2I2L1²m2 + 2I2a1²m1 + 2I1a2²m2 − L1²a2²m2² cos(2φ1 − 2φ2) + 2a1²a2²m1m2)
and
φ̈2 = (a2m2(2I1g sin(φ2) + 2L1³φ̇1²m2 sin(φ1 − φ2) + 2L1²gm2 sin(φ2) + 2I1L1φ̇1² sin(φ1 − φ2)
+ 2a1²gm1 sin(φ2) + L1²a2φ̇2²m2 sin(2φ1 − 2φ2) + 2L1a1²φ̇1²m1 sin(φ1 − φ2)
− 2L1²gm2 cos(φ1 − φ2) sin(φ1) − 2L1a1gm1 cos(φ1 − φ2) sin(φ1)))
/ (2(I1I2 + L1²a2²m2² + I2L1²m2 + I2a1²m1 + I1a2²m2 − L1²a2²m2² cos(φ1 − φ2)² + a1²a2²m1m2)).
and
φ̈2 = (1.7390 sin(φ1 − φ2)φ̇1² + 0.2808 sin(2φ1 − 2φ2)φ̇2²
− 33.02 sin(2φ1 − φ2) + 30.9472 sin(φ2)) / (1 − 0.2808 cos(2φ1 − 2φ2)).
With no noise, SINDy-PI discovers the correct equations. When we add random noise with
magnitude of 0.005, SINDy-PI discovers the following
and
φ̈2 = (1.7392 sin(φ1 − φ2)φ̇1² + 0.2805 sin(2φ1 − 2φ2)φ̇2²
− 33.0035 sin(2φ1 − φ2) + 30.9418 sin(φ2)) / (1 − 0.2813 cos(2φ1 − 2φ2)).
If we increase the noise magnitude to 0.01 then the SINDy-PI discovered equation becomes
and
φ̈2 = (1.7355 sin(φ1 − φ2)φ̇1² + 0.2794 sin(2φ1 − 2φ2)φ̇2² + 0.1675 sin(2φ1 − 2φ2)
− 33.0445 sin(2φ1 − φ2) + 31.0065 sin(φ2)) / (1 − 0.2819 cos(2φ1 − 2φ2)).
If we continue to increase the noise magnitude to 0.05, then SINDy-PI incorrectly identifies the
dynamics.
Figure 10: Comparison of the ground truth with the SINDy, implicit-SINDy, and SINDy-PI models
on a test trajectory, panels (a)–(d). Panel (a) uses noise-free training data; panels (b)–(d) use noisy
training data with increasing numbers of training points.
Thus, SINDy correctly identifies the first three terms of the Taylor expansion, although the higher
order terms have large parameter errors. This model is compared with the SINDy-PI and Implicit-
SINDy models in Fig. 10 for a test trajectory initialized with x0 = 0.6. From Fig. 10 (a), it can be
seen that both SINDy-PI and Implicit-SINDy match the true solution. However, the SINDy model
only agrees for x near the origin. When Gaussian noise of magnitude σ is added, the SINDy model
degrades further. Moreover, the amount of data used needs to be increased to identify the correct
model. In Fig. 10 (b) to (d), 330, 2200 and 4400 data points ranging from 0 to 12 are used for training.
The same amount of data is used for model selection. From Fig. 10 (b) to (d), it can be seen that the
SINDy model does not work well away from x = 0.
Table 8: The effect of Km on the noise robustness of SINDy-PI.

Km = 0.01, max noise magnitude 0.001:
  True model:       ẋ = −(0.9x − 0.006)/(x + 0.01)
  Identified model: ẋ = −(x − 7.6144)(x − 5.6092)(x − 2.5078)(x − 0.0072) / [(x − 7.6146)(x − 5.6125)(x − 2.5092)(x + 0.0092)]

Km = 0.1, max noise magnitude 0.01:
  True model:       ẋ = −(0.9x − 0.06)/(x + 0.1)
  Identified model: ẋ = −(x − 5.4132)(x − 0.0681) / [(x − 5.4144)(x + 0.099)]

Km = 1, max noise magnitude 0.05:
  True model:       ẋ = −(0.9x − 0.6)/(x + 1)
  Identified model: ẋ = −(x − 4.7143)(x − 0.6726) / [(x − 4.7159)(x + 0.9698)]

Km = 10, max noise magnitude 0.1:
  True model:       ẋ = −(0.9x − 6)/(x + 10)
  Identified model: ẋ = −(0.99x − 6.583)/(x + 11.07)