Interpreting recurrent neural networks behaviour via excitable
network attractors
Andrea Ceni∗1, Peter Ashwin†2, and Lorenzo Livi‡§3,1

1 Department of Computer Science, University of Exeter, Exeter EX4 4QF, UK
2 Department of Mathematics, University of Exeter, Exeter EX4 4QF, UK
3 Departments of Computer Science and Mathematics, University of Manitoba, Winnipeg, MB R3T 2N2, Canada

March 12, 2019

arXiv:1807.10478v6 [cs.LG] 10 Mar 2019
Abstract
Introduction: Machine learning provides fundamental tools both for scientific research and for the
development of technologies with significant impact on society. It provides methods that facilitate the
discovery of regularities in data and that give predictions without explicit knowledge of the rules governing
a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically
black-boxes where it is difficult to fully understand what the machine is doing or how it is operating.
This poses constraints on the applicability and explainability of such methods. Methods: Our research
aims to open the black-box of recurrent neural networks, an important family of neural networks used for
processing sequential data. We propose a novel methodology that provides a mechanistic interpretation
of behaviour when solving a computational task. Our methodology uses mathematical constructs called
excitable network attractors, which are invariant sets in phase space composed of stable attractors and
excitable connections between them. Results and Discussion: As the behaviour of recurrent neural
networks depends both on training and on inputs to the system, we introduce an algorithm to extract
network attractors directly from the trajectory of a neural network while solving tasks. Simulations
conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the
behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable
states and transitions between them.
Keywords— Recurrent neural networks; Dynamical systems; Network attractors; Bifurcations.
1 Introduction
Artificial recurrent neural networks (RNNs) are widely used to solve tasks involving temporal data, e.g.,
speech [18] and handwriting recognition [44], audio classification [26, 52] or time series forecasting [8]. RNNs
are characterised by the presence of feedback connections in a hidden layer, which allows generating a
state-space representation that equips the network with short-term memory capability. RNNs are universal
approximators of dynamical systems [14, 19] meaning that, given enough neurons in the hidden layer, it is
possible to fine tune the weights to achieve any desired level of accuracy. Nevertheless, training via backpropagation through time is difficult due to the vanishing/exploding gradient problem [23, 43]. This has led
to the development of new and faster techniques for training RNNs, including a different paradigm known
∗ [email protected]
† [email protected]
‡ [email protected]
§ Corresponding author
as reservoir computing [32, 33]. Echo state networks (ESNs) [21, 30] constitute an important example of
reservoir computing, where a recurrent layer (called a reservoir) is composed of a large number of neurons
with randomly initialised connections that are not fine-tuned via gradient-based optimisation mechanisms.
The main idea behind ESNs is to exploit the rich dynamics generated by the reservoir by means of an output layer, the read-out, which is optimised to solve a specific task.
1.1 Problem statement and research hypothesis
The high-dimensional and non-linear nature of RNNs complicates interpretability of their internal dynamics,
which are characterised by complex, input-dependent spatio-temporal patterns of activity [47, 55]. This
poses constraints on understanding the behaviour of RNNs: they are usually viewed as black-boxes from
which it is hard to extract useful knowledge about their inner workings. As highlighted by recent research
efforts [10, 24, 40], similar interpretability issues affect many other machine learning methods. Furthermore,
an increasing societal need to develop accountability and explainability of decision making by AI [17] is
driving the development of methodologies for explaining the behaviour of such methods.
Our aim in this paper is to develop effective models that capture the essential dynamical behaviour of
RNNs on computational tasks as input-driven responses of a dynamical system, while neglecting microscopic
details of the RNN dynamics in phase space (i.e., the space of all possible neuron activations). To this end,
we hypothesise that RNNs can undertake computations by exploiting (i) transient dynamical regimes and
(ii) excitable connections to switch between different stable attractors, depending on input and current state.
The RNN behaviour depends on the task at hand: for example, long transients are mostly exploited in time
series forecasting problems, while switching between attractors is mostly exploited in classification problems
or tasks that require learning a finite set of memory states. These mechanisms are not mutually exclusive and
can be exploited synergistically to realise more complex computations.
1.2 Contribution and paper organisation
In order to test our hypothesis, we develop a theoretical framework based on network attractors of dynamical
systems. An attractor is a subset of the state space of a dynamical system (i.e., the phase space), where
the state will asymptotically converge for “usual” choices of initial conditions. Network attractors are
special kinds of attractor which can be thought of as directed graphs, where nodes correspond to local
attractors and directed edges correspond to particular trajectories that connect (or almost connect) those
local attractors. Since for this paper, the local attractors of interest are all stable fixed points, we will use
the term local attractor synonymously with stable fixed point. A stable fixed point is the simplest possible attractor: a point in phase space where all variables of the dynamical system assume constant values, and to which all sufficiently close initial conditions converge. However, fixed points need not be stable:
they can be partially stable (saddle points), or totally unstable (repellers). According to the nature of
the trajectories connecting local attractors it is possible to consider both heteroclinic network attractors
composed of heteroclinic connections between saddles (i.e. partially stable fixed points), and excitable
network attractors (ENAs) composed of connections that are excitable, i.e. that require a small initial
perturbation to move between stable attractors [5, 60].
Heteroclinic network attractors have already been suggested as models to describe transient computation
[45, 46]; here, conversely, we focus attention on ENAs, which have the advantage of being more robust
to perturbations. We focus particularly on ENAs between stable fixed points [5], although ENAs can be
theoretically conceived between any kind of local attractor: limit cycles, limit tori or strange attractors.
More precisely, we show how to construct ENAs describing RNN behaviour for tasks that involve switching
between a finite set of attractors. Interestingly, from a directed graph (representing the underlying network
attractor of a dynamical system) it is possible to reverse engineer [4] a set of ordinary differential equations
ruling, in our case, the RNN behaviour. This, in turn, opens the way to a more analytic description of how
RNNs solve computational tasks.
In this paper, we focus on the flip-flop task with two bits [55], since it provides a controlled workbench
to test our hypothesis. The input to the discrete-time RNN is assumed to be a discrete temporal sequence with
values from {+1, 0, −1}. In general, we consider a discrete time index k, a time-varying input, denoted as u[k], and an output, denoted as z[k], of a system related by

(x[k], z[k]) = R(u[k], x[k − 1]),    (1)
where R represents an RNN with evolving internal state, namely x[k], and connection weights that are
optimised during the training phase. We consider response to inputs where, independently for any k, u[k]
is (0, 0) with probability 1 − p or otherwise chosen to be one of the four inputs (±1, 0) or (0, ±1) with equal
probability. This generates a sequence of inputs that remain at (0, 0) for an exponentially distributed period
of time, but occasionally take one of the four values (0, ±1), (±1, 0). The target output to be learned by the
RNN is a two-dimensional vector z[k] that can assume four possible configurations: (1, 1), (1, −1), (−1, 1),
and (−1, −1). The system (1) is considered to successfully accomplish the task if the network is able to
reliably and accurately reproduce all 20 possible actions described in Table 1.
Table 1: The two-bit flip-flop task. Depending on the output z[k − 1] and input u[k], we expect the output
z[k] to become as given in the table, where N.C. indicates “no change” from the current output value.
Outputs\Inputs | (0, 0) | (1, 0)   | (−1, 0)   | (0, 1)   | (0, −1)
(1, 1)         | N.C.   | N.C.     | (−1, 1)   | N.C.     | (1, −1)
(1, −1)        | N.C.   | N.C.     | (−1, −1)  | (1, 1)   | N.C.
(−1, 1)        | N.C.   | (1, 1)   | N.C.      | N.C.     | (−1, −1)
(−1, −1)       | N.C.   | (1, −1)  | N.C.      | (−1, 1)  | N.C.
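To make the input statistics and the transitions of Table 1 concrete, the following minimal sketch (our own illustration, not code from the paper; the function name, seed, and initial memory state are arbitrary choices) generates an input sequence with pulse probability p and the corresponding two-bit target sequence.

```python
import numpy as np

def generate_flipflop_data(n_steps, p=0.1, seed=0):
    """Generate a two-bit flip-flop input/target sequence.

    At each step the input is (0, 0) with probability 1 - p; otherwise it is one of
    (+1, 0), (-1, 0), (0, +1), (0, -1) with equal probability. The target holds the
    last nonzero value seen on each channel (memory state initialised here to (1, 1)).
    """
    rng = np.random.default_rng(seed)
    pulses = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    u = np.zeros((n_steps, 2))
    z = np.zeros((n_steps, 2))
    state = np.array([1.0, 1.0])                    # current memory state
    for k in range(n_steps):
        if rng.random() < p:
            pulse = pulses[rng.integers(4)]
            u[k] = pulse
            j = 0 if pulse[0] != 0 else 1           # which bit the pulse addresses
            state[j] = pulse[j]                     # set that bit; the other bit is unchanged
        z[k] = state
    return u, z

u, z = generate_flipflop_data(1000)
```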
We use RNNs that are ESNs subjected to supervised training with a perturbation matrix obtained by
injecting the output into the ESN dynamics. This choice does not limit the validity of our hypothesis: we
claim that our results are general and can be used to describe the behaviour of more general discrete-time,
input-driven RNNs. The contributions of this paper can be summarised as follows:
• We provide a theoretical framework to describe the behaviour of RNNs by means of ENAs. The
proposed theoretical framework is general and covers several types of computational tasks. We present
a specific instance of such a framework applied to describe RNN behaviour on tasks that require learning
a finite set of memory states;
• We use bifurcation analysis to manually design (low-dimensional) ESNs that give rise to ENAs able
to solve the flip-flop task with any number of bits. This allows us to justify our choice of modelling
framework and suggests its validity in the context of RNNs;
• As ESNs are driven by inputs and subject to training via perturbation matrices, bifurcation analysis
alone is not sufficient to explain changes to their behaviour. In fact, inputs and perturbation matrices
can affect ESN behaviour in a non-trivial way. To this end, we introduce an algorithm to extract the
ENA describing the ESN behaviour for the task at hand. The algorithm analyses an ESN trajectory
and constructs a directed graph encoding the underlying ENA on which the dynamics takes place.
The vertices (nodes) of such a graph are associated with the fixed points of the dynamics and directed
edges describe excitable connections between them. We apply this algorithm to an ESN that is trained
to solve the flip-flop task and show that the resulting ENA is able to explain how ESNs perform
computations in a detailed and mechanistic way;
• We define a notion of excitability threshold for this high-dimensional, non-linear dynamical system
driven by inputs. We propose a method for computing this excitability threshold that accounts for
inputs and can directly be applied to trajectories generated by a neural network while solving a task;
• Our simulations suggest three important findings. First, as already noted by recent research [1, 55],
the dynamics of high-dimensional RNNs takes place in a much lower-dimensional phase space region
that is determined by the structure introduced with training and inputs. Here, we observe that the
dynamics is indeed low-dimensional, but highlight the fact that additional dimensions are used occasionally to switch between stable states according to control inputs. This suggests that, for example,
a simplistic use of methods based on explained variance to reduce dimensions needs to be avoided.
Second, we show how ENA models describing RNN behaviour can be exploited to provide a mechanistic interpretation of errors occurring during the computation undertaken by RNNs. Finally, we
show how excitability thresholds of the extracted ENAs allow us to assess the robustness of RNNs to
noise-induced perturbations.
The remainder of this paper is organised as follows. Section 2 introduces the essential background material
needed in this paper. Section 3 shows how to design low-dimensional ESN models that give rise to ENAs able
to solve the flip-flop task. In Section 4 we propose an algorithm for automatically extracting ENAs directly
from an ESN trajectory generated while solving a task. In Section 5, we present and discuss results of the
simulations. Finally, Section 6 draws conclusions and points to future research directions. We include three
appendices: Appendix A reviews notions of linear stability used throughout the paper. Appendix B discusses
bifurcations of fixed points for low-dimensional ESN maps. Appendix C provides details of the procedure
used to determine fixed points.
2 Background

2.1 Echo State Networks
We consider a specific system of the form (1), corresponding to a discrete-time ESN state-update and related
output:
x[k] = φ(Wr x[k − 1] + Win u[k]),    (2)
z[k] = Wo x[k].    (3)
x[k] ∈ RNr is the state, u[k] ∈ RNi and z[k] ∈ RNo denote input and output, respectively. The activation
function φ(·) is applied component-wise; without loss of generality, we consider φ = tanh : R −→ (−1, 1).
It is worth mentioning [43] that (2) is often written x[k] = Wr φ(x[k − 1]) + Win u[k]. However, Miller and
Fumarola [37] proved that the two formulations are equivalent up to a change of coordinates; they produce
the same discrete-time dynamics. In this paper, we build on (2) and consider a network of discrete-time
leaky-integrator neurons [22] of the form:
x[k] = (1 − α)x[k − 1] + αφ(Wr x[k − 1] + Win u[k] + ε).    (4)

Here, α ∈ (0, 1] is called a leak rate and explicitly sets the time-scale of the ESN [56]. The term ε represents additive white Gaussian noise with spherical covariance matrix and unit standard deviation.
The reservoir Wr ∈ RNr ×Nr and input Win ∈ RNr ×Ni matrices are usually random with i.i.d. entries
drawn from uniform or Gaussian distributions [31, 32]. However, in the literature it is possible to find
reservoirs with different connection patterns, including deterministic topologies [50] and those exploiting the
norm-preserving property of orthogonal matrices [36]. In our case the read-out matrix Wo ∈ RNo ×Nr is
optimised for the task at hand. Relevant hyperparameters directly affecting ESN performance include the
number of neurons and sparseness of their connections, the spectral radius of the reservoir matrix, and leak
rate α [9, 29]. The so-called echo-state property (ESP) [15, 34, 62] guarantees the existence and uniqueness
of a global attracting trajectory for any input sequence in a compact set. The ESP, although useful in some
tasks, such as forecasting, it is in practice difficult to verify and is usually formulated only for ESNs with a state-update of the form shown in (2) and (4). Therefore, it is not suitable for the ESN models and tasks we discuss here; see the following subsection.
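As an illustration of the state-update just described, the sketch below implements the leaky-integrator update (4) in Python; the function name, reservoir size, spectral-radius rescaling, and parameter values are our own illustrative choices rather than prescriptions from the paper.

```python
import numpy as np

def esn_step(x, u, W_r, W_in, alpha=1.0, noise_std=0.0, rng=None):
    """One leaky-integrator ESN update, cf. Eq. (4):
    x[k] = (1 - alpha) x[k-1] + alpha * tanh(W_r x[k-1] + W_in u[k] + eps)."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(0.0, noise_std, x.shape) if noise_std > 0 else 0.0
    return (1.0 - alpha) * x + alpha * np.tanh(W_r @ x + W_in @ u + eps)

# Toy usage: random reservoir rescaled to spectral radius 0.9, two-dimensional input.
Nr, Ni = 100, 2
rng = np.random.default_rng(0)
W_r = rng.uniform(-1, 1, (Nr, Nr))
W_r *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_r)))
W_in = rng.uniform(-1, 1, (Nr, Ni))
x = np.zeros(Nr)
x = esn_step(x, np.array([1.0, 0.0]), W_r, W_in, alpha=0.8)
```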
2.2 Training ESNs with low-rank perturbation matrices
Training of RNNs is typically implemented by means of stochastic gradient descent or variations thereof
[51]. Learning long-term dependencies in RNNs with gradient descent is known to be problematic, as a
consequence of the so-called vanishing/exploding gradient problem [43]. To this end, different approaches
have been proposed that can be summarised in two categories: (i) methods using gating mechanisms (such
as Long Short-Term Memory [20] and Gated Recurrent Unit [12] networks) and (ii) those based on unitary
matrices and constant-slope activations [3]. On the other hand, training of the recurrent layer in ESNs is
typically realised by perturbing a randomly-initialised reservoir with a low-rank, deterministic matrix. This
is conventionally accomplished by feeding back the ESN output to the recurrent layer [27, 48, 49, 59, 61] or,
as Mastrogiuseppe and Ostojic [35] recently proposed, by designing the reservoir directly as Wr = X + D,
where X is a random matrix and D is a deterministic, low-rank matrix encoding the task of interest.
Figure 1: Left: Illustration of an ESN in the open-loop (training) phase (5). Training is supervised by
means of the target signal y. Right: Illustration of an ESN (7) after the optimisation of the read-out
matrix. Feeding back the output signal into the reservoir gives the closed-loop (trained) system which
self-sustains the dynamics driven by the input signal.
In this paper, we use a supervised learning algorithm which exploits the feedback of the ESN output as
a mechanism for training the recurrent layer. The state-update equation (4) takes the following form:
x[k] = (1 − α)x[k − 1] + αφ(Wr x[k − 1] + Win u[k] + Wfb y[k − 1] + ε),    (5)
where Wf b ∈ RNr ×No is a matrix with i.i.d. random coefficients usually drawn from a uniform or Gaussian
distribution. Depending on whether training is performed batch or online, y[k − 1] in (5) takes the form of
either the target signal or the output produced by the ESN (3), respectively. In batch mode, it is possible to
distinguish two main phases; see Fig. 1 for an illustration. First, the reservoir is fed with an auxiliary input,
i.e. the target signal y. We construct a matrix X ∈ RN ×Nr containing the states x[k] of the ESN generated
in response to target and input signals. Finally, the weights Wo of the read-out are determined by solving a regularised least-squares problem, Wo = ((X⊤X + λ²I)⁻¹ X⊤y)⊤, where I is an Nr × Nr identity matrix and λ ≥ 0 is a regularisation parameter. Subsequently, the target signal is replaced by the generated output
z; this “closed-loop phase” corresponds to the test phase of the trained ESN. An analysis of the stability of
the transition from open- to closed-loop can be found in [49].
Definition 1. We call the trained reservoir the following matrix
M := Wr + Wf b Wo .
(6)
Once the read-out matrix Wo is optimised, by imposing y[k] = z[k] and expanding in (5) with (3), we
obtain:
x[k] = (1 − α)x[k − 1] + αφ(Wr x[k − 1] + Win u[k] + Wfb Wo x[k − 1] + ε)
     = (1 − α)x[k − 1] + αφ(Mx[k − 1] + Win u[k] + ε).    (7)
Remark. The trained reservoir (6) is obtained by adding to the randomly initialised reservoir matrix Wr a matrix Wfb Wo ∈ RNr×Nr which is low-rank (No ≪ Nr). Therefore the reservoir is in some sense trained using output feedback connections.
Inputs u[k] in (7) play the role of control inputs and are typically constant or impulsive signals. The ESN
read-out matrix Wo is conventionally determined by solving a regularised least-squares problem. Nonetheless, we note that online training schemes have also been developed, e.g., the FORCE learning algorithm
originally introduced in [54] and further extended by De Pasquale et al. [13]. During the test phase, regardless of the adopted training mechanism, the state-update of ESNs is described by (7). In this paper,
we consider batch training via ridge regression and analyse the trajectory generated by (7) during the test
phase.
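The batch procedure described above (teacher-forced open-loop pass, ridge regression for the read-out, then closed-loop operation with the trained reservoir M) can be sketched as follows; this is a minimal assumed implementation with our own function and variable names, not the authors' code.

```python
import numpy as np

def train_esn_feedback(W_r, W_in, W_fb, u, y, alpha=1.0, noise_std=1e-4, lam=1e-6, seed=0):
    """Open-loop (teacher-forced) pass, Eq. (5), then ridge regression for W_o and
    construction of the trained reservoir M = W_r + W_fb W_o, Eq. (6)."""
    rng = np.random.default_rng(seed)
    Nr, N = W_r.shape[0], u.shape[0]
    X = np.zeros((N, Nr))
    x = np.zeros(Nr)
    for k in range(N):
        fb = y[k - 1] if k > 0 else np.zeros(y.shape[1])       # target signal fed back
        eps = rng.normal(0.0, noise_std, Nr)
        x = (1 - alpha) * x + alpha * np.tanh(W_r @ x + W_in @ u[k] + W_fb @ fb + eps)
        X[k] = x
    # Ridge regression: W_o = ((X^T X + lam^2 I)^(-1) X^T y)^T
    W_o = np.linalg.solve(X.T @ X + lam**2 * np.eye(Nr), X.T @ y).T
    M = W_r + W_fb @ W_o        # trained reservoir; rank(W_fb @ W_o) <= No << Nr
    return W_o, M

def closed_loop_step(x, u, M, W_in, alpha=1.0):
    """Test-phase update, Eq. (7): the optimised output is fed back through M."""
    return (1 - alpha) * x + alpha * np.tanh(M @ x + W_in @ u)
```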
It is worth noting that our theoretical framework does not rely on a particular training method or a
particular RNN architecture. We note that complicated (trained) RNN models may be described using a
noisy nonautonomous dynamical system, which in our system is represented by (7). We focus on ESNs as
they are the simplest RNN models to test our hypothesis. For this reason, the terms ESN and RNN are
used interchangeably.
2.3 Network attractors
Many common dynamical systems encountered in nature are dissipative [11, 53]. In such systems, the absence
of any conservation law means that typically the system evolves towards an attracting set of dimension strictly
less than the original phase space dimension; such a set (or a particular subset of it) is commonly called an
attractor : formal definitions are discussed for example in [39]. The basin of attraction of an attractor is the
set of all initial conditions from which the system evolves toward the attractor. Attractors convey crucial
information about the behaviour of the dynamical systems which have generated them.
Here, we consider the following noise-free discrete-time dynamical system with inputs:
x[k] = G(x[k − 1], u[k]),    (8)

where G : RNr × RNi −→ RNr is related to (7) as follows:

G(x, u) = ( (1 − α)x1 + αφ(M(1) · x + (Win)(1) · u),
            (1 − α)x2 + αφ(M(2) · x + (Win)(2) · u),
            . . . ,
            (1 − α)xNr + αφ(M(Nr) · x + (Win)(Nr) · u) ),    (9)
where M is the trained reservoir matrix (6), the subscript (i) denotes the ith row of a matrix, and · the usual dot product. As mentioned before, the input signal for the flip-flop task is null most of the time, during which the network is governed by the autonomous dynamics of
x[k] = F(x[k − 1]) = G(x[k − 1], 0).
(10)
Fixed points p ∈ RNr are solutions of F(p) = p. Related to the notion of attractor is the notion of limit set
[11], thus we introduce the following
Definition 2. The ω-limit set of a point x0 under the iterated map (10) is defined by

ωF(x0) := ⋂_{n∈N} cl{F^h(x0) | h > n},    (11)

where cl denotes the topological closure.
Remark. The ω-limit set of a point x0 is the set of limit points of the forward trajectory {Fh (x0 )}h∈N . For
a given fixed point p, its basin of attraction is formed by all points x ∈ RNr such that ωF (x) = {p}. If such
a fixed point is stable, then there exists a neighbourhood of p whose ω-limit set corresponds to this fixed
point.
More complex attractors consisting of networks of invariant sets in phase space have been proposed in the
literature [41, 60]. Such models found renewed interest in neuroscience [38, 58] and other fields of research, as
they provide a fundamental tool to describe dynamic processes occurring on transients that explore excitable
connections. More relevant to our paper, we focus on networks of stable fixed points that are connected by
excitable connections. Following [5, 6], we say that there exists an excitable connection for amplitude δ > 0
from stable fixed point pi to pj whenever
Bδ(pi) ∩ W^s(pj) ≠ ∅,    (12)

where Bδ(pi) stands for the closed ball centred on pi with radius δ > 0 and W^s(pj) = {x ∈ RNr | ωF(x) = {pj}} denotes the basin of attraction¹ of the fixed point pj.
Definition 3. We define the excitability threshold [5] (or just threshold) of the excitable connection from pi to pj, denoted δth(pi, pj), to be the following nonnegative real number:

δth(pi, pj) := inf{δ > 0 : Bδ(pi) ∩ W^s(pj) ≠ ∅}.    (13)
Remark. The quantity (13) can be informally interpreted as the fact that the fixed point pi is δth (pi , pj )
away from the basin of pj ; see Fig. 2 for a visual explanation.
Figure 2: Left: Depiction of an excitable connection from pi to pj . The red area is a closed ball centred
on pi of radius δ. The blue area represents the stable manifold of pj , i.e., its basin of attraction. The open
circle represents a saddle whose stable manifold (blue curves) denotes the boundary of the basin of attraction
of pj . Some points of Bδ (pi ) converge to pi itself, while those points beyond the basin boundary converge
towards pj, as suggested by the black arrows. Right: Representation of the activation of an excitable connection through the action of the input, which accomplishes the switch from the stable point pi to the stable point pj. First, the internal state of the RNN lies near the stable point pi, in the
neighbourhood Br (pi ). Then, a control input u[k + 1] ∈ K drives the current internal state x[k] to the next
state x[k + 1] inside the local input response set, represented as the green subregion. Finally, if the state falls
beyond the basin boundary then the internal state converges to the stable point pj . Excitability threshold
δth (pi , pj ), in (13), is computed along the direction where the distance is shortest in order to escape from the
basin of attraction of pi and converge to pj . Nevertheless, input can potentially drive the dynamics towards
alternative dimensions for the purpose of achieving the switch from pi to pj . The input-driven excitability
threshold δinp (pi , pj ), in (16), is computed considering the subspace exploited by the input to solve the task.
Definition 4. A set Xexc ⊂ RNr is called an excitable network attractor (ENA) for amplitude δ > 0 if there
exists a collection of fixed points {pi}_{i=1}^M, such that

Xexc({pi}_{i=1}^M, δ) := ⋃_{i,j=1, i≠j}^{M} [ ( ⋃_{h≥0} F^h(Bδ(pi)) ) ∩ W^s(pj) ],    (14)

where F(Bδ(pi)) := {F(x) | x ∈ Bδ(pi)} is based on (10) (see [5, 6]).

¹ The basin of attraction corresponds to the stable manifold whenever pj is hyperbolic. This has interior if pj is an attractor.
The autonomous dynamics on such a set converges to one of the stable fixed points; hence, external
inputs are necessary to get interesting dynamics. If the external input is large enough, then the state will
escape from the current basin of attraction and switch to a different one, until another sufficiently large
input leads to another change of basin.
The excitability threshold (13) is defined as the (Euclidean) distance between a given stable fixed point,
pi , and the basin of attraction of another fixed point, say pj . Such a quantity measures the minimum
distance in phase space necessary to escape from the basin of pi and converge towards pj . Nevertheless, if
the dynamical system is high dimensional, then there will be a large number of possible escaping directions
that could be exploited by inputs. Therefore, excitability thresholds (13) alone may not be representative
of nonautonomous systems driven by inputs. For this purpose, in order to take into account the action
of inputs on the dynamics, we introduce the notion of input-driven excitability threshold of an excitable connection. Considering K as a compact subspace of RNi, we define G(x0; K) := ⋃_{u∈K} G(x0, u), where G is the function in (9) defining the nonautonomous dynamical system which describes the trained neural network. The set G(x0; K) contains all states reachable from x0 under the action of input values u ∈ K.
Importantly, the presence of noise has the effect of perturbing the internal state away from the exact location of the stable point of the deterministic counterpart of the dynamics. Therefore, instead of G(pi; K) we rather observe the following set.
Definition 5. We call local input response set of the stable point pi the subset of phase space defined by
G(Br(pi); K) := ⋃_{x∈Br(pi)} G(x; K),    (15)
where the radius r can be shrunk according to the amplitude of the noise and K is a subset of admissible
input values for the task at hand.
Remark. Continuity of G implies that, if r → 0+ , then the monotonically decreasing sequence of sets
{G(Br (pi ); K)}r≥0 converges to G(pi ; K) regardless of K ⊂ RNi . Furthermore, we note that G(Br (pi ); K),
as a subspace of RNr , is compact if K ⊂ RNi is compact.
If K represents all possible inputs of a particular task (for example, in the flip-flop task K = {(0, 0), (1, 0), (−1, 0), (0, 1), (0, −1)}), then we can drop it and denote (15) simply as G(Br(pi)). Therefore, the subregion of the phase space represented by the local input response set G(Br(pi)) encodes the action exercised by the input when the internal state of the RNN is near the stable point pi.
Definition 6. We define the input-driven excitability threshold of the excitable connection from pi to pj, denoted δinp(pi, pj), to be the following nonnegative real number:

δinp(pi, pj) := inf{δ > 0 : G(pi) ∩ Bδ(pi) ∩ W^s(pj) ≠ ∅}.    (16)
Remark. The excitability threshold in (16) has a similar meaning to the one defined in (13), although it
considers only the subspace exploited by the action of inputs near pi, where the excitable connection starts; see Fig. 2. Finally, from the fact that G(pi) ∩ Bδ(pi) ∩ W^s(pj) ⊆ Bδ(pi) ∩ W^s(pj), it follows
that δth (pi , pj ) ≤ δinp (pi , pj ) holds for all pairs of fixed points.
3 Designing low-dimensional ESNs to solve flip-flop tasks
In this section, starting from ESN models we show how to manually design ENAs that realise computations
needed for the flip-flop task. This step allows us to justify the choice of the modelling framework adopted
here and its validity in the context of RNNs. For this purpose, we rely on bifurcation theory; see Appendix
B for technical notions. A bifurcation [28] is a qualitative change in the solution set (phase space topology) on changing a parameter of a dynamical system. On varying the parameters of the model, if such qualitative
changes appear, then we say the system has undergone a bifurcation. For this reason, the notion of bifurcation
plays an important role in training RNNs and in describing their behaviour. Training the recurrent layer
of RNNs corresponds to shaping their phase space topology, possibly inducing bifurcations that lead to a
qualitative change in behaviour. We argue that adding a low-rank perturbation matrix to the reservoir (6)
can induce bifurcations that lead to the creation of attracting regions in ESN phase space useful to solve
the task at hand. Clearly, this can also happen with more sophisticated training algorithms. Let us assume
that the origin is the only global attractor for the autonomous system x[k] = (1 − α)x[k − 1] + αφ(Wr x[k − 1])
associated with (5). Then, the bifurcation induced by (6) leads to a transition from such a trivial dynamic
to one where the origin becomes unstable3 and repels trajectories towards other attracting regions in phase
space, where the nonautonomous dynamics actually takes place.
In Sec. 3.1, we provide a low-dimensional example of an ESN that is able to solve the flip-flop task;
the reservoir of this ESN is formed by two neurons. In Sec. 3.2, we instead design an ESN model with a
reservoir formed by 2k neurons which is able to solve the flip-flop task with k bits. For both examples we
show there is an underlying ENA that explains their dynamics.
3.1 A minimal-dimension example to solve the two-bit flip-flop task
In order to solve the k-bit flip-flop task, one needs to learn 2^k memory states (stable fixed points) and the related switching patterns dictated by control inputs. Here, we show how to design an ESN with two neurons in the recurrent layer, giving rise to an ENA able to solve the flip-flop task with k = 2 bits. According to the analysis of Appendix B, we know that it is possible to obtain, with k neurons, up to 3^k fixed points, 2^k of which are stable, 3^k − 2^k − 1 are saddles, and 1 (the origin) is a repeller. Therefore, we set the trained
reservoir weights (6) according to condition (43). In particular,
M(b) := ωr [ 1  b
             b  1 ],    (17)

with ωr = 3 scaling the reservoir weights, and b acting as a tuning parameter. As shown in Fig. 3, for every 0 ≤ b < 0.47, the autonomous system x[k] = tanh(M(b)x[k − 1]) has four stable attractors located near the vertices of the invariant square [−1, 1]².
The input is injected into the ESN via the following weight matrix, Win = ωin I2 , i.e. a scaled 2 × 2
identity matrix, setting the scaling factor (called a hyperparameter in machine learning) to ωin = 6. Let
us set the remaining parameters in (7) as ε = 0 and α = 1, and let the ESN output (3) be the identity
mapping. Let us assume that, at time k, the system is in state p = (p1 , p2 ), which is close to one of the
four attractors, i.e., p1 ≈ (1, 1), p2 ≈ (−1, 1), p3 ≈ (−1, −1), p4 ≈ (1, −1). The possible, non-null inputs
at time-step k are u[k] ∈ {(1, 0), (0, 1), (−1, 0), (0, −1)}. The action of the input pulse is encoded in a vector
∆ ∈ [−1, 1]², representing the difference between the state before and after the occurrence of such a pulse, whose components are

∆j ≈ tanh(3pj + 6uj) − tanh(3pj),   j = 1, 2,    (18)
considering ωr = 3, ωin = 6 and, for the sake of simplicity, b = 0, which corresponds to the case of two
independent neurons. Hence, the switching mechanism between attractors is ruled by the signs of both pj and uj; the former encodes the position and the latter the direction in which to move. Therefore, the state changes if and only if pj and uj assume different signs for some j ∈ {1, 2}. In fact, if they have the same sign, then (18) is null, because tanh(sgn(pj)[3 + 6]) ≈ tanh(sgn(pj)3) due to the saturating activation functions. Whereas, if they have different signs, then (18) becomes close to −1 or 1, according to sgn(pj), because tanh(sgn(pj)[3 − 6]) = − tanh(sgn(pj)3). Clearly, it is not necessary that |ωr − ωin| = ωr holds, as
in our set up. Although we assumed b = 0 for clarity of explanation, the conclusion is the same for each
b ∈ [0, 0.47). However, when the parameter approaches the bifurcation value b ≈ 0.47, the escaping dynamics
from certain fixed points become significantly slower.
³ The origin could in principle remain stable and other attractors appear elsewhere: an example can be found in [62].
[Figure 3: phase portrait in the (x1, x2) plane; legend: output (1.00, 1.00), output (−1.00, −1.00), output (0.98, −0.98), output (−0.98, 0.98). Caption below.]
Figure 3: Basins of attraction, nullclines, fixed points and corresponding eigenvectors of the linearized two-dimensional system (17), with b = 0.2. Black curves represent the nullclines (see Appendix B), whose mutual intersections determine the fixed points (black squares). Red segments indicate eigenvectors of the linearized system for real eigenvalues larger than one. Blue line segments represent eigenvectors of real eigenvalues in (0, 1). Particularly important are the blue lines of the saddles, which represent local linear approximations of the boundaries of the basins of attraction. The whole phase space is divided into four basins of attraction associated with the four stable points and their boundaries. The boundaries between these basins coincide with the stable manifolds of the saddles. Points on the plane have been coloured according to the attractor to which they converged after 100 steps.
Given a set of stable fixed points, a δ > 0 gives rise to a specific ENA; see definition of ENA in (14).
For instance, a large δ will potentially activate all excitable connections between the attractors. However, in
order to properly solve the flip-flop task, some of the connections must not be active, namely, the diagonal
connections between p1 and p3 , and between p2 and p4 . Due to symmetry, there are only three different
excitability thresholds that an excitable connection can have, that is, δth (p2 , p1 ), δth (p1 , p2 ), δth (p1 , p3 ). In
the particular configuration shown in Fig. 3, it holds that δth (p2 , p1 ) < δth (p1 , p2 ) < δth (p1 , p3 ). Therefore,
the underlying ENA supporting the nonautonomous ESN dynamics is defined as Xexc({pi}_{i=1}^4, δ), for every δth(p1, p2) < δ < δth(p1, p3). We note that δth(p1, p3) ≈ √2, regardless of b ∈ [0, 0.47). We can reduce
the size of the basin of attraction of p2 and p4 by increasing the value of b, which in turn reduces the
excitability threshold of the corresponding connections, until at b ≈ 0.47 a bifurcation occurs making them
disappear4 . However, the basins of attraction of the other two stable points, p1 and p3 , become bigger
(δth(p1, p2) ≈ 2 − δth(p2, p1)), thus increasing their excitability thresholds. Therefore, when the ESN state is close to p1 (or p3), the input needs to be very precise to push the state back into the narrow basins of p2
and p4 . Moreover, it must have a large amplitude compared to what is needed to escape from such narrow
⁴ A bifurcation where both saddles collide with the corresponding stable points p2 and p4, simultaneously annihilating all six fixed points.
basins. Nevertheless, setting ωin large enough (depending on b; a value of ωin = 6 is enough for every 0 < b < 0.47), the system is able to reproduce the flip-flop dynamics without errors.
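A minimal sketch of this two-neuron design (ωr = 3, ωin = 6, b = 0.2, ε = 0, α = 1), with our own function name and an arbitrary pulse sequence, is given below; after each pulse the state settles near one of the four memory states.

```python
import numpy as np

def two_neuron_flipflop(inputs, b=0.2, w_r=3.0, w_in=6.0):
    """Simulate the two-neuron ESN of Sec. 3.1: x[k] = tanh(M(b) x[k-1] + W_in u[k]),
    with M(b) = w_r [[1, b], [b, 1]], W_in = w_in I_2, and the output equal to the state."""
    M = w_r * np.array([[1.0, b], [b, 1.0]])
    W_in = w_in * np.eye(2)
    x = np.array([1.0, 1.0])                 # start near the attractor p1
    states = []
    for u in inputs:
        x = np.tanh(M @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Pulses interleaved with null inputs; the state visits several memory states.
pulses = np.array([(0, 0), (-1, 0), (0, 0), (0, -1), (0, 0), (1, 0), (0, 0)], dtype=float)
print(np.round(two_neuron_flipflop(pulses), 2))
```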
Unfortunately, it does not seem to be possible to design a two-dimensional ESN for the two-bit flip-flop where the excitability of each attractor can be tuned by changing the location of the nearby saddles.
Nevertheless, as we will show in the next section, this becomes possible by including two additional dimensions
in the model.
3.2 A 2k-dimensional model for k-bit flip-flop tasks
In this section, we propose a 2k-dimensional model able to solve k-bit flip-flop tasks. For the sake of clarity,
but without loss of generality, we show the k = 2 bit case. The key insight is to model the switching dynamics
between two fixed points in a two-dimensional space and then build a model formed by two independent
systems with two-dimensional switching dynamics.
Unlike the previous example, here we want to tune the excitability of all connections. This can be
obtained by imposing the condition in (42) and tuning the reservoir weights to change the position of the
saddles. Therefore, we define
B := [ 1.1   4
       −s    4 ],    (19)
where s is a real parameter.
Hence, for 0 ≤ s < 2.15, the autonomous dynamical system defined by x[k] = tanh(Bx[k − 1]) has one repeller (the origin), two attractors, namely p1 ≈ (1, 1), p2 ≈ (−1, −1), and two saddles. By increasing s within the interval [0, 2.15), we can move the saddles closer to the respective stable points (see Fig. 4), hence decreasing the excitability thresholds of these attractors, until a saddle-node bifurcation occurs at approximately s = 2.15, annihilating the attractors together with the saddles.
[Figure 4: phase portrait in the (x1, x2) plane; legend: output (1.00, 0.94), output (−1.00, −0.94). Caption below.]
Figure 4: Basins of attraction, nullclines, fixed points and eigenvectors of the linearized two-dimensional autonomous system having (19) as reservoir matrix, with s = 2. Black curves represent the nullclines (see Appendix B), whose mutual intersections determine fixed points (black squares). Red segments indicate real eigenvectors associated with real eigenvalues larger than one. Blue line segments represent real eigenvectors of real eigenvalues in (0, 1). Points of the plane have been coloured according to the attractor to which they converged after 100 steps.
Let us define the input matrix as follows:

Win := ωin [ 0  0
             1  0 ],

where ωin is a positive real parameter. For instance, setting s = 2 and ωin = 1, the two-dimensional dynamical system defined by x[k] = tanh(Bx[k − 1] + Win u[k]) is able to accomplish the switching mechanism
between p1 and p2 according to inputs (1, 0), (−1, 0), and ignore other inputs, i.e., (0, 1), (0, −1). Of course,
replacing zeros in Win with relatively small values (compared to ωin ) does not change the results.
The complete system able to solve the two-bit flip-flop dynamics can be obtained by defining the reservoir
as the following four-dimensional block diagonal matrix,
M := [ B  0
       0  B ],    (20)
where 0 denotes a 2 × 2 matrix with all zeros. Starting from a given initial condition, for every 0 ≤ s < 2.15,
the state of the four-dimensional autonomous dynamical system x[k] = tanh(Mx[k − 1]) converges to one of these four attractors: (p1, p1), (p1, p2), (p2, p1), (p2, p2). In order to produce a two-dimensional output (3) suitable for the task at hand, we define

Wo := [ 1  0  0  0
        0  0  1  0 ],

which basically corresponds to a projection onto the first and third components, i.e., (x1, x2, x3, x4) ↦ (x1, x3). Finally, we define the input matrix as

Win := ωin [ 0  0
             1  0
             0  0
             0  1 ].
We considered the case of a 4-dimensional system composed of two independent, two-dimensional systems. Nevertheless, the dynamics remains qualitatively the same even if the coupling between these two-dimensional systems is weak, i.e., if the zero matrices in (20) are replaced by matrices with relatively small (absolute) values. With those definitions in mind and, as before, ε = 0 and α = 1, the ESN ruled by (7) and (3) correctly implements the flip-flop task with two bits.
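The construction can be sketched as follows (s = 2, ωin = 1, ε = 0, α = 1); the helper name, the number of settling iterations, and the pulse sequence are our own illustrative choices. After each pulse the autonomous dynamics is iterated so that the state settles on one of the four attractors.

```python
import numpy as np

def build_two_bit_model(s=2.0, w_in=1.0):
    """Assemble the 4-dimensional model of Sec. 3.2: block-diagonal reservoir (20),
    input matrix routing u1 to neuron 2 and u2 to neuron 4, and read-out (x1, x3)."""
    B = np.array([[1.1, 4.0],
                  [-s,  4.0]])
    M = np.block([[B, np.zeros((2, 2))],
                  [np.zeros((2, 2)), B]])
    W_in = w_in * np.array([[0.0, 0.0],
                            [1.0, 0.0],
                            [0.0, 0.0],
                            [0.0, 1.0]])
    W_o = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
    return M, W_in, W_o

M, W_in, W_o = build_two_bit_model()
x = np.ones(4)                                   # start near the attractor (p1, p1)
for pulse in [(-1, 0), (0, -1), (1, 0), (0, 1)]:
    x = np.tanh(M @ x + W_in @ np.array(pulse, dtype=float))
    for _ in range(20):                          # let the autonomous dynamics settle
        x = np.tanh(M @ x)
    print(pulse, np.round(W_o @ x, 2))           # two-bit output read from (x1, x3)
```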
As the dynamics of the systems (x1, x2) and (x3, x4) are independent of each other, the set of fixed points of the overall dynamics turns out to be the Cartesian product of the set of fixed points of the (x1, x2) system and that of the (x3, x4) system. This gives rise to a large number of fixed points: there are 4 stable points, 8 saddles with 1 unstable direction, 8 saddles with 2 unstable directions, 4 saddles with 3 unstable directions
and 1 repeller. However, the set of fixed points where the nonautonomous flip-flop dynamics take place is
formed by 4 stable fixed points and 8 saddles with only one unstable direction; see Sec. 5.1 for a detailed
example. Every stable fixed point is close to a pair of saddles and, due to symmetry, they are all at the
same distance δth (p1 , p2 ). This quantity defines the excitability thresholds of the connections needed in the
flip-flop task, and therefore implicitly defines the ENA ruling the behaviour of the four-dimensional ESN.
4 Extracting ENAs from the ESN trajectory
In this section, we describe the proposed algorithm to extract an ENA from an ESN trajectory. The proposed
algorithm, which is schematically illustrated in Fig. 5, takes the trained reservoir matrix M (6) and a
trajectory of the nonautonomous ESN (7) as input and produces a weighted directed graph, representing
the ENA describing the ESN behaviour, as output. The algorithm is composed of two main steps. In Sec.
4.1, we describe the procedure to find fixed points of the ESN dynamics (corresponding to the vertices of the
graph). Successively, in Sec. 4.2, we show how to determine the excitable connections and related thresholds
between stable fixed points (corresponding to the weighted directed edges).
Figure 5: Illustration of the proposed method to extract ENAs from trajectories.
4.1 Finding fixed points of the dynamics
The optimisation algorithm we have used to find fixed points is based on Sussillo and Barak [55]; see [16]
for an open-source Tensorflow toolbox for finding fixed points in arbitrary RNN architectures and [25] for
an alternative method to identify fixed points. The key idea is to define a scalar function whose minima
correspond to fixed points of the ESN dynamics.
Definition 7. We call velocity field of the autonomous system (10) the vector field Q : RNr −→ RNr ,
defined as
Q(x) := F(x) − x,
(21)
with F being the map in (10).
Therefore, the velocity field takes the following form:
Q(x) = α( tanh(M · x) − x ),    (22)
where Q(x[k]) is the vector that needs to be added to the current state x[k] to obtain the next one. In fact,
x[k + 1] = F(x[k]) ⇐⇒ x[k + 1] − x[k] = F(x[k]) − x[k] ⇐⇒ x[k + 1] = x[k] + Q(x[k]).
(23)
Definition 8. We define kinetic energy of the autonomous system (10) to be the following scalar function,
q(x) := (1/2) ‖Q(x)‖².    (24)
Note that fixed points x∗ ∈ RNr satisfy Q(x∗ ) = 0 if and only if q(x∗ ) = 0. Fixed points are hence
identified as the global minima of (24). We use the quasi-Newton algorithm BFGS [42] to minimise (24).
In order to speed up the optimisation by several orders of magnitude, we explicitly provided the gradient of
(24) to BFGS, which reads:
∇q(x0) = JQ(x0)⊤ Q(x0) = α( D(x0)M − INr )⊤ Q(x0),    (25)
where INr is an Nr × Nr identity matrix, JQ (x0 ) denotes the Jacobian matrix of the velocity field (22) and
D(x0 ) is a diagonal matrix defined in (31) of Appendix A, both evaluated on x0 .
The initial conditions for minimising (24) are determined by randomly sampling states from a trajectory
of the nonautonomous ESN (7). The convergence of the BFGS algorithm depends on a tolerance. In fact, the
algorithm may converge to similar solutions that, depending on the chosen tolerance, are numerically different. As these solutions represent fixed points of the dynamics, we aggregate them in a meaningful way and return a non-redundant set of fixed points. For this purpose, as a post-processing step, we run a clustering algorithm on the set of all solutions and return only cluster representatives; details are provided in Appendix C.
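The search can be sketched as follows, using Eqs. (22), (24) and (25) with SciPy's BFGS routine; the clustering post-processing of Appendix C is omitted, the tolerance is illustrative, and we assume D(x) = diag(1 − tanh²(Mx)), consistent with the Jacobian in (25).

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_points(M, x_samples, alpha=1.0, tol=1e-6):
    """Minimise the kinetic energy q(x) = 0.5 ||Q(x)||^2, Eq. (24), with
    Q(x) = alpha (tanh(Mx) - x), starting from states sampled along a trajectory."""
    Nr = M.shape[0]
    I = np.eye(Nr)

    def q(x):
        Q = alpha * (np.tanh(M @ x) - x)
        return 0.5 * Q @ Q

    def grad_q(x):                                   # gradient (25), passed explicitly to BFGS
        Q = alpha * (np.tanh(M @ x) - x)
        D = np.diag(1.0 - np.tanh(M @ x) ** 2)       # D(x): derivative of tanh
        return (alpha * (D @ M - I)).T @ Q

    candidates = []
    for x0 in x_samples:
        res = minimize(q, x0, jac=grad_q, method="BFGS")
        if res.fun < tol:                            # keep only (numerical) minima with q ~ 0
            candidates.append(res.x)
    return np.array(candidates)
```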
4.2 Determining excitable connections between attractors
Once stable fixed points have been identified, we determine the excitable connections between them. As
discussed before (and in more detail in Appendix B), the ESN training (6) induces bifurcations that generate
new fixed points; and possibly also other attracting regions in phase space that, however, are not explicitly
modelled in this work. The ESN is driven by control inputs that allow us to correctly perform switches
between stable states. As a consequence, we first need to understand how such inputs affect the dynamics
from a geometric point of view. This is done in Sec. 4.2.1 by introducing the notion of local switching
subspace (LSS), which is strictly related to the notion of local input response set, (15), introduced in Sec.
2.3. Then, in Sec. 4.2.2 we describe how we determine all excitable connections that are relevant for the task
under consideration and compute their excitability thresholds. The method we propose is based on a grid
of points lying in the LSS, accounting for the input action on the dynamics. By simulating the autonomous
dynamics with initial conditions taken from such a grid, we are able to approximate input-driven excitability
thresholds (16) and also to quantify how likely it is that the RNN uses such connections while solving the
task.
4.2.1 Local switching subspaces
Definition 9. Let us consider an ESN trajectory (7), x[0], x[1], x[2], . . . , x[k], . . ., obtained with inputs
u[1], u[2], . . . , u[k + 1], . . .. Moreover, let us denote with K := {ki}i∈N the set of indices for which u[ki] ≠ 0. We define a pulse difference vector (PDV) to be a vector containing the difference between pre- and post-input states, namely
x[ki ] − x[ki − 1], ki ∈ K.
(26)
The PDVs (26) convey relevant information about the action of inputs on ESN state and we exploit such
information to determine the excitable connections and related thresholds.
Remark. We remind the reader that the number of non-zero inputs is controlled by the parameter p of the
exponential distribution (see (1) and related discussion in the text), which in turn determines the (average)
number of PDVs available for the following analysis.
In order to compute only those connections that are actually used by the ESN while solving the task, we
need to focus on the action of inputs in the neighbourhood of each attractor. To this end, we consider a Euclidean ball Br(p) of radius r ≥ 0 centred on an attractor p, and call local PDVs of p the finite set of PDVs that originate inside Br(p). Therefore, referring to (15), the local PDVs of p form a collection of vectors originating in Br(p) and ending in G(Br(p)).
Definition 10. Let L(p) be the vector space obtained by means of principal component analysis of the local PDVs (26) of the attractor p, retaining only l ≪ Nr principal components. We define the local switching subspace (LSS) of p, and we denote it as p + L(p), as the affine space obtained by attaching the vector space L(p) to p; see Fig. 6, top panel.
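A sketch of how the local PDVs and the LSS of an attractor p might be computed from a stored trajectory and input sequence is given below; the ball radius r, the retained-variance criterion, and the use of standard (centred) PCA via the SVD are our own assumptions.

```python
import numpy as np

def local_switching_subspace(traj, inputs, p, r=0.1, var_ratio=0.95):
    """Estimate L(p): collect the PDVs (26) originating within B_r(p) and keep the
    leading principal components; the LSS is the affine space p + span(basis)."""
    pulse_idx = np.where(np.any(inputs != 0, axis=1))[0]            # indices k_i with u[k_i] != 0
    pdvs = np.array([traj[k] - traj[k - 1] for k in pulse_idx
                     if k > 0 and np.linalg.norm(traj[k - 1] - p) <= r])
    if len(pdvs) == 0:
        return np.empty((0, traj.shape[1]))
    centred = pdvs - pdvs.mean(axis=0)
    _, svals, vt = np.linalg.svd(centred, full_matrices=False)      # PCA via SVD
    explained = np.cumsum(svals**2) / np.sum(svals**2)
    l = int(np.searchsorted(explained, var_ratio)) + 1              # l << Nr components retained
    return vt[:l]                                                   # rows form a basis of L(p)
```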
4.2.2 Estimation of excitability thresholds
The idea is to sample the LSS of a stable point pi and describe it as a grid of points G(pi ). Then, we consider
those points on the grid as initial conditions for the autonomous system (10), which is then iterated for a
Figure 6: Top: in blue, the low-dimensional space containing the attractors; black arrows represent local
PDVs (26) originating from the attractor pi which in turn define the LSS of pi , sketched in red. Bottom
left: representation of the local input response set G(Br (pi )), in green, and the grid G(pi ), dashed line,
in the LSS of pi , in red. Bottom right: illustration of the partition of a two-dimensional grid G(pi ).
Every colour represents a subset (27). Dashed black lines track the boundary of the basins corresponding to
those attractors reachable from pi through excitable connections, which can be enabled by inputs allowing
exploration of the hypercube centred on pi .
sufficiently large number of steps to ensure convergence to nearby attractors. Finally, tracing the evolution
of these initial conditions allows us to estimate excitability thresholds and other relevant quantities.
The proposed algorithm for finding excitability thresholds is graphically illustrated in Fig. 6. The
algorithm is based on a hypercube H(pi ) centred on attractor pi that is contained within the LSS pi +L(pi ).
The length of the hypercube is such that H(pi ) contains the projection of G(Br (pi )) on pi + L(pi ). In order
to estimate excitability thresholds, we consider a mesh with a pre-defined density of points on the hypercube
H(pi ), thus obtaining a grid of points, G(pi ) = {gji }. To simplify the notation, we use a single index j to
enumerate points of the grid. Through the ω-limit set (11) of the grid G(pi ), that is:
ωF(G(pi)) = ⋃_j ωF(gji),
we can compute the following nonnegative integer c(pi ) := |ωF (G(pi ))| − 1, which counts the number of
excitable connections originating from pi which are allowed to be activated through input. Indeed, ωF (G(pi ))
is composed of a set of stable fixed points, {pi0 , pi1 , pi2 , . . . , pic(pi ) }, determining the endpoints of the c(pi )
different excitable connections.
Remark. Note that if the grid is sufficiently dense then the attractor itself is always included in such a set
of fixed points. However we do not count this as an excitable connection. In what follows, we assume that
the attractor is indexed by i0 , i.e., pi0 = pi .
With these definitions in mind, we are ready to compute thresholds of excitable connections. Let us
denote with
σi : {1, . . . , |G(pi )|} −→ {i0 , i1 , . . . , ic(pi ) },
an indexing function such that pσi (j) = ωF (gji ). Through the preimage σi−1 (·), we obtain a partition of
points of the grid into the following subsets:
G(pi ; pit ) := {gji ∈ G(pi ) | j ∈ σi−1 (it )}, t = 0, 1, . . . , c(pi ).
(27)
The points of the grid belonging to the subset defined by (27) are all destined to converge to pit . Therefore,
we estimate the input-driven excitability threshold (16) of the connection from pi to pit as follows:
δ̃inp(pi, pit) = min{ ‖gji − pi‖ : gji ∈ G(pi; pit) },   t = 1, . . . , c(pi).    (28)
The excitability thresholds (28) represent geometric properties of the attractors and related basins in
phase space learned through training. In order to determine the effective excitability (accounting for inputs)
of each connection outgoing from pi , we exploit the local topology of the LSS of pi by means of the grid
partition (27) induced by σi (·). To this end, we define the ratio of initial conditions taken from the grid that
converged to attractor pit as follows:
νi,it := |G(pi; pit)| / ( |G(pi)| − |G(pi; pi)| ) ∈ [0, 1].    (29)
In the limit of infinite number of points in the grid, the quantity (29) converges to the ratio of volumes
between the portion of the hypercube belonging to the basin of pit and the portion of the hypercube that
does not belong to the basin of pi . Therefore, dense grids give ratios (29) providing accurate information
about how the LSS is distributed between basins of attraction of stable fixed points. Finally, merging both
(28) and (29) into a single expression, we define effective excitability of the connection from pi to pit as
follows:
βi,it := νi,it / δ̃inp(pi, pit).    (30)
Note that this quantity takes into account both the distance between the attractor pi and the basin’s
boundary, and the volume of the basin itself. A low value for βi,it indicates that it is difficult to activate
the connection from pi to pit during the task. This may be due (i) to the small volume occupied by the
basin of the attractor pit in the LSS of pi or (ii) to a very high excitability threshold associated with such
a connection. On the other hand, βi,it ≫ 1 necessarily implies that such a connection has a low excitability
threshold, since νi,it ∈ [0, 1]. As a consequence, the distance between the basin of attraction of pit and pi is
small, thus the connection can be easily activated during the task.
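The estimates (28)–(30) can be sketched as follows, assuming α = 1 and no noise so that the autonomous map is F(x) = tanh(Mx); the edge length, grid density, and iteration horizon play the roles of ζ2, ζ3, and an assumed convergence time, and all names are our own.

```python
import numpy as np
from itertools import product

def estimate_excitability(p_i, fixed_points, basis, M, edge=2.0, n_pts=11, n_iter=200):
    """Grid-based estimates of Eqs. (28)-(30): build a grid on the hypercube centred on
    p_i inside its LSS, iterate F(x) = tanh(Mx), and classify each point by the attractor
    it reaches."""
    zeta1 = basis.shape[0]                                   # dimension of the LSS
    axes = [np.linspace(-edge / 2, edge / 2, n_pts)] * zeta1
    grid = [p_i + np.array(c) @ basis for c in product(*axes)]

    endpoints = []
    for g in grid:
        x = g.copy()
        for _ in range(n_iter):                              # autonomous dynamics, Eq. (10)
            x = np.tanh(M @ x)
        endpoints.append(int(np.argmin(np.linalg.norm(fixed_points - x, axis=1))))
    endpoints, grid = np.array(endpoints), np.array(grid)

    i_self = int(np.argmin(np.linalg.norm(fixed_points - p_i, axis=1)))
    n_escaped = np.sum(endpoints != i_self)                  # denominator of Eq. (29)
    results = {}
    for t in set(endpoints) - {i_self}:
        dists = np.linalg.norm(grid[endpoints == t] - p_i, axis=1)
        delta = dists.min()                                  # threshold estimate, Eq. (28)
        nu = np.sum(endpoints == t) / n_escaped              # volume ratio, Eq. (29)
        results[t] = {"delta_inp": delta, "nu": nu, "beta": nu / delta}   # Eq. (30)
    return results
```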
Remarks on computational complexity. There are three parameters controlling the complexity and,
accordingly, the accuracy of the search in the grid: the dimension ζ1 of the hypercube, the length ζ2 of
the hypercube’s edge, and the number of points ζ3 on each edge determining the density of the grid. ζ3
and ζ2 have a linear and polynomial impact on the computational complexity of the algorithm, respectively.
However, ζ1 is more critical as it increases exponentially the number of points in the grid and, accordingly,
the overall complexity. In the simulations we always set ζ1 = l, that is, the dimension of the hypercube is
equal to the dimension of the LSS which is low in our case. More generally, the dimension of LSSs depends
on the complexity of the inputs and their impact on the dynamics, and this will need to be assessed on a
case-by-case basis.
5 Simulations
In this section we discuss the results of our simulations and relate this to our theoretical framework. In
Sec. 5.1, in order to evaluate the correctness of the algorithm proposed in Sec. 4, we apply it to the
manually-designed, low-dimensional ESN maps discussed in Sec. 3.1 and Sec. 3.2. Sec. 5.2 applies our
method to high-dimensional trained ESNs. We show that, even though the ESN is high-dimensional, the
dynamics generated by the trained reservoir (6) is effectively low-dimensional. We also show the usefulness
of ENAs for giving a mechanistic interpretation of prediction errors occurring during task execution. Finally,
in Sec. 5.3, we show how ENA models can be used to assess the robustness to noise of trained ESNs. For all
simulations, we use p = 0.1 as parameter of the exponential distribution governing the occurrence of input
pulses and set the tolerance of the kinetic energy (24) for detecting minima to 10⁻⁶.
5.1 Evaluation on manually designed low-dimensional ESNs
For the grid search algorithm, we used ζ1 = 2, ζ2 = 4, and ζ3 = 223.
Minimal-dimension model The LSS determined as described in Sec. 4.2.1 corresponds to the whole
2D phase space. As a consequence, excitability thresholds (13) match the corresponding input-dependent
counterparts (16). For each attractor, the Euclidean distances from the two closest saddles are consistent with
the estimation of excitability thresholds provided by our method. As discussed in Sec. 3.1 and graphically
represented in Fig. 7 bottom-centre panel, we find three excitability thresholds in the computed ENA:
0.49, 1.39, and 1.42. In Fig. 7, bottom-right panel, we show that undesired connections (i.e., connections
corresponding to state transitions that are not defined in the flip-flop task) between fixed points (0.97, −0.97)
and (−0.97, 0.97) have very low effective excitability values (30), indicating that such connections are unlikely to be used during the task execution. The low effective excitability is due to the small volume ratio (29)
of the basins associated with the two attractors. On the other hand, larger thresholds characterise the
undesired connections from (−1, −1) to (1, 1), reflecting a lack of symmetry between the basin volumes of
(1, 1), (−1, −1) and (0.97, −0.97), (−0.97, 0.97).
Four-dimensional model The first two principal components obtained via principal component analysis
are related to all identified fixed points and produce a cumulative variance ratio larger than 0.96; Fig. 8
shows a trajectory of the map described in Sec. 3.2 and related fixed points projected on the plane spanned
by the two principal components. For every attractor, the computed LSS is a two-dimensional plane; this is
expected, since it is actually the plane depicted in Fig. 4. Furthermore, we note that none of these planes
is aligned with the one where the attractors lie, stressing the importance of defining reference frameworks
local to each attractor that take the action of inputs into account. Fig. 8, middle-right panel, shows how the
input moves the states out of the plane, using additional dimensions for the switches. The computed ENA,
weighted with excitability thresholds (28) and effective excitability (30), is shown in Fig. 8, bottom-left and
bottom-right panels, respectively. Symmetries of the dynamical system are clearly present in the resulting
directed graphs. All desired connections are characterised by an excitability threshold of ≈ 0.83, while undesired ones have higher thresholds equal to ≈ 1.19. We note that the presence of undesired connections
does not imply that the ESN actually uses such connections during the task execution. To this end, the
effective excitability thresholds (30), shown in the bottom-right panel of Fig. 8, provide us with a more
realistic picture of the behaviour under the action of inputs. Note that the effective excitability thresholds
are very low for the undesired connections, implying that the LSS used by inputs is mostly occupied by
basins corresponding to attractors adjacent to the end-point of desired connections.
5.2
Application of the proposed method to high-dimensional trained ESNs
We now consider an implementation using ESNs (7) with a reservoir composed of 500 neurons. Experiments with different reservoir sizes confirm that ESNs can be successfully trained independently of the precise number of neurons, as long as there are enough of them; we choose 500 neurons for hardware and computing-time considerations.
Figure 7: Top: output produced by the two-dimensional ESN defined in Sec. 3.1 (first and second bits of input and output over time). Bottom left: fixed points found by the optimisation algorithm with 100 initial conditions, plotted in the (x1, x2) plane together with a 1000-step test trajectory; markers distinguish attractors, saddles with one unstable direction, and repellers. Bottom centre: extracted ENA with edges weighted by excitability thresholds (28). Bottom right: extracted ENA with edges weighted according to (30). Node labels represent output values (3) produced on the attractors.
For the grid search algorithm, we use ζ1 = 3, ζ2 = 18, and ζ3 = 12. The state-update (7) is configured without leakage (α = 1), and the standard deviation of the noise is set to 10⁻⁴ during training. The entries of the matrices Win, Wfb, and Wr are drawn i.i.d. from a uniform distribution in [−1, 1]; the sparseness of Wr is 95%. Moreover, the reservoir matrix is rescaled to obtain a spectral radius equal to 0.9. Finally, the training set length is always 50000 time-steps.
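For concreteness, the following is a minimal sketch of the reservoir initialisation just described (uniform weights in [−1, 1], 95% sparse reservoir, spectral radius rescaled to 0.9). The function name and argument defaults are illustrative; the state-update (7) and the ridge-regression read-out are assumed to be implemented elsewhere.

```python
import numpy as np

def init_reservoir(n_res=500, n_in=2, n_out=2, sparseness=0.95,
                   spectral_radius=0.9, seed=0):
    """Sketch of the reservoir initialisation described in the text.
    W_in, W_fb, W_r follow the naming of the state-update (7)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1.0, 1.0, size=(n_res, n_in))
    W_fb = rng.uniform(-1.0, 1.0, size=(n_res, n_out))
    W_r = rng.uniform(-1.0, 1.0, size=(n_res, n_res))
    W_r[rng.random((n_res, n_res)) < sparseness] = 0.0    # 95% sparse reservoir
    rho = np.max(np.abs(np.linalg.eigvals(W_r)))          # current spectral radius
    W_r *= spectral_radius / rho                          # rescale to 0.9
    return W_in, W_fb, W_r
```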
Low-dimensional dynamics Fig. 9, top panel, shows the output produced by an ESN achieving high
prediction performance. The extracted ENA, shown in the bottom-right panel, reveals that undesired connections have excitability thresholds (16) significantly higher than those of desired connections. This means that,
in the LSS of every stable point, basins corresponding to attractors adjacent to the end-point of undesired
connections stand relatively far away from the stable point compared to basins of attractors adjacent to the
end-point of desired connections. The fixed points of the dynamics lie in a two-dimensional subspace of the
500-dimensional phase space (the cumulative variance ratio is close to 1), and we observe that the trajectory spends most of the time close to such a plane. Hence, based on a principal component analysis of the trajectory,
we can claim that the dynamics is two-dimensional. Nevertheless, during the task execution, the trajectory
is occasionally driven away from such a 2D plane by the inputs in order to achieve the switches between
attractors, and this feature is crucial to understand how the trained neural network behaves while solving
the task. The cumulative variance ratio for the identification of the LSS of every attractor, as described
in Sec. 4.2.1, revealed that the switching between stable fixed points takes place in a three-dimensional
subspace of the phase space, highlighting that the overall dynamics of ESNs is effectively low-dimensional
after training. It is worth stressing that such LSSs are usually not aligned with the standard coordinate
system of the original phase space and the subspaces where attractors lie, suggesting that inputs operate in
phase space regions that are disjoint with respect to the low-dimensional linear subspace of the attractors;
this is consistent with results reported in [55].
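The low-dimensionality claim can be checked directly from a collected trajectory; a minimal sketch of the principal component analysis used for this purpose is given below, assuming the reservoir states have been stacked into a T × Nr array (the function name is illustrative).

```python
import numpy as np

def cumulative_variance_ratio(states):
    """PCA of a collected reservoir trajectory (a T x N_r array of states).
    The cumulative explained-variance ratio of the leading components
    indicates how many dimensions the driven dynamics effectively occupies."""
    X = states - states.mean(axis=0)   # centre the trajectory
    s = np.linalg.svd(X, full_matrices=False, compute_uv=False)
    return np.cumsum(s**2) / np.sum(s**2)

# e.g. cumulative_variance_ratio(states)[:3] being close to 1 supports the
# claim that the trajectory is confined to a low-dimensional subspace
```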
Computation accuracy and spurious attractors Various measures of accuracy exist for quantifying
the performance on prediction tasks. For instance, the mean-squared-error (MSE) is typically adopted in
tasks involving continuous targets. The MSE is a real-valued scalar that informs us about how close the
computed output is to the target one. However, it is not possible to infer additional insights by looking only
at the MSE; for instance, it is not possible to provide a mechanistic description of errors in the computation,
i.e., why and how they occur. Here, we show how the effective ENA extracted from the ESN trajectory can
be used to describe how the computation takes place in phase space and, in particular, how to diagnose the
nature of prediction errors.
The top panel in Fig. 10 shows the output produced by an ESN achieving a low MSE, of the order of 10⁻³.
The small errors observable around time-step 13200 can be explained by looking at the ENA model depicted
in the bottom panels of Fig. 10, which is represented with excitability thresholds (28), left panel, and with
effective excitability values (30), right panel. The directed graphs (whose topology is clearly identical) reveal
the presence of two extra stable fixed points in the ESN phase space, which are generated during training.
Nodes of these graphs are coloured according to output values produced by the ESN. The activation of
the related excitable connections brings the ESN to operate in the proximity of such superfluous states,
producing inaccurate output values and hence explaining the origin of such errors. Notably, the directed
edge – in the graph on the right-hand side – connecting the cyan with the yellow node has a relatively high
value of effective excitability. This means that, in the LSS of the related attractor (cyan), whose output is
(−1.03, −1.07), it is relatively easy to transition to the basin of the other attractor (yellow). This, in turn,
produces an output value equal to (−0.84, −0.40), which is significantly different from the target output, i.e.,
(−1, −1). The prediction error, visible to the naked eye in the top panel around time-step 13200, is indeed
due to the activation of that excitable connection.
Both of these spurious attractors act as surrogates for the correct ones (i.e., those producing
lower prediction errors). In fact, once the internal state switches to a spurious attractor, the ESN still behaves
consistently. More precisely, spurious attractors are associated with higher effective excitability values on
connections ending up in the correct attractors – see the bottom-right panel of Fig. 10. The existence
of these spurious attractors and the unravelling of their roles in the ESN computation could not easily be
inferred by looking only at the MSE or plotting the outputs produced by the ESN. This demonstrates the
importance of ENA models for describing the ESN behaviour.
5.3
Noise tolerance and effective excitability of ENAs
Here, we aim to provide a further example of the importance of ENAs for characterising the computation
of ESNs. In particular, we ask the following question: given a set of ESNs trained on the same task and
achieving the same performance during training, which one will be more robust to noise during test? Typical
performance measures, such as MSE, cannot be used to answer such a question and provide useful insights.
We show that an ENA model of the computation, weighted with effective excitability values (30), allows us
to assess robustness to noise of ESNs.
As a perturbation, we consider the usual white Gaussian noise term in the state-update (7), corresponding to perturbations directly applied to all neuron pre-activation values. We stress that such perturbations
are applied only during the test phase; training is implemented as described in the previous section. By
increasing the noise standard deviation, the ESN trajectory gets increasingly perturbed and is prevented from reaching the proximity of the attractors, hence affecting the accuracy of the resulting output values.
We note that: (i) ESNs yielding ENAs with higher effective excitability values are less robust to noise
perturbations than ESNs giving rise to ENAs with low effective excitability values (see [6]), and (ii) ESNs
producing ENAs with balanced edge weights on desired outgoing connections are more robust to noise
perturbations than ENAs with unbalanced outgoing weights. Regarding (i), high excitability values imply the
existence of connections with low excitability thresholds. That is, the basin of the attractor corresponding to
the end-point of such a connection is very close to the attractor associated with the origin of the connection,
resulting in unnecessary switches that induce errors. Concerning (ii), for a given attractor, unbalanced
outgoing connection weights could be a symptom of unbalanced distribution of space between basins of
attraction in the LSS or the existence of some basins significantly closer to the attractor than other basins.
In the latter case, a reasoning similar to (i) applies. On the other hand, if there is indeed an asymmetric
distribution of volumes between basins then basins of attractors corresponding to connections possessing
large volume ratios will fill most of the LSS, leaving little space for basins of those attractors related to
connections with small volume ratios. Therefore, if the ESN state is close to an attractor with unbalanced
outgoing connection weights, but with similar excitability thresholds, then under noise perturbations it is
more likely to switch towards an attractor reachable through a connection with high effective excitability
even when such a connection should not be activated. For these reasons, an ENA, where each node is
characterised by balanced weights on outgoing connections and very low effective excitability values on
undesired connections, provides a prototypical example of ESNs whose behaviour is robust to noise.
To quantify the robustness of this behaviour to noise, we select two ESNs that solve the two-bit flip-flop task with very high and comparable accuracy – the MSE during training is ≈ 10⁻⁴. We test them on the same input series composed of 10⁵ time-steps and record their MSEs under different noise instances
with increasing standard deviation. Directed graphs representing the extracted ENAs are shown in Fig. 11.
Results for increasing noise standard deviation are divided in two columns: on the left column, we report
results obtained by the ESN that is least tolerant to noise. The directed graph on the left-hand side of
Fig. 11 shows the presence of two undesired connections. The edge weights of desired connections are not
balanced, especially those outgoing from attractors with output values (1, 1) and (−1, −1). Conversely, the
directed graph on the right-hand side does not contain undesired connections and possesses balanced outgoing
weights. The absence of undesired connections indicates that, in the LSS of every attractor, the basins of the
attractors corresponding to undesired connections do not exist or, alternatively, they are very small and hence
not detectable with the grid density used in our simulations; therefore, they are not relevant for describing the ESN behaviour at the numerical precision considered here. Finally, we highlight that a performance breakdown for the ESN in the left-hand panels is observed starting from a noise standard deviation of 8 × 10⁻². On the other hand, the ESN in the right-hand panels is significantly more robust to noise, exhibiting a performance breakdown only at a noise standard deviation of 1.4 × 10⁻¹.
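A sketch of this robustness test is given below; the `step` and `readout` callables are placeholders standing in for the state-update (7) with perturbed pre-activations and for the trained read-out, respectively, so the snippet only illustrates the sweep over noise standard deviations.

```python
import numpy as np

def noise_sweep(step, readout, u_test, y_test, x0, noise_stds, seed=0):
    """Run the same trained ESN over the test input for each noise level,
    injecting white Gaussian perturbations at test time, and record the MSE.
    `step(x, u, xi)` and `readout(x)` are placeholders for the actual
    state-update (7) and trained read-out."""
    rng = np.random.default_rng(seed)
    mses = []
    for sigma in noise_stds:
        x, sq_err = x0.copy(), []
        for u, y in zip(u_test, y_test):
            xi = sigma * rng.standard_normal(x.shape)   # test-time perturbation
            x = step(x, u, xi)
            sq_err.append(np.mean((readout(x) - y) ** 2))
        mses.append(np.mean(sq_err))
    return np.array(mses)

# noise standard deviations used in Fig. 11
sweep = [1e-4, 2e-2, 5e-2, 8e-2, 1e-1, 1.2e-1, 1.4e-1, 1.6e-1]
```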
6
Conclusions
In this paper, we present a novel methodology for modeling and interpreting the behaviour of RNNs driven by
inputs. In order to obtain a mechanistic model describing how RNNs solve tasks, we exploit the theoretical
framework offered by excitable network attractors [5], which are defined as networks of stable fixed points
connected by excitable connections. We introduce a procedure to extract excitable network attractors directly
from a trajectory generated by a trained RNN. Such a procedure is composed of two main steps: first, fixed
points are computed by solving a non-linear optimisation problem [55]; subsequently, excitable connections,
with related thresholds, are determined by simulating the dynamics of the autonomous system.
We validate our theoretical developments by considering ESNs trained on the flip-flop task, a simple
yet relevant benchmark that consists of learning a prescribed number of stable states and related switching
patterns guided by control inputs. We cannot see any particular theoretical limitations in the application
of our framework to more complex RNN architectures, although for more complicated dynamical tasks a
detailed computation of bifurcation behaviour may become unfeasible. Simulation results provide several
interesting insights on how RNNs solve tasks and highlight the usefulness of excitable network attractors in describing how RNNs undertake computations. We train echo state networks by means of ridge regression.
Our results (not shown here) suggest that the regularisation parameter has a direct impact on the number of
attracting regions in phase space generated through training: using too low values produces under-regularised
models with a large number of attractors. An interesting future perspective consists of studying the impact
of different training mechanisms (e.g., via FORCE learning or similar online approaches) on the resulting
network attractor.
We believe that the proposed modelling framework based on excitable network attractors will be suitable
to describe the RNN behaviour for many if not all tasks requiring the learning of a number of attractors
(which need not be stable fixed points) and related switching patterns. To this end, in future we will also
examine classification tasks. A related next step includes the possibility to handle inputs that are not
instantaneous pulses, opening the way to more interesting case studies of practical relevance. As mentioned in the introductory sections, network attractors can in principle be constructed between any type of invariant
sets, including limit cycles and strange attractors. Our focus on fixed points was mainly dictated by the
fact that computing fixed points from a trajectory is significantly easier than computing limit cycles, for
instance.
Clearly, fixed points are not powerful enough to accurately model all possible behaviours in phase space.
Therefore, an interesting future perspective consists in extending our modeling framework to handle networks
composed of heterogeneous attractors, such as a network in phase space connecting fixed points and limit
cycles. In turn, this will allow modeling more complex RNN behaviour. Finally, future directions include
embedding the directed graph representing the excitable network attractor extracted from the trajectory in
a new phase space, thus producing a set of ordinary differential equations [4] describing the RNN behaviour
for the task under consideration.
Figure 8: Top: output of the four-dimensional ESN defined in Sec. 3.2 (first and second bits of input and output over time). Centre left: fixed points found by the optimisation algorithm with 200 initial conditions, depicted in the space spanned by the first two principal components; markers distinguish attractors, saddles with one, two, or three unstable directions, and repellers. A trajectory starting on the attractor marked by a black triangle is shown in black; to improve readability, the trajectory is limited to two switches only. Centre right: the same fixed points and trajectory embedded in the space spanned by the first three principal components. Bottom left: ENA with edge weights representing excitability thresholds (28). Bottom right: ENA with edge weights representing the effective excitability of connections (30). Node labels represent output values produced by the ESN on the attractors.
Figure 9: Top: output produced by a trained ESN and target output. Bottom left: fixed points found by means of the optimisation algorithm with 500 initial conditions. The plane in the picture represents the space spanned by the first two principal components determined over all 500 solutions returned by the optimisation algorithm. Bottom right: extracted ENA with edge weights representing excitability thresholds (28). Node labels represent output values (3) produced by the ESN on the attractors.
Figure 10: Top: output of the trained ESN showing an incorrect switch around time-step 13200. Centre left: fixed points found by the optimisation algorithm (attractors and saddles with one or more unstable directions) plotted in their two-dimensional subspace together with the test trajectory. Centre right: attractor plane divided into the basins of attraction of every stable fixed point; the figure is drawn assuming a transient of 300 time-steps, and every basin is coloured depending on the attractor to which the ESN converges without applying any input. Bottom left: extracted ENA weighted with the estimation of input-driven excitability thresholds (28). Bottom right: extracted ENA weighted with effective excitability values (30). Nodes are coloured according to the colours used in the centre-right panel; node outputs are (−1.00, 1.00), (1.00, −1.00), (1.03, 1.07), (−1.03, −1.07), (0.84, 0.40), and (−0.84, −0.40).
Figure 11: Left: results corresponding to a trained ESN giving rise to an ENA with undesired connections and unbalanced outgoing weights (30). Right: results obtained with a trained ESN giving rise to an ENA with balanced outgoing weights for all excitable connections. From top to bottom, in both columns, ESN outputs and related MSEs obtained with increasing noise standard deviation: 10⁻⁴, 2·10⁻², 5·10⁻², 8·10⁻², 10⁻¹, 1.2·10⁻¹, 1.4·10⁻¹, and 1.6·10⁻¹.
Appendix A
Linear stability analysis
By considering the case where the neural network is autonomous, we follow (10) and write the related map
as x[k] = F(x[k − 1]) = G(x[k − 1], 0). The ith component of the ESN state, xi , i = 1, . . . , Nr , evolves
according to the scalar map Fi (x) = (1 − α)xi + α tanh(M(i) · x). By noting that
$$\frac{\partial F_i}{\partial x_j}(\mathbf{x}_0) = (1-\alpha)\,\delta_{ij} + \alpha\left(1-\tanh^2\!\big(\mathbf{M}^{(i)}\cdot\mathbf{x}_0\big)\right) m_{ij},$$
where δij is the Kronecker delta, it is possible to write the Jacobian matrix evaluated at x0 as JF(x0) = (1 − α)I_{Nr} + αD(x0)M, where
$$D(\mathbf{x}_0) = \mathrm{diag}\!\left(1-\tanh^2\!\big(\mathbf{M}^{(1)}\cdot\mathbf{x}_0\big),\; 1-\tanh^2\!\big(\mathbf{M}^{(2)}\cdot\mathbf{x}_0\big),\; \ldots,\; 1-\tanh^2\!\big(\mathbf{M}^{(N_r)}\cdot\mathbf{x}_0\big)\right) \qquad (31)$$
is an Nr × Nr diagonal matrix representing the squashing action of tanh(·) along the saturating components
of x0 . By linearizing the network state-update (7) around a given fixed point x∗ , we obtain the linear system
δx[k + 1] = JF (x∗ )δx[k], where δx[k] = x[k] − x∗ . It is known [53] that if a fixed point x∗ is hyperbolic (i.e.,
JF (x∗ ) has no eigenvalues on the unit circle in the complex plane), then the linear approximation provides
a bona-fide characterisation of the non-linear behaviour around that fixed point. Therefore, the (linear)
stability of x∗ is completely determined by the spectral radius of JF(x∗). If all eigenvalues of JF(x∗) are inside the unit circle, then x∗ is a stable fixed point; on the other hand, if even just one eigenvalue has norm larger than one, then the linearized map is expanding along the corresponding eigenvector and x∗ is called a saddle⁵. We conclude by observing that x∗i = tanh(M(i) · x∗) ⟺ αx∗i = α tanh(M(i) · x∗) ⟺ x∗i = (1 − α)x∗i + α tanh(M(i) · x∗); hence, the number of fixed points and their positions in phase space do not change on varying α ∈ (0, 1]. However, their linear stability properties are directly affected by α.
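A direct numerical rendering of this analysis is sketched below, assuming M denotes the effective recurrent matrix of the autonomous map (10); the eigenvalue count classifies each fixed point as an attractor, a saddle, or a repeller. Function names are illustrative.

```python
import numpy as np

def jacobian(x0, M, alpha=1.0):
    """Jacobian of the autonomous map at x0, following Appendix A:
    J_F(x0) = (1 - alpha) I + alpha D(x0) M, with D(x0) the diagonal
    matrix of Eq. (31)."""
    d = 1.0 - np.tanh(M @ x0) ** 2
    return (1.0 - alpha) * np.eye(len(x0)) + alpha * np.diag(d) @ M

def n_unstable_directions(x_star, M, alpha=1.0):
    """Count eigenvalues outside the unit circle:
    0 -> stable attractor, N_r -> repeller, otherwise a saddle."""
    eig = np.linalg.eigvals(jacobian(x_star, M, alpha))
    return int(np.sum(np.abs(eig) > 1.0))
```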
Appendix B
Bifurcation of fixed points in ESNs
In what follows, we perform a bifurcation analysis of the one-dimensional ESN map and provide some sufficient conditions to design two-dimensional ESNs with a desired number of fixed points. In particular, we focus on fold bifurcations⁶ of low-dimensional ESN maps, which constitute the only codimension-1 bifurcation that generates new fixed points; as discussed by Tiňo et al. [57], it is the usual mechanism for creating new attractive fixed points in RNNs. Moreover, as shown by Beer [7], some fold bifurcations are responsible for reducing the dimensions where the actual dynamics takes place, dividing the RNN parameter space into different regions of effective dimensionality.
B.1
ESN maps in one dimension
Here, we consider the following one-dimensional map,
$$x[k] = \tanh(m\,x[k-1] + w), \qquad (32)$$
where (m, w) ∈ R² are the bifurcation parameters; we simplify the state-update in (7) by setting the leak rate α = 1 and the noise term ε = 0, and by removing explicit reference to inputs. Nevertheless, studying the map (32) is still useful to
⁵ If every eigenvalue has norm greater than one, then the fixed point is a repeller.
⁶ We refer to Kuznetsov [28] for the terminology; the fold bifurcation is also known as saddle-node, limit point, or turning point bifurcation.
obtain insights about high-dimensional input-driven ESN dynamics. In fact, in the high-dimensional case
(7) the activation function of the jth neuron is determined by
$$x_j[k] = \tanh\Big( m_{jj}\, x_j[k-1] + \sum_{s \neq j} m_{js}\, x_s[k-1] + (\mathbf{W}_{in})^{(j)} \cdot \mathbf{u}[k] + \varepsilon_j \Big). \qquad (33)$$
Hence, the parameter w in (32) can be interpreted as the weighted sum of all incoming neurons, plus input
and noise terms. Fixed points of the map F (x) = tanh(mx + w) are the solutions x∗ of the equation
Q(x∗ ) = 0, where
Q(x) := F (x) − x = tanh(mx + w) − x.
(34)
It is not possible to find a closed-form expression for the fixed points. However, it is possible to state that:
(i) for every (m, w) ∈ R2 , there exists at least one fixed point; (ii) there can be one, two, or three fixed
points. The proof of these statements follows straightforwardly from the fact that limx→±∞ Q(x) = ∓∞ and because, if m < 1, then Q is monotonic. Otherwise, there exist two critical points, xl < xr, namely
$$x_{l,r} = \frac{1}{m}\left[\pm \tanh^{-1}\!\left(\sqrt{\frac{m-1}{m}}\right) - w\right], \qquad (35)$$
such that the function Q(x) folds. Moreover, since Q(x∗) = 0 ⟺ x∗ = tanh(mx∗ + w) ∈ (−1, 1) for every (m, w) ∈ R², all fixed points lie in the open interval (−1, 1). The critical points in (35) are the solutions to Q′(x) = 0, or equivalently to F′(x) = 1.
Let us assume m > 0. We observe a fold bifurcation whenever a critical point assumes the zero value.
The condition Q(x_{l,r}) = 0 gives rise to the following parametrization of the fold bifurcation curve,
$$w_{\pm}(m) := \pm\left[ m\sqrt{\frac{m-1}{m}} - \tanh^{-1}\!\left(\sqrt{\frac{m-1}{m}}\right)\right], \qquad m \in [1, +\infty), \qquad (36)$$
which possesses two symmetric branches ending on a cusp; see Fig. 12 for an illustration. Crossing that
curve in parameter space towards the region containing the semi-axis of m > 1, a new fixed point is formed (xl or xr, depending on the branch) and splits into a pair of fixed points, one stable and one unstable⁷. This
curve delimits the boundary between a dynamic regime where two stable points coexist with an unstable
one, and a regime where only one fixed point exists. Due to symmetry, in the m < 0 case, there are two
bifurcation branches identifying a flip or period doubling bifurcation, which is analytically described by the
curve w± (−m) with m ≤ −1. Crossing that curve towards the region containing the semi-axis of m < −1, a
stable fixed point loses stability and gives rise to a 2-periodic attracting trajectory surrounding it. We note
that the flip bifurcation is detrimental for the flip-flop task considered here, as it gives periodic rather than fixed-point attractors.
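The quantities derived in this subsection can be evaluated numerically; the sketch below implements the critical points (35), the fold curve (36), and a simple bracketing search for the fixed points of (32), assuming SciPy is available (function names are illustrative).

```python
import numpy as np
from scipy.optimize import brentq

def critical_points(m, w):
    """Critical points (35) of Q(x) = tanh(m x + w) - x, defined for m > 1."""
    r = np.sqrt((m - 1.0) / m)
    return (np.array([-1.0, 1.0]) * np.arctanh(r) - w) / m

def fold_curve(m):
    """The two branches w_-(m), w_+(m) of the fold bifurcation curve (36)."""
    r = np.sqrt((m - 1.0) / m)
    return np.array([-1.0, 1.0]) * (m * r - np.arctanh(r))

def fixed_points(m, w, n_grid=2001):
    """Locate the (at most three) fixed points of the map (32) by bracketing
    simple sign changes of Q over (-1, 1); tangent roots exactly at a fold
    are not resolved by this simple sketch."""
    Q = lambda x: np.tanh(m * x + w) - x
    grid = np.linspace(-1.0 + 1e-9, 1.0 - 1e-9, n_grid)
    vals = Q(grid)
    return [brentq(Q, a, b) for a, b, fa, fb in
            zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]

# e.g. for m = 2.5 the upper fold branch is fold_curve(2.5)[1] (approx. 0.9,
# cf. Fig. 12); any |w| below that value yields three fixed points
```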
B.2
ESN maps in two dimensions
Here, we consider a two-neuron reservoir. We denote the activation functions of these neurons as x and y,
so that the ESN state evolution defines a trajectory (x[k], y[k]) ∈ [−1, 1]2 , k = 1, 2, ..., ruled by:
x[k] = tanh(ax[k − 1] + by[k − 1]) ,
y[k] = tanh(cx[k − 1] + dy[k − 1]) .
(37)
It is known that a fixed point can undergo only fold or flip bifurcations in one-dimensional, discrete-time dynamical systems [28]. In two or more dimensions, however, Neimark-Sacker bifurcations can also occur, where a stable point loses stability and an invariant curve surrounding it is created. These are the only possible codimension-1 bifurcations of a fixed point for a discrete-time dynamical system. Among
⁷ The particular case of crossing that curve through the cusp gives rise to a pitchfork bifurcation of the origin.
Figure 12: Left: three different folding configurations of the function Q(x) in (34) with m = 2.5; purple w = 1.5, black w ≈ 0.9, green w = 0.5. Right: bifurcation diagram of fixed points in the one-dimensional ESN.
them, only the fold bifurcation can generate new fixed points. The origin is always a fixed point of (10). Considering (37), the Jacobian matrix evaluated at the origin reads
$$\mathbf{W}_r = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
Therefore, training ESNs via (6) implies a qualitative change of the dynamics around the origin if some eigenvalue crosses the unit circle, i.e., the fixed point (0, 0) bifurcates. However, adding a low-rank matrix to the reservoir can induce global bifurcations far away from the origin as well; hence, we cannot rely on the local bifurcation of the origin to deduce the global attractor structure after training. As a counterexample, suppose λ₁, λ₂ > 1. The diagonal matrix diag(λ₁, λ₂) and the upper triangular matrix
$$\begin{pmatrix} \lambda_1 & \gamma \\ 0 & \lambda_2 \end{pmatrix}$$
share the same spectrum {λ₁, λ₂}. Nevertheless, if the coupling is strong enough, that is, |γ| > w₊(λ₁)/y∗, where w₊(λ₁) is the function (36) evaluated on λ₁ and y∗ is the positive stable solution of y[k] = tanh(λ₂ y[k − 1]), then the fold bifurcation curve is crossed, making the dynamics of the x variable trivial. As a consequence, the upper triangular matrix induces dynamics with just two attractors (plus two saddles, the origin being a repeller), while the diagonal matrix induces four attractors (plus four saddles and the repeller).
In order to count the number of fixed points of the map (37), we can draw its nullclines. Defining the function N_{α,β}(η) := (1/β)[−αη + tanh⁻¹(η)], the nullclines are given by
$$y = N_{a,b}(x), \qquad x = N_{d,c}(y), \qquad (38)$$
and they represent the loci of points where the x-dynamics / y-dynamics are stationary. As a consequence, the solutions of the algebraic nonlinear system (38) coincide with the set of fixed points. In the last part of this
subsection, we show some sufficient conditions to control the number of fixed points assuming a, d > 1, i.e.,
when both nullclines are folding. We refer to Fig. 13 for a graphical illustration.
• Case bc ≥ 0.
$$|N'_{a,b}(0)| < |N'_{d,c}(0)|^{-1}, \text{ i.e. } (1-a)(1-d) < bc \;\Longrightarrow\; \text{there are exactly 3 fixed points.} \qquad (39)$$
On the other hand, when (1 − a)(1 − d) ≥ bc, there could exist 5, 7 or 9 fixed points. The critical points of the function N_{α,β}(η) are η_± = ±√((α−1)/α). Therefore, if (1 − a)(1 − d) ≥ bc holds, then
$$N_{a,b}\!\left(\sqrt{\tfrac{a-1}{a}}\right) < \sqrt{\tfrac{d-1}{d}} \;\text{ or }\; N_{d,c}\!\left(\sqrt{\tfrac{d-1}{d}}\right) < \sqrt{\tfrac{a-1}{a}} \;\Longrightarrow\; \text{there are exactly 5 fixed points.} \qquad (40)$$
Fig. 13 depicts a nullcline configuration where there are 5 intersections⁸. From that configuration, we can make the humps of y = N_{a,b}(x) more pronounced by increasing the a/b ratio, until a fold bifurcation occurs, giving rise to a new couple of pairs of fixed points. The case of 7 fixed points is obtained at the fold bifurcation, where the newly created fixed points are neutral (Fig. 13, centre-left panel).
• Case bc < 0.
$$N_{a,b}\!\left(\sqrt{\tfrac{a-1}{a}}\right) < \sqrt{\tfrac{d-1}{d}} \;\text{ and }\; N_{d,c}\!\left(\sqrt{\tfrac{d-1}{d}}\right) < \sqrt{\tfrac{a-1}{a}} \;\Longrightarrow\; \text{there is exactly 1 fixed point.} \qquad (41)$$
From the above configuration, increasing the d/c ratio stretches the humps of N_{d,c} until a fold bifurcation occurs, producing exactly three fixed points. Beyond the fold bifurcation, a new pair of fixed points is generated.
$$N_{a,b}\!\left(\sqrt{\tfrac{a-1}{a}}\right) < \sqrt{\tfrac{d-1}{d}} \;\text{ and }\; N_{d,c}\!\left(\sqrt{\tfrac{d-1}{d}}\right) > 1 \;\Longrightarrow\; \text{there are exactly 5 fixed points.}^{9} \qquad (42)$$
In both cases, a simple sufficient condition to ensure the existence of 9 fixed points, see Fig. 13, is
$$N_{a,b}\!\left(\sqrt{\tfrac{a-1}{a}}\right) > 1 \;\text{ and }\; N_{d,c}\!\left(\sqrt{\tfrac{d-1}{d}}\right) > 1 \;\Longrightarrow\; \text{there are exactly 9 fixed points.} \qquad (43)$$
The maximum number of nullcline intersections is 9, setting the maximum number of fixed points that can
be generated in a two-dimensional ESN map.
⁸ Note that it holds N_{α,β}(±√((α−1)/α)) = −(1/β) w_±(α).
⁹ Due to symmetry, this condition holds also when inverting the roles of N_{d,c}, N_{a,b} and, consequently, of (a, b), (d, c).
Figure 13: In all panels, filled points denote stable attractors, while circles denote saddles or repellers. Top left: geometric representation of the sufficient condition (39); the nullcline slopes at the origin are highlighted with different colours. Top right: illustration of condition (40); the black dashed line acts as an upper bound ensuring that the maximum height of the peak of the x-nullcline (represented by the green dashed line) does not cross the y-nullcline again. Centre left: special case of 7 fixed points; crosses indicate neutral fixed points where fold bifurcations take place. Centre right: an example of a nullcline configuration satisfying condition (43) in the case bc > 0; both curves extend beyond the invariant square [−1, 1]² with their humps, which guarantees 9 intersections, i.e., 9 fixed points. Bottom left: depiction of the sufficient condition (41); the origin is the only fixed point and it is unstable. Bottom right: illustration of the sufficient condition (42).
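As a numerical complement to these conditions, the following sketch counts the fixed points of (37) by seeding a root finder over a grid of initial conditions, in the same spirit as the fixed-point search used in the main text; SciPy is assumed to be available and the function name is illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

def find_fixed_points_2d(a, b, c, d, n_seeds=15, tol=1e-8):
    """Locate fixed points of the two-neuron map (37) by running a root
    finder from a grid of initial conditions in (-1, 1)^2 and removing
    duplicates; the count can be checked against conditions (39)-(43)."""
    F = lambda z: [np.tanh(a * z[0] + b * z[1]) - z[0],
                   np.tanh(c * z[0] + d * z[1]) - z[1]]
    found = []
    for x0 in np.linspace(-0.99, 0.99, n_seeds):
        for y0 in np.linspace(-0.99, 0.99, n_seeds):
            z, _, ok, _ = fsolve(F, [x0, y0], full_output=True)
            if ok == 1 and max(abs(v) for v in F(z)) < tol:
                if all(np.linalg.norm(z - p) > 1e-5 for p in found):
                    found.append(z)
    return found

# a diagonal reservoir with a = d = 2 and b = c = 0 yields 9 fixed points,
# matching the counterexample discussed above
print(len(find_fixed_points_2d(2.0, 0.0, 0.0, 2.0)))
```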
Appendix C
Aggregation of fixed points via clustering
The optimization procedure described in Sec. 4.1 provides a number of solutions equal to the number
of initial conditions taken into account (assuming all initial conditions converge); however, such solutions
might be similar up to a prescribed numerical precision. In order to reduce the set of all solutions to a few
effective fixed points, we run the k-means clustering algorithm and retain only the elements of the final clusters that minimize the distance w.r.t. the cluster centroids. Instead of aggregating all solutions at once, we first group them according to their linear stability properties, as indicated by the spectrum of the Jacobian matrix of the autonomous map (10); in particular, we form the group of linearly stable fixed points, the group with one unstable direction, and so on. Finally, for each group, we run k-means with the parameter k selected according to the minimum of the Davies-Bouldin index [2].
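One possible reading of this aggregation step is sketched below, assuming the candidate solutions of a given stability group are stacked into an (n, Nr) array; the scikit-learn implementations of k-means and of the Davies-Bouldin index are used, and the function name is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def aggregate_solutions(solutions, k_max=10, seed=0):
    """Cluster near-duplicate fixed-point estimates with k-means, choosing k
    by the minimum Davies-Bouldin index, and keep the element of each
    cluster closest to its centroid (grouping by stability is assumed
    to have been done beforehand)."""
    best_k, best_score, best_model = None, np.inf, None
    for k in range(2, min(k_max, len(solutions) - 1) + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(solutions)
        score = davies_bouldin_score(solutions, km.labels_)
        if score < best_score:
            best_k, best_score, best_model = k, score, km
    if best_model is None:          # too few candidates to cluster
        return solutions
    representatives = []
    for j in range(best_k):
        members = solutions[best_model.labels_ == j]
        dists = np.linalg.norm(members - best_model.cluster_centers_[j], axis=1)
        representatives.append(members[np.argmin(dists)])
    return np.array(representatives)
```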
Compliance with Ethical Standards
Conflict of Interest Andrea Ceni, Peter Ashwin, and Lorenzo Livi declare that they have no conflict of
interest.
Funding We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan
Xp GPU used for this research. LL gratefully acknowledges partial support of the Canada Research Chairs
program. PA gratefully acknowledges partial support of the EPSRC Centre for Predictive Modelling in
Healthcare via grant EP/N014391/1.
Human and Animal Rights This article does not contain any studies with human participants or animals
performed by any of the authors.
References
[1] J. Aljadeff, D. Renfrew, M. Vegué, and T. O. Sharpee. Low-dimensional dynamics of structured random networks. Physical
Review E, 93(2):022302, 2016. doi: 10.1103/PhysRevE.93.022302.
[2] O. Arbelaitz, I. Gurrutxaga, J. Muguerza, J. M. Pérez, and I. Perona. An extensive comparative study of cluster validity
indices. Pattern Recognition, 46(1):243–256, 2013. doi: 10.1016/j.patcog.2012.07.021.
[3] M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. In International Conference on
Machine Learning, pages 1120–1128, New York, USA, June 2016.
[4] P. Ashwin and C. Postlethwaite. On designing heteroclinic networks from graphs. Physica D: Nonlinear Phenomena, 265:
26–39, 2013. doi: 10.1016/j.physd.2013.09.006.
[5] P. Ashwin and C. Postlethwaite. Designing heteroclinic and excitable networks in phase space using two populations of
coupled cells. Journal of Nonlinear Science, 26(2):345–364, 2016. doi: 10.1007/s00332-015-9277-2.
[6] P. Ashwin and C. Postlethwaite. Sensitive finite state computations using a distributed network with a noisy network
attractor. IEEE Transactions on Neural Networks and Learning Systems, pages 1–12, 2018. doi: 10.1109/TNNLS.2018.
2813404.
[7] R. D. Beer. Parameter space structure of continuous-time recurrent neural networks. Neural Computation, 18(12):3009–
3051, 2006. doi: 10.1162/neco.2006.18.12.3009.
[8] F. M. Bianchi, S. Scardapane, A. Uncini, A. Rizzi, and A. Sadeghian. Prediction of telephone calls load using echo state
network with exogenous variables. Neural Networks, 71:204–213, 2015. doi: 10.1016/j.neunet.2015.08.010.
[9] F. M. Bianchi, L. Livi, and C. Alippi. Investigating echo state networks dynamics by means of recurrence analysis. IEEE
Transactions on Neural Networks and Learning Systems, 29(2):427–439, 2018. doi: 10.1109/TNNLS.2016.2630802.
[10] D. Castelvecchi. Can we open the black box of AI? Nature News, 538(7623):20, 2016.
[11] M. Cencini, F. Cecconi, and A. Vulpiani. Chaos: From Simple Models to Complex Systems. World Scientific, Singapore,
2010.
[12] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence
modeling. arXiv preprint arXiv:1412.3555, 2014.
[13] B. De Pasquale, C. J. Cueva, K. Rajan, G. S. Escola, and L. F. Abbott. full-FORCE: A target-based method for training
recurrent networks. PLoS ONE, 13(2):1–18, 2 2018. doi: 10.1371/journal.pone.0191527.
[14] K. Funahashi and Y. Nakamura. Approximation of dynamical systems by continuous time recurrent neural networks.
Neural Networks, 6(6):801–806, 1993.
[15] C. Gallicchio and A. Micheli. Echo state property of deep reservoir computing networks. Cognitive Computation, 9(3):
337–350, Jun. 2017. ISSN 1866-9964. doi: 10.1007/s12559-017-9461-9.
[16] M. D. Golub and D. Sussillo. Fixedpointfinder: A tensorflow toolbox for identifying and characterizing fixed points in
recurrent neural networks. The Journal of Open Source Software, 3:1003, 2018. doi: 10.21105/joss.01003.
[17] B. Goodman and S. Flaxman. European union regulations on algorithmic decision-making and a “right to explanation”.
arXiv preprint arXiv:1606.08813, 2016.
[18] A. Graves, A.-R. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Proceedings of
IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649, Vancouver, BC, Canada,
May 2013. IEEE.
[19] B. Hammer. On the approximation capability of recurrent neural networks. Neurocomputing, 31(1):107–123, 2000. doi:
10.1016/S0925-2312(99)00174-5.
[20] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[21] H. Jaeger and H. Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication.
Science, 304(5667):78–80, 2004. doi: 10.1126/science.1091277.
[22] H. Jaeger, M. Lukoševičius, D. Popovici, and U. Siewert. Optimization and applications of echo state networks with
leaky-integrator neurons. Neural Networks, 20(3):335–352, 2007. doi: 10.1016/j.neunet.2007.04.016.
[23] S. Kanai, Y. Fujiwara, and S. Iwamura. Preventing gradient explosions in gated recurrent units. In Advances in Neural
Information Processing Systems, pages 435–444, 2017.
[24] A. Karpatne, G. Atluri, J. H. Faghmous, M. Steinbach, A. Banerjee, A. Ganguly, S. Shekhar, N. Samatova, and V. Kumar.
Theory-guided data science: A new paradigm for scientific discovery from data. IEEE Transactions on Knowledge and
Data Engineering, 29(10):2318–2331, Oct. 2017. ISSN 1041-4347. doi: 10.1109/TKDE.2017.2720168.
[25] G. E. Katz and J. A. Reggia. Using directional fibers to locate fixed points of recurrent neural networks. IEEE transactions
on neural networks and learning systems, 29(8):3636–3646, 2018. doi: 10.1109/TNNLS.2017.2733544.
[26] L. Keuninckx, J. Danckaert, and G. Van der Sande. Real-time audio processing with a cascade of discrete-time delay
line-based reservoir computers. Cognitive Computation, 9(3):315–326, 2017. doi: 10.1007/s12559-017-9457-5.
[27] D. Koryakin, J. Lohmann, and M. V. Butz. Balanced echo state networks. Neural Networks, 36:35–45, 2012. doi:
10.1016/j.neunet.2012.08.008.
[28] Y. A. Kuznetsov. Elements of Applied Bifurcation Theory, volume 112. Springer Science & Business Media, Berlin,
Germany, 2013.
[29] L. Livi, F. M. Bianchi, and C. Alippi. Determination of the edge of criticality in echo state networks through Fisher
information maximization. IEEE Transactions on Neural Networks and Learning Systems, 29(3):706–717, Mar. 2018. doi:
10.1109/TNNLS.2016.2644268.
[30] S. Løkse, F. M. Bianchi, and R. Jenssen. Training echo state networks with regularization through dimensionality reduction.
Cognitive Computation, 9(3):364–378, 2017. doi: 10.1007/s12559-017-9450-z.
[31] M. Lukoševičius. A Practical Guide to Applying Echo State Networks, pages 659–686. Springer Berlin Heidelberg, Berlin,
Heidelberg, 2012. doi: 10.1007/978-3-642-35289-8 36.
[32] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science
Review, 3(3):127–149, 2009. doi: 10.1016/j.cosrev.2009.03.005.
[33] W. Maass, P. Joshi, and E. D. Sontag. Computational aspects of feedback in neural circuits. PLoS Computational Biology,
3(1):e165, 2007. doi: 10.1371/journal.pcbi.0020165.eor.
[34] G. Manjunath and H. Jaeger. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent
neural networks. Neural Computation, 25(3):671–696, 2013. doi: 10.1162/NECO a 00411.
[35] F. Mastrogiuseppe and S. Ostojic. Linking connectivity, dynamics, and computations in low-rank recurrent neural networks.
Neuron, 2018. ISSN 0896-6273. doi: 10.1016/j.neuron.2018.07.003.
[36] N. M. Mayer and Y.-H. Yu. Orthogonal echo state networks and stochastic evaluations of likelihoods. Cognitive Computation, 9(3):379–390, 2017. doi: 10.1007/s12559-017-9466-4.
[37] K. D. Miller and F. Fumarola. Mathematical equivalence of two common forms of firing rate models of neural networks.
Neural computation, 24(1):25–31, 2012.
[38] P. Miller. Itinerancy between attractor states in neural systems. Current Opinion in Neurobiology, 40:14–22, 2016. doi:
10.1016/j.conb.2016.05.005.
[39] J. Milnor. On the concept of attractor. In The Theory of Chaotic Attractors, pages 243–264. Springer, 1985.
[40] G. Montavon, W. Samek, and K.-R. Müller. Methods for interpreting and understanding deep neural networks. Digital
Signal Processing, 73:1–15, Feb. 2017. doi: 10.1016/j.dsp.2017.10.011.
[41] F. S. Neves, M. Voit, and M. Timme. Noise-constrained switching times for heteroclinic computing. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(3):033107, 2017. doi: 10.1063/1.4977552.
[42] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, Berlin, Germany, 2006.
[43] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th
International Conference on Machine Learning, volume 28, pages 1310–1318, Atlanta, Georgia, USA, 2013.
[44] V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. Dropout improves recurrent neural networks for handwriting
recognition. In 14th International Conference on Frontiers in Handwriting Recognition, pages 285–290, Crete Island,
Greece, Sep. 2014. doi: 10.1109/ICFHR.2014.55.
[45] M. Rabinovich, A. Volkovskii, P. Lecanda, R. Huerta, H. Abarbanel, and G. Laurent. Dynamical encoding by networks of
competing neuron groups: winnerless competition. Physical Review Letters, 87(6):068102, 2001.
[46] M. Rabinovich, R. Huerta, and G. Laurent. Transient dynamics for neural processing. Science, 321(5885):48–50, 2008.
doi: 10.1126/science.1155564.
[47] K. Rajan, L. F. Abbott, and H. Sompolinsky. Stimulus-dependent suppression of chaos in recurrent neural networks.
Physical Review E, 82(1):011903, 2010. doi: 10.1103/PhysRevE.82.011903.
[48] R. F. Reinhart and J. J. Steil. Regularization and stability in reservoir networks with output feedback. Neurocomputing,
90:96–105, 2012. doi: 10.1016/j.neucom.2012.01.032.
[49] A. Rivkind and O. Barak. Local dynamics in trained recurrent neural networks. Physical Review Letters, 118:258101, Jun.
2017. doi: 10.1103/PhysRevLett.118.258101.
[50] A. Rodan and P. Tiňo. Simple deterministically constructed cycle reservoirs with regular jumps. Neural Computation, 24
(7):1822–1852, 2012. doi: 10.1162/NECO a 00297.
[51] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
[52] S. Scardapane and A. Uncini. Semi-supervised echo state networks for audio classification. Cognitive Computation, 9(1):
125–135, 2017. doi: 10.1007/s12559-016-9439-z.
[53] S. H. Strogatz. Nonlinear Dynamics and Chaos. Hachette, UK, 2014.
[54] D. Sussillo and L. F. Abbott. Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4):
544–557, 2009. doi: 10.1016/j.neuron.2009.07.018.
[55] D. Sussillo and O. Barak. Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
Neural Computation, 25(3):626–649, 2013. doi: 10.1162/NECO a 00409.
[56] C. Tallec and Y. Ollivier. Can recurrent neural networks warp time? In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SJcKhk-Ab.
[57] P. Tiňo, B. G. Horne, and C. L. Giles. Attractive periodic sets in discrete-time recurrent networks (with emphasis on
fixed-point stability and bifurcations in two-neuron networks). Neural Computation, 13(6):1379–1414, 2001.
[58] I. Tsuda. Chaotic itinerancy and its roles in cognitive neurodynamics. Current Opinion in Neurobiology, 31:67–71, 2015.
doi: 10.1016/j.conb.2014.08.011.
[59] P. Vincent-Lamarre, G. Lajoie, and J.-P. Thivierge. Driving reservoir models with oscillations: a solution to the extreme
structural sensitivity of chaotic networks. Journal of Computational Neuroscience, pages 1–18, 2016. doi: 10.1007/
s10827-016-0619-3.
[60] O. Weinberger and P. Ashwin. From coupled networks of systems to networks of states in phase space. Discrete &
Continuous Dynamical Systems - B, 23:2043, 2018. ISSN 1531-3492. doi: 10.3934/dcdsb.2018193.
[61] F. wyffels, J. Li, T. Waegeman, B. Schrauwen, and H. Jaeger. Frequency modulation of large oscillatory neural networks.
Biological Cybernetics, 108(2):145–157, 2014. doi: 10.1007/s00422-013-0584-0.
[62] I. B. Yildiz, H. Jaeger, and S. J. Kiebel. Re-visiting the echo state property. Neural Networks, 35:1–9, 2012. doi:
10.1016/j.neunet.2012.07.005.