1 Introduction
Traditional process monitoring techniques use statistical measures such as cu-
mulative sum (CUSUM) and exponentially weighted moving average (EWMA)
over a time window [1] to detect changes in the underlying distribution. The
length of this time window generally needs to be pre-determined and the re-
sults greatly depend on this parameter. LSTM neural networks [2] overcome the
vanishing gradient problem experienced by recurrent neural networks (RNNs)
by employing multiplicative gates that enforce constant error flow through the
internal states of special units called ‘memory cells’. The input (IG), output
(OG), and forget (FG) gates prevent memory contents from being perturbed
by irrelevant inputs and outputs (see Fig. 1(a)), thereby allowing for long-term
memory storage. Because of this ability to learn long-term correlations in
a sequence, LSTM networks obviate the need for a pre-specified time window
and are capable of accurately modelling complex multivariate sequences. In this
paper, we demonstrate that by modelling the normal behaviour of a time series
via stacked LSTM networks, we can accurately detect deviations from normal
behaviour without any pre-specified context window or preprocessing.
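For reference, a commonly used formulation of the gated memory-cell update (with forget gates and without peephole connections; a standard presentation rather than the exact variant of [2]) is, for input x_t and previous cell output h_{t-1}:

\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate, IG)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate, FG)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate, OG)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(memory cell state)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(cell output)}
\end{aligned}

The additive, gated update of the cell state c_t (rather than repeated multiplication by recurrent weight matrices) is what enforces the constant error flow mentioned above.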
It has been shown that stacking recurrent hidden layers of sigmoidal activa-
tion units in a network more naturally captures the structure of time series and
allows for processing time series at different time scales [3]. A notable instance of
using hierarchical temporal processing for anomaly detection is the Hierarchical
Temporal Memory (HTM) system that attempts to mimic the hierarchy of cells,
regions, and levels in the neocortex [4]. Also, temporal anomaly detection ap-
proaches like [5, 6] learn to predict time series and use prediction errors to detect
novelty. However, to the best of our knowledge, the retentive power of LSTMs
has not been combined with recurrent hierarchical processing layers for time
series prediction and anomaly detection.
As in [5], we use a predictor to model normal behaviour, and subsequently
use the prediction errors to identify abnormal behaviour. (This is particularly
helpful in real-world anomaly detection scenarios where instances of normal be-
haviour may be available in abundance but instances of anomalous behaviour are
rare.) In order to ensure that the networks capture the temporal structure of the
sequence, we predict several time steps into the future. Thus each point in the
sequence has multiple corresponding predicted values made at different points
in the past, giving rise to multiple error values. The probability distribution
of the errors made while predicting on normal data is then used to obtain the
likelihood of normal behaviour on the test data. When control variables (such as
vehicle accelerator or brake) are also present, the network is made to predict the
control variable in addition to the dependent variables. This forces the network
to learn the normal usage patterns via the joint distribution of the prediction
errors for the control and dependent sensor variables. As a result, the obvious
prediction errors made when a control input changes are already captured and
do not contribute towards declaring an anomaly.
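To make this error bookkeeping concrete, the following sketch (not from the paper; the array layout, variable names such as `preds`, and the assumption that the d predicted variables are the first d columns of the data are all illustrative) builds, for each point, the error vector from the l predictions made for it at earlier time steps:

```python
import numpy as np

def error_vectors(preds, x, l, d):
    """Build error vectors e(t) from rolling multi-step predictions.

    preds[s, j, i] is the prediction, made at time s, of x[s + j + 1, i]
    (i.e. the (j+1)-step-ahead forecast of the i-th selected dimension).
    x has shape (n, m); the d predicted variables are assumed to be its
    first d columns. Returns one (d*l)-dimensional error vector per point
    x(t) with l <= t < n - l, matching the construction in Section 2.
    """
    n = x.shape[0]
    errs = []
    for t in range(l, n - l):
        e_t = []
        for i in range(d):                # selected dimensions
            for j in range(1, l + 1):     # prediction of x(t) made at time t - j
                e_t.append(x[t, i] - preds[t - j, j - 1, i])
        errs.append(e_t)
    return np.asarray(errs)               # shape: (n - 2*l, d * l)
```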
The rest of the paper is organised as follows: Section 2 describes our ap-
proach. In Section 3, we present temporal anomaly detection results on four
real-world datasets using our stacked LSTM approach (LSTM-AD) as well as a
stacked RNN approach using recurrent sigmoid units (RNN-AD). Section 4 offers
concluding remarks.
2 LSTM-AD: LSTM-based Anomaly Detection
Consider a time series X = {x(1), x(2), ..., x(n)}, where each point x(t) ∈ R^m
is an m-dimensional vector whose elements
correspond to the input variables. A prediction model learns to predict the next
l values for d of the input variables s.t. 1 ≤ d ≤ m. The normal sequence(s)
are divided into four sets: normal train (s_N), normal validation-1 (v_N1), normal
validation-2 (v_N2), and normal test (t_N). The anomalous sequence(s) are divided
into two sets: anomalous validation (v_A) and anomalous test (t_A). We first
learn a prediction model using stacked LSTM networks, and then compute the
prediction error distribution, which we use to detect anomalies:
Stacked LSTM based prediction model: We consider the following LSTM
network architecture: we take one unit in the input layer for each of the m
dimensions and d × l units in the output layer, s.t. there is one unit for each
of the l future predictions of each of the d dimensions. The LSTM units in a
hidden layer are fully connected through recurrent connections. We stack LSTM
layers s.t. each unit in a lower LSTM hidden layer is fully connected to each unit
in the LSTM hidden layer above it through feedforward connections (see Fig. 1(b)).
The prediction model is learned using the sequence(s) in s_N. The set v_N1 is
used for early stopping while learning the network weights.
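A minimal sketch of such a stacked prediction network, written here with tf.keras (the layer sizes follow the (30-20) example in Table 1, but the windowing, optimizer, and early-stopping settings are assumptions rather than the authors' exact configuration):

```python
import tensorflow as tf

m, d, l = 12, 2, 5  # input dims, predicted dims, prediction length (example values)

# Two stacked LSTM hidden layers; the output layer has d*l linear units,
# one per future prediction of each selected dimension.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, m)),                    # sequences of m-dim points
    tf.keras.layers.LSTM(30, return_sequences=True),    # lower hidden layer (LSTM-L1)
    tf.keras.layers.LSTM(20),                           # upper hidden layer (LSTM-L2)
    tf.keras.layers.Dense(d * l),                       # l-step predictions for d dims
])
model.compile(optimizer="adam", loss="mse")

# s_N provides the training windows; v_N1 is used for early stopping.
early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val1, y_val1),
#           epochs=100, callbacks=[early_stop])
```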
Anomaly detection using the prediction error distribution: With a
prediction length of l, each of the selected d dimensions of x(t) ∈ X for l < t ≤
n − l is predicted l times. We compute an error vector e(t) for point x(t) as
e(t) = [e_11(t), ..., e_1l(t), ..., e_d1(t), ..., e_dl(t)], where e_ij(t) is the
difference between x_i(t) and its value as predicted at time t − j.
The prediction model trained on s_N is used to compute the error vectors for
each point in the validation and test sequences. The error vectors are modelled
to fit a multivariate Gaussian distribution N = N(μ, Σ). The likelihood p(t)
of observing an error vector e(t) is given by the value of N at e(t) (similar to
the normalized innovations squared (NIS) used for novelty detection with a
Kalman-filter-based dynamic prediction model [5]). The error vectors for the
points from v_N1 are used to estimate the parameters μ and Σ using Maximum
Likelihood Estimation. An observation x(t) is classified as ‘anomalous’ if p(t) < τ;
otherwise the observation is classified as ‘normal’. The sets v_N2 and v_A are
used to learn τ by maximizing the F_β-score (where anomalous points belong to
the positive class and normal points to the negative class).
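The error-distribution and threshold-selection step could look roughly as follows (a sketch, assuming precomputed arrays E_vN1, E_vN2, and E_vA of error vectors for v_N1, v_N2, and v_A; the candidate grid for τ is illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.metrics import fbeta_score

# MLE fit of N(mu, Sigma) on error vectors from the normal validation set v_N1
# (bias=True gives the 1/N maximum-likelihood covariance estimate).
mu = E_vN1.mean(axis=0)
Sigma = np.cov(E_vN1, rowvar=False, bias=True)
gauss = multivariate_normal(mean=mu, cov=Sigma, allow_singular=True)

# Likelihood p(t) of each error vector under the fitted Gaussian.
p_val = gauss.pdf(np.vstack([E_vN2, E_vA]))
y_val = np.concatenate([np.zeros(len(E_vN2)), np.ones(len(E_vA))])  # anomalous = 1

# Learn tau on v_N2 and v_A by maximizing F_beta (anomalous = positive class).
best_tau, best_f = None, -1.0
for tau in np.percentile(p_val, np.linspace(0.1, 20, 200)):
    y_pred = (p_val < tau).astype(int)   # low likelihood => anomalous
    f = fbeta_score(y_val, y_pred, beta=0.1)
    if f > best_f:
        best_tau, best_f = tau, f
```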
3 Experiments
We present the results of LSTM-AD on four real-world datasets that pose
varying levels of difficulty for anomaly detection. We report the precision, recall,
F_0.1-score, and architecture used for the LSTM-AD and RNN-AD approaches
in Table 1, after choosing the network architecture and τ with the maximum
F_0.1-score using the validation sets as described in Section 2.
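For reference, F_β is the standard weighted harmonic mean of precision P and recall R:

F_\beta = \frac{(1 + \beta^2)\, P\, R}{\beta^2 P + R}

With β = 0.1, precision is weighted much more heavily than recall, i.e. the threshold is chosen to keep false alarms low.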
Table 1: Precision, recall, and F_0.1-scores for RNN and LSTM architectures.
(Note: (30-20) indicates 30 and 20 units in the first and second hidden layers,
respectively.)
3.1 Datasets
Electrocardiograms (ECGs): The qtdb/sel102 ECG dataset contains a single
short-term anomaly corresponding to a pre-ventricular contraction (Fig. 2(a)).
Since the ECG dataset has only one anomaly, we do not calculate a threshold
and corresponding F_0.1-score for this dataset; we only learn the prediction model
using a normal ECG subsequence and compute the likelihood of the error vectors
for the remaining sequence.
Space Shuttle Marotta valve time series: This dataset has both short time-period
patterns and long time-period patterns lasting hundreds of time steps. There
are three anomalous regions in the dataset, marked a1, a2, and a3 in Fig. 2(b).
Region a3 is an easily discernible anomaly, whereas regions a1 and a2 correspond
to more subtle anomalies that are not easily discernible at this resolution.
Power demand dataset: The normal behaviour corresponds to weeks where the
power consumption has five peaks corresponding to the five weekdays and two
troughs corresponding to the weekend. This dataset has a very long-term pattern
spanning hundreds of time steps. Additionally, the data is noisy because the
peaks do not occur at exactly the same time of the day.
Multi-sensor engine data: This dataset has readings from 12 different sensors:
one of the sensors is the ‘control’ to the engine, and the rest measure dependent
variables such as temperature and torque. We train the anomaly detector using
sequences corresponding to three independent faults and measure the F_β-score
on a distinct set of three independent faults. We choose the ‘control’ sensor
together with one of the dependent variables as the dimensions to be predicted.
(iii) The activations of selected hidden units, four each from layers LSTM-L1
(lower hidden layer with 30 units) and LSTM-L2 (higher hidden layer with 20
units), for the power dataset are shown in Fig. 2(f.1) and (f.2). Subsequences
marked w1 and w2 in the last activation sequence shown in Fig. 2(f.2) indicate
that this hidden unit's activation is high during weekdays and low during
weekends. These are instances of high-level features learned by the higher
hidden layer, which appear to operate at a weekly time-scale.
(iv) As shown in Table 1, for the ‘ECG’ and ‘engine’ datasets, which do not
have any long-term temporal dependence, both LSTM-AD and RNN-AD per-
form equally well. On the other hand, for the ‘space shuttle’ and ‘power demand’
datasets, which have long-term temporal dependencies along with short-term
dependencies, LSTM-AD shows significant improvements of 18% and 30%, re-
spectively, over RNN-AD in terms of F_0.1-score.
(v) The fraction of anomalous points detected for periods prior to faults in the
‘engine’ dataset was higher than that during normal operation. This suggests
that our approach could potentially be useful for early fault prediction as well.
4 Discussion
We have demonstrated that (i) stacked LSTM networks are able to learn higher-
level temporal patterns without prior knowledge of the pattern duration and so
(ii) stacked LSTM networks may be a viable technique to model normal time
series behaviour, which can then be used to detect anomalies. Our LSTM-AD
approach yields promising results on four real-world datasets which involve mod-
elling short-term as well as long-term temporal dependencies. LSTM-AD gave
better or similar results when compared with RNN-AD, suggesting that LSTM-
based prediction models may be more robust than RNN-based models, especially
when we do not know beforehand whether the normal behaviour involves
long-term dependencies or not.
References
[1] M. Basseville and I. V. Nikiforov. Detection of abrupt changes: theory and application.
Prentice Hall, 1993.
[2] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation,
9(8):1735–1780, 1997.
[3] M. Hermans and B. Schrauwen. Training and analysing deep recurrent neural networks.
In Advances in Neural Information Processing Systems 26, pages 190–198, 2013.
[4] D. George. How the brain might work: A hierarchical and temporal model for learning
and recognition. PhD Thesis, Stanford University, 2008.
[5] P. Hayton et al. Static and dynamic novelty detection methods for jet engine health
monitoring. Philosophical Transactions of the Royal Society of London, 365(1851):493–
514, 2007.
[6] J. Ma and S. Perkins. Online novelty detection on temporal sequences. In Proceedings
of the ninth ACM SIGKDD international conference on Knowledge discovery and data
mining, pages 613–618. ACM, 2003.