
Int. J. Appl. Math. Comput. Sci., 2008, Vol. 18, No. 4, 429–442
DOI: 10.2478/v10006-008-0038-3

FAULT DETECTION AND ISOLATION WITH ROBUST PRINCIPAL COMPONENT ANALYSIS

Yvon THARRAULT, Gilles MOUROT, José RAGOT, Didier MAQUIN

Centre de Recherche en Automatique de Nancy (CRAN), UMR 7039, Nancy Université, CNRS
2, Avenue de la forêt de Haye, F-54516 Vandoeuvre-lès-Nancy, France
e-mail: {yvon.tharrault,gilles.mourot}@ensem.inpl-nancy.fr, {jose.ragot,didier.maquin}@ensem.inpl-nancy.fr

Principal component analysis (PCA) is a powerful fault detection and isolation method. However, the classical PCA, which is based on the estimation of the sample mean and covariance matrix of the data, is very sensitive to outliers in the training data set. Usually, robust principal component analysis is applied to remove the effect of outliers on the PCA model. In this paper, a fast two-step algorithm is proposed. First, an accurate estimate of the covariance matrix of the data is sought, so that a PCA model can be developed and then used for fault detection and isolation. A very simple estimate derived from a one-step weighted variance-covariance estimate is used (Ruiz-Gazen, 1996). This is a "local" matrix of variance which tends to emphasize the contribution of close observations in comparison with distant observations (outliers). Second, structured residuals are used for multiple fault detection and isolation. These structured residuals are based on the reconstruction principle, and the existence condition of such residuals is used to determine the detectable and the isolable faults. The proposed scheme avoids the combinatorial explosion of the faulty scenarios to be considered in the presence of multiple faults. Finally, this procedure for outlier detection and isolation is successfully applied to an example with multiple faults.

Keywords: principal component analysis, robustness, outliers, fault detection and isolation, structured residual vector,
variable reconstruction.

1. Introduction

Principal component analysis (PCA) has been applied successfully in the monitoring of complex systems (Chiang and Colegrove, 2007; Kano and Nakagawa, 2008; Harkat, Mourot and Ragot, 2006). It is a widely used method for dimensionality reduction. Indeed, PCA transforms the data to a smaller set of variables which are linear combinations of the original variables, while retaining as much information as possible. In the classical approach, the first principal component corresponds to the direction in which the projected observations have the largest variance. The second component is then orthogonal to the first one and again maximizes the variance of the data points projected on it. Continuing in this way produces all the principal components, which correspond to the eigenvectors of the empirical covariance matrix. From a regression point of view, PCA also constructs the optimal orthogonal linear projections (in terms of the mean square error) from the eigenvectors of the data covariance matrix.

By analysing the eigenstructure of the covariance matrix of data collected under normal operating conditions, linear relations among the variables are revealed. The PCA model so obtained describes the normal process behaviour, and unusual events are then detected by referencing the observed behaviour against this model. The performance of the PCA model is thus based on the accurate estimation of the covariance matrix from the data, which is very sensitive to abnormal observations.

In general, the majority of the training data set is associated with normal operating conditions. The remaining data (faulty data, data obtained during shutdown or startup periods, or data issued from different operating modes) are referred to as "outliers". Often the outlying observations are not incorrect but were made under exceptional circumstances. Consequently, they may disturb the correlation structure of the "normal data", and the result will be a model that does not accurately represent the process. The fact that multiple outliers can contaminate the model derived from a classical PCA has motivated the development of robust methods that are less affected by outliers.

In practice, one often tries to detect outliers using diagnostic tools starting from a classical fitting method. However, classical methods can be affected by outliers so strongly that the resulting fitted model does not allow one to detect the deviating observations. This is called the masking effect. Additionally, some good data points might even appear to be outliers, which is known as swamping. To avoid these effects, the goal of robust PCA methods is to obtain principal components that are not influenced much by outliers. Large residuals from such a robust fit indicate the presence of outliers.

Several ways of robustifying principal components have been proposed. They can be grouped as follows. A first group of robust PCA methods is obtained by replacing the classical covariance matrix with a robust covariance estimator, such as the minimum covariance determinant (MCD) estimator (Rousseeuw, 1987). The MCD looks for those h observations in the data set whose classical covariance matrix has the lowest possible determinant. The user-defined parameter h is the number of fault-free data among all the data and determines the robustness as well as the efficiency of the resulting estimator. The computation of the MCD estimator is non-trivial and naively requires an exhaustive investigation of all h-subsets out of the N observations. This is no longer possible for large N or in high dimension. Rousseeuw and Van Driessen (1999) constructed a much faster algorithm called FAST-MCD, which avoids such a complete enumeration. It is obtained by combining basic subsampling and an iterative scheme with an MCD estimator.

A second approach to robust PCA uses projection pursuit (PP) techniques. These methods maximize a robust measure of data spread to obtain consecutive directions on which the data points are projected (Hubert, Rousseeuw and Verboven, 2002; Li and Chen, 1985; Croux and Ruiz-Gazen, 2005; Croux, Filzmoser and Oliveira, 2007). The main step of these algorithms is to search for the direction in which the projected observations have the largest robust spread, to obtain the first component. The second component is then orthogonal to the first one and has the largest robust spread of the data points projected on it. Continuing in this way produces all the robust principal components. To make these algorithms computationally feasible, the collection of directions to be investigated is restricted to all directions that pass through the robust centre and a data point, or through two data points. However, the robust directions obtained are approximations of the true ones. To improve the speed of the algorithms, a PCA compression to the rank of the data is performed as a first step. According to the authors, these algorithms can deal with both low and high dimensional data.

Another approach to robust PCA was proposed by Hubert et al. (2005) and is called ROBPCA. This method combines the ideas of both projection pursuit and robust covariance estimation based on the FAST-MCD algorithm. It first applies projection pursuit techniques in the original data space. These results are then used to project the observations into a subspace of small to moderate dimension. Within this subspace, robust covariance estimation is applied. According to the authors, this algorithm is a powerful tool for high dimensional data where the number of variables is greater than the number of observations. The authors also used a diagnostic plot to visualize and classify the outliers. It plots the squared Mahalanobis distance versus the orthogonal distance of each observation to the PCA subspace, these indices being known as the T2 and SPE statistics, respectively, in the statistical process monitoring field (Qin, 2003).

The latest proposals for robust PCA include the robust LTS-subspace estimator and its generalizations (Maronna, Martin and Yohai, 2006). The idea behind these approaches consists in minimizing a robust scale of the orthogonal distances of each observation to the PCA subspace, similarly to the LTS estimator, S-estimators and many others in regression. These methods are based on iterative procedures for which the problem of starting values remains open. For example, for the LTS-subspace estimator, the classical PCA is performed on the h observations with the smallest orthogonal distance to the PCA subspace. Its drawbacks are the same as for the MCD estimator: a high computational cost, the choice of the user-defined parameter h, and the starting values. As for the MCD estimator, a FAST-LTS algorithm has been proposed.

Our presentation is devoted to the problem of sensor fault detection and isolation in data. In this paper, a fast two-step algorithm is proposed. First, a very simple estimate derived from a one-step weighted variance-covariance estimate is used (Ruiz-Gazen, 1996). Second, structured residuals are employed for multiple fault detection and isolation. These structured residuals are based on the reconstruction principle. The variable reconstruction approach assumes that each set of variables (e.g., one, two, or n variables) may be unknown, and suggests to reconstruct these variables using the PCA model from the remaining variables (Dunia and Qin, 1998). If the faulty variables are reconstructed, the fault effect is eliminated. This property is useful for fault isolation. Moreover, instead of considering the isolation of one up to all sensors, we determine the maximum number of faulty scenarios to be taken into account by evaluating the existence condition of structured residuals. Note that this number is usually much smaller than the total number of sensors. The proposed scheme thus avoids the combinatorial explosion of the faulty scenarios related to multiple faults. Section 2 is a short reminder, on the one hand, of principal component analysis in the traditional case and, on the other hand, of robust principal component analysis. A detection and isolation procedure for outliers is proposed in Section 3. Then, in Section 4, it is applied to an example emphasizing the generation of fault signatures.

2. PCA fault detection and isolation

Let us consider a data matrix X ∈ R^{N×n}, with row vectors x_i^T ∈ R^n, which gathers N measurements collected on n system variables.

2.1. Classical approach. In the classical PCA case, data are supposed to be collected on a system in normal process operation. PCA determines an optimal linear transformation of the data matrix X in terms of capturing the variation in the data:

T = X P and X = T P^T,   (1)

with T ∈ R^{N×n} being the principal component matrix, and the matrix P ∈ R^{n×n} contains the principal vectors, which are the eigenvectors associated with the eigenvalues λ_i of the covariance matrix (or correlation matrix) Σ of X:

Σ = P Λ P^T with P P^T = P^T P = I_n,   (2)

where Λ = diag(λ_1, ..., λ_n) is a diagonal matrix with diagonal elements in decreasing magnitude order.

The relations (1) are useful when the dimension of the representation space is reduced. Once the number ℓ of components to retain is determined, the data matrix X can be approximated. For that purpose, the eigenvector matrix is partitioned as follows:

P = [ P̂  P̃ ],  P̂ ∈ R^{n×ℓ}.   (3)

From the decomposition (1), X̂ is the principal part of the data, explained by the first ℓ eigenvectors, and the residual part X̃ is explained by the remaining components:

X̂ = X P̂ P̂^T = X C_ℓ,   (4)
X̃ = X − X̂ = X (I − C_ℓ),   (5)

where the matrix C_ℓ = P̂ P̂^T is not equal to the identity matrix, except for the case ℓ = n.

Therefore, the PCA model partitions the measurement space into two orthogonal spaces:

• The principal component space, which includes data variations according to the principal component model.

• The residual space, which includes data variations not explained by the model. Such variations are due to noise and model errors in the data.

When the process variables are noise-free or corrupted with multivariate zero-mean independent and identically distributed measurement noise (Li and Qin, 2001), the eigenvectors P̃ of Σ corresponding to its n − ℓ smallest eigenvalues are such that

X P̃ = 0.   (6)
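As a quick illustration, the model of Eqs. (1)-(5) can be sketched in a few lines of Python (a minimal sketch under our own naming conventions; the paper provides no code, and the centring of X is assumed to have been done beforehand):

import numpy as np

def pca_model(X, l):
    Sigma = np.cov(X, rowvar=False)           # covariance matrix, Eq. (2)
    eigval, eigvec = np.linalg.eigh(Sigma)    # eigenvalues in ascending order
    P = eigvec[:, np.argsort(eigval)[::-1]]   # reorder to decreasing eigenvalues
    P_hat = P[:, :l]                          # retained eigenvectors, Eq. (3)
    C = P_hat @ P_hat.T                       # projector C_l of Eq. (4)
    return C

# X_hat = X @ C and X_tilde = X @ (np.eye(X.shape[1]) - C) give Eqs. (4)-(5).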
2.2. Robust approach. A major difficulty in PCA comes from its sensitivity to outliers. In order to reduce this sensitivity, various techniques can be applied, in particular one which consists in carrying out PCA directly on the data possibly contaminated by outliers. One approach is to replace the covariance matrix by a robust variant, which leads to robust PCA. This is a straightforward way to proceed, since the principal components are the eigenvectors of the covariance matrix.

Ruiz-Gazen (1996) defines a "local" matrix of variance, in the sense that the suggested form tends to emphasize the contribution of close observations in comparison with distant observations (outliers). The matrix is defined in the following way:

T = ( Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} w_{i,j} (x_i − x_j)(x_i − x_j)^T ) / ( Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} w_{i,j} ),   (7)

where the weights w_{i,j} themselves are defined by

w_{i,j} = exp( −(β/2) (x_i − x_j)^T Σ^{−1} (x_i − x_j) ),   (8)

β being a tuning parameter used to reduce the influence of faraway observations (the author recommends a value close to 2). It is shown in the sequel that the results are not very sensitive to this parameter.

Thanks to the presence of the adapted weights w_{i,j}, PCA can then be carried out on this "new" covariance matrix, considered robust with respect to outliers. From this new model, the detection and isolation of outliers are carried out using the reconstruction principle. Then a weight of zero is applied to the outliers, and a classical covariance matrix is calculated. The PCA model is then constructed from fault-free data. To ensure the detection of all outliers, the fault detection procedure can then be applied again.
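The estimator (7)-(8) is straightforward to implement. The following is a minimal sketch (our own code, written for readability rather than speed; the function and variable names are not from the paper):

import numpy as np

def ruiz_gazen_cov(X, beta=2.0):
    N, n = X.shape
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    num, den = np.zeros((n, n)), 0.0
    for i in range(N - 1):
        d = X[i] - X[i + 1:]                            # differences x_i - x_j, j > i
        q = np.einsum('kj,jl,kl->k', d, Sigma_inv, d)   # quadratic forms per pair
        w = np.exp(-0.5 * beta * q)                     # weights of Eq. (8)
        num += (w[:, None] * d).T @ d                   # sum of w_ij (x_i-x_j)(x_i-x_j)^T
        den += w.sum()
    return num / den                                    # Eq. (7)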
3. Fault detection and isolation

The variable reconstruction approach assumes that each variable may be faulty (in the case of a single fault) and suggests to reconstruct the assumed faulty variable using the PCA model from the remaining variables (Dunia and Qin, 1998). This reconstructed variable is then used to detect and isolate the faults. Moreover, this principle allows us to determine replacement values for the faulty variables.

3.1. Data reconstruction. The PCA model being known according to (4) and (5), a new measurement vector x can be decomposed as

x = x̂ + x̃,  x̂ = C_ℓ x,  x̃ = (I − C_ℓ) x,   (9)

where x̂ and x̃ are respectively the projections of x onto the principal space and the residual space.

From (9), it is possible to estimate a part of the vector x, for example the subset R containing the indices of the r reconstructed variables. However, the presence of outliers in the observation vector x renders the estimate x̂ sensitive to these values. It is then preferable to express this estimate x̂ by using only the fault-free part of the observation vector x.

The reconstruction of variables consists in estimating the reconstructed vector x̂_R by eliminating the effect of the faults. The matrix Ξ_R indicates the reconstruction directions. This matrix is orthonormal, with dimension (n × r), and is built with 0 and 1 entries, where 1 indicates the reconstructed variables (estimated from the other variables, marked with 0). For example, to reconstruct the variables R = {2, 4} among five variables, the matrix Ξ_R is formed as follows:

Ξ_R = [ 0 1 0 0 0 ; 0 0 0 1 0 ]^T.

The expression for the reconstruction x̂_R of the vector x is given by

x̂_R = G_R x,   (10)

where

Ξ̃_R = (I − C_ℓ) Ξ_R,   (11)
G_R = I − Ξ_R (Ξ̃_R^T Ξ̃_R)^{−1} Ξ̃_R^T.   (12)
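A sketch of the reconstruction (10)-(12), under the assumption that x is a one-dimensional array of n measurements and C is the projector C_ℓ built from the PCA model (the names are ours):

import numpy as np

def reconstruct(x, C, R):
    n = C.shape[0]
    Xi_R = np.eye(n)[:, R]                    # reconstruction directions Xi_R
    Xi_t = (np.eye(n) - C) @ Xi_R             # Eq. (11)
    G_R = np.eye(n) - Xi_R @ np.linalg.solve(Xi_t.T @ Xi_t, Xi_t.T)  # Eq. (12)
    return G_R @ x                            # reconstructed vector, Eq. (10)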
Reconstruction condition. Let us note that if Ξ̃_R has full column rank, then (Ξ̃_R^T Ξ̃_R)^{−1} exists and the variables of the subset R are completely reconstructible. This condition implies that the number of reconstructed variables r must satisfy

r ≤ n − ℓ   (13)

and that the columns of the matrix Ξ̃_R are neither null nor collinear.

If we write x̂_R in the case where the matrix of the reconstruction directions is reorganized as follows:

Ξ_R = [ I_1  0 ]^T ∈ R^{n×r},   (14)

with I_1 ∈ R^{r×r} being the identity matrix, then C_ℓ is split into four parts:

C_ℓ = [ c_1  c_2 ; c_2^T  c_4 ] ∈ R^{n×n},  c_1 ∈ R^{r×r},   (15)

and the reconstruction x̂_R of the vector x is written as follows:

x̂_R = [ 0  (I_1 − c_1)^{−1} c_2 ; 0  I_2 ] x,   (16)

with I_2 ∈ R^{(n−r)×(n−r)} being the identity matrix.

This form highlights two characteristics. First, the reconstructed vector x̂_R is formed of the r reconstructed variables and a copy of the remaining n − r variables. Second, the reconstructed variables are estimated without using their own measurements.

Proof. Let us note that the matrix C_ℓ is an idempotent and symmetric matrix. From (14) and (15), (11) becomes Ξ̃_R^T = [ I_1 − c_1  −c_2 ]. From (11) we get

Ξ̃_R^T Ξ̃_R = Ξ_R^T (I − C_ℓ) Ξ_R = I_1 − Ξ_R^T C_ℓ Ξ_R.

As Ξ_R^T C_ℓ Ξ_R = c_1, we have

(Ξ̃_R^T Ξ̃_R)^{−1} = (I_1 − c_1)^{−1}.

These different terms are replaced in (12):

G_R = [ I_1  0 ; 0  I_2 ] − [ I_1 ; 0 ] [ I_1  −(I_1 − c_1)^{−1} c_2 ]
    = [ 0  (I_1 − c_1)^{−1} c_2 ; 0  I_2 ]. ∎

3.2. Structured residual generation. With a diagnosis objective in mind, residuals are generated for fault detection and isolation. The residuals are obtained by projecting the reconstructed variables onto the residual space. The residuals are defined by x̃_R, the projection of x̂_R onto the residual space:

x̃_R = (I − C_ℓ) x̂_R = (I − C_ℓ) G_R x = P_R^{(ℓ)} x,   (17)

where

P_R^{(ℓ)} = (I − C_ℓ) G_R   (18)
         = (I − C_ℓ) − Ξ̃_R (Ξ̃_R^T Ξ̃_R)^{−1} Ξ̃_R^T.   (19)

Property 1. The matrix P_R^{(ℓ)} has the following property:

P_R^{(ℓ)} Ξ_R = 0.   (20)

This means that the components of x̃_R are not sensitive to the components of x belonging to the subset R. This property can be used to identify which components of x are disturbed by faults.

Proof. From the matrix partition adopted above, (18) becomes

P_R^{(ℓ)} = [ I_1 − c_1  −c_2 ; −c_2^T  I_2 − c_4 ] [ 0  (I_1 − c_1)^{−1} c_2 ; 0  I_2 ]
          = [ 0  0 ; 0  −c_2^T (I_1 − c_1)^{−1} c_2 + I_2 − c_4 ].   (21) ∎
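For illustration, the projection matrix (19) and the residual (17) can be computed as follows (a sketch with our own names; Property 1 can be checked numerically, since P_R Ξ_R should vanish up to rounding errors):

import numpy as np

def structured_residual(x, C, R):
    n = C.shape[0]
    Xi_R = np.eye(n)[:, R]
    Xi_t = (np.eye(n) - C) @ Xi_R
    P_R = (np.eye(n) - C) - Xi_t @ np.linalg.solve(Xi_t.T @ Xi_t, Xi_t.T)  # Eq. (19)
    # sanity check of Property 1: np.allclose(P_R @ Xi_R, 0) should hold
    return P_R @ x                            # x_tilde_R, Eq. (17)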
Example 1. Consider a measurement x composed of the true value x*, a noise ε with zero mean, and one fault of amplitude d and direction Ξ_F, where F is a subset containing the indices of the fault directions:

x = x* + ε + Ξ_F d.   (22)

Then the residual is

x̃_R = P_R^{(ℓ)} (x* + ε + Ξ_F d) = P_R^{(ℓ)} (ε + Ξ_F d),   (23)

with P_R^{(ℓ)} x* = 0, cf. (6), and its expected value is

E(x̃_R) = P_R^{(ℓ)} Ξ_F d.   (24)

The following observations can be made:

• If the reconstruction directions Ξ_R are the same as the fault directions, i.e., if R = F, then all components of the vector P_R^{(ℓ)} Ξ_F are zero and E(x̃_R) = 0.

• If the reconstruction directions Ξ_R are different from the fault directions, then all components of the vector P_R^{(ℓ)} Ξ_F are a priori nonzero, except the components belonging to the subset R.

The analysis of the residual amplitudes x̃_R for all possible combinations shows the presence of faults and makes it possible to determine the components of the measurement vector affected by these faults. ∎

Property 2. If Ξ̃_F has full column rank, there is no loss of sensitivity for any fault component.

Proof. If some components of Ξ̃_F and Ξ̃_R are orthogonal, then

Ξ̃_R^T Ξ̃_F = [ 0  × ],   (25)

with 0 ∈ R^{r×k} and × being an r×(r−k) matrix without null entries. From (24) it follows that

E(x̃_R) = ( (I − C_ℓ) − Ξ̃_R (Ξ̃_R^T Ξ̃_R)^{−1} Ξ̃_R^T ) Ξ_F d   (26)
        = ( Ξ̃_F − [ 0  × ] ) d,  0 ∈ R^{n×k}.   (27)

To suppress some directions of the fault, the first k columns of the matrix Ξ̃_F would have to be zero. But in that case, Ξ̃_F is rank-deficient and the faults are not detectable. Therefore, if the directions Ξ̃_F and Ξ̃_R are orthogonal, this does not disturb the localization process. ∎

3.3. Fault isolation. The structural condition for fault isolation is as follows:

r ≤ n − ℓ − 1.   (28)

All the directions of reconstruction have to be explored for fault detection and isolation. Solutions for which the faults associated with the reconstruction directions are not detectable are useless. The number of possible reconstructions can then be reduced, and the detectable faults are defined.

The maximum number of reconstructions can be calculated as follows:

Σ_{r=1}^{n−ℓ} C_r^n,   (29)

with C_r^n denoting the number of combinations of n elements taken r at a time. This number takes into account only the number of reconstructions and not the amplitude of the projection of the reconstructed directions onto the residual space. It can be reduced when the matrix of projected fault directions is rank-deficient or nearly rank-deficient. To detect these cases, the condition number ('Rcond'), defined as the ratio between the smallest and the greatest singular values of the matrix Ξ̃_R, is used:

Rcond = min(σ(Ξ̃_R)) / max(σ(Ξ̃_R)).   (30)

In the nearly rank-deficient case, fault detection and localization are possible only if the fault amplitude is huge. In the following, faults with huge amplitudes are not considered realistic.

The process to detect the useful directions of reconstruction can be summarized as follows (a sketch of the enumeration is given after the list):

1. r = 1 (single fault): compute all available directions Ξ̃_R. If Ξ̃_R^T Ξ̃_R is close to zero, the fault is not projected onto the residual space and is therefore not detectable. To detect and localize such a fault, its projection onto the principal space can be used.

2. r = r + 1: for all available directions Ξ̃_R, compute the value of the condition number Rcond.

• If Rcond is close to zero, then the r faulty variables of the subset R are not detectable. Therefore, all combinations which take these r variables into account will not be detectable, so they are useless. Moreover, all the combinations of r − 1 variables among the variables of the subset R are only detectable, because their fault signatures are identical. It is then useful to reconstruct only one combination of r − 1 variables selected from these r variables. Moreover, all the combinations of r − 2 variables among the r variables of the subset R are isolable.

• If Rcond is not close to zero, then all the combinations of r − 1 variables selected from the r variables of the subset R are isolable.

3. While r ≤ n − ℓ, go to Step 2.

This analysis of the model structure allows us to determine the detectable and isolable faults. The number of useful reconstructions can then be reduced.
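A possible implementation of this enumeration is sketched below (our own code; the tolerance used to decide that Ξ̃_R^T Ξ̃_R or Rcond is "close to zero" is an illustrative assumption on our part):

import numpy as np
from itertools import combinations

def useful_directions(C, l, tol=0.1):
    n = C.shape[0]
    useful = []
    for r in range(1, n - l + 1):                       # r <= n - l, Eq. (13)
        for R in combinations(range(n), r):
            Xi_t = (np.eye(n) - C) @ np.eye(n)[:, R]
            if r == 1:
                ok = Xi_t[:, 0] @ Xi_t[:, 0] > tol      # Step 1: norm test
            else:
                s = np.linalg.svd(Xi_t, compute_uv=False)
                ok = s.min() / s.max() > tol            # Rcond, Eq. (30)
            if ok:
                useful.append(R)
    return useful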
4. Numerical example

4.1. Single fault case. A simple example based on four variables (x1, x2, x3 and x4) and two models is used as a device to illustrate the above ideas.

4.1.1. Data generation. The data matrix X includes N = 240 measurements defined in the following way:

x_{i,1} = v_i^2 + 1 + sin(0.1 i),  v_i ∼ N(0, 1),   (31)
x_{i,2} = x_{i,1},  x_{i,3} = −2 x_{i,1},  x_{i,4} ∼ N(0, 1).

Realizations of centred normal distributions with the same standard deviation, equal to 0.02, are added to the first three variables. A constant bias with an amplitude equal to 30% of the amplitude of the variables simulates the presence of the outliers δx1, δx2, δx3 affecting the variables x1, x2 and x3: from observation 24 to 44 for the variable x1, from 80 to 100 for the variable x2, and from 140 to 160 for the variable x3, respectively. This large bias amplitude is chosen to highlight the behaviour of the robust PCA. It is important to notice that 60 observations contain abnormal values, and hence 25 percent of the data are contaminated. The objective is to detect and, especially, to isolate them.

Using the raw data (Fig. 1), we established a robust PCA model by applying the propositions of Section 2.2. The analysis of the decrease in the standardized eigenvalues of the covariance matrix (92.93, 6.96, 0.06, 0.04) allows us to retain two principal components (ℓ = 2).
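For reference, the data of Eq. (31) together with the bias-type outliers described above can be generated as follows (a sketch; the random seed and the use of the mean absolute value as the "amplitude" of a variable are our own choices):

import numpy as np

rng = np.random.default_rng(0)
N = 240
i = np.arange(1, N + 1)
v = rng.standard_normal(N)
x1 = v**2 + 1 + np.sin(0.1 * i)
X = np.column_stack([x1, x1, -2 * x1, rng.standard_normal(N)])
X[:, :3] += 0.02 * rng.standard_normal((N, 3))             # noise on the first three variables
for col, (start, stop) in enumerate([(23, 44), (79, 100), (139, 160)]):
    X[start:stop, col] += 0.3 * np.abs(X[:, col]).mean()   # 30% amplitude bias outliers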
4.1.2. Useful reconstruction. From (29), the maximum number of reconstructions is 10. First, the projections of the directions onto the residual space are studied (Table 1). From this table, let us note that the last variable, x4, is not reconstructible. It is thus not possible to detect a fault on x4 in the residual space; a solution is to work in the principal space to detect such a fault. Then, for r = 2, the indicator Rcond is computed for all useful directions (R = {1, 2}, R = {1, 3} and R = {2, 3}).

Table 1. Existence condition of residuals.

Ξ̃_1^T Ξ̃_1   Ξ̃_2^T Ξ̃_2   Ξ̃_3^T Ξ̃_3   Ξ̃_4^T Ξ̃_4
  0.83         0.83         0.33         0.00

The result is shown in Table 2. None of the Rcond values is close to zero. This means that a fault on x1, x2 or x3 is isolable. Out of 10 reconstruction possibilities, only 6 are really reconstructible. Among these reconstructible directions, only 3 combinations are useful to isolate the faulty variables. To conclude, in the following, only the first three variables are reconstructed to ensure localization.

Table 2. Existence condition of residuals.

R       {1, 2}   {1, 3}   {2, 3}
Rcond    0.82     0.41     0.41

4.1.3. Sensitivity analysis. The data in Table 3 summarize the relationship between the residual sensitivity x̃_R (17) and the outliers or faults δx1, δx2 and δx3 (the fault δx4 on the variable x4 is not considered). This table was constructed by taking into account the property (20) of the matrix P_R^{(2)}. For example, the first four residuals, x̃11 to x̃14 (relative to the variables x1, x2, x3 and x4), were obtained by projecting onto the residual space the variables reconstructed without using the variable x1. As the first row and the first column of P_1^{(2)} are zero, according to (20), the residual x̃11 is not sensitive to the variables x1, x2 and x3 and, consequently, to the potential faults δx1, δx2 or δx3 affecting these variables. Moreover, the residuals x̃12, x̃13 and x̃14 are not sensitive to the variable x1 and thus to the fault δx1 which can affect it. To summarize these different situations, the symbols × and 0 denote the fault influence on the residuals. The other parts of the table were constructed following this same principle by considering the different projection matrices P_2^{(2)} and P_3^{(2)}. By analysing the dependence of the columns of the signature matrix, one can establish necessary conditions enabling fault detection and isolation.

Table 3. Fault occurrences.

        R = {1}                  R = {2}                  R = {3}
        x̃11 x̃12 x̃13 x̃14      x̃21 x̃22 x̃23 x̃24      x̃31 x̃32 x̃33 x̃34
δx1     0    0    0    0         ×    0    ×    ×         ×    ×    0    ×
δx2     0    ×    ×    ×         0    0    0    0         ×    ×    0    ×
δx3     0    ×    ×    ×         ×    0    ×    ×         0    0    0    0
Fig. 1. Raw data.


Fig. 2. Residuals x̃11, x̃12, x̃13 and x̃14.

Let us note that only two projection matrices and two residuals are necessary for fault detection and isolation. For example, the matrices P_1^{(2)} and P_2^{(2)} (19) allow us to build the residuals x̃12 (relative to x2) and x̃21 (relative to x1), which permit detecting and isolating any one of the three faults. Indeed, Table 3 indicates that, with these two residuals, the signatures of the faults δx1, δx2 and δx3 are respectively (0 ×), (× 0) and (× ×); these three signatures are independent and thus the faults are isolable from each other. However, this condition is only structural and does not take into account the sensitivity of the residuals to the faults.

Indeed, the residuals do not have the same sensitivity to different faults. Therefore, if only two residuals are used, it is not certain that the faulty variable is detectable and localizable.

4.1.4. Fault detection. Figure 2 shows the residuals x̃11, x̃12, x̃13 and x̃14 (relative to x1, x2, x3 and x4), defined with ℓ = 2 by (17) and obtained by the projection of all the variables reconstructed without using the variable x1. As indicated by Property 1, in the case of the residuals generated without using the variable x1, only faults affecting the variables x2 and x3 are detectable on the residuals x̃12, x̃13 and x̃14. The panels of Fig. 3 are relative to a global indicator SPE_R (the norm of the projection vector) calculated for each observation from time 1 to time 240:

SPE_R = ||x̃_R||^2.   (32)

The faulty observations are determined as follows:

SPE_R > δ_α^2,   (33)

with δ_α^2 being the detection threshold of SPE_R (Jackson and Mudholkar, 1979).
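A sketch of this detection test (our own code; the Jackson-Mudholkar threshold is not reproduced here, and the empirical quantile used as a stand-in threshold is an assumption on our part):

import numpy as np

def spe(X, C, R):
    n = C.shape[0]
    Xi_t = (np.eye(n) - C) @ np.eye(n)[:, R]
    P_R = (np.eye(n) - C) - Xi_t @ np.linalg.solve(Xi_t.T @ Xi_t, Xi_t.T)
    X_tilde = X @ P_R.T                    # residuals x_tilde_R for every observation
    return (X_tilde ** 2).sum(axis=1)      # SPE_R per observation, Eq. (32)

# Hypothetical usage, with X_ref a presumed fault-free reference data set:
# faulty = spe(X, C, R=[0]) > np.quantile(spe(X_ref, C, R=[0]), 0.99)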
A global indicator is computed for each direction (R = {1}, R = {2} and R = {3}). It takes into account all the corresponding residuals; e.g., SPE_1 is computed from x̃11, x̃12, x̃13 and x̃14. The global indicators therefore use all the sensitivity of the residuals to the faults. Table 4 shows the fault signatures obtained with the global indicators and is constructed in the same way as Table 3.

Table 4. Table of fault signatures.

        SPE_1   SPE_2   SPE_3
δx1      0       ×       ×
δx2      ×       0       ×
δx3      ×       ×       0

Fault detection and isolation are realized without ambiguity and are in accordance with the theoretical results of the isolation procedure (Table 4).
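An entry of such a signature table can be generated automatically (a sketch with our own helper name; by Property 1, SPE_R is structurally insensitive to a fault set F exactly when P_R^{(ℓ)} Ξ_F = 0, which holds in particular when F is contained in R):

import numpy as np

def signature_entry(C, R, F, tol=1e-8):
    n = C.shape[0]
    Xi_t = (np.eye(n) - C) @ np.eye(n)[:, R]
    P_R = (np.eye(n) - C) - Xi_t @ np.linalg.solve(Xi_t.T @ Xi_t, Xi_t.T)
    # '0' if the residual of direction R is insensitive to faults on the set F
    return '0' if np.abs(P_R[:, F]).max() < tol else 'x'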
To analyze multiple fault directions, the indicator Rcond
4.1.5. Choice of β. For different values of β, Fig. 4 is calculated for all available directions (for r = 2 to 4).
represents the indicator SPE 1 . For β = 0, we have the Results for the reconstruction of two variables are shown
same case as that of the classical PCA. We notice that it is in Table 6. For example, for R = {1, 2}, the value of
not possible to detect or isolate the fault. For β = 0.5 to 5, Rcond is the intersection of the first row and the second
we can detect and isolate the fault with a similar result. So column of Table 6.
there is a large range of values for β where the detection Let us notice that for R = {3, 7}, Rcond is close to
ability is identical. Then, we recommend to choose β = 2. zero. This means that the fault signatures for a fault on x3
or on x7 are identical (SPE 3 = SPE 7 ). So we can only
detect the fault and conclude that the variables x3 or x7 or
4.2. Multiple fault case. We consider here the situation in which several faults affect variables at the same time.

4.2.1. Data generation. The matrix X includes N = 108 observations of a vector x with 8 components generated in the following way:

x_{i,1} = v_i^2 + sin(0.1 i),  v_i ∼ N(0, 1),   (34)
x_{i,2} = 2 sin(i/6) cos(i/4) exp(−i/N),
x_{i,3} = log(x_{i,2}^2),
x_{i,4} = x_{i,1} + x_{i,2},
x_{i,5} = x_{i,1} − x_{i,2},
x_{i,6} = 2 x_{i,1} + x_{i,2},
x_{i,7} = x_{i,1} + x_{i,3},
x_{i,8} ∼ N(0, 1).

To the data thus generated were added realizations of random variables with centred normal distributions and standard deviations equal to 0.02, as well as the faults δx1, δx2, δx3, δx4, δx5, δx6, represented by a bias with an amplitude equal to 10% of the amplitude of the variables and defined in the following way: observations 10 to 24 for the variable x1, observations 35 to 49 for the variables x2 and x3, observations 60 to 74 for the variables x4 and x5, and observations 85 to 99 for the variables x1, x4 and x6. In the following, these four intervals are denoted by I1, I2, I3 and I4.

From the contaminated data, the robust PCA model, with four principal axes (ℓ = 4), was chosen.
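As for the single fault case, the data of Eq. (34) with the biased intervals described above can be generated as follows (a sketch; the seed and the handling of the 10% bias amplitude are our own choices):

import numpy as np

rng = np.random.default_rng(1)
N = 108
i = np.arange(1, N + 1)
v = rng.standard_normal(N)
x1 = v**2 + np.sin(0.1 * i)
x2 = 2 * np.sin(i / 6) * np.cos(i / 4) * np.exp(-i / N)
x3 = np.log(x2**2)
X = np.column_stack([x1, x2, x3, x1 + x2, x1 - x2, 2 * x1 + x2, x1 + x3,
                     rng.standard_normal(N)])
X += 0.02 * rng.standard_normal(X.shape)                  # measurement noise
intervals = {(9, 24): [0], (34, 49): [1, 2], (59, 74): [3, 4], (84, 99): [0, 3, 5]}
for (start, stop), cols in intervals.items():             # 10% amplitude bias faults
    for c in cols:
        X[start:stop, c] += 0.1 * np.abs(X[:, c]).mean()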
Table 5. Existence condition of residuals.

Ξ̃_1^TΞ̃_1  Ξ̃_2^TΞ̃_2  Ξ̃_3^TΞ̃_3  Ξ̃_4^TΞ̃_4  Ξ̃_5^TΞ̃_5  Ξ̃_6^TΞ̃_6  Ξ̃_7^TΞ̃_7  Ξ̃_8^TΞ̃_8
 0.84       0.72       0.46       0.71       0.41       0.40       0.46       0.00

4.2.2. Useful reconstruction. From the size of the residual space, we cannot reconstruct more than four variables simultaneously. The maximum number of reconstructions is then equal to 162 (cf. (29)). First, the projections of the fault directions onto the residual space are studied (Table 5). From this table, let us note that the last variable is not reconstructible. Indeed, the variable x8 is uncorrelated with the other variables. To detect and isolate a fault on this variable, it is better to work in the principal space. To analyze multiple fault directions, the indicator Rcond is calculated for all available directions (for r = 2 to 4). The results for the reconstruction of two variables are shown in Table 6; for example, for R = {1, 2}, the value of Rcond is at the intersection of the first row and the second column of Table 6.

Let us notice that for R = {3, 7}, Rcond is close to zero. This means that the fault signatures for a fault on x3 or on x7 are identical (SPE_3 = SPE_7). So we can only detect the fault and conclude that x3, or x7, or both x3 and x7, are faulty. To illustrate this case, Fig. 5 shows the values of the global indicator SPE_R for R = {3}, R = {7}, R = {2, 3} and R = {2, 7}.
Fig. 3. SPE without using x1, x2 and x3.

Fig. 4. SPE_1 for different values of β.

Table 7. Table of fault signatures (columns: SPE for the reconstruction of single variables, couples and triples).

       SPE_1 SPE_2 SPE_3 SPE_4 SPE_5 SPE_6 SPE_12 SPE_14 SPE_15 SPE_16 SPE_24 SPE_25 SPE_26 SPE_45 SPE_46 SPE_56 SPE_124 SPE_125 SPE_126 SPE_145 SPE_146 SPE_156
δ1      0     ×     ×     ×     ×     ×     0      0      0      0      ×      ×      ×      ×      ×      ×      0       0       0       0       0       0
δ2      ×     0     ×     ×     ×     ×     0      ×      ×      ×      0      0      0      ×      ×      ×      0       0       0       ×       ×       ×
δ3      ×     ×     0     ×     ×     ×     ×      ×      ×      ×      ×      ×      ×      ×      ×      ×      ×       ×       ×       ×       ×       ×
δ4      ×     ×     ×     0     ×     ×     ×      0      ×      ×      0      ×      ×      0      0      ×      0       ×       ×       0       0       ×
δ12     ×     ×     ×     ×     ×     ×     0      ×      ×      ×      ×      ×      ×      ×      ×      ×      0       0       0       ×       ×       ×
δ14     ×     ×     ×     ×     ×     ×     ×      0      ×      ×      ×      ×      ×      ×      ×      ×      0       ×       ×       0       0       ×
δ24     ×     ×     ×     ×     ×     ×     ×      ×      ×      ×      0      ×      ×      ×      ×      ×      0       ×       ×       ×       ×       ×
δ124    ×     ×     ×     ×     ×     ×     ×      ×      ×      ×      ×      ×      ×      ×      ×      ×      0       ×       ×       ×       ×       ×
δ125    ×     ×     ×     ×     ×     ×     ×      ×      ×      ×      ×      ×      ×      ×      ×      ×      ×       0       ×       ×       ×       ×

Figure 5 shows that SPE_3 = SPE_7 and SPE_23 = SPE_27, so only one SPE of each such combination is useful to detect a fault. In the following, only the combinations with 3 are considered (not those with 7). On the interval I2, SPE_3 is nonzero, so there is another fault at the same time. Moreover, the figure shows that SPE_23 is zero on the interval I2. We conclude that, on the interval I2, the variables x2 and x3 or/and x7 are faulty.

Fig. 5. SPE for R = {3}, R = {7}, R = {2, 3} and R = {2, 7}.

Table 6. Existence condition of residuals (Rcond values).

Rcond   1      2      3      4      5      6      7
1             0.88   0.72   0.88   0.57   0.57   0.72
2                    0.79   0.73   0.42   0.68   0.80
3                           0.80   0.75   0.76   0.01
4                                  0.68   0.41   0.80
5                                         0.79   0.75
6                                                0.75

For all the directions of reconstruction, these indicators are calculated. Another case where Rcond is close to zero, with R = {2, 4, 5, 6}, is detected. Then all the combinations of three variables selected from among the variables of the subset R are only detectable, and their fault signatures are identical (SPE_245 = SPE_246 = SPE_256 = SPE_456). Therefore, only one SPE is useful to detect a fault, e.g., SPE_245. Thus, a fault can be detected but not isolated: the faulty variables are among the variables x2, x4, x5 and x6.

Out of the 162 reconstruction possibilities, only 81 are really reconstructible. Among these reconstructible directions, only 21 combinations are useful to isolate the faulty variables. For the others, a set of variables is considered as faulty, but it is not possible to determine the faulty variables within the set.

4.2.3. Sensitivity analysis. All the useful reconstruction directions and the corresponding SPE are computed. Concerning the a priori analysis of fault isolation, a reduced table of signatures established from the property (20) is given (Table 7). It reveals only some possible faults, as noted by δ in the first column: those affecting the variables 1, 2, 3, 4, and those affecting the couples {1, 2}, {1, 4}, {2, 4} and the triples {1, 2, 4}, {1, 2, 5}. The columns pertain to the norm SPE_R of the residual vectors obtained by the reconstruction-projection of the variables, using all the components of x except those with the indices belonging to R. The residuals are defined by (17). This table, which the reader will be able to extend, provides a correspondence between the symptoms SPE_R and the faults δ_R. For example, the fault δ2 affects all the projections except those established without the components 2, {1, 2}, {2, 4}, {2, 5}, {2, 6}, {1, 2, 4}, {1, 2, 5}, {1, 2, 6}.

4.2.4. Fault detection. The reconstruction is carried out from all the useful directions. Figure 6 visualizes the reconstruction of the variables without using Variable 1. This figure shows the reconstruction of the first seven variables, which are associated with the column SPE_1 of Table 7 specifying the isolable faults. The N reconstructed data were then projected onto the residual space. For each observation, the fault indicators SPE_R were calculated.

Let us analyze Fig. 6. Variable 1, biased for the observations of the interval I1, is not used for the reconstruction, and the other variables, which are used for the reconstruction, do not present any bias. For these observations, the reconstructions are thus correct, as emphasized by the first graph (starting from the top of the figure), which shows the superposition of the reconstructed variables (the symbol '◦') with the true variables (in practice, the latter are unknown, but at the stage where the data are generated the comparison is possible). The measurement of the variable is also plotted (continuous line) in order to compare it with the reconstruction.

This result is confirmed by the first graph of Fig. 7, where the norm of the residual vector (32) is depicted.

Fig. 6. Variable reconstruction without using Variable 1.

For the observations of the interval I1, this norm, being close to zero, shows the absence of outliers in the variables used for the reconstruction and projection, i.e., all the variables except x1. Let us note that the three other groups of observations (I2, I3, I4) are affected by faults, without knowing exactly which components of the measurement vector are faulty.

Fig. 7. SPE for isolable directions.



Finally, by taking into account the fault presence in the four intervals, the examination of the first graph of Fig. 7 leads to the conclusion that, in each of the intervals I2, I3 and I4, a variable other than x1 is faulty, or more than one variable is faulty.

Other projections are built and interpreted in a similar way. Figure 7 shows the global indicators for the reconstruction directions which ensure isolation. Table 8 summarizes the conclusions resulting from the projection analysis (Fig. 7). SPE_1 pertains to the residuals reconstructed without using the first variable. The symbol '0' denotes the absence of a fault in the interval considered.

Table 8. Fault signatures.

           I1   I2   I3   I4
SPE_1      0    ×    ×    ×
SPE_45     ×    ×    0    ×
SPE_146    0    ×    ×    0

The diagnosis is as follows:

• in the interval I1, x1 is faulty;

• in the interval I2, x2 and x3 or/and x7 are faulty; the fault is not isolable;

• in the interval I3, x4 and x5 are faulty;

• in the interval I4, x1, x4 and x6 are faulty.

5. Conclusion

Principal component analysis reduces the data representation space and enables the determination of the redundancy relationships (linear relations among the variables). The redundancy relations are then used to detect and isolate the faults. PCA is constructed with fault-free data from a decomposition into the eigenvalues and eigenvectors of a covariance matrix. However, real data sets are usually not fault-free, and the covariance matrix is then disturbed by outliers. In order to reduce the sensitivity of the model to outliers, a fast two-step algorithm has been proposed. First, the covariance matrix is replaced by a robust variant, which leads to robust PCA. This one-step weighted estimate tends to emphasize the contribution of close observations in comparison with distant observations (outliers). Moreover, the results are not very sensitive to the tuning parameter β. A model robust with respect to outliers is therefore constructed. Second, structured residuals are generated for multiple fault detection and isolation. These structured residuals are based on the reconstruction principle. For fault isolation, the proposed scheme avoids the combinatorial explosion of the faulty scenarios related to multiple faults. Indeed, instead of considering all combinations of one up to all sensors, we limit the maximum number of faulty scenarios to be considered by evaluating the existence condition of structured residuals. The detectable and isolable faults are thereby determined, as well as the different faulty scenarios for which it is not possible to distinguish the faulty variables. This procedure was applied to two examples, the first with a single fault and the second with multiple faults. A correct estimation of the principal directions was obtained even in the presence of approximately 25 percent of outliers, so the estimation is not very sensitive to these values. In both examples, the method is efficient for fault detection and isolation.

A PCA model can thus be built directly from the available data containing potential faults. One advantage of the suggested method is that it is not iterative, unlike the majority of robust methods. The most important result concerns the diagnosis of systems, applied here to the detection and isolation of outliers. For that purpose, we showed how to build fault indicators and how to determine the isolable faults. The use of the principle of reconstruction, together with the projection of the reconstructed data, made it possible to detect and isolate outliers in an effective way.

In future work, a comparison, in terms of fault detection and localization, of this robust approach with others, like the robust LTS-subspace estimator and its generalizations, will be performed. Another direction is to improve the detection indicator: the detection is carried out only in the residual space and not in the principal space, and a combined fault indicator (Yue and Qin, 2001) using both the residual and the principal space could be used instead.

References

Chiang L. H. and Colegrove L. F. (2007). Industrial implementation of on-line multivariate quality control, Chemometrics and Intelligent Laboratory Systems 88(2): 143–153.

Croux C., Filzmoser P. and Oliveira M. (2007). Algorithms for projection-pursuit robust principal component analysis, Chemometrics and Intelligent Laboratory Systems 87(2): 218–225.

Croux C. and Ruiz-Gazen A. (2005). High breakdown estimators for principal components: The projection-pursuit approach revisited, Journal of Multivariate Analysis 95(1): 206–226.

Dunia R. and Qin S. (1998). A subspace approach to multidimensional fault identification and reconstruction, American Institute of Chemical Engineers Journal 44(8): 1813–1831.

Harkat M.-F., Mourot G. and Ragot J. (2006). An improved PCA scheme for sensor FDI: Application to an air quality monitoring network, Journal of Process Control 16(6): 625–634.

Hubert M., Rousseeuw P. and Van den Branden K. (2005). ROBPCA: A new approach to robust principal component analysis, Technometrics 47(1): 64–79.

Hubert M., Rousseeuw P. and Verboven S. (2002). A fast method for robust principal components with applications to chemometrics, Chemometrics and Intelligent Laboratory Systems 60(1-2): 101–111.

Jackson J. and Mudholkar G. S. (1979). Control procedures for residuals associated with principal component analysis, Technometrics 21(3): 341–349.

Kano M. and Nakagawa Y. (2008). Data-based process monitoring, process control, and quality improvement: Recent developments and applications in steel industry, Computers & Chemical Engineering 32(1-2): 12–24.

Li G. and Chen Z. (1985). Projection-pursuit approach to robust dispersion matrices and principal components: Primary theory and Monte Carlo, Journal of the American Statistical Association 80(391): 759–766.

Li W. and Qin S. J. (2001). Consistent dynamic PCA based on errors-in-variables subspace identification, Journal of Process Control 11(6): 661–678.

Maronna R. A., Martin R. and Yohai V. J. (2006). Robust Statistics: Theory and Methods, Wiley, New York, NY.

Qin S. J. (2003). Statistical process monitoring: Basics and beyond, Journal of Chemometrics 17(8-9): 480–502.

Rousseeuw P. (1987). Robust Regression and Outliers Detection, Wiley, New York, NY.

Rousseeuw P. and Van Driessen K. (1999). A fast algorithm for the minimum covariance determinant estimator, Technometrics 41(3): 212–223.

Ruiz-Gazen A. (1996). A very simple robust estimator of a dispersion matrix, Computational Statistics and Data Analysis 21(2): 149–162.

Yue H. and Qin S. (2001). Reconstruction-based fault identification using a combined index, Industrial and Engineering Chemistry Research 40(20): 4403–4414.

Received: 23 December 2007
Revised: 4 June 2008
Re-revised: 13 June 2008
