Stochastic Modelling and Analysis of Degradation For Highly Reliable Products
Z.-S. Ye and M. Xie
Received 5 May 2014; Revised 7 September 2014; Accepted 8 September 2014. Published online in Wiley Online Library.
Keywords: stochastic process models; Wiener processes; general path models; data-driven methods; degradation-based burn-in
1. Introduction
Systems have become more and more reliable, and the demand for highly reliable products continues to increase. Field failures are costly, and sometimes disastrous. For complex and especially safety-critical systems, however, it has become far more difficult to predict failure of the system or of a key component within it. On the other hand, the development of new technologies in design for reliability, product testing and maintenance has greatly enhanced the reliability and quality of products. It may take an extremely long time for a product unit to fail, even if it is operated under severe conditions. Reliability prediction and failure prevention for a reliable complex system during the testing and operational phases therefore pose a great challenge to reliability engineers.
A promising way for reliability modelling of highly reliable systems is to make use of degradation signals that reflect
the health conditions of a product. The rationale is based on the finding that ageing failures of most products are attributed
to some underlying degradation mechanism, for example, wear of a mechanical component, impurity level of filtered
water, resistance of an electronic component and capacity of a battery. Some important products with serious degradation problems are gears, which are among the most common components used in mechanical transmission systems; lithium-ion batteries, which are widely used in commercial products; and liquid-crystal displays (LCDs) and light-emitting diodes (LEDs), whose light intensity drops with usage. The degradation, which can be viewed as damage to a system, accumulates over time and
eventually leads to a product failure when the accumulated damage reaches a failure threshold, either random or stipulated
by industrial standards. See Figure 1 for an illustration of a degradation-threshold failure.
The degradation-threshold failure mechanism provides an intimate link between degradation and product failures. The
failure time distribution and the parameters therein can be determined through analysis of the degradation mechanism and
the data. A well-known example is the Wiener degradation process whose first passage time to a fixed threshold follows
an inverse Gaussian (IG) distribution. When other degradation models are used, we can obtain the first passage time in a
similar way. The natural link between the degradation process and the failure time motivates us to assess product reliability
by making use of degradation signals. If we are able to find a proper degradation model for the degradation signals, the
model can then be used for subsequent forecasting and decision-making, for example, estimation of failure time distribution
during product testing, forecasting of warranty costs, degradation-based burn-in testing, remaining useful life prediction
during field use and condition-based maintenance. Therefore, the essence of degradation modelling is to develop ‘good’
probability models that are capable of describing the degradation phenomenon.
In this paper, we first review existing probability models for describing degradation over time. Two broad categories of degradation models are stochastic process models, for example, the Wiener process, Gamma process and IG process, and general path models. The Wiener process has been used in degradation modelling and analysis so intensively that it deserves a full section of its own. Various variants of the Wiener process, for example, with covariates, random effects and
measurement errors, will be introduced. Then we will use another section to introduce other degradation models in the
first two classes, including the Gamma process, the IG process and the general path models. A comprehensive comparison
between stochastic process models and general path models is then given to show the pros and cons of the two approaches.
Some applications of these models in reliability estimation and decision-making have been reviewed by others. For
example, remaining useful life prediction during field use has been reviewed by Si et al. [1], while Jardine et al. [2] provided
an excellent review on condition-based maintenance models. Therefore, we do not intend to review these topics in this
paper. We find that application of degradation models in burn-in testing is still underdeveloped. Therefore, we will review
this important topic after introducing the degradation models.
The remainder of this paper is organized as follows. Section 2 describes the Wiener process model and various extensions
based on it. Section 3 presents some other continuous-time and continuous-state stochastic process models as well as
general path models. A comparison between these two categories of models is given. Section 4 briefly reviews some other
degradation models. Section 5 reviews degradation test planning for models in these two categories. Section 6 discusses
degradation-based burn-in testing models as one promising application of the degradation models for decision-making. A short conclusion is given in Section 7.
The premise of degradation-based decision-making is to choose an appropriate degradation model for the product, based
on the degradation physics or on the degradation data. There are two large classes of degradation models for degradation
data, that is, stochastic process models and general path models. In addition, there are some other models that cannot be
classified into these two classes. Because of the intensive application of the Wiener process in degradation modelling, this section is devoted to a comprehensive review of Wiener degradation processes. Various variants of the Wiener process are
reviewed.
the resulting degradation increments in disjoint time intervals are also independent. In this regard, the Wiener process,
whose increment is normally distributed, is a good model for the degradation. The normal distribution for the increments
also offers the Wiener process many nice properties. A basic Wiener process model {X(t); t ⩾ 0} is often expressed as

X(t) = νΛ(t) + σB(Λ(t)),    (1)
where 𝜈 is the drift parameter reflecting the rate of degradation, 𝜎 is the volatility parameter and B(⋅) is the standard
Brownian motion. The monotone increasing function Λ(⋅), proposed by Whitmore and Schenkelberg [3], is called a general time scale and represents the nonlinearity of the degradation paths. X(t) is often used to represent system degradation.
Sometimes, it represents the transformed system degradation. For instance, X(t) might be the logarithm of the degradation
if there is a nonnegative requirement on the degradation values. Then the underlying degradation follows a geometric
Brownian motion [4]. The Wiener process has independent and normally distributed increments, that is, ΔX(t) = X(t + Δt) − X(t) is independent of X(t), and

ΔX(t) ∼ N(νΛ(t + Δt) − νΛ(t), σ²Λ(t + Δt) − σ²Λ(t)).
The first passage time T of the Wiener process to a fixed threshold D, which is often defined to be the failure time of a product, follows a transformed IG distribution, that is, Λ(T) ∼ IG(D/ν, D²/σ²), where IG(a, b) has a probability density function (PDF) of

f_IG(y; a, b) = [b/(2πy³)]^(1/2) exp[−b(y − a)²/(2a²y)],  y > 0.
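As a minimal numerical illustration of the basic model (1) and the induced IG first-passage distribution, the following Python sketch simulates Wiener degradation paths under an assumed power-law time scale Λ(t) = t^1.2 and compares the empirical mean of Λ(T) with its theoretical value D∕ν; all parameter values are hypothetical and chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions, not taken from the paper)
nu, sigma, D = 2.0, 1.0, 50.0          # drift, volatility, failure threshold
Lam = lambda t: t**1.2                  # an assumed power-law time scale Lambda(t)

dt, t_max, n_units = 0.05, 60.0, 500
t = np.arange(dt, t_max + dt, dt)
dLam = np.diff(np.concatenate(([0.0], Lam(t))))

# Simulate X(t) = nu*Lambda(t) + sigma*B(Lambda(t)) via independent normal increments
incr = rng.normal(nu * dLam, sigma * np.sqrt(dLam), size=(n_units, t.size))
X = np.cumsum(incr, axis=1)

# First passage time to the threshold D for each simulated unit
first_idx = np.argmax(X >= D, axis=1)
hit = X.max(axis=1) >= D
T = t[first_idx[hit]]

# Lambda(T) should follow IG(D/nu, D^2/sigma^2), whose mean is D/nu
print("empirical mean of Lambda(T):", Lam(T).mean())
print("theoretical mean D/nu      :", D / nu)
```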
The Wiener process has received wide applications in degradation data analysis. Doksum and Høyland [5] assumed that degradation of cable insulation follows the Wiener process, so that failure data can be analysed using the IG distribution. Park and Padgett [6] used the model to fit fatigue data of metals. Wang et al. [7] used the model to fit the head wear data of
hard disk drives. The basic model (1) provides a very useful basis for degradation analysis. Because of the consideration
of different applications, modifications are often made to this model so that it works for different problems. Generally
speaking, there are three variations to the basic model: (a) measurement errors, (b) random effects and (c) covariates. We
review the three classes of variants in what follows.
where ε represents the error term, which is often assumed to be time independent. A deficiency in the original inference procedure is that the first degradation data point is not utilized. Ye et al. [9, 10] remedied this deficiency and proposed a new inference
procedure based on the EM algorithm. Tang et al. [11] applied the model to remaining useful life prediction for lithium-ion
batteries. The covariance matrix of the random measurements at different time points, that is, covariance of Y(u), Y(t), u ≠ t,
was investigated by Peng and Hsu [12]. Peng and Tseng [13] further investigated random-effects Wiener process with
measurement errors.
these stress factors are observable, they can be incorporated into (1) through some acceleration relation. Because of the
need in analysis of accelerated degradation test data, different approaches have been proposed to incorporate them into
basic Wiener process (1). More specifically, most existing approaches allow some parameters of the stochastic processes
to be functions of the covariates. Choosing an appropriate way to incorporate covariates requires a good understanding of how the acceleration factors affect the parameters of the model. For example, Doksum and Normand [14] used the
Wiener process to describe a biomarker series, and they assumed that 𝜈 is a function of the covariates while 𝜎 is a constant.
This assumption is also adopted in Tang et al. [15], Liao and Tseng [16] and Lim and Yum [17]. The rate parameter 𝜈
can be linked to the stress in a number of ways. The functional relation between 𝜈 and the stress is called a link function.
Doksum and Normand [14] and Tang et al. [15] assumed a linear link function for the transformed degradation data, while
Padgett and Tomlinson [18] assumed a power law. Liao and Tseng [16] used the Arrhenius link function to describe the
effect of temperature on LED degradation. Park and Padgett [4] considered various choices for the link function. Tsai et
al. [19] considered a general function called the generalized Eyring model. Liao and Tseng [16] used two stresses, that
is, temperature and electric current, in their LED experiment, and they used an independent combination of the Arrhenius
relation and the inverse power law to model their respective effects. For ease of reference, Table I lists some useful link functions for a single stress s, where β0 and β1 are model parameters.
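Since the exact entries of Table I are not reproduced here, the following sketch simply codes three standard single-stress link functions (Arrhenius, power law and exponential) for the drift ν(s); the parameter values in the example call are hypothetical.

```python
import numpy as np

# Standard single-stress link functions for the degradation rate nu(s);
# beta0 and beta1 are model parameters. These are common textbook forms and
# may differ in detail from the entries of Table I.
def arrhenius(temp_K, beta0, beta1):
    """nu = exp(beta0 - beta1 / T): higher absolute temperature gives a larger drift."""
    return np.exp(beta0 - beta1 / temp_K)

def power_law(s, beta0, beta1):
    """nu = exp(beta0) * s**beta1, i.e. log(nu) is linear in log(s)."""
    return np.exp(beta0) * s**beta1

def exponential(s, beta0, beta1):
    """nu = exp(beta0 + beta1 * s), i.e. log(nu) is linear in s."""
    return np.exp(beta0 + beta1 * s)

# Example: drift at an accelerated condition (358 K) versus the use condition (298 K)
print(arrhenius(np.array([358.0, 298.0]), beta0=8.0, beta1=3000.0))
```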
In practical applications, it is often found that the degradation variation may also change with the stress. Intuition suggests that the variation increases with the stress. To capture the dependence of ν and σ on the stress, Whitmore and
Schenkelberg [3] used a two-step procedure. They fitted degradation data of each individual self-regulating heating cable
and then used linear regression to establish the relationship between 𝜈 and the transformed stress as well as the relationship
between 𝜎 and the stress. Their analysis revealed that both 𝜈 and 𝜎 are increasing in the testing temperature. A similar
method is adopted in Joseph and Yu [20]. Liao and Elsayed [21] also assumed that both 𝜈 and 𝜎 in (1) are increasing
functions of the stress. Then they applied the model to light intensity degradation of a light-emitting diode. If we assume
𝜈 and 𝜎 are independent functions of the stresses, however, there might be excessive parameters in the model. To reduce
the number of parameters, Peng and Tseng [22] proposed a cumulative exposure model for covariates. The cumulative
exposure model assumes that operation under a given stress for a duration of time t is equivalent to the operation under a
baseline stress for a duration of 𝜌s × t, where 𝜌s is a scaling factor dependent on the stress s.
can be found in Tsai et al. [26], Si et al. [27] and Wang et al. [28]. A similar idea of imposing a normal distribution on ν has been used extensively from the Bayesian perspective. See Bian and Gebraeel [29], Liao and Tian [30] and Bian and Gebraeel
[31]. Peng and Tseng [32] proposed a more general random-drift model by assuming a skew-normal distribution for 𝜈,
which includes the normal distribution and the truncated normal as special cases. All the aforementioned models assume a
constant volatility parameter 𝜎 for all product units. It is natural to suspect that the volatility parameter is also unit-specific.
Wang [33] introduced a random-effects model by letting σ⁻² ∼ Gamma(r, δ) and [ν|σ²] ∼ N(υ, θσ²). He then investigated
semiparametric estimation of the model parameters. Based on this random-effects model, a larger realization of 𝜎 would
result in a larger variation in 𝜈. However, this may not be sensible in real applications. It is not uncommon to see that when
the degradation of a product unit is faster, the variation of the degradation over time is also higher. This means that a large
σ often implies a large ν, rather than a large variation in ν. More studies are needed to remedy this deficiency.
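The following sketch illustrates how unit-to-unit heterogeneity can be generated under the random-effects structure of Wang [33], that is, σ⁻² ∼ Gamma(r, δ) and ν|σ² ∼ N(υ, θσ²); the hyperparameter values are assumptions for demonstration, and δ is treated as a rate parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative hyperparameters (assumptions): sigma^{-2} ~ Gamma(r, delta) with
# delta treated as a rate, and nu | sigma^2 ~ N(upsilon, theta * sigma^2).
r, delta, upsilon, theta = 4.0, 2.0, 1.0, 0.5
n_units, n_obs, dt = 5, 100, 0.1

precisions = rng.gamma(shape=r, scale=1.0 / delta, size=n_units)   # sigma^{-2}
sigma2 = 1.0 / precisions
nu = rng.normal(upsilon, np.sqrt(theta * sigma2))                   # nu | sigma^2

# Unit-specific Wiener degradation paths X_i(t) = nu_i * t + sigma_i * B(t)
t = np.arange(1, n_obs + 1) * dt
paths = np.cumsum(
    rng.normal(nu[:, None] * dt, np.sqrt(sigma2)[:, None] * np.sqrt(dt),
               size=(n_units, n_obs)),
    axis=1,
)
print(paths[:, -1])   # degradation level of each unit at the last time point
```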
A random-effects model can also be built for unobservable exogenous factors such as usage rates. A simple method
is to assume that the usage rate for a product unit is constant while different product units have different usage rates.
Then the usage rate can be treated as unit-specific, and the random-effects models reviewed previously can be applied
to capture the heterogeneities in the usage rate. On the other hand, if the usage rate is significantly non-constant, we can
model the cumulative usage W(t) as a stochastic process and then the time scale t in the basic model (1) is replaced by
W(t). This means that we are using a random time scale. Wang [34] presented such a model by assuming an IG process for
W(t). Alternatively, we can assume that the operating environments of a system have discrete states, each corresponding
to a different degradation rate, and the states evolve in a stochastic way. Si et al. [35] assumed a two-state Markov chain
model for the random environments, and they used the Wiener process to model degradation of a system with working and
storage states.
After reviewing the Wiener process in detail, this section presents some other stochastic process models as well as general
path models. Following the logic in the last section, we will try to explain the physical mechanisms of each model, and
then review variants including covariates, random effects and measurement errors. A comparison between the class of
stochastic models and the class of general path models will also be given.
The degradation increment ΔY(t) = Y(t + u) − Y(u) over the interval (u, u + t] follows a Gamma distribution with probability density function

f_ΔY(t)(y; μ, η(⋅)) = [μ(μy)^(η(t+u)−η(u)−1) / Γ(η(t + u) − η(u))] exp(−μy),    (3)
where 𝜇 is the scale parameter and 𝜂(t) is the shape function, which is required to be nonnegative and monotone increasing.
Similar to the Wiener process, covariates and random effects can be incorporated into the Gamma process. Generally,
there are two methods to incorporate covariates and one method to incorporate random effects. To incorporate covariates,
Bagdonavicius and Nikulin [39] used the method of additive accumulation of damage to incorporate covariates into the
Gamma process (3). The method is similar to the cumulative exposure model introduced in Section 2. Park and Padgett
[40] made a straightforward extension by considering more than one stress factor. Lawless and Crowder [41] assumed
that the scale parameter 𝜇 is a function of covariates. To incorporate random effects, Lawless and Crowder [41] assumed
that the scale 𝜇 is random for different units and it follows a Gamma distribution.
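As an illustration of the Gamma process defined by (3), the following sketch simulates sample paths by summing independent Gamma-distributed increments with shape η(t + u) − η(u) and rate μ; the shape function and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a Gamma degradation process with shape function eta(t) and rate mu,
# matching the increment distribution in (3). All values are illustrative.
mu = 2.0
eta = lambda t: 0.8 * t**1.1          # nonnegative, monotone increasing shape function

t = np.linspace(0.0, 20.0, 201)
d_eta = np.diff(eta(t))

# Independent Gamma increments: shape d_eta, rate mu (i.e. scale 1/mu)
increments = rng.gamma(shape=d_eta, scale=1.0 / mu, size=(10, d_eta.size))
Y = np.concatenate([np.zeros((10, 1)), np.cumsum(increments, axis=1)], axis=1)

# The mean of Y(t) equals eta(t)/mu for this Gamma process
print(Y[:, -1].mean(), eta(t[-1]) / mu)
```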
If we know the parametric form of 𝜂(⋅) and the effect of the stress, then statistical inference for the parameters can
be easily carried out through maximum-likelihood estimation (MLE) after degradation data are collected. For example,
parametric inferences of the Gamma process have been investigated by Bagdonavicius and Nikulin [39] and Lawless and
Crowder [41] and summarized by van Noortwijk [37]. When the parametric form is unknown, we can estimate 𝜂(⋅) nonparametrically and the other parameters parametrically. However, optimization of the resulting likelihood function tends to be difficult because of the large number of parameters. Wang [42] proposed a pseudo-likelihood method, which ignores the dependence between degradation measurements from the same unit. The pool-adjacent-violators algorithm can then be used to maximize the pseudo-likelihood function, and Wang [42] also investigated large-sample properties of the estimators. Wang [43]
investigated pseudo-likelihood estimation for the Gamma process with Gamma-distributed random effects. Ye et al. [44]
showed that the EM algorithm can be used to obtain the ML estimators under the semiparametric setting. They showed
that the ML estimators are more efficient than the pseudo-likelihood estimator in terms of bias and standard deviations.
It is very likely that degradation measurements are contaminated by measurement errors. Most often, the measurement
errors are white noise and do not accumulate over time. If we assume a normal distribution with mean 0 for the measurement error, the resulting distribution of the measurements does not have a closed form. Kallen and van Noortwijk [45] considered the measurement-error problem in the Gamma process. They used direct integration to obtain the distribution of the measurements and numerical methods to evaluate the integral. However, it is well recognized that a
high-dimensional integral is generally hard to compute and the approximation bias would be quite large in this brute-force
method. Zhou et al. [46] used the particle filter algorithm and the EM algorithm to filter the measurement errors. Lu et
al. [47] used the Genz transform and a quasi-Monte Carlo method to maximize the likelihood function. They found that the
method is much more effective than the direct maximization method.
The IG process was seldom used as a degradation model until a recent study by Wang and Xu [48]. This might be because there was no physical explanation for this process, so reliability engineers did not know how and when to use this process
for degradation modelling. To bridge the gap, Ye and Chen [49] explored physical meanings of the process. They found that
similar to the Gamma process, the IG process is also a limit of compound Poisson processes, but the shock size distribution
is different from that of the Gamma process. In addition, they found that the IG process is very flexible in incorporating
covariates and random effects compared with the Gamma process. The flexibility comes from the inverse relation between
the Wiener process and the IG process, as shown in what follows.
Recall that the first passage time of the Wiener process X(u) = νu + σB(u) to a fixed threshold Λ is an IG random variable Y ∼ IG(Λ/ν, Λ²/σ²). If we have a series of thresholds Λ(t) indexed by t, we will have a series of the corresponding failure times Y(t). It is easy to show that Y(t) follows an IG process, Y(t) ∼ IG(μΛ(t), λΛ²(t)), where μ = 1/ν and λ = 1/σ². It is interesting to find that as long as there is a way to incorporate covariates and random effects in the Wiener process, there is a corresponding way to incorporate covariates and random effects in the IG process through this inverse relation. For example, in the Wiener process, we may assume that ν is normal and σ fixed, or we may assume σ⁻² ∼ Gamma(r, δ) and [ν|σ²] ∼ N(υ, θσ²). Accordingly, we can assume μ = 1/ν is normal, or assume λ = σ⁻² ∼ Gamma(r, δ) and [1/μ|λ] ∼ N(υ, θ/λ) in the IG process. Then the marginal distributions of the resulting IG random-effects models are
tractable. Similarly, the ways of incorporating covariates in the Wiener process, as reviewed in Section 2.3, can be extended
to the IG process through the inverse relation.
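A minimal simulation sketch of the IG process follows: increments over disjoint intervals are drawn independently as IG(μΔΛ, λ(ΔΛ)²), which is consistent with Y(t) ∼ IG(μΛ(t), λΛ²(t)); all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an IG degradation process Y(t) ~ IG(mu*Lambda(t), lambda*Lambda(t)^2)
# by summing independent IG increments; parameter values are illustrative.
mu, lam = 0.5, 8.0
Lam = lambda t: t            # linear time scale for simplicity

t = np.linspace(0.0, 30.0, 301)
dLam = np.diff(Lam(t))

# Increment over (t_j, t_{j+1}] ~ IG(mu*dLam, lam*dLam^2); numpy's wald(mean, scale)
# uses exactly the (mean, shape) parameterization of IG(a, b) given above.
incr = rng.wald(mean=mu * dLam, scale=lam * dLam**2, size=(8, dLam.size))
Y = np.concatenate([np.zeros((8, 1)), np.cumsum(incr, axis=1)], axis=1)

print("mean of Y(30):", Y[:, -1].mean(), "theoretical:", mu * Lam(t[-1]))
```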
In the work by Wang and Xu [48], a semiparametric estimation procedure based on the EM algorithm was proposed
for the IG process, where the function Λ(t) is estimated nonparametrically. The EM algorithm requires expectations of
the conditional increments of the IG process, which are hard to obtain. Closed-form expressions for the expectations were
derived in Ye [50]. The results facilitate implementation of the EM algorithm for the IG process. Peng et al. [51] developed
Bayesian methods for the IG process. Nevertheless, the IG process is still new in the degradation literature. Properties of
the process for degradation modelling need further exploration.
where Λi(t) is the true degradation path, which depends on some random parameters called random effects, and εi are normally distributed measurement errors with εi ∼ N(0, σ²). For a given unit, the random-effects parameters in Λi(t) are fixed yet unknown.
This means that conditional on the random-effects parameters, Λi (t) is a fixed function. If the general path model assumption
is made, then as long as the random-effects parameters are known, the failure time, that is, the first hitting time to a fixed
threshold D, is deterministic. Therefore, the randomness in the failure time comes from the randomness of the parameters.
For example, we may assume
Λ(t) = 𝜙 + Θt, (6)
where 𝜙 is a fixed parameter the same for all product units, while Θ is unit-specific and it follows a certain distribution Φ(⋅).
Then conditional on Θ, the failure time is fixed and equal to (D − 𝜙)∕Θ. Unconditionally, the failure time distribution is
P(T < t) = P((D − 𝜙)∕Θ < t) = P(Θ > (D − 𝜙)∕t) = 1 − Φ((D − 𝜙)∕t).
The linear degradation path model is used by Freitas et al. [53] to model train wheel wear data. They found that a Weibull or
lognormal distribution is sufficient for the random effects. When other parametric forms for Λ(t) are assumed, the lifetime
distribution can be obtained in a similar way. Parameters in the model should be estimated before we apply the model for
decision-making and failure forecasting. The parameters depend on the specific form of Λ(t). If Λ(t) takes the form in (6),
the parameters are σ², φ and the parameters in the distribution of Θ.
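To make the failure-time calculation concrete, the sketch below evaluates P(T < t) = 1 − Φ((D − φ)∕t) for the linear path (6) under an assumed lognormal distribution for Θ and checks it by Monte Carlo; the numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import lognorm

# Failure-time distribution implied by the linear path Lambda(t) = phi + Theta*t
# and a fixed threshold D, as in (6): P(T < t) = 1 - F_Theta((D - phi)/t).
# A lognormal slope distribution is assumed here purely for illustration.
phi, D = 2.0, 50.0
theta_dist = lognorm(s=0.4, scale=1.5)        # Theta with median 1.5 (assumption)

def failure_cdf(t):
    t = np.asarray(t, dtype=float)
    return 1.0 - theta_dist.cdf((D - phi) / t)

t_grid = np.array([10.0, 20.0, 40.0, 80.0])
print(failure_cdf(t_grid))

# Monte Carlo check: T = (D - phi)/Theta
T = (D - phi) / theta_dist.rvs(size=100_000, random_state=1)
print([(T < t).mean() for t in t_grid])
```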
Regarding the parameter estimation, Lu and Meeker [52] proposed a two-stage method for statistical inference. At stage
1, a regression method is used to estimate the unknown parameters for each product unit, including the measurement-error
variance 𝜎 2 , the fixed-effects parameters and the random-effects parameters. At the second stage, the variance estimates and
fixed-effects parameter estimates are averaged to obtain overall estimates, while the assumed random-effects distribution is fitted to the unit-specific estimates of the random-effects parameters. Su et al. [54] considered the scenario of a random number of measurements
for each product unit. They showed that the two-stage least-squares estimators are not consistent under this setting. On the
other hand, MLE provides a consistent estimator and is also statistically more efficient under small sample sizes. Weaver
et al. [55] also used MLE for the estimation. They further studied planning of degradation tests (without acceleration) and
examined effects of sample size on the estimation precision.
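The following sketch outlines the two-stage idea on simulated data: stage 1 fits a per-unit least-squares regression, and stage 2 pools the fixed effect and fits a distribution to the estimated slopes. The simulated data and the lognormal choice for Θ are assumptions for illustration.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(5)

# Two-stage estimation sketch for the linear path model (6) with random slope Theta.
n_units, t = 30, np.arange(1.0, 11.0)
theta_true = lognorm(s=0.3, scale=2.0).rvs(size=n_units, random_state=2)
y = 1.0 + np.outer(theta_true, t) + rng.normal(0, 0.5, size=(n_units, t.size))

# Stage 1: per-unit regression y_ij = phi_i + theta_i * t_j + error
X = np.column_stack([np.ones_like(t), t])
coef = np.linalg.lstsq(X, y.T, rcond=None)[0]      # rows: intercepts, slopes
phi_hat, theta_hat = coef[0], coef[1]

# Stage 2: pool the fixed effect and fit the random-effects distribution
print("pooled intercept estimate:", phi_hat.mean())
shape, loc, scale = lognorm.fit(theta_hat, floc=0.0)
print("fitted lognormal for Theta: sigma=%.3f, median=%.3f" % (shape, scale))
```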
There are several ways to extend the general path model (6). The first is to consider other forms of the mean degradation
path Λ(t) based on different application problems. Boulanger and Escobar [56] suggested that there might be a ceiling for
Λ(t), for example, degradation of tyres and hard disks. They assumed that the amount of degradation over time levels off
towards a maximum degradation level. This is realized by assuming an increasing function for Λ(t) with Λ(∞) < ∞. Bae
and Kvam [57] extended the model by assuming that there are two phases in the degradation path. Bae et al. [58] investigated
different distributions for the random effects and derived the implied failure time distributions. Yu [59] assumed a linear
degradation path with a reciprocal Weibull-distributed degradation rate. He then developed optimal accelerated degradation test (ADT) plans based on the
reciprocal Weibull assumption. In the analysis of vibration signals of rolling bearings, an exponential form for Λ(t) is often
considered [31]. More generally, Shiau and Lin [60] proposed nonparametric regression methods to estimate Λ(t), while
Zhou et al. [61] used cubic spline for Λ(t).
The second way to extend the model is to consider different error structures. Lu et al. [62] extended (6) to non-constant
variance 𝜎 2 of the measurement errors. Yuan and Pandey [63] further extended to the case where the variance 𝜎 2 is a
function of the mean path Λ(t). Lin and Lee [64] considered correlated random errors and used the ARMA model to
fit the data.
The third way is to incorporate effects of stress factors into the model. The purpose of introducing the stress factors is mainly ADT planning and the subsequent data analysis. Meeker et al. [65] extended the
basic model (6) so that stress effects can be incorporated into the model. The idea of incorporating covariates into the model
is the same as that for stochastic-process models. That is, we let some parameters be functions of the stress. The link
function can be determined from degradation kinetics, or from empirical experience. After a stress-acceleration relation is
found, it can be applied to the data, and the parameters are readily obtained through maximizing the likelihood function.
After specifying an appropriate model and having a good estimate, an ADT plan can be derived to minimize costs or to
minimize the variance of an estimated reliability characteristic of interest. Details can be found in Section 5.
The fourth way is to use Bayesian methods to make use of prior information on the parameters. Robinson and Crowder
[66] and Hamada [67] discussed Bayesian estimation using the general path model. The main task is to find a proper prior
for the model parameters. Pan and Crispin [68] carried out an LED experiment to verify the general path assumption by
using a hierarchical Bayes model. Chen and Tsui [69] proposed a two-phase degradation model where the degradation
path is piecewise linear while the change time of the mean function is random. Historical data were used to obtain a prior
for the model parameters by using a Bayesian framework. Shi et al. [70] developed Bayesian inference and planning for
accelerated destructive degradation tests. Tang and Liu [71] made use of product degradation physics by using Bayesian
methods. They built physics-based statistical models and discussed the optimal design.
3.4. Comparisons between stochastic process models and general path models
In the early literature, most degradation models were built within the framework of general path models. This might be
because the general path models are very easy to use and the theory has been well established. Essentially, the general path
models are mixed-effects regression models, where some parameters are fixed for all units while others are unit-specific.
The mixed-effects regression models have been very well studied in the biostatistics literature. See the book by Lindstrom
[72]. Beyond its simplicity, the general path model has further advantages: (a) it is very flexible in incorporating random effects. After specifying a mean path function Λ(t), we can let some parameters be random across the product population, and the resulting distribution of the failure time, that is, the first passage time to a fixed threshold, often has a closed form. In comparison, a stochastic process model has limited ways of incorporating covariates and random effects. For example, the Gamma process has only one way to incorporate random effects. (b) The general path model is more robust
than process-based models. The basic model (6) is essentially a regression model. For a specific unit, the true degradation
path conditional on the random-effects parameters is a fixed function of time. Any deviation of the observed degradation
from the true path is due to measurement errors. This means that we can adjust the error term distribution to make the
model work for a specific degradation data set. Methods of adjusting the error distribution have been well-studied. For
example, we have efficient methods to deal with heteroscedasticity and autocorrelation.
The property of a fixed degradation path conditional on Θ is an advantage of the general path models. However,
a fixed underlying degradation path is a great simplification of the reality. The randomness of the observed degradation
consists of three parts; the first is the inherent randomness of each unit, for example, variations in the raw materials, the
randomness of crystalline grain and initial defect size. In addition, possible random changes in the physical conditions of
the product also contribute to the unexplained randomness of the degradation process. This source of variation is depicted
by using random-effects models. The second is measurement errors. The randomness from measurement errors can be
reduced if more advanced and accurate measurement device is used. In some degradation problems, for example, tyre
wear and fatigue cracks, the measurement device is often quite accurate and the measurement errors are negligible. If, on the other hand, the degradation is measured using electronic devices, such as acoustic emission devices, then the measurement errors are
often quite significant. The third is the unexplained randomness and dynamics due to unobserved environmental factors,
or unknown effects of random environments on the degradation process. Random ambient environments include random
usage pattern, temperature, voltage, humidity and vibration. If we have sufficient information on these environmental
factors, we can treat them as time-varying covariates in the degradation process and then the randomness in the degradation
would be reduced [73]. This means that the randomness in the environmental process explains parts of the randomness in
the observed degradation. In principle, if the inherent randomness of degradation is from random ambient environments,
we can reduce, or even eradicate, the randomness if all environment information is known. However, it is almost impossible
to observe all environmental factors. The randomness in the unobserved environmental factors will convert to unexplained
randomness in the observed degradation.
Stochastic processes are a natural choice for modelling the randomness in degradation processes caused by inherent
randomness and environmental factors. However, most stochastic process models are so complex that they are not handy for engineers to use. In addition, these complex processes generally do not possess good properties. For a
stochastic process model to be useful for degradation modelling, there are several requirements: (a) it has clear physical
explanations, (b) it is easy to understand and use and (c) it has good properties, for example, good mathematical properties,
easy to incorporate prior information and flexible in dealing with covariates and random effects. The three stochastic process
models reviewed previously, that is, the Wiener process, Gamma process and IG process, meet all the requirements, and
they are good candidates for degradation modelling. The stochastic nature of these processes is capable of modelling the
unexplained randomness of the degradation over time because of unobserved environmental factors, or the unknown effects
of the environmental factors on the degradation process. On the other hand, the general path model simply assumes that
the inherent degradation is deterministic, which is an oversimplification of reality. The general path model is applicable
only when the unexplained randomness due to environmental factors is small enough. If the randomness due to random
environments is not captured in a degradation model, the variance of the lifetime would be underestimated. Decision
supports, for example, lifetime prediction and condition monitoring, will also be inaccurate if the randomness caused by
environments is significant.
Other than the models discussed in the previous two sections, there are many other models available for degradation
modelling in the literature, although they have not received as much attention yet. The first class of models is the delay time model. The basic model assumes that the degradation process consists of two phases with a defect initiation time as the split
point. Before the defect initiation, there is no deterioration. The period from product installation to defect initiation is called
delay time. After the introduction of a defect, degradation would accumulate and cause product failure. See Wang [74] for
an excellent overview on this topic.
The second class is shock models. Models in this class focus on the cumulative damage caused by random shocks, where
the unit sustains a random amount of damage each time a shock arrives. Realistic shock models include extreme shock
models [75], cumulative shock models [76], mixed shock models [77] and 𝛿-shock models [78]. Li and Luo [79] further
considered a Markov-modulated shock process where both the shock arrival process and the random shock damage are
governed by a Markov chain. Nakagawa [80] provided a book-length treatment on this topic.
The third class is continuous-time Markov models. Markov chains {Xt , t > 0} are an important class of probability
models because Markov chains have good properties and have been well-studied in the literature of probability theory.
There are two ways to use Markov chain in degradation modelling. The first way is to assume that degradation states of a
system are discrete and finite. With usage over time, the degradation status jumps towards the worst state, which is often
a failure state. The jump time distribution can be either exponential or non-exponential, leading to Markov models and
semi-Markov models, respectively. This type of model is often seen in preventive maintenance modelling, as existing tools such as Markov decision processes can be utilized. See Soro et al. [81], Yin et al. [82] and Zhong and Jin [83] for some applications. Another way is to assume that the evolution of the environment has a Markovian structure. Kharoufeh [84] and Kharoufeh and Cox [85] assumed that the degradation rate of a system is totally dependent on the random ambient environment and that the evolution of the random environment is characterized by a stationary continuous-time Markov chain. Kharoufeh et al. [86] further extended this to semi-Markov models for the environment evolution. It is possible that observations of the
states of degradation or environments are compounded with measurement errors. Markov and semi-Markov models with
measurement errors are called hidden Markov models (HMM) and hidden semi-Markov models, respectively. For example,
Xu et al. [87] considered a real-time reliability prediction problem for a dynamic system whose degradation measurements
constitute a HMM. They utilized particle filtering to identify the unobservable systems states. Byon and Ding [88] used
HMM to model a multi-state deteriorating wind turbine.
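As a small illustration of environment-modulated degradation in the spirit of Kharoufeh [84], the sketch below lets a two-state continuous-time Markov chain switch the degradation rate and estimates the mean time to reach a threshold by simulation; all rates, the threshold and the two-state structure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical two-state Markov-modulated degradation model: the degradation
# rate depends on the current environmental state, and the environment jumps
# between the two states after exponentially distributed sojourn times.
rates = {0: 0.5, 1: 2.0}         # degradation rate in each environmental state
q = {0: 0.2, 1: 0.5}             # exponential transition rates out of each state

def simulate_failure_time(threshold=100.0):
    state, level, clock = 0, 0.0, 0.0
    while True:
        sojourn = rng.exponential(1.0 / q[state])
        # Does the degradation cross the threshold during this sojourn?
        if level + rates[state] * sojourn >= threshold:
            return clock + (threshold - level) / rates[state]
        level += rates[state] * sojourn
        clock += sojourn
        state = 1 - state          # two-state chain: jump to the other state

samples = np.array([simulate_failure_time() for _ in range(5000)])
print("estimated mean time to failure:", samples.mean())
```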
The remainder can be classified into the class of various data-driven approaches. Data-driven approaches attempt to esti-
mate reliability characteristics directly from various data sources, including degradation data. Most models can be classified
as filtering, machine learning, data fusion, fuzzy methods and data-driven statistical methods. Machine learning methods
include artificial neural networks [89], support vector machines [90] and Bayesian networks [91]. Data fusion techniques
try to fuse different sources of data in order to improve the estimation accuracy as much as possible. Degradation data are
of course an important source of data for fusion [92]. Fuzzy methods are an alternative to probability methods. Applications of fuzzy methods to degradation data are also not uncommon [93]. In addition to these methods, there are many other statistical methods, such as wavelet analysis, the particle filter and the Kalman filter. Because it is very difficult to systematically review these data-driven methods, we have decided to review models developed for two important classes of products with serious degradation problems, that is, gears and lithium-ion batteries. We hope that developments of degradation models in these two concrete problems can give readers a better idea of the various data-driven methods.
Gears are among the most common components used in mechanical transmission systems, and their failure results in machine breakdown. To prevent unexpected gear failures, gear performance degradation assessment should be conducted, and it has attracted much attention in recent years. Wang et al. [94] analysed gearbox vibration signals and used the discrete wavelet transform to retain a frequency band covering the gear meshing frequency and the modulating frequencies around it. However, this method is sensitive to inconstant loads. To relieve this problem, gear residual error signals extracted from acceleration error signals can be used [95]. Based on gear residual error signals, much work has been carried out. Miller [95] developed a fault growth indicator using the three-sigma rule to monitor abnormal gear health condition and
assess gear deterioration. Lin et al. [96] improved the fault growth indicator by adding some weights to it and applied the
improved indicator to proportional-hazards modelling for replacement decision-making. Miao and Makis [97] used wavelet
modulus maxima to train hidden Markov models for monitoring gear health condition. Anomaly detection by using hid-
den Markov models for gear degradation assessment was investigated by Miao et al. [98]. Considering signal singularity,
Miao et al. [99] constructed Lipschitz exponent from wavelet modulus maxima and used it to construct a health indicator
for reflecting gear health evolution. However, the aforementioned methods only aim to construct one health indicator for
early gear fault diagnosis and performance degradation assessment. Because early gear fault diagnosis and performance
degradation assessment are two different tasks, it is better to design two particular health indicators for the two tasks separately. Based on this idea, Wang et al. [100] extracted 11 statistical features from the gear residual error signal and divided them into two subsets according to their sensitivities to early gear faults, based on which early gear faults
and gear health degradation can be tracked simultaneously.
Lithium-ion batteries are widely used in commercial products. Accurate estimation of remaining useful lives of lithium-
ion batteries ensures continuous power supply. In recent years, many data-driven models have been developed to analyse
degradation of lithium-ion batteries. The Kalman filter is the basic algorithm for iteratively calculating and updating the parameters of a state space model for battery prognosis. However, for the Kalman filter to be applicable, the state and measurement functions of the state space model must be linear and the noises Gaussian [101]. In many real cases, these restrictions are not satisfied, and variants of the Kalman filter, such as the extended Kalman filter [101] and the unscented Kalman filter, should be used. The extended Kalman filter linearizes the non-linear state and measurement
functions used in the state space model. Nevertheless, Jacobians of the state and measurement functions have to be derived.
In many cases, these Jacobians are difficult to obtain [102]. In other words, linearization of the state and measurement
functions is not an easy task. To avoid this problem, the unscented Kalman filter was developed to use unscented transform
to approximate the moments of the posterior distribution [103]. Calculation of the square root of the covariance matrix is a
major disadvantage of the unscented Kalman filter because this step is sometimes unstable. In addition to various variants
of the Kalman filters, the particle filter also attracts considerable attention for prognostics [104]. The major idea of the particle filter is to use many particles and their associated weights to approximate the posterior PDF. Compared with Kalman filters,
the use of particle filters is more flexible for any non-linear functions [105, 106]. However, how to choose a proper proposal
distribution for the use of the particle filter is still a difficult problem. To relieve this problem, unscented particle filter can
be used to provide a proposal distribution by using the unscented Kalman filter first. Then, the particle filter can be used to
estimate remaining useful life of lithium-ion batteries [107]. Another efficient tool for battery degradation is the relevance
vector machine. This method aims to find some critical data called relevance vectors to determine the parameters in the
battery degradation model. We also note that the aforementioned data-driven methods used for battery lifetime prognosis
have the same disadvantage, that is, a simple exponential function is assumed for the mean degradation model of a battery.
This may not be true because battery degradation is a complicated electrochemistry process. Therefore, a physical battery
degradation model may be used to improve prediction accuracy, which can be an interesting topic for future research.
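To illustrate the filtering idea, the following sketch runs a bootstrap particle filter on simulated capacity measurements from a hypothetical exponential fade model; the model form, noise levels and all numerical values are assumptions for demonstration, not a validated battery model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical exponential capacity-fade model (an assumption for illustration):
#   C_k = a * exp(-b * k) + process noise,   y_k = C_k + measurement noise
a_true, b_true, K = 1.0, 0.01, 50
caps = a_true * np.exp(-b_true * np.arange(K)) + rng.normal(0, 0.002, K)
y = caps + rng.normal(0, 0.005, K)            # noisy capacity measurements

# Bootstrap particle filter tracking the state (capacity, fade rate)
N = 2000
particles = np.column_stack([
    rng.normal(1.0, 0.05, N),                 # initial capacity guesses
    rng.uniform(0.001, 0.05, N),              # initial fade-rate guesses
])
weights = np.full(N, 1.0 / N)
sigma_v, sigma_w = 0.002, 0.005               # process / measurement noise s.d.

for k in range(K):
    # Propagate: one step of exponential fade plus process noise
    particles[:, 0] *= np.exp(-particles[:, 1])
    particles[:, 0] += rng.normal(0, sigma_v, N)
    # Weight by the likelihood of the new measurement
    weights *= np.exp(-0.5 * ((y[k] - particles[:, 0]) / sigma_w) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

print("filtered capacity :", np.sum(weights * particles[:, 0]))
print("filtered fade rate:", np.sum(weights * particles[:, 1]))
```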
Degradation data may come from historical database, such as field data. For a newly developed product, degradation data
may have to come from lab testing on a number of initial specimens. The lab test is a necessary procedure in manufacturing
industry to assess the reliability of a product before product launch. Traditionally, the test is conducted to collect failure
data of the product. As products become increasingly reliable, an unaffordably long testing time is needed to observe a failure. When degradation of the product is observable and is highly related to product lifetime, a degradation test can be used instead. Degradation data are collected from the test and are used to estimate reliability characteristics of interest.
In order to effectively use the data in the estimation, a well-planned degradation test is needed. The plan variables can be
testing stress, number of samples allocated to each stress, measurement times and so on. In this section, degradation test
planning for the stochastic process models and the general path models is reviewed.
Before planning an ADT, it is technically convenient to normalize the stress so that the link function can be written in a
uniform way for different acceleration relations. See Lim and Yum [17] and Ye et al. [109] for details of the transformation.
Then, it is important to specify the effects of stresses on the degradation process. The stress is a covariate for the Wiener
process, and thus, the stress can be incorporated into the Wiener process through the relations discussed in Section 2.3. The
objective of the test can be minimizing testing costs [15] or minimizing the variance of reliability characteristics of interest,
for example, the p-th quantile. Most often, the planning requires the first hitting time distribution, that is, the induced failure
time distribution. The first hitting time of a Wiener process follows an IG distribution, and reliability characteristics such
as the p-th quantile can be obtained accordingly.
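For example, under the basic Wiener model with Λ(t) = t, the lifetime at the use condition follows IG(D∕ν, D²∕σ²), and a quantile of interest can be computed once the drift has been extrapolated through the link function. The sketch below does this with an Arrhenius-style link; the fitted values are hypothetical.

```python
import numpy as np
from scipy.stats import invgauss

# Hypothetical fitted values (assumptions for illustration only)
beta0, beta1 = 2.0, -0.04          # Arrhenius-style link on the log-drift
sigma2 = 1.5                        # volatility parameter sigma^2
D = 100.0                           # failure threshold
temp_use_K = 298.0                  # nominal use temperature

# Drift at the use condition: log(nu) = beta0 + beta1 * 11605 / T (11605 = 1/k in K/eV)
nu_use = np.exp(beta0 + beta1 * 11605.0 / temp_use_K)

# First passage time T ~ IG(D/nu, D^2/sigma^2) when Lambda(t) = t
a, b = D / nu_use, D**2 / sigma2
T_dist = invgauss(mu=a / b, scale=b)     # scipy parameterization of IG(a, b)

p = 0.1
print("estimated 10th percentile of lifetime at use stress:", T_dist.ppf(p))
```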
In the literature, Lim and Yum [17] investigated optimal constant-stress ADT for the Wiener process. Their objective is
to minimize the asymptotic variance of the estimated p-th quantile by properly choosing the testing levels and the allocation
proportion. There are some advantages of using the constant-stress ADT: (1) it is very convenient to conduct the test, and
(2) it is easy to verify the Wiener-process assumption and the validity of the acceleration relation. However, ADT is quite
costly because it consumes much energy and manpower. The cost is approximately linear in the number of samples. If a
constant-stress ADT is used, we need to specify several stress levels first and then allocate a number of samples to each
level. The number of samples in each stress level should not be too small. Otherwise, the extrapolation to the nominal
use stress would be inaccurate. When the product is expensive or when it is difficult to produce too many prototypes for
testing, it may not be practical to have sufficient units for a simple ADT. A method to avoid too many testing samples is to
use a step-stress ADT under which the test stress is increased in steps from the lowest to the highest sequentially. Tang et
al. [15] investigated optimal step-stress ADT design based on the Wiener process. They minimized the testing costs with
a prerequisite precision level for the estimated mean life as a constraint. Liao and Tseng [16] investigated optimal step-
stress ADT for their LED product. Their purpose is to minimize the asymptotic variance of the p-th quantile by properly
choosing the sample size, the measurement frequency under each stress level and the termination time. Apart from the cost savings, another advantage of the step-stress ADT is that it begins the test at a low stress and increases the stress gradually.
The gradual change in the stress avoids sudden stress shocks that may cause unexpected failures. Peng and Tseng [22]
even considered progressive stress ADT, under which the stress of the testing units increases with time at some constant
rate. This type of test is quite complicated in real applications, because it is not easy for equipment to increase the stress continuously, especially when the stress is temperature. In addition, under progressive stress, many
commonly used acceleration relations, for example, the Arrhenius relation, will fail because of the acceleration physics.
None of the aforementioned planning studies consider measurement errors in the observations. Ignoring the measurement errors might cause serious problems. This is because without measurement errors, the degradation increment in a
very small time interval might carry almost the same information as the increment over a large time interval. Then the
planning results may suggest multiple measurements within a very short testing period, which is not realistic in practice.
optimal ADT design to minimize the testing costs. Marseguerra et al. [116] considered ADT planning with dual objectives, that is, the asymptotic variance of an estimator and the testing costs. Shi et al. [70] considered destructive degradation measurements, so that each device can provide only a single measurement during the test. Shi and Meeker
[117] revisited the problem by using Bayesian methods. All the aforementioned studies fixed measurement times to obtain
degradation measurements. Yang and Yang [118] proposed an inverse sampling plan by fixing the degradation levels and
collecting the first hitting time of the degradation process to these levels.
To meet a sequence of performance specifications, the design reliability of a new product is often very high. After a product
is manufactured, however, some defects might be introduced, for example, non-conforming components and assembly
defects, making some product units fail much earlier than the majority. Before the product is sold to customers, all product
units are subject to a screening test called burn-in testing. Burn-in is an important engineering approach towards the end
of the production process. It is used to identify and eliminate weak units by subjecting all units to harsh environments that simulate the severest working conditions for a certain duration, with the purpose of bringing out latent defects that might otherwise surface as early failures in the field (Ye et al. [119]). Most units surviving the burn-in test are believed to
be defect-free and can then be sold to customers.
Traditionally, most burn-in models are failure-based in the sense that a unit is scrapped only if it fails at the end of
burn-in. As we have argued earlier, many modern products are so well designed and manufactured that they are highly
reliable. It may take a very long time for a defective unit to fail even under highly accelerated stresses. In addition, failure
mechanisms for modern reliable products are increasingly more complex. In practice, some quality characteristic of a
product closely related to the product lifetime usually degrades over time and causes a product failure when the degradation
level of such quality characteristic exceeds a certain threshold, which is often stipulated by the industrial standards. If the
quality characteristic of a defective unit degrades faster than a normal one, this unit can be effectively identified through
degradation-based burn-in. In a degradation-based burn-in test, all units are subject to severe testing environments for a
certain duration, after which the degradation of each unit is measured. If the degradation exceeds a burn-in cut-off level,
which is much smaller than the failure threshold, then the unit is deemed unacceptable and it is scrapped. Otherwise, it
passes the burn-in test. See Figure 2 for a demonstration of this burn-in scheme. Because the cut-off level is much smaller
than the failure threshold, degradation-based burn-in is able to identify a defective unit much earlier than it fails. However,
degradation-based burn-in modelling is still a rather underdeveloped area.
Figure 2. A demonstration of degradation-based burn-in testing: the cut-off level is much smaller than the failure threshold, while the burn-in time is much shorter than the failure time of a defective unit.
The first attempt at degradation-based burn-in model is found in Tseng and Tang [120]. They used a diffusion process
to describe the degradation path of the LED lamps. They assumed there are two subpopulations, where the degradation path of the weak subpopulation has the same variance as, but a different mean from, that of the normal subpopulation. Then they
considered misclassification costs, that is, classifying a weak unit as normal or a normal unit as weak, and obtained
the optimal cut-off level for the test. This model is further extended by Tseng et al. [121] and Tseng and Peng [122] based
on variants of the Wiener process. Tseng et al. [121] considered a different scrapping criterion, where there are a number of
measurement times. Each measurement time is associated with a cut-off level. A unit is classified as a normal unit only if its
degradation measurements are smaller than the respective cut-off levels at all measurement time points. This method may
be able to bring down the misclassification rate of weak and normal units. However, it introduces inconvenience in practical use when the measurements are costly. Tseng and Peng [122] proposed using the integrated degradation, that is, the integral of the
degradation path for burn-in decision. A unit is scrapped if its integrated degradation is higher than a threshold. It seems that
this method requires continuous measurements of the degradation process in order to have the integrated degradation value
for each unit at the end of burn-in. Ye et al. [123] also used the Wiener process for degradation-based burn-in modelling.
They considered a single check-up measurement at the end of burn-in and derived the expected burn-in cost. They further
considered preventive maintenance after a burnt-in unit is put in use. Therefore, the total cost includes burn-in costs and
maintenance costs. Optimal burn-in cut-off level, burn-in duration and preventive maintenance intervals can be determined
by minimizing the total cost function.
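The following sketch illustrates the basic trade-off behind choosing a burn-in cut-off level under a two-subpopulation Wiener model in the spirit of Tseng and Tang [120]: lowering the cut-off scraps more normal units, while raising it lets more weak units pass. The cost structure and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Two-subpopulation Wiener model (illustrative assumptions): weak and normal units
# share the volatility sigma but have different drifts. At the end of a burn-in of
# length b, the degradation satisfies X(b) ~ N(nu_j * b, sigma^2 * b).
p_weak = 0.05                    # proportion of weak units
nu_weak, nu_normal, sigma = 3.0, 1.0, 0.8
b = 5.0                          # burn-in duration
c_pass_weak = 100.0              # cost of letting a weak unit pass
c_scrap_normal = 10.0            # cost of scrapping a normal unit

def expected_cost(cutoff):
    sd = sigma * np.sqrt(b)
    miss_weak = norm.cdf(cutoff, loc=nu_weak * b, scale=sd)      # weak unit passes
    scrap_normal = norm.sf(cutoff, loc=nu_normal * b, scale=sd)  # normal unit scrapped
    return (p_weak * c_pass_weak * miss_weak
            + (1 - p_weak) * c_scrap_normal * scrap_normal)

cutoffs = np.linspace(0.0, 20.0, 2001)
costs = np.array([expected_cost(x) for x in cutoffs])
print("cut-off level minimizing the expected cost:", cutoffs[costs.argmin()])
```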
On the other hand, burn-in models can be built based on the Gamma processes. Tsai et al. [124] obtained optimal cut-off
level based on mixed Gamma process models. That is, there are two subpopulations in the product population each with a
different degradation rate. It is well-known that products are generally subject to both degradation-failures, which are often
called soft failures, and sudden failures, which are often called hard failures. All the degradation-based burn-in studies above determined optimal cut-off levels based on the assumption that the product is subject to the degradation-threshold failure only. Ye
et al. [125] investigated an interesting problem about the influence of other failure modes on the optimal cut-off levels.
They found that in the presence of other failure modes, a more stringent scrapping criterion is needed. This means that the
optimal cut-off level is smaller than that without other failure modes. This is because when the product can also fail because of other failure mechanisms, the only way we can enhance the overall reliability is to adopt a more stringent criterion for the
degradation-threshold failure mode.
7. Conclusions
In this paper, we have presented a comprehensive review on degradation models useful for reliability analysis of complex
and highly reliable systems for which failure is costly and failure data are scarce. We focused on the class of stochastic
process models because they are able to capture the degradation dynamics of a system. We also briefly reviewed the class
of general path models and gave a comprehensive comparison between these two classes of models. Simply put, general path models are easy to use but lack the capacity to capture the system dynamics, while stochastic process models are the other way around. If we believe that one of these models provides a good fit to our product, a well-designed degradation
test helps collect degradation data more efficiently. The design of a degradation test was also reviewed in this study. Models
that cannot be classified into these two classes were reviewed in a separate section. To make these models easier to understand, we reviewed them in the context of two specific applications, that is, gear wear and lithium-ion batteries. At the end, we reviewed degradation-based burn-in models and concluded that this is an underdeveloped area.
As we come across more degradation problems, we will find that existing degradation models are far from adequate.
Real problems are often too complex while existing models are too simplistic. More research is needed to bring the models closer to reality. There are several ways to go. The first is to complete the current degradation models. The IG process, although meaningful and flexible, is still new in degradation modelling. Development of this model for reliability modelling
and decision-making is of interest. The second is the degradation physics. Current models for degradation are mostly data-
driven, although some of them have clear physical interpretation. Nevertheless, a good degradation model should be able
to integrate the degradation physics as well as available degradation data, which is also called a physical-statistical model.
The physics provides the basis for the modelling, while statistics captures the variations unexplained by the physics. For
example, the physics often suggests that there is an upper bound for the degradation, for example, wear of a tyre and corrosion of a smelting furnace. Wang [126] presented a continuous-state degradation model where the degradation follows
a Beta distribution. The model has a finite support, but it lacks physical explanation as the relation between degradation
levels at different time points is unknown. Degradation models with finite supports require more attention. Appropriate
models can probably come from degradation physics. The degradation physics may also suggest that the degradation incre-
ments are state-dependent, while the Wiener processes, Gamma processes and IG process have independent increments.
The Ornstein–Uhlenbeck process may be used in this context, which can be a good family of models for degradation. The
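As a generic illustration of state-dependent increments (a sketch in our own notation rather than a model taken from the papers reviewed), the degradation level $X(t)$ may be described by a diffusion whose drift depends on the current state,
\[
\mathrm{d}X(t) = \mu\bigl(X(t)\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}B(t),
\]
where $B(t)$ is a standard Brownian motion. The Ornstein–Uhlenbeck process corresponds to the linear drift $\mu(x) = \kappa(a - x)$ with $\kappa > 0$, under which the increments slow down as the degradation approaches the level $a$, a behaviour that also connects naturally with the bounded-degradation issue discussed above.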
The third concerns the time scale. A product is often used intermittently, and the cumulative usage is often random [10, 34]. The usage can be used as a random time scale for the degradation models reviewed in this study, and the properties of the resulting models need further investigation; one possible formalization is sketched below.
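One way to write this down (our notation) is to view the observed degradation as a time-changed process: if $X(\cdot)$ is any of the processes reviewed above and $U(t)$ denotes the random, non-decreasing cumulative usage accrued by calendar time $t$, then the degradation observed in calendar time is
\[
\widetilde{X}(t) = X\bigl(U(t)\bigr),
\]
and the failure time becomes the first calendar time at which $\widetilde{X}(t)$ reaches the failure threshold. The distributional properties of $\widetilde{X}$, and of the associated first passage time, depend on the joint behaviour of $X$ and $U$ and remain to be worked out for the models reviewed here.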
The fourth is goodness-of-fit testing. Model selection among candidate degradation models can readily be carried out using the AIC; a minimal illustration is given below. After the model with the smallest AIC has been identified, however, a goodness-of-fit test is still needed to check whether the selected model really fits the data. Apart from some graphical methods, no work has been found on formal goodness-of-fit tests for degradation processes.
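As a minimal sketch of such AIC-based selection (hypothetical data and function names, assuming a single unit inspected at fixed times and comparing a stationary Wiener process with a stationary gamma process fitted to the observed increments), one could proceed as follows:

import numpy as np
from scipy import stats, optimize

def aic_wiener(dt, dx):
    # Stationary Wiener process: increments dx_i ~ N(mu*dt_i, sigma^2*dt_i); closed-form MLEs.
    mu = dx.sum() / dt.sum()
    sigma2 = np.mean((dx - mu * dt) ** 2 / dt)
    loglik = stats.norm.logpdf(dx, loc=mu * dt, scale=np.sqrt(sigma2 * dt)).sum()
    return 2 * 2 - 2 * loglik  # k = 2 parameters

def aic_gamma(dt, dx):
    # Stationary gamma process: increments dx_i ~ Gamma(shape=alpha*dt_i, scale=beta).
    def nll(theta):
        alpha, beta = np.exp(theta)  # log-parametrization keeps both parameters positive
        return -stats.gamma.logpdf(dx, a=alpha * dt, scale=beta).sum()
    fit = optimize.minimize(nll, x0=np.log([1.0, dx.mean() / dt.mean()]), method="Nelder-Mead")
    return 2 * 2 + 2 * fit.fun  # k = 2 parameters

# Hypothetical degradation readings of one unit at inspection times t (increments must be
# positive for the gamma fit); in practice these would be the observed data.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
x = np.array([0.0, 1.2, 2.1, 3.5, 4.1, 5.6])
dt, dx = np.diff(t), np.diff(x)
print("AIC Wiener:", aic_wiener(dt, dx), "AIC gamma:", aic_gamma(dt, dx))

In a real analysis, multiple units, random effects, covariates and measurement errors would also have to be built into the likelihoods, and the model with the smallest AIC would still need a formal goodness-of-fit check.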
The fifth concerns multivariate degradation processes. Most degradation problems reported in the literature are one-dimensional. With the development of modern sensor technology, however, high-dimensional degradation measurements may become available. In multivariate degradation problems, the most challenging issue is to capture the dependence between the degradation processes in the different dimensions; in addition, the definition of a soft failure is no longer straightforward. A feasible way to deal with multivariate degradation is to reduce it to one dimension by means of a performance measure, that is, a function of the multivariate degradation measurements, as sketched below, but how to define and use such a function requires intensive investigation.
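In our notation, if $\mathbf{X}(t) = (X_1(t), \ldots, X_p(t))$ collects the degradation measurements in the $p$ dimensions, this amounts to monitoring a scalar performance measure $Z(t) = g\bigl(\mathbf{X}(t)\bigr)$ and declaring a soft failure at the first passage time
\[
T = \inf\{\, t \ge 0 : Z(t) \ge D \,\},
\]
for some threshold $D$. The open questions are how to choose $g$ and how the stochastic properties of $Z$ are inherited from those of the multivariate process $\mathbf{X}$.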
Last but not least, degradation models with measurement errors need much more investigation. Degradation is often measured indirectly, and measurement errors are a large source of variation in the degradation measurements. The Wiener process and the general path models accommodate such errors naturally, but this is not the case for the other processes and models. How to incorporate measurement errors into the other processes, for example, the Gamma and IG processes, is a good topic for future research; the commonly used additive formulation is recalled below. Once measurement errors are introduced, a series of questions needs to be addressed, including parameter estimation, model selection, goodness-of-fit testing and degradation test planning.
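For concreteness, the additive error formulation commonly used with Wiener-process degradation (see, e.g., [8, 9]) records the measurement of unit $i$ at time $t_{ij}$ as
\[
Y_{ij} = X_i(t_{ij}) + \varepsilon_{ij}, \qquad \varepsilon_{ij} \overset{\mathrm{iid}}{\sim} N(0, \sigma_\varepsilon^2),
\]
where $X_i(\cdot)$ is the latent degradation process. When $X_i(\cdot)$ is a Wiener process, the vector of observations remains multivariate Gaussian and the likelihood stays tractable; when $X_i(\cdot)$ is a Gamma or IG process, however, the marginal law of the observations no longer has a closed form, which is what makes the questions listed above nontrivial.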
Acknowledgements
The research by Zhisheng Ye is supported by the National Research Foundation Singapore under its Campus for Research
Excellence and Technological Enterprise (CREATE). The research by M. Xie is partially supported by a grant from
City University of Hong Kong (Project No. 9380058) and also by the National Natural Science Foundation of China (No. 71371163).
References
1. Si XS, Wang W, Hu CH, Zhou DH. Remaining useful life estimation – a review on the statistical data driven approaches. European Journal of
Operational Research 2011; 213(1):1–14.
2. Jardine AKS, Lin D, Banjevic D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mechanical
Systems and Signal Processing 2006; 20(7):1483–1510.
3. Whitmore GA, Schenkelberg F. Modelling accelerated degradation data using Wiener diffusion with a time scale transformation. Lifetime Data
Analysis 1997; 3(1):27–45.
4. Park C, Padgett WJ. Accelerated degradation models for failure based on geometric Brownian motion and gamma processes. Lifetime Data
Analysis 2005a; 11(4):511–527.
5. Doksum KA, Høyland A. Models for variable-stress accelerated life testing experiments based on Wiener processes and the inverse Gaussian
distribution. Technometrics 1992; 34(1):74–82.
6. Park C, Padgett WJ. New cumulative damage models for failure using stochastic processes as initial damage. IEEE Transactions on Reliability
2005b; 54(3):530–540.
7. Wang Y, Ye ZS, Tsui KL. Stochastic evaluation of magnetic head wears in hard disk drives. IEEE Transactions on Magnetics 2014; 50(5):1–7.
8. Whitmore GA. Estimating degradation by a Wiener diffusion process subject to measurement error. Lifetime Data Analysis 1995; 1(3):
307–319.
9. Ye ZS, Wang Y, Tsui KL, Pecht M. Degradation data analysis using Wiener processes with measurement errors. IEEE Transactions on
Reliability 2013a; 62(4):772–780.
10. Ye ZS, Chen LP, Xie M, Tang LC. Accelerated degradation test planning using the inverse Gaussian process. IEEE Transactions on Reliability
2014a; 63(3):750–763.
11. Tang S, Yu C, Wang X, Guo X, Si X. Remaining useful life prediction of lithium-ion batteries based on the Wiener process with measurement
error. Energies 2014; 7(2):520–547.
12. Peng CY, Hsu SC. A note on a Wiener process with measurement error. Applied Mathematics Letters 2012; 25(4):729–732.
13. Peng CY, Tseng ST. Mis-specification analysis of linear degradation models. IEEE Transactions on Reliability 2009; 58(3):444–455.
14. Doksum KA, Normand SLT. Gaussian models for degradation processes – part I: methods for the analysis of biomarker data. Lifetime Data
Analysis 1995; 1(2):131–144.
15. Tang LC, Yang G, Xie M. Planning of step-stress accelerated degradation test. Annual Reliability and Maintainability Symposium, 2004,
287–292.
16. Liao CM, Tseng ST. Optimal design for step-stress accelerated degradation tests. IEEE Transactions on Reliability 2006; 55(1):59–66.
17. Lim H, Yum BJ. Optimal design of accelerated degradation tests based on Wiener process models. Journal of Applied Statistics 2011; 38(2):
309–325.
18. Padgett WJ, Tomlinson MA. Inference from accelerated degradation and failure data based on Gaussian process models. Lifetime Data Analysis
2004; 10(2):191–206.
19. Tsai CC, Tseng ST, Balakrishnan N. Optimal design for degradation tests based on Gamma processes with random effects. IEEE Transactions
on Reliability 2012a; 61(1):604–613.
20. Joseph VR, Yu IT. Reliability improvement experiments with degradation data. IEEE Transactions on Reliability 2006; 55(1):149–157.
21. Liao H, Elsayed EA. Reliability inference for field conditions from accelerated degradation testing. Naval Research Logistics 2006; 53(6):
576–587.
22. Peng CY, Tseng ST. Progressive-stress accelerated degradation test for highly-reliable products. IEEE Transactions on Reliability 2010;
59(1):30–37.
23. Ye ZS, Hong Y, Xie Y. How do heterogeneities in operating environments affect field failure predictions and test planning? The Annals of
Applied Statistics 2013b; 7(4):2249–2271.
24. Meeker WQ, Escobar LA. Statistical Methods for Reliability Data. Wiley: New York, 1998.
25. Si XS, Wang W, Hu CH, Zhou DH, Pecht MG. Remaining useful life estimation based on a nonlinear diffusion degradation process. IEEE
Transactions on Reliability 2012; 61(1):50–67.
26. Tsai CC, Tseng ST, Balakrishnan N. Mis-specification analyses of gamma and Wiener degradation processes. Journal of Statistical Planning
and Inference 2011a; 141(12):3725–3735.
27. Si XS, Wang W, Hu CH, Chen MY, Zhou DH. A Wiener-process-based degradation model with a recursive filter algorithm for remaining
useful life estimation. Mechanical Systems and Signal Processing 2013; 35(1):219–237.
28. Wang X, Jiang P, Guo B, Cheng Z. Real-time reliability evaluation with a general Wiener process-based degradation model. Quality and
Reliability Engineering International 2013; 30(2):205–220.
29. Bian L, Gebraeel N. Computing and updating the first-passage time distribution for randomly evolving degradation signals. IIE Transactions
2012; 44(11):974–987.
30. Liao H, Tian Z. A framework for predicting the remaining useful life of a single unit under time-varying operating conditions. IIE Transactions
2013; 45(9):964–980.
31. Bian L, Gebraeel N. Stochastic framework for partially degradation systems with continuous component degradation-rate-interactions. Naval
Research Logistics 2014; 61(4):286–303.
32. Peng CY, Tseng ST. Statistical lifetime inference with Skew-Wiener linear degradation models. IEEE Transactions on Reliability 2013;
62(2):338–350.
33. Wang X. Wiener processes with random effects for degradation data. Journal of Multivariate Analysis 2010; 101(2):340–351.
34. Wang X. Nonparametric estimation of the shape function in a Gamma process for degradation data. Canadian Journal of Statistics 2009;
37(1):102–118.
35. Si X, Hu CH, Kong X, Zhou DH. A residual storage life prediction approach for systems with operation state switches. IEEE Transactions on
Industrial Electronics 2014; 61(11):6304–6315.
36. Singpurwalla ND. Survival in dynamic environments. Statistical Science 1995; 10(1):86–103.
37. van Noortwijk JM. A survey of the application of gamma processes in maintenance. Reliability Engineering & System Safety 2009; 94(1):2–21.
38. Singpurwalla N. Gamma processes and their generalizations: an overview. In Engineering Probabilistic Design and Maintenance for Flood
Protection. Springer: Dordrecht, The Netherlands, 1997; 67–75.
39. Bagdonavicius V, Nikulin MS. Estimation in degradation models with explanatory variables. Lifetime Data Analysis 2001; 7(1):85–103.
40. Park C, Padgett WJ. Stochastic degradation models with several accelerating variables. IEEE Transactions on Reliability 2006; 55(2):379–390.
41. Lawless JF, Crowder MJ. Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data
Analysis 2004; 10(3):213–227.
42. Wang X. Semiparametric inference on a class of Wiener processes. Journal of Time Series Analysis 2009; 30(2):179–207.
43. Wang X. A pseudo-likelihood estimation method for nonhomogeneous gamma process model with random effects. Statistica Sinica 2008;
18(3):1153–1163.
44. Ye ZS, Chen N, Tsui KL. A Bayesian approach to condition monitoring with imperfect inspections. Quality & Reliability Engineering
International 2014b. DOI: 10.1002/qre.1609, to appear.
45. Kallen MJ, van Noortwijk JM. Optimal maintenance decisions under imperfect inspection. Reliability Engineering & System Safety 2005;
90(2):177–185.
46. Zhou Y, Sun Y, Mathew J, Wolff R, Ma L. Latent degradation indicators estimation and prediction: a Monte Carlo approach. Mechanical
Systems and Signal Processing 2011; 25(1):222–236.
47. Lu D, Pandey MD, Xie WC. An efficient method for the estimation of parameters of stochastic gamma process from noisy degradation
measurements. Journal of Risk and Reliability 2013; 227(4):425–433.
48. Wang X, Xu D. An inverse Gaussian process model for degradation data. Technometrics 2010; 52(2):188–197.
49. Ye ZS, Chen N. The inverse Gaussian process as a degradation model. Technometrics 2014; 56(3):302–311.
50. Ye ZS. On the conditional increments of degradation processes. Statistics & Probability Letters 2013; 83(11):2531–2536.
51. Peng W, Li YF, Yang YJ, Huang HZ, Zuo MJ. Inverse Gaussian process models for degradation analysis: a Bayesian perspective. Reliability
Engineering & System Safety 2014; 130:175–189.
52. Lu CJ, Meeker WQ. Using degradation measures to estimate a time-to-failure distribution. Technometrics 1993; 35(2):161–174.
53. Freitas MA, de Toledo MLG, Colosimo EA, Pires MC. Using degradation data to assess reliability: a case study on train wheel degradation.
Quality and Reliability Engineering International 2009; 25(5):607–629.
54. Su C, Lu JC, Chen D, Hughes-Oliver JM. A random coefficient degradation model with random sample size. Lifetime Data Analysis 1999;
5(2):173–183.
55. Weaver BP, Meeker WQ, Escobar LA, Wendelberger JR. Methods for planning repeated measures degradation studies. Technometrics 2013;
55(2):122–134.
56. Boulanger M, Escobar LA. Experimental design for a class of accelerated degradation tests. Technometrics 1994; 36(3):260–272.
57. Bae SJ, Kvam PH. A nonlinear random-coefficients model for degradation testing. Technometrics 2004; 46(4):460–469.
58. Bae SJ, Kuo W, Kvam PH. Degradation models and implied lifetime distributions. Reliability Engineering & System Safety 2007; 92(5):
601–608.
59. Yu HF. Designing an accelerated degradation experiment with a reciprocal Weibull degradation rate. Journal of Statistical Planning and
Inference 2006; 136(1):282–297.
60. Shiau JJH, Lin HH. Analyzing accelerated degradation data by nonparametric regression. IEEE Transactions on Reliability 1999; 48(2):
149–158.
61. Zhou R, Serban N, Gebraeel N. Degradation-based residual life prediction under different environments. The Annals of Applied Statistics 2014.
http://www.e-publications.org/ims/submission/AOAS/user/submissionFile/16350?confirm=05fabeee, to appear.
62. Lu JC, Park J, Yang Q. Statistical inference of a time-to-failure distribution derived from linear degradation data. Technometrics 1997; 39(4):
391–400.
63. Yuan XX, Pandey M. A nonlinear mixed-effects model for degradation data obtained from in-service inspections. Reliability Engineering &
System Safety 2009; 94(2):509–519.
64. Lin TI, Lee JC. On modelling data from degradation sample paths over time. Australian & New Zealand Journal of Statistics 2003; 45(3):
257–270.
65. Meeker WQ, Escobar LA, Lu JC. Accelerated degradation tests: modeling and analysis. Technometrics 1998; 40(2):89–99.
66. Robinson ME, Crowder MJ. Bayesian methods for a growth-curve degradation model with repeated measures. Lifetime Data Analysis 2000;
6(4):357–374.
67. Hamada MS. Using degradation data to assess reliability. Quality Engineering 2005; 17(4):615–620.
68. Pan R, Crispin T. A hierarchical modeling approach to accelerated degradation testing data analysis: a case study. Quality and Reliability
Engineering International 2011; 27(2):229–237.
69. Chen N, Tsui KL. Condition monitoring and remaining useful life prediction using degradation signals: revisited. IIE Transactions 2013;
45(9):939–952.
70. Shi Y, Escobar LA, Meeker WQ. Accelerated destructive degradation test planning. Technometrics 2009; 51(1):1–13.
71. Tang LC, Liu X. Planning and inference for a sequential accelerated life test. Journal of Quality Technology 2010; 42(1):103–118.
72. Lindstrom MJ. Linear and Nonlinear Mixed-Effects Models for Repeated Measures Data. PhD thesis, University of Wisconsin–Madison, 1987.
73. Hong Y, Duan Y, Meeker WQ, Stanley DL, Gu X. Statistical methods for degradation data with dynamic covariates information and an
application to outdoor weathering data. Technometrics 2014. DOI: 10.1080/00401706.2014.915891, to appear.
74. Wang W. An overview of the recent advances in delay-time-based maintenance modelling. Reliability Engineering & System Safety 2012;
106:165–178.
75. Ye ZS, Tang LC, Xie M. A burn-in scheme based on percentiles of the residual life. Journal of Quality Technology 2011a; 43(4):334–345.
76. Esary J, Marshall A. Shock models and wear processes. The Annals of Probability 1973; 1(4):627–649.
77. Gut A. Mixed shock models. Bernoulli 2001; 7(3):541–555.
78. Lam Y. A geometric process-shock maintenance model. IEEE Transactions on Reliability 2009; 58(2):389–396.
79. Li G, Luo J. Shock model in Markovian environment. Naval Research Logistics 2005; 52(3):253–260.
80. Nakagawa T. Shock and Damage Models in Reliability Theory. Springer: London, 2007.
81. Soro IW, Nourelfath M, Ait-Kadi D. Performance evaluation of multi-state degraded systems with minimal repairs and imperfect preventive
maintenance. Reliability Engineering & System Safety 2010; 95(2):65–69.
82. Yin ML, Angus JE, Trivedi KS. Optimal preventive maintenance rate for best availability with hypo-exponential failure distribution. IEEE
Transactions on Reliability 2013; 62(2):351–361.
83. Zhong C, Jin H. A novel optimal preventive maintenance policy for a cold standby system based on semi-Markov theory. European Journal of
Operational Research 2014; 232(2):405–411.
84. Kharoufeh JP. Explicit results for wear processes in a Markovian environment. Operations Research Letters 2003; 31(3):237–244.
85. Kharoufeh JP, Cox SM. Stochastic models for degradation-based reliability. IIE Transactions 2005; 37(6):533–542.
86. Kharoufeh JP, Solo CJ, Ulukus MY. Semi-Markov models for degradation-based reliability. IIE Transactions 2010; 42(8):599–612.
87. Xu Z, Ji Y, Zhou D. Real-time reliability prediction for a dynamic system based on the hidden degradation process identification. IEEE
Transactions on Reliability 2008; 57(2):230–242.
88. Byon E, Ding Y. Season-dependent condition-based maintenance for a wind turbine using a partially observed Markov decision process. IEEE
Transactions on Power Systems 2010; 25(4):1823–1834.
89. Gebraeel N, Lawley M, Liu R, Parmeshwaran V. Residual life predictions from vibration-based degradation signals: a neural network approach.
IEEE Transactions on Industrial Electronics 2004; 51(3):694–700.
90. Sotiris VA, Tse PW, Pecht MG. Anomaly detection through a Bayesian support vector machine. IEEE Transactions on Reliability 2010;
59(2):277–286.
91. Hamada MS, Wilson A, Reese CS, Martz H. Bayesian Reliability. Springer: New York, 2008.
92. Peng W, Huang HZ, Xie M, Yang Y, Liu Y. A Bayesian approach for system reliability analysis with multilevel pass-fail, lifetime and
degradation data sets. IEEE Transactions on Reliability 2013; 62(3):689–699.
93. Chinnam RB. A neuro-fuzzy approach for estimating mean residual life in condition-based maintenance systems. International Journal of
Materials and Product Technology 2004; 20(1):166–179.
94. Wang D, Miao Q, Kang R. Robust health evaluation of gearbox subject to tooth failure with wavelet decomposition. Journal of Sound and
Vibration 2009; 324(3):1141–1157.
95. Miller AJ. A New Wavelet Basis for the Decomposition of Gear Motion Error Signals and Its Application to Gearbox Diagnostics. The
Pennsylvania State University: PA, 1999.
96. Lin D, Wiseman M, Banjevic D, Jardine AKS. An approach to signal processing and condition-based maintenance for gearboxes subject to
tooth failure. Mechanical Systems and Signal Processing 2004; 18(5):993–1007.
97. Miao Q, Makis V. Condition monitoring and classification of rotating machinery using wavelets and hidden Markov models. Mechanical
Systems and Signal Processing 2007; 21(2):840–855.
98. Miao Q, Wang D, Pecht M. A probabilistic description scheme for rotating machinery health evaluation. Journal of Mechanical Science and
Technology 2010; 24(12):2421–2430.
99. Miao Q, Huang HZ, Fan X. Singularity detection in machinery health monitoring using Lipschitz exponent function. Journal of Mechanical
Science and Technology 2007; 21(5):737–744.
100. Wang D, Tse PW, Guo W, Miao Q. Support vector data description for fusion of multiple health indicators for enhancing gearbox fault diagnosis
and prognosis. Measurement Science and Technology 2011; 22(2):025102.
101. Saha B, Goebel K, Christophersen J. Comparison of prognostic algorithms for estimating remaining useful life of batteries. Transactions of the
Institute of Measurement and Control 2009a; 31(3-4):293–308.
102. Saha B, Goebel K, Poll S, Christophersen J. Prognostics methods for battery health monitoring using a Bayesian framework. IEEE Transactions
on Instrumentation and Measurement 2009b; 58(2):291–296.
103. Santhanagopalan S, White RE. State of charge estimation using an unscented filter for high power lithium ion cells. International Journal of
Energy Research 2010; 34(2):152–163.
104. Wang D, Tse P. Prognostics of oil sand pumps based on a moving-average wear degradation index and a general sequential Monte Carlo
method. Mechanical Systems and Signal Processing 2014, to appear.
105. He W, Williard N, Osterman M, Pecht M. Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo
method. Journal of Power Sources 2011; 196:10314–10321.
106. Xing Y, Ma EWM, Tsui KL, Pecht M. An ensemble model for predicting the remaining useful performance of lithium-ion batteries.
Microelectronics Reliability 2013; 53(6):811–820.
107. Miao Q, Xie L, Cui H, Liang W, Pecht M. Remaining useful life prediction of lithium-ion battery with unscented particle filter technique.
Microelectronics Reliability 2013; 53(6):805–810.
108. Baussaron J, Barreau-Guérin M, Gerville-Réache L, Schimmerling P. Degradation test plan for Wiener degradation processes. Annual
Reliability and Maintainability Symposium, 2011, 1–6.
109. Ye ZS, Revie M, Walls LA. A load sharing system reliability model with managed component degradation. IEEE Transactions on Reliability
2014c; 63(3):721–730.
110. Tseng ST, Balakrishnan N, Tsai CC. Optimal step-stress accelerated degradation test plan for Gamma degradation processes. IEEE Transactions
on Reliability 2009; 58(4):611–618.
111. Tsai TR, Lin CW, Sung YL, Chou PT, Chen CL, Lio Y. Inference from lumen degradation data under Wiener diffusion process. IEEE
Transactions on Reliability 2012b; 61(3):710–718.
112. Ye ZS, Xie M, Tang LC, Chen N. Efficient semiparametric estimation of gamma processes for deteriorating products. Technometrics 2014d.
DOI: 10.1080/00401706.2013.869261, to appear.
113. Tseng ST, Yu HF. A termination rule for degradation experiments. IEEE Transactions on Reliability 1997; 46(1):130–133.
114. Yu HF, Chiao CH. An optimal designed degradation experiment for reliability improvement. IEEE Transactions on Reliability 2002; 51(4):
427–433.
115. Kim SJ, Bae SJ. Cost-effective degradation test plan for a nonlinear random-coefficients model. Reliability Engineering & System Safety 2013;
110:68–79.
116. Marseguerra M, Zio E, Cipollone M. Designing optimal degradation tests via multi-objective genetic algorithms. Reliability Engineering &
System Safety 2003; 79(1):87–94.
117. Shi Y, Meeker WQ. Bayesian methods for accelerated destructive degradation test planning. IEEE Transactions on Reliability 2012; 61(1):
245–253.
118. Yang G, Yang K. Accelerated degradation-tests with tightened critical values. IEEE Transactions on Reliability 2002; 51(4):463–468.
119. Ye ZS, Tang LC, Xu HY. A distribution-based systems reliability model under extreme shocks and natural degradation. IEEE Transactions on
Reliability 2011b; 60(1):246–256.
120. Tseng ST, Tang J. Optimal burn-in time for highly reliable products. International Journal of Industrial Engineering 2001; 8(4):329–338.
121. Tseng ST, Tang J, Ku IH. Determination of burn-in parameters and residual life for highly reliable products. Naval Research Logistics 2003;
50(1):1–14.
122. Tseng ST, Peng CY. Optimal burn-in policy by using an integrated Wiener process. IIE Transactions 2004; 36(12):1161–1170.
123. Ye ZS, Shen Y, Xie M. Degradation-based burn-in with preventive maintenance. European Journal of Operational Research 2012a; 221(2):
360–367.
124. Tsai CC, Tseng ST, Balakrishnan N. Optimal burn-in policy for highly reliable products using Gamma degradation process. IEEE Transactions
on Reliability 2011b; 60(1):234–245.
125. Ye ZS, Xie M, Tang LC, Shen Y. Degradation-based burn-in planning under competing risks. Technometrics 2012b; 54(2):159–168.
126. Wang W. A prognosis model for wear prediction based on oil-based monitoring. Journal of the Operational Research Society 2007; 58(7):
887–893.