
Appendix A

Stochastic processes
The classical theory of random variables provides a mathematical framework for the description of aleatory events that do not depend on time. However, many engineering applications require dealing with time-dependent physical phenomena in a probabilistic context. Consider, for instance, the ground acceleration at a given point due to seismic excitation, or the internal stresses of a structure subject to ambient vibrations: these are two examples of stochastic processes. More generally, an aleatory phenomenon may depend on one or more deterministic parameters, such as time or the spatial coordinates of a reference frame. It is then referred to as a stochastic, or random, process, mono- or multi-dimensional according to the number of independent parameters that characterize the problem. In the present work only monodimensional processes depending on time are considered. For a more detailed description of stochastic processes refer to (Bendat and Piersol, 1993; Papoulis, 1981) or, with particular reference to applications in structural dynamics, to (Clough and Penzien, 1975; Muscolino, 2002).
A stochastic process X(t) is defined by the ensemble of the theoretically infinite time history records x^{(j)}(t), j = 1, 2, 3, ..., each one representing a possible realization of the process. Given a fixed time instant t_1, the set of values assumed by all the realizations at t_1 forms the sample space of the random variable X(t_1) ≡ X_1, which can be dealt with by making use of the classical statistical tools. Hence, if n different time instants are considered, a vector of order n can be built, gathering the random variables X_1 to X_n:

\mathbf{X} = \begin{bmatrix} X_1 & X_2 & \cdots & X_n \end{bmatrix}^T, \qquad (A.1)
then it is possible to define the joint probability distribution and density functions, which respectively take the form

F_X(\mathbf{x}) = P\left[\bigcap_{i=1}^{n} X_i \le x_i\right] = \int_{-\infty}^{x_1}\int_{-\infty}^{x_2}\cdots\int_{-\infty}^{x_n} p_X(\rho_1, \rho_2, \dots, \rho_n)\, d\rho_1\, d\rho_2 \dots d\rho_n \qquad (A.2)

and
p_X(\mathbf{x}) = \frac{\partial^n F_X(\mathbf{x})}{\partial x_1\, \partial x_2 \cdots \partial x_n}\,. \qquad (A.3)

A.1 Stationary processes


When the statistical characteristics of a random process are independent of time, such a process is called stationary. In particular, by choosing n couples of time instants t_j and t_{j+1}, so that τ = t_{j+1} − t_j for j = 1, ..., n, one has second-order stationarity if

p_{X_j}(x_j) = p_{X_{j+1}}(x_{j+1}) = p_X(x) \qquad (A.4)

and

p_{X_1 X_2}(x_1, x_2) = p_{X_j X_{j+1}}(x_j, x_{j+1}) = p_{X_n X_{n+1}}(x_n, x_{n+1})\,. \qquad (A.5)
Hence, the marginal probability density is invariant with respect to time, and the
joint densities depend only on the time interval τ. Hereafter such densities will be
referred to as
p_{X_j}(x_j) = p(x_j, t_j) = p(x_j)
p_{X_j X_k}(x_j, x_k) = p(x_j, t_j, x_k, t_k) = p(x_j, x_k, \tau)\,.
Given a pair of random variables X(t_j) and X(t_k), the autocorrelation function is defined as their cross mean square value:

R_X(t_j, t_k) = E[X_j X_k] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x_j\, x_k\, p(x_j, x_k, \tau)\, dx_j\, dx_k = R_X(\tau)\,. \qquad (A.6)

Thus R_X(t_j, t_k), in virtue of the equalities in (A.5), depends only on τ = t_k − t_j and is symmetric with respect to the ordinate axis; moreover, the following relations hold:

R_X(0) = E[X^2] = \varphi_X^2\,; \qquad (A.7)

\lim_{\tau\to\infty} R_X(\tau) = \mu_X^2\,; \qquad (A.8)

R_X(\tau) = R_X(t_j, t_k) = R_X(t_k, t_j) = R_X(-\tau)\,. \qquad (A.9)
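As a numerical illustration, properties (A.7) and (A.8) can be checked on a simulated record. The following sketch is only illustrative: the moving-average test signal, its mean value, and the sampling parameters are assumptions, not quantities from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stationary record: filtered white noise plus a constant mean.
fs = 100.0                               # sampling frequency [Hz] (assumed)
n = 200_000
kernel = np.ones(50) / np.sqrt(50)       # moving average with unit-variance output
x = 2.0 + np.convolve(rng.standard_normal(n), kernel, mode="same")

# Biased estimator of R_X(tau) = E[X(t) X(t + tau)] at integer-sample lags.
lags = np.arange(501)
R = np.array([np.mean(x[:n - k] * x[k:]) for k in lags])

print(R[0], np.mean(x**2))               # R_X(0) ~ E[X^2], cf. (A.7)
print(R[-1], np.mean(x)**2)              # R_X(tau) -> mu_X^2 for large tau, cf. (A.8)
```

The symmetry (A.9) is built into the estimator, since swapping the two factors in the product leaves it unchanged.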


If a Gaussian process is stationary up to the second order, then it is also strongly stationary, that is, stationary at any order. Such a process is entirely defined by its mean value (independent of time) and by the autocorrelation function. Hence, by introducing the normalized autocovariance function

\rho_X(\tau) = \frac{R_X(\tau) - \mu_X^2}{\sigma_X^2}\,, \qquad (A.10)


the joint probability density takes the form:

p(x_j, x_k, \tau) = \frac{1}{2\pi \sigma_X^2 \sqrt{1 - \rho_X^2(\tau)}}\, \exp\left( -\frac{(x_j - \mu_X)^2 + (x_k - \mu_X)^2 - 2\,\rho_X(\tau)(x_j - \mu_X)(x_k - \mu_X)}{2\,\sigma_X^2 \left(1 - \rho_X^2(\tau)\right)} \right). \qquad (A.11)

A.1.1 Ergodic processes


A stationary process is referred to as ergodic if there is no difference, from the prob-
abilistic point of view, between observing all the realizations at a given time instant
and observing a single realization along the time axis. Thus, it is possible to describe
an ergodic process by knowing a single realization.
The time stochastic average is defined as:

E\big[X^{(j)}(t)\big] = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x^{(j)}(t)\, dt\,. \qquad (A.12)

One has ergodicity in mean if


E\big[X^{(j)}(t)\big] = \mu_X\,. \qquad (A.13)
Moreover, if
E\big[X^{(j)}(t)\, X^{(j)}(t+\tau)\big] = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x^{(j)}(t)\, x^{(j)}(t+\tau)\, dt = E\big[X(t)\, X(t+\tau)\big] = R_X(\tau)\,, \qquad (A.14)

the process is ergodic with respect to the autocorrelation function. An ergodic process is also stationary but, in general, the converse does not hold.
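A minimal numerical sketch of this property, under assumed illustrative settings (a synthetic ensemble of filtered white-noise records), compares the time average of a single realization with the ensemble average at a fixed instant:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble: each row is one realization x^(j)(t) of a zero-mean,
# stationary, ergodic process (white noise through a short moving average).
n_real, n_time = 2000, 2000
noise = rng.standard_normal((n_real, n_time))
ensemble = np.apply_along_axis(
    lambda w: np.convolve(w, np.ones(20) / 20, mode="same"), 1, noise)

time_avg = ensemble[0].mean()               # one record, averaged over time
ens_avg = ensemble[:, n_time // 2].mean()   # all records at a fixed instant
print(time_avg, ens_avg)                    # both estimate mu_X (here ~0)
```

A classical counterexample is an ensemble in which each record carries its own random constant offset: such a process is stationary but not ergodic in the mean, since a single record can never reveal the ensemble average.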

A.1.2 Spectral power density


Hereafter the interest is focused on aleatory processes with zero mean.¹ Hence, considering a possible realization of the stationary process X(t), the finite Fourier Transform (FT) of the related time history is given by:

\tilde{x}^{(j)}(f, T) = \int_{-T}^{T} x^{(j)}(t)\, e^{-i 2\pi f t}\, dt\,. \qquad (A.15)

¹ This does not represent a restrictive condition since, if necessary, it is possible to separate the mean value of the signal and statistically characterize the time-dependent fluctuation only.


Then, the Spectral Power Density (S.P.D.) function is defined as

S_X(f) = \lim_{T\to\infty} \frac{1}{2T}\, E\big[\tilde{x}^{(j)}(f,T)\, \tilde{x}^{*(j)}(f,T)\big] = \lim_{T\to\infty} \frac{1}{2T}\, E\Big[\big|\tilde{x}^{(j)}(f,T)\big|^2\Big]\,, \qquad (A.16)

in which x̃^{*(j)}(f, T) stands for the complex conjugate of the finite FT and the stochastic average is performed over the transforms of all the realizations. Notice that, by definition, the S.P.D. is a real symmetric function. Moreover, the limit in expression (A.16) converges to a finite value provided that the autocorrelation of the process is a piecewise continuous, bounded and absolutely integrable function. Indeed, it can be shown that
S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_X(\tau) \cos(2\pi f \tau)\, d\tau
\qquad (A.17)
R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\, e^{i 2\pi f \tau}\, df = \int_{-\infty}^{\infty} S_X(f) \cos(2\pi f \tau)\, df

that is, the S.P.D. is the FT of the autocorrelation function (Wiener-Khinchine theorem). Hence, considering such relations, if τ = 0 one has

R_X(0) = \sigma_X^2 = \int_{-\infty}^{\infty} S_X(f)\, df\,, \qquad (A.18)

meaning that the area under the curve S_X(f) is the process variance. It is worth noticing that a stationary process, similarly to a periodic signal, possesses infinite energy in the time interval (−∞, +∞); it can be computed as

W_X = E\left[\int_{-\infty}^{\infty} X^2(t)\, dt\right] = \int_{-\infty}^{\infty} \sigma_X^2\, dt = E\left[\int_{-\infty}^{\infty} \big|\tilde{X}(f)\big|^2\, df\right] = +\infty\,, \qquad (A.19)

recalling that the process has zero mean and having used Parseval's theorem. The quantity E[|X̃(f)|²] is a specific energy with respect to the frequency unit (spectral energy density) of the process. Therefore, by using the finite FT, dividing by the observation time 2T and considering the limit as T → ∞, one obtains the spectral power density as defined in expression (A.16).
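In practice the S.P.D. must be estimated from finite records. The sketch below is an assumed setup, not part of the text: it applies Welch's averaged-periodogram estimator from SciPy to an arbitrary AR(1) test signal and checks relation (A.18) numerically.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

# Illustrative zero-mean record: AR(1) noise, x_k = 0.9 x_{k-1} + w_k.
fs = 256.0
x = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(2**18))

# One-sided (unilateral) density estimate G_X(f), cf. (A.20) below.
f, Gxx = signal.welch(x, fs=fs, nperseg=4096)

df = f[1] - f[0]
print(np.sum(Gxx) * df)    # area under G_X(f) ...
print(np.var(x))           # ... should approximate the variance, cf. (A.18)
```

The agreement is only approximate, since the record is finite and the estimator windows and averages the data.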
In the engineering context, the unilateral version of the S.P.D. is often given:

G_X(f) = \begin{cases} 0 & \text{if } f < 0 \\ 2\,S_X(f) & \text{if } f \ge 0\,. \end{cases} \qquad (A.20)


A.1.3 Spectral moments


The generic spectral moment of order r is defined as
\lambda_{r,X} = \int_0^{\infty} f^r\, G_X(f)\, df\,. \qquad (A.21)

Hence, the zero-order moment is equal to the variance, while λ_{1,X} is the static moment of the area under G_X(f). The centroidal frequency of the spectrum¹ G_X(f) can be computed as

f_{1,X} = \frac{\lambda_{1,X}}{\sigma_X^2}\,. \qquad (A.22)

¹ The term "power spectrum" is often used as a synonym of "spectral power density", that is, with the meaning of a specific power per frequency unit.
The radius of gyration with respect to the axis f = 0 is

f_{2,X}^2 = \frac{\lambda_{2,X}}{\sigma_X^2}\,, \qquad (A.23)
in which λ_{2,X} is the moment of inertia. Then, the centroidal radius of gyration is given by the expression:

f'_{2,X} = \sqrt{\frac{\int_0^{\infty} (f - f_{1,X})^2\, G_X(f)\, df}{\lambda_{0,X}}} = f_{2,X} \sqrt{1 - \frac{f_{1,X}^2}{f_{2,X}^2}}\,. \qquad (A.24)
If f'_{2,X} = 0 the realizations of the process are single harmonics (sinusoidal process); as a consequence, the resulting S.P.D. is a Dirac delta distribution. On the contrary, if f'_{2,X} = f_{2,X}, the process is called white noise (white random process), characterized by uniform energy contributions with respect to frequency; G_X(f) then turns out to be constant. Moreover, narrow-band and wide-band processes are defined by S.P.D. functions which can be written as

G_X(f) = \begin{cases} G_0 & \text{if } f_1 \le f \le f_2 \\ 0 & \text{otherwise}\,, \end{cases} \qquad (A.25)

with a small or a large bandwidth B = f_2 − f_1, respectively.
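The spectral moments (A.21) and the derived frequencies (A.22)-(A.24) are easy to evaluate numerically. The following sketch assumes the ideal band-limited density of (A.25) with purely illustrative band edges:

```python
import numpy as np

# Ideal narrow-band density: G_X(f) = G0 on [f1, f2], zero elsewhere, cf. (A.25).
f = np.linspace(0.0, 50.0, 20001)
G0, f1_band, f2_band = 1.0, 8.0, 12.0
G = np.where((f >= f1_band) & (f <= f2_band), G0, 0.0)

df = f[1] - f[0]
lam = [np.sum(f**r * G) * df for r in range(3)]   # lambda_0, lambda_1, lambda_2

sigma2 = lam[0]                          # variance = zero-order moment
f1 = lam[1] / sigma2                     # centroidal frequency, cf. (A.22)
f2 = np.sqrt(lam[2] / sigma2)            # radius of gyration about f = 0, cf. (A.23)
f2c = f2 * np.sqrt(1.0 - f1**2 / f2**2)  # centroidal radius of gyration, cf. (A.24)
print(f1, f2, f2c)                       # f2c is small: narrow-band spectrum
```

For this band, f2c is close to B/√12, the radius of gyration of a uniform strip about its own centroid.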

A.1.4 Process derivatives


Given a stationary process X(t), the derivative with respect to the time parameter τ of the autocorrelation function is

\frac{d}{d\tau} R_{XX}(\tau) = E\left[X(t)\, \frac{d}{d\tau} X(t+\tau)\right] = E\big[X(t)\, \dot{X}(t+\tau)\big] = R_{X\dot{X}}(\tau)\,, \qquad (A.26)


then, recalling the stationarity property, one has

R_{X\dot{X}}(\tau) = \frac{d}{d\tau}\, E\big[X(t-\tau)\, X(t)\big] = -R_{\dot{X}X}(\tau)\,. \qquad (A.27)
Analogously, it is possible to obtain the autocorrelation of the process Ẋ(t):

\frac{d^2}{d\tau^2} R_{XX}(\tau) = \frac{d}{d\tau} R_{X\dot{X}}(\tau) = \frac{d}{d\tau}\, E\big[X(t-\tau)\, \dot{X}(t)\big] = -R_{\dot{X}\dot{X}}(\tau)\,, \qquad (A.28)

from which

R_{\dot{X}}(\tau) = -\frac{d^2}{d\tau^2} R_X(\tau)\,, \qquad R_{\ddot{X}}(\tau) = \frac{d^4}{d\tau^4} R_X(\tau)\,. \qquad (A.29)
In the frequency domain, taking into account that
R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\, e^{i 2\pi f \tau}\, df
R_{\dot{X}}(\tau) = \int_{-\infty}^{\infty} S_{\dot{X}}(f)\, e^{i 2\pi f \tau}\, df \qquad (A.30)
R_{\ddot{X}}(\tau) = \int_{-\infty}^{\infty} S_{\ddot{X}}(f)\, e^{i 2\pi f \tau}\, df\,,

the relations between the spectral densities of the process derivatives can be determined:

S_{\dot{X}}(f) = (2\pi f)^2\, S_X(f)\,, \qquad S_{\ddot{X}}(f) = (2\pi f)^4\, S_X(f)\,. \qquad (A.31)
Hence, in general, the S.P.D. of the time derivative of a signal is characterized by a shift of the energy contributions towards higher frequencies.
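Relation (A.31) can also be checked numerically. In the sketch below everything is an illustrative assumption: the band-limited test signal, the finite-difference approximation of Ẋ(t), and the Welch estimates of the two densities.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

# Smooth band-limited record: white noise through a 4th-order Butterworth filter.
fs = 1000.0
b, a = signal.butter(4, 40.0, fs=fs)
x = signal.lfilter(b, a, rng.standard_normal(2**17))
xdot = np.gradient(x) * fs               # central-difference time derivative

f, Sx = signal.welch(x, fs=fs, nperseg=8192)
_, Sxd = signal.welch(xdot, fs=fs, nperseg=8192)

k = 300                                  # a bin well inside the pass band (~37 Hz)
print(Sxd[k])
print((2 * np.pi * f[k])**2 * Sx[k])     # cf. (A.31); agrees within a few percent
```

The small residual discrepancy comes from the finite-difference derivative, which slightly attenuates the highest frequencies.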

A.1.5 Cross spectral power density


Let x^{(j)}(t) and y^{(j)}(t) be two possible corresponding realizations of a stochastic process, obtained from the same experiment j but related, for example, to different points in the reference frame. Such a process is called bivariate (or multivariate, according to the number of corresponding realizations considered). The Cross Spectral Power Density (C.S.P.D.) is defined as

S_{XY}(f) = \lim_{T\to\infty} \frac{1}{2T}\, E\big[\tilde{x}^{(j)}(f,T)\, \tilde{y}^{*(j)}(f,T)\big]\,. \qquad (A.32)

Moreover, S_{XY}(f) is related to the cross-correlation function by the expression
S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-i 2\pi f \tau}\, d\tau\,, \qquad (A.33)


R_{XY}(τ) being, in the stationary case, dependent only on the difference between the observation time instants:

R_{XY}(\tau) = E\big[x^{(j)}(t)\, y^{(j)}(t+\tau)\big]\,. \qquad (A.34)

In general, the cross-correlation function is not symmetric (with respect to the axis τ = 0), but is such that R_{YX}(τ) = R_{XY}(−τ). Then, since R_{XY}(τ) is real, equation (A.33) implies that the C.S.P.D. has even real part and odd imaginary part; also, it can be verified that

S_{YX}(f) = S_{XY}^{*}(f)\,. \qquad (A.35)
Similarly to the monovariate case, the Wiener-Khinchine relations (A.17) still hold:

S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-i 2\pi f \tau}\, d\tau
\qquad (A.36)
R_{XY}(\tau) = \int_{-\infty}^{\infty} S_{XY}(f)\, e^{i 2\pi f \tau}\, df\,,

from which, if τ = 0, one obtains:

R_{XY}(0) = \int_{-\infty}^{\infty} S_{XY}(f)\, df = E\big[x^{(j)}(t)\, y^{(j)}(t)\big] = \sigma_{XY}\,. \qquad (A.37)

Consider now the case in which two corresponding realizations are identical but shifted by a time t̄ with respect to each other, meaning that, omitting for simplicity the superscript (j), y(t) = x(t + t̄). It then follows that

R_{XY}(\tau) = E\big[x(t)\, x(t + \bar{t} + \tau)\big] = R_X(\bar{t} + \tau)\,,

while the C.S.P.D. assumes the form

S_{XY}(f) = \int_{-\infty}^{\infty} R_X(\bar{t} + \tau)\, e^{-i 2\pi f \tau}\, d\tau = S_X(f)\, e^{i 2\pi f \bar{t}}\,.

Thus, the imaginary part of S_{XY}(f) is related to the systematic phase displacement between the signals, and can be frequency dependent. In general it can be written:

S_{XY}(f) = |S_{XY}(f)|\, e^{i\theta(f)}\,. \qquad (A.38)

As a consequence, if the considered physical phenomenon can be assumed to have no systematic phase displacements between the different signals (that is, in the case of the wind turbulence field, between the velocity time histories related to different points in space), then it is possible to neglect the imaginary part of the C.S.P.D. function.


In the engineering context the C.S.P.D. is often defined through the coherence function, expressed as

Coh_{XY}(f) = \frac{S_{XY}(f)}{\sqrt{S_X(f)\, S_Y(f)}}\,, \qquad (A.39)

in which the effect related to the spectra of the single components is removed.¹

¹ In some references the coherence function is defined as |S_{XY}(f)|/\sqrt{S_X(f)\, S_Y(f)}.
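As an illustration, cross spectra and coherence can be estimated with standard tools. The sketch below is an assumed setup (two synthetic records sharing a delayed common component); note that SciPy's coherence routine returns the magnitude-squared coherence, the squared modulus of (A.39).

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Two records sharing a common component; y is a delayed copy plus noise.
fs, n = 512.0, 2**17
common = signal.lfilter([1.0], [1.0, -0.8], rng.standard_normal(n))
x = common + 0.3 * rng.standard_normal(n)
y = np.roll(common, 10) + 0.3 * rng.standard_normal(n)   # delay of 10 samples

f, Sxy = signal.csd(x, y, fs=fs, nperseg=4096)        # complex-valued C.S.P.D.
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=4096)  # |S_XY|^2 / (S_X S_Y)

# For a pure delay the phase of S_XY varies linearly with f, cf. (A.38):
print(np.angle(Sxy[1:6]))
print(Cxy[1:6])          # close to 1 where the common component dominates
```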


In general, a multivariate process is characterized, in the time domain, by a cor-
relation matrix
\mathbf{R}_X(\tau) = E\big[\mathbf{x}^{(j)}(t)\, \mathbf{x}^{(j)T}(t+\tau)\big] =
\begin{bmatrix}
R_{X_1}(\tau) & R_{X_1 X_2}(\tau) & \cdots & R_{X_1 X_n}(\tau) \\
R_{X_2 X_1}(\tau) & R_{X_2}(\tau) & \cdots & R_{X_2 X_n}(\tau) \\
\vdots & \vdots & \ddots & \vdots \\
R_{X_n X_1}(\tau) & R_{X_n X_2}(\tau) & \cdots & R_{X_n}(\tau)
\end{bmatrix}, \qquad (A.40)

and, in the frequency domain, by a C.S.P.D. matrix

\mathbf{S}_X(f) = \lim_{T\to\infty} \frac{1}{2T}\, E\big[\tilde{\mathbf{x}}^{(j)}(f,T)\, \tilde{\mathbf{x}}^{*(j)T}(f,T)\big] =
\begin{bmatrix}
S_{X_1}(f) & S_{X_1 X_2}(f) & \cdots & S_{X_1 X_n}(f) \\
S_{X_2 X_1}(f) & S_{X_2}(f) & \cdots & S_{X_2 X_n}(f) \\
\vdots & \vdots & \ddots & \vdots \\
S_{X_n X_1}(f) & S_{X_n X_2}(f) & \cdots & S_{X_n}(f)
\end{bmatrix}. \qquad (A.41)

Such a matrix has even real part and odd imaginary part, and, in virtue of equation
(A.35), is Hermitian. Moreover, it holds:
\mathbf{S}_X(f) = \int_{-\infty}^{\infty} \mathbf{R}_X(\tau)\, e^{-i 2\pi f \tau}\, d\tau\,. \qquad (A.42)
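A possible numerical assembly of the matrix (A.41) estimates each entry pairwise; the three-channel synthetic data below are purely illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)

# Three correlated channels: a shared component plus independent noise.
fs, n = 256.0, 2**16
base = rng.standard_normal(n)
X = np.stack([base + 0.5 * rng.standard_normal(n) for _ in range(3)])

nper = 2048
f = signal.csd(X[0], X[0], fs=fs, nperseg=nper)[0]
S = np.empty((3, 3, f.size), dtype=complex)
for i in range(3):
    for j in range(3):
        _, S[i, j] = signal.csd(X[i], X[j], fs=fs, nperseg=nper)

# Hermitian symmetry at an arbitrary frequency bin, cf. (A.35):
k = 100
print(np.allclose(S[:, :, k], S[:, :, k].conj().T))   # True
```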

A.2 Extreme value estimate


Given a Gaussian stochastic process Y(t) with zero mean, a possible realization can have, in general, positive and negative maxima and minima. It can be proved, starting from the joint probability density function of the variables y(t), ẏ(t) and ÿ(t), that the probability density which statistically describes the occurrence of the maxima (or minima) of y(t) assumes the form

p_{\max}(r) = \frac{1}{\sqrt{2\pi}} \left[ \varepsilon\, e^{-r^2/2\varepsilon^2} + \sqrt{1-\varepsilon^2}\; r\, e^{-r^2/2} \int_{-\infty}^{r\sqrt{1-\varepsilon^2}/\varepsilon} e^{-x^2/2}\, dx \right], \qquad (A.43)

in which r = y_{\max}/\sigma_Y is expressed in nondimensional form and ε ∈ [0, 1] is defined as

\varepsilon^2 = \frac{\lambda_{0,Y}\, \lambda_{4,Y} - \lambda_{2,Y}^2}{\lambda_{0,Y}\, \lambda_{4,Y}}\,. \qquad (A.44)

The parameter ε provides information about the considered process: indeed, for a narrow-band process ε is close to zero, and in the limit case of a sinusoidal process ε = 0. In particular, if ε = 0, expression (A.43) assumes the form of a Rayleigh density function:

p_{\max}(r) = r\, e^{-\frac{1}{2} r^2}\,. \qquad (A.45)

Such a function is defined for r ∈ [0, ∞); hence, no negative maxima or positive minima are present. On the contrary, if the process can be approximated as a white noise, ε tends to 2/3; finally, when ε → 1, expression (A.43) becomes a zero-mean Gaussian density function.
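A short numerical check of (A.44) for the two limit spectra mentioned above; the band edges and discretization are illustrative assumptions.

```python
import numpy as np

def epsilon(f, G):
    """Bandwidth parameter of (A.44) from a one-sided density G(f)."""
    df = f[1] - f[0]
    lam0 = np.sum(G) * df
    lam2 = np.sum(f**2 * G) * df
    lam4 = np.sum(f**4 * G) * df
    return np.sqrt((lam0 * lam4 - lam2**2) / (lam0 * lam4))

f = np.linspace(0.0, 50.0, 40001)
G_narrow = np.where((f >= 9.9) & (f <= 10.1), 1.0, 0.0)   # near-sinusoidal band
G_white = np.where(f <= 50.0, 1.0, 0.0)                   # band-limited white noise

print(epsilon(f, G_narrow))   # ~ 0, the sinusoidal limit
print(epsilon(f, G_white))    # ~ 2/3, the white-noise limit
```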
Consider now N independent maxima of the signal y^{(j)}(t); the probability that each of them is smaller than y = r σ_Y is

P[N \text{ maxima} < r\sigma_Y] = P(r)^N, \qquad (A.46)

where the distribution P(r) is

P(r) = \int_{-\infty}^{r} p_{\max}(\rho)\, d\rho\,. \qquad (A.47)

Hence, equation (A.46) provides the probability distribution L_D of the largest maximum, that is, of the extreme value, which depends on the observation interval D. It can be proved that

L_D(r, D) = \exp\left(-f_r\, D\right), \qquad (A.48)

with

f_r = f_0\, e^{-\frac{1}{2} r^2}\,, \qquad f_0 = \sqrt{\frac{\lambda_{2,Y}}{\sigma_Y^2}} = f_{2,Y}\,, \qquad (A.49)
where f_0 is the average frequency of zero crossings with positive (or negative) derivative, and can be viewed as the average frequency of the signal.


The density function related to expression (A.48) shows, as the parameter f_0 D increases, a progressively sharper profile, with almost all the subtended area concentrated about the mean. For this reason the peak factor is generally defined as the mean value of such a density function, that is,

g = \sqrt{2 \ln(f_0 D)} + \frac{\gamma}{\sqrt{2 \ln(f_0 D)}}\,, \qquad (A.50)

γ = 0.5772 being Euler's constant. If the considered process has non-zero mean, this equation directly provides the effective peak factor, since the extreme value will be an absolute maximum or minimum according to the sign of the process mean. If, on the other hand, the process has zero mean, the extreme value can be either positive or negative; as a consequence, the average frequency of zero crossings should account for both positive and negative maxima. Hence, formula (A.50) becomes

g = \sqrt{2 \ln(2 f_0 D)} + \frac{\gamma}{\sqrt{2 \ln(2 f_0 D)}}\,. \qquad (A.51)

Finally, the extreme value of y(t) is computed as y_{extr} = \mu_Y \pm g\,\sigma_Y.
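A direct evaluation of (A.50)-(A.51) closes the section; the frequency f_0, duration D and process statistics below are illustrative values, not data from the text.

```python
import numpy as np

gamma = 0.5772                  # Euler's constant
f0 = 1.2                        # average zero-crossing frequency [Hz] (assumed)
D = 600.0                       # observation interval [s] (assumed)
mu_Y, sigma_Y = 5.0, 1.5        # process mean and standard deviation (assumed)

# Non-zero mean: one-sided extreme, (A.50); zero mean: both signs, (A.51).
arg = f0 * D if mu_Y != 0.0 else 2.0 * f0 * D
g = np.sqrt(2.0 * np.log(arg)) + gamma / np.sqrt(2.0 * np.log(arg))

y_extr = mu_Y + np.sign(mu_Y) * g * sigma_Y   # y_extr = mu_Y ± g sigma_Y
print(g, y_extr)
```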
