Appendix A

Stochastic Processes
The classical theory of random variables provides a mathematical framework for the description of aleatory events that do not depend on time. However, in many engineering applications there is the need to deal with time-dependent physical phenomena in a probabilistic context. Consider for instance the ground acceleration at a given point due to seismic excitation, or the internal stresses of a structure subject to ambient vibrations: these are two examples of stochastic processes. More generally, an aleatory phenomenon may depend on one or more deterministic parameters, such as time or the spatial coordinates of a reference frame. It is then referred to as a stochastic – or random – process, mono- or multi-dimensional according to the number of independent parameters that characterize the problem. In the present work only monodimensional processes depending on time are considered. For a more detailed description of stochastic processes refer to (Bendat and Piersol, 1993; Papoulis, 1981) or, with particular reference to applications in structural dynamics, to (Clough and Penzien, 1975; Muscolino, 2002).
A stochastic process $X(t)$ is defined by the ensemble of the theoretically infinite time history records $x^{(j)}(t)$, $j = 1, 2, 3, \dots$, each one representing a possible realization of the process. Given a fixed time instant $t_1$, the set of values assumed by all the realizations at $t_1$ forms the sample space of the random variable $X(t_1) \equiv X_1$, which can be dealt with by means of the classical tools of statistics. Hence, if $n$ different time instants are considered, a vector of order $n$ can be built, gathering the random variables $X_1$ to $X_n$:
$$\mathbf{X} = \begin{bmatrix} X_1 & X_2 & \dots & X_n \end{bmatrix}^T, \tag{A.1}$$
then it is possible to define the joint probability distribution and density functions, which respectively take the form

$$F_{\mathbf{X}}(\mathbf{x}) = P\!\left(\bigcap_{i=1}^{n} X_i \le x_i\right) = \int_{-\infty}^{x_1}\!\int_{-\infty}^{x_2}\!\cdots\int_{-\infty}^{x_n} p_{\mathbf{X}}(\rho_1, \rho_2, \dots, \rho_n)\, d\rho_1\, d\rho_2 \dots d\rho_n \tag{A.2}$$
and
$$p_{\mathbf{X}}(\mathbf{x}) = \frac{\partial^n F_{\mathbf{X}}(\mathbf{x})}{\partial x_1\, \partial x_2 \dots \partial x_n}\,. \tag{A.3}$$
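As an illustration of the ensemble description above, the following minimal Python sketch (all parameters, signal models and variable names are illustrative assumptions, not taken from the text) builds a set of realizations, fixes an instant $t_1$ and collects the sample space of the random variable $X(t_1) \equiv X_1$:

```python
# A minimal sketch of the ensemble description: build realizations
# x^(j)(t), fix an instant t_1, and collect the sample space of X(t_1).
import numpy as np

rng = np.random.default_rng(0)
n_real, n_t = 2000, 1024                  # realizations, time samples
t = np.linspace(0.0, 10.0, n_t)
phases = rng.uniform(0.0, 2.0 * np.pi, n_real)
# Random-phase harmonic plus noise: an arbitrary, illustrative process.
x = np.sin(2.0 * np.pi * 1.5 * t[None, :] + phases[:, None]) \
    + 0.3 * rng.standard_normal((n_real, n_t))

x1 = x[:, 512]                            # values of all realizations at t_1
xs = np.sort(x1)                          # empirical F_X(x), cf. (A.2), n = 1
F = np.arange(1, n_real + 1) / n_real
print(f"E[X1] = {x1.mean():.3f}, Var[X1] = {x1.var():.3f}")
```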
For a stationary process the joint statistics do not depend on the absolute time instants; in particular, the autocorrelation function depends only on the time lag $\tau = t_k - t_j$:

$$R_X(t_j, t_k) = \mathrm{E}\big[X_j X_k\big] = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} x_j\, x_k\, p(x_j, x_k, \tau)\, dx_j\, dx_k = R_X(\tau)\,. \tag{A.6}$$
At zero lag, the autocorrelation equals the mean square value of the process:

$$R_X(0) = \mathrm{E}\big[X^2\big] = \varphi_X^2\,; \tag{A.7}$$
the autocorrelation coefficient is then defined as

$$\rho_X(\tau) = \frac{R_X(\tau) - \mu_X^2}{\sigma_X^2}\,. \tag{A.10}$$
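The ensemble averages above can be approximated numerically. The following sketch (illustrative parameters; the random-phase harmonic is an assumed stationary ensemble with $\mu_X = 0$) estimates $R_X(\tau)$ by averaging over realizations and then forms the coefficient of (A.10):

```python
# Estimate R_X(tau) by ensemble averaging, then the autocorrelation
# coefficient of (A.10). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_real, n_t, dt = 5000, 600, 0.01
t = np.arange(n_t) * dt
phases = rng.uniform(0.0, 2.0 * np.pi, n_real)
x = np.sin(2.0 * np.pi * 2.0 * t[None, :] + phases[:, None])

j = 100                                   # reference instant t_j
mu = x[:, j].mean()                       # ensemble mean, about 0 here
var = x[:, j].var()                       # ensemble variance sigma_X^2
k = np.arange(300)                        # lag indices, tau = k * dt
R = np.array([(x[:, j] * x[:, j + m]).mean() for m in k])  # E[X_j X_k]
rho = (R - mu**2) / var                   # (A.10)
print(rho[0])                             # rho(0) = 1; here rho ~ cos(2*pi*2*tau)
```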
For a stationary Gaussian process, the joint probability density of the random variables $X_j$ and $X_k$ takes the form

$$p(x_j, x_k, \tau) = \frac{1}{2\pi\sigma_X^2\sqrt{1-\rho_X^2(\tau)}}\, \exp\!\left[-\,\frac{(x_j-\mu_X)^2 + (x_k-\mu_X)^2 - 2\,\rho_X(\tau)\,(x_j-\mu_X)(x_k-\mu_X)}{2\,\sigma_X^2\big(1-\rho_X^2(\tau)\big)}\right]. \tag{A.11}$$
If the time average computed over a single realization of the zero-mean process¹ coincides with the corresponding ensemble average, the process is ergodic with respect to the autocorrelation function. An ergodic process is also stationary but, in general, the converse does not hold.
¹ This does not represent a restrictive condition since, if necessary, it is possible to separate the mean value of the signal and statistically characterize the time-dependent fluctuation only.
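A sketch of this property follows (illustrative parameters; the random-phase harmonic is ergodic in correlation by construction, so the two averages should agree):

```python
# Contrast the ensemble average of x(t) x(t+tau) with the time average
# computed over a single realization: agreement indicates ergodicity.
import numpy as np

rng = np.random.default_rng(2)
n_real, n_t, dt = 2000, 4000, 0.01
t = np.arange(n_t) * dt
phases = rng.uniform(0.0, 2.0 * np.pi, n_real)
x = np.sin(2.0 * np.pi * 2.0 * t[None, :] + phases[:, None])

k = 30                                              # lag in samples
R_ens = (x[:, 100] * x[:, 100 + k]).mean()          # ensemble average
R_time = (x[0, :-k] * x[0, k:]).mean()              # time average, one record
print(f"ensemble: {R_ens:.4f}   time: {R_time:.4f}")
```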
The spectral power density (S.P.D.) of the process can then be defined as

$$S_X(f) = \lim_{T\to\infty} \frac{1}{2T}\, \mathrm{E}\Big[\tilde{x}^{(j)}(f,T)\, \tilde{x}^{*(j)}(f,T)\Big] = \lim_{T\to\infty} \frac{1}{2T}\, \mathrm{E}\Big[\big|\tilde{x}^{(j)}(f,T)\big|^2\Big], \tag{A.16}$$
in which $\tilde{x}^{*(j)}(f,T)$ stands for the complex conjugate of the finite FT and the stochastic average is performed over the transforms of all the realizations. Notice that, by definition, the S.P.D. is a real, symmetric function. Moreover, the limit in expression (A.16) converges to a finite value provided that the autocorrelation of the process is a piecewise continuous, bounded and absolutely integrable function. Indeed, it can be shown that
$$\begin{aligned} S_X(f) &= \int_{-\infty}^{\infty} R_X(\tau)\, e^{-i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_X(\tau)\, \cos(2\pi f \tau)\, d\tau \\ R_X(\tau) &= \int_{-\infty}^{\infty} S_X(f)\, e^{i 2\pi f \tau}\, df = \int_{-\infty}^{\infty} S_X(f)\, \cos(2\pi f \tau)\, df \end{aligned} \tag{A.17}$$
that is, the S.P.D. is the FT of the autocorrelation function (Wiener-Khinchine theorem). Hence, considering such relations, if $\tau = 0$ one has
$$R_X(0) = \sigma_X^2 = \int_{-\infty}^{\infty} S_X(f)\, df\,, \tag{A.18}$$
meaning that the area under the curve $S_X(f)$ is the process variance. It is worth noticing that a stationary process, similarly to a periodic signal, possesses infinite energy in the time interval $(-\infty, +\infty)$; it can be computed as

$$W_X = \int_{-\infty}^{\infty} \mathrm{E}\big[X^2(t)\big]\, dt = \int_{-\infty}^{\infty} \sigma_X^2\, dt = \int_{-\infty}^{\infty} \mathrm{E}\Big[\big|\tilde{X}(f)\big|^2\Big]\, df = +\infty\,, \tag{A.19}$$
recalling that the process has zero mean and making use of Parseval's theorem. The quantity $\mathrm{E}\big[|\tilde{X}(f)|^2\big]$ is a specific energy with respect to the frequency unit (spectral energy density) of the process. Therefore, by using the finite FT, dividing by the observation time $2T$ and considering the limit as $T \to \infty$, one obtains the spectral power density as defined in expression (A.16).
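A numerical analogue of definition (A.16) can be sketched as follows (illustrative parameters; the record spans $(0, T)$ rather than $(-T, T)$, hence the normalization by $T$, and the assumed ensemble is white noise). The area under the one-sided estimate should recover the variance, cf. (A.18):

```python
# Average squared finite FTs over realizations, cf. (A.16), then check
# that the area under the one-sided S.P.D. recovers the variance, (A.18).
import numpy as np

rng = np.random.default_rng(3)
n_real, n_t, dt = 500, 4096, 0.01
T = n_t * dt
x = rng.standard_normal((n_real, n_t))       # white-noise ensemble

X = np.fft.rfft(x, axis=1) * dt              # finite FTs of the realizations
f = np.fft.rfftfreq(n_t, dt)
G = 2.0 * (np.abs(X) ** 2).mean(axis=0) / T  # one-sided estimate, cf. (A.20)

print(f"area under G_X(f): {np.trapz(G, f):.3f}")
print(f"process variance : {x.var():.3f}")
```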
In the engineering context, the unilateral version of the S.P.D. is often given:

$$G_X(f) = \begin{cases} 0 & \text{if } f < 0 \\ 2\,S_X(f) & \text{if } f \ge 0\,. \end{cases} \tag{A.20}$$
Introducing the spectral moments $\lambda_{i,X} = \int_0^\infty f^i\, G_X(f)\, df$, the zero-order moment is equal to the variance, while $\lambda_{1,X}$ is the static moment of the area under $G_X(f)$. The centroidal frequency of the spectrum $G_X(f)$ can be computed as

$$f_{1,X} = \frac{\lambda_{1,X}}{\sigma_X^2}\,. \tag{A.22}$$
The radius of gyration with respect to the axis $f = 0$ is

$$f_{2,X}^2 = \frac{\lambda_{2,X}}{\sigma_X^2}\,, \tag{A.23}$$
in which $\lambda_{2,X}$ is the moment of inertia. Then, the centroidal radius of gyration is given by the expression

$$f'_{2,X} = \sqrt{\frac{\int_0^\infty \big(f - f_{1,X}\big)^2\, G_X(f)\, df}{\lambda_{0,X}}} = f_{2,X}\, \sqrt{1 - \frac{f_{1,X}^2}{f_{2,X}^2}}\,. \tag{A.24}$$
If $f'_{2,X} = 0$ the realizations of the process are single harmonics (sinusoidal process) and, as a consequence, the resulting S.P.D. is a Dirac delta distribution; on the contrary, if $f'_{2,X} = f_{2,X}$, the process is called white noise (white random process), which is characterized by uniform energy contributions with respect to frequency, so that $G_X(f)$ turns out to be constant. Moreover, narrow-band and wide-band processes are defined by S.P.D. functions which can be written as

$$G_X(f) = \begin{cases} G_0 & \text{if } f_1 \le f \le f_2 \\ 0 & \text{otherwise}\,, \end{cases} \tag{A.25}$$
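For the band-limited S.P.D. of (A.25), the spectral moments and the frequencies of (A.22)–(A.24) are easily computed numerically; the following sketch uses illustrative band edges and level:

```python
# Spectral moments and characteristic frequencies (A.22)-(A.24) for the
# ideal band-limited S.P.D. of (A.25). Band edges and level are illustrative.
import numpy as np

f1, f2, G0 = 2.0, 6.0, 1.0
f = np.linspace(0.0, 10.0, 10001)
G = np.where((f >= f1) & (f <= f2), G0, 0.0)

lam = [np.trapz(f**i * G, f) for i in range(3)]    # lambda_0 .. lambda_2
var = lam[0]
f1X = lam[1] / var                           # centroidal frequency, (A.22)
f2X = np.sqrt(lam[2] / var)                  # radius of gyration, (A.23)
f2Xp = f2X * np.sqrt(1.0 - (f1X / f2X)**2)   # centroidal radius, (A.24)
print(f"f1,X = {f1X:.3f}  f2,X = {f2X:.3f}  f'2,X = {f2Xp:.3f}")
```

For this rectangular spectrum, $f_{1,X}$ falls at the band center and $f'_{2,X}$ equals the bandwidth divided by $\sqrt{12}$, as expected for a uniform distribution of area.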
Expressing the autocorrelation functions of the process derivatives in terms of the corresponding spectral densities,

$$\begin{aligned} R_{\dot{X}}(\tau) &= \int_{-\infty}^{\infty} S_{\dot{X}}(f)\, e^{i 2\pi f \tau}\, df \\ R_{\ddot{X}}(\tau) &= \int_{-\infty}^{\infty} S_{\ddot{X}}(f)\, e^{i 2\pi f \tau}\, df\,, \end{aligned} \tag{A.30}$$
the relations between the spectral densities of the process derivatives can be determined:

$$\begin{aligned} S_{\dot{X}}(f) &= (2\pi f)^2\, S_X(f) \\ S_{\ddot{X}}(f) &= (2\pi f)^4\, S_X(f)\,. \end{aligned} \tag{A.31}$$
Hence, in general, the S.P.D. of the time derivative of a signal is characterized by a
shift of the energy contributions towards higher frequency values.
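The first of relations (A.31) can be checked numerically; the sketch below (illustrative parameters) compares the Welch estimate of the S.P.D. of a finite-difference derivative with $(2\pi f)^2$ times the S.P.D. of the band-limited signal itself:

```python
# Numerical check of (A.31): S_Xdot(f) vs (2*pi*f)^2 * S_X(f).
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs, n = 200.0, 2**16
# Band-limit white noise so the finite-difference derivative stays accurate.
x = signal.lfilter(*signal.butter(4, 20.0, fs=fs), rng.standard_normal(n))
xdot = np.gradient(x, 1.0 / fs)             # finite-difference derivative

f, Sx = signal.welch(x, fs=fs, nperseg=4096)
_, Sxdot = signal.welch(xdot, fs=fs, nperseg=4096)
band = (f > 1.0) & (f < 15.0)               # well below the Nyquist frequency
ratio = Sxdot[band] / ((2.0 * np.pi * f[band]) ** 2 * Sx[band])
print(f"median ratio over the band: {np.median(ratio):.3f}")   # close to 1
```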
When two processes $X(t)$ and $Y(t)$ are considered jointly, the resulting process is called bivariate (or multivariate, according to the number of corresponding realizations considered). Moreover, the cross spectral power density (C.S.P.D.) $S_{XY}(f)$ is related to the cross-correlation function by the expression
$$S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-i 2\pi f \tau}\, d\tau\,, \tag{A.33}$$
$R_{XY}(\tau)$, in the stationary case, being dependent only on the difference between the observation time instants:

$$R_{XY}(\tau) = \mathrm{E}\big[x^{(j)}(t)\, y^{(j)}(t+\tau)\big]\,. \tag{A.34}$$
In general, the cross-correlation function is not symmetric (with respect to the axis $\tau = 0$), but is such that $R_{YX}(\tau) = R_{XY}(-\tau)$. Then, since $R_{XY}$ is real, equation (A.33) implies that the C.S.P.D. has even real part and odd imaginary part; also, it can be verified that

$$S_{YX}(f) = S^*_{XY}(f)\,. \tag{A.35}$$
Similarly to the monovariate case, the Wiener-Khinchine relations (A.17) still hold:

$$\begin{aligned} S_{XY}(f) &= \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-i 2\pi f \tau}\, d\tau \\ R_{XY}(\tau) &= \int_{-\infty}^{\infty} S_{XY}(f)\, e^{i 2\pi f \tau}\, df\,. \end{aligned} \tag{A.36}$$
Consider now the case in which two corresponding realizations are identical but shifted by a time $\bar{t}$ one with respect to the other, meaning that, omitting for simplicity the superscript $(j)$, $y(t) = x(t + \bar{t})$. It then follows that

$$R_{XY}(\tau) = \mathrm{E}\big[x(t)\, x(t + \bar{t} + \tau)\big] = R_X(\bar{t} + \tau)\,, \tag{A.37}$$

and the C.S.P.D. can be written in polar form as

$$S_{XY}(f) = \big|S_{XY}(f)\big|\, e^{i\,\theta(f)}\,, \tag{A.38}$$

the phase $\theta(f)$ carrying the information about the time shift; for the shifted realizations considered here, $\theta(f) = 2\pi f \bar{t}$.
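This time-shift property underlies a common delay-estimation technique: the slope of the unwrapped C.S.P.D. phase recovers $\bar{t}$. A sketch with illustrative parameters, using scipy's Welch-averaged cross-spectrum estimate:

```python
# For y(t) = x(t + tbar), the C.S.P.D. phase is theta(f) = 2*pi*f*tbar;
# fit the unwrapped phase to recover tbar.
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
fs, n, shift = 100.0, 2**14, 25              # tbar = 25 / fs = 0.25 s
w = rng.standard_normal(n + shift)
x, y = w[:n], w[shift:shift + n]             # y(t) = x(t + tbar)

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)
theta = np.unwrap(np.angle(Sxy))
slope = np.polyfit(f[1:200], theta[1:200], 1)[0]
print(f"estimated tbar = {slope / (2.0 * np.pi):.3f} s")   # about 0.25
```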
In the engineering context the C.S.P.D. is often defined through the coherence function, which is expressed as

$$\mathrm{Coh}_{XY}(f) = \frac{S_{XY}(f)}{\sqrt{S_X(f)\, S_Y(f)}}\,. \tag{A.39}$$
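In practice the coherence is estimated from data; note that scipy's `coherence` returns the magnitude-squared variant $|S_{XY}|^2/(S_X S_Y)$, a common engineering form of the same quantity. A sketch with illustrative signals:

```python
# Estimate the (magnitude-squared) coherence of two partly correlated
# signals; for y = 0.8 x + 0.6 n with unit white x and n, the expected
# value is 0.64 across the band.
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
fs, n = 100.0, 2**14
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)

f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
print(f"coherence around mid-band: {Cxy[100:110].mean():.2f}")
```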
For a vector $\mathbf{X}(t)$ gathering $n$ stationary processes, the S.P.D. matrix can be defined analogously to (A.16):

$$\mathbf{S}_{\mathbf{X}}(f) = \lim_{T\to\infty} \frac{1}{2T}\, \mathrm{E}\Big[\tilde{\mathbf{x}}^{(j)}(f,T)\, \tilde{\mathbf{x}}^{*(j)T}(f,T)\Big] = \begin{bmatrix} S_{X_1}(f) & S_{X_1 X_2}(f) & \cdots & S_{X_1 X_n}(f) \\ S_{X_2 X_1}(f) & S_{X_2}(f) & \cdots & S_{X_2 X_n}(f) \\ \vdots & \vdots & \ddots & \vdots \\ S_{X_n X_1}(f) & S_{X_n X_2}(f) & \cdots & S_{X_n}(f) \end{bmatrix}. \tag{A.41}$$
Such a matrix has even real part and odd imaginary part and, by virtue of equation (A.35), is Hermitian. Moreover, it holds:
$$\mathbf{S}_{\mathbf{X}}(f) = \int_{-\infty}^{\infty} \mathbf{R}_{\mathbf{X}}(\tau)\, e^{-i 2\pi f \tau}\, d\tau\,. \tag{A.42}$$
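The S.P.D. matrix can be assembled from pairwise cross-spectrum estimates; the following sketch (illustrative signals) builds it with scipy and verifies the Hermitian property at every frequency:

```python
# Assemble the S.P.D. matrix of (A.41) from pairwise csd estimates and
# check it is Hermitian at each frequency line, cf. (A.35).
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs, n_t, n_sig = 100.0, 2**13, 3
x = rng.standard_normal((n_sig, n_t))
x[1] += 0.5 * x[0]                           # introduce some coupling

nper = 512
f, _ = signal.csd(x[0], x[0], fs=fs, nperseg=nper)
S = np.empty((len(f), n_sig, n_sig), dtype=complex)
for i in range(n_sig):
    for j in range(n_sig):
        _, S[:, i, j] = signal.csd(x[i], x[j], fs=fs, nperseg=nper)

herm = np.allclose(S, np.conj(np.transpose(S, (0, 2, 1))))
print(f"Hermitian at every frequency: {herm}")
```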
A.2 Extreme value estimate
It can be shown that the probability density which statistically describes the occurrence of maxima (or minima) of $y(t)$ assumes the form

$$p_{\max}(r) = \frac{1}{\sqrt{2\pi}} \left[\varepsilon\, e^{-r^2/2\varepsilon^2} + \sqrt{1-\varepsilon^2}\; r\, e^{-r^2/2} \int_{-\infty}^{r\sqrt{1-\varepsilon^2}/\varepsilon} e^{-x^2/2}\, dx\right], \tag{A.43}$$
in which $r = y_{\max}/\sigma_Y$ is the maximum in dimensionless form and $\varepsilon \in [0, 1]$ is defined as

$$\varepsilon^2 = \frac{\lambda_{0,Y}\, \lambda_{4,Y} - \lambda_{2,Y}^2}{\lambda_{0,Y}\, \lambda_{4,Y}}\,. \tag{A.44}$$
The parameter $\varepsilon$ provides information about the considered process: indeed, for a narrow-band process $\varepsilon$ is close to zero, and in the limit case of a sinusoidal process $\varepsilon = 0$. In particular, if $\varepsilon = 0$, expression (A.43) assumes the form of a Rayleigh density function:

$$p_{\max}(r) = r\, e^{-\frac{1}{2} r^2}\,. \tag{A.45}$$

Such a function is defined for $r \in [0, \infty)$; hence, no negative maxima or positive minima are present. On the contrary, if the process can be approximated as a white noise, then $\varepsilon$ tends to $2/3$; finally, when $\varepsilon \to 1$, expression (A.43) becomes a zero-mean Gaussian density function.
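Expression (A.43) is straightforward to evaluate numerically once the integral is written through the standard normal CDF, $\int_{-\infty}^{u} e^{-x^2/2} dx = \sqrt{2\pi}\,\Phi(u)$. A sketch checking the normalization for a few illustrative values of $\varepsilon$:

```python
# Evaluate the peak density (A.43) for several bandwidth parameters eps
# and verify that each curve integrates to one.
import numpy as np
from scipy.stats import norm

def p_max(r, eps):
    """Density of maxima, expression (A.43)."""
    a = eps * np.exp(-r**2 / (2.0 * eps**2))
    b = np.sqrt(1.0 - eps**2) * r * np.exp(-r**2 / 2.0)
    # The integral in (A.43) equals sqrt(2*pi) * Phi(r*sqrt(1-eps^2)/eps).
    Phi = norm.cdf(r * np.sqrt(1.0 - eps**2) / eps)
    return (a + b * np.sqrt(2.0 * np.pi) * Phi) / np.sqrt(2.0 * np.pi)

r = np.linspace(-5.0, 6.0, 4001)
for eps in (0.1, 2.0 / 3.0, 0.99):
    print(f"eps = {eps:.2f}: integral = {np.trapz(p_max(r, eps), r):.4f}")
```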
Consider now $N$ independent maxima of the signal $y^{(j)}(t)$; the probability that each of them is smaller than $y = r\sigma_Y$ is

$$P\Big(\max_{1 \le i \le N} r_i \le r\Big) = \left[\int_{-\infty}^{r} p_{\max}(\xi)\, d\xi\right]^N. \tag{A.46}$$
Hence, equation (A.46) provides the probability distribution $L_D$ of the largest maximum, that is, of the extreme value, which depends on the observation interval $D$. It can be proved that

$$L_D(r, D) = \exp\big(-f_r\, D\big)\,, \tag{A.48}$$
with

$$f_r = f_0\, e^{-\frac{1}{2} r^2}, \qquad f_0 = \sqrt{\frac{\lambda_{2,Y}}{\sigma_Y^2}} = f_{2,Y}\,, \tag{A.49}$$
where $f_0$ is the average frequency of zero crossings with positive (or negative) derivative, and can be viewed as the average frequency of the signal.
The expected peak factor then results

$$g = \sqrt{2 \ln(f_0 D)} + \frac{\gamma}{\sqrt{2 \ln(f_0 D)}}\,, \tag{A.50}$$

$\gamma = 0.5772$ being Euler's constant. Removing the hypothesis of zero mean, such an equation provides the effective peak factor if the considered process has non-zero mean, since the extreme value will be an absolute maximum or minimum according to the sign of the process mean. On the other hand, if the process has zero mean, the extreme value can be both positive and negative; as a consequence, the average frequency of zero crossings should account for both positive and negative maxima. Hence, formula (A.50) becomes

$$g = \sqrt{2 \ln(2 f_0 D)} + \frac{\gamma}{\sqrt{2 \ln(2 f_0 D)}}\,. \tag{A.51}$$
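As a closing worked example, the peak factor and the resulting expected extreme value are computed below for a zero-mean process with illustrative values of $f_0$, $D$ and $\sigma_Y$:

```python
# Peak factor for a zero-mean process, formula (A.51), and the expected
# extreme value g * sigma_Y. All numbers are illustrative.
import numpy as np

f0 = 2.0            # average frequency of the signal [Hz]
D = 600.0           # observation interval [s]
gamma = 0.5772      # Euler's constant
sigma_Y = 1.5       # standard deviation of the process

arg = np.sqrt(2.0 * np.log(2.0 * f0 * D))
g = arg + gamma / arg                        # (A.51)
print(f"g = {g:.3f}, expected extreme = {g * sigma_Y:.3f}")
```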