Extreme Value Theory: An Introduction
All content following this page was uploaded by Ana Ferreira on 02 August 2020.
With this webpage the authors intend to inform readers of errors found in the book after publication. We also give extensions of some of the material in the book. We acknowledge the contributions of many readers.
Data files
1 /14 11/21/2010
(correction 2nd part made by MFH)
Chapter 1: Limit Distributions and Domains of Attraction
Chapter 3: Estimation of the Extreme Value Index and Testing
page 213, line −12 (Remark 6.1.8): "Relations (6.1.15) does not hold for all Borel sets A_{x,y}." should read: "Relation (6.1.15) can be written

P(A^c_{x,y}) = exp{ −ν(A_{x,y}) }   (6.1.15*)

where P is the probability measure with distribution function G_0. Relation (6.1.15*) does not necessarily hold for Borel sets A not of the form (6.1.16)."
page 231, line 6: "converges to 2^{−1} …" should read "converges to g(x,y) := 2^{−1} …"
page 232, line 5: "r^3 q(θr, r(1−θ))." should read "r^3 q(θr, r(1−θ)) = q(θ, 1−θ) (cf. Coles and Tawn (1991))."
line 16: "the random vector (∨_{j=1}^d r_{1,j} V_j, …, ∨_{j=1}^d r_{d,j} V_j)" should read "the random vector (∨_{j=1}^d r_{1,j} V_j, ∨_{j=1}^d r_{2,j} V_j)"
line 18: "two-dimensional simple distribution function can be" should read "two-dimensional distribution function with Fréchet marginals can be"
line −9 (twice): "∨" should read "∧"
page 233, line 6: complement Exercise 6.14 with: "Let (X,Y) have a standard spherically symmetric Cauchy distribution. Show that the probability distribution of (|X|,|Y|) is in the domain of attraction of an extreme value distribution with uniform spectral measure Ψ. Show that the probability distribution of (X,Y) is also in a domain of attraction. Find the limit distribution."
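As a numerical companion to the added exercise (not part of the errata; the Gaussian-ratio construction and all names below are illustrative choices): a standard spherically symmetric Cauchy vector on the plane can be sampled as (G1, G2)/|G3| with independent standard normals, and its marginals are standard Cauchy, so x·P(|X| > x) → 2/π — the regular variation of index 1 underlying the domain-of-attraction claim.

```python
import numpy as np

def spherical_cauchy(n, rng):
    """Sample n points from the standard spherically symmetric Cauchy
    law on R^2, via the multivariate-t(1) construction: a standard
    bivariate normal divided by an independent |N(0,1)|."""
    g = rng.standard_normal((n, 3))
    return g[:, :2] / np.abs(g[:, 2:3])

# The marginal of this vector is standard Cauchy, hence its tail is
# regularly varying of index 1: x * P(|X| > x) -> 2/pi as x -> infinity.
```

This checks only the heavy-tail ingredient; the spectral-measure computation is the substance of the exercise.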
page 263, line −12: "=" should read "→"
page 265, line 18: "… = x + y − L(x,y)." should read "… = x + y − L(x,y) = R(x,y)."
line −10: "… does not imply asymptotic independence." should read "… does not imply asymptotic dependence."
page 268, line −2: "E W(x_1, …, x_d) W(x_1, …, x_d) = μ(R(x_1, …, x_d) ∩ R(x_1, …, x_d))" should read "E W(x_1, …, x_d) W(y_1, …, y_d) = μ(R(x_1, …, x_d) ∩ R(y_1, …, y_d))"
page 269, line 4: "and N is a standard normal random variable." should read "and N indicates a normal probability distribution."
"All simple max-stable processes in C^+[0,1] can be generated in the following way." should read "All simple max-stable processes η in C^+[0,1] can be generated in the following way."
line −4: "stochastic processes V_1, V_2, … in C^+[0,1]" should read "stochastic processes V_1, V_2, … in C^+[0,1] := { f ∈ C[0,1] : f ≥ 0 }"
page 315, line 5: "Theorem 9.6.1 (Resnick and Roy (1991))" should read "Theorem 9.6.1 (Resnick and Roy (1991) and de Haan (1984))"
page 320, line 11: "… of the theorem is easy. Next we turn …" should read:

"… of the theorem is easy. ■

Corollary 9.6.8A  Let {(Z_i, T_i)}_{i=1}^∞ be a realization of a Poisson point process on (0, ∞] × ℝ with mean measure (dr/r²) × dt. For any simple max-stable process η in C^+(ℝ) there exist functions f_s, s ∈ ℝ, with

∫_{−∞}^{∞} f_s(t) dt = 1,   ∫_{−∞}^{∞} sup_{s∈I} f_s(t) dt < ∞ for each compact interval I,   (9.6.7A)

such that

{ η(s) }_{s∈ℝ} =_d { ∨_{i=1}^∞ Z_i f_s(T_i) }_{s∈ℝ}.   (9.6.8A)

Conversely, every process of the form exhibited at the right-hand side of (9.6.8A), with the stated conditions, is a simple max-stable process in C^+(ℝ).

Proof. Let H be a probability distribution function with a density H′ that is positive for all real x. With the functions f_s from Theorem 9.6.7 define the functions f̃_s(t) := f_s(H(t)) H′(t). Since for any s_1, s_2, …, s_d ∈ ℝ and x_1, x_2, …, x_d positive

∫_{−∞}^{∞} max_{1≤i≤d} f̃_{s_i}(t)/x_i dt = ∫_0^1 max_{1≤i≤d} f_{s_i}(t)/x_i dt,

the representation of the corollary follows easily from that of Theorem 9.6.8. ■

Next we turn …"
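A small simulation sketch of the representation (9.6.8A) (not part of the errata; the Gaussian kernel f_s(t) = φ(t − s), the window [−20, 20], and the series truncation are assumptions of convenience). Because ∫ f_s(t) dt = 1 as in (9.6.7A), each marginal η(s) should be standard Fréchet, P(η(s) ≤ x) = exp(−1/x).

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def phi(t):
    # Standard normal density; as the kernel f_s(t) = phi(t - s) it
    # integrates to 1, matching condition (9.6.7A).
    return np.exp(-0.5 * t * t) / SQRT_2PI

def simulate_eta(s_grid, rng, n_points=10_000, t_range=20.0):
    """One (truncated) sample path of eta(s) = sup_i Z_i * f_s(T_i).

    The Poisson point process with mean measure (dr/r^2) x dt is realized
    on (0, inf] x [-t_range, t_range]: with unit-rate arrival times
    Gamma_1 < Gamma_2 < ..., the magnitudes Z_i = 2*t_range/Gamma_i and
    independent uniform locations T_i have the required intensity; the
    series is cut off after n_points terms.
    """
    gamma = np.cumsum(rng.exponential(size=n_points))
    z = 2.0 * t_range / gamma
    t = rng.uniform(-t_range, t_range, size=n_points)
    return np.max(z[None, :] * phi(t[None, :] - s_grid[:, None]), axis=1)
```

With Gaussian kernels this is the classical moving-maxima construction; any family f_s satisfying (9.6.7A) would serve equally well.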
lines 14, 16, 20, 24: "[0,1]" should read "ℝ"
lines 18, 22, 25: "∫_0^1" should read "∫_{−∞}^{∞}"
lines 23, 24: "∫_0" should read "∫_{−∞}"
line 26: "distributions." should read "distributions. ■"
page 323, line −1 (twice), lines −6 and −8: "W" should read "W*"
line −6: "… independent Brownian motions." should read "… independent two-sided Brownian motions (cf. correction to Example 9.4.6)."
line −5: "x" should read "y"
page 325, lines 10, 11, 12 (each twice): "W" should read "W*"
line −6: "Hence for s_1 < 0 < s_2" should read "Hence for s_1 < 0 < s_2 and in fact for all real s_1, s_2"
page 326, line 2: "Let W be Brownian motion independent of Y. Consider the process …" should read:

"Let W* be two-sided Brownian motion:

W*(s) := W^+(s) for s ≥ 0,   W*(s) := W^−(−s) for s < 0,

where W^+ and W^− are independent Brownian motions. Let Y and W* be independent and consider the process

X(s) = 1_{s≥0} e^{−s/2} ( N + ∫_0^s e^{u/2} dW^+(u) ) + 1_{s<0} e^{s/2} ( N + ∫_0^{−s} e^{u/2} dW^−(u) )

with N a standard normal random variable and W^+ and W^− standard Brownian motions. Since for s ≠ t the random vector (X(s), X(t)) is multivariate normal with correlation coefficient less than 1, Example 6.2.6 tells us that relation (9.5.1) cannot hold for any max-stable process in C[0,1]: since Y has continuous sample paths, Y(s) and Y(t) cannot be independent. Hence we compress space in order to create more dependence, i.e., we consider the convergence of

{ ∨_{i=1}^n b_n ( X_i(s/b_n²) − b_n ) }_{s∈ℝ}   (9.8.4)

in C[−s_0, s_0] for arbitrary s_0 > 0, where X_1, X_2, … are independent and identically distributed copies of X and the b_n are the appropriate normalizing constants for the standard one-dimensional normal distribution, e.g. (cf. Example 1.1.7)

b_n = ( 2 log n − log log n − log(4π) )^{1/2}.

Then

b_n ( X(s/b_n²) − b_n ) = e^{−s/(2b_n²)} ( b_n(N − b_n) + b_n ∫_0^{s/b_n²} e^{u/2} dW^±(u) + (1 − e^{s/(2b_n²)}) b_n² ).

Uniformly for |s| ≤ s_0,

e^{−s/(2b_n²)} = 1 + O(1/b_n²).

Further, since e^{u/2} = 1 + O(1/b_n²) for |u| < s_0/b_n²,

b_n ∫_0^{s/b_n²} e^{u/2} dW^±(u) = ( 1 + O(1/b_n²) ) b_n W^±(s/b_n²).

Finally, for |s| ≤ s_0,

( 1 − e^{s/(2b_n²)} ) b_n² = −s/2 + O(1/b_n²).

It follows that

b_n ( X(s/b_n²) − b_n ) = ( 1 + O(1/b_n²) ) ( b_n(N − b_n) + b_n W^±(s/b_n²) − s/2 ) + O(1/b_n²).

We write W(s) := b_n W^±(s/b_n²). Then W is also Brownian motion. We have

{ ∨_{i=1}^n b_n ( X_i(s/b_n²) − b_n ) }_{s∈ℝ} = ( 1 + O(1/b_n²) ) { ∨_{i=1}^n ( b_n(N_i − b_n) + W_i(s) − s/2 ) }_{s∈ℝ} + O(1/b_n²).

Hence the limit of (9.8.4) is the same as that of

{ ∨_{i=1}^n ( b_n(N_i − b_n) + W_i(s) − s/2 ) }_{s∈ℝ},   (9.8.5)

namely

{ ∨_{i=1}^∞ ( log Z_i + W_i(s) − s/2 ) }_{s∈ℝ}

with {Z_i} the point process from (9.8.1). Note that the point process {Z_i} and the random processes W_i are independent."
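The corrected normalizing constants b_n = (2 log n − log log n − log(4π))^{1/2} can be checked by simulation (a hedged sketch, not from the book; function names and sample sizes are arbitrary). For M_n the maximum of n i.i.d. standard normals, b_n(M_n − b_n) is approximately standard Gumbel, so P(b_n(M_n − b_n) ≤ 0) ≈ e^{−1}; the approximation improves only at rate 1/log n.

```python
import numpy as np

def normal_b_n(n):
    """b_n = (2 log n - log log n - log(4 pi))**(1/2), cf. Example 1.1.7."""
    L = np.log(n)
    return np.sqrt(2.0 * L - np.log(L) - np.log(4.0 * np.pi))

def normalized_maxima(n, reps, rng):
    """Monte Carlo sample of b_n * (M_n - b_n), with M_n the maximum of
    n i.i.d. standard normal variables (one block per replication to
    keep memory modest)."""
    bn = normal_b_n(n)
    return np.array([bn * (rng.standard_normal(n).max() - bn)
                     for _ in range(reps)])
```

A histogram of the output against the Gumbel density e^{−x} exp(−e^{−x}) makes the slow convergence visible.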
page 328, line −7: "independent of V" should read "independent of Y"
page 329, line 1: "u > 0" should read "x > 0"
page 339, line −3: "Theorem 10.4.1" should read "Theorem 10.4.1 (de Haan and Lin (2003))"
page 341, line −3: "ζ_{n−k+1,n}" should read "ζ_{n−k,n}"
page 352, line 6: "υ_n(S)" should read "υ(S)"
line −9: "(B.1.24)" should read "(B.2.13)"
page 376, line −9: "Hence f(t) is bounded for t ≥ t_0." should read "Hence f(t) is locally bounded for t ≥ t_0."
page 379, line 3: "f(∞) − f(t) =" should read "f(∞) − f(t) ∼"
R.L. Smith and I. Weissman: Maximum likelihood estimation of the lower tail of a probability distribution. J. Royal Statist. Soc. Ser. B 47, 285–298 (1985).