U3 – Means and Statistical Expectations (Médias e Esperanças Estatísticas)


70 Introduction to discrete random variables

[Figure 2.5: The Poisson(λ) pmf pX(k) = λ^k e^{−λ}/k! for λ = 10, 30, and 50, from left to right, respectively.]

Solution. Let X denote the number of hits. Then

P(X ≥ 1) = 1 − P(X = 0) = 1 − e^{−λ} = 1 − e^{−2} ≈ 0.865.

Similarly,

P(X ≥ 2) = 1 − P(X = 0) − P(X = 1)
         = 1 − e^{−λ} − λ e^{−λ}
         = 1 − e^{−λ}(1 + λ)
         = 1 − e^{−2}(1 + 2) ≈ 0.594.

2.3 Multiple random variables

If X and Y are random variables, we use the shorthand

{X ∈ B, Y ∈ C} := {ω ∈ Ω : X(ω) ∈ B and Y(ω) ∈ C},

which is equal to

{ω ∈ Ω : X(ω) ∈ B} ∩ {ω ∈ Ω : Y(ω) ∈ C}.

Putting all of our shorthand together, we can write

{X ∈ B, Y ∈ C} = {X ∈ B} ∩ {Y ∈ C}.

We also have

P(X ∈ B, Y ∈ C) := P({X ∈ B, Y ∈ C}) = P({X ∈ B} ∩ {Y ∈ C}).

2ELE105 – Introduction to Stochastic Processes
U3 – Means and Statistical Expectations

Taufik Abrão
Department of Electrical Engineering
State University of Londrina, PR – Brazil

Graduate Program in Electrical Engineering (Programa de Pós-Graduação em Eng. Elétrica) at UEL, 2014

Londrina, May 3, 2018
Contents

• Means and Expectations
  – Statistical Expectations
  – Sample Mean and Variance
  – Expectations – n-Dimensional Random Variables
• Statistical Moments
• Moment Generating Function
  – Definition
  – Characteristic Function
  – Central Limit Theorem (CLT)
  – Chernoff Upper Bound on the Tail Probability
• U3 – Exercises



Means and Expectations



Statistical Expectations

• Means are extremely important in the study of random variables.
• For example, suppose we wish to determine the mean height of a population. Assume we can obtain the height of all N individuals, with centimeter precision.
• We then have a finite number, n, of possible height values xi, with i ranging from 1 to n.
• If the number of people with height xi is Ni ⇒ the mean height is:

  x̄ = (N1 x1 + N2 x2 + … + Nn xn)/N = (N1/N) x1 + (N2/N) x2 + … + (Nn/N) xn,

  with N = Σi Ni.
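The frequency-weighted average above can be sketched in a few lines of Python; the heights and counts here are hypothetical illustration values, not data from the slides:

```python
# Hypothetical data: n = 3 possible heights x_i (cm) and their counts N_i.
heights = [160, 170, 180]
counts = [2, 5, 3]

N = sum(counts)  # total population size

# Mean height as the frequency-weighted sum (N_i/N) * x_i.
mean_height = sum((Ni / N) * xi for Ni, xi in zip(counts, heights))
print(mean_height)  # → 171.0
```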



• In the limit, as N → ∞, the ratio Ni/N → PX(xi). Thus,

  X̄ = E{X} = Σ_{i=1}^{n} xi PX(xi).    (1)

• The mean is also called the expectation or expected value of a r.v.
• For continuous r.v.'s, the expected value is computed analogously:

  X̄ = E{X} = ∫_{−∞}^{∞} x fX(x) dx,    (2)

which is equivalent to:

  E{X} = ∫_{0}^{∞} x fX(x) dx + ∫_{−∞}^{0} x fX(x) dx
       = ∫_{0}^{∞} [1 − FX(x)] dx − ∫_{−∞}^{0} FX(x) dx.
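The CDF form of the mean can be checked numerically; a minimal sketch, assuming a hypothetical X ~ U[−1, 2] (whose mean is 0.5) and a plain trapezoidal rule:

```python
# Numerical check of E{X} = ∫_0^∞ [1 − F(x)] dx − ∫_{−∞}^0 F(x) dx
# for X ~ U[−1, 2], whose mean is (−1 + 2)/2 = 0.5.

def F(x):  # CDF of U[−1, 2]
    if x < -1: return 0.0
    if x > 2: return 1.0
    return (x + 1) / 3

def trapz(g, a, b, n=10000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

# Both integrands vanish outside [−1, 2], so finite limits suffice.
mean = trapz(lambda x: 1 - F(x), 0, 2) - trapz(F, -1, 0)
print(round(mean, 6))  # ≈ 0.5
```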
⇒ the last equality is obtained via integration by parts, assuming that E{|X|} < ∞.
⇒ The expectation of the r.v. X is the difference (A2 − A1) of the shaded areas:

[Figure 4.2: The expectation of the RV X is the difference of the shaded regions determined by FX(x), for x < 0 and x > 0.]

Observations:
1. if fX(x) is an even function, then E{X} = 0;
2. if fX(x) is symmetric about x = a, then E{X} = a;
3. the r.v. X may never actually assume the value X̄ in any experiment.
4. When X is a nonnegative r.v., the following holds:

  E{X} = ∫_{0}^{∞} [1 − FX(t)] dt

  ⇒ X continuous, nonnegative; FX(x): cumulative distribution function of X.

5. If X is a nonnegative, integer-valued r.v.:

  E{X} = Σ_{k=0}^{∞} P[X > k]
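Observation 5 can be verified for a concrete distribution; a sketch assuming a hypothetical geometric r.v. on {0, 1, 2, …} with parameter q = 0.25 (mean (1 − q)/q = 3), truncating the infinite sums where the tail is negligible:

```python
# Check E{X} = Σ_{k=0}^∞ P[X > k] for a geometric r.v. with
# pmf P(X = k) = q(1 − q)^k, k = 0, 1, 2, …, whose mean is (1 − q)/q.
q = 0.25
K = 200  # truncation point; the tail beyond K is negligible

pmf = [q * (1 - q) ** k for k in range(K)]
direct = sum(k * p for k, p in zip(range(K), pmf))    # Σ k · P(X = k)
tail = sum((1 - q) ** (k + 1) for k in range(K))      # Σ P(X > k)

print(round(direct, 6), round(tail, 6))  # both ≈ 3.0
```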


Example – Statistical expectation of a uniform r.v. For a r.v. with uniform density ∼ U[a, b],

  E{X} = X̄ = ∫_{a}^{b} x · 1/(b − a) dx = (b² − a²)/(2(b − a)) = (b + a)/2

⇒ exactly the midpoint of the interval [a, b].

Example – Statistical expectation of an exponential r.v. The time X between customer arrivals at a service station (train, bank, etc.) follows an exponential distribution ∼ exp(λ).
⇒ Find the mean interarrival time.
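The midpoint result for the uniform case can be seen by simulation; a Monte Carlo sketch with arbitrary illustration values a = 2, b = 10:

```python
import random

# Monte Carlo sketch: the sample average of U[a, b] draws approaches (a + b)/2.
random.seed(1)
a, b = 2.0, 10.0
n = 200_000
avg = sum(random.uniform(a, b) for _ in range(n)) / n
print(avg)  # close to (a + b)/2 = 6.0
```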


Example – Statistical expectation of an exponential r.v. The time X between customer arrivals at a service station (train, bank, etc.) follows an exponential distribution ∼ exp(λ).
Find the mean interarrival time.

From the exponential PDF, it follows that:

  E{X} = ∫_{0}^{∞} x · fX(x) dx = ∫_{0}^{∞} t · λe^{−λt} dt

Using integration by parts, ∫u dv = uv − ∫v du, with u = t and dv = λe^{−λt} dt:

  E{X} = [−t e^{−λt}]_{0}^{∞} + ∫_{0}^{∞} e^{−λt} dt
       = −lim_{t→∞} t e^{−λt} + 0 + [−e^{−λt}/λ]_{0}^{∞}
       = −0 + 0 − lim_{t→∞} e^{−λt}/λ + 1/λ = 1/λ

⇒ For this example it is quicker to compute (from Observation 4 above):

  E{X} = ∫_{0}^{∞} [1 − FX(t)] dt = ∫_{0}^{∞} e^{−λt} dt = 1/λ

Remember:
⇒ λ = customer arrival rate [customers/second]
⇒ E{X} = 1/λ = mean time between arrivals [seconds/customer]
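The interarrival interpretation can be simulated directly; a sketch assuming an illustrative rate λ = 2 arrivals/second, so the mean gap should be 1/λ = 0.5 s:

```python
import random

# Monte Carlo sketch: interarrival times drawn from exp(λ) average to 1/λ.
random.seed(7)
lam = 2.0
n = 200_000
mean_gap = sum(random.expovariate(lam) for _ in range(n)) / n
print(mean_gap)  # close to 1/λ = 0.5
```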
Example – Statistical expectation of a Gaussian r.v.
Let X be a Gaussian r.v.; its expectation is given by:

  E{X} = X̄ = ∫_{−∞}^{∞} x · (1/(σ√(2π))) e^{−(x−m)²/(2σ²)} dx = m

Hint: adopt z = (x − m)/σ and use the result illustrated in the figure below:

[Figure: plots of e^{−0.5z²} and z·e^{−0.5z²} for z ∈ [−8, 8]; z·e^{−z²/2} is an odd function.]
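The hint can be checked numerically: the odd integrand z·e^{−z²/2} integrates to zero over a symmetric interval, while the even Gaussian kernel does not. A sketch using a plain trapezoidal rule on [−8, 8] (where both integrands are effectively zero at the endpoints):

```python
import math

def trapz(g, a, b, n=20000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n)))

odd_part = trapz(lambda z: z * math.exp(-0.5 * z * z), -8, 8)
even_part = trapz(lambda z: math.exp(-0.5 * z * z), -8, 8)
print(odd_part, even_part)  # ≈ 0.0 and ≈ √(2π) ≈ 2.5066
```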


Sample Mean and Variance

• Consider the situation of selecting, randomly and independently, n samples from a population whose distribution has mean µ and variance σ².
• The set of samples (x1, x2, …, xn) is called a random sample of size n.
⇒ Random samples are an important foundation of statistical theory, because a majority of the results known in mathematical statistics rely on assumptions that are consistent with a random sample.
• How can we estimate the population mean µX and the population variance σX² from the samples?


• sample mean (or empirical average) of x:

  x̄ = (1/n) Σ_{i=1}^{n} xi    (3)

• Each sample xi is viewed as a "realization" (instance) of the r.v. Xi ⇒ the sample mean x̄ is an instance of the sample-mean r.v. X̄:

  X̄ = (1/n) Σ_{i=1}^{n} Xi    (4)

⇒ the term 'sample mean' is used in statistical theory to describe the r.v. X̄, but the observable quantity is its realization x̄.
• Taking the expectation of both sides of (4):

  E[X̄] = (1/n) Σ_{i=1}^{n} E[Xi]    (5)
• Since X1, X2, …, Xn are independent and identically distributed (i.i.d.) random variables, we have:

  E[X1] = E[X2] = … = E[Xn] = µX

• Substituting into (5) gives the statistical expectation of the sample-mean r.v.:

  E[X̄] = µX    (6)

⇒ confirming that x̄ from (3) is an unbiased estimate of µX.
• An unbiased estimate is one that, on average, equals the value being estimated.


• Consider the variance of X̄:

  Var[X̄] = E[(X̄ − E[X̄])²]

where the argument is

  X̄ − E[X̄] = X̄ − µX = (1/n) Σ_{i=1}^{n} (Xi − µX) = (1/n) Σ_{i=1}^{n} Yi

The variance of X̄ can then be obtained:

  Var[X̄] = E[(X̄ − E[X̄])²] = E[((1/n) Σ_{i=1}^{n} Yi)²]    (7)
         = (1/n²) Σ_{i=1}^{n} E[Yi²] + (1/n²) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} E[Yi Yj]

and since the r.v.'s {Yi = Xi − µX; 1 ≤ i ≤ n} are


independent, with zero mean and variance σX², we have:

  Var[X̄] = σX²/n    (8)

⇒ the variance of the sample-mean r.v. is the population variance divided by the sample size.
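Both (6) and (8) can be illustrated by simulation: repeatedly form the sample mean of n i.i.d. draws and look at the mean and variance of those sample means. A sketch assuming an illustrative U[0, 1] population (µ = 0.5, σ² = 1/12) with n = 25:

```python
import random
import statistics

# Monte Carlo sketch of E[X̄] = µ and Var[X̄] = σ²/n.
# Population: U[0, 1], so µ = 0.5 and σ² = 1/12; with n = 25, σ²/n = 1/300.
random.seed(3)
n, reps = 25, 20_000
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

print(statistics.mean(means))      # close to µ = 0.5
print(statistics.variance(means))  # close to σ²/n = 1/300 ≈ 0.00333
```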


Expectations – n-Dimensional Random Variables

Mean, n-dimensional case

Let X be a random vector of dimension n; the mean is

  E{X} = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} x fX(x1, ···, xn) dx1 ··· dxn
       = [E{X1} E{X2} ··· E{Xn}]^T    (9)

where

  E{Xi} = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} xi fX(x1, ···, xn) dx1 ··· dxn

Note that the mean is also a vector.


  v1 = v(t1) = {v(t1, Ei), all i}
  v2 = v(t2) = {v(t2, Ei), all i}

[Figura 1: Sample functions v(t, Ei), i = 1, …, 4, of a Gaussian (AWGN) process, observed at the time instants t1 and t2. Mean of the random vector: E{V} = E[V1 V2 V3 V4]^T.]
Expected value of Y = g(X)

Let X and Y be r.v.'s and g(·) the mapping function. For discrete r.v.'s, from (1), the mean is:

  Ȳ = E{Y} = Σ_{i=1}^{n} yi PY(yi) = Σ_{i=1}^{n} g(xi) PX(xi)    (10)

For continuous r.v.'s, analogously, starting from (2):

  Ȳ = E{Y} = ∫_{−∞}^{∞} y fY(y) dy = ∫_{−∞}^{∞} g(x) fX(x) dx.    (11)
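Equation (10) says E{g(X)} can be computed from the pmf of X without deriving the pmf of Y. A discrete sketch, with a hypothetical pmf and g(x) = x², comparing both sides:

```python
from collections import defaultdict

# Sketch of (10): E{g(X)} from the pmf of X equals E{Y} from the pmf of Y = g(X).
pX = {-1: 0.2, 0: 0.3, 2: 0.5}  # hypothetical pmf of X

def g(x):
    return x * x

# Right-hand side of (10): Σ g(x_i) P_X(x_i)
rhs = sum(g(x) * p for x, p in pX.items())

# Left-hand side: push the pmf through g to get P_Y, then Σ y_i P_Y(y_i)
pY = defaultdict(float)
for x, p in pX.items():
    pY[g(x)] += p
lhs = sum(y * p for y, p in pY.items())

print(lhs, rhs)  # both ≈ 2.2
```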


Proof. For discrete r.v.'s the proof is immediate. For continuous variables, referring again to Fig. 2, we have:

  E{Y} = ∫_{−∞}^{∞} y fY(y) dy

where:

  y fY(y) dy = y P{y ≤ Y < y + dy} = y Σi P{xi ≤ X < xi + dxi}
             = Σi g(xi) fX(xi) dxi

• As dy sweeps the y axis, the corresponding dxi, which never overlap, cover the whole x axis.
• Expression (11) then follows.
• This result is interesting ⇒ it allows computing the mean of the r.v. Y without having to determine its probability density, which is, in general, very laborious.
[Figura 2: Example of the r.v. Y = g(X) for the case of 3 roots: the interval [y, y + ∆y] maps back onto the intervals [x1, x1 + ∆x1], [x2, x2 + ∆x2], and [x3, x3 + ∆x3] of the curve y = g(x). Note that, in this case, the root x2 is negative.]


Example – Mean and Mean-Square Value of a Transformed r.v.
Consider a sinusoidal generator with output A cos ωt. This output is sampled at random instants. The sampled output is then a r.v., which we will call x, and which can assume any value in the interval (−A, A).
Determine the mean, E{x}, and the mean-square value, E{x²}, of this sampled output.

If the output is sampled at a random instant, the r.v. x is a function of another r.v., t:

  x(t) = A cos ωt

Denoting θ = ωt, the variable θ is also random. Since the instant t is chosen at random, we may assume that θ has distribution ∼ U[0, 2π]. That said,

  x̄ = E{x} = ∫_{−∞}^{∞} x(θ) fθ(θ) dθ = (1/2π) ∫_{0}^{2π} A cos θ dθ = 0

and

  E{x²} = ∫_{−∞}^{∞} x²(θ) fθ(θ) dθ = (1/2π) ∫_{0}^{2π} A² cos² θ dθ = A²/2
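The randomly sampled sinusoid can be simulated directly; a sketch assuming an illustrative amplitude A = 2, so that E{x} = 0 and E{x²} = A²/2 = 2:

```python
import math
import random

# Monte Carlo sketch: sampling A·cos(θ) at a random phase θ ~ U[0, 2π]
# gives mean ≈ 0 and mean square ≈ A²/2.
random.seed(5)
A = 2.0
n = 400_000
samples = [A * math.cos(random.uniform(0, 2 * math.pi)) for _ in range(n)]

mean = sum(samples) / n
mean_square = sum(s * s for s in samples) / n
print(mean, mean_square)  # ≈ 0.0 and ≈ A²/2 = 2.0
```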


Example – Statistical Moments of a Gaussian r.v.
Let X be a r.v. whose density function is ∼ N(m, σ). Show that E[X] = m and var(X) = σ².

a) It suffices to show that E[X − m] = 0.

  E[X − m] = ∫_{−∞}^{∞} (x − m) fX(x) dx = ∫_{−∞}^{∞} (x − m) e^{−(x−m)²/(2σ²)}/(σ√(2π)) dx

Change of variable: y = (x − m)/σ ⇒ σ dy = dx

  ∴ E[X − m] = σ × (1/√(2π)) ∫_{−∞}^{∞} y e^{−y²/2} dy = 0
               (the integral is the first moment of N[0, 1])

b) Using the same approach for the second moment:

  E[(X − m)²] = ∫_{−∞}^{∞} (x − m)² fX(x) dx = ∫_{−∞}^{∞} (x − m)² e^{−(x−m)²/(2σ²)}/(σ√(2π)) dx
              = σ² ∫_{−∞}^{∞} ((x − m)/σ)² e^{−(x−m)²/(2σ²)}/(σ√(2π)) dx


With the same change of variable, this yields:

  ∴ E[(X − m)²] = σ² × (1/√(2π)) ∫_{−∞}^{∞} y² e^{−y²/2} dy = σ²
                  (the integral is the second moment of N[0, 1])

in which one can identify the second moment of the r.v. y in the integral above.


Example – Statistical Moment of Y = X².
Let Y = X², with X ∼ N[0; σ²]. Compute the expectation of Y.

  E{Y} = Ȳ = ∫_{−∞}^{∞} x² (1/(σ√(2π))) e^{−0.5(x/σ)²} dx

With z = x/σ, and assuming σ > 0, this results in:

  E{Y} = σ² ( (1/√(2π)) ∫_{−∞}^{∞} z² e^{−z²/2} dz )

The term in parentheses can be reduced to

  (1/√(2π)) ∫_{−∞}^{∞} e^{−z²/2} dz = 1

with the aid of integration by parts. Finally:

  E{Y} = E{X²} = σ²

More generally, for a r.v. X ∼ N[µ; σ²]:

  E{X²} = µ² + σ² and E{(X − µ)²} = σ²
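The general relation E{X²} = µ² + σ² can be illustrated by simulation; a sketch with illustrative values µ = 1, σ = 2, so that µ² + σ² = 5:

```python
import random

# Monte Carlo sketch: for X ~ N(µ, σ²), the second moment E{X²} ≈ µ² + σ².
random.seed(11)
mu, sigma = 1.0, 2.0
n = 400_000
second_moment = sum(random.gauss(mu, sigma) ** 2 for _ in range(n)) / n
print(second_moment)  # close to µ² + σ² = 5.0
```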
Expected Value of a Random Vector

Analogously, for the case of random vectors, with Y = g(X) a random vector of dimension n, and X a random vector of dimension m:

  E{Y} = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} g(x1, ···, xm) fX(x1, ···, xm) dx1 ··· dxm.    (12)


Mean of a Sum

Let g₁(X), g₂(X), ..., gₙ(X) be functions of the random variables Xᵢ; then

    \overline{g_1(X) + g_2(X) + \cdots + g_n(X)} = \overline{g_1(X)} + \overline{g_2(X)} + \cdots + \overline{g_n(X)}    (13)

The proof is immediate from (12).

Special cases:

a) Sum of two r.v.:

    \overline{aX + bY} = a\overline{X} + b\overline{Y}    (14)

b) Let Y = AX + b, with A a deterministic matrix of dimension n × m and b a deterministic vector of dimension n × 1, resulting in the mean of the random vector:

    E\{Y\} = A\,E\{X\} + b    (15)
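The linearity property (14) can be checked by simulation; a minimal sketch, where the distributions of X and Y (uniform and Gaussian) are illustrative assumptions:

```python
import random

random.seed(2)
N = 100_000
a, b = 2.0, -3.0

xs = [random.uniform(0, 1) for _ in range(N)]    # E{X} = 0.5
ys = [random.gauss(4.0, 1.0) for _ in range(N)]  # E{Y} = 4

lhs = sum(a * x + b * y for x, y in zip(xs, ys)) / N  # sample mean of aX + bY
rhs = a * sum(xs) / N + b * sum(ys) / N               # a*mean(X) + b*mean(Y)
print(lhs, rhs)  # both near 2*0.5 - 3*4 = -11
```

Note that the two estimates agree to floating-point precision, since averaging is itself a linear operation.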
Mean of a Product

For the product of two functions there is no result as simple as for the sum!

Special cases:

a) If g(X, Y) = g₁(X) g₂(Y), then:

    \overline{g_1(X) \, g_2(Y)} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g_1(x) \, g_2(y) \, f_{XY}(x, y) \, dx \, dy    (16)

b) If, in addition, X and Y are independent, then:

    \overline{g_1(X) \, g_2(Y)} = \int_{-\infty}^{\infty} g_1(x) f_X(x) \, dx \int_{-\infty}^{\infty} g_2(y) f_Y(y) \, dy = \overline{g_1(X)} \; \overline{g_2(Y)}    (17)


Example – Joint Density and Z = g(X, Y). Let g(x, y) = xy. Compute the expectation of Z = g(X, Y), knowing that the joint density function is given by:

    f_{XY}(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{(x-a)^2 + (y-b)^2}{2\sigma^2}}

Direct substitution into Eq. (12) gives

    E\{Z\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) \, f_{XY}(x, y) \, dx \, dy
           = \frac{1}{2\pi\sigma^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x \cdot y \cdot e^{-\frac{(x-a)^2 + (y-b)^2}{2\sigma^2}} \, dx \, dy
           = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x \, e^{-\frac{(x-a)^2}{2\sigma^2}} \, dx \; \times \; \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} y \, e^{-\frac{(y-b)^2}{2\sigma^2}} \, dy
           = a \cdot b    (see the result of the previous exercise)
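The result E{Z} = a·b can be verified numerically. The sketch below exploits the fact that the given f_XY factors into two independent Gaussian densities N(a, σ²) and N(b, σ²); the particular values of a, b, σ are arbitrary.

```python
import random

random.seed(4)
a, b, sigma = 1.5, -2.0, 1.0
N = 200_000

# f_XY factors, so X ~ N(a, sigma^2) and Y ~ N(b, sigma^2) drawn independently
est = sum(random.gauss(a, sigma) * random.gauss(b, sigma) for _ in range(N)) / N
print(est)  # near a*b = -3.0
```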


Conditional Expected Value

• The conditional expected value of a r.v. X, given an event M, is:

    E\{X|M\} = \int_{-\infty}^{\infty} x \, f_X(x|M) \, dx,    (18)

where f_X(x|M) is the conditional pdf.

• For example, for the event M : X ≥ α,

    E\{X \,|\, X \ge \alpha\} = \frac{\int_{\alpha}^{\infty} x \, f_X(x) \, dx}{\int_{\alpha}^{\infty} f_X(x) \, dx}    (19)
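A numerical illustration of (19), under the assumption that X is standard Gaussian: the conditional mean E{X | X ≥ α} then has the closed form φ(α)/(1 − Φ(α)) (the inverse Mills ratio), which a Monte Carlo estimate should reproduce.

```python
import random
import math

random.seed(5)
alpha = 1.0
N = 400_000

samples = [random.gauss(0.0, 1.0) for _ in range(N)]
tail = [x for x in samples if x >= alpha]
mc = sum(tail) / len(tail)  # Monte Carlo estimate of E{X | X >= alpha}

# Closed form for a standard Gaussian: phi(alpha) / (1 - Phi(alpha))
phi = math.exp(-alpha ** 2 / 2) / math.sqrt(2 * math.pi)
Phi = 0.5 * (1 + math.erf(alpha / math.sqrt(2)))
theory = phi / (1 - Phi)
print(mc, theory)  # both near 1.525
```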


Extension to a Random Vector Y

The previous result can be extended to a random vector Y of order n:

    E\{X|Y\} = \int_{-\infty}^{\infty} x \, f_X(x|Y) \, dx = \frac{\int_{-\infty}^{\infty} x \, f_{XY}(x, Y) \, dx}{f_Y(Y)}.    (20)

This is itself a r.v., a function of the random vector Y, whose mean can then be computed as:

    E\{E\{X|Y\}\} = \int_{-\infty}^{\infty} E\{X|y\} \, f_Y(y) \, dy = \int_{-\infty}^{\infty} \frac{\int_{-\infty}^{\infty} x \, f_{XY}(x, y) \, dx}{f_Y(y)} \, f_Y(y) \, dy
                 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x \, f_{XY}(x, y) \, dx \, dy = E\{X\}    (21)
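The tower property (21) can be illustrated with a simple hierarchical model (the model itself, Y ∼ N(3, 1) with X | Y ∼ N(Y, 2²), is an illustrative assumption): here E{X|Y} = Y, so averaging Y should match the direct mean of X.

```python
import random

random.seed(6)
N = 200_000

pairs = []
for _ in range(N):
    y = random.gauss(3.0, 1.0)   # Y ~ N(3, 1)
    x = random.gauss(y, 2.0)     # X | Y ~ N(Y, 4)
    pairs.append((x, y))

ex = sum(x for x, _ in pairs) / N  # direct estimate of E{X}
ee = sum(y for _, y in pairs) / N  # E{X|Y} = Y, so this estimates E{E{X|Y}}
print(ex, ee)  # both near 3.0
```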


Statistical Moments

Definition 0.1 Moment of order k of the r.v. X:

    m_k = E\{X^k\} = \overline{X^k} = \int_{-\infty}^{\infty} x^k \, f_X(x) \, dx.    (22)

and the central moments of order k by

    \mu_k = E\{(X - \overline{X})^k\} = \overline{(X - \overline{X})^k} = \int_{-\infty}^{\infty} (x - \overline{X})^k \, f_X(x) \, dx.    (23)


⇒ m₀ = 1;  m₁ = E{X};  µ₀ = 1;  µ₁ = 0

Variance of the r.v. X: µ₂ = σₓ²;  standard deviation: σₓ

Relation between moments and central moments:

    \mu_k = \sum_{r=0}^{k} \binom{k}{r} (-1)^r \, m_1^r \, m_{k-r}    (24)

Definition 0.2 Moment of order r associated with the random vector X:
Given n r.v. X₁, ..., Xₙ, their moments of order r = k₁ + k₂ + ··· + kₙ are defined by:

    m_{k_1 \cdots k_n} = E\{X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n}\}    (25)

and their central moments of order r by:

    \mu_{k_1 \cdots k_n} = E\left\{ (X_1 - \overline{X}_1)^{k_1} \cdot (X_2 - \overline{X}_2)^{k_2} \cdot \ldots \cdot (X_n - \overline{X}_n)^{k_n} \right\}    (26)
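Relation (24) can be checked on sample moments, since the same binomial identity holds exactly for empirical averages; the Gaussian sample below is an illustrative choice. For k = 2 the relation collapses to the familiar µ₂ = m₂ − m₁².

```python
import random
import math

random.seed(7)
N = 200_000
xs = [random.gauss(1.0, 2.0) for _ in range(N)]

def raw_moment(k):
    """Empirical raw moment m_k."""
    return sum(x ** k for x in xs) / N

m1 = raw_moment(1)

def central_from_raw(k):
    """Central moment via Eq. (24): sum_r C(k,r) (-1)^r m1^r m_{k-r}."""
    return sum(math.comb(k, r) * (-1) ** r * m1 ** r * raw_moment(k - r)
               for r in range(k + 1))

def central_direct(k):
    """Direct empirical central moment."""
    return sum((x - m1) ** k for x in xs) / N

print(central_from_raw(2), central_direct(2))  # both near sigma^2 = 4
```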


Second-Order Moment of Two R.V.: Covariance of X and Y

    \mu_{11} = E\left\{ (X - \overline{X}) \cdot (Y - \overline{Y}) \right\} = \sigma_{XY}    (27)

It is easily shown that

    \sigma_{XY} = E\{X \cdot Y\} - E\{X\} \cdot E\{Y\}    (28)

Normalized Covariance, or Correlation Coefficient, of X, Y:

    \rho = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} = \frac{E\{X \cdot Y\} - E\{X\} \cdot E\{Y\}}{\sqrt{E\{(X - \overline{X})^2\} \cdot E\{(Y - \overline{Y})^2\}}}    (29)
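Definitions (27) and (28) can be compared on simulated data; the correlated pair below (Y = X plus independent noise, so Cov[X, Y] = Var(X) = 1) is an illustrative construction.

```python
import random

random.seed(8)
N = 100_000

xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [x + random.gauss(0.0, 1.0) for x in xs]  # Y = X + noise

mx = sum(xs) / N
my = sum(ys) / N
# Definition (27): E{(X - mean_X)(Y - mean_Y)}
cov_def = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N
# Shortcut (28): E{XY} - E{X}E{Y}
cov_short = sum(x * y for x, y in zip(xs, ys)) / N - mx * my
print(cov_def, cov_short)  # both near Cov[X,Y] = Var(X) = 1
```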


The correlation coefficient indicates, on average, how similar X is to Y. For example, if X = Y ⇒ ρ = +1; if X = −Y ⇒ ρ = −1; if X is independent of Y ⇒ ρ = 0.

Properties

I. the magnitude of the correlation coefficient always satisfies |ρ| ≤ 1;

II. |ρ| = 1 when Y = aX + b (linear combination), with

    a = \frac{\sigma_{XY}}{\sigma_X^2} \qquad\text{and}\qquad b = E\{Y\} - \frac{\sigma_{XY}}{\sigma_X^2} E\{X\}


Proof

I. Consider the nonnegative expression:

    E\left\{ \left[ \lambda(X - \overline{X}) - (Y - \overline{Y}) \right]^2 \right\} \ge 0, \qquad \lambda \in \mathbb{R}

which can be rewritten as:

    C(\lambda) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left[ \lambda(x - \overline{X}) - (y - \overline{Y}) \right]^2 f_{XY}(x, y) \, dx \, dy \ge 0

where the inequality follows from the fact that the integral of a nonnegative quantity cannot be negative. C(λ) is quadratic in λ:

    C(\lambda) = \lambda^2 E\{(X - \overline{X})^2\} + E\{(Y - \overline{Y})^2\} - 2\lambda E\{(X - \overline{X})(Y - \overline{Y})\} \ge 0
              = \lambda^2 \sigma_x^2 - 2\lambda \sigma_{xy} + \sigma_y^2 \ge 0, \qquad \lambda \in \mathbb{R}


Thus, given that σₓ² > 0 (always) and C(λ) ≥ 0 (an upward-opening parabola with at most one real root), the quadratic in λ must have a discriminant satisfying Δ ≤ 0:

    \Delta = 4\sigma_{xy}^2 - 4\sigma_x^2 \sigma_y^2 \le 0 \;\Rightarrow\; \sigma_{xy}^2 \le \sigma_x^2 \sigma_y^2 \;\therefore\; |\rho| \le 1

II. Case |ρ| = 1. Consider again the nonnegative expression above,

    E\left\{ \left[ \lambda(X - \overline{X}) - (Y - \overline{Y}) \right]^2 \right\} \ge 0, \quad\text{with}\quad \lambda = \frac{\mu_{11}}{m_{20}} = \frac{\sigma_{XY}}{\sigma_X^2},

then:

    E\left\{ \left[ \lambda(X - \overline{X}) - (Y - \overline{Y}) \right]^2 \right\} = 0    (30)

or

    E\left\{ \left[ \frac{\sigma_{XY}}{\sigma_X^2} X + \left( \overline{Y} - \frac{\sigma_{XY}}{\sigma_X^2} \overline{X} \right) - Y \right]^2 \right\} = 0    (31)

or equivalently

    C(\lambda) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left[ \frac{\mu_{11}}{m_{20}} (x - \overline{X}) - (y - \overline{Y}) \right]^2 f_{XY}(x, y) \, dx \, dy = 0
Since f_XY(x, y) is never negative, the expression above for C(λ) implies that the term in brackets is always zero:

    \frac{\mu_{11}}{m_{20}} (X - \overline{X}) - (Y - \overline{Y}) = 0

Conclusion: from eq. (30), when |ρ| = 1, Y is a linear combination of X, i.e., Y = aX + b. Applying this to (31) finally yields:

    Y = \frac{\sigma_{XY}}{\sigma_X^2} X + \left( \overline{Y} - \frac{\sigma_{XY}}{\sigma_X^2} \overline{X} \right) = \frac{\mu_{11}}{m_{20}} X + m_y - \frac{\mu_{11}}{m_{20}} m_x


Example – Covariance Definition. Show that the covariance between the r.v. X and Y is given by

    \sigma_{XY} = E\{X \cdot Y\} - E\{X\} \cdot E\{Y\}

Expanding definition (27):

    \mu_{11} = \sigma_{XY} = E\left\{ (X - \overline{X}) \cdot (Y - \overline{Y}) \right\}
             = E[X \cdot Y] - m_y E[X] - m_x E[Y] + m_x m_y
             = E[X \cdot Y] - m_x m_y


Example – Transformation of R.V.

Let U, W₁ and W₂ be independent zero-mean r.v. In addition, W₁ and W₂ have the same variance: σ²_{W₁} = σ²_{W₂} = σ². Consider the transformation:

    X = U + W_1
    Y = W_2 - U, \quad\text{with } U \text{ a signal}

1. find the correlation coefficient between X and Y. Express it as a function of the signal-to-noise ratio of U, i.e., SNR ≜ E[U²]/σ²;
2. interpret ρ_XY when SNR → ∞.

From (29), one obtains:

    \rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} = \frac{E\{X \cdot Y\} - E\{X\} \cdot E\{Y\}}{\sqrt{E\{(X - \overline{X})^2\} \cdot E\{(Y - \overline{Y})^2\}}}

Computing the terms that make up the expression above:

a)  E\{X \cdot Y\} = E\{(U + W_1)(W_2 - U)\}
                   = E\{-U^2\} + E\{U W_2\} - E\{U W_1\} + E\{W_1 W_2\}
                   = -E\{U^2\}


b)  E\{(X - \overline{X})^2\} = E\{X^2\} = E\{(W_1 + U)^2\} = E\{W_1^2\} + E\{U^2\}

c)  E\{(Y - \overline{Y})^2\} = E\{Y^2\} = E\{(W_2 - U)^2\} = E\{W_2^2\} + E\{U^2\}

Since W₁ and W₂ have the same variance, σ₁² = σ₂² = σ², then

d)  \sigma_{W_2}^2 = \sigma_{W_1}^2 = E\{W_1^2\} - \overline{W}_1^2 = E\{W_1^2\} = \sigma^2

    \therefore \; \rho_{XY} = \frac{-E\{U^2\}}{\sqrt{(\sigma_{W_1}^2 + E\{U^2\}) \cdot (\sigma_{W_2}^2 + E\{U^2\})}} = \frac{-E\{U^2\}}{\sigma^2 + E\{U^2\}}

Rewriting the previous expression in terms of the signal-to-noise ratio, SNR = P_U/σ² = E{U²}/σ², yields:

    \rho_{XY} = \frac{-E\{U^2\}}{\sigma^2 + E\{U^2\}} = \frac{-SNR}{1 + SNR}

    \lim_{SNR \to \infty} \rho_{XY} = \lim_{\sigma^2 \to 0} \rho_{XY} = -1

that is, X and Y become linearly dependent. Indeed, if σ² → 0, then W₁ = W₂ = 0 and therefore X = U and Y = −U.
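The closed form ρ_XY = −SNR/(1 + SNR) can be checked by simulation; taking U, W₁, W₂ all Gaussian is an illustrative assumption (the derivation above only uses means and variances), and SNR = 4 is an arbitrary operating point.

```python
import random

random.seed(10)
N = 200_000
sigma = 1.0
snr = 4.0               # E{U^2} / sigma^2
pu = snr * sigma ** 2   # signal power E{U^2}

sxy, x2, y2 = 0.0, 0.0, 0.0
for _ in range(N):
    u = random.gauss(0.0, pu ** 0.5)   # zero-mean signal
    w1 = random.gauss(0.0, sigma)      # noise in X
    w2 = random.gauss(0.0, sigma)      # noise in Y
    x, y = u + w1, w2 - u
    sxy += x * y
    x2 += x * x
    y2 += y * y

# All variables are zero-mean, so rho = E{XY} / sqrt(E{X^2} E{Y^2})
rho = (sxy / N) / ((x2 / N) ** 0.5 * (y2 / N) ** 0.5)
print(rho)  # near -SNR/(1+SNR) = -0.8
```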
Example – Covariance between two RVs.

Let the r.v. X and Y be functionally related by

    Y = \cos X

and let the PDF of X be given by

    f_X(x) = \frac{1}{2\pi}, \; -\pi < x < \pi; \qquad 0, \text{ otherwise.}

Find Cov[X, Y].

Since E[X] = 0, from (27):

    \mathrm{cov}[X, Y] = \mu_{11} = \sigma_{XY} = E\left[ (X - E[X]) \cdot (Y - E[Y]) \right]
                       = E[XY] - E[X]E[Y] = E[XY]

with

    E[XY] = \int_{-\pi}^{\pi} f_X(x) \, x \cos x \, dx = \frac{1}{2\pi} \int_{-\pi}^{\pi} x \cos x \, dx = 0

∴ cov[X, Y] = 0 ⇒ X and Y are uncorrelated;
but X and Y are NOT independent.
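A simulation makes the distinction concrete: the sample covariance of X and cos X is near zero, yet conditioning on X visibly changes the distribution of Y. The conditional check (mean of Y given |X| < π/2 equals 2/π ≈ 0.637) is an added illustration, not part of the original slide.

```python
import random
import math

random.seed(11)
N = 200_000

xs = [random.uniform(-math.pi, math.pi) for _ in range(N)]
ys = [math.cos(x) for x in xs]

mx = sum(xs) / N
my = sum(ys) / N
cov = sum(x * y for x, y in zip(xs, ys)) / N - mx * my
print(cov)  # near 0: uncorrelated

# Dependence: knowing X changes the distribution of Y = cos X.
sel = [y for x, y in zip(xs, ys) if abs(x) < math.pi / 2]
cond_mean = sum(sel) / len(sel)
print(cond_mean)  # near 2/pi ~ 0.637, while the unconditional mean of Y is near 0
```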


Covariance Matrix of the Real Random Vector X

    C_X = E\left\{ (X - \overline{X})(X - \overline{X})^T \right\}    (32)
        = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} (x - \overline{X})(x - \overline{X})^T f_X(x) \, dx
        = \begin{bmatrix}
            \sigma_1^2  & \sigma_{12} & \cdots & \sigma_{1n} \\
            \sigma_{21} & \sigma_2^2  & \cdots & \sigma_{2n} \\
            \vdots      & \vdots      & \ddots & \vdots      \\
            \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2
          \end{bmatrix}

Note that the element in the i-th row, j-th column of this matrix is

    c_{i,j} = \sigma_{x_i, x_j} = \sigma_{x_j, x_i} = c_{j,i}    (symmetry)

i.e., it represents the covariance of the r.v. Xᵢ with Xⱼ.
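A small sketch of (32) with an empirical covariance matrix; the 3-dimensional vector construction below (components built from shared Gaussian factors, so the off-diagonal entries are nonzero) is an illustrative assumption.

```python
import random

random.seed(12)
N = 50_000
n = 3

# Random vector with correlated components: X = (Z1, Z1+Z2, Z2-Z3)
data = []
for _ in range(N):
    z1, z2, z3 = (random.gauss(0, 1) for _ in range(3))
    data.append([z1, z1 + z2, z2 - z3])

means = [sum(row[i] for row in data) / N for i in range(n)]

# C[i][j] estimates E{(X_i - mean_i)(X_j - mean_j)}
C = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / N
      for j in range(n)] for i in range(n)]

print(C[0][1], C[1][0])  # equal by symmetry; near Cov(Z1, Z1+Z2) = 1
```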
Independence, Correlation and Orthogonality

• Two random variables X and Y are Uncorrelated, Fig. 3, if:

    E\{X \cdot Y\} = E\{X\} \cdot E\{Y\}, \quad\text{that is,}\quad \sigma_{xy} = 0    (33)

• They are said to be Orthogonal (inner product = 0) if:

    E\{X \cdot Y\} = 0    (34)

• If X and Y are Independent ⇒ their joint pdf is factorable,
  ∴ they are also Uncorrelated:

    E\{X \cdot Y\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x \cdot y \cdot f_{XY}(x, y) \, dx \, dy \quad\text{(if independent)}
                   = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x \cdot y \cdot f_X(x) \cdot f_Y(y) \, dx \, dy
                   = \int_{-\infty}^{\infty} x \cdot f_X(x) \, dx \cdot \int_{-\infty}^{\infty} y \cdot f_Y(y) \, dy
                   = E\{X\} \cdot E\{Y\}
[Figure 3: Graphical example of (un)correlated r.v. — left panel: correlated X and Y; right panel: uncorrelated X and Y.]


Consequences:

1. if X and Y are uncorrelated, then their covariance and correlation coefficient are zero (see the definition of the normalized correlation index, eq. (29));

2. if X and Y are uncorrelated and the mean of at least one of them is zero, then they are orthogonal. If X and Y are uncorrelated ⇒ (33) implies

    E\{X \cdot Y\} = E\{X\} \cdot E\{Y\}

and assuming mₓ = 0 or m_y = 0 gives

    E\{X \cdot Y\} = E\{X\} \cdot E\{Y\} = 0

3. if X and Y are uncorrelated and mₓ = 0 or m_y = 0, then

    E\{(X + Y)^2\} = E\{X^2\} + E\{Y^2\}
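Consequence 3 can be checked numerically; in the sketch below X and Y are drawn independently (hence uncorrelated), with X zero-mean, and the specific Gaussians are illustrative choices.

```python
import random

random.seed(13)
N = 200_000

xs = [random.gauss(0.0, 1.0) for _ in range(N)]  # zero-mean
ys = [random.gauss(2.0, 1.0) for _ in range(N)]  # independent of X, mean 2

lhs = sum((x + y) ** 2 for x, y in zip(xs, ys)) / N        # E{(X+Y)^2}
rhs = sum(x * x for x in xs) / N + sum(y * y for y in ys) / N  # E{X^2} + E{Y^2}
print(lhs, rhs)  # both near 1 + (4 + 1) = 6
```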


Inequalities: An Application of Moments

A. Chebyshev's Inequality

Let X be a r.v. with probability density function f(x). Assume the variance and the mean are known and have finite values: σ² and µ = E{X}, respectively. Qualitatively, the smaller the variance, the less likely the occurrence of large deviations around the mean. Chebyshev's bound for this probability:

    P(|X| \ge \epsilon) \le \frac{E[X^2]}{\epsilon^2}, \quad\text{or}\quad P(|X - \mu| \ge \epsilon) \le \frac{\sigma^2}{\epsilon^2}, \qquad \forall \epsilon > 0    (35)

⇒ Choosing ε = k·σ, the inequality above becomes:

    P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}    (36)
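A quick empirical look at (36); the Gaussian distribution is an illustrative choice, for which the bound is far from tight (the true two-sided tail at k = 2 is ≈ 0.0455, versus the bound 1/k² = 0.25).

```python
import random

random.seed(14)
N = 200_000
mu, sigma, k = 0.0, 1.0, 2.0

xs = [random.gauss(mu, sigma) for _ in range(N)]
freq = sum(1 for x in xs if abs(x - mu) >= k * sigma) / N  # empirical P(|X-mu| >= k*sigma)

bound = 1 / k ** 2  # Eq. (36)
print(freq, bound)  # freq ~ 0.0455 for a Gaussian, always <= 0.25
```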
Taking k = ε/σ in (36), for any shape of the PDF f_X(x), the probability that X takes values in the interval [µ − ε, µ + ε], i.e., centered at its mean, is:

    1 - P(|X - \mu| \le \epsilon) \le \frac{\sigma^2}{\epsilon^2} \;\Rightarrow\; P(|X - \mu| \le \epsilon) \ge 1 - \frac{\sigma^2}{\epsilon^2},

    \therefore \; P(\mu - \epsilon \le X \le \mu + \epsilon) \ge 1 - \left( \frac{\sigma}{\epsilon} \right)^2    (37)

which is close to 1 for σ ≪ ε.

Proof:

    \sigma^2 \triangleq \int_{-\infty}^{\infty} (x - \overline{X})^2 \cdot f_X(x) \, dx \ge \int_{|x - \overline{X}| \ge k\sigma} (x - \overline{X})^2 \cdot f_X(x) \, dx
             \ge (k\sigma)^2 \int_{|x - \overline{X}| \ge k\sigma} f_X(x) \, dx = k^2 \sigma^2 \, P(|X - \mu| \ge k\sigma)


B. Generalization: Markov's Inequality

Given a random variable Y such that f_Y(y) = 0 for y < 0, it follows, for α > 0:

    P(Y \ge \alpha) \le \frac{E\{Y\}}{\alpha}    (38)

In contrast with Chebyshev's bound, the bound above involves only the expected value of Y.

Proof:

    E\{Y\} = \int_0^{\infty} y \cdot f_Y(y) \, dy \ge \int_{\alpha}^{\infty} y \cdot f_Y(y) \, dy \ge \alpha \int_{\alpha}^{\infty} f_Y(y) \, dy = \alpha \, P(Y \ge \alpha)
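An empirical check of (38) for a nonnegative r.v.; the unit-mean exponential distribution is an illustrative choice (its exact tail is P(Y ≥ α) = e^{−α}).

```python
import random

random.seed(15)
N = 200_000
alpha = 3.0

ys = [random.expovariate(1.0) for _ in range(N)]  # Exp(1): nonnegative, mean 1
mean_y = sum(ys) / N

freq = sum(1 for y in ys if y >= alpha) / N  # empirical P(Y >= alpha)
bound = mean_y / alpha                       # Eq. (38)
print(freq, bound)  # exact tail e^{-3} ~ 0.0498, bound ~ 0.333
```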


C. Chernoff Bound

If information beyond the mean and variance is available
⇒ it is possible to obtain tighter bounds than the Chebyshev and Markov inequalities.

Consider Markov's inequality again:
◦ the region of interest is the set A = {Y ≥ α};
◦ indicator function

    I_a(y) = \begin{cases} 1, & \text{if } y \in A \\ 0, & \text{otherwise} \end{cases}

Key point in obtaining the Chernoff Bound:
◦ y/α ≥ 1 in the region of interest;
◦ the indicator function is bounded by I_a(y) ≤ y/α, Fig. 4


[Figure 4: Bounds on the indicator function I_a(y) for A = {Y ≥ α}: both y/α and e^{s(y−α)} lie above I_a(y).]

Thus:

    P(Y \ge \alpha) = \int_0^{\infty} I_a(y) \cdot f_Y(y) \, dy \le \int_0^{\infty} \frac{y}{\alpha} \cdot f_Y(y) \, dy = \frac{E\{Y\}}{\alpha}

By changing the upper bound on the indicator function I_a(y)
⇒ different bounds on P(Y ≥ α) are obtained:


70 Introduction to discrete random variables

0.14

0.12

0.10

0.08

0.06

◦ Consider o bound, Fig. 4


0.04

0.02

0 k
0 10 20 30 40 50 60 70 80

Figure 2.5. The Poisson(λ ) pmf pX (k) = λ k e−λ /k! for λ = 10, 30, and 50 from left to right, respectively.

Solution. Let X denote the number of hits. Then


−λ −2
P(X ≥ 1) = 1 − P(X = 0) = 1 − e = 1−e ≈ 0.865.
Similarly,
P(X ≥ 2) = 1 − P(X = 0) − P(X = 1)
= 1 − e−λ − λ e−λ

Ia(y) ≤ es(y−α),
= 1 − e−λ (1 + λ )
= 1 − e−2 (1 + 2) ≈ 0.594.

s>0 2.3 Multiple random variables


If X and Y are random variables, we use the shorthand

which is equal to
{X ∈ B,Y ∈ C} := {ω ∈ Ω : X (ω ) ∈ B and Y (ω ) ∈ C},

{ω ∈ Ω : X (ω ) ∈ B} ∩ {ω ∈ Ω : Y (ω ) ∈ C}.
Putting all of our shorthand together, we can write
{X ∈ B,Y ∈ C} = {X ∈ B} ∩ {Y ∈ C}.
We also have
P(X ∈ B,Y ∈ C) := P({X ∈ B,Y ∈ C})
= P({X ∈ B} ∩ {Y ∈ C}).

 Chernoff Bound:

    Pr(Y ≥ α) = ∫_0^∞ Ia(y)·fY(y) dy ≤ ∫_0^∞ e^{s(y−α)}·fY(y) dy
              = e^{−sα} ∫_0^∞ e^{sy}·fY(y) dy = e^{−sα}·E{e^{sY}}     (39)

 it depends on the expected value of an exponential function of Y
(= moment generating function)

    Pr(Y ≥ α) ≤ e^{−sα}·E{e^{sY}} = e^{−sα}·Φy(s)     (40)
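• The behavior of (40) can be illustrated numerically (an illustrative sketch, assuming Y ∼ Exp(1), for which Φy(s) = 1/(1 − s) for s < 1; the bound is then minimized over s):

```python
import math

alpha = 4.0
phi = lambda s: 1.0 / (1.0 - s)  # MGF of Exp(1), valid for s < 1

# scan s over (0, 1) and keep the tightest Chernoff bound e^{-s*alpha} * phi(s)
best = min(math.exp(-s * alpha) * phi(s)
           for s in [i / 1000 for i in range(1, 1000)])

exact = math.exp(-alpha)           # Pr(Y >= alpha) for Exp(1)
analytic = alpha * math.e * exact  # bound value at the optimal s* = 1 - 1/alpha

print(best, exact)
```

At the optimum s* = 1 − 1/α the bound equals α·e·e^{−α}, always above the exact tail e^{−α}: the Chernoff bound is a bound, not an equality, but it decays at the correct exponential rate.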


Moment Generating Function

Definition

The moment generating function of a random variable X is defined by

    Φx(s) ≜ E{e^{sX}} = ∫_{−∞}^{∞} e^{sx}·fX(x) dx     (41)

where s is a complex variable. The moment generating function of a discrete r.v. is defined from the PMF:

    Φx(s) = Σ_i e^{s·xi}·PX(xi)     (42)

• From eq. (41), except for the reversed sign in the exponent, the moment generating function is equivalent to the bilateral Laplace transform, for which an inversion formula exists:

– In general, knowing Φx(s) is equivalent to knowing the corresponding pdf fX(x), and vice versa.

 Reasons for introducing the generating function Φx(s):
– it provides a route for computing the moments of X;
– it can be used to estimate fX(x) from experimental measurements of the moments;
– it can be used to solve problems involving the computation of sums of r.v.;
– it is an analytical tool that can be used to prove basic results, such as the Central Limit Theorem.

Expanding e^{sX} (Taylor series) and taking expectations:

    E{e^{sX}} = E{ 1 + sX + (sX)²/2! + ... + (sX)ⁿ/n! + ... }
              = 1 + s·m1 + (s²/2!)·m2 + ... + (sⁿ/n!)·mn + ...

where mn is the n-th statistical moment of the r.v. X, eq. (22).

◦ Since the moments mn may not exist¹, in such cases the moment generating function Φx(s) will not exist.
◦ However, if Φx(s) exists, computing any of the statistical moments is easily done by differentiation:

    Φx^{(k)}(0) ≜ (d^k/ds^k) Φx(s) |_{s=0}  ⇒  mk = Φx^{(k)}(0),   k = 0, 1, ...     (43)

¹ For example, no moments exist for the Cauchy distribution.
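A small sketch of (43) (my own illustration, not from the slides; a hypothetical X ∼ Poisson(λ) with Φx(s) = exp(λ(e^s − 1)) is assumed): central finite differences of Φx at s = 0 reproduce m1 = λ and m2 = λ + λ².

```python
import math

lam = 2.0
phi = lambda s: math.exp(lam * (math.exp(s) - 1.0))  # MGF of Poisson(lam)

h = 1e-4
m1 = (phi(h) - phi(-h)) / (2 * h)               # ≈ Φ'(0)
m2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2   # ≈ Φ''(0)

print(m1, m2)  # expected: λ = 2 and λ + λ² = 6
```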

Example – Generating Function and Statistical Moments of a Gaussian r.v.
Let X ∼ N(µ, σ). Obtain the first and second statistical moments of X.

From (41), the moment generating function is:

    Φx(s) = E{e^{sX}} = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{sx} · e^{−(x−µ)²/2σ²} dx

Completing the square in the exponent gives:

    Φx(s) = e^{µs+(σs)²/2} × (1/(σ√(2π))) ∫_{−∞}^{∞} exp{ −(1/2σ²)·[x − (µ + σ²s)]² } dx

The integral equals 1, since it is the integral of a Gaussian pdf with mean µ + σ²s and variance σ². Therefore:

    Φx(s) = exp( µs + σ²s²/2 )

And finally from (43):

    m1 = Φx^{(1)}(0) = µ,     m2 = Φx^{(2)}(0) = µ² + σ²
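The closed form above can be spot-checked by Monte Carlo simulation (an illustrative sketch of my own; the values µ = 1, σ = 2, s = 0.5 are arbitrary choices, not from the slides):

```python
import math
import random

mu, sigma, s = 1.0, 2.0, 0.5
random.seed(0)

N = 200_000
# sample estimate of E{e^{sX}} for X ~ N(mu, sigma^2)
mc = sum(math.exp(s * random.gauss(mu, sigma)) for _ in range(N)) / N
closed = math.exp(mu * s + (sigma * s) ** 2 / 2.0)  # Φx(s)

print(mc, closed)  # closed ≈ e ≈ 2.718
```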

Characteristic Function

 The characteristic function is invariably used in determining the pdf of the sum of independent r.v.
 It also proves useful in determining the moments of r.v.

 The characteristic function of a random variable X is defined from the moment generating function (41) by substituting the parameter s → jω = √(−1)·ω:

    Mx(ω) ≜ E{e^{jωX}} = ∫_{−∞}^{∞} fX(x)·e^{jωx} dx     (44)

which is, except for the negative sign (−) in the exponent, the Fourier transform of the probability density function of the r.v. X.

 Note that Mx(0) = 1 and that |Mx(ω)| ≤ 1.


Analogously to eq. (42), the characteristic function of a discrete r.v. is obtained as:

    Mx(ω) = Σ_i e^{jω·xi}·PX(xi)     (45)

1. The characteristic function Mx(ω) has the same properties as the generating function Φx(s).
2. The Fourier transform is widely used, and since the inversion formula

    fX(x) ≜ (1/2π) ∫_{−∞}^{∞} Mx(ω)·e^{−jωx} dω     (46)

is easily evaluated, either by direct integration (Fourier integral) or by using tables of Fourier transform pairs ⇒ the characteristic function Mx(ω) is widely employed in computing sums of independent r.v.
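A tiny numerical sketch of the inversion formula (46) (my own illustration, assuming X ∼ N(0, 1) so that Mx(ω) = e^{−ω²/2}): evaluating the integral by the trapezoidal rule recovers the Gaussian pdf values.

```python
import cmath
import math

Mx = lambda w: math.exp(-w * w / 2.0)  # characteristic function of N(0, 1)

def f_inv(x, W=12.0, n=24000):
    # trapezoidal evaluation of (1/2π) ∫ Mx(ω) e^{-jωx} dω over [-W, W]
    h = 2 * W / n
    acc = 0.5 * (Mx(-W) * cmath.exp(1j * W * x) + Mx(W) * cmath.exp(-1j * W * x))
    for i in range(1, n):
        w = -W + i * h
        acc += Mx(w) * cmath.exp(-1j * w * x)
    return (acc * h / (2 * math.pi)).real

print(f_inv(0.0))  # ≈ 1/√(2π) ≈ 0.39894
print(f_inv(1.0))  # ≈ e^{-1/2}/√(2π) ≈ 0.24197
```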
3. The sum of n independent r.v., X1 + X2 + ... + Xn, involves the (recurrent) convolution of their respective pdfs. Thus, if

    Z = X1 + X2 + ... + Xn  ⇒  fZ(z) = fX1(z) ∗ fX2(z) ∗ ... ∗ fXn(z)     (47)

i.e., the convolution operator ∗ is applied recurrently.
• Since computing (47) can be tedious, one may instead employ the Fourier convolution theorem:
the FT of two or more convolved signals is equal to the product of the signals in the transform domain:

    Time-domain convolution:       v1(t) ∗ v2(t) = ∫_{−∞}^{∞} v1(τ)·v2(t − τ) dτ  ↔  V1(f)·V2(f)
    Frequency-domain convolution:  v1(t)·v2(t)  ↔  V1 ∗ V2 = ∫_{−∞}^{∞} V1(λ)·V2(f − λ) dλ

Example – Characteristic Function of a Gaussian r.v. Obtain the characteristic function of the r.v. X ∼ N(µ, σ).

From the previous Example, the function Mx(ω) is obtained directly by substituting s → jω:

    Mx(ω) = exp( jωµ − σ²ω²/2 )
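Direct numerical integration of (44) provides a quick check of this result (an illustrative sketch of mine; µ = 1, σ = 2 and ω = 0.7 are arbitrary values):

```python
import cmath
import math

mu, sigma, w = 1.0, 2.0, 0.7
fX = lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# trapezoidal evaluation of ∫ fX(x) e^{jωx} dx over a wide interval
a, b, n = mu - 12 * sigma, mu + 12 * sigma, 40000
h = (b - a) / n
acc = 0.5 * (fX(a) * cmath.exp(1j * w * a) + fX(b) * cmath.exp(1j * w * b))
for i in range(1, n):
    x = a + i * h
    acc += fX(x) * cmath.exp(1j * w * x)
numeric = acc * h

closed = cmath.exp(1j * w * mu - (sigma * w) ** 2 / 2.0)  # exp(jωµ − σ²ω²/2)
print(numeric, closed)
```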

Example – Characteristic Function of a Sum of r.v. Given a r.v. Z = X + Y, where X and Y are two r.v. and fZ(z), fX(x), fY(y) are the respective pdfs, show that the characteristic function of Z can be obtained simply as Mz(ω) = Mx(ω)·My(ω) when X and Y are independent.

From the result of Example 26 in U2, the pdf of the r.v. Z was obtained in terms of the joint pdfs of X, Y:

    fZ(z) = ∫_{−∞}^{∞} fZW(z, w) dw = ∫_{−∞}^{∞} fXY(z − w, w) dw
          = ∫_{−∞}^{∞} fX(z − w)·fY(w) dw = fX(z) ∗ fY(z)

where the last equality holds only if X and Y are independent; ∗ denotes the convolution operator.
Therefore, from (44):

    Mz(ω) ≜ E{e^{jωZ}} = ∫_{−∞}^{∞} fZ(z)·e^{jωz} dz
          = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} fX(z − p)·fY(p) dp ]·e^{jωz} dz
          = ∫_{−∞}^{∞} fY(p) dp · ∫_{−∞}^{∞} fX(z − p)·e^{jωz} dz
With the change of variable α = z − p, one immediately obtains:

    Mz(ω) = ∫_{−∞}^{∞} fY(p)·e^{jωp} dp · ∫_{−∞}^{∞} fX(α)·e^{jωα} dα
          = My(ω) · Mx(ω)
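The product rule Mz(ω) = Mx(ω)·My(ω) can be spot-checked by simulation (a sketch of mine, not from the slides; independent X ∼ U[0, 1] and Y ∼ Exp(1) are arbitrary choices, with their standard characteristic functions used as the closed form):

```python
import cmath
import random

random.seed(1)
w = 1.3
N = 100_000
xs = [random.random() for _ in range(N)]          # X ~ U[0, 1]
ys = [random.expovariate(1.0) for _ in range(N)]  # Y ~ Exp(1), independent of X

# empirical characteristic function of Z = X + Y
Mz = sum(cmath.exp(1j * w * (x + y)) for x, y in zip(xs, ys)) / N

# closed-form factors: U[0,1] -> (e^{jω} - 1)/(jω),  Exp(1) -> 1/(1 - jω)
Mx = (cmath.exp(1j * w) - 1.0) / (1j * w)
My = 1.0 / (1.0 - 1j * w)

print(abs(Mz - Mx * My))  # small: product rule holds within sampling error
```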

Example – PDF and Characteristic Function of a Sum of Gaussian r.v. Let Xi, i = 1, ..., n, be a random sequence of independent and identically distributed (i.i.d.) r.v., characterized by X ∼ N(0, 1). Obtain the pdf of

    Z = Σ_{i=1}^{n} Xi
The pdf of Z is given by (47) and is related to the characteristic function through the Fourier convolution theorem:

    fZ(z) = fX1(z) ∗ fX2(z) ∗ ... ∗ fXn(z)
      ⇕
    Mz(ω) = Mx1(ω) × Mx2(ω) × ... × Mxn(ω)

Additionally, since the n r.v. Xi are i.i.d. N(0, 1), the characteristic functions of all the r.v. are identical:

    Mx(ω) ≜ Mx1(ω) = Mx2(ω) = ... = Mxn(ω)

where:

    Mx(ω) = (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2}·e^{jωx} dx

Completing the square in the exponent above:

    Mx(ω) = (1/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)[x² − 2jωx + (jω)² − (jω)²]} dx
          = e^{−ω²/2} × (1/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)(x − jω)²} dx     (Gaussian pdf integrand)

The integral of the Gaussian pdf, with mean jω, over (−∞, ∞) equals 1. Therefore,

    Mx(ω) = e^{−ω²/2}

and the corresponding characteristic function of Z is:

    Mz(ω) = [Mx(ω)]ⁿ = e^{−nω²/2}

Inspecting Mz(ω), one deduces that fZ(z) must be Gaussian. Using the inversion formula (46):

    fZ(z) = (1/2π) ∫_{−∞}^{∞} MZ(ω)·e^{−jωz} dω = (1/2π) ∫_{−∞}^{∞} e^{−nω²/2}·e^{−jωz} dω

Manipulating the terms in the exponent of fZ(z) = (1/2π) ∫_{−∞}^{∞} e^{−( (n/2)ω² + jzω )} dω, one finally obtains:

    fZ(z) = (1/√(2πn))·e^{−z²/2n}

Indeed, the pdf of Z is Gaussian with zero mean and variance n.

 Example – Statistical Moments of Y = sin θ. Compute the first moments of the r.v. Y = sin θ, for θ ∼ U[0, 2π] ...

Central Limit Theorem (CLT)

• It is sometimes said that the sum of a large number of r.v. tends to a normal distribution.


• Under what conditions is this true?
– The Central Limit Theorem is suited to answering this question.
• The Central Limit Theorem states that the normalized sum of a large number of mutually independent r.v., X1, ..., Xn, with zero mean and finite variances σ1², ..., σn², tends to a Normal distribution function if the individual variances σk², k = 1, ..., n, are small compared with their sum Σ_{i=1}^{n} σi².
– the restrictions on the variances are known as the Lindeberg conditions.

Statement of the Central Limit Theorem

Assuming that the r.v. Xi are of the continuous type and independent, with E{Xi} = mi and σ²_Xi = σi², form the normalized sum r.v.:

    X ≜ (X1 + X2 + ... + Xn)/√n = (1/√n) Σ_{i=1}^{n} Xi

whose mean and variance are given by:

    m = (m1 + m2 + ... + mn)/√n   and   σ² = (σ1² + σ2² + ... + σn²)/n

with probability density function and characteristic function:

    fX(x) = fX1(√n·x) ∗ fX2(√n·x) ∗ ... ∗ fXn(√n·x)
      ⇕
    Mx(ω) = Mx1(ω/√n) × Mx2(ω/√n) × ... × Mxn(ω/√n)
The central limit theorem says that, under certain general conditions, fX(x) approaches a Normal pdf as n grows:

    lim_{n→∞} fX(x) = (1/(σ√(2π)))·e^{−(x−m)²/2σ²}     (48)

provided the following conditions are satisfied:

a) σ1² + σ2² + ... + σn² → ∞
b) for some α > 2, ∫_{−∞}^{∞} |x|^α fXi(x) dx < C = const.

1. These conditions are not the most general ones, but they cover a large number of applications.
2. The proof of this theorem is long and tedious and is therefore omitted. See, e.g., [?], pp. 213-7.
Figure 5: Graphical example of the Central Limit Theorem (CLT) in action: histograms of X1: Gaussian (µ = 0, σ = 1), X2: Rayleigh (B = 1), X3: Unif[−4; 6.7], plus Weibull (A = B = 1.3), Lognormal (µ = −2, σ = 1) and Poisson (λ = 2) components, and the sum X = X1 + ... + Xn (µ ≈ 2.46, σ ≈ 1.52), shown against a normal probability plot.

Example – Suppose that orders in a restaurant are modeled by i.i.d. random variables with mean µ = 8 dollars and standard deviation σ = 2 dollars.

a) Estimate the probability that the first 100 customers spend a total of more than 840 dollars.
b) Estimate the probability that the first 100 customers spend a total between 780 and 820 dollars.
c) After how many orders (≡ n customers) can one be 90% sure that the total spent by all customers exceeds 1000 dollars?

Solution a) Let Xk be the amount spent by the k-th customer. The total spent by the first 100 customers is expressed by another r.v.:

    SΣ = Σ_{i=1}^{n=100} Xi

whose mean is µΣ = nµ = 800 and whose variance is σΣ² = nσ² = 400.

Regardless of the distribution of each r.v., the sum r.v. tends to a Gaussian distribution when n is large:

    SΣ ∼ N(µΣ, σΣ²) = N(800, 400)

whose normalized form is: ZΣ = (SΣ − µΣ)/√(σΣ²) = (SΣ − 800)/20

 Probability that the first 100 customers spend a total of more than 840 dollars:

    Pr[SΣ > 840] = Pr[ Z100 > (840 − 800)/20 ] ≈ Q(2) = 0.0228

Solution b) Probability that the first 100 customers spend a total between 780 and 820 dollars:

    Pr[780 ≤ SΣ ≤ 820] = Pr[−1 ≤ Z100 ≤ 1] = 1 − 2Q(1) ≈ 0.682

Solution c) The problem here is to find the value of n for which:

    Pr[Sn > 1000] = 0.9

    Pr[Sn > 1000] = Pr[ Zn > (1000 − 8n)/(2√n) ] = 0.9

Recalling that Q(−x) = 1 − Q(x) and considering that Q⁻¹(0.9) = −1.2816 (tabulated), one gets:

    (1000 − 8n)/(2√n) = −1.2816

which yields a quadratic equation in √n: 8n − 2.5632·√n − 1000 = 0

The positive root gives √n = 11.34, i.e., n = 128.6 ⇒ n = 129 customers.
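The three answers can be reproduced numerically (a sketch of my own, using Q(x) = ½·erfc(x/√2)):

```python
import math

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))  # Gaussian tail function

mu, sigma = 8.0, 2.0

# a) Pr[S > 840] with n = 100: Q((840 - 800)/20) = Q(2)
pa = Q((840 - 100 * mu) / (sigma * math.sqrt(100)))
# b) Pr[780 <= S <= 820] = 1 - 2 Q(1)
pb = 1.0 - 2.0 * Q((820 - 800) / 20.0)
# c) smallest n with Pr[S_n > 1000] = Q((1000 - 8n)/(2 sqrt(n))) >= 0.9
n = 1
while Q((1000 - mu * n) / (sigma * math.sqrt(n))) < 0.9:
    n += 1

print(round(pa, 4), round(pb, 4), n)  # ≈ 0.0228, ≈ 0.6827, 129
```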

Example – Suppose that the r.v. X and Y are independent and both exponentially distributed:

    fX(x) = a·e^{−ax}·u(x),     fY(y) = b·e^{−by}·u(y)

 Determine:
1. The PDF of Z = X + Y when a = b;
2. The PDF of Z = X + Y when a ≠ b.

Solution: Since the number of r.v. being summed is not large enough (the pdf of Z will NOT be Gaussian, cf. (48)), the PDF of Z = X + Y is obtained from the convolution:

    fZ(z) = fX ∗ fY = ∫_{−∞}^{∞} fX(z − y)·fY(y) dy = ab ∫_{−∞}^{∞} e^{−a(z−y)}·e^{−by}·u(z − y)·u(y) dy
          = ab·e^{−az} ∫_0^z e^{(a−b)y} dy · u(z) = (ab/(a − b))·e^{−az}·[ e^{(a−b)y} ]_{y=0}^{y=z}·u(z)
          = (ab/(a − b))·( e^{−bz} − e^{−az} )·u(z)     valid for a ≠ b.

For the case a = b this gives:

    fZ(z) = a²z·e^{−az}·u(z)
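Both cases can be verified numerically (an illustrative sketch of mine; a = 1, b = 3 and z = 1.5 are arbitrary choices):

```python
import math

def conv_at(z, fX, fY, n=20000):
    # trapezoidal evaluation of ∫_0^z fX(z - y) fY(y) dy
    h = z / n
    s = 0.5 * (fX(z) * fY(0.0) + fX(0.0) * fY(z))
    for i in range(1, n):
        y = i * h
        s += fX(z - y) * fY(y)
    return s * h

a, b, z = 1.0, 3.0, 1.5
fXa = lambda x: a * math.exp(-a * x)
fYb = lambda y: b * math.exp(-b * y)

# a ≠ b case: fZ(z) = ab/(a-b) (e^{-bz} - e^{-az})
closed = a * b / (a - b) * (math.exp(-b * z) - math.exp(-a * z))
# a = b case: fZ(z) = a² z e^{-az}
closed_eq = a * a * z * math.exp(-a * z)

print(conv_at(z, fXa, fYb), closed)
print(conv_at(z, fXa, fXa), closed_eq)
```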
Example – (Central Limit Theorem) Suppose that Xi, i = 1, ..., n, are all independent r.v. uniformly distributed over the interval [±0.5]. Consider the r.v. Z as the sum of r.v.:

    Z = √(12/n) Σ_{i=1}^{n} Xi

Note that the sum above has been normalized so that Z has zero mean and unit varianceᵃ. Determine:
1. the type of distribution Z will assume if n is large enough;
2. the PDF of Z.
Hint:
use the following facts: Y = X1 + X2 + ... + Xn and therefore fY(z) = fX1(z) ∗ fX2(z) ∗ ... ∗ fXn(z).
Solution. If Y = X1 + X2 + ... + Xn, then fY(z) = fX1(z) ∗ fX2(z) ∗ ... ∗ fXn(z) and

    fZ(z) = √(n/12)·fY( √(n/12)·z )

as n grows, the pdf fZ(z) ⇒ Gaussian.
ᵃ The variance of Xi is σ²_Xi = 1/12.
which can be computed from the relation between probability density function and characteristic function:

    fY(x) = fX1(√n·x) ∗ fX2(√n·x) ∗ ... ∗ fXn(√n·x)
      ⇕
    My(ω) = Mx1(ω/√n) × Mx2(ω/√n) × ... × Mxn(ω/√n)

Taufik Abrão 75 Londrina, 3 de Maio de 2018


Example – A constant but unknown voltage is measured. Each measurement X_i can be modeled as the sum of the desired voltage value v and a noise voltage N_i of zero mean and standard deviation 1 µV:

X_i = v + N_i

Assume the noise voltages constitute independent r.v. How many measurements are needed so that, with probability greater than or equal to 0.99, the sample mean, (1/n) Σ_{j=1}^{n} X_j, lies within ε = 1 µV of the true mean?

Solution. Assuming a number of measurements n large enough for the central limit theorem to apply, each measurement has the distribution

X ∼ N(µ = v; σ² = (1 µV)²)

Normalizing the sample mean X̄:

X̃ = (X̄ − µ)/(σ/√n) ∼ N[0, 1]

Event: probability greater than or equal to 0.99 that the sample mean lies within ε = 1 µV of the true mean:

Pr[−b_0.99 ≤ X̃ ≤ b_0.99] = 0.99


Consulting a table of the Normal distribution gives Q(2.5758) = 0.005, and therefore b_0.99 = 2.5758.

The event {−b_0.99 ≤ X̃ ≤ b_0.99} is identical to

−b_0.99 · σ/√n ≤ X̄ − µ ≤ b_0.99 · σ/√n

or, equivalently,

µ − (σ/√n) · b_0.99 ≤ X̄ ≤ µ + (σ/√n) · b_0.99

Finally:

(σ/√n) · b_0.99 ≤ ε  ⟺  (1 µV/√n) · 2.5758 ≤ 1 µV  ⟹  n ≥ 7 samples
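The table lookup can be reproduced numerically. The sketch below (an added illustration, not part of the original solution) inverts the standard normal CDF by bisection and recovers both b_0.99 and the required n:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_quantile(p, lo=-10.0, hi=10.0):
    """Invert phi by bisection (plenty of accuracy for this sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

sigma = 1.0   # noise standard deviation, in microvolts
eps = 1.0     # required accuracy, in microvolts
b = normal_quantile(0.995)            # two-sided 0.99 coverage -> 0.995 quantile
n = math.ceil((b * sigma / eps) ** 2)
print(round(b, 4), n)                 # -> 2.5758 7
```

Note the two-sided event at level 0.99 uses the 0.995 quantile, since Q(b) = 0.005 on each tail.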


Chernoff Upper Bound on the Tail Probability

 The "tail probability" (area under the tail of the PDF) often has to be evaluated to determine the error probability of digital communication systems
 Closed-form results are often not feasible ⇒ the simple Chernoff upper bound can be used for system design and/or analysis
 Chernoff Bound: The tail probability is given by

P(X ≥ δ) = ∫_δ^∞ p(x) dx = ∫_{−∞}^{∞} g(x) p(x) dx = E[g(X)]   (49)

with the following definition of the step function:

g(X) = { 1, X ≥ δ
       { 0, X < δ



 g(X) can be upper bounded by

g(X) ≤ e^{α(X−δ)}, with α ≥ 0

[Figure 6: Function approximation for the Chernoff bound — the exponential e^{α(X−δ)} lies above the step function g(X) and touches it at X = δ.]

 Therefore, we get the bound:

P(X ≥ δ) = E[g(X)] ≤ E[e^{α(X−δ)}] = e^{−αδ} · E[e^{αX}], α ≥ 0   (50)
 In practice, however, we are interested in the tightest upper bound ⇒ we optimize α:

d/dα ( e^{−αδ} · E[e^{αX}] ) = 0

⇒ α_opt can be obtained from

E[X e^{α_opt X}] − δ · E[e^{α_opt X}] = 0

 The solution of this equation gives the Chernoff bound

P(X ≥ δ) ≤ e^{−α_opt δ} · E[e^{α_opt X}]   (51)
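As an added illustration of (50)–(51), not taken from the slides: for X ∼ N(0, 1) the MGF is E[e^{αX}] = e^{α²/2}, so the bound e^{−αδ + α²/2} is minimized at α_opt = δ, giving the classical result P(X ≥ δ) ≤ e^{−δ²/2}. A quick numeric comparison with the exact tail:

```python
import math

def gaussian_tail(delta):
    """Exact tail Q(delta) of N(0,1) via the complementary error function."""
    return 0.5 * math.erfc(delta / math.sqrt(2.0))

def chernoff_gaussian(delta):
    """Chernoff bound for X ~ N(0,1): alpha_opt = delta, bound = exp(-delta^2/2)."""
    return math.exp(-delta ** 2 / 2.0)

for d in (1.0, 2.0, 3.0):
    exact, bound = gaussian_tail(d), chernoff_gaussian(d)
    assert exact <= bound   # the bound always lies above the true tail
    print(d, round(exact, 5), round(bound, 5))
```

The bound is loose in absolute terms but has the correct exponential decay rate, which is what makes it useful in error-probability analysis.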



Example – Bounds. Let X be a r.v. with Poisson distribution of parameter λ = 1/2. Using the Markov and Chebyshev inequalities, as well as the Chernoff bound, compare the values obtained with the exact value of Pr(X > 2).

Solution – From the Poisson distribution (U2, slide 86):
a) Since X takes only integer values: Pr(X > 2) ≡ Pr(X ≥ 3)
b) Mean and variance: E[X] = λ = 1/2 and E[X²] = λ² + λ = 1/4 + 1/2 = 0.75
c) Exact value of the Poisson probability:
Pr(X ≥ 3) = 1 − Pr(X < 3) = 1 − Pr(X = 0) − Pr(X = 1) − Pr(X = 2) = 0.0144
d) Bounds:
 Markov inequality: from eq. (38) it follows that
P(X ≥ α) ≤ E{X}/α  ⟺  P(X ≥ 3) ≤ (1/2)/3 = 0.167
 Chebyshev inequality: since the Poisson r.v. takes only positive values, the inequality in (35) can be rewritten as
P(|X| ≥ α) ≡ P(X ≥ α) ≤ E[X²]/α²  ⟺  P(X ≥ 3) ≤ 0.75/3² = 0.0833



 Chernoff bound: from eq. (50),

P(X ≥ δ) ≤ E[e^{α(X−δ)}] = e^{−αδ} · E[e^{αX}], α ≥ 0

P(X ≥ 3) ≤ e^{−3α} · E[e^{αX}] ≡ e^{−3α} · Φ_X(α)

where the moment-generating function of the Poisson distribution is given by

Φ_X(s) = E[e^{sX}] = Σ_{n=0}^{∞} e^{sn} Pr(X = n) = e^{−λ} Σ_{n=0}^{∞} (λe^s)^n / n! = e^{−λ} e^{λe^s} = e^{λ(e^s − 1)}

Therefore, the desired Chernoff bound is obtained by minimizing the function over α:

P(X ≥ 3) ≤ e^{−3α} · E[e^{αX}] ≡ min_{α≥0} e^{−3α} · e^{λ(e^α − 1)}

⇒ Since the exponential function is strictly increasing, it suffices to minimize its argument.
⇒ Differentiating and setting the derivative to zero:

d/dα e^{λ(e^α − 1) − 3α} = 0  ⟺  λe^α − 3 = 0  ⟹  α_opt = ln(3/λ)

Substituting the value of α_opt:

P(X ≥ 3) ≤ e^{λ(e^{α_opt} − 1) − 3α_opt} = e^{3 − λ − 3 ln(3/λ)}

With λ = 1/2 this finally yields:

P(X ≥ 3) ≤ e^{2.5 − 3 ln 6} = 0.0564

which is a tighter bound (than Markov's 0.167 and Chebyshev's 0.0833) on the exact value P(X ≥ 3) = 0.0144.
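The four numbers above can be reproduced in a few lines of Python (an added check, not part of the original solution):

```python
import math

lam = 0.5

# Exact Poisson tail: P(X >= 3) = 1 - P(0) - P(1) - P(2)
exact = 1.0 - sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(3))

markov = lam / 3.0                     # E[X] / alpha
chebyshev = (lam**2 + lam) / 3.0**2    # E[X^2] / alpha^2
alpha_opt = math.log(3.0 / lam)
chernoff = math.exp(lam * (math.exp(alpha_opt) - 1.0) - 3.0 * alpha_opt)

print(round(exact, 4), round(markov, 3), round(chebyshev, 4), round(chernoff, 4))
# -> 0.0144 0.167 0.0833 0.0564
```

The ordering exact < Chernoff < Chebyshev < Markov makes the relative tightness of the three bounds explicit.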



U3 - Exercises



1. For the Gaussian probability density function

f_X(x) = (1/(σ√(2π))) exp(−x²/(2σ²))

show that

E{X^n} = 1 · 3 · … · (n − 1) · σ^n for n even, and E{X^n} = 0 for n odd.

2. Let Z = X + Y, where X and Y are two independent Gaussian random variables with pdfs

f_X(x) = (1/(σ_x√(2π))) exp(−(x − m_x)²/(2σ_x²))

and

f_Y(y) = (1/(σ_y√(2π))) exp(−(y − m_y)²/(2σ_y²))

Show that Z is also Gaussian, with E{Z} = E{X} + E{Y} and σ_z² = σ_x² + σ_y².



3. Show that if two random variables are related by Y = aX + b, where a and b are two arbitrary constants, the correlation coefficient is ρ = 1 if a > 0 and ρ = −1 if a < 0.

4. Given the r.v. X = cos(Θ) and Y = sin(Θ), where Θ ∼ U[0, 2π], show that X and Y are uncorrelated but not independent.

5. Let X_i, i = 1, …, N, be a sequence of independent r.v. (the outcomes of N identical but independent experiments), with E{X_i} = µ and E{(X_i − µ)²} = σ². The parameter µ is to be estimated with the following estimator:

µ̂ = (1/N) Σ_{i=1}^{N} X_i

Compute the mean and the variance of µ̂.

6. In the previous problem, what is the smallest value of N such that P(|µ̂ − µ| > σ/10) ≤ 0.01? Hint: Chebyshev inequality.

7. Repeat the previous problem assuming N is large enough for the distribution of µ̂ to be taken as Gaussian.



8. Let X = [X1 X2 X3]^T be a random vector with mean E{X} = [5 −5 6]^T and covariance matrix

C_X = [  5  2 −1
         2  5  0
        −1  0  4 ]

Compute the mean and the variance of

Y = A^T X + b,

with A = [2 −1 2]^T and b = 5.

9. Given two r.v. X and Y, define a new r.v.:

Z = ( κ · (X − µ_X)/σ_X + (Y − µ_Y)/σ_Y )²

where κ is a real constant.
(a) Determine E[Z].
(b) Show that −1 ≤ ρ_XY ≤ 1, where ρ_XY is the correlation coefficient between X and Y.

10. Suppose customer spending in a snack bar consists of i.i.d. r.v. with mean m = 8 Reais and standard deviation σ = 2 Reais. Estimate the probability that the first 100 customers spend a total:



(a) of at least 840 Reais;
(b) between 780 and 820 Reais.

11. Moment-generating function (MGF)
(a) Show that the MGF of the uniform distribution

f_X(x) = 1/a, 0 < x < a

is given by

Φ_x(s) = (e^{as} − 1)/(as) for s ≠ 0, and Φ_x(0) = 1.

(b) Derive the MGF of the following uniform distribution:

f_X(x) = 1/(2a), |x| < a

12. Find the MGF of the following exponential distribution:

f_X(x) = µe^{−µx}, 0 ≤ x < ∞



13. Find the characteristic function (CF) of the following two-sided (bilateral) exponential PDF:

f(x) = e^{−|x|}/2, −∞ < x < ∞

14. Sum of i.i.d. r.v.: find the mean and the variance of the sum of n independent, identically distributed random variables, each with mean m and variance σ².

15. Sum of i.i.d. exponential r.v.: find the PDF of the sum of n exponential r.v., all with parameter a.

16. Find the Chernoff bound for an exponential r.v. with λ = 1. Compare the bound with the exact value of Pr[X > 5].
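As a self-check aid for exercise 8 (a sketch added here, not from the slides; it assumes the standard linear-transform identities E[Y] = Aᵀ E{X} + b and Var[Y] = Aᵀ C_X A rather than deriving them):

```python
# Data from exercise 8 (mean vector, covariance matrix, weights, offset).
mu = [5, -5, 6]
C = [[5, 2, -1],
     [2, 5, 0],
     [-1, 0, 4]]
A = [2, -1, 2]
b = 5

# E[Y] = A^T mu + b
mean_Y = sum(a * m for a, m in zip(A, mu)) + b
# Var[Y] = A^T C A, computed as A . (C A)
CA = [sum(C[i][j] * A[j] for j in range(3)) for i in range(3)]
var_Y = sum(A[i] * CA[i] for i in range(3))
print(mean_Y, var_Y)   # -> 32 25
```

Deriving these two identities by hand, rather than assuming them, is the actual point of the exercise.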
