Solutions 01
\[
m(t) = E[e^{tX}] = \sum_{x=1}^{\infty} e^{tx} p(x) = \sum_{x=1}^{\infty} \frac{e^{tx}}{2^{x}} = \sum_{x=1}^{\infty} \left(\frac{e^{t}}{2}\right)^{x} = \frac{e^{t}/2}{1 - (e^{t}/2)} = (2e^{-t} - 1)^{-1},
\]
for $t < \ln 2$. With $m(t) = (2e^{-t} - 1)^{-1}$, we are able to compute the first and second derivatives of $m(t)$,
\[
m'(t) = 2e^{-t}(2e^{-t} - 1)^{-2}, \qquad m''(t) = 2e^{-t}(2e^{-t} + 1)(2e^{-t} - 1)^{-3}.
\]
The first and second moments of $X$ are $\mu = m'(0) = 2$ and $E[X^2] = m''(0) = 6$, and the variance is $\sigma^2 = E[X^2] - \mu^2 = 6 - 4 = 2$. Therefore the mgf, the mean, and the variance of $X$ are
\[
m(t) = (2e^{-t} - 1)^{-1}, \qquad \mu = 2, \qquad \sigma^2 = 2.
\]
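As a quick sanity check, the pmf being summed above is $p(x) = (1/2)^{x}$, $x = 1, 2, 3, \ldots$, which is a geometric distribution with success probability $1/2$, so scipy reproduces the mean and variance directly:

    # Check mu = 2 and sigma^2 = 2 for the pmf p(x) = (1/2)^x, x = 1, 2, 3, ...
    from scipy import stats

    X = stats.geom(0.5)        # pmf (1/2)^x on x = 1, 2, 3, ...
    print(X.mean(), X.var())   # 2.0 2.0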
so the variance is $\sigma^2 = E[X^2] - \mu^2 = 3/10 - (1/2)^2 = 1/20$ and the standard deviation is $\sigma = 1/\sqrt{20} = \sqrt{5}/10 \approx 0.224$. Hence
\[
P(\mu - 2\sigma < X < \mu + 2\sigma) = P\!\left(\frac{1}{2} - \frac{\sqrt{5}}{5} < X < \frac{1}{2} + \frac{\sqrt{5}}{5}\right)
= \int_{\frac{1}{2} - \frac{\sqrt{5}}{5}}^{\frac{1}{2} + \frac{\sqrt{5}}{5}} 6x(1 - x)\,dx
= \frac{11\sqrt{5}}{25} \approx 0.984.
\]
Remark: $f(x) = 6x(1 - x)$ is the density for a beta distribution with parameters $\alpha = 2$, $\beta = 2$, so you can quickly find the mean and variance using the equations on page 667.
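A quick scipy check of these numbers:

    # Beta(2, 2) has density 6x(1 - x) on (0, 1); check mu, sigma^2, and the
    # probability of falling within two standard deviations of the mean.
    from math import sqrt
    from scipy import stats

    X = stats.beta(2, 2)
    mu, var = X.mean(), X.var()
    sd = sqrt(var)
    print(mu, var)                                # 0.5 0.05
    print(X.cdf(mu + 2*sd) - X.cdf(mu - 2*sd))    # about 0.9839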
$\mu = 2$ and $\sigma^2 = 2$. Since $2\sigma = 2\sqrt{2} \approx 2.83$, the only points of positive probability in the interval $(\mu - 2\sigma,\ \mu + 2\sigma)$ are $x = 1, 2, 3, 4$, and therefore
\[
P(\mu - 2\sigma < X < \mu + 2\sigma) = \sum_{x=1}^{4} \left(\frac{1}{2}\right)^{x} = \frac{15}{16} = 0.9375.
\]
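Since the pmf $(1/2)^{x}$ on $x = 1, 2, 3, \ldots$ is geometric with $p = 1/2$, the exact value $15/16$ can be confirmed from its cdf:

    # P(mu - 2*sigma < X < mu + 2*sigma) covers x = 1, 2, 3, 4, i.e. the cdf at 4.
    from scipy import stats
    print(stats.geom(0.5).cdf(4))   # 0.9375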
\[
\int_{-\infty}^{c} (x - c) f(x)\,dx + \int_{c}^{\infty} (x - c) f(x)\,dx
= -\int_{0}^{\infty} u f(c - u)\,du + \int_{0}^{\infty} u f(c + u)\,du
= -\int_{0}^{\infty} u f(c + u)\,du + \int_{0}^{\infty} u f(c + u)\,du = 0,
\]
where the substitutions $u = c - x$ and $u = x - c$ were used in the first and second integrals, respectively, and the symmetry of $f$ about $c$ gives $f(c - u) = f(c + u)$.
\[
E\!\left[\exp\!\left(t\,\frac{X - \mu}{\sigma}\right)\right] = e^{-\mu t/\sigma} M\!\left(\frac{t}{\sigma}\right), \qquad -h\sigma < t < h\sigma.
\]
Solution 1.9.6. Using the linear properties of expected value (see Theorem 1.8.2) and the definition of $\mu = E[X]$, we calculate
\[
E\!\left[\frac{X - \mu}{\sigma}\right] = \frac{E[X - \mu]}{\sigma} = \frac{E[X] - \mu}{\sigma} = \frac{\mu - \mu}{\sigma} = 0,
\]
which verifies the second equation.
If $-h\sigma < t < h\sigma$ then $-h < t/\sigma < h$, which shows that $t/\sigma$ is in the domain of $M$. Using the definition of $M(t) = E[\exp(tX)]$ and the linear properties of the expected value, we calculate
\[
E\!\left[\exp\!\left(t\,\frac{X - \mu}{\sigma}\right)\right]
= E\!\left[e^{-t\mu/\sigma}\, e^{tX/\sigma}\right]
= e^{-t\mu/\sigma}\, E\!\left[e^{(t/\sigma)X}\right]
= e^{-t\mu/\sigma}\, M\!\left(\frac{t}{\sigma}\right).
\]
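For a concrete illustration of this identity (a hypothetical example, not taken from the text), let $X \sim N(\mu, \sigma^2)$ with $M(t) = \exp(\mu t + \sigma^2 t^2/2)$; then $e^{-\mu t/\sigma} M(t/\sigma)$ should simplify to $\exp(t^2/2)$, the mgf of the standardized variable $(X - \mu)/\sigma \sim N(0, 1)$:

    # Symbolic check that exp(-mu*t/sigma) * M(t/sigma) reduces to exp(t**2/2)
    # when M(t) = exp(mu*t + sigma**2*t**2/2) is the normal mgf.
    import sympy as sp

    t, mu, sigma = sp.symbols('t mu sigma', positive=True)
    M = lambda s: sp.exp(mu*s + sigma**2*s**2/2)
    print(sp.simplify(sp.exp(-mu*t/sigma) * M(t/sigma)))   # exp(t**2/2)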
Problem 1.9.11. Let X denote a random variable such that $K(t) = E[t^{X}]$ exists for all real values of $t$ in a certain open interval that includes the point $t = 1$. Show that $K^{(m)}(1)$ is equal to the $m$th factorial moment $E[X(X-1)\cdots(X-m+1)]$.
Solution 1.9.11. Differentiating $k(t) = t^{x}$ $m$ times we find
\[
k^{(m)}(t) = x(x-1)\cdots(x-m+1)\,t^{x-m}.
\]
We may therefore expand $t^{x}$ in its Taylor series about $t = 1$:
\[
t^{x} = \sum_{m=0}^{\infty} k^{(m)}(1)\,\frac{(t-1)^{m}}{m!} = \sum_{m=0}^{\infty} x(x-1)\cdots(x-m+1)\,\frac{(t-1)^{m}}{m!}.
\]
Taking expectations term by term gives
\[
K(t) = E[t^{X}] = \sum_{m=0}^{\infty} E\!\left[X(X-1)\cdots(X-m+1)\right] \frac{(t-1)^{m}}{m!} = \sum_{m=0}^{\infty} K^{(m)}(1)\,\frac{(t-1)^{m}}{m!},
\]
where the last expression is the Taylor series of $K(t)$ about $t = 1$. Comparing coefficients of $(t-1)^{m}$ shows that $K^{(m)}(1) = E[X(X-1)\cdots(X-m+1)]$.
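For a concrete example of this result (a hypothetical illustration), take $X \sim \text{Poisson}(\lambda)$: then $K(t) = E[t^{X}] = e^{\lambda(t-1)}$, and differentiating $m$ times and setting $t = 1$ should give the well-known factorial moments $\lambda^{m}$:

    # Check K^(m)(1) = lambda^m for the Poisson factorial mgf K(t) = exp(lam*(t-1)).
    import sympy as sp

    t, lam = sp.symbols('t lam', positive=True)
    K = sp.exp(lam*(t - 1))
    for m in range(1, 5):
        print(sp.simplify(sp.diff(K, t, m).subs(t, 1)))   # lam, lam**2, lam**3, lam**4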
\[
\int_{-\infty}^{\infty} e^{tu} f(-u)\,du = \int_{-\infty}^{\infty} e^{tu} f(u)\,du = M(t).
\]
Problem 1.10.3. If X is a random variable such that $E[X] = 3$ and $E[X^2] = 13$, use Chebyshev's inequality to determine a lower bound for the probability $P(-2 < X < 8)$.
Solution 1.10.3. Chebyshev's inequality states that $P(|X - \mu| < k\sigma) \geq 1 - 1/k^{2}$. In this problem $\mu = 3$ and $\sigma^{2} = 13 - 9 = 4$, giving $\sigma = 2$. Thus
\[
P(-2 < X < 8) = P(-5 < X - 3 < 5) = P(|X - 3| < 5) = P\!\left(|X - 3| < \tfrac{5}{2}\,\sigma\right) \geq 1 - \left(\tfrac{2}{5}\right)^{2} = 1 - \tfrac{4}{25} = \tfrac{21}{25}.
\]
From the Chebyshev inequality we conclude that $P(-2 < X < 8) \geq 21/25$.
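As a sanity check with one particular distribution satisfying $E[X] = 3$ and $E[X^{2}] = 13$ (a hypothetical choice, namely $N(3, 2^{2})$), the exact probability does dominate the Chebyshev lower bound $21/25 = 0.84$:

    # Exact probability for N(3, 4) versus the Chebyshev bound 0.84.
    from scipy import stats
    X = stats.norm(loc=3, scale=2)
    print(X.cdf(8) - X.cdf(-2))   # about 0.9876, indeed >= 0.84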
Problem 1.10.4. Let X be a random variable with mgf $M(t)$, $-h < t < h$. Prove that
\[
P(X \geq a) \leq e^{-at} M(t), \qquad 0 < t < h,
\]
and that
\[
P(X \leq a) \leq e^{-at} M(t), \qquad -h < t < 0.
\]
Solution 1.10.4. We will use Markov's inequality (Theorem 1.10.2), which states that if $u(X) \geq 0$ and if $c > 0$ is a constant, then
\[
P(u(X) \geq c) \leq E[u(X)]/c.
\]
We will also use the facts that increasing functions preserve order and decreasing functions reverse order.

Consider the function $u(x) = e^{tx} > 0$, which is an increasing function of $x$ as long as $t > 0$. So, for $t > 0$ we have that $X \geq a$ if and only if $e^{tX} = u(X) \geq u(a) = e^{ta}$. Applying Markov's inequality and the definition of $M(t)$, we have for $0 < t < h$
\[
P(X \geq a) = P(e^{tX} \geq e^{ta}) \leq E[e^{tX}]/e^{ta} = e^{-at} M(t).
\]
When $t < 0$, $u(x) = e^{tx} > 0$ is a decreasing function of $x$, so $X \leq a$ if and only if $e^{tX} = u(X) \geq u(a) = e^{ta}$. Applying Markov's inequality again, we have for $-h < t < 0$
\[
P(X \leq a) = P(e^{tX} \geq e^{ta}) \leq E[e^{tX}]/e^{ta} = e^{-at} M(t).
\]
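To see the first bound in action (a hypothetical example, not part of the problem), take $X \sim N(0, 1)$ with $M(t) = e^{t^{2}/2}$; choosing $t = a$ in $P(X \geq a) \leq e^{-at} M(t)$ gives the bound $e^{-a^{2}/2}$, which can be compared with the exact tail:

    # Exact N(0,1) tail versus the bound exp(-a**2/2) at a = 2.
    import numpy as np
    from scipy import stats

    a = 2.0
    print(stats.norm.sf(a))      # about 0.0228
    print(np.exp(-a**2 / 2))     # about 0.1353, a valid (if loose) upper bound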
Problem 1.10.5. The mgf of X exists for all real values of t and is given by
\[
M(t) = \frac{e^{t} - e^{-t}}{2t}, \quad t \neq 0, \qquad M(0) = 1.
\]
Use the result of the preceding exercise to show that $P(X \geq 1) = 0$ and $P(X \leq -1) = 0$.
Solution 1.10.5. Taking $a = 1$ in problem 1.10.4, we see that for all $t > 0$, $P(X \geq 1) \leq e^{-t} M(t) = (1 - e^{-2t})/(2t)$. Taking the limit as $t \to \infty$,
\[
0 \leq P(X \geq 1) \leq \lim_{t \to \infty} \frac{1 - e^{-2t}}{2t} = 0,
\]
which shows that $P(X \geq 1) = 0$. Similarly, taking $a = -1$, for all $t < 0$ we have $P(X \leq -1) \leq e^{t} M(t) = (e^{2t} - 1)/(2t)$, and
\[
0 \leq P(X \leq -1) \leq \lim_{t \to -\infty} \frac{e^{2t} - 1}{2t} = 0,
\]
which shows that $P(X \leq -1) = 0$.
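Numerically, the bound $(1 - e^{-2t})/(2t)$ indeed shrinks toward $0$ as $t$ grows (a small illustration; incidentally, $M(t)$ here is the mgf of a uniform distribution on $(-1, 1)$, which makes the conclusion unsurprising):

    # The upper bound on P(X >= 1) for increasing t.
    import numpy as np
    for t in [1.0, 10.0, 100.0, 1000.0]:
        print(t, (1 - np.exp(-2*t)) / (2*t))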
Problem 3.3.1. If $(1 - 2t)^{-6}$, $t < 1/2$, is the mgf of the random variable X, find $P(X < 5.23)$.

Solution 3.3.1. The mgf of X is that of a $\chi^{2}$-distribution with $r = 12$ degrees of freedom. Using Table II on page 658, we see that the probability is $P(X < 5.23) \approx 0.050$.
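The table lookup can be double-checked by evaluating the chi-square cdf directly:

    from scipy import stats
    print(stats.chi2.cdf(5.23, df=12))   # about 0.050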
Problem 3.3.2. If X is $\chi^{2}(5)$, determine the constants c and d so that $P(c < X < d) = 0.95$ and $P(X < c) = 0.025$.

Solution 3.3.2. Using Table II on page 658 we find $P(X < 0.831) \approx 0.025$ and $P(X < 12.833) \approx 0.975$. So, with $c = 0.831$ and $d = 12.833$ we have $P(c < X < d) = 0.975 - 0.025 = 0.95$ and $P(X < c) = 0.025$.
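Equivalently, the constants are the 2.5% and 97.5% quantiles of the $\chi^{2}(5)$ distribution, as a quick check confirms:

    from scipy import stats
    print(stats.chi2.ppf(0.025, df=5))   # about 0.831
    print(stats.chi2.ppf(0.975, df=5))   # about 12.833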
Problem 3.3.3. Find $P(3.28 < X < 25.2)$ if X has a gamma distribution with $\alpha = 3$ and $\beta = 4$.

Solution 3.3.3. The mgf of X is $M_X(t) = (1 - 4t)^{-3}$. From this we see that $E[e^{tX/2}] = M_X(t/2) = (1 - 2t)^{-3}$, which is the mgf for a $\chi^{2}$ with $r = 6$ degrees of freedom, so $X/2$ is $\chi^{2}(6)$. Using Table II on page 658 we calculate
\[
P(3.28 < X < 25.2) = P(1.64 < X/2 < 12.6) \approx 0.950 - 0.050 = 0.900.
\]
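The same probability can also be obtained without the chi-square conversion, by evaluating the gamma cdf directly:

    from scipy import stats
    X = stats.gamma(a=3, scale=4)      # alpha = 3, beta = 4
    print(X.cdf(25.2) - X.cdf(3.28))   # about 0.90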
Problem 3.3.5. Show that
\[
\int_{\mu}^{\infty} \frac{z^{k-1} e^{-z}}{\Gamma(k)}\,dz = \sum_{x=0}^{k-1} \frac{\mu^{x} e^{-\mu}}{x!}, \qquad k = 1, 2, 3, \ldots. \tag{1}
\]
This demonstrates the relationship between the cdfs of the gamma and Poisson distributions.
Solution 3.3.5. An easy calculation shows that equation (1) is valid for
k = 1, which establishes the base case for an induction proof.
The key to establishing the induction step is the following calculation:
\[
\frac{\mu^{k} e^{-\mu}}{\Gamma(k+1)} = -\left.\frac{z^{k} e^{-z}}{\Gamma(k+1)}\right|_{z=\mu}^{\infty}
= \int_{\mu}^{\infty} -\frac{d}{dz}\!\left(\frac{z^{k} e^{-z}}{\Gamma(k+1)}\right) dz
= \int_{\mu}^{\infty} \left(-\frac{k z^{k-1} e^{-z}}{k\,\Gamma(k)} + \frac{z^{k} e^{-z}}{\Gamma(k+1)}\right) dz
= \int_{\mu}^{\infty} \left(-\frac{z^{k-1} e^{-z}}{\Gamma(k)} + \frac{z^{k} e^{-z}}{\Gamma(k+1)}\right) dz. \tag{2}
\]
Now suppose equation (1) holds for some $k$. Adding $\mu^{k} e^{-\mu}/\Gamma(k+1) = \mu^{k} e^{-\mu}/k!$ to both sides of (1) gives
\[
\int_{\mu}^{\infty} \frac{z^{k-1} e^{-z}}{\Gamma(k)}\,dz + \frac{\mu^{k} e^{-\mu}}{\Gamma(k+1)} = \sum_{x=0}^{k-1} \frac{\mu^{x} e^{-\mu}}{x!} + \frac{\mu^{k} e^{-\mu}}{k!} = \sum_{x=0}^{k} \frac{\mu^{x} e^{-\mu}}{x!}.
\]
Using equation (2) to simplify the left hand side of the previous equation
yields
\[
\int_{\mu}^{\infty} \frac{z^{k} e^{-z}}{\Gamma(k+1)}\,dz = \sum_{x=0}^{k} \frac{\mu^{x} e^{-\mu}}{x!},
\]
which shows that equation (1) is true for k + 1 if it is true for k. Therefore,
by the principle of mathematical induction, equation (1) is true for all k =
1, 2, 3, . . ..
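Equation (1) is also easy to confirm numerically: the left side is the upper tail of a gamma($k$, 1) distribution at $\mu$ and the right side is a Poisson($\mu$) cdf evaluated at $k - 1$. A spot check over a few values:

    # Spot check of equation (1) for a few (k, mu) pairs.
    from scipy import stats
    for k in [1, 2, 5]:
        for mu in [0.5, 2.0, 7.3]:
            print(k, mu, stats.gamma.sf(mu, a=k), stats.poisson.cdf(k - 1, mu))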
Problem 3.3.9. Let X have a gamma distribution with parameters $\alpha$ and $\beta$. Show that $P(X \geq 2\alpha\beta) \leq (2/e)^{\alpha}$.
Solution 3.3.9. From appendix D on page 667, we see that the mgf for X is $M(t) = (1 - \beta t)^{-\alpha}$ for $t < 1/\beta$. Problem 1.10.4 shows that for every constant $a$ and for every $t > 0$ in the domain of $M(t)$, $P(X \geq a) \leq e^{-at} M(t)$. Applying this result to our gamma distributed random variable we have for all $0 < t < 1/\beta$
\[
P(X \geq 2\alpha\beta) \leq \frac{e^{-2\alpha\beta t}}{(1 - \beta t)^{\alpha}}.
\]
Let us try to find the minimum value of $y = e^{-2\alpha\beta t}(1 - \beta t)^{-\alpha}$ over the interval $0 < t < 1/\beta$. A short calculation shows that the first two derivatives of $y$ are
\[
y' = \alpha\beta\, e^{-2\alpha\beta t}(1 - \beta t)^{-\alpha - 1}(2\beta t - 1), \qquad
y'' = \alpha\beta^{2} e^{-2\alpha\beta t}(1 - \beta t)^{-\alpha - 2}\left[1 + \alpha(2\beta t - 1)^{2}\right].
\]
Since $y' = 0$ at $t = 1/(2\beta) \in (0, 1/\beta)$ and since $y'' > 0$, we see that $y$ takes its minimum value at $t = 1/(2\beta)$ and therefore
\[
P(X \geq 2\alpha\beta) \leq \left.\frac{e^{-2\alpha\beta t}}{(1 - \beta t)^{\alpha}}\right|_{t = \frac{1}{2\beta}} = \left(\frac{2}{e}\right)^{\alpha}.
\]
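A brief numerical spot check of the bound, for a few hypothetical parameter choices:

    # P(X >= 2*alpha*beta) versus the bound (2/e)**alpha; the bound always dominates.
    from math import e
    from scipy import stats
    for alpha, beta in [(1, 1), (2, 3), (5, 0.5)]:
        exact = stats.gamma.sf(2*alpha*beta, a=alpha, scale=beta)
        print(alpha, beta, exact, (2/e)**alpha)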
\[
p_X(x) = \int_{0}^{\infty} \frac{m^{x+1} e^{-2m}}{x!}\,dm
= \frac{1}{2^{x+2}\,x!} \int_{0}^{\infty} u^{(x+2)-1} e^{-u}\,du \qquad (\text{letting } u = 2m,\ du = 2\,dm)
= \frac{\Gamma(x+2)}{2^{x+2}\,x!} = \frac{x+1}{2^{x+2}},
\]
for $x = 0, 1, 2, \ldots$, zero elsewhere. This allows us to find
\[
P(X = 0) = p_X(0) = \frac{1}{4}, \qquad P(X = 1) = p_X(1) = \frac{1}{4}, \qquad P(X = 2) = p_X(2) = \frac{3}{16}.
\]
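A quick quadrature check of the pmf $(x+1)/2^{x+2}$ against the defining integral:

    # Compare the integral of m**(x+1) * exp(-2m) / x! over (0, inf) with (x+1)/2**(x+2).
    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    for x in range(5):
        val, _ = quad(lambda m: m**(x + 1) * np.exp(-2*m) / factorial(x), 0, np.inf)
        print(x, val, (x + 1) / 2**(x + 2))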
Problem 3.3.19. Determine the constant c in each of the following so that each f(x) is a pdf:
(1) $f(x) = c\,x(1-x)^{3}$, $0 < x < 1$, zero elsewhere.
(2) $f(x) = c\,x^{4}(1-x)^{5}$, $0 < x < 1$, zero elsewhere.
(3) $f(x) = c\,x^{2}(1-x)^{8}$, $0 < x < 1$, zero elsewhere.
Solution 3.3.19.
(1) $c = 1/B(2,4) = \Gamma(6)/(\Gamma(2)\Gamma(4)) = 5!/(1!\,3!) = 20$.
(2) $c = 1/B(5,6) = \Gamma(11)/(\Gamma(5)\Gamma(6)) = 10!/(4!\,5!) = 1260$.
(3) $c = 1/B(3,9) = \Gamma(12)/(\Gamma(3)\Gamma(9)) = 11!/(2!\,8!) = 495$.
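The three constants are reciprocals of beta function values, so they can be verified in one line:

    from scipy.special import beta
    print(1/beta(2, 4), 1/beta(5, 6), 1/beta(3, 9))   # 20.0 1260.0 495.0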
Problem 3.3.22. Show, for $k = 1, 2, \ldots, n$, that
\[
\int_{p}^{1} \frac{n!}{(k-1)!\,(n-k)!}\, z^{k-1} (1-z)^{n-k}\,dz = \sum_{x=0}^{k-1} \binom{n}{x} p^{x} (1-p)^{n-x}.
\]
This demonstrates the relationship between the cdfs of the beta and binomial distributions.
Solution 3.3.22. This problem is very similar to problem 3.3.5. In this
case the key to the induction step is the following calculation:
\begin{align*}
\binom{n}{k} p^{k} (1-p)^{n-k}
&= -\left.\frac{n!}{k!\,(n-k)!}\, z^{k} (1-z)^{n-k}\right|_{z=p}^{1} \\
&= -\int_{p}^{1} \frac{n!}{k!\,(n-k)!}\, \frac{d}{dz}\!\left[z^{k} (1-z)^{n-k}\right] dz \\
&= \int_{p}^{1} \left[-\frac{k\,n!}{k!\,(n-k)!}\, z^{k-1} (1-z)^{n-k} + \frac{(n-k)\,n!}{k!\,(n-k)!}\, z^{k} (1-z)^{n-k-1}\right] dz \\
&= \int_{p}^{1} \left[-\frac{n!}{(k-1)!\,(n-k)!}\, z^{k-1} (1-z)^{n-k} + \frac{n!}{k!\,(n-k-1)!}\, z^{k} (1-z)^{n-k-1}\right] dz.
\end{align*}
The rest of the proof is similar to the argument in problem
3.3.5.
Remark: an alternate proof is to observe that $\sum_{x=0}^{k-1} \binom{n}{x} p^{x} (1-p)^{n-x}$ is a telescoping series, when you take advantage of the above calculation.
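Numerically, the identity says that the upper tail of a Beta($k$, $n-k+1$) distribution at $p$ equals a binomial cdf at $k - 1$, which is easy to spot-check:

    # Check the beta upper tail against the binomial cdf for n = 10, p = 0.3, k = 1, ..., 10.
    from scipy import stats
    n, p = 10, 0.3
    for k in range(1, n + 1):
        print(k, stats.beta.sf(p, k, n - k + 1), stats.binom.cdf(k - 1, n, p))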
Problem 3.3.23. Let X1 and X2 be independent random variables. Let
X1 and Y = X1 + X2 have chi-square distributions with r1 and r degrees
of freedom, respectively. Here $r_1 < r$. Show that $X_2$ has a chi-square distribution with $r - r_1$ degrees of freedom.
Solution 3.3.23. From appendix D on page 667, we see that the mgfs of $X_1$ and $Y$ are, respectively, $M_{X_1}(t) = (1 - 2t)^{-r_1/2}$ and $M_Y(t) = (1 - 2t)^{-r/2}$. Since $Y = X_1 + X_2$ is the sum of independent random variables, $M_Y(t) = M_{X_1}(t) M_{X_2}(t)$ (by Theorem 2.2.5). Solving, we find that
\[
M_{X_2}(t) = \frac{M_Y(t)}{M_{X_1}(t)} = \frac{(1 - 2t)^{-r/2}}{(1 - 2t)^{-r_1/2}} = (1 - 2t)^{-(r - r_1)/2},
\]
which is the mgf of a chi-square distribution with $r - r_1$ degrees of freedom.
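A simulation sanity check of the conclusion, using the hypothetical values $r_1 = 3$ and $r = 8$: adding independent $\chi^{2}(r_1)$ and $\chi^{2}(r - r_1)$ draws should produce a sample whose mean and variance are close to $r$ and $2r$, as a $\chi^{2}(r)$ variable requires.

    # Simulated check that chi2(3) + chi2(5) behaves like chi2(8).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x1 = stats.chi2.rvs(3, size=100_000, random_state=rng)
    x2 = stats.chi2.rvs(5, size=100_000, random_state=rng)
    print(np.mean(x1 + x2), np.var(x1 + x2))   # close to 8 and 16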