Important Inequalities
Theorem (Markov's Inequality). Let $u(X)$ be a nonnegative function of the random variable $X$. If $E[u(X)]$ exists, then for every positive constant $c$,
$$P[u(X) \geq c] \leq \frac{E[u(X)]}{c}.$$
Proof. The proof is given when the random variable X is of the continuous type;
but the proof can be adapted to the discrete case if we replace integrals by sums.
Let $A = \{x : u(x) \geq c\}$ and let $f(x)$ denote the pdf of $X$. Then
$$E[u(X)] = \int_{-\infty}^{\infty} u(x)f(x)\,dx = \int_{A} u(x)f(x)\,dx + \int_{A^c} u(x)f(x)\,dx.$$
Since each of the integrals in the extreme right-hand member of the preceding
equation is nonnegative, the left-hand member is greater than or equal to either of
them. In particular,
$$E[u(X)] \geq \int_{A} u(x)f(x)\,dx.$$
Since $u(x) \geq c$ for each $x \in A$ and
$$\int_{A} f(x)\,dx = P(X \in A) = P[u(X) \geq c],$$
it follows that
$$E[u(X)] \geq \int_{A} u(x)f(x)\,dx \geq c\int_{A} f(x)\,dx = c\,P[u(X) \geq c],$$
which is the desired result.

Taking $u(X) = (X - \mu)^2$ and $c = k^2\sigma^2$, where $\mu$ and $\sigma^2$ are the mean and variance of $X$ and $k > 0$, yields Chebyshev's inequality,
$$P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}. \qquad (1.10.2)$$
Hence, the number $1/k^2$ is an upper bound for the probability $P(|X - \mu| \geq k\sigma)$.
In the following example this upper bound and the exact value of the probability
are compared in special instances.
Example 1.10.1. Let $X$ have the uniform pdf
$$f(x) = \begin{cases} \dfrac{1}{2\sqrt{3}} & -\sqrt{3} < x < \sqrt{3} \\ 0 & \text{elsewhere.} \end{cases}$$
Based on Example 1.9.1, for this uniform distribution, we have $\mu = 0$ and $\sigma^2 = 1$. If $k = \frac{3}{2}$, we have the exact probability
$$P(|X - \mu| \geq k\sigma) = P\left(|X| \geq \tfrac{3}{2}\right) = 1 - \int_{-3/2}^{3/2} \frac{1}{2\sqrt{3}}\,dx = 1 - \frac{\sqrt{3}}{2}.$$
By Chebyshev's inequality, this probability has the upper bound $1/k^2 = \frac{4}{9}$. Since $1 - \sqrt{3}/2 = 0.134$, approximately, the exact probability in this case is considerably less than the upper bound $\frac{4}{9}$. If we take $k = 2$, we have the exact probability $P(|X - \mu| \geq 2\sigma) = P(|X| \geq 2) = 0$. This again is considerably less than the upper bound $1/k^2 = \frac{1}{4}$ provided by Chebyshev's inequality.
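As an illustrative check in R (a sketch, not code from the text), the exact probability and the Chebyshev bound of Example 1.10.1 can be compared using punif, the uniform cdf:

    # X uniform on (-sqrt(3), sqrt(3)), so mu = 0 and sigma = 1; take k = 3/2
    a <- -sqrt(3); b <- sqrt(3); k <- 3/2
    1 - (punif(k, a, b) - punif(-k, a, b))   # exact P(|X| >= 3/2), about 0.134
    1/k^2                                    # Chebyshev bound, 4/9 = 0.444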
In each of the instances in Example 1.10.1, the probability $P(|X - \mu| \geq k\sigma)$ and its upper bound $1/k^2$ differ considerably. This suggests that this inequality might
be made sharper. However, if we want an inequality that holds for every k > 0
and holds for all random variables having a finite variance, such an improvement is
impossible, as is shown by the following example.
Example 1.10.2. Let the random variable $X$ of the discrete type have probabilities $\frac{1}{8}, \frac{6}{8}, \frac{1}{8}$ at the points $x = -1, 0, 1$, respectively. Here $\mu = 0$ and $\sigma^2 = \frac{1}{4}$. If $k = 2$, then $1/k^2 = \frac{1}{4}$ and $P(|X - \mu| \geq k\sigma) = P(|X| \geq 1) = \frac{1}{4}$. That is, the probability $P(|X - \mu| \geq k\sigma)$ here attains the upper bound $1/k^2 = \frac{1}{4}$. Hence the inequality cannot be improved without further assumptions about the distribution of $X$.
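A similar R sketch (again not from the text) shows the bound being attained in Example 1.10.2:

    x <- c(-1, 0, 1); p <- c(1, 6, 1)/8      # pmf of Example 1.10.2
    mu <- sum(x * p)                         # 0
    sigma <- sqrt(sum((x - mu)^2 * p))       # 1/2
    k <- 2
    sum(p[abs(x - mu) >= k * sigma])         # exact probability, 0.25
    1/k^2                                    # Chebyshev bound, 0.25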
A convenient form of Chebyshev's Inequality is found by taking $k\sigma = \epsilon$ for $\epsilon > 0$. Then Equation (1.10.2) becomes
$$P(|X - \mu| \geq \epsilon) \leq \frac{\sigma^2}{\epsilon^2}, \quad \text{for all } \epsilon > 0. \qquad (1.10.3)$$
The second inequality of this section involves convex functions.
(b) If $r(x) = ce^{bx}$, where $c$ and $b$ are positive constants, show that $X$ has a Gompertz cdf given by
$$F(x) = \begin{cases} 1 - \exp\left\{ \dfrac{c}{b}\left(1 - e^{bx}\right) \right\} & 0 < x < \infty \\ 0 & \text{elsewhere.} \end{cases} \qquad (3.3.13)$$
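As a plausibility check of (3.3.13), the following R sketch implements the cdf and verifies numerically that its hazard rate equals $ce^{bx}$; the parameter values c0 = 0.5 and b0 = 1 are illustrative choices, not taken from the exercise:

    # Gompertz cdf from (3.3.13); c0 and b0 are illustrative values
    Fgomp <- function(x, c0, b0) ifelse(x > 0, 1 - exp((c0/b0) * (1 - exp(b0 * x))), 0)
    c0 <- 0.5; b0 <- 1; x <- 2; h <- 1e-6
    fx <- (Fgomp(x + h, c0, b0) - Fgomp(x - h, c0, b0)) / (2 * h)  # numerical pdf at x
    fx / (1 - Fgomp(x, c0, b0))                                    # hazard at x, about 3.69
    c0 * exp(b0 * x)                                               # c*e^(bx), about 3.69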
Consider the integral
$$I = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^2}{2} \right) dz. \qquad (3.4.1)$$
This integral exists because the integrand is a positive continuous function that is bounded by an integrable function; that is,
$$0 < \exp\left( -\frac{z^2}{2} \right) < \exp(-|z| + 1), \quad -\infty < z < \infty,$$
and
$$\int_{-\infty}^{\infty} \exp(-|z| + 1)\,dz = 2e.$$
To evaluate the integral $I$, we note that $I > 0$ and that $I^2$ may be written
$$I^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\left( -\frac{z^2 + w^2}{2} \right) dz\,dw.$$
This iterated integral can be evaluated by changing to polar coordinates. If we set
z = r cos θ and w = r sin θ, we have
$$I^2 = \frac{1}{2\pi} \int_{0}^{2\pi} \int_{0}^{\infty} e^{-r^2/2}\, r\, dr\, d\theta = \frac{1}{2\pi} \int_{0}^{2\pi} d\theta = 1.$$
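The value $I = 1$ can also be confirmed numerically; a one-line R check (an illustration, not from the text) using integrate is:

    integrate(function(z) exp(-z^2/2) / sqrt(2*pi), -Inf, Inf)$value   # about 1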
Because the integrand of display (3.4.1) is positive on R and integrates to 1 over
R, it is a pdf of a continuous random variable with support R. We denote this
random variable by Z. In summary, Z has the pdf
$$f(z) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^2}{2} \right), \quad -\infty < z < \infty. \qquad (3.4.2)$$
For t ∈ R, the mgf of Z can be derived by a completion of a square as follows:
$$\begin{aligned}
E[\exp\{tZ\}] &= \int_{-\infty}^{\infty} \exp\{tz\}\, \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}z^2 \right\} dz \\
&= \exp\left\{ \frac{1}{2}t^2 \right\} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}(z - t)^2 \right\} dz \\
&= \exp\left\{ \frac{1}{2}t^2 \right\} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}w^2 \right\} dw, \qquad (3.4.3)
\end{aligned}$$
where for the last integral we made the one-to-one change of variable w = z − t. By
the identity (3.4.2), the integral in expression (3.4.3) has value 1. Thus the mgf of
Z is
$$M_Z(t) = \exp\left\{ \frac{1}{2}t^2 \right\}, \quad \text{for } -\infty < t < \infty. \qquad (3.4.4)$$
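Equation (3.4.4) can be spot-checked numerically in R (an illustration only), here at $t = 1$:

    t0 <- 1
    integrate(function(z) exp(t0*z) * exp(-z^2/2) / sqrt(2*pi), -Inf, Inf)$value  # 1.6487
    exp(t0^2/2)                                                                   # 1.6487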
The first two derivatives of $M_Z(t)$ are easily shown to be
$$M_Z'(t) = t \exp\left\{ \frac{1}{2}t^2 \right\}$$
$$M_Z''(t) = \exp\left\{ \frac{1}{2}t^2 \right\} + t^2 \exp\left\{ \frac{1}{2}t^2 \right\}.$$
Upon evaluating these derivatives at $t = 0$, the mean and variance of $Z$ are $E(Z) = M_Z'(0) = 0$ and $\text{Var}(Z) = M_Z''(0) - [M_Z'(0)]^2 = 1$. Next, define the continuous random variable $X$ by
$$X = bZ + a,$$
for b > 0. This is a one-to-one transformation. To derive the pdf of X, note that
the inverse of the transformation and the Jacobian are $z = b^{-1}(x - a)$ and $J = b^{-1}$,
respectively. Because b > 0, it follows from (3.4.2) that the pdf of X is
$$f_X(x) = \frac{1}{\sqrt{2\pi}\, b} \exp\left\{ -\frac{1}{2}\left( \frac{x - a}{b} \right)^2 \right\}, \quad -\infty < x < \infty.$$
Writing $\mu = a$ and $\sigma = b$, the parameters $\mu$ and $\sigma^2$ are the mean and variance of $X$, respectively, because $E(X) = a + bE(Z) = a$ and $\text{Var}(X) = b^2\,\text{Var}(Z) = b^2$. We often write that $X$ has a $N(\mu, \sigma^2)$ distribution.
In this notation, the random variable Z with pdf (3.4.2) has a N (0, 1) distribution.
We call Z a standard normal random variable.
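As a small consistency check (with illustrative values $\mu = 2$, $\sigma = 3$, and $x = 2.5$ that are not from the text), the pdf derived above agrees with R's built-in normal density dnorm:

    mu <- 2; sigma <- 3; x <- 2.5
    (1 / (sqrt(2*pi) * sigma)) * exp(-0.5 * ((x - mu) / sigma)^2)   # about 0.1311
    dnorm(x, mu, sigma)                                             # same value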
For the mgf of $X$, use the relationship $X = \sigma Z + \mu$ and the mgf for $Z$, (3.4.4), to obtain
$$E[\exp\{tX\}] = \exp\{\mu t\}\, E[\exp\{t\sigma Z\}] = \exp\left\{ \mu t + \frac{1}{2}\sigma^2 t^2 \right\}, \quad \text{for } -\infty < t < \infty.$$
Figure 3.4.1: Graph of the $N(\mu, \sigma^2)$ pdf $f(x)$, with the horizontal axis marked at $\mu \pm \sigma$, $\mu \pm 2\sigma$, and $\mu \pm 3\sigma$.
To compute probabilities for $X$, its pdf must be integrated. From calculus we know that the integrand does not have an antiderivative expressible in closed form; hence,
the integration must be carried out by numerical integration procedures. The R
software uses such a procedure for its function pnorm. If X has a N (μ, σ 2 ) distribu-
tion, then the R call pnorm(x, μ, σ) computes P (X ≤ x), while q = qnorm(p, μ, σ)
gives the pth quantile of X; i.e., q solves the equation P (X ≤ q) = p. We illustrate
this computation in the next example.
Example 3.4.1. Suppose the height in inches of an adult male is normally dis-
tributed with mean μ = 70 inches and standard deviation σ = 4 inches. For a
graph of the pdf of X, use Figure 3.4.1 with μ replaced by 70 and σ by 4. Suppose
we want to compute the probability that a man exceeds six feet (72 inches) in
height. Locate 72 on the figure. The desired probability is the area under the curve
over the interval (72, ∞) which is computed in R by 1-pnorm(72,70,4) = 0.3085;
hence, 31% of males exceed six feet in height. The 95th percentile in height is
qnorm(0.95,70,4) = 76.6 inches. What percentage of males have heights within
one standard deviation of the mean? Answer: pnorm(74,70,4) - pnorm(66,70,4)
= 0.6827.
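The R calls quoted in this example can be collected into a short session:

    1 - pnorm(72, 70, 4)                   # P(X > 72)       = 0.3085375
    qnorm(0.95, 70, 4)                     # 95th percentile = 76.57941
    pnorm(74, 70, 4) - pnorm(66, 70, 4)    # P(66 < X < 74)  = 0.6826895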
Before the age of modern computing, tables of probabilities for normal distributions were formulated. Due to the fact (3.4.8), only tables for the standard normal distribution are required. Let Z have the standard normal distribution. A graph of
its pdf is displayed in Figure 3.4.2. Common notation for the cdf of Z is
$$P(Z \leq z) = \Phi(z) \stackrel{\mathrm{dfn}}{=} \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^2/2}\, dt, \quad -\infty < z < \infty. \qquad (3.4.9)$$
Table II of Appendix D displays a table for Φ(z) for specified values of z > 0. To
compute Φ(−z), where z > 0, use the identity
Φ(−z) = 1 − Φ(z). (3.4.10)
This identity follows because the pdf of Z is symmetric about 0. It is apparent in
Figure 3.4.2 and the reader is asked to show it in Exercise 3.4.1.
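The identity (3.4.10) is easy to illustrate in R (a check that is not part of the text), for instance at z = 1.24:

    pnorm(-1.24)       # 0.1074877
    1 - pnorm(1.24)    # 0.1074877, the same value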
Figure 3.4.2: The standard normal density: $p = \Phi(z_p)$ is the area under the curve to the left of $z_p$.
As an illustration of the use of Table II, suppose in Example 3.4.1 that we want
to determine the probability that the height of an adult male is between 67 and 71
inches. This is calculated as
$$\begin{aligned}
P(67 < X < 71) &= P(X < 71) - P(X < 67) \\
&= P\left( \frac{X - 70}{4} < \frac{71 - 70}{4} \right) - P\left( \frac{X - 70}{4} < \frac{67 - 70}{4} \right) \\
&= P(Z < 0.25) - P(Z < -0.75) = \Phi(0.25) - 1 + \Phi(0.75) \\
&= 0.5987 - 1 + 0.7734 = 0.3721 \qquad (3.4.11) \\
&= \texttt{pnorm(71, 70, 4)} - \texttt{pnorm(67, 70, 4)} = 0.372079. \qquad (3.4.12)
\end{aligned}$$
Expression (3.4.11) is the calculation by using Table II, while the last line is the cal-
culation by using the R function pnorm. More examples are offered in the exercises.
As a final note on Table II, it is generated by the R function:
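The book's generating function itself is not reproduced here. As a hypothetical sketch only, a few lines of R along the following lines could produce a comparable table of Φ(z); the grid z = 0.00, 0.01, ..., 3.59 and the row/column layout are assumptions, not taken from the text:

    # hypothetical sketch: cdf table with rows 0.0-3.5 and columns 0.00-0.09
    z <- seq(0, 3.59, by = 0.01)
    tab <- matrix(round(pnorm(z), 4), ncol = 10, byrow = TRUE)
    rownames(tab) <- format(seq(0, 3.5, by = 0.1), nsmall = 1)
    colnames(tab) <- format(seq(0, 0.09, by = 0.01), nsmall = 2)
    tab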