
University of Illinois Department of Economics

Fall 2008 Roger Koenker

Economics 508
Lecture 22
Duration Models

There is considerable interest, especially among labor economists, in models of duration. These models originated in biomedical applications, insurance, and quality control, but are now being applied broadly to unemployment, retirement, and an array of other issues.

Survival Functions and Hazard Rates


Often duration models are described in terms of survival models of the sort that might be appropriate for biomedical clinical trials, in which we are interested in evaluating the effectiveness of a medical treatment and the response variable is the length of time that the patient lives following the treatment. But there are a wide variety of other applications. I like to think of this in terms of predicting time of birth: ex ante we have some positive random variable, T, with density f(t) and distribution function F(t). One can then consider the conditional density of the birth date given that a birth hasn't occurred up to time t. This is rather like the computations we considered in the previous lecture. There is a considerable amount of specialized terminology which we will need to introduce. The survival function is simply
    S(t) = 1 − F(t) = P[T > t]

and the hazard function is

    λ(t) = f(t)/(1 − F(t)).

Note that λ(t)dt = P[t < T < t + dt | T > t] ≈ P[born in hour t + 1 | not born by hour t]. Clearly,

    ∫_0^t λ(u) du = [−log(1 − F(u))]_0^t = −log(1 − F(t)) = −log S(t),

so∗

    S(t) = exp{−∫_0^t λ(u) du}.

∗ Such so-called product integrals have a rich theory, which has been considered by Gill & Johansen (Annals of Statistics, 1990), but we will not concern ourselves with this here.
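
As a quick numerical sanity check of this identity, one might compute both sides in R for an arbitrary illustrative choice of distribution (here a Weibull with shape 1.5); the names and values below are purely illustrative.

    # Check S(t) = exp(-integral_0^t lambda(u) du) numerically for a Weibull.
    a      <- 1.5
    lambda <- function(t) dweibull(t, shape = a) / (1 - pweibull(t, shape = a))  # f/(1 - F)
    t0 <- 2
    S.direct     <- 1 - pweibull(t0, shape = a)
    S.via.hazard <- exp(-integrate(lambda, 0, t0)$value)
    c(S.direct, S.via.hazard)    # the two values agree up to integration error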

Digression on the Mills’ Ratio & Hazard Rates.

Suppose X is a positive r.v. representing the lifetime of an individual, with density f and df F, so that P(X > x) = 1 − F(x). Given survival until x, what is the probability of death before x + t?

    P(x < X ≤ x + t | X > x) = P(x < X ≤ x + t)/P(X > x) = (F(x + t) − F(x))/(1 − F(x))

To get a death rate (deaths per unit time) between x and x + t, compute

    lim_{t→0} t^{−1}(F(x + t) − F(x))/(1 − F(x)) = f(x)/(1 − F(x)),

which is called the hazard rate. The reciprocal of the hazard rate is sometimes called the Mills ratio.

A common problem in data of this sort is that we observe T for only some observations, while for others we observe only that T is greater than some censoring time tc; e.g., in a clinical trial, individuals may still be alive at the end of the experimental period. So we see

    Yi = { Ti   if Ti < tc
           tc   if Ti ≥ tc
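
As a concrete, purely illustrative sketch of this observation scheme, one might simulate censored durations in R along the following lines (all names and parameter values here are hypothetical):

    # Simulate the censoring scheme just described: latent exponential durations,
    # anything beyond a common censoring time tc is only known to exceed tc.
    set.seed(1)
    n  <- 200
    T  <- rexp(n, rate = 0.3)        # latent durations T_i
    tc <- 5                          # common censoring time
    Y     <- pmin(T, tc)             # observed duration Y_i
    delta <- as.numeric(T < tc)      # delta_i = 1 if uncensored, 0 if censored
    table(delta)                     # how many spells were censored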

Maximum Likelihood Estimation of Parametric Models.

The likelihood for a fully parametric model is given by,

    L(θ) = ∏_{i=1}^n f(yi, θ)^{δi} S(yi, θ)^{1−δi}

where δi denotes the censoring indicator,

    δi = { 1   if Ti < tc
           0   if Ti ≥ tc

so this is somewhat like the tobit model of the last lecture. Of course we
now need to specify the parametric model for f and S.

Menu of Choices for the Parametric Specification

1. Exponential – this is the simplest:

   λ(t) ≡ λ > 0  ⇒  S(t) = e^{−λt}
   f(t) = λe^{−λt}
   E(T) = λ^{−1},  V(T) = λ^{−2}
   median = −log(1/2)/λ = log(2)/λ ≈ 0.69/λ

2. Gamma – a generalization of the exponential:

   f(t) = (λ^α/Γ(α)) t^{α−1} e^{−λt},   α > 0, λ > 0
   E(T) = α/λ
   V(T) = α/λ^2
   S(t) is messy (it involves the incomplete gamma function)

3. Weibull – another generalization of the exponential model:

   λ(t) = αλ(λt)^{α−1},   α > 0, λ > 0
   S(t) = e^{−(λt)^α}
   f(t) = αλ(λt)^{α−1} e^{−(λt)^α}

   Note that the hazard is increasing for α > 1 and decreasing for α < 1 (and constant, i.e. exponential, for α = 1). This model is probably the most common parametric one.

4. Rayleigh: λ(t) = λ0 + λ1 t

5. Uniform U[0, 1]: λ(t) = 1/(1 − t)

Clearly there is some a priori ambiguity about which probability model should be used. This leads naturally to the next topic.
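
Before turning to that, here is a hedged sketch of how a censored parametric likelihood of the form above might be maximized in practice. The survreg() function in the R survival package fits several of the models on this menu (exponential, Weibull, and others) by maximum likelihood; the data and parameter values below are simulated and purely illustrative.

    # Maximum likelihood for a censored exponential model via survreg().
    library(survival)
    set.seed(1)
    T  <- rexp(200, rate = 0.3)              # latent exponential durations
    tc <- 5                                  # common censoring time
    Y  <- pmin(T, tc); delta <- as.numeric(T < tc)
    fit <- survreg(Surv(Y, delta) ~ 1, dist = "exponential")
    # survreg() uses a log-linear (AFT) parameterization, so the implied
    # hazard rate is exp(-intercept); it should be close to the true 0.3.
    exp(-coef(fit))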

Nonparametric Methods – The Kaplan-Meier Estimator


Suppose we have a reasonably homogeneous sample, like our WECO employees, and we want to estimate a “survival” distribution for them – how long they stay on the job. We can chop the time axis into arbitrary intervals and write,

    S(τk) = P[T > τk]
          = P[T > τ1] P[T > τ2 | T > τ1] · · · P[T > τk | T > τk−1]
          = p1 · p2 · · · pk

As an estimate of pi we could use

    p̂i = 1 − di/ni = 1 − (# quit in period i)/(# left in period i)

Then the survival function can be estimated as,

    Ŝ(τk) = ∏_{j=1}^k p̂j

The Kaplan-Meier estimator of S(t) is like the previous method except that we replace the fixed intervals with random intervals determined by the observations themselves. As above, we observe pairs (Y1, δ1), . . . , (Yn, δn), where Yi is the observed duration for the ith subject and

    δi = { 1   uncensored
           0   censored

Let (Y(i), δ(i)) denote the ordered observations (ordered on the Y's!). Then set, as above,

    ni = # alive at time Y(i) − ε
    di = # died at time Y(i)
    pi = P[surviving through period Ii | alive at beginning of Ii]
    qi = 1 − pi

Then q̂i = δ(i)/ni, so


    p̂i = 1 − q̂i = { 1 − 1/ni   if δ(i) = 1
                    1          if δ(i) = 0

Then (drum roll!) the product-limit Kaplan-Meier estimate is,

    Ŝ(t) = ∏_{y(i) ≤ t} p̂i = ∏_{y(i) ≤ t} (1 − 1/ni)^{δ(i)}
         = ∏_{y(i) ≤ t} (1 − 1/(n − i + 1))^{δ(i)} = ∏_{y(i) ≤ t} ((n − i)/(n − i + 1))^{δ(i)},

where the last two expressions use the fact that, with no ties, ni = n − i + 1.

This estimator has several nice properties:

(i) It is consistent.
(ii) It is asymptotically normal (this involves a weak convergence to Brownian motion argument).


Figure 1: A Simple Kaplan-Meier Plot for 5 Observations: The figure illustrates a very simple version of the Kaplan-Meier estimator of the survival function for 5 observations, one of which is censored and the others of which are uncensored. The 5 observed times are represented on the horizontal axis as plotted points with vertical coordinate zero. A useful exercise is to compute the vertical ordinates of Ŝ(t) given in the figure. Note that there is no drop in the estimated function at y2 since this observation is censored. The dotted lines denote a confidence band for S(t) which, since there are so few observations, is essentially uninformative.

(iii) It is a generalized MLE à la Kiefer-Wolfowitz.
(iv) Without censoring it is the empirical df, i.e. Ŝ(t) = 1 − F̂(t), where F̂(t) = n^{−1} Σ I(Ti ≤ t).

It is particularly good for one- and two-sample problems. The p̂i's are estimates of conditional survival probabilities; the survival function requires the corresponding unconditional probabilities, and taking the product over intervals accomplishes this.
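
To make the product-limit formula concrete, here is a hedged sketch in R (simulated data, no ties among the uncensored times) comparing the hand-computed estimate with survfit() from the survival package:

    # Hand-rolled product-limit estimate versus survfit().
    library(survival)
    set.seed(1)
    T  <- rexp(200, rate = 0.3); tc <- 5
    Y  <- pmin(T, tc); delta <- as.numeric(T < tc)

    ord    <- order(Y)
    y      <- Y[ord]; d <- delta[ord]      # ordered observations (y_(i), delta_(i))
    n.risk <- length(y):1                  # n_i = n - i + 1: at risk just before y_(i)
    p.hat  <- ifelse(d == 1, 1 - 1/n.risk, 1)
    S.hat  <- cumprod(p.hat)               # product-limit estimate at each y_(i)

    km <- survfit(Surv(Y, delta) ~ 1)
    # With distinct uncensored times the two estimates should agree exactly:
    all.equal(S.hat[d == 1], km$surv[km$n.event > 0])
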
The Kaplan-Meier estimator is particularly good in situations in which we have a small number of groups and we would like to ask: do they have similar survival distributions? An example of this sort of question is addressed in the next figure. Using data from Meyer (1990) we consider the survival distributions estimated by the Kaplan-Meier technique for individuals who have more than $100 per week in unemployment benefits versus those with less than $100 per week in benefits.

As the figure indicates, those with higher benefits appear to stay unemployed longer. The median unemployment spell for the high benefits group is roughly 2 weeks longer than for the low benefits group. Note, however, that the difference is unclear in the right tail of the distribution; the higher benefit group appears to have a somewhat lower probability of a spell greater than 35 weeks. This plot was produced by the splus command,

    plot(surv.fit(dur, cens, strata = exp(ben) > 100, type = 'kaplan-meier'))

where dur is the observed durations, cens is the censoring indicator, and ben is the log of weekly benefits.
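
In current versions of R the analogous plot might be produced with survfit() from the survival package; the sketch below uses purely simulated stand-ins for dur, cens, and ben, since Meyer's data are not reproduced here.

    # A modern-R analogue of the splus command above, on hypothetical data.
    library(survival)
    set.seed(2)
    n    <- 300
    ben  <- log(runif(n, 50, 200))                           # log weekly benefits
    dur  <- rexp(n, rate = 1 / (5 + 10 * (exp(ben) > 100)))  # longer spells if benefits high
    cens <- as.numeric(dur < 26); dur <- pmin(dur, 26)       # censor spells at 26 weeks
    fit  <- survfit(Surv(dur, cens) ~ I(exp(ben) > 100))
    plot(fit, lty = 1:2, xlab = "weeks unemployed", ylab = "survival")
    legend("topright", c("low benefits", "high benefits"), lty = 1:2)
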
The difficulty with this approach in most econometric applications is that we can't usually rely on a simple categorization of the sample observations into a small number of groups; instead we have covariates which we would like to use in something close to the usual linear regression fashion. This leads to an attempt to make some compromise between the nonparametric and parametric approaches.

Semi-Parametric Models – Cox’s Proportional Hazard Model

This is a common econometric approach. Let {Ti} and {Ci} be independent r.v.'s, where Ci is the censoring time associated with the survival time Ti. We observe {(Yi, δi)} where

    Yi = min{Ti, Ci}
    δi = I(Ti ≤ Ci)

and we also observe a vector of covariates xi for each “individual.” Of course “individuals” might be firms whose bankruptcy decisions we are modeling, or some other unit of economic analysis.


Figure 2: Two Kaplan-Meier Survival Curves for the Length of Unemployment Spells: The figure plots Kaplan-Meier estimates of the survival function for the duration of unemployment for two groups of individuals: a high UI-benefits group (those with weekly benefits of more than 100 dollars) and a low-benefits group (those with weekly benefits of less than 100 dollars). The data are taken from Meyer (1990).

Recall,

    λ(t|x) = f(t|x)/(1 − F(t|x)).

The crucial assumption of the Cox model is,

    λ(t|x) = e^{x′β} λ0(t).

Note that the form h(x) = e^{x′β} is far less essential than the multiplicative separability of the function in x and t. We now introduce a rather high-brow definition which is useful in interpreting the essential role of the Cox assumption.

Definition: A family of df's F constitutes a family of Lehmann alternatives if there exists F0 ∈ F such that for any F ∈ F, 1 − F(t) = (1 − F0(t))^γ for some γ > 0 and all t, i.e. S(t) = S0(t)^γ.

Clearly the proportional hazards model implies a family of Lehmann alternatives, since

    S(t; x) = exp{−∫_0^t λ(u; x) du}
            = exp{−e^{x′β} ∫_0^t λ0(u) du}        (recall that e^{ax} = (e^x)^a)
            = S0(t)^γ,   where γ = e^{x′β}.

Special case: in the two-sample problem, x is either 0 or 1, so S1(t) = S0(t)^γ for some constant γ = e^β. This is obviously quite restrictive. In particular it prohibits covariate effects that are favorable for a while and then unfavorable, or vice versa.

Estimation (Sketchy)
For each uncensored time y(i), let ℜ(i) denote the set of individuals at risk at time y(i) − ε. Then

    P[a death in [y(i), y(i) + ∆y) | ℜ(i)] ≈ Σ_{j∈ℜ(i)} e^{x′_j β} λ0(y(i)) ∆y,

where the right-hand side is the (approximate) aggregate hazard of the risk set, so

    P{death of (i) at time y(i) | a death at time y(i)} = e^{x′_{(i)} β} / Σ_{j∈ℜ(i)} e^{x′_j β}.

Note that the λ0 effect cancels in numerator and denominator, and this gives the partial likelihood

    L(β) = ∏_i [ e^{x′_{(i)} β} / Σ_{j∈ℜ(i)} e^{x′_{(j)} β} ].

Cox's proposal was to estimate β by maximizing this partial likelihood. In what sense is this likelihood partial? This is really a question for Economics 476, but here I will just say that we have ignored the λ0 contribution to the full likelihood by the trick used above, and we have to hope that this isn't really very informative about the parameter β. This turns out to be more or less true; of course, we still might worry about the loss of efficiency entailed, and also about the plausibility of the Cox model assumption, which we would like to test. This too will be left for 476. A partial explanation of why the partial likelihood doesn't sacrifice much information is the following. It conditions on the set of instants at which “failures” occur; since λ0(t) is assumed arbitrary, no information about β is contained in those instants. Why? This mystery is revealed in recent martingale reformulations of the Cox model.
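
As a hedged illustration, the partial likelihood can be maximized with coxph() in the R survival package; the data below are simulated with a single covariate and true β = 1, purely for illustration.

    # Cox partial-likelihood estimation on simulated proportional-hazards data.
    library(survival)
    set.seed(3)
    n <- 500
    x <- rnorm(n)
    T <- rexp(n, rate = exp(x))        # hazard exp(x * 1): baseline hazard 1, beta = 1
    C <- rexp(n, rate = 0.2)           # independent censoring times
    Y <- pmin(T, C); delta <- as.numeric(T <= C)
    fit <- coxph(Surv(Y, delta) ~ x)
    coef(fit)                          # should be near the true beta = 1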

Estimating the Baseline Hazard

It remains to discuss how to estimate λ0, or the cumulative baseline hazard

    Λ0(t) = ∫_0^t λ0(u) du,

in the Cox model. Breslow assumes, à la Cox, that λ0(t) is constant between uncensored observations,

    λ̂0(t) = 1 / [ (y_{u(i)} − y_{u(i−1)}) Σ_{j∈ℜ_{u(i)}} e^{β̂′x_j} ]

for t ∈ (y_{u(i−1)}, y_{u(i)}), where u(i) is the index of the ith uncensored observation. Then,

    Ŝ0(t) = ∏_{i: y(i) < t} ( 1 − δ(i) / Σ_{j∈ℜ(i)} e^{β̂′x_j} ).
{i:y(i) <t}

Note here
Ŝ0 (t) 6= e−Λ0 (t)

9
P
But it has the virtue that we get Kaplan Meier when β = 0! Since eβxj
in this case is just the number of observations in the risk set.
Tsiatis uses instead

    Ŝ0(t) = e^{−Λ̂0(t)},    where   Λ̂0(t) = Σ_{i: y(i) ≤ t} δ(i) / Σ_{j∈ℜ(i)} e^{β̂′x_j}.

This doesn't simplify like the Breslow estimator. The relationship between the Tsiatis and Breslow estimates is seen simply by noting that −log(1 − x) ≈ x for small x.
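
A hedged sketch of the corresponding computation in R: basehaz() in the survival package returns an estimate of the cumulative baseline hazard Λ0(t) from a fitted coxph object (with untied data it is essentially the Σ δ(i)/Σ_{j∈ℜ(i)} e^{β̂′x_j} estimate above); the simulation design reuses the previous snippet.

    # Estimated cumulative baseline hazard from a Cox fit.
    library(survival)
    set.seed(3)
    x <- rnorm(500)
    T <- rexp(500, rate = exp(x)); C <- rexp(500, rate = 0.2)
    Y <- pmin(T, C); delta <- as.numeric(T <= C)
    fit <- coxph(Surv(Y, delta) ~ x)
    H0  <- basehaz(fit, centered = FALSE)   # data frame with columns hazard (cumulative) and time
    plot(H0$time, H0$hazard, type = "s",
         xlab = "t", ylab = "estimated cumulative baseline hazard")
    abline(0, 1, lty = 2)                   # truth is Lambda_0(t) = t in this simulation
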
Duration Models and Binary Response

This section is based mainly on Doksum and Gasko (1990, International Statistical Review). We can think of the usual binary response model as a survival model in which we fix the time of survival and ask: what is the probability of surviving up to time t? For example, in the problem set we can ask what is the probability of not quitting in the first 6 months. By then varying t we get a nice 1-1 correspondence between the two classes of models. We can specify the general failure-time distribution,

    F(t|x) = P(T ≤ t|x),

and fix t, so we are simply modeling a survival probability, say S(t|x) = 1 − F(t|x), which depends on covariates. We will consider two leading examples to illustrate this: the logit model and the Cox proportional hazard model.

Logit

In the logit model, working with the failure probability F(t|x) = 1 − S(t|x), we have

    logit(F(t|x)) = log(F(t|x)/(1 − F(t|x))) = x′β,

so that F(t|x) = (1 + e^{−x′β})^{−1}, the logistic df evaluated at x′β. In survival analysis this would correspond to the model

    logit(F(t|x)) = x′β + log Γ(t),

where Γ(t) is a baseline odds function which satisfies the restrictions Γ(0) = 0 and Γ(∞) = ∞. For fixed t we can simply absorb log Γ(t) into the intercept of x′β. This is the proportional-odds model. Let

    Γ(t|x) = F(t|x)/(1 − F(t|x)) = Γ(t) exp{x′β},

and by analogy with other logit-type models we can characterize the model as possessing the property that the ratio of the odds for two covariate values does not depend upon t, i.e.

    Γ(t|x1)/Γ(t|x2) = exp(x′_1 β)/exp(x′_2 β).

Now choosing some explicit functional form for Γ(t), for example log Γ(t) = γ log(t), i.e. Γ(t) = t^γ, gives the survival model introduced by Bennett (1983).
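
A hedged sketch of the fixed-t view in R: simulate from a log-logistic (proportional-odds) specification, dichotomize at t0, and fit an ordinary logit; all parameter values are purely illustrative.

    # Fixed-t binary response under a proportional-odds duration model.
    set.seed(4)
    n  <- 2000
    x  <- rnorm(n)
    T  <- exp(-0.5 * x + rlogis(n))     # log T = -0.5 x + logistic error
    t0 <- 2
    fail <- as.numeric(T <= t0)         # e.g., quit within t0
    fit  <- glm(fail ~ x, family = binomial(link = "logit"))
    coef(fit)   # slope near 0.5, intercept near log(t0); here Gamma(t) = t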

Proportional Hazard Model

One can, of course, model not the df directly, as above, but some other aspect of it which contains equivalent information, like the hazard function,

    λ(t|x) = f(t|x)/(1 − F(t|x)),

or the cumulative hazard,

    Λ(t|x) = −log(1 − F(t|x)).

In the Cox model we take

    λ(t|x) = λ(t) e^{x′β},

so

    Λ(t|x) = Λ(t) e^{x′β},

which is equivalent to

    log(−log(1 − F(t|x))) = x′β + log Λ(t).

This looks rather similar to the logit form,

    logit(F(t|x)) = x′β + log Γ(t),

but it is obviously different. This form of the proportional hazard model could also be written as,

    F(t|x) = Ψ(x′β + log Λ(t)),

where Ψ(z) = 1 − e^{−e^z} is the Type I extreme value distribution. For fixed t we can again absorb the log Λ(t) term into the intercept of the x′β contribution and we have the formulation,

    log(−log(1 − θ(x))) = x′β,

where θ(x) denotes the response probability F(t|x) at the fixed t. This is sometimes called the complementary log-log model in the binary response literature. So this would provide a binary response model which would be consistent with the Cox proportional hazard specification of the survival version of the model. In general, this strategy provides a useful way to go back and forth between binary response and full-blown survival models, but I will leave a full discussion of this to 478.
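
A hedged sketch of this correspondence in R: simulate from a proportional-hazards model, dichotomize at a fixed t0, and fit a binary response model with the complementary log-log link; all parameter values are purely illustrative.

    # Fixed-t complementary log-log regression under a Cox-type model.
    set.seed(5)
    n  <- 2000
    x  <- rnorm(n)
    T  <- rexp(n, rate = exp(0.7 * x))   # baseline hazard 1, beta = 0.7
    t0 <- 1
    fail <- as.numeric(T <= t0)          # failed (e.g., quit) by time t0
    fit  <- glm(fail ~ x, family = binomial(link = "cloglog"))
    coef(fit)   # slope near 0.7; intercept near log Lambda_0(t0) = log(1) = 0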

Accelerated Failure Time Model

A third alternative, which also plays an important role in the analysis of failure time data, is the accelerated failure time (AFT) model, where we have

    log(T) = x′β + u

with the distribution of u unspecified, but typically assumed to be iid across observations. A special case of this model is the Cox model with Weibull baseline hazard, but in general we have

    P(T > t) = P(e^u > t e^{−x′β}) = 1 − F(t e^{−x′β}),

where F denotes the df of e^u, and therefore in this model

    λ(t|x) = λ0(t e^{−x′β}) e^{−x′β},

where λ0 denotes the hazard function corresponding to F. In effect the covariates simply rescale time in this model. An interesting extension of this model is to write,

    Q_{h(T)}(τ|x) = x′β(τ)

for some monotone transformation h (e.g., the log), and consider a family of quantile regression models. This allows the covariates to act rather flexibly with respect to the shape of the survival distribution.
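
As a hedged sketch, AFT models with parametric error distributions can be fit with survreg() in the R survival package, and censored quantile-regression versions of the AFT model (in the spirit of Koenker and Bilias, 2001) with crq() in the quantreg package; the example below is simulated and purely illustrative.

    # Weibull/exponential AFT fit on simulated data with true beta = 0.5.
    library(survival)
    set.seed(6)
    n <- 500
    x <- rnorm(n)
    T <- exp(0.5 * x) * rexp(n)          # log T = 0.5 x + log(standard exponential)
    C <- rexp(n, rate = 0.1)             # independent censoring
    Y <- pmin(T, C); delta <- as.numeric(T <= C)
    fit <- survreg(Surv(Y, delta) ~ x, dist = "weibull")
    coef(fit)["x"]                       # should be near 0.5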

References

Cox, D.R. (1972). Regression models and life tables, JRSS(B), 34, 187-200.

Kalbfleisch, J.D. and R.L. Prentice (1980). The Statistical Analysis of Failure Time Data, Wiley.

Koenker, R. and Y. Bilias (2001). Quantile regression for duration data: A reappraisal of the Pennsylvania reemployment bonus experiments, Empirical Economics, 26, 199-220.

Koenker, R. and O. Geling (2001). Reappraising Medfly Longevity, JASA, 96, 458-468.

Lancaster, T. (1990). The Econometric Analysis of Transition Data, Cambridge U. Press.

Meyer, B. (1990). Unemployment insurance and unemployment spells, Econometrica, 58, 757-782.

Miller, R.G. (1981). Survival Analysis, Wiley.
