Taxes, Time, and Support for Security
AMY K. DONAHUE, MARK D. ROBBINS, and BILL SIMONSEN
New technologies have been developed in response to terrorism. These present
problems for local officials: implementing these technologies will be expensive, and no
markets exist that can be used to gauge demand. We apply contingent
valuation methodologies to determine support for additional taxes to pay for
new terrorism-related technologies and services. We present findings from a
national survey about people’s attitudes toward terrorism prevention and
response. We find that respondents generally support new services and
technologies and local tax increases to pay for them. We also find that
respondents are willing to pay more if programs have everyday uses that would
enhance public safety, but are less supportive as inconveniences increase.
INTRODUCTION
The attacks of September 11, 2001, prompted United States governments at all levels to
initiate policies and programs designed to make citizens safer from terrorism. At the
federal level, the Department of Homeland Security (DHS) has given substantial attention to the development of specialized services and technologies that can be used to
help prevent, detect, prepare for, and respond to terrorist attacks. These new services and
technologies will impose costs, in terms of taxes, reduced spending in other areas, and
potential inconvenience. As these services and technologies become available for deployment in communities, DHS and local governments alike would like to understand
whether citizens are inclined to support, and ultimately pay for, them.
In this article, we report the results of a study funded by DHS’s Science and Technology Directorate that examines citizen behaviors, attitudes, and preferences with regard to preparedness in general, and services and technologies related to terrorism in
Amy K. Donahue is Department Head and Associate Professor of Public Administration, Department of Public Policy, University of Connecticut, 1800 Asylum Avenue, West Hartford, CT 06011-2697. She can be reached at [email protected]. Mark D. Robbins is Associate Professor of Public Administration, Department of Public Policy, University of Connecticut, 1800 Asylum Avenue, West Hartford, CT 06011-2697. He can be reached at [email protected]. Bill Simonsen is Professor of Public Administration, Department of Public Policy, University of Connecticut, 1800 Asylum Avenue, West Hartford, CT 06011-2697. He can be reached at [email protected].
particular. We address the question: How much do citizens support new services and
technologies designed to protect them from terrorist attacks? Specifically, we investigate
whether citizens say they support these programs, and whether they are also willing to
pay for them. We also consider whether people’s experience with terrorism matters, by
comparing residents of cities that were the targets of recent terrorist attacks to citizens in
general. Answering this question poses reliability and validity challenges that require
careful methodological attention to resolve. A substantial portion of this article is dedicated to contingent valuation and its adaptation for use in gauging demand for new
public services.
To explore citizens’ views of terrorism preparedness, we queried a national sample of
respondents about their attitudes and beliefs about services and technologies related to
terrorism prevention and response. We also surveyed people in New York City and
Washington, D.C. We use contingent valuation techniques to discern their willingness to
pay for particular services. We find evidence that people are generally supportive of new
services and technologies designed to keep them safer from terrorist attacks. In most
cases, a majority of residents will support local government tax increases to pay for
greater levels of prevention and preparedness. We also find that they would be willing to
pay even more if such programs also had everyday uses that would enhance existing
public safety services, and that they might be less supportive as inconveniences posed by
the technologies increase.
Our paper is organized as follows. We first discuss approaches to discerning the nature
of citizens’ willingness to pay for public services as raised in the literature. Then we
describe our empirical investigation and present our findings. Finally, we consider the
implications of our analysis for government programs related to terrorism and identify
future avenues for research.
ASSESSING SUPPORT FOR PUBLIC SERVICES
One common approach to studying citizen demand for public services employs expenditure determinants models where local expenditures are regressed on income, price, and
other variables.1 Such studies explain variations in public services and budgets as functions of the socioeconomic features of the population, which serve as proxies for direct
1. Robert P. Inman, ‘‘Testing the Political Economy’s ‘As If’ Proposition: Is the Median Voter Really
Decisive?,’’ Public Choice 33, no. 4 (1978): 45–65; Theodore C. Bergstrom, Daniel L. Rubinfeld, and Perry
Shapiro, ‘‘Micro-Based Estimates of Demand Functions for Local School Expenditures,’’ Econometrica 50
(1982): 1183–1205; Helen Ladd and John Yinger, America’s Ailing Cities: Fiscal Health and the Design of
Urban Policy (Baltimore: Johns Hopkins University Press, 1991); David Cutler, Douglas Elmendorf, and
Richard Zeckhauser, Demographic Characteristics and the Public Bundle (Mimeo, Cambridge, MA: Harvard University, 1992); William Simonsen, ‘‘Changes in Federal Aid and City Finances: A Case Study of
Oregon Cities,’’ Publius: The Journal of Federalism 24, no. 2 (1994a): 37–51; William Simonsen, ‘‘Aging
Population and City Spending,’’ Journal of Urban Affairs 16, no. 2 (1994b): 125–140.
measures of citizen desires. While this work has shown associations between population
demographics and other variables and aggregate spending, it does not focus on individuals and their preference structures for public goods.
Voting behavior studies, on the other hand, examine referenda and initiatives to
understand citizens' preferences for expenditure levels directly. In particular, a few studies
have looked at citizen attributes and their relationship to support for local ballot measures.2 Voting studies have the advantage of observing how citizens actually vote, as
opposed to aggregate studies that assume citizen preferences are transformed through
political and institutional processes into actual government spending. Although voting
studies can reveal the determinants of voting behavior associated with a specific issue
(e.g., a bond referendum), they are less helpful for understanding the nature of citizen
support for the variety of activities reflected in a local government budget. This view is
cogently summed up by Brubaker: ‘‘Most citizens have only limited opportunities to
express in crude fashion their budgetary preferences by voting relatively infrequently for
representatives whose positions, obscured in the fog of political rhetoric, come bundled
with nonbudgetary issues in large cryptic packages.’’3
Increasingly, local governments use citizen surveys to assess the level of satisfaction
with their services, and to identify potential for improvements. Government-sponsored
surveys typically ask citizens about their satisfaction with, and support for, public services. Despite the common use of general satisfaction surveys, few obtain representative
samples, and almost none includes a budget constraint. Without a budget constraint,
respondents tend to either understate their preferences for inexpensive services or overstate them for more expensive services.4
A family of methodologies has developed that provides survey respondents with
individual or governmental budget constraints, or that assess willingness to pay taxes
within the context of such constraints. One of these methods, contingent valuation, is
2. Daniel Rubinfeld, ''Voting in a Local School Election: A Micro Analysis,'' Review of Economics and
Statistics 59, no. 1 (1977): 30–42; James Button and Walter Rosenbaum, ‘‘Seeing Gray: School Bond Issues
and the Aging in Florida,’’ Research on Aging 11, no. 2 (1989): 158–173; Walter A. Rosenbaum and James
W. Button, ‘‘Is There a Gray Peril? Retirement Politics in Florida,’’ Journal of Aging Studies 6 (1992): 385–
396; Susan A. MacManus, ‘‘The Widening Gap between Florida’s Public Schools and Their Communities:
Taxes and a Changing Age Profile,’’ Policy Report No. 8. (1996), Tallahassee, FL: James Madison Institute; William Duncombe, Mark Robbins, and Jeffrey Stonecash, ‘‘Measuring Citizen Preferences for
Public Services Using Surveys: Does a ‘Gray Peril’ Threaten Funding for Public Education?,’’ Public
Budgeting and Finance 23, no. 1 (2002): 45–72.
3. Earl R. Brubaker, ‘‘Eliciting the Public’s Budgetary Preferences: Insights from Contingent Valuation,’’ Public Budgeting and Finance 24, no. 1 (2004): 73.
4. William Simonsen and Mark D. Robbins, Citizen Participation in Resource Allocation (Boulder, CO:
Westview Press, 2000a); William Simonsen, and Mark D. Robbins, ‘‘The Influence of Fiscal Information
on Preferences for City Services,’’ Social Science Journal 37, no. 2 (2000b): 195–214; Mark D. Robbins, Bill
Simonsen, and Barry Feldman, ‘‘The Impact of Tax Price on Spending Preferences,’’ Public Budgeting and
Finance 24, no. 3 (2004): 82–97.
designed to reveal each respondent’s personal willingness to pay.5 This approach attempts to quantify the amount of money a citizen hypothetically would be willing to pay
for a specified quality improvement in a given good.6
Contingent valuation methodology (CVM) is typically used to estimate the value of
goods or services not sold or traded on markets. The technique is widely used to estimate
the value of environmental impacts, such as the value of natural resources like public
lands. In fact, the U.S. Court of Appeals for the District of Columbia Circuit in State of Ohio v. United States Department of the
Interior validated the use of this methodology in this area, stating:
Department of the Interior’s inclusion of contingent valuation as methodology to be employed in
assessing damages resulting from harm to natural resources . . . was proper; contingent valuation
process includes techniques of setting up hypothetical markets to elicit individual’s economic
valuation of natural resource, and the methodology qualified as best available procedure for
determining damages flowing from destruction of or injury to natural resources if properly applied
and structured to eliminate undue upward biases.7
CVM is a survey- or interview-based technique that iteratively queries respondents with
escalating prices for various non-market goods and services in order to find the upper
bounds of their willingness to pay:
Choice situations are constructed in which individuals trade off money for the public good and
reveal their willingness to pay. Contingent valuation usually entails asking about prior knowledge
and attitudes about the public good, description of the public good, how payment will be made,
elicitation of the willingness to pay amount, debriefing questions, and personal and demographic
characteristics.8
While CVM has generally been used in the area of recreation and the environment,9
the technique has also been successfully applied to support for local public services.
(Brubaker provides an excellent review of CVM application to local public services.)10
One major critique of CVM is that, unlike in voter referenda studies, the respondent is
not actually required to buy anything or spend any money. The worry is that
this leads respondents to overstate their willingness to pay, a problem called hypothetical bias.
One method researchers have developed to address hypothetical bias is so-called ''cheap
talk,'' a process in which the interviewer explains to the respondent what hypothetical bias is and
5. Robert Mitchell and Richard Carson, Using Surveys to Value Public Goods: The Contingent Valuation
Method (Washington, DC: Resources for the Future, 1989).
6. J. C. Whitehead, ''Willingness to Pay for Quality Improvements: Comparative Statics and Interpretation of Contingent Valuation Results,'' Land Economics 71, no. 2 (1995): 207–215.
7. Congressional Research Service Report for Congress, RL30242: Assessing Non-Market Values
through Contingent Valuation, 1999; available from: http://www.ncseonline.org/nle/crsreports/natural/
nrgen-24.cfm; accessed 15 November 2007.
8. Glenn C. Blomquist, Michael A. Newsome, and D. Brad Stone, ''Public Preferences for Program
Tradeoffs: Community Values for Budget Priorities,’’ Public Budgeting and Finance 24, no. 1 (2004): 53.
9. Mitchell and Carson, 307–354.
10. Brubaker, 94–95.
how important it is to provide a true measure of their willingness to pay. Another
method measures respondents’ certainty about their stated willingness to pay either on a
certainty scale (e.g., 1–10 from uncertain to very certain) or through categories (e.g.,
respondent is ‘‘probably sure’’ or ‘‘definitely sure’’ they would pay a certain amount).11
A field experiment by Blumenschein et al. recently tested these two techniques and found
cheap talk to be ineffective, but that using follow-up certainty statements removed the
hypothetical bias.12
A simpler approach was adopted by Arrington and Jordan.13 When they estimated
residents’ willingness to pay for government services in Mecklenburg County, North
Carolina, they asked respondents whether they were willing to pay certain amounts per capita for 20
different services if government was not to provide them. A control group was given
corresponding questions but without the per capita fiscal information. They found that
‘‘for virtually every (government service) activity the support was less when respondents
were asked whether they would pay the costs directly.’’14 Glaser and Hildreth studied
willingness to pay taxes for park and recreation services and found that respondents’
willingness to pay corresponded to their use of these services.15 However, they also found
many heavy users with low levels of willingness to pay as well as the reverse: low service
users with higher levels of willingness to pay.
METHODOLOGY
The CVM literature recommends interviews to assess willingness to pay. We followed
this strategy, and also applied the certainty correction for hypothetical bias, when conducting computer-assisted telephone interviews of the sample populations. This section
discusses how we selected the sample, the substantive focus of the surveys, and the
interview procedures that we used.
Survey Samples
Our survey was administered to representative samples of adult residents throughout the
United States, in New York City, and in the metropolitan District of Columbia area.
New York City and Washington, D.C. were included because both are large cities
11. Karen Blumenschein et al., ‘‘Eliciting Willingness to Pay without Bias: Evidence from a Field
Experiment’’ The Economic Journal 24 (forthcoming): 4–5.
12 Ibid., 23–24.
13. Thomas S. Arrington and David D. Jordan, ‘‘Willingness to Pay Per Capita Costs as a Measure of
Support for Urban Services,’’ Public Administration Review 42, no. 2 (1982): 168–171.
14. Ibid., 169.
15. Mark A. Glaser and W. Bartley Hildreth, ‘‘A Profile of Discontinuity between Citizen Demand and
Willingness to Pay Taxes: Comprehensive Planning for Park and Recreation Investment,’’ Public Budgeting
and Finance 16, no. 4 (1996): 96–124.
that have been the target of terrorist attacks, both have subsequently received
substantial funding for terrorism-related programs, and both already employ some
terrorism-related technologies. If support and willingness to pay more taxes for terrorism
technologies and services exists among citizens anywhere, we might expect to see it most
in these two cities.
We used probability sampling to select respondents within the universe of private
households with telephones. Subjects were contacted by random digit dialing (RDD) and
interviewed by phone.16 Because our population of interest is adult residents, the
youngest male or oldest female over 18 years old in each household was selected to be
interviewed.17 Spanish-speaking interviewers conducted the survey with non-English-speaking respondents.18 We used a sample size of 1,000 for our survey of residents in the
United States to produce average responses on survey items that are within a margin of
error of 3 percent of the values in the total population at a 95 percent level of
confidence. Likewise, our sample sizes were 400 each for New York City and Washington, D.C. to produce average responses on survey items that are within a margin of
error of 5 percent at a 95 percent confidence level. Ultimately, we completed 1,802 20-minute computer-assisted telephone interviews during the period February 6–March 17,
2006.
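These targets follow from the standard normal-approximation margin of error for a proportion; a quick textbook check, using the conservative assumption p = 0.5 and z = 1.96 for 95 percent confidence, reproduces the sample sizes above:

```latex
MOE = z\sqrt{\frac{p(1-p)}{n}}, \qquad z = 1.96,\; p = 0.5
n = 1{,}000:\quad 1.96\sqrt{0.25/1000} \approx 0.031 \;\;(\pm 3\ \text{percent})
n = 400:\quad 1.96\sqrt{0.25/400} = 0.049 \;\;(\pm 5\ \text{percent})
```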
The RDD method, coupled with these sample sizes, assured a set of survey responses
that fairly represent the responses of the general public.19 We confirmed the representativeness of our sample of respondents to the population of adults by comparing the
demographics of our survey population to those compiled by the U.S. Census Bureau.
We found that our national sample underrepresented residents who are of Hispanic
origin, black, and Asian, as well as Spanish speakers who speak English less than very
well. Among our Washington, D.C. respondents, blacks were strongly underrepresented,
and whites overrepresented. In New York City, black and Asian residents were slightly
underrepresented. In addition, our survey respondents were more likely to be female,
older, more educated, and wealthier than in the general populations nationally and in
16. Random digit dialing generates phone numbers for all households with landline telephones (even
those that are unlisted). In RDD, all valid three-digit area codes and valid three-digit prefixes within those
area codes are selected for the population of interest. A computer then appends randomly generated four-digit suffixes to create complete phone numbers. RDD does not allow households without landlines,
institutional living units, or businesses to be included in the sample.
17. Because men are less likely to be home than women, and younger people are less likely to be home
than older people, there is a tendency for phone survey samples to underrepresent young men. The
‘‘youngest male, oldest female’’ method brings the demographics of the survey population more in line with
the actual population.
18. For the national sample, 4.4 percent of the surveys were conducted in Spanish. Two percent of the
Washington, D.C. surveys and 5 percent of the New York City surveys were in Spanish.
19. Nonresponse (which arose because some people refused to complete the survey or did not answer at
the times when the interviews were conducted) does mean that the sample of respondents who completed the
survey is a subset of the sample of households generated, and thus may no longer be representative of the
population of interest.
both cities. These biases are typical in sample surveys with even very high response rates
(e.g., Brehm 1993),20 so our sample seems to be reasonably representative of our population of interest.
We also investigated the potential for nonresponse bias by examining the characteristics of those who were selected for but refused to participate in the national survey. We
recontacted a random selection of 100 people across the United States who had originally refused to complete the telephone survey and asked them only demographic
questions. We found that the characteristics of those who refused to participate in our
survey are very similar to those of United States respondents across all demographic
characteristics. This assures us that the bias introduced by nonresponse is unlikely to
affect our findings in any manner associated with the observable attributes of the respondents.21
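One standard way to make such a comparison is a chi-square test of each demographic trait's distribution across respondents and refusers; the article does not name the procedure it used, so the following is a minimal sketch with invented counts for a single trait:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one demographic trait (e.g., three age bands);
# rows: survey respondents vs. recontacted refusers. Real counts would
# come from the survey file and the 100-person refuser follow-up.
table = [
    [310, 420, 270],  # respondents
    [32, 41, 27],     # refusers
]

chi2, p_value, dof, expected = chi2_contingency(table)
# A large p-value is consistent with the finding that refusers resemble
# respondents on observable characteristics.
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```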
Questionnaire Design
The telephone survey questions were constructed based on feedback we solicited from
three sets of focus groups in three different cities, and on reviews of hundreds of previously used public opinion questions related to security, technology, threats, attitudes
toward government, emergency management, and willingness to pay taxes obtained
through archival polling data provided by the University of Connecticut’s Roper Center.22 These questions were narrowed to those most related to the current study. We
drafted new questions based on the unique purposes of this research project. Once a full
set of questions was developed, questions were reviewed for clarity, validity, and reliability. To assist with this process, we solicited review by experts in contingent valuation
methodologies. We then pretested the survey instrument with members of the general
public. Questions were again revised to arrive at a final question set.
Most of the survey questions targeted people’s views about new services and new
technologies designed to enhance terrorism preparedness. With respect to services, we
chose to ask about two services currently under consideration by governments. These
were described to survey respondents as follows:
20. John Brehm, The Phantom Respondents: Opinion Surveys and Political Representation (Ann Arbor:
University of Michigan Press, 1993).
21. We found no significant differences in the demographic characteristics between respondents and
nonrespondents. This reassures us that nonresponse bias is not a concern with respect to those characteristics. Those characteristics, however, are not the only ways in which
nonrespondents could differ from respondents. We do not know how different they look on other questions
of interest, such as trust in government, or trust in government to make technology that works, both of
which arguably contribute to the prices citizens find desirable for public services.
22. The Roper Center is an archive based at the University of Connecticut containing decades of public
opinion surveys completed in the United States.
Terrorism Prevention and Detection. This new service is different from existing fire
and police services. It could include things like new antiterrorist computer tracking
systems, or specially trained security personnel.
Terrorism Response and Recovery. This new service could include such items as
response plans for terrorism events and exercises to train police and fire to respond
to terrorism events.
Likewise, we asked about three technologies currently under development. These were
described to survey respondents as follows:
Persistent Surveillance Cameras. These cameras are mounted on special planes and
record detailed movements in entire cities. The cameras can help manage a disaster
response and help an investigation after a terrorist attack.
Stand-Off Detectors. These devices monitor an area for hazards, such as explosive, chemical, biological, or radiological materials. If a hazard is present, emergency responders and the public would be alerted.
Portal Detectors. These are devices that people can walk through, similar to metal
detectors. They detect explosive, chemical, biological, and radiological materials.
Our survey questions were designed to ascertain people’s support for these services
and technologies. We pursued three strategies to understand support levels. Specifically,
we examined: (1) whether people say they would support particular programs, (2)
whether they would be willing to pay for these programs, and (3) whether they would be
willing to bear other costs that might result from these programs. These three types of
support were measured as follows:
Professed support. Our first strategy was to ask respondents directly about their support. In particular, we asked whether they would support or oppose having each new
technology used in different settings in their community. This is because support might
vary depending on where the technologies are deployed. For example, people might
support the use of detectors in transportation hubs more than they would in other public
gathering places.
Willingness to pay. Our second strategy to measure support considered whether people
would ‘‘put their money where their mouth is’’Fin other words, would they be willing to
pay additional taxes for services they profess to support? Support for and satisfaction
with local government services is typically strong in the general population. Scholars
studying preferences for taxing and spending have observed less support once questions
are stated in terms that reveal trade-offs or costs to the citizens considering them.23 For
this reason, asking how much people would be willing to pay for services is considered a
more reliable measure of their true preferences. In this area, we followed the CVM
literature, and employed several techniques to avoid hypothetical bias and obtain valid
23. This depends on the cost of the service. When per household service costs are low, people are more
likely to support a service if they know what its cost is, but when service costs are high, people are less likely
to support them when they see their prices (Robbins, Simonsen, and Feldman, 93).
measures of willingness to pay. We began by using focus groups and pretests to gain
insight into how people formed their responses and to test our ability to get valid and
reliable answers to questions about willingness to pay for services that do not yet exist.
Then, in our final survey, we asked both about the new services and technologies in
which we were interested, and about two important and ubiquitous local services, education and fire services.
The contingent valuation method has a high burden of proof associated with it because of the many biases and measurement failures endemic in survey research and in
preference revelation, and the difficulty in validating measures. Field tests have demonstrated
the potential accuracy of CVM in finding the willingness to pay for private goods and
services. Emulating the approaches shown to produce valid and reliable measures for
goods in private production is the best that can be done to give assurance that a result is
valid and reliable for public services not yet produced.
One threat to validity occurs when respondent service choices are presented without
the budget constraints that frame them. Citizens are generally satisfied with local government services and supportive of them. This results in a corresponding halo effect
where levels of support are reported that do not actually correspond with underlying
willingness to pay. To combat this, it is helpful to provide fiscal information about the
costs of a particular service before querying respondents about it.
Price, however, is just one part of a budget constraint. Households face opportunity
costs when consuming goods and services that come in the form of recurring claims on
household spending, such as food and rent. In order to ground people in their budget
constraint it is helpful to get them to think specifically about these claims. A survey
designed such that questions about general household spending immediately precede
CVM questions is one way to cause respondents to gauge their own ability to pay as they
ponder their bids for new public services.
Finally, the income of the household binds the budget constraint. The respondent
from a household with disposable income that is large relative to the
population may have a substantively different level of support for public spending than
those at or below the center of that distribution. The most common approach to control
for this is to ask people to reveal their income. Respondents are generally reluctant to
provide this information, but we have achieved good results by asking people to identify
where they fall in an income distribution, often posed, as in this research, in
quartiles.
We expect people to be the most sensitive to price when contemplating the reality of
their own financial circumstances. A more precise measure of consumers' willingness to pay for a new public service should therefore result when they contemplate price
in the context of one’s own household budget constraint. This kind of framing also
reduces the chance of hypothetical bias. The research design for this project employs
each of these approaches.
We used an iterative bidding structure, whereby respondents are confronted with a
series of nested choices that ask them what additional tax they would pay for a given
public good, beginning with an arbitrary initial payment. This is an appropriate approach, because we seek a measure of people’s willingness to pay for particular technologies and services and to evaluate support, not to provide policymakers a ‘‘real’’
number to include in a budget.24 To begin such a process, respondents are asked if they
are willing to pay a specific amount. In our survey, we varied this ‘‘anchor’’ randomly
between $25 and $75.25 Specifically, we described a particular service or technology to
respondents. They were then asked if they were willing to pay the anchor amount in
additional taxes per year to implement the particular service or technology in their
community.26 If they answered ‘‘no,’’ the amount was reduced incrementally until the
greatest amount the respondent would pay was identified. If they answered ‘‘yes,’’ the
amount was increased incrementally until the greatest amount the respondent would pay
was identified.
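A minimal sketch of this bidding logic, as described above, follows; the $10 step and $500 ceiling are illustrative assumptions, while the randomized $25–$75 anchor follows the survey design:

```python
import random

def iterative_bid(says_yes, step=10, ceiling=500):
    """Return the largest annual tax amount the respondent agrees to pay.

    says_yes: callable taking a dollar amount and returning True or False,
    standing in for the live interview question. The step and ceiling are
    illustrative assumptions, not values from the article.
    """
    amount = random.randint(25, 75)  # anchor randomized between $25 and $75
    if says_yes(amount):
        # Escalate incrementally until the respondent first refuses.
        while amount + step <= ceiling and says_yes(amount + step):
            amount += step
        return amount
    # Otherwise reduce incrementally until the respondent first accepts.
    while amount > 0:
        amount = max(0, amount - step)
        if amount == 0 or says_yes(amount):
            return amount
    return 0
```

In the actual instrument, respondents could also name their own amount after a few iterations, and the resulting bid was then subjected to the certainty follow-up described in the paragraph that follows.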
The CVM research further suggests that techniques that use hypothetical bias corrections can provide better estimates of the value of nonmarket goods. After soliciting
their preferred payments for these services, we asked for the amount that respondents
were ‘‘definitely sure’’ that they would pay, ‘‘without a doubt.’’ This strategy has been
demonstrated to provide correction for hypothetical bias in other settings and increases
our confidence in the spending preferences revealed in this study.27 Evidence of a correction is apparent based on the consistently lower amounts offered following this certainty check. (The results we report below rest on the final corrected amounts that
respondents said that they would pay ‘‘without a doubt.’’) In addition, we surmised that
residents might be more inclined to pay for new services, particularly ones that might
seem very specialized, if they would also prove useful in other ways. Thus we asked
respondents whether it made a difference in what they were willing to pay for a new
service if that service could also be used to enhance everyday public safety activities.
These questions about how much more they would be willing to pay for this additional
benefit were also subject to the ‘‘definitely sure, without a doubt’’ bias mitigation.
Other ‘‘payments.’’ Our third approach to measuring support was to consider the
possibility that other things besides monetary cost might matter to people’s acceptance of
a technology. In particular, some of these technologies are likely to pose an inconvenience, in effect requiring ‘‘payments’’ of time. We hypothesized that as inconvenience
24. Richard M. Bennett and R. B. Tranter, ‘‘The Dilemma Concerning Choice of Contingent Valuation
Willingness to Pay Elicitation Format,’’ Journal of Environmental Planning and Management 41, no. 2
(1998): 253–257; and Mitchell and Carson, 55–90.
25. This anchor range was set around the mean value of responses obtained during focus groups
conducted prior to the survey.
26. We did two things to minimize the effects of anchor bias. First we allow respondents to name their
own amount after a few iterations where they are queried on specific amounts. Second, we randomly varied
the anchor amount presented to our respondents. We expect that it is for this reason that the resulting
correlation coefficients between the amounts ultimately selected by respondents and the anchor amount
were very small (0.10 or less).
27. Blumenschein et al. (forthcoming).
increases, people’s support for a technology would fall, and that their tolerance for
inconvenience might vary depending on the circumstances under which it occurred, so
people might be willing to wait longer in line at an airport than at a shopping mall, for
example. To test this, we asked how long respondents would be willing to wait. We posed
our willingness to wait questions in a similar structure to questions about willingness to
pay for technologies. We assigned respondents at random to one of four groups. We
asked each group whether they would wait one of the four amounts of time (15, 30, 45, or
60 minutes) to walk through a portal detector. We repeated the question to each respondent
for four venues (airports, stadiums, schools, and malls).
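A minimal sketch of this split-sample design and its tabulation (function names and response records are hypothetical) illustrates the structure:

```python
import random
from collections import defaultdict

WAIT_TIMES = [15, 30, 45, 60]  # minutes; each respondent sees exactly one
VENUES = ["airports", "stadiums", "schools", "malls"]

def assign_wait_time():
    """Randomly assign a respondent to one of the four wait-time groups."""
    return random.choice(WAIT_TIMES)

def tabulate(responses):
    """responses: iterable of (wait_time, venue, agreed) records.

    Returns the percent agreeing to wait, keyed by (wait_time, venue),
    which is the layout of Table 4.
    """
    cells = defaultdict(lambda: [0, 0])  # (wait, venue) -> [yes, total]
    for wait, venue, agreed in responses:
        cells[(wait, venue)][0] += int(agreed)
        cells[(wait, venue)][1] += 1
    return {key: 100.0 * yes / total for key, (yes, total) in cells.items()}
```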
FINDINGS
This section presents our findings with respect to our research question: How much do
citizens support services and technologies designed to protect them from terrorist attacks, and what are citizens willing to give up for them in terms of time or money?
Descriptive statistics for the variables discussed in this section are in the Appendix.
Support
As we have described, we asked survey respondents about their support for a variety of
new technologies. Table 1 shows the proportion of respondents from across the nation,
in New York City, and in Washington, D.C. who support or strongly support each
technology. We found that most people said that they would support both stand-off and
portal detectors in every locale; support exceeded 60 percent in all cases but two:
stand-off detectors in shopping malls drew 58 percent support nationwide and only 48
percent among residents of Washington, D.C.
Support for both types of detectors clearly varied based on the venue in which these
TABLE 1
Percent of Respondents Who Support Technologies by Location

                                            % Nationwide   % New York City   % Washington, D.C.
Support portal detectors in airports              91              95                 91
Support portal detectors in stadiums              81              87                 84
Support portal detectors in schools               84              84                 79
Support portal detectors in malls                 76              85                 77
Support stand-off detectors in airports           91              92                 87
Support stand-off detectors in stadiums           77              81                 71
Support stand-off detectors in schools            72              71                 61
Support stand-off detectors in malls              58              69                 48
Support persistent surveillance cameras           31              45                 37
TABLE 2
Amount Respondents Are Willing to Pay for Services and Technologies

$ people are willing to pay for:               Nationwide   New York City   Washington, D.C.
Current local school services                     66.69          74.84           157.17
  % willing to pay $0                             40.18          36.25            26.77
  Number of valid responses                         891            320              325
Current local fire services                       43.52          52.36            84.58
  % willing to pay $0                             31.69          26.00            21.96
  Number of valid responses                         915            350              337
Prevention and detection services                 26.72          46.01            47.85
  % willing to pay $0                             52.28          44.31            44.29
  Daily use extra payment                         50.18          82.10            67.36
  Number of valid responses                         922            343              350
Response and recovery services                    25.35          39.99            43.95
  % willing to pay $0                             49.79          39.44            36.29
  Daily use extra payment                         41.33          61.20            63.28
  Number of valid responses                         936            360              361
Stand-off detector technology                     19.21          29.55            26.50
  % willing to pay $0                             50.32          36.36            35.98
  Number of valid responses                         950            363              353
Portal detector technology                        19.20          25.70            22.05
  % willing to pay $0                             52.96          39.34            42.13
  Number of valid responses                         946            366              356
Persistent surveillance camera technology         13.97          23.70            22.67
  % willing to pay $0                             69.35          51.52            60.77
  Number of valid responses                         956            361              362
devices would be deployed. Support was highest for technologies that would be used at
airports and decreased progressively when deployed in stadiums, schools, and shopping
malls. Support for persistent surveillance cameras was lower (31 percent in favor
nationally; 37 percent in Washington, D.C.; and 45 percent in New York). Some
regional variation exists in general support for new technologies. Residents of New York
City were by far the most supportive in almost every case, while support among
Washington, D.C. residents is more similar to people across the nation as a whole,
though they show somewhat less support for stand-off detectors deployed in stadiums,
schools, and malls.
Willingness to Pay
As previous research has shown, asking how much people would actually be willing to
pay for something can provide another reliable measure of their support. Table 2
presents the amounts people are willing to pay for new technologies; for existing local
services; and for new terrorism-related services. As the table demonstrates, people not
only profess support for new detector and surveillance technologies, but many are willing
to pay for them. Notably, though, the proportion who says they are willing to pay at
least some amount is indeed lower than the proportion that expressed support in the
abstract. The structure of support is similar between professed support (Table 1) and
willingness to pay (Table 2). That is, people are willing to pay less for persistent surveillance cameras than for detectors. New Yorkers are willing to pay more than Washingtonians, who in turn will pay more than people nationwide.
People are also willing to pay for new services focused on preventing, detecting,
responding to, and recovering from terrorist attacks, though they are not willing
to pay as much for these services as they are for existing local school and fire services,
which garner the most support of all services and technologies we asked about. Generally, while the proportion of people who say they won’t pay anything for these new
services is about the same as for new technologies, the amount people are willing to pay
for new services is much higher than for technologies. Among those who would pay for
them at all, the additional amount that people were willing to pay in taxes for technologies ranged from a low of about $15 per year for persistent surveillance nationwide
to a high of about $31 per year for stand-off detectors in New York City. The amount
that people were willing to pay in taxes for new services ranged from a low of about $25
per year for response and recovery nationwide to a high of about $51 per year for
prevention and detection in Washington, D.C. The average levels of additional taxes
citizens were willing to pay also varied by area. In all cases the means for New York City
and the District of Columbia respondents were significantly greater than for the national
sample.
We also asked our survey respondents to quantify the extra amount that they would
be willing to pay, on top of what they had already agreed to, if the new services would
also be put to daily use to enhance ‘‘normal’’ public safety functions (i.e., aside from
combating the threat of terrorism). In the case of terrorism prevention and detection
services, respondents nationwide were willing to pay over $50 more for the added benefit
of daily use, compared with a willingness to pay about $27 for the new service itself. For
terrorism response and recovery services the extra payment was about $16. Considerably
higher payments were offered by residents of New York and Washington, D.C. than by
people nationwide.
Residents of New York City and the District of Columbia metropolitan area display
consistently higher levels of willingness to pay than those from the United States at large.
The differences appear to be real: weighting the samples back to resemble the populations of their
cities on demographic traits produces few significant changes in the means.
populations are different from the nation on demographic traits, however, and also in
their experiences as residents of areas where terrorist attacks have recently occurred.
That may account for the higher taste for spending. We hope to pursue an exhaustive
analysis of these differences in subsequent research.
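A minimal sketch of one common weighting approach, simple post-stratification (cell weight equal to population share divided by sample share; the cell definitions and shares are placeholders that would come from Census data and the survey file):

```python
def poststratification_weights(pop_share, sample_share):
    """Weight for each demographic cell = population share / sample share.

    Both arguments map a cell (e.g., ('female', '65+')) to its share of
    the city population or of the survey sample; the shares themselves
    are hypothetical inputs, not values from the article.
    """
    return {cell: pop_share[cell] / sample_share[cell] for cell in pop_share}

def weighted_mean(values, weights):
    """Weighted mean of, e.g., willingness-to-pay amounts."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```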
Implications for the Median Voter
The reader can see from the results displayed in Table 2 that the majority of respondents
from the national sample offered $0 to add stand-off detectors, portals, and persistent
surveillance technologies to their communities, despite evidence of substantial willingness
to pay when expressed as a mean value. Several conclusions logically follow. If offered
these new services by referendum, they would likely be defeated. If the benefits to society
sum to at least the total amount that residents are willing to pay, such a result
would be suboptimal. Policymakers who believe in the benefits that these technologies
provide should not rely upon local residents to support them. If a case for
extra-jurisdictional benefits can be made there is even less of a chance of a single jurisdiction vote resulting in the optimal outcome. Because homeland security in general is
a public good, we have both theoretical and empirical support for its central rather than
local provision.
We asked residents about two ubiquitous local services, classroom instruction and fire
services, in addition to the new services that we attempt to price. The questions were
included in the design as additional checks on hypothetical bias. Current services will
continue to cost more money and the decision environment facing communities debating
new services will include concerns about paying for them. Respondent bids for new
services should be attenuated by the implicit constraint implied by this other spending.
We pose the school and fire questions in terms of additional spending for the current
level of service. For that reason the results convey each individual’s consumer surplus for
that service and are interesting in their own right. In this case the majority of respondents
offer more than $0 as their additional willingness to pay. These services have a clear local
constituency.
In Table 3, we report the average of the amounts that respondents first offer when
they are asked what they are ‘‘willing to pay’’ and their response after they are asked to
revise that offer to what they are ‘‘definitely willing to pay, without a doubt.’’ The table
reveals the magnitude of these differences. For almost every service, the certainty correction produces significantly lower averages of willingness to pay. This confirms the
presence of hypothetical bias and the necessity of such corrections.
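A paired t-test is one standard way to produce t-scores for within-respondent differences like those in Table 3 (whether the article used exactly this procedure is not stated). A minimal sketch, with hypothetical bid arrays aligned by respondent:

```python
from scipy import stats

def certainty_correction_test(initial_bids, corrected_bids):
    """Paired t-test of initial ''willing to pay'' bids against the
    ''definitely sure . . . without a doubt'' amounts for the same people.

    A significantly positive mean difference indicates hypothetical bias
    that the certainty follow-up removes. Bid arrays are hypothetical.
    """
    mean_diff = sum(a - b for a, b in zip(initial_bids, corrected_bids)) / len(initial_bids)
    t_stat, p_value = stats.ttest_rel(initial_bids, corrected_bids)
    return mean_diff, t_stat, p_value
```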
Willingness to Wait
Another measure of support is the amount of inconvenience that citizens will bear if a
particular technology is implemented. We measured inconvenience by asking how long
citizens would be willing to wait in line to walk through a portal detector at various
locations. We used CVM to get at inconvenience, and asked our respondents about wait
time in the same manner as we asked about willingness to pay, but framed our questions
in terms of time rather than dollars. Our findings are presented in Table 4 and Figure 1,
which show that when respondents were confronted with potential inconvenience (waiting time), support was markedly lower than it was when respondents were asked about
TABLE 3
Amounts Respondents Are Willing to Pay for Services and Technologies Compared with
Amounts They Are ''Definitely Sure'' That They Would Pay ''Without a Doubt''

$ people are willing to pay for:                          Nationwide   New York City   Washington, D.C.
Current local school services
  ''Willing to pay''                                         71.43          78.56           165.17
  ''Definitely [willing to pay] . . . without a doubt''      66.69          74.84           157.17
  Difference                                                  4.74           3.72             8.00
  t-score                                                   5.01***        2.16**           3.84***
Current local fire services
  ''Willing to pay''                                         45.32          66.75            95.00
  ''Definitely [willing to pay] . . . without a doubt''      43.52          52.36            84.58
  Difference                                                  1.79          14.39            10.42
  t-score                                                   4.86***        1.26             1.25
Prevention and detection services
  ''Willing to pay''                                         28.01          47.73            51.45
  ''Definitely [willing to pay] . . . without a doubt''      26.72          46.01            47.85
  Difference                                                  1.29           1.72             3.61
  t-score                                                   2.46***        4.29***          2.33**
Response and recovery services
  ''Willing to pay''                                         25.27          43.79            45.24
  ''Definitely [willing to pay] . . . without a doubt''      25.35          39.99            43.95
  Difference                                                  0.76           3.79             1.29
  t-score                                                   0.07           1.69**           1.37*
Stand-off detector technology
  ''Willing to pay''                                         20.43          31.44            29.50
  ''Definitely [willing to pay] . . . without a doubt''      19.21          29.55            26.50
  Difference                                                  1.23           1.89             3.01
  t-score                                                   6.51***        3.97***          5.56***
Portal detector technology
  ''Willing to pay''                                         20.32          26.88            23.87
  ''Definitely [willing to pay] . . . without a doubt''      19.20          25.70            22.05
  Difference                                                  1.12           1.18             1.82
  t-score                                                   4.08***        3.92***          5.88***
Persistent surveillance camera technology
  ''Willing to pay''                                         14.71          24.31            23.45
  ''Definitely [willing to pay] . . . without a doubt''      13.97          23.70            22.67
  Difference                                                  0.74           0.61             0.78
  t-score                                                   3.84***        1.58*            1.91**

*P < .10. **P < .05. ***P < .01.
TABLE 4
Percent of Respondents Willing to Wait to Walk through a Portal Detector

Wait time (min)       Airports   Stadiums   Schools   Malls
Nationwide
  15                      83         66         54       36
  30                      71         51         37       25
  45                      68         42         30       15
  60                      63         35         27       15
New York City
  15                      85         70         63       48
  30                      76         61         34       28
  45                      64         44         28       23
  60                      62         40         30       20
Washington, D.C.
  15                      71         53         43       27
  30                      65         41         26       13
  45                      54         32         20       12
  60                      50         29         25       12
support in the abstract. That is, the proportion of respondents willing to wait at all was
lower than the proportion that expressed support for portal technologies. This was true
of respondents in all three samples, but residents of New York remained the strongest
supporters. Further, our suspicion that the willingness of respondents to wait declined as
waiting time increased was confirmed, suggesting that demand falls as cost increases,
even if that cost is priced in time rather than dollars. This effect was evident for all
venues, but varied by venue: respondents were willing to wait the longest at airports,
and were least willing to wait at shopping malls.

FIGURE 1
Willingness to Wait to Walk through a Portal Detector (by Venue)
[Line chart: percent of respondents willing to wait (vertical axis, 0–90) plotted against wait time in minutes (15, 30, 45, 60), with one line each for airports, stadiums, schools, and malls.]
In Figure 1, we plot the amount of wait time presented to respondents in each of four
venues against the proportion of respondents agreeing to wait that long to walk through
a portal detector. The specific proportions should be viewed with caution as we have not
developed or applied hypothetical bias corrections for bids in units of time. The key to
this finding is the pattern. The proportion of respondents agreeing to wait declines
steeply as wait time increases. These results resemble a classic demand curve with time
instead of price. People value time in much the same way as they would money, even
though the presumed outcome of the service (some cataclysm avoided) is much different
from that of a traditional good. Even a service that does not directly impose dollar costs on
consumers has support boundaries and limitations associated with the degree of inconvenience that it imposes.
CONCLUSIONS
Government decision makers seeking to deploy new homeland security technologies or
attempting to gain local support for them can be both encouraged and cautioned based
on our findings. Citizens generally value local services highly, and respondents in this
project were no different. As we have seen, citizens support new services and technologies
that would make them safer from terrorism. We find clear evidence of the willingness of
citizens to pay substantial amounts of additional taxes to support these new services and
technologies, but it is a minority of citizens offering those payments. Unless these initiatives result in some daily use benefit, the majority of citizens are not willing to pay any
amount to have them deployed in their area.
We also see that support in the abstract is greater than support that comes at a
personal cost, even if that cost is not monetary. Our results show that resistance to new
technologies is likely to rise as inconvenience (such as longer waiting time) increases.
And, the amount of inconvenience citizens will bear varies according to the conditions
under which these burdens are experienced.
Public managers and elected officials are obliged to act on behalf of the public. They
employ their own judgment when selecting services to provide and service levels to
deliver. Their perceptions of public preferences inform these choices. Voting can sometimes determine whether a service will be provided and is one way to reveal preferences.
Public hearings about new or proposed service initiatives can also reveal preferences,
particularly when they are strongly held. Knowing that a certain vocal subset of the
population supports or opposes a service does not, however, provide much useful information about jurisdiction-wide support or the level of (spending) effort that is
preferred.
Contingent valuation has the potential to reveal support for public services in a
more nuanced manner than public hearings, voting, or surveys with closed-ended or
forced-choice designs. It is particularly helpful when estimating the demand for public
goods, such as homeland security technology and services that might not yet exist in a
community. We have demonstrated here what we believe to be a valid and reliable way to
gauge such support. Lessons from this research could be extended with field experiments
that gather contingent values before, and then observe price support after, a new service
is introduced. Additional research should probe experimentally how levels of support
vary based on method of preference revelation.
NOTES
This paper presents results from surveys conducted as part of the Regional Technology Integration
Initiative, a research study conceived and funded by the U.S. Department of Homeland Security’s
Science and Technology Directorate through the National Science Foundation under Grant No.
000409. We are especially grateful to Nancy Suski for her support of this project, to Glenn Blomquist
(University of Kentucky) for providing his expert assistance in contingent valuation, to Emily Shepard and Candace Fitzpatrick for project management, and to Binu Chandy for research assistance.