Author's personal copy
ARTICLE IN PRESS
Journal of Environmental Economics and Management 57 (2009) 65–86
Virtual experiments and environmental policy$
Stephen M. Fiore a, Glenn W. Harrison b,d, Charles E. Hughes c, E. Elisabet Rutström b,*
a Cognitive Sciences Program, Department of Philosophy and Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA
b Department of Economics, College of Business Administration, University of Central Florida, Orlando, FL, USA
c School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, FL, USA
d Durham Business School, Durham, UK
Article info

Abstract
Article history:
Received 30 July 2007
Available online 3 December 2008
We develop the concept of virtual experiments and consider their application to
environmental policy. A virtual experiment combines insights from virtual reality in
computer science, naturalistic decision-making from psychology, and field experiments
from economics. The environmental policy applications of interest to us are the
valuation of wildfire management policies such as prescribed burns. The methodological
objective of virtual experiments is to bridge the gap between the artefactual controls of
laboratory experiments and the naturalistic domain of field experiments or direct field
studies. This should provide tools for policy analysis that combine the inferential power
of replicable experimental treatments with the natural ‘‘look and feel’’ of a field domain.
We present data from an experiment comparing valuations elicited by virtual
experiments to those elicited by instruments that have some of the characteristics of
standard survey instruments, and conclude that responses in the former reflect beliefs
that are closer to the truth.
© 2008 Elsevier Inc. All rights reserved.
Keywords:
Virtual reality
Field experiments
Laboratory experiments
Risk perception
Subjective beliefs
Wildfires
Environmental policy
1. Introduction
We develop the concept of virtual experiments (VXs) and consider their application to environmental policy. A VX
combines insights from virtual reality (VR) simulations in computer science, naturalistic decision making (NDM) and
ecological rationality from psychology, and field and lab experiments from economics. The environmental policy
applications of primary interest to us are traditional valuation tasks, but the concept is easily extended to include less
traditional normative decision making. The methodological objective of VXs is to combine the strengths of the artefactual
controls of laboratory experiments with the naturalistic domain of field experiments or direct field studies. This should
provide tools for policy analysis and research on decision making that combine the inferential power of replicable
experimental treatments with the natural ‘‘look and feel’’ of a field domain.
We start by reviewing the technological frontier provided by VR (Section 2). That general review is then related to some
major issues in environmental economics (Section 3), and then finally illustrated in an application to wildfire risk
management (Section 4). One surprising theme for economists is that applications of this technology stress the importance
of having an underlying model that simulates the natural processes defining the environment: this is not just cool graphics
$ Detailed instructions, raw data, and statistical files may be accessed from http://exlab.bus.ucf.edu and high-resolution images from http://www.bus.ucf.edu/erutstrom/research/fireimages.htm.
* Corresponding author. Fax: +1 253 830 7636.
E-mail addresses: [email protected] (S.M. Fiore), [email protected] (G.W. Harrison), [email protected] (C.E. Hughes), [email protected] (E.E. Rutström).
0095-0696/$ - see front matter © 2008 Elsevier Inc. All rights reserved.
doi:10.1016/j.jeem.2008.08.002
untethered to the laws of nature. Indeed, this connection is one of the methodological insights from this frontier, requiring
attention to how psychologists define a ‘‘naturalistic’’ decision-making environment (Section 2.3).
The frontier we examine also has relevance for broader debates in economics, beyond the applications to environmental
economics. It is now well known and accepted that behavior is sensitive to the cognitive constraints of participants. It has
been recognized for some time that field referents and cues are essential elements in the decision process, and can serve to
overcome such constraints, e.g., [50], even if there are many who point to ‘‘frames’’ as the source of misbehavior from the
perspective of traditional economic theory [38]. The concept of ‘‘ecological rationality’’ captures the essential idea of those
who see heuristics as potentially valuable decision tools [24,57]. According to this view, cognition has evolved within
specific decision environments. If that evolution is driven by ecological fitness then the resulting cognitive structures, such
as decision heuristics, are efficient and accurate within these environments. But they may often fail when applied to new
environments.
At least two other research programs develop similar views. Glimcher [25] describes a research program, following Marr
[45], which argues for understanding human decision making as a function of a complete biological system rather than as a
collection of mechanisms. Viewing the decision maker as a biological system, he sees decision-making functions as having evolved to fit specific
environments. Clark [20] sees cognition as extending beyond not just the brain but the entire human body, defining it in
terms of all the tools used in the cognitive process, both internal and external to the body. Field cues can be considered
external aspects of such a process. Behavioral economists are paying attention to these research programs and what they
imply for the understanding of the interactions between the decision maker and his environment. For our purposes here,
this means we have to pay careful attention to the role of experiential learning in the presence of specific field cues and
how this influences decisions.
The acceptance of the role of field cues in cognition provides arguments in favor of field rather than lab experiments
[30]. Where better than in field experiments to study decision makers in their natural environment, using the field cues that
they have come to depend on? We challenge this view, however, if it is taken to argue that the laboratory environment is
necessarily unreliable [41]. While it is true that lab experiments traditionally use artefactual and stylized tasks that are free
of field cues, in order to generate the type of control that is seen as essential to hypothesis testing, field experiments have
other weaknesses that a priori are equally important to recognize [26]. Most importantly, the ability to implement
necessary controls on experimental conditions in the field is much more limited than in the lab, as is the ability to
implement many counterfactual scenarios. In addition, recruitment is often done in such a way that it is difficult to avoid
and control for sample selection effects; indeed, in many instances the natural process of selection provides the treatment
of interest, e.g., [31]. However, this means that one must take the sample with all of the unobservables that it might have
selected on, and just assume that they did not interact with the behavior being measured. Finally, the cost of generating
observational data can be quite significant in the field, at least in comparison to that cost in the lab.
For all these reasons we see lab and field experiments as complementary, a persistent theme of Harrison and List [30].
A proper understanding of decision making requires the use of both. While lab experiments are better at generating
internal validity, imposing the controlled conditions necessary for hypothesis testing, field experiments are better at
generating external validity, including the natural field cues.
We propose a new experimental environment, the virtual experiment, which has the potential of generating both the
internal validity of lab experiments and the external validity of field experiments. A VX is an experiment set in a controlled
lab-like environment using either typical lab or field participants that generates synthetic field cues using virtual reality
technology. The experiment can be taken to typical field samples, such as experts in some decision domain, or to typical lab
samples, such as student participants. The VX environment will generate internal validity since it is able to closely mimic
explicit and implicit assumptions of theoretical models, and thus provide tight tests of theory; it is also able to replicate
conditions in past experiments for robustness tests of auxiliary assumptions or empirically generated hypotheses. The VX
environment will generate external validity because observations will be made in an environment with cues mimicking
those occurring in the field. In addition, any dynamic scenarios can be presented in a realistic and physically consistent
manner, making the interaction seem natural for the participant. Thus the VX builds a bridge between the lab and the field,
allowing the researcher to smoothly go from one to the other and see what features of each change behavior. VX is a
methodological frontier enabling new levels of understanding via integration of laboratory and field research in ways
not previously possible. Echoing calls by others for such an integration, we argue that ‘‘research must be conducted in
various settings, ranging from the artificial laboratory, through the naturalistic laboratory, to the natural environment
itself’’ [34, p. 343].
Two necessary requirements for a successful VX are ‘‘presence’’ and ‘‘coherency.’’ Presence is the degree to which
participants have a sense of ‘‘being there.’’ When participants are present, their sensory inputs are dominated by those
generated in the VR environment. We will use the term ‘‘naturalistic’’ environment for a synthetically generated
environment that induces presence, to contrast it with genuinely natural or artefactual non-synthetic environments.
Compared to textually and pictorially generated descriptions in artefactual environments, VX includes dynamically
generated experiences. For presence to occur it is important that these experiences are physically and scientifically
coherent. Therefore, a scientifically accepted model generating the temporal sequence of cues must underlie the VR
environment experienced by the decision maker.
The potential applications for VX are numerous. Apart from simulating actual policy scenarios, such as the wildfire
prevention policies investigated here, it can also be used to mimic environments assumed in a number of field data
analyses. For example, popular ways of estimating valuations for environmental goods include the Travel Cost Method
(TCM), the Hedonic Pricing Method (HPM), and the Stated Choice Method (SCM). To mimic TCM the simulation can present
participants with different travel alternatives and observe which ones are chosen under different naturalistic conditions.
To mimic HPM the simulation can present participants with different real estate options and observe purchasing behavior,
or simply observe pricing behavior for alternative options [18]. Finally, to mimic SCM, participants can experience the
different options they are to choose from through naturalistic simulation. For all of these types of scenarios, some of the
most powerful applications of VX will involve continuous representations of dynamically generated effects of policy
changes. Visualizing and experiencing long-term effects correctly should improve short-run decisions with long-run
consequences.
In our application to wildfire prevention policies we use actual choices by subjects, choices that carry real economic
consequences. We present participants with two options: one simply continues the
present fire prevention policies, and the other increases the use of prescribed burns. Participants get to experience two fire
seasons under each policy and are then asked to make a choice between them. The scenario that simulates the continuation
of the present fire prevention policies will realistically generate fires that cause more damage on average and that also vary
substantially in intensity. This option therefore presents the participant with a risky gamble with low expected value (EV).
The alternative option presents a relatively safe gamble with a higher expected value, but there will be a non-stochastic
cost involved in implementing the expansion of prescribed burns. It is possible in VX to set the payoff parameters in such a
way that one can estimate willingness to pay (WTP) for the burn expansion option that is informative to actual fire policy.
These values of WTP could then be compared to those generated through the popular Contingent Valuation Method (CVM)
to test the hypothesis that they should be different. Alternatively, it is possible to manipulate the payoff parameters in such
a way that one estimates parameters of choice models such as risk attitudes, loss aversion, and probability weights. We
estimate structural choice models and compare the responses in VX to those made using more standard representations of
consequences using still pictures.
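The structure of this choice task can be illustrated with a small numerical sketch. All parameters below (the probabilities, payoffs, burn cost, and CRRA coefficient) are hypothetical illustrations, not the experiment's actual design; the point is only that a risk-averse expected-utility maximizer can prefer the safer prescribed-burn lottery even after its sure implementation cost is deducted.

```python
import math

def crra_utility(x, r):
    """CRRA utility u(x) = x**(1 - r) / (1 - r), with log utility at r = 1."""
    return math.log(x) if r == 1.0 else x ** (1.0 - r) / (1.0 - r)

def expected_utility(lottery, r, sure_cost=0.0):
    """lottery is a list of (probability, payoff) pairs; sure_cost is a
    non-stochastic deduction, like the cost of expanding prescribed burns."""
    return sum(p * crra_utility(x - sure_cost, r) for p, x in lottery)

# Hypothetical parameters only -- not the experiment's actual payoffs.
status_quo = [(0.5, 100.0), (0.5, 20.0)]   # risky gamble, lower expected value
burn_policy = [(0.8, 90.0), (0.2, 60.0)]   # safer gamble, higher expected value
burn_cost = 10.0                            # sure cost of expanding the burns

r = 0.5  # moderate relative risk aversion
prefers_burn = (expected_utility(burn_policy, r, burn_cost)
                > expected_utility(status_quo, r))
```

Varying the sure cost until the participant is indifferent between the two lotteries recovers a certainty-equivalent measure of willingness to pay for the burn expansion.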
In summary, we review and illustrate the use of VX as a new tool in environmental economics, with an emphasis on the
methodological issues involved. In Section 2 we introduce VR to economists with lessons for the design of VX. We also
discuss what can be learned by comparing behavior by experts and non-experts in a VX. In Section 3 we review the state of the art in using visualization technologies in applications to environmental economics. Section 4 then presents a case
study in which these techniques are applied to the assessment of the consequences of wildfire risks and fire management
options. Section 5 draws conclusions.
2. Virtual environments
2.1. Virtual reality
In a broad sense, a VR is any computer-mediated synthetic world that can be interactively experienced through sensory
stimuli. The range of senses supported nearly always includes sight and very often hearing. Touch, smell, and taste are less
often supported, unless required by the VR system’s application. For example, touch, or haptic feedback, is often provided
when using VR to develop surgical skills, e.g., [14,56].
Experiencing a VR requires that the user has some means to navigate through the virtual world. Although not strictly
required, such experiences typically allow interaction, and not just observation, such that the actions of the user affect the
state of the virtual environment. For all cases of allowed interaction, including navigation, the actions of the user must
result in a response that occurs as quickly as in the physical world. Thus, navigation should be smooth and natural. Where
interaction is allowed, the action of a user must result in a reaction in the virtual world that is appropriate, within the
framework of the virtual world’s model of truth (its physics, whether real or fanciful), and as responsive as interaction
would be with the virtual object’s real world counterpart, if such a counterpart exists.
An immersive VR refers to one that dominates the affected senses. As VR is typically visual, the primary sense of vision is
often dominated by having the user wear a head-mounted display (HMD) or enter a CAVE (a ‘‘box’’ in which the
surrounding visual context is projected onto flat surfaces). The user’s position and orientation are then tracked so that the
visual experience is controlled by normal user movement and gaze direction. This supports a natural means of navigation,
as the real movements of the user are translated into movements within the virtual space.1
The realistic navigation paradigm described above works well within constrained spaces and simple movements, but is
not as appropriate for motions such as rising up for a god’s eye view and then rapidly dropping to the ground for a more
detailed view. There are HMD (see Fig. 1) and CAVE-based solutions here, but such experiences are more often provided
with either a dome screen, illustrated in Fig. 2, or a flat panel used to display the visual scene. Since dome screens and flat
panels are clearly less immersive than HMDs and CAVEs, experiences that are designed for these lower-cost solutions need
1. There is a difference between a Virtual World (VW) that is online and VR environments, although there are many similarities in terms of the
objectives of rendering an environment that is naturalistic in some respects. There have been some extreme statements about the differences between VR
and VW environments by those seeking to promote VW environments, such as Castronova [17, pp. 286–294]. These online worlds can quickly stir the
imagination of experimenters, but pose significant problems for implementing critical controls on participants and the environment. Considerable work is
needed before useful and informative experiments can be run in these environments.
Fig. 1. Head-mounted display.
Fig. 2. A dome screen.
to employ some means to overcome the loss of physical immersion and therefore presence. This is generally done through
artistic conventions that emotionally draw the user into the experience. In effect, the imagination of the user becomes one
of the tools for immersion [59]. This is somewhat akin to the way a compelling book takes over our senses to the point that
we ‘‘jump out of our skins’’ when a real world event occurs that seems to fit in with the imagined world in which we find
ourselves. Essentially, a successful VR leads to what is referred to in literature and theater as a ‘‘willing suspension of
disbelief’’ [53].
In the experiment we describe later, experimental participants interact with three-dimensional (3D) simulations using a
mouse, a keyboard, and a large, flat panel monitor.
2.2. Virtual trees, forests, and wild fires
Our case study considers wildfire prevention policies, and requires significant attention to the modeling of virtual trees,
forests, and fires. A number of methods have been proposed for generating virtual plants with the goal of visual quality
without relying on botanical knowledge. Oppenheimer [49] used fractals to design self-similar plant models. Bloomenthal
[13] assumed the availability of skeletal tree structures and concentrated on generating tree images using splines and
highly detailed texture maps. Weber and Penn [64] and Aono and Kunii [2] proposed various procedural models. Chiba
et al. [19] utilized particle systems to generate images of forest scenes.
These and other approaches have found their way into commercial products over the last decade. When the degree of
reality that is needed can be provided without uniqueness, and where dynamic changes are limited, virtual forests are
generally created using a limited number of trees, with instances of these trees scaled, rotated, and otherwise manipulated
to give the appearance of uniqueness. For our case study, we have selected a commercial library of trees and its associated
renderer, SpeedTree.2
Computer visualizations of fire and smoke are active areas of research: for example, Balci and Foroosh [7], Adabala and
Hughes [1], Nguyen et al. [47], and Stam [58]. Some of these efforts focus on physical correctness and others on real-time
performance with visually acceptable results. Our goal is the latter since we require the fire to cover a large area, with no
constraints on the point-of-view from which the user views the fire (close-up or far-away, in the air or on the ground). The
need for interactivity drives much of what we do, so long as this goal does not interfere with the primary requirement that
the experience be perceived as realistic.
Our approach to visualization of the fire and its attendant effects of illumination, charring, denuding, and smoke is
centered on computing the illumination of an area. Our lighting model is based on the time-of-arrival of the fire in a given
cell of the area being modeled. The entire area is broken up into 30 m × 30 m cells, consistent with the fire spread model we
are using (see Section 4.2). For the terrain appearance, the illumination is determined by a simple Gaussian falloff function.
In effect, we use a normal distribution with its peak at the time-of-arrival. Thus, the terrain starts to light up prior to the
arrival of the fire and retains a glow for a period of time after the fire leaves. Its brightest red tint occurs when the fire
first arrives. This arrival is accompanied by the display of flames, the height of which is determined by the state of the
fire (ground or crown). The burned region is computed in a similar manner, but there is an inverse squared falloff function
determining the darkness of the charred remains.
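This lighting model can be sketched in a few lines. The code below is our illustrative reconstruction, not the authors' implementation: the spread parameter sigma and the rate constant k are assumed values. The glow of a terrain cell is a Gaussian in time centered on the fire's time-of-arrival, while char darkness follows an inverse-square falloff after arrival.

```python
import math

def terrain_glow(t, t_arrival, sigma=0.5):
    """Gaussian falloff around the fire's time-of-arrival (hours): the cell
    brightens before the fire arrives, peaks at arrival, and fades after.
    sigma is an assumed spread parameter, not a value from the paper."""
    return math.exp(-((t - t_arrival) ** 2) / (2.0 * sigma ** 2))

def char_darkness(t, t_arrival, k=1.0):
    """Inverse-square falloff toward full char after the fire passes:
    0.0 before arrival, approaching 1.0 (fully charred) afterwards.
    k is an assumed rate constant."""
    if t < t_arrival:
        return 0.0
    return 1.0 - 1.0 / (1.0 + k * (t - t_arrival)) ** 2

# A cell whose fire arrives at t = 2.0 h is at peak brightness exactly then.
peak = terrain_glow(2.0, 2.0)   # 1.0
```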
The illumination of the trees and leaves is done in a manner nearly identical to that of the terrain, but the distance
relative to the ground attenuates the effect of the light from the fire. Whether or not the burn causes a crown
fire determines whether the leaves disappear after the burn is complete.
Fire and smoke particles appear if the ground illumination is above a certain threshold. New particles are spawned by
selecting one emitting cell (i.e., a cell on fire) with uniform probability. The list of emitting cells is recomputed at each tick
of simulated time, where an hour is currently partitioned into 300 ticks. Smoke and fire particles have fixed lifetimes of
20 and 3 s, respectively.
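A minimal sketch of this spawning rule, under assumed data structures (the grid is a mapping from cell coordinates to illumination, and particles are plain records; neither is from the authors' code):

```python
import random

TICKS_PER_HOUR = 300                         # one simulated hour = 300 ticks
SECONDS_PER_TICK = 3600.0 / TICKS_PER_HOUR   # 12 s of simulated time per tick
SMOKE_LIFETIME_S = 20.0                      # fixed particle lifetimes
FIRE_LIFETIME_S = 3.0

def spawn_particles(grid_illumination, threshold, now_s):
    """Recompute the emitting cells for this tick and spawn one smoke and one
    fire particle at a uniformly chosen emitting cell.  The dict-based grid,
    the threshold value, and the particle records are illustrative."""
    emitting = [cell for cell, glow in grid_illumination.items()
                if glow >= threshold]
    if not emitting:
        return []
    cell = random.choice(emitting)           # uniform over cells on fire
    return [
        {"kind": "smoke", "cell": cell, "dies_at": now_s + SMOKE_LIFETIME_S},
        {"kind": "fire",  "cell": cell, "dies_at": now_s + FIRE_LIFETIME_S},
    ]
```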
The required performance for the above visual effects would be hard to achieve in a purely CPU-based solution, so the
illumination is actually handled using the graphics-processing unit. This means that our implementation requires a
relatively high-end graphics card. At present, we get acceptable performance on an NVIDIA 7900 GT card and good
performance on an NVIDIA 8800 GTX card. Of course, references to hardware performance reflect what can be purchased
on readily available systems as of mid-2007.
2.3. VR and expert decision making
A future extension of the methodology of VX is to compare the decisions made by experts and non-experts.
The decision-making process of expert decision makers has been studied by researchers in the field of naturalistic decision
making [39,54]. The term ‘‘naturalistic’’ here refers to the desire to study these decision makers in their natural
environment to learn how the decisions depend on naturally occurring cues. Experts are particularly good at making
efficient and accurate decisions in their field of expertise under dynamic conditions of time pressure and high stakes.
Crucial elements in such decision making are contextual cues, pattern recognition, and template decision rules. Contextual
cues are critical to this pattern-matching process in which the expert engages in a type of mental simulation. Theories
emerging out of this literature suggest that experts employ forward-thinking mental simulations in order to ensure that the
best option is selected. We hypothesize that the VR environment can be used to generate cues that are sufficiently natural
and familiar that decisions will be significantly more like those that would be generated in the field with sufficient
expertise.
Comparisons of experts and non-experts are important because in many economic policy issues the preferences and
recommendations expressed by these two groups are often in conflict. Nevertheless, both provide important inputs into
many policy-making processes. The conflict that arises between the two sets of recommendations is at least partially due to
differences in the perceptions of the problem at hand, and we hypothesize that generating experiences through VR may
serve to generate converging perceptions and therefore also recommendations.
2. SpeedTree is a product sold by Interactive Data Visualization, Inc. (http://www.speedtree.com/).
3. The value-added of VXs
The main use of VX is to generate counterfactual dynamic scenarios with naturalistic field cues and scientific realism.
This contrasts sharply with the standard presentation frames of the mainstay technology of environmental valuation.
The most popular method is the CVM, which presents respondents with textual and pictorial descriptions of hypothetical
counterfactual scenarios. A CVM presents information in a static manner, providing cues that are more artefactual than the
dynamic cues provided in VX. Due to the presentation frame, a CVM is also weaker in terms of scientific realism, for
the simple reason that such realism is often quite complex and difficult to convey in text form. Further, in VX the
information acquisition process is active rather than passive: the respondent chooses what to view and how to view it by
moving through the environment. In the case of forest fires, for example, respondents can choose whether to view the fire
during a fly-over or by walk-through, from a ‘‘safe’’ distance close to the burning forest.
3.1. Frontier of visualization in environmental valuations
Many environmental valuation techniques provide respondents with some visualization of the choice options.
VX provides a rich extension to such practices by adding dynamic rendering over time, properly constrained by scientific
theories that predict the future paths, possibly in a stochastic manner.
Bateman et al. [8] and Jude et al. [37] use visualization methods for coastal areas to design hypothetical choice tasks for
the respondent, with two basic designs. In the first respondents are shown two-dimensional (2D) photo-realistic static
side-by-side images of ‘‘before and after’’ scenarios. In the second they are shown 3D visualizations, in which they can do fly-overs of the landscapes, though with less realism. The idea in this method is to import geographic information system
(GIS) data for a naturally occurring area, render that area to the respondent in some consistent manner, and then use a
simulation model to generate a counterfactual land use. Fig. 3 shows some of the introductory stimuli presented to their
respondents, to help locate the property on a map, and to have a photographic depiction of the property. Fig. 4 shows the
critical ‘‘before and after’’ comparisons of the property, using their VR rendering technology. The top panel in Fig. 4 shows
the property as it is now, from three perspectives. The bottom panel in Fig. 4 shows the property in the alternative land use,
rendered from the same perspective. In this manner the respondent can be given a range of counterfactual alternatives to
consider. From interviews with coastal management organizers that were presented with these visualizations, Bateman
et al. [8] conclude that VR adds value over traditional methods.
Using VX extends this methodology by allowing participants to manipulate the perspective of the visual information
even further. Participants can wander around the property and examine perspectives that interest them. Some people
like the perspective of a helicopter ride, some like to walk on the ground, and most like to be in control of where
they go.3 It is desirable not to ‘‘force-feed’’ participants with pre-ordained perspectives, so that they can search and find
the cues that are the most valuable to their decision-making process. Of course, this puts a much greater burden
on the underlying VR-rendering technology to be able to accommodate such real-time updates, but this technology is
available (as modern gaming illustrates). To an economist it also raises some interesting questions of statistical
methodology, in which participants self-select the attributes of the choice set to focus on. There is a rich literature
on choice-based sampling, and such methods will be needed for environments that are endogenously explored, e.g., see
[15, Section 14.5].
In fact, such extensions have in part been undertaken. Bishop [10] and Bishop et al. [11] allowed their respondents to
visualize a recreational area in Scotland using 3D rendering, and walk virtually wherever they wanted within that area.
They recorded the attributes of the locations the respondents chose, and then modeled their choices statistically. The
attributes of interest to them, such as how many pixels are in the foreground trees as compared to the background trees,
are not of any interest to environmental economists, and we have our own way of modeling latent choices such as these,
but the methodological contribution is clear. Moreover, these exercises were undertaken as one part of a broader
conception of what we would call a VX; for example, see [12].
One general elicitation problem that is common to many environmental issues is the temporally latent nature of the
impacts of choices made today. One example with particularly long-run consequences would be the simulation of the
effects of failing to mitigate the risk of global warming. Many researchers are providing numerical projections of these
effects, but it has been a standard complaint from advocates of the need for action that people find it hard to comprehend
the nature or scope of the possible consequences. If we have access to VR systems that can display forests, for example, and
we have projections from climate change models of the effects of policies on forests, then one can use the VR setting to
‘‘fast forward’’ in a manner that may be more compelling than numbers (even monetary numbers).4 The need for a
simulation model of the economic effects of climate change is stressed in the Stern Review on the Economics of Climate
3. In VR one important distinction is between egocentric and exocentric views. For our purposes, the egocentric view is most like the walk-through perspective and the exocentric view is most like the fly-over. These distinctions are explored in order to understand their impact on general presence in the environment, as well as the impact on performance in a variety of tasks [21].
4. An equally interesting possibility might be to turn the ecological clock backward, allow participants to change past policies, and then run the clock
forward conditional on those policies instead of the ones that were actually adopted. Then the participant can see the world as it is now and compare it to
what it might have been, perhaps with greater salience than some notional future world (however beautifully rendered).
Fig. 3. Introductory displays from Bateman et al. [8].
Fig. 4. GIS rendering of alternative land use, from Bateman et al. [8].
Change [60, p. 253]. Of course, one positive side-benefit of undertaking this activity properly would be a reflection of the
uncertainty over those future predictions.
Finally, the immersive capacity of a full-scale VX is much greater than that in any of these studies, especially if dome
screens or HMDs are used. Since the immersive capacity depends crucially on the synchronization of the actions by
the participant and the response of the simulation, the amount of real-time updating that is required can be quite large.
This requirement increases substantially with the amount of endogenous change that the environment undergoes in
real time. Fast computers and high-end graphics cards will be essential for these exercises, but the rapidly increasing power
of commodity processors and graphics cards makes this cost less important over time, as the performance of today’s high-end machines becomes the standard for next year’s entry-level systems.
The potential value-added of including VR in the information given to participants is clear. Nevertheless, many problems
that are present in traditional valuation exercises are not resolved by the use of VR, and in some cases, responses may
become even more sensitive to the presence of these problems. We discuss some of these issues next.
3.2. Scenario rejection: the ‘‘R’’ in VR
For some valuation scenarios the counterfactuals may produce perceptually similar consequences. In order to generate
greater statistical power, researchers may then simulate physical or economic changes that are not consistent with
scientific predictions. That is, the counterfactual scenarios may not make sense from a physical and economic perspective.
Violating such constraints, however, can lead participants to ‘‘scenario rejection.’’5 This problem is bound to be especially
severe in situations with which the participant is familiar, such as when expert decision makers are put into VR
simulations. Underlying the rendering of the visual cues there must therefore be a scientifically consistent simulation
model: in effect, putting the ‘‘R’’ into VR.
Another reason for ‘‘scenario rejection’’ may be an inappropriate, or vaguely stated, choice process. This is not a problem
specific to the VR setting but occurs for all valuation instruments. We can imagine the participant asking himself: Why am I
being asked to choose between these two options? Is this a referendum vote? If so, am I the pivotal voter? Or am I the King
(it is good to be King), who gets to say what the land use will be no matter what others think? Every one of these questions
needs some answer for the responses to be interpretable [29]. In addition, it is important to remove auxiliary motivations
for specific answers, such as moral approval by the experimenter. Carson et al. [16] and Krosnick et al. [40] report on a
replication and extension of an earlier Exxon Valdez oil spill survey. They find that, when the response format is completely
anonymous in the sense that it is clear the experimenter cannot identify the response of any individual, the vote in favor of
the policy proposal presented drops by a significant amount. Specifically, they considered naturalistic features of the voting
environment in America, in which respondents were given a ballot box in which to place their private vote, and had the option of
simply not voting. The interaction of these treatments had a dramatic effect on willingness to vote for the referendum, such
that the implied valuation for the Exxon Valdez clean-up policy plummeted [27]. One could readily imagine such referenda
occurring in Second Life. The general point is that such naturalistic features of a task environment are exactly the sorts of
things that VR can handle well, and make the environment less artefactual. Specifically, since the counterfactual scenarios
can be fully played out, a VX provides the opportunity to use real consequences as a way to make the choice process salient.
4. Application: evaluating the risks of wildfires
The policy context we consider as an application of VXs is the assessment of how individuals evaluate the risks and
consequences of forest fire. Recent policy of the US Forest Service has been to undertake prescribed burns as a way of
reducing the ‘‘fuel’’ that allows uncontrolled fires to become dangerous and difficult to contain, although there has been
considerable variation in policy toward controlling wildfire over the decades [51]. Additionally, many private citizens
and local communities undertake such prescribed burns for the same purpose. The benefit of a prescribed burn is the
significant reduction in the risk of a catastrophic fire; the costs are the annoyance that smoke causes to the local
population, along with the potential health risks, the scarring of the immediate forest environment for several years, and
the low risk of the burn becoming uncontrolled. So the policy decision to proceed with a prescribed burn is one that
involves balancing uncertain benefits against uncertain costs. These decisions are often made by individuals and groups
with varying levels of expertise and inputs, often pitting political, scientific, aesthetic, and moral views against each other.
Decisions about prescribed burns are not typically made directly by residents. Ultimately, however, residents exert
significant influence on the policy outcome through the political process. The use of prescribed burns as a forest
management tool is extremely controversial politically, and for understandable reasons.6 Many homeowners oppose
prescribed burns if they are too close to their residences, since it involves some risk of unplanned damage and ruins the
aesthetics of the area for several years. Conversely, other residents want such burns to occur because of the longer-term
benefits of increased safety from catastrophic loss, which they are willing to balance against the small chance of localized
loss if the burn becomes uncontrolled. And there are choices to be made about precisely where the burns should go, which
can have obvious consequences for residents and wildlife. Thus, the tradeoffs underlying household decisions on whether
to support prescribed burns involve their risk preferences, the way they trade-off short-term costs with long-term gains,
their perception of the risks and aesthetic consequences of the burns, and their view of the moral consequences of animal
life lost in human-prescribed burns versus the potentially larger losses in uncontrolled fires.
The policy context of this application has most of the characteristics that one finds in other environmental policy
settings. The standard economic model of decision making, Expected Utility Theory (EUT), suggests that decisions are
determined by the subjective valuation of each possible outcome, perceptions of the value of outcomes over time as well as
their subjective discounted values, perceptions of the extent and distribution of the risk, and subjective risk preferences.
As discussed earlier, NDM has shown that experts employ contextual cues, pattern recognition, and template decision rules
in a very efficient manner, engaging in forward-thinking mental simulations to choose the best option. We conjecture that
non-experts may learn how to judge risks through synthetic VR experiences similar to the real experiences of experts.

5. This term is an unfortunate shorthand for situations in which the participant views the scenario presented in an instrument as incredible for some reason, and hence rejects the instrument. There is an important semantic problem, though. If ‘‘incredible’’ just means ‘‘with 0 probability,’’ then that is something that deserves study and can still lead to rational responses. There is nothing about a zero probability that causes expected utility theory to be invalid at a formal level. But from an experimental or survey perspective, the term refers to the instrument itself being rejected, so that one does not know whether the participant has processed the information in it. This semantic issue is important for our methodology, since we are trying to mitigate scenario rejection in artefactual instruments. Hence we do not want to view it as an all-or-nothing response, as it is in some of the literature on environmental valuation using surveys.
6. The notion of a ‘‘prescribed burn’’ also includes allowing a naturally occurring fire to spread but in a controlled manner. Some of the most controversial wildfires have resulted from such fires not being subject to a clear plan for control before the fire began. Two particularly well-known fires that got out of control have given prescribed burns a bad name in some circles: the 1988 Yellowstone fire and the 2000 Los Alamos fire.
4.1. Representation as a policy lottery
Forest fire management options such as a prescribed burn can be viewed as choices with uncertain outcomes. In the
language of economists these options are ‘‘lotteries’’ or ‘‘prospects’’ that represent a range of final outcomes, each with
some probability, and hence can be viewed as a policy lottery. One outcome in this instance might be ‘‘my cabin will not
burn down in the next 5 years’’ or ‘‘my cabin will burn down in the next 5 years.’’ Many more outcomes are obviously
possible: these are just examples to illustrate concepts. Unlike decisions involving actual lottery tickets, a policy
lottery has outcomes and probability distributions that are not completely known to the agent. We therefore expect the
choices to be affected by individual differences not just in risk preferences, but also in risk perceptions.
The canonical laboratory choice task that experimental economists use to identify risk preferences in a setting like this
is to present individuals with simple binary choices. One choice option might be a ‘‘safe’’ lottery that offers outcomes
that are close to each other but different: for example, $16 and $20. Another choice option would then be a ‘‘risky’’ lottery
that offers outcomes that are more extreme: for example, $1 or $38.50. If each outcome in each lottery has a 1/2 chance
of occurring, and the decision maker is to be paid off from a single realization of the lottery chosen, then one cannot say
which is the better choice without knowing the risk preferences of the decision maker. In this example a risk-neutral
individual would pick the risky lottery, since the EV is $19.75, compared to the EV of $18 for the safe lottery. So someone
observed picking the safe lottery might just be averse to risk, such that the expected increment in value of $1.75 from
picking the risky lottery is not enough to compensate for the chance of having such a wide range of possible outcomes.
A risk-averse person is not averse to having $38.50 over $1, but just to the uncertain prospect of having one or the other
prior to the uncertainty being realized.
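The expected-value arithmetic in this example is easy to verify, and the role of risk aversion can be made concrete with a short sketch. The CRRA utility function below is one common parametric choice in this literature; it is used here purely for illustration and is not the paper's estimated specification:

```python
import math

def expected_value(outcomes, probs):
    return sum(x * p for x, p in zip(outcomes, probs))

safe = ([16.0, 20.0], [0.5, 0.5])    # "safe" lottery from the text
risky = ([1.0, 38.50], [0.5, 0.5])   # "risky" lottery from the text

ev_safe = expected_value(*safe)      # $18.00
ev_risky = expected_value(*risky)    # $19.75

# CRRA utility u(x) = x^(1-r)/(1-r); r > 0 indicates risk aversion.
# An illustrative functional form, not the only one used in the literature.
def crra(x, r):
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

def expected_utility(lottery, r):
    outcomes, probs = lottery
    return sum(crra(x, r) * p for x, p in zip(outcomes, probs))

# A risk-neutral agent (r = 0) prefers the risky lottery, but a
# sufficiently risk-averse agent (e.g. r = 0.8) prefers the safe one.
assert expected_utility(risky, 0.0) > expected_utility(safe, 0.0)
assert expected_utility(risky, 0.8) < expected_utility(safe, 0.8)
```

The $1.75 expected-value premium on the risky lottery is thus not enough to attract an agent with r = 0.8, which is the pattern the text describes.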
Armed with a sufficient number of these canonical choice tasks, where the final outcomes are monetary payments and
the probabilities are known, economists can estimate the parameters of alternative latent choice models. These models
generally rely on parametric functional forms for key components, but there is now a wide range of those functional forms
and a fair understanding of how they affect inferences (e.g., see [32] for a review). Thus one can observe choices and
determine if the individual is behaving as if risk neutral, risk averse, or risk loving in this situation. And one can further
identify what component of the decision-making process is driving these risk preferences: the manner in which final
outcomes are valued by the individual, and/or the manner in which probabilities are transformed into decision weights.
There remains some controversy in this area, but the alternative approaches are now well established.
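A minimal sketch of how such a latent choice model can be estimated from binary choices is given below. The CRRA functional form, the logistic link from utility differences to choice probabilities, and the noise parameter mu are illustrative assumptions, not the specification used in the studies cited:

```python
import math

def crra(x, r):
    # CRRA utility; r is the coefficient of relative risk aversion.
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

def eu(lottery, r):
    # lottery is a list of (outcome, probability) pairs.
    return sum(p * crra(x, r) for x, p in lottery)

def choice_prob_risky(safe, risky, r, mu=0.1):
    # Logistic (Fechner-style) link: noisier choices as mu grows.
    d = (eu(risky, r) - eu(safe, r)) / mu
    return 1.0 / (1.0 + math.exp(-d))

def log_likelihood(choices, r):
    # choices: list of (safe, risky, chose_risky) observations.
    ll = 0.0
    for safe, risky, chose_risky in choices:
        p = choice_prob_risky(safe, risky, r)
        ll += math.log(p if chose_risky else 1.0 - p)
    return ll

# Toy data: a subject who always takes the safe lottery should be
# estimated as more risk averse than one who always takes the risky one.
pair = ([(16.0, 0.5), (20.0, 0.5)], [(1.0, 0.5), (38.5, 0.5)])
cautious = [(pair[0], pair[1], False)] * 10
bold = [(pair[0], pair[1], True)] * 10

grid = [i / 100.0 for i in range(0, 96, 5)]  # crude grid search over r
r_cautious = max(grid, key=lambda r: log_likelihood(cautious, r))
r_bold = max(grid, key=lambda r: log_likelihood(bold, r))
assert r_cautious > r_bold
```

In practice these models are fit by maximum likelihood over many subjects and choice pairs, often jointly with a probability-weighting function; the grid search above only illustrates the logic of recovering r from observed choices.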
Our research may be viewed as considering the manner in which the lottery is presented to respondents, and hence
whether that has any effect on the way in which they, in turn, perceive and evaluate it. Outcomes in naturally occurring
environments are rarely as crisp as ‘‘you get $20 today,’’ and the probability of any single outcome is rarely as simple and
discrete as 1/2, 1/3, 1/4, or 1/8.
One difficulty with using more natural counterparts to prizes or probabilities is that control is lost over the underlying
stimuli, with consequent loss of internal validity. This makes it hard, or impossible, to identify and estimate latent choice
models. The advantage of VX is that one can smoothly go where no lab experiment has gone before: while using stimuli
that are more natural to the participant and the context, the experimenter has complete control over the actual
probabilities and outcomes.
4.2. Affecting perceived probabilities
One of the things that is immediately apparent when working with VR, but unexpected to outsiders, is the ‘‘R’’ part.
Mimicking nature is not a matter left solely to the imagination. When a participant is viewing some scene, it is quite easy to
generate images in a manner that appears unusual and artefactual. That can occasionally work to one’s advantage, as any
artist knows, but it can also mean that one has to work hard to ground the simulation in certain ways. At one affectual level
there might be a concern with ‘‘photo-realism,’’ the extent to which the images look as if they were generated
by photographs of the naturally occurring environment, or whether there is rustling of trees in the wind if one is rendering
a forest.
We utilize software components that undertake some of these tasks for us. In the case of trees and forests, a critical part
of our application, we presently use SpeedTree to undertake most of the rendering. This commercial package requires that
one specify the tree species (including miscellaneous weeds), the dimensions of the tree, the density of the trees in a forest,
the contour structure of the forest terrain, the prevailing wind velocity, the season, and a number of other characteristics of
the forest environment. We discuss how this is done in a moment. This package is widely used in many games, including
Tiger Woods PGA Tour, Call of Duty 3, and of course numerous fantasy games. The forest scene in Fig. 5 illustrates the
capabilities we needed for a walk-through, and is from the game The Elder Scrolls IV: Oblivion. The image that is rendered
Fig. 5. SpeedTree rendering from The Elder Scrolls IV: Oblivion.
Fig. 6. Rendering of a forest from a distance.
allows for real-time updating as the participant moves through the forest, and handles light and shade particularly well
from this walking perspective. As the participant gets closer to a specific tree, it renders that in greater detail as needed.
One can also take a helicopter tour of a forest. Figs. 6 and 7 were each produced by our software that uses SpeedTree as
its underlying renderer. These are each scenes from the same forest used in our application, first from a distance, and then
close-up. Fig. 6 displays many thousands of trees, but is based on a virtual forest covering 18.8 square miles and over 112
million trees and shrubs. So the helicopter could travel some distance without falling off the end of this virtual earth. Fig. 7
is a close-up of the same forest. The point is that this is the same forest, just seen from two different perspectives, and the
‘‘unseen forest’’ would be rendered in a consistent manner if the user chose to go there.
This last point is not trivial. There is one underlying computer representation of the entire forest, and then the
‘‘window’’ that the participant looks at is rendered as needed. In the case of the sample forest shown in Figs. 6 and 7, the
rest of the forest is rather boring and repetitive unless you are a squirrel, but one can add numerous variations in
vegetation, topography, landmarks, and so forth. All of these factors may play important roles in determining the dynamics
of the VR experience.
To add such factors and their effects on the evolution of the scenarios, one links a separate simulation software package
to the graphical rendering software that displays the evolution of a landscape. Since we are interested in forest fires, and
how they spread, we can use one of several simulation programs used by professional fire managers. There are actually
many that are available, for different fire management purposes. We wanted to be able to track the path of a wildfire in real
time, given GIS inputs on vegetation cover, topography, weather conditions, points of ignition, and so on. Thus, the model
has to be able to keep track of the factors causing fires to accelerate, such as variations in slope, wind or vegetation, as well
as differences between surface fires, crown fires, and the effects of likely ‘‘spotting’’ (when small blobs of fire jump
discretely, due to the effects of wind or fleeing animals).
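FARSITE itself implements detailed fire-behavior physics, but the basic idea of spreading a fire over a gridded landscape in discrete time steps can be illustrated with a toy cellular automaton. The grid size, spread probabilities, and wind bias below are invented for illustration and bear no relation to FARSITE's actual model:

```python
import random

def spread_fire(width, height, ignition, steps, p_spread, wind_east=0.0, seed=7):
    """Toy fire spread on a grid: 0 = unburned, 1 = burning, 2 = burned out.
    wind_east is added to the spread probability for the eastern neighbor,
    crudely mimicking a prevailing wind. Purely illustrative."""
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]
    x0, y0 = ignition
    grid[y0][x0] = 1
    for _ in range(steps):
        newly_lit = []
        for y in range(height):
            for x in range(width):
                if grid[y][x] != 1:
                    continue
                # Try to ignite the four neighbors, east with an extra wind push.
                for dx, dy, p in ((1, 0, p_spread + wind_east), (-1, 0, p_spread),
                                  (0, 1, p_spread), (0, -1, p_spread)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] == 0:
                        if rng.random() < p:
                            newly_lit.append((nx, ny))
                grid[y][x] = 2  # this cell burns out after one step
        for nx, ny in newly_lit:
            grid[ny][nx] = 1
    return grid

# With certain spread (p = 1) and no wind, the fire burns a diamond of
# Manhattan radius 3 around the ignition point: 1 + 4 + 8 + 12 = 25 cells.
grid = spread_fire(width=20, height=20, ignition=(10, 10), steps=3, p_spread=1.0)
burned = sum(cell != 0 for row in grid for cell in row)
assert burned == 25
```

With p_spread below 1 and a positive wind_east, repeated runs produce stochastic fire footprints skewed toward the east; FARSITE's real model additionally tracks fuel moisture, slope, surface versus crown fire, and spotting, as described in the text.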
Fig. 7. Rendering of the same forest close-up.
Fig. 8. GIS input layers for FARSITE simulation of path of a fire.
For our purposes the FARSITE software due to Finney [23] is ideal. It imports GIS information for a given area and then
tracks the real-time spread of a fire ignited at some coordinate, say by lightning or arson. Fig. 8 shows the GIS layers needed
to define a FARSITE simulation. Some of these, such as elevation and slope, are readily obtained for most areas of the United
States. The ‘‘fuel model’’ and vegetation information (the last four layers) are the most difficult to specify. Vegetation
information can be obtained for some sites, although it should ideally be calibrated for changes since the original survey.7
The fuel model defines more precisely the type of vegetation that is present at the location. The most recent tabulation of
40 fuel models is documented by Scott and Burgan [55]. One formal type of vegetation is water, for example, which does
not burn. One of the other fuel models is shown in Fig. 9; photographs help one identify what is described, and then the
fuel model implies certain characteristics about the speed with which a simulated fire burns and spreads. In this case we
have an example of the type of fuel load that might be expected the year after hurricanes had blown down significant forest
growth, a serious problem for the wildfire cycle in Florida.
The output from a typical FARSITE simulation is illustrated in Fig. 10.8 The fire is assumed to be ignited at the point
indicated, and the path of the fire as of a certain time period is shown. The output from this simulation consists of a series
of snapshots at points in time defining what the location and intensity of the fire is. We can take that output and use it for
detailed visual rendering of the fire as it burns. The output from FARSITE is information on the spread of the forest fire, but
our rendering also uses the inputs to FARSITE to ‘‘set the stage’’ in a manner that is consistent with the assumptions used in
FARSITE. For example, we use the same topography and wind conditions as assumed in FARSITE. Thus the underlying
numerical simulation of the path and severity of the fire is consistent with the visual rendering to the decision maker.

Fig. 9. Illustrative fuel load model from Scott and Burgan [55].
Fig. 10. Illustrative FARSITE simulation.

7. This is an important feature of the data underlying the Florida Fire Risk Assessment System of the Division of Forestry of the Florida Department of Agriculture & Consumer Affairs, documented at http://www.fl-dof.com/wildfire/wf_fras.html.
8. This is an image taken from a GIS layer for Ashley National Forest, Utah, which is the geographic area that we simulate in the lab experiments reported here.
We can then take that output and render images of the forest as it burns. Using SpeedTree as the basic rendering module,
we have adapted its software for our needs. This has included the incorporation of other landscape components, smoke,
fire, and the effects that fire has on the appearance of the forest, both in altering illumination and causing damage.
The images in Fig. 11 display the evolution of a wildfire, initially from a distance, then close-up, and finally after the front of
the fire has passed. These images have been generated using SpeedTree and our own programming, based on the simulated
path of the fire from FARSITE, conditional on the landscape. Thus we have simulated a GIS-consistent virtual forest, with a
fire spreading through it in a manner that is consistent with one of the best models of fire dynamics that is widely used by
fire management professionals, and then rendered it in a naturalistic manner. The participant is free to view the fire from
any perspective, and we can track that; or we can force-feed a series of perspectives chosen beforehand, to study the effect
of allowing endogenous information accumulation.
The static images in Fig. 11 are actually snapshots of an animation that plays out in real time for the participant. This
dynamic feature of the evolution of the fire is one reason that a model such as FARSITE is needed, since it computes the path
and intensity of the fire, given the conditions of the environment. The dynamic feature also adds to the experience that the
participant has, in ways that can again be studied orthogonally in a VX design. In other words, it is a relatively simple
matter to study the effects of presenting static images versus a dynamic animation on behavioral responses.
4.3. The virtual experimental task
We now have all of the building blocks to put together the task that will be given to participants in our VX. The logic
of the instrument we have developed can be reviewed to see how the building blocks fit together. We focus on our
value-added from a methodological perspective, building on existing contingent valuation surveys of closely related
matters due to Loomis et al. [43,44].

Fig. 11. Authors’ rendering of simulated forest fire.

The experimental design involves a control experiment in which we present subjects
with a VR simulation of fires in the Ashley National Forest, Utah, and ask them to make choices with consequences in that
VR environment. We then consider two treatments. One is a traditional ‘‘CVM-like’’ counterpart, in which the instructions
and choices are the same but we do not provide the VR simulations to the subject. Instead we show them two 2D images
from the simulated fires. The idea behind this treatment is to evaluate the effect of the VR simulation experience; we refer
to this as the 2-picture treatment. The other treatment is also a still-image counterpart to our control, but here we provide
a series of 2D images showing the spread of the fire over time from the VR simulation. In this way we can see the pure
effect of immersion in the VR environment, as distinct from the information provided by just seeing static time-shot images
from that environment. We refer to this second treatment as the 52-picture treatment, and it is akin to an extended CVM.
We discuss the control VR treatment first, and then the differences in the 2-picture and 52-picture treatments. We
emphasize that the reference to CVM here is only with respect to how information is provided to the participants. All of the
treatments involve actual choices with monetary consequences that are paid out.
4.3.1. The control VR experiment
The initial text of the instrument is standard for CVM investigations, in the sense of explaining the environmental and
policy context: here, the design of forest fire management in Florida.9 We explain the risks from wildfires, as well as the
opportunity cost of public funds allocated to prescribed burns already. The participant is introduced to a policy option
of expanding the prescribed burn policy from 4% to 6% of the forest area. Typical damages from the devastating 1998
wildfires in Florida are explained, and represented as roughly $59,000 per lost home. Other damages are discussed, such as
lost timber, health costs, and lost tourist income.
9. The policy motivation is based on Central Florida wildfires to make it relevant to the participants, who are all students at the University of Central Florida. All our instructions are available at the ExLab Digital Archive http://exlab.bus.ucf.edu.
Fig. 12. Representation of risk provided to participants: two histograms showing, for each percent of Ashley National Forest that burned (horizontal axis, 0–60%), the number of times out of 48 scenarios that outcome occurred (vertical axis), with no prescribed burn policy (top panel) and with a prescribed burn policy in place (bottom panel).
We then specifically introduce prescribed burning as a fire management tool that can reduce the frequency and severity
of fires. The idea of VR computer simulation is introduced, with explanations that the predicted path depends on things
such as topography, weather, vegetation, and ignition points. Participants are made familiar with the idea that fires and fire
damages are stochastic and can be described through frequency distributions. The distributions that are presented to them
are generated through Monte Carlo simulations using the FARSITE model, as described earlier. The participants then
experience four dynamic VR simulations of specific wildfires, two for each of the cases with and without previous
prescribed burns, rendered from the information supplied by FARSITE simulations that vary weather and fuel conditions.
We selected these simulations to represent high and low risk of fire damage, and the participants are told this.
Participants are told that the VR simulations are based on Ashley National Forest in Utah, and that they have a property
in this area. The simulated area is subject to wildfire, and they must make a decision whether to pay for an enhanced
prescribed burn policy or not, which would reduce the risk that their property would burn. The information about risks to
their property that participants receive is threefold.
First, they are told that the background uncertainties are generated by (a) temperature and humidity, (b) fuel moisture,
(c) wind speed, (d) duration of the fire, and (e) the location of the ignition point. They are also told that these uncertainties
are binary for all but the last, which is ternary; hence there are 48 background scenarios. They are also told the specific
values for these conditions that are employed (e.g., low wind speed is 1 mph, and high wind speed is 5 mph). And it is
explained that these background factors will affect the wildfire using a computer simulation model developed by the US
Forest Service to predict the spread of wildfire. Thus participants could use this information, and their own sense of how
these factors play into wildfire severity, to form some probability judgments about the risk to their property. We fully
appreciate that only fire experts would likely have the ability to translate such information into relatively crisp
probabilities. But the objective is to provide information in a natural manner, akin to what would be experienced in the
actual policy-relevant choice, even if that information does not directly ‘‘tell’’ the participant the probabilities.
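The scenario count is easy to verify: four binary factors and one ternary factor give 2^4 x 3 = 48 equally likely background scenarios. A sketch, where the factor and level labels are our own paraphrase of those in the text:

```python
from itertools import product

# Four binary factors and one ternary factor, as described in the text;
# the level labels here are illustrative.
factors = {
    "temperature_humidity": ["low", "high"],
    "fuel_moisture": ["low", "high"],
    "wind_speed_mph": [1, 5],           # the values quoted in the instructions
    "fire_duration": ["short", "long"],
    "ignition_point": ["A", "B", "C"],  # the ternary factor
}

scenarios = [dict(zip(factors, combo)) for combo in product(*factors.values())]

# 2^4 binary combinations times 3 ignition points = 48 scenarios,
# each drawn with equal probability by dice rolls.
assert len(scenarios) == 2 ** 4 * 3 == 48
p_each = 1.0 / len(scenarios)
```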
Second, the participant is shown some histograms displaying the distribution of acreage in Ashley National Forest that is
burnt across the 48 scenarios. Fig. 12 shows the histograms presented to participants. The instructions provided larger
versions, and explained slowly how to read these graphs. The scaling on the vertical axis is deliberately in terms of natural
frequencies defined over the 48 possible outcomes, and the scaling of the axes of the two histograms is identical to aid
comparability. The qualitative effect of the enhanced prescribed burn policy is clear—to reduce the risk of severe wildfires.
Of course, the information here is about the risk of the entire area burning, and not the risk of their personal property
burning, and that is pointed out clearly in the instructions.
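The natural-frequency scaling of these histograms can be illustrated with a short sketch; the burned-area percentages below are randomly generated placeholders, since the paper's actual values come from the FARSITE Monte Carlo runs:

```python
import random
from collections import Counter

rng = random.Random(1)
# 48 hypothetical burned-area percentages, one per background scenario.
burned_pct = [rng.uniform(0, 60) for _ in range(48)]

# Bin into 5-percentage-point bins, as in the histograms shown to subjects.
bins = Counter(int(p // 5) * 5 for p in burned_pct)

# The vertical axis is a natural frequency ("number of times out of 48"),
# so the bin counts sum to the number of scenarios, not to 100%.
assert sum(bins.values()) == 48
```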
Third, they are allowed to experience several of these scenarios in a VR environment that renders the landscape and fire
as naturally as possible. We provide some initial training in navigating in the environment, which for this software is
essentially the same as in any so-called ‘‘first person shooter’’ video game. The mouse is used to change perspective
and certain keys are designated for forward, backward, and sideward movements. All participants in the VR treatment, and
many of those in the other ones, had experience in playing video games; so navigation was not an issue for them.
Participants are then presented with the four scenarios and are then free to explore the environment, the path of the fire,
and the fate of their property during each of these. Apart from the ability to move across space, participants also had the
option of moving back or forth in time within each fire scenario.
Finally, participants are introduced to the choice problem they face. In order to make the choice salient, we pay them
according to damages to their personal property, which is simulated as a log cabin at the edge of the forest. They are given a
credit of $80 and if their cabin burns, they lose $59 (which is simply scaled down from the $59,000 value of an average
home lost to fire in Florida). They have the option of paying for an enhanced prescribed burn policy, which would reduce
the risk of their property burning, but it would cost them some money up front—non-refundable in the event that their
property does not burn, of course. We presented the participant with an array of 10 possible costs for the policy, between
$2 and $20 in increments of $2, all in payment for the same prescribed burn program. In each case the participant
was asked to simply say ‘‘yes’’ or ‘‘no’’ as to whether they wanted to contribute to the policy at that cost. This elicitation
procedure is recognized in the experimental economics literature as a multiple price list (MPL) [3–5], and is well known to
environmental economists from contingent valuation surveys [46, p. 100, fn. 14]. The participant was told that one of these
costs would be selected at random, using a 10-sided die, and if the prescribed burn had been selected at that cost it would be
put in place; otherwise not. After the cost was selected at random, the participant was told that we would then select each
of the background factors using the roll of a die for each factor, so that one of the 48 background scenarios would be
selected with equal probability. Thus the participant’s choices, plus the uncertainty over the background factors affecting
the severity of a fire conditional on the chosen policy, would determine final earnings.
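The payoff logic of the task can be summarized in a few lines. The function below is our paraphrase of the rules in the text: an $80 credit, a $59 loss if the cabin burns, and the randomly drawn policy cost deducted only if the participant said ‘‘yes’’ at that cost:

```python
def earnings(said_yes_at_cost, drawn_cost, cabin_burned_with_policy,
             cabin_burned_without_policy):
    """Final earnings for one participant.

    said_yes_at_cost: dict mapping each cost ($2..$20) to a yes/no answer.
    drawn_cost: the cost selected at random by the 10-sided die.
    cabin_burned_*: whether the cabin burns in the realized background
    scenario, with and without the enhanced prescribed burn policy.
    """
    credit, loss = 80.0, 59.0
    if said_yes_at_cost[drawn_cost]:
        burned = cabin_burned_with_policy
        return credit - drawn_cost - (loss if burned else 0.0)
    burned = cabin_burned_without_policy
    return credit - (loss if burned else 0.0)

# A participant who says yes up to $12, in a scenario where the cabin
# burns only without the policy:
answers = {c: c <= 12 for c in range(2, 21, 2)}
assert earnings(answers, 10, False, True) == 70.0  # paid $10, cabin safe
assert earnings(answers, 20, False, True) == 21.0  # said no, cabin burns
```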
These choice data can be analyzed as data from an MPL of discrete pairwise lottery options, except that the subjective
probabilities are not known; instead they are estimated. For comparison, panel A of Table 1 displays the payoff matrix in a
conventional instrument for eliciting risk attitudes, and is a scaling by a factor of 10 of the baseline instrument used by Holt
and Laury [35]. The participant picks option A or B in each row, and one row is selected at random to be played out in
accord with these choices. A risk-neutral participant would select the safe option A until row 5, and then switch to the risky
option B, as can be seen from the expected value information on the right of the table. A risk-averse (risk-loving) participant
would switch from option A after (before) row 5.
Panel B of Table 1 shows the comparable payoff matrix that is implied by our VX instrument, assuming that the
participant accurately infers the true probabilities of his own property burning. Of course, these are implied payoffs, and
the participant does not get to see a decision sheet in this artefactual manner: that is actually the whole point of the VX
exercise.
In panel B of Table 1 we see that the true probabilities of the cabin burning are 0.06 if there is an enhanced prescribed
burn policy in effect, and 0.29 if the participant chooses not to pay for the policy.
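As a worked check (our own illustration, using the rounded true probabilities 0.06 and 0.29; the EVs printed in Table 1 use unrounded probabilities, so they differ slightly), a risk-neutral participant who knew these probabilities would pay for the policy at every cost up to $12 and refuse at $14 and above:

```python
# Implied choices of a risk-neutral participant facing panel B of Table 1.
# Payouts: $80 if the cabin is safe, $21 if it burns, minus the policy cost
# when the participant pays for the prescribed burn.
P_BURN_PAY, P_BURN_NOPAY = 0.06, 0.29   # rounded true burn probabilities

def ev_pay(cost):
    return (1 - P_BURN_PAY) * (80 - cost) + P_BURN_PAY * (21 - cost)

EV_NOPAY = (1 - P_BURN_NOPAY) * 80 + P_BURN_NOPAY * 21   # about $62.89

pay_decisions = {cost: ev_pay(cost) > EV_NOPAY for cost in range(2, 21, 2)}
max_wtp = max(c for c, pays in pay_decisions.items() if pays)  # highest "yes" cost
```

This reproduces the sign pattern of the EV differences in panel B, which turn negative between the $12 and $14 rows.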
4.3.2. The 2-picture and 52-picture treatments
The 2-picture treatment replaced the four VR simulations with two static pictures taken from the simulations. One is a
fly-over picture taken from the simulation that generates the widest fire spread, burning 65% of Ashley National Forest,
based on a simulation where no prescribed burn has been used to manage the forest. The other is a fly-over from the
simulation that generates the smallest fire spread, burning 1% of the forest, based on a simulation where prescribed burn
has been used to manage the forest. Each picture has an explanatory text. For example, for the first picture the text is: ‘‘This
is an example of a wildfire in an area that has not been prescribed burned. The weather conditions are severe, the
vegetation has low moisture content and wind speed is high. About 65% of the Ashley Forest burned in this fire. The house
burnt.’’ The text for the second picture is: ‘‘This is an example of a wildfire in an area that has been prescribed burned. The
weather conditions are benign, the vegetation has high moisture content and wind speed is low. A little over 1% of Ashley
forest burned. The house did not burn.’’ The display was akin to the type of 2D graphic art rendering that one might find on
a reasonably sophisticated CVM instrument. No information was provided about the temporal or geographic path of the
wildfire.
The 52-picture treatment augmented the 2-picture instrument with several more static 2D images from the VR
environment. The difference is that these images provided ‘‘snapshots’’ of the dynamic path and intensity of the wildfire.
The same two extreme scenarios were employed here as in the 2-picture treatment. Images from a fly-over and close-up
images of the fire, providing a more detailed rendering of its intensity and of the nearby vegetation and terrain, were
displayed side by side, taken 5 min and 2.5, 5, 10, 15, 20, and 24 h after the ignition of the fire. In addition, a
picture of the cabin at the end of the simulation, either burnt or not as the case may be, was included. For the first scenario
each subject was also told that the cabin burnt after 6.5 h.
Detailed instructions for each treatment are available in an appendix (available on request).
4.3.3. Additional tasks and survey questions
Since the choice between expanding the prescribed burn program or not depends on each participant’s risk attitude as
well as risk perception, we also give them a standard lottery task after they are finished with the fire policy. This task is
used to elicit the risk attitudes of subjects. The lottery choices we offer are those presented in panel A of Table 1, although
participants are not given the information on the EVs. Participants are paid for both of these tasks in cash at the end of the
experiment.
Table 1
Explicit and implicit payoff matrices in experimental tasks.

A. The risk aversion instrument [35]. Lottery A pays $20 or $16; lottery B pays $38.50 or $1.00.

Row   p($20)   p($16)   p($38.50)   p($1.00)    EV(A)     EV(B)     Difference
 1     0.1      0.9        0.1        0.9       $16.40     $4.80      $11.70
 2     0.2      0.8        0.2        0.8       $16.80     $8.50       $8.30
 3     0.3      0.7        0.3        0.7       $17.20    $12.30       $4.90
 4     0.4      0.6        0.4        0.6       $17.60    $16.00       $1.60
 5     0.5      0.5        0.5        0.5       $18.00    $19.80      −$1.70
 6     0.6      0.4        0.6        0.4       $18.40    $23.50      −$5.10
 7     0.7      0.3        0.7        0.3       $18.80    $27.30      −$8.40
 8     0.8      0.2        0.8        0.2       $19.20    $31.00     −$11.80
 9     0.9      0.1        0.9        0.1       $19.60    $34.80     −$15.20
10     1.0      0.0        1.0        0.0       $20.00    $38.50     −$18.50

B. The implied forest fire instrument assuming true beliefs. Lottery B (do not pay for prescribed burn) is the same in every row: the cabin is safe with probability 0.71 (payout $80) and burns with probability 0.29 (payout $21), so EV(B) = $62.79. Under lottery A (pay for prescribed burn at the listed cost) the cabin is safe with probability 0.94 (payout $80 less the cost) and burns with probability 0.06 (payout $21 less the cost).

Cost   Payout if safe   Payout if burn    EV(A)     EV(B)     Difference
 $2         $78              $19          $74.31    $62.79     $11.52
 $4         $76              $17          $72.31    $62.79      $9.52
 $6         $74              $15          $70.31    $62.79      $7.52
 $8         $72              $13          $68.31    $62.79      $5.52
$10         $70              $11          $66.31    $62.79      $3.52
$12         $68               $9          $64.31    $62.79      $1.52
$14         $66               $7          $62.31    $62.79     −$0.48
$16         $64               $5          $60.31    $62.79     −$2.48
$18         $62               $3          $58.31    $62.79     −$4.48
$20         $60               $1          $56.31    $62.79     −$6.48
In order to conduct a preliminary assessment of the user experience with the virtual environment we also collected
subjective data on perceptions of presence. In VR research the concept of presence is used to assess the degree to which the
artificial environment engages participants’ senses, while capturing attention and supporting an active involvement on the
part of the user [36,66,67]. Research in this area has sought to understand how the virtual environment mediates
this experience through factors such as the fidelity of the presented world, the task executed in the environment, and the
interactions required by the task. Concepts such as involvement and immersion are additionally used to capture the degree
to which the user’s experience is altered by the VR environment. Involvement has to do with user attention on the
presented stimuli, and immersion describes ‘‘a psychological state characterized by perceiving oneself to be enveloped by,
included in, and interacting with an environment that provides a continuous stream of stimuli and experiences.’’ [66,
p. 299]. For our initial assessment, we used 23 appropriate tasks from a 32-item questionnaire reported in [66].10 Using
principal-components analysis, they found a best fit with a four-factor model consisting of Involvement, Adaptation/
Immersion, Sensory Fidelity, and Interface Quality. An appendix (available on request) lists the factors and the
corresponding questions used to identify the factor.
We also ask some questions about prior exposure to VR simulations via video games. We suspect that such experience
influences the ease with which a user navigates the environment, and that this has consequences for the extent to which
participants feel a sense of presence. In addition we have observations on the time that the participant uses in the paid
navigation test as a measure of the ease of navigation.
4.4. Results
We report findings from a laboratory experiment undertaken using GIS data from the Ashley National Forest in Utah.
Our participants were recruited from the student population of the University of Central Florida. The complete session
consisted of (a) the wildfire policy task described above; (b) a traditional risk aversion instrument in the form of a lottery
choice; and (c) a demographic survey and the Presence and game experience questionnaires. The purpose of the traditional
risk aversion task was to provide independent estimates of risk attitudes, so that we could draw inferences in the main wild
fire task about beliefs without the two being confounded. That is, we assume for the purposes of identification that the risk
attitudes used in the two tasks are the same, and then draw inferences about the interaction of beliefs and risk attitudes in
the wildfire task. The same logic of combining experimental tasks has been used elsewhere to draw inferences about the
10 We did not include items that referred to auditory and other sensory experiences since they were not part of our VR environment.
Fig. 13. Distribution of willingness to pay for prescribed burn. [Figure: kernel densities of maximum WTP for prescribed burn, $2–$20, for the VR, 2-picture, and 52-picture treatments.]
discount rate defined over the present value of utility streams, and is becoming common in behavioral econometrics,
e.g., see [6].
We recruited a total of 45 subjects, and each participated in the session individually. Our design involved 15 subjects
participating solely in the VR control, 17 subjects participating in the 2-picture treatment and then the 52-picture
treatment, and 13 subjects participating in the 52-picture treatment and then the VR control. Thus we have 28 subjects in
the VR control overall, 17 subjects in the 2-picture treatment, and 30 in the 52-picture treatment. There are 75 sets of
observations (45 from the participant’s first set of choices, and 30 from their second set of choices, when the treatments
included two sets). We check for order effects in our statistical analysis. Each set of observations consists of choices for the
10 rows in the MPL for prescribed burns. Average earnings in each session were $90.68, including a $5 show-up fee.
The average earning for the main prescribed burn choice was $62.71, with a standard deviation of $22.32. Each session
lasted roughly 1 h.
Fig. 13 displays kernel densities of the raw responses in our main task, in the form of the maximum willingness to pay
for the prescribed burn option. The maximum WTP is simply the switch point in the MPL table, assuming that subjects
choose to have prescribed burn on the lower rows with lower costs, but switch to no prescribed burn at some higher cost.
These results suggest some differences between the VR control and the two treatments, with WTP being lower in the VR
setting. The two CVM treatments, 2-pictures and 52-pictures, appear to generate the same WTP. On the other hand, these
raw results might be confounded. First, it is possible that risk aversion is smaller in the VR sample, so that the lower WTP is
simply due to their greater reluctance to reduce the risk by paying for the prescribed burn. Or perhaps the VR environment
led the subjects to have lower subjective probabilities about the chances of a wildfire affecting their virtual cabin. The same
confounds also suggest that the apparent similarity of responses under the two CVM treatments (2-pictures and
52-pictures) might be illusory, and might reflect differences in the perceived probabilities that are roughly offset by
differences in risk attitudes.
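The switch-point construction of maximum WTP can be sketched as follows (a hypothetical helper, assuming the monotone response pattern described above):

```python
# Maximum WTP from the 10 yes/no responses in the prescribed burn MPL:
# the highest cost at which the participant still says "yes", assuming a
# single switch point. Responses below are illustrative.
COSTS = list(range(2, 21, 2))   # $2..$20 in $2 increments

def max_wtp(responses):
    """responses: list of 10 booleans, one per cost row, True = pay."""
    yes_costs = [c for c, r in zip(COSTS, responses) if r]
    return max(yes_costs) if yes_costs else 0

# A participant who pays at the first four costs has a maximum WTP of $8.
assert max_wtp([True] * 4 + [False] * 6) == 8
```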
For these reasons, inferences about the effects of the treatments require structural estimation of a latent model of
behavior, to allow us to make inferences about the effect of the treatments on perceived probabilities. The general
econometric methods employed here are explained in detail in Harrison and Rutström [32], and the particular application
to estimate subjective beliefs is explained in detail in [3]. For pedagogic reasons we begin by writing out the model to be
estimated under the assumption that we are dealing only with choice tasks in which the probabilities of outcomes are
known, as in our risk aversion task from Holt and Laury [35]. Then we describe the extension to estimate subjective
probabilities since these are not given in our prescribed burn choice task.
4.4.1. Estimating risk attitudes with known probabilities
We assume a CRRA utility function11 defined over lottery prizes y as

U(y) = y^(1−r)/(1−r)   (4.1)

where r is the CRRA coefficient and y ≥ 0. With this specification, r = 0 denotes risk-neutral behavior, r > 0
denotes risk aversion, and r < 0 denotes risk loving. All arguments of utility are defined as the gross earnings from each bet,
as defined in Table 1, so they are non-negative.
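A minimal implementation of (4.1), with the usual logarithmic limit at r = 1 added for completeness (the limit case is our addition; the estimates reported below lie well under 1):

```python
import math

# CRRA utility (4.1) over non-negative earnings y.
def crra_utility(y, r):
    if y < 0:
        raise ValueError("defined for non-negative earnings only")
    if abs(r - 1.0) < 1e-9:
        return math.log(y)          # limiting case r = 1
    return y ** (1.0 - r) / (1.0 - r)

# r = 0 reduces to risk neutrality: utility is linear in money.
assert crra_utility(20.0, 0.0) == 20.0
```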
11 This is simply a convenience assumption since it is a popular and tractable functional form. Other functional forms could easily be incorporated in this type of analysis.
The utility function (4.1) can be estimated using maximum likelihood and a latent structural model of choice, such as
EUT. Let there be K possible outcomes in a lottery; in our lottery choice task K = 2. Under EUT the probabilities for each
outcome k in the lottery choice task, p_k, are those that are induced by the experimenter, so expected utility is simply the
probability-weighted sum of the utility of each outcome in each lottery i:

EU_i = Σ_{k=1,…,K} p_k × u_k   (4.2)
The EU for each lottery pair is calculated for a candidate estimate of r, and the index

∇EU = EU_B − EU_A   (4.3)

calculated, where EU_A is the safer lottery and EU_B is the riskier lottery. Indifference was not a choice option in these
experiments. The index (4.3), based on latent preferences, is then linked to the observed choices using a standard
cumulative normal distribution function Φ(·). This ‘‘probit’’ function takes any argument between ±∞ and transforms it
into a number between 0 and 1. The agent chooses lottery B if ∇EU + ε > 0, where ε is a normally distributed error term with
mean zero and variance σ². Thus we have the probit link function, showing the probability that B is chosen, as

prob(choose lottery B) = p(∇EU + ε > 0) = p(ε/σ > −∇EU/σ) = Φ(∇EU/σ)   (4.4)

The latent index defined by (4.3) is linked to the observed choices by specifying that the B lottery is chosen when Φ(∇EU/σ) > ½,
which is implied by (4.4).
Thus the likelihood of the observed responses, conditional on the EUT and CRRA specifications being true, depends on
the estimates of r given the above statistical specification and the observed choices. The log likelihood is then

ln L(r; y, X) = Σ_i [ln Φ(∇EU) × I(y_i = 1) + ln(1 − Φ(∇EU)) × I(y_i = −1)]   (4.5)

where y_i = 1 (−1) denotes the choice of the option B (A) lottery in lottery task i, I(·) is the indicator function, and X is a
vector of individual characteristics reflecting age, sex, and so on, as well as treatment variables. When we pool responses
over subjects the X vector will play an important role to allow for some heterogeneity of preferences.
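The estimation logic of (4.2)–(4.5) can be sketched end to end: simulate noiseless choices from a known CRRA coefficient, then recover a coefficient by maximizing the probit log-likelihood. This is a toy stand-in under our own simplifications (a coarse grid search, no covariates, σ fixed at 1); the authors use full maximum likelihood with demographic covariates and clustered errors:

```python
import math

def utility(y, r):
    return math.log(y) if abs(r - 1) < 1e-9 else y ** (1 - r) / (1 - r)

def phi(x):  # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

ROWS = [round(0.1 * i, 1) for i in range(1, 11)]   # p(high prize) per row

def eu_gap(p, r):
    """Latent index: EU_B - EU_A for one row of the panel A menu."""
    eu_a = p * utility(20.0, r) + (1 - p) * utility(16.0, r)
    eu_b = p * utility(38.5, r) + (1 - p) * utility(1.0, r)
    return eu_b - eu_a

# Simulated noiseless choices of an agent with r = 0.5 (y = 1 means "chose B").
true_r = 0.5
y = [1 if eu_gap(p, true_r) > 0 else 0 for p in ROWS]

def log_choice_prob(gap, yi):
    pr = phi(gap) if yi else phi(-gap)   # P(B) = Phi(gap), P(A) = 1 - Phi(gap)
    return math.log(max(pr, 1e-300))     # clamp to avoid log(0)

def log_lik(r):
    return sum(log_choice_prob(eu_gap(p, r), yi) for p, yi in zip(ROWS, y))

grid = [i / 100 for i in range(-100, 150)]   # candidate r values
r_hat = max(grid, key=log_lik)
```

The recovered r_hat is risk averse and reproduces the simulated agent's switch point.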
To allow for subject heterogeneity with respect to risk attitudes, the parameter r is modeled as a linear function of
observed individual characteristics of the subject. For example, assume that we had information only on the age and sex of
the subject, denoted Age (in years) and Female (0 for males, and 1 for females). Then we would estimate the coefficients α,
β, and η in r = α + β·Age + η·Female. Therefore, each subject would have a different estimated r, r̂, for a given set of estimates
of α, β, and η, to the extent that the subject had distinct individual characteristics. So if there were two subjects with the
same sex and age, to use the above example, they would literally have the same r̂, but if they differed in sex and/or age they
would generally have distinct r̂. In fact, we use four individual characteristics in our model. Apart from age and sex, these
include binary indicators for subjects who self-declare their ethnicity as Hispanic, and their college major as business. Age
is measured as years in excess of 19, so it shows the effect of age increments.
An important extension of the core model is to allow for subjects to make some errors. The notion of error is one that
has already been encountered in the form of the statistical assumption (4.4) that the probability of choosing a lottery is not
1 when the EU of that lottery exceeds the EU of the other lottery.12 By varying the shape of the link function implicit in
(4.4), one can informally imagine subjects who are more sensitive to a given difference in the index ∇EU and subjects who
are not so sensitive. We use the Fechner error specification of Becker et al. [9] and Hey and Orme [33]. It posits the latent
index

∇EU = (EU_B − EU_A)/μ   (4.3′)

instead of (4.3), where μ > 0 is a structural ‘‘noise parameter’’ used to allow some errors from the perspective of the
deterministic EUT model. As μ → ∞ this specification collapses ∇EU to 0, so the probability of either choice converges to ½.
So a larger μ means that the difference in the EU of the two lotteries, conditional on the estimate of r, has less predictive
effect on choices. Thus μ can be viewed as a parameter that flattens out, or ‘‘sharpens,’’ the link functions implicit in (4.4).
This is just one of several different types of error story that could be used, and Wilcox [65] provides a masterful review
of the implications of the strengths and weaknesses of the major alternatives. Thus we extend the likelihood specification
to include the noise parameter μ. Additional details of the estimation methods used, including corrections for ‘‘clustered’’
errors when we pool choices over subjects and tasks, are provided by Harrison and Rutström [32].
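The role of the Fechner parameter is easy to illustrate: dividing the EU difference by μ before it enters the probit link flattens the choice probabilities toward ½ (the numbers below are illustrative, not estimates from the paper):

```python
import math

def phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Probit link with Fechner noise, as in (4.3'): larger mu flattens the link,
# so behavior looks more random for the same EU difference.
def prob_choose_b(eu_b, eu_a, mu):
    return phi((eu_b - eu_a) / mu)

gap = 0.8                                 # some fixed EU advantage for lottery B
sharp = prob_choose_b(gap, 0.0, 0.5)      # small mu: choice nearly deterministic
noisy = prob_choose_b(gap, 0.0, 10.0)     # large mu: close to a coin flip
```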
4.4.2. Recursive estimation of risk attitudes and subjective beliefs
The responses to the prescribed burn task can be used to generate estimates about the belief that each subject holds if
we are willing to assume something about how they make decisions under risk. Using the two experimental tasks in our
design, we recursively estimate the subjective probability of a wildfire burning the property of the subject, along with other
parameters of the model defining the risk attitudes of the subject. Consider the schema in Panel B of Table 1, where the
subject has to consider lotteries A and B, but where we now substitute the subjective probability instead of the true
12 This assumption is clear in the use of a link function between the latent index ∇EU and the probability of picking one or other lottery; in the case of
the normal CDF, this link function is Φ(∇EU). If the subject exhibited no errors from the perspective of EUT, this function would be a step function: zero
for all values of ∇EU < 0, anywhere between 0 and 1 for ∇EU = 0, and 1 for all values of ∇EU > 0.
probability shown in Panel B. The EU of the safer lottery A is

EU_A = p_A × U(payout if cabin burns | pay for prescribed burn)
     + (1 − p_A) × U(payout if cabin is safe | pay for prescribed burn)   (4.6)

where p_A is the subjective probability that the cabin will burn if one opts for the prescribed burn and chooses lottery A. In
Panel B of Table 1 this probability is shown as 0.06, the true probability, but our subjects do not know that probability, and
the whole point of the experiment is to determine the effect of the instruments on their subjective probability. The payouts
that enter the utility function are defined by the net earnings for different levels of cost for the prescribed burn, and are
shown in Panel B of Table 1. For the first row, for example, the cost of the prescribed burn is $2, so these payouts are $78
(= $80 − $2) and $19 (= $21 − $2), and we have

EU_A = p_A × U($19) + (1 − p_A) × U($78)   (4.6′)
We similarly define the EU received from a choice of the riskier lottery B, not to purchase the prescribed burn:

EU_B = p_B × U(payout if cabin burns | do not pay for prescribed burn)
     + (1 − p_B) × U(payout if cabin is safe | do not pay for prescribed burn)   (4.7)

This translates for each row of Panel B of Table 1 into payouts of $80 and $21, so we have

EU_B = p_B × U($21) + (1 − p_B) × U($80)   (4.7′)

for this option. We observe the decision to purchase the prescribed burn by the subject at different cost levels for the
prescribed burn, so we can calculate the likelihood of that choice given values of r, p_A, p_B, and μ.
We need estimates of r to evaluate the utility function in (4.6) and (4.7); we need estimates of p_A and p_B to calculate the
EU in (4.6) and (4.7), respectively, once we know the utility values; and we need estimates of μ to calculate the latent
indices (4.3′) that generate the probability of observing the choice of lottery A or lottery B when we allow for some noise in
that process.13 The maximum-likelihood problem is to find the values of these parameters that best explain observed
choices in the prescribed burn task as well as observed choices in the risk aversion task.
In effect, the risk aversion task allows us to identify r under EUT, since p_A and p_B play no direct role in explaining
the choices in that task. Individual heterogeneity is allowed for by estimating the r parameter as a linear function
of demographic characteristics, and this also allows us to separately identify both p_A and p_B.14 We include dummy
variables for the VR, 2-picture, and 52-picture treatments to ascertain their effect on the estimated subjective probabilities
p_A and p_B.
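The recursive strategy, together with the log-odds transform used to keep the estimated beliefs inside the unit interval (see footnote 13), can be sketched as follows; the numerical values for r and the log-odds parameters are illustrative choices of ours, not the paper's estimates:

```python
import math

# Beliefs are estimated on the log-odds scale and mapped back into (0, 1)
# with the inverse logit, so the probability constraint holds automatically.
def inv_logit(kappa):
    return 1.0 / (1.0 + math.exp(-kappa))

def utility(y, r):                      # CRRA utility, as in (4.1), for r != 1
    return y ** (1 - r) / (1 - r)

def eu_gap_burn_task(cost, p_a, p_b, r):
    """EU(B) - EU(A) for one row of panel B, following (4.6')-(4.7')."""
    eu_a = p_a * utility(21 - cost, r) + (1 - p_a) * utility(80 - cost, r)
    eu_b = p_b * utility(21.0, r) + (1 - p_b) * utility(80.0, r)
    return eu_b - eu_a

r_hat = 0.57                     # risk attitude identified from the lottery task
kappa_a, kappa_b = -2.0, -0.9    # candidate log-odds parameters for the beliefs
p_a, p_b = inv_logit(kappa_a), inv_logit(kappa_b)
gap = eu_gap_burn_task(2, p_a, p_b, r_hat)   # latent index feeding the probit link
```

In the full model the likelihood of the prescribed burn choices is maximized over the log-odds parameters jointly with the risk aversion task, and treatment dummies shift the estimated beliefs.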
The estimates indicate that our subjects are risk averse. The default estimate is r = 0.57, which implies risk aversion
since r > 0 and the estimate is significantly different from zero. Thus, our subject pool is modestly risk averse, consistent
with substantial evidence from comparable subject pools [28,35]. In our case, these estimates are used to recursively
condition the inferences about beliefs, which are the main focus of the experimental design.
The estimates for the determinants of the subjective probabilities that the cabin will burn, p_A and p_B, are the main focus
of this experimental design. Consider first the estimates for the ‘‘risky lottery,’’ p_B, in which there is no prescribed
burn in place. The 2-picture treatment, with its traditional information method, has an estimated subjective probability of
0.49. The 52-picture treatment increases this to 0.52, and the VR treatment lowers it to 0.25, quite close to the true
probability of 0.29. The effect of the VR treatment is significant (p-value = 0.048). Thus we find evidence that the VR
treatment affects subjective probability estimates compared to the 2-picture and 52-picture treatments, but that the static
representation of the dynamic information in the 52-picture environment does not affect those estimates.
The effects for the estimated subjective probability for the ‘‘safe lottery,’’ p_A, in which there is a prescribed burn in place,
are much more muted. In part this may be due to the overall estimate for this probability being so low, since the true
probability is only 0.06. The estimate for the 2-picture treatment is 0.19; the VR and 52-picture treatments
have no statistically significant effect, with estimated probabilities of 0.05 and 0.22, respectively. Although the VR
estimate is closer to the true probability, these subjective probabilities are not estimated with sufficient precision for that
to be a statistically significant difference in treatments.
Finally, Fig. 14 shows kernel densities of the results from asking subjects about the virtual environment, using the
Presence Questionnaire. We translate the responses to the indices measuring the four factors explained earlier; hence these
indices correspond to the scale of 1–7 used by the subjects on the original survey instrument, where 7 indicates the
strongest acceptance of the VR. Research on presence has used indices of this kind in a variety of ways, ranging from
13 Since the focus of the estimation is on the subjective probability, it is important to ensure that the estimates of p lie in the closed interval [0, 1]. It is
straightforward to constrain the estimates to the open unit interval, which is sufficient for our purposes, by estimating the log-odds transform κ of the
parameters we are really interested in, and converting using p = 1/(1 + e^(−κ)). This transform is one of several used by statisticians; for example, see
Rabe-Hesketh and Everitt [52, p. 244]. The standard errors are calculated correctly using the Delta Method from statistics; see Oehlert [48] for an
exposition and historical note.
14 Even if the monetary prizes of the lotteries are the same across subjects, (estimated) risk attitudes are not, so the utility values of the prizes differ
across subjects. With enough heterogeneity, it is as if we had varied the monetary prizes for a given subject with a given risk preference, and thereby
identified the probabilities.
Fig. 14. Indices of presence in the virtual environment. [Figure: kernel densities of the four factors derived from the Presence Questionnaire (Involvement, Adaptation/Immersion, Sensory Fidelity, Interface Quality), each on the 1–7 index scale; N = 28 subjects.]
comparisons of differing VR environments, to manipulations within particular environments, to investigations of how a
specific environment produces differences across a given set of presence questions. For example, Lin et al. [42] found
significantly increasing reports of experienced presence with increases in field of view presented in a driving-simulator-based
virtual environment. Sutcliffe et al. [61] compared a different set of virtual environments and evaluated the
experienced presence across these environments to make claims about which produced more presence. They compared a highly
immersive VR CAVE, an Interactive Workbench, and a Reality Room; they found overall average presence scores of
4.9 for the CAVE, 4.6 for the Interactive Workbench, and 4.4 for the Reality Room, with the data showing that presence
scores in the CAVE were significantly greater. Others have used the presence indices to determine whether their VR system
produced scores that differ significantly from the neutral value of the scale, with the neutral value being ‘‘4.’’ For example,
in a study of a simulator to train aircraft fault inspection, Vora et al. [63] found that their system scored significantly greater
than the neutral value on only about 50% of the questions; see Vembar et al. [62] for a similar analysis using VR for
medical training. As such, we are encouraged that our scores produced average values ranging from 4.81 to 5.83 for the
sub-components, suggesting that participants did report high levels of presence within our VX.
5. Conclusions
We have introduced the concept of virtual experiments (VXs) as a way to introduce natural cues into a lab decision task,
providing a bridge over the methodological gap between lab and field experiments. VX has the potential to generate the
internal validity of controlled lab experiments with the external validity of field experiments. The central aspect of the
VX methodology is a VR environment that makes participants experience a sense of presence, a psychological state of
‘‘being there.’’ This sense depends on the degree of involvement that participants experience as a consequence of focusing
attention on the set of stimuli and activities generated by the VR simulation. Presence also depends on immersion, the
perception that one is enveloped by, included in, and interacting with a continuously stimulating environment.
In a VX, participants can experience long-run scenarios in a very short amount of time, and many counterfactual
scenarios can be generated. In order for such counterfactual simulations to be coherent it is important to use scientifically
accurate and validated models, such as the fire prediction model FARSITE. The application we discuss here is the prevention
of forest fire risks, a significant economic problem in many parts of the world. Using VX we can elicit beliefs under
conditions that generate important field cues, and we can model counterfactual scenarios that cannot be implemented in
the field. Our study results are encouraging, and demonstrate the feasibility and application of this new methodological
frontier. By comparing our treatments we conclude that the immersive aspect of the VX has the effect of generating
subjective beliefs that are closer to actual risks. Our 52-picture treatment showed no effect from the provision of dynamic
information even without immersion. We conclude that VX holds promise as an experimental method in the field
of environmental and resource economics, where actual field experiments are rare, and where field events often evolve
over long time horizons.
Acknowledgments
We thank the US National Science Foundation for research support under NSF/HSD 0527675; Emiko Charbonneau, Mark
Colbert, Jared Johnson, Jaakko Konttinen, Greg Tener, and Shabori Sen for research assistance; and James Brenner, Don
Carlton, Zach Prusak, and Walt Thomson for helpful comments. We are also grateful for comments from referees and the
editor, and participants of the RFF/EPA Conference Frontiers of Environmental Economics, Washington, February 26–27, 2007.
References
[1] Neeharika Adabala, Charles E. Hughes, Gridless controllable fire, in: K. Pallister (Ed.), Game Programming Gems 5, Charles River Media, Boston, 2005.
[2] Masaki Aono, Tosiyasu L. Kunii, Botanical tree image generation, IEEE Computer Graphics Appl. 4 (5) (1984) 10–34.
[3] Steffen Andersen, John Fountain, Glenn W. Harrison, E. Elisabet Rutström, Eliciting beliefs: theory and experiments, Working Paper 07-08,
Department of Economics, College of Business Administration, University of Central Florida, 2007.
[4] Steffen Andersen, Glenn W. Harrison, Morten Igel Lau, E. Elisabet Rutström, Elicitation using multiple price lists, Exper. Econ. 9 (4) (2006) 383–405
(December).
[5] Steffen Andersen, Glenn W. Harrison, Morten Igel Lau, E. Elisabet Rutström, Valuation using multiple price list formats, Appl. Econ. 39 (6) (2007)
675–682 (April).
[6] Steffen Andersen, Glenn W. Harrison, Morten Igel Lau, E. Elisabet Rutström, Eliciting risk and time preferences, Econometrica 76 (3) (2008) 583–618
(May).
[7] Murat Balci, Hassan Foroosh, Real-time 3D fire simulation using a spring-mass model, in: Proceedings of the IEEE International Multimedia Modeling
Conference, 2006.
[8] Ian J. Bateman, Andrew P. Jones, Simon Jude, Brett H. Day, Reducing gain/loss asymmetry: a virtual reality choice experiment (VRCE) valuing land use
change, Unpublished Manuscript, School of Environmental Sciences, University of East Anglia, December 2006.
[9] Gordon M. Becker, Morris H. DeGroot, Jacob Marschak, Stochastic models of choice behavior, Behavioral Sci. 8 (1963) 41–55.
[10] Ian D. Bishop, Predicting movement choices in virtual environments, Landscape Urban Planning 56 (2001) 97–106.
[11] Ian D. Bishop, JoAnna R. Wherrett, David R. Miller, Assessment of path choices on a country walk using a virtual environment, Landscape Urban
Planning 52 (2001) 225–237.
[12] Ian D. Bishop, H.R. Gimblett, Management of recreational areas: geographic information systems, autonomous agents and virtual reality, Environ.
Planning B 27 (2000) 423–435.
[13] Jules Bloomenthal, Modeling the mighty maple, in: Proceedings of SIGGRAPH 85, 1985, pp. 305–311.
[14] H.K. Çakmak, H. Maas, U. Kühnapfel, VSOne: a virtual reality simulator for laparoscopic surgery, Minimally Invasive Therapy and Allied Tech. (MITAT)
14 (3) (2005) 134–144.
[15] A. Colin Cameron, Pravin K. Trivedi, Microeconometrics: Methods and Applications, Cambridge University Press, New York, 2005.
[16] Richard T. Carson, W. Michael Hanemann, Jon A. Krosnick, Robert C. Mitchell, Stanley Presser, Paul A. Ruud, V. Kerry Smith, Referendum design and
contingent valuation: the NOAA panel’s no-vote recommendation, Rev. Econ. Statist. 80 (2) (1998) 335–338 (May); reprinted with typographical
corrections in Rev. Econ. Statist. 80(3) (August 1998).
[17] Edward Castronova, Synthetic Worlds: The Business and Culture of Online Games, University of Chicago Press, Chicago, 2005.
[18] Edward Castronova, The price of bodies: a hedonic pricing model of avatar attributes in a synthetic world, Kyklos 57 (2) (2004) 173–196.
[19] Norishige Chiba, Kazunobu Muraoka, Akio Doi, Junya Hosokawa, Rendering of forest scenery using 3D textures, J. Visualization Computer Animation
8 (1997) 191–199.
[20] Andy Clark, Being There: Putting Brain, Body, and World Together Again, MIT Press, Cambridge, MA, 1997.
[21] Rudolph P. Darken, Helsin Cevik, Map usage in virtual environments: orientation issues, in: Proceedings of IEEE Virtual Reality, 1999, pp. 133–140.
[23] Mark A. Finney, FARSITE: fire area simulator—model development and evaluation, Research Paper RMRS-RP-4, Rocky Mountain Research Station,
Forest Service, United States Department of Agriculture, March 1998.
[24] Gerd Gigerenzer, Peter M. Todd, Simple Heuristics that Make Us Smart, Oxford University Press, New York, 1999.
[25] Paul W. Glimcher, Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics, The MIT Press, Cambridge, MA, 2003.
[26] Glenn W. Harrison, Field experiments and control, in: J. Carpenter, G.W. Harrison, J.A. List (Eds.), Research in Experimental Economics, Field
Experiments in Economics, vol. 10, JAI Press, Greenwich, CT, 2005, pp. 17–50.
[27] Glenn W. Harrison, Making choice studies incentive compatible, in: B. Kanninen (Ed.), Valuing Environmental Amenities Using Stated Choice Studies:
A Common Sense Guide to Theory and Practice, Kluwer, Boston, 2006, pp. 65–108.
[28] Glenn W. Harrison, Eric Johnson, Melayne M. McInnes, E. Elisabet Rutström, Risk aversion and incentive effects: comment, Amer. Econ. Rev. 95 (3)
(2005) 897–901 (June).
[29] Glenn W. Harrison, Bengt Kriström, On the interpretation of responses to contingent valuation surveys, in: P.O. Johansson, B. Kriström, K.G. Mäler
(Eds.), Current Issues in Environmental Economics, Manchester University Press, Manchester, 1996.
[30] Glenn W. Harrison, John A. List, Field experiments, J. Econ. Lit. 42 (4) (2004) 1013–1059 (December).
[31] Glenn W. Harrison, John A. List, Naturally occurring markets and exogenous laboratory experiments: a case study of the winner’s curse, Econ. J. 118
(2008) 822–843 (April).
[32] Glenn W. Harrison, E. Elisabet Rutström, Risk aversion in the laboratory, in: J.C. Cox, G.W. Harrison (Eds.), Risk Aversion in Experiments, Research in
Experimental Economics, vol. 12, Emerald, Bingley, UK, 2008.
[33] John D. Hey, Chris Orme, Investigating generalizations of expected utility theory using experimental data, Econometrica 62 (6) (1994) 1291–1326
(November).
[34] R.R. Hoffman, K.A. Deffenbacher, An analysis of the relations of basic and applied science, Ecolog. Psych. 5 (1993) 315–352.
[35] Charles A. Holt, Susan K. Laury, Risk aversion and incentive effects, Amer. Econ. Rev. 92 (5) (2002) 1644–1655 (December).
[36] B.E. Insko, Measuring presence: subjective, behavioral and physiological methods, in: G. Riva, F. Davide, W.A. Ijsselsteijn (Eds.), Being There:
Concepts, Effects and Measurement of User Presence in Synthetic Environments, Ios Press, Amsterdam, 2003.
[37] Simon Jude, Andrew P. Jones, J.E. Andrews, Ian J. Bateman, Visualisation for participatory coastal zone management: a case study of the Norfolk Coast,
England, J. Coastal Res. 22 (6) (2006) 1527–1538.
[38] Daniel Kahneman, Amos Tversky (Eds.), Choices, Values, and Frames, Cambridge University Press, New York, 2000.
[39] Gary Klein, Sources of Power: How People Make Decisions, MIT Press, Cambridge, MA, 1998.
[40] Jon A. Krosnick, Allyson L. Holbrook, Matthew K. Berent, Richard T. Carson, W. Michael Hanemann, Raymond J. Kopp, Robert C. Mitchell, Stanley
Presser, Paul A. Ruud, V. Kerry Smith, Wendy R. Moody, Melanie C. Green, Michael Conaway, The impact of ‘no opinion’ response options on data
quality: non-attitude reduction or an invitation to satisfice?, Public Opinion Quart. 66 (2002) 371–403.
[41] Steven D. Levitt, John A. List, What do laboratory experiments measuring social preferences reveal about the real world?, J. Econ. Perspect. 21 (2)
(2007) 153–174 (Spring).
[42] James Lin, Henry Duh, Donald Parker, Habib Abi-Rached, Thomas Furness, Effects of field of view on presence, enjoyment, memory, and simulator
sickness in a virtual environment, in: Proceedings of IEEE Virtual Reality 2002, 2002, pp. 164–171.
[43] John B. Loomis, Lucas S. Bair, Armando Gonzáles-Cabán, Language-related differences in a contingent valuation study: English versus Spanish, Amer.
J. Agr. Econ. 84 (4) (2002) 1091–1102 (November).
[44] John B. Loomis, Trong Le Hung, Armando Gonzáles-Cabán, Testing transferability of willingness to pay for forest fire prevention among three states
of California, Florida and Montana, J. Forest Econ. 11 (2005) 125–140.
[45] David Marr, Vision, Freeman, San Francisco, 1982.
[46] Robert C. Mitchell, Richard T. Carson, Using Surveys to Value Public Goods: The Contingent Valuation Method, Johns Hopkins Press, Baltimore, 1989.
[47] Duc Q. Nguyen, Ronald Fedkiw, Henrik W. Jensen, Physically based modeling and animation of fire, in: Proceedings of SIGGRAPH 02, 2002,
pp. 721–728.
[48] Gary W. Oehlert, A note on the delta method, Amer. Statistician 46 (1) (1992) 27–29 (February).
[49] Peter E. Oppenheimer, Real time design and animation of fractal plants and trees, in: Proceedings of SIGGRAPH 86, 1986, pp. 55–64.
[50] Andreas Ortmann, Gerd Gigerenzer, Reasoning in economics and psychology: why social context matters, J. Inst. Theoretical Econ. 153 (1997)
700–710.
[51] Stephen J. Pyne, Tending Fire: Coping with America’s Wildland Fires, Island Press, Washington, DC, 2004.
[52] Sophia Rabe-Hesketh, Brian Everitt, A Handbook of Statistical Analyses Using Stata, third ed., Chapman & Hall/CRC, New York, 2004.
[53] Marie-Laure Ryan, Possible Worlds, Artificial Intelligence, and Narrative Theory, Indiana University Press, Bloomington, IN, 1991.
[54] E. Salas, G. Klein (Eds.), Linking Expertise and Naturalistic Decision Making, Erlbaum, Mahwah, NJ, 2001.
[55] Joe H. Scott, Robert E. Burgan, Standard fire behavior fuel models: a comprehensive set for use with Rothermel’s surface fire spread model, Research
Paper RMRS-GTR-153, Rocky Mountain Research Station, Forest Service, United States Department of Agriculture, June 2005.
[56] C. Sewell, N.S. Blevins, S. Peddamatham, H.Z. Tan, D. Morris, K. Salisbury, The effect of virtual haptic training on real surgical drilling proficiency,
in: Proceedings of the IEEE World Haptics, March 2007.
[57] Vernon L. Smith, Constructivist and ecological rationality, Amer. Econ. Rev. 93 (3) (2003) 465–508.
[58] Jos Stam, Interacting with smoke and fire in real time, Communications ACM 43 (7) (2000) 76–83.
[59] Christopher B. Stapleton, Charles E. Hughes, Believing is seeing, IEEE Computer Graphics Appl. 27 (1) (2006) 80–85 (January–February).
[60] Nicholas Stern, The Stern Review of the Economics of Global Warming, 2006, available from web site at <http://www.hm-treasury.gov.uk/
independent_reviews/stern_review_economics_climate_change/sternreview_index.cfm>.
[61] A. Sutcliffe, B. Gault, J. Shin, Presence, memory and interaction in virtual environments, Int. J. Human–Computer Stud. 62 (3) (2005) 307–327.
[62] D. Vembar, S. Sadasivan, A.T. Duchowski, P. Stringfellow, A.K. Gramopadhye, Design of a virtual borescope: a presence study, in: Proceedings of the
HCI International, Las Vegas, NV, July 22–27, 2005.
[63] J. Vora, S. Nair, E. Meldin, A.K. Gramopadhye, A. Duchowski, B. Melloy, B. Kanki, Using virtual reality technology to improve aircraft inspection
performance: presence and performance measurement studies, in: Proceedings of the Human Factors and Ergonomics Society Meeting, 2001,
pp. 1867–1871.
[64] Jason Weber, Joseph Penn, Creation and rendering of realistic trees, in: Proceedings of SIGGRAPH 95, 1995, pp. 119–128.
[65] Nathaniel T. Wilcox, Stochastic models for binary discrete choice under risk: a critical primer and econometric comparison, in: J. Cox, G.W. Harrison
(Eds.), Risk Aversion in Experiments, Research in Experimental Economics, vol. 12, Emerald, Bingley, UK, 2008.
[66] B.G. Witmer, C.J. Jerome, M.J. Singer, The factor structure of the presence questionnaire, Presence 14 (2005) 298–312.
[67] B.G. Witmer, M.J. Singer, Measuring presence in virtual environments: a presence questionnaire, Presence: Teleoperators Virtual Environ. 7 (1998)
225–240.