European Review, Vol. 11, No. 3, 417–435 (2003) Academia Europaea, Printed in the United Kingdom
Risk communication: pitfalls and promises
RAGNAR LOFSTEDT
King’s Centre for Risk Management, King’s College, Strand, London,
WC2R 2LS, UK. E-mail:
[email protected]
Over the past 30 years, researchers and practitioners have discussed the
importance of risk communication in solving disputes, ranging from the
public outcry regarding the importing of GMO foods from the United States to
Europe, to the siting of waste incinerators in many parts of Europe, to the
building of a permanent high-level nuclear waste facility in the United States.
In this paper the history of risk communication is discussed, focusing
particularly on the importance of the social amplification of risk and trust.
This is followed by a detailed discussion of trust as it relates to public
perception of risk, where it is argued that trust is composed of three
variables. The third section covers the theoretical debate about how best to deal
with the decline in public trust. This is followed by a short analysis in which
it is concluded that there is no simple solution to increasing public trust (and
thereby ensuring greater risk communication success).
1. Risk communication
Risk communication has its roots in risk perception, a field developed by Gilbert
White in the 1940s. White’s work on natural hazards1 and that of Baruch
Fischhoff, Paul Slovic and others on technological hazards in the 1970s2,3 showed
that the public perceives some risks differently from others for a range of reasons,
such as degree of control, catastrophic potential, and familiarity. In the late 1980s,
some of the findings of risk perception research began to be applied
to risk communication.4,5 Whilst risk communication cannot be defined as an
independent discipline, it is perhaps best described as ‘the flow of information and
risk evaluations back and forth between academic experts, regulatory practitioners, interest groups, and the general public’.6 Thus at its best, risk
communication is not a top-down communication from expert to the lay public,
but rather a constructive dialogue between all those involved in a particular debate
about risk.
To date, the various risk communication programmes relating
to environmental hazards in Europe and the United States have largely been
ineffective. The public tends to remain hostile to the local siting of waste
incinerators and nuclear waste dumps, a reaction that has not been significantly
influenced by the risk communication programmes.7–9 Whilst, in part, such
responses might be attributable to the lack of funding of risk communication
programmes, and hence the failure to conduct proper evaluations to learn why
programmes failed,10 they reflect more a failure to understand that it is necessary to work
together with the public rather than simply to ‘educate’ them.11,12 More attention
must be paid to the social amplification of risk and the role of trust.13,14
1.1. Social amplification
The relationship between media representation of risks and associated public
perceptions of these same risks (and their impact on behaviour) is complex.
Theory about the social amplification of risk15 takes into account the integration
of different models of risk perception and risk communication. ‘The social
amplification of risk is based on the thesis that events pertaining to hazards interact
with psychological, social, institutional and cultural processes in ways that can
heighten or attenuate individual and social perceptions of risk and shape risk
behaviour’.16 Social amplification is made possible by the occurrence of a
risk-related event (an event of a physical nature) or by the potential for a risk-related
event, which has some kind of substantive or hypothetical reality.17 The
risk-related event is selected by a ‘transmitter’, in most cases the mass media or
an interpersonal network, which amplifies or attenuates the risk. The transmission
is then continued by people or institutions within society, who may also attenuate
or amplify the risk into a message (the so-called ‘ripple effect’). Such messages
lead to secondary effects, which might be financial (e.g. rises in insurance rates),
affective (e.g. anti-technology sentiment), or economic (e.g. a decline in tourist
activity).
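The amplification–attenuation chain just described can be made concrete with a small toy sketch. The following Python fragment is purely illustrative and is not part of the social amplification framework itself; the station names and all numerical factors are invented.

# Toy sketch of the amplification/attenuation chain described above.
# Station names and all numeric factors are hypothetical; the point is only
# that a modest physical signal can end in large secondary ('ripple') effects.

def amplify(signal, stations):
    """Pass a risk signal through 'transmitter' stations, each scaling it."""
    for name, factor in stations:
        signal *= factor
        print(f"after {name}: perceived risk = {signal:.2f}")
    return signal

physical_signal = 1.0  # a hypothetical, modest risk-related event
stations = [
    ("mass media", 4.0),             # dramatic coverage amplifies
    ("interpersonal network", 1.5),  # word of mouth amplifies further
    ("regulator statement", 0.8),    # official reassurance attenuates
]

perceived = amplify(physical_signal, stations)
if perceived > 3.0:  # an arbitrary threshold for secondary effects
    print("secondary effects: insurance rates rise, tourist activity declines")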
The model of social amplification has been criticized, primarily for its
simplistic view of the mechanisms of risk amplification and risk attenuation.
1.2. Trust
One of the most likely explanations for the failures of risk communication
initiatives is that reactions to risk communication are influenced not only by the
message content and the hazards themselves, but also by trust in those responsible for
providing the information.18–25 This distrust of policy makers and industry officials
stems from past experience or social alienation.26
Leiss27 points out that in the early stages of risk communication, between
1975 and 1984, the main concern of the technical experts was to provide accurate
numerical information. It was believed that if a risk was estimated correctly then
the public would believe the experts and their recommendations on the basis of
their expertise. However, public opposition to risk-based decision making resulted
in experts expressing an ‘open contempt toward the public perception of risk’,
which was discounted as being irrational.27 This resulted in further public distrust
of experts, whose actions were viewed as arrogant, self-serving, and
reflecting a ‘hidden agenda’ of vested interests. Today, experts realize that public
trust is extremely important if they are to achieve effective risk communication.28
Trust, once lost, is very difficult to regain.29 It is far easier to destroy trust than
to build it, particularly as trust-undermining events tend to take the form of specific
events or accidents whereas trust-building events are often fuzzy or indistinct.
Barber30 has identified some reasons that have contributed to a decline in trust in
science and the professions more generally. He suggested, for example, that the
increased influence that professions have over people’s welfare, the high value
placed on equality and a better-educated public all contributed to this trend. The
political issue of who makes important decisions for others is central to recent
discussions of reactions to potential technological hazards.31 Perhaps a greater
understanding of the phenomena that create and destroy trust could contribute to
resolving social, environmental and political problems.
2. Trust and risk communication today
Without public trust in authorities/regulators it is very difficult to assemble a
successful risk communication strategy.32 There is a direct relationship between
high public trust in authority and low perceived risk and vice versa;33,34 indeed,
it is possible to communicate issues of high uncertainty in a top-down fashion,
when the public trusts authorities/regulators.35,36
What exactly is trust? Trust can be an expression of confidence between the
parties in an exchange transaction37–39 and can be both process/system- or
outcome-based. For example, in some cases, the public will trust regulators even
if they do not agree with a regulatory decision, as long as they see the process
as credible, i.e. fair, competent and efficient. However, in most cases, the public
judges regulators on their past decisions. If the public perceives the regulator as
competent, fair and efficient, based on previous decisions, the public is likely to
trust these regulatory bodies in the future. I use the term trust in the sense of a
complexity-reduction thesis, in which the public delegates to authority. That is
to say, trust means the acceptance of decisions by constituents without questioning
the rationale behind them. In such a case, constituents are in effect agreeing to accept
a ‘risk judgement’ made by the regulators.40 The three most important components
of trust are fairness, competence, and efficiency.41,42
2.1. Fairness
Impartiality and fairness are important elements of any regulatory decision that
will have an impact on public trust.43–48 There are two ways to measure fairness
in regulation, either via the process itself or through the outcome of the process.
Fairness is usually defined by a view of the process or outcome as being impartial.
Did the regulators take everybody’s interests into account, and not just those of
certain powerful industrial bodies? If the regulators are not seen as impartial or
fair they are unlikely to gain trust.
2.2. Competence
Public perception of risk managers’ competence is viewed as the most important
component of trust.49–51 Did the regulators handle the process as proficiently as
possible? Did the risk managers have the necessary scientific and practical
background to deal with the range of issues associated with the process?
2.3. Efficiency
The third component of trust, efficiency, can be viewed as how well taxpayers’
money is used in the regulatory process (in saving lives or safeguarding the
environment).52,53 The efficiency argument is particularly important during
periods of economic stress, when levels of government expenditure have
significant effects on the public’s welfare and state of well-being.54 The concept
of relating efficiency to trust has not been much developed because what
economists or technocrats see as inefficient – such as spending public funds on
cleaning up contaminated land sites (e.g. the US Superfund project) – is seen
by the public as very important, for reasons other than efficiency.55–58
3. Risk communication and present risk management tools – how to
rebuild public trust
3.1. Description of strategies that have been implemented to rebuild trust
What is the best way to deal with the decline in public trust? One view is to
decrease public involvement, arguing that the public already has too much
influence on risk regulation, resulting both in the wrong types of problems being
prioritized59 and in inefficient decision making.60 Others argue that only by
increasing public deliberation, from as early a stage as possible, can public trust be
increased, leading to less public opposition and higher compliance with any
regulatory measures put in place.61–63
3.2. Deliberation
In ancient Greece, citizens participated in policy making, and direct democracy
modelled on the Greek example was also practised in a few small cantons in central
Switzerland in the 13th century. In the free cities of Italy (e.g. Venice), there was
citizen participation in the early Renaissance.64 However, it is fair to say that it
was not until the Enlightenment period of the late Renaissance that the
fundamental elements of democracy – the division of power, and equal opportunity for
political action such as voting and running for political office – were articulated.65
Following the Renaissance, and significantly after the US and French
revolutions of the late 18th century, when democracy began to bloom, public
involvement in the policy making process became firmly established,
in theory if not always in practice.
In the United States, some commentators argue that participation has been a
recurrent theme in American history, but that demands for participation have
increased over time as the Government has expanded. For example, referenda –
a form of public participation serving as formal interest representation at the national level
– developed in the early part of the 20th century. By the end of the 1930s, the
basic types of public participation were well established and used. Following the
Second World War, participation in the policy making process continued to grow
after the Administrative Procedure Act was passed in 1946. This called for due
process and the public’s right to comment, and gave opportunity for hearings. The
1966 Freedom of Information Act granted similar privileges. These Acts have
been termed the ‘old’ school of participation, in which participation was seen as
a privilege, open only to those organizations that had the resources to
take part.66
During the 20 years after 1960, wide public participation was seen as a
necessary element in many federal statutes as an important contribution to
democracy and to the quality of the decision making process itself. What
distinguishes the ‘new’ from the ‘old’ school is that participation is now seen as
a right.
The first ‘experiment’ with mass public participation was Lyndon Johnson’s
so-called Great Society initiative, where public participation contributed to
defining and establishing anti-poverty programs.66,67 In the 1970s, participation
in the policy making process grew in popularity. The National Environmental
Policy Act of 1970 called for public participation in the preparation of
Environmental Impact Assessments, which were enforced by the US Federal
Courts. In addition, the so-called reformation of the appeal law in the late 1960s
and early 1970s led to the use of fairness and equity measures to protect new
classes of interests under an expanding government. Undoubtedly, however, the
greatest impetus for the growth of participation in the 1970s was from the increase
in federal statutory innovations. Of all the participation provisions enacted in the
1970s, 60% were based on a Government statute.68
In the 1980s, administrative enthusiasm for participation in the policy making
process declined in the United States government. The Reagan Administration cut
funding for participation initiatives, viewing participation in the policy making
process as a way for the opposition to influence it. The Reagan Administration
instead promoted the use of cost-benefit analysis as an approach to setting
environmental standards69 (see the discussion of the rational risk management approach below),
something that had originally been put in place in the 1946 Administrative
Procedure Act and used (on a small scale) by the Nixon Administration.70 This
changed again in the 1990s, when participation grew in popularity at both the
national and state levels. On the national level, federal reform proposals, for
example, have been used to make agencies more responsive to smaller businesses,
while states such as Florida and North Carolina have adopted participation
techniques for negotiated rule making.71
In Europe, deliberation grew in prevalence in the 1970s when it was used as
an aid to urban planning in many communities.72 The purpose was to understand
better and to incorporate into the decision making process public values and
preferences.73 This work led to the development of various public participation
techniques ranging from consensus conferences, to advisory panels and citizen
juries.74
Deliberation itself is a style of exchange of arguments with its origins in
theology. There it refers to the Laws of the Catholic Church, in which there was
an exchange of arguments among equals or stakeholders. Such exchanges are now
commonly found among experts, such as in a Royal Commission in the UK.
Deliberation can be used to refer to an exchange of ideas between the public and
interest groups, policy makers and industry representatives. It refers to the
involvement of the public and various interest groups in a multilevel framework
in characterizing the risk that is to be managed.75
Deliberation has four main purposes:76–78 normative democracy, equity and
fairness, more effective risk communication, and the relativism of knowledge.
Equity and fairness
Public participation levels the playing field by providing citizens with an
opportunity to influence their governing. This is an ideological perspective and
its proponents argue that we live in an unjust capitalist society in which risks are
everywhere and fall disproportionately on the poor, as burdens and benefits
are not equally distributed.79–81 The proponents do not want economists telling the
public what the goals should be. Economists, they argue, cannot tell us what is
an intolerable risk; involved in discussion, the affected public can help to decide
what burdens are tolerable and the concept of fairness gets due consideration.82
More effective risk communication
In both Europe and the United States, the use of deliberation did not become
popular with risk managers until the advent of dialogue risk communication in the
late 1980s.83–86
Risk communication studies indicated that the most common form of risk
communication, ‘top-down’, was not successful in alleviating public fears.87–89
The main reasons identified for this failure were poor communication among the
experts themselves and their disdainful response to public opposition,90 which
contributed to increased public distrust of experts.91
The development of dialogue risk communication techniques72,83,91–93 was
welcomed by industry and regulators, especially in the United States. Industry and
local and federal government regulators, frustrated by the difficulties in siting
plants and dumping and burning wastes, were keen not only to learn how to
increase public trust via more active engagement but also to gain information on
the affected citizens’ preferences by having the citizens directly involved in the
policy making process.71 It is this supposed ability to increase public trust that has
made dialogue risk communication very much in vogue at the moment.73,74,94–95
Public/interest group participation has been identified as important in
rebuilding the legitimacy of the decision-making process,74 particularly after the
1995 Brent Spar crisis.96 Present European policy overwhelmingly favours the
increase of public and interest group involvement in the policy making process,
be they based in the EU, Sweden or the UK.63 For over 100 years, most European
countries, such as Belgium, France, Sweden and the UK, had the elitist model in
place, with little public involvement,97 but after a series of regulatory failures –
ranging from BSE in the UK to tainted chicken feed in Belgium and contaminated
blood in France – it had to be abandoned.98
3.3. Technocracy
In the United States, which historically has had much greater public involvement,
regulators, policy makers and some academics have begun to question the wisdom
of having the public involved in the policy making process. Proponents of this
technocratic perspective take the view that risk management should be left to the
experts advising government ministers and policy makers with minimal or no
public involvement. Only through strong science-led expert advice and strict peer
review will risk management ultimately work. Technocrats/experts want risk
managers (civil servants) to create outcomes that citizens, after careful
deliberation and training in relevant sciences, would want the democracy to
produce.99 They see themselves as delegated agents of lay citizens who lack the
time, expertise, resources and cognitive capacity to make complex risk-management decisions. Notions of fairness as well as efficiency are important for
technocrats who are reluctant to accept stakeholder-based decisions as well as
those based on opinion polls or raw popular opinion. Involving the public and
interest groups in a deliberative fashion can lead to inefficiencies both in time and
funds, to the wrong prioritization of the hazards, to unforeseen difficulties as well
as surprises, breeding distrust. By leaving risk management to experts, who know
the issue better than anyone else, society benefits. That said, technocrats argue
that some form of public participation is needed to ensure accountability, and to
force technocrats/experts to formulate decisions that are understood by the
public.100 The technocrats are experts and know the area that they are set to
regulate better than anyone else. They serve as advisers to civil servants and
ministers via expert advisory councils and agencies; they are not part of the
politically appointed establishment, but rather a politically insulated group that
has been assigned to deal with risks.
The technocratic risk management approach is well mapped out by Graham and
Hartwell who argue that regulation of environmental and health problems should
be based on the following criteria:
• Scientific expertise indicating that exposure to identified pollutants can
represent significant harm to the environment or human health.
• The environmental problems identified should be prioritized by some
type of ‘comparative risk process’ so as to ensure efficient use of
resources.
• To avoid risk–risk trade-offs, the proposed regulations should be
shown to reduce the risks of the targeted pollutants to a greater extent
than they increase other risks to the environment.
• The economic costs of the proposed actions must be reasonably related
to the degree of risk reduction.
Overall, the view is that regulatory reforms should be based on risk criteria
drawn from economic and scientific spheres, avoiding regulatory arrangements
that may satisfy the concerned public, but which could have negative effects on
the environment as a whole.
Economic risk management alternative. The technocratic approach is the
opposite of the deliberative one. Ruckelshaus, the former US EPA Administrator,
for example, argued in the early 1980s that having scientists and experts
characterize the risks and carry out the risk assessments would restore the
credibility of the US EPA.101–104 The process would not only lead to more
efficient and competent decisions, but also to greater public trust in the institution.
3.4. Rational risk policy – efficiency
Rational risk policy takes the view that there is only a limited amount of funding
available for risk management and that this funding should be used in the best
possible way. It differs from the technocratic approach in two ways: first, it wants
risk managers to create outcomes that would have been created by perfectly
functioning markets (had such markets existed); only efficiency counts and there
is no room for public or stakeholder involvement. Second, it argues for risks
to be individualized: the individual should decide for him/herself whether or not
a risk is worth taking. For example, with warning labels on
cigarettes, an individual should know that by smoking he/she increases the chance
of a shortened lifespan through lung cancer. In a free market with warning
labels, individuals can decide which risks to take and which not. In such a
society, insurance is also a major factor. If individuals feel exposed they can take out
insurance, thereby hedging their exposure.105
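As a purely illustrative sketch of this individualized-risk logic, the choice between bearing a labelled risk and insuring against it reduces to an expected-cost comparison. The probability, loss and premium below are invented for illustration and come from no study cited here.

# Illustrative expected-cost comparison for individualized risk-taking.
# All numbers are hypothetical: an informed individual weighs bearing a
# labelled risk against paying a premium to hedge it via insurance.

p_loss = 0.01       # assumed annual probability of the adverse outcome
loss = 100_000.0    # assumed cost if the outcome occurs
premium = 1_200.0   # assumed annual insurance premium

expected_cost_uninsured = p_loss * loss  # bear the risk oneself
expected_cost_insured = premium          # transfer the risk to an insurer

print(f"uninsured expected cost: {expected_cost_uninsured:,.0f}")
print(f"insured cost (premium):  {expected_cost_insured:,.0f}")

if premium < expected_cost_uninsured:
    print("hedging is cheaper even for a risk-neutral individual")
else:
    print("a risk-neutral individual would self-insure; a risk-averse one may still hedge")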
The two leading proponents of this view are Kip Viscusi and Richard
Zeckhauser.106–109 They argue that present day regulation is killing people and
costing unnecessarily large amounts of money. As Dana argues:
The central thesis of the critique is that government could achieve the
designated ends of environmental regulation at a much lower social cost by
replacing rigid ‘command and control regulation’ with a more market-oriented
system of tradeable pollution rights, pollution taxes, and monetary incentives for
pollution prevention. Proponents claimed that market-oriented reforms would
reduce industry’s compliance costs and government’s enforcement costs.
Moreover, a market-oriented system, unlike a command and control system,
would give industry an ongoing incentive to develop better pollution prevention
technology.110
The main thrust of Viscusi and Zeckhauser’s argument is that the public and
policy makers are prone to certain biases, strongly grounded in cognitive
psychology. The most common arguments are:111,112
• An underestimation of large risks and an overestimation of small ones.
• Greater value attached to eliminating a hazard entirely than to reducing the risk.
• Greater concern about visible, dramatic and well-publicised risks.
• More concern about low-probability, high-consequence risks (e.g. a nuclear plant accident or a plane crash) than about high-probability, low-consequence risks (a car crash).
• More concern about artificial than natural risks.
These biases can lead to irrational regulation, affecting decisions that policy
makers must take on a daily basis. They are illustrated in the EPA example
mentioned below, but they also show through in many other Federal policies.
Other examples are seen in US Food and Drug Administration legislation,
where regulation of new synthetic chemicals is more frequent than that of natural
chemicals.
Biases can affect risk management policies in many ways. Irrational fears
among the public caused by these biases, for example, can affect local and national
policy makers if they perceive that a particular regulation may be popular among
the voting public. This was seen in Clinton and Gore’s 1996 campaign promises to
continue funding the clean-up of Superfund sites, even though research had
shown that this would not be efficient in terms of lives saved.
Another issue that rational risk managers see as problematic is the intentional
conservative bias in cancer risk assessments, which extrapolate animal data (e.g.
from mice) to humans and may thereby have led to multiple overestimates of risk.113
The basis of rational risk policy is the 90–10 principle, in which government
regulators may incur 90% of the cost to address the last 10% of the risk.114 Hence,
reducing the risk of a particular problem to absolutely zero is extremely inefficient.
Viscusi applies the 90–10 hypothesis to the Superfund case example. His
calculations show that the first 5% of expenditure eliminates 99.46% of the total
expected cases of cancer averted by hazardous waste clean-up efforts. The
remaining 95% of the expenditure leads to virtually no health risk reduction (Ref.
114, pp. 99–100). Moreover, these calculations show that the mean value of a life
saved by the Superfund clean-up is a massive $11.7 billion. Critics point out,
however, that this calculation focuses on existing and not future risks. Superfund
was put in place not to remediate existing risks, but rather to prevent potential
risks by cleaning up sources of exposure before a risk is made real.
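Viscusi’s published split makes the cost-effectiveness asymmetry easy to check. The following minimal sketch compares the cost per cancer case averted in the first tranche of spending with that in the last; total expenditure and total cases averted are left symbolic, since they cancel in the comparison.

# Cost per cancer case averted, using the split quoted above (Ref. 114):
# the first 5% of Superfund expenditure averts 99.46% of expected cases,
# the remaining 95% averts the other 0.54%. Total expenditure E and total
# cases C cancel in the comparison, so symbolic values of 1.0 suffice.

E = 1.0  # total expenditure (symbolic)
C = 1.0  # total expected cases averted (symbolic)

cost_per_case_first = (0.05 * E) / (0.9946 * C)  # first tranche of spending
cost_per_case_last = (0.95 * E) / (0.0054 * C)   # last tranche of spending

ratio = cost_per_case_last / cost_per_case_first
print(f"the last 95% of spending is roughly {ratio:,.0f} times less cost-effective")
# -> about 3,500 times, which is why driving residual risk to zero is so costly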
Under a rational risk policy, the cost of saving a life or avoiding an illness or
injury should be the same across all government departments. When this is not
the case, safety can actually be reduced, as funding is diverted from
effective life saving activities to less effective ones. Some industries, because of how
the public perceives them, face higher regulatory bands, as measured per life
saved, than others. The nuclear industry is notorious for putting forward regulatory
measures that would cost millions of dollars per life saved and which, if
implemented, would take funding away from road or railway safety, where
regulatory measures are more cost effective.114
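The reallocation point can be illustrated with two hypothetical programmes; the dollar figures below are invented, and only the contrast between expensive nuclear measures and cheaper transport measures follows the text.

# Illustrative reallocation under an equal cost-per-life-saved principle.
# The dollar figures are invented; only the contrast (nuclear measures far
# costlier per life saved than transport measures) follows the text above.

budget = 100_000_000.0  # a fixed regulatory budget, in dollars

cost_per_life_saved = {
    "nuclear retrofit": 50_000_000.0,  # hypothetical dollars per life saved
    "railway safety": 2_000_000.0,     # hypothetical dollars per life saved
}

for programme, cost in cost_per_life_saved.items():
    print(f"{programme}: saves {budget / cost:.0f} lives with the full budget")

# Spending the pot on the cheaper programme saves 50 lives rather than 2:
# unequal costs per life saved mean lives are lost to misallocation.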
Zeckhauser and Viscusi argue that rational risk policies, based on economic
criteria, should take precedence over deliberative procedures, as bias in the latter can,
in effect, allow people to die unnecessarily. One of the fundamental reasons for
the success of the rational risk approach to date has been the so-called ‘no losses’
phenomenon, where:
• regulatory costs are reduced (no litigation);
• industry does not face extensive uncertainty related to costs and
environmental benefits, and there is no excessive conservatism in the
cancer risk assessments.115
4. Strategies for trust building – is there in fact one simple solution?
It can be seen that there are several possible solutions for dealing with the decline
in public trust. These should ideally be matched up with the three components of trust,
that is, fairness, competence and efficiency. To deal with public distrust caused
by a perceived lack of fairness, for example, some form of public/stakeholder involvement
may be necessary, to ensure that the regulators/public authorities have the public’s
best interest at heart. However, if the lack of public trust is caused by
incompetence, greater involvement of experts (technocracy) may be required, and
if the process is seen as inefficient then a rational risk analytical approach may
be needed. In sum, in addressing how best to communicate with the general public,
I am not convinced that greater public dialogue (the most popular tool at the
present time) is necessarily the best one for increasing public trust in regulators.
The simplest way to address the trust conundrum is to test for public trust (via
open-ended face-to-face interviews with random samples of the population where
the issue has been raised) and, based on the results, to develop a communication
programme and act accordingly. In so doing, the following four risk communication
programmes can be developed (a simple sketch of this decision logic follows the list).
(a) If the interviews show that there is public trust in authorities,
top-down risk communication with the general public will suffice.
Such a strategy will work even for issues of extreme uncertainty,
as long as the message being communicated can be made simple and
understandable.116
(b) If the interviews show that the public do not trust authorities because
they are not seen as fair, then some form of a dialogue needs to be
built up with the public to ensure successful risk communication.
(c) If the interviews show that the public do not trust authorities because
they are not seen as competent, competent senior civil servants and
scientists need to be hired before a top-down risk communication
process can commence.
(d) If the interviews show that the public do not trust authorities because
they are seen as inefficient, then competent well respected economists
need to be brought on board before a top-down risk communication
process can commence.
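A minimal sketch of the decision logic in (a)–(d), assuming the interview results have already been coded into the three components of trust; the function and its boolean inputs are hypothetical, not a published instrument from the risk communication literature.

# Minimal sketch of the four-way decision logic in (a)-(d). The boolean
# inputs are assumed to come from coded open-ended interview results; both
# the function and its inputs are hypothetical, not a published instrument.

def choose_strategy(trusted, seen_as_fair, seen_as_competent):
    """Map interview findings onto one of the four communication programmes."""
    if trusted:
        return "(a) top-down risk communication suffices"
    if not seen_as_fair:
        return "(b) build a dialogue with the public first"
    if not seen_as_competent:
        return "(c) hire competent senior civil servants and scientists first"
    # remaining distrust is attributed to perceived inefficiency
    return "(d) bring respected economists on board, then communicate top-down"

print(choose_strategy(trusted=False, seen_as_fair=True, seen_as_competent=False))
# -> (c) hire competent senior civil servants and scientists first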
This degree of flexibility is presently not available to many risk managers. In the
UK, for example, the present New Labour government is arguing (at least in
public) for greater public and stakeholder participation in the policy making
process, as it sees this as the best way of increasing public trust in policy makers
(e.g. UK Strategy Unit 2002). This view is shared by the European Commission,
which takes the view that only through greater stakeholder involvement, as well
as transparency, can trust be restored. In the United States, however, the Office
of Management and Budget is increasingly advocating the use of strict cost-benefit
analysis so as to help build a more competent and efficient regulatory strategy
(thereby also increasing public trust).117
5. Conclusions
Risk communication is never easy, particularly when dealing with
complicated issues fraught with uncertainty. However, if researchers and
practitioners address the crucial component of public trust or distrust, the
communication process can be more successful. Hence, there is a need to
establish, firstly, whether the public trusts the authorities (of course, on a
case-by-case basis), and, secondly, to address the reasons why, if the public does not
trust the authorities. To do this, researchers and practitioners need first to break
down trust into its three components, namely fairness, competence and efficiency,
and then to test for trust. Based on the outcome of these tests, three possible
solutions can be proposed: for public distrust caused by a lack of
fairness, deliberative techniques; for a lack of competence, technocratic measures;
and for inefficiency, rational risk strategies. Finally, we should be wary of accepting
one-size-fits-all solutions to risk communication, no matter who is preparing them,
as implementing faulty solutions may actually increase public distrust rather than
decrease it.
Acknowledgements
The research on which this article is based was funded by the Swedish Research
Foundation via the Centre for Public Sector Research, University of Gothenburg,
where the author is a visiting professor. The author wrote large parts of this article
when on a sabbatical at the Harvard Centre for Risk Analysis, Harvard School
of Public Health. I am indebted to John Graham and Ortwin Renn for their
comments on an earlier version of this article.
References
1. G. F. White (1945) Human Adjustment to Floods: A Geographical
Approach to the Flood Problem in the United States (Chicago: University of
Chicago Press, Department of Geography).
2. B. Fischhoff, S. Lichtenstein, P. Slovic, S. L. Derby and R. L. Keeney
(1981) Acceptable Risk (New York: Cambridge University Press).
3. P. Slovic (1987) Risk perception. Science, 236, 280–285.
4. National Research Council (1989) Improving Risk Communication
(Washington DC: National Academy Press).
5. P. C. Stern (1991) Learning through conflict: a realistic strategy for risk
communication. Policy Sciences, 24, 99–119.
6. W. Leiss (1996) Three phases in the evolution of risk communication
practice. Annals of the American Academy of Political and Social
Science, 545, 85–94.
7. R. S. Adler and R. D. Pittle (1984) Cajolery or command: are
educational campaigns an adequate substitute for regulation? Yale
Journal on Regulation, 1, 159–193.
8. G. T. Cvetkovich, G. B. Keren and T. C. Earle (1986) Prescriptive
considerations for risk communications. Paper presented at the meeting
of the International Research Group on Risk Communication.
9. P. Slovic and D. MacGregor (1994) The Social Context of Risk
Communication (Eugene, Oregon: Decision Research).
10. R. E. Kasperson and I. Palmlund (1987) Evaluating risk
communication. In V. T. Covello, D. B. McCullum and M. T. Pavlova
(Eds), Effective Risk Communication: The Role and Responsibility of
Government and Non-government Organisations (New York: Plenum).
11. B. Fischhoff (1995) Risk perception and communication unplugged:
twenty years of process. Risk Analysis, 15, 137–145.
12. W. Leiss (1996) Three phases in the evolution of risk communication
practice. Annals of the American Academy of Political and Social
Science, 545, 85–94.
13. B. Fischhoff (1995) Risk perception and communication unplugged:
twenty years of process. Risk Analysis, 15, 137–145.
14. W. Leiss (1996) Three phases in the evolution of risk communication
practice. Annals of the American Academy of Political and Social
Science, 545, 85–94.
15. R. E. Kasperson, D. Golding and J. X. Kasperson (1998) Risk, trust,
and democratic theory. In G. Cvetkovich and R. E. Löfstedt (Eds),
Social Trust and the Management of Risk (London: Earthscan).
16. O. Renn (1991) Risk communication and the social amplification of
risk. In R. E. Kasperson and P. J. Stallen (Eds), Communicating Risks
to the Public: International Perspectives (Dordrecht: Kluwer).
17. R. E. Kasperson, O. Renn, P. Slovic, H. S. Brown, J. Emel, R.
Goble, J. X. Kasperson and S. Ratick (1988) The social amplification
of risk: a conceptual framework. Risk Analysis, 8(2), 177–187.
18. T. Earle and G. Cvetkovich (1995) Social Trust: Toward a
Cosmopolitan Society (Westport, CT: Praeger).
19. R. E. Kasperson, D. Golding and S. Tuler (1992) Siting hazardous
facilities and communicating risks under conditions of high social
distrust. Journal of Social Issues, 48, 161–172.
20. W. Leiss (1996) Three phases in the evolution of risk communication
practice. Annals of the American Academy of Political and Social
Science, 545, 85–94.
21. O. Renn (1991) Risk communication and the social amplification of risk. In R. E. Kasperson and P. J. Stallen (Eds), Communicating Risks to the Public: International Perspectives (Dordrecht: Kluwer).
22. M. Siegrist (2000) The influence of trust and perceptions of risks and benefits on the acceptance of gene technology. Risk Analysis, 20, 195–203.
23. M. Siegrist (2003) Perception of gene technology and food risks: results of a survey in Switzerland. Journal of Risk Research, 6(1), 45–60.
24. P. Slovic (1993) Perceived risk, trust, and democracy. Risk Analysis, 13(6), 675–682.
25. P. Slovic and D. MacGregor (1994) The Social Context of Risk Communication (Eugene, Oregon: Decision Research).
26. R. E. Löfstedt and T. Horlick-Jones (1999) Environmental regulation in the UK: politics, institutional change and public trust. In G. Cvetkovich and R. E. Löfstedt (Eds), Social Trust and the Management of Risk (London: Earthscan).
27. W. Leiss (1996) Three phases in the evolution of risk communication practice. Annals of the American Academy of Political and Social Science, 545, 85–94.
28. UK Strategy Unit (2002) Risk: Improving Government’s Capability to Handle Risk and Uncertainty (London: Cabinet Office Strategy Unit).
29. P. Slovic (1993) Perceived risk, trust, and democracy. Risk Analysis, 13(6), 675–682.
30. B. Barber (1983) The Logic and Limits of Trust (New Brunswick, NJ: Rutgers University Press).
31. U. Beck (1992) Risk Society (London: Sage).
32. UK Strategy Unit (2002) Risk: Improving Government’s Capability to Handle Risk and Uncertainty (London: Cabinet Office Strategy Unit).
33. R. E. Löfstedt (1996) Risk communication: the Barsebäck nuclear plant case. Energy Policy, 24(8), 689–696.
34. P. Slovic (1993) Perceived risk, trust, and democracy. Risk Analysis, 13(6), 675–682.
35. R. E. Löfstedt (1996) Risk communication: the Barsebäck nuclear plant case. Energy Policy, 24(8), 689–696.
36. R. E. Löfstedt (2001) Risk and regulation: boat owners’ perceptions of recent antifouling legislation. Risk Management: International Journal, 3(3), 33–46.
37. R. Axelrod (1984) The Evolution of Cooperation (New York: Basic Books).
38. P. Bateson (1988) The biological evolution of cooperation and trust. In D. Gambetta (Ed.), Trust: Making and Breaking Cooperative Relations (Oxford: Basil Blackwell).
39. L. G. Zucker (1987) Institutional theories of organization. Annual Review of Sociology, 13, 443–464.
40. T. Earle and G. Cvetkovich (1995) Social Trust: Toward a Cosmopolitan Society (Westport, CT: Praeger).
41. O. Renn and D. Levine (1991) Credibility and trust in risk communication. In R. E. Kasperson and P. M. Stallen (Eds), Communicating Risks to the Public: International Perspectives (Dordrecht: Kluwer).
42. W. K. Viscusi (1998) Rational Risk Policy: The 1996 Arne Ryde Memorial Lectures (New York: Oxford University Press).
43. C. Albin (1993) The role of fairness in negotiation. Negotiation Journal, 9, 223.
44. J. Linnerooth-Bayer and K. B. Fitzgerald (1996) Conflicting views on fair siting processes: evidence from Austria and the US. Risk: Health, Safety and Environment, 7, 119–134.
45. O. Renn and D. Levine (1991) Credibility and trust in risk communication. In R. E. Kasperson and P. M. Stallen (Eds), Communicating Risks to the Public: International Perspectives (Dordrecht: Kluwer).
46. O. Renn, T. Webler and P. Wiedemann (Eds) (1995) Fairness and Competence in Citizen Participation (Dordrecht: Kluwer).
47. O. Renn, T. Webler and H. Kastenholz (1996) Procedural and substantive fairness in landfill siting. Risk: Health, Safety and Environment, 7(2), 145–168.
48. H. P. Young (1994) Equity: In Theory and Practice (Princeton: Princeton University Press).
49. B. Barber (1983) The Logic and Limits of Trust (New Brunswick, NJ: Rutgers University Press).
50. T. R. Lee (1986) Effective communication of information about chemical hazards. The Science of the Total Environment, 51, 149–183.
51. P. Slovic (1993) Perceived risk, trust, and democracy. Risk Analysis, 13(6), 675–682.
52. R. W. Hahn (Ed) (1996) Risks, Costs and Lives Saved: Getting Better Results from Regulation (New York: Oxford University Press).
53. R. E. Löfstedt and E. Rosa (1999) The strength of trust in Sweden, the UK, and the US: some hypotheses. Paper prepared for TRUSTNET, Paris.
54. C. D. Foster and F. J. Plowden (1996) The State under Stress (Buckinghamshire: Open University Press).
55. US Environmental Protection Agency (1987) Unfinished Business: A Comparative Assessment of Environmental Problems (Washington DC: US EPA).
56. US Environmental Protection Agency (1990) Reducing Risk: Setting Priorities and Strategies for Environmental Protection (Washington DC: US EPA).
57. J. D. Graham and J. K. Hartwell (1997) The Greening of Industry: A Risk Management Approach (Cambridge, MA: Harvard University Press).
58. W. K. Viscusi (1998) Rational Risk Policy: The 1996 Arne Ryde Memorial Lectures (New York: Oxford University Press).
59. S. Breyer (1993) Breaking the Vicious Circle: Toward Effective Risk Regulation (Cambridge, MA: Harvard University Press).
60. C. Coglianese (2001) Is consensus an appropriate basis for regulatory policy? In E. Orts and K. Deketelaere (Eds), Environmental Contracts: Comparative Approaches to Regulatory Innovation in the United States and Europe (Dordrecht: Kluwer Law International).
61. National Research Council (1996) Understanding Risk (Washington DC: National Academy Press).
62. O. Renn (1999) A model for an analytic-deliberative process in risk management. Environmental Science and Technology, 33(18), 3049–3055.
63. Royal Commission on Environmental Pollution (1998) Setting Environmental Standards (London: The Stationery Office).
64. O. Renn (1999) A model for an analytic-deliberative process in risk management. Environmental Science and Technology, 33(18), 3049–3055.
65. O. Renn (1999) A model for an analytic-deliberative process in risk management. Environmental Science and Technology, 33(18), 3049–3055.
66. D. Fiorino (1989) Environmental risk and democratic processes: a critical review. Columbia Journal of Environmental Law, 14(2), 501–547.
67. J. Rossi (1997) Participation run amok: the costs of mass participation for deliberative agency decision making. Northwestern University Law Review, 92(1), 173–250.
68. M. Landy, M. J. Roberts and S. R. Thomas (1994) The Environmental Protection Agency: Asking the Wrong Questions from Nixon to Clinton (New York: Oxford University Press).
69. D. Fiorino (1989) Environmental risk and democratic processes: a critical review. Columbia Journal of Environmental Law, 14(2), 501–547.
70. L. Lundqvist (1980) The Hare and the Tortoise: Clean Air Policies in the US and Sweden (Ann Arbor: University of Michigan Press).
71. J. Rossi (1997) Participation run amok: the costs of mass participation for deliberative agency decision making. Northwestern University Law Review, 92(1), 173–250.
72. O. Renn (1999) A model for an analytic-deliberative process in risk management. Environmental Science and Technology, 33(18), 3049–3055.
73. Royal Commission on Environmental Pollution (1998) Setting Environmental Standards (London: The Stationery Office).
74. O. Renn, T. Webler and P. Wiedemann (Eds) (1995) Fairness and Competence in Citizen Participation (Dordrecht: Kluwer).
75. National Research Council (1996) Understanding Risk (Washington DC: National Academy Press).
76. N. Pidgeon (1997) Stakeholders, decisions and risk. In A. Mosleh and R. A. Bari (Eds), Probabilistic Safety Assessment and Management, PSAM 4, Vol. 3, pp. 1583–1588.
77. O. Renn (1999) A model for an analytic-deliberative process in risk management. Environmental Science and Technology, 33(18), 3049–3055.
78. J. Rossi (1997) Participation run amok: the costs of mass participation
for deliberative agency decision making. Northwestern University Law
Review, 92(1), 173–250.
79. U. Beck (1992) Risk Society (London: Sage).
80. P. Blaikie, T. Cannon, I. Davis and B. Wisner (1994) At Risk: Natural
Hazards, People’s Vulnerability and Disasters (London: Routledge).
81. H. P. Young (1994) Equity: In Theory and Practice (Princeton:
Princeton University Press).
82. K. S. Shrader-Frechette (1990) Scientific method, anti-foundationalism
and public decision-making. Risk: Health, Safety and Environment, 1,
23–41.
83. B. Fischhoff (1995) Risk perception and communication unplugged:
twenty years of process. Risk Analysis, 15, 137–145.
84. B. Fischhoff, S. Lichtenstein, P. Slovic, S. L. Derby and R. L. Keeney
(1981) Acceptable Risk (New York: Cambridge University Press).
85. P. Slovic (1993) Perceived risk, trust, and democracy. Risk Analysis,
13(6), 675–682.
86. B. Wynne (1992) Risk and social learning. In S. Krimsky and D.
Golding (Eds) Social Theories of Risk (Westport CT: Praeger).
87. R. S. Adler and R. D. Pittle (1984) Cajolery or command: are
educational campaigns an adequate substitute for regulation? Yale
Journal on Regulation, 1, 159–193.
88. G. T. Cvetkovich, G. B. Keren and T. C. Earle (1986) Prescriptive
considerations for risk communications. Paper presented at the meeting
of the International Research Group on Risk Communication.
89. P. Slovic and D. MacGregor (1994) The Social Context of Risk
Communication (Eugene, Oregon: Decision Research).
90. E. Siddall and C. R. Bennett (1987) A people-centered concept on
society-wide risk management. In R. S. McColl (Ed.), Environmental
Health Risks: Assessment and Management (Waterloo, Ontario:
University of Waterloo Press).
91. W. Leiss (1996) Three phases in the evolution of risk communication
practice. Annals of the American Academy of Political and Social
Science, 545, 85–94.
92. National Research Council (1989) Improving Risk Communication
(Washington DC: National Academy Press).
93. P. C. Stern (1991) Learning through conflict: a realistic strategy for risk
communication. Policy Sciences, 24, 99–119.
94. National Research Council (1996) Understanding Risk (Washington
DC: National Academy Press).
95. Presidential/Congressional Commission on Risk Assessment and Risk
Management (1997) Risk Assessment and Risk Management in
Regulatory Decision Making, Final report (Washington DC).
96. R. E. Lofstedt (1999) The role of trust in the North Blackforest: an
evaluation of a citizen panel project. Risk: Health, Safety and
Environment, 10, 7–30.
97. D. Vogel (1986) National Styles of Regulation (Ithaca: Cornell
University Press).
98. R. E. Lofstedt and D. Vogel (2001) The changing character of
regulation: a comparison of Europe and the United States. Risk
Analysis, 21(3), 399–405.
99. R. Pildes and C. R. Sunstein (1995) Reinventing the regulatory state.
University of Chicago Law Review, 62(1), 1–129.
100. S. Breyer (1993) Breaking the Vicious Circle: Toward Effective Risk
Regulation (Cambridge, MA: Harvard University Press).
101. J.D. Graham (1997) The risk management approach. In J. Graham
and K. Hartwell (Eds) The Greening of Industry: A Risk
Management Approach (Cambridge, MA: Harvard University
Press).
102. National Research Council (1983) Risk Assessment in the Federal
Government (Washington DC: National Academy Press).
103. W. Ruckelshaus (1983) Science, risk and public policy. Science, 221,
1026–1028.
104. W. Ruckelshaus (1985) Risk, science, and democracy. Issues in Science
and Technology, 1(3), 19–38.
105. H. Kunreuther, R. Ginsburg, L. Miller, P. Sagi, P. Slovic, B. Borkan
and N. Katz (1978) Disaster Insurance Protection: Public Policy
Lessons (New York: Wiley).
106. W. K. Viscusi (1992) Fatal tradeoffs: Public and Private
Responsibilities to Risk (New York: Oxford University Press).
107. W. K. Viscusi (1998) Rational Risk Policy: The 1996 Arne Ryde
Memorial Lectures (New York: Oxford University Press).
108. A. L. Nichols and R. Zeckhauser (1986) The Dangers of Caution:
Conservatism in Assessment and the Mismanagement of Risk. Advances
in Applied Micro–Economics, vol. 4, V. Kerry Smith (Ed.) (Greenwich,
CT: JAI Press), pp. 55–82.
109. R. J. Zeckhauser (1975) Procedures for valuing lives. Public Policy,
23(4), 419–464.
110. D. A. Dana (1994) Review essay: Setting environmental priorities:
the promise of a bureaucratic solution: breaking the vicious circle:
toward effective risk regulation. Boston University Law Review, 74,
366.
111. B. Fischhoff, S. Lichtenstein, P. Slovic, S. L. Derby and R. L. Keeney
(1981) Acceptable Risk (New York: Cambridge University Press).
112. P. Slovic (1987) Risk perception. Science, 236, 280–285.
113. A. L. Nichols and R. Zeckhauser (1986) The Dangers of Caution:
Conservatism in Assessment and the Mismanagement of Risk. Advances
in Applied Micro–Economics, vol. 4, V. Kerry Smith (Ed.) (Greenwich,
CT: JAI Press), pp. 55–82.
114. W. K. Viscusi (1998) Rational Risk Policy: The 1996 Arne Ryde
Memorial Lectures (New York: Oxford University Press).
115. K. Schneider (1993) New view calls environmental policy misguided.
New York Times, 21 March, p. 1.
116. O. Renn, H. Kastenholz and W. Leiss (2002) OECD-Guidance
Document on Risk Communication for Chemical Risk Management
(Paris: OECD).
117. J. D. Graham (2002) The precautionary principle. Paper presented at
the Society for Risk Analysis annual meeting, 9 December, New
Orleans.
About the Author
Ragnar E. Lofstedt is Professor of Risk Management and Director of the King’s
Centre for Risk Management at King’s College, London. He was educated at
UCLA and Clark Universities. He has been involved in many international studies
on risk management in renewable energy, telecommunications, biosafety etc. He
is editor-in-chief of the Journal of Risk Research.