VTT PUBLICATIONS 633
Special characteristics of safety critical organizations
Work psychological perspective
Pia Oedewald & Teemu Reiman
ESPOO 2007
ISBN 978-951-38-7005-8 (soft back ed.)
ISSN 1235-0621 (soft back ed.)
ISBN 978-951-38-7006-5 (URL: http://www.vtt.fi/publications/index.jsp)
ISSN 1455-0849 (URL: http://www.vtt.fi/ publications/index.jsp)
Copyright © VTT Technical Research Centre of Finland 2007
JULKAISIJA – UTGIVARE – PUBLISHER
VTT Technical Research Centre of Finland, Vuorimiehentie 3, P.O. Box 1000, FI-02044 VTT, Finland
phone internat. +358 20 722 111, fax +358 20 722 4374
VTT Technical Research Centre of Finland, Tekniikantie 12, P.O. Box 1000, FI-02044 VTT, Finland
phone internat. +358 20 722 111, fax +358 20 722 7046
Technical editing Anni Kääriäinen
Cover picture Kirsi-Maarit Korpi
Photos VTT Technical Research Centre of Finland
Edita Prima Oy, Helsinki 2007
Oedewald, Pia & Reiman, Teemu. Special characteristics of safety critical organizations. Work
psychological perspective. Espoo 2007. VTT Publications 633. 114 p. + app. 9 p.
Keywords: operational safety, safety management, human safety, environmental safety, safety-critical organizations, decision-making, nuclear power plants, aviation industry, chemical industry, oil refining industry
Abstract
This book deals with organizations that operate in high hazard industries, such as the nuclear power, aviation, oil and chemical industries. Society places great demands on these organizations to rigorously manage the risks inherent in the technology they use and the products they produce. In this book, an organizational psychology view is taken to analyse the typical challenges of daily work in these environments.

The analysis is based on a literature review of human and organizational factors in safety critical industries, and on interviews with Finnish safety experts and safety managers from four different companies. In addition, personnel interviews conducted at Finnish nuclear power plants are utilised. The authors suggest eight themes that arise as common organizational challenges across the industries, e.g. how the personnel understand the risks and what role rules and procedures play in guiding work activities.

The primary aim of this book is to contribute to nuclear safety research and the safety management discussion. However, the book is equally suitable for risk management, organizational development and human resources management specialists in different industries. The purpose is to encourage readers to consider how human and organizational factors are seen in the field they work in.
Preface
This publication is a summary of our views, which have evolved during our
research on organizational factors and safety. Our goal is to describe the internal
challenges and tensions of safety critical organizations to all those involved in
the development and risk management of such organizations. We have
especially wanted to offer analyses, as well as viewpoints from other industrial
sectors, to the nuclear power industry, which has a very strong, distinct approach
to safety management.
As psychologists we have noted that safety critical organizations form a distinct,
diverse field of research that differs in many ways from other fields of
organizational psychology. Some of the approaches developed for analysing
human and organizational performance seem almost absurd at first sight.
Nonetheless, ideas that have emerged in the safety critical human factors field
have spread to other sectors of organizational research. The goal of our own
work has been to build a bridge between traditional work psychology and safety
critical organizations. After all, it is work that these organizations engage in.
This publication is a revised version of the publication originally written in
Finnish in 2006 (VTT Publications 593: Turvallisuuskriittisten organisaatioiden
toiminnan erityispiirteet). The publication has been written as part of the CulMa
(Organizational culture and management of change) project in the Finnish
nuclear safety research programme, SAFIR. Additional funding for the revised
edition has been provided by VTT Technical Research Centre of Finland. The
authors wish to thank Finnish and Swedish power companies for open cooperation. We would also like to thank the representatives of Finnair, Kemira,
Fortum and TVO who contributed to this publication by taking part in interviews
and providing versatile information and colourful examples about the challenges
and solutions in their organizations. Thanks also go to the steering group
members, who provided valuable comments at different phases of the
publication.
Espoo, January 2007
Pia Oedewald
Teemu Reiman
Contents
Abstract................................................................................................................. 3
Preface .................................................................................................................. 4
Abbreviations........................................................................................................ 8
1. Introduction..................................................................................................... 9
1.1 Challenges to safe operations in organizations...................................... 9
1.2 Objectives and scope of the publication .............................................. 11
2. Methods ........................................................................................................ 12
3. Viewpoints on safety and safety critical organizations................................. 14
3.1 Human as a risk factor ......................................................................... 14
3.2 Organizations as open systems ............................................................ 19
3.3 Decision-making.................................................................................. 22
3.4 Tension between safety and economy ................................................. 24
3.5 Safety management ............................................................................. 25
3.6 Safety culture....................................................................................... 28
3.7 Impact of publicity on organizational operations ................................ 32
3.8 From the HRO theory to the characteristics of organizational
culture ................................................................................................. 35
3.8.1 Organizations as cultures ........................................................ 40
3.8.2 How does the safety critical nature of the domain show in the
culture of an organization?...................................................... 43
4. Special characteristics of safety critical organizations ................................. 46
4.1 Risk and safety perceptions ................................................................. 46
4.2 Motivational effects of risks and safety............................................... 51
4.3 Complexity of organizational structures and processes....................... 54
4.4 Modelling and predicting organizational performance........................ 60
4.5 Importance of training ......................................................................... 67
4.6 Role of rules and procedures ............................................................... 72
4.7 Coping with uncertainty ...................................................................... 77
4.8 Ambiguity of responsibility................................................................. 82
5. Conclusions................................................................................................... 89
References........................................................................................................... 98
Appendices
Appendix A: Challenger space shuttle
Appendix B: Piper Alpha oil rig
Abbreviations
IAEA, International Atomic Energy Agency. An international organization for nuclear power whose membership includes practically all nations that use nuclear power.
HRO, high reliability organization. A term introduced by researchers at the University of California, Berkeley, for organizations that have operated successfully (without accidents) in risky fields, such as nuclear power, air traffic control or aircraft carriers.
HSE, Health and Safety Executive. The authority in Great Britain responsible
for occupational health and safety.
NDM, naturalistic decision making. A research approach that studies how people use their experience to make decisions in field settings. Acting in real (work) situations differs from the conscious weighing of alternatives studied in controlled experiments; it is constrained by situation-specific conditions and social restrictions.
NII, Nuclear Installations Inspectorate. The body, subject to HSE, that is
responsible for nuclear power plant regulation in Great Britain.
PSA, Probabilistic Safety Assessment. PSA is an analysis method that aims to
identify and delineate the combinations of events that may lead to a severe reactor
accident, meaning damage to the reactor core. PSA assesses the probability or
frequency of each combination and evaluates the consequences of an accident.
PSA uses both logical (based on the probability of different combinations of
events) and physical models. (Sandberg 2004.)
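As a purely illustrative example of this probabilistic logic (the figures below are hypothetical and not taken from Sandberg 2004): if one event combination leading to core damage requires an initiating cooling failure with an assumed frequency of 0.001 per year and an independent back-up system failure with an assumed probability of 0.01 per demand, the frequency of that combination is

0.001 per year × 0.01 = 0.00001 (i.e. 1E-5) per year.

The overall core damage frequency is then estimated by summing such contributions over all identified event combinations.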
STUK, Finnish Radiation and Nuclear Safety Authority. A state-governed
organization that acts, among other things, as a regulatory and inspection
authority, research institution, emergency organization for nuclear and radiation
hazards, as well as provider of measurement and expert services. STUK
regulates the use of nuclear power in Finland.
1. Introduction
A company’s success is greatly affected by how effectively the organization
operates at different life cycle phases and how it reacts to unexpected challenges.
However, defining, explaining and managing organizational performance are
far from easy tasks. Numerous approaches to analysing and improving
organizational performance have been created on the basis of ideas from various
scientific disciplines. Most of the approaches are financially oriented, meaning
that the company’s key figures are used as the main criteria for performance.
This is often a reasonable premise since profitability is the prerequisite for the
existence of commercial businesses. However, organizations that produce, for example, public services may find other criteria more suitable. This also
applies to organizations that operate in fields involving significant safety risks to
the environment or society. The criteria used to assess the performance of these
safety critical organizations emphasise reliability and management of risks.
Safety is the prerequisite for their existence.
1.1 Challenges to safe operations in organizations
The starting point for this publication is the definition of effective organizations
provided by Vicente (1999). In order to be effective, an organization must be
productive, as well as financially and environmentally safe. It must also ensure
the personnel’s well-being. These goals are usually closely interlinked. A
company with serious financial difficulties will have trouble investing in the
development of safety and may even consider ignoring some safety regulations.
Financial difficulties cause insecurity in employees and may decrease their
commitment to work and weaken their input. This will have a further impact on
financial profitability or the reliability of operations. Occupational accidents are
costly to companies and may lead to a decline in reputation or loss of customers.
The close connections between the goals should not be taken to mean that
improvements in one field, such as financial profitability, automatically lead to
improvements in other fields, such as well-being or safety. On the other hand,
the goals do not necessarily conflict, for example, in the sense that
improvements to safety would threaten financial profitability. When analysing
organizational performance, one should pay attention to the methods and
concepts used to achieve the goals, as well as to the ways in which the goals are
balanced.
Modern industrial organizations are complex socio-technical systems (Vicente
1999). In addition to complexity of technology, overall system complexity arises
from the organizing of work, standard operating procedures, decision-making
routines and daily communication. Work is often specialised, meaning that many
tasks require special know-how which takes a long time to acquire. The chain of operations involves many different parties, and different technical fields need to
cooperate flexibly. Industrial work is also increasingly carried out through
various technologies. This has led to a decrease in craftwork, in which people
were able to immediately see the results of their work. In many fields, work
involves significant risks to the environment and safety of people.
In addition to inherent complexity, different kinds of internal and external
changes lead to new challenges for the management of organizational
performance. For example, organizations keep introducing new technology and
upgrading or replacing old technology. It was already found decades ago that technological changes often affect organizations much more profoundly than anticipated
(Trist & Bamforth 1951, Rice 1958). Technological changes influence the social
aspects of work, such as information flow, collaboration and power structures
(Barley 1986, Zuboff 1988). Different kinds of business arrangements, such as
mergers, outsourcing or privatisation, also have a heavy impact on social matters
(see, for example, Stensaker et al. 2002, Clarke 2003, Cunha & Cooper 2002).
The exact nature of the impact is often difficult to anticipate.
Increasing organizational complexity has been brought up especially when
serious failures, termed ‘organizational accidents’, have taken place in safety
critical fields. Such failures include, for example, the explosion of the
Challenger shuttle (see Appendix A, Presidential Commission 1986, Vaughan
1996), the fire on the Piper Alpha oil rig (see Appendix B, Cullen 1990, Wright
1994, Paté-Cornell 1993), the toxic gas leak at the Bhopal chemical plant in India in 1984, the accident at the Chernobyl nuclear power plant in 1986 and the runway collision of two airplanes in Tenerife in 1977 (see Weick 1993). These incidents
are more than “mere” accidents; in Turner’s (1978) terminology, they can be
categorised as disasters to society and the organizations involved. A disaster is
an incident that was considered to be impossible but that happened nevertheless.
It contradicts general conceptions and presumptions about safe and effective
operations, which is why the incident is difficult to understand. Disasters in
different fields have shown that organizations and the risks involved have
become so complex that insight into individual and group psychology is needed
for management of organizational safety.
1.2 Objectives and scope of the publication
This publication deals with organizations which operate in high hazard domains.
It considers how safety critical organizations differ from other organizations and
whether organizations in safety critical fields have common characteristics or
challenges that are not treated in “traditional” organizational research.
The focus of this publication is on the management of internal organizational
activities. It considers how the safety critical nature of the business affects the
daily work and decision making and what kinds of special demands it puts on
the competence and behaviour of the personnel. The characteristics of safety
critical organizations are viewed from inside out, from the perspective of how
the employees themselves perceive their work and their organizations.
The goal is to present ideas for dealing with these phenomena to management, HR and training functions, as well as to organizational and safety research. Since the authors are researchers in the field of work and
organizational psychology, purely technical, legal and financial aspects are
treated only to the extent that the interviewees have raised them spontaneously.
2. Methods
Three types of material have been used in this publication. The authors have
carried out research projects at Finnish and Swedish nuclear power plants and
the Radiation and Nuclear Safety Authority of Finland (see Reiman & Oedewald
2002b, 2004a, 2006, Oedewald & Reiman 2003, Reiman et al. 2005, Nuutinen et
al. 2003, Reiman & Norros 2002, Reiman 2007). These projects made use of the
notion of organizational culture, which was used to assess the effectiveness of
the organizations. The results and material of the research projects are now
handled as a whole to bring up common characteristics in the nuclear industry.
The material includes 61 interviews carried out in nuclear power plant
maintenance, 13 interviews carried out in a nuclear power plant engineering
organization and 24 interviews with control room employees, notes of personnel
seminars that we have arranged for some two hundred maintenance employees,
as well as small group work in different compositions with some 30 maintenance
employees responsible for a variety of duties.
The second set of material consists of international literature on safety critical
organizations. This publication reviews both scientific approaches and popular –
or at least more descriptive – studies about the accidents. Scientific analysis of
safety critical organizations has been conducted, for example, in the HRO (High Reliability Organizations) group at the University of California, Berkeley (see, for
example, La Porte 1996, Roberts 1993). Group members have analysed widely
recognised reliable organizations (such as the Diablo Canyon nuclear power
plant, two aircraft carriers and the U.S. air traffic control centre). Another
interesting theory of safety critical organizations is Charles Perrow’s (1984)
theory of normal accidents. Normal Accidents Theory (NAT) was developed by
analysing accidents in various industrial fields. According to the theory, modern
industrial systems are so complex that accidents in them are normal and nearly
unavoidable events. Other interesting research will also be discussed briefly.
The third part of the material consists of seven interviews conducted at four
organizations, each of which is considered to be a safety critical organization. The interviewees' duties and organizations are listed below:
− Safety Manager, Fortum Corporation
− Senior Vice President, Administration and Human Resources, Finnair plc
− Vice President, Corporate Security, Finnair plc
− Vice President, Aviation Security, Finnair plc
− Vice President, Environment and Safety, Kemira plc
− Head of Nuclear Reliability, TVO
− Head of Operational Safety, TVO.
In the next chapter we will discuss ideas brought up in literature concerning
connections between safety and organizational factors. General premises of
management of safety will be discussed, and the authors will take a stand on
them. This will identify key issues that will be handled in more detail in Chapter
4 where they are analysed in light of the interview data. In Chapter 5, the authors
will summarise these thoughts and present their opinion of how these
phenomena should be taken into account in the management, organizational
development, training and research on safety critical organizations.
3. Viewpoints on safety and safety critical
organizations
3.1 Human as a risk factor
The impact that employees’ actions and organizational processes have on
operational safety became a prominent topic after the nuclear disasters at Three
Mile Island (TMI) in 1979 and Chernobyl in 1986. The accidents gave rise to
new research and management concepts, such as ‘human error’ after TMI and
‘safety culture’ after Chernobyl. These accidents showed the nuclear power
industry that developing the reliability of technology and technical barriers was
not enough to ensure safety. It needed to pay attention to human activities and
organizational aspects. It was soon detected that a considerable part of
disturbances, malfunctions and production losses resulted from human activities,
or at least they could have been prevented had humans acted in ‘the best
possible’ manner. Similar observations were made in other industries.
Psychologist James Reason (1990, 1997) and many others have stated that
human errors constitute the single biggest threat to risky technologies.
Approaches that focus on (human) errors have prevailed in research, management and training practices to date. The 'discovery' of human error and
organizational factors can also be seen in statistics, which now consider these
phenomena to be by far the main reason for accidents (see Figure 1).
[Figure 1: a chart showing the percentage of accident causes attributed to technology and equipment, to human performance and to the organisation, from 1960 to 2000.]

Figure 1. Causes of accidents have been attributed in different ways over the years (source: Hollnagel 2002, see also 2004, p. 46).
According to Reason (1990), accidents take place when organizational protective measures against human errors fail or break down. That is why he and many others have developed characterizations of typical human errors. Reason, for example, identifies three distinct error types which occur at different levels of performance: skill-based, rule-based or knowledge-based1. The basic error types are skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes (see Table 1).
1 The distinction between three performance levels was originally made by Rasmussen (1986).
Table 1. Main headings for the failure modes at each of the three performance
levels, according to Reason (1990, p. 69).
Skill-based performance
− Inattention: double-capture slips, omissions following interruptions, reduced intentionality, perceptual confusions, interference errors
− Overattention: omissions, repetitions, reversals

Rule-based performance
− Application of bad rules: encoding deficiencies; action deficiencies (wrong rules, inelegant rules, inadvisable rules)
− Misapplication of good rules: first exceptions, countersigns and nonsigns, informational overload, rule strength, general rules, redundancy, rigidity

Knowledge-based performance
− Selectivity, workspace limitations, out of sight out of mind, confirmation bias, overconfidence, biased reviewing, illusory correlation, halo effects, problems with causality, problems with complexity
The analysis and training models developed for the identification and prevention
of human errors have undoubtedly led to positive results in many of the
organizations in which they have been applied. However, they have not done
away with the fact that humans and organizations continue to be the number one
cause for accidents as shown by statistics. According to Figure 1, the share of accidents attributed to organizational factors has increased. The figure raises a number of questions: Is it difficult for organizations to learn from errors that have already been analysed once? Or does the
environment push people to make mistakes? Do people always come up with
new types of errors that previous analyses have not prepared us for? Are the
accident models used too restricted? These questions are receiving more and
more attention nowadays. This publication wishes to emphasise the following
question: Is an approach that aims to eliminate errors an effective way to
develop people’s work or the performance of the overall system?
Error-focused safety improvement approaches suffer from certain basic
problems. The concepts of ‘human error’ and ‘organizational factors’ are but
general designations for unique actions, measures or decision-making processes.
They do not explain past incidents or predict the future any better than the term
‘technical failure’ explains or prevents disturbance situations. To understand a
problem, it should be examined and the contributing phenomena must be
analysed and understood. In the case of 'technical failures' this usually happens. If technical failures cause considerable problems to safety or productivity in safety critical domains, they are analysed in sufficient detail to identify the measures worth trying in order to rectify the failure. Experts from various fields are brought in to carry out investigations and laboratory tests. This typically results in technical repairs and modifications or in a new monitoring system that makes problem diagnosis easier the next time.
If, however, the reason for the failure is traced back to a ‘human error’, the
procedures described above are usually not adopted, with the exception of
investigations into major accidents, which make routine use of experts in
behavioural sciences. Typically, the label of ‘human error’ ends up being both
the starting point and result of the analysis. Organizations find it difficult to
analyse human errors (because experts in the field are rarely available), and the
contributing phenomena can only be guessed at. The people in charge, such as
superiors, often feel unsure whether they can ask the person ‘responsible for the
error’ to explain his/her actions. Unclear and distressing events may not be
widely discussed in organizations to avoid blaming anyone.
To facilitate the handling of human and organizational errors, researchers and
consultants have developed a variety of analysis models2. These enable human
errors to be categorised, for example, on the basis of their appearance or the
information processing stage at which they took place. This, however, is
comparable to the idea of classifying all technical failures in industry on the
basis of a limited number of categories. If the goal were to prevent human errors,
all events would most likely need as individual a treatment as technological
defects and errors. Our purpose is not to deny the existence of certain general
rules governing the actions of people and organizations, but rather to show that
2 See, e.g., Reason (1990), Kirwan (1992), Dekker (2002), Reason and Hobbs (2003).
examinations focusing only on errors in organizations will make the treatment of
problems a slow and reactive process.
Another basic problem in error-oriented approaches comes from too narrow a definition of safe human activity and organizational safety. Human error-oriented models seem to depart from the assumption that reliability is synonymous with avoiding errors. People are seen as a threat to safety because
their performance varies and they may perform unexpected actions. This makes
the control of variation in human behaviour one of the main challenges of the
approach (though often visible only between the lines). The variation is seen as a negative risk factor. This is a very problematic viewpoint (Hollnagel 2004). In
modern working environments, the simplest tasks have been automated, leaving
complex tasks that call for case-by-case analysis to humans (for example,
recovery from technical failures when automation breaks down). The
explanation for this is that humans are particularly capable of using their senses,
emotions and social networks when operating in challenging environments. The
variation, adaptability and innovation inherent in human activities enable
complex organizations to carry out their tasks. More often than causing hazards,
people probably carry out their duties exactly as they should, fixing defects in
technology, compensating for bad tool or work design or stopping a dangerous
chain of events based on intuition. This is why heavy constraints on normal human behaviour would most likely harm the effectiveness of activities and reduce work motivation. Organizations in safety critical fields must naturally
try to carry out their duties in the right way, aiming at high quality and safety.
Sometimes, however, performance development could benefit more from a focus on the organization's strengths and daily work than from a treatment of problems and exceptional situations.
All in all, we find approaches that view organizations as dynamic systems and
analyse the laws and boundary conditions of their operations to be more
interesting and practicable than human error-oriented approaches. Technical and
social aspects are closely connected and develop in different ways in different
organizations over the years, depending, among other things, on the operating
environment, ownership structure, national culture and personality of managers.
Systems theory was one of the first attempts to understand the overall dynamics
of activities in organizations.
3.2 Organizations as open systems
Approaches based on systems theory have been used in organizational research
since the 1950s and 60s (see, e.g., Katz & Kahn 1966). Systems theory posits
that an organization consists of subsystems and has a goal that it aims at.
Operations correspond to energy flows, and information on their success is
received through feedback loops (see Figure 2).
[Figure 2: a schematic of a system within its environment. The system consists of a social system and a technical system; it receives an input, produces an output and adjusts its operation through a feedback loop.]

Figure 2. Simplified model of a system.
The systems-theoretic approach has been used in safety critical fields, for
example, to expand fault models. The basic idea has been that failures in human
and organizational activities result from the system as a whole becoming too
complex and the information available in different situations being so uncertain
that it is difficult to determine whether a specific action is the right one. Even if
an action provides an immediate and sufficient solution to a problem, it may
affect other parts of the system later on. Interactions between subsystems may
also make it difficult to understand causes and effects. Different kinds of risk analyses and probabilistic safety analyses, such as PSA, are based on a systemic view of activity and can thus produce information for the prevention of problems. Many authors link system analysis and error prevention in the
development of safety (Reason & Hobbs 2003, see also Wilbert 2004).
Organizational research based on systems theory emphasises issues that differ
from those highlighted by the error-oriented approaches, which aim to restrict
the variation in human activities. Research that draws on systems theory studies,
for example, how the feedback systems of organizations, the technical
presentation of information and information distribution channels can be
developed so that humans can more easily adopt correct measures. These ideas
have been applied, among other things, in research on control rooms, which will
be discussed in more detail in the following section. Task analysis is a popular
tool used to model work requirements, task distribution between humans and
technology, as well as information flow in different kinds of situations. As an
indication of the popularity of this approach, a recently published collection of
articles edited by professor Erik Hollnagel includes some 30 approaches,
methods and case analyses related to task design (Hollnagel 2003, see also
Rasmussen & Vicente 1989). The now so fashionable studies of learning
organizations and organizational learning are often based on systems thinking
(Wilbert 2004). The basic notion is that errors provide feedback on the
functioning of systems, and that feedback enables activities to be adjusted.
In his well-known book about ‘normal accidents’ (1984), Perrow suggests that
some organizations work in environments where the complexity of systems and
mutual couplings are so difficult to manage that watertight design and anticipation of the course of activities are impossible. Tight and unpredictable, often
‘incomprehensible’, couplings between different subsystems sometimes make
accidents inevitable, and in that sense ‘normal’ events. In Perrow’s view,
however, different fields of industry show variation in terms of the intelligibility
of couplings and the complexity of technology (see Figure 3). This also means
differences in their susceptibility to accidents. The closer a system is to the top
right-hand corner in Figure 3, the more susceptible it is to accidents, says
Perrow.
[Figure 3: a two-by-two chart with the type of interactions (linear vs. complex) on the horizontal axis and the degree of coupling (tight vs. loose) on the vertical axis. Tightly coupled, linear systems include dams, power grids, marine transport, rail transport and airways; tightly coupled, complex systems include nuclear plants, aircraft, chemical plants and space missions; loosely coupled, linear systems include assembly line production and most manufacturing; loosely coupled, complex systems include mining, R&D firms and universities.]

Figure 3. Categorisation of organizations according to the complexity of operations and type of couplings (Perrow 1984).
Perrow’s model does not differentiate between technical complexity and the
complexity of the organization’s structure or operations. In other words, both the
impact that technical systems have on one another and the interaction between
organizations can be either straightforward or complex. The quality of couplings
also includes both social and technical considerations. In universities, for
example, loose couplings refer to the independence of professors’ and teachers’
views and teaching methods. That is to say, a professor’s opinions cannot be
used to deduce a teacher’s opinion about the same topic. In the manufacturing
industry, loose couplings may refer to, for example, the independence of
production lines.
The main weakness in models of organizational behaviour based on the systems
philosophy is similar to that of the error-oriented models discussed above. The
concepts of complexity and uncertainty are too general to provide tools for
employees and organizational developers working on different kinds of work
situations. Barley (1996) criticized the fact that different types of tasks, such as
managerial duties, a doctor’s work or nuclear power plant control, are compared
with one another in an attempt to identify similar characteristics simply because
all of the tasks are complex and involve uncertainties. According to Barley, the
attempt to find characteristics that are valid in all fields is not a fruitful one
(Barley 1996, p. 405). We are largely of the same opinion. When developing the
performance of an organization, it should be kept in mind that people carry out
tasks with different content in a variety of fields, all of which have their own
characteristics.
Another weakness is that systems thinking sometimes puts too much emphasis
on the functional, goal-related aspects of organizations and their attempt to adapt
to the requirements of their environment. In practice, organizations often engage
in activities that seem non-rational: politics, power struggles and 'entertainment'.
With hindsight, such activities may have led to useful new ideas or solutions to
problems. At other times, organizations may face problems because they use
methods and thought patterns that have traditionally worked well but are no
longer suitable due to changes in the environment. This is why the ‘non-rational’
sides of organizational activities should not be excluded from analyses and
management philosophies. Systems thinking has formed the basis, for example,
for the organizational culture approach, which pays more attention to the internal
dynamics of organizations (Schein 1985). This has later been expanded by
questioning the ability and will of organizations to make objective observations
of their environment (Weick 1995, Martin 2002). The culture approach will be
discussed in more detail in Chapter 3.8.1.
Systems research has also generated considerable interest in the principles
governing individuals’ and teams’ situational activities in complex work
environments. Researchers have tried to identify general principles, especially in
decision-making studies, which will be discussed in the following chapter.
3.3 Decision-making
As early as 1958 organization theorists March and Simon identified systematic
limitations to the rational activities of human beings in work contexts. These
include the inability of organizations to offer valid information for decision-making purposes, as well as restrictions on the reasoning of individuals, which
limit their ability to evaluate the information available to them. According to
March and Simon (1958), decision-making is controlled by limited or bounded
rationality and the search for a satisfactory solution instead of a perfect one.
When considering safety critical organizations, it is easy to imagine that daily
decisions are made in truly demanding conditions (see, e.g., Norros 2004). The
available information may contain inaccuracies, operations are linked to the
tasks of many different parties and situations often involve time pressure.
Research revolving around decision-making is the broadest and most
traditional field of Human Factors research. It focuses on situational rather than
strategic decision-making and usually studies those who control the process:
employees in power plant control rooms, aircraft pilots, ship bridge crews and
air traffic controllers. Current research increasingly emphasises the fact that
human information processing has been distributed outside the individual in both
social and technical terms. In other words, memory, learning ability and reaction
capacity are more than just individual characteristics. Attention has also been
drawn to the fact that decision-making in natural work situations is often not
synonymous with conscious selection between different alternatives. The
available tools, the environment, people and the terminology used affect the
perceptions and interpretations of individuals. This type of approach to decision
making is called naturalistic decision making (NDM3). The premises of the
NDM school present great challenges to the development of training and tools,
such as information and control systems and procedures. Studies show, for
example, that the control room operators who are considered to be the most
professional in the field do not use detailed manuals in urgent failure situations, but rather
begin to diagnose the situation by mentally reviewing past failures. On the other
hand, an experienced air traffic controller can easily notice and remember more
planes on a radar image than the capacity of human working memory would lead one to expect. The efficiency of such an employee would suffer if tools were
designed in compliance with general laws of usability.
In his popular book Cognition in the Wild (1995), Hutchins describes a personal
experience on board a US aircraft carrier. The ship’s steam turbine halted and
3 See, e.g., Klein et al. (1993) and Klein (1997).
the ship lost all power as it was approaching the port of San Diego. Electronic
manoeuvring and navigation devices were momentarily fully inoperable. In
addition, the carrier could not be slowed down with propellers since the
propellers could not be operated without steam. Hutchins was on board the
carrier as a researcher and got to follow first-hand how the worried crew tried to
determine the vessel’s location without electronic tools. An experienced
navigator, Hutchins realised the rule of thumb known as “Can Dead Men Vote
Twice”, or C+D=M, M+V=T, (compass heading + deviation = magnetic
heading, magnetic heading + variation = true heading), could have been used for
this purpose. The crew, however, found it difficult to “free themselves” from
normal tools and procedures. Furthermore, no task distribution or operating
model had been determined on board for such exceptional cases. Crew members
first focused all their energy on trying to figure out how to obtain the
information normally provided by their devices. Gradually, they came up with a
way to use the available information and actually deduced the formula needed
after several unsuccessful attempts. They developed an operating model for each
party to report the required figures at regular intervals. Hutchins produced a
revealing document of the 25-minute problem-solving and learning phase,
involving several calculations, which took place on board before the vessel was
safely brought to anchor.
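To make the rule of thumb concrete with purely illustrative figures (the headings below are invented, not taken from Hutchins' account): if the compass reads C = 100° and the deviation table gives D = +3° (easterly), the magnetic heading is

C + D = M: 100° + 3° = 103°,

and with a local magnetic variation of V = +7° (easterly) the true heading is

M + V = T: 103° + 7° = 110°.

Easterly deviation and variation are added and westerly values subtracted when correcting from compass to true heading in this way.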
While research on naturalistic decision making emphasises the importance of the
situation and circumstances, many empirical studies are carried out using
simulators. In a sense, simulator studies are also exceptional cases and as a result
may not exhibit the kind of challenges met in normal everyday work. One such
challenge comes from financial boundary conditions for operations.
3.4 Tension between safety and economy
In addition to the complexity of systems, attention in safety critical
environments has been paid to the multiple organizational goals and to the
interaction between these goals. Not only do organizations need to ensure
economic profitability but also the safety of operations. This applies to all
employees. In addition, organizations have usually set up a function or
department whose main responsibility is to ensure safety. Safety and economy
are often expressed as contradicting goals in literature (Perrow 1984, Sagan
1993, Kirwan et al. 2002, p. 255). Accident analyses often describe the
personnel’s task of achieving the optimum balance between safety and efficient
operations. Hollnagel (2002, 2004) calls this balancing between efficiency and
caution in daily work the ETTO principle (efficiency-thoroughness-trade-off). In
his view, most organizations do not begin to emphasise thoroughness over
efficiency as the guiding principle of operations until a hazard has been detected
or an accident has taken place. Putting more weight on efficiency than
thoroughness is often silently approved and even desirable, as long as it does not
result in unwanted consequences. Hollnagel points out that stressing efficiency
should not automatically be considered to be improper behaviour. An
organization must take care of its business. Weaker efficiency may also have
detrimental effects on safety. Humans are able to cope with multiple goals by
adjusting their behaviour and optimising the amount of effort needed. In the end,
the question amounts to whether trade-offs can be accepted if the system’s
complexity is accepted. How can trade-offs be managed?
The conflict between safety and economy is not necessarily brought up and
discussed in daily activities. Work practices and decisions suitable for individual
situations are strongly based on the employees’ competence and the
organizational culture (rules, norms and conceptions). The actions that best suit
different situations and ensure fluent activity are part of tacit knowledge. If
workers frequently come across clearly describable conflicts between safety and
economy in their daily work, this alone is a sign of problems. It means that the
organization lacks a sufficiently clear and solid understanding of the importance
of safety, or that safety is not valued sufficiently high, despite acknowledged
risks. The way organizations handle safety risks will be discussed in further
detail in Chapter 4.1. An unsolved conflict between safety and economy is often
considered to reflect problems in management (IAEA 1991). Similarly, the need
for regulatory control is often justified with reference to the conflicting demands
of safety and economy (Kirwan et al. 2002, p. 255). The emphasis put on safety
management is largely due to these challenges.
3.5 Safety management
To ensure sufficient attention to safety, many companies have implemented
different types of safety management procedures or a safety management system
that forms an actual management philosophy4. As defined by the Finnish Centre
for Occupational Safety (www.tyoturva.fi), safety management means
comprehensive control of safety and includes the management of methods,
procedures and people. Safety management comprises both preventive and
corrective actions that aim to improve the working environment. It emphasises
the management’s role as a body that controls and takes charge of safety. The
management is responsible for setting goals, providing resources and
supervising implementation.
According to Reason and Hobbs (2003, p. 161), the fire on the Piper Alpha oil rig in
1988 (see Appendix B) transformed safety control from an expressly
prescriptive activity to a goal-oriented one (see also Hopkins & Hale 2002, p. 4).
This has also influenced the management procedures in companies. Following
the accident, safety management systems became obligatory in many industrial
sectors. While safety management systems may vary considerably in their
practical implementation, they are all used to control the following fields (Booth
& Lee 1995):
− safety policies and design (including the definition of safety objectives, prioritisation of objectives, development of programmes)
− organization and communication (definition of responsibilities, creation of communication channels)
− risk management (identification of risks, assessment of risks, control methods)
− auditing and assessment.
The main rationale for developing safety management systems has been strongly
related to occupational safety. This is the impression one gets when studying
cases written about safety management worldwide. Practical principles and
programmes are often of the “zero accidents” type. The reason for this may be
that a more extensive control of environmental or personal risks is thought to be
part of or linked to normal activities and management. Another reason is the
attempt to make safety management systems generally suitable also for fields
4 For safety management and safety management systems, see, e.g., Hale et al. (1997), Hale and Baram (1998), Hale and Hovden (1998), Kuusisto (2000), Levä (2003), EPSC (1996), HSE (1997).
that do not involve significant environmental risks. Safety management was
originally modelled on quality management. In fact, safety and quality
management systems have certain characteristics and assumptions in common.
Reason and Hobbs (2003, p. 162) mention the following shared characteristics
for quality and safety, which cannot be achieved with individual and unrelated
fixes and repairs:
− Both must be planned and managed.
− Both rely heavily on measurement, monitoring and documentation.
− Both encompass the organization's entire staff and all its activities.
− Both aim at continued, gradual improvement instead of dramatic changes.
Safety management puts heavy emphasis on the role played by rules and
standards. The Seveso II directive demands that safety risks be identified,
assessed and controlled in a way approved by the authorities. The ISO 14001
standard can be used for the management of environmental affairs, while
national standards have been devised for the management of occupational safety
issues. The most famous of these are the British BS 8800 (2004) and the
occupational health and safety management system specification OHSAS 18001,
based on BS 8800. OHSAS 18001 follows the same principles used in ISO
14001. Safety and environmental systems can also be combined with the ISO
9001:2000 quality management standards.
Reason and Hobbs (2003) point out that, in practice, safety management suffers
from the same problem as quality management. Even massive management
systems and documentation of information do not in themselves produce quality or safety; they can only be used in the attempt to ensure them. The focus of audits and inspections is
easily drawn to the organizational structures and processes instead of their
content. Little empirical research has been conducted on the impact that safety
management systems have on the level of safety in organizations (cf. Levä
2003). In addition, most of the studies are not based on an explicit theoretical
model of safety or management (Hale & Hovden 1998).
The authority in charge of occupational health and safety in Great Britain, HSE
(1997), emphasises the definition of measurable safety objectives and the
systematic follow-up of the achievement of objectives (see also Henttonen
2000). HSE has defined four characteristics of safe organizations, the ‘4 Cs’:
control, communication, co-operation, competence. All of these must function
well in the organization. HSE (1997) emphasises the importance of a positive
safety culture, as well as the management’s role in creating and maintaining such
a culture.
3.6 Safety culture
The concept of safety culture aims to draw attention to the principles that
underlie operations and guide daily activities and decision-making. Closely
related to the notion of ‘organizational culture’ in organizational research,
‘safety culture’ is used to study organizations’ activities especially in relation to
safety. Safety culture is also a clearly normative concept. It is used to assess the
‘goodness’ of an organization’s performance in terms of safety. It also sets
requirements for the organization.
Safety culture entered the picture after the Chernobyl disaster when analysts (in
the West) puzzled over the reasons for numerous decisions made in the
organization. Were safety risks not taken sufficiently into consideration? Did
people perhaps lack the courage to point them out to high-ranking decision-makers? How could operators accept that the plant was run against rules and regulations? What was the attitude towards safety? In 1991 INSAG gave the
following definition for the concept: “Safety Culture is that assembly of
characteristics and attitudes in organizations and individuals which establishes
that, as an overriding priority, nuclear plant safety issues receive the attention
warranted by their significance” (IAEA 1991).5
HSE (1997, p. 16) defines safety culture more generically: “The safety culture of
an organisation is the product of individual and group values, attitudes,
perceptions, competencies, and patterns of behaviour that determine the
commitment to, and the style and proficiency of, an organisation’s health and
safety management. Organisations with a positive safety culture are characterised
by communications founded on mutual trust, by shared perceptions of the
importance of safety and by confidence in the efficacy of preventive measures.”
5 For safety culture in general, see, e.g., IAEA (1991, 1996), HSE (1997, 2005), Sorensen (2002).
Safety culture studies and development programmes have been conducted, e.g., in the nuclear industry (Lee 1998, Lee & Harrison 2000, Harvey et al. 2002, see also
IAEA 1996), aviation (McDonald et al. 2000), offshore platforms (Mearns et al.
1998, 2003, Cox & Cheyne 2000), chemical industry (Donald & Canter 1994),
manufacturing (Williamson et al. 1997, Cheyne et al. 1998), healthcare sector
(Singer et al. 2003, Pronovost et al. 2003) and the transport sector, including
railways (Clarke 1998, 1999, Farrington-Darby et al. 2005).
The safety culture concept is used to address, for example, the following types of
questions:
− What is the staff's attitude towards safety rules and the practical arrangements that result from them (e.g., use of safety helmets)?
− How does the management react to expenses from actions taken to ensure safety? What kind of an example does the management set to subordinates, for example, in communications?
− Is safety prioritised over economy when making decisions?
− How openly are problems and errors treated?
− Does the organization try to improve operations continuously and learn from mistakes?
− Are risky decisions and operating methods questioned?
As indicated by the definitions and the list of questions, safety culture is an
evaluative concept that includes criteria for the operations of a good safety
critical organization. These include the staff’s positive attitude towards safety
rules, the management’s opinion that safety must always be prioritised over
economy and the disclosure of errors so that they can be learned from. The bases
for selecting these as criteria have been widely discussed and questioned. While
the criteria as such seem sensible, whether they are put into practice is more
difficult to determine. Evaluating the risk that would result from not realising the
criteria is usually left to individuals themselves. Analyses of connections
between criteria have also been scarce.
Several methods and applications have been developed for the assessment of
organizations’ safety culture. However, the aspects that they measure and
evaluate often differ widely from one another (cf. IAEA 1998, Mearns et al.
2003, Flin et al. 2000). Several methods concentrate on studying people’s
attitudes under the assumption that these have a straightforward impact on
behaviour (cf. Grote & Künzler 2000). Other approaches work like audits
analysing the organization’s processes and considering whether the organization
has the resources (and intentions) to act in a way deemed good. In this case, it is assumed that organizations are able and willing to act as officially agreed, or that rewards and punishments can be used to 'force' organizations to act in such a way. Safety culture has also been assessed using different kinds of indicators
for the performance of organizations. Such indicators include accidents, events
reported to the authorities, as well as employee participation in safety training
(see Flin et al. 2000).
Some indicators make it difficult to determine when a result implies a good
safety culture and when it points to a bad one. If, for example, an organization
reports a clearly higher number of events to the authorities this year than last
year, is this a sign of a weaker or better safety culture? In other words, has the
organization been more open about reporting events this year or have
deteriorating safety attitudes led to an increase in events? Other matters that
have given rise to discussion include the connection between occupational
accidents and operational safety, that is, whether occupational accidents are a
good indicator of the general safety culture.
Because of these issues the academic sector has adopted a critical attitude
towards the concept of safety culture6. Nevertheless, the concept is a useful tool
for management. The goal is to emphasise safety as the organization’s central
objective and to discuss the organization's possibilities for, and obstacles to, achieving it.
The concept thus works as a tool for the development of operations. The safety
culture philosophy also spells out an important idea: the preconditions for safety
can be assessed and improved without any visible problems in the organization,
before any significant failures take place. Good performance of a safety critical
organization means more than mere avoidance of accidents and reaction to
incidents.
6 See, e.g., Pidgeon (1998a), Reiman and Oedewald (2002a), Guldenmund (2000), Cox and Flin (1998), Reiman et al. (2005).
The use of safety culture as a tool in organizations does, however, involve some
risks, one of them being an excessive emphasis on the management’s role in the
creation and maintenance of safety culture. Some assessment and development
tools seem to be based on the notion that the management has a ‘more correct’
view of the role and significance of safety compared to employees (see, e.g., the
HSE’s definition presented earlier in this chapter). The manager’s role as an attitude-setter and good example has been heavily emphasised (Reason 1997, 1998, HSE
1997, McDonald et al. 2000). Correspondingly, the obedience and commitment
of employees is considered to be a sign of good safety culture. This kind of
reasoning is not necessarily valid if culture is understood to mean principles that
really guide operations, since it would make the organization act only on the
basis of the managers’ assessments and possibly blindly against safety
considerations were the management to propose such action. In fact,
disobedience shown by the staff could be a sign of good safety culture in this
kind of an organization.
Another problem related to safety culture is the assumption that a shared view of
(safety) matters is a good sign, and, correspondingly, that differing opinions
constitute a risk. To keep oneself alert and evaluate the bases for one’s own
thinking, it is often good to have to deal with views that question the principles
prevailing in organizations. Too uniform a culture may become blind to its own
weaknesses and seek to find corroboration for the old and familiar opinions
(Sagan 1993, Weick 1998, Reiman 2007). For a questioning attitude to work, the
organizational climate must be of the kind that allows open discussion of issues.
The third problem is that when safety is made a topic of discussion and the
concept of safety culture becomes a tool for organizational discussions, these may become separated from normal work routines. This is also a risk because in
many safety culture tools the ‘right’ answers, ‘right’ attitudes and ‘right’
practices in terms of questions and exercises are easy to deduce. Organizations
and employees are tempted to deal with safety culture characteristics that are
easy to discuss or for which corrective measures are easiest to find. On the other
hand, an organization may collectively put the blame for all of its problems and defects on poor safety management or insufficient safety values. This results in losing the point for which the concept of culture was originally introduced in
safety critical organizations: the possibility to discuss the subconscious, tacit
principles that guide the decisions on daily work.
Nowadays, IAEA (1998), among others, believes that safety culture can take
many different forms and be realised at many qualitatively different levels (see
also Hudson 2006). At the basic level, safety is seen as a requirement set by
external parties, and requirements are met by obediently following rules and
procedures. At the second level, the safety of operations interests the
management as part of the supervision of the company’s general success. In this
case, the measures used to enhance safety are usually technical or related to
rules. According to IAEA, safety culture has reached the highest level when the
organization has adopted the principle of continuous development as the
cornerstone of its safety. Each member of the organization has an impact on the
level of safety. This is why organizations try to influence the employees’
attitudes and behaviour, for example, through training, communication and
management style. Safety is not emphasised only because of publicity or
external pressure.
3.7 Impact of publicity on organizational operations
Organizations that operate in fields involving safety risks may attract more
external interest than companies on average. The media and official stakeholders
follow the activities of safety critical organizations and these, in turn, are
expected to be open and forthcoming with the media. As the name indicates, safety critical organizations engage in operations that entail some level of risk to society, and the organizations are responsible for minimising the risk and
disclosing information about it. Regulatory control of the risk has usually been
assigned to an authority that is responsible for ensuring that the risk owner is
capable of managing the organization as determined by society.
The trust of citizens and politicians is one of the main preconditions for an
organization to operate in a safety critical field. For trust to develop, society must
feel that the organization’s decisions are made with society’s best interests in
mind. However, as Kirwan et al. (2002, p. 277) point out, authority control is
always based on a certain lack of confidence in the organization’s ability and
willingness to operate safely without supervision. On the other hand, the authors
state that a certain level of trust is necessary for authority control to succeed. The
authorities play an important part in the creation of trust (or distrust) between
citizens and safety critical organizations (Reiman & Norros 2002). In order to
trust an organization, citizens must believe that the organization is technically
and socially capable of handling the risk involved. There are both national and
industry-specific differences in the principles of authority control. However, the
scope of this publication does not enable a systematic analysis to be made of the
impact of public opinion or differences in the principles of authority control. As
stated in the chapter on safety management, authority control has evolved from
an expressly prescriptive activity to a goal-oriented one after the fire on Piper
Alpha. Parallel development has been going on in other safety critical domains.
Kirwan et al. (2002, p. 260) discuss the differences between prescriptive and
goal-oriented approaches and raise the question whether the Piper Alpha
accident could have been avoided had the prevailing prescriptive, rule-oriented
authority control been actively implemented.
The way in which public opinion is reflected in the internal activities of
organizations and the staff’s attitudes has not been studied much. In their article,
Mannarelli et al. (1996) examine the relations between the oil industry and the
authorities and public. In their opinion, the relationship is somewhat ambiguous:
on the one hand regulatory control and guidelines are criticised, on the other
hand their importance as a safety enhancing measure is well understood.
Sinkkonen (1998) reached similar results in a study on the attitudes toward
regulation of nuclear power in Finland. The High Reliability Organizations research group has, to some extent, also examined how the organizations in question ensure their reliability in the eyes of the general public. These issues will be discussed in more detail in Chapter 3.8.
Public control may increase the conservatism of safety critical organizations, slow down changes and encourage a phased approach to them. It may also steer the
decisions of organizations in specific directions (see e.g., Garrick 1998). On the
other hand, market deregulation has caused pressures for change in many
sectors. Referring to literature in the field, Kettunen and Reiman (2004) suggest
that the regulators in the nuclear power sector in different countries have shown
concern about the following issues:
− changes in the financial operating environment and their impact on the operations of plants
− changes in ownership and organization structures
− cuts in the plants’ own staff
− increasing use of outside contractors
− increasing workload of employees and problems related to well-being at work
− availability of competent employees and preservation of competence
− clarity of duties and responsibilities
− the ability of power plants to control and manage the operations of subcontractors at plants.
Technical changes and their safety impacts are controlled and assessed very
closely, for example, in the nuclear power industry. It has been suggested that
organizational changes should be controlled and assessed in the same way
(OECD 2002). This is, in fact, the case in many countries, such as Great Britain,
Sweden, Belgium and Spain. In Finland, the Radiation and Nuclear Safety
Authority has taken organizational factors into consideration more
systematically in recent years.
Some thought has also been given to the question whether public opinion affects
the number or quality of those applying for jobs in the field. For example, the
European nuclear power industry has studied problems related to the
attractiveness of the field. This concern results in part from a general generation
shift among employees, which means tough competition for competent
employees. Another reason is the scarcity of education in the nuclear power
field, which, in turn, results from the industry’s outlook in Europe. Some
countries are planning to run down nuclear power or have given up plans for
further construction. Although the nuclear power industry will still provide jobs
for many decades in these countries, the situation affects young people planning
their career. Authorities in Great Britain and the USA are worried that
increasingly uncertain work conditions will affect the staff’s morale and well-being at work. Owing to weaker employment benefits and restricted career development opportunities, some employees move to different fields, taking their competence with them. This trend is slowed down, to a certain extent, by the fact that many nuclear power plant employees are highly specialised in the plants’ (often very old) systems and technology and may find it difficult to get a competitive job outside the industry. (Bier et al. 2001.)
3.8 From the HRO theory to the characteristics of
organizational culture
The High Reliability Organization group (La Porte 1996, Rochlin 1999a), formed in 1984 at the University of California, Berkeley, by Todd La Porte, Karlene Roberts and Gene Rochlin, has been influential in illustrating the organizational aspects of
safety and reliability of safety critical organizations. They had observed that the
attention paid to studies and cases of organizational failure was not matched by
the number of parallel studies of organizations that are operating safely and
reliably in similar circumstances (Rochlin 1996, p. 55). Their aim was to identify
facets of these “high reliability organizations” that differentiate them from
ordinary organizations and to understand the design and management of HROs
(La Porte 1996, Roberts 1990). The questions that the project focused on
included the following:
− What patterns of formal organizational structure and rules have developed in response to the requirements of achievement under conditions of constrained resources?
− What decision-making and communication dynamics evolve in the processes of day-to-day planning and operation?
− What group norms are evident within and between units, group members and organization as a whole? How is this organizational culture created and maintained?
After examining organizations that had performed particularly well in fields
involving different kinds of risks, they found that the organizations typically
exhibited the following internal characteristics:
− The organizations had a strong sense of a common mission, which meant equal commitment to productivity and safety. Furthermore, high reliability organizations and their operating environments showed a tacit agreement on the risks related to operations, the value of operations and the consequences of any errors that might occur in operations.
− The staff is very competent in technical matters, and professionalism holds a dominant position in decision-making processes. Attention is paid to the staff’s competence at all times. Activities that enhance safety are given a visible position in the organization.
− Strict quality assurance and inspections are used to ensure good technical performance. Technical information is collected and analysed, and models are made of accidents. Redundant methods aimed at ensuring safety are reflected in the organization’s structure. Positive competition may arise between different groups responsible for safety.
− The organization’s ability to react to unexpected incidents is promoted by structural flexibility and redundancy. Work processes have been designed to include parallel or overlapping activities that can be used in other units, if needed. In addition, the operators and main superiors have received training for many tasks. Job rotation is used to provide an individual with several fields of competence. Duties and working groups are designed so that incompatible operations do not depend on one another.
− Safety critical organizations have usually adopted a hierarchical operating model. On the other hand, when the pace of work increases or an emergency takes place, more collegial patterns of authority based on skill emerge. Channels of communication and roles change so that competence can be combined as required by the situation.
− Decision-making (especially the operative kind) has been distributed among those who carry out operations. Tactical decisions are discussed in detail with different experts.
− Decisions are executed quickly with little chance for recovery or alteration. Because of this, one of the central aims is to have all the possible information available when making decisions.
− For the same reason, the organizations also try to identify any room for improvement after implementation by systematically collecting feedback using a variety of methods. These include programmes to detect defects or errors at an early stage. In fact, the organizations were particularly willing to identify and report errors.
− The culture of the organizations integrates the norms of mission accomplishment and productivity with the safety culture norms.
− Professional pride and demands on oneself are typical norms. Co-workers are encouraged and supported in demanding situations irrespective of the official fields of responsibility. This type of behaviour is supported by team spirit and the achievement of a respected position in one’s own group.
− Operators and superiors have the power to make independent decisions in situations that suffer from a lack of time and are critical in terms of safety. They are strongly aware of their responsibility for the situation.
− Technical expertise and operative activities have usually been separated in organizations although they support one another. There appears to be tension between operators and technical experts. (La Porte 1996.)
Apart from the characteristics listed above, La Porte (1996) says that HROs also
need outside support to ensure their reliability. According to him, the active
participation of external parties in the achievement of the companies’ objectives
was especially characteristic of these organizations. External parties include, for
example, the company’s headquarters, authorities and international umbrella
organizations. For external players to work usefully they need competence,
statistical information and annual reviews related to the organization’s
operations. These activities create public credibility and promote internal efforts in organizations, since the objective is to reach a state in which the more information is available about the company, the more reliable it seems.
La Porte lists the following five requirements that safety critical organizations
must meet in order to achieve operations that are publicly recognised as
trustworthy:
− High professional and managerial competence, as well as discipline, to create technically viable schedules.
− Aiming at technical solutions whose consequences are easy to present to the public.
− Self-assessment processes that aim to identify problems in the organization before they are visible outside of it.
− Strict internal reviewing to discover the actual operating activity and results.
− Clear and official distribution of responsibilities that aims to secure the organization’s efforts to maintain credibility.
It should be noted that public credibility or trust does not automatically mean
that the organization is safe. In the HRO group’s research these ideas were
linked together. According to its own definition of high reliability organizations,
the HRO group studied organizations that had performed well up to the time of
research. The theory can be criticised for the notion that a past safety level
(mainly the lack of accidents) could be used to explain safety in the future.
The HRO group’s stand on whether the characteristics it identified are good or
necessary in safety critical organizations – and whether they are prerequisites for
safety – is rather unclear. La Porte (1996) says that the characteristics are
necessary but not necessarily sufficient to ensure a good level of safety. The
group did not aim to create a theory of accidents but of reliability. However, group
members have not wanted to take a stand on what kind of factors might still be
missing from their lists of characteristics or how much of safe performance the
factors could account for. La Porte emphasises that the characteristics identified
in the organizations that the group studied are so demanding that it may be
impossible to adopt them in other lines of business without causing interference,
hard work and big expenses.
Berkeley’s HRO research and Perrow’s theory of normal accidents (see Chapter
3.2) are two significant attempts to describe how organizations act in complex
environments that involve safety risks. Both theories try to answer the same
question: what factors make an organization act as safely as it does. However,
the theories start with opposite concepts: while one speaks about accidents the
other talks about reliability. Neither of the theories really deals with the relation
between reliability and accidents nor with the way in which the theory of
reliability differs from that of accidents. Sagan (1993) presents a view about the
issues in which the theories most clearly differ from one another (Table 2).
Table 2. Comparison of the HRO theory and the theory of normal accidents (Sagan 1993).

High Reliability Theory: Accidents can be prevented through good organizational design and management.
Normal Accidents Theory: Accidents are inevitable in complex and tightly coupled systems.

High Reliability Theory: Safety is the priority organizational objective.
Normal Accidents Theory: Safety is one of a number of competing values.

High Reliability Theory: Redundancy enhances safety: duplication and overlap can make ‘a reliable system out of unreliable parts’.
Normal Accidents Theory: Redundancy often causes accidents: it increases interactive complexity and opaqueness and encourages risk-taking.

High Reliability Theory: Decentralized decision-making is needed to permit prompt and flexible field-level responses to surprises.
Normal Accidents Theory: Organizational contradiction: decentralization is needed for complexity, but centralization is needed for tightly coupled systems.

High Reliability Theory: A ‘culture of reliability’ will enhance safety by encouraging uniform and appropriate responses by field-level operators.
Normal Accidents Theory: A military model of intense discipline, socialization, and isolation is incompatible with [American] democratic values.

High Reliability Theory: Continuous operations, training, and simulations can create and maintain high reliability operations.
Normal Accidents Theory: Organizations cannot train for unimagined, highly dangerous, or politically unpalatable operations.

High Reliability Theory: Trial and error learning from accidents can be effective, and can be supplemented by anticipation and simulations.
Normal Accidents Theory: Denial of responsibility, faulty reporting, and reconstruction of history cripples learning efforts.
In the 1990s the theories were compared in public, with their proponents
correcting views presented by others and offering adjustments to their own
statements (see Sagan 1993, La Porte & Rochlin 1994, Perrow 1994, 1999,
Rijpma 1997). These discussions will not be handled in much detail in this
publication, although they show the difficulty of the topics involved when
talking about accidents, safety, organizations and their connections.
HRO and NAT have illustrated the significance of organizational factors such as
organizational structures, management, and organizational culture to safety and
reliability of complex sociotechnical systems. One might argue that one of the
theories views organizations optimistically, looking at opportunities, while the
other treats them more pessimistically (or realistically, in Perrow’s opinion)
from the viewpoint of risks and problems. In many cases both approaches are
possible (and at least they are not mutually exclusive) when analysing
organizational activities from an outsider’s point of view. But since neither the HRO group nor the NAT researchers have defined criteria for reliability, they have not developed methods for the assessment and development of safety critical organizations.
What is even more important in our opinion is the existence of different kinds of
philosophies of reliability and accidents, as well as notions of a good
organization in a risky environment, within the organizations themselves. HRO theory and NAT have paid little attention to the possibility of there being diverse views on the meaning of reliability, accidents, risks and adequate organizational practices inside a given organization. They neglect the psychological dimension of working in a complex safety-critical organization: how the personnel experience and cope with their work and the associated risks (Reiman 2007).
One should also bear in mind that high reliability organizations may appear to
share characteristics when examined at a general level, but in practice they may
follow widely differing strategies. For example, in some organizations the
‘culture of reliability’ may signify strict adherence to rules and procedures and
avoidance of individual initiative. Other organizations associate the achievement
of reliability with heroic individual performances in which cleverness and speed
are key. This is partly dependent on the environment in which and the
technology with which the organization operates (cf. Schulman 1996). Our aim
is to get a step closer to determining how the requirement of safety is reflected in
the organizations and what kind of problems and challenges it involves.
3.8.1 Organizations as cultures
In our own research, we have used the concept of organizational culture (see
e.g., Reiman & Oedewald 2004a, 2006, in press, Reiman et al. 2005, Oedewald
et al. 2005, Reiman 2007). Organizational culture can be understood as a multi-level phenomenon that can be seen and heard, for example, in the organization of work, selection of tools, staff’s clothing, meeting practices and the jargon used in the organization. These kinds of visible characteristics can be identified in an organization through persistent work, but they cannot explain the whole culture. More to the point, they describe the ‘achievements’ of the culture. To
understand the culture, one must find out why particular characteristics can be
found in the organization, whether the staff considers them to be functional and
how they serve the success of work.
According to Schein (1985), what underlies the visible characteristics of
organizations, that is, the artefacts, is primarily a set of espoused values.
Organizations do certain things because they value, for example, safety and
customer service. Working deeper down are tacit, and partly subconscious,
conceptions and assumptions. These may be related to, for example, the
company’s basic mission, work-related risks, the role of technology in success,
suitable management of customer relations or the use of rewards and sanctions.
They may also be general notions of the nature of human activities, right and
wrong, and a perspective on time: is the emphasis on the here and now or the
future. These deep contents of culture are difficult to extract using a single
method. Nevertheless, they play an important role in guiding daily activities, and
their impact can be seen in solutions and policies. This makes them important
when trying to understand, explain or predict the organizational performance.
Figure 4 shows examples of the content of the organizational culture of a
hypothetical maintenance organization.
Organizational culture consists of (from the surface level to the deepest level):
• practices (e.g. work order procedure, incident reporting system)
• norms (“if you are not sure that you know how to carry out the task, do not accept it”, “an electrician doesn’t touch a wrench”)
• values (occupational safety, efficiency, craftsmanship)
• conceptions (“one becomes a professional by doing, not by reading”, “fault repairing is more important than preventive maintenance tasks”)
• assumptions (“if there is an event, someone has made an error”, “technology is more reliable than a human being”).

Figure 4. Levels of culture (adapted from the model by Schein [1985]).
In our view, culture is not a permanent structure with layered content. Instead,
culture is a continuous process in which both visible and subconscious issues
and elements are created, maintained and modified7. To further explain the
process-like nature of culture, we present the following, oversimplified example
of an organization that initiates a new phase involving big efforts and strong
emotions: the implementation of a new work reporting system. In phases such as
this the assumptions and norms inherent in the culture become clearly visible
and are often jointly strengthened to enable the organization to cope with
difficulties.
In our example organization, the decision on system implementation is made by a
selected group of people. Most of the actual system users are not consulted, the
assumption being that they do not possess the kind of information that would be
useful in the decision-making phase. Those participating in decision-making
have a kind of power structure in relation to one another. In the organization in
question, the opinions of people responsible for operative activities weigh more
than those of experts. This time, however, the person responsible for IT
development gets more attention because he is familiar with the customer’s
system and expectations. The person responsible for IT concludes that the adoption of a
new financial system must be the reason for the revision of the work reporting
system since the old system did not show any obvious problems. In the end,
system selection is clearly influenced by previous experiences of information
system deliveries. Since the previous project was delayed by nearly 12 months,
the company does not even consider cooperating with the same, less than
satisfactory supplier this time. Once the system is implemented, the field staff
wonders why the new system was acquired. Does the company want to monitor
the employees’ use of time in more detail? Is this a sign of lack of confidence?
Or does this mean that financial values are emphasised even more than before?
The same information could have been collected with the old system had there
been enough time to spend on ‘unproductive’ work, that is, on work reporting.
The secretaries, in turn, are satisfied and feel they are valued since they now
have a more sophisticated tool. The field staff is not aware of this since it is not
directly involved with the secretaries. When a new employee joins the company,
the field staff explains that the new work reporting system has been adopted to
keep tabs on employees. The new employee gets the impression that the
management does not trust employees and instead manages the organization
7 The process-like and socially constructed nature of organizational culture is emphasised, for example, by the following researchers, whose ideas our concept of culture is based on: Kunda (1992), Hatch (1993), Rochlin (1999a, 1999b), Alvesson (2002), Martin (2002).
through monitoring and punishment. The employee only records the obligatory
information in the system, which he considers to be unpleasant.
According to Schein’s (1985) theory, a culture has two tasks: it maintains
internal integration in an organization and creates ways to meet demands from
the outside. A culture aims to create simplifications: tools, norms and beliefs that
enable the organization to operate in the face of heavy demands. If the tools,
norms and beliefs that have come about in the culture are found to work
sufficiently well, they are taught to new members as the right way to perceive,
think and feel in relation to the issues that they address (Schein 1985).
Organizational culture is a difficult phenomenon to study. We have emphasised
in our own work that we do not view culture as a single ‘variable’ influencing
the organization’s activities. Instead, we use culture as a metaphor for an
organization. An organization is a culture. The concept of culture is a tool that
we use to analyse the organization, including its tools, technology, history and
attitudes to the environment and its core task (Reiman & Oedewald in press,
Reiman 2007).
3.8.2 How does the safety critical nature of the domain show in the
culture of an organization?
Based on the approaches discussed above, as well as our own research, we
highlight eight themes that we consider to be important for safety critical
organizational culture. Our assumption is that these themes are central to the
achievement of safety and efficiency. They are questions that safety critical
organizations must and do take a stand on. In other words, we expect that these
questions are somehow handled in the ‘culture’ of companies operating in safety
critical fields. Some of the topics have most likely received more attention in
companies and are better managed than others, but they still deserve closer
inspection since no solution works forever and the interrelations between the
themes should also be considered. We presented the topics to our company
interview participants, whose opinions will be used in Chapter 4 where we
discuss each topic in more detail.
Our first claim is that categorising an organization as a safety critical one is
neither an easy nor a straightforward task. There are many different types of
risks, such as occupational, environmental, financial and socio-political ones.
Understanding and relating them may be difficult at times. Safety risks often
remain at an abstract level even for the organization’s members. This is why we
focus on the ways in which employees in the particular organization treat and
understand safety or risks.
Another field of interest involves the question of how awareness of safety risks
affects the staff’s work. Are risks a stress factor or do they have a motivating
effect? How can the attitudes of employees toward risk and safety be
influenced?
Thirdly, we will discuss how work is typically organized in safety critical
organizations. Emphasis on safety would appear to make the structure and
processes of organizations more complex, presenting the staff with new demands
for control. We will consider the possible effects that this structural complexity
may have on safety.
Our fourth subject is the broad and difficult topic of the predictability of
organizational activity. Safety critical fields place heavier demands on
predictability than do other sectors. But are organizational activities generally
considered to be predictable?
The fifth field focuses on the methods used for staff training and their objectives.
Does emphasis on training constitute a method to improve the predictability of
organizational activities? What should the staff receive training for?
Our sixth theme involves the role of rules and procedures in controlling
performance and ensuring safety. Rules get more emphasis in safety critical
fields than in other industrial work, but what are they expected to solve? If
emphasis is given to staff training, do employees need instructions to guide their
work?
Our focus in the seventh field is on daily work and on how organizations handle
the uncertainty involved in complex systems despite risk management methods,
training and procedures. The identification, acknowledgement and treatment of
uncertainty are general problems among field staff. Organizations seem to
emphasise certainty although this apparently contradicts the philosophy of safety
culture.
Our last topic focuses on the broad question of how responsibility has been set
up in organizations whose operations include various types of risks.
Responsibility is often given as the basis for organizational solutions to
questions related to the other seven fields described above. In practice,
responsibility often constitutes as complex a structure as the organizations in
question. Of key interest to employees is the relationship between legal and
personal responsibility.
These characteristics are not the result of a single systematic research project or
an analysis of one particular set of material. We focus on themes that seem to
come up repeatedly in literature and our own research when approaching
organizations from our viewpoint. Our purpose is not to use these topics as
criteria for the functioning or safety of organizations but rather as an inspiration
for organizational development and research.
4. Special characteristics of safety critical
organizations
4.1 Risk and safety perceptions
We argue that identifying the risks of an organization is not always a
straightforward and unambiguous task. Risks can be of many different types, and
some of them often seem very abstract to the majority of employees. Any
conception that an organization (or society) may have about the safety of certain
activities is socially formulated based on certain assumptions about the nature of
risks and safety (Turner & Pidgeon 1997, Pidgeon 1998b, Rochlin 1999a,
1999b). Similarly, the risk that is considered to be the primary one, the level of
different risks and the most sensible risk management methods are all things that
are learned in the organization. Some safety experts may oppose this kind of approach to risks. A frequent claim is that risk is a quantifiable phenomenon, the
product of the probability of an unwanted event and the magnitude of the
consequences. Another point is that airplane crashes and chemical plant
explosions are real events, not just beliefs. We agree with all this. In principle,
risks can be expressed as figures and their management should be developed to
prevent them from materialising. However, risk calculations lose their
significance if the organization is not aware of or does not understand or believe
in the existence of such risks. In practice organizations try to manage risks that
they believe to be the most significant ones. These beliefs, whether right or
wrong, affect all daily activities in organizations and ultimately have an impact
on the real risks, reliability and safety as well. This is why they cannot be
overlooked.
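The quantitative view mentioned above can be written as a simple expected-value formula; the figures below are purely hypothetical and only illustrate the point, echoed by an interviewee later in this chapter, that the product of a very small probability and a very large consequence can still come out as a modest number:

\[
R = p \cdot C, \qquad \text{e.g.}\quad p = 10^{-5}\ \text{per year},\ C = 10^{6}\ \text{consequence units} \;\Rightarrow\; R = 10\ \text{consequence units per year.}
\]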
Organizations have different ways of internally handling the risks and safety
inherent in their operations. The degree of attention paid to operational risks
surely differs depending on the sector. In the nuclear power industry, for
example, risks are discussed prominently. Traditionally, probabilistic safety
assessments (PSA) have been prepared as the foundation for plant licensing and
for big technical modifications8. However, since safety assessments are carried
out by dedicated experts, insight into risks may not be reflected in routine work.
8 For information about the method and its history, see, e.g., Garrick and Christie (2002), and Spitzer (1996).
The authorities also call for risk analyses to be made in the chemical industry,
which, however, favours qualitative methods. The chemical and oil industries
have lately been developing numerical risk analysis methods (see, e.g., Marono et al. 2006).
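As a purely illustrative sketch of the arithmetic that quantitative risk analysis methods such as PSA build on, the following short Python fragment combines hypothetical component failure probabilities through the AND/OR gates of a toy fault tree. The component names, numbers and tree structure are assumptions made for illustration only; they are not taken from any plant documentation or from the methods cited above.

def or_gate(*probs):
    # Probability that at least one of several independent basic events occurs.
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    # Probability that all of several independent basic events occur.
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical per-demand failure probabilities: each redundant cooling
# train fails if its pump OR its valve fails; the undesired top event
# requires BOTH trains to fail.
train_a = or_gate(1e-3, 5e-4)
train_b = or_gate(1e-3, 5e-4)
top_event = and_gate(train_a, train_b)

print(f"P(one train fails)  = {train_a:.1e}")    # about 1.5e-03
print(f"P(both trains fail) = {top_event:.1e}")  # about 2.2e-06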
The type of safety that is primarily focused on also differs depending on the
sector. In the nuclear power industry, safety engineers appear to focus on reactor safety, while flight safety is the primary topic in the aviation industry. On oil rigs,
on the other hand, safety is often linked to occupational or environmental safety
since activities rarely cause risks to civilians (Brandsaeter 2002). However, any
organization operating in a safety critical field faces a variety of risks, many of
which must also be approached in different ways. A good example of this is the
chemical industry, which was described in the following way by one of our
interviewees:
“We have to deal with all of the occupational accident risks involved in
normal industrial work (tripping, falling), risks related to the handling of
hazardous chemicals (loading, unloading, container safety), safety risks in
the actual process (accidental emissions, accidents, explosions, fires,
leakage of chemicals into work spaces or the environment)… and related
accidents, as well as damage to property and the environment and damage
from unplanned outages. Although we use outside transport companies for
transport and loading, accidents affect our image… Add to that the risks
involved in the production of chemicals. Normal emissions may also affect
the environment… And our product risks are in a class of their own… One
should, perhaps, also mention intentional violence against production plants
and transports, as well as intentional misuse. That is, our products involve
risks that are not found elsewhere.”
A nuclear power plant representative described the relation between different
risks in the following way:
“There are many different types of risks. Nuclear safety risk as such is very
small. The consequences are relatively big, but they have a small
probability. And the product of consequences and probability is small. (…)
But... for a company like this – for which image is very important – to be
able to operate, it must be accepted by the public and by politicians – and
those risks are pretty big. Risks are also related to one another. Not all of
them, but many risks form chains. Any event that might threaten nuclear
safety, even if it presents only a small risk, is reflected in other risks. It can
be seen in image and other similar risks, and ultimately as a financial risk.”
Some organizations have adopted specific terminology to deal with different
types of risks. Finnair’s director of flight safety talked about risks in the
following way:
“We make a distinction between ‘safety’ and ‘security’. We used to think of
the airplane door as the dividing line. ‘Safety’ refers to activities inside the
plane, while ‘security’ is used for ground operations. As for your question
on how risks are discussed in the organization, my organization talks about
nothing but risks… Risk assessment is part of our everyday work
[description of different operating models used with different
stakeholders]… The captain is the last link in risk management and really
has an enormous responsibility. The captain is responsible for all
operational matters, also for the mistakes that we have made.”
The Fortum Corporation has different types of business units in terms of risks.
The company’s safety director described risks and their management in the
following way:
“All of our work is based on minimising occupational injuries and risks. In a
company of this size the risk scale covers the whole field. We have obviously
identified major risks that affect our whole Group and that we must focus on
in more detail. These include a serious nuclear power plant accident, tanker
accident, big dam accident or an oil refinery fire.
(Q: How are risks of occupational injury linked to these?)
There is a link. I don’t really agree on this with people from nuclear and
process industry. They claim that they can handle nuclear safety risks in
detail. That is, although the frequency of occupational accidents is high,
they can still take care of nuclear safety. Oil tanker captains say the same
thing. They claim that safety is taken care of. People in the process industry
also say that safety is under control... Still, hazards are always more
common in units where these [personal injury] indicators are worse.”
Since serious accidents are rare, they cannot be used to determine organizations’
risk behaviour or safety. This is why occupational accidents, among other things,
are used as indicators. As seen from the interview excerpts above, opinions vary
as to a possible link between the frequency of occupational hazards and
process, nuclear or flight safety. A representative of Kemira, a chemicals group,
explained that although risky behaviour is seen more often at the company’s
Finnish plants than, for example, at its plants in the USA (based on the number
of occupational accidents), accidents have often occurred in tasks and situations
that do not involve chemicals. The representative doubted that this kind of risk
behaviour could increase, for example, the risk of explosion. Finnair, on the other hand, explicitly separates flight safety from other types of safety (such as
occupational safety and security of information systems), also in organizational
terms, in order to deal with them more clearly.
Risk perception is influenced by the employee’s duties, as well as his or her
department and work role (ACSNI 1993). Thus, people may observe risks in
their organization in systematically different ways. Although employees may,
generally speaking, understand that operations include risks, it may be difficult
to see how one’s own work or work group affects risks. Commitment to safety
may be emotional, without fully understanding the practical implications and
how to ensure safety in one’s own tasks. We, for example, noticed that recently
employed maintenance technicians at nuclear power plants emphasised the
notion that their work promoted safety. When asked whether maintenance
affects nuclear power plant safety, they categorically answered in the
affirmative. However, few respondents could explain how maintenance tasks
might lead to an emergency at the plant, with multiple technical systems and
several safety systems in place.
Organizations face a big challenge as they must create a realistic picture of the
risks and ways to influence safety for people who are employed in different tasks and come from different educational backgrounds. A nuclear power plant safety
expert mentioned that pointing out threats to occupational safety is a lot easier
than communicating threats to plant safety. He described this in the following
way:
“One way to explain what is important is direct radiation [indicating the
radiation level, for example, on the door to the room]. Another way is to use
safety classification for systems or devices. It can show to a certain degree
that a system is important… Owing to historical reasons, these
classifications are rarely seen inside the plant. Earlier we weren’t even
allowed to mark the system that pipes belonged to so that terrorists couldn’t
figure out these things. It’s difficult to notice the important systems unless
you really know what system you are dealing with. They all look the same,
but some of them are important, others aren’t. The way these things are
communicated is by making all work subject to permission. You aren’t
allowed to touch anything you do not have permission for. And permission
must always be given in writing.”
In certain duties, safety is a very concrete matter. Risk management is a core
element in such duties. A nuclear power plant control room employee
commented on the demands and characteristics of the work:
“Of course a certain worry and fear for safety always looms in the
background. That’s to say, if anything should go wrong, we play a very
important part in preventing any extra emissions. It’s something that isn’t
seen in normal power plants. In their case, well… production is interrupted
and that’s it. Our problems start when production is interrupted.”
Apart from increasing the personnel’s awareness of risks, companies can create
procedures, instructions and technical barriers to ensure that occasional risk taking does not significantly affect the overall system. The latter alternative
has been emphasised in the solutions adopted by safety critical organizations,
especially because the notion of humans as the cause of errors has prevailed in
the past decades (see Chapter 3.1). However, organizations should determine the
kinds of risks in which human activities must be restricted using technology or
procedures and those that call for ‘education’, that is, ensuring that employees
have a concrete understanding of the risk and a clear picture of the links between
things. Finnair’s corporate security director provided the following analysis of
his field of responsibility:
“Contrary to flight safety, where the goal is a maximum level of safety, we
[in corporate safety] seek the optimum level of safety… In corporate safety
the focus is definitely on information security. These days nearly everyone
has to use computers. We often have to prevent certain actions and access to
certain places. [Matters related to information security] have not been
subjected to norms to such extent that we would have authority
requirements. Since we cannot offer as massive training as we do in flight
safety… there is risk behaviour and we have had to technically restrict it.”
The way in which organizations handle risks and safety, as well as the approaches they use to control organizational and human risks and safety, is reflected in many other decisions in organizations. The following chapter discusses the motivational impact of the safety critical nature of work.
4.2 Motivational effects of risks and safety
Our studies carried out in nuclear power plants (Reiman & Oedewald 2004a,
2004b, 2006, Reiman et al. 2005, Reiman 2007), as well as many other studies
of safety critical organizations, show that high reliability organizations value and
emphasise safety very highly. A question that has not received as much attention
is how emphasis on safety affects employees’ attitudes to their work and their
work motivation. Stress researchers have suggested that responsibility for the
safety of other people is a major cause of work-related stress (Cooper et al.
2001, Kinman & Jones 2005). One sometimes hears lay people mention that
nuclear power plants are unattractive workplaces because they are ‘scary’
environments. Is work in a safety critical environment found to be stressful? Or
do employees consider discussions about safety risks and different kinds of
training events and rehearsals to be a normal part of work? Can work-related
risks affect the degree of importance that employees attach to work?
Based on our studies, we claim that the safety critical nature of an organization causes the work to be perceived as more significant. Consequently, it also motivates
the staff. This is true at least when safety is considered to be part of one’s own
work instead of a separate demand. Nevertheless, employees sometimes find that
maintaining safety is challenging and causes pressure, as described by a shift
manager at a nuclear power plant:
“It’s a huge responsibility and if you make a mistake, well… it will be all
over the press and company, and depending on the mistake, it will be
discussed for months on end. Not that the person’s name would be
mentioned, but you will feel it. In that sense it involves a lot of responsibility
and can be stressful.”
The interviewed safety experts were of the opinion that the safety critical nature
of work is most often experienced as a positive challenge and something that
increases motivation, rather than as a stress factor.
“There’s been no indication that the responsibilities or risks would cause
stress. I’ve thought about it myself. These issues simply haven’t come up in
our surveys and studies, which is rather interesting. Do people just keep
quiet about these questions or… I suppose, even if you’re a fighter pilot…
once you learn your job, it just becomes [your job]… and of course this has
its own risks.”
“I haven’t found this risk to cause stress… I think it’s experienced more like
a positive challenge, sort of like: these are the kind of things we know how to
do.”
“I think our staff is at its best in crises, no one keeps track of the hours put
in.”
“After stressful situations, we have a debriefing system that is always used
[after crises]… We are proud that we haven’t lost a single employee due to
mental well-being-related issues in the past five years.”
“And it [motivation] isn’t that bad in ground operations either; when I
described the work of airplane technicians, it's a real long education
programme. They are the elite of workers; they know their value and feel
certain pride in it… After all, Finnair is Finland’s second most popular
employer.”
Interviewees also linked employees’ positive work attitude to companies
focusing on occupational safety and emphasising training. Measures adopted to
improve safety were considered to create a positive work atmosphere. This
phenomenon has also been seen in other research. For example, projects on oil
rigs that aimed to promote the health of employees and improve occupational
safety led to a better picture of management and increased loyalty to the
employer, in addition to decreasing the number of occupational accidents.
(Mearns et al. 2003.) Following reforms in French nuclear power plants, new
superiors were able to gain the trust of employees by proving their competence
especially in nuclear safety-related issues (Reicher-Brouard & Ackermann
2002).
On the other hand, if employees have a recognised status as safety supervisors,
as is the case for air traffic controllers (see Palukka 2003), safety may also
become a contentious issue. In Palukka’s PhD research, air traffic controllers
used safety as a key issue to determine their position and professional identity.
When they talked about aviation safety in general, they were very critical.
Palukka identified three ways in which air traffic controllers discussed (in
interviews) aviation safety. Firstly, they drew an analogy between safety and
systems. Safety, in their mind, is a disciplined and carefully designed activity,
and air traffic controllers a necessary element in it – often the only ones
promoting safety. Secondly, air traffic controllers seemed to think that safety is a
facade. This view contradicts the first one. Safety was discussed as
something that does not exist or is under threat, and air traffic controllers often
referred to Civil Aviation Administration’s (management’s) views on safety. In
other words, air traffic controllers took a stand on who has the right to define
safe operations. This kind of talk gives the impression that air traffic controllers
know what safety means, while for the management it is a façade. Thirdly, air
traffic controllers drew a parallel between safety and expertise. Likening safety
to expertise was a way to question power differences between decision-makers
and executors. A person with the expertise to handle safety critical situations
should also have the power to make decisions. In the end, Palukka suggests that
the professional identity of Finnish air traffic controllers is built around the
battle to preserve their own status and self-government. This was true at least in
the uncertain conditions during the study, shadowed by the commercialisation of
the Civil Aviation Administration and the air traffic controllers’ pay struggle.
The message that air traffic controllers wanted to communicate was that safety
should be determined by those able to offer and manage it.
Many studies have also shown that the attitudes and values of the field staff and
management differ from one another (see Cameron & Quinn 1999, McDonald et
al. 2000, Harvey et al. 2002). This holds true in both safety critical and other
environments. Harvey et al. (2002) reviewed differences between the field staff and the management in two British nuclear power plants. The results indicated that
both groups were committed to safety, but the field staff had a more negative
picture of communication in the organization, management in general and
personal responsibility for safety (that is, whether one feels responsibility for
overall safety at the plant). The researchers wondered whether differences
between the field and management could arise from their notions of risk
structure. Differences can be found, for example, in the ways in which the
parties define responsibility for one’s own occupational safety and participation
in risk prevention (Harvey et al. 2002, p. 32). This is why employees took a
critical attitude to management and communication.
Making risks concrete to employees appears to be a reasonable strategy since
work that furthers safety motivates people. However, motivation does not do
away with the fact that responsibility is heavier on certain employee groups, and
employees feel stressed in certain situations. As one of our interviewees said,
personal work-related stress or fears are not always expressed in words. This
may also be caused by the ‘macho culture’ sometimes seen in technical fields
(see e.g., Ignatov 1999). Self-confidence is valued, and feelings are usually not
shown in the (Finnish) work community. In addition, safety critical
organizations are structured so as to prevent safety from being the responsibility
of one person alone. In such an environment, expressing concern could be
interpreted as a lack of confidence in co-workers or the system as a whole.
4.3 Complexity of organizational structures and
processes
The organizational structure in safety critical organizations is typically a
hierarchical one divided into several departments that support and control one
another. Operations, however, are usually carried out in a traditional line
organization. Not even the process, team and matrix organization models
proposed in recent years have shaken the traditional line organization, which at
least theoretically is responsible for everyday operations. However, companies
have typically set up several parallel functions that support, control and assist the
line organization. These include, for example, technical support, quality control,
laboratory, radiation protection and training. Their role in relation to the line
organization is complicated in terms of both practical work and responsibility.
An interviewee working in a managerial position in maintenance described the
organization in the following way:
“The starting point [is] a line organization divided into technical fields. The
disadvantage is that many things are so comprehensive in terms of
technology: valve, actuator, pump – that means there are engines and you
need electrical engineering, automation and mechanical engineering.
Maintenance of this type of objects, especially if something bigger is wrong,
isn’t nearly as efficient as it could be…
As for advantages… well, in most cases it is obvious who holds
responsibility… Clear responsibility-power relations, that is, who is
responsible and who handles things. I mean, ours isn’t a pure line
organization. We have certain expert fields, vibration controls, quality
controls and so on… that sort of overlap. Plus we have people responsible
for systems… the idea is that they consider the system as a whole and use
the line organization in this work… The goal is to manage the interfaces and
prevent any no-man’s-land from coming about.”
A general comment is that the safety critical nature of work makes the
organization structure and processes more complex (Perrow 1984). The goal of
independent quality control, for example, is to review work from an outsider’s
point of view in an attempt to more easily detect errors. In addition, many safety
critical organizations maintain a group of experts familiar with the company’s
technology to ensure that help is close at hand when needed and that the
organization is not dependent on market fluctuations. On the other hand, a more
complex social system (organization) sets new demands on the personnel. Work
processes become longer chains, whose flow can be difficult to understand.
Apart from a complex organizational structure and convoluted processes,
challenges are also caused by responsibilities being distributed among several
companies. For example, contractors are commonly used in service and
maintenance operations. Annual outages of nuclear power plants, for example,
require hundreds of outside workers for a few weeks. In a similar vein, oil
drilling may involve employees from many different companies. And even when
all participants belong to the same corporation, company functions can have
widely differing subcultures (Nilsen 2006).
Complex organizational structure and redundant safety systems may actually
increase the safety risks. Rijpma (1997, p. 19) states, in line with Perrow (1984)
and Sagan (1993), that redundancy makes the system opaque and in that sense
more complex. Component failure and human errors may be more difficult to
detect since they are compensated for by backups. Backup systems may turn out
to be less independent than expected and systems may be out of order at the
same time. Rijpma also suggests that redundant data collection practices may
lead to uncertainties and contrasting opinions (Rijpma 1997, p. 20, see also
Perrow 1999).
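The observation that backups may be less independent than expected can be illustrated with a small, hypothetical calculation in the spirit of a simplified beta-factor model; the single-channel failure probability and the coupling fraction below are invented for illustration and are not drawn from the studies cited here.

# Hypothetical numbers: a single safety channel fails with probability p;
# a fraction beta of those failures is a common cause that disables both
# redundant channels at the same time (a simplified beta-factor model).
p = 1e-3
beta = 0.1

failure_if_independent = p ** 2                           # both channels fail by chance
failure_with_coupling = beta * p + ((1 - beta) * p) ** 2  # common cause dominates

print(f"Assuming full independence: {failure_if_independent:.1e}")  # 1.0e-06
print(f"With common-cause coupling: {failure_with_coupling:.1e}")   # about 1.0e-04

Even a modest common-cause fraction can dominate the calculated system failure probability, which is one sense in which redundancy may make the system's real behaviour harder to judge from its nominal design.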
A particular risk resulting from the complexity of organizations is that an
employee may start to rely on external control and believe that someone else
ensures that even weak performance will not endanger the safety of the entire
organization. Hints of such attitudes can be found in many studies (Reiman et al.
2005, Oedewald & Reiman 2005) and in the fact that questions related to
personal responsibility are particularly complex in safety critical organizations
(see Chapter 4.8). Hackman and Oldham (1980, p. 75), studying work
motivation, present the following thought-provoking observation: “[t]he irony is
that in many such significant jobs [such as an aircraft brake assembler], precisely
because the task is so important, management designs and supervises the work
to ensure error-free performance, and destroys employee motivation ... in the
process.”
The safety impact of employee motivation has not received much attention in
research. As explained in Chapter 4.2, the safety critical nature of work
motivates employees, but this motivation may be dampened if the procedures used in the organization create the impression that the actions of individual employees do not affect safety. ‘Ensuring’ that the employees’ work is safe by using, for example, detailed procedures, redundant operations and independent inspectors may weaken work motivation and the feeling of the
safety impact of one’s work, which, in turn, may influence safety in the long run.
An employee who had worked long in a nuclear power plant expressed this in
the following way:
“(Q: Has your work come to involve any new competence requirements over
the years?)
Yes… paperwork, there’s more of that.
(Q: As a result of what?)
All the new rules. And all kinds of safety systems and controls of controls of
controls, well, they result in all that paper. It’s as if the actual work was left
in the background.
(Q: So all that paper is becoming a nuisance?)
Exactly.
(Q: I guess it isn’t always easy to understand how all that paper affects end
results?)
Well, if, for example, you notice a small defect somewhere, you don’t
necessarily feel like pointing it out since work orders are so difficult to
create, plus you get into all that paperwork.”
Even proponents of the HRO theory admit that safety critical organizations are
structurally complex. Talking about the Diablo Canyon nuclear power plant,
Schulman said that it was very difficult for an individual employee to describe
the structure of organizational responsibilities or the process needed to carry out
work. However, HRO advocates do not believe that the complexity of
organization structures weakens safety. In their opinion, complex structures are
needed in complex environments (Schulman 1993, 1996).
Specific structural features do not, however, determine the culture of an organization. It is therefore impossible to state that a certain kind of structure would promote safety, or would automatically be more prone to risk, regardless of the organization. Sociologist Mathilde Bourrier (1999) has studied how the
reliability of safety critical organizations is developed in social practices. She
studied the design and implementation of annual maintenance in four nuclear
power plants (two in France, two in the USA). Bourrier identifies four general
challenges in these tasks: 1) The design of annual maintenance calls for
coordination and cooperation so that hundreds of people with different
backgrounds and competences can carry out their work punctually according to
schedule. 2) Annual maintenance tasks must be carried out in compliance with
work orders and permission to start work must be obtained. This means that
tasks must be prioritized and their interrelations must be identified. 3) The roles
of the operating organization and maintenance during annual maintenance differ
from those during normal operation. 4) The control of outside contractors and
their integration into the work community is an additional challenge. The
solutions that the organizations had adopted to meet annual maintenance challenges differed from one another. For example, operation and maintenance had different roles in different plants, and this also influenced the relation between the two functions. One plant had not clearly defined the roles; instead, employees improvised along the way. In two plants, cooperation between operation and maintenance was difficult and the atmosphere contentious, among other things because in one of the plants the performance-based pay system made operation shifts compete against one another. As a result, shifts tried to delay the start of risky or otherwise laborious maintenance work to the next shift. In the fourth plant, the roles and tasks had been agreed and planned so carefully in advance that task distribution between operation and maintenance caused no problems. The superiors, however, felt that employees did not take initiative and instead expected management to provide solutions even to unexpected situations.
Bourrier analyses the effects that these solutions might have on reliability.
Different types of official or unofficial ways to follow rules and to act in the case
of incomplete procedures emerged in each plant. Bourrier refrains from ranking the strategies and instead underlines that a particular strategy may either promote reliability or threaten it. For example, the reliability
of one plant was based on situation-specific improvisation if no applicable
procedures were available. Supervisors ‘secretly’ approved the practice, trusting
both themselves and the staff. The strategy’s strength came from a good team
spirit and strong experience in solving problems. In another plant, reliability was
based on following the procedures word for word. If no procedure existed for a
specific situation, the organization had set up a practice that enabled new
procedures to be produced without delay. In Bourrier's view, the strategy's weakness was that it supported the employees' tendency to shy away from independent decision-making. This may constitute a threat to reliability in an emergency where quick, spontaneous action is crucial. In summary, it seems fair to say that different organizations have their own ways of using the same structures and processes.
It is worth highlighting two common viewpoints on complex structures. Firstly,
as observed by HRO theorist La Porte (1996), to be able to control a complex
technical system one needs theoretical technical competence as well as
operational experience. These skills have usually been assigned to different
organization units, which, however, depend on one another for getting and using
information. This often leads to internal conflicts between operators and
technical experts. Process and matrix organizations, as well as interdisciplinary
equipment responsibility areas, attempt to deal with such conflicts. What makes
the situation difficult is that expert and operating organizations have different
languages and methods. Calculations made by experts might be considered
theoretical or difficult to grasp. Expert organizations, in turn, may criticize
operating organizations for not providing enough information that experts need
to produce more reliable and useful calculations. The chasm between these
‘worlds’ has manifested itself in a number of accidents, such as the Challenger
explosion (Appendix A).
Does this mean that differentiation between thinkers and doers is necessary for
safety? Some of our interviewees said that the goal is to benefit from the
interests of different people: some are more interested in analyses, while others
are more practically oriented. However, this does not mean that they should
work in different units. Do organizations believe that those in charge of
operative activities should adhere to an attitude that complies with rules and
respects routines, an attitude which technical experts might question? One of the
underlying factors may be the notion of independence of assessments needed in
decision-making situations. One of the interviewees also suggested that this
model makes it easier to prove the existence of certain (safety) resources to
authorities. Considering how often internal communication in organizations is
deemed to be deficient, relatively little attention has been given to the bases for
this differentiation in safety critical environments.
Another common topic involves the safety and efficiency impacts of specialised
organization units. For example, the maintenance organization of industrial
plants is often divided into electrical, mechanical, automation and construction
departments. The advantage of this is that expertise in a particular field
accumulates within the organization, and work that requires specific competence
can be handled efficiently. However, many tasks call for the contribution of
other parties, as shown by the description given by the maintenance employee at
the beginning of this chapter. Organizations are not always aware of how much concrete activity is needed to collect information and convey it to other parties. Lintern et al. (2002) call this a coordination load. As an organization grows bigger and more complex and the fields of competence become narrower, the share of overall work taken up by coordination also increases. This is nobody's primary duty, and organizations usually do not have special competence for the task. In fact, it is rarely even thought of as actual work and is not necessarily described in procedures. Ironically, this is why initiatives aimed at promoting coordination and cooperation, such as new information systems or reorganizations, may have unexpectedly large effects in both the short and the long run (see also Reiman et al. 2006).
In other words, a complex organizational structure makes planned change a difficult and challenging task. Changing the responsibilities of a single function or department is reflected in many places and calls for changes in other functions, as well as updates to procedures and documentation. The impact of changes on practical activities is difficult to predict since practical activities, as indicated by Bourrier's examples, are often something quite different from simple adherence to rules and the line organization.
4.4 Modelling and predicting organizational performance
Our fourth topic deals with the predictability of organizational activity. Safety critical organizations cannot follow a reactive strategy in which they act only after problems appear. In practice, this is ensured by the need to maintain public trust as well as by regulatory oversight. Even small problems and incidents can be unbearable in financial terms. This makes the requirement for predictability different from that in other business fields. But how can organizations predict the future? And how do they prepare for the safety impact of changes?
The functioning of the overall safety critical system is typically ensured using
many different practical methods. The redundancies of technical and
organizational structures described above are one way to reach the required
result even if some component were to fail. On the other hand, organizations
also try to ensure that all parts function without failure. Components of technical
systems can be tested in laboratories and functional characteristics in simulators.
Processes have been designed to make it possible to systematically carry out
changes to technical systems. In some fields, such as the nuclear power and
chemical industries, attention must also be paid to the human and organizational
characteristics of systems. Organizations must at least prove that they have the
required competence and that sufficient human resources have been allocated to
different functions.
Safety critical organizations also strive to predict the possible ways in which the system might end up in an accident. The concept of design basis accidents is used in the nuclear field to denote those accidents which are anticipated to be possible in the given design. Beyond design basis accidents are then those accidents which have not been anticipated or are considered extremely improbable. After
accident scenarios have been defined, various physical, functional and symbolic
barriers (Hollnagel 2004) are set in place to prevent the event from developing
into an accident. For beyond design basis accidents, mitigation of the radioactive
release into the environment is the primary goal. Thus, this kind of accident
prediction is based on two principles: First, experience from various accidents is
accumulated and barriers are set in place to prevent their recurrence. Second,
risk analysis and various failure analyses are utilised in order to predict the
mechanisms of possible system failure.
As explained earlier, safety critical organizations use both qualitative and
probabilistic (PSA) risk analysis methods. The results of risk assessment are
used to support decision-making and prove to authorities that the safety impacts
of design solutions have been taken into consideration. The nuclear power
industry has made efforts to include human and organizational factors in PSA.
To put it simply, this means that someone has to determine the probability for a
person committing an error in a particular chain of events. It is difficult to make
reliable evaluations of the probability without experimental research results or
experience accumulated over the years of human activities in the situation in
question. After all, people continuously adapt their actions depending on their
analysis of the situation and act on the basis of their history and the practices of
the work community. One of our interviewees, a safety department manager at a
nuclear power plant, gave the following answer to our question about the
possibility to predict an organization’s operations:
“I’m not sure you can model it in PSA. The only way I’ve ever tried to model
an organization is as an entity. That is, in the sense that the control room, or
the control room staff, does certain things. But I haven’t thought
about…how the organization defines… or splits up the task in detail… I’ve
sometimes followed simulations and seen how different shifts carry out the
same run. Their behaviour was pretty different. Different shifts had
completely different roles. No, in my opinion it’s impossible to model – at
least in control room operations. What I mean is that since we have many
different kinds of shifts, predictability is very weak. It doesn’t mean that
things couldn’t be studied with risk analysis. After all, it is exactly in
uncertain cases that risk analysis is needed. We don’t really know how it
works, but we have certain limits and know that it works within those limits.
We even have some idea of the distribution so we could, in fact, use risk
analysis. If we start a deterministic study after the worst case, it might lead
to a pretty lousy result, a terrible oversizing. But risk analysis at least gives
us a possibility for training and development.”
The interviewee automatically approached the topic from the point of view of
PSA. Although the answer brings up the difficulty of predicting organizational
activity, it indicates that risks can be evaluated and controlled. The interviewee
also mentioned a typical method used to control variation in human activities
and the unpredictability of organizations: training.
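To make the modelling problem concrete, the kind of event sequence arithmetic used in PSA can be sketched as follows (the notation and all figures are ours and purely illustrative, not taken from any plant):

f(\text{unmitigated event}) = f_{IE} \times P_{\text{system}} \times P_{\text{HEP}},

where f_{IE} is the frequency of the initiating event (say 10^{-2} per year), P_{\text{system}} is the probability that the automatic safety systems fail (say 10^{-3}), and P_{\text{HEP}} is the human error probability that the operators fail to recover the situation (say 10^{-2}), giving a sequence frequency of roughly 10^{-7} per year. The human error probability is the term that is hardest to justify: as the interview suggests, different shifts handle the same run quite differently, and treating operator performance as a single fixed number is precisely the simplification that makes the organization difficult to model in PSA.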
Automating operations is another way to control the ‘unpredictability’ of human
beings. This means replacing humans with technology. This is an ongoing
development trend, but far from a problem-free one. When automation is designed, the reliability of a person, the designer, plays a central role: it largely determines the reliability of the automation. On the other hand, as
Zuboff wrote back in the 1980s, automation usually changes the overall system
more than we can realise beforehand. The control of the most demanding
situations is left to humans. It is a particularly difficult task since automation
makes it more difficult to perceive and get a feel for the overall system. It may
be difficult for people to succeed in their task unless the system has been
designed extremely well. This, in turn, may emphasise the impression that
human beings are less reliable than technology.
Many of the activities of humans in organizations are impossible to replace with
technology. Furthermore, the risks related to human and organizational
performance are difficult to model in mathematical terms. Organizations have
had to develop other types of strategies to understand the impact that human
activities have on safety. We enquired into our interviewees’ opinions about the
significance of and ways to predict organizational activities. A representative of
the chemical industry provided the following answer:
“[The safety impact of an organization] is very significant, especially in
occupational accidents. In our experience, most of the accidents are related
to the safety of behaviour, which is why we have used behavioural safety
consultants in recent years. One option is to define clear responsibilities so
that everyone understands what the employee, shift supervisor and
department head is in charge of. That means dealing with responsibilities…
and, in practice, that superiors tackle matters (…) And, of course, there are
these barriers developed by engineers that eliminate human errors.”
(Q: Is it possible to predict an organization’s activities?)
“We use HAZOP and Sixstep for investments and major changes. Change
management, of course, is emphasised in policies and other matters, but I’m
not sure if it is included in Sixstep. Training-related topics are certainly
included, but I don’t know if it models, for example, organization structures.
Experienced plant managers, of course, know their staff and can make
decisions on that basis… but whether there’s a better way to do it than by
knowing the staff… (…) Studies also show that downsizing lowers the level of safety. Safety is often of a securing nature, that is, it involves securing elements in the background. Heavy downsizing removes these backups, these analytical, passive elements, from the organization. (…) And everyone in the
company understands that we cannot make organizational changes that
might put safety at risk, but organizational changes are often carried out by
business consultants…”
The interviewee first discussed the reliability of organizations in terms of preventing errors or minimising their impact, and later expressed the idea that safety is created through action. An organization must have sufficient resources,
experts, backups and a management that knows the staff. Finnair aims to predict
organizational activities using a well-known approach.
“We use the Reason model to predict these matters. Identifying latent errors,
for example, during organizational changes, when companies are
outsourced and streamlined... Those who have to study these safety effects
are sometimes left behind in this respect. We cannot always see the overall
picture. It doesn’t happen often but the rate of incorporation has been pretty
high… We deal with the lower part of the pyramid to predict that medium
and higher-level [events] do not take place. So in this field, the Reason
model is at its purest. Plus reporting on our own mistakes is also good.”
The interviewee identified the problem involved in the ‘pyramid’ or ‘iceberg’
approach9. It is, to a certain extent, a reactive, instead of a purely proactive,
approach. The goal is to predict the most significant future problems and prevent
them by identifying and tackling less critical deviations.
A nuclear power plant representative described the kinds of matters that can be
reviewed when considering the safety of organizations:
(Q: Are you expected to prove these points to the authorities?)
“Yes, we have to explain the decision-making routines: the people who make
decisions plus all of the responsibilities. The administrative rules, at least
explain all of these, and they constitute a document that the authorities
approve10. The administrative rules define all of the plant’s responsibilities.
It does not deal with resources. The sufficiency of resources, of course,
interests them, but you cannot have authority requirements for that.
People… try to ensure that we have sufficient resources. But the authorities
cannot, obviously, be the driving force in this respect.”
Fortum’s representative was of the opinion that a more difficult, and perhaps
more essential, question than predicting an organization’s resources was the
staff’s ability to carry out its tasks.
“Managerial skills are clearly more important than organizational factors. An organization must obviously have sufficient management resources; self-guiding teams cannot ensure safety, nor many other issues, for that matter (…) It is relatively easy to objectively say that the structure is such and such, but whether individuals have the required competence to handle matters, well, that involves completely different questions, and that is more difficult to determine based on an engineering education.”

9 The interviewee refers to a theory according to which a serious accident is the tip of the iceberg: for each serious accident there exist hundreds of hazards and smaller accidents, and each of these, in turn, corresponds to several deviations and errors.

10 The regulatory guide YVL 1.1 (STUK 2006) defines the content of administrative rules as follows: “In accordance with Section 122 of the Nuclear Energy Decree, the administrative rules shall determine the duties, authority and responsibilities of the designated responsible manager of a nuclear facility, his/her deputy and the rest of the personnel needed for operation of the nuclear facility.”
All interviewees emphasised the management’s role in ensuring the safety of
organizational activities. The management is, in a sense, in charge of ensuring
that organizational activities stay within certain limits, even though all of the
challenges related to the behaviour of humans and organizations cannot be
understood or solved. All of the interviewees expressed the need to predict
events and to make practices more systematic. They did not show much
confidence in the self-guidance of organizations or work groups in this context.
After interviewing several employees in safety critical organizations, Schulman
(1996) states that the interviewees often embellished their statements with
stories about heroic actions. Heroism involves actions outside job descriptions
and roles, and the element of surprise is crucial to it (Schulman 1996, p. 73). The
situations usually call for fast action, decisiveness and improvisation, and the
‘hero’ has been under physical risk when acting to save the situation. Schulman,
however, points out that one of the interviewed organizations made no
references to heroic stories. The organization in question was the Diablo Canyon
nuclear power plant. In fact, Diablo Canyon’s company culture was downright
anti-heroic. In Schulman’s opinion, Diablo Canyon aims to ensure reliability by
protecting itself against errors caused by hubris rather than timidity. The
company trusts analytical anticipation more than organizational flexibility.
Procedures are used to turn information collected from technical systems into principles that restrict human activity. In this way the organization essentially creates a foundation for operations that do not require heroic acts (Schulman 1996, p. 76).
Our interviews indicate that Finnish high reliability organizations work much
like the ‘anti-heroic’ nuclear power plant. Schulman discusses the role of
heroism in high reliability organizations, which aim to maintain a given state rather than achieve a completely new one, and introduces the term ‘static
heroism’ as opposed to ‘change heroism’. As an example, Schulman uses the
Challenger accident (see Appendix A), in which heroic activities of certain
individuals (that would have gone against the prevailing culture) might have
prevented the catastrophe.
The attempt of safety critical organizations to predict operations may make them
more static than conventional industrial organizations. Staying the same is easier
than changing in the sense that even when organizations do not work in the best
possible way, problems related to tried and tested solutions are predictable. The
studies we carried out in Finnish and Swedish nuclear power plants show that
employees experience organizational changes as stressful events that cause
insecurity11. In many cases the changes also affect the employees’ confidence in
the management’s attitudes and commitment to safety. We have noted that the
opposition shown by employees is more than mere change resistance. It often
involves real concern about the employees’ own and the whole organization’s
safety. One of the main reasons for concern is the deterioration in predictability.
Tacit, and sometimes written, information about the organization’s
responsibilities and work processes, as well as the roles of cooperating parties,
deteriorates at least for a while. This can be seen, for example, in employees
feeling that they have less control of their own work.
Staff reductions have often been justified by saying that the amount of work decreases as a result of operations being merged and overlaps being removed. This, however, is not always the case. In Great Britain, for example, privatisation projects have
created considerable amounts of additional work for power plants, much of
which had not been predicted. Power plants have often taken on downsizing
projects without first determining the scope of the required competence and the
minimum staff needed for different activities (Bier et al. 2001). In some cases
change projects have resulted in departments losing core competence. The British nuclear regulator NII (Nuclear Installations Inspectorate) is concerned, for example, about streamlining projects that consist of several individually small measures but whose overall impact may be of great significance. (OECD/NEA 2002.)
11 For the impact of organizational changes see Reiman et al. (2005), Reiman and Oedewald (2005) and Reiman et al. (2006), as well as HSE (1996), Baram (1998), Wright (1998), Woods and Dekker (2000), Bier et al. (2001), Ramanujam (2003), Vicente (2004), and Kettunen and Reiman (2004).
All of our interviewees also considered organizational changes to be difficult in
terms of safety. According to them, especially outsourcing, staff cuts and
acquisitions can lead to unexpected impacts.
“I don’t believe anyone plans their resources in the same way as a sales or
IT company. It wouldn't work here. I have sometimes come across the
problem that when we outsource work, the resources allocated to it [by the
contractor] are insufficient.”
“The outsourcing of service and maintenance is particularly interesting.
We’ve got several figures on that, showing that it isn’t so easy… It might be
worthwhile to review the number of accidents that occur at the contractors’
and compare it to our own people. I haven’t done that. But they simply
cannot be as familiar with the plant, and they don’t feel as strongly about it.
I have the feeling that they might have [more accidents than our own
employees].”
Research results on occupational accidents among contractors and companies’ own staff are contradictory (see Kettunen & Reiman 2004, Clarke 2003). In the chemical industry, results seem to indicate that contractors have more accidents than companies’ own employees (see Kochan et al. 1994, Baram 1998). Companies have tried to prevent the negative effects of changes by emphasising, for example, staff training.
4.5 Importance of training
Our fifth theme focuses on the methods used for staff training and their
objectives. Safety critical organizations make considerable investments in
training, especially in relation to critical tasks. All of our interviewees were of
the opinion that their companies make bigger investments in staff training than
Finnish companies on average.
“Accident investigations etc. always say that training, training and training
is the only remedy for all safety risks (…) And psychological tests focus on…
risk-takers are not accepted for all duties. But these [psychological tests]
are a special case, they only apply to people at the higher ranks of the
organization.
(Q: What do you consider to be the main objective of training?)
Attitude and expertise. We have many different types of training, what with
the field being so extensive. But whatever the case, we also convey our way
of thinking.
(Q: What is the right attitude?)
A professional attitude to safety. In other words, we know the risk involved.
And an extremely serious, anticipatory and serious attitude to it, a
responsible one.”
Staff training is one way to improve the predictability of organizations. It can be
used to ‘standardise’ human behaviour. Finnair’s representative described this in
the following way:
“All aviation [training] systems have their origins in the shipping business
(…) What differentiates us from shipping is that all of our operative
measures follow a certain procedure [describes the checklist procedures of
the pilot and copilot]… The same applies to service and maintenance.
Service operations may also display personal individualism, but the goal is
to do away with that (...) Our training is really tough. The goal is to make
sure that the task distribution between the crew and flight attendants is
always the same, whatever the crew… [Our goal is] that certain events
trigger certain behaviour. We have noticed that the better the training, the
bigger the chances to survive [serious events].”
When asked about the nature of the service actions taken in unexpected
incidents, such as failure alarms, Finnair’s representatives gave the following
answer:
“The automation in our most modern aircraft carries out self-tests and gives
instructions on what to do. This suits the nature of Finns, what with our
being phlegmatic and slow, waiting to see what the plane does (…) If the
system doesn’t provide any information, we go to our library, look up the
system in question, don’t start to fix anything and send a message to the
technical department. They give further instructions, if needed.
Troubleshooting is carried out only to the extent needed to determine
whether we [the flight staff] can continue [the flight operation]. We never
intervene with maintenance, that would make us guilty of misconduct. Our
cooperation is quite seamless. Our technical department is top-class
worldwide, its expertise and attitude are extremely good, and all of this
enables us to keep our fleet in the air better than others in technical terms
(…)”
(HR director): “Service technician training takes six to seven years after you
graduate from school. It’s a lifetime career.”
On the other hand, specialised training leads to expensive, and to a certain extent
inflexible, staff: tasks cannot be given to just anyone. Many of the tasks in safety
critical organizations also call for considerable expertise. Expertise as a concept
contains the notion of people taking responsibility and being able to question
existing information. In this sense, training that is strongly aimed at
standardisation may be seen to conflict with expertise and the value given to
expertise.
An electrician in a nuclear power plant emphasised the importance of predicting the condition of equipment for the success of maintenance. When asked how much failure anticipation depends on expertise and experience, the
electrician gave the following answer:
“I’d say that… even if you’re professionally competent but are an outsider,
and don’t know the equipment and conditions… A lot depends on years of experience of the equipment and how they function. Sounds, for example, are
telling… equipment that moves, whether it is worn… And control-related
jobs, which mean you must be familiar with earthing devices. The values
may drift, so you must check, use a meter…”
Another problem is that the expert fields in safety critical organizations are very
narrow. The assumption is that people cannot reliably manage extensive topics.
However, it is not clear that reliability can be best achieved by organizing
activities into narrow fields of specialisation. Making the right decisions in
complex environments may be very difficult without a clear knowledge of the
whole system that the decisions will affect. However, the practical question that
organizations often have to answer is which of the two strategies is the more
sensible one: to deepen the competence of individuals in their own field or train
multi-skilled people. A similar question applies to the organization of work:
should an organization be divided into narrow fields, or should it use multi-professional teams and job rotation?
Electrical engineers in a nuclear power plant were asked which was more
important to successful maintenance, specialisation or an overall understanding:
“It has to be an overall understanding as well... We’ve got many tasks, and
if you don’t have an overall picture of things, it’s even more difficult to
carry out the smaller ones… But, if you want, I mean expertise… The
equipment base here, for example, is so broad that no one can manage
everything. Nobody can know and do everything. On the other hand, people
here generally know how to do things (...) But it’s good to understand the
whole, plus it makes things more meaningful... to understand a set of
equipment, as well as individual equipment, or know what it does. If you
don’t know what a particular device is supposed to do, it’s pretty difficult to,
say, start to fix faults. First you have to know how it works when undamaged
and operating. And then you can understand the relations between
independent parts.”
The same interviewee also pondered the implications of organizing work so that a single individual would have a broader field of responsibility:
“At some point we emphasised this jack-of-all-trades type of training; that
everyone should be capable of doing all tasks. We had some expensive
courses. This kind of an organization makes a big mistake if everyone is
supposed to do… The goal should be that whoever does something, must
understand what he is doing (…) In firms that don’t deal with this kind of
equipment, well, it’s completely different, I mean, most of the people here
can deal with many different tasks… but you have to know what you’re
doing.”
The opinions of this interviewee seem to contradict each other. The question is a difficult one, and the right way to ensure safety has not been determined. The problem has been discussed by both Perrow and advocates of the HRO theory. Perrow (1984) suggests that complex organizational systems that contain tight couplings (see Figure 3) are dangerous from the point of view of decision-making. Complexity calls for distributed decision-making, that is, decisions being made where the subject is best known. This could be taken to mean that (narrow) expertise is necessary for safety. However, as Perrow points out, tight couplings between subsystems call for centralised decision-making. In other words, if tasks carried out in different systems have an impact on other systems, decision-making should be handled ‘at the top’. This, in turn, might support the philosophy of standardisation.
Proponents of the HRO theory have noted that high reliability organizations usually act in a centralized and formal way in normal situations, but in a decentralized way in emergencies, based on the expertise needed. This means that they act more flexibly, use social networks more efficiently and emphasise individual responsibility when faced with an emergency. (La Porte 1996.) The problem is how to determine what kind of action each situation calls for, what counts as a “normal” situation. In other words, when should the organization act
in compliance with standards and when can it use expertise in a creative way.
This, in turn, supposedly calls for a comprehensive view of the overall situation!
Official training is not the only way to create operating models for organizations.
Inherent in the organizational culture is the attempt to standardise behaviour.
New employees, for example, face the social pressure to learn the norms of the
workplace, especially those emphasised by the nearest co-workers. New
employees deduce the norms controlling the group’s actions from the behaviour
of group members. Some group members are considered to be role models, and
they have a stronger impact on norm creation. (Hogg & Abrams 1988, see also
Helkama et al. 1998.) In other words, learning is more than an accumulation of
knowledge; it involves a continuous change and development of thinking (and
action) in a specific operating environment12. Learning also does not mean
simply an accumulation of (work) experience. Long experience does not
necessarily and automatically lead to more advanced models of thinking and
action, but may rather result in restricted routines that are difficult to change.
12 For learning and the significance of experience see, e.g., Hakkarainen et al. (1999), Engeström (1998, p. 176), as well as Lave and Wenger (1995).
Our interviewees also commented on the limitations of training with regard to its impact on behaviour:
“These attitudes… it’s difficult to know how they can be changed, unless in
interaction. (…) If training is taken to mean an accumulation of knowledge,
it isn’t enough for everything.”
“Training is just one element, it cannot usually change attitudes. As an
example, we cannot bring about a change in attitudes by putting people back
to school. If we want to see a change in attitudes, we need procedures that
make people... To put it in other words, if a new employee comes to plant A
and we require that helmets are used at all times, and a report is made of
each accident, that would certainly affect my attitude. And that attitude is
also reflected on how I drive on the road. If I come to plant B, where nothing
is of any concern, however much training I get, if the operating model is
what it is, well, training won’t have any effect.”
4.6 Role of rules and procedures
Our sixth topic deals with the strong role of written rules in guiding the operations of safety critical organizations. In addition to training, rules are often considered a way to make the activities of humans and organizations more reliable. This is based on the (as such correct) observation that humans make mistakes and forget things. Rules and procedures try to control these ‘human’ characteristics. On the other hand, procedures are drawn up by people and are thus subject to the same human characteristics as other human activities.
A central question in developing and using rules is whether the rules are supposed to control or to support activities. Views on this may differ between those who impose the rules and those who use them. Dien (1998, p. 181) claims that the design of
procedures in nuclear power plants is based on a mechanistic and static image of
their users: procedures are usually not considered to be a tool that operators can
use to control the process; instead, they are thought of as something that
‘controls’ the operator (cf. Schulman 1996, p. 76). Procedures are typically
designed in accordance with the constraints and characteristics of the process to
be controlled instead of taking into consideration the characteristics of the users
(Dien 1998).
The role and details of rules differ, for example, in process control and
maintenance. Safety critical organizations have a wide variety of rules. Hale and
Swuste (1998) divide safety rules into three categories:
1. Rules that set objectives, such as
a. maximum value for toxic substances in the air
b. ‘as low as reasonably achievable’, ALARA.
2. Rules that determine how decisions about operating methods and actions
should be made, e.g.,
a. ‘if normal instructions for use cannot be applied, the actions taken in the
situation are determined jointly by the shift manager and the
maintenance supervisor on duty’
b. ‘the calculation of risk limits must be based on certain emission and
spread models’.
3. Rules that define concrete actions or required system states, such as
a. ‘protective eyewear must always be used in laboratory facilities’
b. ‘lifting appliances must be inspected at least once a year by a party
competent for the task’
c. ‘pressure vessels must have at least two independent pressure reducing
systems’
d. ‘smoking not allowed’.
The third category leaves the least room for choice of action. According to Hale
and Swuste, the third category maximises both the benefits and disadvantages of
rules. From the point of view of the rule follower, the rules in the third category
save time and effort in familiar situations. They also clarify tasks and
responsibilities, providing a basis for self-assessment. On the other hand, rules in
this category may make people blind to new situations in which existing rules do
not apply. Furthermore, detailed procedures may cause resentment (employees
may feel they are being watched over or that they are not trusted). This, in turn,
may lead to rule violations.
From the point of view of the rule designer, detailed instructions increase the predictability across individuals, define responsibilities clearly, and thus provide a basis for assessment and enforcement. However, these types of instructions also repress individual initiative and can consequently weaken learning in the organization. Maintaining rules and ensuring that they are followed also consumes resources.
When considering the meaning of rules in an organization it is important to understand what role is given to them and what the staff’s attitude to them is. The quality and number of rules as such cannot be used to predict organizational activity. We also claim that the role of rules is understood in very different ways
depending on the organization level and duties. One of the interviewees
described the significance of rules from the company’s point of view:
“(Q: Are rules supposed to support operations or define how to carry out
things?)
Company policy, for example, explains how to act, the minimum rules of the
game. They reflect our values, of course. I’d say that the guiding, controlling
and advising side is handled by function superiors… The company also uses
rules to safeguard itself, to avoid community sanction. If a company has not
clearly arranged its responsibilities and resources it may be punished. And,
unfortunately, we have to assume that the world is an evil place.”
Hale and Swuste say that the more interaction there is (has been) between the
rule designer and rule follower, the easier it is for rules to be accepted in
practical work. This advice has been adopted, for example, in the nuclear power
industry when creating rules for Finnish plants. With the generation change
currently taking place at plants, new employees are faced with procedures
designed by others, and their foundations are not always clearly understood.
“The procedures are of the kind that even if you follow them, things don’t
always work out. What’s good about them is that they force you to think, to
try to figure out what they are trying to say. We’ve sometimes discussed
whether procedures have to be perfect so that you just follow the
instructions on paper and do what they say. But no… Some of the tests and
other stuff are so complicated that they can’t be put on paper, especially
since testing involves factors that may not be possible to… If something fails
during testing, well, that changes the whole situation. But I don’t think they
are bad, the instructions. They are like a checklist, so at least you know what
you ought to do…”
Hale and Swuste (1998, p. 169) suggest that rules should be treated as
progressive restrictions to freedom of action. Rules in the first category define
the objectives, those in the second category explain how to make decisions, and
those in the third category impose actual restrictions on activities. Hale and
Swuste also discuss the organizational and social levels at which rules in
different categories must be designed. As one criterion, they present a dimension
corresponding to Perrow’s tight couplings. It suggests that when interaction
based on common rules increases at the lower levels of a company, and when
interaction becomes physically more dispersed, the (third-category) rules that
restrict activities must be defined at an increasingly higher level. On the other
hand, they also propose that if the level of professional competence in the field is
high and the system is very complex and difficult to predict, third-category rules
can be defined at a lower level.
Organizations could benefit from discussing their policy on rules with the staff.
The categories explained above may come in handy when identifying and
developing rules that guide objectives, decision-making and concrete activities.
Another topic that we have found to be relatively poorly understood at field level
involves the consequences of not following rules. According to Leplat (1998), the safety consequences of not following instructions can be divided roughly into two categories: probabilistic and deterministic. For example, leaving a device energised before maintenance work will (inevitably) lead to an electric shock (deterministic), while speeding in urban areas only increases the probability of an accident and makes its consequences more serious (probabilistic), but does not deterministically lead to an accident.
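Leplat’s distinction can be expressed schematically (the notation and figures are ours and purely illustrative):

\text{deterministic: } P(\text{harm} \mid \text{violation}) = 1 \qquad \text{probabilistic: } P(\text{harm} \mid \text{violation}) = p, \; 0 < p \ll 1.

In the speeding example, p might be of the order of 10^{-4} per trip instead of 10^{-5} when driving within the limit: the violation clearly raises the risk, yet on almost every individual trip nothing happens. This is what makes probabilistic consequences easy to explain away, even though the expected harm, the probability multiplied by the severity, may remain considerable.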
Employees are surely more willing to bend the rules if the consequences are
interpreted as being probabilistic. In other words, if the staff believes that failure
to comply with the rules will inevitably lead to safety or other consequences, it
will more likely follow them carefully. The problem is that unlikely events also
happen at times. In some cases it is also difficult to determine whether the
effects are deterministic or probabilistic and under what conditions they are one
or the other. Many rules include precautionary measures. Non-compliance with them does not lead to deterministic consequences unless redundant defences have also been made inactive. This relates to Hale and Swuste’s criterion of tight couplings, which posits that such rules should be created at a higher level of the organization. One might claim, however, that it is particularly important in these cases that the lowest organizational levels that use the rules participate in their creation, so as to get an overall picture of why certain measures must be carried out, what they affect and, especially, how non-compliance with them might affect oneself and others.
Referring to Norman’s (1988) ideas, Leplat (1998) suggests that work conditions
and the work environment should be developed in a way that reduces the need
for separate rules. The environment and tools should ‘offer’ the right operating
model to employees. Leplat’s views are linked to the principles of ecological
interface design, see Rasmussen and Vicente (1989) and Norros and Savioja
(2004). Leplat also emphasises different kinds of forcing functions, i.e. physical obstacles that prevent harmful actions from being taken. According to Leplat, the organization should assess each instruction and think about ways to do away with the need for it.
A maintenance employee at a nuclear power plant relied on knowing when to
use procedures:
“Procedures work well when you begin to break the whole into phases, but
they are pretty useless for details. You have to improvise and take small
risks. What it boils down to is that if you follow procedures too closely, it
will hamper other people’s work.”
This employee was not the only one to think along these lines. Most maintenance employees working at field level said that their work is characterised by the impossibility of creating rules for all cases. Procedures must often be applied depending on the situation. Part of one’s professionalism is knowing how to interpret, apply and, at times, set aside the procedures so that the work can be carried out.
Training and safety management emphasise the danger in not following
procedures. This message can be interpreted as non-compliance automatically
having dangerous consequences. When the staff notices that this is not the case,
their confidence in the correctness of rules wavers. Organizations might,
therefore, find it useful to clearly communicate differences in the importance
that different rules have for safety. This would enable employees to use rules as
a way to deal with the uncertainties involved in their work.
4.7 Coping with uncertainty
Our seventh topic focuses on how uncertainty involved in complex systems is
handled in safety critical environments. In the previous chapters we have described the lengths to which society and organizations in safety critical fields go to understand, minimise and control risks. Safety critical organizations are so
complex and deal with such technically difficult phenomena that it is unrealistic
to think that all of the uncertainties could be removed. In our view, however,
organizations do not discuss this aspect very much (cf. Perin 2005). The
responsibility for dealing with uncertainties and doubts as to whether all of the
consequences of activities are known is often left to the work groups and
individuals in charge of actual work. In some environments, such as aviation,
this is a conscious choice: the pilot carries full responsibility for the flight
operation. Industrial plants emphasise the importance of employees following
rules and executing their duties carefully. The assumption seems to be that they will never face uncertainties in their work that could not be handled by either normal operating procedures or emergency procedures.
A representative of a nuclear power plant emphasised the fact that the
organization aims to prepare for all possible technical phenomena using a variety
of methods. According to the representative, the organization’s experience and
information collection methods are so comprehensive that hardly any unknown
phenomena take place.
“(Q: Are there any subfields in nuclear power production that are not fully
known or controlled?)
It’s all known and controlled, tried and tested. That’s what I’ve understood.
Or at least broadly speaking none [unknown issues] exist.
(Q: As for electricity production, well, it’s quite a well known process, and
these phenomena…?)
Exactly. If we talk about pure technology, yes, that’s well known. I mean,
things might always come up. You can never take the attitude that you know
everything.
(Q2: If you think about… for example, the aging phenomenon or something
similar, is there anything that is not known or...?)
…This device aging in general… well it’s only about monitoring the
condition of devices. Monitoring is continuous. Depending, of course, on
how the device or system has been categorised, safety classified. And devices
are subjected to preventive maintenance (…) It’s like everyday activity. And
they collect information about operating experiences of devices, which also
gives information about aging (…)
(Q: (…) What about fuel behaviour, is that something that can be
calculated?)
Yes, behaviour can be calculated. There are efficient calculation tools, and
extensive analyses are made before anything is rolled out. And when you
think about fuel, it’s carefully controlled at all times. [The company] also
carries out inspections and controls the production of its own product,
starting from uranium mining. So it really is closely controlled and… fuel
goes through extensive analyses before it is rolled out....”
Our interviews in nuclear power plants show that maintenance employees, for
example, have to deal with uncertain situations all the time in everyday work
(see also Reiman 2007). The concern caused by such situations can be
frustrating if the organization assumes that everything proceeds securely as
predicted:
“It’s pretty difficult because these devices are given an order of importance, and maximum unavailability times have been defined for their failure, ranging from eight hours to twenty-one days. Those are quite common times. For example,
we are now designing a pump [overhaul]… the time allowed for fixing the
device is three days. It has to be done in that time. The arrangements
involved… and preferably no overtime. Money is the other aspect. So there
should be no hassle.”
In the name of safety philosophy, safety critical organizations emphasise that it is not acceptable to carry out work if the consequences are uncertain. One should never experiment or guess. When in doubt, one should ask a person who is better informed. This makes the handling of uncertainties a personal question linked to
professional competence. Maintenance employees described the challenges
involved in their work in the following way:
“Professionals are more certain about everything, beginners are, well, more
uncertain. Here we also emphasise that if you aren’t sure, don’t do
anything. If you don’t know what you’re doing, don’t do anything. It’s
very… we’re very worried that something might happen. It could be
because, if there are problems at a plant, it’s all over the news now.”
“Our principle is that if you’re not sure, you don’t do anything. But you can
never be absolutely certain of course. There’s always something… At some
point you just have to decide that you’re going ahead with it.”
It is important to understand that uncertainty is never caused by an individual
alone but is rather related to the object of work, such as soil structure in oil
drilling (cf. Nilsen 2006) or the condition of technical systems in nuclear power
plants or the reliability of measurement data in process control. The object of
work contains uncertainty; the progress and effects of work can never be fully
predicted. This is why employees really should feel a suitable amount of uncertainty when dealing with these objects.
Uncertainties are apparent during times of crisis or incidents. As risk management methods and safety develop, uncertainties become embedded in standard operating procedures. Safety becomes business as usual, and a sense of certainty and control becomes the norm. Starbuck and Milliken (1988, p. 329) argue that “success breeds confidence and fantasy”. Feeling safe is not, however, necessarily the same as being safe (Rochlin 1999b, p. 10). On the one hand, a certain sense of control is needed in order to be able to act. On the other hand, an illusion of control is an error-provoking factor (Reason & Hobbs 2003), as is a lack of control (Clarke & Cooper 2004).
The need to maintain a feeling of being in control over events is very strong
(Fiske & Taylor 1991, pp. 197–204, Clarke & Cooper 2004, p. 9, Weick 1995)
and thus probably has an effect on the cultural solutions of any organization
(Reiman 2007). A low sense of control can lead to compensating mechanisms, such as belittling the meaningfulness and importance of one’s job, or to a narrowing of one’s interest to some specific aspect of the work, such as following the instructions to the letter no matter what happens.
Vidal-Gomel and Samurcay (2002) studied the work and training of electricians.
They noticed that young electricians were injured more often than the
experienced ones. They also noted that training focused on formal safety
instructions and the technical content of work, while the overall picture of the
work process often remained somewhat sketchy. However, other studies have produced the opposite result. There are indications that accidents happen most often to the most experienced employees, those who consider their work routine. This means that the uncertainties of work are not handled consciously; instead, employees take unconscious, and sometimes conscious, risks (cf. the discussion about the probabilistic and deterministic consequences of non-compliance with safety rules). Some safety measures come to be ‘seen’ as ‘unnecessary’, that is, the probability of non-compliance leading to an accident is judged to be very small.
Recognizing and coping with uncertainty is related to the development of
expertise. Klemola and Norros (1997, 2002) have studied the work of
anaesthetists, which is a highly safety critical and complex task. They noticed that the doctors’ orientations to their patients, and thus to their own duties, differed. Some doctors said that the medical substances and the amounts needed
in anaesthesia can be determined in advance using basic medical information,
such as the weight and age of the patient and the duration of surgery. Other
doctors emphasised that each patient is an individual and the correct medication
can never be known in advance. In their view, there is always the risk that a
patient may not react to anaesthesia as expected. When the researchers followed
doctors in real-life situations, they noted a similar difference in practices.
Doctors of the latter opinion administered medication in small doses, checked
the patient’s vital signs on monitors more often and sometimes physically felt
the patient. Interestingly, young doctors with this approach seemed to learn
professional content more efficiently over time than other doctors. That is,
enhanced awareness of the uncertainties involved in their work made it easier for
doctors to acquire professional competence. This suggests that it is better if
tools, procedures and training do not support the creation of excessive certainty
and a simplified picture of the situation. (Klemola & Norros 1997, Klemola &
Norros 2002.)
In some environments the nature of work makes it necessary to emphasise
certainty. These include cases in which employees work under heavy time
pressure in independent decision-making situations. Palukka’s doctoral thesis
(2003) dealt with the creation of air traffic controllers’ professional identity (see
above). To summarise, the thesis focused on whether air traffic controllers were
worth their (high) salary. Palukka describes various strategies that air traffic
controllers use to justify their status and salary. The most common strategy was
what Palukka called the ‘perfection requirement’. According to air traffic
controllers, their work consists of demanding decision-making with a lot of
responsibility, and it must be carried out to perfection. This distinguishes it from
many other professions that require the same length of education. One of the air
traffic controllers interviewed by Palukka said the following:
“I suddenly realised this year, about our work and salary, that we are better
off [in terms of salary] than many others with a higher academic degree.
And what they have to… first of all they study a lot longer, like, say, doctors,
lawyers, architects and engineers. And they need to know a lot more
information than we do. We can manage with a single handbook for air
traffic control. They’re very simple, if you look at them, the things we need
to know.
If our salary was based on that, I mean, relative to the study time and the
amount of information that we need to master, we’d be getting too much. I’m
honest about that. But, see, what we do can be difficult since it is done under
pressure. If you had to deal with a huge amount of information and had to act
fast, under terrible pressure and the fear of making a mistake, the information
would have to be quickly cut down to very simple things. It’s impossible to
carry out complicated things under great pressure (…) This is very important
to me when I think about the salary I get.” (Palukka 2003, p. 56.)
In contrast, another air traffic controller explained that most of the time their
work does not involve fast and unexpected situations. It calls for a systematic
approach.
“Everyone thinks that our work just consists of [airplanes] coming by and
that we live one minute or two at a time. But that’s not the case. Cases in
which we make that kind of decisions are just the tip of the iceberg. If all the
basics weren’t in order, well, you’d begin to feel nervous pretty soon. The
system has been kept up largely thanks to the hard, unselfish work of our
people.”(Palukka 2003, p. 137.)
The attitude to uncertainties clearly depends on the culture of each sector. Shipping, for example, has a ‘tough guy’ culture in which personal insecurity is strongly concealed (see, e.g., Nuutinen & Norros 2001). The problem is that legal responsibility and the creation of a sense of responsibility require that uncertainties and insecurities be brought up and discussed.
4.8 Ambiguity of responsibility
Our last topic focuses on the broad and multifaceted question of how
responsibility has been set up in safety critical organizations. At what level is the
responsibility for operations located and how are responsibilities considered to
differ from one another? The term ‘responsibility’ was frequently brought up in
our discussions with employees in safety critical fields13. It is a concept that is
considered to be characteristic of such organizations, as well as a vital, but
difficult, issue to them. Responsibility can be treated as a legal question, in terms
of work organization or as a personal experience of assuming responsibility or
behaving responsibly. These viewpoints often intermingle in discussions. A
nuclear power plant instrument technician was of the following opinion:
“Hmmm. It’s difficult to pinpoint your responsibilities. I guess I don’t have
sole responsibility for anything.
(Q: At what level is responsibility situated, is the whole group usually
responsible for things or…?)
Yes, I’m a part of this automation group. Sort of like a… say, a tooth in a
gear wheel.”
13 In the Finnish language, there is only one word, “vastuu”, which means both “responsibility” and “accountability”.
The interviewer originally meant to ask the question from the point of view of
work organization, that is, focusing on the systems and responsibility areas on
which the technician worked. The interviewee, in turn, understood the question
from a more legal point of view, and responded with a slightly startled answer.
As mentioned earlier, many safety critical organizations have been organized
hierarchically. All of the safety directors we interviewed said that legal
responsibility lies in the line organization.
“Responsibility is a legal term, let’s start with that to make things easier. A
plant must have the licences required by law, and each licence may have 50
conditions for company operations. When talking about this kind of a case,
where a company must satisfy conditions, legal responsibility is held by the
company, the Board of Directors. The company is also responsible for
sufficient resources and expertise. It is the plant director’s responsibility to
ask for them. And the Board of Directors’ responsibility is to grant them. If it doesn’t, it could be held liable. The plant director is responsible for all
operational matters and is never fully free from responsibilities. When
delegating matters, one must make sure that people are competent… All
responsibilities are documented. If something happens, they will be
reviewed. Lack of documentation means that the company has not met its
responsibilities, which may lead to a community sanction. We are talking
about Finnish legislation here, and about crimes against the environment
and occupational health. (…) By following procedures the person with
ultimate responsibility can limit his or her responsibility. But the police will
investigate, not only what was done, but also what was left undone, since
neglect is also a crime. (…) They will check to see who was truly
responsible. In practice, responsibility is limited to three people: the shift
supervisor, department engineer and plant director. However, education,
work experience and position also make a difference when considering the
degree of neglect. Someone described responsibility as the mirror image of
real authority. I, for example, am in no way responsible for any plant-level
matters, and the environmental manager isn’t necessarily responsible for
anything in the case of accidents… Usually only line superiors and the plant
director hold responsibility for matters.”
Safety critical organizations have different kinds of expert and support
organizations, such as safety and environmental departments, which complicates
the responsibility structure and decision-making. In our interviewees’ opinion,
support organizations should not change the structure of responsibilities, but this
is not always understood. Fortum’s safety manager explained this well:
“The challenge that we’ve faced and still face… or let’s put it this way,
responsibility is always held by the line organization. Whatever the case,
any person who is aware of the problem and could have dealt with it is also
responsible for it. The line organization needs to be supported by an expert
organization, and the big problem is that the responsibility, even though it
has never been transferred to such an expert organization, well… If, for
example, we have a radiation accident in Loviisa [nuclear power plant],
STUK will turn directly to the radiation protection manager instead of the
employee and superior… An expert organization can easily take over
responsibilities, and that’s dangerous. If an expert organization is under the
impression that the line organization has assumed responsibility, and the
line organization thinks that the expert organization is responsible... This is
a typical problem for Finns...”
Operations in a complex organization are highly distributed and controlled by
various regulations and standard operating procedures. This sometimes obscures
the fact that the organization and its employees have certain legal
responsibilities. As mentioned, complicated structures and work processes may
lead to employees feeling that their work has a smaller impact on safety than is
really the case. They may also affect the sense of personal responsibility. Safety
is secured with multiple measures, the result being that no one feels the need to
take personal responsibility for it. In the maintenance organizations that we
interviewed we heard the opinion that employees who follow instructions to the
letter are freed from personal responsibility. None of the interviewed safety
experts agreed with the claim that compliance with procedures always frees
employees from personal responsibility even if the result of work is
unsatisfactory. The question was, however, considered a difficult one:
“(Q: If you carry out a job or task according to procedures, and something
happens anyway, does the fact that you followed the instructions free you
from responsibility?)
I don’t know if it does legally. But basically yes… yes. It’s difficult to say
whether it frees you from responsibility. On the one hand, rules must be in
order, on the other hand, you have to know what you’re doing. I think I’m
more of the opinion that it doesn’t free you. Then again, you could go higher
up… the superior is responsible for the procedures being in order, it’s sort of like the
superior’s responsibility. In any case, responsibility falls more on the
superior or the whole organization. Superiors are never freed from them –
responsibilities.”
Sometimes emphasising one’s responsibility is considered to be a sign of
inflexibility or bureaucracy on an organizational level. When asked why there
were such clear borderlines between different maintenance work groups, one of
the maintenance experts answered the following:
“[The organization is] old-fashioned in certain ways, whatever the reason
for that is. There are bosses, and they have their subordinates, and – this is
just one view – the bosses want to show that they are in charge when their
subordinates are at work… they’re these old occupational safety issues.”
A maintenance supervisor commented on the advantages that a hierarchical
organization model has in terms of responsibility and uncertainty:
“This hierarchy has its benefits, it makes things safe, in a sense. If I’m not
absolutely sure what to do, I can negotiate and talk about things with my
superior. That is, we make decisions in the line organization and together…
This is… like… pretty safe.”
According to maintenance employees in Finnish nuclear power plants, the
second most important strength in their organizations was a responsible staff (the
first one being the competence of staff). However, the meaning of responsibility
proved to be difficult to explain in detail. Some emphasised close compliance
with instructions, while others took it to mean exceeding the requirements in
one’s own field of accountability. That is, giving more than required or
expected. The safety director at Kemira offered the same interpretation:
“It [responsibility] means more than just following instructions. That is also
important, but it’s more like looking after things. I can look after my
children even though I’m no longer responsible for them. It’s based on one’s
own set of values, the goal to look at things more widely than the
responsibility documented on paper. I don’t know if there is any other way
than to increase commitment to the company. So that people pay a bit more
attention to things than required.”
A practical problem in organizations is that while it is important for the staff to
be responsible, it would be better for predictability if everyone did only that for
which they have been made accountable. It is largely a question of culture which
of the solutions gets wider support. As mentioned earlier, sometimes the
situation may involve a trade-off. Employees might, for example, basically
believe that more time should be spent on solving a particular issue, but due to
rushed schedules and cost pressures they are urged to focus on their own,
essential task without delay (cf. Hollnagel 2004).
From a legal point of view this means determining whether responsibilities have
been ‘delegated competently’. For example, in the case of an accident, investigations can examine whether delegation was carried out in such a way
that the person responsible had sufficient resources to ensure the job’s success.
The supervisor must ensure that the person assigned to the job is competent for
it. Employees have developed different kinds of strategies to cope with conflicts
related to responsibilities. One of the nuclear power plant technicians gave the
following answer when asked what one needs to know to get along in the
organization:
“You don’t necessarily need to know anything, much of it depends on yourself.
If you say that you’re not at all familiar with something, the reaction is: ok,
let’s forget it and ask someone else. If you just sit back twiddling your
thumbs, answering I don’t know to everything… that’s one way to get along.
(laughs) (Q: So there are no real requirements?) Not necessarily. No one is
willing to take responsibility for requiring something.”
This example shows that the organization basically believes that superiors are
responsible for distributing tasks in such a way that employees can deal with
them. On the other hand, the interviewee gave the impression that the principle
is difficult to adhere to in practical work or that it can even be misused. If an
organization has not decided how to deal with uncertainties, the staff may have
conflicting attitudes to responsibilities.
Nissinen (1996) discusses the requirements for competent delegation, that is,
how to tell when a responsibility has been clearly delegated to a particular
individual and the superior has been freed from legal consequences. According
to Nissinen, the allocation of responsibility (in penal terms) can be described
using the relations between prohibited risk-taking and the intensity of control
(Table 3).
Table 3. Relation between risk-taking and the requirement for control under certain conditions.

Criteria: task
- Intensity of control and risk-taking increases: complex, requires guidance; known to involve risks; known possibility of serious risks; urgent
- Intensity of control and risk-taking decreases: simple, usually handled independently; considered to be risk-free; possibility of serious risks not known in advance; no time pressure

Criteria: subordinate
- Increases: limited work experience; weak professional competence; previous failures known; special interference detected in performance (e.g., hangover)
- Decreases: experienced in the task at hand; known to be professional; history of impeccable handling of duties; work ability seemingly normal

Criteria: superior
- Increases: favourable time for control
- Decreases: no opportunity or poor external conditions for control

Criteria: organization
- Increases: hierarchical, based on, e.g., the assumption of control; temporary staff; poor information flow
- Decreases: functional, emphasis on expertise and self-guidance; normal staff; functioning chain of information

Criteria: other conditions
- Increases: risky work conditions (e.g., darkness); control a central or the only guarantee of safety
- Decreases: no risk or interference factors in the work environment; other safety mechanisms also in use (e.g., control in group work)
Questions of responsibility come to the fore after accidents and other events that have endangered the operations of a safety critical organization, when the legal allocation of responsibility is discussed. The allocation of responsibility is also an
individual matter. Studies carried out in different fields show that employees
who have been victims of occupational accidents believe that the management
puts less emphasis on safety than do employees who have not had occupational
accidents14. The problem with these studies is that the cause-and-effect relations are
not fully clear. Research results have often been used to conclude that the
management’s commitment to safety reduces the number of accidents. However,
the results can also be taken to indicate that accidents to employees erode
confidence in the management’s commitment. Some people may believe that the
management is ‘responsible’ for the accident. Taylor (1981) emphasises that
accidents as such have no meaning since they are the result of an unintentional
and unpredictable chain of events. One of the basic characteristics of human
beings is, however, the attempt to find meaning, and – especially in the case of
accidents – the reason or culprit. The organization’s management is a natural
target when trying to find the reason for an accident. Thus, management is
considered responsible for the accident.
Another typical human characteristic is the tendency to blame a mistake on a
person, instead of the situation or the conditions. People typically attribute accidents that happen to others to the laziness, foolishness or indifference of the
victims. The victims, in turn, most likely emphasise the impossibility of the
situation or conditions, that is, external reasons. Studies also show that the more
serious the situation is (to the individual or society), the more disagreeable is the
idea of the accident being pure chance. Chance also implies that the same
incident could target or could have targeted me. This is why people so readily
stress the fact that an incident could have been prevented and the person
involved could have caused it. (Fiske & Taylor 1991, pp. 67–86.)
14 For the link between accidents and safety attitudes, see, e.g., Lee (1998), Mearns et al. (1998), Rundmo (1995) and Barling et al. (2003).
5. Conclusions
Safety critical organizations can be found in many different sectors. In addition,
many industrial jobs and, for example, nursing and the food industry involve
significant occupational risks or health threats to outsiders. In this publication
safety critical organizations have been taken to mean companies whose
operations are important to society but involve risks to it and the environment.
These kinds of organizations can be found, for example, in the nuclear power,
aviation and chemical industries. Their operations are intensively monitored by
the authorities, and the precondition for their existence is society’s confidence in
the organizations’ ability to manage risks.
This publication discussed the nature of activities in controlled and regulated
organizations. How do employees define safety and risk and how does it affect
daily operations and decision-making? What other challenges do safety critical
sectors pose to the staff’s professional competence and conceptions? The goal
was to study whether safety critical organizations show any common
characteristics or challenges that are usually not treated in the field of
organizational research. Another topic of interest was the extent to which the
operations of safety critical organizations are influenced by the same problems,
solutions and rules as other organizations. We focused on organizations that
have defined safety to be one of their defining characteristics and that have
succeeded in ensuring safe operations. The themes discussed in this publication
are also relevant to organizations which seek to improve their safety
performance, as well as to organizations that have not (yet) classified themselves
as safety critical despite the risks inherent in their operations.
Why would one wish to examine safety critical organizations as a group of their
own? First of all, the organizations themselves do it. The management and other
staff often explain the characteristics of their operations as resulting, for
example, from authority requirements, the ensuring of safety and the staff’s
responsibility. Similarly, employees may use the same reasons to oppose change.
Secondly, to an outsider the organizations seem to be somewhat different from
‘normal’ organizations. The organizational structures are conservative,
documentation is more thorough than usual, the organization harbours small
internal expert units and staff training is extensive. The time span for operations
is long, matters are planned in considerable detail and investments are typically
big. Thirdly, society does not allow safety critical organizations to go through
the life cycle of ‘normal’ organizations, possibly testing their limits by taking
risks and learning from serious failures. History shows that catastrophes caused
by organizations are a very real, not only an emotional, threat.
Safety critical organizations must find a success strategy that ensures that the
degree of safety is always sufficient. It seems fair to assume that finding,
implementing, establishing and changing such a strategy and communicating it
to society requires, on the one hand, a subtle search for balance and, on the other, strict decision-making. According to Weick and Sutcliffe (2001), high reliability
organizations differ from other organizations in that they have adopted a
philosophy of continuously reinterpreting the environment, possible problems
and solutions. The main difference compared to typical organizational activities
is that even weak signals get a strong reaction. Nevertheless, normal work
community phenomena and financial boundary conditions influence these
organizations as well. In this sense, safety critical organizations are normal
organizations with normal problems (Bourrier 2002). However, the safety
impact of normal work community phenomena may be of great significance in
safety critical organizations.
Based on literature and our own research projects, we identified eight themes of
organizational behaviour related to the work in safety critical fields. These were:
1. Risk and safety perceptions
2. Motivational effects of risks and safety
3. Complexity of organizational structures and processes
4. Modelling and predicting of organizational performance
5. Importance of training
6. Role of rules and procedures
7. Coping with uncertainty
8. Ambiguity of responsibility.
We consider these eight themes to be special characteristics of safety critical
organizations or special topics that must be handled within the organization. All
of the eight characteristics have given rise to a wide variety of opinions both in the literature and in the organizations themselves. There have been nearly opposite views of the right and safe way to solve the challenges related to the themes. We have tried to shed light on the discussions carried on in the field, using both excerpts from interviews and comments on academic debates. Another characteristic shared by the eight themes is that they are often viewed differently by the management and the field staff. None of them can be solved without the
decision affecting all daily activities.
In our view, some of the characteristics have been widely discussed in companies. These include the complexity of organizational structures and processes. One reason for initiating such discussions may be that the characteristics cause companies considerable expenses. Other characteristics, such as personal uncertainty, are rarely considered to be actual safety challenges in organizations.
Our eight characteristics form a framework that can be used to discuss solutions
related to an organization’s safety. We believe that organizations would benefit
from critically reviewing the topics when determining their safety policies and
objectives and carrying out audits. Similarly, the impact of organizational
changes could be discussed in view of the eight characteristics listed above.
Our categorisation can also be used to compare organizational solutions and
decisions in different sectors. The solution to the first characteristic, that is, the conception of risks, presumably affects the other characteristics. If an organization
simplistically believes that the staff has no need to understand the risks related to
operations, it probably does not expect employees to consider their work
significant in terms of safety. In this case, employees must be motivated and their
activities ensured using coercive organizational structures. Training will also focus
on standardisation. Procedures have a determining role, and individuals cannot be
expected to understand when instructions may be deficient. It may be difficult for
employees to formulate a realistic picture of the uncertainties related to operations.
They consider themselves to have fulfilled their responsibilities by doing what they are told. This makes our list a kind of hierarchy.
It is hardly likely that any organization would follow such simple logic to create
an operating philosophy for a safety critical environment. However, operations
will suffer if the solutions are conflicting. It is also worth remembering that a
solution that works well in a particular sector may not be suitable for other
environments. Solutions are linked to the legislation, history and culture of the
sector in question. Table 4 shows some of the typical answers to each question in
the aviation, nuclear power, oil and chemical industries. The table is based on
literature and our own research material. It should be considered primarily as a
tool for facilitating discussion and reflection on one’s own solutions.
Table 4. Comparison of the emphases in different industrial sectors and the focus of research carried out in the sectors concerning the characteristics of safety critical organizations.

1. Risk and safety perceptions
NUCLEAR POWER: risks are discussed a lot both in the organization and with outsiders; focus on nuclear/plant safety; mathematical methods
AVIATION15: risks are discussed a lot; focus on aviation safety, nowadays also security; risks related to the human and organizational factors also discussed; qualitative methods
OIL INDUSTRY16: focus on occupational health and safety; mainly qualitative methods
CHEMICAL INDUSTRY: environmental, occupational and product safety linked to one another; qualitative and mathematical methods

2. Motivational effects of risks and safety
NUCLEAR POWER: emphasis on the significance of work; personal threat not experienced; radiation as an occupational safety hazard
AVIATION: emphasis on the significance of work; responsibility for other lives a known characteristic of work
OIL INDUSTRY: work considered to be risky; personal threat not experienced; great deal of research in the field of work psychology on stress and attitudes to risks
CHEMICAL INDUSTRY: personal threat not experienced

3. Complexity of organizational structures and processes
NUCLEAR POWER: organizational overlaps considered to be a safety mechanism and necessity; technical redundancy and safety systems complicate work; work order procedures make clear operating models a necessity
AVIATION: characterised by multiple backup procedures; clearly agreed working practices in flight operations
OIL INDUSTRY: numerous (relatively autonomous) players from different organizations; not as clearly determinant working practices as found in the aviation and nuclear power industries; work order procedure
CHEMICAL INDUSTRY: less complexity than, e.g., in the nuclear power industry; operations easier to describe to employees

4. Modelling and predicting organizational performance
NUCLEAR POWER: goal: model the impact of organizational activities on nuclear risk; organizational assessments in general, e.g. safety culture assessment, event investigations
AVIATION: goal: complete standardisation of working practices in flight operations
OIL INDUSTRY: organizational assessments and development programmes related to occupational safety; different practices for different players, no strong, harmonised way to handle matters in the whole sector
CHEMICAL INDUSTRY: organizational assessments and development programmes related to occupational safety; qualitative risk analyses examine organizational prerequisites

5. Importance of training
NUCLEAR POWER: training to standardise operations and increase expertise; rather strict authority requirements for competence
AVIATION: training to change attitudes; training to standardise working methods; strict competence requirements
OIL INDUSTRY: behavioural safety approach quite common in (occupational) safety training; training is not considered to aim at changing attitudes but behaviour
CHEMICAL INDUSTRY: training to increase expertise; behavioural safety approach quite common in (occupational) safety training; training is not considered to aim at changing attitudes but behaviour

6. Role of rules and procedures
NUCLEAR POWER: compliance with procedures an unquestioned, clearly expressed norm; procedures that both regulate and support work
AVIATION: compliance with procedures an unquestioned norm; great deal of regulating procedures
OIL INDUSTRY: procedures considered to support work; rules considered to be impossible in certain situations
CHEMICAL INDUSTRY: not very strict procedures; rules considered to support operations

7. Coping with uncertainties
NUCLEAR POWER: strong emphasis on certainty; uncertainty experienced personally; on the other hand, the learning organization approach and risk analyses; avoidance of heroism as the norm
AVIATION: situation-specificity identified in flight operations; otherwise a culture that puts strong emphasis on certainty; heroism accepted in flight operations
OIL INDUSTRY: uncertainties identified, e.g., in oil drilling; coping based on expertise
CHEMICAL INDUSTRY: uncertainty factors related to chemicals identified

8. Ambiguity of responsibilities
NUCLEAR POWER: structure of responsibilities unclear to employees or considered to be complex; emphasis on the sense of responsibility
AVIATION: quite clear structure of responsibilities; pilot has big personal responsibility
OIL INDUSTRY: often rather unclear structure of responsibilities
CHEMICAL INDUSTRY: responsibility clearly assigned to line organization

15 Literature, e.g., Helmreich and Merritt (1998).
16 Literature, e.g., Mearns et al. (1998, 2003, 2004), Visser (1998), Nilsen (2006).
All of the fields compared in Table 4 encompass different kinds of organizations
that have adopted various approaches. Risk management, safety management
and safety culture have been discussed longest in aviation and the nuclear power
industry. This has led to companies in both fields having relatively harmonised
characteristics. However, even in these fields organizations differ in their
practical attitudes to safety. It seems likely that all of the sectors have
organizations with varying levels of safety culture.
Our opinion about the right, safety-promoting culture (that is, the right solution
to the listed characteristics) has two sides to it. First of all, we wish to respect the
history of different organizations and sectors and the expertise of employees.
This is why we emphasise that different kinds of solutions to risk management,
staff management and handling of safety critical situations may work well in
terms of safety. If a company has long experience in coping with a certain
tension it can very well cope with it in the future. The requirement is that the
organization does not drift along without understanding the reasons for its
organizational decisions or the consequences of situations, and that it does not
fix newly detected problems with measures that wholly contradict its basic
philosophy. Staff training, for example, should follow the policy that the
organization has adopted for specific characteristics. It is unrealistic to imagine
that a new safety policy or training could easily change the organization’s or
sector’s fundamental notions of ensuring safety or its ways to act. The loss of the space shuttle Columbia in 2003 was a sad reminder of the difficulty of
change. The organizational factors that underlay the accident were much the
same as those that led to the explosion of Challenger in 1986 (Feldman 2004).
On the other hand, research indicates that certain choices seem to be better
justified than others. As regards the first characteristic of safety critical
organizations, risk and safety perceptions, it is a fact that people formulate their
own understanding of risks and use this to guide their activities. Organizations
might benefit from taking into consideration the socially constructed nature of
risks. This does not mean that risks shouldn’t be analysed and expressed more
objectively. Different kinds of risk assessments can be used to provide the staff
with risk descriptions based on research results. This may be a more fruitful
option in terms of overall safety than aiming to incorporate the risk caused by employees into the calculations. These are not, of course, mutually
exclusive alternatives.
Another reason to discuss risks is that the safety impact of work motivates
employees. Naturally, a motivating effect can be achieved only when the
organization works to promote safety. This fact could be used better in tasks that
have traditionally not been considered to be core tasks, such as service and
maintenance and financial administration. The organization of work should
avoid systematically allocating routine tasks and special tasks or tasks directly
related to safety to different people. This is a sure way to distance a group of
employees from safety. Instead, general training in the overall system, its risks
and ways to influence safety would be a natural, although laborious, way to
motivate people in safety critical environments.
As for the safety impact of organizational structures, we partly agree with
Perrow’s theory of normal accidents. The way in which safety critical activities
are currently organized is a very complicated one and makes good management
very demanding. Accident reports indicate that the management of this type of
systems can lead to serious failures17. In the future, we hope that the adaptability
(variation) of human activities will not always be restricted by adding technical
obstacles or new rules. Instead, we favour the attempt to understand the
strengths of human activities and use them in plans for organizations and their
tools. If organizations were not based so heavily on the notion of human beings
as unreliable components, they might be simple enough for people to manage
them more reliably.
It is difficult to predict organizational activities because an essential element of
human and organizational behaviour is the ability to act based on the situation at
hand, as well as previous experience. We do not believe that organizations can
be assessed with methods similar to those used for technical systems. Still, we
find the attempt to predict operations in safety critical environments to be
necessary. For example, despite a lot of criticism, the safety culture approach
makes sense in this respect. We have worked on developing an approach for
assessing and predicting the ability of organizations to act. We study the ability
of organizational culture to identify the demands of the organizational core task,
as well as its willingness and capacity to meet these demands. We also try to
understand the dynamics of organizational activities and the incremental drift of
operating methods in dangerous directions (Reiman & Oedewald in press,
Reiman 2007).
Training in safety critical organizations varies depending on the employees’
tasks. We have suggested that the most common motive of training is to make
human activities more predictable, that is, to standardise practices and
performance. It obviously depends on the field and task whether this is
considered to be a sensible approach. In aviation, standardising the operations of the flight crew makes sense since the composition of the crew changes all the time. To ensure profound professional competence and sustained interest, it is necessary to think of training as a way to increase and maintain expertise in addition to a way to control behaviour. Training that maintains expertise is needed throughout working life. Both experienced and less experienced employees need to review the fundamentals from time to time. Old-timers may often have deficient theoretical skills, but they have learned to cope using rules of thumb and by getting to know local conditions, such as the condition and characteristics of different equipment, particularly well.

17 See, e.g., Snook (2000), Vaughan (1996), Wright (1994), Cullen (1990) and Presidential Commission (1979, 1986).
Rules have an important role in safety critical organizations. In addition to
supporting work, they have also proved to be an important tool for analysing
complex operations and producing information for other parties and new
employees. As a result, rules and procedures as such are not something that should be abandoned. However, the frequent debate about compliance and non-compliance with rules is often fruitless. We find it important that organizations examine their rules and the foundations that they are based on, as well as determine the type of safety impacts that may result from bending rules or misinterpreting instructions. Furthermore, rule bending usually indicates system-level problems and not only individual disobedience. Rules are bent because the inherent uncertainties of the system are not as apparent as the pressures to be
efficient.
We believe that it is impossible to achieve full certainty about the impact and
success of measures when working with highly complex socio-technical
systems. This does not, however, mean that we consider accidents to be
unavoidable. Rather, if organizations understood better that their operations involve uncertainties, they could promote the right attitude to work. This would also prevent uncertainty at work from becoming a personal burden and allow it instead to function as an important indicator of significant observations.
Learning and development of competence would most likely improve as well.
The structure of responsibility is, perhaps, the question that has been analysed
least from the point of view of behavioural sciences. The field of organizational
research uses the related concepts ‘commitment’ and ‘organisational citizenship
behaviour’. However, responsibility gets a whole new weight and significance in
safety critical fields. Approaches to safety culture and safety management have
caused confusion by emphasising responsibility. The term ‘responsibility’ has
come to signify any kind of good operation in an organization. We consider it to
be of utmost importance that organizations are also aware of the legal structure
of responsibility. If there is no understanding of true responsibility and authority,
it is difficult to act responsibly. Responsibility without authority is stressful in
the long run. We claim that responsibility comes about more or less on its own
in a healthy work environment as long as people understand how their work
affects safety.
References
ACSNI. (1993). Organising for safety. Third report of the Human Factors Study
Group of the Advisory Committee on Safety in the Nuclear Industry. Health &
Safety Commission. London: HMSO.
Alvesson, M. (2002). Understanding organizational culture. London: Sage.
Baram, M. (1998). Process safety management and the implications of
organisational change. In: Hale, A. R. & Baram, M. (Eds.), Safety Management.
The Challenge of Change. Oxford: Pergamon.
Barley, S. R. (1986). Technology as an occasion for structuring: evidence from
observations of CT scanners and the social order of radiology departments.
Administrative Science Quarterly 31, 78–108.
Barley, S. R. (1996). Technicians in the workplace: Ethnographic evidence for
bringing work into organization studies. Administrative Science Quarterly 41,
404–441.
Barling, J., Kelloway, E. K. & Iverson, R. D. (2003). Accidental outcomes:
Attitudinal consequences of workplace injuries. Journal of Occupational Health
Psychology 8, 74–85.
Bier, V. M., Joosten, J. K., Glyer, J. D., Tracey, J. A. & Welch, M. P. (2001).
Effects of deregulation on safety: Implications drawn from the aviation, rail, and
United Kingdom nuclear power industries (NUREG/CR-6735). Washington DC:
U.S. Nuclear Regulatory Commission.
Booth, R. T. & Lee, T. R. (1995). The role of human factors and safety culture in
safety management. Journal of Engineering Manufacture 209, 393–400.
Bourrier, M. (1999). Constructing organisational reliability: the problem of
embeddedness and duality. In: Misumi, J., Wilpert, B. & Miller, R. (Eds.),
Nuclear safety: A human factors perspective. London: Taylor & Francis.
Bourrier, M. (2002). Bridging research and practice: The challenge of ‘normal
operations’ studies. Journal of Contingencies and Crisis Management 10,
173–180.
Brandsaeter, A. (2002). Risk assessment in the offshore industry. Safety Science
40, 231–269.
BS 8800. (2004). Occupational Health and Safety Management Systems. Guide.
British Standards Institution.
Cameron, K. S. & Quinn, R. E. (1999). Diagnosing and Changing
Organisational Culture: Based on the Competing Values Framework.
Massachusetts: Addison-Wesley.
Cheyne, A., Cox, S., Oliver, A. & Tomás, J. M. (1998). Modelling safety climate
in the prediction of levels of safety activity. Work & Stress 12, 255–271.
Clarke, S. (1998). Safety culture on the UK railway network. Work & Stress 12,
285–292.
Clarke, S. (1999). Perceptions of organizational safety: Implications for the
development of safety culture. Journal of Organizational Behavior 20, 185–198.
Clarke, S. (2003). The contemporary workforce. Implications for organisational
safety culture. Personnel Review 32, 40–57.
Clarke, S. & Cooper, C. L. (2004). Managing the risk of workplace stress.
London: Routledge.
Cooper, C., Dewe, P. & O’Driscoll, M. (Eds.) (2001). Organizational stress.
A review and critique of theory, research and applications. Thousand Oaks: Sage
Publications.
Cox, S. & Flin, R. (1998). Safety culture: Philosopher’s stone or man of straw?
Work & Stress 12, 189–201.
Cox, S. J. & Cheyne, A. J. T. (2000). Assessing safety culture in offshore
environments. Safety Science 34, 111–129.
Cullen, Hon Lord W. D. (1990). The public inquiry into the Piper Alpha
disaster. London: HMSO.
Cunha, R. C. & Cooper, C. L. (2002). Does privatization affect corporate culture
and employee wellbeing? Journal of Managerial Psychology 17, 21–49.
Dekker, S. (2002). The Field Guide to Human Error Investigations. Ashgate.
Dien, Y. (1998). Safety and application of procedures, or ‘how do ‘they’ have to
use operating procedures in nuclear power plants?’ Safety Science 29, 179–187.
Donald, I. & Canter, D. (1994). Employee attitudes and safety in the chemical
industry. Journal of Loss Prevention in the Process Industries 7, 203–208.
Engeström, Y. (1998). Kehittävä työntutkimus. Perusteita, tuloksia ja haasteita.
Helsinki: Edita [in Finnish].
EPSC. (1996). Safety Performance Measurement. Van Steen, J. (Ed.). European
Process Safety Centre.
Farrington-Darby, T., Pickup, L. & Wilson, J. R. (2005). Safety culture in
railway maintenance. Safety Science 43, 39–60.
Feldman, S. P. (2004). The culture of objectivity: Quantification, uncertainty,
and the evaluation of risk at NASA. Human Relations 57, 691–718.
Fiske, S. T. & Taylor, S. E. (1991). Social Cognition. Second Edition. Reading,
MA: Addison-Wesley.
Flin, R., Mearns, K., O’Connor, P. & Bryden, R. (2000). Measuring safety
climate: Identifying the common features. Safety Science 34, 177–192.
Garrick, B. J. (1998). Technological stigmatism, risk perception, and truth.
Reliability Engineering and System Safety 59, 41–45.
Garrick, J. & Christie, R. (2002). Probabilistic risk assessment practices in the
USA for nuclear power plants. Safety Science 40, 177–201.
Grote, G. & Künzler, C. (2000). Diagnosis of safety culture in safety
management audits. Safety Science 34, 131–150.
Guldenmund, F. W. (2000). The nature of safety culture: a review of theory and
research. Safety Science 34, 215–257.
Hackman, J. R. & Oldham, G. R. (1980). Work Redesign. Reading, MA:
Addison-Wesley.
Hakkarainen, K., Lonka, K. & Lipponen, L. (1999). Tutkiva oppiminen.
Älykkään toiminnan rajat ja niiden ylittäminen. Porvoo: WSOY [in Finnish].
Hale, A. R. & Baram, M. (1998). Safety Management. The Challenge of
Change. Oxford: Pergamon.
Hale, A. R. & Hovden, J. (1998). Management and culture: The third age of
safety. A review of approaches to organizational aspects of safety, health and
environment. In: Feyer, A.-M. & Williamson, A. (Eds.), Occupational injury.
Risk, prevention and intervention. London: Taylor & Francis.
Hale, A. R. & Swuste, P. (1998). Safety rules: procedural freedom or action
constraint? Safety Science 29, 163–177.
Hale, A. R., Heming, B. H. J., Carthey, J. & Kirwan, B. (1997). Modelling of
safety management systems. Safety Science 26, 121–140.
Harvey, J., Erdos, G., Bolam, H., Cox, M. A. A., Kennedy, J. N. & Gregory,
D. T. (2002). An analysis of safety culture attitudes in a highly regulated
environment. Work & Stress 16, 18–36.
Hatch, M. J. (1993). The dynamics of organizational culture. Academy of
Management Review 18, 657–693.
Helkama, K., Myllyniemi, R. & Liebkind, K. (1998). Johdatus sosiaalipsykologiaan. Helsinki: Edita [in Finnish].
Helmreich, R. L. & Merritt, A. C. (1998). Culture at Work in Aviation and
Medicine. National, Organizational and Professional Influences. Aldershot:
Ashgate.
Henttonen, T. (2000). Turvallisuuden mittaaminen. TUKES-julkaisu 7/2000.
Helsinki: Turvatekniikan keskus [in Finnish].
Hogg, M. A. & Abrams, D. (1988). Social Identifications. A Social Psychology
of Intergroup relations and Group Processes. London: Routledge.
Hollnagel, E. (2002). Understanding Accidents – From Root Causes to
Performance Variability. In: Proceedings of the IEEE 7th Conference on Human
Factors and Power Plants. Scottsdale, Arizona, USA, September 2002.
Hollnagel, E. (Ed.) (2003). Handbook of Cognitive Task Design. Mahwah, New
Jersey: LEA.
Hollnagel, E. (2004). Barriers and accident prevention. Aldershot: Ashgate.
Hopkins, A. & Hale, A. (2002). Issues in the regulation of safety: Setting the
scene. In: Kirwan, B., Hale, A. R. & Hopkins, A. (Eds.), Changing Regulation –
Controlling Risks in Society. Oxford: Pergamon.
HSE. (1996). Literature survey business re-engineering and health and safety
management: Contract research report 124/1996. London: HMSO.
HSE. (1997). Successful Health and Safety Management. London: Health and
Safety Executive, HMSO.
HSE. (2005). A review of safety culture and safety literature for the
development of the safety culture inspection tool. Health and Safety Executive.
Research Report 367. London: Health and Safety Executive.
Hudson, P. (2006). Applying the lessons of high risk industries to health care.
Quality and Safety in Health Care 12, 7–12.
Hutchins, E. (1995). Cognition in the wild. Massachusetts: MIT Press.
IAEA, Safety Series No. 75-INSAG-4. (1991). Safety Culture. Vienna:
International Atomic Energy Agency.
IAEA, TECDOC-860. (1996). ASCOT Guidelines. Revised 1996 Edition.
Guidelines for Organizational Self-Assessment of Safety Culture and for
Reviews by the Assessment of Safety Culture in Organizations Team. Vienna:
International Atomic Energy Agency.
IAEA, Safety Report. (1998). Developing Safety Culture. Practical Suggestions
to Assist Progress. Vienna: International Atomic Energy Agency.
Ignatov, M. (1999). Implicit social norms in reactor control rooms. In: Misumi,
J., Wilpert, B. & Miller, R. (Eds.), Nuclear Safety: A Human Factors
Perspective. London: Taylor & Francis.
Katz, D. & Kahn, R. L. (1966). The Social Psychology of Organizations. New
York: John Wiley.
Kettunen, J. & Reiman, T. (2004). Ulkoistaminen ja alihankkijoiden käyttö
ydinvoimateollisuudessa. VTT Research Notes 2228. Espoo: VTT [in Finnish].
http://virtual.vtt.fi/inf/pdf/tiedotteet/2004/T2228.pdf.
Kinman, G. & Jones, F. (2005). Lay representations of workplace stress: What
do people really mean when they say they are stressed? Work & Stress 19,
101–120.
Kirwan, B. (1992). Human error identification in human reliability assessment.
Part 1: Overview of approaches. Applied Ergonomics 23, 299–318.
Kirwan, B., Hale, A. R. & Hopkins, A. (2002). Insights into safety regulation.
In: Kirwan, B., Hale, A. R. & Hopkins, A. (Eds.), Changing Regulation –
Controlling Risks in Society. Oxford: Pergamon.
Klein, G. (1997). The current status of naturalistic decision making framework.
In: Flin, R., Salas, E., Strub, M. & Martin, L. (Eds.), Decision Making Under
Stress. Emerging Themes and Applications. Aldershot: Ashgate.
Klein, G. A., Orasanu, J., Calderwood, R. & Zsambok, C. E. (Eds.) (1993).
Decision making in action. Models and methods. Norwood, NJ: Ablex
Publishing Corporation.
Klemola, U.-M. & Norros, L. (1997). Analysis of the clinical behaviour of
anaesthetists: recognition of uncertainty as basis for practice. Medical Education
31, 449–456.
Klemola, U.-M. & Norros, L. (2002). Activity-based analysis of information
characteristics of monitors and their use in anaesthetic practice. Eleventh
European Conference on Cognitive Ergonomics, Catania, Sept. 8–11, 2002.
Kochan, T., Smith, M., Wells, J. & Rebitzer, J. (1994). Human resource
strategies and contingent workers: The case of safety and health in the
petrochemical industry. Human Resource Management 33, 55–77.
Kunda, G. (1992). Engineering culture: Control and commitment in a High-Tech
corporation. Philadelphia: Temple University Press.
Kuusisto, A. (2000). Safety management systems – Audit tools and reliability of
auditing. VTT Publications 428. Espoo: VTT.
http://virtual.vtt.fi/inf/pdf/publications/2000/P428.pdf.
La Porte, T. R. (1996). High reliability organizations: Unlikely, demanding and
at risk. Journal of Contingencies and Crisis Management 4, 60–71.
La Porte, T. R. & Rochlin, G. (1994). A rejoinder to Perrow. Journal of
Contingencies and Crisis Management 2, 221–227.
Lave, J. & Wenger, E. (1995). Situated learning, legitimate peripheral
participation. New York: Cambridge University Press.
Lee, T. (1998). Assessment of safety culture at a nuclear reprocessing plant.
Work & Stress 12, 217–237.
Lee, T. & Harrison, K. (2000). Assessing safety culture in nuclear power
stations. Safety Science 34, 61–97.
Leplat, J. (1998). About implementation of safety rules. Safety Science 29,
189–204.
Levä, K. (2003). Turvallisuusjohtamisjärjestelmien toimivuus: Vahvuudet ja
kehityshaasteet suuronnettomuusvaarallisissa laitoksissa. TUKES-julkaisu
1/2003. Helsinki: Turvatekniikan keskus [in Finnish].
Lintern, G., Diedrich, F. & Serfaty, D. (2002). Engineering the community of
practice for maintenance of organisational knowledge. In: Proceedings of the
IEEE 7th Conference on Human Factors and Power Plants. Scottsdale, Arizona,
USA, September 2002.
Mannarelli, T., Roberts, K. H. & Bea, R. G. (1996). Learning how organizations
mitigate risk. Journal of Contingencies and Crisis Management 4, 83–92.
March, J. & Simon, H. (1958). Organizations. Wiley.
Marono, M., Pena, J. A. & Santamaria, J. (2006). The ‘PROCESO’ index: a new
methodology for the evaluation of operational safety in the chemical industry.
Reliability Engineering And System Safety 91, 349–361.
Martin, J. (2002). Organizational culture. Mapping the terrain. Thousand Oaks:
Sage.
McDonald, N., Corrigan, S., Daly, C. & Cromie, S. (2000). Safety Management
Systems and Safety Culture in Aircraft Maintenance Organisations. Safety
Science 34, 151–176.
Mearns, K., Flin, R., Gordon, R. & Fleming, M. (1998). Measuring safety
climate on offshore installations. Work & Stress 12, 238–254.
Mearns, K., Whitaker, S. M. & Flin, R. (2003). Safety climate, safety
management practice and safety performance in offshore environments. Safety
Science 41, 641–680.
Mearns, K., Rundmo, T., Flin, R., Gordon, R. & Fleming, M. (2004). Evaluation
of psychosocial and organizational factors in offshore safety: A comparative
study. Journal of Risk Research 7, 545–561.
Nilsen, S. (2006). Challenges to safety management when incorporating
integrated operations solutions in the oil industry. In: Svenson, O., Salo, I.,
Skjerve, A. B., Reiman, T. & Oedewald, P. (Eds.), Nordic perspectives on safety
management in high reliability organizations: Theory and applications.
Valdemarsvik: Akademitryck.
Nissinen, M. (1996). Rikosvastuun kohdentamisesta yhteisössä: erityisesti
laiminlyönti- ja tuottamusvastuun edellytyksistä osakeyhtiössä. Lisensiaatintutkimus. Helsinki: Helsingin yliopisto [in Finnish].
Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic
Books.
Norros, L. (2004). Acting under uncertainty. The core-task analysis in ecological
study of work. VTT Publications 546. Espoo: VTT.
http://virtual.vtt.fi/inf/pdf/publications/2004/P546.pdf.
Norros, L. & Savioja, P. (2004). Ihmisen ja tekniikan välisen vuorovaikutuksen
toimivuuden arviointi monimutkaisissa tietointensiivisissä töissä. Työ ja ihminen
18, 100–112 [in Finnish].
Nuutinen, M. & Norros, L. (2001). Co-operation on bridge in piloting situations.
Analysis of 13 accidents on Finnish fairways. Paper presented at CSAPC’01 8th
Conference on Cognitive Science Approaches to Process Control. 24–26
September 2001, Neubiberg, Germany.
Nuutinen, M., Reiman, T. & Oedewald, P. (2003). Osaamisen hallinta
ydinvoimalaitoksessa operaattoreiden sukupolvenvaihdostilanteessa. VTT
Publications 496. Espoo: VTT [in Finnish].
http://virtual.vtt.fi/inf/pdf/publications/2003/P496.pdf.
OECD. (2002). Regulatory Aspects of Management of Change. Summary and
Conclusions. OECD/CSNI Workshop 10–12 September 2001, Chester, UK.
NEA/CSNI/R(2002)20. Nuclear Energy Agency.
Oedewald, P. & Reiman, T. (2003). Core task modelling in cultural assessment:
A case study in nuclear power plant maintenance. Cognition, Technology &
Work 5, 283–293.
Oedewald, P. & Reiman, T. (2005). Enhancing maintenance personnel’s job
motivation and organizational effectiveness. CSNI workshop on Better Nuclear
Plant Maintenance: Improving Human and Organisational Performance. Ottawa,
Canada. 3–5 October 2005.
Oedewald, P., Reiman, T. & Kurtti, R. (2005). Organisaatiokulttuuri ja
toiminnan laatu metalliteollisuudessa. 11 tapaustutkimusta suomalaisissa pk-yrityksissä. VTT Research Notes 2316. Espoo: VTT [in Finnish].
http://virtual.vtt.fi/inf/pdf/tiedotteet/2005/T2316.pdf.
Palukka, H. (2003). Johtotähdet. Lennonjohtajien ammatti-identiteetin
rakentuminen ryhmähaastatteluissa. Akateeminen väitöskirja. Tampereen
yliopisto, sosiologian ja sosiaalipsykologian laitos. Tampere: Tampereen
yliopistopaino Oy – Juvenes Print [in Finnish].
Paté-Cornell, M. E. (1993). Learning from the Piper Alpha accident: A post
mortem analysis of technical and organizational factors. Risk Analysis 13,
215–232.
Perin, C. (2005). Shouldering Risks. The Culture of Control in the Nuclear
Power Industry. New Jersey: Princeton University Press.
Perrow, C. (1984). Normal Accidents: Living with High-risk Technologies.
New York: Basic Books.
Perrow, C. (1994). The limits of safety: The enhancement of a theory of
accidents. Journal of Contingencies and Crisis Management 2, 212–220.
Perrow, C. (1999). Organizing to reduce the vulnerabilities of complexity.
Journal of Contingencies and Crisis Management 7, 150–155.
Pidgeon, N. (1998a). Safety culture: Key theoretical issues. Work & Stress 12,
202–216.
Pidgeon, N. (1998b). Risk assessment, risk values and the social science
programme: why we do need risk perception research. Reliability Engineering
and System Safety 59, 5–15.
Presidential Commission on the Space Shuttle Challenger Accident. (1986).
Report to the President by the Presidential Commission on the Space Shuttle
Challenger Accident, 5 vols. Washington, D.C.: Government Printing Office.
President’s Commission on the Accident at Three Mile Island. (1979). The Need
for Change: The Legacy of Three Mile Island. Washington, D.C.: Government
Printing Office.
Pronovost, P. J., Weast, B., Holzmueller, C. G., Rosenstein, B. J., Kidwell, R. P.,
Haller, K. B., Feroli, E. R., Sexton, J. B. & Rubin, H. R. (2003). Evaluation of
the culture of safety: Survey of clinicians and managers in an academic medical
center. Quality and Safety in Health Care 12, 405–410.
Ramanujam, R. (2003). The effects of discontinuous change on latent errors in
organizations: The moderating role of risk. Academy of Management Journal 46,
608–617.
Rasmussen, J. (1986). Information Processing and Human-Machine Interaction:
An Approach to Cognitive Engineering. North-Holland Series in System Science
and Engineering. Elsevier.
Rasmussen, J. & Vicente, K. J. (1989). Coping with human errors through
system design: implications for ecological interface design. International
Journal of Man-Machine Studies 31, 517–534.
Reason, J. (1990). Human Error. Cambridge: Cambridge University Press.
Reason, J. (1997). Managing the Risks of Organizational Accidents. Aldershot:
Ashgate.
Reason, J. (1998). Achieving a Safety Culture: Theory and Practice. Work &
Stress 12, 293–306.
Reason, J. & Hobbs, A. (2003). Managing Maintenance Error. A Practical
Guide. Hampshire: Ashgate.
Reicher-Brouard, V. & Ackermann, W. (2002). The Impact of Organizational
Changes on Safety in French NPP. In: Wilpert, B. & Fahlbruch, B. (Eds.),
System Safety: Challenges and Pitfalls of Intervention. Amsterdam: Pergamon.
Pp. 97–117.
Reiman, T. (2007). Assessing organizational culture in complex sociotechnical
systems – Methodological evidence from studies in nuclear power plant
maintenance organizations. Academic dissertation. VTT Publications 627.
Espoo: VTT. http://virtual.vtt.fi/inf/pdf/publications/2007/P627.pdf.
Reiman, T. & Norros, L. (2002). Regulatory Culture: Balancing the Different
Demands of Regulatory Practice in the Nuclear Industry. In: Kirwan, B., Hale,
A. R. & Hopkins, A. (Eds.), Changing Regulation – Controlling Risks in
Society. Oxford: Pergamon.
Reiman, T. & Oedewald, P. (2002a). The Assessment of Organisational Culture
– a Methodological Study. VTT Research Notes 2140. Espoo: VTT.
http://virtual.vtt.fi/inf/pdf/tiedotteet/2002/T2140.pdf.
Reiman, T. & Oedewald, P. (2002b). Contextual Assessment of Organisational
Culture – methodological development in two case studies. In: Kyrki-Rajamäki,
R. & Puska, E.-K. (Eds.), FINNUS. The Finnish Research Programme on
Nuclear Power Plant Safety, 1999–2002. Final Report. VTT Research Notes
2164. Espoo: VTT. http://virtual.vtt.fi/inf/pdf/tiedotteet/2002/T2164.pdf.
Reiman, T. & Oedewald, P. (2004a). Kunnossapidon organisaatiokulttuuri.
Tapaustutkimus Olkiluodon ydinvoimalaitoksessa. VTT Publications 527.
Espoo: VTT [in Finnish]. http://virtual.vtt.fi/inf/pdf/publications/2004/P527.pdf.
Reiman, T. & Oedewald, P. (2004b). Measuring maintenance culture and
maintenance core task with CULTURE-questionnaire – a case study in the
power industry. Safety Science 42, 859–889.
Reiman, T. & Oedewald, P. (2005). Exploring the effect of organizational
changes on the safety and reliability of maintenance. CSNI workshop on Better
Nuclear Plant Maintenance: Improving Human and Organisational Performance.
Ottawa, Canada. 3–5 October 2005.
Reiman, T. & Oedewald, P. (2006). Assessing the maintenance unit of a nuclear
power plant – identifying the cultural conceptions concerning the maintenance
work and the maintenance organization. Safety Science 44, 821–850.
Reiman, T. & Oedewald, P. (In press). Assessment of Complex Sociotechnical
Systems – Theoretical issues concerning the use of organizational culture and
organizational core task concepts. Safety Science.
Reiman, T., Oedewald, P. & Rollenhagen, C. (2005). Characteristics of
organizational culture at the maintenance units of two Nordic nuclear power
plants. Reliability Engineering and System Safety 89, 333–347.
Reiman, T., Oedewald, P., Rollenhagen, C. & Kahlbom, U. (2006). Management
of change in the nuclear industry. Evidence from maintenance reorganizations.
MainCulture Final Report. NKS-119. Roskilde: Nordic nuclear safety research.
Rice, A. K. (1958). Productivity and social organisation: The Ahmedebad
Experiment. London: Tavistock Publications.
Rijpma, J. A. (1997). Complexity, tight-coupling and reliability: Connecting
normal accident theory and high reliability theory. Journal of Contingencies and
Crisis Management 5, 15–23.
Roberts, K. H. (1990). Some characteristics of one type of high reliability
organization. Organization Science 1, 160–176.
Roberts, K. H. (Ed.) (1993). New Challenges to Understanding Organizations.
New York: Macmillan.
Rochlin, G. I. (1996). Reliable organizations: Present research and future
directions. Journal of Contingencies and Crisis Management 4, 55–59.
Rochlin, G. I. (1999a). Safe operation as a social construct. Ergonomics 42,
1549–1560.
Rochlin, G. I. (1999b). The social construction of safety. In: Misumi, J., Wilpert,
B. & Miller, R. (Eds.), Nuclear safety: A human factors perspective. London:
Taylor & Francis.
Rundmo, T. (1995). Perceived risk, safety status, and job stress among injured
and noninjured employees on offshore petroleum installations. Journal of Safety
Research 26, 87–97.
Sagan, S. D. (1993). The Limits of Safety. Organizations, Accidents, and
Nuclear Weapons. New Jersey: Princeton University Press.
Sandberg, J. (Ed.) (2004). Ydinturvallisuus. Hämeenlinna: Karisto [in Finnish].
Schein, E. H. (1985). Organizational Culture and Leadership. San Francisco:
Jossey-Bass.
Schulman, P. R. (1993). The negotiated order of organizational reliability.
Administration & Society 25, 353–372.
Schulman, P. R. (1996). Heroes, organizations and high reliability. Journal of
Contingencies and Crisis Management 4, 72–82.
Singer, S. J., Gaba, D. M., Geppert, J. J., Sinaiko, A. D., Howard, S. K. & Park,
K. C. (2003). The culture of safety: Results of an organization-wide survey in 15
California hospitals. Quality and Safety in Health Care, 12, 112–118.
Sinkkonen, S. (1998). Ydinturvallisuusvalvonta Imatran Voima Oy:n ja
Teollisuuden Voima Oy:n edustajien silmin. Tarkastelu menettelytapojen
oikeudenmukaisuuden ja rooliodotusten näkökulmasta. Pro gradu -työ. Helsinki:
Helsingin yliopisto [in Finnish].
Snook, S. A. (2000). Friendly fire. The accidental shootdown of U.S. Black
Hawks over Northern Iraq. New Jersey: Princeton University Press.
Sorensen, J. N. (2002). Safety culture: A survey of the State-of-the-Art.
Reliability Engineering and System Safety 76, 189–204.
Spitzer, C. (1996). Review of probabilistic safety assessments: insights and
recommendations regarding further developments. Reliability Engineering &
System Safety 52, 153–163.
Starbuck, W. H. & Milliken, F. J. (1988). Challenger: Fine-tuning the odds until
something breaks. Journal of Management Studies 25, 319–340.
Stensaker, I., Meyer, C. B., Falkenberg, J. & Haueng, A. C. (2002). Excessive
change: coping mechanisms and consequences. Organizational Dynamics 31,
296–312.
STUK. (2006). Guide YVL 1.1. Regulatory control of safety at nuclear facilities.
Radiation and nuclear safety authority publications. Vantaa: Dark Oy.
Taylor, D. H. (1981). The hermeneutics of accidents and safety. Ergonomics 24,
487–495.
Trist, E. L. & Bamforth, K. W. (1951). Some social and psychological
consequences of the longwall method of coal-getting. Human Relations 4, 3–38.
Turner, B. (1978). Man-made disasters. London: Wykeham.
Turner, B. & Pidgeon, N. (1997). Man-made disasters. Second edition. Oxford:
Butterworth-Heinemann.
Vaughan, D. (1996). The Challenger Launch Decision. Chicago: University of
Chicago Press.
Vicente, K. (1999). Cognitive work analysis. Toward safe, productive, and
healthy computer-based work. London: Lawrence Erlbaum.
Vicente, K. (2004). The Human Factor. Revolutionizing the way people live
with technology. New York: Routledge.
Vidal-Gomel, C. & Samurcay, R. (2002). Qualitative analyses of accidents and
incidents to identify competencies. The electrical systems maintenance case.
Safety Science 40, 479–500.
Visser, J. P. (1998). Developments in HSE management in oil and gas
exploration and production. In: Hale, A. R. & Baram, M. (Eds.), Safety
Management. The Challenge of Change. Oxford: Pergamon.
Weick, K. E. (1987). Organizational culture as a source of high reliability.
California Management Review 29, 112–127.
Weick, K. E. (1993). The vulnerable system: An analysis of the Tenerife air
disaster. In: Roberts, K. H. (Ed.), New challenges to understanding
organizations. New York: Macmillan.
Weick, K. E. (1995). Sensemaking in Organizations. Thousand Oaks: Sage.
Weick, K. E. (1998). Foresights of failure: an appreciation of Barry Turner.
Journal of Contingencies and Crisis Management 6, 72–75.
Weick, K. E. & Sutcliffe, K. M. (2001). Managing the Unexpected. Assuring
High Performance in an Age of Complexity. San Francisco: Jossey-Bass.
Wilbert, B. (2004). Characteristics of learning organisations. Presentation at
Learnsafe Final Seminar. 28th–29th April 2004, VTT, Espoo, Finland.
Williamson, A. M., Feyer, A.-M., Cairns, D. & Biancotti, D. (1997). The
development of a measure of safety climate: The role of safety perceptions and
attitudes. Safety Science 25, 15–27.
Woods, D. & Dekker, S. (2000). Anticipating the effects of technological
change: a new era of dynamics for human factors. Theoretical Issues in
Ergonomics Science 1, 272–282.
Wright, C. (1994). A fallible safety system: Institutionalised irrationality in the
offshore oil and gas industry. The Sociological Review 38, 79–103.
Wright, M. S. (1998). Management of health and safety aspects of major
organisational change. In: Hale, A. R. & Baram, M. (Eds.), Safety Management.
The Challenge of Change. Oxford: Pergamon.
Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and
Power. USA: Basic Books.
Appendix A: Challenger space shuttle
(Vaughan 1996, Feldman 2004, Report on the Presidential Commission on the
Space Shuttle Challenger Accident 1986, www.nasa.gov)
Description of the event
In 1986, the NASA space shuttle Challenger exploded 73 seconds after launch. All
seven crew members were killed. The cause of the accident was found to be a
leaking O-ring seal, which failed due to the excessively cold temperature. The
shuttle had several O-rings, made of a rubber compound, which were used to
seal the Solid Rocket Booster field joints (see Figure A1). The weather on
launch day was exceptionally cold (36 °F), 15 degrees lower than that measured
for the next coldest previous launch, and the durability of O-rings had not been
tested at such temperatures. Post-accident investigations found that the resiliency
of O-rings was directly related to the temperature. The colder the ring, the
slower it returns to its original shape after compression.
Figure A1. Solid Rocket Booster of Space Shuttle Challenger (www.nasa.gov).
Hot gas leaked through the O-ring seals into the right Solid Rocket Booster
causing the shuttle to explode. The official report describes the beginning of the
chain of events in the following way: “Just after liftoff at .678 seconds into the
flight, photographic data show a strong puff of gray smoke was spurting from
the vicinity of the aft field joint on the right Solid Rocket Booster ... increasingly
blacker smoke were recorded between .836 and 2.500 seconds ... The black color
and dense composition of the smoke puffs suggest that the grease, joint
insulation and rubber O-rings in the joint seal were being burned and eroded by
the hot propellant gases.” At 64 seconds into the flight, flames from the right
Solid Rocket Booster ruptured the fuel tank and resulted in an explosion 73
seconds after launch. (Report on the Presidential Commission on the Space
Shuttle Challenger Accident 1986, www.nasa.gov.)
Figure A2. Space Shuttle Challenger with Solid Rocket Booster and external fuel
tank (www.nasa.gov).
Background
Post-accident investigations found that the O-rings had caused problems for a long
time. The first erosion damage (0.053) was detected in Challenger's O-ring in
1981. However, no clear reason could be determined. The worst possible
erosion (0.090) was calculated at this point, and tests were carried out to
determine how large an erosion the primary O-ring could tolerate. Tests put this
value at 0.095. The safety margin was set at 0.090. Feldman (2004, p. 700)
emphasises that engineers were not sure why erosion had been 0.053 the first
time. They only stated this to be the case based on measurements. The safety
margin was a kind of compromise achieved in the crossfire of different
demands and groups: engineers, managers, high-level NASA officials, political
decision-makers and ‘stubborn technology’, which had already been developed
and which could not be significantly modified within the given time limit.
(Feldman 2004, p. 700.) NASA seems to have introduced the safety margin
concept so that the demands of different parties could be discussed using shared
terminology. This (seemingly) did away with conflicts in demands since the
parties could now use a neutral (objective) quantitative concept.
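The logic of such a quantified acceptance criterion can be illustrated with a
minimal sketch. It assumes nothing beyond the threshold values quoted above; the
names and the decision rule itself are hypothetical simplifications, not NASA's
actual procedure.

# Illustrative sketch only. The 0.090 and 0.095 figures come from the text above;
# the names and the decision rule are a hypothetical simplification of how a
# numerical safety margin turns an engineering judgement into a yes/no criterion.

TESTED_TOLERANCE = 0.095   # deepest erosion the primary O-ring tolerated in tests
SAFETY_MARGIN = 0.090      # calculated worst-case erosion, adopted as the acceptance limit

def erosion_is_acceptable(measured_erosion):
    """Return True if the measured erosion stays within the agreed safety margin."""
    return measured_erosion <= SAFETY_MARGIN

print(erosion_is_acceptable(0.053))   # True: the first observed erosion fell inside the margin
print(erosion_is_acceptable(0.171))   # False: the spring 1985 burn-through clearly exceeded it

The organizational point is not the arithmetic but the effect described above: once
such a criterion exists, the judgement of whether an anomaly is acceptable can
quietly migrate from people to the number.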
In 1983, heat was found to reach the primary O-rings in both nozzle joints. Since
no erosion was detected, engineers decided that the problem was within the
experience base, that is, it was not a new threat to safety. By this time, 14 flights
had been made, 3 of which had exhibited problems with O-rings. Neither the
safety margin nor the experience base could explain the problem or the behaviour
of the shuttle in operation. In other words, the concepts were of no use for
predicting operations.
The parties also did not use experience accumulated from other shuttle
programmes or airplane design. The safety margin and experience base offered
NASA measurable concepts used to quantify moral judgement. (Feldman 2004,
p. 701.) One could claim that the responsibility for safety-related decisions at
NASA was transferred to quantifiable abstract concepts instead of people taking
personal responsibility (see Chapter 4.8).
New issues related to O-rings were detected in the following years. In 1984 the
primary seal was endangered for the first time when soot was blown past the
primary O-ring to the nozzle joint. Erosion was also detected in two primary
O-rings. In 1985, lubricating oils burned in both the primary and secondary O-rings.
This was the first time that heat reached a secondary O-ring. However, not even
this event changed the plans. Based on their experiments, NASA researchers
determined erosion to be a self-limiting phenomenon, which would thus not
endanger shuttle safety. The new incidents did nothing but strengthen this
‘belief’. In addition, both incidents and the erosion in the primary and secondary
O-rings were subsumed under the experience base and the safety margin. As the engineers put it:
“the condition is not desirable but is acceptable” (Vaughan 1996, p. 156).
According to Feldman, it was still unknown when and where erosion took place,
although previous investigations had already shown that gas eroded the O-ring
through the putty. In Feldman's view, interpreting the phenomenon as self-limiting
was not plausible in view of the new evidence. Damage to the secondary O-ring
should have raised doubts about the redundancy of the rings. This, however, was
not the case. (Feldman 2004, p. 706.)
The hypothesis that erosion was caused by cold weather was presented for the
first time during the 1985 flight. However, since there was no quantitative
support for this hypothesis, it received hardly any attention in investigations. This
was despite it being a 'known fact' that the rubber used for the O-rings hardens in
cold weather, reducing the effectiveness of the seal. According to the accident
report, four out of 21 flights had shown damage to the O-rings when the
temperature on launch day had been 61 °F or higher. However, all flights in lower
temperatures showed heat damage to one or more O-rings. (Report on the
Presidential Commission on the Space Shuttle Challenger Accident 1986.)
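In hindsight, the quantitative support that was missing could have been assembled
from the very counts quoted above, provided that all launches, damaged and
undamaged alike, were compared. The following sketch redoes that comparison;
only the figure of four damaged launches out of 21 at 61 °F or above and the
statement that every colder launch showed damage come from the text, while the
number of cold-weather launches is an assumed placeholder.

# Illustrative recomputation of the launch statistics quoted above. Only the
# "4 damaged out of 21 launches at 61 °F or above" figure and the statement that
# all colder launches showed damage come from the text; N_COLD is a placeholder
# because the number of cold-weather launches is not given here.

def damage_rate(damaged, total):
    """Share of launches in a temperature band that showed O-ring heat damage."""
    return damaged / total

N_COLD = 4  # assumed value for illustration only

warm_rate = damage_rate(damaged=4, total=21)           # launches at 61 °F or above
cold_rate = damage_rate(damaged=N_COLD, total=N_COLD)  # "all flights in lower temperatures"

print(f"Damage rate at or above 61 °F: {warm_rate:.0%}")  # about 19 %
print(f"Damage rate below 61 °F: {cold_rate:.0%}")        # 100 %

Framed this way, the two temperature bands separate sharply; as noted above,
however, the cold-weather hypothesis received hardly any attention precisely
because this kind of quantitative backing was never produced.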
Later, during a flight in spring 1985, the primary nozzle joint O-ring burned and
the secondary O-ring was seriously damaged. The primary O-ring had not sealed
as expected. For the first time, erosion was also detected on the secondary
O-ring. The primary erosion was 0.171, clearly exceeding the safety margin
(0.090). According to Vaughan (1996), erosion and O-ring redundancy became
related technical issues after this flight. Investigations of the incident determined
that the primary O-ring could have eroded this badly only if the incident had
taken place within the first milliseconds of ignition. This, in turn, was only
possible if the primary seal had been in the wrong position from the start.
According to investigators, had the joint itself leaked, all of the six joints should
have leaked identically. Investigators attributed the problem to inspections
overlooking the incorrectly installed seal. The pressure used for seal checks was
increased based on the report.
Feldman (2004, p. 711) points out that after the events of spring 1985 the safety
margin changed to mean the durability of the secondary O-ring. Similarly, the
experience base referred to events prior to spring 1985 and did not include the
primary ring burn-through experienced in the previous flight. The finding was
that an increase in the check pressure would cause erosion in the primary O-ring
but should eliminate all erosion in the secondary O-ring. This
convinced all parties that both redundancy and safety margins were in order.
Feldman emphasises that a ‘devaluing of memory’ culture prevailed at NASA.
The organization lacked the capacity for individual and organizational memory
(Feldman 2004, p. 714).
The weather on Challenger's final launch day was exceptionally cold. Citing the
cold weather, engineers recommended that the launch be postponed to the next
day, but the management team, which had no technical experience, decided to go
through with the launch. The launch had already been postponed due to poor
weather and a technical fault. In addition, NASA was behind the planned launch
schedule (12 flights in 1986). Engineers at Morton Thiokol, the subcontracting
manufacturer of the Solid Rocket Booster and the O-rings, also had their doubts
about the cold tolerance of the rings. They expressed their doubts in a
teleconference held the evening before the launch.18 Their doubts were not,
however, given enough attention: "I was asked to quantify my concerns, and I said I couldn't. I
couldn’t quantify it.”
Causes and consequences of the accident
According to Feldman (2004), the pressure to increase the number of flights meant
that only extremely well documented evidence could have led to a flight being
cancelled. NASA had developed a culture in which engineers had to prove that a
particular flight was not safe instead of proving before each flight that it could
be carried out safely. Under pressure from the flight schedule, engineers were
forced to look for information that would enable them to reach their objectives
instead of finding information about the phenomenon itself. (Feldman 2004, p.
699.) Feldman says that NASA overemphasised quantitative measurement data
to the extent that all other information was either overlooked or misinterpreted.
It had become so demanding to produce measurement data that decisions were
difficult to question in the short or medium-to-long term. In Feldman's view,
NASA was under the misconception that 'objectivity is always absolute' and,
secondly, NASA failed to maintain the 'objectivity is absolute' culture it had created.

18 Weick (1987) suggested that the subcontractor's engineers could have prevented the launch had
they been physically present at the meeting. The engineers' concern and emotional state could not
be conveyed sufficiently well in the teleconference. Consequently, NASA decision-makers did not
take the doubts seriously. Face-to-face meetings convey much more information than words alone:
gestures, worried expressions, an insecure appearance. These are not conveyed as easily over
electronic media. Concern may also be more 'contagious' when people are in the same facilities.
Vaughan (1996, pp. 409–410) summarises: “The explanation of Challenger
launch is a story of how people who worked together developed patterns that
blinded them to the consequences of their actions. It is not only about the
development of norms but also about the incremental expansion of normative
boundaries: how small changes – new behaviors that were slight deviations from
the normal course of events – gradually became the norm, providing a basis for
accepting additional deviance. No rules were violated; there was no intent to do
harm. Yet harm was done.” The organization gradually drifted to a state in
which it no longer operated safely. Earlier danger signals had become part of
‘normal’ work and they were no longer noted.
After the accident, the commission proposed, for example, the establishment of
an independent quality control unit. It was also worried about the post-1970s
change in the way NASA filled its managerial posts, which meant that fewer
astronauts now worked in management. The commission recommended
using the practical experience of astronauts by employing them in managerial
positions. Another key issue was a redefinition of the programme manager’s
responsibilities. At the time, many matters bypassed the programme manager,
and managers of subprojects felt they were more accountable to their own
management than to the manager of the whole programme. The commission was
also of the opinion that communication at the Marshall Space Flight Center
needed improvement. The commission worried about the isolation of
management: “NASA should take energetic steps to remove this tendency [to
management isolation] ... whether by changes in personnel, organization,
indoctrination or all three.”
Appendix B: Piper Alpha oil rig
(Paté-Cornell 1993, Wright 1994, Reason & Hobbs 2003, www.ukooa.co.uk)
A serious fire took place on the Piper Alpha oil and gas production platform
operated by Occidental Petroleum in 1988. The catastrophe killed 165 rig
employees (of a total of 226), as well as two crew members of a rescue vessel. The
accident was caused by a process failure. One of the primary condensate
injection pumps (pump B) stopped working, and the control room decided to
start up pump A. However, unknown to the shift workers, pump A was
inoperable due to overhaul and maintenance. The start-up led to a flange leak
and the escape of condensate from the pump. The condensate was ignited by a
spark or a hot surface. This resulted in several explosions and cut off an onshore
pipeline, which led to a massive fire. The fire ruptured a riser to another platform
(Tartan), starting an extremely intense fire under the deck of Piper Alpha.
Flames were visible at a distance of over one hundred kilometres.
The platform layout enabled the fire to spread extremely fast from production
modules B and C to critical facilities. The fire destroyed the control room and
the radio transmitter room at the outset. No evacuation order was ever given, but
even if it had been, the location of the accommodation areas, along with defective
rescue equipment, would most likely have rendered it useless. Many of the evacuation routes
were blocked and the life rafts – all of them located in the same place – were
unreachable from the beginning of the disaster. The platform's fire-extinguishing
system was of no use, since the diesel pumps could not be reached and were
apparently damaged at the outset of the fire. The Tharos firefighting vessel, which
happened to be on site, waited for an extinguishing command from the platform
commander, who, however, had been killed at the start of the fire. The power
failure that occurred early on had immediately made all electrical
communication equipment unavailable. (Paté-Cornell 1993.)
Both pumps had been serviced during the day shift preceding the accident.
According to Paté-Cornell (1993, p. 220), the pumps had most likely undergone
only minimum maintenance procedures, which involved the replacement of
obviously damaged components. Other aspects of pump operations had probably
not been inspected. A pressure safety valve (PSV 504) had been removed from
pump A for certification measures carried out by the contractor every 18
months. It was replaced by a blind flange for the duration of the certification
inspection. The valve could not be fitted back in place during the same shift
because no crane was available at the time. Therefore, the shift decided to
suspend work. The contractor mentioned this to the head of maintenance in the
day shift and also told him about the blind flange. The pump could be left
inoperable since the redundant pump B was in operation. Information about the
removal of PSV 504 did not, however, reach the night shift. The day shift head
of maintenance did not mention it to his peer in the night shift and did not note it
in the maintenance journal, as the rules required. When pump B unexpectedly
failed, the night shift decided to shut it down and started pump A, with disastrous
consequences. The blind flange proved to leak, emitting condensate into the air.
According to Paté-Cornell, maintenance workers did not inspect the fitting, and
the defective flange went unnoticed. In Paté-Cornell’s words: “this maintenance
failure was rooted in a history of short cuts, inexperience, and bypassed
procedures” (1993, p. 226).
In Paté-Cornell's view (1993), one of the main reasons why information about the
state of the pumps was lost between shifts was the work permit procedure used on
the platform. A single work permit could be used to perform several jobs – both
officially and unofficially. Reason and Hobbs (2003, pp. 86–89) say that the
main deficiencies leading to the accident were related to shift turnover and work
permit procedures. Employees were not in the habit of discussing the state of
work orders during shift turnover, although company rules required it.
Suspended work permits were kept in the safety office instead of the
control room, the reason being a lack of space in the control room. Operators
rarely enquired about suspended work permits before the start of their shift.
Paté-Cornell (1993, p. 231) explains that the night shift operator could have
been aware of suspended work orders or equipment removed for maintenance
purposes only if he had personally been involved in the suspension of work.
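The mechanism of the information loss can be made concrete with a deliberately
simplified sketch. If restarting a pump had required an explicit check of open or
suspended permits against that piece of equipment, the suspended PSV 504 work
would have blocked the start-up. The names, fields and the single example record
below are hypothetical abstractions of the permit-to-work idea, not the procedure
or records actually used on Piper Alpha.

# Hypothetical abstraction of a permit-to-work check at shift handover.
# The data structure and the single example record are illustrative only;
# they do not describe the records actually kept on Piper Alpha.
from dataclasses import dataclass

@dataclass
class WorkPermit:
    permit_id: str
    equipment: str
    status: str   # e.g. "active", "suspended", "closed"
    note: str

def permits_blocking_start(equipment, permits):
    """Return any permits that are not closed and therefore should block a start-up."""
    return [p for p in permits if p.equipment == equipment and p.status != "closed"]

permits = [
    WorkPermit("PTW-A", "condensate pump A", "suspended",
               "PSV 504 removed for recertification; blind flange fitted"),
]

blockers = permits_blocking_start("condensate pump A", permits)
if blockers:
    print("Do not start pump A:", "; ".join(p.note for p in blockers))
else:
    print("No open permits on pump A; start-up may proceed.")

The practices described above (a single permit covering several jobs, suspended
permits stored away from the control room, handovers that did not cover the state
of work orders) removed exactly the cross-check that this sketch makes explicit.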
Regulatory oversight of British platforms was very superficial at the time. The
corresponding authority in Norway, for example, carried out a considerably
stricter inspection and control programme. The British Government had a
favourable attitude towards North Sea oil drilling (for financial reasons, among
others) and had adopted a policy of interfering as little as possible with operations.
A public enquiry, undertaken by Lord Cullen, was commissioned in July 1988
to establish the circumstances of the accident. His report led to extensive
restructuring of the UK offshore safety legislation, with the primary onus of
responsibility for offshore safety being shifted towards the operating companies
and away from the regulatory authorities. Lord Cullen also recommended the
introduction of the Safety Case concept for the North Sea to align offshore
safety management with existing onshore legislation. The measures taken have
improved occupational safety on oil platforms considerably.
The HSE (Health and Safety Executive) developed and implemented Lord
Cullen's key recommendation: regulations requiring the operator or owner of
every installation to submit to the HSE, for acceptance, a Safety Case
demonstrating that the company had adequate safety management systems (see
Chapter 3.5), had identified risks and reduced them to as low as reasonably
practicable, had put management controls in place, had provided for a temporary
safe refuge, and had made provisions for safe evacuation and rescue.
Series title, number and report code of publication: VTT Publications 633 (VTT-PUBS-633)

Author(s): Oedewald, Pia & Reiman, Teemu

Title: Special characteristics of safety critical organizations. Work psychological perspective

Abstract: This book deals with organizations that operate in high hazard industries, such as the
nuclear power, aviation, oil and chemical industry organizations. The society puts a great
strain on these organizations to rigorously manage the risks inherent in the technology
they use and the products they produce. In this book, an organizational psychology view
is taken to analyse what are the typical challenges of daily work in these environments.
The analysis is based on a literature review about human and organizational factors in
safety critical industries, and on the interviews of Finnish safety experts and safety
managers from four different companies. In addition to this, personnel interviews
conducted in the Finnish nuclear power plants are utilised. The authors come up with
eight themes that seem to be common organizational challenges across the industries.
These include, for example, how the personnel understands the risks and what the right
level of rules and procedures for guiding work activities is.
The primary aim of this book is to contribute to the nuclear safety research and safety
management discussion. However, the book is equally suitable for risk management,
organizational development and human resources management specialists in different
industries. The purpose is to encourage readers to consider how the human and
organizational factors are seen in the field they work in.

ISBN: 978-951-38-7005-8 (soft back ed.); 978-951-38-7006-5 (URL: http://www.vtt.fi/publications/index.jsp)

Series title and ISSN: VTT Publications; 1235-0621 (soft back ed.); 1455-0849 (URL: http://www.vtt.fi/publications/index.jsp)

Project number: 4357

Date: March 2007

Language: English

Pages: 114 p. + app. 9 p.

Name of project: Organizational culture and management of change (CULMA)

Commissioned by: State Nuclear Waste Management Fund, VTT Technical Research Centre of Finland

Keywords: operational safety, safety management, human safety, environmental safety,
safety-critical organizations, decisionmaking, nuclear power plants, aviation industry,
chemical industry, oil refining industry

Publisher: VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT, Finland;
phone internat. +358 20 722 4404; fax +358 20 722 4374
Special characteristics of safety critical organizations analyses organizations
that operate in high hazard industries, such as the nuclear power, aviation,
oil or chemical industry. The society puts a great strain on these organizations
to rigorously manage the risks inherent in the technology they use and the
products they produce. Most of the organizations manage well in these
extraordinarily demanding domains. However, catastrophic accidents, such as
the Piper Alpha offshore platform fire or the Chernobyl nuclear accident,
remind us of the possibility of serious organizational failure.
This book describes the challenges and tensions of managing and working
in safety critical organizations through an organizational psychology
window. The book offers a critical view of the literature about organizational
factors and safety and illustrates the practical challenges of safety critical
organizations with revealing interview excerpts from people working in these
organizations. The authors suggest eight themes that arise as common
organizational challenges across the industries, e.g. how the personnel
understands the risks and what the role of rules and procedures in guiding
the work activities is.
The book is written for managers and practitioners, as well as for students. It
is equally suitable for risk and safety management, organizational
development and human resources management specialists in different
industries. The purpose is to encourage the readers to consider how the human
and organizational factors are approached in the field they work in.