The British Journal of Sociology 2010 Volume 61 Issue 1
Trust and technology: the social foundations of
aviation regulation
John Downer
Abstract
This paper looks at the dilemmas posed by ‘expertise’ in high-technology regulation by examining the US Federal Aviation Administration’s (FAA) ‘type-certification’ process, through which they evaluate new designs of civil aircraft. It
observes that the FAA delegate a large amount of this work to the manufacturers
themselves, and discusses why they do this by invoking arguments from the sociology of science and technology. It suggests that – contrary to popular portrayal –
regulators of high technologies face an inevitable epistemic barrier when making
technological assessments, which forces them to delegate technical questions to
people with more tacit knowledge, and hence to ‘regulate’ at a distance by evaluating ‘trust’ rather than ‘technology’. It then unravels some of the implications of
this and its relation to our theories of regulation and ‘regulatory capture’.
Keywords: redundancy; reliability; aviation regulation; technology assessment;
social epistemology
Introduction1
Casual attendees of the Flight Safety Foundation’s 1990 annual International
Air Safety Seminar might have been surprised to hear a senior Federal Aviation Administration (FAA) official earnestly, but perhaps injudiciously, declare
that: ‘The FAA does not and cannot serve as a guarantor of aviation safety’;
and that: ‘The responsibility for safe design, operation and maintenance rests
primarily and ultimately with each manufacturer and each airline.’2 After all,
why have a technology regulator if it defers responsibility for safety to the
manufacturers? What does regulation mean in such circumstances?
The FAA is the USA’s aviation regulator. In theory, it represents the US
citizenry: protecting the people’s interests by overseeing, on their behalf, a
complex and inscrutable technology they routinely trust with their lives.
Together with the US Nuclear Regulatory Commission, they are perhaps the
most prominent regulators of complex technologies anywhere in the world:
framing, promulgating and implementing an extensive network of specifications and regulations governing the design, use, and manufacture of civil
aircraft in the world’s most significant aviation market.3
An important element of this work is the FAA’s role in assessing and
verifying the reliability and risk of new designs of large passenger aircraft. This
is known as the ‘type-certification’ process, and through it the FAA confirm
that safety-critical aviation systems meet the standards outlined in Federal
Aviation Regulations Part 25 (FAR-25) and Part 33 (FAR-33): the ‘master
documents’ governing, respectively, the regulation of large civil airframes4 and
engines (see Lloyd and Tye 1982; National Transportation Safety Board
(NTSB) 2006). If this work does not amount to the FAA acting as a ‘guarantor
of aviation safety’ then it is worth asking why.
All modern societies manage their relationship with technology through
expert mediators, who are usually state regulatory bodies such as the FAA.
These regulators have become a twenty-first century clergy, standing between
the public and the esoteric knowledge with which they must contend, and both
the public and policy-makers are prone to accept their conclusions at face
value with minimal reflection or circumspection. Case studies of technological
practice repeatedly suggest that such obeisance is misguided, however, arguing
that the seemingly quantitative and objective surety that regulators frequently
project is an unrealizable and misleading goal (See e.g. Downer 2007;
MacKenzie 1996; Pinch 1993).5
By tracing the practical demands of type-certification and highlighting the
limitations they impose, therefore, this paper will explore the FAA’s complex
relationship with the technology they regulate. It will link ideas about ‘regulatory capture’ to insights from the sociology of scientific knowledge (SSK), to
highlight the complex negotiations required by modern technological governance. More specifically, it will argue that high-technology regulators contend
with an intractable technical problem by turning it into a more tractable social
problem, such that, despite appearances to the contrary, the FAA quietly assess
the people who build aeroplanes in lieu of assessing actual aeroplanes.
Epistemic exigencies
As part of the type-certification process, the FAA must gauge a new engine’s
ability to absorb errant birds.6 As airworthiness tests go, those relating to birds
being sucked into engines are relatively straightforward: the testers emulate a
‘bird-strike’ by revving an engine to a high speed and launching birds into it
from a cannon (Downer 2007). The procedure is only a minor element of total
engine certification – one test among many – but a brief digression into its
minutiae offers a broad and important insight into the Byzantine complexity of
regulating high technologies.
Bird-strike tests are deceptively complex, despite their straightforward
appearance, sitting atop an intricate pyramid of technical and epistemological
assumptions about their representativeness, relevance and authenticity
(Downer 2007). Because the FAA cannot destroy an unlimited number of
expensive engines, for instance, they try to ensure that each test counts by
recreating the worst possible bird-strikes. To this end, the regulators stipulate
a variety of carefully chosen test parameters, such as the mass of the birds, the
number of birds, and the speed at which they strike the engine.7 These parameters reflect complex and inevitably subjective judgments, however.
Any of the many requirements offers a glimpse of this vast fractal complexity if probed in enough depth. One condition, for instance, is that the birds hit
the engine at its most vulnerable point, which means agreeing what part of the
engine is most vulnerable to birds, and how. This vulnerable point is known as
the Critical Impact Parameter, or CIP. For most modern turbofan engines, the
CIP is the stress imparted to the leading edge of the fan blade, but other
potential CIPs include the stress imparted to engine parts, such as the blade
root, and different variables, such as ‘strain,’ ‘deflection’, and ‘twist’. The FAA
offers some ‘example considerations for determining the CIP’ in a 2001 advisory circular:
For turbofan first stage fan blades, increasing the bird velocity or bird mass
will alter the slice mass, and could shift the CIP from leading edge stress to
some other highly stressed feature of the blade (e.g. blade root). For fan
blades with part span shrouds, it may be blade deflection that produces
shroud shingling and either thrust loss or a blade fracture that could be
limiting. For unshrouded wide chord fan blades, it may be the trailing edge tip
of the blade which experiences damage due to an impact induced shock wave
traveling through the blade, or the twist of the blade in dovetail that allows it
to impact the trailing blade resulting in blade damage (FAA 2001: 4).
Without troubling to understand all the details here, it suffices to recognize
that calculating an engine’s CIP is an ambiguous undertaking. The first sentence of the FAA’s ‘considerations’ alone portends a wealth of reckoning:
. . . increasing the bird velocity or bird mass will alter the slice mass, and
could shift the CIP from leading edge stress to some other highly stressed
feature of the blade . . .
One implication of this observation is that there is no straightforward relationship between the severity of the test and the speed at which the aircraft is moving, which, by itself, enormously complicates the question of what speed the birds should strike the engine. The stipulated 200-knot test speed is contentious, with some critics
arguing that birds are often struck when the aircraft is going faster. The
maximum allowed airspeed below 10,000 feet is 250 knots, and critics suggest that this should be the test speed if it is to represent the most challenging possible circumstances. The FAA contend that the test becomes less rather than
more severe at speeds greater than 200 knots, because the 200-knot stipulation is more likely to ‘result in the highest bird slice mass absorbed by the
blade at the worst impact angle, and therefore results in the highest blade
stresses at the blade’s critical location’ (FAA 1998). This is also contentious.
The Airline Pilots Association (ALPA) are doubtful of the slice-mass argument, and question whether it is a proven assumption. They also observe that the speed at which civil aircraft travel at low altitudes is rising beyond the 250-knot limit (ALPA 1999). The optimum bird speed (as with mass, density, etc.)
varies according to the fracture mechanics of the fan blades themselves,
which, again, are contested and far from straightforward.8
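The slice-mass reasoning can be made concrete with a first-order approximation commonly used in bird-ingestion analyses (the formula below is an illustrative sketch rather than the FAA’s own calculation, and its symbols are introduced here only for exposition). Treating the bird as a uniform cylinder, the material carved off by a single fan blade is roughly the slug of bird that passes the fan face during one blade-passing interval:

$$ m_{\text{slice}} \;\approx\; m_{\text{bird}}\,\frac{\ell_{\text{slice}}}{L_{\text{bird}}}, \qquad \ell_{\text{slice}} \;=\; \frac{V_{\text{bird}}}{n_{\text{blades}}\,N_{\text{fan}}}, $$

where $m_{\text{bird}}$ and $L_{\text{bird}}$ are the bird’s mass and length, $V_{\text{bird}}$ its velocity relative to the engine, $n_{\text{blades}}$ the number of fan blades and $N_{\text{fan}}$ the fan speed in revolutions per second. On this simple model a faster bird yields a thicker slice, and hence a heavier load on each blade, but the blade’s tangential speed, the impact angle and the fracture behaviour of the blade material all change with flight condition as well, which is why test severity does not rise monotonically with airspeed.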
Reconciling these many variables means integrating contested and uncertain research from many disciplines. Systems engineers, materials scientists, statisticians and ornithologists must all collaborate to form judgments based
on compromises, best guesses and interpretations of limited evidence (Downer
2009). There are no objective or definitive answers.
The ambiguity surrounding the CIP is far from unusual. Proximity to technological testing invariably reveals its ambiguous, contentious and skilful
nature. As Wynne (1988: 153) puts it:
In all good ethnographic research [of] normally operating technological
systems, one finds the same situation. [ . . . ] Beneath a public image of
rule-following behavior [ . . . ] experts are operating with far greater levels of
ambiguity, needing to make uncertain judgments in less than clearly structured situations.
If we remember that the CIP is just one of many critical parameters of an
engine bird-strike test, which, in turn, is just one of many tests to which the
FAA subject an engine, and that engines are just one of many systems that
constitute an aeroplane, then we can begin to appreciate the vast scale of
‘type-certifying’ a new aircraft and the epistemic challenge of auditing
complex technologies. (A challenge that is only rising as large civil aircraft
become more sophisticated and aeronautical engineering splits into more
specialties, such as software engineering.)9
The complexity of modern aircraft has long since outgrown the FAA’s budget and manpower, and yet the FAA would be ill-placed to make informed judgments even with infinite resources; they
simply lack the ‘tacit knowledge’ to make the requisite judgments about the
technologies they certify.
Sociologists of technology have long argued that the orderly public image
of technology belies the ‘messy reality’ of real engineering practice. They
claim there is more to understanding technology than can be captured in
standardized tests, and something about experts that eludes expert-systems
(e.g. Collins and Pinch 1998; MacKenzie 1996). They point to epistemological
dilemmas, such as the ‘problem of relevance’ or the ‘experimenter’s regress’
(Collins 1985; Pinch 1993), to argue that technological disputes cannot be
definitively resolved and that technological practice cannot be governed by
objective ‘rules’ because ‘compliance’ is inevitably a matter of interpretation
and judgment (Wynne 1988).
This insight leads sociologists to emphasize the role of ‘tacit knowledge’ in
technical understanding. Tacit knowledge is a term used widely by sociologists
of science and technology. Broadly speaking, it refers to the information, skill
and experience that are vital to a task but difficult to codify, and which consequently get marginalized, ignored or obscured by formal accounts. Polanyi
(1958), who coined the term, observed that ‘we know more than we can tell’,
and sociologists of knowledge have painstakingly explored this observation
and its significance in different contexts (e.g. Collins 1982; MacKenzie and
Spinardi 1996).10
Insights about epistemological uncertainties and tacit knowledge do not
imply that formal rules have no place or purpose, but they suggest that technology regulation demands more than assiduous ‘box tickers’, and that regulators require the familiarity and experience to negotiate complex
indeterminacies: they cannot be mere accountants.11 As Woods and Hollnagel (2006: 5) put it: ‘Safety is not a commodity that can be tabulated’.
With respect to aviation this is reflected, for instance, in a 1980 US National
Research Council (NRC) report on the FAA, which bluntly concedes that:
‘In a technological environment, the determination of design and engineering adequacy and product safety cannot be legislated in minute detail’ (NRC
1980: 23). Indeed, the claim that the FAA cannot guarantee aircraft safety is
far from unorthodox. The US General Accounting Office (GAO), the US
Department of Transportation (DoT), the Aerospace Industries Association
(AIA), and the US Office of Technology Assessment (OTA), have all voiced
similar conclusions about aircraft regulation at different times. The OTA
(1988), for example, reported that FAA personnel lacked the expertise to
make good technological judgments, while the GAO (1993: 19) similarly
found the FAA to be ‘not sufficiently familiar with [particular systems] to
provide meaningful inputs to the testing requirements or to verify compliance with regulatory standards’.
Designee-dependency
If assessments depend on judgments that cannot be systematized and require
a degree of tacit knowledge that FAA regulators lack, then how does the FAA
perform its regulatory mandate to type-certify new aircraft designs?
The answer is straightforward and surprising: they delegate most of it to the
manufacturers. Needing to make complex judgments in an environment where
rules are ‘interpretively flexible’ (Pinch and Bijker 1984), and lacking the tacit
expertise to do so, the FAA depends heavily on a cadre of insiders – with
their greater access, knowledge, and experience – to help it assess new systems.
As one engineer put it:
[T]here is not a way for a third party organization to assess our understanding of, oh, fly-by-wire systems, FADECs, or damage tolerant composite
design. [ . . . ] [T]he very best method we have of discriminating between
those who can and those who can’t, but talk a good game, are their peers.12
This relationship is formalized in what the FAA calls Designated Engineering
Representatives, or DERs. The FAA is authorized to deputize engineers and
let them act as surrogates for the regulator: overseeing tests, calculations and
designs to ensure that aircraft are compliant with aviation regulations. DERs
are employees of the manufacturers, usually with 15 to 20 years’ experience,
who hold key technical positions and work on the aircraft they assess. They are
cheap for the FAA because they are primarily paid by the manufacturers, so
the regulator can use them in large numbers to better leverage its resources.
More significantly, they give the FAA access to a reservoir of tacit ‘hands-on’
knowledge, based on a level of involvement not practical for FAA personnel
(NRC 1980: 7).13
Perrow (1984: 267) observes that it is common for organizations producing
high risk technologies to play an active role in their own regulation, ‘if only
because they alone possess sufficient technical knowledge to do so’. The FAA
and its predecessors have relied on designees, in some form or another, since
the practice was first authorized by Congress in the 1920s.14 By 2004, there
were approximately 13,400 DERs performing a variety of functions: overseeing tasks such as pilot tests and medical examinations as well as airworthiness
assessments. The designees are currently grouped into 18 programmes, overseen by three FAA offices: Flight Standards, Aerospace Medicine, and Aircraft
Certification (GAO 2004: 10). The regulators choose the DERs (although
designees are usually nominated by the manufacturer), train them and oversee
their work.15
In theory, the FAA reserves key elements of the certification process exclusively for its own staff. The regulator’s publicly stated position is that designees
conduct routine functions, allowing core FAA personnel to concentrate on the
most critical safety areas such as framing the standards (GAO 2004). To this end,
the FAA sets the regulations, designs the tests, determines and reviews the
analytical criteria the tests use, and makes the final determination as to whether
regulations are satisfactorily met (NRC 1980). Or, at least, they do in principle.
In practice, even the limited role the FAA demarcates for itself has grown
untenable as aircraft have become more complex. The DER system may have
begun as a labour practicality, where the FAA designed the tests, wrote the
standards and deputized engineers to oversee routine compliance actions, but
it has grown to be much more than this. In a 1993 report, for instance, the GAO
(1993: 22) concluded the FAA was increasingly delegating tasks it traditionally
reserved for itself. As far back as 1989, an internal FAA review similarly
concluded that the regulators had been forced to delegate practically all the
certification work on Boeing’s highly advanced flight management system for
the 747-400, because their staff ‘were not sufficiently familiar with the system
to provide meaningful inputs to the testing requirements or to verify compliance with the regulatory standards’ (AIAA 1989: 49). In this instance, the
extent of delegation varied widely between branches, being highest in those responsible for the advanced computer systems (where an estimated 75 to 95 per cent of test plans were delegated) and lowest in branches that dealt with
less innovative fields such as aircraft structures. In all branches, however, it was
undeniable even 20 years ago that the FAA was relinquishing roles it had long
claimed to retain.16
Circumstantial evidence testifies to a growing role for DERs since then.
Between 1980 and 1992, for instance, the number of DERs overseen by the
FAA’s two main branches rose 330 per cent, while the number of FAA certification staff rose only 31 per cent, bringing the overall ratio of designees to
FAA staff from about 3 to 1 in March 1980, to 11 to 1 in 1992. Again, this ratio
was steeper in sections that dealt with the most complex systems – over 30 : 1
in some instances (GAO 1993: 17–19). In 1993, the GAO (1993: 17) concluded
that between 90 and 95 per cent of all regulatory activities were being delegated to DERs.
Bureaucratic visions
The idea of the independent expert, clergyman of the modern state and disinterested arbiter of objective facts that then inform the deliberations of
policy-makers, is pervasive outside of academia and lies at the heart of what is
often referred to as ‘evidence-based policy-making’.
This ideal has long been criticized by academics, however, and the aviation
industry’s intimate relationship between regulator and regulatee will be unsurprising to academics and practitioners familiar with institutions where self-regulation is well established, such as healthcare (e.g. Ham and Alberti 2002)
or finance (e.g. Georgosouli 2008). Yet, even if self-regulation is well established in many spheres, it is much less visible in what are perceived to be
‘high-technology’ industries, especially when there are obvious safety concerns, such as in aviation and nuclear power.17 In this context, therefore, the
FAA’s reliance on DERs is less intuitive than students of other regulatory
regimes might suppose.18
Machines, much more than other regulatory objects, are invariably portrayed as discrete and quantifiable by the people who govern them (Wynne
1988). Policy discourse on technology invariably favours an idealized, ‘rule-following’ model of regulation that conflates ‘safety’ with ‘regulatory compliance’. It portrays technology regulation as a mechanical, ‘proof-driven’ appraisal of the machines themselves: a process governed by formal rules and objective algorithms that promise an incontrovertible, reproducible and value-free assessment grounded in measurements rather than expert opinion. Gherardi and Nicolini (2000: 343) call this the ‘bureaucratic vision of safety’; Porter
(1995) calls it the ideal of ‘mechanical objectivity’; both terms describe a
persuasive but unrealizable ideal that seeks to replace trust in people with
trust in numbers.
As we have seen, however, this vision is an apparition and the surety it
promises is unrealistic. Where technological domains trade in ‘hard data’ and
‘solid technical conclusions’, their discourse masks the ambiguities and
social processes behind these data. Successive studies of complex systems
have highlighted deficiencies in the formal descriptions of technical work
embodied in policies, regulations, procedures, and automation (e.g. Schulman
1993; Woods and Hollnagel 2006; Wynne 1988). Wynne speaks of ‘white
boxing’ technology, in the sense that – unlike ‘black-boxing’ – regulators
purport to make the inner workings of a technology publicly visible and
accountable even whilst obscuring the messy realities of technological practice (1988: 160).
The FAA unquestionably ‘white box’ their type-certification work to some
extent: promoting (or doing little to publicly subvert) an image of aeroplanes
as definitively and impartially ‘knowable’. Their public literature rarely mentions DERs. After the 1996 crash of ValuJet 592 in the Florida Everglades, for
instance, FAA Administrator Hinson testified at a Senate enquiry that: ‘when we say
an airline is safe to fly, it is safe to fly. There is no gray area’ (quoted in
Langewiesche 1998).
The regulators gently promulgate an unrealistic vision of type-certification
(certi-fiction), in part, because a more authentic portrayal would lack rhetorical legitimacy. Promoting confidence in a new aircraft design would be a
struggle if regulatory assessments were explicitly touted as reliant on
the best judgments of the manufacturers themselves; yet this is essentially
what happens. This is to say that aviation regulation is performative as well
as functional.19 ‘Following rules may or may not be a good strategy
for seeking truth,’ writes Porter (1995: 4), ‘but it is a poor rhetorician
who dwells on the difference.’ ‘Better to speak grandly of a rigorous
method,’ he says, ‘enforced by disciplinary peers, canceling the biases of the
knower and leading ineluctably to valid conclusions.’20 With quantitative
rules and strict measurements, ‘mere judgment’ disappears, or such is the
impression.21
Conflicts of interest
If an authentic portrayal of type-certification lacks ‘rhetorical legitimacy,’
however, it is worth asking why.
Rules and numbers, as explained above, confer legitimacy because they are
thought to be impersonal and constraining, and so are thought to limit discretion when credibility or disinterestedness is suspect. If rules are not, in
fact, constrictive or impersonal, however, then the issue of credibility is not
resolved. This problem, again, is reflected in aviation. When Ralph Nader and
Wesley Smith (1994: 14) wrote an exposé of airline regulation, for instance,
they noted the designee system and lamented, incredulously, that the FAA
‘believes in the honor system for airline compliance’ (see also, Schiavo
1997).
Many academics would agree with Nader and Smith: the designee system
does seem like a conflict of interest. DERs effectively have two masters: the
manufacturer who pays them, and the FAA to whom they are supposed to
report problems. Indeed, the arrangement seems exemplary of an institutional
pathology sometimes referred to as ‘regulatory capture’. This concept was first
outlined in the 1970s by a group of lawyers and economists at the University
of Chicago.22 Essentially, it is the argument that, over time, powerful industries
come to dominate the agencies that regulate them (see e.g. Peltzman 1976;
Posner 1971, 1974, 1975; Stigler 1971). This is thought to happen for various
reasons, but often because of an information imbalance that leaves the regulators dependent on their charges (Niles 2002: 393). Academics have observed
the phenomenon in a wide range of industries, but several have singled out the
FAA as particularly subject to regulatory capture (e.g. Dana and Koniak 1999:
148; Niles 2002). In the blunt words of one FAA veteran: ‘To tell the truth, the
industry, they really own the FAA’ (quoted in Niles 2002: 384).
Academics view regulatory capture as an institutional pathology because it
is thought to allow regulated organizations to pursue their self-interest in ways
regulators might otherwise be expected to curb on the public’s behalf, or even
to allow organizations to leverage regulation to their own ends, at the public’s
cost. It is said that regulatory capture ‘puts the gamekeeper in league with the
poacher’. Wiley (1986: 713), for instance, describes regulatory capture as ‘a
method of subsidizing private interests at the expense of the public good’.
Regulation can be construed as a form of audit, and as Michael Power notes,
audits invariably presuppose that the audited party is susceptible to ‘moral
hazards’ (1997: 9) (audits, almost by definition, must be guarding against
something).
It is not entirely clear, however, that the relationship between regulator and
regulatee is inherently adversarial. ‘One of the inherent complexities of capture
theory is its requirement that identifiably “private” interests be distinguished
from “public” ones,’ writes Niles (2002: 392), ‘But how can it be determined
where the private interests of the regulated end and the broad public interests
begin?’
The regulator-regulatee relationship is especially ambiguous in the aviation
industry, where observers commonly argue that the interests of aeroplane
manufacturers are aligned with those of their regulators. Advocacy by critics
such as Nader (see also, Schiavo 1997) kindled a succession of investigations
into the DER system over the last three decades, all of which largely dismissed
the ‘conflict-of-interest’ concern. Each report differs slightly in its reasoning,
but the primary argument in every case is that, rather than there being a
conflictual relationship between regulator and regulatee, the FAA and the
manufacturer share the same interests. The National Academy of Sciences, for
instance, found succour in ‘the self-interest of the manufacturer in designing a
safe, reliable aircraft that would not expose them to lost sales or litigation from
high profile failures’ (NRC 1980), a view the GAO (2004) echoed over 20 years later.
This ‘aligned-interests’ argument is certainly credible. Unlike the shipping
industry – where comprehensive insurance and elaborate bureaucratic prophylactics shield shipping companies from disasters at sea – aviation safety is
strongly linked to profitability (Cobb and Primo 2003: 5). As Perrow (1984:
167) observes:
The aircraft and airlines industries are uniquely favored to support safety
efforts. Profits are tied to safety; the victims are neither hidden, random, nor
delayed and can include influential members of the industry and Congress.
Aeroplane manufacturers are rarely liable for legal damages directly, but
crashes are in nobody’s interest, especially if they tar a specific design (which
they invariably do, to a varying extent).
The unfortunate history of the DC-10 is instructive here. During the 1970s
and 80s the DC-10 was involved in a string of high-profile accidents that,
although statistically questionable, earned it a reputation for unreliability. As
public confidence in the aircraft plummeted, TWA took out full-page advertisements stressing that they owned none of the star-crossed aircraft, and
American Airlines ran campaigns stressing that they serviced certain routes
exclusively with Boeings (Newhouse 1985: 87–9). The upshot was a financial
disaster for its manufacturer. Airlines across the world cancelled options they
held to purchase new DC-10s, and few carriers bought its highly regarded
successor, the MD-11. Eventually, the historied McDonnell-Douglas corporation failed and was forced to merge with Boeing.
The sense of aligned interests in the aviation industry seems to reach far
beyond the corporate level. Even the engineers often interpret their
relationship with DERs as complementary rather than adversarial: the designees simply being the people who vouch for the group’s collective efforts.
‘After all . . . ,’ as one engineer explained,
it’s very clear to all involved that we are talking lives here. It’s also helpful
that these are ubiquitous commercial transports. Everyone knows that not
only will they fly on these things themselves, but their wives, mothers, children, girlfriends, you name it, will be flying on them as well. It’s a sobering
thought, trust me.23
Persuasive and intuitive as it is, however, the aligned-interest argument is not
beyond reproach. Manufacturers certainly see value in building reliable aircraft, but they juggle other pressures as they compete in a highly demanding
marketplace. Certification failures can be enormously expensive, and it is
probably fair to say that the major airframers literally ‘bet the company’ on the
commercial success of new aeroplane designs. In such circumstances, it is
difficult to imagine that manufacturers’ ‘risk-tolerance’ is entirely untouched
by market pressures. It also seems intuitive, moreover, that an engineer who
helped build a system is unlikely to be the most impartial judge of it. Not to
mention that the aligned-interest argument begs the question of why FAA
certification is necessary at all, or why the same bodies which ultimately
exonerate the DER system will sometimes refer to the increasing levels of
delegation as a ‘significant problem’ (e.g. GAO 1993: 21).
The simple truth is that criticisms of the DER system are largely moot
because almost every observer agrees there are few alternatives. The FAA
would still depend on the manufacturers for their tacit knowledge and deep
understanding, irrespective of how the agency organized its relationship with
the aviation industry. The GAO put this succinctly in a report: ‘The designee
system for augmenting the capability of the FAA to review and certify the type
design is not only appropriate but indispensable’ (GAO 2004). As one correspondent, a former aviation engineer, put it:
The FAA trusts the DERs because there really is no better alternative. [ . . . ]
Can you imagine the government having to create a certifying organization
that is parallel to the existing airframers and engine builders? Oy!24
Yet, given this inevitable dependence on the manufacturers to frame and to
implement aviation regulations, it is worth considering the question raised
above. If compliance and corroboration ultimately rely on self-interest then
what is the purpose of airworthiness certification?
The role of regulator
In essence, the purpose of airworthiness certification is much as it purports to
be: to provide some manner of external oversight. The epistemic challenges of
doing this directly are intractable, as we have seen, so the FAA approaches the
problem obliquely – by turning a technical problem into a social one.
This is best explained in reference to the sociology of scientific knowledge
(SSK). In SSK terms, the FAA’s dependency on DERs stems from the ‘interpretive flexibility’ of their tests and standards (Pinch and Bijker 1984). This
is to say that aviation insiders widely accept that an aircraft could meet every
standard, pass every test, and still be unsafe to fly, and this leaves aviation
regulators dependent on the informal judgments of people who are best
able to make them. A common refrain is that engineering assessments are
‘only as good as the people doing the analysis’. As one regulatory expert
writes:
. . . assurance of ultra-dependability has to come from scrutiny [ . . . ] and
scrupulous attention to the processes of its creation; since we cannot
measure ‘how well we’ve done’ we instead look at ‘how hard we tried’
(Rushby 1993).
Rather than regulate the numbers, therefore, the FAA regulate the people who
produce them. It is a well-established principle in SSK that to trust in numbers
is to trust the people who produce them (Porter 1995; Shapin 1994, 1999). ‘An
emphasis on rules and numbers’, MacKenzie argues, ‘simply displaces, rather
than solves, modernity’s problem with trust’ (2003: 2). This is because we
cannot, ourselves, substantiate the veracity of most numbers. ‘We can, it is true,
make the occasional trip to places where [technical] knowledge is made,’
writes Schaffer (1999: 498), but adds that ‘[ . . . ] when we do so, we come as
visitors’. We ‘believe scientists not because we know them, and not because of
our direct experience with their work,’ Shapin (1999: 270) concludes, but
‘because [ . . . ] their claims are vouched for by other experts we do not know’. This is why Collins (1988: 729) argues that there is a ‘moral complexion’ to
publicly demonstrating the properties of technologies. We trust that regulatory
judgments are being made honestly by appropriately knowledgeable, motivated and qualified people, who are ‘credibly’ representing the public for
whom they act as proxies.
The DERs, in this instance, are Shapin’s ‘experts we do not know’, yet we
cannot trust in them directly. As employees of the manufacturers, DERs are
not sufficiently ‘credible’ to be the arbiters and guarantors of the knowledge
they provide, despite being the only people with the technical competency to
provide it. Herein, therefore, lies an epistemic space in which the FAA can
work and a function they can perform: they can know the ‘experts we do not
know’.
Modernity has a problem combining credibility with expertise. In the
seventeenth century, the public trusted in the witness reports of gentlemen because of their credible (or ‘virtuous’) position in society (Shapin and
Schaffer 1985). Having divested gentlemen of an inherent claim to ‘virtue’
(and therefore credibility), modern societies prefer to invest it in independent
and publicly accountable actors, such as state regulators. Yet these actors lack
the expertise to be credible witnesses of modern aircraft, and the actors who
possess this expertise lack the modern characteristics of virtue (independence,
etc.).
We resolve this dilemma by having a virtuous witness – the FAA – attest to
the virtue of expert secondary witnesses, such as DERs, and warrant (as
independent, publicly accountable actors) that these (potentially biased)
experts are worthy of trust. The FAA cannot assess the creditworthiness of
technological claims directly, but they can assess the creditworthiness of the
people who make them. The National Research Council recognized this when
they offered this recommendation in a 1998 report:
The committee believes that design safety would be enhanced if the FAA
devoted its engineering resources to promoting the safety and efficacy of
manufacturer’s design teams and processes, rather than trying to identify
problems in specific designs. The FAA should examine the technical qualifications and integrity of design organizations, including their understanding
of regulations and policies and their ability to properly implement them
(NRC 1998).25
This advice explicitly recognizes that the FAA’s primary function is human
resource management rather than direct technological assessment. The regulator cannot be intimately involved in most of the tests, but by certifying and
overseeing the representatives who conduct, interpret and even frame the
tests, they can regulate aircraft design at one remove. We might call this
‘second-order’ regulation.
As the NRC’s advice to the FAA suggests, second-order regulatory
assessments look for virtue (‘integrity’) as well as technical competence.
Virtue in this context is complex, amorphous and difficult to define, of
course, but this quote, from an aviation engineer, illustrates some of its
dimensions:
[A] potential problem is with people who understand the technology but
who [ . . . ] cannot be trusted to do the right thing for the right reason, or
those who value career progression above all else. [ . . . ] [T]he DER has to
be respected by those who work with him. He can be technically competent
to brilliant, but it won’t matter if his ability to work with other people is
severely compromised. [ . . . ] Knowing where to draw the line is the $64,000
question, and that is a totally social question without a single technically
redeeming aspect.26
If we look outside the FAA, we find second-order regulation in other technologically demanding industries. The following are excerpts from interviews
with regulators working for Britain’s Ministry of Defence and the oil industry,
respectively. Both are answers to the question of how they know the technologies they are assessing are good enough:
I [would] get a lot of feel for people and parts of organizations that were
good and parts that were bad. And, I mean, in the same organization you can
get some pockets that you wouldn’t trust to program a fruit machine, and
other pockets that are perfectly all right for safety-critical [work] [ . . . ] It’s
sort of localized cultures.27
We often want to know about key personnel. [ . . . ] Usually to try and ensure
some continuity. [ . . . ] We say, ‘please don’t change any of these key people
without consulting us first.’ It’s not necessarily looking at their professional
qualifications. [ . . . ] Like most things, [ . . . ] you learn to trust a contractor,
and thereafter trust them to do it.28
The underlying principle here is far from revolutionary, and appears in many
different contexts outside of regulation. It has become a political adage, for
instance, that good leaders are as often those who are good at delegating to
good people as they are those who are themselves prodigious. (As in Reagan’s
famous maxim: ‘Surround yourself with the best people you can find, delegate
authority, and don’t interfere.’)29 Shapin (2008) observes that venture capitalists are often as keen to judge the people involved in a venture as they are to
judge the business plan.
Core sets
To make these second-order judgments, the FAA uses the DERs to access
what Collins (1981, 1985, 1988) would call the ‘core set’ of aviation
engineering. The term ‘core set’ refers to the narrow community of technically
informed specialists who actively participate in the resolution of scientific and
technical controversies (Collins 1988: 728). The core set is distinctive because,
even though a technical question may provoke opinions from a wide range of
actors (both lay and professional), only a subset of these actors are considered
legitimate commentators: they are the ‘insiders’ on any given issue, the ‘core’,
whose voices are respected even if they disagree.
It is common to resolve technical questions by demarcating the boundaries
of this set: engaging with the legitimacy of the experts in lieu of engaging with
the issues directly. In explaining the age of the earth, for instance, we (as a
society, if not as individuals) defer to the opinions of academic geologists
rather than those of religious fundamentalists.30 Similarly, the debate about
tobacco and lung cancer only ‘closed’ when states (courts, policy makers,
opinion leaders) narrowed their conception of the core set by excluding
the work of scientists funded by the tobacco industry, even though the
work of those scientists was epistemologically indistinguishable – to outside
observers – from the work of independent scientists (Ong and Stanton
2001). If a consensus is forming around global warming, moreover, it is because
of the growing credibility of specific communities, not because the public are
engaging with the evidence directly.
Although there is often disagreement within core sets, especially at research
frontiers (see, e.g. Collins and Pinch 1993),31 they tend to coalesce around a
consensus over time: a process that sociologists of science call ‘closure’ (Collins
1985; Latour 1987). To say that a core set has reached ‘closure’ on an issue is not to say that the issue has been definitively proven or is beyond repeal: all facts are ultimately contestable, as epistemologists have argued since Wittgenstein, and so our standard of proof has to be a social one (e.g. Bloor 1976). This is why
Collins and Evans (2002) argue that knowing the consensus of the core set is
the most practicable authority available.
This authority is unavailable to us, however, if we cannot identify the core set
or recognize when it has reached a consensus. Epidemiologists, for instance,
were convinced of a link between smoking and lung-cancer long before it was
universally accepted by public institutions such as the courts (Ong and Stanton
2001).32 ‘The crucial judgment,’ write Collins and Evans (2002: 259):
. . . is to know when the mainstream community [ . . . ] has reached a level of
social consensus that, for all practical purposes, cannot be gainsaid, in spite
of the determined opposition of a group of experienced [interlocutors] who
know far more about the [issue] than the person making the judgment.
Collins and Evans (2002, 2003) argue that one need not be a member of a
core set to know the set exists and to recognize its boundaries. (Anyone
familiar with the day-to-day world of epidemiology in the 1970s, for example,
would have been aware of its consensus on smoking.) To refine this point,
they divide expertise into two broad types: ‘contributory’ and ‘interactional’.
‘Contributory’ expertise, as they define it, is the knowledge and familiarity required to actively participate in a technical debate, whilst ‘interactional’ expertise is the level of familiarity sufficient to converse with the ‘contributory’ experts.33
By this view, ‘interactional’ expertise confers useful competencies, even in
the absence of ‘contributory’ expertise. Firstly, it allows people to act as ‘translators’ (or what Sims (1999) calls ‘brokers’): interpreting between different
spheres, coordinating interactions and reconciling differences. And secondly, it
allows them to ‘discriminate’ between differing claims and levels of legitimacy
(Collins and Evans 2002: 259).34
Collins and Evans envisage social scientists such as themselves fulfilling the
role of ‘interactional expert’, but their framework works better as a lens
through which to view regulatory bodies such as the FAA. We might say the
FAA are ‘knowledge experts’ in the sense outlined by Collins and Evans: able
to both discriminate and translate. They lack the contributory expertise
required to participate in aircraft design but are close enough to the design
process, and to its core set, to have the ‘interactional expertise’ necessary to make
informed judgments about it.
The DER relationship gives the FAA access to the tacit world of aircraft
design – its rumours and hearsay, ad hoc operating rules and collective
opinions – and, through this local knowledge, a view of the social economy and
reputational landscape of aviation engineering: who is reputable, diligent,
honest, trustworthy. As in all social groups, this informal information constantly
circulates within engineering circles. Gossip like this is not objective, quantitative, exact or verifiable. It has none of the epistemic qualities we think we
value in technological information, yet it is the key to ‘how we know what we
know’, and regulators lean on it heavily. It allows them to learn how the
engineering community feels about particular systems and the people building
them. The regulators are what Sims (1999) calls a ‘marginal’ group, in that they inhabit
more than one social world, moving between the public, policy makers and
engineers. Throughout the design process they are engaged in engineering
dialogues, constantly negotiating with the manufacturers about design choices.
If consequential people have significant doubts about a specific design or the
circumstances of its creation, then the FAA is likely to recognize this despite
the background noise of constant engineering dialogue and dissent.
Limitations
The pragmatism of second-order regulation does not entirely circumvent the epistemic problems of assessing complex systems; it merely replaces one set of
issues with another, more tractable, set.35
Some of the issues it raises are reflected in the vigorous criticism that Collins
and Evans’ (2002) arguments about the value of interactional expertise
attracted within Science Studies. Critics such as Jasanoff (2003), Rip (2003) and
Wynne (2003), for instance, argue that an emphasis on the ‘core set’ begs the
question of what is ‘core’ and essentializes the notion of ‘expertise’, which most
sociologists of science consider to be a conditional and constructed category
(Jasanoff 2003: 394–96; Wynne 2003: 404). Quis custodiet ipsos custodes? as
Pinch (1991: 148) succinctly puts it: who guards the guardians and assesses the
assessors? ‘In technically grounded controversies in the policy domain,’ writes
Jasanoff (2003: 395), ‘the central question most often is what is going to count
as relevant knowledge in the first place.’ She argues that social scientists should
be problematising closure rather than leveraging it for normative ends, and
observes that the demarcation of expertise sometimes bounds crucially
important knowledge, practices and norms out of decision making (Jasanoff
2003: 395).
This critique points to significant questions about type-certification and
the extent to which it defines ‘legitimate’ discourse. Advocates of ‘crash
survivability’, for instance, contest many of the dominant conceptual frames of
aviation safety, such as ‘failures over time’ or the total number of ‘catastrophic
incidents’.36 They argue that these yardsticks make it difficult for regulators to
mandate changes that would save lives, such as compulsory smoke hoods, child
restraint safety seats, sprinkler systems, and backward-facing seats (Bruce and
Draper 1970; Nader and Smith 1994; Weir 2000).37 Carriers are reluctant to
make such changes because they cost money and potentially deter customers, and the FAA is unable to make a convincing or sustained argument for
crash survivability without violating the conceptual frame through which they
have constructed ‘safety’, or their definition of a ‘safe’ aeroplane as one that
does not crash.38
Perhaps a more significant shortcoming of second-order regulation lies in
the bureaucratic ideal behind which it hides, and the widespread misapprehension of the FAA’s role as an auditor of machines rather than people. When
outsiders open the white-box of technological practice and find the bureaucratic ideal distorted, there is often an air of impropriety. (Hence the
periodic outcries about the FAA naïvely trusting in an ‘honour system’ for
regulatory compliance.) This impression of impropriety has perverse consequences, such as the periodic investigations into the designee system and the
subsequent administrative performances necessary to reaffirm the illusion of
mechanical objectivity.
It might also be said that this approach to regulation, with its promulgation of a bureaucratic ideal, is undemocratic in that it separates the public from
discourses in which they have legitimate concerns. The intricacies of an engine
blade might surpass the public ken, but the relative interests of aeroplane
manufacturers and aeroplane regulators (and the question of their alignment)
are almost certainly within the bounds of reasonable public discourse. It
follows, therefore, that the FAA might better serve its mandate by forgoing
the bureaucratic ideal of objective technological mensuration, despite its
attractions, and promoting instead a fuller but more challenging image of their
work and its shortcomings. As Wynne (1988: 163) puts it:
Thus a more mutually respectful, dialectical interaction between experts and
publics could become the context of negotiation of those ambiguous judgments and responsibilities which experts currently have to exercise furtively,
behind a screen of objective, rule-controlled myth.
Again, however, this appeal is far from straightforward in that it manifests the
classic drawbacks of democracy. The ‘white-box of objectivity’ might be
undemocratic but it does create a backstage negotiation-space where experts
can make sometimes impolitic but necessary trade-offs about technically
complex and emotive issues away from the gaze of a fickle public and sensationalist media. This, in turn, forestalls many of the compromises necessitated
by what UK civil servants refer to as ‘stakeholder-concern’. Put differently, it
shields the regulatory process from the insidious pressures of the ‘audit
society’, described by Power (1997), and frees regulators from having to proceduralise what are essentially ad hominem problems.
Conclusion
The FAA type-certification process is important, but aircraft are complex and
inscrutable and so auditing them must lean heavily on the tacit knowledge of
the engineers who build them. This leaves the regulator heavily dependent on its regulatees for technical understanding. This is surprising
in the context of high-technology regulatory assessment, which fosters a
‘bureaucratic ideal’ of machines as objectively and quantitatively knowable,
and it raises legitimate questions about regulatory capture. Observers assuage
such concerns by pointing to a shared interest in design safety, and any shortcomings of this argument are largely moot, given the FAA’s fundamental
epistemic disadvantages.
The FAA retains an important regulatory function despite its epistemic
dependency, by auditing the moral economy of aircraft engineering. This is to
say, it actively exploits the social dimensions of expert knowledge by regulating
the experts in lieu of the actual expertise: indirectly engaging with the aircraft
by actively engaging with the ‘core sets’ of engineers who design, build and
assess them. This ‘second-order’ regulation brings esoteric issues about technology into more traditional discursive realms by transposing technical dilemmas into social problems. Although far from straightforward, these social
problems are, at least, tractable and amenable to the normal tools of social
science. They can be argued in conventional terms.
Sociologists of science have long acknowledged the necessity of this transposition, but its practical implications are under-recognized by the broader academy and by policy makers. Abandoning the bureaucratic ideal of technology
regulation and embracing a more practical epistemology of technical knowledge means letting go of the reassuring (but ephemeral) certainties of
mechanical objectivity, but in return, it offers a more accurate understanding
of the nature of regulation and the work of regulators. This is probably a fair
trade.
(Date accepted: November 2009)
Notes
1. The author would like to thank Trevor
Pinch, Michael Lynch, Terry Drinkard,
Bridget Hutter, Jeanette Hofmann and
David Demortain for their time and generous comments on this paper. All faults are, of
course, the author’s alone.
2. Leroy A. Keith, manager of FAA’s
transport airplane directorate aircraft certification service, speaking at the Flight Safety
Foundation 43rd annual International Air
Safety Seminar, 1990 (quoted in Nader and
Smith 1994: 157).
3. Their work here is widely considered
to be exemplary, and their standards have
become the yardstick and model for international aviation regulation. Despite this
influence on foreign aviation, the reliability
of aircraft under the FAA’s direct mandate
compares favourably with those operated
in other countries. Of the accidents that do
occur under its aegis, relatively few are
attributed directly to technological failures
or design problems. Between 1982 and
1991, for example, 163 major accidents
occurred, and of those where the causes
were identified (120) only 12.5 percent
were caused by a failure of the aircraft’s
design or systems, whereas 71.7 percent
were attributed to human error (GAO
1993).
4. An ‘airframe’ constitutes almost every
structural element of a plane that is not the
engines.
5. MacKenzie (2001), for instance, demonstrates that even abstract and formal
systems like computer programs are impossible to ‘know’ exactly; and where systems
are ‘messier’, the uncertainties quickly
multiply.
6. The rules that govern these tests are to
be found in FAR-33, which covers the design
and construction standards for turbine aircraft engines.
7. The bird is fired at 200 knots or 232
mph, which is the approximate speed of an aircraft at takeoff and landing, when most
bird-strikes occur. The US Air Force, whose
planes fly faster at low altitudes, has a
60-foot cannon that will fire a 4-lb feathered
bird, head first, at over 1,000 miles per
hour. They call it the ‘rooster booster’.
8. Operating under enormous strain at
upwards of 2,500 degrees Fahrenheit – well
above the melting point of most alloys –
modern turbojet high-pressure turbine blades represent the very forefront of
materials science; their metal elements are
‘grown’ as a single crystal.
9. New computer-based avionics and
flight control systems, for instance, have
introduced software as a safety critical component, requiring complex and unfamiliar
dimensions of engineering and expertise
(GAO 1993: 13); whilst, more recently, new
composite structural materials are challenging long-established design paradigms
rooted in traditional metallurgy. ‘Probably
the least reliable bits of a heavy jet transport
are the avionics,’ lamented one engineer to
the author, ‘they work or don’t work given
the phase of the moon or something’
(Anonymous communication 02/03/2005).
10. Collins (1982), for instance, invokes it
when looking at the various attempts to
build a ‘TEA’ laser from the published
papers of a Canadian defence research
laboratory, using it to explain many of the researchers’ difficulties and why some succeeded whilst others failed. MacKenzie and Spinardi (1996), in turn, have used it to
comment on the practicalities of nuclear
disarmament.
11. Indeed, as Power (1997) and
MacKenzie (2003) make clear, even accountants cannot be ‘mere accountants’, as they
have their own complex and ambiguous
rulebook that requires interpretation.
12. Anonymous personal communication
19/5/09.
13. See also Fanfalone (2003).
14. This only applies to nationally built aircraft – in the context of this paper, primarily those built by Boeing. For aircraft
designed and built outside the USA, the
FAA relies on foreign authorities to conduct
many of the certification activities done by
DERs.
15. The DER recruitment process
involves detailed reviews of the applicants’
qualifications, work experience and job performance (GAO 2004).
16. In an attempt to reclaim the functions it previously kept in-house, such as rulemaking, the FAA has developed a programme of in-house specialists – the National Resource Specialist (NRS) Program – who provide, among other things, technical assistance on key decisions during the certification process. The FAA identified 23 areas where it needed technical guidance and advice, including engine propulsion system dynamics, fuel and landing gear systems, advanced materials, and advanced avionics. By 1998, however, the programme was still much smaller than originally envisioned: only 11 positions were authorized, against the 23 the FAA had identified a need for, and only 8 of those 11 were actually filled (GAO 1993: 12–30).
17. Although some technological industries, such as the UK railway industry, do practice self-regulation (Hutter 2001; Lodge 2002), this tends to be limited to the regulation of how the systems in question are operated, rather than to the oversight of the technological artefacts themselves.
18. Moreover, self-regulation arguably goes further with regulators of technology than in other systems that rely on self-regulation. In most cases the regulator is capable of a degree of oversight, even if it lacks the capacity to oversee every actor within its purview, and this allows it to selectively audit the degree of self-compliance among its charges. Health and safety regulators cannot monitor every burger restaurant, but they are capable of performing health and safety inspections by themselves, and so can audit a representative sample. The FAA is in a somewhat different position.
19. In this it is similar to many other forms of expert advice, as writers such as Hilgartner (2000) and Wynne (1988) testify. Hilgartner (2000), for instance, argues that all expert bodies constitute and maintain their authority, in part, by highly stylizing their public scientific and technical pronouncements. Unable to calibrate the complex balance between imperfect (but valuable) expert opinion, on one side, and the public’s capricious concerns, on the other, such bodies, he argues, invariably tip the scales by downplaying inherent uncertainties.
20. For more on the authority of numbers
see Anderson and Feinberg (1999) and
Desrosieres (1998).
21. Jasanoff (2003) argues that this is especially true of the USA; an effect, she suggests, of a distinctive American ‘civic epistemology’ born of strong democratic inclinations and the litigation-heavy nature of American public life. Vogel (2003: 567) echoes this view, linking the adversarial US legal system with an emphasis on highly formalized – and hence legally ‘defensible’ – risk assessments in a wide range of regulatory regimes.
22. Although, as with all ideas, it is possible to find its roots in earlier work, such as that of Marver Bernstein in the mid-1950s (see Niles 2002: 390–1).
23. Personal communication with author,
21 May 2005.
24. Personal communication with author,
19 May 2006.
25. This emphasis is borne out by the
FAA’s priorities. A GAO (2004) review
of the training records for 90 certification
engineers showed that 43% received little
or no technical training that directly supported certification. Instead, many received
training in supervisory and managerial skills
on subjects such as ‘total quality management’, human relations, and leadership
development.
26. Anonymous personal communication
19/5/09.
27. Anonymous interview, conducted
February 1996. Courtesy of Donald
MacKenzie.
28. Anonymous interview, conducted
March 1995. Courtesy of Donald
MacKenzie.
29. Not that it necessarily worked for
Reagan, but it would probably be easier to
attest to the merits of the Reagan presidency
than to the prodigality of the Gipper
himself.
30. A handful of scientists at prestigious institutions maintain the literal truth of the Bible, or the viability of cold fusion, but, whilst either view might one day become orthodox, it would be obtuse, at present, to consider either as genuinely ‘credible’ in a practical sense.
31. In such cases the precautionary principle may be the only useful response (see,
for instance, Collingridge and Reeve 1986).
32. Our rubrics about expertise often leave us vulnerable to deception. As Collins (1988: 742) shows, it is possible to present technical information in ways that obscure its meaning and validity, and this can work against the interests of the public. He gives the example of nuclear fuel flasks that were ‘shown’ to be safe in what was a very convincing but, in retrospect, highly questionable public demonstration.
33. They also suggest ‘referred expertise’ as a sub-category, defined as ‘the level of competence required to deeply understand what it means to be a contributory expert’ (usually born of being a contributory expert in a different but analogous field).
34. Modern societies have developed
shorthand rules for identifying core sets on
particular issues: deferring to accredited
scientists on matters scientific, for instance,
to clergy on matters theological, and to
engineers on matters mechanical (each with
gradations corresponding to incidental
markers, such as institutional prestige).
35. Engineers might like to think of this as a sort of sociological ‘Laplace transform’.
36. Their argument, broadly, is that behind these metrics is an implicit and false assumption that aviation disasters are inevitably fatal. In fact, aircraft rarely plummet like stones from the sky, and most accidents have survivors, or people who might have survived had the aircraft been designed differently. It is generally held that about 80% of US commercial airline accidents are ‘survivable’, in the sense that the crash impact ‘does not exceed human tolerances’, and, by some estimates, about three out of four people who have died in ‘survivable’ crashes were killed by fire, smoke, or toxic fumes, rather than by the impact itself.
37. It is true that aircraft are built with a certain level of crash survivability: a minimum number of exits, flotation devices, lap belts, escape chutes, and – in recent years – improved fire safety standards, such as less flammable upholstery. However, there are many areas in which the FAA has been unable or unwilling to mandate changes that would have saved lives over the last few decades.
38. As Nader and Smith (1994) put it,
accident survivability implies the evitability
of aircraft accidents.
Bibliography
Air Line Pilots Association [ALPA] 1999
‘Comments on Rules Docket (AGC-200)’.
Docket No. FAA-1998-4815 (52539), March.
Aerospace Industries Association of
America [AIAA], Inc. 1989 ‘Maintaining a
Strong Federal Aviation Administration:
The FAA’s Important Role in Aircraft
Safety and the Development of US Civil
Aeronautics’, September.
Anderson, M. and Feinberg, S. 1999 ‘To
Sample or Not to Sample? The 2000 census
controversy’, Journal of Interdisciplinary
History 30: 1–36.
Bloor, David 1976 Knowledge and Social
Imagery, London: Routledge & Kegan Paul.
Bruce, James T. and Draper, John 1970 [The
Nader report on] Crash Safety in General
Aviation Aircraft, Washington: Center for
Study of Responsive Law.
Collingridge, D. and Reeve, C. 1986
Science Speaks to Power, New York: St
Martin’s.
Cobb, Roger and Primo, David 2003 The Plane Truth: Airline Crashes, the Media, and Transportation Policy, Washington, DC: Brookings Institution Press.
Collins, Harry 1981 ‘The Place of the “Core-Set” in Modern Science: Social Contingency with Methodological Propriety in Science’, History of Science 19: 6–19.
Collins, Harry 1982 ‘Tacit Knowledge and Scientific Networks’ in B. Barnes and D. Edge (eds.) Science in Context: Readings in the Sociology of Science, Milton Keynes: Open University Press: 44–64.
Collins, Harry 1985 Changing Order,
London: Sage.
Collins, Harry 1988 ‘Public Experiments and
Displays of Virtuosity: The Core-Set Revisited’, Social Studies of Science 18 (4): 725–48.
Collins, Harry and Evans, Robert 2002 ‘The
Third Wave of Science Studies: Studies of
Expertise and Experience’, Social Studies of
Science 32 (2): 235–96.
Collins, Harry and Evans, Robert 2003 ‘King
Canute Meets the Beach Boys: Responses to
the Third Wave’, Social Studies of Science 33
(3): 435–52.
Collins, Harry and Pinch, Trevor 1993 The
Golem: What Everyone Should Know About
Science, Cambridge: Cambridge University
Press.
Collins, Harry and Pinch, Trevor 1998 The
Golem at Large: What You Should Know
About Technology. Cambridge: Cambridge
University Press.
Dana, D. and Koniak, S. 1999 ‘Bargaining in
the Shadow of Democracy’, University of
Pennsylvania Law Review 148 (2): 473–
559.
Desrosieres, A. 1998 The Politics of Large
Numbers: a History of Statistical Reasoning,
Cambridge, MA: Harvard University Press.
Downer, John 2007 ‘When the Chick Hits
the Fan: Representativeness and Reproducibility in Technological Testing’, Social
Studies of Science 37 (1): 7–26.
Downer, John 2009 ‘When Failure Is an Option: Redundancy, Reliability, and Risk’, LSE CARR Discussion Paper 53, May 2009 [ISBN 978-0-85328-395-9]. http://www.lse.ac.uk/collections/CARR/pdf/DPs/Disspaper53.pdf [Accessed 06/02/2009]
Fanfalone, Michael 2003 Testimony to the House Aviation Subcommittee [PASS Personnel Install, Maintain, Troubleshoot and Certify the Country’s Air Traffic Control System], 27 March. http://www.findarticles.com/p/articles/mi_m0UBT/is_17_17/ai_100769720#continue [Accessed: 06/09/2007]
Federal Aviation Administration [FAA] 1998 ‘Airworthiness Standards; Bird Ingestion: Notice of Proposed Rulemaking (NPRM). CFR Parts 23, 25 and 33’, Docket No. FAA-1998-4815; Notice No. 98-18; RIN 2120-AF34.
Federal Aviation Administration [FAA]
2001 ‘Bird Ingestion Certification Standards’,
Advisory Circular AC 33.76 1/19/01.
GAO 1993 ‘Aircraft Certification: New FAA Approach Needed to Meet Challenges of Advanced Technology’, GAO Report to the Chairman, Subcommittee on Aviation, Committee on Public Works and Transportation, House of Representatives, GAO/RCED-93-155, 16 September.
GAO 2004 ‘Aviation Safety: FAA Needs to Strengthen the Management of its Designee Programs’, Report to the Ranking Democratic Member, Subcommittee on Aviation, Committee on Transportation and Infrastructure, House of Representatives, October. http://www.gao.gov/cgi-bin/getrpt?GAO-05-40 [Accessed: 08/03/08]
Georgosouli, Andromachi 2008 ‘The Nature
of the FSA Policy of Rule Use: A Critical
Overview’, Legal Studies 28 (1): 119.
Gherardi, S. and Nicolini, D. 2000 ‘To Transfer is to Transform: the Circulation of Safety
Knowledge’, Organization 7 (2): 329–48.
Ham, C. and Alberti, K. 2002 ‘The Medical Profession, the Public, and the Government’, British Medical Journal 324 (7341): 838–42.
Hilgartner, Stephen 2000 Science on Stage:
Expert Advice as Public Drama, Palo Alto,
CA: Stanford University Press.
Hutter, B.M. 2001 Regulation and Risk:
Occupational Health and Safety on the Railways, Oxford: Oxford University Press.
Jasanoff, Sheila 2003 ‘Breaking the Waves in
Science Studies: Comment on H.M. Collins
and Robert Evans, “The Third Wave of
Science Studies” ’, Social Studies of Science
33 (3): 389–400.
Langewiesche, William 1998 Inside the Sky: A Meditation on Flight, New York: Pantheon.
Latour, Bruno 1987 Science in Action: How to Follow Scientists and Engineers Through Society, Cambridge, MA: Harvard University Press.
Lloyd, E. and Tye, W. 1982 Systematic Safety – Safety Assessment of Aircraft Systems, London: CAA.
Lodge, M. 2002 On Different Tracks: Designing Railway Regulation in Britain and
Germany, Westport, CT: Praeger.
MacKenzie, Donald 1996 Knowing Machines: Essays on Technical Change, Cambridge, MA: MIT Press.
MacKenzie, Donald 2001 Mechanizing
Proof: Computing, Risk, and Trust, Cambridge, MA: MIT Press.
MacKenzie, Donald 2003 ‘A Philosophical
Investigation into Enron’, London Review
of Books, 26 May.
MacKenzie, Donald and Spinardi, G. 1996 ‘Tacit Knowledge and the Uninvention of Nuclear Weapons’ in D. MacKenzie, Knowing Machines: Essays on Technical Change, Cambridge, MA: MIT Press.
Nader, Ralph and Smith, Wesley 1994 Collision Course: The Truth About Airline Safety,
New York: TAB Books.
National Research Council [NRC], Assembly of Engineering, Committee on FAA Airworthiness Certification Procedures 1980
Improving Aircraft Safety: FAA Certification of Commercial Passenger Aircraft,
Washington, DC: National Academy of
Sciences.
National Research Council [NRC] 1998
Improving the Continued Airworthiness of
Civil Aircraft: A Strategy for the FAA’s Aircraft Certification Service, Washington, DC:
NRC.
National Transportation Safety Board (NTSB) 2006 ‘Safety Report on the Treatment of Safety-Critical Systems in Transport Airplanes’, Safety Report NTSB/SR-06-02, PB2006-917003, Notation 7752A, Washington DC.
Newhouse, J. 1985 The Sporty Game: The High-Risk Competitive Business of Making and Selling Commercial Airliners, New York: Knopf.
Niles, Mark 2002 ‘On the Hijacking of Agencies (and Airplanes): The Federal Aviation Administration, “Agency Capture” and “Airline Security” ’, Journal of Gender, Social Policy & the Law 10 (2): 381–442.
Office of Technology Assessment [OTA] 1988 Safe Skies for Tomorrow: Aviation Safety in a Competitive Environment, OTA-SET-381, Washington, DC: US Government Printing Office.
Ong, E.K. and Stanton, S.A. 2001 ‘Constructing “Sound Science” and “Good Epidemiology”: Tobacco, Lawyers, and Public
Relations Firms’, American Journal of
Public Health 91 (11): 1749–57.
Peltzman, S. 1976 ‘Toward a More General
Theory of Regulation’, Journal of Law and
Economics 19: 211–40.
Perrow, Charles 1984 Normal Accidents:
Living with High-Risk Technologies. New
York: Basic Books.
Pinch, Trevor 1991 ‘How Do We Treat Technical Uncertainty in Systems Failure? The Case of the Space Shuttle Challenger’ in T. La Porte (ed.) Social Responses to Large Technical Systems: Control or Anticipation (Proceedings of the NATO Advanced Research Workshop on Social Responses to Large Technical Systems: Regulation, Management, or Anticipation, Berkeley, California, October 17–21, 1989), NATO Science Series D: 58: 143–158.
Pinch, Trevor 1993 ‘ “Testing – One, Two, Three . . . Testing!”: Toward a Sociology of Testing’, Science, Technology & Human Values 18 (1): 25–41.
Pinch, T. and Bijker, W. 1984 ‘The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other’, Social Studies of Science 14: 399–441.
Polanyi, M. 1958 Personal Knowledge, London: Routledge & Kegan Paul.
Porter, T. 1995 Trust in Numbers: The
Pursuit of Objectivity in Scientific and Public
Life, Princeton, NJ: Princeton University
Press.
Posner, R.A. 1971 ‘Taxation by Regulation’, Bell Journal of Economics and Management Science 2 (1): 22–50.
Posner, R.A. 1974 ‘Theories of Economic
Regulation’, Bell Journal of Economics and
Management Science 5(2): 335–58.
Posner, R.A. 1975 ‘The Social Costs of
Monopoly and Regulation’, Journal of
Political Economy 83(4): 807–27.
Power, Michael 1997 The Audit Society:
Rituals of Verification. Oxford: Oxford University Press.
Rip, Arie 2003 ‘Constructing Expertise in a
Third Wave of Science Studies?’, Social
Studies of Science 33(3): 419–34.
Rushby, John 1993 ‘Formal Methods and the
Certification of Critical Systems’, Technical
Report CSL-93-7, December. Menlo Park,
CA: Computer Science Laboratory, SRI
International.
Schaffer, Simon 1999 ‘Late Victorian
Metrology and its Instrumentation: a Manufactory of Ohms’, in Mario Biagioli (ed.)
The Science Studies Reader, London: Routledge: 457–99.
Schiavo, Mary 1997 Flying Blind, Flying
Safe, New York: Avon Books.
Schulman, Paul 1993 ‘The Negotiated Order
of Organizational Reliability’, Administration & Society 25 (3): 353–72.
Shapin, Steven 1994 A Social History of Truth: Civility and Science in Seventeenth-Century England, Chicago: University of Chicago Press.
Shapin, Steven 1999 ‘The House of Experiment in Seventeenth Century England’, in
Mario Biagioli (ed.) The Science Studies
Reader, London: Routledge.
Shapin, Steven 2008 The Scientific Life,
Cambridge, MA: Harvard University
Press.
Shapin, Steven and Schaffer, Simon 1985
Leviathan and the Air-Pump: Hobbes, Boyle,
and the Experimental Life, Princeton, NJ:
Princeton University Press.
Sims, Benjamin 1999 ‘Concrete Practices:
Testing in an Earthquake-Engineering
Laboratory’, Social Studies of Science 29 (4):
483–518.
Stigler, G. 1971 ‘The Theory of Economic
Regulation’, Bell Journal of Economics and
Management Science 2: 3–21.
Vogel, D. 2003 ‘The Hare and the Tortoise
Revisited: The New Politics of Consumer
and Environmental Regulation in Europe’,
British Journal of Political Science 33: 557–
80.
Weir, Andrew 2000 The Tombstone Imperative: The Truth about Air Safety, London:
Simon & Schuster.
Wiley, John 1986 ‘A Capture Theory of
Antitrust Federalism’, Harvard Law Review
99: 713–23.
Woods, David and Hollnagel, Erik 2006 ‘Resilience Engineering Concepts’ in E. Hollnagel, D. Woods, and N. Leveson (eds) Resilience Engineering: Concepts and Precepts, Aldershot: Ashgate: 1–6.
Wynne, Brian 1988 ‘Unruly Technology:
Practical Rules, Impractical Discourses and
Public Understanding’, Social Studies of
Science 18: 147–67.
Wynne, Brian 2003 ‘Seasick on the Third
Wave? Subverting the Hegemony of Propositionalism’, Social Studies of Science 33 (3):
401–17.