Postdigital Science and Education
https://doi.org/10.1007/s42438-018-0003-x
INTERVIEWS
The Postdigital Human: Making the History of the Future
Steve Fuller¹ & Petar Jandrić²

¹ University of Warwick, Coventry, UK
² Zagreb University of Applied Sciences, Zagreb, Croatia

© Springer Nature Switzerland AG 2018
Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of
Sociology at the University of Warwick, UK. A product of Jesuit education (Regis
High School in New York City), Steve was a John Jay Scholar at Columbia University
where he graduated number two in his class of 1979. Columbia awarded Steve a Kellett
Fellowship to study at Cambridge University for what turned out to be two of the most
decisive years in recent British history (1979–1981). He subsequently earned an
M.Phil. in history and philosophy of science, which he followed up with a Ph.D. at
the University of Pittsburgh, where he was an Andrew Mellon Fellow. He was awarded
a D.Litt. by the University of Warwick in 2007 for sustained lifelong contributions to
scholarship. He is also a Fellow of the Royal Society of Arts, the UK Academy of
Social Sciences, and the European Academy of Sciences and Arts.
Steve is best known for his foundational work in the field of ‘social epistemology’,
which is the name both of the quarterly journal, published by Taylor & Francis, that he founded in 1987 and of the first of his more than 20 books. Steve has pursued social
epistemology as a profoundly interdisciplinary project, which simultaneously upholds
the normative ambitions of philosophy while remaining accountable to the empirical and
historical disciplines. Whereas other philosophers of science have associated their ‘normative’ positions with science’s status quo, Steve has drawn inspiration from Karl Popper
who saw science as always testing its most cherished assumptions against the evidence.
This has led him into some relatively uncharted domains, starting with his active
participation in the ‘Science Wars’ in the 1990s, the revival of intelligent design theory,
and a robust defence of public intellectuals against the claims of academic expertise.
From 2011 to 2014 Steve published a trilogy relating to the idea of a ‘post-’ or ‘trans-’
human future: Humanity 2.0: What It Means to Be Human Past, Present and Future
(Fuller 2011), Preparing for Life in Humanity 2.0 (Fuller 2013), and The Proactionary
Imperative: A Foundation for Transhumanism (Fuller and Lipinska 2014). His most
recent books are Knowledge: The Philosophical Quest in History (2015a), The Academic
Caesar (2016a), and Post-Truth: Knowledge as a Power Game (2018). Steve’s works
have been translated into over 20 languages.
About the Conversation
In December 2017 Petar Jandrić emailed Steve Fuller with an idea for this conversation.
Steve wanted to converse in writing, so the conversation was conducted through
numerous email exchanges between December 2017 and June 2018.
Mapping the Soul of Science
Petar Jandrić (PJ): Please outline the main Science Wars of our times. “As a veteran of these Science Wars” (Fuller 1999: 243), what do you think of their impact and legacy? To paraphrase the title of your 1999 article, “who is exactly the enemy”, and who gets the most benefit from the Science Wars?
Steve Fuller (SF): The original Science Wars, which occurred in the 1990s, were
perhaps an inevitable consequence of the post-Cold War meltdown in government
science support around the world. People now forget that the Cold War consisted to a
large extent of various science-and-technology-based ‘races’ (e.g. to build smarter
missiles, to get to the Moon first) that were basically proxy battle theatres played out
on the international stage. Once the Soviet Union fell, science lost that taken-for-granted primacy, which was most immediately felt in a quick shift in funding patterns from physics to biomedicine, from public to private, and – philosophically speaking – from global unity-of-science projects to locally embedded knowledge practices.
From that standpoint, the emerging field of science and technology studies (STS) was an
obvious lightning rod since its generally social constructivist orientation was instrumental –
even at the policy level – in demystifying a lot of the more extravagant claims that leaders in
the scientific community were making on the public even after the Cold War had ended.
What we see now is simply an intensification of that tendency – in ways that have frightened
such older STS scholars as Bruno Latour and Donna Haraway. They openly regret the
current wave of what I have called ‘Protscience’ (‘Protestant science’) or ‘customised
science’ as going a step too far beyond the locally embedded science that they had advocated
(Fuller 2015b). What this means in practice is that the STS folks are scandalised that alt-right
ideologues, creationists and anti-vaccinationists have joined the ranks of more politically
correct minority voices to turn science – and its critique – to their advantage.
I personally don’t have a problem with this turn of events. If STS is any good as a
form of inquiry, its findings and methods should be capable of serving both the
politically correct and the politically incorrect. If that isn’t a good working definition
of ‘scientific objectivity’, I don’t know what is.
PJ: Your book Kuhn vs. Popper carries a telling subtitle The Struggle for the Soul of
Science (Fuller 2003). What, for you, is the soul of science?
SF: The ‘soul of science’ simply means what science is ultimately about. Kuhn saw it
as primarily the collective activity of the self-recognising professional science community, which the rest of society may wish to support and whose fruits it may wish to use.
But all of this is in the spirit of recognising the autonomy of science as an institution
from the rest of society. Popper’s view is that what we normally call ‘science’ is simply a
more technically rigorous and extended version of critical rationality more generally. In
that respect, science knows no institutional boundaries – and indeed, as Popper’s radical
follower, Paul Feyerabend maintained, science’s institutional arrangements may impede
the flourishing of the scientific spirit. Anyone can in principle treat their beliefs
scientifically, provided they subject them to strong tests of validity.
PJ: The soul of science, then, is closely related to the way we do science. What is
social epistemology? What are its main advantages over older traditions such as
analytic philosophy; what are its main drawbacks?
SF: Social epistemology is first of all an interdisciplinary project that basically tries
to address normative philosophical questions about the nature of knowledge by considering history and the social sciences. When I came up with the idea in the mid-1980s, I wanted to address two problems at once – the tendency of analytic philosophers to see the ‘social’ as simply some aggregated version of individual epistemology
and the tendency of sociologists of knowledge (including STS people) to discount
normative issues altogether and simply describe past or present knowledge practices.
When I say ‘normative’ I simply mean a concern with how things ought to be done –
‘performance standards’, if you will: What makes something better or worse at what it
does, and what contributes to its improvement or decline. Many philosophical disciplines, including logic, epistemology, ethics, law, and aesthetics, have traditionally been
normative in orientation. A good way to see this is that these disciplines are critical of
what passes for ‘normal’ in, say, human interaction, reasoning, or art.
Analytic philosophy also aspires to be ‘normative’ in this sense but its prescriptions – the
practical side of normativity – are hopelessly naïve or unworkable. And here I mean not
only its rather self-serving notions of what counts as ‘good evidence’ and ‘sound reasoning’
for knowledge claims (which are usually biased in favour of dominant paradigms) but also
its easy deference to concepts like trust, which effectively licenses the offloading of
epistemic judgement to experts – be they based on scientific or indigenous knowledge.
Sometimes analytic social epistemologists give the impression that a more complex social
world allows the individual to take less responsibility. This profoundly goes against the
Protestantised/customised science trend mentioned earlier, which is all about people taking
back control of science by integrating their version of science into their lives.
I am sometimes accused of being too harsh about analytic philosophers, but they operate
with such narrow normative horizons – basically propping up the current knowledge system
(perhaps by adding a few more countervailing, typically ‘politically correct’ voices) – that
they are virtually useless when it comes to charting the future course of organised inquiry,
which I think is the ultimate payoff of anything deserving the name ‘social epistemology’.
PJ: According to Collin, your approach to STS differs from the mainstream:
A philosopher and historian of science by training, Steve Fuller operates at a
meta-level in relation to the rest of STS’s main figures. Rather than illuminating
the development of natural science by means of empirical case-studies of his own
making, Fuller undertakes a historico-critical survey of the development of STS
itself and offers advice concerning its future development. (Collin 2011: 167)
What draws you towards this approach? What are its main advantages and drawbacks?
SF: That’s one of the most perceptive things that Collin has said about my work. It does
indeed operate at a meta-level to STS, and that’s one reason why I’m not too upset by the
Science Wars – past and present. If STS routinely makes radically demystifying claims
about any science or technology it touches, then it shouldn’t be surprised that people take
them seriously. In the 1990s, STS people thought they were being attacked unfairly by scientists,
and nowadays they think that they are being used unfairly by Hillary Clinton’s ‘basket of
deplorables’. Both complaints show a general meta-level cluelessness by STS people.
For me, the main advantage of adopting a meta-level perspective is that it keeps one aware that the validity of any move that one makes in a language game depends on
what the players take the game to be. Social constructivism as a world-view says that
there are potentially multiple games in play at once, as players struggle simultaneously
to determine the game they are playing and, as a consequence, who is winning and
losing according to its rules. In my most recent book, I basically define the ‘post-truth
condition’ in these terms. This is quite different from classic conceptions of relativism,
which start with a fairly well-bounded sense of the field of play – i.e. a society or
culture, relative to which something is then true or false.
PJ: According to Collin, your social epistemology is normative and naturalistic (2011: 167–168). You have already addressed the ‘normative’ part; what do you mean by
‘naturalistic’? More generally, what is the task of social epistemology?
SF: ‘Naturalistic’ simply means that I take historical and empirical research as setting
prima facie constraints on the norms of organised inquiry. I say prima facie because
those constraints may be removed or mitigated in various ways. Indeed, this is to be
expected in any ‘scientifically’ organised inquiry. In my second book, Philosophy of
Science and Its Discontents (Fuller 1989/1993), I describe myself as a ‘reflexive
naturalist’, by which I mean a naturalist who takes seriously science’s historical tendency to radically overturn its most fundamental theories, even as it accepts the data that
informs them. To put the point somewhat paradoxically, if science is our best form of
knowledge, then one of the biggest lessons it teaches is that knowers need to be prepared
for radical changes of mind over time. Even if Popper didn’t capture the psychology of individual working scientists – who prefer to confirm rather than falsify their theories – he did
capture the meta-psychology of scientific inquiry as a collective movement.
As a historical aside, it is worth observing that the older style of ‘naturalism’
represented by, say, the American pragmatists and the original historians and philosophers of science (i.e. the so-called ‘evolutionary epistemologies’ of Donald Campbell,
Stephen Toulmin, Dudley Shapere, David Hull, etc.) was very much in the spirit of
what I’m talking about here – namely, that science not only teaches us about the world
but it also teaches us about how we learn about the world. It does what logicians call
‘first order’ and ‘second order’ work simultaneously. By highlighting this reflexive
dimension, ‘naturalism’ functioned as a kind of secularised Hegelianism (this certainly
explains Peirce and Dewey). However, latter-day ‘naturalists’, typically enamoured of
evolutionary psychology and sometimes retro-fitted with Neo-Aristotelian ‘virtue’
thinking, tend towards a more static and even ‘essentialist’ view of ‘human nature’, in which the recognition of our limitations takes precedence over our overcoming them. In this context, self-avowed liberals and conservatives, such as, say, Steven Pinker and Alasdair MacIntyre, respectively, find themselves in common cause.
Put another way, I accept that Kant’s slogan, ‘ought implies can’ can mean one of
two things: either that people shouldn’t be held to moral standards that they could never
reach or that the standards are sufficiently good in themselves that the barriers to
reaching them should be removed. The more conservative former interpretation of
Kant’s maxim was promoted in my early career by Alvin Goldman and Ronald Giere,
typically as part of a definition of ‘naturalised epistemology’, which, following Quine, always presumed the continuity of humans and animals. I fear that evolutionary
psychologists have given a sexy gloss to this position for a new generation. However,
I became a transhumanist when I started to find the latter interpretation of Kant’s slogan
more in the spirit of his largely smouldering revolutionary ambitions – which I think is
also truer to the Hegelian roots of modern naturalism. It sees the relevant sense of
ontological continuity as existing not between humans and animals but between
humans and God.
The Battlefield of Truth
PJ: What is your take on philosophical realism, and its close cousins, rationality and
rational thinking?
SF: What all these terms have in common is the implicit appeal to a standard of
judgement. I stress ‘implicit’ because the standard has become ‘naturalised’, ‘unconscious’, ‘taken for granted’, ‘presumed’ – choose your favourite term. In classical
philosophy, Plato and Aristotle exemplified two types of realism, each with its own
standard by which rationality was judged. For Plato the real is what the external world
prompts us to remember, whereas for Aristotle the real is what the external world tells
us to believe. Implied here are not only two rather different conceptions of rationality
but more significantly two rather different conceptions of what the mind is for. In a
sense, Plato thinks that the empirical world is simply the means by which we discover
– if not exploit – the contents of our minds, which is the stuff of reality. This at once
opens the door to both a highly instrumental and a highly idealised view of reality, a
vision that I think has been most fully realised in the West in the history of
technology, perhaps even more than the history of science. Aristotle, in contrast,
thinks that the correspondence – or as we now tend to say, ‘adaptation’ – to the
empirical world is the mark of a ‘realistic’ orientation to the world, in which
‘evidence’ plays a pivotal role as the anchor for what it means to be rational. In
modern times, Franz Brentano understood this point very well. It helps to explain the
curious affinity between Neo-Thomism and phenomenological approaches to philosophy, both of which aspire to this Aristotelian conception of realism and
rationality.
On the Platonic conception of the real, science is about all that is possible, as in the
realistic reading of scientific laws as implying counterfactually true statements, or ‘true
in all possible worlds’. In contrast, on the Aristotelian conception of the real, science is
only about that which is probable, as in the idea of scientific laws as mere inductive
generalisations for limited domains of reality, a view upheld in our day by Nancy
Cartwright. I’m on Plato’s side of this argument, which is open to a much more
constructivist reading of the natural world – that is, as offering clues without corresponding to the real. In a sense, I believe the opposite of what Cartwright believes: I
believe that the natural world as the object of our empirical knowledge is a model of all
that is real, which in turn is all that is possible.
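Fuller’s contrast here admits a compact formal rendering. The schema below is illustrative only – standard modal-logic and probabilistic notation, not Fuller’s own: the Platonic reading treats a law as necessary and hence counterfactual-supporting, while the Aristotelian reading treats it as a high-probability generalisation over a bounded domain $D$:

$$\text{Platonic:}\quad \Box\,\forall x\,(Fx \rightarrow Gx) \qquad\qquad \text{Aristotelian:}\quad \Pr(Gx \mid Fx) \approx 1 \ \text{ for } x \in D$$

On the first reading the law settles what would happen even in unrealised cases; on the second it remains hostage to the limited domain from which it was induced.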
PJ: Please relate that to your views on burden of proof (Fuller 1988/2002: 105 ff),
and to the problem of replication (Fuller 2006a: 54 ff).
SF: No debate is ever fought on a level playing field. One side generally holds the
presumption and the other bears the burden of proof. One of my earliest insights into
social epistemology – drawing on my formal training in rhetoric, the intellectual
historiography of the ‘Cambridge School’ surrounding Quentin Skinner, and the
historicist philosophy of science of Kuhn and Toulmin – was that incommensurability
between paradigms, cultures, historical periods, etc. typically had less to do with a radical
difference in ideas per se than a radical difference in the plausibility attributed to the
ideas. In other words, epistemologically speaking, what’s at stake is less about comprehension than imagination. Were Aristotle to travel in time to the present, he could
certainly be taught to understand the basis on which we pursue, say, space travel or
nuclear energy, while at the same time questioning what we might loosely call its
‘wisdom’. In other words, were he among us now, Aristotle would probably sound like
‘precautionary’ thinkers who think that humanity is on borrowed time and setting itself
up for a massive fall, if it continues down its current trajectory. He would thus place the
burden of proof on us to demonstrate that our record of success is more than illusory.
Ultimately the difference between Aristotle and us boils down to what in my latest
book, Post-Truth: Knowledge as a Power Game, I call ‘modal power’, namely, power
over the definition of what is possible (Fuller 2018).
As for replication, my view is straightforwardly constructivist: There is always an
open question about what it is about some past event, including an experimental
outcome, that needs to be reproduced by some future event to count as a proper
‘replication’ in a sense that might be regarded as confirmation. Here I’m very much
influenced by what Nelson Goodman (1955) called the ‘new riddle of induction’,
whereby the same series of events could be used to infer any of several different
trajectories, depending on what is considered salient in the original set of events. Once
again, this is an epistemic decision, not a discovery.
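The point can be made concrete in a few lines of code. The sketch below is illustrative only – the data series and both hypotheses are invented: two hypotheses can agree perfectly on every recorded observation yet diverge on the very next case, so deciding which trajectory a future event must match to count as a ‘replication’ is indeed an epistemic decision rather than a discovery.

```python
# Illustrative only: Goodman-style underdetermination. The same finite
# series of observations is compatible with hypotheses that diverge on
# every future case, depending on what is treated as salient.
import numpy as np

t_obs = np.arange(5)             # five observed events at t = 0..4
y_obs = t_obs.astype(float)      # each observed value equals its time

def hypothesis_a(t):
    """The 'obvious' projection: the value simply tracks time."""
    return t.astype(float)

def hypothesis_b(t):
    """Matches hypothesis A on every observation so far, because the
    correction term t(t-1)(t-2)(t-3)(t-4) vanishes at t = 0..4,
    but diverges on the very next case."""
    t = t.astype(float)
    return t + t * (t - 1) * (t - 2) * (t - 3) * (t - 4)

# Both hypotheses fit the existing record perfectly...
assert np.allclose(hypothesis_a(t_obs), y_obs)
assert np.allclose(hypothesis_b(t_obs), y_obs)

# ...yet they disagree about what a 'replication' at t = 5 should show.
t_next = np.array([5])
print(hypothesis_a(t_next))      # [5.]
print(hypothesis_b(t_next))      # [125.]  (5 + 5*4*3*2*1)
```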
PJ: In Philosophy, Rhetoric, and the End of Knowledge: A New Beginning for
Science and Technology Studies, you and James Collier write: “The love affair that Western thought has had with the idea of truth as something that is ‘discovered’ or ‘revealed’ finally comes to an end in the world of tomorrow” (Fuller and Collier 2004/2012: 312). What is your conception of (philosophical) truth?
SF: Truth is something that is made possible – as part of a true/false binary – against
the backdrop of agreed assumptions, especially about how to organise the world. In the
absence of such shared assumptions, truth is impossible to determine. What I call the
‘post-truth condition’ is just such a state, which prevails not only today but also
prevailed in the so-called pre-Socratic period at the start of Western philosophy. I refer
here to the sophistic Athens that Socrates encounters in Plato’s dialogues. Plato’s basic
lesson, which has been subject to creative variations over history (perhaps most brutally
by Hobbes), is that truth is a regime that must be imposed. In our postmodern times,
which seem allergic to the idea of concentrated power, truth regimes are ‘constructed’
or ‘co-produced’, reflecting a more diffuse and perhaps even dissipated conception of
power. But in any case, reality itself is ‘naturally’ indeterminate.
PJ: Defined as an intrinsic good, socially constructed knowledge inevitably produces
epistemic injustice. The definition of epistemic injustice is equally concerned with
creation and access to knowledge, but let’s take on one challenge at a time and talk
about creation. Contemporary science is heavily based on privileging some knowledges over others (e.g. Western knowledges are seen as more important than indigenous knowledges) and privileging some groups over others (e.g. female voices have, for most of human history, been seen as less important than male voices). Some authors,
such as Sandra Harding (2011), have therefore started to look at STS through the lens
of postcolonial theory. What are the limits and potentials of Postcolonial Science and
Technology Studies?
SF: I see postcolonial theory in its current form as pretty limited. It’s basically a
form of inverted colonialism that nevertheless aspires to global reach. In practice,
postcolonialism is restricted to undoing the perceived damage caused by Euro-American expansion across the world in the modern period. In that respect, it’s little
more than Western imperialism’s ‘evil twin’. The conspicuous absence of postcolonial
discourse concerning China (minus Taiwan, of course) is striking – but also explainable
given that mainland China remained relatively immune to the more violent forms of
Western expansion during the period of most concern to postcolonialists.
But let’s say we interpret postcolonialism generously in terms of its global ambitions. There remains the question of what sort of damage we are trying to redress,
which could then be the basis for some sense of ‘epistemic justice’. Are we concerned
with, say, giving women as individuals greater voice – regardless of what these women
believe – or giving voice to some conceptual understanding of ‘women’ as implying an
alternative world-view, regardless of which individuals – male or female – express it? I
had already raised this point in The Governance of Science (Fuller 2000) because it
pointed to a deep ambiguity in the politics of multiculturalism that was dominant in the
1980s and 1990s. After all, men and women significantly overlap in their opinions and
world-views regardless of the gender of their birth. The ambiguity has been exacerbated
in recent years by the ‘trans’ movement, whereby, say, individuals born as males claim
to speak as women once they’ve undergone a ‘gender transformation’ of some sort
(which may involve minimal genital change), after which they claim authority to speak
against the opinion of individuals who have retained the female gender of their birth. A
somewhat analogous issue arises in ecology with regard to the meaning of ‘biodiversity’: Are we trying to preserve actual organisms or simply their genetic code? As our
capacity to mine the genomes of organisms for purposes of reproducing them on
demand increases, then one could argue that a species ‘survives’ even if all its living
members cease to exist if their DNA is on record. In that case, what matters is not the
physical individuals but the code that members of the species share, which could be
used to seed new individuals under any of a number of material conditions.
PJ: In 2016, after Oxford Dictionaries announced post-truth as their Word of the Year (Steinmetz 2016), the concept suddenly gained a lot of popularity. However, as you indicate in your answer, and also in a recent article in The Guardian, “science has always been a bit ‘post-truth’” (Fuller 2016b). Please situate our current post-truth condition in a historical context. Why has it risen to prominence today?
SF: Well, this is the subject of my latest book (Fuller 2018). Basically, the distinction between what is true and false is clear only insofar as there is agreement on what
could be true or false. This is what I call ‘modal power’, and it’s not as pedantic as it
sounds. Plato believed that the sort of social and political instability that resulted in the
fall of Athens was due to a plethora of competing frameworks for making sense of
reality, a situation that was routinely exacerbated by free public theatre. Thus, Plato
advocated a kind of ‘monopoly intellectualism’ in which control over the production of
truths and falsehoods was in the hands of a unitary regime, ideally one governed by
philosopher-kings, who would actively restrict the scope for expressing alternative
modes of governance and even modes of being; hence, his notorious ban on the poets
and playwrights. And while Plato believed in presenting philosopher-kings as entitled
to rule by virtue of their superior intellectual nature, he well knew that such matters are
not straightforwardly decided. Indeed, he was a pioneer in such ideas as ‘talent
scouting’ and ‘intelligence testing’, as well as ‘hothousing’ intellectually promising
children to mature into candidate philosopher-kings. However, there are bound to be internal disagreements among these elites, basically between those who want to retain the status quo and those who want a new order.
Following successive updatings of Plato, first by Machiavelli and then by his early twentieth-century follower Vilfredo Pareto, I interpret the differences between defenders of the status quo (‘lions’) and advocates for a new order (‘foxes’) as being
about whether to maintain or change the ‘rules of the game’. It is this sort of ‘meta’
level of conflict, which has the potential to change what people regard as true or false,
that characterises the post-truth condition. But while its historical lineage extends back
to Plato, etc., the post-truth condition has become more evident in our own time through a combination of greater mass education and greater access to the means of both consuming and generating information. And of course, advances in information technology, especially the internet, have been instrumental in all these developments. The
negative gloss typically given to ‘post-truth’, as in the Oxford Dictionaries’ definition,
simply reflects that the lions are now in a position to be more easily challenged by the
foxes.
PJ: In a recent book chapter, I concluded that there is “a poisonous public pedagogy that can be counterbalanced only by a fully developed critical pedagogy of trust” (Jandrić 2018: 110). Would you agree with this conclusion? If so, how should we go about establishing trust under post-truth conditions?
SF: I must confess that I have never found ‘trust’ a helpful concept when discussing
matters relating to knowledge. In fact, relatively early in my career, I dubbed it
‘phlogistemic’ (after ‘phlogiston’), basically because I think the concept is trying to
identify a genuine phenomenon but in an obtuse and potentially obscurantist manner
(Fuller 1996). What gets talked about in the context of ‘trust’ is mainly the distribution
of risk in society. (Here I’m somewhat in agreement with Niklas Luhmann.) If you trust
someone, you basically offload your uncertainty to them, enabling them to act on your
behalf, which includes their taking responsibility for the consequences of their actions.
Seen this way, trust looks more like a form of moral cowardice that hides behind
background claims that we live in such a ‘complex’ world that we have no choice but
to trust others who ‘know better’. Of course, I don’t deny that we engage in such
activities all the time, but I’m not sure that valorising them as ‘trust’ helps us clearly
understand what is at play.
In terms of your original claim, I think that the only truly meaningful sense of ‘trust’
that is required to counteract ‘poisonous public pedagogy’ is our own trust that our
audiences will judge appropriately the various things that they hear or read. This means
that we must make our own arguments as clearly and forcefully as possible, in full
awareness of the spaces into which we speak, which include the audience’s default
settings. In this context, appeals to more high-minded paternalistic notions of ‘trust’ and
‘truth’ can easily backfire by patronising an audience that regards itself as capable of
making up its own mind. And if the audience decides against our own positions, then
we should not assume that this is because they were uneducated, misinformed, etc.
They may simply weight the various values at play differently from us. I would have
thought that if we learn nothing else from the Brexit referendum and Trump’s election,
we should learn that.
The Question Concerning Disciplinarity
PJ: You are a sociologist by training, yet your works are deeply philosophical. Please
outline the relationships between philosophy (epistemology) and sociology of science.
SF: Actually you’ve got this the wrong way round, though it perhaps reflects
something about when you have come to my work. It should be pretty clear that my
work has been always driven by philosophical premises, which in turn reflects the bulk
of my formal training and my underlying interests. It is true that I have only held full
professorships in sociology, and that started once I moved to the UK from the US in
1994. (So there is a point here about the cross-cultural non-translation of disciplinary
differences.) And I’ve now spent more than two-thirds of my academic career in the
UK as a ‘Professor of Sociology’, but my qualifications in the field amount to basically one-half of an undergraduate degree, which I chose because its discipline-based requirements were light and I believed – even at an early age – that one should formally study
philosophy only after having become acquainted with some empirical disciplines. (By
the way, this is the source of my long-standing scepticism of ‘expertise’. I don’t feel I
have an expertise nor have even aspired to it. However, I do believe that I can think for
myself.) In this regard, I was strongly influenced by the nineteenth-century philosophers whom I read about in my teenage years – Hegel, Comte, Mill, Spencer, Nietzsche – all of whom were really grounded in some other field(s) before they turned to
philosophy. It’s perhaps worth mentioning that analytic philosophy – which had an indisputable grip over the discipline even in my youth – already had a reputation for
being excessively self-regarding and intellectually self-contained. So that certainly
turned me off in my undergraduate years.
In any case, the main thrust of your question is easy to answer. If you think that
science is the most impressive epistemic enterprise that humanity has undertaken, then
it’s impossible to make sense of science without a sociological understanding of it,
since science involves many people dispersed in space and time whose collective
product is greater than the sum of its individual constituents. That’s my starting point
for social epistemology – and it should be a no-brainer to any reflective philosopher
who isn’t still spellbound by Descartes.
PJ: And what about psychology, Steve?
SF: Popper, who received his formal training in the field, had basically the right
idea. Psychology gives you an inventory of the capacities and liabilities of the human
mind, on the basis of which you need to decide how to organise people to realise the
sorts of achievements of which science is capable. I think Popper’s dualistic view of
matters – we are each our own best conjecturer and one another’s best refuter – works as an
opening move in the game to get the epistemic whole to add up to more than the sum of
its cognitive agents. But cognitive and social psychology offer a lot more to flesh out
this guiding intuition – and this has been a constant theme in my work. (In fact, I was
one of the proponents of a distinct ‘social psychology of science’ in the late 1980s and
early 1990s, culminating in [Shadish and Fuller 1993], which is still worth reading
today.) Analytic social epistemologists have tried to go down this route as well, but they
are usually trying to rationalise what they already find acceptable or unacceptable about
science rather than envisaging a better way for science to work. It’s the latter, more
prospectively oriented project that has always interested me.
PJ: Social epistemology works across disciplines. Looking at various sources, I
recently compiled several mainstream strategies for such work (Jandrić 2016).
“Multidisciplinarity concerns studying a research topic not in just one discipline but in several at the same time” (Nicolescu 2008: 2). In interdisciplinary research, “an issue is approached from a range of disciplinary perspectives integrated to provide a systemic outcome” (Lawrence & Després 2004: 400). Transdisciplinary research focuses “on the organisation of knowledge around complex heterogeneous domains rather than the disciplines and subjects into which knowledge is commonly organised” (ibid.). Finally, antidisciplinary research “provides the grounds for a critique of the limits on knowledge production in other disciplines” (Kristensen & Claycomb 2010: 6). Which of these approaches suits social epistemology best? Why?
SF: To be honest, there is too much second-order discussion about interrelating disciplines and not enough first-order practice, and so my eyes glazed over when I read your
question. But since you ask, this is what I think. ‘Transdisciplinarity’, the so-called ‘mode 2’ knowledge production, assumes that the important (socio-economic) problems arise from
outside the disciplines, and disciplines simply provide the means to address those problems
in the spirit of collaboration, in which each discipline brings something to the table (Gibbons
et al. 1994). I see this mentality as common to both the old ‘social democratic’ and the new ‘neo-liberal’ ways of thinking about the value of academia in the welfare state (Fuller 2016a:
Introduction). In this context, ‘transdisciplinarians’ can comfortably co-exist with ‘disciplinarians’ because once the latter have solved a given externally defined problem, they can
return to what they normally do in their home disciplines. In this respect, transdisciplinarity
is multidisciplinary without necessarily being interdisciplinary. At a deeper epistemic level,
we might say that transdisciplinarity exists in symbiosis with disciplinarity because they
‘instrumentalise’ reciprocally. Transdisciplinarity instrumentalises the disciplines to solve a
real world problem, which arguably arises because the system of disciplines itself
instrumentalises reality in the Kuhnian sense of imposing templates, or ‘paradigms’, to
make reality easier to process. This effectively blinds the disciplinarians to certain ‘real
world’ issues that in turn create the need for transdisciplinarity.
In contrast, ‘interdisciplinarity’ outright problematises disciplines by suggesting that they are inadequate even to solve their own problems. Interdisciplinarians take the above ‘blind spots’
much more seriously. As Kuhn pointed out for scientific paradigms, a discipline is a
path-dependent entity, whose research horizons are constrained by the world-view
implicit in its foundational theories and methods. This invariably leaves gaps, not only
in the literal sense of disciplines ignoring certain areas altogether but also in the more
figurative sense of their ignoring those researchers who have actually published in
those areas. The one field that has truly come to grips with this matter is Library and
Information Science, which coined the phrase ‘undiscovered public knowledge’ to
characterise the vast majority of published research that remains un- or under- utilised
by the academic community of researchers.
The University of Chicago library scientist Don Swanson (1986) coined the
phrase to dramatise how solutions to long-standing problems may already be
present in the academic literature, but academics are not motivated to read across
fields sufficiently to put the pieces from different disciplines together. So the critique here operates on at least three levels: 1) there’s more stuff than can be reasonably read; 2) disciplinary specialisation exacerbates the problem; 3) as a result, when we ask for money for ‘new research’, we may end up reinventing the wheel, in the sense that the answer may already exist and we just don’t know it. The last point, which I think is quite profound, goes to the question of whether research funding is spent efficiently, given the general state of ignorance among academics of their own avowed body of knowledge. If library and information scientists were
taken more seriously in research policy-making, we could address this problem
properly.
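Swanson’s ‘ABC’ model lends itself to a simple computational rendering. The sketch below is a toy version – the ‘papers’ are invented stand-ins, though the term names echo Swanson’s famous fish-oil/Raynaud’s syndrome case: it flags pairs of terms that never co-occur in any single paper but are bridged by shared intermediate terms across two literatures.

```python
# Toy sketch of Swanson's 'ABC' model of literature-based discovery.
# The 'papers' below are invented stand-ins; the terms echo Swanson's
# fish oil / Raynaud's syndrome example.
from itertools import combinations

# Each paper is reduced to the set of terms it mentions.
papers = [
    {"fish_oil", "blood_viscosity"},        # literature 1
    {"fish_oil", "platelet_aggregation"},   # literature 1
    {"blood_viscosity", "raynauds"},        # literature 2
    {"platelet_aggregation", "raynauds"},   # literature 2
]

def cooccur(x, y):
    """True if some single paper mentions both terms."""
    return any(x in p and y in p for p in papers)

terms = set().union(*papers)
for a, c in combinations(sorted(terms), 2):
    if cooccur(a, c):
        continue  # already connected within one literature
    # B-terms that link a and c only across different papers
    bridges = {b for b in terms - {a, c}
               if cooccur(a, b) and cooccur(b, c)}
    if bridges:
        print(f"undiscovered link? {a} -- {c} via {sorted(bridges)}")
```

Run on real bibliographic data rather than this toy set, exactly this kind of bridging is what Swanson (1986) used to conjecture a fish-oil treatment for Raynaud’s syndrome before any single paper had connected the two.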
PJ: In your approach to social epistemology, how do you resolve the problem of
commensurability?
SF: Well, you’re assuming that incommensurability is something that should be resolved, as opposed to simply managed. After all, when incommensurability is
resolved, it is normally because the previously incommensurable parties have come
into regular communication. This leads them to develop a hybrid language, which
linguists chart as going through pidgin and creole stages before becoming full-fledged
languages in their own right that then supervene over the original incommensurable
languages. That’s basically the story of Latin and especially Arabic, both of which
developed their global reach as trade languages. Indeed, while Latin is the older
language, Arabic was the lingua franca of science until the mass Latin translation of
Arabic translations of Greek sources in the early thirteenth century. In Philosophy,
Rhetoric and the End of Knowledge (Fuller and Collier 2004/2012) I observed that the
logical positivists were striving for something similar with regard to their project for an
an ‘International Encyclopaedia of Unified Science’, with symbolic logic and primitive
observations constituting the lingua franca. More recently, the Princeton historian
Michael Gordin (2015) has recounted this trajectory, focusing on the rise of English
as the universal language of science after the First World War, a time when artificial
languages such as Esperanto and Ido also appeared to be in the running. However, the
US science translator Scott Montgomery (2000) has really got into the cross-cultural side of this matter in a seriously global way, via several books, which I have supported
from the start.
I devoted the mid-section of Social Epistemology (Fuller 1988/2002) to incommensurability. There I noted that both Quine and Kuhn drew inspiration for their contemporaneous yet somewhat divergent accounts of translatability from the Biblical translator, Eugene Nida (1964). His key point was that there are two competing aims of
translation – one is, so to speak, faithfulness to the original source and the other is
faithfulness to the intended audience. Put bluntly, the Bible translation that is most
likely to have the intended effect on today’s Christians is probably not going to be very
faithful to the original sources, since the ancient and modern Christians operate in
radically different semantic spaces. In that respect, incommensurability is never resolved but only managed – at least from an intellectual standpoint. Of course, incommensurability in this sense of competing temporal demands on translation can always
be ‘resolved’ quite literally, once the ancients disappear from both view and memory,
and only the modern translations are left standing for the original. (Once this point is
taken seriously, the classical conception of the humanities becomes justified.)
PJ: Analysing your work, Collin writes:
In our original discussion of explanation in the context of the Edinburgh School,
we distinguished between Type I and Type II accounts, the former explaining the
genesis of theories, the latter only their reception. Fuller never indicates very
clearly which kind he endorses, but we may infer from other discussions that he
sees explanation to be of the latter type. (2011: 190)
Can you clarify your position about the question of explanation? Do you agree with
Collin’s conclusion?
SF: I actually think we need to explain both the genesis and reception of theories,
but the normative import of these two activities – the former of which might be called
‘psychological’ and the latter ‘sociological’ for the sake of convenience – is different, at
least as far as my conception of social epistemology is concerned. And when I say
‘normative import’, I mean whether the explanation justifies the phenomenon explained. My main concern in all this is to avoid ‘path-dependent’ conceptions of
knowledge that effectively turn the existing class of knowers into rentiers who force
those in search of knowledge to follow in the footsteps of those who started the path
(e.g. by passing excessive costs in time and money to get access to the relevant
knowledge, as is commonplace in higher education). This is why I have always
opposed Kuhn’s paradigm-driven account of science, which is basically about tying
the fate of inquiry to extending the vision represented by some exemplary achievement,
such as Newton’s Philosophiæ naturalis principia mathematica (1687). To my mind,
this is comparable to what historians of technology call ‘lock-in’, whereby once some
significant innovation takes hold in a market, then all alternative paths that had been
moving in the same direction lose their incentives to continue, which then leads the
innovator to capture the market, which in turn results in monopolies and their attendant
bottlenecks on capital flow. Put in more familiar epistemological terms, from the
standpoint of the growth of knowledge as something we wish to promote (i.e. the
normative question), it is more important to learn how a particular solution to a problem
became the solution to the problem than how the particular solution itself was reached.
After all, that ‘locked in’ solution may have inhibited the development of more efficient
solutions, which in turn may have resulted in other benefits. I hope you can see how
this line of thinking is related to my long-standing interest in counterfactual historiography as a policy instrument.
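The lock-in dynamic can also be simulated in a few lines. The sketch below is illustrative only, in the spirit of Brian Arthur’s increasing-returns models rather than anything cited in the text; the superlinear reinforcement rule and all parameters are invented. It shows two equally good innovations competing, with whichever gains an early random lead going on to capture the market.

```python
# Toy model of technological 'lock-in': two equivalent innovations compete,
# and each new adopter favours the current leader disproportionately
# (superlinear increasing returns). Which one wins is path-dependent.
import random

def simulate(steps=10_000, seed=0):
    rng = random.Random(seed)
    adopters = [1.0, 1.0]                 # one early adopter of each rival
    for _ in range(steps):
        w0, w1 = adopters[0] ** 2, adopters[1] ** 2
        choice = 0 if rng.random() < w0 / (w0 + w1) else 1
        adopters[choice] += 1
    return adopters[0] / sum(adopters)    # final market share of option 0

# Identical rules, different histories, different 'winners':
for seed in range(5):
    print(f"run {seed}: option 0 ends with {simulate(seed=seed):.0%} share")
```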
PJ: I’m glad you mentioned Newton! My first degree was in physics at the University of Zagreb; following in Newton’s (and later Einstein’s) footsteps, we were taught (and many took for granted!) that the role of science is to find the theory of everything, or the grand theory. As I developed an interest in philosophy, however, I became aware of various counter-arguments, including but not limited to Gödel’s incompleteness theorems. Please describe your philosophical take on grand theory.
SF: My answer here will be short. There’s nothing wrong with pursuing grand
theory as long as the payoffs are made clear not only to those pursuing grand theory but
also to those whose status would be changed as a result of it. But Gödel’s theorems are
beside the point when it comes to such issues. People who don’t like grand theories generally believe that such things are indeed possible, but they don’t like the consequences – especially the misrepresentation of the phenomena covered by them.
The Curious Relationship between Science and Religion
PJ: Speaking of grand theories, we are just one small step from religion. In The New
Sociological Imagination, you write:
There is nothing inherently antagonistic about the relationship between science
and religion that requires ‘bridging’. Modern science is an outgrowth of the
secularization of Christendom, itself a descendant of the medieval Islamic quest
for a unified understanding of a reality created by a God who is bound by his own
actions (Fuller 2006b: 131).
A bit later, you continue: “The idea that Science and Religion – in their capitalised forms – have been in perennial conflict is a Western myth invented in the last quarter of the nineteenth century” (ibid.: 132).
How do you define religion, Steve? Please outline your views on the relationship between science and religion.
SF: People get unnecessarily aggravated by this issue. The current usage of
‘religion’ dates from the mid-nineteenth century to describe ways of governing people over
large regions of space and time without the modern nation-state. In short, it was
originally a synonym for ‘non-modern’ or ‘pre-modern’, but in any case it was a
residual category. This accounts for the fact that the great world-religions range from
being polytheistic to atheistic, from being inward- to outward-looking, from being
highly ritualistic to being very anti-ritualistic. While religions are naturally clustered
according to common intellectual ancestry, they really share nothing that could be
characterised as an ‘essence’ that might help or hinder the advancement of science. In
particular, the tenacious holding of beliefs is certainly not unique to religion. Yet it is
this last point that usually animates people when they ask me about the science–religion relation. They seem to assume that scientists are more sceptical than religious people, even though both parties display considerable scepticism towards common sense, while advancing their own quite counter-intuitive views. (In this respect, Thomas Reid’s ‘common sense’ philosophy in the late eighteenth century should be seen as a tactical retreat for religion comparable to science’s own tactical metaphysical retreat via ‘operationalism’ and ‘instrumentalism’ in the mid-twentieth century.) Where
science and religion might be said to decisively diverge is over the methods used to
validate their respective claims.
PJ: Our views about the relationship between science and religion may differ, yet I think we can easily agree that science is not religion. Please describe the general problem of demarcating science from non-science. What is the difference between believing that E = mc², believing that the meek will inherit the kingdom of God, and believing that vaccines are bad for your kids?
SF: My view about the science vs. non-science distinction is rather close to Karl
Popper’s – namely, that it has nothing to do with the content of knowledge claims but
rather with the conditions under which those claims might be given up or substantially
revised. In particular, Popper stressed that a ‘scientific attitude’ seeks the falsification of
even one’s most cherished beliefs. Of course it doesn’t follow that those beliefs will be
eventually overturned, but it does mean that you really can’t be considered ‘scientific’ if
you’re not willing to undertake a high degree of risk. In my graduate school days, I had
already intuitively seen the similarity between Popper’s falsifiability principle as a basis
for continuing – rather than abandoning – faith in science as a mode of inquiry and
Pascal’s wager as a basis for continuing – rather than abandoning – faith in God’s
existence. While my Ph.D. already bore evidence of this insight, I have spent much of
the rest of my career trying to appreciate and articulate its full implications.
PJ: Please describe your position in the debate between intelligent design and
evolution.
SF: The most persuasive argument for intelligent design is that it would never have
made sense for us to try to understand the entirety of the universe unless we thought
that we had some special relationship with the (divine) agent who is ultimately
responsible for it. It’s certainly not necessary to understand all of reality – let alone
in terms of a systematic set of laws or organising principles – in order to survive and
even flourish within the constraints normally provided by our bodies and senses.
Indeed, most of the world’s cultures have conducted themselves within just those
constraints, so that even when they have admitted a much larger world beyond
immediate experience, they have generally regarded it as unfathomable at the cognitive
level. (Of course, many cosmologies stress the need to be in harmony with nature, but
this attitude does not generally involve turning nature into an object of knowledge.)
Their cosmologies have not stressed as much as ours the fundamental ‘intelligibility’ of
reality, which is to say, its inherent tractability to our minds. As Leibniz, Kant,
Whewell, Peirce and other modern philosophers of science have stressed, such intelligibility is ‘transcendentally’ required of scientific inquiry. (Of course, they differed
over whether this point was sufficient to prove the existence of God.)
For my own part, I regard the intelligibility condition as basically a secularisation of
the Abrahamic religious conception of humans as having been created in imago dei.
Historians of science and religion such as Alistair Crombie, Amos Funkenstein and
Peter Harrison have detailed the relevant theological moments, which focus on the thirteenth-century revival of St Augustine’s emphasis on the fallen state of humanity as both a
reminder and an incentive for humans to appreciate that our grasp of reality has been in
the past – and could be in the future – much more profound than that provided by the
ordinary deliveries of the senses, which can too easily seduce us to remain in our fallen
animal natures. It is not too much of a stretch to see this fixation on overcoming the Fall as the basis for, say, the experimental approach pioneered by Francis Bacon, which basically uses the senses to interrogate the senses. Equally it provides the theological basis for distrusting induction in the manner that Popper popularised for the secular twentieth century.
PJ: Are you saying that religion is a precondition for science?
SF: The point is that we wouldn’t have gone down the path of modern scientific
inquiry at all without the predominance of the world-view associated with the
Abrahamic faiths. As Thomas Henry Huxley very much realised in the ‘Romanes
Lecture’ (1893) that he gave near the end of his life, if the twentieth century proves to be
totally devoid of divinity with regard to our understanding of humanity, then it is
difficult to see how science will continue to progress. As Huxley himself admitted,
science’s seemingly boundless progress has been based in a self-aggrandising sense of
humanity, of the sort exemplified by Newton’s world-system, which Darwin himself – a product of this progressive spirit – radically undercut by reducing us to simply one among many animal species.
But perhaps more to the point is humanity’s very high metaphysical self-regard:
Why else would we continue to devote so many resources to this very pursuit of
science, given the radical transformation – not to mention destruction – that has resulted
for our planet? After all, even if one – as I do – inclines to believe that science’s balance
sheet shows many more benefits than harms, there is no denying that science has
increased the level of risk in the world. But as I have argued in The Proactionary
Imperative (Fuller and Lipinska 2014), risk is something that can be feared or embraced, and ‘science’ can be understood as a challenge for us to recover our divine birthright – a project whose religious and scientific sides alike have in recent times been most clearly understood, including the heroic attitudes that are required, by the Russian ‘Cosmist’ movement (Young 2012). To be sure, most of the world’s cultures have now adopted a
broadly ‘scientific’ perspective – largely through the triumph of capitalism in its various
forms, if we are to be brutally honest. I think that philosophers – and even theologians –
have yet to take seriously the distinctiveness of thinking about the world as having been
brought about in a way that enables us, at least in principle, to understand it as a
systematic unity. I think that this is the great contribution of the Abrahamic religions,
and it explains why the original theories of evolution – most notably Lamarck’s – were
progressive in orientation. In short, I think that science could turn out to be a fool’s
errand unless we believe that we really have a chance of acquiring ‘God’s point-ofview’ in some literal sense.
PJ: Some critics dismiss your opinion about intelligent design on the basis of argumentum ad hominem and claim that you do not have enough knowledge about biology, chemistry, and physics to make informed judgements. As a typical (and sometimes very poisonous!) fallacy of irrelevance, argumentum ad hominem is inevitably wrong – and engaging in such behaviour is a clear display of bad manners.
Looking beyond personal offence, however, an interesting question remains: How
much knowledge about the physical world is required to make an informed opinion
in this debate? More generally, please describe your take on the rule of experts. Should
it remain, or should it be replaced by the rule of everybody, the rule of somebody, the
rule of nobody?
SF: I don’t begrudge my critics for their ad hominem arguments against me. I think
their arguments are wrong but not unfair. As long as speaker competence matters to
establishing a knowledge claim, argumentum ad hominem is unavoidable. This point
becomes obvious if you think of the appeal to expertise as no more than the positive
version of the ad hominem argument. A good way to see this point is in my testimony
as an expert witness for the defence in Kitzmiller vs. Dover Area School District (U.S.
District Court for the Middle District of Pennsylvania 2005), the major US court case
involving the teaching of intelligent design. The bone of contention was whether public
high school science teachers could be obliged to read a statement saying that there are
alternatives to Darwin’s theory of evolution by natural selection – including intelligent
design – books about which could be found in the school library. Teachers were not
forced to teach intelligent design, but simply to say that it exists. Nevertheless, given
the rather broad interpretation of the US Constitution’s ‘separation of church and state’
clause currently in fashion, the fact that the school board was promoting intelligent
design for largely religious reasons made it easy for the judge to rule in favour of the
plaintiff. Afterwards I published two books on the larger philosophical issues surrounding the case and offered a reflection on the tenth anniversary of the decision, where my
attitude was one of je ne regrette rien (Fuller 2015c).
One of the first things I said under oath during the trial was that historians,
philosophers and sociologists are more expert on the nature of science than professional
scientists. No doubt this statement by itself earned me a lot of enemies, but I stand by it.
After all, scientists are primarily trained as specialists, which inclines them to defer to
other specialists whenever they feel they’ve exceeded their epistemic jurisdiction. Of
course, if they understood science as something with a common nature that transcends
specialist differences – in the manner of a historian, philosopher or sociologist – they
might be bold enough to offer an argument in their own name rather than simply defer
to their own positive version of the ad hominem.
PJ: So what, then, is expertise?
SF: Expertise in the first instance is about control over epistemic jurisdiction
– who has the right to frame an issue, on which a judgement might be reached
and a decision taken. Thus, there is no straightforward answer to the question of
‘how much knowledge of the physical world is required to make an informed
opinion’. It depends on how the issue under discussion is framed. In this respect,
appeals to expertise often involve epistemic overkill. You don’t need to invoke
Newtonian mechanics to explain why you’ll fall to your death if you walk out of
the window of a tall building. (I raise this blunt example because scientists
frequently invoked it in the Science Wars to ‘refute’ their imagined opponents.)
Moreover, Newtonian mechanics scores no big victories because it can explain
that fact. Newtonian mechanics scores big only if you want to connect that fact
with the movement of all the other bodies in the universe – post-Einstein,
moving much slower than the speed of light. For Newton, this was a big win
because he ultimately wanted to provide an account of the physical reality that
does justice to God’s capacity to know and act in all places at all times.
However, if you are free of that metaphysical burden, then you can explain
why walking out the window is fatal in ways that are much closer to people’s
default ways of understanding the world. This is basically what Aristotle had
done – and his practice is mirrored across most of the world’s cultures.
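(A minimal worked example of my own, assuming a hypothetical 50-metre building: the impact speed follows from the elementary free-fall relation

    v = \sqrt{2gh} \approx \sqrt{2 \times 9.8\,\mathrm{m/s^2} \times 50\,\mathrm{m}} \approx 31\,\mathrm{m/s},

a fragment of kinematics available in essence since Galileo. The full Newtonian apparatus earns its keep only when that same relation is subsumed under laws governing every body in the universe.)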
And so, if experts are primarily expert in framing, then it is certainly incumbent on
people to understand the frames of references that are proposed by various competitors
who are presenting themselves as ‘expert’ on some issue, but in the end it is up to the
people to decide what to believe and how to act on the basis of it. The ultimate test of a
democracy is whether it allows people to live for a specific period – say, an election
cycle – with the consequences of their own collective decision-making even when they
might entail significant hazard. Representative democracies of the sort championed by,
say, the UK’s parliamentary system amount to half-hearted endorsements of democracy, since the citizenry rarely vote directly on specific policies – except in the case of
referenda, à la Brexit, which gives one food for thought (Fuller 2018: chap. 1).
PJ: The world of science is not all sunshine and roses, and our validating procedures
for accepted scientific knowledge are far from perfect. What does it mean to validate
scientific knowledge?
SF: I think of validation pretty straightforwardly. Validation is the process whereby
knowledge claims are submitted to various tests, which are then publicly available for
inspection, on the basis of which people decide how, if at all, to take the claims forward.
What makes Kuhn-style ‘normal science’ distinctive in this matter is the paradigm-driven scientific community’s willingness to cast something approximating a bloc vote,
such that everyone in the science draws more or less the same conclusions from, say,
the result of a particular experiment. But of course, in the social sciences and the
humanities, people display much greater variation in their response to such ‘tests’,
which in turn serves to differentiate their various ‘schools’. And the difference between
‘Kuhnian’ and ‘non-Kuhnian’ epistemic worlds largely boils down to a difference in
training in how to interpret empirical phenomena. I don’t think that the non-Kuhnian
world is any less intellectually ‘rigorous’ than the Kuhnian world. But it is much more
tolerant of opposing views co-existing in the same general field of inquiry. And to his
credit, Kuhn was pretty honest that normal science requires an authoritarian mode of
governance – that was because he believed that too much tolerance of different
interpretations of empirical phenomena simply destroys the validation process altogether. Any debate about scientific validation should start by examining that specific
claim of Kuhn’s.
PJ: In 1996, the physics professor Alan Sokal conducted a famous practical
experiment in the validation of scientific knowledge. Sokal submitted a fraudulent article
to the academic cultural studies journal Social Text, and after the article was peer
reviewed and published, he revealed that its content was nonsensical (Sokal and
Bricmont 1998). This experiment, popularly known as Sokal’s Affair or Sokal’s Hoax,
has provoked wide public debate. A few months after Sokal revealed his hoax, in a
Letter to the Editor published in the Times Literary Supplement, you wrote that the editors’
‘actions seem to imply that they believed Sokal’s piece to be sufficiently well-crafted to
merit academic discussion’, and that you ‘would stand behind the editors in arguing
that it is better to have this point revealed in open debate than to have had the article
censored in the editorial board room’ (Fuller 1996). Later on, you wrote about the
Sokal affair on several occasions, including, but not limited to, your book The Philosophy of Science and Technology Studies (Fuller 2006a: 102 ff). Please describe your
position in the Sokal affair. After more than two decades, please assess its (historical)
significance.
SF: Well, much more than I had realised at the time, the Sokal Hoax marked the
beginning of the end of STS as a radical movement within the study of science and
technology. As you quoted, I originally urged that the validity – or not – of Sokal’s
article should be determined by the use – or not – that people made of the article. After
all, that standard would have been more in line with the broadly ‘deconstructive’ spirit
of STS, which holds that meaning is not something invested in works by their authors
but rather something that is derived – if at all – by those who read the work. Indeed, had
a lot of scholars found Sokal’s piece so illuminating as to contribute productively to
their own work, then whatever errors or frauds that Sokal seeded in his article would
lose epistemic significance. I realise that saying things so baldly makes it appear that
I’m indifferent to the truth. On the contrary, I am very much concerned with the truth –
but I also realise that things are never quite as they seem. Put bluntly, the search for
what’s true in false theories has probably done more to advance the history of science
than the simple building on theories that are already presumed to be true.
The Postdigital Human
PJ: Between 2011 and 2014 you extensively published about the idea of a ‘post-’ or
‘trans-’ human future. What does it mean to be human, Steve? What is the difference
between posthumanism and transhumanism?
SF: We shouldn’t be sentimental about these questions. ‘Human’ began – and I
believe should remain – as a normative not a descriptive category. It’s really about
which beings the self-described, self-organised ‘humans’ decide to include. So we
need to reach agreement about the performance standards that a putative ‘human’
should meet that a ‘non-human’ does not meet. The Turing Test serves to focus minds
on this problem, as it suggests that any being that passes behavioural criteria that we
require of humans counts as human, regardless of its material composition. While the
Turing Test is normally presented as something that machines would need to pass, in
fact it is merely a more abstract version of how non-white, non-male, non-elite
members of Homo sapiens have come to be regarded as ‘human’ from a legal
standpoint. So why not also say ‘non-carbon’ in the case of, say, silicon-based
androids?
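To make the substrate-blindness of such a criterion concrete, here is a minimal sketch of my own, assuming purely hypothetical behavioural tests and thresholds rather than any agreed standard:

    # Toy sketch (illustrative only): membership in the 'human' category is
    # decided by behavioural performance standards, never by material composition.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        substrate: str     # 'carbon', 'silicon', ... deliberately never consulted
        test_scores: dict  # hypothetical behavioural test results, scaled 0.0-1.0

    # Hypothetical performance standards a polity might agree on.
    PERFORMANCE_STANDARDS = {'language': 0.7, 'reasoning': 0.7, 'reciprocity': 0.6}

    def counts_as_human(candidate: Candidate) -> bool:
        """Turing-Test-style criterion: pass every behavioural threshold."""
        return all(candidate.test_scores.get(skill, 0.0) >= threshold
                   for skill, threshold in PERFORMANCE_STANDARDS.items())

    android = Candidate('android', 'silicon',
                        {'language': 0.9, 'reasoning': 0.8, 'reciprocity': 0.7})
    print(counts_as_human(android))  # True: composition plays no role in the verdict

Whatever thresholds one plugs in, the decisive design choice is that the substrate field never enters the test, which is exactly the abstraction the Turing Test enforces.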
The difference between posthumanists and transhumanists turns on whether the
‘human’ is itself the ultimate locus of value. Transhumanists say ‘yes’ (but humans
haven’t yet distinguished themselves sufficiently from animals) and posthumanists say
‘no’ (and humans have already distinguished themselves too much from animals). The
affective issues aside, these two sides are in disagreement about appropriate performance standards. Transhumanists believe that the advanced levels of intelligence to
which we have held ourselves and machines are necessary for the granting of legal
rights, whereas posthumanists are less preoccupied with intelligence than with life itself
and thus a standard that places humans on the same level as other animals in terms of
entitlement to flourish as a species.
PJ: For you, social science is clearly a moral project (Barron 2003; Fuller 2011).
Why?
SF: The short answer, referring to my answer to the previous question, is that
‘human’ is a normative not a descriptive category. Homo sapiens is the prime candidate
for being counted as ‘human’, but the history of the concept of humanity shows that
even Homo sapiens has had to earn the title. ‘Sociology’ as an idea that links Auguste
Comte and Emile Durkheim – the one who conceived of it as a political movement and
the latter as an academic discipline – is ultimately about accepting the assumption that
you cannot be ‘human’ on your own, simply because you have, say, the right genetic
makeup. Humanity is a collective achievement or nothing at all. This is a truth that
applies not only to the Abrahamic religions but also to both of their main secular
incarnations, capitalism and socialism. In the case of capitalism, where this point may
seem less obvious, the relevant site of the ‘collective’ is of course the market, which
doesn’t exist unless there is a division of capacities – or ‘labour’ in that broad sense –
that requires the need for trade between at least two parties, each offering what the other
lacks. (It is interesting that in the Abrahamic theologies, there has been a long-standing
discussion about whether God needs to create the world in order to prove his own
divinity. Some theologians argue that this would make God seem too much like
humans. However, the alternative would seem to be that the world’s creation is an
arbitrary divine act.) It is also worth adding that the two great modernist traditions in
ethics – Kantianism and utilitarianism – which on the surface seem ‘individualistic’ in
fact presuppose a collective orientation in the passing of moral judgement. Kant’s
categorical imperative compels one to imagine a world in which everyone did what one
now proposes to do under similar circumstances, whereas Bentham’s utility calculus,
while extracted from individuals, requires their aggregation to deliver a piece of
legislation.
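(A minimal formalisation of that contrast, with notation of my own rather than Bentham’s: writing u_i for the utility elicited from individual i and p for a proposed piece of legislation, the utility calculus selects

    W(p) = \sum_{i=1}^{n} u_i(p), \qquad p^{*} = \arg\max_{p} W(p),

so that moral judgement is passed on the aggregate W, never on any single term of the sum.)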
PJ: At the Second International Knowledge and Discourse Conference, held at the
University of Hong Kong in June 2002, you and Bruno Latour staged a very popular
public debate. You ‘suggested the motion of the debate, that “A strong distinction
between humans and non-humans is no longer required for research purposes”’ (Barron
2003: 78). Sometime in the middle of the debate, you said: ‘actually I don’t draw your
sharp line between the moral project of social science and the empirical project’ (in
Barron 2003: 87). What is the relationship between the moral project and the empirical
project of social science?
SF: The ‘moral’ moment in social science is exactly what Max Weber meant by the
‘value relevance’ of research – namely, what you choose to study, most importantly at
the level of ontology. It is significant that when Weber and his colleagues formed the
first German professional society for sociologists, they excluded race from their
considerations, even though that was a hot topic among those who might be considered
founders of this emerging discipline. They also excluded animals, even though the antiDurkheimian version of sociology that came to France via the translation of Herbert
Spencer had offered a vision of sociology rather close to today’s ‘sociobiology’ or
‘evolutionary psychology’ (Gabriel Tarde, whose fortunes Latour has revived in recent
years, was the leading figure of this group). Indeed, these same French thinkers also
had a rather robust conception of technology’s role in shaping the human life-world,
including the sorts of human-organism-machine analogies that Norbert Wiener would
popularise in the mid-twentieth century as ‘cybernetics’ (Alfred Espinas, who received
the first French Ph.D. in ‘Sociology’, is especially noteworthy here). In contrast, for
their part, the Germans tended to regard technology in purely expressive or instrumental terms – in other words, with Homo sapiens always in the driver’s seat. Taken
together, all of these choices about what to include in and exclude from the purview of
‘sociological’ inquiry frame the sociologist’s understanding of the ‘human’, which in turn
determines which sorts of data are considered relevant, meaningful, etc.
PJ: Please describe your position on Kant’s distinction between autonomy and
heteronomy in the context of social science.
SF: The distinction is pretty straightforward. It involves two different perspectives
from which the human may be understood. If you’re ‘autonomous’ you regard yourself
as the ultimate source in anything you say, think, and do. In that sense, you are always
the responsible party. If you’re ‘heteronomous’ you see yourself as simply a site where
various forces play themselves out – class, race, gender, the usual sociological variables
– and you are not responsible. Existentialism is the philosophical and cultural movement that has most creatively played with this tension. After all, both perspectives exist
simultaneously in the human condition, something which the criminal justice system
routinely needs to resolve. But Kant’s original point was that you’re not really human
unless you conceive of yourself as autonomous, regardless of your material circumstances. In this respect, he was drawing on classical Stoicism but also his own rather
stripped down Christian upbringing, which frowned upon the rather mythopoeic idea
that morality was about choosing God over Satan in some eternal battle for control over
our souls. Kant doesn’t allow you to say ‘I am doing God’s will’ to justify your actions.
You may think you’re doing God’s will, but if your actions are truly moral then you
must personally authorise them as well. It’s at that moment that you claim your
birthright as having been born in imago dei. The bottom line is that you present
yourself as a moral agent, not a mere moral vehicle.
This point relates interestingly to social science, when one considers that Kant’s
version of Christianity is often associated with Calvinism. The key point about Calvin
is that a Christian never knows whether God regards what s/he does as the right thing
because of humanity’s fallen state, but s/he does it anyway because s/he thinks it’s the
right thing to do – and it may well turn out to be right. This open embrace of radical
uncertainty in the face of unknown consequences is the hallmark of not only Kant’s
ethic but also Popper’s falsificationist ethic, which purposely does not presume that
everything that has been thought to be true is necessarily so. This in turn makes life one
big test case. In Fuller (2003), I remark on Popper’s having been influenced in his
youth by the German translation of Kierkegaard. I think this makes the historical
connection reasonably clear.
But in terms of today, what might be called the ‘rump end’ of socialist social science
comes dangerously close to the sort of paternalistic attitude to which Kant’s vision of
Enlightenment was opposed. In other words, by so heavily stressing the heteronomous
side of the human condition, these so-called ‘leftists’ are in fact disempowering the
very people that they claim to want to empower. More concretely, I mean leftist claims,
which when stripped of euphemism, amount to saying that ordinary people these days
can’t think straight because they’re too busy doing ‘bullshit jobs’ (when they still have
jobs), being distracted, being fooled, etc. Even though no one disputes that we live in an
increasingly globalised capitalist economy, nevertheless people are also increasingly
informed about that fact – through social media, if nothing else – and hence are in an
increasingly better position to decide for themselves what to do. To be sure, many of
the decisions taken in this newfound state of freedom have contradicted what the social
science ‘experts’ would deem to be in those people’s best interests, and it’s by no means
clear that those decisions were the right ones. Nevertheless, what matters from the
standpoint of autonomy is that the people taking the decisions are assuming responsibility for the consequences. A telling indicator is that claims that people were ‘systematically misinformed’ about the Brexit campaign and the 2016 US presidential election
continue to be seen by the general public as patronising.
PJ: In Humanity 2.0 you write: ‘Here we move into what may be the most controversial aspect of my position, namely, that the active promotion of a certain broadly
Abrahamic theological perspective is necessary to motivate students to undertake lives in
science and to support those who decide to do so’ (Fuller 2011: 180). A bit later, you
write: ‘It is very unlikely that science would have taken the course it has – and valued as
much as it has been – were it not for the Abrahamic belief that humans were created in the
image of God’ (Fuller 2011: 183). In her review of your book, Sabrina Weiss objects that
‘the presumption of Abrahamic theology as the best tradition to use is ignorant of benefits
offered by other religions’ (Weiss 2012). Why do you think that the Abrahamic tradition
is the best direction for the future development of science?
SF: First, it’s a no-brainer to say that there are benefits to the human condition
offered by non-Abrahamic religions. Of course! But whether non-Abrahamic religions
would have ever produced science in the form that we recognise – let alone an
improved version of it – is another matter entirely. Notwithstanding the strictures of
political correctness, I seriously doubt that science in our modern sense would have
arisen outside of the Abrahamic cultural orbit. Certainly Buddhism and Daoism – the
two most frequently touted candidates – had plenty of time to produce something
comparable but didn’t manage to do it, and it wasn’t because Westerners prevented
them from doing so. The most obvious reason is that they had no interest in doing so,
which was due partly to their lack of a strong sense of humanity’s privileged standing in
the cosmos. However, people who call for a ‘Buddhist’ or ‘Daoist’ approach to science
today are generally mindful of the destructive and destabilising consequences of the
science-driven technology over the past 200 years that has been motivated by
Abrahamic hubris. Certainly no one on either side of this argument denies the
destabilising effects of modern science. Here it is worth recalling that Herbert
Butterfield, who in the 1930s first proposed the idea that a ‘Scientific Revolution’
had occurred in seventeenth-century Europe, deemed it the second greatest moment
in human history after the birth of Jesus. As a Christian, he meant this as a compliment.
And even an avowed atheist like Richard Dawkins promotes science in the same spirit
as the Protestant evangelists began to promote the Bible during the Scientific Revolution. (Recall that both Galileo and Francis Bacon saw the Bible and Nature as the two
‘books’ through which God communicated with us.) The question is whether all this
disruption has been for good or ill, especially in its current manifestations.
I am thus led to read charitably the current enthusiasm for non-Abrahamic approaches to knowledge as expressing a desire to impose a ‘precautionary’ regime on
today’s science and technology, scaling back the dreaded ‘Anthropocene’, whereby we
have come to be the most decisive causal factor on the disposition of the planet – much
for the worse, according to these critics. And while I don’t wish to downplay anthropogenic climate change, etc., nevertheless I believe that these problems will be solved
only by greater application of science and technology. But that still leaves many options
on how to proceed. My own view is that we shall simply adapt as a species to climate
change. Of course, some populations will be at greater existential risk than others, but
those are likely to be the same populations who have been vulnerable since the start of
the Industrial Revolution, namely, the poor and the chronically disadvantaged. In that
respect, I think that much of the moral censure surrounding inequality and underdevelopment has simply carried over into the critique of science and technology – which
of course is not to render that critique any less legitimate.
PJ: In Humanity 2.0 (Fuller 2011) you explore the relevance of theology to the
future of humanity. What is ‘Theology 2.0’; why do we need it?
SF: Well, as many of my previous answers have indicated, the modernist impulse
that resulted in the strong identification of humanity with the advancement of science
and technology was just a secular continuation of the original Abrahamic motive for the
full-throated pursuit of knowledge from a presumptive state of ignorance. In a nutshell:
if there is a path to salvation after the Fall, it is through science and technology. In the
modern era, we called this belief ‘progress’, which was often portrayed as ‘building a
heaven on Earth’. I’m still a believer, but I think that the next phase for this ‘faith’ is to
explicitly remove the barriers that separate science and religion. We live in a time when
it is politically correct to follow Stephen Jay Gould’s influential formulation that science
and religion are ‘two non-overlapping magisteria’ – or ‘separate but equal’, to put it in
the more brutally realistic terms of the US Supreme Court. While this settlement has
been portrayed as a stopgap to prevent conflict between science and religion, all that it
has really done is stunt the growth of both sides of the divide. My embrace of intelligent
design theory was motivated by my felt sense that this situation needs to be reversed –
even in nominally ‘secular’ societies; hence the need for ‘Theology 2.0’.
PJ: In recent years I have conducted many interviews with Peter McLaren on
liberation theology (McLaren and Jandrić 2017a, 2017b, 2018; see also McLaren and
Jandrić 2015); these days, we are wrapping up these interviews into a book. What is
your take on liberation theology and especially José Porfirio Miranda’s claim that ‘The
eschaton of Marx, which is the same as that of the gospel, is what gives meaning to
history’ (Miranda 1980: 307)?
SF: As should be clear from what I have said so far, I think that Miranda is
essentially correct, especially in terms of the collective nature of salvation. Under many
interpretations of both Christianity and Islam, one’s own salvation cannot be secured
unless one has also tried to save others. Hence the stress that these religions place on
proselytism and evangelism, which in turn has been a source of tension with secular
authorities in the modern era. This is a feature that socialism and other modern
progressive movements have picked up from these religions. One can see Marx
evolving to this position in his famous early work, The German Ideology (1932),
basically a critique of the Hegel-inspired liberal theology that inspired the strategy of
‘consciousness raising’ that would be explicitly articulated a few years later in The
Communist Manifesto (Marx and Engels 1976/1848). This became the signature idea of
Marxist-inspired socialism in the twentieth century, epitomised in the idea that socialism is impossible in one country: No one is liberated until everyone is. Liberation
theology easily made common cause with Marxists under those conditions, especially
in Latin America and Africa.
PJ: What is the place of liberation theology in your Theology 2.0?
SF: I was actually trained by the Jesuits during the period – the 1970s – when
liberation theology was probably at its peak, both as a political strategy and a mode of
theological inquiry. My teachers, who were very hostile to the US involvement in the
Vietnam War, gave a ‘liberation spin’ to much of their religious pedagogy. They
basically argued that it was impossible to live a full Christian existence if one’s soul
is in a state of captivity because of excessive political and economic strictures. The
Marxo-Freudian term ‘alienation’ was the preferred way to talk about this captivity
when I was a student – especially in theology classes. I should also say that my teachers
had a much more ambitious sense of an ‘unalienated’ human being than their Marxist
fellow-travellers. Here the role of the heretical Jesuit scientist-theologian Pierre
Teilhard de Chardin should not be underestimated as providing a metaphysical framework for understanding liberation theology. The Vatican permitted the publication of
Teilhard’s works only after his death – and that was largely due to eminent biologists
like Julian Huxley and Theodosius Dobzhansky, who were sympathetic to his rather
spiritualised version of ‘evolutionary humanism’.
It is easy to forget that the intellectual traffic between science and religion was
markedly better 40–50 years ago than it is today. You’ll see that Teilhard is discussed
in some detail in Humanity 2.0 (Fuller 2011), since he clearly anticipated much of the
agenda associated with contemporary ‘transhumanism’ as part of a species-wide spiritual awakening, which he characterised in terms of the emergence of a ‘noosphere’.
That was his own version of H.G. Wells’ ‘world-brain’ and Marshall McLuhan’s ‘global
village’, all of which referred to technologically-driven step-change improvements in
human telecommunications, from the telegraph and the telephone through radio and
television – and of course nowadays distributed computer networks – all running in
parallel to permit in principle any individual to have the collective knowledge of
humanity at their fingertips. Very much like today’s transhumanists, Teilhard believed
that this technological trajectory would open up new paths for development that would
accelerate the sorts of changes that biological evolution had made possible.
PJ: With some wisdom of hindsight, please give your opinion on the motion of the
debate between you and Bruno Latour: ‘A strong distinction between humans and non-humans is no longer required for research purposes’ (Barron 2003).
SF: Although the debate got considerable local press coverage, the results were pretty
inconclusive, and I’m not even sure that the italics in the published version really captured
the emphases that Latour and I were placing on what we said. It’s worth recalling that the
debate took place in 2002 in Hong Kong, where it was treated as something of a spectacle.
There was a page-long story in the South China Morning Post, the leading English-speaking newspaper, with photos of Latour and me in full flow. But I’m not convinced
that the audience really understood all the different levels of contestation. In any case,
Latour’s position was prima facie the more plausible, since he seemed to be simply
arguing that all phenomena should be treated equally, whether they come from something
human, natural or artificial. I don’t think that people saw the abdication of responsibility
that this position implied – going back to the point I earlier made about Max Weber and
the importance of ‘value relevance’ of research. After all, to say that everything that exists
has a bearing on everything else that exists avoids the question of what matters and what
doesn’t matter. (Around this time, Latour was beginning to introduce the idea of ‘matters
of concern’, but that is really more about our receptiveness to phenomena rather than any
valuation of them.) From that standpoint, I appeared to be arguing for a style of ‘politically
engaged’ research that had largely gone out of fashion with Jean-Paul Sartre and the ‘68
generation and certainly had no place in the emerging neo-liberal world order in the wake
of the fall of the Soviet Union. If nothing else, Latour was attuned to that transition and has
repeatedly made a point of distancing himself from anything that might recall the ‘critical’
sensibility of the ‘68ers – and hence in his native France he was always a thorn in the side
of Pierre Bourdieu.
As for myself, it was clear that I was thinking about the ‘human’ as a future
projection, which is continuous with the idea that ‘humanity’ is a progressive project,
regardless of the historicity of the Fall. (I remember Latour becoming very incredulous
when I used the Enlightenment phrase ‘project of humanity’ in response to a question
after the debate.) So I have remained committed to privileging humanity in the order of
things. But I have become more open-minded about what form the ‘human’ might take.
After all, the idea that humans are even related to – let alone descended from – apes,
stems only from the mid-eighteenth century. Before that time, thinkers who speculated
about other intelligent creatures both on Earth and in the Heavens imagined something
closer to the relatively featureless androids of old science fiction movies than, say,
Planet of the Apes (Schaffner 1968). Indeed, I have come to think that there is
something to the transhumanist idea of ‘morphological freedom’, which might be
regarded as liberalism’s final frontier. Now that we allow people to change class and
even gender, switching race cannot be too far behind, and behind that is the
prospect that we might cognitively ‘uplift’ animals while we ourselves shift from
carbon to silicon form, via brain emulation or mind uploading.
In contrast, my sense of Latour since the 2002 debate is that he’s become even more
‘grounded’ and ‘materialist’ in his orientation to social research – and, ideologically
speaking, he’s become more conservative. For example, he has taken the
‘Anthropocene’ much more at face value than I have. We debated this topic in
2015 at the Breakthrough Institute, the ‘eco-modernist’ think-tank in San Francisco
(The Breakthrough Institute 2015). Eco-modernists believe in the idea of a ‘good
Anthropocene’, and I’m on their side, while Latour is opposed. To be sure, we both
agree that humans are the biggest source of climate change. But he sees it as an
existential threat, whereas I see it as a challenge that poses new opportunities for
human advancement, along the lines of ‘necessity is the mother of invention’. Latour
appears to be nowadays motivated by fear in the face of risk, whereas I remain hopeful.
Postdigital Science and Education
PJ: Every time we log into our social networks, we have (more or less) meaningful
interactions with artificial algorithms. We usually know that we are not talking to other
people, so these interactions do not qualify as Turing tests – and yet, in many ways, we
do treat these algorithms as equals. In consequence, much recent
(posthumanist) research speaks about radical equality between human and non-human actors (Bayne and Jandrić 2017; Jandrić 2017). What is your take on this type
of interaction between humans and machines?
SF: This is really about transhumanism because the ‘human’ is used as the standard
of performance, the terms of which depend on how we respond to what these created
things do and say. That’s the whole point of the Turing Test. As my previous response
suggests, I’m not terribly vexed by this phenomenon as such. The real social justice
problem is that we don’t treat fellow flesh-and-blood humans as seriously as we treat
these algorithms.
PJ: In Humanity 2.0 you write:
When the social sciences are presented as the most progressive of the three main
bodies of knowledge – that is vis-à-vis the humanities and the natural sciences – a
story is told whereby the social sciences provide voice and direction for what the
18th century Enlightenment philosophers had called the ‘project of humanity’.
(Fuller 2011: 16)
What about today’s social sciences? Are they the most progressive of today’s main
bodies of knowledge? What kind of voice and direction should today’s social sciences
provide to our current project of post/trans humanity?
SF: A general case can be made for the social sciences remaining the most
progressive body of knowledge in the academy, but it’s hard to say which is the most
progressive discipline. Strange as it may sound, I’m inclined to say that economics is
always ahead of the curve – at least at a conceptual level, if not a predictive one. When
I’ve sought clarity in my own thinking, I’ve always found the codified intuitions of
economists the most bracing and challenging. A key advantage that economics as a
discipline has is its abstractness. Economic principles can be applied to just about
everything – not merely human behaviour. Indeed, as Philip Mirowski and others have
observed (often critically), many of those principles began in classical mechanics and
thermodynamics. As a result, economics may be best positioned of the social sciences
to capitalise on the post/trans-human turn, since it probably carries the least ontological
baggage of any of the social sciences. (People who think that economics is inherently
‘individualistic’ are too focused on how economic principles have been applied, rather
than the principles themselves, which are ontologically neutral.) Perhaps the economist
who has most distinguished himself in the post/trans-human arena so far is
Robin Hanson, whose work I recently reviewed (Fuller 2017).
In contrast, my own home discipline of sociology has really lost its way for a variety
of reasons, which add up to a loss of salience of the general concept of ‘society’, mainly
due to the end of socialism as a viable worldwide political movement. Indeed, we may
even be seeing the slow death of the welfare state, since it is difficult to see how the tax
system will ever again be capable of redistributing wealth as it did so effectively in the
third quarter of the twentieth century. This was the backdrop against which I wrote The
New Sociological Imagination (Fuller 2006b). One of the consequences of our ‘brave
new world’ is that the very idea that there is something uniquely ‘human’ about the idea
of society is quickly disappearing, which in turn has opened the door to reviving the
more ‘sociobiological’ approach to sociology that I earlier mentioned in connection with
the anti-Durkheimian origin of the field, which effectively undermines any privileging
of the ‘human’, indeed along the lines that Latour now advocates. So the burden of
proof is firmly on the side of defenders of the ‘human’ to indicate exactly what they’re
trying to defend.
In these matters, I’ve been strongly influenced by the political philosophy of
republicanism, which I first discussed in some detail in The Governance of Science
(Fuller 2000). The basic idea is that a society should consist only of beings who regard
themselves and each other as equal participants in public affairs – what after Philip
Pettit is often discussed as ‘freedom from domination’. Historically this condition has
been associated with city-states, whose self-consciously artificial character meant that
they had specific entry requirements that could be met in a variety of ways, but the
bottom line was that candidates would contribute to – not subtract from – the polity as a
result of their residency. Our modern notions of citizenship and especially civil rights
derive from this tradition, and of course it has led to endless controversy about who
should be ‘in’ or ‘out’ of a given polity. Both John Calvin and Jean-Jacques Rousseau
endorsed republicanism, and it also figured in late nineteenth century voluntary attempts to ‘repatriate’ American Blacks to West Africa and European Jews to Palestine.
So republicanism – like all interesting political philosophies – has had a chequered
career in practice, not least in its most complex, scaled-up version, the USA. Nevertheless, I am sympathetic to the general spirit of the approach, which is why with an eye
to our impending post/trans-human condition, I have been a champion of Turing Test-
like criteria – such as formal examinations – to determine the sorts of beings who
should be allowed in the ‘human’ polity. The ultimate value of focusing the mind in this
way is twofold: on the one hand, it requires the polity to take collective responsibility
for deciding who does and does not count as an ‘equal’ in the politically relevant sense
of ‘human’, and on the other hand, it also requires that the polity decide how to deal
with those who fail to meet the relevant criteria. In the latter case, there are several
options – ranging from principled hostility, through indifference and limited reciprocal
arrangements, to eventual incorporation (e.g. through a training or probationary period). This last possibility becomes especially interesting if one thinks about the prospects for, say, upgrading computers or enhancing animals to enable them to flourish as
autonomous agents in a human-centred environment.
PJ: This interview is published in the inaugural issue of the new journal Postdigital
Science and Education – and its mission statement can be found in Jandrić et al. (2018).
Please comment on the concept of the ‘postdigital’. What are its theoretical and
practical potentials?
SF: I’ve now read your journal’s mission statement, and it’s very much of the
moment. As you yourself observe, there has already been a series of ‘post-’ movements over the past 50 years, starting with ‘postmodernism’. So, there is a question
about just how long ‘postdigital’ will remain meaningful as a coherent organising
rubric. Here I would offer the following observation. When ‘post-’ is prefixed to a term
of periodization, it can refer to one of two things: either to a time when, say,
‘modernism’ or ‘humanism’ will have passed as a moment in history; or, to a time
when ‘modernism’ or ‘humanism’ will have become the self-conscious agent of
history. It is this latter, more reflexive understanding of these movements that makes
the ‘post-’ more ontologically radical and, as a result, more intellectually interesting. To
anticipate some of the issues that we discuss below, I find the idea of ‘posthuman’ in
the sense of ‘after the human becomes obsolete’ to be less challenging and attractive
than in the sense of ‘after the human drives history’. Whereas the former treats the
‘human’ as, say, Foucault does in The Order of Things (1994) – namely, as simply an
occurrence (or perhaps even an accident) of natural history – the latter treats the
‘human’ coming into its own as the responsible agent and principal driver of history.
It was this latter prospect that led Julian Huxley to coin the term ‘transhumanism’, as
well as to lend his support to the publication of Teilhard de Chardin’s prohibited works.
Now, how would this distinction apply to ‘postdigital’? On the one hand,
‘postdigital’ could simply mean ‘after the digital has lost its novelty or salience’; on
the other hand, it could mean ‘after the digital becomes the master narrative of our
world’. Your manifesto appears to vacillate between the two readings, perhaps because
you also seem to want to canvass the different uses of the term ‘postdigital’, which of
course aren’t necessarily compatible with each other. But here too, the latter, more
ambitious sense of ‘postdigital’ is more illuminating. And here I would trace the
concept back to Erwin Schrödinger’s famous 1943 Dublin lecture, ‘What is Life?’
(Schrödinger 1944), where he introduced the idea of a ‘genetic code’ by analogy with
digital code. It seems to me that this connection, which helped to inspire the molecular
revolution in biology, is what gives the ‘postdigital’ its intellectual power, which has
now been greatly extended through the computer revolution, ranging from the
digitisation of organisms (i.e. the sequencing of genomes) to the creation of digital
organisms (i.e. entities in virtual reality). To be sure, Schrödinger’s connection has been
always contested, especially by thinkers keen to resist ‘physical reductionism’ in
biology. The late historian of twentieth-century biology Lily Kay (2000) published a
very well-informed albeit critical account of this early period, in which Norbert
Wiener’s cybernetics also played a significant role in collapsing traditional metaphysical differences between the human, the organic and the mechanical. In any case, I think
this sort of understanding of ‘postdigital’ is bound to have a long half-life.
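To make Schrödinger’s analogy concrete, here is an illustration of my own rather than anything from the interview: a four-letter genetic alphabet carries exactly two bits per base, so a genome is a digital string in the strict information-theoretic sense.

    # Minimal sketch: the four-base genetic alphabet as a two-bit digital code,
    # in the spirit of Schrodinger's 'code-script' analogy (illustration only).
    BASE_TO_BITS = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}
    BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

    def digitise(sequence: str) -> str:
        """Encode a DNA sequence as a binary string (2 bits per base)."""
        return ''.join(BASE_TO_BITS[base] for base in sequence.upper())

    def undigitise(bits: str) -> str:
        """Decode a binary string back into a DNA sequence."""
        return ''.join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    fragment = 'GATTACA'
    assert undigitise(digitise(fragment)) == fragment
    print(digitise(fragment))  # 10001111000100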
References
Barron, C. (2003). A strong distinction between humans and non-humans is no longer required for research
purposes: A debate between Bruno Latour and Steve Fuller. History of the Human Sciences, 16(2), 77–99.
Bayne, S., & Jandrić, P. (2017). From anthropocentric humanism to critical posthumanism in digital education.
Knowledge Cultures, 5(2), 197–216.
Collin, F. (2011). Science studies as naturalized philosophy. Dordrecht: Springer Science+Business Media.
Foucault, M. (1994). The order of things: An archaeology of the human sciences. New York: Vintage.
Fuller, S. (1988/2002). Social Epistemology. Bloomington: Indiana University Press.
Fuller, S. (1989/1993). Philosophy of science and its discontents. New York: Guilford Press.
Fuller, S. (1996). Recent work in social epistemology. American Philosophical Quarterly, 33, 149–166.
Fuller, S. (1999). The science wars: Who exactly is the enemy? Social Epistemology, 13(3–4), 243–249.
Fuller, S. (2000). The Governance of Science. Milton Keynes: Open University Press.
Fuller, S. (2003). Kuhn vs. Popper: The struggle for the soul of science. Cambridge: Icon Books.
Fuller, S. (2006a). The philosophy of science and technology studies. New York and London: Routledge.
Fuller, S. (2006b). The new sociological imagination. London: Sage.
Fuller, S. (2011). Humanity 2.0: What it means to be human past, present and future. London: Palgrave
Macmillan.
Fuller, S. (2013). Preparing for life in humanity 2.0. London: Palgrave Macmillan.
Fuller, S. (2015a). Knowledge: The philosophical quest in history. New York: Routledge.
Fuller, S. (2015b). Customised science as a reflection of ‘Protscience’. Epistemology and Philosophy of
Science, 46(4), 52–69.
Fuller, S. (2015c). Intelligent Design: Ten Years after Dover. ABC Religion & Ethics, 22 December.
http://www.abc.net.au/religion/articles/2015/12/22/4376838.htm. Accessed 15 May 2018.
Fuller, S. (2016a). The academic Caesar: University leadership is hard. London: Sage.
Fuller, S. (2016b). Science has always been a bit ‘post-truth’. The Guardian, 15 December. https://www.theguardian.com/science/political-science/2016/dec/15/science-has-always-been-a-bit-post-truth. Accessed 15 May 2018.
Fuller, S. (2017). Review of The Age of Em: Love, Work and Life when Robots Rule the Earth by R. Hanson.
Journal of Posthuman Studies, 1(1), 104–109.
Fuller, S. (2018). Post-truth: Knowledge as a power game. London: Anthem.
Fuller, S. & Collier, J. H. (2004/2012). Philosophy, Rhetoric, and the End of Knowledge: A New Beginning for
Science and Technology Studies. (Orig. 1993, by Fuller). New York: Routledge.
Fuller, S., & Lipinska, V. (2014). The Proactionary imperative: A Foundation for transhumanism. London:
Palgrave Macmillan.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production
of knowledge. London: Sage.
Goodman, N. (1955). Fact, fiction and forecast. Cambridge: Harvard University Press.
Gordin, M. (2015). Scientific babel: From the fall of Latin to the rise of English. London: Profile.
Harding, S. (Ed.). (2011). The Postcolonial Science and Technology Studies Reader. Durham and London:
Duke University Press.
Huxley, T. (1893). Romanes Lecture. Oxford Magazine, May. https://mathcs.clarku.edu/huxley/comm/OxfMag/Romanes93.html. Accessed 15 May 2018.
Jandrić, P. (2016). The methodological challenge of networked learning: (post) disciplinarity and critical
emancipation. In T. Ryberg, C. Sinclair, S. Bayne, & M. de Laat (Eds.), Research, boundaries, and policy
in networked learning (pp. 165–181). New York: Springer.
Jandrić, P. (2017). Learning in the age of digital reason. Rotterdam: Sense.
Jandrić, P. (2018). Post-truth and critical pedagogy of trust. In M. A. Peters, S. Rider, M. Hyvönen, & T.
Besley (Eds.), Post-truth, fake news: Viral Modernity & Higher Education (pp. 101–111). Singapore:
Springer.
Jandrić, P., Knox, J., Besley, T., Ryberg, T., Suoranta, J., & Hayes, S. (2018). Postdigital science and
education. Educational Philosophy and Theory, 50(10), 893–899.
Kay, L. (2000). Who Wrote the Book of Life? A History of the Genetic Code. Palo Alto: Stanford University
Press.
Kristensen, R. G., & Claycomb, R. M. (Eds.). (2010). Writing against the curriculum: Antidisciplinarity in the
writing and cultural studies classroom. Plymouth: Rowman & Littlefield Publishers, Ltd.
Lawrence, R. J., & Després, C. (2004). Futures of transdisciplinarity. Futures, 36(4), 397–405.
Marx, K. (1932). The German Ideology. https://www.marxists.org/archive/marx/works/1845/german-ideology/index.htm. Accessed 15 May 2018.
Marx, K., & Engels, F. (1976/1848). The Communist Manifesto. https://www.marxists.org/archive/marx/works/1848/communist-manifesto/index.htm. Accessed 15 May 2018.
McLaren, P., & Jandrić, P. (2015). Revolutionary critical pedagogy is made by walking – In a world where
many worlds coexist. In P. McLaren (Ed.), Pedagogy of Insurrection: From Resurrection to Revolution
(pp. 255–298). New York: Peter Lang.
McLaren, P., & Jandrić, P. (2017a). From liberation to salvation: Revolutionary critical pedagogy meets
liberation theology. Policy Futures in Education, 15(5), 620–652.
McLaren, P., & Jandrić, P. (2017b). Peter McLaren’s liberation theology: Karl Marx meets Jesus Christ. In J. S.
Brooks & A. Normore (Eds.), Leading against the grain: Lessons for creating just and equitable schools
(pp. 39–48). New York: Teachers College Press.
McLaren, P., & Jandrić, P. (2018). Karl Marx and liberation theology: Dialectical materialism and Christian
spirituality in, against, and beyond contemporary capitalism. TripleC: Communication, Capitalism &
Critique, 16(2), 598–607.
Miranda, J. P. (1980). Marx against the Marxists: The Christian Humanism of Karl Marx. Trans. Drury J.
Maryknoll: Orbis Books.
Montgomery, S. (2000). Science in translation: Movements of knowledge through cultures and times.
Chicago: University of Chicago Press.
Newton, I. (1687). Philosophiæ naturalis principia mathematica. http://cudl.lib.cam.ac.uk/view/PR-ADV-B-00039-00001/1. Accessed 15 May 2018.
Nicolescu, B. (2008). In vitro and in vivo knowledge – Methodology of transdisciplinarity. In B. Nicolescu
(Ed.), Transdisciplinarity – Theory and practice (pp. 1–22). New York: Hampton Press.
Nida, E. (1964). Toward a science of translating. Leiden: E.J. Brill.
Schaffner, F. J. (1968). Planet of the Apes [Motion picture]. Los Angeles: 20th Century Fox.
Schrödinger, E. (1944). What is Life? Dublin: Trinity College. http://www.whatislife.ie/downloads/What-is-Life.pdf. Accessed 15 May 2018.
Shadish, W., & Fuller, S. (Eds.). (1993). The social psychology of science. New York: Guilford.
Sokal, A., & Bricmont, J. (1998). Fashionable nonsense: Postmodern Intellectuals' abuse of science. New
York: Picador.
Steinmetz, K. (2016). Oxford’s word of the year for 2016 is ‘post-truth’. Time, November 15. http://time.com/4572592/oxford-word-of-the-year-2016-post-truth/. Accessed 15 May 2018.
Swanson, D. (1986). Undiscovered public knowledge. The Library Quarterly, 56(2), 103–118.
The Breakthrough Institute (2015). Breakthrough Dialogue 2015: The Good Anthropocene.
https://thebreakthrough.org/index.php/dialogue/past-dialogues/breakthrough-dialogue-2015. Accessed
15 May 2018.
U.S. District court for the Middle District of Pennsylvania. (2005). 400 F. Supp. 2d 707. https://law.justia.
com/cases/federal/district-courts/FSupp2/400/707/2414073/. Accessed 15 May 2018.
Weiss, S. (2012). Review of humanity 2.0 by Steve fuller. Social Epistemology Review and Reply Collective,
1(3), 6–9.
Young, G. (2012). The Russian cosmists: The esoteric futurism of Nikolai Fedorov and his followers. Oxford: Oxford University Press.