The search for (artificial) intelligence, in capitalism
Frank Engster
Helle Panke, Rosa Luxemburg Foundation, Germany
Phoebe V Moore
University of Leicester, UK
Abstract
Artificial intelligence is being touted as a new wave of machinic processing and
productive potential. Building on concepts that go back to the coining of the term
artificial intelligence in the 1950s, machines can now supposedly not only see,
hear and think, but also solve problems and learn, and in this way there seems to
be a new form of humiliation for humans. This article starts with a historical
overview of the forerunners of artificial intelligence, showing how ideas of
intelligence formulated by philosophers and social theorists entered the work
sphere and became inextricably linked to capitalist production. However, there
has always already been an artificial intelligence in power: in technical machines
and the social machine money on the one hand, and in humans on the other, making
both sides (machines and humans) an interface of their mutual capitalist
socialisation. The question this piece then addresses is: what kind of capitalist
socialisation will the current forms of artificial intelligence bring?
Keywords
artificial intelligence, capitalism, Hegel, Marx, technology, work
Introduction
In 1917, in A Difficulty in the Path of Psychoanalysis, Freud declared that humanity had
suffered three historic humiliations. The first was the cosmological humiliation of the Copernican
discovery that the earth was not at the centre of the universe. The second was the biological humiliation of Darwin's discovery that mankind was the result of evolution instead
of god's creation, and that man belongs to the animal kingdom rather than to god's. And finally, the
psychological humiliation by Freud's own insight, namely that we are not in control of our
own minds. Freud (1917) wrote that 'Psychoanalysis confronts consciousness with the
embarrassing insight that [. . .] the ego is not master in its own house'. But perhaps even
more telling is Marx's insight, wherein he showed the 'social humiliation': we are able
neither to control nor to fully understand our own mode of production; despite the fact that we
create it, it effectively controls us. Humankind hence does not understand its own form
of socialisation.
Artificial intelligence (AI) is being touted as a new wave of machinic processing and
productive potential. Building on concepts that go back to the coining of the term AI in
the 1950s, machines can now supposedly not only see, hear and think, but also solve
problems and learn, and in this way there seems to be a new form of humiliation. In 1997, Deep Blue beat the world's best chess player at that time, Garry Kasparov.
In 2011, IBM Watson won a Jeopardy! clash live on television. In 2016, AlphaGo beat
one of the best Go players in the world. Has AI, with these victories, surpassed or even overtaken
human intelligence? Is this the final humiliation of humankind? The answer is yes, AI
has overtaken us. But ironically, human intelligence has already been taken over by an
intelligence, an artificial one, that humans themselves have constructed, albeit in an unconscious and unwilling way. So, the questions of this article are: 'How can we understand this
quandary we humans are in?' and 'Why and how have we been humiliated
right from the beginning?'
To come to an answer, it is wise to begin with the question of what intelligence is –
the 'artificial' cannot be clarified without clarifying 'intelligence'. However, rather than
pointing directly to intelligence as such, the article takes a detour by looking at how we
humans, since the Enlightenment, have posed the question of intelligence, where and
how we have searched for it and conceptualised it, and how the human/machine relationship is
influenced by these speculations from both sides, where the human uncritically sees
herself/himself in the machine. In short, we must look at the ways in which we have tried
to come to a point of self-illumination and self-understanding. Intelligence has been
searched for in two places: in the human mind, and in all kinds of technics and machines.
This all-too-human search has introduced the peculiar correlation that while machines
have continuously been anthropomorphised, humans have themselves become machinised,
thinking about human intelligence from the functioning of mechanical, physical and
finally calculation machines. This dialectic can be periodised in a common historical
development in which each stage of technical innovation and revolution has introduced
ideas of human intelligence on the one hand, and machinic intelligence on the other.
But this correlation has not only led to a historical development; the development of
history has also led to different ideas of history itself, of its sense and even of an end of
history. There have in fact been two overarching ideas of historical development. One
lay in the utopian or communist use of intelligence, be it human or technical and
machinic intelligence, for the complete overcoming of capitalist society, which would lead by itself,
quasi automatically, to another, different society. The other foresaw dystopian end times,
from the era of the machine breakers and the unleashing of modern science, through the mechanical
monsters and creatures of the industrial age, to the age of cybernetics and total control.
Today, in the era of a coming AI, we seem to face these two poles again: will machines,
robots or the Matrix do our jobs, finally leading to an overcoming of work and hence of
the realm of necessity in the communism of the commons, where machines do the jobs that
humans once did, and perhaps even live human-like lives? Or will we become
redundant and unnecessary, or even be overtaken on this planet, made slaves of
robots or the Matrix?
The first section of this article introduces a history of the forerunners of AI, identifying how mathematicians, engineers and inventors of machines and technologies have
written about the concept of intelligence. Many key figures (most of them men) invented
early computation techniques and computers and experimented with their capabilities,
leading to what was later called AI, from 1955. These inventors wrote scholarly, work-design-focused and technical essays showing that each stage of technical innovation and
revolution has introduced ideas of human intelligence on the one hand, and machinic
intelligence on the other.
The second section shows that one should pose the question of what intelligence is
in more radical terms. The argument is that intelligence is neither human nor machinic;
it is present neither in subjects nor in means and objects. It is what we have shown as their correlation, but this correlation has to be understood as a capitalist mediation that is sublated on both sides. This sublated mediation is AI. We can point out this kind of
supra-individual AI with the critical content of philosophy, namely German Idealism
and its materialist turn by Marx. This has two goals: the first is to show AI as a supra-individual form of social and specifically capitalist mediation, which, second, produces
human and machinic intelligence as interfaces to capitalist society and entangles
them. Decisive is that this character of interfaces, and this entanglement of man and
machine, is not mutual without a common excluded third, which is their blind spot, and
this blind spot is what we must develop in order to understand AI.
The search for intelligence
Machines are inextricably tied to social circumstances and political economy. They have
for decades been incorporated into work processes, used to valorise living labour,
and have socialised other machines, socialised us, and shaped our experiences of
capitalism. It is the ways that technologies and machines are incorporated into society,
and human relationships with machines, rather than just the technology or work
design practices on their own, which reveal intelligence or the artificiality of the ascriptions of intelligence that are so often made. Intelligence has been continuously
linked to quantification and to an overarching power structure where, as calculation
and prediction machines advance, we expect ourselves to advance, but in direct alignment and even in competition with machines. A series of thinkers and engineers precede the era of scientific management, and the story starts with Hobbes'
Leviathan. While thinkers before scientific management did not necessarily focus on the
workplace management practices and ideologies that accompanied the invention of
new technologies, the trajectory of thinkers leading up to the labelling of AI in the
1950s reveals that the ways of thinking about thinking, and the constructed hierarchy
of competences and decisions about 'what counts', provide clues for how we arrived at
the contemporary era of blindly triumphalist belief in the capacity of machines to solve
many of the issues facing humanity today through AI, as well as for how to reveal the blind spot
in these discussions.
Reason is reckoning
Thomas Hobbes, in the chapter 'Of Reason and Science' in Leviathan, muses: 'When a
man reasons, he does nothing else but conceive a sum total from addition of parcels . . .
for reason is nothing but reckoning' (Hobbes 1651). Hobbes was convinced that humans'
capacity for reason, which animals do not have, is a process whereby we simply carve the
world into symbolic units and use sums to make decisions informing intention. Humans can
consider the consequences of our actions and make theories and aphorisms, reasoning
and reckoning 'not only in number, but in all other things whereof one may be added
unto or subtracted from one another'. In the beginning was 'the word', the first sentence
of the Christian Bible's book of John tells us, but perhaps, on this view, the phrase should read 'in the beginning was the number'.
Also before the era of scientific management, which introduced the work component
to technologies and intelligence, Charles Babbage's Analytical Engine marks the emergence of supposedly thinking machines resulting from experimentation with human/
machine relations in workplaces and with work design methodologies. Though it was
never built, this 'engine' was the first imagined digital computer, where punched cards
allowed for the operation of logical and mathematical computations. These differed
from analogue computers, which measured magnitudes and physical quantities such as
voltage, duration or angle of rotation in proportion to the quantity to be manipulated. The
word 'digital' is important in this phase, because it derives from the Latin word for 'finger'. Fingers
are discrete units, Dreyfus (1992) points out in his important text What Computers Still
Can't Do. So, choices of 'what counts' also matter, that is, which discrete units are selected
to obtain results. This is a point that Henri Bergson (1913/2001) to some extent dealt
with in Time and Free Will: An Essay on the Immediate Data of Consciousness, where he
queried methods that attempt to measure intensities of sensation with quantitative
means, as though each unit of measure can be seen as identical as well as building on
previous amounts. While most of the thinkers who set the stage for thinking about
thinking were men, one woman stands out during this early phase. The Countess of
Lovelace, Ada Byron King, wrote what is regarded as the first algorithm intended for the
Analytical Engine. The only legitimate child of Lord Byron and Lady Wentworth, who lived
only 36 years, she is seen as the first to perceive the full possibilities of computer programmes.
Another predecessor to the invention of the digital computer and to the ways of thinking
now seen in AI, not to mention the initiator of the logic underlying mathematical computation
and forms of statistics, was George Boole, of what is now called Boolean algebra. In The Laws
of Thought, Boole (1847/2007, 1854/2009) indicated that 'all reasoning is calculating',
again supporting the idea that all thought can be reduced to numbers, symbols and, ultimately, quantification. Binary code, upon which all programming languages are now
built, is derived from his postulates, where variables take truth values, of which there are
two: 1 (true) and 0 (false). These distinctions underpin all the logic of computation
today. It is fairly easy to see the weaknesses of assuming that human thought
and reasoning (and thus 'intelligence') can be associated with such a black-and-white
calculation, but these processes are notable for their early significance in discussions
about capacities for reasoning and intelligence.
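To make this reduction concrete, the brief sketch below (our own illustrative example in Python, not drawn from Boole's notation; the 'can_work' judgement and its inputs are hypothetical) shows how a decision, once encoded in Boolean variables, collapses into a truth table of 1s and 0s:

# A minimal sketch of Boolean reduction: every judgement is forced into one of
# two states, 1 (true) or 0 (false). The 'can_work' rule and its inputs are
# hypothetical, chosen only to illustrate the principle.

def can_work(is_trained: bool, is_fatigued: bool) -> bool:
    # A judgement written as a Boolean formula: trained AND NOT fatigued.
    return is_trained and not is_fatigued

# The whole space of 'reasoning' about the case collapses into a truth table.
for trained in (True, False):
    for fatigued in (True, False):
        print(int(trained), int(fatigued), "->", int(can_work(trained, fatigued)))

Whatever cannot be expressed as such a formula simply does not count within this scheme, which is precisely the worry Bergson raised about quantitative measures of intensity.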
Scientific management
Some years later, scientific management started in the United States and spread across
Europe as a supposed 'civilizing process' (International Labour Office 1927). The model
was experimented with for about 25 years at the beginning of the 20th century. Frederick
W. Taylor and Lillian and Frank Gilbreth celebrated science and technology in work
design and were influential in identifying human intelligences, where management is
expected to be intelligent and the worker, definitively, is not. This division is inherent to
Taylorist scientific management, where managers and consultants busily looked for ideal
movements of manual workers to achieve what Marx had identified as 'no unnecessary waste of raw material', if that raw material is the movement of humans' very limbs.
Handling pig-iron, Taylor writes in The Principles of Scientific Management, is 'so crude
and elementary in its nature that the writer firmly believes that it would be possible to
train an intelligent gorilla so as to become a more efficient pig-iron handler than any
man can be'.

However, he stresses, 'the science of handling pig iron is so great and amounts to so
much that it is impossible for the man who is best suited to this type of work to understand the principles of this science' (Taylor 1911/1998: 18). Scientific management
required the separation of unskilled and skilled labour, categorised in terms of manual
and mental labour, and Taylor was quite scathing in his accounts of the less-able human
who, he argued, would be best suited for manual work. So, quite clearly, only the boss
and administrators were permitted to be intelligent, and the intelligence of machinic
capacities was assumed. Importantly, Taylor was also convinced of the need to separate play from work, indicating in The Principles that it is a 'matter of ordinary common
sense to plan working hours so that the workers can really "work while they work" and
"play while they play", and not mix the two' (Taylor 1911/1998: 44). So, the productive
self is available only within an explicit work context, unlike in later participative management methods.
Also in the early 1900s, two other industrialists were devising schemes to understand
workplace productivity as linked to human physical movement as well as physiology:
Frank and Lillian Gilbreth. Frank Gilbreth, upon entering the construction industry,
was intrigued to discover that every bricklayer went about laying bricks with a different
set of motions. Based on his perceptions of the inefficiencies and diverse methods that
each bricklayer used, he set out upon what he and Lillian called 'The Quest of the One
Best Way' (also the name of the biography Lillian wrote). Looking at micro-movements
using a series of technological devices, including a spring-driven camera, an electric
motor-driven camera and a microchronometer, an instrument for measuring very small intervals of time, the Gilbreths looked for the hoped-for 'one best way' to lay
bricks that would lead to the least fatigue, research that soon became known as motion
and fatigue studies. The Gilbreths also measured workers' heart rates using a stethoscope
and stopwatch – a foreshadowing of the heart rate measures we see in the construction
industry today, wearable technology used as a risk aversion strategy where an
employer can spot a worker's heart rate rising abnormally and warn the worker to 'take
it easy' (Hughes 2015). A 'therblig' (roughly, Gilbreth spelled backwards) was the name these two
gave to the system of analysing the body's basic movements, using technology to isolate
movements into discrete units and quantify their outputs. Therbligs were a presage of
much more recent types of data analysis that inform workplace design based on
technological readings.
In a similar timeframe to the work being done elsewhere, though unknown to
the Gilbreths in the early days, Taylor started working at Midvale Steel Company. As
general foreman, Taylor quickly became convinced that the greatest obstacle to cooperation between workmen and management was the 'ignorance of management as to what
really constitutes a proper day's work for a workman' (Taylor 1911/1998). He asked the
plant to invest in research to identify the 'fraction of horse-power, or foot-pounds of
work that one first-class man could reasonably perform in one day' (Taylor 1911/1998).
Taylor selected two strong, able-bodied, so-called 'first-class' men and carried out experiments for several years to identify exactly how much work was needed, and which movements were the best, to carry out specific tasks.
While Taylor's work was similar to Lillian and Frank Gilbreth's, Taylor focused
on time and measurement and prioritised efficiency and productivity more than the Gilbreths, who looked more closely at motion and emphasised the
physiological by looking at fatigue and the need for rest. Taylor became well known
in industrial circles through consultancy work, quickly becoming
an internationally respected specialist whose talks and research were in high
demand. Frank Gilbreth was invited to one of the several-hour lectures Taylor held
in his home in the early 1900s. Gilbreth introduced the concept of motion to Taylor after
the lecture, which led to collaborations that were soon known as 'time and motion
studies' and, later, 'scientific management'. 'The principles of motion economy' in
scientific management, seen as 'helpful in work design' (Barnes 1937/1980: 174),
were generally split into three areas: those related to the use of the human body, those related to the
arrangement of the place and area, and those related to the design of tools and equipment. The experiments informing these techniques were also informed by 'human
factors engineering' research in the early part of the 20th century and are most obviously
applicable to factory production environments, but later research treats Taylor's
work as continuing to hold significance.
The ideas of scientific management were not themselves necessarily new, but Taylor
successfully systematised a range of concepts 'designed to increase and control industrial
production' through widening the function of intelligent management over unintelligent
workers (whom he sometimes compared to animals) and the machinic, coordinating elements of
his system. He believed that there was a science to his system and that, perfectly implemented, the system would lead to prosperity for all. He set out to look at each component of production processes, to experiment with machines and methods of work as well
as materials, and was very committed to using measuring instruments in investigations,
using stopwatches to measure the length of time a worker took to finish a task. Therefore,
within this period, machines were not seen as themselves intelligent, but were there to
aid human reasoning and intelligence. Human intelligence was permitted to fit within a
straitjacket of efficiency and conformity, where the blessed ones (management) were
expected to lead all to a destiny of prosperity (intelligently).
There were four basic categories of scientific management: research, standardisation,
control and cooperation. Standardisation was ideally set so that all practices, classifications and qualities would be prescribed, and tools and equipment, methods of accounting and wage rates would all be comparable. To reach the full potential of the production process,
control was considered important, meaning management was given more authority, and
worked to achieve control, by

planning the work to be done, routing it through the factory, and scheduling each machine or
group of machines for its part of the job; providing the necessary materials and tools for the
worker when he required them; and inspecting the finished product and even the workman's
work methods. (Nadworny 1955: v–vi)

This relationship between the worker and manager would require what Person called 'a
mental attitude', 'a condition of efficient common effort, a model of conduct, the result
of the formulation of standards of purpose, facility, method and relationship' (Person,
managing director of the Taylor Society, cited in Nadworny 1955: vi), where management's codes and diktats were to be followed, which was seen as an inevitable advancement for productivity that would then be shared by both management and workers.
Scientific management relied on the explicit distinction between managers, workers and
machines, where managers are the only subjects with agency and the capacity for intelligence. This division was portrayed as, again, the one best way. Machines were simply
tools to ensure perfectly intelligent work designs. But within just a few decades, machines
were expected to mimic human intelligence fairly easily.
Can machines think?
In 1950, Alan Turing published the important essay 'Computing Machinery and
Intelligence' (Copeland 2005), which introduced a test he called the Imitation Game.
This game was designed to identify whether machines could behave like humans, by
seeing whether humans could tell the difference between humans and machines. The
alignment of human and machinic intelligence was portrayed competitively. Turing is
one of the only modern thinkers to look critically at the words 'thinking' and
'machine', and he decided in this essay that it is perhaps more important to enquire whether
humans can be fooled by a machine into thinking that the machine is a human than it
is to seriously ask whether or not machines can, themselves, think. Turing was quite clear
that we as humans ascribe intelligence to machines and that we display our own weaknesses in doing so. He is also one of the only scholars to draw a 'fairly sharp line between
the physical and the intellectual capacities of a man'. He argued that no engineer and no
chemist has produced any material that convincingly mimics the skin, for example.
While Turing did not go more deeply into these insights, that is, enquire ontologically into
the extent to which bodies and minds are intertwined in the intellectual life of a human, it is
interesting that a man whose sexuality was punished by his society during his lifetime
was taken by his considerations into that line of questioning, considering the
physical as inherently a part of human intelligence, like no other in the early days of
human/machine thinking.
But the history of what was actually termed 'artificial intelligence' begins at an academic conference in 1956, really a series of workshops led by an assistant professor named
John McCarthy, who worked with Marvin Minsky of Harvard, Nathan Rochester of
IBM and Claude Shannon of Bell Telephone Laboratories at these workshops to see
whether they could make 'a machine behave in ways that would be called intelligent if
humans were so behaving' (McCarthy et al. 1955). McCarthy used the term 'artificial
intelligence' to differentiate it from 'cybernetics', and soon, despite, or perhaps rather
because of, its overlaps in research questions, the field of AI took precedence over cybernetics altogether.
The subsequent so-called Symbolic Approach to AI involved attempting to mimic the
logical processes of the brain, an era of investigation later called Good
Old-Fashioned AI (GOFAI). The term AI flourished in popularity, and in a recent book
Jerry Kaplan (2016) speculates that its improbable success in attracting interest
goes beyond its academic roots. The relationship between the machine and humans has
fascinated people for generations. However, if the airplane had been called an 'artificial
bird', imagine the subsequent confusion in watching the progress of the invention of the
wing, and so on – the airplane is not a direct replica of a bird. But humans have always
looked at machines as mirrors, and thinking of their intelligence as artificial appears more
seductive with such terminology, Kaplan argues. Calling AI 'symbolic processing'
or 'analytical computing' would not have had the same impact.
In this first period of AI research, physical symbol systems (PSS), also called formal
systems, were envisaged by Allen Newell and Herbert A. Simon, two of the 'godfathers'
of AI. These systems are those that 'have the necessary and sufficient means for general
intelligent action' (Newell & Simon 1976). PSS relied on rationalist fundamentals and
logical assumptions in the tradition of Descartes, Hobbes, Leibniz, Kant and Hume,
where one must have abstracted a theory based on invariant features of specific situations
in order to deal with a domain. The claim that PSS have the means for general intelligent
action is based on an understanding of how humans think, that is, that we only carry out
symbol manipulation. If that is right, then machines can, of course, be intelligent.
In this early phase of AI research, researchers held that human-readable representations of problems, in the form of symbols, should inform how all AI research should be
conducted. This form of AI involved expert systems, which reflected production rules
that are altered by human deduction in the context of emergent errors, the processes
relying on an 'if this, then that' type of formula, basically a flow chart. Machines' intelligence required 'making the appropriate inferences' from their seemingly internal representations, where a PSS was seen to posit, as quoted, the 'necessary and sufficient means
for general intelligent action' (ibid). There was a lot of criticism of PSS and of GOFAI in
general, but they nevertheless form the basis of AI in its earliest formulations. Critics held that
there are problems with expecting machines to supposedly represent the world exactly as
humans do, or argued that humans don't 'represent' the world at all but, in fact, 'are' the
world. This reasoning sits alongside a more recent idea that machines should be fully
autonomous. These days, direct comparisons with human thought and being have
subsided.
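As a concrete illustration of the 'if this, then that' production-rule style of GOFAI expert systems described above, the following minimal sketch (our own example; the rules, facts and conclusions are hypothetical and do not reproduce any historical system) repeatedly fires rules over a set of symbolic facts until no new conclusion follows:

# A minimal forward-chaining production-rule sketch: symbolic facts plus
# 'if this, then that' rules, applied repeatedly until nothing new follows.
# The rules and facts below are hypothetical, for illustration only.

rules = [
    ({"motor_running", "no_output"}, "jam_suspected"),
    ({"jam_suspected"}, "stop_line"),
    ({"stop_line"}, "call_supervisor"),
]

facts = {"motor_running", "no_output"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule: its conclusion becomes a new fact
            changed = True

print(sorted(facts))
# ['call_supervisor', 'jam_suspected', 'motor_running', 'no_output', 'stop_line']

The 'intelligence' here is nothing but the mechanical application of human-authored symbolic rules, which is exactly the point the critics of PSS and GOFAI pressed.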
The first major challenge to GOFAI, and to the hopes that machines could be trained to
'represent' the world in the ways that humans do, was the invention of the first artificial
neural network. Frank Rosenblatt, a psychologist, is said to have invented the first one in
1958, just 2 years after that first conference. This neural network was called the Perceptron.
It modelled the ways that human brains process visual data and learn to recognise
objects while picking out similar cases: the first time this could be done in
parallel (which differentiates it from PSS). Neural networks allow computers to make
decisions based on information collected from various sources within a domain, drawing
conclusions through either supervised or unsupervised processes. That is what
sets neural networks apart from GOFAI and brings us into the 'cognitive' period of AI
research, where scientists move away from comparing machines so directly with
humans and expecting them to think in the ways that humans do. Indeed, the invention of
neural networks raised a number of philosophical questions about how a theory of a
domain is formulated, but explicit philosophical questions were introduced somewhat
later by such figures as Hubert L. Dreyfus.
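To make concrete what a perceptron of the kind attributed to Rosenblatt does, the short sketch below (a generic, textbook-style illustration rather than Rosenblatt's visual-recognition system) trains a single artificial neuron by supervised learning to reproduce the logical AND of its inputs:

# A minimal perceptron: a single artificial neuron trained by supervised
# learning (the generic perceptron rule, illustrated here on logical AND).

def predict(weights, bias, x):
    # Weighted sum followed by a hard threshold: output 1 or 0.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Training data: inputs and the supervised target for each (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes over the data suffice
    for x, target in data:
        error = target - predict(weights, bias, x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error               # nudge parameters towards the target

print([predict(weights, bias, x) for x, _ in data])   # expected: [0, 0, 0, 1]

The 'learning' consists entirely of nudging numerical weights until the outputs match the supervised targets, which is what distinguishes this approach from the explicitly authored rules of GOFAI.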
Indeed, it was this now well-known critic of symbolic AI, Dreyfus, who noticed, when
reading original texts in AI research in the 1960s and examining the work of Newell and
Simon, that the ontology and epistemologies underpinning early AI researchers' thought
were derived from a range of rationalist tenets. Researchers projected intelligence onto
machines, thinking that they could comprehend a symbol in the same way that humans do and
that their sensors would mimic humans' ability to process meaning from their surroundings. Dreyfus's work indicates that researchers had come across problems of significance
and relevance, issues that are philosophically dealt with in the existentialist tradition.
Dreyfus argued, as alluded to above, that humans do not experience objects in the world as
models of the world, or symbols of the world, but experience the world itself.
About a decade later, in 1966, Joseph Weizenbaum, a German American Massachusetts
Institute of Technology (MIT) computer scientist who is also considered one of the
forefathers of AI, invented the predecessor of today's chatbots, naming this computer
programme 'Eliza' after the ingenue in George Bernard Shaw's Pygmalion. Pygmalion is
a character in Greek mythology who develops a love interest in his own sculpture, which
comes to life. This seemed an appropriate name for the chatbot given 'her' quickly
observed capacity to induce emotion in those speaking to this specific software programme. Weizenbaum took the human responses he witnessed quite seriously and was
genuinely surprised at Eliza's seeming impact on them. Turing would probably have
found this quite interesting given the Imitation Game he had proposed, in which
humans project their own assumptions onto machines. Weizenbaum was also very
sceptical about the integration of computers into society and saw the dark sides that it
introduced.
The following section now takes a step back from the historical materialist outline
given here, in which human intelligence was continuously imagined, incorrectly, to be constricted
to basic capitalist norms of thinking; in which humans have at points projected our own
(flawed) ideas of intelligence onto machines, resulting in a tightening of control in the
employment relationship; and in which a gendered chatbot, Eliza, was modelled after a literary
ingenue and expected to make people feel good, a kind of artificial affective intelligence.
Here, we turn to the ontological basis for thinking about artificiality and
intelligence, where philosophers have already made it clear that even the idea that intelligence can be explained is flawed, much less the attempt to make a machine intelligent.
The critical core of philosophy and its blind spot: AI as the supra-individuality and negativity of reason
The critical core with regard to human intelligence was already there in the philosophy of German Idealism, namely, to think the human, and with the human also
intelligence, from outside the human, from 'somewhere else' – but neither from a god
nor from nature. There is nothing ideal-religious, natural or empirical at all that can
explain human intelligence. Kant and Hegel rather pointed to a negative essence of
mediation between subject and object which must constitute what it mediates.
According to Kant's critical insight (Kant 1929), objectivity is uncircumventably
always already constituted by a transcendental form of subjectivity, while Hegel wanted
to overcome the dualism in Kant by thinking subject and object from a mediation that
is their speculative identity, a mediation that falls into the logic of the notion and
the Spirit.1 Marx, finally, noted that mediation has to be thought as a socially and historically specific mediation which, however, seems to have, as in German Idealism, a
quasi-ontological and transcendental, or in Hegel's case even an absolute, status. Yet, it
is precisely this status that needs to be exposed as a 'second nature' produced by
capitalist mediation. This second nature is exactly what needs to be developed as a
kind of AI.2 But if the subject and its intelligence have to be thought from this AI, then
the subject must be, right from the beginning, a divided and split subject. The subject
is subtracted and at once attracted by an over-individual 'other' which functions like
an AI and is precisely responsible for what in the subject is 'intelligence', making it,
right from the beginning, an interface.
What goes for the subject goes for its means: what makes its means of production
productive is their character as an interface, an interface with the same AI that they, like
humans, embody. This entanglement with AI comes to itself in the means of production
as such: the machine. A machine is, as Heidegger, Simondon or Deleuze and Guattari
pointed out, nothing without its context, without its interconnection with other
machines and without their entanglement with the human. This entanglement led
indeed to the idea of man as, right from the beginning, a 'Dividuum' or a 'Man-Machine'
(Raunig & Derieg 2016).

So, when Heidegger states that the essence of the technic is something 'non-technical'
(Heidegger 1993), this non-technical essence is precisely what we can take as AI. And
with Marx we can claim that this non-technical essence is its social and specifically capitalist essence. Machines are productive because they are part of the valorisation of value;
they increase its productivity and they enlarge capitalist (re-)production. So, the very
first act of AI seems to be to split both humans and machines, to open them for their mutual
mediation, making each of them an interface, while the AI itself disappears in this entanglement and appears as a property of individual humans on the one hand, and of individual machines on the other.
In sum, to understand AI we have to develop these two characteristics: first, AI as a
supra-individual form of social and specifically capitalist mediation, which, second, produces human and machinic intelligence as interfaces to capitalist society and both
splits and entangles them. Decisive is that this character of interfaces and this entanglement of man and machine is not mutual without a common excluded third which is
their blind spot, and this blind spot is what we must develop as AI. Of course, we can
only sketch out here a kind of programme. To show at least the starting point of such
a programme, we want to show how we can use Marx to socialise what was already at stake in
the philosophical self-understanding of our society, especially in Marx's famous
materialist turn of the Hegelian 'Spirit'.
AI in Marxism: the socialisation of human labour and its means of production by capitalism
There are in the history of Marxism, however, two main lines of interpretation of this
materialist turn. The first strain was dominant in the Marxism that had already begun in
Marx's time. It was heavily influenced by Engels' interpretation of Marx and by Engels'
own texts, and after Marx's death this strain dominated the socialist labour movement
and its organisations and parties. To put it simply, the general assumption here was that
what Hegel presented as Spirit is in reality3 human labour and social practice. Marx's
materialism deciphers in the Hegelian 'negativity of reason', in 'the labour of the concept'
and in the 'Spirit' alienated forms of a human essence that lies in praxis, especially in the
social determination and productive power of human labour and its metabolism with
nature, hence in the working class. Consequently, the truth of the independent, ideal and
autonomous status of the Spirit lies in the class division and in the separation and alienation
of the working class from the means and products of their labour.
The other strain of interpretation of Marx's materialist turn and the socialisation of reason
and Spirit is more philosophical and categorical. It is present in Western Marxism,
Critical Theory and currently in Post-Marxism.4 It brought a more negative critique, often
explicitly distancing itself from the shortcomings of the other strain, and it brought in the
pole opposite to that of labour, production and metabolism with nature, namely that of
the form of social mediation by exchange and commodity form, by circulation and valorisation. These forms were not only seen as a formalisation of social mediation, constituting a social objectivity. They were also seen as the origin of specifically capitalist forms of
subjectivity, be it the subjectivity of thinking and consciousness, the subjectivity of
epistemology or the subjectivity of political ideals and ideology (Adorno 1993;
Fulda et al. 1980; Lukács 1977; Sohn-Rethel 1978).
Decisive for both strains, however, is that there is in fact a socialisation of labour, of
its means and of society in general going on. But while in classical
Marxism this socialisation is accomplished by labour and its metabolism with nature, in the other
strain it arises from within the mediation of society with itself, namely by the commodity form and the value form. And while in the first strain the supra-individual subject produced by this socialisation is the proletariat, in the second strain the subject that arises
from the forms of capitalist mediation and valorisation is capital. Consequently, the
subject of social transformation, too, is not the proletariat but capital, and unfortunately
capital does not lead to its own overcoming. So much for the two main lines of a materialist
turn on the AI in power in our society.
The blind spot in Marxism and the socialisation by capitalist money
We think it is indeed possible to combine the critical content of both strains: on
the one hand, the traditional Marxist idea of a socialisation of humans, their means and
society as a whole by labour, split by a contradiction between labour and capital and by a
social antagonism which the proletariat must overcome; and, on the other hand, the idea
that the commodity form and capitalist valorisation is the real subject coming out of this
socialisation.
But both strains overlooked the crucial point of our socialisation, which is money.
Although the second strain saw the form of socialisation in social mediation by commodity form and value form instead of labour, this strain also overlooked the importance of money
for the socialisation of humans and machines. The crucial point is that money in capitalism not only becomes the form that socialises humans by their labour and their means of
production; this form in fact becomes something like a machine, or the machinic as
such. In money, an overarching socialising machine is in force, socialising, like a kind of
artificial but at once all-too-human intelligence, humans and their labour on the one hand
and their means on the other, especially all the individual technical machines. So, we have
two types of machines: we have all the different technical machines in our social production, and we have one big, single, overarching social machine in money, the one machine
to rule all other machines and the whole mode of production as such.5
If we want to understand the entanglement of these two types of machines, we have
to look at how the technical machines are productive within a valorisation that works through the
social machine, namely through the functions and circuits of money. Marx is clear and simple
here: whatever the technical machines do on the technical side – mechanical movements, calculations or self-learning – they enter into a production that is a
purely quantitative relation and valorisation. And here, in this quantitative valorisation,
machines always do one and the same thing: they reduce necessary labour-time and
convert it into surplus labour-time. In other words, according to Marx the machine is not
productive, as it seems, because it produces more or better goods. It is productive because
it converts necessary labour-time into surplus labour-time.
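A minimal worked illustration can be given in Marx's standard notation from Capital (necessary labour-time v, surplus labour-time s, rate of surplus value s/v); the hours used below are hypothetical and purely illustrative:

% Working day T split into necessary labour-time v and surplus labour-time s,
% with the rate of surplus value s' = s/v (illustrative hours only).
\[
T = v + s, \qquad s' = \frac{s}{v}
\]
\[
\underbrace{8 = 6 + 2}_{s' = \frac{2}{6} \approx 33\%}
\quad\longrightarrow\quad
\underbrace{8 = 3 + 5}_{s' = \frac{5}{3} \approx 167\%}
\]

In the illustration the length of the working day stays the same; the machine is 'productive' solely through the shift of hours from the necessary to the surplus portion.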
This conversion, although it seems a plain and simple thing, is nothing less than the
socialisation of the whole of production, because according to Marx the production of
surplus labour-time is a social process that includes all the different capitals, all the
work done and all the commodities produced. This is because the reduction of necessary
labour-time not only reduces the values of the various commodities produced. It reduces
the costs of the one and only productive commodity, reproduced by all these capitals
and their various commodities, namely the commodity labour-power. And for this circuit
of reproduction all the individual capitals must come together, especially in the form of
relative surplus production. This relative surplus production is where the two types of
machines are productive: the technical machines and the technic of the social machine.
The two types of machines: the technical machine and the technic of the social machine money
Exactly how this conversion by machines and this relative surplus production function is not decisive here. Decisive is that bringing the whole of production together and
forming a reproductive circuit is what the social machine money does. So, this is crucial if we
want to elaborate capitalist money as a kind of AI: to bring all the technical machines, all
the human labour-powers and all the produced commodities together as purely quantitative
values, and to set them in a common productive valorisation process, we need the social
machine: money.
We can show the joint socialisation by the different generations of technical
machines on the one hand, and by the social machine, money, on the other. These
generations mark different capitalist stages or periods of this socialisation. Consider first
the generation of physical machines, starting with the famous steam engine
and leading to big industry: here the machines socialised the working class by
formalising labour and its conditions of living, homogenising the working class and
their life-forms and concentrating them in big factories and cities; but this formalisation, homogenisation and concentration of labour was also at work in their political
representation, in the huge socialist and social-democratic mass organisations, mass
unions and mass parties, and so on. This is why the idea of a social and political subject
was often identified with this kind of industrial white-male working class, although
it now seems that this kind of homogeneous composition was a historical episode only,
produced by these industrial types of machines.
This became clear with the last technical revolution, brought about by calculation machines
and leading to cybernetics, digitalisation, the Internet and algorithms. These machines
socialise capitalist society in the opposite direction, namely by an individualisation,
decentralisation and dispersion of labour and its means, leading to a kind of post-Fordist, post-industrial society. This is why the social subject is now thought of
as the Multitude, the General Intellect, collective intelligence and so on, rather than as a
homogeneous white-male industrial working class, just as capitalism is no longer called
industrial and Fordist, but 'digital capitalism' or 'knowledge capitalism', for example.
Between these two technical revolutions, which mark the beginning and the 'post-' of
industrialisation, lies the transition from Fordist to so-called post-Fordist production,
characterised by the machines of the second industrial revolution and a (petro-)chemical
and electronic production with a Taylorist and Fordist organisation. While Fordism
started with the mass production of destructive goods, destroying in the two world wars
also masses of the two elements of valorisation itself, after World War II (WWII)
Fordist mass production became a machine for the integration of the masses through the mass production of commodities for civilian consumption. The technic of post-Fordist production, with
its individualisation, flexibilisation and fragmentation, however, sets free a disintegration
and an insecure future that the same technics must try to recapture and control. While
the Fordist machines in the decades after WWII were integration machines through a mass
production to which corresponded mass employment, mass income and mass consumption, and while even the Keynesian state and social welfare were integration
machines through the (re)distribution of wealth and a political machinery, the current
so-called post-Fordist era, with neoliberal politics, a financial-market-driven economy and
the means of production of digital capitalism, works through machines that disintegrate these
previous forms.
With regard to the money-machine, we can assign to these different technical
machines and their forms of socialisation respective historical forms of this social
machine money and its forms of capitalist socialisation. In the industrial age, money
became concentrated and fixed in factories and huge machine parks as 'fixed' and
'constant capital' (Marx), circulating the products of an emerging material mass consumption or, in war times, when money became a war machine, of mass destruction, just
as the physical machines turned their power into forces of mass destruction. With
the beginning of post-war Fordism, masses of money-capital became fixed capital and
constant capital in the production for private and state civilian consumption, depending
on the integration of the masses through the economic circuit of mass employment, mass income, mass production and mass consumption. Today, in the era of calculation
machines, money is concentrated and fixed in high-tech companies and their algorithms and platforms, research hubs and so on, which socialise information, data and
so on and turn them into conditions of valorisation. Money now depends more and
more on a future valorisation that is uncertain and at risk, making money itself risk or
venture capital, but money also goes into the financial instruments and technics that
colonise and hedge these uncertain and precarious futures. In all these forms, the
money machine functions, just like the calculation machines, more by controlling,
operating and governing physical machines and the whole production process than by
really entering into the means of industrial commodity production. These financialised
forms of money derived from capitalist valorisation become more and more
important for our capitalist form of socialisation: with credit, money socialises past
and future private profit; with fictitious capital like stocks and state bonds, it socialises
private or state property; with derivatives, it hedges and socialises the risks of future
capitalist production resulting from these credit and financial forms.
AI: the technic to socialise all the single technical machines
In the historical overview of the forerunners of AI, we have seen how AI was introduced into the work sphere and into capitalist production. In the second part, we have seen
that there has always already been an AI in power: in technical machines
and the social machine money on the one hand, and in humans and their labour on the other, making both
sides (machines and humans) an interface of their mutual capitalist socialisation. The
question this text finally has to address is, then: what kind of capitalist socialisation will
the current forms of AI bring?
Current AI is neither an attraction and concentration of mass workers nor their individualisation and dispersion, as was the case first in the Fordist and then in the post-Fordist mode of production. AI instead is the machine that socialises all other machines and,
by that, also humans. AI is about socialising all the technical and calculation machines
already in force, but until today 'only' connected as single machines with no internal communication, no self- and deep-learning competence. Yet, socialised by AI, they will
socialise all things and all humans by connecting them with chips, sensors and other
interfaces to the Internet to let them communicate, processing their data with self-learning
algorithms, with significance for workers as their every move is increasingly tracked
(Moore 2018). This socialisation of machines is also why the Internet, calculation speed
and big data are so important, and this is why AI is socialising us by becoming part of
our social infrastructure.
With this technic of socialisation, current AI becomes adequate to the social character
of money, as if the two were in a kind of superposition. But this is only the technical
side of AI. This socialisation of machines will deal with all kinds of technical problems
and will possibly find technical solutions. But this technical side of AI will always be an
interface with the non-technical and purely social side, namely with the valorisation process set in force by the AI we have identified in the money-machine. Decisive, therefore, is
what the technical side of AI will mean for this valorisation, and here AI will not solve
the problems it will actually contribute to.
Adorno and Horkheimer's Critical Theory already identified the decisive problem
in their Dialectic of Enlightenment: every advance in AI will lead, like every technological advance and revolution before it, to the same fateful turn, namely that the
same technology that brings progress in productivity will also turn this progress into
its own opposite, into forces of dislocation, crisis and destruction. Adorno especially
argued that technics is a homogenisation, a standardisation and an objectification that is
not only 'purely' technical but also social and cultural, and leads to the commodification, valorisation, ramification and so on of the social and to a 'culture industry'.
Here, he saw a kind of analogy between the technic of the concept
and of identification in general, on the one hand, and the technic of capitalist commodification and valorisation, on the other.
But the fateful turn can also be brought to the point with Marx's basic insight into
the nature of machines and of capitalist progress in general. We can bring
the turn to the point with the conversion of necessary into surplus labour-time, according
to which all machines, at the purely economic level, always produce two things: surplus
labour-time and a surplus population. Just as the classical industrial machines produced surplus
labour-time from those who work and an industrial reserve army from those made redundant,
so too the calculation machines of today and the AI to come, whatever they do and produce in the technical
sense, will on the level of valorisation produce two things: surplus labour-time and a surplus proletariat or a surplus population (rather than 'only' a post-industrial reserve army).
This turn can be generalised to what is, in the end, maybe nothing less than the main
contradiction of capitalist society, surpassing even the contradiction between labour and
capital: that progress on the side of scientific and technological development does not
correspond to progress in the social. Technical progress does not correspond to emancipatory forces or movements or an emancipatory consciousness. On the contrary, the new
technologies of AI go hand in hand with a worldwide social regression through an increase in
religious and populist politics, conspiracy theories and so on, corresponding to what in
the age of industrial mass communication a hundred years ago was, to come back to
Freud, the mass psychology of fascism.
This is because the turn by which reducing labour-time produces surplus labour-time,
a surplus population and economic dislocation is not only a 'purely' economic turn. The
'purely' economic, rather, is processed through ideological forms. The economic is not
addressed in the categories of the capitalist economy and its dynamic described in Marx's
Critique of Political Economy but is deferred into an 'economy' of ideological forms, currently
the economy of right-wing populism. This deferral parallels the situation with the first
generation of machines of electronic mass communication (radio, film). While those communication machines were used, like the machines of Fordist mass production at that time
in general, to homogenise and formalise, to standardise and to mobilise the masses and
the ideological messages, especially in fascism and Stalinism, the current machines are
used to overtake and supersede the fragmentation, individualisation and flexibilisation
that the post-Fordist machines and mode of production had already brought. Now the economic de-valorisation, the social dislocation and the precarity that the age of post-industrial
calculation machines and AI has brought for masses of the population in the classical industrial
countries receive an ideological culturalisation and nationalisation through all the forms of populism, which shall not only explain the dislocation and present its 'real' causes but also
hedge them through forms of a pretended cultural identity, national sovereignty, measures of control and so on. It is as if the good old state and Fordist machines find
their resurgence in these ideologies, which call for a homogenisation and nationalisation
of culture and identities that shall solve the economic and social de-valorisation and
dislocation that the machines of the post-Fordist era bring – but these ideologies, which
mimic the functioning of the old Fordist machines, are posted, spread and shared by the very
same new calculation machines and their social networks, bots, algorithms and so on.
Thus, in the present article, the development of the concept of 'intelligence' as pioneered
by earlier AI researchers has been identified as flawed. We have returned to the core concepts
of philosophy to reveal that the concept of intelligence has always already been flawed. We have identified a
new way to consider its ramifications and to potentially think about a future, better
intelligence outside of a capitalist framework of machinic reasoning.
Notes
1. It is important to distinguish between Hegel's Science of Logic, where he develops the logic of concept thinking in a non-empirical way, and The Phenomenology of Spirit, which is about the logic of self-consciousness as the condition under which the world of experience and history becomes the self-experience of a supra-individual spirit. The different statuses often become conflated. See Hegel (1977), Engster (2014).
2. The idea of a 'second nature' first appeared in ancient Greece in the sense of 'another nature' (Aristotle). It reappears in Hegel, but now it is the freedom of a Spirit who knows the necessities of first nature, sublating the difference from nature's necessities in the building of its own realm (Hegel 2001: 28, 136, 205). Marx and Critical Theory (Adorno 1973: 300–360) brought again a new determination and a kind of materialist turn, as second nature is now thought in analogy or homology to the necessities of first nature (although Marx does not use the term, but speaks of 'naturwüchsig', primordial). Decisive is that, like first nature, the second nature of capitalist society also gets objectified by the quantification of its own social relations – which, however, instead of leading to a realm of freedom, leads to the necessities and the primordialism of capitalist valorisation and makes it, as Marx puts it in Capital, an 'automatic subject' (Marx 1867/2015: 107). With digitisation, algorithms and programming there is, with the production of meaning, a kind of third nature, or a third 'second nature' besides Spirit and Capital. It is, however, important that first nature, too, is not simply 'given by nature'. With Hegel's and Marx's dialectic we should rather search for the technic that splits and entangles first and second nature as precisely the kind of AI that Hegel develops as Spirit and Marx as Capital.
3. Especially in his early writings, Marx argued against Hegel with 'the real men', as if in Hegel man is an abstraction and Spirit, notion and so on are ideal abstractions taken from real men, real life and the mediation done by social practice and labour.
4. One of the starting points was Lukács' reification essay (Lukács 1971).
5. It has become common to distinguish between technic in a narrow sense and technology in a broader sense. Technic in the narrow sense concerns the functioning, here of the machine and of the money-machine, while technology is how this technic determines the whole social context, but also, vice versa, how it gets its functions from this context. For both meanings, what is important here is the non-technical essence as the social, the purely social, and thus rather the logic.
References
Adorno TW (1973) Negative Dialectics (trans. EB Ashton). New York: Seabury Press.
Adorno TW (1993) Hegel: Three Studies (trans. SW Nicholsen). Cambridge, MA: The MIT Press.
Barnes RM (1937/1980) Motion and Time Study: Design and Measurement of Work. Toronto, ON,
Canada: John Wiley & Sons.
Bergson H (1913/2001) Time and Free Will: An Essay on the Immediate Data of Consciousness
(trans. FL Pogson). Mineola, NY: Dover Publications.
Boole G (1847/2007) The Mathematical Analysis of Logic: Being an Essay towards a Calculus of
Deductive Reasoning. Whitefish, MT: Kessinger Publishing, LLC.
Boole G (1854/2009) An Investigation of the Laws of Thought on Which Are Founded the
Mathematical Theories of Logic and Probabilities. Cambridge: Cambridge University Press.
Copeland J (2005) The Essential Turing: The Ideas that Gave Birth to the Computer Age. New York:
Oxford University Press.
Dreyfus H (1992) What Computers Still Can’t Do: A Critique of Artificial Reason. Boston, MA:
MIT Press.
Engster F (2014) Das Geld als Maß, Mittel und Methode. Das Rechnen mit der Identität der Zeit.
Berlin: Neofelis.
Freud S (1917) A Difficulty in the Path of Psycho-analysis. The Standard Edition of the Complete
Psychological Works of Sigmund Freud, Volume XVII (1917-1919). London: The Hogarth
Press and The Institute of Psycho-Analysis.
Fulda F, Horstmann R and Theunissen M (1980) Kritische Darstellung der Metaphysik. Eine
Diskussion Über Hegels ‘Logik’. Frankfurt: Suhrkamp.
Hegel GWF (1977) The Phenomenology of Spirit (trans. AV Miller). Oxford: Oxford University
Press.
Hegel GWF (2001) Philosophy of Right (trans. SW Dyde). Kitchener, ON, Canada: Batoche
Books.
Heidegger M (1993) The Question Concerning Technology. In: Basic Writings (trans. W Lovitt).
New York: HarperCollins, pp. 331–341.
Hobbes T (1651) Leviathan, or the Matter, Forme and Power of a Commonwealth Ecclesiasticall and
Civil. London: British Library, Public Domain.
Hughes M (2015) How to Adapt Your Recruitment and HR Strategy to Wearable Technology. IT
Proportal, 3 August. Available at: http://www.itproportal.com/2015/08/03/how-to-adapt-your-recruitment-and-hr-strategy-to-wearable-technology/
International Labour Office (1927) Scientific Management in Europe. Geneva: International
Economic Conference.
Kant I (1929) Critique of Pure Reason (trans. NK Smith). London: Macmillan.
Kaplan J (2016) Artificial Intelligence: What Everyone Needs to Know. Oxford: Oxford University
Press.
Lukács G (1971) Reification and the consciousness of the proletariat. In: Lukács G (ed.) History
and Class Consciousness. Studies in Marxist Dialectics (trans. R Livingstone). Cambridge, MA:
MIT Press, pp. 88–222.
Lukács G (1977) The Young Hegel: Studies in the Relations between Dialectics and Economics.
Cambridge, MA: MIT Press.
McCarthy J, Minsky ML, Rochester N, et al. (1955) A proposal for the Dartmouth Summer
Research Project on Artificial Intelligence. Available at: http://jmc.stanford.edu/articles/dartmouth.html
Marx K (1867/2015) Capital: A Critique of Political Economy Volume 1, Book One: The Process of
Production of Capital. Moscow: Progress Publishers.
Moore PV (2018) The Quantified Self in Precarity: Work, Technology and What Counts. London:
Palgrave Macmillan.
Nadworny J (1955) Scientific Management and the Unions 1900-1932: A Historical Analysis.
Cambridge, MA: Harvard University Press.
Newell A and Simon HA (1976) Computer science as empirical inquiry: Symbols and search.
Communications of the ACM 19(3): 113–126.
Raunig G and Derieg A (2016) Dividuum: Machinic Capitalism and Molecular Revolution
(Semiotext (E) Foreign Agents Series). Cambridge, MA: MIT Press.
Sohn-Rethel A (1978) Intellectual and Manual Labour: A Critique of Epistemology. Atlantic
Highlands, NJ: Humanities Press.
Taylor FW (1911/1998) The Principles of Scientific Management. New York: Dover Publications.
Author biographies
Phoebe Moore is based at the University of Leicester School of Business and is the Director of the
Research Centre for Philosophy and Political Economy. Moore writes about technology and the
workplace. Her last book is entitled The Quantified Self in Precarity: Work, Technology and
What Counts (Routledge, 2018).
Frank Engster works at Helle Panke, Rosa Luxemburg Foundation. His specialisms are in Marxism,
money and time. His last book is entitled Das Geld als Maß, Mittel und Methode. Das Rechnen mit
der Identität der Zeit (Neofelis, 2014).