Turing’s Rules for the Imitation Game
GUALTIERO PICCININI
Department of History and Philosophy of Science, University of Pittsburgh, 1017 Cathedral of
Learning, Pittsburgh, PA 15260, USA; E-mail:
[email protected]
Abstract. In the 1950s, Alan Turing proposed his influential test for machine intelligence, which
involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing’s rules for the test have been given. According to the standard reading of Turing’s
words, the goal of the interrogator was to discover which was the human being and which was the
machine, while the goal of the machine was to be indistinguishable from a human being. According
to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the
interrogator – unaware of the real purpose of the test – was attempting to determine which of the
two contestants was the woman and which was the man. The present work offers a study of Turing’s
rules for the test in the context of its stated purpose and of his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading into Turing’s work faces severe interpretative difficulties. So,
the controversy over Turing’s rules should be settled in favor of the standard reading.
Key words: Turing test
1. Introduction
In his 1950 Mind paper, Alan Turing proposed replacing the question "Can machines think?" with the question "Are there imaginable digital computers which
would do well in the imitation game?" (Turing, 1950, p. 442). The setup for what
came to be known as the Turing test was introduced in the following famous passage:
[The imitation game] is played with three people, a man (A), a woman (B),
and an interrogator (C) who may be of either sex. The interrogator stays in a
room apart from the other two. The object of the game for the interrogator is
to determine which of the other two is the man and which is the woman. He
knows them by labels X and Y, and at the end of the game he says either "X
is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put
questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A’s object in the game
to try to cause C to make the wrong identification. His answer might therefore
be
"My hair is shingled, and the longest strands are about nine inches long."
In order that tones of voice may not help the interrogator the answers should be
written, or better still, typewritten. The ideal arrangement is to have a teleprinter
communicating between the two rooms. Alternatively the question and answers
can be repeated by an intermediary. The object of the game for the third player
(B) is to help the interrogator. The best strategy for her is probably to give
truthful answers. She can add such things as "I am the woman, don’t listen to
him!" to her answers, but it will avail nothing as the man can make similar
remarks.
We now ask the question, "What will happen when a machine takes the part of
A in this game?" Will the interrogator decide wrongly as often when the game
is played like this as he does when the game is played between a man and a
woman? These questions replace our original, "Can machines think?" (Turing,
1950, pp. 433–434).
When the imitation game involved two human beings, Turing explained the
rules in some detail. However, after introducing machines into the game, Turing
did not make the rules explicit. According to the traditional interpretation of this
passage, when a machine and a human being are playing the game, the goal of the
interrogator is to discover which is the human being and which is the machine,
while the goal of the machine is to be mistaken for a human being. I will refer to
this as the standard reading. Under the standard reading, the Turing test is squarely
a comparison between human beings and machines, where a skillful interrogator
can require the machine to demonstrate mastery of human language, knowledge,
and inferential capacities. Possessing these abilities is, by most standards, a clear
sign of intelligence or thinking.1 So, the question of whether a machine can do
well at the imitation game can be seen as a sensible replacement for the question
of whether a machine can think.
Some authors have read Turing’s passage in a more literal way, suggesting that
the goal of the machine is to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – is still attempting to determine which
of the two players is the woman and which is the man. I will call this the literal
reading.2 Supporters of the literal reading disagree over which of the machine’s
capacities are being uncovered by Turing’s game. Some argue that his point is
testing the machine’s ability to utilize language like a person; the blindness of
the interrogator and the gender impersonation are introduced for methodological
reasons – in order to make the test unbiased.3 Others suggest that Turing’s point
was not to test the machine’s ability to utilize language like a human, but literally
to test the machine’s competence at replicating the abilities of a human male who
is attempting to imitate a human female.4
As far as I know, no one has defended the standard reading against this revisionist line. The present work offers a thorough study of Turing’s rules for the imitation
game in the context of its stated purpose and of his other texts. Several independent and mutually reinforcing lines of evidence that support the standard reading
will be presented, while fitting the literal reading into Turing’s work will face severe
interpretative difficulties. The evidence supporting the standard reading is found by
considering other sections of the Mind paper, its overall argumentative structure,
and relevant statements made by Turing on other occasions. So, the controversy
over Turing’s rules should be settled in favor of the standard reading.
2. How Literal is the Literal Reading?
An opponent might accuse the standard reading of unnecessarily attributing ambiguity to Turing’s description of the game’s rules. If Turing meant the machine to
simulate not a woman, as his words seem to suggest, but a generic human being,
why didn’t he say so from the start? The standard reading makes Turing’s description of the rules appear confusingly incomplete, while the literal reading seems to
take Turing’s words at face value. Other things being equal, this opponent would
conclude, the literal reading should be preferred over the standard one. Before
turning to the evidence in favor of the standard reading, let me dispense with this
potential objection.
It turns out that, when examined closely, the literal reading generates an interpretative problem similar to the one just mentioned. Suppose the literal reading is
correct. Turing’s words would still fall short of fixing the rules of the game, this
time with respect to the interrogator’s role. Does the interrogator know that she is
dealing with a machine and a woman, or does she incorrectly think she is dealing
with a woman and a man? Turing doesn’t say anything in this respect. This question is far from irrelevant, as we would expect the interrogator’s strategy, and the
chances of making correct guesses, to be different in each of the two cases. So, the
literal reading is also committed to attributing ambiguity to Turing’s explanation
of the rules. Usually, the proponents of the literal reading assume that the interrogator should not know that she is talking to a machine.5 But such a misconception on the interrogator’s part changes the game’s original setting, in which the interrogator
was correctly informed that the players were a woman and a man. This change
does resolve the ambiguity resulting from the literal reading, but generates the
following question: if Turing meant the interrogator to be unaware of the real purpose of
the game, why didn’t he say so? The ambiguity resulting from the literal reading,
and the inference required to resolve the ambiguity, make the literal reading no
longer literal. As both readings attribute an ambiguity to Turing’s description of
the rules, neither reading is better off than the other in this respect.
3. The Turing Test as a Replacement for the Question "Can Machines
Think?"
Any reading of Turing’s rules must explain how the imitation game fulfills his
goal of replacing the question of whether machines can think. As I said, the standard reading’s account is straightforward: if a machine can demonstrate mastery
of human language, knowledge, and inferential capacities to the point that it is
mistaken for a human being, most people would consider it intelligent – or so
they should according to Turing. With respect to this replacement goal, though, the
literal reading generates more questions than answers. Presumably, any successful
simulation of a human being includes a simulation of both a human gender and the
human ability to imitate other human beings. Under the standard reading, this fact
could be exploited by the interrogator, who can ask both players to impersonate a member of either gender and see how they compare at that task. However, it is not obvious
how this gender imitation task relates to the question of human intelligence. The
literal reading restricts the entire test to the question of whether a human male or
a mechanical male can imitate the opposite sex better. In what way is this ability
relevant to whether machines think? Why is proficiency at this task sufficient to
prove that a machine is intelligent? One possible answer is that the machine needs
to simulate the mental processes of a human male to a degree of sophistication that
is sufficient to also simulate the human male impersonating a human female. This
might convince one that the machine is intelligent. But if the machine is able to
simulate a man to such a degree, why not ask it a broad range of questions rather
than limiting the task to the impersonation of the opposite gender?
These questions illustrate that it is not obvious how the test – as defined by the
literal reading – fulfills Turing’s replacement goal. The proponents of the literal
reading owe us not only an answer to these questions, but also an explanation of why Turing did not address them at all.6 If, as some have argued, he was simply introducing an experimental design to make the test unbiased, why didn’t he say so?
Again, recall that, in the imitation game played by humans, the interrogator knows
that both players are human. If Turing thought the game with the machine needed
the extra precaution of deceiving the interrogator as to the nature of the game,
he should have – and presumably would have – both said it and explained why
he thought so.7 Instead, he spent most of his rather long text considering various
general attributes of human beings, and various general reasons for believing that
machines cannot think. In each case, he argued that none of those reasons were
obstacles to the conclusion that a digital computer would eventually be able to
play the imitation game. His punchline was that a machine could be developed
to match all the elements that are relevant to human intelligence, including the
ability to learn from experience and one’s own mistakes. Turing never discussed
any elements that would make the machine able or unable to do the gender imitation, nor did he mention how his detailed discussions of various human abilities
related to gender imitation. As a result, it is natural to read the Mind paper as being
entirely devoted to the motivation and explication of the test as understood under
the standard reading. Turing was neither a sloppy thinker nor a sloppy writer. If
he had wanted to propose his test under the literal reading, he would likely have motivated and explained it in detail, instead of concentrating solely on what
potentially makes humans and machines intellectually different, or intellectually
equal.
4. The Turing Test in Section 2 of the Mind Paper
The passage describing the test constitutes most of section 1 of the Mind paper.
In section 2, a few lines after introducing the game, Turing wrote that "[t]he new
problem has the advantage of drawing a fairly sharp line between the physical and
the intellectual capacities of a man" (Turing, 1950, p. 434).8 He did not mention the
capacities of human beings as gender imitators, but he did give "specimen questions
and answers" between the interrogator and the other players. The questions were
no longer relevant to being a man or a woman, as the examples given by Turing for the game with only human players had been. Now, the “specimen questions and answers” involved writing a sonnet, adding numbers, and playing chess. These were among the paradigmatically intelligent human activities that Turing referred to – in other papers – as tasks on which computers needed to be tested to show
they were intelligent.9 Turing’s examples are in line with the standard reading,
according to which the goal of the interrogator is to distinguish the human being
from the machine. If one, instead, takes the literal reading, one should explain why
Turing’s examples are not about gender differences, but about general intellectual
abilities of human beings, at which men and women hardly differ.
At the end of section 2, Turing made two additional points that are hard to
reconcile with the literal reading. The first is that the "counterpart" to the imitation
game is for a "man" (i.e. a human being) to "pretend to be the machine" (Turing,
1950, p. 435). This makes sense if Turing meant to test a machine simulating a
human being. If he meant to test a machine simulating a man imitating a woman,
he should have said that the counterpart to his test is for a woman to imitate a man
imitating a machine. Turing’s second point is a suggestion that the best strategy
for the machine is giving answers that would naturally be given by a "man."10
This is not in direct contradiction with the literal reading, for the claim might be
that the machine should literally simulate the answers given by a man imitating
a woman. But the literal reading makes this assumption oddly unwarranted. For,
under the literal reading, the machine could follow a strategy that appears at least
equally good, if not better: imitate the woman directly, giving answers that would
be given by a woman qua woman, rather than by a man imitating a woman. In
contrast, under the standard reading, given that Turing’s "man" can stand for a
generic human being, the assumption that the machine should attempt to provide
human-like answers becomes straightforward.
5. Turing’s Second Description of the Test
The most serious problem with the literal reading is that, in section 5 of the Mind
paper, Turing described the test again, in accordance with the standard reading. He
described the game as that of a machine imitating a human being, which, as usual,
he called "man."11 When they haven’t ignored section 5, the proponents of the
literal reading have suggested that, in it, Turing described a different "version" of
the test (as understood under the literal reading). In the first "version," the machine
was playing against a woman; this time, the test is alleged to involve a machine
playing against a human male, and both the mechanical and the human player
must pretend to be women.12 This suggestion is entirely ad hoc, for there is no
independent evidence supporting it. Nowhere in the paper did Turing mention any
change in his description of the test, or in the rules of the game. If he had meant to describe two tests rather than one, we would expect him to say so and to explain
why he was making such a change. Moreover, there is textual evidence against
the hypothesis that, in sections 1 and 5, Turing was describing two different tests.
In section 3, Turing discussed the already introduced game (from section 1 of the
paper), and then pointed in the direction of section 5, where the game was described
as being between a computer and a human being.13 In section 3, as throughout
the paper, Turing referred to the game, or test, without ever using the plural, and
mentioned no change in the rules. There, as in the rest of the Mind paper, Turing was writing about one and the same test, namely the test as defined by the
standard reading.
6. The Turing Test Outside of the Mind Paper
The Turing test was foreshadowed in a report on mechanical intelligence written
by Turing a few years before the Mind paper. Even before the actual construction
of digital computers, Turing and others began writing computer programs for chess
playing and other activities.14 The performance of such programs could be tested
by asking a person to compute, at each stage of the game, what the next move
should be according to the program. A human being who is given paper, pencil,
and a set of instructions to carry out was called by Turing a “paper machine.” A
paper machine behaves like a digital computer executing a program:
The extent to which we regard something as behaving in an intelligent manner
is determined as much by our state of mind and training as by the properties of
the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to
imagine intelligence. With the same object therefore it is possible that one man
would consider it as intelligent and another would not; the second man would
have found out the rules of its behaviour.
It is possible to do a little experiment on these lines, even at the present stage
of knowledge. It is not difficult to devise a paper machine which will play a
not very bad game of chess. Now get three men as subjects for the experiment
A, B, C. A and C are to be rather poor chess players, B is the operator who
works the paper machine. (In order that he should be able to work it fairly fast
it is advisable that he be both mathematician and chess player.) Two rooms are
used with some arrangement for communicating moves, and a game is played
between C and either A or the paper machine. C may find it quite difficult to
tell which he is playing [sic]. (This is a rather idealized form of an experiment
I have actually done.) (Turing, 1948, p. 23).
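To make the structure of this trial concrete, the following minimal sketch in Python mimics the arrangement Turing describes: player C’s moves are routed to a hidden opponent, which is either a human (player A) or an operator working through a fixed set of instructions (the “paper machine”), and C must then guess which one produced the replies. The class and function names and the toy move-selection rule are illustrative assumptions of mine; nothing in Turing’s report specifies them.

```python
import random


class PaperMachine:
    """Stand-in for operator B working through a fixed set of instructions:
    given the moves played so far, the instructions determine the reply.
    The rule used below is a toy placeholder, not Turing's chess program."""

    def __init__(self, rules):
        self.rules = rules

    def reply(self, history):
        return self.rules(history)


class HumanStub:
    """Stand-in for human player A; in a real trial a person chooses the move."""

    def reply(self, history):
        return input("A's move: ")


def run_trial(c_moves, hidden_opponent):
    """Route C's moves to a hidden opponent and collect the replies.
    C sees only the replies, never who produced them."""
    history, transcript = [], []
    for move in c_moves:
        history.append(move)
        answer = hidden_opponent.reply(history)
        history.append(answer)
        transcript.append((move, answer))
    return transcript


if __name__ == "__main__":
    # A toy "set of instructions": reply with the next canned move.
    canned = ["e5", "Nc6", "Bc5"]
    rules = lambda history: canned[min(len(history) // 2, len(canned) - 1)]
    opponent = random.choice([PaperMachine(rules), HumanStub()])
    print(run_trial(["e4", "Nf3", "Bc4"], opponent))
    guess = input("Was C playing the human (A) or the paper machine? ")
```

The point of the sketch is only that, as in Turing’s description, the identity of the hidden opponent is fixed before play and concealed from C, who must judge from the replies alone.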
The "experiment" described here closely resembles the imitation game under
the standard reading. In this passage, Turing suggested that playing chess against a
machine could generate the feeling that the machine was intelligent. In the title of
the section from which the above quote is taken, Turing called intelligence an “emotional concept,” meaning that there is no objective way to apply it. Direct experience with a machine that plays chess in a way that cannot be distinguished from human play could convince one to attribute intelligence to the machine. This is very likely an important part of the historical root of Turing’s proposal of the imitation game. Believing that
"[t]he extent to which we regard something as behaving in an intelligent manner
is determined as much by our state of mind and training as by the properties of
the object under consideration," he hoped that, by experiencing the versatility of
digital computers at tasks normally thought to require intelligence, people would
modify their usage of terms like “intelligence” and “thinking,” so that such terms would apply to the machines themselves.15
Finally, Turing described his test on two other public occasions. The first was
a talk broadcast on the BBC Third Programme on May 15, 1951. The relevant
portion went as follows:
I think it is probable for instance that at the end of the century it will be possible
to programme a machine to answer questions in such a way that it will be
extremely difficult to guess whether the answers are being given by a man or by
the machine. I am imagining something like a viva-voce examination, but with
the questions and answers all typewritten in order that we need not consider
such irrelevant matters as the faithfulness with which the human voice can be
imitated (Turing, 1951, pp. 4–5).
The second was a discussion between Turing, M.H.A. Newman, Sir Geoffrey Jefferson, and R.B. Braithwaite, broadcast on the BBC Third Programme on January
14 and 23, 1952. Turing said:
I would like to suggest a particular kind of test that one might apply to a machine. You might call it a test to see whether the machine thinks, but it would
be better to avoid begging the question, and say that the machines that pass are
(let’s say) Grade A machines. The idea of the test is that the machine has to try
to pretend to be a man, by answering questions put to it, and it will only pass
if the pretence is reasonably convincing... (Turing, 1952, pp. 4–5, italics in the
original).
The topic of these radio broadcasts was whether digital computers could be said
to think. Turing’s advocated purpose was the same in these occasions as in his
Mind paper: to replace the question of whether machines could think with the
question of whether machines could pass his test. The terms used, and the gist
of Turing’s speeches, closely resembled those of the Mind paper. Yet, on both occasions,
Turing unambiguously described the test as understood under the standard reading.
7. Conclusion
According to those who knew him or have studied his life, Turing was often a
surprisingly fast thinker. He would get frustrated when others took a long time
to get points that seemed obvious to him.16 Perhaps because of this, his writing
was lucid but not always easily understood. In his logic papers, some apparent
obscurities resulted from him skipping some of the inferential steps, and can be
clarified by adding the missing steps.17 In light of this, the most likely explanation
for the ambiguity in Turing’s rules is that he expected his readers to fill in the
details in accordance with the game’s purpose. Given that the test is a replacement
for the question of whether machines can think, the machine must pretend to be
human, while the interrogator tries to determine which of the two players is the
machine and which is the human being. A careful examination of Turing’s work,
at any rate, provides plenty of evidence that the standard reading of his rules is
correct. Turing’s own imitation game did not involve a machine simulating a man
who is pretending to be a woman, but a machine simulating a human being.
Acknowledgements
The writing of this paper was prompted by a discussion with Susan Sterrett, for
which I am very grateful. Thanks to the participants of The Future of the Turing
Test for the fruitful discussion that took place there, and to Becka Skloot for many
helpful comments.
Notes
1 For the present purpose of understanding Turing’s text, following his usage, I use the terms "intelli-
gence" and "thinking" interchangeably. This is not meant to suggest that, in other contexts, no useful
distinction can be drawn between the two.
2 Webb, 1980, p. 238; Haugeland, 1985, p. 6; Genova, 1994, pp. 313–315; Cowley and MacDorman,
1995, p. 122, esp. n. 10; Hayes and Ford, 1995, p. 972; Saygin et al., 2000; Traiger, 2000.
3 Haugeland, 1985, pp. 6–8; Saygin et al., 2000; Traiger, 2000.
4 Genova, 1994, p. 315; Hayes and Ford, 1995, p. 977. According to Genova, the literal reading
accounts for Turing’s replacement proposal because Turing held the view that thinking is imitating;
thus, a machine successful at imitating must be thinking (Genova, 1994, pp. 315–322). According to
Genova, the request that the machine specifically simulate a human male imitating a human female
is explained by what she takes to be Turing’s views on sexual identity, due to his own experience as
a homosexual (ib., esp. pp. 314–315).
5 The ambiguity is recognized by Haugeland, 1985, p. 6, n. 2. The additional claim that the interrogator must be deceived about the purpose of the game is explicitly made by Hayes and Ford, 1995, p.
972; Saygin et al., 2000; Traiger, 2000.
6 These questions have actually been answered at length by Sterrett (2000), who argues that the test
defined by the literal reading makes a better test for intelligence than the test defined by the standard
reading. Of course, Sterrett does not attribute her arguments to Turing. The issue of what is the best
test for machine intelligence is irrelevant to the topic of the present paper. Here, I concentrate on
what Turing said, and didn’t say.
7 Genova’s account in terms of Turing’s alleged view that thought is imitation is even more problematic. First, such a view is no reason to restrict a test for thought to the simulation of a human male
imitating a human female, rather than allowing for a wider range of simulations. Second, and more
importantly, Genova provides no textual evidence to warrant her attribution to Turing of the view that
thought is imitation. In studying his work, I have found no evidence that Turing held such a view.
8 Since Turing’s language – as that of most of his colleagues – was not politically correct by today’s
standards, he generally used "man" to refer to a generic human being.
9 The following examples are from papers written before the Mind paper. The first time he mentioned
machine intelligence in a paper, Turing did so in a discussion of mechanical chess-playing (1945,
p. 41). In a more extensive discussion of machine intelligence for an audience of mathematicians,
he suggested that machines could prove their intelligence by both playing chess and doing mathematical derivations in a formal logical system (1947, pp. 122–123). In a report entirely devoted
to machine intelligence, Turing discussed the possibility of programming machines to play various
games, to learn languages, to do translations, cryptanalysis (which he used to call "cryptography"),
and mathematics (1948, p. 13).
10 The text goes as follows:
It might be urged that when playing the ‘imitation game’ the best strategy for the machine may
possibly be something other than imitation of the behaviour of a man. This may be, but I think it is
unlikely that there is any great effect of this kind. In any case there is no intention to investigate here
the theory of the game, and it will be assumed that the best strategy is to try to provide answers that
would naturally be given by men (Turing, 1950, p. 435).
11 Notice the initial reference to section 3, which turns out to have some importance:
We may now consider again the point raised at the end of §3. It was suggested tentatively that
the question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers
which would do well in the imitation game?’ If we wish we can make this superficially more general,
and ask ‘Are there discrete state machines which would do well?’ But in view of the universality
property we see that either of these questions is equivalent to this, ‘Let us fix our attention on one
particular digital computer C. Is it true that by modifying this computer to have an adequate storage,
suitably increasing its speed of action, and providing it with an appropriate programme, C can be
made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’
(Turing, 1950, p. 442).
Recall also that "[t]he object of the game for the third player (B) is to help the interrogator" (ib., p.
434).
12 Genova, 1994, p. 314; Saygin et al., 2000. According to Traiger’s (2000) reading of this passage,
in the modified test the human player can be either a man or a woman, but he or she has to play the
role of a woman.
13 Here is the relevant excerpt:
There are already a number of digital computers in working order, and it may be asked, ‘Why not
try the experiment straight away? It would be easy to satisfy the conditions of the game. A number
of interrogators could be used, and statistics compiled to show how often the right identification was
given.’ The short answer is that we are not asking whether all digital computers would do well in the
game nor whether the computers at present available would do well, but whether there are imaginable
computers which would do well. But this is only the short answer. We shall see this question in a
different light later (Turing, 1950, p. 436).
The word “later” is a clear reference to the quote taken from section 5, which – as noted in n. 11 – starts with a cross-reference to section 3 and comes after Turing’s explanation of digital computers, their property of universality, and the importance of programs.
14 See Hodges, 1983, chapt. 6.
15 In this respect, this is what he said in the Mind paper: "I believe that at the end of the century the
use of words and general educated opinion will have altered so much that one will be able to speak
of machines thinking without expecting to be contradicted" (Turing, 1950, p. 442).
16 See Newman, 1955, p. 255; Turing, 1959, pp. 13, 27–28; numerous relevant episodes are also
reported by Hodges, 1983.
17 For some examples, see Piccinini, 2001.
References
Cowley, S.J. and MacDorman, K.F. (1995), ‘Simulating Conversations: The Communion Game’, AI
and Society 9, pp. 116–139.
Genova, J. (1994), ‘Turing’s Sexual Guessing Game’, Social Epistemology 8, pp. 313–326.
Haugeland, J. (1985), Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
Hayes, P. and Ford, K. (1995), ‘Turing Test Considered Harmful’, Proceedings of the Fourteenth
International Joint Conference on Artificial Intelligence, Montreal, Quebec, Canada, pp. 972–
977.
Hodges, A. (1983), Alan Turing: The Enigma. New York: Simon and Schuster.
Ince, D.C., ed. (1992), Collected Works of A.M. Turing: Mechanical Intelligence. Amsterdam: North
Holland.
Newman, M.H.A. (1955), ‘Alan Mathison Turing’, in Biographical Memoirs of Fellows of the Royal
Society. London: Royal Society, pp. 253–263.
Piccinini, G. (2001), ‘Turing and the Mathematical Objection’, Forthcoming in Minds and Machines.
Saygin A.P., Cicekli I. and Akman V. (2000), ‘Turing Test: 50 Years Later’, Minds and Machines 10,
pp. 463–518.
Sterrett, S. (2000), ‘Turing’s Two Tests for Intelligence’, Minds and Machines 10, pp. 541–559.
Traiger, S. (2000), ‘Making the Right Identification in the Turing Test’, Minds and Machines 10, pp.
561–572.
Turing, A.M. (1945), ‘Proposal for Development in the Mathematical Division of an Automatic
Computing Engine (ACE)’, reprinted in Ince (1992), pp. 1–86.
Turing, A.M. (1947), ‘Lecture to the London Mathematical Society on 20 February 1947’, reprinted
in Ince (1992), pp. 87–105.
Turing, A.M. (1948), ‘Intelligent Machinery’, reprinted in Ince (1992), pp. 107–127.
Turing, A.M. (1950), ‘Computing Machinery and Intelligence’, Mind 59, pp. 433–460.
Turing, A.M. (1951), ‘Can digital computers think?’ Typescript of talk broadcast on BBC Third
Programme, 15 May 1951, AMT B.5, Contemporary Scientific Archives Centre, King’s College
Library, Cambridge.
Turing, A.M. (1952), ‘Can automatic calculating machines be said to think?’ Typescript of broadcast
discussion on BBC Third Programme, 14 and 23 January 1952, between M.H.A. Newman, A.M.
Turing, Sir Geoffrey Jefferson, R.B. Braithwaite, AMT B.6, Contemporary Scientific Archives
Centre, King’s College Library, Cambridge.
Turing, E.S. (1959), Alan M. Turing. Cambridge: Heffer & Sons.
Webb, J.C. (1980), Mechanism, Mentalism, and Metamathematics. Dordrecht: D. Reidel.