Turing's O-Machines, Searle, Penrose and the Brain
B. Jack Copeland
1. Johnson-Laird is surely right when he discerns three mutually exclusive
positions in current thinking concerning the relationship between human
mentality and computation (1987: 252). He suggests that the only
alternative to these three positions is that consciousness is not
'scientifically explicable' (ibid.).
(1) The human brain (or, variously, mind or mindbrain) is a computer,
equivalent to some Turing machine.
There are, of course, innumerable fine-grained renderings of this theory:
the brain is variously thought to be a digital computer, an analogue
computer, a stored-program machine, a program-controlled machine, a
massively parallel distributed processor, etc.
(2) The activity of a human brain can be simulated perfectly by a
Turing machine but the brain is not itself a computing machine.
Analogies offered in the literature include hurricanes, the motion of the
planets around the sun, and the digestion of pizza. None of these
phenomena is itself computational in nature yet, supposedly, all can be simulated
perfectly by a Turing machine (in the sense that the machine can compute
descriptions of the phenomena to any desired number of decimal places
and on any temporal grid). (2) is Searle's position (1992, 1997).
(3) The brain's cognitive activity cannot in its entirety be simulated
by a computing machine: a complete account of cognition will
need 'to rely on non-computable procedures' (Johnson-Laird
1987: 252).
Penrose (1994) maintains a version of (3) which is so strong that it is not
entailed by the conjunction of the negations of (1) and (2).
I shall be arguing that (1), (2) and (3) do not exhaust the alternatives
open for scientific consideration. At least two further hypotheses belong
alongside (1) to (3):
(4) The brain is what Turing called an O-machine.
(5) The cognitive activity of the brain can be simulated perfectly by
an O-machine, but the brain is not itself an O-machine; such
simulation cannot be effected by a Turing machine.
O-machines (§2) are digital computing machines. They generate digital
output from digital input by means of a step-by-step procedure consisting
[...]
or other of these two states according to whether the value of the function
is 1 or 0. (When a function g is computable by an O-machine whose (only)
oracle serves to return the values of a function f, then g is sometimes said
to be computable relative to f.)
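The point can be pictured concretely. The following is a minimal sketch (illustrative only; the Python notation and the function names are mine, not Turing's) in which the oracle f is a black box consulted by an otherwise ordinary effective procedure:

    from typing import Callable

    def g(n: int, f: Callable[[int], int]) -> int:
        """g(n) = (f(0) + f(1) + ... + f(n)) mod 2.

        Everything here except the calls to f is routine step-by-step
        computation; g is thus computable relative to f, whatever f is,
        and need not be Turing-machine-computable if f is not.
        """
        return sum(f(i) for i in range(n + 1)) % 2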
One particular O-machine, the halting function machine, has as its
'classical' part the universal Turing machine specified by Turing in 1936
and as its 'nonclassical' part a primitive operation that returns the values
of Turing's famous halting function H(x, y) (Turing 1936). (The halting
function is defined thus: H(x, y) = 1 if and only if the xth Turing machine
eventually halts if set in motion with the integer y inscribed on its tape, say
in binary code (think of y as being the machine's input); and H(x, y) = 0
otherwise.) The halting function machine can compute many functions
that are not Turing-machine-computable. This is because of the familiar
phenomenon of the reducibility of one function to others (for example,
multiplication is reducible to addition). All functions reducible to the
halting function and/or the primitives of an ordinary Turing machine are
computable by the halting function machine.
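To make the reducibility point concrete, here is another hedged sketch (again illustrative: no halting oracle can actually be implemented by ordinary means, so H appears simply as a parameter). The self-halting function s(x) = H(x, x) is trivially reducible to H, and so is computable by the halting function machine although by no Turing machine:

    from typing import Callable

    Oracle = Callable[[int, int], int]   # H(x, y), returning 1 or 0

    def self_halting(x: int, H: Oracle) -> int:
        """s(x) = 1 iff the x-th Turing machine halts on input x.

        A single oracle call effects the reduction; the 'classical'
        part of the machine contributes nothing beyond passing x twice.
        """
        return H(x, x)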
As previously remarked, Turing introduces O-machines without
discussion of how the primitive operations might be implemented. Equally, in his
earlier paper of 1936 there is no discussion of how the primitive operations
of a Turing machine might be implemented. In both papers, the role played
by his notional computing machines is to delineate classes of mathematical
problems; the possibility of real existence is beside the point of this exer-
cise. In 1936 it was, in fact, far from clear whether the construction of a
universal Turing machine lay within the bounds of physical possibility.
Once asked whether he thought that a universal Turing machine could
actually be constructed, Turing dismissively replied that the machine
would need to be as big as the Albert Hall (this was related to me by Robin
Gandy, who worked with Turing during the later part of the war). It was
not until Turing became acquainted with electronic technology developed
for a completely different purpose that he realized that the notional
machines of his 1936 paper could actually be built. Perhaps his
O-machines, too, will one day become an engineering reality. The existence
of physical processes that cannot be simulated by Turing machine is
certainly a logical possibility, there being no shortage of law-like
regularities for such processes to exhibit: functions that are Turing-machine-
computable form a relatively slender proper subset of the vast space of
functions on the natural numbers (let alone of the vaster space of functions
on the real numbers). Speculation as to whether there may actually be
physical processes that cannot be simulated by Turing machine stretches
back over at least four decades (for example Da Costa and Doria 1991;
Doyle 1982; Geroch and Hartle 1986; Komar 1964; Kreisel 1967, 1974;
[...]
sufficient for the mind', it is not one that can be established by the Chinese
room argument.
Searle's mistake concerning the extent of what is asserted by the Church-
Turing thesis is, in fact, a common one, which can be encountered
frequently in recent writing on the philosophy of mind. For example, the
Churchlands, like Searle, purport to deduce from Church's Thesis that the
mindbrain can in principle be simulated by a Turing machine (1983: 6).
They further assert that Turing's
    results entail something remarkable, namely that a standard digital
    computer, given only the right program, a large enough memory and
    sufficient time, can compute any rule-governed input-output function.
    That is, it can display any systematic pattern of responses to the
    environment whatsoever. (1990: 26)
Turing had no results which entail this. Rather, he had a result that entails
the opposite. The various functions that, in 1936, he proved to be not
Turing-machine-computable, for example, the halting function, are
mathematical characterizations of systematic patterns of responses to the
environment that cannot be displayed by a standard digital computer (even
one unfettered by resource constraints).
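The result in question is Turing's 1936 proof that no Turing machine computes H; the familiar diagonal argument can be sketched as follows (the rendering in Python and the names halts and diagonal are mine):

    # Suppose, for contradiction, that the halting function were
    # computable by program. It would be a total function like this:

    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetical decider: True iff the program halts on the input."""
        ...  # no body can be supplied -- this is what the proof rules out

    def diagonal(program_source: str) -> None:
        # Do the opposite of whatever `halts` predicts for a program
        # applied to its own source text.
        if halts(program_source, program_source):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt at once

    # Applying diagonal to its own source yields a contradiction: it
    # halts if and only if `halts` says it does not. So no such
    # program exists, and H is not Turing-machine-computable.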
Perhaps the introduction of a term for the fallacy embraced here by
Searle and the Churchlands will assist in its extinction. Since the fallacy is
invariably committed in the name of either Church or Turing, I suggest the
Church-Turing fallacy. One commits the Church-Turing fallacy by
believing that the Church-Turing thesis, or some formal result proved by
Turing or Church, secures the truth of the proposition that the brain can be
simulated by a Turing machine. Here is a third example of the fallacy at work:
    If you assume that [consciousness] is scientifically explicable ... [and]
    [g]ranted that the [Church-Turing] thesis is correct, then ... [i]f you
    believe [functionalism] to be false ... then ... you [should] hold that
    consciousness could be modelled in a computer program in the same
    way that, say, the weather can be modelled ... [and if] you accept
    functionalism ... you should believe that consciousness is a computational
    process. (Johnson-Laird 1987: 252)
No less common in the literature are statements which, while not explicitly
involving the Church-Turing fallacy, make exaggerated claims on behalf of
Turing machines. For example, the entry on Turing in the recent A
Companion to the Philosophy of Mind contains the following assertions:
'we can depend on there being a Turing machine that captures the
functional relations of the brain', for so long as 'these relations between input
and output are functionally well-behaved enough to be describable by ...
mathematical relationships ... we know that some specific version of a
Turing machine will be able to mimic them' (Guttenplan 1994: 595). Also
typical are the following:
    If a mental process can be functionally defined as an operation on
    symbols, there is a Turing machine capable of carrying out the
    computation. (Fodor 1981: 130; see also 1983: 38-39)

    ... any process which can be formalized so that it can be represented
    as a series of instructions for the manipulation of discrete elements
    can, at least in principle, be reproduced by [a universal Turing
    machine]. (Dreyfus 1992: 72)
The logical availability of hypothesis (4) gives the lie to all of these claims.
The relations between the inputs and outputs of an O-machine, say the
halting-function machine, are certainly 'functionally well-behaved enough
to be describable by ... mathematical relationships'. There is nothing
unmathematical or ill-defined about the halting function! Contrary to the
claims by Fodor and Dreyfus, hypothesis (4) postulates mental processes
consisting of operations on discrete symbols that cannot be carried out by a
universal Turing machine. Turing machines and O-machines alike execute
'series of instructions for the manipulation of discrete elements'; by
definition, each additional primitive operation of an O-machine is 'an operation
on symbols' - in essence, an operation that replaces a binary string with 1
or 0. The O-machines point up the fact that the notion of a symbol-
processing machine is more general than the notion of a Turing machine.
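The generality is easily displayed. In the following sketch (an illustration of the conceptual point only, not a formalism from Turing; the class and its methods are my own invention), a symbol-processing machine is fixed by its repertoire of primitive string operations, and nothing in that idea restricts the repertoire to mechanically implementable operations:

    from typing import Callable, Dict, List

    # A primitive operation replaces one string of discrete symbols with
    # another; an oracle primitive replaces a binary string with '1' or '0'.
    Primitive = Callable[[str], str]

    class SymbolMachine:
        """Executes a series of instructions over discrete symbols.

        With only classically implementable primitives this is in effect
        a Turing machine; add an oracle primitive (one returning, say,
        the values of the halting function) and it is an O-machine.
        """
        def __init__(self, primitives: Dict[str, Primitive]):
            self.primitives = primitives

        def run(self, program: List[str], tape: str) -> str:
            # A program is simply a series of primitive-operation names.
            for op in program:
                tape = self.primitives[op](tape)
            return tape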
University of Canterbury
Christchurch, New Zealand
[email protected]
Research on which this article draws was supported in part by University of Canterbury
Research Grant no. U6271.
References
Block, N. 1990. The computer model of the mind. In An Invitation to Cognitive
Science, Vol. 3: Thinking, ed. D. Osherson and H. Lasnik. Cambridge, Mass.: MIT
Press.
Church, A. 1936. An unsolvable problem of elementary number theory. American
Journal of Mathematics 58: 345-63.
Churchland, P. M., and P. S. Churchland. 1983. Stalking the wild epistemic engine.
Noûs 17: 5-18.
Churchland, P. M., and P. S. Churchland. 1990. Could a machine think? Scientific
American 262 (Jan.): 26-31.
da Costa, N. and F. Doria. 1991. Classical physics and Penrose's thesis. Foundations of
Physics Letters 4: 363-73.
Doyle, J. 1982. What is Church's thesis? Laboratory for Computer Science, MIT.
Dreyfus, H. 1992. What Computers Still Can't Do: A Critique of Artificial Reason.
Cambridge, Mass.: MIT Press.
Fodor, J. 1981. The mind-body problem. Scientific American 244 (Jan.): 124-32.
Fodor, J. 1983. The Modularity of Mind. Cambridge, Mass.: MIT Press.
Friedberg, R. 1957. Two recursively enumerable sets of incomparable degrees of
unsolvability (solution of Post's problem, 1944). Proceedings of the National
Academy of Sciences (USA) 43: 236-38.
Geroch, R. and J. Hartle. 1986. Computability and physical theories. Foundations of
Physics 16: 533-50.
Guttenplan, S. 1994. A Companion to the Philosophy of Mind. Oxford: Blackwell.
Johnson-Laird, P. 1987. How could consciousness arise from the computations of the
brain? In Mindwaves, ed. C. Blakemore and S. Greenfield. Oxford: Basil Blackwell.
Kleene, S. 1967. Mathematical Logic. New York: Wiley.
Komar, A. 1964. Undecidability of macroscopically distinguishable states in quantum
field theory. Physical Review, second series, 133B: 542-44.
Kreisel, G. 1967. Mathematical logic: what has it done for the philosophy of
mathematics? In Bertrand Russell: Philosopher of the Century, ed. R. Schoenman.
London: George Allen and Unwin.
Kreisel, G. 1974. A notion of mechanistic theory. Synthese 29: 11-26.
Penrose, R. 1989. The Emperor's New Mind: Concerning Computers, Minds, and the
Laws of Physics. Oxford: Oxford University Press.
Penrose, R. 1990. Précis of The Emperor's New Mind: Concerning Computers, Minds,
and the Laws of Physics. Behavioral and Brain Sciences 13: 643-55 and 692-705.
Penrose, R. 1994. Shadows of the Mind: A Search for the Missing Science of
Consciousness. Oxford: Oxford University Press.
Penrose, R. 1996. Beyond the doubting of a shadow: a reply to commentaries on
Shadows of the Mind. Psyche: An Interdisciplinary Journal of Research on
Consciousness 2(23) [http://psyche.cs.monash.edu.au].
Pour-El, M. 1974. Abstract computability and its relation to the general purpose analog
computer. Transactions of the American Mathematical Society 199: 1-28.
Pour-El, M. and I. Richards. 1979. A computable ordinary differential equation which
possesses no computable solution. Annals of Mathematical Logic 17: 61-90.
Pour-El, M. and I. Richards. 1981. The wave equation with computable initial data such
that its unique solution is not computable. Advances in Mathematics 39: 215-39.
Sacks, G. 1964. The recursively enumerable degrees are dense. Annals of Mathematics,
second series, 80: 300-12.