
Turing's O-machines, Searle, Penrose and the brain

B. JACK COPELAND
1. Johnson-Laird is surely right when he discerns three mutually exclusive
positions in current thinking concerning the relationship between human
mentality and computation (1987: 252). He suggests that the only alterna-
tive to these three positions is that consciousness is not 'scientifically
explicable' (ibid.).
(1) The human brain (or, variously, mind or mindbrain) is a computer,
equivalent to some Turing machine.
There are, of course, innumerable fine-grained renderings of this theory:
the brain is variously thought to be a digital computer, an analogue
computer, a stored-program machine, a program-controlled machine, a
massively parallel distributed processor, etc.
(2) The activity of a human brain can be simulated perfectly by a
Turing machine but the brain is not itself a computing machine.
Analogies offered in the literature include hurricanes, the motion of the
planets around the sun, and the digestion of pizza. None of these phenom-
ena is itself computational in nature yet, supposedly, all can be simulated
perfectly by a Turing machine (in the sense that the machine can compute
descriptions of the phenomena to any desired number of decimal places
and on any temporal grid). (2) is Searle's position (1992, 1997).
(3) The brain's cognitive activity cannot in its entirety be simulated
by a computing machine: a complete account of cognition will
need 'to rely on non-computable procedures' (Johnson-Laird
1987: 252).
Penrose (1994) maintains a version of (3) which is so strong that it is not
entailed by the conjunction of the negations of (1) and (2).
I shall be arguing that (1), (2) and (3) do not exhaust the alternatives
open for scientific consideration. At least two further hypotheses belong
alongside (1) to (3):
(4) The brain is what Turing called an O-machine.
(5) The cognitive activity of the brain can be simulated perfectly by
an O-machine, but the brain is not itself an O-machine; such
simulation cannot be effected by a Turing machine.
O-machines (§2) are digital computing machines. They generate digital output from digital input by means of a step-by-step procedure consisting of repeated applications of a small, fixed number of primitive operations, the procedure unfolding under the control of a finite program of instructions which is stored internally in the form of data on the machine's tape. Thus even if (1) is false, the theory that the brain is a computing machine might nevertheless be true. Furthermore, the success of hypothesis (4) would vindicate functionalism. As with Turing machines, the description of an O-machine is silent about the underlying hardware.

2. Turing introduced the concept of an O-machine in his Ph.D. thesis (Princeton, 1938). The thesis was supervised by Church. It was subsequently published as Turing 1939. This paper is a classic of recursive function theory, yet it is seemingly little-known among philosophers of mind.

The primitive operations of an (ordinary) Turing machine are six in number: (i) move the tape left one square; (ii) move the tape right one square; (iii) read (i.e. identify) the symbol currently under the head; (iv) write a symbol on the square of tape currently under the head (after first deleting the symbol already written there, if any); (v) change state; (vi) halt. These primitive operations are made available by unspecified subdevices of the machine - 'black boxes'. The question of what mechanisms might appropriately occupy these black boxes is not relevant to the machine's logical specification. Such matters belong to what Block has termed 'realization science' (1990: 259). An O-machine is a Turing machine augmented with one or more primitive operations each of which returns the values of some function (on the natural numbers) that is not Turing-machine-computable. Each additional primitive operation is made available by a black box. Turing refers to these black boxes as 'oracles'. He remarks that an oracle works by 'unspecified means' and says that we need 'not go any further into the nature of [these] oracle[s]' (1939: 173). According to Turing's specification each oracle returns the values of a two-valued function. Let these values be written 0 and 1. Let p be one of the additional primitive operations. p is called into action by means of a special state x, the call state. (Where an O-machine has several of these additional primitives a corresponding number of call states is required.) The machine inscribes the symbols that are to form the input to p on any convenient block of squares of its tape, using occurrences of a special symbol y, the marker symbol, to indicate the beginning and the end of the input string. As soon as an instruction in the machine's program puts the machine into state x, the input is delivered to the subdevice that effects p, which then returns the corresponding value of the function. On Turing's way of handling matters the value is not written on the tape. Rather a pair of states, the 1-state and the 0-state, is employed in order to record values of the function. A call to p ends with a subdevice placing the machine in one or other of these two states according to whether the value of the function is 1 or 0. (When a function g is computable by an O-machine whose (only) oracle serves to return the values of a function f, then g is sometimes said to be computable relative to f.)
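Turing deliberately leaves the oracle a black box, so an O-machine is not implementable as stated; the call protocol itself, however, is mechanical enough to sketch. The following minimal Python rendering is an illustrative assumption of mine, not Turing's notation: the names run, CALL, ONE and ZERO, and the dictionary representation of programs, are invented, and a computable stand-in (string parity) sits where the oracle would, since a genuine oracle for an uncomputable function cannot be coded.

```python
# Minimal sketch of the oracle-call protocol described above.
# A program maps (state, symbol) -> (symbol to write, head move, new state).
# Entering the call state queries the oracle on the block of tape
# delimited by the marker symbol; the answer is recorded as a state
# (the 1-state or the 0-state), not written on the tape.

MARKER = 'y'  # the marker symbol bounding the oracle's input

def run(program, tape, oracle, state='start', head=0, budget=10_000):
    tape = dict(enumerate(tape))          # sparse tape; blank is ' '
    for _ in range(budget):
        if state == 'CALL':
            cells = sorted(k for k, v in tape.items() if v != ' ')
            block = ''.join(tape[k] for k in cells)
            query = block.split(MARKER)[1]            # between the markers
            state = 'ONE' if oracle(query) == 1 else 'ZERO'
        if state in ('halt', 'ONE', 'ZERO'):
            # a real program would carry on from the 1-state or 0-state;
            # the sketch stops here so the recorded value is visible
            return state
        write, move, state = program[(state, tape.get(head, ' '))]
        tape[head] = write
        head += move
    raise RuntimeError('step budget exhausted')

# Toy demonstration: scan past the marked input, then call the oracle.
prog = {('start', 'y'): ('y', 1, 'scan'),
        ('scan', '0'): ('0', 1, 'scan'),
        ('scan', '1'): ('1', 1, 'scan'),
        ('scan', 'y'): ('y', 0, 'CALL')}
print(run(prog, 'y101y', oracle=lambda s: s.count('1') % 2))  # -> ZERO
```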
One particular O-machine, the halting function machine, has as its
'classical' part the universal Turing machine specified by Turing in 1936
and as its 'nonclassical' part a primitive operation that returns the values
of Turing's famous halting function H(x, y) (Turing 1936). (The halting
function is defined thus: H(x, y) = 1 if and only if the xth Turing machine
eventually halts if set in motion with the integer y inscribed on its tape, say
in binary code (think of y as being the machine's input); and H(x, y) = 0
otherwise.) The halting function machine can compute many functions
that are not Turing-machine-computable. This is because of the familiar
phenomenon of the reducibility of one function to others (for example,
multiplication is reducible to addition). All functions reducible to the halt-
ing function and/or the primitives of an ordinary Turing machine are
computable by the halting function machine.
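The reduction pattern can be made concrete with the text's own arithmetical example. The sketch below assumes nothing beyond the definitions above; multiply shows multiplication computed with addition as the only arithmetic primitive, and halts_on_either is a hypothetical illustration of a predicate reduced to the halting function (the oracle H is left as a parameter precisely because it cannot be coded).

```python
# The reduction pattern in the text's own example: multiplication is
# computable by a machine whose only arithmetic primitive is addition.
def add(a, b):                 # the available primitive (black box)
    return a + b

def multiply(a, b):            # multiplication reduced to addition
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(multiply(6, 7))          # -> 42

# Same pattern, one level up (illustrative only): given a black box H
# for the halting function, the predicate "machine x halts on y1 or
# on y2" is decided by two oracle calls, so it is reducible to H.
def halts_on_either(H, x, y1, y2):
    return 1 if H(x, y1) == 1 or H(x, y2) == 1 else 0
```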
As previously remarked, Turing introduces O-machines without discus-
sion of how the primitive operations might be implemented. Equally, in his
earlier paper of 1936 there is no discussion of how the primitive operations
of a Turing machine might be implemented. In both papers, the role played
by his notional computing machines is to delineate classes of mathematical
problems; the possibility of real existence is beside the point of this exer-
cise. In 1936 it was, in fact, far from clear whether the construction of a
universal Turing machine lay within the bounds of physical possibility.
Once asked whether he thought that a universal Turing machine could
actually be constructed, Turing dismissively replied that the machine
would need to be as big as the Albert Hall (this was related to me by Robin
Gandy, who worked with Turing during the later part of the war). It was
not until Turing became acquainted with electronic technology developed
for a completely different purpose that he realized that the notional
machines of his 1936 paper could actually be built. Perhaps his O-
machines, too, will one day become an engineering reality. The existence
of physical processes that cannot be simulated by Turing machine is
certainly a logical possibility, there being no shortage of law-like regulari-
ties for such processes to exhibit: functions that are Turing-machine-
computable form a relatively slender proper subset of the vast space of
functions on the natural numbers (let alone of the vaster space of functions
on the real numbers). Speculation as to whether there may actually be
physical processes that cannot be simulated by Turing machine stretches
back over at least four decades (for example da Costa and Doria 1991; Doyle 1982; Geroch and Hartle 1986; Komar 1964; Kreisel 1967, 1974; Penrose 1989, 1994; Pour-El 1974; Pour-El and Richards 1979, 1981; Scarpellini 1963; Stannett 1990; Vergis et al. 1986). If such processes do exist then perhaps future engineers will use them to implement the nonclassical part of some O-machine.

Science fiction or not, this theorizing suffices to illustrate why it is an empirical matter whether or not the disjunction of hypotheses (1) and (2) is true.

3. Not so according to John Searle. Searle believes that it follows from
Church's thesis, itself a broadly logical claim, that the activity of the brain
can be simulated by a Turing machine (whence (1) v (2)). He writes as
follows:
Can the operations of the brain be simulated on a digital computer
[read: Turing machine]? ... The answer ... seems to me ... demonstra-
bly 'Yes' ... That is, naturally interpreted, the question means: Is there
some description of the brain such that under that description you
could do a computational simulation of the operations of the brain.
But given Church's thesis that anything that can be given a precise
enough characterization as a set of steps can be simulated on a digital
computer, it follows trivially that the question has an affirmative
answer. The operations of the brain can be simulated on a digital
computer in the same sense in which weather systems, the behavior of
the New York stock market, or the pattern of airline flights over Latin
America can. (1992: 200-201; see also 1997: 87)
Searle's statement of Church's thesis is mistaken. Church's thesis (also known as 'Turing's thesis' and the 'Church-Turing thesis'; Church 1936, Turing 1936, Kleene 1967) is a proposition concerning the extent of what
can be achieved by a human mathematician who is unaided by any
machinery save paper and pencil, and who is working in accordance with
'mechanical' methods, which is to say, methods set out in the form of a
finite number of exact instructions that call for no insight or ingenuity on
the part of the person who is carrying them out. The Church-Turing thesis
states that whatever can be calculated by a mathematician so working,
even a mathematician idealized to the extent of being free of all constraints
on time, patience, concentration, and so forth, can also be calculated by a
Turing machine.
This thesis carries no implication concerning the extent of what can be
calculated by a machine (say one that operates in accordance with a finite
program of instructions), for among the machine's repertoire of primitive
operations there may be those that no human being unaided by machinery
can perform. Nor does the thesis imply that each process admitting of a
precise characterization 'as a set of steps' can be simulated by a Turing
machine, for the steps need not be ones that a human mathematician work-
ing in accordance with some mechanical method can carry out. Trivially,
the processing of an O-machine is always characterizable as a set of steps,
namely, the set of steps specified by the machine's program. Employing the
thesis espoused by Searle yields the absurdity that an O-machine can be
simulated by a Turing machine. Searle's attempt to recruit Church's thesis
in support of (2) is entirely fallacious.
An O-machine's program may call for primitive operations that a human
clerk working by rote and unaided by machinery is incapable of carrying
out (for otherwise, by the real Church-Turing thesis, whatever can be
calculated by an O-machine can be calculated by a Turing machine - a
contradiction). It follows that there is no possibility of Searle's Chinese
room argument being successfully deployed against the new functionalism
offered by hypothesis (4) (which Searle will presumably find as 'anti-
biological' as other functionalisms). This argument (Searle 1980, 1989)
depends upon the human clerk who occupies the Chinese room being able
to carry out by hand, using paper and pencil, each operation that the
program in question calls for (or in one version of the argument, to carry
them out in his or her head). Turing originally introduced the Turing
machine as a model of a human clerk engaged in mathematical calculation
(1936: 231), and so, of course, each primitive operation of a Turing
machine is indeed one that a human clerk can carry out. The same is true
of the electronic machines fashioned after the universal Turing machine. As
Turing himself put it:
Electronic computers are intended to carry out any definite rule of
thumb process which could have been done by a human operator
working in a disciplined but unintelligent manner. (Turing 1950: 1)
O-machines, on the other hand, conspicuously fail to satisfy this ground
condition of the Chinese room argument. Searle says:
Because programs are defined purely formally or syntactically, and
because minds have an intrinsic mental content, it follows immedi-
ately that the program by itself cannot constitute the mind. The
formal syntax of the program does not by itself guarantee the presence
of mental contents. I showed this a decade ago in the Chinese room
argument.... A computer, me for example, could run the steps in the
program for some mental capacity, such as understanding Chinese,
without understanding a word of Chinese. (1992: 200)
But the program of an O-machine, too, no less than in the case of a Turing
machine, is 'defined purely formally or syntactically'. If there is any impli-
cation from 'is defined purely formally' to 'is neither constitutive of nor
sufficient for the mind', it is not one that can be established by the Chinese room argument.
Searle's mistake concerning the extent of what is asserted by the Church-Turing thesis is, in fact, a common one, which can be encountered frequently in recent writing on the philosophy of mind. For example, the Churchlands, like Searle, purport to deduce from Church's thesis that the mindbrain can in principle be simulated by a Turing machine (1983: 6). They further assert that Turing's

results entail something remarkable, namely that a standard digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input-output function. That is, it can display any systematic pattern of responses to the environment whatsoever. (1990: 26)

Turing had no results which entail this. Rather, he had a result that entails the opposite. The various functions that, in 1936, he proved to be not Turing-machine-computable, for example, the halting function, are mathematical characterizations of systematic patterns of responses to the environment that cannot be displayed by a standard digital computer (even one unfettered by resource constraints).

Perhaps the introduction of a term for the fallacy embraced here by Searle and the Churchlands will assist in its extinction. Since the fallacy is invariably committed in the name of either Church or Turing, I suggest the Church-Turing fallacy. One commits the Church-Turing fallacy by believing that the Church-Turing thesis, or some formal result proved by Turing or Church, secures the truth of the proposition that the brain can be simulated by a Turing machine. Here is a third example of the fallacy at work:

If you assume that [consciousness] is scientifically explicable ... [and] [g]ranted that the [Church-Turing] thesis is correct, then ... [i]f you believe [functionalism] to be false ... then ... you [should] hold that consciousness could be modelled in a computer program in the same way that, say, the weather can be modelled ... [and if] you accept functionalism ... you should believe that consciousness is a computational process. (Johnson-Laird 1987: 252)
No less common in the literature are statements which, while not explicitly involving the Church-Turing fallacy, make exaggerated claims on behalf of Turing machines. For example, the entry on Turing in the recent A Companion to the Philosophy of Mind contains the following assertions: 'we can depend on there being a Turing machine that captures the functional relations of the brain', for so long as 'these relations between input and output are functionally well-behaved enough to be describable by ... mathematical relationships ... we know that some specific version of a Turing machine will be able to mimic them' (Guttenplan 1994: 595). Also typical are the following:
typical are the following:
If a mental process can be functionally defined as an operation on
symbols, there is a Turing machine capable of carrying out the compu-
tation. (Fodor 1981: 130; see also 1983: 38-39)
... any process which can be formalized so that it can be represented
as a series of instructions for the manipulation of discrete elements
can, at least in principle, be reproduced by [a universal Turing
machine]. (Dreyfus 1992: 72)
The logical availability of hypothesis (4) gives the lie to all of these claims.
The relations between the inputs and outputs of an O-machine, say the
halting-function machine, are certainly 'functionally well-behaved enough
to be describable by ... mathematical relationships'. There is nothing
unmathematical or ill-defined about the halting function! Contrary to the
claims by Fodor and Dreyfus, hypothesis (4) postulates mental processes
consisting of operations on discrete symbols that cannot be carried out by a
universal Turing machine. Turing machines and O-machines alike execute
'series of instructions for the manipulation of discrete elements'; by defini-
tion, each additional primitive operation of an O-machine is 'an operation
on symbols' - in essence, an operation that replaces a binary string with 1
or 0. The O-machines point up the fact that the notion of a symbol-
processing machine is more general than the notion of a Turing machine.

4. As is well known, Penrose argues on the basis of formal results due to
Gödel and Turing that propositions (1) and (2) are false (1989, 1990,
1994). In a section of the latter rather inconspicuously positioned in the
midst of a chapter on quantum theory and the brain (1994, ch. 7, §9),
Penrose explains that his argument can be extended to apply to oracle
machines:
the arguments of Part I of this book can be applied equally well
against an oracle-machine model of mathematical understanding as
they were against the Turing-machine model, almost without change.
(1994: 380; see also 1996, §§3.10, 13.2)
This is why, in §1, I described Penrose's version of the anti-computational-
ist hypothesis (3) as being sufficiently strong as to be not entailed by the
conjunction of the negations of (1) and (2).
There is, as Penrose is evidently aware, a whiff of reductio ad absurdum
about this. Let the first-order O-machines be those whose (only) oracle
returns the values of Turing's halting function H(x, y) (as in the case of the
halting function machine described earlier). Similarly, the second-order O-machines are those that possess an oracle which can say whether or not any given first-order O-machine eventually halts if set in motion with such-and-such input; and so on for third-order, and in general α-order. Penrose's argument, originally marketed as demonstrating that human mathematicians do not use a knowably sound Turing-machine algorithm in order to ascertain mathematical truth (e.g. in 1994, ch. 2), appears to be so powerful that it can equally well be employed to show, given any number α at all for which there is a notation, that human mathematicians do not use, in ascertaining mathematical truth, any knowably sound procedure capable of being executed by an α-order oracle machine. Penrose's argument moves relentlessly up through the orders, stopping nowhere. This discovery appears to discomfort Penrose:
The final conclusion of all this is rather alarming. For it suggests that we must seek a non-computable physical theory that reaches beyond every [recursive] level of oracle machines (and perhaps beyond). No doubt there are readers who believe that the last vestige of credibility of my argument has disappeared at this stage! I certainly should not blame any reader for feeling this way. (1994: 381)
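The ascent the argument makes is what recursion theorists call the jump: relativizing the halting problem to machines that already carry an oracle yields the oracle for the next order up. Nothing above order zero can actually be coded, so the sketch below (names and representation are my own) only bookkeeps the definitions, which is all the regress needs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OracleLevel:
    """Order-n oracle: decides halting for machines carrying an
    order-(n-1) oracle; order 0 marks plain Turing machines."""
    order: int

    def jump(self) -> 'OracleLevel':
        # the halting problem for machines at THIS level defines the
        # oracle one order up; Penrose's argument re-applies each time
        return OracleLevel(self.order + 1)

level = OracleLevel(0)              # Turing machines
for _ in range(3):
    level = level.jump()            # 1st-, 2nd-, 3rd-order oracles
print(level)                        # -> OracleLevel(order=3)
```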
Penrose does hint at a way out:
[I]t need not be the case that human mathematical understanding is in
principle as powerful as any oracle machine at all. ... [T]he conclusion
G [human mathematicians are not using a knowably sound algorithm
in order to ascertain mathematical truth] does not necessarily imply
that human insight is powerful enough, in principle, to solve each
instance of the halting problem. Thus, we need not necessarily
conclude that the physical laws that we seek reach, in principle,
beyond every computable level of oracle machine (or even reach the
first order). We need only seek something that is not equivalent to any
specific oracle machine (including also the zeroth-order machines,
which are Turing machines). Physical laws could perhaps lead to
something that is just different. (1994: 381)
On that enigmatic note Penrose leaves it. Just when we seem to be
approaching a crucial part of the exposition, he suddenly falls silent. What,
indeed, does Penrose mean here?
It is natural to think of the functions, or problems, that are solvable by
a first-order oracle machine as being harder than those solvable by Turing
machine, and those solvable by a second-order oracle machine as being
harder still, and so forth. (To say that a Turing machine or O-machine
'solves a problem' is to say that when the machine is given the problem,
suitably encoded, on its tape it halts with the answer, 1 (Yes) or 0 (No),
under its head.) It is customary in recursion theory to say that a class of
problems of equal hardness are of the same degree. Problems that are
solvable by Turing machine are said to be of degree 0. Let me write 1 for
the degree of problems that are solvable by a first-order oracle machine
(but not by Turing machine). It is known that there are degrees between 0
and 1 (Friedberg 1957, Sacks 1964; Simpson 1977 is a survey of the area).
That is to say, there are classes of problems that are too hard to be solved by
Turing machine and yet are less hard than some of the problems that a first-
order oracle machine can solve. Here 'less hard' has the precise sense that
while a first-order machine can solve any of the problems in such a class, an
O-machine that is equipped only with an oracle for delivering the solutions
('Yes' or 'No') to problems in that class is unable to solve every problem that
the first-order machine can solve. This notion of degrees lying between 0
and 1 seems to make sense of much of what Penrose says about what it is
that he seeks (although certainly not of the specific claim that 'it need not
be the case that human mathematical understanding is in principle as
powerful as any oracle machine at all'). For some degree between 0 and 1,
the 'physics of mind' is exactly that hard. This is certainly a coherent posi-
tion; and for all that anyone presently knows, such may in fact be the case.
However, it is now possible to toughen up the threat of reductio. Let i
(for 'intermediate') be an O-machine of the sort just described (any arbi-
trarily selected one), and let I be the set of all machines with the same
oracle as i. (That is to say, members of I, like first-order O-machines, differ
in their programs, not in the primitive operations that each can perform.)
Do mathematicians use, in ascertaining mathematical truth, a knowably
sound procedure that can be executed by a machine in I? Apparently not,
if Penrose's Gödelian argument is sound. For just as the argument can be pressed to yield the conclusion that mathematicians do not use a knowably sound procedure that can be executed by an α-order O-machine, it can
equally well be pressed to show the same regarding machines in I. The two
cases appear to be parallel in all relevant respects; in particular, it can be
shown that there is no machine in I that can calculate all values of the halt-
ing function for machines in I (i.e. the function whose definition is just like
that of H(x, y) given above, except that the phrase 'the xth Turing machine'
is replaced by 'the xth machine in I'). So what knowably sound procedure
can mathematicians be supposed to use (recall that i was arbitrarily
selected)? It is by no means clear how an upholder of the Gödelian argu-
ment might respond to this difficulty.
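The parenthetical claim above (no machine in I computes the halting function for machines in I) is Turing's 1936 diagonal argument run relative to i's oracle. Here is a sketch, with Python callables standing in for machines in I (the shared oracle cannot be coded) and halts_I for the supposed member of I that computes the relativized halting function; both names are mine.

```python
# Diagonal argument, relativized: suppose some machine in I computed
# the halting function for machines in I, here a callable halts_I
# with halts_I(x, y) = 1 iff the machine in I coded by x halts on y.

def make_contrarian(halts_I):
    def contrarian(code):
        # halt iff halts_I says the machine coded by `code` does NOT
        # halt when run on its own code
        if halts_I(code, code) == 1:
            while True:            # loop forever
                pass
        return 'halted'
    return contrarian

# `contrarian` uses only i's oracle (via halts_I) plus rote steps, so
# it too is a machine in I, with some code c. Run on c it halts iff
# halts_I(c, c) == 0, i.e. iff it does not halt: a contradiction, so
# no such halts_I exists in I.
```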

University of Canterbury
Christchurch, New Zealand
[email protected]
Research on which this article draws was supported in part by University of Canterbury Research Grant no. U6271.

References
Block, N. 1990. The computer model of the mind. In An Invitation to Cognitive Science, Vol. 3: Thinking, ed. D. Osherson and H. Lasnik. Cambridge, Mass.: MIT Press.
Church, A. 1936. An unsolvable problem of elementary number theory. American Journal of Mathematics 58: 345-63.
Churchland, P. M., and P. S. Churchland. 1983. Stalking the wild epistemic engine. Noûs 17: 5-18.
Churchland, P. M., and P. S. Churchland. 1990. Could a machine think? Scientific American 262 (Jan.): 26-31.
da Costa, N. and F. Doria. 1991. Classical physics and Penrose's thesis. Foundations of Physics Letters 4: 363-73.
Doyle, J. 1982. What is Church's thesis? Laboratory for Computer Science, MIT.
Dreyfus, H. 1992. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, Mass.: MIT Press.
Fodor, J. 1981. The mind-body problem. Scientific American 244 (Jan.): 124-32.
Fodor, J. 1983. The Modularity of Mind. Cambridge, Mass.: MIT Press.
Friedberg, R. 1957. Two recursively enumerable sets of incomparable degrees of unsolvability (solution of Post's problem, 1944). Proceedings of the National Academy of Sciences (USA) 43: 236-38.
Geroch, R. and J. Hartle. 1986. Computability and physical theories. Foundations of Physics 16: 533-50.
Guttenplan, S. 1994. A Companion to the Philosophy of Mind. Oxford: Blackwell.
Johnson-Laird, P. 1987. How could consciousness arise from the computations of the brain? In Mindwaves, ed. C. Blakemore and S. Greenfield. Oxford: Basil Blackwell.
Kleene, S. 1967. Mathematical Logic. New York: Wiley.
Komar, A. 1964. Undecidability of macroscopically distinguishable states in quantum field theory. Physical Review, second series, 133B: 542-44.
Kreisel, G. 1967. Mathematical logic: what has it done for the philosophy of mathematics? In Bertrand Russell: Philosopher of the Century, ed. R. Schoenman. London: George Allen and Unwin.
Kreisel, G. 1974. A notion of mechanistic theory. Synthese 29: 11-26.
Penrose, R. 1989. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press.
Penrose, R. 1990. Précis of The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Behavioral and Brain Sciences 13: 643-55 and 692-705.
Penrose, R. 1994. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press.
Penrose, R. 1996. Beyond the doubting of a shadow: a reply to commentaries on Shadows of the Mind. Psyche: An Interdisciplinary Journal of Research on Consciousness 2(23) [http://psyche.cs.monash.edu.au].
Pour-El, M. 1974. Abstract computability and its relation to the general purpose analog computer. Transactions of the American Mathematical Society 199: 1-28.
Pour-El, M. and I. Richards. 1979. A computable ordinary differential equation which possesses no computable solution. Annals of Mathematical Logic 17: 61-90.
Pour-El, M. and I. Richards. 1981. The wave equation with computable initial data such that its unique solution is not computable. Advances in Mathematics 39: 215-39.
Sacks, G. 1964. The recursively enumerable degrees are dense. Annals of Mathematics, second series, 80: 300-12.
Scarpellini, B. 1963. Zwei unentscheidbare Probleme der Analysis. Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9: 265-89.
Searle, J. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3: 417-24.
Searle, J. 1989. Minds, Brains and Science: the 1984 Reith Lectures. London: Penguin.
Searle, J. 1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.
Searle, J. 1997. The Mystery of Consciousness. New York: New York Review.
Simpson, S. 1977. Degrees of unsolvability: a survey of results. In Handbook of Mathematical Logic, ed. J. Barwise. Amsterdam: North-Holland.
Stannett, M. 1990. X-machines and the halting problem: building a super-Turing machine. Formal Aspects of Computing 2: 331-41.
Turing, A. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, series 2, 42 (1936-37): 230-65.
Turing, A. 1938. Systems of logic based on ordinals. A dissertation presented to the faculty of Princeton University in candidacy for the degree of Doctor of Philosophy.
Turing, A. 1939. Systems of logic based on ordinals. Proceedings of the London Mathematical Society, series 2, 45: 161-228.
Turing, A. 1950. Programmers' Handbook for Manchester Electronic Computer. University of Manchester Computing Laboratory.
Vergis, A., K. Steiglitz and B. Dickinson. 1986. The complexity of analog computation. Mathematics and Computers in Simulation 28: 91-113.
