ORIGINAL ARTICLE
DOI 10.1007/s10670-013-9594-5

Seven Misconceptions About the Mereological Fallacy: A Compilation for the Perplexed
H. Smit (&)
Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience,
Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
e-mail: [email protected]

P. M. S. Hacker
St John's College, Oxford OX1 3JP, UK
e-mail: [email protected]

1 Introduction
We shall first briefly discuss what is meant by the term 'mereological fallacy'1 and what conceptual investigations are. Mereology is the logic of part/whole relations. Bennett and Hacker (2003, 2008; see also Smit 2010b) argue that the mereological fallacy in the cognitive sciences involves ascribing psychological predicates to parts of the animal that apply only to the whole (behaving) animal.

The fallacy discussed by Bennett and Hacker does not imply that understanding the brain is unimportant for understanding mental phenomena. On the contrary: because the normal functioning of brain structures and processes is a causal condition for mental phenomena, it is interesting to study these causal conditions. For example, alterations at the level of cells and neural circuits are a prerequisite for retaining information, as mutations affecting the processes of long-term potentiation and depression show. But although these alterations are a prerequisite for the retention of information, they do not show that cells, neural circuits, or brain structures retain information. For it makes sense to say that the animal as a whole has a memory, is able to retain information, and possesses knowledge, but it makes no sense to say that, for example, the hippocampus or cortex possesses knowledge. It is the animal as a whole that retains information and exercises the ability to retain information, not its parts.

To understand what conceptual investigations are and how they are used to clarify the mind/body problem, it is helpful first to discuss the differences between empirical and conceptual propositions (see Fig. 1).

Empirical propositions are, for the most part, bipolar: that is, they can be true and can be false. Whether a proposition is true or false depends on what is the case. For example, the proposition 'This rose is red' is true if the rose is, in fact, red. Hence to understand an empirical proposition is to know that if the proposition is true, then things are thus-and-so, and also to know that if it is false, then things are not so. By contrast, a conceptual proposition is not bipolar and does not have both the possibility of being true and the possibility of being false. For example, the grammatical (also called conceptual) proposition 'Every rod has a length' is true but cannot be false, for there is no such thing as a rod without a length. Similarly, 'Every human being has a body' cannot be false, for there is no such thing as a human being without a body. One cannot, for conceptual reasons, investigate empirically whether a rod has a length or whether someone has a body.

1 Kenny (1984) used the term 'homunculus fallacy'. He argued that ascribing psychological predicates to the brain invites the question of how the brain can, for example, see or remember something. Since it does not make sense to say that the brain sees or remembers something, Kenny argued that ascribing psychological predicates to the brain leads to the absurd consequence that one has to assume a homunculus in the brain. We prefer the term 'mereological fallacy' because the fallacy is about applying predicates to parts of living creatures (not to an alleged homunculus in the head). This also clarifies why the fallacy extends to machines. Aeroplanes fly and clocks indicate time, but it makes no sense to say that the engine of an aeroplane flies or that the fusée of a clock indicates time.
Because empirical propositions are, for the most part, bipolar, the truth of an empirical proposition excludes a possibility. The truth of a grammatical proposition, by contrast, does not exclude a possibility, for a grammatical proposition is not bipolar. Hence a grammatical or conceptual 'impossibility' is not an impossibility that is described by a form of words, for a grammatical impossibility is not a possibility that happens to be impossible. A grammatical proposition excludes a form of words as senseless. Therefore one can say that grammatical propositions describe the bounds of sense: if we transgress the bounds of sense, we utter nonsense.
Some scientists and philosophers argue that conceptual propositions are parts of
scientific theories and can, therefore, be tested (as if they were hypotheses). These
scientists and philosophers are misled by the fact that (some) grammatical
propositions look as if they are descriptions of states of affairs, whereas they are
actually rules for description which we use to form empirical propositions which
are descriptions of states of affairs. Although the empirical propositions which we
form by applying the grammar to the objects that we encounter in experience can
be true or false, the grammatical proposition is a rule in the misleading guise of a
description (and, hence, cannot be tested). For example the empirical propositions
‘This rod is two meters’ and ‘Mary has a sunburned body’ can be true or false, but
if we notice that they are false, then we have not falsified the grammatical
propositions ‘A rod has a length’ and ‘A human has a body’. Grammatical
propositions cannot be tested (i.e. refuted by experiment) for they neither describe
nor predict anything.
Conceptual investigations, among other things, examine rules for the use of
words in order to shed light upon conceptual problems, confusions and unclarities.
We discuss two elucidations. First, it is sometimes thought that a description of
the rules for the use of words is a form of linguistics. This is a misconception, for
resolving conceptual problems through investigating the rules for the use of words
is not like describing the rules of grammar, e.g. that the number of the verb must
match the number of the noun in English. Conceptual problems are resolved by
reminding us of the rules according to which we employ words of our language,
and arranging them in such a manner that it becomes clear that the bounds of
sense were transgressed. For example: ‘What is the relation between mind and
brain?’ is grammatically correct, but the question posed cannot be answered
because the rules for the use of the word ‘mind’ are transgressed. For if we
investigate the use of the word ‘mind’, we find that the mind is not an entity of
any kind. Hence it is neither true nor false that it stands in a relation to the brain.
Rather, it makes no sense for it to stand in a relation to the brain. Second, it is
misguidedly thought that resolving conceptual problems involves finding truths
expressed in ordinary language, i.e. the common sense expressed by the man in
the street. But the rules investigated through conceptual studies are not studied to
highlight empirical beliefs. Conceptual problems are not empirical problems but
conceptual ones that arise as the result of misunderstanding or misusing words.
These words may be words used in our everyday discourse, but also the technical
words used in science.
3 Misconceptions
The idea that organs are tools is misguided (Hacker 2007, 2013a; Smit 2010a).
Organs are not tools: they are parts of an organism. Some organs have a function,
like the kidney; other organs are used by us in order to fulfil a function, like the
sense-organs (we see with our eyes and bring our eyes closer to what we want to
examine) or hands (we use our hand to manipulate things). Some organs fulfil their
functions independently of us: our stomach, liver, kidneys, and spleen are not
subject to voluntary control. One cannot do anything with these organs. Other
organs are under partial control, such as the bladder or the lungs, but only up to a
point. The use of our limbs, however, is subject to voluntary control. We use our
hands to touch, to feel, to pick things up or move them, to manipulate things. The
essential point to note here is that, in contrast to what the critics believe, we do not
use our brain as we use our hands. The brain is not an organ with which we can do
anything, though we cannot do anything without our brain (see further Sect. 4). For
example, we cannot walk unless the motor cortex is functioning normally, but that
does not mean that we walk with our brain.
Some evolutionary theorists argue that organs are tools because they are designed.
The reason is that they believe that Darwin replaced Paley’s argument from design by
natural selection (see for example Williams 1966 [1992]). This is a misinterpretation
of Darwin’s contribution (see Hacker 2007; Smit 2010a, 2014). There are three
problems involved here. First, tools are designed, organs are not. Paley (1802 [2006])
held that the only way to explain the complexity of the functioning of, for example, the eye
was by reference to design; Aristotle did not think that purpose or function logically
requires design at all. Darwin (1859 [1968], chapter 6) rejected design without
eliminating function and purpose of organs. Hence he returned to Aristotelian
teleology and extended our explanatory framework with evolutionary explanations.
Second, organs are parts of organisms, but not ‘organisms’ or ‘substances’ themselves,
whereas tools or instruments are substances. We shall elaborate in Sect. 5 how we can
explain the relation between organs and organisms of which organs are parts with the
aid of inclusive fitness theory. Third, organs are internally related to the good of the
organism whose organ they are, whereas tools, instruments or machines do not have a
good (see further von Wright 1963; see also Kenny 1963, chapter 3). Tools,
instruments or machines do not enjoy good health nor do they suffer from disease.
They do not flourish and prosper. They do not have a life cycle of birth, maturation,
reproduction and death. They suffer no pain nor do they enjoy pleasures. They are
neither conscious nor unconscious. They are not free agents with two-way powers, and
they are not responsible for their deeds. The function of artefacts and hence too of
instruments is related to their usefulness when employed for the purpose for which
they were made. Good organs by contrast, including good organs of perception, are
healthy organs that perform their function optimally. The goodness of organs in
general and of organs of perception in particular is privative. Good eyes are eyes that
are not deformed, diseased, or defective; they function well. Optimal functioning is
tested by reference to standards of normalcy of the species. Good instruments are not
normally-functioning instruments.
Notice that we distinguish evolutionary, ultimate-causal explanations concerning
the reproductive fitness of a trait from teleological explanations concerning the
function of an organ (internally related to the good of the organism). Talking about
the function of an organ is excluded in ultimate-causal explanations, for the term
'fitness' is defined by evolutionary theorists as the number of offspring that an
individual produces that survive to adulthood. Whether a trait is beneficial or costly
is defined on the basis of the lifetime fitness consequences of the trait (not just the
short-term consequences).2 Defining fitness on the basis of lifetime consequences
implies that components of the fitness of an organism, e.g. the health and survival of
the organism related to the different stages of a life cycle, are not distinct parts of
the individual’s fitness. Hence they are not part of the formal models of inclusive
fitness theory, for at the population level, i.e. the fitness maximising effects of genes
relative to conceivable alternatives, the effects of genes are weighed only via their
lifetime inclusive fitness effects.
(3) The brain is not a part of a person, but of the human body
It is correct to say that the brain is (just like the liver) a part of the human body.
But the statement, advanced by Harré (2012) and others, may lead to misconception
if one does not realise that the human body is in only one sense the human organism.
For in another sense humans have a body. Misconceptions result if we do not
distinguish the expression ‘We have a body’ from the expression ‘We are a body’
(Hacker 2007, 2013a).
The human body, in one sense, is the human being: the human organism. For
human beings are animate bodies, i.e. living spatio-temporal continuants with a
distinctive range of powers. But, of course, I do not have the body, the living
organism, that I am. One must not confuse and conflate the body (the living
organism) that a human being is with the body (the somatic features) that a human
being has. We speak of having such and such a body, e.g. a beautiful, athletic, frail,
ageing, body when we are speaking of our somatic features. Whatever is true of the
body I have is true of me: if my body is sunburnt, I am sunburnt; if my body is
healthy, I am (physically) healthy. But not everything that is true of me is true of the
body I have: I may be proud or ashamed of my body, but my body cannot be proud
or ashamed of its body; I may be in debt, but there is no such thing as my body’s
being in debt. However, everything true of the body, the living organism, the human
being, that I am, is also true of me, since I am that living body. Note that I am not my body because of the conceptual distinction between the body I am and the body I have. Perhaps we would hesitate to say that the brain is a part of a person. But even if we grant that for the sake of argument, it does not follow that the brain is not a part of the human organism—the human being—that we are.

2 We discuss an empirical example illustrating what is meant by 'lifetime fitness consequences'. Visser and Lessells (2001) studied the effects of clutch size on the fitness of the great tit. In experiments they manipulated the clutch size so that the birds had to raise two extra chicks. There were three experimental conditions: either (1) two extra nestlings were added to the nest, or (2) two extra eggs were added to the nest, or (3) the female was induced to lay two extra eggs (by removing the eggs of the clutch so that the female laid new eggs, and then adding the old eggs to the new ones). The effect of these manipulations was that in all conditions the tits raised two extra chicks (i.e. four chicks instead of two). The results of the experiments showed that the number of young who survived to breeding age did not differ between the three conditions. Hence if we were to calculate the fitness effects during one breeding season, there were no significant effects. But there was a difference in fitness effects when the next breeding season was taken into consideration. Visser and Lessells found that the manipulations affected female survival. Females in the third condition, who experienced the extra costs of laying two extra eggs (besides the costs of having to incubate them and to raise the chicks), had the lowest survival. Hence if we calculate the lifetime fitness of the individuals, then we have a different result compared to calculating the reproductive success per brood (illustrating why lifetime consequences matter).
Hacker (2007) offered the following analogy to clarify the relations between an
organ and the human being whose organ it is, and the person (which is the moral and
legal status that the human being enjoys). London is a part of the United Kingdom.
The United Kingdom is part of the European Union. But London is not a part of the
European Union. That does not prevent London from being a part of the United
Kingdom. So too, someone’s being a person does not prevent his brain being a part
of the human being he is, and it is a mistake to attribute psychological predicates to
parts of a human being.
(4) Knowing is not an ability but a state of the brain
Critics mistakenly assume that to know something is to be in a neural state (cf.
Dennett 2007; Searle 2007). This supports the mereological confusion, for if
knowing is a state of the brain, then it makes sense to say that the brain knows (and,
hence, that it believes things too). But knowing something is not a state of the brain,
for to know something is ability-like, and hence more akin to a potentiality than to
an actuality (a state). When someone knows something, then he is able to do a wide
range of things: he can inform others, answer questions, correct others, find,
locate, identify and explain things, and so forth. To forget that something is thus-
and-so is not to cease to be in some state, but to cease to be able to do certain things.
Of course, the activities of neurons are a causal condition for acquiring and
remembering a piece of information, for without the activities we could not acquire
and remember knowledge. But if we want to determine whether someone knows a
great deal of the theory of evolution, then we will not study the neural state of his
brain, but whether he is able to answer an indeterminate array of questions about the
subject matter. These answers are criteria for saying that he understands the theory
and that he is able to solve problems, correct errors, and tell others
appropriate facts, etc. The conclusion may be that he knows some parts in detail and
others only reasonably well.
The normal functioning of the senses and the brain is a causal condition for
acquiring empirical knowledge. Mutations affecting, for example, our discriminatory
abilities therefore affect the possibility that someone can acquire empirical
knowledge. For instance the colour blind (e.g. Daltonism, i.e. the inability to
discriminate red, green and grey) cannot use all of our colour words because they
cannot make the distinctions we make as the result of a mutation of a
(chromosome X-linked) gene. Hence they lack the ability to use some of our
samples and rules we use for explaining colour words, cannot correct and explain
mistakes in the use of some colour words, and cannot therefore determine the truth
of some empirical propositions. What would happen if only the colour blind were
to populate the planet? A part of our colour language would then disintegrate, for
the distinctions we make could no longer be used because the colour blind cannot
make them.
(5) The difference between a behaving organism and its brain is the difference
between mechanical processes of the brain and the non-mechanical processes
of the mind
This misconception was advanced by Dennett (2007) in his objections to the
mereological fallacy. Dennett argued that the distinction between personal and
subpersonal levels of explanations captures the distinction between the organism
and its parts. For example according to Dennett being in pain is not a property of the
brain, for pains are ‘mental phenomena’ that are ‘non-mechanical’, whereas cerebral
processes are ‘essentially mechanical’. Yet this distinction is not the contrast
Bennett and Hacker (2003) drew between properties of wholes and properties of
parts. For the fallacy is not about what is non-mechanical and what is mechanical. It
is the bracket clock as a whole that keeps time, not its fusée, although the process of
keeping time is wholly mechanical. It is the aeroplane that flies, not its engines,
although the process of flying is wholly mechanical. Moreover, verbs of sensation,
such as ‘hurts’, ‘itches’, ‘tickles’ do apply to the parts of an animal, whose leg may
hurt, whose head may itch and whose flanks may tickle. These attributes are, in
terms of Dennett’s distinction, ‘non-mechanical’; nevertheless they are ascribable to
parts of an animal. So the mereological point made by Bennett and Hacker is
different from Dennett’s distinction between personal and sub-personal levels of
explanation, and, applied to animals, is different from his distinction between what
is ‘mechanical’ and what is not.
One can argue that, though it is mistaken to attribute certain predicates of wholes
to their parts, it is fruitful to extend the psychological vocabulary from human
beings and other animals to (a) computers and (b) parts of the brain. Note that there
is a difference between (a) and (b). Attributing psychological properties to
computers is mistaken, but does not involve a mereological fallacy. Attributing
psychological properties to the brain or its parts is mistaken and does involve a
mereological fallacy. Taking the brain to be a computer and ascribing psychological
properties to it or its parts is therefore doubly mistaken.
It is true that we do, in casual parlance, say that computers remember, that they
search their memory, that they calculate, and sometimes, when they take a long
time, we jocularly say that they are thinking things over. But this is not a literal
application of the terms ‘remember’, ‘calculate’ and ‘think’. Computers are devices
designed to fulfil certain functions for us. We can store information in a computer,
as we can in a filing cabinet. But filing cabinets cannot remember anything, and
neither can computers. We use computers to produce the results of a calculation—
just as we used to use a slide-rule or cylindrical mechanical calculator. Those results
are produced without anyone or anything literally calculating—as is evident in the
case of a slide-rule or mechanical calculator. In order literally to calculate, one must
have a grasp of a wide range of concepts, follow a multitude of rules that one must
know, and understand a variety of operations. Computers do not and cannot.
(6) The limiting case of a brain in a vat shows that brains can think
It has been argued by Dainton (2007) that the brain may be a limiting case of a
mutilated human being and is therefore capable of thinking, etc. Yet a brain in a vat
is neither a pickled human being nor a mutilated human corpse. The intelligibility of
exhibiting cognitive powers by the exercise of prostheses does not show that the
possessor of cognitive capacities is the brain. One can imagine, in science fiction,
that a living brain might be linked to prosthetic eyes and ears, mechanical limbs and
a computerized voice box. Then, so the story may run, the voice and limbs may
exhibit thought and volition. Does this not show that the brain (and not the living
human being) is the subject that thinks and wills? No. What it shows is that this
imaginary being, which Bennett and Hacker (2008) dubbed a cerebroid, does so.
We need a brain in order to be able to think (walk and talk), just as a jet aeroplane
needs engines in order to fly. But aircraft engines cannot fly any more than brains
can think (walk or talk). It is human beings, who have brains, that think—not their
brains, which neither have brains (since they are brains) nor minds (there is no such
thing as a thought's crossing the brain's mind) nor bodies (someone may have a
beautiful body, but their brain cannot be said to have a beautiful body). The brain is
no more a limiting case of a mutilated human being than an aircraft engine is a
limiting case of a damaged aeroplane.
Although one’s cognitive powers normally depend on a body for their expression,
an artificial vehicle may, in certain respects, do just as well. For example, a
prosthetic hand may be used to gesture in the same way as a real hand, or a speech
synthesizer to speak as a real voice. That a prosthesis is quite unlike real flesh, and
an electronic voice quite unlike a real voice need not materially affect the subject’s
ability to gesture or speak. Indeed, within the framework set by the concept of a
living being or indeed of a person, there are no conceptual limits to what ‘effectors’
may be substituted, for none of our cognitive powers depends on features of parts of
our bodies that can be neither simulated nor replaced, at least in theory.
(7) Conceptual problems can be resolved by empirical investigations
Churchland (2005) has argued that ascribing psychological attributes to parts of
an animal is not mistaken because there is no significant difference between
conceptual and empirical truths. He argued that conceptual problems can be
resolved by empirical studies. In particular, Churchland (2005, p. 467) argued that
the dualist assumption that the non-physical mind can exert subtle causal effects on
the physical body or brain violates the law of the conservation of momentum. Hence
Cartesian dualism is according to him refuted by empirical investigations. However,
it is mistaken to suppose that Cartesian dualism is refuted by the results of empirical
research. Consider the Cartesian assumption that the immaterial mind interacts with
the material body. This is not an empirical statement that can be tested, since it is
not intelligible to talk about an immaterial substance. A substance is something
which we can identify, but there are no criteria of identity for the mind defined as an
immaterial substance. We do not know how to identify the mind as an immaterial
substance, how to measure this substance, and so forth. And since there are no
criteria to identify the mind, it is senseless to say that the mind has causal powers
and can, therefore, interfere with physical processes. What kind of empirical
evidence can someone provide if he argues for example that the mind causes a
voluntary movement? How can we determine that an immaterial substance causes
this movement if we cannot identify this substance?
Some critics have attempted to rebut the mereological fallacy by invoking empirical
arguments. We discuss in this section some examples.
Hodgson (2005) argued that the brain constructs images of objects. For example,
when we see the sun, according to Hodgson we construct an image of the sun in our
brain. His argument is that when we see the sun, we cannot see the sun as it is
now. For empirical studies have shown that it takes time for light to travel from the
object off which it is reflected to the eyes of an observer. When we observe the
setting sun, Hodgson argued, there is something orange and apparently circular in
our visual field. But what we apprehend cannot be the sun, since the light that
impinges on our retinae left the sun 8 min previously, and, for all we know, the sun
itself might have ceased to exist. The orange thing must therefore be an image of the
sun constructed by our brain. It is important to note that, in Hodgson’s argument, we
do not see this image. It exists only as a part of a conscious experience: it is a visual
representation to us of the sun, an image constructed by our brain. Therefore,
Hodgson (2005, p. 86) concludes, it cannot be doubted that when a person actually
looks at and sees the setting sun the person’s brain constructs an image of the sun,
‘and it is by means of that image that the person can see the sun’.
This argument is misguided (cf. Hacker 2005). The fact that light from the sun
takes time to reach our eyes does not show that we do not now see the sun; it shows
that we now see the sun as it was about 8 min ago. The consequence of the scientific
discovery is not to show that we do not see the objects we take ourselves to see.
What it shows is merely that when speaking accurately of celestial objects at great
distances from us, we need to specify separately the time of the seeing and the time of
the existence or occurrence of the object of vision. But there is nothing especially
surprising about that. There is no reason to suppose that we see the stars as they
were so and so many light years ago by having images of them now. Rather, we now
see stars that existed so and so many light years ago.
Hodgson claims that we see objects ‘via images constructed by our brains’. This
is obscure as long as no further explanation is given of what is meant here by
‘seeing one thing via an image of it’. One might claim that, when one sees Margaret
Thatcher on television (now that she has passed away), one sees her via a television
image. But then, of course, one sees the image on the television screen. One may
also dream of Thatcher and have a vivid dream image of her. Here one does indeed
not see the dream image. And to have a dream image of Thatcher is not to see her
via an image either. People with good eidetic imagination may conjure up a vivid
mental image of Thatcher, but vividly to imagine Thatcher is incompatible with
simultaneously seeing her. For one cannot imagine what one is simultaneously
seeing. Hence Hodgson’s alleged scientific model of seeing an object via an image
of it that is had but not seen is simply incoherent.
Dennett (2007), Searle (2007) and others have attempted to rebut the fallacy by
arguing that the brain is an information processing organ that receives information
and uses processed information for planning action. The conception of the brain as
an information processing organ is a variant of the old, empiricist conception of how
we acquire information. However, this conception of the brain as an information
processing organ is, just like the empiricist conception, incoherent. We discuss two
reasons. First, it is a misconception to assume that the senses receive ‘unprocessed
information’. Light waves impinging on our retinae and sound waves agitating our
eardrums are not ‘unprocessed information’, since they are not information at all
(when someone tells me that p, or when I read that p, then I acquire information, but
the stream of photons and sound waves are not information in that sense). Second, it
is misguidedly assumed that the brain is an organ that ‘processes information’. Of
course, we cannot see unless the visual cortex is functioning normally, but we see
with our eyes, not with our brain. Likewise we cannot remember something unless
our hippocampus is functioning normally, but we remember something, not our
hippocampus. And we cannot walk unless the motor cortex of the brain is
functioning normally, but that does not mean that we walk with our brain and that
the brain is the organ for locomotion, as the legs are. Neither the brain nor its parts
are organs for exercising psychological powers in the sense in which eyes are organs
for seeing or ears are organs for hearing.
Many cognitive neuroscientists have attempted to rebut the fallacy by arguing
that the BOLD signal shows that mental states or processes can be studied as neural
processes (and, hence, why we can apply psychological predicates to parts of the
brain). In order to understand why this is not an argument against the fallacy, it is
important to recall that we do not use the brain (as we use our eyes when seeing).
And it is also important to recall that colloquial utterances and figures of speech
may be misleading here. For example, we use utterances such as ‘Use your brains!’.
But this utterance has the potential to mislead us (if we take it literally), for ‘Use
your brains!’ means simply ‘Think!’. It no more signifies that we think with our
brain than ‘I love you with all my heart’ signifies that we love with our heart.
Our brain is not an organ under our direct control and we cannot do anything with
it. One might, of course, object that we cannot do anything with our stomach either,
but it is nevertheless the organ of digestion. Is not the brain the organ of thought in
just the same way? The answer is again no, for the organs for non-voluntary
functions are not organs we use to perform that function. I do not use my stomach to
digest my food, since I cannot do anything with my stomach: it digests what I eat off
its own bat, and can literally be observed to do so. I do not circulate the blood in my
body by using my heart—my heart does it of its own accord, and can be observed to
do so. But the brain is not an organ of thinking in this sense either, for if it were, it
would think off its own bat and of its own accord.
Do computer-generated images of increased oxygenation in select areas of the
brain of someone who has been asked to engage in some cogitative exercise (i.e. an
fMRI scan of someone’s brain while he is ‘thinking’), show us what thinking is? No,
for as we have seen, there is nothing a brain can do that could possibly count as
thinking. BOLD (blood oxygenation level dependent) signals on a scanner screen are
not manifestations of thinking, as bold action in the face of danger is. The normal
activity of the stomach satisfies the criteria for digesting. The normal activity of the
heart satisfies the criteria for pumping blood. But the normal activity of the brain
does not, and cannot, satisfy any of the manifold criteria for thinking. Rather, the
various activities of the brain that are now beginning to be discovered in association
with one or another form of thinking are necessary in order for us to be able to think
in such forms. But such brain activity is no more thinking than increased oxygenation in one's leg
muscles is running.
Do BOLD signals show that we see something in our brain? Suppose that I see a
red tomato, do I then experience a red tomato in my brain? Is this experience a
neural state of my brain? Saying so is incoherent, for there is no such thing as
experiencing a red tomato in my brain. It does not make sense to answer the
question where I experience the red tomato by saying: ‘Here’, while pointing to my
head (as opposed to pointing at the fruit in the garden). Similarly, it cannot be said
that the hippocampus is the locus of remembering, for an answer to the question
‘Where and when did you remember that …?’ is given by saying: ‘While I was in
the library’; not by saying: ‘In my hippocampus; where else?’.
say that we attribute pseudo- and semi-concepts to the brain, for it is according to
him an 'empirical fact' that parts of our brains engage in processes that are
strikingly like guessing, deciding, believing, etc. Their 'activities' are enough like
the personal-level behaviours to warrant stretching ordinary usage to cover them. Just as
a child can sort of believe that her daddy is a doctor, so some part of a person's brain
sort of believes that there is an open door a few feet ahead. However, empirical
evidence that brain parts sort of believe things is, not surprisingly, not given. For it
is unclear what would count as evidence here. What would show that the brain of an
insect sort of discriminates colours when the insect lands on a flower? What evidence would
prove that the brain of a child with an injured limb sort of experiences pain when he
or she cries and weeps? Does the brain of a chimpanzee fleeing and screaming sort
of believe that it is frightened when the chimpanzee is attacked by the alpha male?
Does the brain of an eagle detecting prey sort of discriminate an animal?
Apart from these conceptual problems (see further Bennett and Hacker 2003,
Appendix 1), Dennett’s ideas are problematic because he ignores the neo-
Aristotelian alternative. Dennett assumes that the only alternative to his alleged
evolutionary theory is 'mind-creationism'. Yet the neo-Aristotelian alternative is
not a form of mind-creationism: Aristotle distinguished between things and living
creatures by the principle that only living beings have a psuchē. It is important to
note that the psuchē is, in contrast to the Cartesian conception of the mind, a
biological principle and that the possession of psuchē is not a characteristic of
human beings alone. Yet Aristotle also distinguished humans from the other
animals: only humans have a rational psuchē (the rational powers of the intellect
and will). This essential difference between humans and other animals is, according
to neo-Aristotelians, explicable in terms of language evolution: only humans use a
language and therefore have the rational powers of the intellect and the will. Hence
one can ask how we can explain the evolution of these rational powers in terms of
evolutionary theory. An answer to this question requires a brief exposition of
inclusive fitness theory and its use for explaining evolutionary transitions.
Hamilton (1964) realised that organisms are not always selected to maximise
their own reproductive success. He showed that genes correlated with certain
characteristics can increase in a population not only by increasing the fitness of
the individual carrying them, but also by increasing the fitness of other individuals
carrying copies of those genes. For example, when an individual helps a close relative
raise his or her children, the fitness benefits to the relative contribute to the fitness
of the helping individual. Natural selection will lead to organisms that are adapted to
maximise what Hamilton called their inclusive fitness, i.e. the sum of their personal
fitness and the fitness that results from helping relatives. We use Hamilton's
theory here for understanding how and why stable collectives evolved.
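Hamilton's insight is often summarised as 'Hamilton's rule'. The following is a compact textbook formulation, given here only as an aid to the reader; the symbols r, b and c are standard notation from the inclusive fitness literature, not notation introduced in this paper:

```latex
% Hamilton's rule: a gene for a social (helping) behaviour is favoured
% by selection when relatedness-weighted benefit exceeds cost:
\[
  r\,b > c
\]
% where c is the fitness cost to the actor, b the fitness benefit to the
% recipient, and r the genetic relatedness between actor and recipient.
% Inclusive fitness is then the sum of personal (direct) fitness and the
% relatedness-weighted fitness effects on others (indirect fitness):
\[
  w_{\text{inclusive}} = w_{\text{direct}} + \sum_{j} r_{j}\, b_{j}
\]
```

On this formulation, helping a close relative raise children (the example above) pays whenever the indirect benefit rb exceeds the personal cost c.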
Before the rise of inclusive fitness theory, models of population genetics were
used for understanding the stability of organisms (e.g. stabilising selection for
understanding why there is selection against the extremes in a population). Modern
inclusive fitness theory provides us with new tools for investigating the stability of
an organism. Recall that organisms are the result of an evolutionary transition
(Bourke 2011; Buss 1987; Maynard Smith and Szathmáry 1995, 1999). An
evolutionary transition is the evolution of a higher-level unit out of lower-level
units. For instance, cells joined other cells, resulting in the symbiotic unicell (i.e.
the eukaryotic cell), and symbiotic unicells later joined other symbiotic unicells,
resulting in the multicellular organism. Hence higher-level entities (symbiotic
unicells, multicellular organisms) are composed of lower-level entities (cells) that
were independent entities before the transition. Inclusive fitness theory explains why
cooperation between lower-level entities was favoured: it enhanced inclusive fitness,
whether through indirect or direct fitness benefits (Hamilton 1964;
Gardner et al. 2011).
Bourke (2011) has argued that we can subdivide an evolutionary transition into
three stages: social group formation, maintenance and transformation. He also
argued that the evolutionary processes ‘driving’ these three stages may differ. Social
group formation occurs when, depending on genetic and ecological factors, genes
for social behaviour spread in the population. Social group maintenance is the
process that maintains the stability of the newly formed group. Since the
cooperative behaviour of a group is subject to exploitation, both by group members
(cheaters, free riders) and by parasites, mechanisms evolved for limiting the potential
for conflict and exploitation. These mechanisms are summarised by terms such as
‘self-limitation’ (e.g. mechanisms preventing selfish behaviour of cancerous cells in
a multicellular organism); ‘coercion’ (holding selfishness in check by e.g. policing,
punishment and dominance), and ‘defence mechanisms’ such as the immune system
against pathogens. Social group transformation occurs when the newly formed
group is so stable that it becomes an adaptive unit and may become itself a unit in
the next evolutionary transition. The fitness ‘interests’ of the group members are
then aligned. This occurs (in theory) when conflict within the group is completely
repressed (cf. Gardner and Grafen 2009; see also Queller and Strassmann 2009).
How do these evolutionary insights relate to the neo-Aristotelian conception?
The important point to notice is that organs and persons are not the lower-level entities
constituting higher-level entities: the lower-level entities are cells (constituting multicellular
organisms) and organisms (constituting animal, and later human, societies). Organs
evolved after the initial social group formation as the result of epigenesis, i.e.
through differential gene expression different types of cells, tissues and organs evolved
in a multicellular organism. Organs then became, just like cells, parts of the organism.
For instance, during the evolution of large multicellular animals a heart evolved,
since diffusion (a chemical process) was no longer adequate to distribute nutrients
and oxygen through the body. The heart evolved to serve a function. Teleological
explanations clarify why a normal functioning heart contributes to the health of the
organism and why a malfunctioning heart results in disease. In a similar vein,
persons are not the lower-level entities during the initial formation of human
societies. At first, linguistic behaviour, a new form of communicative behaviour,
evolved as an extension and replacement of nonverbal behaviour also displayed by
the other animals. When 'simple' (linguistic) communicative acts and responses
(like demanding, asking, ordering) were later extended with more ‘complex’
communicative acts like expressions of intending, thinking, and imagining (Smit
2013, 2014), inclusive fitness theory predicts that reciprocating and acting on norms
evolved because these enhanced inclusive fitness. Human societies were then
populated by rational, language-using creatures with powers of intellect and will, and with
knowledge of good and evil (though notice that it is a very long way from rudimentary
language to moral agency and responsibility). Hence, as the result of language
evolution, human beings evolved into persons: they became bearers of rights and duties
and were answerable and responsible for their deeds. Notice that this is the reason
why babies are nowadays treated as persons, although they are immature,
undeveloped, and not yet responsible for their deeds. They are treated as persons
for they have an innate ability to acquire a language and become creatures capable
of participating in a culture (it belongs to their nature to become a person). Note that
it is also the reason why human beings who have lost the powers of reason through
age, illness or injury are still persons, precisely because they are damaged, impaired
human beings. We treat them morally as such (a person in a vegetative state is not
treated like a cabbage). Two reminders are relevant here. First, from the first use of
words to speaking and understanding a complex language with a grammar is a long
road (Smit 2013, 2014). This is unsurprising given that the use of an expanded
language requires several skills. Second, recall that the concept of a person is not a
substance concept but a status concept (essential to our current moral and legal thought; see
further Hacker 2007).
Note that arguing that the human psuchē evolved as the result of the evolutionary
transition from (animal) nonverbal to (human) linguistic behaviour does not commit
one to mind-creationism, just as saying that organs evolved in multicellular
organisms as the result of the transition from cells to multicellular organisms does
not commit one to organ-creationism. And note also that, in contrast to Dennett, we
do not argue that there are ‘lower forms’ of intentionality in cells and the other
animals (cells do not have intentional properties; intentionality is the result of
language evolution). Asking whether there are ‘lower forms’ of intentionality in
cells and the other animals is like posing the question whether there are ‘lower
forms’ of organs in a colony of yeast cells. Are there ‘pseudo-‘, ‘semi-‘, or ‘as if-
organs’ in a colony of yeast cells?
Some evolutionary theorists (e.g. Bourke 2011) have argued that language
evolution is not an example of an evolutionary transition, because it did not result in
a new biological entity. We include language evolution because it enables humans
to create new bonds resulting in new collectives consisting of individuals interacting
in a coordinated manner to achieve common goals. Inclusive fitness theory is
capable of explaining the evolution of new collectives not only where the lower-level
entities are parts physically joined to one another (a multicellular organism), or
parts that remain, and tend to remain, in close proximity (an insect society), but
also where they are members of a group or society, since language enabled humans to
create new types of bonds between individuals even when they do not live in close
proximity to each other. However, it is important to note that speaking about language evolution
as an example of an evolutionary transition does not mean that human societies are,
or are predicted to become, biological entities (as Bourke correctly noticed). This
has the important consequence that human societies are not, and will not evolve
into, obligate collectives. In obligate social groups the lower-level entities can only
replicate as part of the group. For example, cells in obligate multicellular
organisms can only replicate as part of an organism, whereas cells in facultative
multicellular organisms (e.g. the social amoebae)
have the capacity to replicate independently of the higher-level entity. There is clear
evidence that obligate social groups evolve when r = 1, for there are no known
examples of obligate multicellular organisms with r < 1 (Fisher, Cornwallis and West
2013), i.e. arising when social groups are formed out of an aggregation of cells that are
not always genetically identical. Interestingly, there is a similar story to tell in the case
of insect societies: obligate insect societies (with sterile females) only evolved where
there is lifetime monogamy (Boomsma 2009), leading to a potential worker being equally
related to her own offspring and to the offspring of her mother (r = 1/2 in both cases;
the relatedness to the offspring of her mother is the average of 3/4 to a sister and 1/4 to
a brother).3 Any small efficiency benefit of rearing siblings (offspring of the mother)
over rearing her own offspring will then favour eusociality (i.e. a reproductive division
of labour resulting in an obligate insect society). Hence it appears that a bottleneck is
essential for the formation of obligate collectives: they evolve when a multicellular
organism is derived from a single cell, and when an insect society is started by a
single queen fertilised by a single male.
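The relatedness values just cited (r = 1/2 in both cases) can be verified with a short calculation; the haplodiploid relatedness coefficients used are the standard ones from the social evolution literature, not values derived in this paper:

```latex
% In haplodiploid insects, females are diploid and males haploid.
% A female worker's relatedness is:
%   to her own offspring:  r = 1/2
%   to a full sister:      r = 3/4
%   to a brother:          r = 1/4
% Under lifetime monogamy, her mother's offspring are full sisters and
% brothers in equal proportion, so her average relatedness to a sibling is
\[
  r = \tfrac{1}{2}\cdot\tfrac{3}{4} + \tfrac{1}{2}\cdot\tfrac{1}{4}
    = \tfrac{3}{8} + \tfrac{1}{8} = \tfrac{1}{2},
\]
% which exactly equals her relatedness to her own offspring. Any small
% efficiency advantage of rearing siblings therefore tips the balance
% towards eusociality.
```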
Yet although current human societies are not predicted to become obligate
societies, typical human social groups did evolve as the result of language evolution.
One can argue that the stability of human social groups requires a socio-cultural
explanation, but there are theoretical and empirical arguments for a role for
explanations in terms of inclusive fitness theory here (Smit 2014). We mention two
examples. First, there is evidence that the ability to acquire a language has been
subject to selection, as mutations of genes affecting language acquisition show.
There is also evidence that imprinted genes affect the ability to acquire a language,
inviting the question of how we can explain the evolution of these mono-allelically
expressed genes in terms of inclusive fitness theory (Smit 2013). Second, if
language evolution enhanced the inclusive fitness of groups, and if language was
important for group identity (so that members were able and willing to pursue
common goods through joint projects; see Darwin 1871 [2003], pp. 127–133), then
this may explain why there is a critical period for acquiring a language. Yet one has
to keep in mind that natural selection favoured groups during a specific period of our
evolutionary history, i.e. when there was strong competition between groups. Recall
that human social groups are not an (obligate) end product, for during the later stages
of human evolution cities, nations and federations evolved. It is therefore interesting
to use inclusive fitness theory for understanding human group formation, but it is no
less interesting to use cultural and historical explanations for understanding the rise
of complex societies.
An important reason why inclusive fitness theory is capable of explaining the
different evolutionary transitions is of course the pivotal role of genes during
ontogenesis and evolution. Hacker's analogy (see misconception 3: the relation
between an organ and the human being is comparable to London's being a part of the
United Kingdom but not of the European Union) is adequate for conceptual
purposes. For example, it highlights that, as the result of the transition from
Footnote 3: Boomsma's 'monogamy hypothesis' should not be confused with Hamilton's 'haplodiploidy
hypothesis'. Hamilton argued that haplodiploidy should facilitate the evolution of altruistic helping.
However, studies have shown that it is unlikely that haplodiploidy was an important driver of eusociality,
while there is evidence that monogamy has played a key role (see further Gardner et al. 2012).
unicellular to multicellular organisms with organs, cells and organs became parts of
a being that has, or will evolve, purposes of its own. The purposes of organs became
subservient to the good of the being whose organs they are (note again that machine
parts also have purposes, and these parts are also subservient to the functions of the
machine of which they are parts, but not to its good, for a machine has no good; see
misconception 2). Yet this conceptual observation raises the evolutionary problem
of how we can understand why organisms have a good. Humans, as language-using
creatures, can say that they do not feel well (when they are or become ill). If we
notice that linguistic behaviour evolved out of nonverbal behaviour (and note that
animals also display pain-behaviour, avoid noxious stimuli, etc.), then it becomes
interesting to investigate when and how linguistic utterances extended and replaced
behaviours also displayed by the other animals. For example, the sensation of pain
(e.g. as the result of an injury to a limb) probably evolved because organisms in pain
did not overload the injured limb, resulting in quicker recovery (Nesse and Williams 1994).
But expressing pain in nonverbal behaviour, and later in linguistic behaviour, is also
beneficial if it results in helping behaviour by others (assuming that helping
enhances inclusive fitness). This explains why helping and caring behaviour
evolved, and why humans have compassion for the poor and the sick and have a moral
sense. Note, again, that the notion of a moral sense does not mean that we perceive
what is right or wrong, nor does it imply that there is an organ of moral sense (the
eye is the organ of sight, but the conscience is not the organ of moral thought). But
there are reasons for using the notion of a moral sense, for there is a weak connection
here with the notion of sense. For example, a person with a sharp moral sense knows
(not: perceives) what must and what may not be done. And there is another nexus
through the notion of moral sensibility: humans have a feeling for good and bad.
Genes are important because they are transmitted from one generation to the next.
Selection on phenotypes affects the chance that certain genotypes (and not others)
are present in the next generation (evolution). The selected genotypes determine
which phenotypes will develop in the next generation (ontogenesis). We have
argued that inclusive fitness theory is capable of explaining at least the initial steps
resulting in the human, rational psuchē, because this theory clarifies why linguistic
behaviour evolved out of the nonverbal behaviours displayed by the other animals
and our predecessors. We have also argued that understanding the mereological
fallacy is essential here, for this helps us to avoid conceptual pitfalls and enables us
to pose the interesting questions. Understanding this fallacy does not constrain
conceptual innovation (as some critics mistakenly believe) but helps us to determine
what makes sense. This is important, for empirical investigations aiming at
discovering empirical truths presuppose the correct use of linguistic expressions.
Acknowledgments We thank two reviewers for their comments on an earlier draft of this paper.
References
Bennett, M. R., & Hacker, P. M. S. (2003). Philosophical foundations of neuroscience. Oxford: Blackwell
Publishing.
Bennett, M. R., & Hacker, P. M. S. (2008). History of cognitive neuroscience. Chichester: Wiley.
Birkhead, T. (2012). Bird sense: What it’s like to be a bird. New York: Walker & Company.
Boomsma, J. J. (2009). Lifetime monogamy and the evolution of eusociality. Philosophical Transactions
of the Royal Society London B, 364, 3191–3207.
Bourke, A. F. G. (2011). Principles of social evolution. Oxford: Oxford University Press.
Buss, L. W. (1987). The evolution of individuality. Princeton: Princeton University Press.
Churchland, P. M. (2005). Cleansing science. Inquiry, 48, 464–477.
Clutton-Brock, T. (2009). Cooperation between non-kin in animal societies. Nature, 462, 51–57.
Dainton, B. (2007). Wittgenstein and the brain. Science, 317, 901.
Darwin, C. (1859 [1968]). On the origin of species by means of natural selection. Harmondsworth:
Penguin Books.
Darwin, C. (1871 [2003]). The descent of man and selection in relation to sex. London: Gibson Square
Books Ltd. With an introduction by Richard Dawkins: First published in 1871 by John Murray.
Dawkins, R. (1979). Twelve misunderstandings of kin selection. Zeitschrift für Tierpsychologie, 51,
184–200.
Dennett, D. (1995). Darwin’s dangerous idea: Evolution and the meanings of life. New York: Simon &
Schuster.
Dennett, D. (2007). Philosophy as naı̈ve anthropology: Comment on Bennett and Hacker. In M. Bennett,
D. Dennett, P. Hacker, & J. Searle (Eds.), Neuroscience and philosophy: Brain, mind, and language
(pp. 73–95). New York: Columbia University Press.
Fisher, R. M., Cornwallis, C. K., & West, S. A. (2013). Group formation, relatedness and the evolution of
multicellularity. Current Biology, 23, 1120–1125.
Gardner, A., Alpedrinha, J., & West, S. (2012). Haplodiploidy and the evolution of eusociality: Split sex
ratios. The American Naturalist, 179, 240–256.
Gardner, A., & Grafen, A. (2009). Capturing the superorganism: A formal theory of group selection.
Journal of Evolutionary Biology, 22, 659–671.
Gardner, A., West, S. A., & Wild, G. (2011). The genetical theory of kin selection. Journal of
Evolutionary Biology, 24, 1020–1043.
Glock, H.-J. (2000). Animals, thoughts and concepts. Synthese, 123, 35–64.
Hacker, P. M. S. (2005). Goodbye to qualia and all what? Journal of Consciousness Studies, 12(11),
61–66.
Hacker, P. M. S. (2007). Human nature: The categorial framework. Oxford: Basil Blackwell.
Hacker, P. M. S. (2013a). Before the mereological fallacy: A rejoinder to Rom Harré. Philosophy, 88,
141–148.
Hacker, P. M. S. (2013b). The intellectual powers: A study of human nature. Chichester: Wiley.
Hamilton, W. D. (1964). The genetical evolution of social behaviour I and II. Journal of Theoretical
Biology, 7, 1–52.
Hampshire, S. (1959). Thought and action. London: Chatto and Windus.
Harré, R. (2012). Behind the mereological fallacy. Philosophy, 87, 329–352.
Hodgson, D. (2005). Goodbye to qualia and all that? Journal of Consciousness Studies, 12(2), 84–88.
Kenny, A. (1963). Action, emotion, and the will. London: Routledge & Kegan Paul.
Kenny, A. (1984) The homunculus fallacy. In The legacy of Wittgenstein (pp. 125–136). Oxford: Basil
Blackwell. First published in M. Greene (ed.), Interpretations of life and mind. London: Routledge
& Kegan Paul, 1971.
Kenny, A. (1989). The metaphysics of mind. London: Clarendon Press.
Maynard Smith, J., & Szathmáry, E. (1995). The major transitions in evolution. New York: W.H.
Freeman.
Maynard Smith, J., & Szathmáry, E. (1999). The origins of life: From the birth of life to the origin of
language. Oxford: Oxford University Press.
Nesse, R. M., & Williams, G. C. (1994). Why we get sick. New York: Times Books.
Paley, W. (1802 [2006]). Natural theology, or evidence of the existence and attributes of the Deity
collected from the appearances of nature. Oxford: Oxford University Press. Edited by M. D. Eddy
and D. Knight.
Queller, D. C., & Strassmann, J. E. (2009). Beyond society: The evolution of organismality.
Philosophical Transactions of the Royal Society B, 364, 3143–3155.
Rundle, B. (1997). Mind in action. Oxford: Clarendon Press.
Scott-Phillips, T. C., Dickins, T. E., & West, S. A. (2011). Evolutionary theory and the ultimate/
proximate distinction in the human behavioural sciences. Perspectives on Psychological Science, 6,
38–47.
Searle, J. (2007). Putting consciousness back in the brain: Reply to Bennett and Hacker, philosophical
foundations of neuroscience. In M. Bennett, D. Dennett, P. Hacker, & J. Searle (Eds.), Neuroscience
and philosophy: Brain, mind, and language (pp. 97–124). New York: Columbia University Press.
Smit, H. (2010a). Darwin’s rehabilitation of teleology versus Williams’ replacement of teleology by
natural selection. Biological Theory, 5, 357–365.
Smit, H. (2010b). Weismann, Wittgenstein and the homunculus fallacy. Studies in the History and
Philosophy of Biology and the Biomedical Sciences, 41, 263–271.
Smit, H. (2013). Effects of imprinted genes on the development of communicative behavior: A
hypothesis. Biological Theory, 7, 247–255.
Smit, H. (2014). The social evolution of human nature: From biology to language. Cambridge:
Cambridge University Press.
Visser, M. E., & Lessells, C. M. (2001). The costs of egg production and incubation in great tits (Parus
major). Proceedings of the Royal Society B, 268, 1271–1277.
von Wright, G. H. (1963). The varieties of goodness. London: Routledge & Kegan Paul.
West, S. A., El Mouden, C., & Gardner, A. (2011). 16 common misconceptions about the evolution of
cooperation in humans. Evolution and Human Behavior, 32, 231–262.
Williams, G. C. (1966 [1992]). Adaptation and natural selection (2nd ed.). Princeton: Princeton
University Press.
Wittgenstein, L. (1953 [2009]). Philosophical investigations. Chichester: Wiley-Blackwell. Translated by
G. E. M. Anscombe, P. M. S. Hacker & J. Schulte. Revised fourth edition by P. M. S. Hacker & J.
Schulte.