The Holmesian Logic
Introduction
In the fictional universe Sherlock Holmes inhabits, logicians are capable of
astounding feats that seem well beyond the abilities of their real-world
counterparts. Consider the famous exchange between Inspector Gregory and Holmes:
“Is there any point to which you would wish to draw my attention?”
“To the curious incident of the dog in the night-time.”
“The dog did nothing in the night-time.”
“That was the curious incident,” remarked Sherlock Holmes. (The
Adventure of Silver Blaze)
We may reconstruct Holmes’ reasoning about Simpson as a decision tree
summarizing the construction of his deductive–interrogative argument,
depicted in Fig. 1. Question Q1 is the ‘big’ question and the premises sum
up the (explicit) background information that both Gregory and Holmes are
attending to. Note that the premises do not yet mention the dog’s behavior,
since Holmes considers it via questions Q2 and Q3, which Gregory
overlooked. Holmes obtains A2 from Inspector Gregory (cf. above), and A3
and A4 from his own memory (cf. below), enabling the reductio of
Simpson’s guilt. The question–answer pairs (Q3, A3) and (Q4, A4)
illustrate the role of memory in Holmesian inquiry, which the Hintikkas
characterize as follows:
In some cases, the great detective has to carry out an observation or even an
experiment to answer the question. More frequently, all he has to do is to
perform an anamnesis and recall certain items of information which he
already had been given and which typically had been recorded in the story
or novel for the use of the readers, too, or which are so elementary that any
intelligent reader is expected to know them. (Hintikka and
Hintikka 1983, p. 159)
Paraphrasing the Hintikkas, (Q3, A3) represents the anamnesis of
elementary items of information (general knowledge about well-trained
watchdogs) that anyone (including Gregory) should know. The pair
(Q4, A4) similarly represents how Holmes recalls the newspaper account
of Simpson’s evening visit (discussed between Holmes and Watson for the
reader’s benefit at the very beginning of the short story). Holmes then
deduces that Simpson cannot be guilty, and in the process, obtains a new
descriptor for the thief of Silver Blaze (someone the dog knew well).
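The reductio can be rendered as a small propositional sketch. The encoding below is our hypothetical reconstruction of the answers A2–A4 (the variable names and constraints are ours, not the Hintikkas’ formalization); a brute-force check over truth assignments shows that no model makes Simpson guilty, and that every surviving model contains the new descriptor.

```python
from itertools import product

# Hypothetical propositional rendering of the reductio (variable names
# are ours):  g - Simpson is the thief;  b - the dog barked in the
# night;  k - the intruder was someone the dog knew well.
constraints = [
    lambda g, b, k: not b,               # A2: the dog did nothing
    lambda g, b, k: b or k,              # A3: a watchdog barks at strangers
    lambda g, b, k: (not g) or (not k),  # A4: the dog did not know Simpson well
]

models = [(g, b, k)
          for g, b, k in product([True, False], repeat=3)
          if all(c(g, b, k) for c in constraints)]

# Simpson's guilt is ruled out in every surviving model, and each model
# carries the new descriptor: the thief was someone the dog knew well.
assert all(not g for g, b, k in models)
assert all(k for g, b, k in models)
```

A single model survives, in which Simpson is innocent, the dog was silent, and the intruder was known to the dog.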
Fig. 1 Decision tree summarizing the construction of Holmes’ deductive–interrogative argument in Silver Blaze
Holmes himself explains the principle of memory management behind these anamneses in A Study in Scarlet:
I consider that a man’s brain originally is like a little empty attic, and you
have to stock it with such furniture as you choose. A fool takes in all the
lumber of every sort that he comes across, so that the knowledge which
might be useful to him gets crowded out, or at best is jumbled up with a lot
of other things, so that he has a difficulty in laying his hands upon it. Now
the skillful workman is very careful indeed as to what he takes into his
brain-attic. He will have nothing but the tools which may help him in doing
his work, but of these he has a large assortment, and all in the most perfect
order. It is a mistake to think that that little room has elastic walls and can
distend to any extent. Depend upon it there comes a time when for every
addition of knowledge you forget something that you knew before. It is of
the highest importance, therefore, not to have useless facts elbowing out
the useful ones. (A Study in Scarlet, I, 2)
Holmes’ notion of a ‘brain attic memory’ (hereafter, BAM) is a fictional-
world explanation for plot devices that would otherwise look like lucky
guesses, but it is partially vindicated by current neurocognitive models.
According to these models, human memory is a content-addressable
memory (hereafter, CAM) that takes data as input and activates the cliques
of neurons (see below) where similar data is stored (see e.g., Hebb 2002;
Marr 1969). By contrast, a random access memory (RAM), typical of
artificial computers, takes addresses as input and returns the data stored at
those addresses. Since grouping similar data in neighboring locations
(addresses or cliques of neurons) improves the performance of a CAM (but
not that of a RAM), Holmes’ recommendation to keep “a large assortment [of
tools], and all in the most perfect order” is cogent advice for improving
the performance of human CAM, which his BAM approximates (at least in that
respect). This is particularly true relative to domains of expertise.
Holmes is indeed a “skillful workman” specializing in criminal
inquiry, and maintaining his BAM/CAM in “the most perfect order” supports
relevant associations that often escape other criminal investigators.
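The CAM/RAM contrast can be sketched in a few lines of code. The sketch below is only an illustration of the two retrieval regimes, not a neurocognitive model, and all the stored items are invented for the example: the CAM returns the stored pattern closest to a (possibly degraded) cue, while the RAM returns data only for an exact address.

```python
# A toy contrast between address-based (RAM) and content-based (CAM)
# retrieval.  Memories are bit patterns; the CAM is cued with a noisy
# pattern and returns the closest stored one by Hamming distance, a
# crude stand-in for activating the clique of neurons storing similar
# data.

def hamming(p, q):
    """Number of positions at which two bit patterns differ."""
    return sum(a != b for a, b in zip(p, q))

class RAM:
    def __init__(self):
        self.cells = {}
    def write(self, address, data):
        self.cells[address] = data
    def read(self, address):          # exact address or nothing
        return self.cells.get(address)

class CAM:
    def __init__(self):
        self.patterns = []
    def store(self, pattern):
        self.patterns.append(pattern)
    def recall(self, cue):            # best match to a partial/noisy cue
        return min(self.patterns, key=lambda p: hamming(p, cue))

cam = CAM()
cam.store((1, 1, 1, 1, 0, 0, 0, 0))   # "Trichinopoly ash"
cam.store((0, 0, 0, 0, 1, 1, 1, 1))   # "bird's-eye ash"

# A degraded cue (two bits flipped) still retrieves the right memory:
cue = (1, 1, 0, 1, 0, 0, 1, 0)
assert cam.recall(cue) == (1, 1, 1, 1, 0, 0, 0, 0)

ram = RAM()
ram.write(0x2A, "Trichinopoly ash")
assert ram.read(0x2A) == "Trichinopoly ash"
assert ram.read(0x2B) is None          # a near-miss address returns nothing
```

A near-miss cue still succeeds for the CAM while a near-miss address fails for the RAM, which is why keeping similar items close together pays off only for the former.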
The Holmesian canon offers more than a few examples of how a well-
maintained “brain attic” supports Holmes’ investigations. Holmes has
made a special study of tobacco ashes and even written a monograph on the
topic (“Upon the Distinction Between the Ashes of the Various Tobaccos”,
mentioned in A Study in Scarlet, The Boscombe Valley Mystery and The
Hound of the Baskervilles). He has also gathered very specific knowledge
from special sciences, such as physical anthropology (that people tend to
write graffiti at or just below eye level). This expert knowledge makes
Holmes sensitive to details at the crime scene at Lauriston Gardens (visited
in A Study in Scarlet) that inspectors Lestrade and Gregson have
overlooked or misinterpreted. Specifically, Holmes spots Trichinopoly cigar
ashes on the floorboards and estimates the height of a murderer from an
inscription (“Rache”) written on a wall. From this and other clues, he
concludes that the murderer is a man, contrary to Lestrade and Gregson
who suspect a woman (perhaps named Rachel). Ruling out that reading even
suggests a motive and an ethnic origin (Rache means ‘revenge’ in German),
both later confirmed by Holmes’ investigation. Eventually, Holmes
positively identifies the culprit in part thanks to Trichinopoly ashes in his
hotel room.
Holmes’ own account of how he identified Watson as an army doctor back from Afghanistan, at their very first meeting, illustrates the same associative recall:
“Those rules of deduction laid down in that article which aroused your
scorn are invaluable to me in practical work. Observation with me is second
nature. You appeared to be surprised when I told you, on our first meeting,
that you had come from Afghanistan.”
“You were told, no doubt.”
“Nothing of the sort. I knew you came from Afghanistan. From long habit
the train of thoughts ran so swiftly through my mind that I arrived at the
conclusion without being conscious of intermediate steps. There were such
steps, however. The train of reasoning ran, ’Here is a gentleman of a
medical type, but with the air of a military man. Clearly an army doctor,
then. He has just come from the tropics, for his face is dark, and that is not
the natural tint of his skin, for his wrists are fair. He has undergone
hardship and sickness, as his haggard face says clearly. His left arm has
been injured. He holds it in a stiff and unnatural manner. Where in the
tropics could an English army doctor have seen much hardship and got his
arm wounded? Clearly in Afghanistan.’ The whole train of thought did not
occupy a second. I then remarked that you came from Afghanistan, and you
were astonished.”
“It is simple enough as you explain it,” I said, smiling. (Ibid., 1)
Holmes’ claim that he is relying on the “rules of deduction” is somewhat
puzzling. Holmes reasons from associations between Watson’s title
(Doctor), demeanor (posture, stiff arm) and appearance (skin tone,
gauntness) to possible occupation (army doctor) and circumstances
associated with it (foreign deployment, injury, sickness). As noted by the
Hintikkas (1983, p. 163), the intermediate conclusion of Holmes’ reasoning
is literally expressed as the answer to a question (Where in the tropics...?)
and, with Watson identified as an army doctor, only requires
an anamnesis (that there is a war in Afghanistan, where the sun shines
bright and the sanitary situation is hazardous) to obtain a unique location
for Watson’s last deployment.
The Hintikkas draw the general moral as follows:
The crucial part of the task of the Holmesian “logician”, we are suggesting,
is not so much to carry out logical deductions as to elicit or to make explicit
tacit information. This task is left unacknowledged in virtually all
philosophical expositions of logical reasoning, of deductive heuristics, and
of the methodology of logic and mathematics. For this neglect the excuse is
often offered that such processes of elucidations and explication cannot be
systematized and subjected to rules. It may indeed be true that we cannot
usually give effective rules for heuristic processes. It does not follow,
however, that they cannot be discussed and evaluated, given a suitable
conceptual framework. (Hintikka and Hintikka 1983, pp. 156–157)
An example of “deductive heuristics” would be the appeal to auxiliary
constructions (lemmas), typically to shorten deductive proofs. Assuming a
preference for shorter proofs, a strategy with lemmas is in general
preferable to a strategy without lemmas. In formal proofs, lemmas are
handled via the Cut Rule, which is, in the proof system that underlies the
Hintikkas’ model, equivalent to introducing an instance of the Excluded
Middle. Any language as expressive as a first-order language capable of
enumerating objects would generate countably many candidate lemmas for
any proof in that language. Furthermore, some candidates not only would
not shorten the proof, but might delay its completion indefinitely. Thus,
there can be no effective method to eliminate those candidates. By the same
token, there is no general effective method for selecting the best lemma.
Since the Cut Rule is also the device by which yes-or-no questions are
introduced in the Hintikkas’ model, they are correct to point out that heuristics
in their “logic of discovery” are no more problematic than in logic tout
court. That “they [can] be discussed and evaluated, given a suitable
conceptual framework” is illustrated inter alia by the Strategy Theorem,
which requires select applications of the Cut Rule, for which there is no
effective method, while retaining a normative content (see note 7).
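The effect of lemma choice on proof search can be imitated with a toy derivation problem. The rewrite system and the numbers below are illustrative, not part of the Hintikkas’ proof system: derivations are paths from an axiom to a goal under two rules, a “lemma” is an intermediate goal that splits the search in two, and a badly chosen lemma can leave the search running until its budget is exhausted.

```python
from collections import deque

def bfs_derive(start, goal, max_nodes=10_000):
    """Breadth-first search for a derivation of `goal` from `start`
    using the rules n -> n + 1 and n -> 2 * n.  Returns
    (path, nodes_explored), or (None, nodes_explored) if the search
    gives up after `max_nodes` expansions."""
    frontier = deque([[start]])
    seen = {start}
    explored = 0
    while frontier and explored < max_nodes:
        path = frontier.popleft()
        explored += 1
        n = path[-1]
        if n == goal:
            return path, explored
        for succ in (n + 1, 2 * n):
            if succ not in seen:
                seen.add(succ)
                frontier.append(path + [succ])
    return None, explored

# Direct derivation of 100 from 1.
direct, cost_direct = bfs_derive(1, 100)

# A helpful "lemma": derive 50 first, then 100 from 50.  The two
# subsearches together explore fewer nodes than the direct search.
left, c1 = bfs_derive(1, 50)
right, c2 = bfs_derive(50, 100)
assert left and right and c1 + c2 < cost_direct

# A useless "lemma": 101 can never be rewritten into 100, since both
# rules increase the value -- the second subsearch exhausts its budget.
stuck, _ = bfs_derive(101, 100)
assert stuck is None
```

Here the good lemma shortens the search, while the bad lemma creates an unreachable subgoal; an unbounded search on it would never halt, mirroring the claim that there is no effective method for screening candidate lemmas in advance.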
Herbert Simon argued, in much the same way as the Hintikkas, that a
normative framework for discovery must be able to rank discovery
strategies as better or worse and to provide recommendations for
choosing discovery strategies (see Simon 1973). Taking stock of computer
implementations of his model of problem-solving (in Zytkow and
Simon 1988), Simon further illustrated his point with domain-specific
algorithms for solving particular classes of problems, namely the discovery
of empirical laws from qualitative data (descriptions of chemical reactions)
or quantitative data (descriptions of physical systems). In both cases, the
discovery algorithms can be ranked as better or worse based on how fast
they converge to known empirical laws. Simon argued that the ability to
support normative claims qualifies his framework as a ‘logic of discovery’
for the classes of problems they address, in the same sense that deductive logic
(Popper) or probability theory (Reichenbach) would qualify as ‘logic(s) of
justification’ for empirical hypotheses.
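A miniature example conveys the flavor of such discovery algorithms. The function below is a simplified stand-in for BACON-style invariant hunting, not Zytkow and Simon’s actual code: it searches small integer exponents (a, b) until x**a * y**b is nearly constant across the observations, and competing search orders could then be ranked by how quickly they converge on a known law.

```python
def find_invariant(x, y, max_exp=3, tol=0.05):
    """Search small integer exponents (a, b) such that x**a * y**b is
    (nearly) constant across all observations -- a crude stand-in for
    the invariant-hunting heuristics of BACON-style discovery
    programs.  Returns the first (a, b) found, or None."""
    for a in range(-max_exp, max_exp + 1):
        for b in range(-max_exp, max_exp + 1):
            if a == 0 and b == 0:
                continue
            vals = [xi**a * yi**b for xi, yi in zip(x, y)]
            spread = (max(vals) - min(vals)) / max(abs(v) for v in vals)
            if spread <= tol:
                return a, b
    return None

# Rounded orbital data for Mercury, Venus, Earth, Mars:
# periods in years, mean distances in astronomical units.
T = [0.24, 0.61, 1.00, 1.88]
D = [0.39, 0.72, 1.00, 1.52]

law = find_invariant(T, D)
# law == (-2, 3): D**3 / T**2 is invariant, i.e. Kepler's third law.
```

Even from rounded data, the search recovers Kepler’s third law; a competing strategy that enumerated candidate terms in a worse order would be ranked lower by the same convergence criterion.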
Concluding remarks
In the late 1980s, Zytkow and Simon (1988) took stock of the advances of
automated discovery algorithms and spelled out their consequences vis-à-vis
the Popper–Reichenbach thesis.
The Hintikkas demonstrated how far deductive norms can carry us. They
successfully captured the strategic role of deduction in problem-solving.
Their decision-theoretic approach to strategy selection with bounded
rationality collapsed eventually, but foreshadowed J. Hintikka’s proof-
theoretic account and the Strategy Theorem, which still stand. Finally, they
recognized that entailment-based inferential norms cannot carry very far
when it comes to introducing new concepts through questions, and they
pointed to the role of memory, beginning with Sherlock Holmes’ “brain
attic,” a suggestion that we followed here (and in greater detail in Genot
and Jacot 2018).
Notes
1.
2.
3.
For the formally minded reader: some questioning strategies can fall
short of providing enough information to entail an answer to the big
question. Given the semi-decidability of first-order logic, some of
these inconclusive strategies can be infinite (i.e. have infinitely many
steps). Thus, the standard means for strategic reasoning from utilities
(elimination of dominated strategies) would require the elimination
of infinite strategies. This would be tantamount to solving the Halting
Problem, which is uncomputable by Turing machines, and formally
equivalent to the decision problem for first-order logic (for a detailed
formal argument, see Genot 2018; Genot and Jacot 2018).
4.
5.
6.
7.