Philosophy of Science
ABDUCTIVE REASONING
IN SCIENCE
Finnur Dellsén
University of Iceland, Inland Norway University of
Applied Sciences and University of Oslo
https://doi.org/10.1017/9781009353199 Published online by Cambridge University Press
Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre,
New Delhi – 110025, India
103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467
www.cambridge.org
Information on this title: www.cambridge.org/9781009500524
DOI: 10.1017/9781009353199
© Finnur Dellsén 2024
This publication is in copyright. Subject to statutory exception and to the provisions
of relevant collective licensing agreements, with the exception of the
Creative Commons version the link for which is provided below,
no reproduction of any part may take place without the written permission of
Cambridge University Press & Assessment.
An online version of this work is published at doi.org/10.1017/9781009353199
under a Creative Commons Open Access license CC-BY-NC 4.0 which permits
re-use, distribution and reproduction in any medium for non-commercial
purposes providing appropriate credit to the original work is given
and any changes made are indicated. To view a copy of this license visit
https://creativecommons.org/licenses/by-nc/4.0
When citing this work, please include a reference to the DOI 10.1017/9781009353199
First published 2024
A catalogue record for this publication is available from the British Library.
ISBN 978-1-009-50052-4 Hardback
ISBN 978-1-009-35318-2 Paperback
ISSN 2517-7273 (online)
ISSN 2517-7265 (print)
Cambridge University Press & Assessment has no responsibility for the persistence
or accuracy of URLs for external or third-party internet websites referred to in this
publication and does not guarantee that any content on such websites is, or will
remain, accurate or appropriate.
First published online: June 2024
Contents
Introduction
Conclusion
References
Introduction
Scientists are constantly engaged in various forms of reasoning, arguing that
because this is the case, that must be the case. Some of these forms of reason-
ing are from what may broadly be called data to what may broadly be called
theory. The data are things like observations, survey statistics, and experimental
results. A theory is typically a more ambitious type of claim that often gener-
alizes, expands, or otherwise “goes beyond” the data, such as by specifying
what causes some type of event. For example, by the early twentieth century
there was already a great deal of observational data suggesting that lung cancer
is more frequent among tobacco smokers than among non-smokers. From this
data most scientists eventually inferred that smoking causes lung cancer, and
so that one may reduce one’s chances of getting lung cancer by refraining from
smoking.
The term “abductive reasoning” refers, at least for the purposes of this Ele-
ment, to a specific way of engaging in data-to-theory reasoning. In particular,
it refers to reasoning in which theories are evaluated at least partly on the basis
of how well they would, if true, explain the available data. To see how this is
supposed to work, consider how one might conclude that smoking causes lung
cancer in the above example. The theory that smoking causes lung cancer seems
to provide a good explanation, especially compared to rival explanations, of the
observed difference in lung cancer frequency among smokers and nonsmokers.
In particular, the theory that smoking causes lung cancer arguably provides a
much better explanation of this data than various other theories one might think
of, such as that the correlation between smoking and lung cancer is a mere
1 This latter type of explanation was seriously proposed by R.A. Fisher (1959), who suggested
that having lung cancer might cause an unconscious irritation or pain, which in turn causes
people to smoke.
2 This curious situation evokes Voltaire’s (1759, ch. 70) quip that the Holy Roman Empire was
“in no way holy, nor Roman, nor an empire.”
3 Happily, there is another Element, Bayesianism and Scientific Reasoning (Schupbach, 2022),
that covers much of the ground I have in mind here, especially recent discussions of formal
measures of explanatory power and how they could be leveraged in an account of abductive
reasoning.
the nature and purpose of abductive reasoning. This view is difficult to sum-
marize briefly at this stage, but at a very general level it holds that abductive
reasoning is a collection of inferential strategies that serves to approximate dif-
ferent forms of probabilistic reasoning. Depending on the exact nature of the
probabilistic reasoning that is being approximated, the inferential strategy may
be more or less demanding. In particular, I will suggest that some of the prob-
abilistic conclusions we wish to reach are quite modest, for example, when
determining which theory to investigate further; in those cases, abductive rea-
soning is not very demanding. In other cases, we may want abductive reasoning
to warrant a reasonably high level of probabilistic confidence that a theory
is true; in those cases, abductive reasoning is an evidentially demanding and
temporally extended process that may not deliver the desired conclusion at all.
The rest of this Element is structured as follows. Section 1 briefly summa-
rizes the history of philosophical thought about abductive reasoning from the
advent of modern science to the middle of the twentieth century. Section 2
surveys contemporary accounts of abductive reasoning, based on a three-fold
distinction between accounts that construe abductive reasoning as (i) a form
of inference, (ii) a probabilistic process, or (iii) both of the above. Section 3
focuses on the fact that in abductive reasoning, one is told to infer or prefer the
best explanation. But what reason, if any, is there for scientists to prefer “better”
explanations in this way? As we shall see, there are several quite different types
of answers to this question, leading to different ideas about the role of abduc-
tive reasoning in science. Section 4 then discusses a different set of problems
for accounts of abductive reasoning, having to do with whether abductive reasoning
appears to have been largely implicit amongst working scientists of this period,
rather than being based on an explicit account of how reasoning of this kind
ought to proceed.
Peirce’s Abduction schema runs roughly as follows: the surprising fact C is
observed; but if A were true, C would be a matter of course; hence, there is
reason to suspect that A is true.
For example, one may notice the surprising fact that a burning object placed in
a vacuum immediately stops burning. If, as Lavoisier claimed, combustion is a
process in which a burning substance combines with oxygen, then this surpris-
ing fact would be a matter of course. Hence, according to Peirce’s Abduction
schema, there is reason to suspect that Lavoisier’s theory is true.
It is worth noting that Peirce was arguably not entirely consistent over
time about how he defined ‘Abduction’ – or, indeed, regarding which term
he used for it, preferring “Hypothesis” and “Retroduction” in his earlier work.
Moreover, most contemporary readers of Peirce agree that his use of the term
“Abduction” differs in important ways from how the term tends to be used and
understood today. In particular, several scholars (Hanson, 1958; Kapitan, 1992;
Minnameier, 2004; Campos, 2011) have argued that in his most influential
works, Peirce uses “Abduction” to refer to a psychological process of generat-
ing or suggesting new hypotheses. Put differently, the standard interpretation of
Peirce’s work is that his notion of Abduction primarily describes the process by
which we can or should come to think of novel theories, namely by considering
what type of theory would potentially explain the facts before us, regardless of
whether those theories can be considered true or plausible.
Apart from textual evidence supporting this interpretation, there are philo-
sophical reasons for taking Peircean Abduction to be something other than a
rule of inference – or, at most, to be a very weak rule of inference. After all, it
should be clear that the same set of facts may lead, via a Peircean Abduction, to
quite different, indeed incompatible, theories. Put in terms of the above schema,
for each C there will arguably be several incompatible theories A1, . . . , An such
that if each Ai were true, then C would be “a matter of course.” For exam-
ple, note that Lavoisier’s oxygen theory of combustion is not the only theory
on which we should expect an object to stop burning once placed in a vac-
uum. Consider instead the theory that burning involves the transfer of a specific
substance, phlogiston, from the object to the surrounding air. This theory also
explains why nothing burns in a vacuum, because in a vacuum there is no air to
receive the phlogiston that would otherwise be transferred from the object. So
which theory, Lavoisier’s oxygen-based theory or this phlogiston-based theory,
should be inferred? (We cannot infer both, since the two theories contradict
each other.) Peircean Abduction, by itself, does not answer these questions,
which in turn suggests that Peirce did not intend it to be a rule of inference
at all.
A note on terminology is appropriate at this point. As I have intimated, con-
temporary authors usually use the term “abduction” to refer to an epistemic
process of providing support for explanatory hypotheses (see, e.g., Douven,
2021). This is a process that is meant to make certain theories plausible or
believable, as opposed to merely helping us come up with those theories.
In order to prevent confusion between Peirce’s notion of Abduction and the
contemporary notion of abduction, I have chosen to use the term “abduc-
tive reasoning” when referring to the latter; and, on those occasions I refer to
the former, I will use “generation of explanatory hypotheses.” Keeping these
notions clearly distinct from one another is important for a number of reasons.
For example, some accounts of abductive reasoning (e.g., Lipton, 2004) take it
to involve, as one part of the process, the generation of explanatory hypotheses
(see §2.2).
4 See, for example, Sankey (2008, 251) and Okasha and Thébault (2020, 774). With that said,
as far as I know, none of the authors mentioned above advocate the simple version of the HD
model described below. Of the three, Hempel is perhaps the one who comes closest to doing
so in his textbook The Philosophy of Natural Science (Hempel, 1966, 196–199). However,
a discussion in a textbook can hardly be assumed to accurately reflect Hempel’s own
considered views on the topic. Indeed, Hempel (1945) proposes a much more nuanced theory of
confirmation that conflicts in important ways with the HD model (on this, see Crupi, 2021,
§2.1).
The simple version of the HD model runs as follows: one deduces empirical
consequences from the theory under consideration and tests them against
observation or experiment; if the consequences are borne out, the theory is
thereby supported or confirmed.
We are now in a position to see why the HD model has the word “deductive”
in it. It’s because, in order for the theory to be supported by the observations or
experimental results, the empirical consequences which serve as evidence for
the theory must be deducible from the theory. However, note that what is being
deduced is not the theory itself; rather, it is the empirical consequences of the
theory. And yet it is the theory that is being supported or confirmed, not (just)
its empirical consequences.
There is a caveat to the HD model as presented above that will prove to
be important as we contrast it below with prominent accounts of abductive
reasoning. It’s that on this presentation of the HD model, it is not a model of how
to end up with a theory that we can infer or accept, all things considered. Rather,
the HD model may only describe what it is for a theory T to gain some degree of
confirmation from a set of empirical data, which may only consist in making T
somewhat more credible than it would otherwise have been. After all, verifying
a single empirical consequence of some theory surely does not by itself show
that the theory is true, or even probably true. Some authors suggest that this
issue can be addressed by slightly modifying the HD model by requiring a
greater number of T’s empirical consequences to be verified, at which point T
may be inferred to be true.
Now, there are clearly some important similarities between Peirce’s
notion of Abduction, on the one hand, and the HD model of scientific confirma-
tion and inference, on the other. In particular, the structures of the two types of
accounts are remarkably similar: both require a kind of derivation of a manifest
fact from a hypothetical guess. The most important difference concerns the fact
that, as we have noted, Peirce appears to be concerned with the process of gen-
erating theories rather than with how theories should be evaluated. By contrast,
Hempel explicitly leaves this out of his HD model, on the grounds that there can
be no rational rules for generating new theories. In this respect, Hempel’s HD
model and Peircean Abduction are diametrically opposed ideas. This makes it
especially interesting, and frankly somewhat puzzling, that the structures of the
accounts are so similar, since one would not expect accounts of two quite dif-
ferent aspects of scientific methodology to end up being structurally so similar
to one another.
Indeed, the similarity in structure between Peircean Abduction and the HD
model points to a well-known problem for the latter that will be familiar from
our earlier discussion of the former. Recall that in a Peircean Abduction, for
each “surprising fact” C there will arguably be several incompatible proposi-
tions A1, . . . , An such that if each Ai were true, then C would be “a matter of
course.” A similar point applies to the HD model as applied to scientific confir-
mation: For each empirical consequence E, there will inevitably be several
theories T1, . . . , Tn from which E may be deduced. This point may be illus-
trated by returning to the example of Lavoisier’s oxygen theory of combustion,
which arguably implies the empirical fact that nothing burns in a vacuum. The
problem is that the competing phlogiston theory, at least as formulated above,
implies the very same empirical fact. Thus, the HD model must say that both
theories are confirmed; moreover, the model has no resources to say that one of
the two theories is confirmed to a greater extent than the other. The same goes
for any other theory from which this empirical fact can be deduced, however
implausible it might seem in other respects.
One might think that this is less of a problem for the HD model as applied
to scientific inference, given that it involves deducing not a single empirical
consequence E but a set of such consequences E1, . . . , Em , all of which have
been shown to be correct by empirical data. After all, the thought might go,
although it would be easy to come up with a theory that entails a single E,
it need not be so easy to come up with a theory that entails all of E1, . . . , Em
(provided that m is a sufficiently large number). Unfortunately, however, given
a single theory T that implies E1, . . . , Em , it is quite easy to use elementary logic
to come up with another theory that does so as well. For example, it’s a logical
fact that if T implies E1, . . . , Em , then so does the conjunction T&X, where X
can be any claim whatsoever. Indeed, any set of empirical claims E1, . . . , Em
is trivially implied by the conjunction of those claims and any other claim X,
that is, by E1 & . . . &Em &X. Here, X could be a claim that contradicts T, such
as the negation of T, ¬T. This leaves us with the absurd conclusion that the
HD model allows one to infer T and a claim that directly contradicts T, namely
E1 & . . . &Em &¬T.
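The logical point here can be checked mechanically. The sketch below is purely illustrative (the atomic propositions t, x, e and the toy theory are my own stand-ins, not from the text): it verifies by brute force that whenever T entails E, the gerrymandered conjunctions T & X and E & ¬T entail E as well.

```python
from itertools import product

# Toy propositional check: if a theory T entails the evidence E, then so
# does T & X for ANY extra claim X, and the gerrymandered "theory"
# E & not-T entails E too. Atoms t, x, e are arbitrary illustrations.

def entails(premise, conclusion):
    """True iff every valuation making the premise true makes the conclusion true."""
    return all(conclusion(t, x, e)
               for t, x, e in product([False, True], repeat=3)
               if premise(t, x, e))

T = lambda t, x, e: t and e                          # a theory that entails E
E = lambda t, x, e: e                                # the empirical consequence
T_and_X = lambda t, x, e: T(t, x, e) and x           # T conjoined with an irrelevant X
E_and_notT = lambda t, x, e: e and not T(t, x, e)    # the evidence plus the negation of T

print(entails(T, E))           # True
print(entails(T_and_X, E))     # True
print(entails(E_and_notT, E))  # True
```

The converse fails, of course: E does not entail T, which is exactly why confirmation cannot simply run backwards along the deduction.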
Something has clearly gone quite wrong in the HD model. The solution might
seem obvious. For surely the issue here is that the alternative “theories” that
imply our empirical data E1, . . . , Em are highly artificial, or just plain implau-
sible – so much so that no actual scientists would propose such theories with
a straight face. This is correct, but it’s not so much a solution to the problem
as it is the beginnings of a diagnosis of it. In order to solve the problem, we
need an account of scientific reasoning in which artificial or implausible “the-
ories” are not so easily confirmable or inferable by empirical data. If possible,
the account should also be able to explain why this is the case. Unfortunately
for the HD model, it fails to do either of these things. As we shall see below,
however, some accounts of abductive reasoning do significantly better on this
score. Hence abductive reasoning, or at least some accounts thereof, can be
viewed as improvements on the HD model in this respect.
Before we move on, it is worth noting another problem for the HD model of
scientific confirmation and inference. This problem concerns the “deductive”
part of the HD model, that is, the requirement that it must be possible to deduce
correct empirical claims from the theory that is being confirmed or inferred. In
short, the problem is that many scientific theories, especially those that concern
causal relationships between two or more variables, do not state that a given
event will definitely occur under specified circumstances; rather,
these theories often only state that the event has a particular chance of occurring
in those circumstances. Indeed, sometimes the probability of this chance event
is extremely low. Consider, for example, the geological theories that are used
to predict when and where earthquakes will occur, which might assign a 0.1%
probability to an earthquake occurring during a given week in a very high-
risk area. In these cases, there is no deductive relationship between theory and
empirical data, because for each piece of data (e.g., for each earthquake that
is observed to occur), it is perfectly possible – perhaps even probable – that
one would have obtained contrary data (e.g., an observation that no earthquake
occurred) even if the relevant theory is true.
Probabilistic theories of this sort are problematic for the HD model because
although we cannot deduce any empirical consequences from the theories, it
nevertheless seems clear that empirical results can confirm them. For example,
suppose that a newly proposed geological theory implies that the probability of
an earthquake in your city sometime next week is as high as 10%, whereas all
other available theories assign a less than 0.00001% probability to this event.
If the earthquake subsequently occurs, then surely the new theory can be con-
sidered confirmed to some extent, at least relative to its rivals. And if a similar
story were to repeat itself in other geographical areas and at other times, with
the new theory assigning a much higher probability to earthquakes that actually
occur, then at some point we may feel that the theory ought to be believed or
accepted as true. Unfortunately for the HD model, it cannot deliver these ver-
dicts, for none of the theories involved implies that the earthquake will occur,
only that it has some probability of occurring.
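The arithmetic behind this verdict can be made explicit with a probabilistic treatment of the sort discussed later in this Element. In the sketch below, the likelihoods echo the example above, while the priors are made-up assumptions of my own:

```python
# Bayes' theorem applied to the earthquake example. "new" is the newly
# proposed geological theory; "old" stands in for the rival theories.
# Priors are made-up illustrations; likelihoods follow the example.

prior = {"new": 0.5, "old": 0.5}
likelihood = {"new": 0.10,   # P(earthquake | new theory)
              "old": 1e-7}   # P(earthquake | rivals), i.e. below 0.00001%

# After the earthquake is observed:
evidence = sum(prior[t] * likelihood[t] for t in prior)
posterior = {t: prior[t] * likelihood[t] / evidence for t in prior}
print(round(posterior["new"], 4))  # 1.0 (to four places): credence shifts massively
```

Where the HD model is silent, because neither theory deductively implies the earthquake, the probabilistic calculation delivers the intuitive verdict directly.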
In sum, then, we have seen that the HD model faces at least two serious prob-
lems. The first concerns how to discriminate between the “serious” theories that
are confirmed by their empirical consequences and various “unserious” theo-
ries that are not, such as conjunctions of the empirical consequences themselves
and random other claims. The second concerns inherently probabilistic theo-
ries – theories from which empirical consequences cannot be deduced but are
rather assigned a particular probability. I have focused on these problems here
because, as we shall see, even early accounts of abductive reasoning arguably
have the resources to address both of these problems. Moreover, these accounts
of abductive reasoning often preserve some of the structure of the HD model,
and thus are plausibly in a position to account for the kernel of truth in the
HD model – which, after all, has seemed to many to provide a fairly accurate
description of scientists’ actual methodology.5
Harman (1965) describes this form of inference as follows:
In making this inference one infers, from the fact that a certain hypothesis
would explain the evidence, to the truth of that hypothesis. In general, there
will be several hypotheses which might explain the evidence, so one must be
able to reject all such alternative hypotheses before one is warranted in mak-
ing the inference. Thus one infers, from the premise that a given hypothesis
would provide a “better” explanation for the evidence than would any other
hypothesis, to the conclusion that the given hypothesis is true.
There is, of course, a problem about how one is to judge that one hypothesis
is sufficiently better than another hypothesis. Presumably such a judgment
will be based on considerations such as which hypothesis is simpler, which
is more plausible, which explains more, which is less ad hoc, and so forth.
Much of this description should remind us of the ideas about scientific rea-
soning we have encountered earlier in this section. In particular, we have seen
that Darwin, Lavoisier, and Peirce all emphasized the significance of “the fact
that a certain hypothesis would explain the evidence” for “the truth of that
5 For example, Lipton (2004, 15) writes that “the hypothetico-deductive model seems genuinely
to reflect scientific practice, which is perhaps why it has become the scientists’ philosophy of
science.”
6 As we shall see in the next section, however, only some contemporary accounts of abductive
reasoning can be said to be developments of Harman’s account; other contemporary accounts
depart so significantly from Harman’s ideas that they are more fruitfully viewed as competing
accounts.
7 Indeed, in this particular case, it is not clear that a theory like E1 & . . . &Em &X would provide
any explanation – let alone the best explanation – of E1 , . . . , Em . After all, E1 & . . . &Em &X
is a conjunction of E1 & . . . &Em , which cannot explain itself, and X, which may be completely
irrelevant to E1 & . . . &Em .
favor inferring the new theory over the alternatives, all else being equal. If a
similar story were to repeat itself in other geographical areas and at other times,
with the new theory assigning a much higher probability to earthquakes that
actually occur, then at some point this might tip the balance in favor of the
new geological theory providing the overall best explanation, and thus being
inferable by IBE.
In sum, then, Harman’s IBE arguably improves on the HD model in both
of the two respects in which the HD model falls short. At the same time, Har-
man’s description of this type of inference seems to capture the core insight
of the HD model, namely, that much of scientific reasoning involves coming
up with educated guesses, in the form of hypotheses or theories, which are
then subsequently tested against empirical data. Harman’s notion of IBE adds
to this (i) that the connection between the theories and the data is explanatory
rather than deductive, that is, such that the theory explains rather than entails the
data; and (ii) that the theories that are inferred or confirmed must provide better
explanations than other available theories that would also explain the data.
may classify the different accounts of abductive reasoning that can be found in
the contemporary literature into three distinct types:
A first type of account may be called inferential. These accounts
hold that abductive reasoning involves inferring hypotheses on the basis of
explanatory considerations, where an inference is a type of reasoning in which
one draws a categorical conclusion of some type from a set of premises. In
particular, these accounts construe abductive reasoning as a type of amplia-
tive inference, in which the content of the conclusion goes beyond the content
of the premises, and where the premises are constituted by one’s evidence at
the relevant time. In a typical inferential account of abductive reasoning, it
involves comparing a number of competing explanatory hypotheses in terms
of how good an explanation each would provide us with, and then accepting the
hypothesis that would provide the best one. As we shall see, inferential accounts
are attractive in part because they seem well suited to explaining the actual
scientific practice of comparatively evaluating and accepting explanatory
hypotheses. Inferential accounts are discussed in Section 2.2.
8 This definition of inference is in line with influential accounts of inference provided by Frege
(1979), Boghossian (2014), and Neta (2013).
9 In addition to those mentioned below, these include the accounts of Foster (1982), Musgrave
(1988), Lycan (1988, 2012), and Weintraub (2013).
10 See also Harman 1989, ch. 3, and Harman 1997.
11 In chapter 7 of the second edition of his book, Lipton (2004, 103–120) suggests that IBE can
be seen as a heuristic for Bayesian reasoning, which brings his account closer to what I am
calling hybrid accounts (discussed in §2.4). In what follows I nevertheless refer to the second
edition when discussing Lipton’s original inferentialist account, since the relevant discussion
is largely unchanged between the first and second editions.
competing explanatory hypotheses (the generation stage), and one then infers
the best hypothesis that has been generated in this way (the inference stage). At
both stages, explanatory considerations come into play, helping us first to come
up with plausible competing explanations at the generation stage, and then to
accept one of these competing explanations at the inference stage.
One way to think about this aspect of Lipton’s account is in terms of com-
parisons with Peirce’s notion of Abduction and Harman’s IBE. As we have
noted, Peirce’s Abduction was arguably focused on the generation of theo-
ries, so Lipton’s first stage of IBE may be roughly identified with Peirce’s
Abduction. By contrast, Harman was silent on how “alternative hypotheses” are
generated; indeed, Harman did not explicitly acknowledge that there was any
epistemic issue to be addressed regarding how such hypotheses would be gen-
erated. Instead, Harman’s discussion of IBE is exclusively concerned with the
process of inferring some hypothesis, by comparing it in terms of explanatory
considerations with other hypotheses, regardless of how these other hypothe-
ses came into consideration. To be fair, it should not be surprising that Harman,
writing in 1965, would have overlooked the issue of how explanatory hypothe-
ses are generated, since philosophers of science did not at that time generally
consider such topics to be within the purview of their field (Schickore, 2022,
§5).
Another distinctive feature of Lipton’s account concerns what makes an
explanatory hypothesis “best,” or “better” than an alternative. On Lipton’s
account, “the best explanation [is] the one which would, if correct, be the
most explanatory or provide the most understanding: the ‘loveliest’ explanation”
(2004, 59). Lipton (2004, 61) goes on to describe his version of IBE as
“Inference to the Loveliest Explanation,” where “loveliness” is determined by
how much understanding an explanatory hypothesis would provide if it were
true. Elliott (2021) points out that in this respect Lipton’s account departs from
the more standard idea that explanatory goodness is determined by a list of
explanatory virtues (how simple it is, how much it explains, etc.), as Harman
suggests and many other proponents of IBE have maintained. In Section 3, we
will examine which of these two conceptions of explanatory goodness is more
plausible or congenial to accounts of abductive reasoning.
An important issue on which different inferential accounts diverge is how to
conceive of the structure of the evaluation that takes place within abductive rea-
soning. As the term “Inference to the Best Explanation” indicates, the standard
view – once again inherited from Harman (1965) – is that this is a comparative
evaluation of one explanatory hypothesis as better than the set of alterna-
tive hypotheses that have been generated. Thus, on the standard conception
of explanatory goodness as determined by explanatory virtues, one compares
12 Indeed, it is not clear how to even make sense of absolute evaluations of explanatory virtues,
since there doesn’t seem to be a universal measure of such virtues that would apply to all or
even most theories (Dellsén, 2021, 162–163).
13 See also Sklar (1981), Stanford (2006), and Roush (2005), among others, for closely related
concerns about scientific reasoning based on the fact that scientists typically have only
generated a fraction of all possible theories in some domain.
14 Despite describing “Inference to the Best Explanation” as “only code for the real rule,” van
Fraassen goes on to refer to the “real rule” as “Inference to the Best Explanation” as well (see
also, e.g., Weisberg, 2009; Henderson, 2014; Pettigrew, 2021). By contrast, I use “Inference to
the Best Explanation” to refer more narrowly to the inferential account of abductive reasoning
developed by Harman and Lipton, among others.
among other things, that these (rational) agents’ opinions can be represented as
a probability function, Pr(·), from any proposition to a real value between 0 and
1 (inclusive). This is the sense in which, as authors such as van Fraassen put
it, a rational agent has “personal probabilities”: They have credences which,
if perfectly rational according to Bayesianism, count as probabilities by the
mathematical definition thereof. In later years, it has become common to refer
to this part of the Bayesian approach as Probabilism, since it demands of
rational agents that their opinions be probabilities.15
15 Note that Probabilism is a normative requirement. It says something about what combinations
of credences agents ought to have in order to be epistemically rational, rather than what
combinations of opinions they actually have. For example, if A entails B but not vice versa,
Probabilism implies that rational agents must assign a credence to A that is no higher than
their credence to B, since it follows from the probability axioms that the probability of A can
be no greater than that of B.
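The entailment constraint mentioned in the footnote can be illustrated with a minimal model of credences; the worlds and weights below are arbitrary assumptions of my own, not anything from the text:

```python
# Probabilism's entailment constraint in miniature: represent propositions
# as sets of "worlds" and a credence function as weights summing to 1.
# If A entails B (A's worlds are a subset of B's), then Pr(A) <= Pr(B).
# The worlds and weights are arbitrary illustrations.

worlds = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
A = {"w1"}            # A is true at w1 only
B = {"w1", "w2"}      # B is true wherever A is, and also at w2

def pr(proposition):
    return sum(worlds[w] for w in proposition)

print(A <= B)           # True: A entails B (subset of worlds)
print(pr(A) <= pr(B))   # True: so any probability function ranks A no higher than B
```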
A third and final claim made by Bayesianism concerns how perfectly rational
agents should change their credences (i.e., their personal probabilities) over
time, as they gain more information about the world. Put differently, it con-
cerns how they should “update” these personal probabilities in light of new
evidence. The canonical version of this claim has become known as Bayesian
Conditionalization. It says that rational agents should update the value of their
personal probability regarding some hypothesis H as they obtain some evidence
E (and no other evidence) by replacing it with the value of the probability they
previously assigned to H conditional on E, that is, the so-called conditional
probability of H given E, Pr(H|E). A bit more precisely:
Pr′(H) = Pr(H|E)
where Pr(·) and Pr′(·) are the agent’s probability functions before and after
obtaining E, respectively.16
16 Pr(·) and Pr′(·) are also referred to as the agent’s prior and posterior probabilities, respectively.
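Bayesian Conditionalization can be illustrated with a small numerical sketch. All of the probability values below are invented; the point is only the mechanics of replacing the unconditional Pr(H) with the previously held conditional probability Pr(H|E).

```python
# Illustrative numbers only: a prior over H and its negation, plus
# likelihoods for a piece of evidence E, chosen arbitrarily.
prior_H = 0.3
pr_E_given_H = 0.8        # Pr(E|H)
pr_E_given_notH = 0.2     # Pr(E|~H)

# Pr(E) by the law of total probability, then Pr(H|E) via Bayes.
pr_E = pr_E_given_H * prior_H + pr_E_given_notH * (1 - prior_H)
pr_H_given_E = pr_E_given_H * prior_H / pr_E

# Bayesian Conditionalization: on learning E (and nothing else),
# the new unconditional credence in H is the old Pr(H|E).
posterior_H = pr_H_given_E
```

Here Pr(E) = 0.38, so the agent’s credence in H rises from the prior 0.3 to Pr′(H) = Pr(H|E) ≈ 0.63.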
With all this Bayesian machinery in place, we are now finally in a position to consider probabilistic accounts of abductive reasoning. Let us start with
the approach suggested by van Fraassen (1989) following his critique of infer-
ential accounts (see §2.2 and §4.2). (In what follows, it is worth keeping in mind
that van Fraassen did not endorse the following proposal; indeed, he argued
that it was irredeemably flawed, along with other attempts to spell out cogent
accounts of abductive reasoning.) Van Fraassen’s core idea was that in order for
abductive reasoning to have a place within Bayesianism, the hypothesis that
best explains some evidence E must somehow be awarded greater personal
probability than Bayesian Conditionalization alone dictates. Thus, Bayesian
Conditionalization must effectively be modified so that a “bonus” is added to
the agent’s posterior probability for a hypothesis that provides the best expla-
nation of the relevant evidence. The simplest way to do this would be to require
that agents set Pr′(H) = Pr(H|E) + b, where b is the probability bonus awarded
to H for providing the best explanation of E. We might call this general idea
Abductive Conditionalization.17
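The simplest version of Abductive Conditionalization can be sketched numerically. The probability values, the bonus b, and the even split of the offsetting penalty among rivals below are all illustrative choices of mine; as footnote 17 notes, the penalty could in principle be distributed in many other ways.

```python
# Sketch of Abductive Conditionalization: add a bonus b to the
# Bayesian posterior of the best-explaining hypothesis.
# (All numbers, and the uniform penalty scheme, are invented.)
bayes_posteriors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}  # Pr(H_i|E)
best_explainer = "H2"
b = 0.1

abductive = dict(bayes_posteriors)
abductive[best_explainer] += b

# The bonus must be offset by penalties elsewhere, or the credences
# over these exhaustive rivals no longer sum to 1; here the penalty
# is simply split evenly among the non-best hypotheses.
rivals = [h for h in abductive if h != best_explainer]
for h in rivals:
    abductive[h] -= b / len(rivals)

assert abs(sum(abductive.values()) - 1.0) < 1e-9
```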
Is Abductive Conditionalization plausible? Van Fraassen argues that it is not.
In short, van Fraassen’s argument is that since Abductive Conditionalization
requires agents to update their personal probabilities in a way that conflicts
with Bayesian Conditionalization, any argument for Bayesian Conditionaliza-
tion is an argument against Abductive Conditionalization. We will consider this
argument in much more detail in Section 4. For now, suffice it to say that most
authors – with the notable exception of Igor Douven (2013, 2022) – have agreed
with van Fraassen that Abductive Conditionalization is untenable. However,
few of them have concluded from this that there is no place for abductive rea-
soning within the Bayesian approach. Instead, they have generally rejected van
Fraassen’s construal of abductive reasoning in terms of bonus probabilities, and
argued that the Bayesian approach can be combined with a form of abductive
reasoning in a way that allows one to hold on to Bayesian Conditionalization.
In this section, I will consider two specific accounts of this kind, which hold
that explanatory preferences constrain Bayesian reasoning, on the one hand,
or emerge from it naturally, on the other hand. (In the next section, we turn to
accounts on which abductive reasoning functions as a heuristic for Bayesian
reasoning, and thus allow one to hold on to Bayesian Conditionalization in a
different way.)
17 If H is awarded a bonus in this way, then at least some competing hypotheses must receive
a penalty so as to balance the total probability awarded to H and its mutually exclusive and
jointly exhaustive rivals, because the sum of these probabilities has to be 1. In principle this
can be done in any number of ways, but – as we shall see below (§4.2) – Douven (2022, 51)
provides a nicely conservative and mathematically satisfying way of doing this.
In light of this, constraining probabilists might argue that we can identify sepa-
rate constraints for the first and second terms on each side, that is, the so-called
“likelihoods” Pr(E|H1) and Pr(E|H2), on the one hand, and the so-called “priors” Pr(H1) and Pr(H2), on the other. For example, purely theoretical virtues
like simplicity might be taken to constrain the priors, Pr(H1 ) and Pr(H2 );
while other virtues that are more concerned with the relationship between the
hypotheses and the evidence, such as explanatory power, might be taken to
constrain the likelihoods Pr(E|H1 ) and Pr(E|H2 ).
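To make this concrete, here is one toy way a constraining probabilist might encode such constraints. The scoring scheme below — priors weighted by the inverse of the number of posited entities, likelihoods read off a stipulated explanatory-power score — is my own invention for illustration, not a proposal from the literature.

```python
# Invented attributes for two hypotheses; the encoding of virtues
# as numbers is a toy scheme of my own, purely for illustration.
hypotheses = {
    "H1": {"entities_posited": 2, "explanatory_power": 0.9},
    "H2": {"entities_posited": 5, "explanatory_power": 0.6},
}

# Parsimony constrains the priors: fewer posited entities yields a
# higher prior weight, then weights are normalized to sum to 1.
weights = {h: 1.0 / v["entities_posited"] for h, v in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

# Explanatory power constrains the likelihoods Pr(E|H) directly.
likelihoods = {h: v["explanatory_power"] for h, v in hypotheses.items()}

assert priors["H1"] > priors["H2"]        # parsimony favors H1's prior
assert likelihoods["H1"] > likelihoods["H2"]
```

Any such scheme is compatible with Bayesian Conditionalization, since it only fixes the probabilities one starts out with.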
Clearly, requiring that one already assigns higher prior probabilities to more
explanatory hypotheses in this way ensures that there is no conflict between
constraining probabilistic accounts and Bayesian Conditionalization. On the
contrary, this approach supplements Bayesianism by providing criteria for
which prior probabilities agents should start out with. Moreover, constrain-
ing probabilistic accounts are quite flexible, in that they can accommodate any
judgment one might like to make regarding whether hypotheses exhibiting any
particular explanatory virtue to some extent should be preferred to hypotheses
18 This follows by applying Bayes’s Theorem to both sides of the inequality and cancelling out
Pr(E), which otherwise occurs in the denominator on both sides.
that don’t (or do so to a lesser extent). After all, such preferences can simply
be formulated as additional constraints on the prior probabilities one should
have before updating via Bayesian Conditionalization. Indeed, the preferences
in question need not be explanatory in any meaningful sense, since they could
literally concern any feature of the hypotheses in question whatsoever.
However, this flexibility of constraining probabilistic accounts also points to
a significant weakness. The weakness is that these accounts seem particularly
ill-placed to provide us with any sort of justification for abductive reasoning
thus understood. In particular, we can ask the constraining probabilist where
these constraints on rational probability assignments are supposed to come
from: In virtue of what is it rational to assign a higher prior conditional prob-
ability to hypotheses that provide “better” explanations? Some constraining
probabilists have suggested that only by constraining probabilities in this way
can we avoid inductive skepticism, that is, the conclusion that past observa-
tions give us no justification for our predictions of future observations.19 But
it’s not clear that this gives us any reason to think that rationality requires such
constraints, as opposed to giving us reasons to think – wishfully – that it would
be nice if it did.
In this regard, another sort of probabilistic account seems to do better.
Following Henderson (2014), I will refer to these as emergent probabilistic
accounts. Emergent probabilistic accounts do not impose any explanation-
based constraints on the prior probabilities one starts out with before condi-
tionalizing on evidence. Rather, they hold that preferences for hypotheses that provide better explanations are automatically reflected in the likelihoods of the hypotheses in question.
19 See Weisberg (2009) and especially Huemer (2009); although see also Smithson (2017) for a
rebuttal of Huemer’s argument.
Emergent probabilistic accounts purport to explain why such a preference would arise under
natural assumptions about prior probability distributions. To be sure, there
remains for the emergent probabilist a closely related problem of explaining
why the “natural” assumptions about prior probability distributions should be
taken to be correct – or indeed why they deserve to be called “natural.”20 So
proponents of constraining probabilistic accounts may retort that the suppos-
edly problematic step of postulating certain constraints on prior probabilities
in order to accommodate preferences for more explanatory hypotheses has a
close analogue in the arguments given by emergent probabilists.
21 To be clear, hybrid accounts need not be thought of as alternatives to inferential and probabi-
listic accounts; rather, it may be more fruitful to view a given hybrid account as a combination
of some particular inferential or probabilistic account with some additional claims borrowed
from, or inspired by, the other type of account. In particular, the specific hybrid account dis-
cussed towards the end of this section may be viewed as a combination of a probabilistic
account of abductive reasoning and the claim that a form of IBE serves as a reliable heuristic
for approximating the type of reasoning prescribed by that probabilistic account.
22 Such accounts have been proposed and developed in a number of different ways in recent years,
for example, by Niiniluoto (1999), Okasha (2000), Lipton (2001), McGrew (2003), Cabrera
(2017), and Dellsén (2018). It has also been argued that abductive reasoning is more fundamen-
tal than Bayesian, for example, because the latter is merely an idealized model of the former
(McCain and Moretti, 2022).
conditional on H1 [i.e., Pr(E|H1 ) < Pr(E|H2 )], therefore the posterior proba-
bility of H2 is greater than that of H1 [i.e., Pr′(H1 ) < Pr′(H2 )]” (Okasha, 2000,
702–703).23
Note, though, that Okasha’s suggestion about IBE coinciding with probabi-
listic updating only works if we go along with his probabilistic assumptions,
that is, that Pr(H1 ) < Pr(H2 ) and Pr(E|H1 ) < Pr(E|H2 ). This is of course
precisely what probabilistic accounts suggest we should do in different ways.
Specifically, constraining probabilistic accounts hold that because H2 explains
better than H1 , rationality requires us to assign probabilities so as to favor H2
over H1 in some such way; while emergent probabilistic accounts hold that this
probabilistic favoring of H2 over H1 falls out of other, independently-motivated
constraints on prior probability assignments. By contrast, if one completely
rejects the idea that more explanatory hypotheses should be assigned higher
23 To see why the third inequality follows from the first two, recall from Section 2.3 that
Pr′ (H1 ) < Pr′ (H2 ) is, given Bayesian Conditionalization, equivalent to Pr(E|H1 ) Pr(H1 ) <
Pr(E|H2 ) Pr(H2 ).
prior probabilities, then Okasha’s suggestion will not be compelling, for then
the doctor’s IBE-based inference might coincide with it being rational to assign
lower probabilities to hypotheses that provide “better” explanations, for exam-
ple, in that Pr(H1 ) > Pr(H2 ) and/or Pr(H1 |E) > Pr(H2 |E) (Weisberg, 2009,
132–136). For this reason, heuristic accounts of abductive reasoning must argu-
ably assume that rational probability assignments favor hypotheses that provide
better explanations, as per probabilistic accounts.
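The equivalence stated in footnote 23 can be verified with arbitrary numbers: since both posteriors share the same denominator Pr(E), the comparison between them never depends on it.

```python
# Numerical check of footnote 23's equivalence (all values invented):
# given Bayesian Conditionalization, Pr'(H1) < Pr'(H2) just in case
# Pr(E|H1)Pr(H1) < Pr(E|H2)Pr(H2).
pr_H1, pr_H2 = 0.4, 0.3
pr_E_H1, pr_E_H2 = 0.2, 0.5   # Pr(E|H1), Pr(E|H2)
pr_E = 0.35                    # any positive value will do

post_H1 = pr_E_H1 * pr_H1 / pr_E   # Bayes's Theorem
post_H2 = pr_E_H2 * pr_H2 / pr_E

# The posterior comparison agrees with the numerator comparison,
# whatever value Pr(E) takes.
assert (post_H1 < post_H2) == (pr_E_H1 * pr_H1 < pr_E_H2 * pr_H2)
```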
Another issue for heuristic accounts concerns what sort of conclusion an
abductive inference would warrant while at the same time approximating prob-
abilistic reasoning. Initially one might have hoped that abductive inferences
would warrant something like a high probability assignment to (and thus, per-
haps, a full belief in)24 the inferred hypothesis. However, note that this is not
the type of conclusion that is drawn in Okasha’s example, where the doctor
merely concludes that “the posterior probability of H2 is greater than that of
H1 ,” that is, that Pr′(H1 ) < Pr′(H2 ). This is a comparative claim, regarding the
relative probabilities of H1 and H2 . As such it is compatible with H1 and H2
being assigned arbitrarily low probabilities in absolute terms, and so would not
necessarily warrant an assignment of high probability to (let alone full belief in)
either H1 or H2 . Indeed, it is hard to see how the doctor’s abductive reasoning
in Okasha’s example could provide her with a conclusion more definite than the
claim about the relative probabilities of H1 and H2 , since even the abductive
argument deals in comparative claims about which type of injury is more com-
mon in children, on the one hand, and which injury better fits the symptoms,
on the other.
One might think that this comparative structure to Okasha’s example is inci-
dental, and that a heuristic account could be developed in which abductive
inference serves as a heuristic for absolute, as opposed to merely compara-
tive, probability assignments. However, Dellsén (2018, 1753–1760) presents
a problem for heuristic accounts of this sort. As should be apparent from how
Bayes’s Theorem combines with Bayesian Conditionalization (see §2.3), the
absolute posterior probability of H1 , Pr′(H1 ), is determined not only by the
prior Pr(H1 ) and the likelihood Pr(E|H1 ), but also by Pr(E). This term – the
probability of the evidence itself, sometimes called the marginal likelihood –
is notoriously difficult to estimate with any reliability, because Bayesianism
dictates that it must be equal to a weighted sum of the priors and likelihoods
of all competing hypotheses in logical space, including not only H1 and H2 but
also any other hypotheses that are yet to be formulated – and perhaps never
24 The parenthetical remark assumes a “Lockean” account of the relationship between personal
probability and full belief (see, e.g., Foley, 1992).
will (see, e.g., Shimony, 1970; Salmon, 1990; Roush, 2005).25 Moreover, the
various explanatory considerations that are supposed to guide abductive infer-
ence do not seem to be of much help in estimating this term, since they refer
to either to features of the hypotheses themselves or their relationship with
the evidence, not to features of other hypotheses or their relationship with the
evidence.
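Dellsén’s worry can be made vivid with invented numbers, lumping all as-yet-unformulated rivals into a single catch-all hypothesis. Re-estimating the catch-all’s likelihood shifts the absolute posteriors of H1 and H2, but leaves their comparative ordering untouched.

```python
# Invented numbers; "catch_all" stands in for all rival hypotheses
# not yet formulated (cf. the sum in footnote 25).
priors      = {"H1": 0.3, "H2": 0.3, "catch_all": 0.4}
likelihoods = {"H1": 0.7, "H2": 0.9, "catch_all": 0.1}  # Pr(E|H)

# Marginal likelihood: Pr(E) = sum over rivals of Pr(H) * Pr(E|H).
pr_E = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / pr_E for h in priors}

# Re-estimating the hard-to-know catch-all term shifts the
# *absolute* posteriors of H1 and H2 ...
likelihoods["catch_all"] = 0.6
pr_E_revised = sum(priors[h] * likelihoods[h] for h in priors)
posteriors_revised = {h: priors[h] * likelihoods[h] / pr_E_revised
                      for h in priors}

# ... but the *comparative* verdict between H1 and H2 is unaffected.
assert posteriors["H1"] < posteriors["H2"]
assert posteriors_revised["H1"] < posteriors_revised["H2"]
```

This mirrors the point in the text: the comparison between H1 and H2 depends only on their own priors and likelihoods, not on Pr(E).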
For these sorts of reasons, Dellsén (2018) suggests that IBE cannot generally
serve as a reliable heuristic for absolute probability assignments. However, on
Dellsén’s view, this does not spell disaster for the heuristic account of abduc-
tive reasoning, since IBE can still serve as a reliable heuristic for probabilistic
comparisons of the sort we saw in Okasha’s example. Although such compar-
ative conclusions are sometimes less informative than we would like – since
they don’t tell us how confident we ought to be that a specific theory is true –
they can still provide a great deal of rational guidance for the practicing scien-
tist. For example, the comparative conclusion that H2 (torn ligament) is more
probable than H1 (pulled muscle) might prompt Okasha’s doctor to order diag-
nostic tests focused on the child’s ligament rather than the muscle. In this sense,
H2 becomes the doctor’s “working hypothesis.” So, in short, the comparative
conclusion that some particular hypotheses are more probable than their rivals
can help practicing scientists to focus their subsequent investigations on the
most probable of such hypotheses.
It’s worth noting that Dellsén’s argument that IBE can only serve as a heu-
ristic for probabilistic comparisons is targeted at standard formulations of IBE
of the sort advocated by Harman (1965) and Lipton (2004). Thus an alternative way to avoid the argument is to modify one’s account of IBE, or abductive
inference more generally, so that it better fits with what is required of rational
agents who wish to make absolute probability assignments. A plausible thought
is that this might be done by requiring not only that the inferred hypothesis pro-
vides a better explanation than its extant rivals, but also that the explanation
be good enough in some sense (Musgrave, 1988; Lipton, 2004). Indeed, Dell-
sén (2021) makes a concrete suggestion along these lines, arguing that a more
demanding form of IBE – which requires an agent to go through a temporally
extended process of gathering more evidence and attempting to formulate supe-
rior explanations – is capable of delivering absolute verdicts. We will examine
this suggestion, along with other similar suggestions, in Section 4. For now,
however, let us simply note that such suggestions involve modifying IBE quite
substantially from what it is normally taken to involve.
25 That is, Pr(E) = ∑_{k=1}^{n} Pr(Hk) Pr(E|Hk), where H1, …, Hn are mutually exclusive and jointly exhaustive hypotheses.
26 See Beebe (2009) for an unusually comprehensive list of various proposed explanatory virtues.
Scope: How many different phenomena (or types thereof) would the hypoth-
esis explain?
Parsimony: How few new entities (or types thereof) are posited by the
hypothesis or its explanations?
Unification: To what extent does the hypothesis unify otherwise disparate
phenomena (or types thereof)?
Plausibility: How well does the hypothesis fit what one already takes oneself
to know?
Analogy: How similar are the hypothesis’s explanations to other established
explanations?
Numerous other putative explanatory virtues have been proposed as well, such
as: the simplicity or elegance with which the hypothesis is formulated; its fer-
tility or fruitfulness for further research; the testability or falsifiability of the
hypothesis; and the extent to which it doesn’t contain ad hoc elements. More-
over, various authors use different terms for the virtues described above.27 It
should be said that many of the features that have been described as explana-
tory virtues in the context of abductive reasoning have long been thought of as
good-making features of scientific theories even by authors who might not con-
sider themselves proponents of abductive reasoning (e.g., Kuhn, 1977; Quine
and Ullian, 1978; Laudan, 1984).28
There is considerable disagreement regarding which of these features should
be taken to be operative in abductive reasoning. Consider, for example, the
supposed explanatory virtue of testability, which refers to the extent to which
a theory has readily testable consequences (see, e.g., Lycan, 1988, 138; Beebe,
2009, 611). Although it would of course be good, in a general sense of the
term, to have theories that are more testable, it does not seem plausible that
the testability of a theory makes it more likely to be true. Rather, testability is
a good-making feature only in that it will be easier to find out whether more
testable theories are true. Similarly, whether or not a theory is formulated in a
conceptually simple or elegant way is arguably not something that indicates that
the theory is more likely to be true;29 rather, it merely suggests that the theory
27 For example, it is quite common to use “simplicity” for what I call “parsimony.” This can
be misleading because, as noted above, there is another kind of simplicity which concerns
the way in which a theory is formulated. “Parsimony,” by contrast, refers to the ontological
commitments made by the theory rather than any aspect of how it is formulated.
28 Indeed, Elliott (2021) suggests that there is nothing specifically explanatory about at least
some of these features, and that they should therefore be described as “theoretical” rather than
“explanatory” virtues.
29 For a quick argument to that effect, note that a simply or elegantly stated theory will be
materially equivalent to any number of more cumbersomely stated theories. Since materially
is easier to work with, for example because it’s easier to derive predictions and
explanations from it.
To analyze this situation, it can be helpful to bring in a distinction between
epistemic and pragmatic virtues. Epistemic virtues are features of a theory that
provide some indication, however fallible, that the virtuous theory is more
likely to be true. Pragmatic virtues, by contrast, are features that come with
some pragmatic or practical benefit, such as making it more convenient to use
the theory for various purposes. In principle, a given explanatory virtue could
be both epistemic and pragmatic, but oftentimes calling something a pragmatic
virtue tacitly carries the implication that it is not also an epistemic virtue. For
instance, in light of the previous paragraph, it seems plausible that testabil-
ity and simplicity/elegance are (merely) pragmatic virtues. By contrast, most
proponents of abductive reasoning argue that at least some of the other virtues
listed above, for example, scope and parsimony, are epistemic virtues, although
they don’t always agree on which ones enjoy this special status. (More on these
virtues in §§3.3–3.4 below.)
Regardless of which explanatory virtues are operative in abductive reason-
ing, and whether these are epistemic or (merely) pragmatic, one might wonder
how exactly these virtues are supposed to determine the overall explanatory
goodness of a hypothesis. This issue has been addressed in some detail regard-
ing theory choice in general, where Okasha (2011) in effect argues that certain
seemingly-plausible constraints on determining which theory does best over-
all with respect to some number of explanatory virtues cannot all be satisfied.
In particular, Okasha suggests that – contrary to what one might have thought
beforehand – determining which theory does best overall requires one to esti-
mate not only which theories do better than other theories with regard to
particular virtues, but also how much better these theories are doing with regard
to these virtues. If Okasha is right, then a scientist who wants to determine
which theory provides the overall best explanation must find some way of
measuring how much more or less scope, parsimony, and so forth, each theory
has in comparison to its rivals.30
Thus far we have considered the virtue-theoretic conception of explanatory
goodness in various guises. The virtue-theoretic conception is assumed in most
discussions of abductive reasoning, but Lipton (2004), interestingly, appears to
equivalent theories are, necessarily, equally likely to be true, it follows that a theory’s sim-
plicity (in this sense of the term – see footnote 27) cannot be positively correlated with its
probability.
30 For a critical discussion of Okasha’s argument, see Morreau (2015) and Stegenga (2015);
although see also Okasha (2015) for replies. In a somewhat different context, Priest (2016)
provides a nicely precise way of aggregating theoretical virtues that would be congenial to
Okasha’s suggestion.
ses that could be used to explain why we can observe light from stars that are
extremely far away. The first is the commonly accepted hypothesis that “the
universe is around 13.8 billion years old and the speed of light is constant in a
vacuum.” This explains why we see such distant stars by assuming that light
has had a very long time to travel, at a constant speed, from the stars to us.
A second hypothesis, inspired by creationism, is that “the universe is 6,000-
10,000 years old and the speed of light has been slowing since the creation
of the universe.” This hypothesis purports to explain why we can see distant
stars by assuming that light was travelling at a much greater speed, allowing
it to cover all that distance between us and them in only 6,000–10,000 years.
Now, the first of these hypotheses seems to exhibit the theoretical virtues to a
greater degree than the second, for example in that it has greater plausibility in
light of background knowledge, and is less ad hoc in so far as it posits a con-
stant where the second posits a variable that must be fine-tuned to explain the
data. However, according to Elliott, neither hypothesis would, if true, provide
more understanding than the other. After all, each one would, if true, make our
mind, however, that one’s view of why explanatorily better theories should
be preferred will clearly depend, at least to some extent, on what explanatory
goodness is. In particular, certain views about why explanatorily better theories
should be preferred work best when the list of explanatory virtues is restricted
to those that are plausibly viewed as epistemic, as opposed to merely pragmatic,
virtues.)
The simplest and perhaps most popular answer is that we should prefer
hypotheses that explain better because they are more likely to be true – or,
perhaps, more likely to be approximately true.31 Given the virtue-theoretic
conception of explanatory goodness, this amounts to the view that the explan-
atory virtues that are operative in abductive reasoning are truth-conducive: if
a hypothesis possesses these virtues to a greater extent than alternatives, that
31 I will drop this qualification in what follows to simplify discussion, but see Psillos (1999) and
especially Niiniluoto (2018) on the issue of approximate truth and its relation to abductive
reasoning.
hypothesis is thereby more likely to be true, other things being equal. Since
the idea that established scientific theories are likely to be true is often asso-
ciated with scientific realism (Chakravartty, 2017, §1.1), let us call this view
realism about explanatory goodness. As we shall see, there are two importantly
different versions of this view. These hold, respectively, that there are empiri-
cal reasons for thinking that better explaining hypotheses are more likely to be
true – call this a posteriori realism – and that this can be demonstrated without
appealing to empirical considerations – call this a priori realism.
One can of course reject both types of realism about explanatory goodness.
Antirealism about explanatory goodness holds that better explaining hypothe-
ses are not, as such, likelier to be true than those that explain worse. Given the
virtue-theoretic conception of explanatory goodness, this amounts to the view
that the explanatory virtues that are operative in abductive reasoning are not
truth-conducive. Now, antirealists generally think that at least some explana-
tory virtues are pragmatic virtues, so there is a sense in which they endorse
abductive reasoning, albeit only for pragmatic purposes (van Fraassen, 1980,
87–90). Let us call this pragmatic antirealism. With that said, antirealists could
also reject entirely the idea that explanatory goodness, or explanatory virtues,
have any role to play in science. In that case, there would arguably be little left
to endorse in the idea of abductive reasoning, so antirealists of this ilk would
really be suggesting to do away with abductive reasoning entirely. Let us call
this eliminative antirealism.
Arguments for and against various realist and antirealist views of explanatory goodness have tended to focus on the truth-conduciveness of some
That rules springing from remote and unconnected quarters should thus leap
to the same point, can only arise from that being the point where the truth
resides. [ … ] Accordingly the cases in which inductions from classes of
facts altogether different have thus jumped together, belong only to the best
established theories which the history of science contains (Whewell, 1858,
88).
Whewell’s point here is not just that a theory is more likely to be true if it fits
a greater amount of evidence, but that the theory is especially likely to be true
if the various pieces of evidence that support it are “altogether different”.
For a concrete example of how this explanatory virtue gets used in scien-
tific reasoning, consider Darwin’s argumentative strategy in On the Origin of
Species (1962). Darwin went to great lengths to present various different types
of facts in support of his theory. That is, Darwin did not simply appeal to a
single type of evidence over and over, even though that would certainly have
been much easier. Instead, Darwin appealed to, among other things: (i) differ-
ences in flora and fauna across regions separated by geographical barriers, such
as oceans; (ii) similarities in certain parts of the anatomies of entirely distinct
species, such as the human hand and the wings of bats; and (iii) fossil records
of some now-extinct ancestors to current species. Indeed, when summarizing
his case for natural selection, Darwin explicitly notes that his theory explains
“several large classes of facts” (Darwin, 1962, 476), suggesting that his argu-
ment was deliberately driven by the desire to demonstrate his theory’s superior
scope.
How might a realist about explanatory goodness argue that scope is an indi-
cator of truth? One possibility is to take an empirical, or a posteriori, approach.
In particular, one might suggest that the history of science is full of examples
in which a theory that explains a greater number of different phenomena has
turned out to be true more often than rival theories that explain less. This seems
to have been Whewell’s approach, who examined a number of the “best estab-
lished” physical theories of the day and concluded that a great many exhibited
“consilience,” that is, scope, to an impressive extent.32 One problem with this
approach, however, is that in order to evaluate whether consilience is an indicator of
truth one would have to assume that the supposedly “best established” theories
at a given time are also true (or at least approximately true). Antirealists about
32 For some more recent implementations of this type of strategy of arguing for the truth-conduciveness of explanatory goodness generally, see Boyd (1980), McMullin (1987), Salmon
(1990), and Psillos (1999).
hypothesis, H2 , explains both E1 and E2 (and all else is equal). What needs to be
shown is that after obtaining E1 and E2 , the posterior probability of H2 would
be higher than that of H1 . Given Bayesian Conditionalization, this should occur
just in case one already assigns a higher probability to H2 conditional on E1 &E2
than to H1 conditional on E1 &E2 , that is, Pr(H1 |E1 &E2 ) < Pr(H2 |E1 &E2 ).
We already encountered this type of inequality in Section 2.3, except that in
place of a single “E” we now have “E1 &E2 .” Thus, for analogous reasons, this
inequality holds just in case:

Pr(E1&E2|H1) Pr(H1) < Pr(E1&E2|H2) Pr(H2)
In order to hold all else equal, we may suppose that H1 and H2 do not differ in their priors, so that Pr(H1) = Pr(H2). In that case the inequality reduces to:

Pr(E1&E2|H1) < Pr(E1&E2|H2)
on E1 &E2 , that is, that the above inequality holds. However, to see more
clearly why this would be, let us rewrite each side of the inequality using
the probabilistic conjunction rule (which is another theorem of the probability
axioms):
Pr(E1 |H1 ) Pr(E2 |E1 &H1 ) < Pr(E1 |H2 ) Pr(E2 |E1 &H2 )
Since H1 and H2 both explain E1, and all else is assumed to be equal, let us also assume that Pr(E1|H1) = Pr(E1|H2). In that case, the above inequality simplifies to:

Pr(E2|H1&E1) < Pr(E2|H2&E1)
Pr(E2 |H2 &E1 ) would both be very high, and thus plausibly close to equal.33 By
contrast, if and in so far as E1 and E2 are different types of evidence, it seems
that Pr(E2 |H1 &E1 ) would be much lower than Pr(E2 |H2 &E1 ) in virtue of H2
being the only one of the two hypotheses that explains E2 .
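The derivation above can be instantiated with invented numbers in which the priors and the likelihoods for E1 are equal, while only H2 renders E2 probable.

```python
# Numerical instance of the derivation above (all values invented):
# equal priors, equal likelihoods for E1, but only H2 explains E2.
pr_H1 = pr_H2 = 0.2                 # Pr(H1) = Pr(H2)
pr_E1_H1 = pr_E1_H2 = 0.9           # Pr(E1|H1) = Pr(E1|H2)
pr_E2_given_H1_E1 = 0.1             # H1 does not explain E2
pr_E2_given_H2_E1 = 0.9             # H2 does explain E2

# Conjunction rule: Pr(E1&E2|H) = Pr(E1|H) * Pr(E2|E1&H)
lhs = pr_E1_H1 * pr_E2_given_H1_E1  # Pr(E1&E2|H1)
rhs = pr_E1_H2 * pr_E2_given_H2_E1  # Pr(E1&E2|H2)

# With equal priors, the posterior ordering follows the likelihoods:
assert lhs < rhs
```

Here Pr(E1&E2|H1) = 0.09 against Pr(E1&E2|H2) = 0.81, so conditionalizing on E1&E2 would leave H2 far more probable than H1.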
To illustrate this point, let us return to Darwin’s evidence for natural
selection. Consider that Darwin appeals to both biological differences across
geographically separated regions (Eg ), on the one hand, and anatomical similar-
ities across different biological species (Ea ). And let’s contrast Darwin’s theory
of natural selection (Hn ), which explains both Eg and Ea , with a theory that
explains only Eg but not Ea, such as the theory that God created different species
specifically to inhabit different geographical regions (Hc). Now, because Eg and Ea
are different types of evidence, Pr(Ea|Hc&Eg) would plausibly be much lower than
Pr(Ea|Hn&Eg), and so Hn would receive the higher posterior probability, all else
being equal.
33 For example, suppose E1 is the evidence that a thousand ravens observed within some geo-
graphical region have all turned out to be black, and E2 is the evidence that yet another
raven also turned out to be black. In that case, it seems plausible that Pr(E2 |H1 &E1 ) and
Pr(E2 |H2 &E1 ) would both be close to 1, and thus close to equal, regardless of the content of
H1 and H2 .
Let us take stock. One of the most celebrated of the explanatory virtues is
scope, the extent to which a given hypothesis explains many different phenom-
ena, or types thereof. Realists regarding scope hold that it is an epistemic virtue,
that is, that hypotheses with greater scope are more likely to be true; whereas
antirealists hold that scope is, at most, a merely pragmatic consideration. Some
realists have argued their case by pointing to a supposed empirical correlation
between greater scope and (approximate) truth, but these arguments are based
on assumptions that antirealists have rejected as empirically false or question-
begging. A more promising argument for realism regarding scope appeals to
probability theory. If successful, this argument shows that hypotheses with
greater scope will – all other things being equal, and under certain plausible
conditions – have a greater probability of being true.
Let us now turn from scope to parsimony. Parsimony, in the relevant sense, concerns
not the number of claims or principles that constitute the theory itself, but the
number of entities, or types thereof, to which the theory is ontologically com-
mitted. In short, parsimony concerns the simplicity of the world according to
the theory, whereas syntactic simplicity or elegance concerns the simplicity of
the theory in and of itself.35
However, actual cases of theory choice are generally not such that some of the entities (or types thereof) posited
by the less parsimonious theory are explanatorily superfluous; rather, from the
point of view of the less parsimonious theory, these entities (or types thereof)
are indeed necessary to explain some of the evidence.
Consider, for example, the difference between Ptolemy’s geocentric model
of the solar system, on the one hand, and Copernicus’s and Kepler’s helio-
centric models, on the other. Famously, the heliocentric models of Copernicus
and especially Kepler involved postulating fewer epicycles than the geocentric
model; indeed, these epicycles could eventually be eliminated entirely from
the heliocentric model. This is often taken to be a paradigmatic case of parsi-
mony influencing theory choice.37 However, note that the argument we are
36 More generally, it is a theorem of the probability axioms that if H1 implies H2 but not vice
versa, then Pr(H1 ) < Pr(H2 ).
37 See, for example, Galileo (1632, 397) and Sober (2015, 12–22). With that said, one could also
argue that the preference for the heliocentric model was due primarily to its superior syntactic
simplicity, which seems to be Kuhn’s (1977, 324) view, for instance.
currently considering cannot be used to explain the preference for the geo-
centric model in this case, since the geocentric model is clearly not identical
to the heliocentric model except in that it posits some additional entities. For
instance, the heliocentric model posits that all the planets revolve around the
Sun; whereas the geocentric model posits that the Sun and the other planets
revolve around the Earth. Relatedly, the geocentric model does not posit any
entities that are explanatorily superfluous from its own point of view: Given the
geocentric model’s core commitment to placing the Earth at the center of the
solar system, the various epicycles posited by that theory are all needed in order
to explain astronomical observations such as the apparent retrograde motion of
the other known planets (otherwise they would not have been posited in the first
place!).
So if there is a general argument for the truth-conduciveness of parsimony, it
cannot simply appeal to the supposed superfluousness of the entities (or types
thereof) posited by less parsimonious theories. How then might a realist about
explanatory goodness argue for their position regarding parsimony? As before,
there are two main options here. The first is to argue on a priori grounds that
more parsimonious theories are more likely to be true. In this vein, Swinburne
(1997, 1) claims that the truth-conduciveness of parsimony is “an ultimate [i.e.,
fundamental] a priori principle” that cannot itself be justified by anything else
(see also Biggs and Wilson, 2017). However, this claim is difficult to accept
since, as van Fraassen (1989, 147–148) points out, to claim that more parsi-
monious theories are more likely to be true seems to involve an assumption
that the world is more likely to contain fewer entities (or types thereof). That,
however, is not an assumption that could plausibly be justified a priori.
Explanatory considerations may still be playing an important role in abductive reasoning in such cases; it's just that what
is normally taken to be an explanatory “vice” now effectively functions as a
virtue. Indeed, we can generalize this thought by noting that any theory can
presumably be located on a scale from maximally to minimally parsimonious.
In different circumstances, exactly where a theory is located on this scale could
be taken to indicate how likely it is to be true. For example, one might think
that, in certain domains, the correct explanations are most likely to exhibit a
degree of parsimony that is, say, high but not maximal. In that case, what func-
tions as an explanatory virtue could be not parsimony simpliciter, but rather
how close to high-but-not-maximal-parsimony the relevant theories are.
In my view, this points the way towards a plausible view not just of parsi-
mony, but of explanatory goodness in general, which in some ways transcends
the realism/antirealism divide. We might call it contextualism about explana-
tory goodness. On this view, which features of a theory are truth-conducive
depends on the “context,” that is, on the phenomenon to be explained and our
background beliefs about what sorts of explanations it most likely calls for.
Realists regarding parsimony hold that it is an epistemic virtue, that is, that
more parsimonious hypotheses are more likely to be true; whereas antireal-
ists hold that parsimony is, at most, a merely pragmatic consideration. Some
realists have argued that it is somehow a fundamental principle of rationality
that more parsimonious theories are likelier to be true. However, this seems to
conflict with the obvious fact that the universe could have contained a greater
rather than lesser number of entities (or types thereof). Other realists have
argued that scientists in fact prefer more parsimonious theories, and that the
general success of accepted scientific theories indicates that this practice is in
fact truth-conducive. However, an important counterpoint is that the prefer-
ence for parsimonious theories appears not to be universal; rather, scientists
sometimes prefer complex theories over more parsimonious ones. This suggests
that the correct view of whether explanatory virtues track the truth is a
“contextualist” one, according to which it depends on the context whether, and
to what extent, a more parsimonious theory is more likely to be true than a more
complex one.
[Inference to the Best Explanation] is a rule that only selects the best among
the historically given hypotheses. We can watch no contest of the theories we
have so painfully struggled to formulate, with those no one has proposed. So
our selection may well be the best of a bad lot (van Fraassen, 1989, 142–143).
The basic problem pointed out by van Fraassen here is that we may have no
reason to think that any of the available explanatory hypotheses are true. The
hypotheses that have been generated so far may all be false, in which case the
correct explanation would be provided by a hypothesis outside of the set of
available hypotheses. In that case, IBE would inevitably lead us to accept a false hypothesis.
38 The bad lot objection is also sometimes called the argument from underconsideration (Lipton,
1993; Wray, 2008; Khalifa, 2010).
During the nineteenth century, for example, Charles Darwin, Francis Galton,
and August Weismann each successively formulated and defended different false
theories of the mechanism of biological heredity. Arguably, none of these theo-
ries provides as good an explanation of biological heredity as the chromosome
theory of Boveri and Sutton, but the latter was not formulated until the early
twentieth century. So Darwin, Galton, and Weismann were evidently working
with a “bad lot” of explanatory hypotheses.39
This shows that the bad lot objection cannot be set aside as a mere con-
ceptual possibility with no relevance for actual scientific practice. However,
exactly what sort of problem the bad lot objection creates for abductive rea-
soning depends on what sort of account of abductive reasoning one endorses.
In Section 2, we distinguished between inferential, probabilistic, and hybrid
accounts of abductive reasoning. As noted then, van Fraassen originally con-
ceived of the bad lot objection as targeting a specific sort of inferential account,
namely IBE à la Harman (1965). Accordingly, most of the following discus-
sion in this section focuses on how inferential accounts, such as IBE, might
circumvent the bad lot objection. However, it is worth noting that the bad
lot objection is also a problem for hybrid accounts in so far as these involve
choosing an explanatory hypothesis among some set of available competing
hypotheses – all of which may be false – even if this choice is ultimately meant
to approximate a probabilistic evaluation of that hypothesis.
Indeed, at least some probabilistic accounts of abductive reasoning face a
version of the bad lot objection that is no less difficult to handle than the original
objection. In particular, consider what I called Abductive Conditionalization –
39 These examples of theories of biological heredity are discussed in great detail by Stanford
(2006, chs. 3–5). It should be noted that Stanford is not focusing on IBE specifically, and that
Stanford refers to the problem he is concerned with as the Problem of Unconceived Alternatives
(for discussion, see, e.g., Magnus, 2010; Ruhmkorff, 2011; Egg, 2016; Dellsén, 2017d).
40 A somewhat similar problem faces what I have called constraining probabilistic accounts.
Recall that these accounts hold that hypotheses which provide better explanations should be
assigned higher probabilities before one updates on the evidence. So, in particular, if H2 is
explanatorily better than H1, one should assign a higher value to Pr(H2|E) than to Pr(H1|E).
Now, if the best explanation of E is not yet formulated – call it Hx – then as a practical matter
one obviously cannot assign any probability to that hypothesis given the evidence, Pr(Hx|E).
Consequently, it becomes unclear what probability one should assign even to H1 or H2 given E,
because if some unformulated hypothesis Hx is to receive a greater share of the probability that
is to be distributed among various competing hypotheses, then some other of these hypotheses –
including, perhaps, H1 and H2 – must receive a lesser share of that probability (see §4.2). So
if one doesn't know whether there is some unformulated Hx that in fact provides the best
explanation of one's evidence, then one doesn't know how to distribute probability even among
the hypotheses that have been formulated.
In order to be reliable at the comparative evaluation in step (ii), one must base that evaluation on a large set
of true background theories, and these true background theories would them-
selves have to have been generated in a step (i) of an earlier application of IBE.
So, says Lipton, one cannot consistently say that scientists are generally reli-
able at comparing explanatory hypotheses in step (ii) but not also reliable at
generating true hypotheses in step (i).
If the background theories accepted by some scientists are false, then the relevant scientists may well be misled
into ranking a false theory ahead of a true one. But this is no different from a
logician who competently derives a false conclusion from false premises via
a deductively valid argument. In both cases, there is a perfectly good sense in
which the inference itself can be said to be reliable relative to the premises or
background theories with which they started.
In light of this point, one can – contra Lipton – grant scientists consider-
able inductive powers regarding their reliability in comparatively evaluating
explanatory hypotheses in stage (ii), and still maintain that we have little reason
to think they are reliable at generating true explanatory hypotheses in stage (i).
On this view, the relevant type of reliability is relative to the background theo-
ries on which they largely base their evaluations: if these theories are generally
41 See Dellsén (2017c, 35–36) for a more detailed consideration of two possible senses in which
scientists’ comparative evaluations may be said to be reliable, and how this affects Lipton’s
argument.
true, then scientists will reliably rank true theories above false ones; if their
background theories are generally false, they may well fail to do so in many or
most cases. To illustrate with a concrete case, consider that in the early
twentieth century, Alfred Wegener's theory of continental drift was rejected in
favor of a theory on which the continents had fixed locations, partly because
the geophysical theories that were accepted at the time strongly indicated that it
would be impossible for the continents to move around as rapidly as Wegener’s
theory predicted. There is a perfectly good sense in which a reliable ranking of
the two theories places the fixed-continents theory above Wegener’s theory rel-
ative to such background theories. In that same sense, reliable rankers of the
two theories would then reverse the ranking in the late 1950s, when the relevant
geophysical theories had been overturned so as to allow for much more rapid
movements of the tectonic plates which had then been discovered to undergird
continental drift (Bowler and Morus, 2005, 237–252).
Although other reactionary responses to the bad lot objection have been
developed and defended,42 let us now move on to consider revisionary
responses instead. The most concessive sort of revisionary response would hold
that since we have formulated only a limited range of explanatory hypothe-
ses, the conclusion of an abductive inference can at most be that one of these
hypotheses is epistemically superior to the other hypotheses that have been
formulated so far. Put differently, abductive reasoning would not really war-
rant inferring that any explanatory hypothesis is true (or even probably and/or
approximately true); only that a hypothesis is superior to the other hypotheses
that have been formulated at a given time. For example, Kuipers (2000) develops an account of abductive reasoning along these lines.
42 See Lipton (1993, 94–96), Schupbach (2014), and Shaffer (2021). For rejoinders to some of
these responses, see Wray (2008), Khalifa (2010), and Dellsén (2017c).
43 For similar reasons, various authors have argued that the attitude we should take towards an
explanatory hypothesis we end up with should not be one of belief at all – not even belief that
the hypothesis is probably and/or approximately true. Rather, according to these authors, we
should end up tentatively accepting or pursuing the hypothesis in our future research (Kapitan,
1992; Dawes, 2013; Nyrup, 2015; Cabrera, 2017; although see also Henderson, 2022).
In situations where none of the available explanatory hypotheses are true, even
the best available explanatory hypotheses might be thought to be insufficiently
good to be inferred by this criterion.
One issue with this response is that it is far from clear what the above authors
mean when they say that an explanation must be “good enough.” The most
natural interpretation of the phrase is that the inferred hypothesis must exceed
some designated threshold of explanatory goodness, where “explanatory good-
ness” is understood in absolute rather than merely comparative terms. On this
view, each hypothesis is associated with some level of explanatory goodness
relative to the evidence at a given time, and whether the hypothesis counts as
providing a “good enough” explanation simply depends on whether that level
exceeds a threshold. However, Dellsén (2021, 161–164) argues that adding
such a clause to IBE leads to various new problems, and is anyway ill-suited to
address the original bad lot objection. Consider, for example, the many cases in
which scientists have accepted some explanatory hypothesis at an earlier time,
only to later reject it in favor of a newly-formulated alternative that provides an
even better explanation of the relevant evidence (Sklar, 1981; Stanford, 2006).
In such cases, the scientists must have assumed that even the earlier hypothe-
sis exceeded the threshold for explanatory goodness – otherwise, they would
hardly have accepted it – and yet the relevant hypothesis turned out to be
false by our current lights. So, in these cases, having a hypothesis that exceeds
the threshold for explanatory goodness evidently did not prevent scientists from
inferring from a “bad lot” of explanatory hypotheses.
In view of such problems, Dellsén (2021, 164–172) develops a quite differ-
ent account of when, and why, the best available explanatory hypothesis can be
considered “good enough” to be inferred. In short, the best available explana-
tory hypothesis can be inferred when it has been through a temporally extended
process which Dellsén calls explanatory consolidation. This process consists
in the accumulation of two quite different types of information which gradu-
ally make it more plausible that the hypothesis one tentatively accepts indeed
provides a better explanation of one’s evidence than any other hypothesis that
could be formulated. Specifically, as empirical evidence for the hypothesis
accumulates, it gradually increases the plausibility that no alternative to it could
explain all that evidence in an equally satisfactory manner. In addition, repeated
unsuccessful attempts to formulate alternative hypotheses that provide better
explanations also increase the plausibility that the tentatively accepted hypoth-
esis cannot be matched in that regard (Dawid et al., 2015; Dellsén, 2017d). If
all goes well, then eventually we will have accumulated enough information
of these two types so as to make it exceedingly plausible that the hypothesis in
question is “good enough” to be inferred.
According to van Fraassen, this indicates that the agent is guilty of a kind of irrationality
over time, often referred to as diachronic incoherence. Since Abductive
Conditionalization clearly conflicts with Bayesian Conditionalization by virtue
of adding a bonus probability to some hypotheses (and having others incur a
corresponding penalty), Abductive Conditionalization would necessarily be an
irrational way to update one’s credences in light of new evidence, according to
van Fraassen.45
As noted in Section 2, many proponents of probabilistic accounts of abduc-
tive reasoning agree with van Fraassen that it would be a bad idea to assign
bonus probabilities to the best explaining hypotheses in the way suggested
44 A “fair” bet is one that has an expected value of zero given the agent’s personal probabilities,
that is, roughly such that the agent can expect to break even in the long run if she repeatedly
made the same bet.
45 See also Pettigrew (2021) for a version of this argument that doesn’t appeal to rational betting
behavior, but instead argues that using Bayesian Conditionalization (rather than Abductive
Conditionalization) maximizes the expected accuracy of one’s credences.
EXPL: Let H = {H1, ..., Hn } be a set of mutually exclusive and jointly exhaus-
tive hypotheses, and let f be a function that assigns a positive value b to the
hypothesis in H that best explains E, and 0 to all other hypotheses therein.46
Then an abductive reasoner should update their credence in any hypothesis
Hj by setting:

Pr_new(Hj) = [Pr(Hj) Pr(E|Hj) + f(Hj, E)] / Σi [Pr(Hi) Pr(E|Hi) + f(Hi, E)]
Although EXPL may seem complicated, the intuitive thought behind it is quite
simple: The hypothesis in H that best explains E gets a bonus probability, and
all other hypotheses in H are penalized in proportion to how probable they
would have been without these penalties.
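A minimal sketch of an EXPL-style update may help (assuming the standard formulation of Douven's rule, on which the bonus is added to the best explainer's numerator before renormalizing; the hypothesis names and numbers are invented):

```python
def expl_update(priors, likelihoods, best, bonus=0.1):
    """EXPL-style update (a reconstruction, not a verbatim quotation):
    the best explainer of E gets a bonus added to its numerator, and the
    result is renormalized so the new credences still sum to 1.

    priors:      dict mapping hypothesis -> Pr(H)
    likelihoods: dict mapping hypothesis -> Pr(E|H)
    best:        the hypothesis that best explains E
    """
    raw = {h: priors[h] * likelihoods[h] + (bonus if h == best else 0.0)
           for h in priors}
    total = sum(raw.values())
    return {h: v / total for h, v in raw.items()}

# Two hypotheses that confer the same probability on E: strict Bayesian
# conditionalization would leave them equally probable, but EXPL favors
# the better explainer.
new = expl_update({"H1": 0.5, "H2": 0.5}, {"H1": 0.4, "H2": 0.4}, best="H2")
print({h: round(v, 3) for h, v in new.items()})  # {'H1': 0.4, 'H2': 0.6}
```

The penalty to the non-best hypotheses arises entirely from the renormalization step: each loses probability in proportion to the share it would otherwise have received.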
The crux of Douven's defense of EXPL (and to some extent Abductive Conditionalization)
lies in computer simulations of simple coin-tossing situations, in which updating by
EXPL tends to assign a high probability to the true hypothesis more quickly than
Bayesian Conditionalization does. But what about cases in which either the evidence
or the hypotheses – or indeed both – do not themselves concern probabilistic
quantities like frequencies or chances that can be so easily compared quantitatively?
In such cases, EXPL will need to appeal to some other criteria for what
counts as the “best explainer” among available hypotheses, such as explana-
tory virtues like scope and simplicity. However, it remains to be seen whether
EXPL, when coupled with such criteria for what counts as the best explana-
tion, would indeed have the epistemic benefits Douven argues it has in simple
coin tossing situations. Furthermore, EXPL may have more serious epistemic
drawbacks in more realistic cases than in coin tossing situations, because in
such cases there may not be enough obtainable evidence to turn around assign-
ments of high probabilities to false hypotheses. After all, working scientists are
not often in a situation in which they can simply toss a coin to obtain more evi-
dence pertinent to a given hypothesis; rather, they often have to design and run
entirely new experiments, or engage in laborious field-work, in order to collect
any new evidence worth speaking of.
Roche and Sober point out that if conditionalizing on X does not raise or otherwise alter
the probability of H given E in this way, then there is apparently no need for
the Bayesian to appeal to explanatory considerations in spelling out how much
some evidence confirms some hypothesis. Any probability added by the discov-
ery of X (i.e., the fact that H explains E) has already been taken into account
when E raised the probability of H. Roche and Sober then go on to argue, with
the use of a suggestive case study, that the equality Pr(H|E&X ) = Pr(H|E)
does indeed hold.
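The screening-off claim can be made concrete with a toy joint distribution (invented numbers) in which X is conditionally independent of H given E, so that the equality Pr(H|E&X) = Pr(H|E) holds by construction:

```python
from itertools import product

# Toy joint distribution (invented numbers) in which X is conditionally
# independent of H given E: once E is known, learning X tells us nothing
# further about H.
p_h = 0.5
p_e = {True: 0.8, False: 0.2}  # Pr(E|H), Pr(E|not-H)
p_x = {True: 0.9, False: 0.3}  # Pr(X|E), Pr(X|not-E): X depends only on E

def joint(h, e, x):
    ph = p_h if h else 1 - p_h
    pe = p_e[h] if e else 1 - p_e[h]
    px = p_x[e] if x else 1 - p_x[e]
    return ph * pe * px

def pr_h_given(**evidence):
    """Pr(H=True | evidence), e.g. pr_h_given(e=True, x=True)."""
    worlds = [dict(zip("hex", w)) for w in product([True, False], repeat=3)]
    match = [w for w in worlds if all(w[k] == v for k, v in evidence.items())]
    den = sum(joint(w["h"], w["e"], w["x"]) for w in match)
    num = sum(joint(w["h"], w["e"], w["x"]) for w in match if w["h"])
    return num / den

# The screening-off equality Pr(H|E&X) = Pr(H|E) holds here:
print(round(pr_h_given(e=True, x=True), 6))  # 0.8
print(round(pr_h_given(e=True), 6))          # 0.8
```

Whether this equality holds in real cases of abductive reasoning is, of course, exactly what is in dispute between Roche and Sober and their critics.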
Roche and Sober’s argument has led to a flurry of responses. One response
disputes that the equality Pr(H|E&X) = Pr(H|E) holds either in Roche and
Sober’s own case study or in other similar cases of abductive reasoning (Cli-
menhaga, 2017a; see also Roche and Sober, 2017). A much more common set
of responses challenge the idea that Roche and Sober’s screening-off criterion,
Pr(H|E&X ) = Pr(H|E), is an appropriate criterion for evidential irrelevance.
For example, McCain and Poston (2014) argue that explanatory considerations
are evidentially relevant in that they affect how resilient one’s personal prob-
abilities are to being changed when new evidence is obtained (see also Roche
and Sober, 2014; McCain and Poston, 2018).47
Indeed, looking back at the accounts of abductive reasoning surveyed in
Section 2, including the various probabilistic accounts thereof, it is not clear
whether or how Roche and Sober’s screening-off challenge undermines any
of these accounts – even though each such account is surely spelling out
a sense in which explanation would be evidentially relevant. For example,
Pr(H|E&X) = Pr(H|E) is consistent with the central idea behind constraining
probabilistic accounts, according to which hypotheses that provide better
explanations should be assigned higher probabilities prior to updating on the evidence.
47 For another argument that explanatory considerations might be evidentially relevant in other
ways, see Lange (2017); see also Roche and Sober (2019) and Lange (2020, 2023).
48 In response to this criticism, Roche and Sober might reply that they only meant to suggest
that there is a sense in which explanatoriness is evidentially irrelevant, and that the screening-
off criterion captures this sense (Roche and Sober, 2017, 582n1). But this invites the follow-
up objection that Roche and Sober’s criterion is so divorced from the accounts of abductive
reasoning in the literature that their screening-off challenge fails to undermine any of these
accounts. In order for Roche and Sober’s argument to hit home, they would need to show
that the screening-off criterion of explanatory irrelevance has been, or should be, accepted by
proponents of abductive reasoning.
In response, Dellsén (2017a) develops a generalization of IBE that he calls abductively robust inference (ARI). The basic idea
appeals to the fact that a claim C may be entailed by several of the hypothe-
ses that provide some of the best explanations of the evidence, so that if any
of these hypotheses is true (regardless of which one), C would be true. A
bit more precisely, suppose C is entailed by all of the k hypotheses that pro-
vide the best k explanations of the evidence, where k is some natural number
(less than or possibly equal to the number of available hypotheses n). For a
49 McCain and Poston (2019) discuss a closely related challenge, which they dub the disjunction
objection, and which they attribute to brief remarks made by van Fraassen (1989) and Fumerton
(1995).
50 Indeed, it would also seem to undermine the rationality of assigning a higher probability only
to Hi , and therefore assigning lower probabilities to at least some of its rival hypotheses, as per
Douven’s EXPL (see §4.2).
51 McCain and Poston (2019) suggest that their version of the problem, that is the disjunction
objection, can be avoided by adding a clause to IBE requiring that the best explanation must
also be “good enough.” Dellsén (2017a, 24) anticipates this type of response and argues that it
doesn’t work as a solution to the problem of plausible rivals.
large enough k, C may then be confidently inferred even if none of the available
explanatory hypotheses H1, . . . , Hn – including the very best explanatory
hypothesis Hi – can be. After all, each one of H1, . . . , Hn would be subject to the
problem of plausible rivals, whereas C would not be subject to any such problem; on
the contrary, the multiplicity of plausible rivals all of which entail C arguably
strengthens the support for C, for it shows that C is “robust” across the var-
ious explanatory possibilities described by each rival hypothesis (Woodward,
2006). Returning to our example of hypotheses about the origin of life, note
that the four best alternative explanations for the origin of life all posit that life
began with the formation of some type of nucleic acid – be it RNA, PNA, TNA,
or GNA (i.e., by an xNA). According to the version of ARI where k = 4, this
“robust” result can thus be confidently inferred.
A fair complaint about ARI is that it is underspecified in some important
respects. Indeed, Dellsén (2017a, 26) emphasizes that ARI is not in fact an
inference rule at all, but a pattern of multiple such rules for different values
of k. Setting a higher value to k will generally make the resulting inference
rule epistemically safer, in that one will be less likely to infer something false,
but also less powerful, in that the inferred claim C will generally have to be a
logically weaker proposition. Since one may want to balance safety and power
differently in different circumstances, for example depending on how much
is at stake, different instantiations of ARI (i.e., different values for k) may be
appropriate in different circumstances. In this respect, ARI should arguably be
left unspecified so that it preserves the flexibility required to balance
safety and power in different ways.52 With that said, there are other aspects
of ARI that arguably need to be spelled out in greater detail. For example, in
some cases one might want to allow an inference to a claim C that is merely
entailed by most – rather than all – of the k hypotheses that provide the best
explanations of the evidence. Furthermore, in those cases, it surely also matters
how good an explanation is provided by each of the hypotheses that entail C.
More work is needed to flesh out ARI along these dimensions.
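The core of ARI can be sketched in a few lines (a toy model, with entailment represented crudely as membership in each hypothesis's set of claims; the claim strings paraphrase the origin-of-life example in the text):

```python
def robust_claims(ranked_hypotheses, k):
    """ARI-style inference (a sketch): return the claims entailed by all
    of the k best available explanatory hypotheses. Entailment is modeled
    simply as set membership, a deliberate simplification.

    ranked_hypotheses: sets of claims, ordered from best explainer down.
    """
    return set.intersection(*ranked_hypotheses[:k])

# Origin-of-life illustration from the text: the four best available
# hypotheses all entail that life began with some nucleic acid ("xNA").
hypotheses = [
    {"life began with an xNA", "the first xNA was RNA"},
    {"life began with an xNA", "the first xNA was PNA"},
    {"life began with an xNA", "the first xNA was TNA"},
    {"life began with an xNA", "the first xNA was GNA"},
]
print(robust_claims(hypotheses, k=4))  # {'life began with an xNA'}
print(robust_claims(hypotheses, k=1))  # the best hypothesis's claims (IBE)
```

Raising k shrinks the intersection, which mirrors the safety/power trade-off just described: the inferred claim becomes logically weaker but harder to get wrong, while k = 1 recovers ordinary IBE.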
52 It is worth noting that when we set k = 1, we get an inference rule that is extensionally
equivalent to IBE. It is in this sense that ARI is a generalization of IBE (Dellsén, 2017a, 27).
Consider, in particular, a standard version of IBE which holds that one may
infer a hypothesis just in case it provides a better explanation of one’s evidence
than any other competing hypothesis. Now, at least in some cases, it seems that
this idea implies that one can infer several different hypotheses from the same
evidence, because the evidence can be explained at different “levels” such that
the inferred hypotheses do not compete with each other but only with other
hypotheses at the same “level.” In particular, suppose that, at one “level” of
explanation, Ha provides the best explanation of some evidence E; while at
another “level,” Hb provides the best explanation; and yet Ha and Hb are incom-
patible propositions. If such cases are possible, the upshot seems to be that IBE,
at least as it is standardly formulated, recommends inferring two incompatible
propositions, Ha and Hb .
Climenhaga (2017b, 253–254) demonstrates that such cases are possible by
considering a rather artificial setup involving several urns containing differently
colored balls and coin flips that determine which of these an agent randomly
chooses balls from. To see the relevance of Climenhaga’s problem to scientific
practice, let us examine a more realistic case instead. Consider the fact that both
birds and Pterygota (i.e., flying insects) are able to fly. Why is that? At a certain
abstract level of explanation, there are two relevant explanatory hypotheses
to consider, namely that the ability to fly is a trait inherited from a common
flying ancestor, on the one hand, and that it isn’t, on the other. The latter type
of explanation – that the ability to fly was not present in the common ancestor of
birds and Pterygota and instead evolved independently in the lineage of each –
would be an instance of what evolutionary biologists call convergent evolution.
Call the hypothesis that flight evolved convergently in each lineage (I), and the
hypothesis that birds and Pterygota inherited flight from a common flying ancestor
(II). One might think that (II) provides the better explanation, since on (II) the
evolution of the ability to fly would happen only once rather than twice.53 Assuming that being
the best explanation has something to do with being more parsimonious and/or
conferring greater probability on the evidence, this might lead us to conclude
that (II) rather than (I) should be inferred via IBE.
However, we may also consider this issue from the point of view of some-
what more detailed explanatory hypotheses. In particular, let us suppose that
we are interested in learning not only about whether birds and Pterygota share a
common flying ancestor, but also when (if at all) there were evolutionary pres-
sures to evolve the ability to fly. So consider the following four explanatory
hypotheses concerning how exactly birds and Pterygota evolved which take a
stand on this issue as well:
(1) Flight evolved convergently in birds and Pterygota, and there were similar
evolutionary pressures favoring flight in the lineages of both.
(2) Flight evolved convergently in birds and Pterygota, and there were dissim-
ilar evolutionary pressures favoring flight in the lineage of each.
(3) Birds and Pterygota share a common flying ancestor, and there were similar
evolutionary pressures favoring flight in the lineages of both.
(4) Birds and Pterygota share a common flying ancestor, and there were
dissimilar evolutionary pressures favoring flight in the lineage of each.
Given this partition of the explanatory hypotheses, one could very well argue
that the best explanation is (1). After all, if there were indeed similar evo-
lutionary pressures favoring flight among the ancestors of birds and insects
respectively, then one should expect flight to evolve in both lineages
independently (i.e., convergently). Indeed, (1) is the hypothesis that is generally accepted
in contemporary evolutionary biology, for various reasons that need not con-
cern us here. So let’s suppose – if only for the sake of the argument – that out
of (1)–(4), (1) should be inferred via IBE.
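If, as hybrid accounts suggest, explanatory goodness tracks probability, the tension between the two partitions can be made vivid with a toy calculation. The numbers below are invented purely for illustration (they come neither from the text nor from evolutionary biology): hypothesis (1) is the most probable member of the fine-grained partition, yet the coarse-grained hypothesis (II), the disjunction of (3) and (4), outweighs (I), the disjunction of (1) and (2).

```python
# Toy illustration of the partition-dependence worry for IBE.
# All probabilities are invented for illustration only.

# Fine-grained partition: hypotheses (1)-(4), with hypothetical
# probabilities conditional on the evidence.
fine = {
    "(1) convergent, similar pressures":         0.40,
    "(2) convergent, dissimilar pressures":      0.05,
    "(3) common ancestor, similar pressures":    0.30,
    "(4) common ancestor, dissimilar pressures": 0.25,
}

# Coarse-grained partition: (I) convergent evolution vs. (II) common
# flying ancestor. Each coarse hypothesis is the disjunction of two
# fine-grained ones, so its probability is the sum of theirs.
p_I = (fine["(1) convergent, similar pressures"]
       + fine["(2) convergent, dissimilar pressures"])
p_II = (fine["(3) common ancestor, similar pressures"]
        + fine["(4) common ancestor, dissimilar pressures"])

best_fine = max(fine, key=fine.get)            # (1) wins the fine comparison
best_coarse = "(I)" if p_I > p_II else "(II)"  # (II) wins the coarse comparison

print(best_fine)   # "(1) convergent, similar pressures"
print(p_I, p_II)   # 0.45 0.55
```

With these (hypothetical) numbers, the fine-grained winner (1) entails convergent evolution, while the coarse-grained winner (II) denies it: the "best explanation" depends on how the space of hypotheses is partitioned.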
The problem now is that the two explanations that we have concluded should
both be inferred via IBE from the same fact are logically incompatible. If flight
evolved convergently, as per (1), then birds and Pterygota do not share a com-
mon ancestor, contrary to (II). Apparently, then, IBE warrants inferences to
logically incompatible claims. One could of course maintain that there is noth-
ing wrong with accepting incompatible claims in some cases – as when one
accepts both general relativity and quantum mechanics despite the apparent
incompatibility between these theories – and that this would just be another
case of that sort. However, all other things being equal, it is arguably at least a
53 This is a point nicely made by Sober (1994), whose discussion of similar cases from
evolutionary biology is my inspiration for this example.
also be inferred via IBE, albeit indirectly. After all, anyone who is in a position
to infer (1) is also clearly in a position to infer an immediate logical conse-
quence thereof, namely (I). Dellsén (2016) refers to this variation on standard
IBE as indirect IBE, and suggests that in it one infers a hypothesis H in virtue
of H being entailed by a stronger hypothesis H* that explains E better than any
competing explanatory hypothesis.54 The point, then, is not that no hypothesis
from the set (I)–(II) can be inferred via IBE, but rather that (I) rather than (II)
should be inferred because (I) is, whereas (II) is not, part of the best complete
explanation, namely (1).
54 Indeed, Dellsén (2016, 224) argues that it is common among proponents of IBE to implicitly
count these types of inferences as instances of IBE. For example, Harman (1965, 90–91) sug-
gests that, from the fact that all observed As have been Bs, one may infer that the next observed
A will also be a B. However, the next observed A being B clearly does not explain why all
observed As have been Bs; rather, what explains the latter is (on Harman’s view) that all As
are Bs, from which one can then deduce that the next observed A will be a B (see also Lipton,
2004, 63–64).
Conclusion
Where does all of this leave us? I hope it's clear at this point that both the
general term “abductive reasoning” and the popular slogan “Inference to the
Best Explanation” tend to mean different things to different philosophers.
Often prompted by various challenges to the cogency of abductive reasoning,
these philosophers have responded by clarifying, refining, or developing their
accounts of abductive reasoning so as to meet these challenges. Throughout
this Element, I have often indicated my favored approach to meeting each chal-
lenge, but only in a piecemeal manner. In this brief final section, I wish to sketch
a more holistic picture of how the pieces hang together in my view.
As discussed in Section 2, a crucial issue is whether one’s account takes
abductive reasoning to be inferential, probabilistic, or some hybrid of both. As
I indicated already in that section, I favor a version of the third type of account,
on which a form of abductive inference serves as a heuristic for rational proba-
bility assignments in which a preference for better explaining theories emerges
naturally (see §2.4). In my view, this heuristic account of abductive reason-
ing provides us with “the best of both worlds,” in that we can draw upon the
powerful probabilistic machinery of the Bayesian framework to account for
ideally rational reasoning in science, while still preserving a place for the
type of comparative explanatory evaluation that seems to make up much of
the actual reasoning that goes on in science. Of course, in combining elements
from inferential and probabilistic accounts, this heuristic account opens itself
up to the challenges facing both. In my view, however, these challenges can be
met.
Apart from the various challenges that face inferential, probabilistic, and
hybrid accounts, there is also the more general challenge of why we should pre-
fer “better explanations” at all. For example, why prefer theories that explain
more to those that explain less, or more parsimonious theories to complex ones?
As discussed in Section 3, philosophers are divided roughly between realists
about explanatory goodness, who hold that better explanations are more likely
to be true, and antirealists, who hold that better explanations are (at best) more
convenient to work with. Moreover, realists disagree amongst themselves on
whether providing better explanations can be shown a priori, or instead a poste-
riori, to be truth-conducive (see §3.2). My own position on this thorny issue is
that different approaches may be appropriate for different explanatory virtues,
and that at least some of the virtues – for example, parsimony – may be truth-
tracking in some contexts but not others. In my view, this is not a problem for
the heuristic account of abductive reasoning that I favor, since abductive rea-
soners can – and often do – choose not to appeal to the relevant virtues in the
contexts in which they fail to be truth-tracking (see §§3.3–3.4).
Taken together, these theses show how one can coherently and plausibly rec-
oncile the kernel of truth in traditional ideas about abductive reasoning –
stemming from such luminaries as Bacon, Darwin, Peirce, and Harman, among
many others – with the powerful and popular Bayesian approach to scientific
reasoning. The overall account may not be as simple as one might have hoped,
but then again there is little reason to think that scientific reasoning is a simple
matter.
References
Bacon, F. (1620). Novum Organum, sive Indicia Vera de Interpretatione
Naturae. John Bill, London.
Baker, A. (2003). Quantitative Parsimony and Explanatory Power. British
Journal for the Philosophy of Science, 54:245–259.
Baker, A. (2007). Occam’s Razor in Science: A Case Study from Biogeography.
Biology and Philosophy, 22:193–215.
Baker, A. (2022). Simplicity. In Zalta, E. N., editor, Stanford Encyclopedia of
Philosophy (Summer 2022 Edition). Metaphysics Research Lab, Stanford
University. https://plato.stanford.edu/archives/sum2022/entries/simplicity/.
Barnes, E. (1995). Inference to the Loveliest Explanation. Synthese, 103:252–
277.
Beebe, J. R. (2009). The Abductivist Reply to Skepticism. Philosophy and
Phenomenological Research, 79:605–636.
Biggs, S. and Wilson, J. M. (2017). The a Priority of Abduction. Philosophical
Studies, 174(3):735–758.
Bird, A. (2017). Inference to the Best Explanation, Bayesianism, and Knowl-
edge. In McCain, K. and Poston, T., editors, Best Explanations: New Essays
on Inference to the Best Explanation, pages 97–120. Oxford University
Press, Oxford.
Boghossian, P. (2014). What Is Inference? Philosophical Studies, 169(1):1–18.
Bowler, P. and Morus, I. R. (2005). Making Modern Science: A Historical
Survey. University of Chicago Press, Chicago.
Lipton, P. (2004). Inference to the Best Explanation. Routledge, London, 2nd
edition.
Lombrozo, T. (2010). Explanation and Abductive Reasoning. In Holyoak, K.
and Morrison, R., editors, The Oxford Handbook of Thinking and Reasoning,
pages 260–276. Oxford University Press, Oxford.
Lycan, W. G. (1985). Epistemic Value. Synthese, 64:137–164.
Lycan, W. G. (1988). Judgment and Justification. Cambridge University Press,
Cambridge.
Lycan, W. G. (2012). Explanationist Rebuttals (Coherentism Defended Again).
The Southern Journal of Philosophy, 50:5–20.
Mackonis, A. (2011). Inference to the Best Explanation, Coherence and Other
Explanatory Virtues. Synthese, 190:975–995.
Magnus, P. (2010). Inductions, Red Herrings, and the Best Explanation for the
Mixed Records of Science. British Journal for the Philosophy of Science,
61:803–819.
McCain, K. and Moretti, L. (2022). Appearance and Explanation. Oxford
University Press, Oxford.
van Fraassen, B. C. (2002). The Empirical Stance. Yale University Press, New
Haven, CT.
van Fraassen, B. C. (2007). From a View of Science to a New Empiricism. In
Monton, B., editor, Images of Empiricism: Essays on Science and Stances,
with a Reply from Bas C. van Fraassen, pages 337–385. Oxford University
Press, Oxford.
Vogel, J. (2005). Inference to the Best Explanation. In Craig, E., editor, The
Shorter Routledge Encyclopedia of Philosophy. Routledge, London, pages
445–446.
Voltaire, M. (1759). An Essay on Universal History, the Manners, and Spirit of
Nations: From the Reign of Charlemaign to the Age of Lewis XIV. J. Nourse,
London.
Weintraub, R. (2013). Induction and Inference to the Best Explanation. Philo-
sophical Studies, 166:203–216.
Weisberg, J. (2009). Locating IBE in the Bayesian Framework. Synthese,
167:125–143.
Weisberg, J. (2020). Belief in Psyontology. Philosophers’ Imprint, 20(11):1–
27.
Whewell, W. (1858). Novum Organum Renovatum. John W. Parker and Son,
London, 3rd edition.
Williamson, T. (2016). Abductive Philosophy. The Philosophical Forum,
47:263–280.
Woodward, J. (2006). Some Varieties of Robustness. Journal of Economic
Methodology, 13:219–240.
Jacob Stegenga
University of Cambridge
Jacob Stegenga is a Reader in the Department of History and Philosophy of Science at
the University of Cambridge. He has published widely on fundamental topics in
reasoning and rationality and philosophical problems in medicine and biology. Prior to
joining Cambridge he taught in the United States and Canada, and he received his PhD
from the University of California San Diego.