J Log Lang Inf (2012) 21:141–144
DOI 10.1007/s10849-012-9161-5
Interpreting Probability
Timothy Childers · Ondrej Majer
Published online: 9 March 2012
© Springer Science+Business Media B.V. 2012
This special issue of the Journal of Logic, Language and Information contains four
selected papers from the 2009 Prague International Colloquium Foundations of Uncertainty: Probability and its Rivals. The contributions concentrate mainly on probability; we deal with each of them in turn.
Peter Milne—Probability as a Measure of Information Added
There have traditionally been two routes to interpreting probability—the quantitative
and the qualitative. The dominant tradition has been the quantitative, represented by
Dutch Book arguments and decision-theoretic restrictions on choices between lotteries. Peter Milne takes the qualitative route. But instead of the usual interpretation of a
qualitative ordering in terms of likeliness, he explicates a notion of ‘information
added’. It soon becomes apparent that there are two different notions of information
added that can be extracted from ordinary usage. One is a measure of a new datum’s
novelty, the other a measure of the possibilities it rules out. Milne shows that the
former leads to a peculiar probability-like measure. He uses well-known (but
under-utilized) results to establish that the latter comparative ordering of information
added leads to Popper functions.
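For orientation (our gloss, not Milne’s own formulation), Popper functions take conditional probability as a two-place primitive, so that P(A | B) can be well defined even when B itself receives probability zero; characteristic postulates include, for instance,

\[
P(A \mid A) = 1, \qquad P(A \wedge B \mid C) = P(A \mid B \wedge C)\,P(B \mid C).
\]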
The interpretation of the functions is, however, “information-theoretic”, and not,
at least directly, about degrees of belief (or any other conception of probability). It
reverses the usual order of proceeding under which measures of information and/or
content are taken to be functions of antecedently given probabilities (be they statistical,
T. Childers (B) · O. Majer
Academy of Sciences of the Czech Republic, Prague, Czech Republic
e-mail: [email protected]
O. Majer
e-mail: [email protected]
logical, or subjective), and as such, is genuinely novel. As far as we can determine,
Milne has provided the first new interpretation of probability in decades. Milne notes
possible applications such as justifying the Maximum Entropy Principle, explicating
the notion of confirmation and providing semantics for conditionals.
Jeff Paris and Alena Vencovská—Symmetry in Polyadic Inductive Logic
The logical interpretation of probability, taken as deriving probability assignments
to sentences of a language from symmetry constraints, had been left for dead in the
early 1970s. Paris and his co-authors have nursed this interpretation back to robust
health. The technical innovation is to employ the standard mathematical notion of
symmetry as an automorphism. Probability assignments to sentences are required to be
invariant under automorphisms of structures: this provides a precise characterization
notoriously lacking in the logical interpretation, and hence clearly delimits the application of symmetry principles, thus avoiding the usual ‘paradoxes’. It also provides a
natural characterization of different symmetry principles.
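Schematically (our notation, not necessarily the paper’s), the guiding requirement is that the probability assignment w to sentences be invariant under every automorphism σ of the relevant structures:

\[
w(\sigma\theta) = w(\theta) \qquad \text{for every sentence } \theta \text{ and every automorphism } \sigma.
\]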
And yet there was trouble in paradise. Paris and Vencovská (2010) showed that the
characterization of symmetry by automorphisms on the structures of a language with
only unary predicates leads, in the case of complete ignorance, to a counter-inductive
probability measure, Carnap’s c†. This result could be blocked by restricting the class
of allowable automorphisms: but the cost of doing so is high, since the naturalness
of the characterization of symmetries is lost. Paris and Vencovská conjectured that
requiring automorphisms on structures for monadic languages to be extendable to
automorphisms on polyadic extensions of those languages would provide a suitable
restriction. This would be a happy result, as it is an entirely natural
requirement. Their paper in this issue supports this conjecture. It also represents a
marked expansion of the scope and power of inductive logic.
Jacob Rosenthal—Probabilities as Ratios of Ranges in Initial-State Spaces
Many foundational issues in probability are also foundational issues in statistical
mechanics, which is therefore a rich mine of ideas for the interpretation of probability. Rosenthal sympathetically surveys a class of ideas for explaining the nature of
probabilities arising from deterministic processes. According to this conception, the
randomness of statistical mechanical events arises from sensitivity to initial conditions. Since we can never pin down the exact conditions in which an experiment is
conducted, we will have a range of possible outcomes, even though the underlying
processes are deterministic. Hence experiments give us relative frequencies: the value
of these frequencies is determined by the measure of the initial conditions leading to an
outcome. The measure is that which is invariant under partitioning of the state-space
of initial conditions. Consider the initial conditions that lead to a given outcome: they
will, of course, make up some proportion of the total space of initial conditions. If, as we
partition the state-space ever more finely, this proportion remains stable, then we can
call this proportion the probability of the outcome. One way of looking at it is that no
matter what set of initial conditions we choose, the relative frequency remains the
same (the method of arbitrary functions, MAF); another is that within ranges of initial
conditions, the proportion of outcomes remains stable (the range conception, RC).
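In schematic form (our illustration; the notation is not Rosenthal’s), the proposal assigns

\[
P(A) = \frac{\mu(S_A)}{\mu(S)},
\]

where S is the space of initial conditions, S_A ⊆ S is the set of initial conditions leading to outcome A, and μ is the privileged measure, with the further requirement that this ratio be (approximately) reproduced within each sufficiently small cell of a partition of the state-space.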
Rosenthal explores the possibility of taking MAF/RC as an interpretation of probability, that is, as providing truth conditions for probability statements. This interpretation is not meant to be
universal: it is meant for problems where a state-space of initial conditions with an
associated measure can be specified. This concerns statistical mechanics and the classical games of chance in particular, but might also be applied to other fields (Strevens
is the main advocate of this view).
However, Rosenthal points out that an RC/MAF interpretation faces a severe problem: probabilities do not remain invariant under redescription. This problem is familiar
from logical interpretations of probabilities, particularly in the continuous case. However, there might be more hope in the RC/MAF interpretation since the probabilities
arise in physical settings where there would seem to be natural ways of determining
the necessary privileged description. Rosenthal shows that the most obvious ways of
determining this privileged description cause the RC/MAF interpretation to collapse
into other interpretations of probability. He indicates a path that avoids this collapse;
future research will show if it can be taken.
Alan Hájek—The Fall of "Adams' Thesis"?
Conditionals are a perennial philosophical vexation; Alan Hájek aims to show that they
are even more vexatious than has recently been thought. A central puzzle in the study
of conditionals is their relation to conditional probabilities: they certainly look like
they would make a fine match. Yet the probabilities of conditionals cannot be equated
with their corresponding conditional probabilities in any straightforward way. Perhaps,
then, there is another, more amenable, function that would play matchmaker for conditionals and conditional probabilities. Adams proposed equating the assertabilities of
conditionals with their corresponding conditional probabilities. Adams’ Thesis is said
to be of paramount importance to understanding reasoning about conditionals.
Yet as Hájek points out, there is a family of such theses, generated by differing
scopes of the thesis as well as by possible meanings of ‘assertability’. His candidate for
what most people mean by Adams’ Thesis is that the assertability of a conditional
is equal to an agent’s corresponding conditional probability, where the antecedent
has positive probability and assertability is a function of a rational agent’s probability function, for all (conditional-free) sentential compounds A and B. He identifies
two possible weakenings of this thesis: a qualitative version, where assertability is
restricted to three values (high, middle, low), and an even more restricted version with
just a single value, high.
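In symbols (our rendering of the thesis as just glossed, writing As for the assertability function), the central version reads

\[
\mathrm{As}(A \rightarrow B) = P(B \mid A) = \frac{P(A \wedge B)}{P(A)} \qquad \text{whenever } P(A) > 0,
\]

where P is the rational agent’s probability function and A and B range over conditional-free sentential compounds.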
What sort of function, then, is assertability? It turns out that in order to sustain
Adams’ Thesis it would have to be a fine-grained function; in fact, finer-grained than
unconditional probability. Hájek deploys his ‘Wallflower’ argument, a variant of his
generalization of Lewis’s first triviality result, to show that the range of the assertability function is richer (has more values) than that of the unconditional probability function
(for finitely-valued probability functions). This is a prima facie most peculiar result
which, as Hájek notes, calls for further explanation. Taking this together with a number of
puzzles about the exact nature of assertability (and cousins like ‘assentability’), Hájek
concludes that the Wallflower argument shows that much work remains to be done if
Adams’ Thesis is to maintain its privileged position in theorizing about conditionals.
However, the qualitative and high-assertability versions of the Thesis are not affected
by this argument, which also points the way for future research.
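For context, the Lewis-style reasoning that the Wallflower argument generalizes runs roughly as follows (our sketch, not Hájek’s own presentation). Suppose P(A → B) = P(B | A) held for every probability function in a class closed under conditionalization; then, by the law of total probability,

\[
\begin{aligned}
P(A \rightarrow B) &= P(A \rightarrow B \mid B)\,P(B) + P(A \rightarrow B \mid \neg B)\,P(\neg B) \\
&= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B) \\
&= 1 \cdot P(B) + 0 \cdot P(\neg B) = P(B),
\end{aligned}
\]

so P(B | A) = P(B) whenever both P(A ∧ B) and P(A ∧ ¬B) are positive, trivializing the probability function.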
We thank the authors for their papers and for their forbearance during the refereeing process. We also gratefully acknowledge the support of the Grant Agency of the Czech
Republic in the form of grant GAP401/10/1504, Formal and Historical Approaches to
Epistemology. Finally we thank the Institute of Philosophy, Academy of Sciences of
the Czech Republic, as well as our co-organizer Franz Huber of Universität Konstanz
for making the conference leading to this issue possible.