Papers by Christian Wallmann
European Journal for Philosophy of Science
This paper poses a problem for Lewis’ Principal Principle in a subjective Bayesian framework: we show that, where chances inform degrees of belief, subjective Bayesianism fails to validate normal informal standards of what is reasonable. This problem points to a tension between the Principal Principle and the claim that conditional degrees of belief are conditional probabilities. However, one version of objective Bayesianism has a straightforward resolution to this problem, because it avoids this latter claim. The problem, then, offers some support to this version of objective Bayesianism.
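For readers unfamiliar with the Principal Principle, its core identity P(A | ch(A) = x) = x can be illustrated with a short numerical sketch. The chance hypotheses and weights below are invented for the example; this is not a construction from the paper.

```python
# Toy illustration of the Principal Principle: conditional on the chance of A
# being x, a reasonable initial credence in A is x, i.e. P(A | ch(A) = x) = x.
# If an agent spreads credence over rival chance hypotheses, the law of total
# probability then fixes the unconditional credence in A as the expected chance.
# (The hypotheses and weights here are made up for illustration.)

chance_hypotheses = {0.3: 0.5, 0.7: 0.5}  # chance value x -> credence that ch(A) = x

# P(A) = sum_x P(ch(A) = x) * P(A | ch(A) = x) = sum_x P(ch(A) = x) * x
credence_in_A = sum(weight * x for x, weight in chance_hypotheses.items())
print(credence_in_A)  # the expected chance of A
```

The sketch only shows how chances inform degrees of belief when the principle holds; the paper's argument concerns what happens to informal standards of reasonableness once this is embedded in subjective Bayesianism.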
Direct inferences identify certain probabilistic credences or confirmation-function-likelihoods with values of objective chances or relative frequencies. The best known version of a direct inference principle is David Lewis's Principal Principle. Certain kinds of statements undermine direct inferences. Lewis calls such statements inadmissible. We show that on any Bayesian account of direct inference several kinds of intuitively innocent statements turn out to be inadmissible. This may pose a significant challenge to Bayesian accounts of direct inference. We suggest some ways in which these challenges may be addressed.
Inconsistencies between scientific theories have been studied, by and large, from the perspective of paraconsistent logic. This approach has considered the formal properties of theories and the structure of the inferences one can legitimately draw from them. However, inconsistencies can also be analysed from the perspective of modelling practices, in particular how modelling practices may lead scientists to form opinions and attitudes that are different, but not necessarily inconsistent from a logical point of view. In such cases, it is preferable to talk about disagreement rather than inconsistency. Disagreement may originate in, or concern, a number of epistemic, socio-political or psychological factors. In this paper, we offer an account of the 'loci and reasons' for disagreement at different stages of the scientific process. We then present a controversial episode in the health sciences: the studies on hypercholesterolemia. The causes and effects of high levels of cholesterol in the blood have been long and hotly debated, to the point of deserving the name 'cholesterol wars'; the debate, to be sure, isn't settled yet. In this contribution, we focus on some selected loci and reasons for disagreement that occurred between 1920 and 1994 in the studies on hypercholesterolemia. We hope that our analysis of 'loci and reasons' for disagreement may shed light on the cholesterol wars, and possibly on other episodes of scientific disagreement.
The conflict of narrowness and precision in direct inference occurs when a body of evidence contains precise estimates for frequencies in a certain reference class and less precise estimates for frequencies in a narrower reference class. To develop a solution to this conflict, I draw on ideas developed by Paul Thorn and John Pollock. First, I argue that Kyburg and Teng's solution to the conflict of narrowness and precision leads to unreasonable direct inference probabilities. I then show that Thorn's recent solution to the conflict does so as well. Based on my analysis of Thorn's approach, I propose a natural distribution for a Bayesian analysis of the data directly obtained from studying members of the narrowest reference class.
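As an illustration of the kind of Bayesian analysis of data from the narrowest reference class mentioned above, a standard Beta-Bernoulli model turns raw counts into a direct inference value. This is a sketch under my own assumptions; the particular distribution the paper proposes may differ.

```python
# Illustrative sketch, not the paper's actual proposal: one standard way to
# turn raw counts from the narrowest reference class into a direct inference
# probability is a Beta-Bernoulli model. With a Beta(a, b) prior over the
# class frequency and k "successes" among n observed members, the posterior
# is Beta(a + k, b + n - k); its mean can serve as the direct inference value.

def beta_posterior_mean(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the class frequency under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

# Suppose 7 of 10 sampled members of the narrow class have the attribute:
print(beta_posterior_mean(7, 10))  # (1 + 7) / (2 + 10) = 8/12 under a uniform prior
```

Note how the prior keeps the estimate away from the raw frequency 7/10, which matters precisely when the narrow-class data are sparse and hence imprecise.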
We present and analyse four approaches to the reference class problem. First, we present a new objective Bayesian solution to the reference class problem. Second, we review Pollock's combinatorial approach to the reference class problem. Third, we discuss a machine learning approach that is based on considering reference classes that are similar to the individual of interest. Fourth, we show how evidence of mechanisms, when combined with the objective Bayesian approach, can help to solve the reference class problem. We argue that this last approach is the most promising, and we note some positive aspects of the similarity approach.
We argue that David Lewis' Principal Principle implies a version of the Principle of Indifference. The same is true for similar principles which need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism.
Probabilistic inference forms lead from point probabilities of the premises to interval probabilities of the conclusion. The probabilistic version of Modus Ponens, for example, licenses the inference from P(A) = α and P(B|A) = β to P(B) ∈ [αβ, αβ + 1 − α]. We study generalized inference forms with three or more premises. The generalized Modus Ponens, for example, leads from P(A1) = α1, . . . , P(An) = αn and P(B|A1 ∧ · · · ∧ An) = β to a corresponding interval for P(B). We present the probability intervals for the conclusions of the generalized versions of Cut, Cautious Monotonicity, Modus Tollens, Bayes' Theorem, and some System O rules. Recently, Gilio has shown that generalized inference forms "degrade": more premises lead to less precise conclusions, i.e., to wider probability intervals for the conclusion. We also study Adams' probability preservation properties in generalized inference forms. Special attention is devoted to zero probabilities of the conditioning events. These zero probabilities often lead to different intervals in the coherence and the Kolmogorov approach.
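The single-premise Modus Ponens interval stated above, and the degradation phenomenon, can be checked with a short sketch. The generalization to n premises below uses the Fréchet lower bound max(0, Σαi − (n − 1)) for the conjunction of the antecedents; this is my own derivation under Kolmogorov-style assumptions, not the paper's coherence-based treatment, and the two can differ when conditioning events receive probability zero.

```python
def modus_ponens_interval(alphas, beta):
    """Probability interval for P(B), given P(A_i) = alphas[i] for each i
    and P(B | A_1 & ... & A_n) = beta.

    Sketch under Kolmogorov-style assumptions: the conjunction probability g
    is only constrained by the Frechet lower bound g >= max(0, sum - (n-1)).
    For fixed g, P(B) lies in [g*beta, g*beta + 1 - g]; taking g at its lower
    bound yields both the extremal lower and extremal upper endpoint.
    """
    n = len(alphas)
    g = max(0.0, sum(alphas) - (n - 1))
    return (g * beta, g * beta + 1 - g)

# One premise reproduces the interval from the text: [a*b, a*b + 1 - a].
print(modus_ponens_interval([0.9], 0.8))       # approx (0.72, 0.82)
# Degradation: a second premise widens the interval.
print(modus_ponens_interval([0.9, 0.9], 0.8))  # approx (0.64, 0.84)
```

The widening with each added premise is exactly Gilio's degradation effect: the Fréchet bound on the conjunction loosens as n grows, so the conclusion interval dilates.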
The paper investigates exchangeability in the context of probability logic. We study generalizations of basic inference rules and inferences involving cardinalities. We compare the results with those obtained in the case in which only identical probabilities are assumed.
In this paper we develop an abstract theory of adequacy. In the same way as the theory of consequence operations is a general theory of logic, this theory of adequacy is a general theory of the interactions and connections between consequence operations and their sound and complete semantics. Adding axioms for the connectives of propositional logic to the basic axioms of consequence operations yields a unifying framework for different systems of classical propositional logic. We present an abstract model-theoretic semantics based on model mappings and theory mappings. Between the classes of models and theories (a theory being the set of sentences verified by a model), there obtains a connection well known in algebra as a Galois correspondence. Many basic semantic properties can be derived from this observation. A sentence A is a semantic consequence of T if every model of T is also a model of A. A model mapping is adequate for a consequence operation if its semantic inference operation is identical with the consequence operation. We study how the properties of an adequate model mapping reflect the properties of the consequence operation and vice versa. In particular, we show how every concept of the theory of consequence operations can be formulated semantically.
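The Galois correspondence between models and theories can be made concrete in a small finite setting. The sketch below is my own toy construction (two atoms, with sentences identified by their truth sets), not the paper's abstract framework, but it exhibits the two characteristic Galois properties: both maps are antitone, and each composite is a closure.

```python
from itertools import product

# Toy setting: valuations over two atoms; a "sentence" is represented
# semantically as the frozenset of valuation indices at which it is true.
ATOMS = ("p", "q")
VALUATIONS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

def sentence(pred):
    return frozenset(i for i, v in enumerate(VALUATIONS) if pred(v))

p = sentence(lambda v: v["p"])
q = sentence(lambda v: v["q"])
p_or_q = sentence(lambda v: v["p"] or v["q"])
SENTENCES = {p, q, p_or_q}  # a tiny stock of sentences

def models(theory_set):
    """Mod(T): valuation indices at which every sentence of T is true."""
    result = frozenset(range(len(VALUATIONS)))
    for s in theory_set:
        result &= s
    return result

def theory(model_set):
    """Th(M): sentences (from our stock) true at every valuation in M."""
    return frozenset(s for s in SENTENCES if model_set <= s)

T = frozenset({p})
assert models(frozenset({p, q})) <= models(T)  # Mod is antitone in T
assert T <= theory(models(T))                  # closure: T is contained in Th(Mod(T))
print(sorted(models(T)))                       # the valuations at which p holds
```

A model mapping being adequate then amounts to the semantic consequence operation Th(Mod(·)) coinciding with the syntactic one, which is exactly the identity the toy closure property gestures at.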
This article deals with algebraic logic. In particular, it discusses the theory of consequence operations and the general concept of logical independence. The advantage of this general view is its great applicability: the stated properties of consequence operations hold for almost every logical system. The notion of independence is well known and important in logic, philosophy of science and mathematics. Roughly speaking, a set is independent with respect to a consequence operation if none of its elements is a consequence of the other elements. The property of being an independent set therefore guarantees that none of its elements is superfluous. In particular, I show fundamental results that hold for every consequence operation, and hence for every logic: no infinite independent set is finitely axiomatizable, and every finitely axiomatizable set has, relative to a finitary consequence operation, an independent axiom system. The main result is that in sentential logic every set of formulas has an independent axiom system.
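The notion of an independent set can be illustrated in classical sentential logic. The sketch below is a toy implementation of my own (brute-force truth-table consequence over a fixed list of atoms), not anything from the article; it simply checks, for each formula, whether it follows from the remaining ones.

```python
from itertools import product

# Formulas are Python predicates over valuations of a fixed atom list;
# classical consequence is decided by exhaustive truth tables.
ATOMS = ("p", "q")

def entails(premises, conclusion):
    """Every valuation satisfying all premises also satisfies the conclusion."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in premises) and not conclusion(v):
            return False
    return True

def independent(formulas):
    """A set is independent iff no member is a consequence of the others."""
    return all(
        not entails([g for j, g in enumerate(formulas) if j != i], f)
        for i, f in enumerate(formulas)
    )

p = lambda v: v["p"]
q = lambda v: v["q"]
p_and_q = lambda v: v["p"] and v["q"]

print(independent([p, q]))           # neither atom follows from the other
print(independent([p, q, p_and_q]))  # p & q is superfluous given {p, q}
```

The second set is not independent precisely because p ∧ q adds nothing over {p, q}, which is the sense in which independence rules out superfluous axioms.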
We give an elementary introduction to the theory of consequence operations. We prove some elementary results concerning basic notions of logic such as tautology, consistency, independence and completeness. We show in particular that every finitely axiomatizable set is independently axiomatizable and that every consistent set has, relative to a finitary consequence operation, a maximal consistent extension. Finally, we provide an abstract semantics for consequence operations.