Scientific evidence and scientific values under risk and uncertainty are strictly connected from the point of view of Peirce's pragmaticism. In addition, economy and statistics play a key role in both choosing and testing hypotheses. Hence we may also show the connection between the methodology of the economy of research and statistical frequentism, both originating from pragmaticism. The connection is drawn by the regulative principles of synechism, tychism and uberty. These principles are values that have both an epistemic and a non-epistemic dimension. They concern both the decision to test a hypothesis and inductive risk. The validity of this result stems from the values that cost–benefit analysis imposes on scientific inquiry. Values associated with the economy of research are important not only in the pretest phases of generating hypotheses but also when hypotheses are effectively tested. Peirce took these economic considerations to leave room for an interpretation of probability which is not only frequentist and propensity-theoretic but also conceptualist, referring to degrees of belief. We show that this leeway nonetheless agrees with the theory of the economy of scientific methods.
Keywords: Abduction · Risk · Values · Peirce · Economy of research · Frequentism
Axiomathes, 2019
1 Introduction
Values play an indispensable role both in the assessment of the plausibility of scientific hypotheses and in the statistical testing of those few submitted to experimental tests. In both cases, uncertainty is intertwined with evaluative discourse. Such normative aspects of risk and scientific practice may have a unifying account in Peirce's philosophy of pragmaticism. In fact, scientific values, both epistemic and non-epistemic, may be associated with Peirce's theory of the economy of research, a form of cost-benefit analysis of scientific inquiry related to the state of information which, at least according to Rescher (1976), precedes the stage at which decisions to test are made. 1 A well-known argument from inductive risk (Hempel 1965) has shown how values play a crucial role in testing hypotheses. In the frequentist framework, thresholds of acceptance and rejection need to be fixed before any data are collected. In the context of inductive risk, the economic values of research are relevant in shaping the normative constraints of frequentism. Naturally, decisions concerning the power of the study and the acceptable thresholds of false positives and false negatives also incorporate values relevant to the economic considerations. In short, the scope of the economy of research is wider than what is usually thought: it remains relevant also when statistically testing a hypothesis.
In this paper, we will consider forms of uncertainty that are probabilistic. Risk is different from (fundamental) uncertainty and, in a technical sense, is strictly probabilistic. It refers to situations in which we know something is unknown. We can assign a probabilistic value to a phenomenon: we know the relevant probability distribution, and we take it to be possible to calculate the chance that such events come to pass. Among the many definitions, the following is classical: risk is the probability "that a particular adverse event occurs during a stated period of time, or results from a particular challenge" (Royal Society 1983, 2). The Royal Society takes risk here "as a probability in the sense of statistical theory", which obeys "all the formal laws of combining probabilities".
Under such circumstances, in which probabilities are well definable and computable, the notion of risk certainly occupies a central role. When probabilistic risk assessment can hardly be applied to events, we face severe or fundamental uncertainty (Knight 1921; Keynes 1973). 2 While uncertainty about the future outcomes of inquiry belongs to the province of abduction, including scientific discovery and conjecture-making, risk-related actions pertain to the deductive and inductive stages in the sense in which Peirce defined them in his logic of science.
1 On Peirce's theory of the economy of research, see Foss (1984), Kronz and McLaughlin (2005), Rescher (1976), Wible (1994, 2008), and the recent symposium proceedings published in vol. 52(2) of the Transactions of the Charles S. Peirce Society. 2 On fundamental uncertainty and decision making, see Chiffi and Pietarinen (2017). Subjectivist degrees of probability may also be assigned to remote uncertain events. However, probability requires the partitions of the set of all alternatives Ω to be known, which may be problematic in the case of remote events. Alternatively, certain heuristics may be used to ground our judgments on fundamentally uncertain events. We think that abductive modes of reasoning play a key role in this framework.
The mere addition of probabilities, in fact, does not yet take one from deductive reasoning to the inductive realm: probable conclusions can well be necessitated by probable premises. 3 Though risk and severe uncertainty ought not to be confounded, we can show that there is an intertwining of what in the contemporary literature has been termed "inductive risk" and what is meant by "values in science". How inductive risk and values in science are intertwined can be clarified from the point of view of Peirce's logic of science and pragmaticism. We believe that these perspectives offer novel insights into the nature of scientific practices as concerns the testing of hypotheses as well as their invention by abduction. In Peirce's methodology, hypothesis-drawing, conjecture-making, confirmation and statistical acceptance and rejection all have their respective roles. In particular, Peirce's economy of research and its logic can teach us lessons on how procedures of statistical testing mimic the dynamics of scientific inquiry.
Since the testing of hypotheses involves thresholds of acceptance or rejection sensitive to normative aspects of inquiry, for hypotheses that abduction has recognized "on probation" (Peirce to Woods, 1913), the interplay between risk and scientific values is that critical moment in inquiry worth paying close attention to. We point out the importance of non-epistemic values influencing reasoning with hypotheses, in addition to epistemic values. 4 We observe that values typically perform a crucial investigative function in those cases in which inductive risk is present. The validity of this previously largely unnoticed aspect of values in science will thus be critically analysed (see footnote 4).
In Sect. 2, we will see how values are involved in the economy of research. By establishing which hypotheses can be investigated and tested, this involvement draws the preeminent link between the abductive and inductive phases of research. Abduction is the phase in which the plausibility of hypotheses is evaluated before the "inductive phase", during which a hypothesis is statistically tested.
Our discussion draws from Peirce's original views on induction as well as from the frequentist tradition in statistics, shown to have originated from Peirce's work (Mayo 2005) (Sect. 3). Section 4 puts inductive risk into interaction with the epistemic and non-epistemic values in hypothesis testing. We observe that considerations on the economy of research are required also in the inductive phase of scientific inquiry. In Sect. 5 we show, quite surprisingly, that even the interpretations of probability, both in the frequentist and in what Peirce calls the conceptualist degrees-of-belief sense (something akin to modern Bayesianism), are affected by these economic and value-based considerations. This discussion relies on some previously unpublished works from Peirce's archives. Section 6 concludes the paper.
2 Before Statistical Testing: Values and the Economy of Scientific Research
Peirce's valuable insight into inquiry was that not all hypotheses concluded by abduction are worthy of further investigation. A hypothesis is to be submitted to a statistical trial provided that it (1) is testable, as ascertained by deduction, (2) may explain some surprising fact(s), and (3) follows the economy of research, in other words, does not require the consumption of an unreasonable amount of time, energy and thought compared to rival hypotheses. Peirce writes:
Proposals for hypotheses inundate us in an overwhelming flood, while the process of verification to which each one must be subjected before it can count as at all an item, even of likely knowledge, is so very costly in time, energy, and money - and consequently in ideas which might have been had for that time, energy, and money - that Economy would override every other consideration even if there were any other serious considerations. In fact there are no others. For abduction commits us to nothing. It merely causes a hypothesis to be set down upon our docket of cases to be tried. (CP 5.602, 1903)
Testing an empirical hypothesis can be an utterly time-, money- and information-consuming affair. Usually, limitations apply to the costs of research and to the availability of material resources. This is why economic and value-based considerations should also guide scientific inquiry before deciding to select a hypothesis and submit it to a trial. 5 In contemporary terms, Peirce proposes a version of cost-benefit analysis: a monetary valuation of different effects of interventions is undertaken by using prices revealed by the market or by agents' willingness to pay for (or accept compensation to avoid) different outcomes. This modern view is at the heart of Peirce's methodology of science. In particular, he pointed out that these economic considerations are a crucial aspect of the theory of inference:
Now economy, in general, depends upon three kinds of factors; cost; the value of the thing proposed, in itself; and its effect upon other projects. Under the head of cost, if a hypothesis can be put to the test of experiment with very little expense of any kind, that should be regarded as a recommendation for giving it precedence in the inductive procedure. (CP 7.220, 1901)
From the perspective of the economy of research, an abducted hypothesis is justified as soon as its cost, value and impact are also taken into account. We use the term "economic" in the modern sense of cost-benefit analysis, in which both costs and benefits may receive monetary values. Moreover, we hold that many methodological aspects of the design of an experiment are affected by economic considerations that can reduce the time, energy and costs that tests incur, thus increasing the possibility of gaining a desirable outcome. We do not claim that such considerations should be blindly followed in scientific inquiry. Consider the following example. Let us assume that we have at hand some hypotheses regarding areas of the brain that are involved in a given behavioural process, and that we have no access to a multimillion-dollar Positron Emission Tomography (PET) or functional Magnetic Resonance Imaging (fMRI) machine. Does it follow that the scientist simply cannot test those hypotheses? With nothing more than pencil and paper or a laptop, the scientist might well be able to test the relevant behavioural measures of those processes. Importantly, then, values shape and integrate economic considerations in our decisions to test. There can be, in fact, a dissociation between the cost of research and its usefulness. For instance, a PET or fMRI scan might be less useful in treating a patient with a brain injury than a paper-and-pencil method or other behavioural measure of that patient's performance on a given task would be. 6 The tension between economic considerations and the impact and values of research was carefully addressed in Peirce's work. According to Peirce, science is a human and social enterprise instantiated in a given historical, normative and economic ecosystem. Values are a fundamental ingredient in any decision to test (Chiffi and Pietarinen 2018), and they integrate the methodology of the economy of research with both abduction and induction. Next, we will analyse pragmaticism's main classes of such values.
The first is tychism, a methodological principle that indicates the presence of chance or randomness in scientific inquiry. Tychism implies the fallibility of empirical knowledge, as Peirce famously argued. 7 The other, quite indispensable but relatively infrequently noticed assumption at play in scientific inquiry is what Peirce termed synechism. Synechism is the principle that continuity is operative in reasoning. Haack (2005) has well emphasized its importance in science. Its characteristic feature is to tentatively isolate, among the hypotheses presented to the investigators, those that might be selected for subsequent examination and statistical testing (CP 6.173) in virtue of economic reasons.
Synechism is a guide to what is specific in scientific reasoning when making conjectures. Contrary to common proclamations (but excluding e.g. Haack 2005), synechism should not be taken to rely on metaphysical claims about the continuity of the universe and its laws. Certainly, Peirce did not think so. Synechism is principally a methodological, value-based and generalizable principle. 8 It tells which among the endless number of potential distinctions should arise as commendable demarcations in the continuum of all possibilities. 9 It emerges from what Peirce saw as a "synthesis of tychism and pragmatism" (MS 490, 1906).
Peirce also remarks that those scientific hypotheses that are "gravid with young truth" (Peirce to Woods, 1913, cf. CP 8.384) are among the good candidates to be submitted to test. Peirce calls this special quality of hypotheses their uberty. It is the third value whose importance we wish to highlight in this context. Here we turn the term into an adjective and call uberous those hypotheses that present qualities paving the way for scientific progress: truth-conducive qualities that collections of hypotheses and their relationships present to the investigator. Uberous hypotheses need not be any more truth-like than non-uberous ones; nor does uberty mean simple fruitfulness. (False hypotheses can be maximally fruitful.) An important point is that, as soon as discovery is able to discern uberous hypotheses, we are already evoking all these scientific values. In saying so we merely capitalize on Peirce's recognition of this point: an uberous hypothesis which results from abductive reasoning has "value in productiveness" (Peirce to Woods, November 6, 1913). This value in productiveness has to be taken into account whenever we reason according to the logic of science in our scientific pursuits.
Conceived in this way, we can also put into a novel perspective Peirce's quip about scientists who proclaim to advance scientific inquiry without appealing to metaphysics: those scientists are merely bound to have bad metaphysics. 10 Peirce is attacking the incorrect conception that scientists could claim to do science objectively while dispensing with the values of tychism, synechism and uberty. When these scientific values are jointly embraced in scientific inquiry, the self-correcting perspective on science can come to be guaranteed, as well as its epistemic value of objectivity. 11 As we have seen, these values form the vital, living component of Peirce's economy of research. Given the uncertainty of scientific inquiry (tychism), it is important to isolate an initial set of hypotheses (synechism) and evaluate their value in productiveness and breadth (uberty) in order to achieve the goals controlled by economic considerations. Notice that in order to reduce what Peirce called "probable error" in science, resources can be expended in testing uberous hypotheses, since considerations of probable error are "about the precision of a statistical estimator in scientific inquiry" (Wible 1994). The idea, in turn, prefigured confidence intervals and frequentist hypothesis testing.
In order to appreciate the role that values have in hypothesis testing, the next section explores in detail the frequentist procedure for hypothesis testing and its origins in Peirce's writings. We will then see in Sect. 4 how arguments from "inductive risk" bring epistemic and non-epistemic values into interaction in hypothesis testing.
3 Frequentism and Values in Hypothesis Testing
According to Peirce, induction is the third stage of scientific inference, following abduction and deduction, and its characteristic feature is that of correcting errors. It features a "distinctive method", which Peirce explains as consisting in the "provisional adoption of a conclusion, not because that a belief in that conception is in itself considered justified by the earlier premises as they stand, but because it has been reached by a rule of inquiry which persistently followed will ultimately correct any error into which it may have led the inquirer" (MS 905). 12 Induction is inference "from experiments testing predictions based on a hypothesis", which is "alone properly entitled to be called induction" (CP 7.206). As an ampliative form of inference, it is approximate and provisional. In the long run of conducting a series of well-controlled experimental tests, a scientific hypothesis can nevertheless be expected to converge to the interpretation of truth 13 (Pietarinen and Bellucci 2014).
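As a toy illustration of this self-corrective character (our sketch, not Peirce's own example; the "true rate" is a hypothetical placeholder), a persistently followed estimation rule converges on the true value in the long run of trials:

```python
# A toy illustration (ours, not Peirce's) of induction as a self-correcting
# rule: a persistently followed estimation rule converges on the true value.
import random

random.seed(1)
true_rate = 0.3            # hypothetical unknown chance of an experimental outcome
estimate, n = 0.0, 0
for trial in range(1, 10_001):
    outcome = random.random() < true_rate   # one more well-controlled test
    n += 1
    estimate += (outcome - estimate) / n    # running mean: corrects prior error
    if trial in (10, 100, 1000, 10_000):
        print(f"after {trial:>6} trials: estimate = {estimate:.3f}")
# The sequence of estimates approaches 0.3: errors are corrected in the long run.
```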
Some similarities between the frequentist approach to statistical inference and Peirce's views on the nature of inquiry have been mapped in the literature (see e.g. Hacking 1980; Mayo 2005). We will underscore the impact of values in testing procedures. For instance, Peirce's idea that induction is a type or a stage of inference rather than a behaviour resurfaces in Fisher's account of induction, 14 while inductive inference as severe testing can be found in Neyman and Pearson's work on statistical hypothesis testing. 15 Peirce's recurrent criticism of Laplace confirms that Peirce was well aware of the fact that statistical inferences fail to assign specific probabilistic values to single hypotheses, and that they draw conclusions with error rates that may be known in advance. This has since become the gold standard in statistical inference in terms of significance measures and statistical testing. Moreover, statistical error rates for hypothesis acceptance require value-laden decisions, encompassing epistemic and non-epistemic aspects of inquiry, and this is the reason why statistical methods are strictly associated with scientific values even in hypothesis testing.
12 Cf. MS 300 (1908), and especially MS 855 (1911): "By Induction, I mean a reasoning which provisionally conclude something to be true of every member of a collection, or, more frequently, of whatever there may be of a definite general kind, for no other reason than that firstly the same thing has been found to be true of a part of that collection, or of a finite collection of things of that kind, and secondly, that the manner in which this partial collection had come to be known to have the character which is concluded to belong to the whole, compels, or at least authorizes, us to regard it, provisionally, approximately, and probably, as an image of that whole". 13 The validity of induction depends on the validity of deduction, as well as on "our confidence that a run of one kind of experience will not be changed or cease without some indication before it ceases" (Peirce to Woods, 1913), among others. This is another, yet related topic which is dealt with elsewhere. 14 One could claim that inference expresses the justified action of deriving a conclusion from premises. However, this is not enough to create a behaviour, since the notion of behaviour also requires the possibility of responding or adapting to the environment in order to achieve a goal. 15 Mayo (2005) has uncovered certain key connections between Peirce's inductive methodology and the frequentist views of Neyman and Pearson. As indicated in Hacking (1980), Peirce's ideas on induction also had a great influence on Edwin B. Wilson. We add the historical tidbit that this is the same Wilson who in the early 1920s started working on a draft biography of Peirce (deposited at Harvard University Archives) and who anticipated many aspects of the confidence interval methodology in statistics. See, for instance, Wilson (1926).
In order to see how values contribute to statistical inference, let us consider the two classical and influential views in frequentism, which also show an intimate connection with pragmatistic epistemology. Contrary to what textbook accounts or even contemporary statistical practices may standardly suggest, there is a noticeable difference between Fisher's (1956) procedures of significance testing and Neyman and Pearson's statistical hypothesis testing (Neyman and Pearson 1933; Neyman 1952; Pearson 1962; Chiffi and Giaretta 2014). Fisher's aim was to consider only one hypothesis per experiment. Experimental sciences exploit datasets that may suggest a specific effect. But the success of pursuing this line of thought assumes that scientific knowledge has an interpretative-heuristic aim. In significance testing, the null hypothesis H0 (the 'no-effect' hypothesis) can be disregarded 16 when the p value (i.e., the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming the null hypothesis to be true) is less than or equal to a specific value α; otherwise H0 cannot be disregarded. When H0 cannot be disregarded, it does not follow that H0 should be accepted. One can only state whether the null hypothesis is statistically disregarded or not. According to the significance protocol, test conditions need to specify the null hypothesis exactly, and the only possible outcome is that the null hypothesis is or is not disregarded. In such frameworks, there is only one possible statistical error (Type I): to disregard H0 when H0 is in fact true.
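As a minimal sketch (our illustration with hypothetical data, not an example from the paper), the Fisher-style protocol can be rendered in a few lines: only H0 is specified exactly, the level α is fixed before the data are seen, and the only possible verdicts are that H0 is disregarded or not:

```python
# A minimal sketch of Fisher-style significance testing: only the null
# hypothesis H0 is specified exactly, and the only possible outcome is
# that H0 is disregarded or not. Data and level are hypothetical.
from math import comb

def p_value_binomial(k: int, n: int, p0: float = 0.5) -> float:
    """One-sided p value: probability, under H0 (success rate p0),
    of observing at least k successes in n trials."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

alpha = 0.05           # significance level, fixed before seeing the data
k, n = 16, 20          # hypothetical data: 16 successes in 20 trials
p = p_value_binomial(k, n)
print(f"p = {p:.4f}")  # ~0.0059
if p <= alpha:
    print("H0 is disregarded at level alpha")   # a Type I error is possible here
else:
    print("H0 cannot be disregarded (this is not acceptance of H0)")
```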
According to Fisher, induction is a type of inference. When H0 is disregarded, this means, in addition, that "Either an exceptionally rare chance has occurred or the theory is not true" (Fisher 1956, 39). Fisher argues that in pure science no decision is needed concerning the possible acceptance of a hypothesis (based on some specific values). However, when dealing with actual decision-making situations, we need tests to translate into a decision rule, and thus we decide to act as if certain assumptions, once they are completely specified, were accepted as final. 17 Given the conclusiveness of the outcome of a decision process, decisions can be handled by the Neyman-Pearson test method. This test also allows for those situations leading to Type II errors (accepting H0 when H0 is false). The rate β for Type II errors is usually fixed before any data collection. In Neyman and Pearson's framework, it now becomes possible to express a new concept: the 'power' of statistical experiments, identified as 1 − β, is the probability that the test rejects a false null hypothesis.
16 To express Fisher's ideas in a better way, we use the term "disregarded" rather than "rejected". The terms "acceptance" and "rejection" are better suited to reflect Neyman and Pearson's views. 17 Fisher clarifies the distinction between decisions and statistical inference as follows: "An important difference is that decisions are final, while the state of opinion derived from a test of significance is provisional, and capable, not only of confirmation, but of revision […]. A test of significance […] is intended to aid the process of learning by observational experience. In what it has to teach, each case is unique, though we may judge that our information needs supplementing by further observations of the same, or of a different kind" (Fisher 1956, 100-101).
The statistical power of a test to detect an exact effect is related to other quantified concepts. In particular, certain features of the test can be computed before conducting the experiment. For instance, a mathematical relation obtains between the error level α (Type I, a false positive), the error level β (Type II, a false negative), the relative risk 18 (or any other measure of association the experimenter desires to detect) and the sample size N. If one knows three of these factors, then it is possible to compute the fourth (Feinstein 1977; Cranor 1990).
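To illustrate this relation (our sketch under standard normal-approximation assumptions, with hypothetical numbers, not a formula from the paper), fixing α, the power 1 − β and the effect size to be detected determines the required sample size N:

```python
# A sketch of the relation the text mentions: fixing alpha, beta (via the
# power) and an effect size (here two proportions standing in for a
# relative-risk contrast) determines the required sample size N per group,
# via the standard normal approximation for comparing two proportions.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per group for a two-sided two-sample proportion test."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)          # quantile for the Type I error level
    z_b = z(power)                  # quantile for 1 - beta (the power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical effect: baseline risk 10%, exposed risk 20% (relative risk 2)
print(n_per_group(0.10, 0.20))  # ~199 subjects per group
```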
With reference to decision-making protocols, Neyman and Pearson argue that their method takes account of at least two hypotheses: the null as well as a well-specified alternative hypothesis. On their view, no single statistical test can provide evidence for the truth or falsity of a hypothesis: "Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong" (Neyman and Pearson 1933, 291, added emphasis).
Neyman and Pearson hit upon highly Peircean lines here, as the "rules to govern our behaviour" echo what Peirce termed the "habits of action": those general and strategic rules whose meaning is far from exhausted by a behavioural interpretation. Unfortunately, Neyman and Pearson's interpretation 19 as well as Peirce's concept of habit both came to be misinterpreted in behaviouristic senses of the term (but see Kilpinen 2009; West and Anderson 2016 among the exceptions). When they write "in the long run of experience", Neyman and Pearson could have explicated it in Peirce's terms to epitomize its essence: "I shall term a conjecture which, more recommended than another as a result of retroduction 'more likely', reserving the expression 'more probable' for that which will be true in a greater proportion of individual occasions in 'the long run', that is, in an endless experience of occasions of a definite kind" (MS 905, added emphasis). The key ideas here are the role of such an "endless succession of experiences" (MS 905) and the generalizability that accompanies those experiences. According to Peirce, experiences are scientifically meaningful when they may be interpreted as propensities. They are generalizable as soon as the rules governing our behaviour are sufficiently general and stable, in the sense of prescribing general tendencies to act in a certain way in certain kinds of situations (see Peirce 1904 on the notion of stability expected of habits of behaviour). Such rules are general habits of action: they are not rules governing any singular hypotheses or our beliefs in their validity, as those are vulnerable to change and rebuttal in the light of new evidence, often an improbable one. Habits are strategic rules concerning stable equilibria sufficiently resistant towards new evidence, proportional to the weight and depth of new sets of evidence. Changes in habits of action would concern qualitative aspects of decision-making in observed relationships across a variety of such evidential arrays.
The last clause in Neyman and Pearson's explanation ("we shall not be too often wrong") can also be criticized as too vague. It might mean that the justified expectation is one that is more often right than wrong. But this would fail to trigger acting on a hypothesis in cases in which it reasonably should. A scientific hypothesis may, after all, be subject to an unexpected rebuttal, and this is the key feature of scientific fallibilism as well as the usual mechanism of scientific progress.
This matter is put into a clearer perspective when we realize that Neyman and Pearson are rejecting the idea that hypothesis-testing in the long run implies a complete knowability of the acceptability of the tested hypothesis. What they mean by conjectural knowledge is that statistical tests cannot warrant the security of future knowledge regarding the content of our present hypotheses. According to Neyman and Pearson, what statistical tests can do is to evaluate whether a specific hypothesis is acceptable in accordance with the specific values. In this framework, Type-I and Type-II errors can be balanced out and reduced in their occurrence, but never completely eliminated. At this point, the choice of values enters the picture of hypothesis testing: economic reasons, indeed, may favour one type of statistical error over the other.
4 Risk, Values and Scientific Hypotheses

Decisions concerning levels of acceptability of a hypothesis are based not only on statistical reasons and epistemic values, among which we may list the simplicity, objectivity or informativeness of hypotheses. Economic considerations also shape the decisions concerning the balance of how statistical errors are valued in a trial. Type-I and Type-II errors may be reduced if, for instance, the sample size is enlarged. Yet data collection may require greater expenditure of time, money and energy. Therefore, an optimal decision among the thresholds of statistical error has to cohere with the concept of the economy of research. This is particularly true when non-epistemic values enter the assessment of inductive risk. 20 Toxicology and diagnostics are examples where tests are designed to deal with specific false positive and false negative values. Such choices can hardly be made well by taking into consideration only epistemic values, as many other reasons given by adverse consequences cannot be dismissed. For instance, reasons associated with values and costs may mean preferring the risk of false positive diagnoses over false negative ones. Accepting and rejecting hypotheses involves, in fact, value judgements with different dimensions (Rudner 1953; Douglas 2000). Hempel (1965) pointed out that the complexity of decision-making increases as soon as we formulate the evaluative context in ways that are devoid of practical considerations. In purely epistemic decision-making, the acceptable thresholds for statistical error may not easily be determined. It is only a convention, not a pragmatic consequence, that the Type I error rate tends to be fixed at 0.05 or below while the Type II error rate is somewhere around 0.2. 21 There is no real reason for having only such values, and other values should also be entertained, as long as the acceptable and expected error rates are stated before any data collection, with a careful weeding out of confounding factors before entering stricter comparison tests and error corrections. What is worth remarking is that in order to fulfil the desiderata of Peirce's economy of research, the non-epistemic values that permeate decision-making have to be welded with epistemic ones. This interplay between epistemic and non-epistemic values is crucial in deciding the acceptance of statistical hypotheses. 22 Yet, according to Jeffrey (1956), no decision problem obtains regarding the acceptance or rejection of hypotheses in pure science, but only the attribution of a provisional degree of confirmation to a hypothesis. Still, also in such cases non-epistemic values come to be woven into epistemic ones. Consider Jeffrey's problem of deciding whether a specific batch of polio vaccine is free from active polio virus:
There is nothing in the hypothesis, "This vaccine is free from active polio virus", to tell us what the vaccine is for, or what would happen if the statement were accepted when false. One naturally assumes that the vaccine is intended for inoculating children, but for all we know from the hypothesis it might be intended for inoculating pet monkeys. One's confidence in the hypothesis might well be high enough to warrant inoculation of monkeys but not of children (Jeffrey 1956, 242; see also Levi 1960). 23

Without entering into the dispute whether in science there is just the attribution of a degree of confirmation to hypotheses or whether we also decide to accept or refute them, 24 our view on Jeffrey's example brings out the dependence of non-epistemic values and 'economic' considerations (such as the different values we associate with the health of children or with the health of monkeys) on epistemic values regarding the degree of confirmation. Non-epistemic values can even take priority over epistemic ones, for example when reasons for expediting the inquiry become highly pressing. However, while we acknowledge the centrality of non-epistemic values in science, we do not take them to be constitutive, that is, as values necessary in every scientific context (see e.g. Longino 2004 for such a view). Yet non-epistemic values can be contextual in specific occasions. 25 As the risk of inductive error cannot be completely eliminated from rational scientific pursuits, the statistical error rates may be balanced by special reasoning strategies that concern the testing of hypotheses. A similar viewpoint concerning scientific hypothesis-testing embraces a range of values that influence scientists' decisions (Hintikka 1998). As seen above, it is difficult to find clear-cut general methodological guidelines specifying the values of these inferential thresholds for statistical errors. 26 But if this is so, then the epistemic and non-epistemic values associated with the inductive risk of error may lie in the continuum of possibilities. This, in turn, means also that such values interact at the very moment of choosing hypotheses for testing: an observation that follows, as we have seen, from Peirce's methodology of scientific inquiry, namely tychism, synechism and the uberty of those hypotheses, which play an important role in fixing the values of statistical errors in a way that agrees with the principles of the economy of research.
25 It is worth noting that disagreement on the role of values in science can also be due to the lack of distinctions between different senses of science, values, and their interplay. 26 A promising methodology to compute optimal inferential thresholds (also in virtue of economic considerations), and one that may provide at least some guidelines, is Signal Detection Theory (SDT) (Green and Swets 1966; McNicol 1972). Interestingly, the unacknowledged origins of SDT, despite its having been developed quite independently, also date back to Peirce's early experimental work on perception with Jastrow (Peirce and Jastrow 1885), especially their rebuttal of the Weber-Fechner discrete threshold principle in psychophysics and its reliance on unviable Laplacian principles of probability. The early goal of SDT was to statistically understand the mechanisms by which the human perceptual system is able to decide and report upon reception of signals amidst insignificant noise.
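To make the force of Jeffrey's example vivid, here is a minimal sketch (our illustration with hypothetical costs, not Jeffrey's or the paper's formalism) of how a non-epistemic cost of erroneous acceptance raises the degree of confirmation required before acting: acting minimizes expected cost only if the degree of belief b satisfies (1 − b)·C_fa < b·C_fr, i.e. b > C_fa/(C_fa + C_fr).

```python
# A sketch (ours) of how a non-epistemic cost of error shifts the degree of
# confirmation needed before acting, as in Jeffrey's vaccine example: the
# same evidence may warrant inoculating monkeys but not children.
# All cost figures are hypothetical placeholders.

def act_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
    """Degree of belief in H ('vaccine is virus-free') above which acting
    (inoculating) minimizes expected cost:
    act iff (1 - b) * cost_false_accept < b * cost_false_reject."""
    return cost_false_accept / (cost_false_accept + cost_false_reject)

b = 0.995  # hypothetical degree of confirmation of the hypothesis
# Inoculating monkeys: moderate harm if the vaccine is in fact contaminated
print(b >= act_threshold(cost_false_accept=10, cost_false_reject=1))     # True
# Inoculating children: catastrophic harm if contaminated
print(b >= act_threshold(cost_false_accept=10000, cost_false_reject=1))  # False
```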
5 Interpretations of Probability and Values

We proceed to show that there is a particular overlooked task that Peircean values accomplish in the interpretations of probability. Interpretations of probability are guided by value-based considerations within a cost-benefit framework similar to the economic considerations of research in the previous sections. A proper interpretation of probability is possible only when these values are at our disposal in specific contexts, articulated together with their costs and benefits. Peirce presented this idea in the following way. In the series of papers entitled Illustrations of the Logic of Science, he famously argues that the idea of probability essentially belongs to a kind of inference which is repeated indefinitely:
[…] Yet, if a man had to choose between drawing a card from a pack containing twenty-five red cards and a black one, or from a pack containing twenty-five black cards and a red one, and if the drawing of a red card were destined to transport him to eternal felicity, and that of a black one to consign him to everlasting woe, it would be folly to deny that he ought to prefer the pack containing the larger proportion of red cards, although, from the nature of risk, it could not be repeated (CP 2.652, 1878). 27

The interplay between different normative considerations on probability may suggest that a convenient way to interpret probability is not what we today tend to blanket as the frequentist perspective, but what Peirce termed (and here he was borrowing John Venn's terminology) a "conceptualist theory". This feature is again inspired by the desiderata of the economy of research, which turns out to be one of the main ingredients of pragmaticism. Peirce's depiction of conceptualism as degrees-of-belief talk may even resemble modern Bayesianism. 28 Far from accepting Bayes's rule as an appropriate decision method in science, however (given the unacceptability of Laplace's principle of insufficient reason, or what after Keynes 1948 came to be known as the "principle of indifference"), Peirce manages to pinpoint the limits of a conceptualist perspective on probability. This principle states, in its informal presentation, that given a set of n possibilities, if there is no evidence favouring one possibility over the others, then the probability of each possibility is assumed to be 1/n. Peirce observes that under the conceptualist view, complete ignorance between two possibilities is represented by the probability ½. He then imagines the following scenario:

[L]et us suppose that we are totally ignorant what colored hair the inhabitants of Saturn have. Let us, then, take a color-chart in which all possible colors are shown shading into one another by imperceptible degrees. In such a chart the relative areas occupied by different classes of colors are perfectly arbitrary. Let us inclose such an area with a closed line, and ask what is the chance on conceptualistic principles that the color of the hair of the inhabitants of Saturn falls within that area? The answer cannot be indeterminate because we must be in some state of belief; and, indeed, conceptualistic writers do not admit indeterminate probabilities. […] The answer can, therefore, only be one-half, since the judgment should neither favor nor oppose the hypothesis. What is true of this area is true of any other one; and it will equally be true of a third area which embraces the other two. But the probability for each of the smaller areas being one-half, that for the larger should be at least unity, which is absurd. (Peirce 2014, 138, "The Probability of Induction", 1878)
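The absurdity can be restated compactly (our formulation of the passage just quoted, not Peirce's own notation): indifference assigns probability one-half to every enclosed region, but finite additivity then forces the probability of a union of two disjoint regions to reach unity.

```latex
% A compact restatement (our formulation) of the additivity problem in
% Peirce's Saturn example: A and B are disjoint regions of the color chart.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
By the principle of indifference, $P(R) = \tfrac{1}{2}$ for \emph{every}
enclosed region $R$. Take disjoint regions $A$ and $B$, so $A \cap B = \emptyset$,
and let $C = A \cup B$. Additivity gives
\[
  P(C) = P(A) + P(B) = \tfrac{1}{2} + \tfrac{1}{2} = 1,
\]
whereas indifference, applied directly to $C$, demands $P(C) = \tfrac{1}{2}$.
No coherent probability function can assign $\tfrac{1}{2}$ to every region:
this is the absurdity Peirce draws from conceptualistic principles.
\end{document}
```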
The Saturn scenario demonstrates well the paradoxical conclusions associated with an uncritical application of the principle of indifference. 29 "How would an insurance company fare who should try to do business on such a basis?", he later asked in reference to his Saturn example. "A basis for business has got to be knowledge and not ignorance" (MS 514, 1911). Peirce's indication was that values that have a pre-eminently non-epistemic function (as in the case of the cards and eternal felicity) suggest a conceptualist interpretation of probability, while at the same time his example of the inhabitants of Saturn illustrates how epistemic values result in the inadequacy of the conceptualist interpretation. We can now explain this bifurcation by the necessity to follow the principles of economy, which change our pre-theoretical views on the meanings of probability whenever they violate our economic and value-based considerations.

In standard scientific contexts, which largely populated the legacy that followed Peirce, interpretations of probability indeed stemmed from a frequentist account. However, certain well-conceived variants may depend on the needs of the situations at hand, such as those having to do with matters of "vital importance"; in such contexts values can play an increasingly important role in deciding among rival hypotheses. Interestingly, Peirce once explicitly wrote that it is the Bayesian ("Bajesian") interpretation of probabilities that defines a "special meaning of the word 'probability'":

But when they come to "inverse probabilities", that is, to the probabilities of conjectural antecedents of known events, as consequents, they adopt a principle due to a Mr. Bayes (Phil. Trans. for 1765, 370), that the probabilities of different possible antecedents are proportional to the probabilities deduced from them of the known state of things. This might be adopted as the definition of a special meaning of the word "probability", which might be called "Bajesian probability". They invariable assume that the probabilities of supposable states of things as to actual occurrences alternatives we know absolutely nothing of them except that one of them and only one is the fact and know nothing more. These writers profess to calculate the probability of an inductive conclusion from the facts stated in its premisses. But this they can only do by [end] (MS 905).

Bayesian reasoning thus has a certain marked role in science, but it does not define its rationale. Unfortunately this passage ends abruptly and leaves open many questions. For one, the "known state of things" refers to the elicitation of prior probabilities, which poses problems of interpretation in general but also when attempting to model scientific discovery by Bayesian principles. Important novel discoveries are often unexpected, and reliable methods for deciding upon priors are hard to come by in such situations. For Peirce, this special or narrow meaning of probability, as well as the applicability of the principle of indifference, would require the elicitation of "the one and the only" alternative among the "supposable states of things" as the known fact (MS 905). 30 Nonetheless, this passage communicates a related matter that has eluded earlier commentators. In fact, most of what Peirce wrote in his long and unpublished MS 905 ("On an Unpretentious Argument for the Reality of God"), in which this quotation appears, concerns whether the pragmatist notion of induction and its risk has room for alternative interpretations of probability. The crux is that those alternatives do not define what Peirce took to be the "real probability" (MS 905). The reality of probabilities is defined in his doctrine of chances; his later maxim of pragmaticism expresses it as a possibility that "can become actual" (MS 288). Both the doctrine of chances and the maxim of pragmaticism assign a new, generalized meaning to probability. This semantic shift in the notion of probability is substantially due to considerations prompted by the economy of research, namely the integration of the epistemic values that alone merely suffice to ground the problematic principle of indifference (as exhibited in his Saturn example) with non-epistemic values (as involved in his example of the cards and eternal felicity). Indeed, values have a key role in understanding the intended interpretation of probability, and that interpretation seems to agree with the principle of the economy of research.
29 For instance, one of the paradoxes that arises is Bertrand's paradox. One cannot encircle an area on a continuous chart at will and claim the result to be informative without specifying what the precise method and the purpose of doing so would be for the calculation of the relevant probabilities, especially when, as in Peirce's example, such distributions are continuous and important information about priors is missing. For instance, encircling the entire graph would be to specify a method, and the result would not be arbitrary. See also Carnap (1955) and van Fraassen (1989) on problems with the principle of indifference.