Discussion

How Pseudo-hypotheses Defeat a Non-Bayesian Theory of Evidence: Reply to Bandyopadhyay, Taper, and Brittan

Pages 299-306 | Published online: 17 Aug 2017
ABSTRACT

Bandyopadhyay, Taper, and Brittan (BTB) advance a measure of evidential support that first appeared in the statistical and philosophical literature four decades ago and has been extensively discussed since. I have argued elsewhere, however, that it is vulnerable to a simple counterexample. BTB claim that the counterexample is flawed because it conflates evidence with confirmation. In this reply, I argue that the counterexample stands, and is fatal to their theory.

Notes

1 Another measure ordinally equivalent to the LR is [P(E|H) − P(E|∼H)] ÷ [P(E|H) + P(E|∼H)]. Kemeny and Oppenheim (Citation1952) derived this function from their own list of adequacy criteria for measures of factual support.
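The ordinal equivalence is a one-line algebraic check, not spelled out in the note: dividing numerator and denominator by P(E|∼H) (assumed nonzero) rewrites the Kemeny–Oppenheim function as a strictly increasing function of the likelihood ratio. A sketch:

```latex
% Kemeny--Oppenheim measure F rewritten in terms of the likelihood ratio
% LR = P(E|H)/P(E|~H), assuming P(E|~H) > 0:
F(E,H)
  = \frac{P(E \mid H) - P(E \mid \lnot H)}{P(E \mid H) + P(E \mid \lnot H)}
  = \frac{\mathrm{LR} - 1}{\mathrm{LR} + 1}.
% Since x \mapsto (x-1)/(x+1) has derivative 2/(x+1)^2 > 0 on x \ge 0,
% F is strictly increasing in LR, hence the two measures rank any pair
% (E, H) identically.
```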

2 Good also explicitly included a symbol K to signify background information. Nowadays most authors regard this information as absorbed in P.

3 Allowing only simple hypotheses would preclude the enormously wide range of applications where simple hypotheses are tested against composite alternatives: for example, a simple null hypothesis specifying a parameter value t = t0 against a composite alternative like t ≠ t0 or t > t0 or t < t0. But why should a measure of evidential support be restricted to statistical hypotheses, and not theories generally? BTB do not say. The Santa hypothesis at the centre of my counterexample is not statistical but deterministic, so it may be that BTB mean to include such hypotheses as limiting cases (as we shall see, Santa predicts the data with probability 1).

4 An infinitesimal is a number less in absolute magnitude than any positive real number. I should emphasise that these are perfectly respectable numbers. In the mid-twentieth century, Abraham Robinson exploited the resources of model theory to show that there is an elementary extension of the real numbers, a field of so-called hyperreal numbers, containing, besides copies of the real numbers themselves, infinitesimals and the infinite reciprocals of infinitesimals. Nonstandard analysis, nonstandard probability theory, and nonstandard physics have since become flourishing fields in their own right, often offering much simpler proofs of classical results. A well-known result of Bernstein and Wattenberg (Citation1969) is that positive infinitesimal probability values can be assigned to all the members of a continuum-sized outcome-space in such a way that the values sum, in a suitable sense, to 1.

5 We can note that the great Bayesian, Laplace, famously described the Bayesian theory as common sense reduced to a calculus—‘le bon sens réduit au calcul’.

6 Ironically, BTB themselves find the conflation they condemn irresistible: a positive outcome from a diagnostic test for TB in which the LR is 25.7 shows, they say, ‘that the individual is more likely (approximately 26 times more likely) to have the disease than not’ (BTB Citation2016, 295; emphasis added).
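Why this is a conflation can be made explicit with the odds form of Bayes' theorem: the LR multiplies the prior odds, so an LR of 25.7 yields posterior odds of 25.7 only when the prior odds are 1. A sketch with a hypothetical prevalence figure (the 1-in-1000 base rate below is illustrative, not from the source):

```latex
% Odds form of Bayes' theorem: the LR multiplies the *prior* odds.
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
  = \mathrm{LR} \times \text{prior odds}.
% With LR = 25.7 and a hypothetical prevalence of 1 in 1000
% (prior odds 1/999), the posterior odds are 25.7/999 \approx 0.026,
% i.e. P(H|E) \approx 0.025: the tested individual remains far *less*
% likely than not to have the disease, despite the large LR.
```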

7 It is known as the principle of Regularity, so named by Rudolf Carnap. The technical problem for Regularists is that many of the outcome spaces of statistics are intervals of real numbers, and it is mathematically impossible to assign positive real-valued probabilities to all the points in them. There have been appeals to infinitesimals to solve the problem (see note 4 above), though these attempts are controversial. Howson (Citation2016) is my own contribution to the discussion.

8 Mentioned in Howson (Citation2016).

9 It is easy to show that no evidence can increase a probability of 0, or for that matter 1, by that means. BTB also misstate the principle, which they claim ‘says that [the agent’s] degree of belief in H1 after the data are known is given by the conditionalisation principle Pr(H1|D), assuming the Pr(D) is not zero’ (BTB Citation2016, 4). But Pr(H1|D) is a number, not a principle. What the principle does say is that after learning D, and nothing stronger, the agent’s new belief function should be PD(.) := P(. |D). Nor is there any need to assume that P(D) is nonzero: there are well-known axiomatisations of conditional probability in which the second argument can have probability 0, so long as it is consistent.
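The "easy to show" claim in the note can be filled in with a one-line derivation (stated here under the simplifying assumption P(D) > 0; the axiomatisations of conditional probability mentioned in the note handle the P(D) = 0 case):

```latex
% Conditionalisation cannot raise a probability of 0:
P(H \mid D) \;=\; \frac{P(H \wedge D)}{P(D)}
  \;\le\; \frac{P(H)}{P(D)} \;=\; 0
  \qquad\text{when } P(H) = 0.
% Dually, if P(H) = 1 then P(\lnot H) = 0, so by the above
% P(\lnot H \mid D) = 0 and hence P(H \mid D) = 1: a probability of 1
% is likewise fixed under conditionalisation.
```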

10 Recall that this provision is part of Edwards’s Likelihood Axiom!
