
In Defence of an Inferential Account of Extrapolation

 

ABSTRACT

According to the hypothesis-generator account, valid extrapolations from a source to a target system are circular, since they rely on knowledge of relevant similarities and differences that can only be obtained by investigating the target, thus removing the need to extrapolate; hence, extrapolative reasoning can only be useful as a method for generating hypotheses. I reject this view in favour of an inferential account, focused on extrapolations underpinning the aggregation of experimental results, and explore two lines of argumentation supporting the conclusion that these extrapolations can be validated in a noncircular manner. The first argument relies on formal proofs of inferential validity demonstrating that it is possible to reason from prior knowledge of causal structures in order to determine whether a claim can be extrapolated. The second argument builds on the fact that the hypothesis-generator account overlooks key inferential and experimental practices resulting in progressively better-informed extrapolations.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 The theory of evolution by natural selection (Godfrey-Smith Citation2007) and the stochastic nature of the molecular interactions underpinning biological activity (Rao, Wolf, and Arkin Citation2002) provide strong reasons to suspect that biological systems are characterized by a significant degree of variability.

2 The thalidomide disaster (Miller Citation1991) illustrates the tragic consequences of medical practices based on faulty extrapolations. Invalid extrapolations are also thought to be responsible for the lack of translatability and replicability of experimental findings (Pound and Bracken Citation2014).

3 Steel further argues that the carcinogenic effect of aflatoxins in humans was inferred by extrapolating outcomes observed in animal models that process aflatoxins in the same way as humans. These would include rat models, but not mouse models. While the strategy behind Steel’s reasoning is sound, I could not corroborate this claim in the scientific literature. In addition to the sources reviewed in Groopman et al. (Citation2002), studies report a carcinogenic effect of aflatoxin exposure in infant animals, including mice (Vesselinovitch and Mihailovich Citation1972; Woo et al. Citation2011).

4 Fletcher (Citation2021) ties these two requirements to the notions of ‘direct’ and ‘conceptual replicability.’ The former involves a repetition of the same experiment, while the latter refers to attempts to replicate a result using different methods, models, and/or experimental designs.

5 Regulatory frameworks in clinical research and standards of publication and peer review in basic science demand that claims are supported by the best evidence available. Although gold standards change over time, as improved experimental designs and techniques are developed, the distinction between conclusive and preliminary evidence tends to remain categorical in the sense that a finding is not accepted by the scientific community unless it is corroborated by the highest-quality studies available at the time.

6 Additional examples are discussed in Section 4. See also LeDoux (Citation1996, Ch. 6) for a discussion of the cross-validation of findings from animal and human models of fear and anxiety, Hobson (Citation2009, Supplementary information S2) for examples from sleep and dream research, Baetu (Citation2016, Citation2019) for examples from immunology, and Guttinger (Citation2019) for a discussion of the practice of ‘microreplication’ of results through the inter-experimental use of controls.

7 The rules of aggregation discussed here rely on a summing of elements of preliminary evidence, which may seem to go against methodological standards stipulating a categorical distinction between conclusive and preliminary evidence. I think, however, that there is no contradiction between the two views. Ideally, claims should be accepted only when corroborated by the best evidence. However, even when experimental designs and techniques capable of producing such evidence are available, there are circumstances in which it is impossible, whether technically, legally, or ethically, to rely on them. In the case of aflatoxin, currently accepted gold standards dictate that the best evidence for its carcinogenic effect in humans would come from RCTs in which human subjects are intentionally exposed to the putative carcinogen. For obvious reasons, such experiments cannot be conducted. Hence, in this particular case, the best evidence is generated by a combination of controlled experiments in surrogate models and observational studies in humans.
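One way to make the ‘summing’ of evidence precise is Bayesian (a minimal sketch, assuming the individual results e1, …, en are conditionally independent given the hypothesis H): Bayes’ theorem then implies that each result contributes an additive term to the log-odds of the hypothesis,

\[ \log \frac{P(H \mid e_1, \ldots, e_n)}{P(\neg H \mid e_1, \ldots, e_n)} = \log \frac{P(H)}{P(\neg H)} + \sum_{i=1}^{n} \log \frac{P(e_i \mid H)}{P(e_i \mid \neg H)}. \]

On this reading, several pieces of preliminary evidence can jointly raise the posterior odds of a claim even when no single piece is conclusive (cf. Jaynes Citation2003).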

8 “The extrapolation of causal effects computed in one population to a second population is referred to as transportability of causal inferences across populations” (Hernán and Robins Citation2020, 46).

9 More generally, Pearl and Bareinboim are concerned with methods of meta-analysis aggregating data from multiple studies of various designs. A potential problem with such practices is that the mechanisms generating the data differ between experimental and observational studies, which raises questions about which adjustment formulas should be used to correct for these differences.

10 This result follows from the fact that, if the causal Markov condition can be assumed, “X blocks all paths from S to Y once we remove all arrows pointing to X and condition on Z” (Pearl and Bareinboim Citation2014, 591).
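Stated formally (a standard rendering of Pearl and Bareinboim’s result, under the assumptions listed in note 11): if Z satisfies this blocking condition, i.e. Z is ‘S-admissible,’ then the causal effect in the target population can be computed from the effect measured experimentally in the source population via the transport formula

\[ P^{*}(y \mid do(x)) = \sum_{z} P(y \mid do(x), z)\, P^{*}(z), \]

where P and P* denote the source and target distributions, respectively (Pearl and Bareinboim Citation2014). The conditional effect identified in the source is simply reweighted by the target’s observed distribution over Z, so no new interventions on the target are required.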

11 Markov, minimality, and positivity are general assumptions about causal processes and about the availability of the statistical data needed to perform the required calculations; they do not specifically concern the extrapolated claims.

12 These findings and predictions are inspired by studies reviewed in Kwon et al. (Citation1998), Miura et al. (Citation2001), and Baetu and Hiscott (Citation2002). For philosophical discussion, see Baetu (Citation2016).

13 Similarly, the studies summarized above were used to hypothesize a carcinogenesis mechanism involving a synergistic interaction between AFB1 and HBV exposure, and a causal chain from AFB1-DNA adduct formation to p53 mutations to HCC. The hypothesized mechanism suggested two possible treatment strategies, namely early immunization against HBV and the use of drugs known to have an effect at early stages of carcinogenesis. The former is currently deployed in populations at risk. The latter was shown to work in animal models, but failed to provide significant protection in clinical trials (Groopman et al. Citation2002).

14 The proposed treatment, which involves a selective blocking of death receptors DR4 and DR5 in lymphocytes, was not approved for testing in humans as a potential treatment for AIDS, primarily because already available antiviral drugs can significantly delay or even block disease progression. Nevertheless, the converse strategy of activating these receptors in an attempt to kill cancer cells is currently under investigation in multiple phase II and III clinical trials (Snajdauf et al. Citation2021).

15 One way of conceptualizing the difference between the uninformed state preceding testing and the informed, post-testing state is in terms of populations of measurements. In the absence of prior information about the validity of data aggregation practices, the only available option is to endorse the hypothesis-generator account and independently test the predictions of the hypothesis under scrutiny in each experimental model. However, upon corroboration of this hypothesis, researchers learn that different experimental models are comparable systems. In this example, comparability amounts to a common mechanism of T-cell depletion at work in a variety of cell, animal, and human models of AIDS. Comparability indicates that it is safe to treat surrogate and target models as belonging to the same population of systems and to treat the results of interventions in these models as ‘conceptual replications’ (Fletcher Citation2021). For instance, one may conduct a meta-analysis by pooling the results obtained in different models to improve estimates of the causal efficacy of an intervention. For more detailed examples, see Pearl et al. (Citation2016, Ch. 1) and Jaynes (Citation2003, 257–61, 177–86).
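A minimal sketch of such pooling, assuming a fixed-effect model (which is warranted precisely when the models are comparable in the above sense): if each model i yields an effect estimate \(\hat{\theta}_i\) with standard error \(\sigma_i\), the inverse-variance weighted estimate and its variance are

\[ \hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\sigma_i^{2}}, \qquad \operatorname{Var}\big(\hat{\theta}\big) = \frac{1}{\sum_i w_i}, \]

so aggregating results across comparable models yields an estimate of causal efficacy with lower variance than any single study provides.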

16 A similar ampliative corroboratory effect is achieved by the use of positive controls, which not only replicate previous findings, but often extend their validity to new experimental contexts (Guttinger Citation2019).

17 It might be possible to facilitate the exchange of information through tools such as prediction markets (Mann Citation2016; Polgreen et al. Citation2007). Successful application of a prediction market requires a sufficient number of ‘traders,’ a diversity of information, and an incentive to trade information; all three conditions are usually met in basic science.
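As an illustration of how such a market aggregates dispersed information (the specific mechanism below is one standard implementation, not something discussed in the cited sources): under Hanson’s logarithmic market scoring rule, a market maker holding outstanding shares \(q_i\) on mutually exclusive outcomes, with liquidity parameter b, prices trades via the cost function and implied prices

\[ C(q) = b \ln \sum_{i} e^{q_i/b}, \qquad p_i = \frac{e^{q_i/b}}{\sum_{j} e^{q_j/b}}, \]

where the prices \(p_i\) sum to one and can be read as the traders’ aggregated probability estimates for the corresponding outcomes.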

Additional information

Funding

This work was supported by SSHRC: [Grant Number 430-2020-0654].
