
Using Multiple Means of Determination

Pages 295–313 | Published online: 31 Mar 2014

Abstract

This article examines a metaphilosophical issue, namely existing disagreements in philosophy of science about the significance of using multiple means of determination in scientific practice. We argue that these disagreements can, in part, be resolved by separating different questions that can be asked about the use of multiple means of determination, including the following: What can be concluded from the convergence of data or the convergence of claims about phenomena? Are the conclusions drawn from the convergence of data and of statements about phenomena of special importance to the debate about realism and antirealism? Do inferences based on multiple means of determination have stronger epistemic force than inferences that are secured in other ways? Is the epistemic goal of deploying multiple means of determination well entrenched within the scientific community? Most of these questions can be discussed from both a formal and an empirical perspective. If the differences in perspective are taken into account, some disagreements can be easily resolved. In part, however, the disagreements reflect historiographical challenges that are very difficult, if not impossible, to meet.

Acknowledgements

We are grateful to the editor and two anonymous referees of this journal for several helpful comments and suggestions.

Notes

[1] In this article, we will use the terms ‘triangulating’ and ‘using multiple means of determination’ interchangeably.

[2] For taxonomies of kinds or families of robustness concepts, see Woodward (2006) and Calcott (2011).

[3] In her introduction to the recent collected volume, Characterizing the Robustness of Science, Soler proposes an even wider concept, ‘solidity’. This concept serves as a catch-all term to cover not only Wimsattian robustness, but also any other processes through which scientific knowledge is established (Soler 2012, 4). The practice of using multiple means of determination to validate an experimental result is just one among many possible ways of ‘solidifying’ a knowledge claim.

[4] Cf. Perrin (1916, 206). Several philosophers have utilized this example to illustrate the concept of robustness, for instance Culp (1995) and Woodward (2006).

[5] Bechtel (2002) has described neurological research that also involves multiple means of determination. The overall aim of his study is to relate neuronal structures to cognitive operations. In Bechtel's example, the purpose of aligning techniques (lesion, cell recording, and neuroimaging) is to obtain not converging information but complementary results, which are integrated into a more comprehensive account of brain function than would be possible based on just one technique.

[6] We use the term ‘unobservable’ loosely, in the sense of ‘not accessible to the unaided senses’.

[7] Of course, it is not necessary to illustrate formal analyses of reasoning patterns through cases.

[8] For a more detailed characterization of case studies, see Burian (1997, 384–385).

[9] In her contribution to the philosophical debate about triangulations, which prominently features the work of Perrin, Cartwright describes her ‘primary concerns’ as ‘epistemological’ and positions herself at one end of ‘the line connecting real-life methodology and ideal epistemology’ (Cartwright 1991, 143). At the other end of the line, she positions Collins, to whose essay she is responding.

[10] Sober, who presents several constructed cases to explore the structure and conditions of successful inferences based on multiple determinations, characterizes his project as providing ‘intuitive epistemological principles with an explicit and somewhat formal representation’ (Sober 1989, 275).

[11] Campbell and Fiske already highlighted this problem in the context of psychological experimentation. They noted that measurements of traits can be considered valid if they can be supported by maximally independent methods. They added that in actual research contexts, ‘independence’ was ‘of course’ a matter of degree (Campbell and Fiske 1959, 83; see Culp 1995).

[12] Hacking uses the term ‘robust’ to describe this situation, but he does so in the everyday sense of ‘resilient’. Images are ‘curiously robust’ in the following sense: we continue to assume that a particular image that we deem to be a representation of a microscopic object is a representation of a real object even if our interpretation of what is displayed has changed (Hacking 1983, 199).

[13] Prior to Hacking's argument, the notion of independence carried the epistemic weight in the responses to the problem of theory-dependence. It was argued that the theory-dependence of experimental results is not epistemically problematic as long as one can dissociate the theoretical hypothesis under test from the theory involved in the interpretation of experimental data used to test that hypothesis. In these cases, the theory involved in testing is independent of the outcome of the experiment, in the sense that it does not rely for its confirmation on the experimental data it is supposed to interpret (Kosso 1988). The arguments drawing on the convergence of data from different interventions add a new layer to these traditional arguments from independence between theory and experiment.

[14] Bechtel (2000) has suggested that the epistemic credentials for an inference based on the convergence of sensations on a perception can be transferred to inferences based on the convergence of data on claims about phenomena. But precisely because the inferential tasks involved in the two cases are different, this does not appear to be a permissible move.

[15] The rationale underlying the inference from the convergence of data to the existence of an unobservable phenomenon can be rendered as a common-cause argument: if we are confronted with similar effects and can assume that none of these effects causes any of the others, we may infer that a common cause produced them (Salmon 1984). In Salmon's example of multiple independent witnesses, we reason from the convergence of the data, namely the witnesses' reports, to a common cause that prompted those reports, namely the past event (Salmon 1984, 217–223).

[16] If we deal with a complex situation like the work of Perrin, however, the rationale for the inference to the unobservable phenomenon is different. In this case, complex theories and background assumptions are needed to tell a causal story showing that the experimental result in question is indeed brought about by the cause in question and therefore supports the empirical claim. The rationale underlying the inference to the unobservable phenomenon is that it would be very improbable (but not impossible) for the different experimental endeavours to provide support for the empirical claim if they were not largely correct. The power of the argument results from the fact that it is very unlikely for all these experiments (assuming they are independent) to be flawed in such a way that they nonetheless support the same conclusions about the features of the unobservable phenomena that they target.
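The structure of this improbability argument can be given a minimal formal sketch; the sketch and the numbers in it are purely illustrative and are not drawn from Perrin's case. Suppose that each of k mutually independent experimental routes has a probability ε_i of erroneously supporting the empirical claim. The probability that all of them err in this way at once is then the product of the individual error probabilities:

% Minimal illustrative sketch; the error probabilities are hypothetical.
\[
  P(\text{all } k \text{ routes erroneously support the claim}) = \prod_{i=1}^{k} \epsilon_{i}.
\]
% e.g. for three routes with hypothetical error probabilities 0.1, 0.2, and 0.15,
% the joint error probability is 0.1 x 0.2 x 0.15 = 0.003.

Even modestly reliable routes, if genuinely independent, are thus jointly very unlikely to converge on the same conclusion by error, which is the form of the argument sketched above.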

[17] For discussions of validation strategies in experimental practice in physics and biology, see, e.g., Franklin (2010).

[18] Reliable process reasoning means that the scientists pursued particular lines of experiment, namely those they considered most reliable, rather than trying to multiply experimental angles. Hudson makes the same point in a recent study of the search in astroparticle physics for weakly interacting massive particles (WIMPs), a leading candidate for dark matter (Hudson 2009). He argues that the scientists involved do not deploy triangulation and occasionally even disavow its value, and that one of the key research groups involved in the debate, DAMA, pursued one particular methodology to substantiate their claims in favour of WIMPs (Hudson 2009, 176). The other research groups obtained negative results, each in a different way. Hudson's point is that none of these groups made an argument based on these multiple lines of research. Instead, each group argued for the reliability of its own approach by showing that various sources of error had been taken into consideration, and that its negative results were therefore valid. (For the role of robustness analysis in this episode, see Staley 2010.)

[19] In his essay on case studies, Pitt also presents a second argument against case studies that is in tension with the argument about induction. He argues that science is always in flux and that a general account of science is therefore impossible: ‘As philosophers we seek universals, but the only universal regarding science is change’ (Pitt 2001, 374).
