ABSTRACT
This paper re-interprets the precautionary principle as a ‘social epistemic rule’. First, it argues that policy-makers should sometimes act on claims which have not been scientifically established. Second, it argues that, given how scientists ought to solve ‘inductive risk’ problems, such guidance is required not only under actual conditions, but under any plausible conditions. Third, it suggests that procedural fairness may give policy-makers reason to be very reluctant to act on claims which are not scientifically established. Restricting precautionary reasoning to contexts of significant environmental or public health disaster may answer this problem.
Acknowledgments
I am grateful to several people for discussing the ideas in this paper with me over time: most notably, Tim Lewens, Charlotte Goodburn, and Martin Peterson. I am also grateful to the students who listened to some very unusual lectures on the precautionary principle while I prepared this material, both at the University of Cambridge and at Peking University.
Notes
1. For more on ‘epistemic standards’, see my ref omitted.
2. Thanks to an anonymous reviewer for forcing me to clarify these issues.
3. For discussion of the more general epistemological implications of these claims, see Fantl and McGrath (2010).
4. The relationship between debates over the precautionary principle and over inductive risk has also been discussed in great detail by Daniel Steel (2015, Chapters 7 and 8). My concerns differ from Steel’s in that I am not so much concerned with constructing a positive account of a decision-making procedure as with sketching a very general account of how issues around certainty play out in different fields – science and policy-making. While I find Steel’s model-based account of uncertainty illuminating as a way of understanding certain cases, it is less clear that all cases where we might adopt a precautionary approach necessarily involve model-based uncertainty. For example, uncertainty about the effects of neonicotinoid exposure on bees may be remedied by building a better model of how such exposures harm bees, but it might also be resolved in other ways, for example by constructing experiments which better control for confounders. Nonetheless, I take what I say here to be complementary to Steel’s more specific proposals.
5. For a longer version of a related argument, see my (ref omitted).
6. See Resnik (2003) for useful examples of how actual debates around precautionary policy-making often devolve into fights over whether ‘knowledge’ or ‘hunches’ should guide policy.
7. For further useful discussion of how concerns about ‘procedural objectivity’ may favour processes which are not ‘objective’ in the sense of mirroring reality, see Megill (1994).