Climate Models: How to Assess Their Reliability

Pages 81-100 | Published online: 01 Aug 2019
ABSTRACT

The paper discusses modelling uncertainties in climate models and how they can be addressed, both on the basis of physical principles and in light of how the models perform against empirical data. We argue that the reliability of climate models can be judged by three kinds of standards: striking confirmation, supplementing independent causal arguments, and judging the causal core of models by establishing model robustness. We also use model robustness to delimit confirmational holism. We survey recent results of climate modelling and identify salient results that fulfil each of the three standards. Our conclusion is that climate models can be considered reliable for some qualitative gross features and some long-term tendencies of the climate system, as well as for quantitative aspects of some smaller-scale mechanisms. The adequacy of climate models for other purposes is less convincing; among the latter are probability estimates, in particular those concerning rare events. On the whole, climate models suffer from important deficits and are difficult to verify, but they are still better confirmed and more reliable than parts of the methodological literature suggest.

Acknowledgements

The work has grown out of the DFG Priority Programme 1689 ‘Climate Engineering’. We are grateful to Hauke Schmidt (MPI for Meteorology, Hamburg) for his most helpful critical reading of a previous version.

Notes

1 Our general approach is similar to that of Baumberger, Knutti, and Hirsch-Hadorn (2017) in that we also use confirmation procedures from the philosophy of science to assess the reliability of climate models. But we invoke a broader list of epistemic values. Both papers address model robustness, but we seek to elaborate the notion in greater detail. Accordingly, our paper complements Baumberger, Knutti, and Hirsch-Hadorn (2017) by pursuing different arguments toward a similar conclusion.

2 Knutti (2008, 4652–4653), Knutti (2010, 2743–2744), Parker (2011, 582–585), Frigg, Smith, and Stainforth (2013, 895), and Baumberger, Knutti, and Hirsch-Hadorn (2017, 10–11). Although the view is widely shared that model spread fails to yield a PDF, some authors are more inclined than others to grant that this spread outlines a space of plausibility (Knutti 2008, 4653; in contrast to Parker 2011, 585, 597) (see sections 6 and 8 below).

3 This is why it is difficult to move from comparing the empirical success of models to judging the trustworthiness of the specific model components essentially involved in producing this success (Katzav 2013b, 125). We will explore this option more specifically in section 7.

4 Hwang and Frierson (2013, 4935–4937). Compare also the faulty correlations between climate sensitivity and the impact of atmospheric aerosol inherent in many climate models (see section 3).

5 Admittedly, no consensus has yet been reached on whether these changes in measuring practices suffice to account for the alleged hiatus, or whether the internal variability of the climate system and changes in natural forcing also play a role.

6 Accordingly, we do not take use-novelty to be a necessary condition for confirmation (as debated between Steele and Werndl [2013] and Frisch [2015]), but rather argue that use-novelty provides particularly significant confirmation.

7 In all these examples, we focus on Lakatos' notion of empirical support and disregard his account of scientific change. In particular, we do not claim that climate research forms a progressive research programme; establishing such a claim would require exploring the conceptual architecture of climate research.

8 We take this to be the upshot of Lloyd's model robustness, and we take her claim that using the causal core to produce successful simulations “involves inferring causal processes through ‘model robustness’” (2015, 62) to overshoot the mark. The empirical success of different causal cores can be compared and assessed, but the causal interpretation cannot be inferred from model robustness; rather, it needs to be presupposed.

9 Accordingly, model robustness is not sufficient for confirmation, but, in contrast to Katzav (2014, 230), this limitation does not invalidate model robustness (see below).
