
Energy transition and risk analysis: a commentary

Received 17 May 2024, Accepted 23 May 2024, Published online: 20 Jun 2024

Abstract

This is a commentary on the paper by Bouder and Lofstedt on the role of risk analysis in the light of uncertainty. The commentary points to various challenges related to risk assessment, particularly where new energy projects are concerned. One problem is that it is not always clear to what extent an expert can really be considered an expert; another is that it is extremely difficult to organise public participation in a meaningful way. Moreover, as a result of lobbying by interest groups, decisions are not always made in the public interest: industrial operators often try to steer decision-making in their favour. Several examples are provided of how this can go wrong. In general, the commentary pleads for decision-making in the public interest, a critical engagement with experts, and meaningful and informed public participation.

1. Introduction

Bouder and Lofstedt provide a wonderful and challenging analysis of the role of risk analysis in the light of uncertainty, the corresponding need to provide protection and the necessity to innovate. They indicate specific challenges, more particularly in the light of the energy transition, but they equally point to the potential of risk analysis to deal with those challenges. I largely agree with their analysis and only have a few observations from the perspective of the economic approach to law and, more particularly, risk regulation.

2. Renewables, permitting and risk analysis

In the framework of the energy transition that is under way in many (European) countries today, including the Netherlands, many new applications will be submitted to permitting authorities, whose task it then is to assess the appropriateness of the request (for example, to build a wind farm) and to impose conditions that reduce the related risks in an appropriate manner. The questions related to risk analysis addressed by Bouder and Lofstedt will therefore to an important extent be dealt with by regulators (in some cases at EU level, but most likely at Member State level) and, as far as particular siting decisions are concerned, also by permitting authorities at the local level.

When faced with uncertainty concerning the risks created by particular forms of renewable energy, the classic principles instructing us how to handle risk in the light of uncertainty can in fact still be applied. Bouder and Lofstedt are undoubtedly right when they state that the instrument to deal with those issues at EU level, more particularly the 2000 Communication from the European Commission on the precautionary principle, may be outdated in the light of later developments. Yet the basic premise of this Communication, i.e. that there should first be a scientific risk assessment based on all the available evidence, followed by risk management to determine the acceptable level of risk for society, may, even though perhaps seemingly simplistic, still be useful for the way in which new technological risks (including those related to the energy transition) are dealt with. Obviously (and as rightly stressed by Bouder and Lofstedt), the importance of risk communication should be added to this. Yet there are a few complications with respect to the application of this seemingly simple framework for risk analysis.

3. Is the expert an expert?

There is extensive empirical research showing that not only ordinary citizens but also experts are subject to a variety of biases. The most important problem is overconfidence (too great a trust in their own expertise), as a result of which the probabilities of accidents are often seriously underestimated.Footnote1 A well-known example concerns experts' overestimation of the precision with which the likelihood of a meltdown of the nuclear core in a nuclear installation could be predicted, as illustrated by the Three Mile Island accident. Experts also showed an irresponsibly large trust in the stability of the Teton Dam, which eventually collapsed in 1976. Experts, moreover, often get 'calibration', the assessment of probabilities, wrong. As a result, physicians systematically overestimate the likelihood of survival of a patient with cancer, and experts do no better than laymen in assessing the probability of death from asthma.Footnote2 It is for that reason that Meadow and Sunstein pleaded in favour of a stronger reliance on statistics rather than on experts (Meadow and Sunstein Citation2001).

4. Is there meaningful public participation?

The importance of involving the affected public in a meaningful way undoubtedly received an important impulse after Ulrich Beck published, in 1986 (the year of the Chernobyl disaster), his famous book Risikogesellschaft (published in English in 1992 as Risk Society) (Beck Citation1992). His insight that modern society no longer has the luxury of discussing merely a fair distribution of wealth, but must rather discuss a just distribution of risks, has given an important impetus to public participation. Public participation at an early stage of decision-making (i.e. when various alternatives for a particular project are still available) is important not only to increase the legitimacy of the particular project, but also to counter potentially wrong ideas from the experts (see the previous section), as a result of which 'citizen science' may constitute an important counterweight to the biases of experts.

Unfortunately, practice shows that, for a variety of reasons, public participation does not always take place in an optimal manner. Lofstedt has indicated that the so-called 'involved citizens' are often wrongly selected, as a result of which there is in some cases only opposition and no longer a constructive dialogue (Lofstedt Citation2015, Citation2019).

It appears that all too often decision-makers (for example permitting authorities) do not take public participation seriously, but see it rather as a 'ticking the box' exercise, involving the public only when the principal decisions have already been taken. When in a prior phase (without public participation) the public authority (supported by its experts) has already chosen one particular way to go (out of many alternatives), that often leads to a certain 'path dependency' from which authorities no longer wish to deviate. Tunnel vision on one particular solution may be the result, with reasonable alternatives offered by the affected public being rejected. The outcome is often a loss of legitimacy and strong opposition against the particular project (Wong and Lockie Citation2018).

5. Is the decision made in the public interest?

The previously mentioned challenges (tunnel vision by experts and inadequate public participation) often lead, inter alia, to suboptimal decisions concerning risky products or projects. Moreover, I have so far assumed that operators would provide full, reliable and transparent information concerning the risks of their products or projects, supported by independent and objective experts, leading to a decision by public authorities in the public interest.

There are, unfortunately, all too many cases showing that reality is often different. Operators often withhold information indicating the potentially negative consequences of particular substances or projects and engage in effective lobbying, as a result of which the licensing authorities that have to decide on the project are de facto often 'captured' by the operator. Given the information asymmetry between the operator (which possesses the information on the potentially risky character of its products or projects) and the licensing authority (which often lacks the basic expertise to judge the information provided by the operator), there is a serious danger that decisions are not made in the public interest, but that the private interests of industry prevail (Faure Citation2014).

Some industrial operators (like the tobacco industry) become real merchants of doubt (Oreskes and Conway Citation2010), supported by high-level scientists raising doubts about the risky character of, for example, smoking or the use of DDT. Recently, the Netherlands was also confronted with a scandal related to the company Chemours in Dordrecht (previously DuPont). It appeared in mid-2023 that Chemours and its predecessors had long been aware of the damaging effects of PFAS on public health and the environment. The predecessor 3M emitted PFAS into the surface waters for many years while hiding the information it possessed on the damaging effects of PFAS on human health (De Jong and Faure Citation2023). The case has meanwhile led to a (successful) claim against the operator, precisely based on the fact that the operator had been hiding its knowledge of the devastating effects of PFAS. Cases like this are unfortunately not an exception and therefore stress the need for a decision-making process that better guarantees decision-making in the public interest (see Section 8).

6. NIMBY and risk-risk trade-offs

Within the energy transition there is undoubtedly a danger of the not-in-my-backyard (NIMBY) phenomenon: as soon as there is a plan for a site with a large number of solar panels or wind turbines, there may, in the case of inappropriate risk communication, be opposition from the public concerned. There is a danger that even citizens who would as such support a green transition will, if the project is not appropriately discussed with them at an early stage, start to oppose it, as a result of which the project may be substantially delayed or even fail altogether. This once more underscores the importance of early and effective risk communication, without which the entire energy transition could be delayed or endangered.

Within the energy transition there are undoubtedly many so-called risk-risk trade-offs (Lofstedt and Schlag Citation2017). It would be all too easy to abstain, because of public opposition, from installing the renewables necessary for the energy transition and the fight against greenhouse gas emissions. The one risk (for example visual pollution, noise and danger to birds in the case of wind turbines) clearly has to be traded off against the other (the devastating consequences of climate change if no effective transition to a carbon-free society takes place). The necessity to realise that banning particular substances, products or projects (like, for example, pharmaceuticals) may also generate other risks (for example increased public health risks) has also emerged in the discussions concerning the precautionary principle.
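The logic of such a trade-off can be made explicit with a minimal sketch (the notation below is my own and is not taken from Lofstedt and Schlag): accepting the local risk of a renewable project is defensible when the expected local harm it creates is smaller than the expected climate-related harm it helps to avoid.

```latex
% Minimal illustration of a risk-risk trade-off; the symbols are
% hypothetical and introduced here only for exposition.
%   p_L, H_L : probability and magnitude of the local harm a project creates
%              (e.g. noise, visual pollution, danger to birds)
%   p_C, H_C : probability and magnitude of the climate-related harm the
%              project helps to avoid
\[
  \underbrace{p_L \, H_L}_{\text{expected local harm}}
  \;<\;
  \underbrace{p_C \, H_C}_{\text{expected avoided climate harm}}
\]
```

Of course, both sides of such an inequality are themselves uncertain, which is precisely why early and honest risk communication about the trade-off matters.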

7. Le Principe de précaution est mort, vive l’ALARP!

Bouder and Lofstedt point to some of the dangers inherent in the precautionary principle and have strong doubts about its usefulness. They seem ready to bury the precautionary principle and to replace it with the application of the as-low-as-reasonably-practicable (ALARP) standard within an assessment of the tolerability of risk (ToR).

They are not the only ones who are critical of the precautionary principle. Sunstein, pointing to risk-risk trade-offs but also to other dangers, did not show himself to be overly enthusiastic about the precautionary principle either (Sunstein Citation2003), and law and economics scholars like Ogus (Citation1995) indicated that a strict application of the precautionary principle could lead to inefficiency. Applying the principle (and thus prohibiting particular activities) could realise benefits that are highly speculative while at the same time imposing substantial costs. Ogus therefore advocated at least the application of cost-benefit analysis when applying the precautionary principle (Ogus Citation1995). A recent PhD thesis analysed the use of the precautionary principle in decision-making concerning glyphosate in several EU Member States and concluded that the precautionary principle had de facto played no role whatsoever, even though there was uncertainty concerning the health effects of glyphosate and the scientific evidence was contradictory (Katdare Citation2024).
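Ogus's point can be illustrated with a stylised cost-benefit test (a simplification of my own, not a formula taken from his article): a precautionary ban is only efficient if the expected harm it avoids, weighted by its often speculative probability, exceeds the costs of the ban, including the substitute risks the ban itself creates.

```latex
% Stylised cost-benefit check for a precautionary ban; the symbols are
% my own shorthand, not taken from Ogus (1995).
%   p       : (often highly speculative) probability that the feared harm occurs
%   H       : magnitude of that harm if it does occur
%   C_ban   : direct cost of prohibiting the substance, product or project
%   R_sub   : expected harm from substitute risks created by the ban
%             (e.g. public health risks when a pharmaceutical is withdrawn)
\[
  \text{ban only if}\qquad p \cdot H \;>\; C_{\text{ban}} + R_{\text{sub}}
\]
```

When p is highly speculative, the left-hand side is easily exaggerated or dismissed, which is exactly the tension Ogus identified.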

I doubt whether I would be willing to give up the precautionary principle just yet. If it is in any case not applied that often, one could cynically argue that it does not do that much harm. Moreover, a recent project analysing the application of the precautionary principle in the EU (and drafting a new Guidance) still seems to see possibilities to apply the precautionary principle as a safeguard (RECIPES Project Citation2022). One of the major advantages of the precautionary principle may be that, in cases of doubt, it at least incentivises operators to engage in research and development, obliging them to obtain reliable scientific information on the risks involved in particular substances or projects. Obviously, the principle should not lead to a complete shift in the burden of proof. That would, for example, be the case if operators were required to show that there is no danger or risk related to their products at all; that could completely stifle innovation. Yet some of the recent cases (see Section 5), like the hiding of information concerning the negative consequences of PFAS by Chemours, show that one should not be naïve: industrial operators may always have an incentive not to truthfully reveal information on the dangerous character of particular substances or projects. Proponents of the precautionary principle could well argue that if the principle had been correctly applied by licensing authorities when deciding whether Chemours could be allowed to use PFAS, the correct decision would have been to ban the use of that substance, which could have prevented substantial public health damage. But I do realise that I am now guilty of a strong case of judging in hindsight.

8. Decision-making in the public interest

I come back to the point made earlier: especially when there is uncertainty, we should remain alert to the danger that the decision-making authority and the public at large could be captured by the 'merchants of doubt'. Public choice scholars like Mancur Olson (Citation1971) have pointed out that lobbying by interest groups will be especially successful when the information costs for the public at large (to find out that lobbying and capture are going on) are high and the transaction costs for the group to lobby effectively are low. The latter is often the case when the group is single-issue oriented, and lobbying will be particularly successful in complex domains like safety regulation. This literature also provides an indication of the remedies against lobbying: informing the public at large and transparency of decision-making are important medicines against capture. Nobel Prize winner Gary Becker (Citation1983) also pointed out that when various groups compete in lobbying for political influence, there is a higher likelihood that the outcome will be efficient. In other words: if a counterweight can be organised against the lobbying by industry (for example by an NGO), there is a good chance that decisions will not inefficiently favour industry, but will rather be in the public interest. Empirical evidence has also shown that when NGOs are involved in environmental decision-making, the quality of environmental regulation increases (Binder and Neumayer Citation2005).

In sum, especially when decisions concerning the energy transition involve uncertainty about specific risks, it is of crucial importance (as also stressed by Bouder and Lofstedt) not only to rely on scientific evidence, but also to involve the public actively and meaningfully from an early phase onwards, to organise a transparent decision-making process, and to organise (for example by granting standing to NGOs) a countervailing power against the inevitable lobbying by industry.

The case of Chemours (but one could also mention others, like Tata Steel) shows that this call is by no means a luxury, not even in the Netherlands. If these conditions can be met, one may hope that, on the one hand, the innovation necessary for the energy transition can take place in an effective manner and that, at the same time, better protection of the public at large and of public health can be guaranteed than was often the case in the past.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 An excellent summary of these behavioural studies pointing at biases by experts is provided by Slovic, Fischhoff and Lichtenstein (2000).

2 For other examples, see Faure and Visscher (Citation2011, 384–386).

References

  • Beck, U. 1986. Risikogesellschaft. Auf dem Weg in eine andere Moderne. Frankfurt am Main: Suhrkamp Verlag.
  • Beck, U. 1992. Risk Society: Towards a New Modernity. London: Sage.
  • Becker, G. 1983. “A Theory of Competition among Pressure Groups for Political Influence.” Quarterly Journal of Economics 98 (3): 371–400.
  • Binder, S., and E. Neumayer. 2005. “Environmental Pressure Group Strength and Air Pollution: An Empirical Analysis.” Ecological Economics 55 (4): 527–538. https://doi.org/10.1016/j.ecolecon.2004.12.009.
  • De Jong, E. R., and M. Faure. 2023. “De corrigerende rol van het aansprakelijkheidsrecht bij de bedrijfsmatige vervuiling van de leefomgeving.” Justitiële verkenningen 49 (4): 75–85. https://doi.org/10.5553/JV/016758502023049004006.
  • Faure, M. 2014. “The Complementary Roles of Liability, Regulation and Insurance in Safety Management: Theory and Practice.” Journal of Risk Research 17 (6): 689–707. https://doi.org/10.1080/13669877.2014.889199.
  • Faure, M., and L. Visscher. 2011. “The Role of Experts in Assessing Damages – A Law and Economics Account.” European Journal of Risk Regulation 2 (3): 376–396. https://doi.org/10.1017/S1867299X00001392.
  • Katdare, M. 2024. “Precautionary Principle: Does It Play a Role in EU Decision-Making?” PhD diss. (to be defended), Erasmus School of Law, Rotterdam.
  • Lofstedt, R. E. 2015. “Effective Risk Communication and CCS: The Road to Success in Europe.” Journal of Risk Research 18 (6): 675–691. https://doi.org/10.1080/13669877.2015.1017831.
  • Lofstedt, R. E. 2019. “The Communication of Radon Risk in Sweden: Where Are We and Where Are We Going?” Journal of Risk Research 22 (6): 773–781. https://doi.org/10.1080/13669877.2018.1473467.
  • Lofstedt, R., and A. Schlag. 2017. “Risk-Risk Trade-Offs: What Should We Do in Europe?” Journal of Risk Research 20 (8): 963–983. https://doi.org/10.1080/13669877.2016.1153505.
  • Meadow, W., and C. Sunstein. 2001. “Statistics, Not Experts.” Duke Law Journal 51 (2): 629–646. https://doi.org/10.2307/1373203.
  • Ogus, A. I. 1995. “Quality Control for European Regulation.” Maastricht Journal of European and Comparative Law 2 (4): 325–338. https://doi.org/10.1177/1023263X9500200402.
  • Olson, M. 1971. The Logic of Collective Action. Cambridge: Harvard University Press.
  • Oreskes, N., and E. M. Conway. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco to Climate Change. London/New York: Bloomsbury Press.
  • RECIPES Project. 2022. “Guidance on the Application of the Precautionary Principle in the EU.” https://recipes-project.eu/results/guidance-future-application-precautionary-principle.html
  • Slovic, P., B. Fischhoff, and S. Lichtenstein. 2000. “Rating the Risks.” In The Perception of Risk, edited by P. Slovic, 104–110. London: Earthscan Publications.
  • Sunstein, C. R. 2003. “Beyond the Precautionary Principle.” University of Pennsylvania Law Review 151 (3): 1003–1058. https://doi.org/10.2307/3312884.
  • Wong, C. M. L., and S. Lockie. 2018. “Sociology, Risk and the Environment: A Material-Semiotic Approach.” Journal of Risk Research 21 (9): 1077–1092. https://doi.org/10.1080/13669877.2017.1422783.