Perspectives

Predictive rebound & technologies of engagement: science, technology, and communities in wildfire management

Pages 104-111 | Received 27 Oct 2020, Accepted 28 Oct 2020, Published online: 02 Dec 2020

ABSTRACT

Technologies of anticipation – such as predictive analytics, forecasting, and modelling – offer appealing promises to those governing risk. While previous work has challenged notions that such technologies are value neutral, we must attend to the specific – and subtle – ways that values are embedded and manipulated within these systems. Using the case of wildfire management, I propose the concept of predictive rebound to highlight two challenges: (1) that increasingly accurate predictive models do not always translate into the initially intended real-world gains, but rather can end up being applied to alternative ends, and (2) that the perceived accuracy of predictive models can be misunderstood as reducing the need for explicit debate about values within decision-making. Further analysis of predictive rebound in real-world contexts will help to inform more effective engagement with stakeholders about values, priorities, and risks, revealing situations where technologies of anticipation obfuscate value-laden decisions and facilitate unintentional drift in management priorities.

For decision-makers governing uncertain or rapidly evolving issues, technologies of prediction – such as computational modelling, machine learning, big data analytics, and other forms of forecasting – offer tempting promises. Such technologies claim to provide mechanisms for 'describing and analysing intended and potentially unintended impacts that might arise' (Owen, Macnaghten, and Stilgoe 2012), enabling decision-makers to adjust their approaches, adapt institutions and implementations, and otherwise anticipate potential challenges. Like all technologies, of course, these anticipatory and predictive systems are hardly value neutral. Instead, they embody – intentionally and tacitly – the values, priorities, and assumptions of those who create them and participate in their use.[1]

While most of the literature in responsible research and innovation (RRI) has focused on the deployment of anticipation and prediction in more temporally distant senses (see Nordmann 2014 for further discussion of the temporality of anticipation), I argue that there are important lessons to learn about engagement, accountability, and reflection when considering urgent and proximate deployments of predictive technologies. These moments of emergency can exacerbate existing inequalities (see Monteiro, Shelley-Egan, and Dratwa 2017 for examples in the context of RRI and Zika) and contain all the typical challenges regarding the rightful integration of competing values into decision-making (Kennedy 2019a). As I argue in this paper, however, they can also reveal more subtle and hidden moments where tensions emerge between the intended and realized goals of prediction; tensions that are also likely applicable in longer-term efforts to engage and anticipate.

Using examples from wildfire management, I make three arguments in this paper:

  1. Despite a seeming correspondence, increased accuracy in predictive technologies can sometimes fail to return the promised, intended, or desired real-world gains – or be otherwise subject to subtle drift between intended and realized goals. I call this phenomenon predictive rebound and argue that it is important to examine it critically across empirical cases.

  2. As predictive rebound occurs, it can obscure (often unintentionally) critical debates about risk tolerances, stakeholder priorities, and other issues of values.

  3. Predictive rebound can be at least partially anticipated and managed through explicitly considering the ends to which technologies of prediction are applied, which involves strategic and careful engagement of stakeholders.

To do this, I begin by sketching the emergence of predictive technologies in wildfire management, explaining why they are so widely heralded. I introduce the idea of predictive rebound with a theoretical example generalized from real-world observation of hundreds of wildfire managers across Canada and Australia (Kennedy 2018). I then argue that explicit consideration is required to overcome the challenges resulting from predictive rebound.

A first case: technologies of anticipation in wildfire

The ways we understand – and respond to – wildfire have changed dramatically over time. In the 19th and early 20th centuries, for instance, wildland fire was largely viewed by North American colonizers as a force beyond the capacity of human control. Locally, fires presented a grave danger to life and livelihoods, often leaving residents of frontier towns with no option but to flee by foot or by rail as quickly as possible (e.g. Barnes 1987). For settlers building towns and extractive industries across the continent, fire was an impossible force to reckon with, a threat only faced through escape.

It wasn't until the early to mid-twentieth century that the promise of 'managing' wildland fire began to take hold. The emergence of a network of protected parks and forests – and the associated cadre of rangers – in the United States was an early step in the formation of the wildland fire services as we know them today, with the 'Big Blowup' of 1910 playing a significant part in this development (Egan 2009). The decades that followed saw ever-increasing investment in the technology and number of personnel used to fight fires, adopting – particularly in the United States – an almost paramilitary approach to 'battling' the flames by deploying 'armies' of young men and unleashing an 'airborne assault' via smoke jumpers and water bombers.[2] Computational technologies are seen as another weapon in this arsenal, allowing prediction of where fires might occur, how they'll spread, the effectiveness of different interventions, and even locations for pre-event mitigation efforts.

As a simple illustration of what predictive technologies promise towards wildfire response, consider the case of fire spread and fire behaviour modelling. A wildland fire’s ‘behaviour’ (that is, elements like the intensity of the burn and the rate of spread) is primarily driven by three features: fuel, weather, and topography. Fuel refers to elements like the type of tree, grass, or debris, as well as characteristics like its health or the density of the stand. Weather encompasses a number of different factors ranging from how much precipitation there has been (and, as a result, how dry or wet the fuels are) to wind speed and direction, as well as other variables like temperature and humidity. Finally, topography refers to the formation of the land itself: fire moves more quickly uphill, for instance, than in a flat location (where wind determines the spread) or downhill (where fire tends to creep more slowly).

Firefighters and managers can, based on personal experience and guidance from research, estimate how a given fire will behave under certain fuel, weather, and topography parameters. Traditionally, physical books were carried into the field, containing table after table of experimentally generated fire spread data (e.g. rate of movement and fire intensity) under different fuel, weather, and topographic conditions. This knowledge about potential fire behaviour can then be translated into operational decision-making through response choices, such as determining which communities should be evacuated (and when), where to position firefighters (maximizing effectiveness while minimizing risk), or even how to prepare landscapes in advance to reduce risk (e.g. removing fuel, cutting defensive breaks, etc.; see Neale, Weir, and McGee 2016).
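
To make the table-lookup logic concrete, here is a minimal sketch in Python. Every fuel type, wind class, slope class, and rate-of-spread value below is an invented placeholder of mine, not a figure from any actual field guide; operational systems (such as the Canadian Fire Behaviour Prediction tables) are far richer and empirically derived.

```python
# Hypothetical sketch of the field-guide lookup described above.
# All categories and values are invented for illustration only.

# Rate of spread (metres per minute), indexed by (fuel, wind, slope).
SPREAD_TABLE = {
    ("boreal_spruce", "calm",  "flat"):    2.0,
    ("boreal_spruce", "windy", "flat"):    8.0,
    ("boreal_spruce", "windy", "uphill"): 14.0,
    ("grass",         "calm",  "flat"):    6.0,
    ("grass",         "windy", "flat"):   25.0,
    ("grass",         "windy", "uphill"): 40.0,
}

def estimate_spread(fuel: str, wind: str, slope: str) -> float:
    """Look up an estimated rate of spread, as a printed table would."""
    try:
        return SPREAD_TABLE[(fuel, wind, slope)]
    except KeyError:
        raise ValueError(f"No table entry for {(fuel, wind, slope)}")

# A grass fire on a windy day, running uphill:
print(estimate_spread("grass", "windy", "uphill"))  # 40.0 m/min (illustrative)
```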

In more recent times, however, these books and experiential knowledge have generally been either augmented or supplanted through the use of computational modelling. In modern fire services, fire behaviour calculations increasingly take place on computers and servers, with semi-automated and automated models combining real-time data (e.g. current weather conditions or forecasts) with geospatial data (e.g. mapping of fuel types or topographies) to produce simulations of fire behaviour.[3] If it is known more precisely where a fire will spread next, for instance, fire management agencies might choose not to disrupt a community with an unnecessary evacuation (or to evacuate another location earlier), may be able to place firefighters in more aggressive, effective locations with a better understanding of the risks they face, or could even (as now practiced in California) choose to undertake interventions like pre-emptively shutting off power to communities to reduce the risk of electrically started fires (Abatzoglou et al. 2020). As a result of such potential benefits, increasingly advanced technologies, including machine learning and artificial intelligence approaches, are becoming more common and offer significant promise for fire managers and the public alike.
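
As a purely illustrative sketch of how such a model might couple a geospatial fuel layer with a live wind input, consider the toy cellular simulation below. The ignition probabilities and wind adjustment are assumptions of mine; operational growth simulators (e.g. those discussed by Neale and May 2018) involve far more sophisticated physics and data handling.

```python
# Toy cellular sketch of fire spread over a fuel grid with a wind bias.
# All probabilities are invented placeholders, not calibrated values.
import random

random.seed(1)  # reproducible illustration

FUEL_IGNITION_PROB = {"grass": 0.8, "forest": 0.5, "water": 0.0}
WIND_BONUS = 0.2  # extra ignition chance in the downwind direction

def step(grid, burning, wind=(0, 1)):
    """Advance one time step; return the set of newly ignited cells."""
    newly = set()
    for (r, c) in burning:
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
                p = FUEL_IGNITION_PROB[grid[nr][nc]]
                if (dr, dc) == wind:  # downwind neighbours ignite more easily
                    p = min(1.0, p + WIND_BONUS)
                if (nr, nc) not in burning and random.random() < p:
                    newly.add((nr, nc))
    return newly

grid = [["grass", "grass", "forest"],
        ["grass", "water", "forest"],
        ["forest", "forest", "forest"]]
burning = {(0, 0)}  # ignition point
for t in range(3):
    burning |= step(grid, burning)
    print(f"t={t}: burning cells = {sorted(burning)}")
```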

A traditional STS or RRI analysis, of course, would pause at this juncture to raise all sorts of questions about the values embedded within predictive technologies. A large number of epistemic and ethical issues are latent within these systems, including debates around the quality and reliability of data sources; which parameters are included and which are ignored (for instance, whether community, cultural, or environmental assets are included in the geospatial information); what risk tolerances are 'built in' and what degree of precision is claimed; and how the models are deployed. Moreover, there are interesting and important questions – beyond the scope of this paper – about the empirical accuracy of these technologies (e.g. do the predictions actually outperform chance or guesswork?). These questions are certainly important, but this example also allows us to identify a more subtle phenomenon manifest in how these technologies of anticipation influence consequential decisions made on the ground: predictive rebound.

Predictive rebound

Predictive rebound occurs when improvements to predictive models made for one purpose (e.g. increasing safety) are actually applied, whether intentionally or unintentionally, towards different aims (e.g. reducing false alarms) as a result of morphing values and priorities. To offer a simple example, consider a typical situation for someone who walks to and from their workplace. Imagine that your workday is coming to an end and that you face a thirty-minute walk home. The weather forecast, however, shows a 30% chance of rain (Gigerenzer et al. 2005) between 5:00 and 5:30pm, when you would normally be walking. You are faced with a decision: leave early to avoid the rain or risk getting soaked. You decide to set out at a brisk pace at your normal time and end up sodden by the time you get home.

The next day, you purchase a weather app claiming to offer much more precise forecasts. That afternoon, the app notifies you that rain will begin at 5:33pm. You could adjust your commute a few minutes earlier, using the more accurate prediction to increase your safety margin with respect to arriving home dry. On the other hand, with greater confidence that the rain will begin later, you could stay at work a little longer. This is predictive rebound: the initial intent of seeking a more accurate rainfall prediction was to reduce the odds of getting soaked, but its actual implementation results in taking more risks (e.g. leaving the umbrella behind or eking out a few more minutes of work).
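
The arithmetic behind this example can be made explicit with a short sketch; the walk time, forecast windows, and uncertainty figures are assumptions chosen to match the story above.

```python
# Sketch of the commuter example: the same gain in forecast precision can
# be 'spent' on safety (leave earlier) or on extra time at work.
WALK_MINUTES = 30

def latest_dry_departure(rain_start: float, uncertainty: float) -> float:
    """Latest departure (minutes after 5:00pm) that still beats the rain,
    treating the forecast as rain_start +/- uncertainty."""
    return (rain_start - uncertainty) - WALK_MINUTES

# Old forecast: rain 'between 5:00 and 5:30pm', i.e. ~5:15pm +/- 15 minutes.
# New app: rain at 5:33pm +/- 2 minutes (a much tighter window).
old = latest_dry_departure(rain_start=15, uncertainty=15)  # -30, i.e. 4:30pm
new = latest_dry_departure(rain_start=33, uncertainty=2)   #   1, i.e. 5:01pm

# Predictive rebound: the precision could widen the safety margin at the
# old departure time, but here it buys extra minutes at the office instead.
print(f"Extra work time taken: {new - old:.0f} minutes")  # 31 minutes
```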

While the rainfall example may seem trivial, the contours of predictive rebound look similar in the management of much more complex and high-stakes situations. For the sake of discussion, assume that the purpose of wildfire management is achieving increased public safety (fewer lives lost in the fire itself, in hurried evacuations, or from complications resulting from smoke exposure). Improved predictive models might allow for earlier identification of neighbourhoods at risk, allowing for earlier evacuation, thereby reducing panic, car accidents, and risks to responders. Despite this being a shared and stated goal of fire managers witnessed during observational research across Australia and Canada (Kennedy 2018), however, increased predictive power was not always directed towards achieving this initial goal. Some managers, for instance, preferred to use modelling outputs to enable later decision-making (e.g. making a decision at the same point as it would traditionally have been made, rather than earlier), trading off early alerts for improved precision in identifying which neighbourhoods ought to be evacuated, where firefighting forces ought to be placed, or how many resources ought to be dispatched in response.

This idea of predictive rebound borrows its inspiration from the study of human behaviour in energy systems, which has identified a 'rebound effect' in energy consumption. When improvements are made to energy efficiency (e.g. if an automobile becomes more efficient, thereby consuming less gasoline), at least some portion of these efficiency gains are 'lost' to increasingly energy-intensive behaviours (e.g. lower fuel costs subtly encourage the owner to drive more, thereby negating some of the fuel savings).[4] In the predictive context, predictive rebound suggests that when forecasts and models become more accurate, only a portion of those gains tend to be realized in terms of the initially intended real-world goal.
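
One rough way to formalize the analogy – this notation is my own illustration, not an established measure – is to define a rebound fraction R:

```latex
R = 1 - \frac{\Delta G_{\mathrm{realized}}}{\Delta G_{\mathrm{expected}}}
```

where ΔG_expected is the gain toward the initially intended goal that an accuracy improvement would deliver if applied entirely to that goal, and ΔG_realized is the gain actually observed. R = 0 would indicate no rebound, while R = 1 would indicate that the entire improvement had been redirected to other ends, mirroring the 20–60% losses estimated in the energy literature (see note 4).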

At a practical level, of course, predictive rebound isn't the only effect in play. Fire management choices are constrained by all sorts of other factors, including the budget and resources available, how active a given fire season is, and both idiosyncratic individual approaches and the guidelines laid out in formal policies (e.g. computer-assisted dispatch systems that move some of these choices from synchronous, explicit decisions to being embedded within an algorithm). But, because the business of fire management is the business of risk management, predictive rebound can be found as a component within all sorts of decision-making, from debates between avoiding false alarms vs. the risks of a delayed evacuation to concerns about just how safe prescribed burns ought to be (e.g. should better weather forecasting be used to increase the number of good fires lit, or to be even more cautious about which days can be used without causing 'runaway' burns?).

Predictive rebound, values, and engagement

The examples above illustrate the first argument of the paper: that increased accuracy in predictive technologies can sometimes fail to return the initially promised, intended, or desired real-world gains. To be clear, however, this is not inherently bad. In the case of the rebound effect in energy systems, for instance, the fact that we can achieve more with the same amount of net energy consumption can improve quality of life in important and valuable ways (e.g. being able to travel to see family more often, to achieve a desired diet, or otherwise to increase the comfort and satisfaction one experiences while consuming somewhat less energy than before). Likewise, eliminating false alarms or deploying firefighting forces in more effective ways are desirable ends, even if they are not the ones originally cited as the goals. In more purely mathematical terms, there is no inherently 'better' option between reducing false negatives and reducing false positives; nor is it a priori preferable to use increased predictive certainty to narrow a confidence interval rather than to increase confidence that the outcome lies within the original range.
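
To illustrate the confidence-interval point numerically, the sketch below assumes a normally distributed forecast error whose standard deviation improves from 10 to 5 (arbitrary units); the improvement can be taken as a narrower interval at the same confidence, or as higher confidence over the original interval.

```python
# Two ways to 'spend' a halving of forecast error (illustrative numbers).
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)      # ~1.96 for a 95% two-sided interval
sigma_old, sigma_new = 10.0, 5.0       # assumed forecast standard errors

# Option A: keep 95% confidence and narrow the interval.
halfwidth_old = z95 * sigma_old        # ~19.6
halfwidth_new = z95 * sigma_new        # ~9.8

# Option B: keep the original interval and raise the confidence.
conf_new = 2 * NormalDist(sigma=sigma_new).cdf(halfwidth_old) - 1  # ~99.99%

print(f"A: +/-{halfwidth_new:.1f} at 95.0%")
print(f"B: +/-{halfwidth_old:.1f} at {conf_new:.2%}")
```

Both options are defensible uses of the same underlying improvement.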

Rather, the challenge with predictive rebound is that it can subtly, tacitly, and unintentionally circumvent important explicit deliberations around underlying values and priorities. In the rain example, this is as simple as reflecting upon the shift from the stated priority (improving the odds of getting home dry) to alternative benefits, like avoiding an early departure or being able to reduce the required safety buffer. In the wildfire example, this is manifest in the decisions observed among actual managers, such as sending fewer resources (to allow fighting more fires) or delaying other decisions to improve their eventual accuracy and decrease false alarms. These may be desirable outcomes, but their importance relative to other goals requires thoughtful deliberation by the varied stakeholder communities involved.

Predictive rebound also appears to occur in other sorts of anticipatory planning. The use of predictive modelling during COVID-19, for instance, can create trade-offs between the ends of minimizing disease spread and reopening 'normal' activities. For example, should predictive efforts to examine spread and interventions in schools (e.g. see Panovska-Griffiths et al. 2020) be used to justify aggressive interventions against the disease with a higher degree of confidence that they will succeed, or to allow less strict measures with the same confidence as before? This isn't to suggest that we ought to keep schools closed indefinitely, using any and all predictive efforts to pursue perfect safety. Rather, I would argue that there is a critical values-based question here – how should our advances be split between reducing the health impact of the disease and reopening different businesses and societal activities? – that cannot be solved by increasingly precise models, or even by the modellers themselves. Instead, it requires explicit debate, involving a much wider array of stakeholders, about the ends that we are pursuing and how we ought to apply the benefits of our advancing research.

Given the short format of this piece, I offer 'predictive rebound' as a hypothesis based on repeated, if anecdotal and qualitative, examples rather than a precisely measured or thoroughly demarcated phenomenon, in hopes that it might be further operationalized, tested, and examined in real-world case studies. The appropriate response, however, isn't to give up on predictive technologies – it is to be explicit about the ends towards which we should use increased precision in prediction (e.g. for purposes of efficiency or to reduce the risk of adverse outcomes?), and reflective about who sets those priorities. In other words, this serves as a call to advance technologies for deliberating upon these questions with appropriately narrow or broad stakeholder communities, depending on the issue at hand.

In some cases, these audiences might be a broadly construed 'public.' Is there, for instance, any consensus about whether the preferred outcome of improved modelling is fewer unneeded evacuations or a reduced risk of catastrophic outcomes like those witnessed in Paradise, California? How do we balance the possible benefits of reduced cost (e.g. playing closer to the edge by deploying fewer firefighters) against other outcomes (e.g. improving the odds of successful containment)? However, some of these deliberations should likely be focused within the expert community, such as debates about how aggressively or risk-aversely we ought to fight fires.

Focused engagement, then, plays a critically important role as predictive technologies advance within emergency management. Rather than letting the trade-off between outcomes (e.g. efficiency vs. margin of error) occur tacitly, or losing questions of values in increasingly precise projections, it is essential to grapple with these questions explicitly. Important descriptive and normative questions remain that warrant further investigation. For example, to what degree does predictive rebound occur in different sectors? And, in these different sectors and within different stakeholder groups, how does increased predictive precision map onto desirable outcomes? Given the cost and impact of disasters in today's society, it is important that these questions be addressed overtly rather than idiosyncratically.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Eric Kennedy is an Assistant Professor of Disaster and Emergency Management at York University in Toronto, Canada. His research focuses on expertise, institutions, and knowledge production within the context of emergency management, particularly in wildfire management. Kennedy also specializes in research methods and evaluation design, especially related to monitoring impact and research in emergency and disaster situations. Outside of the wildfire arena, he is leading a national, SSHRC-funded monitoring project tracking social attitudes and impacts of COVID-19 across Canada until early 2022. In examining expertise, institutions, and knowledge production, he has published on boundary organizations, citizen science, responsible innovation, and science education and outreach.

Notes

1 See Fraaije and Flipse (2020), among others, for discussion of the role of values and the importance of inclusivity of perspectives within RRI methodologies.

2 Maclean (2017) is the quintessential example of this warfare imagery in fire. For additional discussion of the different narrative frames, see Kennedy (2019b).

3 See Neale and May (2018) on the evolving relationship between computational and human approaches to fire behaviour modelling.

4 In the field of energy systems, the 'rebound effect' refers to the lower-than-expected returns on energy efficiency measures (Brännlund, Ghalwash, and Nordström 2007). These rebound effects can be manifest in direct increases in behaviour (e.g. cooling the house to a lower temperature as air conditioning efficiency improves), other purchases with money saved (e.g. purchasing a new TV with the money saved on home heating), more global effects (e.g. reduced fuel use in some countries driving down the price of fuel elsewhere, spurring more consumption), and development in other sectors (e.g. the materials engineering that improved car fuel efficiency being transferred to aviation, with similar effects there). All told, cumulative rebound effects are estimated to result in losses of 20–60% of the efficiency improvement (Gillingham et al. 2013). Importantly, this means that efficiency is still well worth pursuing – it still results in less net energy being used – but some portion of the gains may be translated into other improvements to quality of life rather than energy savings (e.g. the ability to drive more or set one's home to a more desirable temperature).

References

  • Abatzoglou, J. T., C. M. Smith, D. L. Swain, T. Ptak, and C. A. Kolden. 2020. "Population Exposure to Pre-Emptive De-Energization Aimed at Averting Wildfires in Northern California." Environmental Research Letters 15 (9): 1–8. https://iopscience.iop.org/article/10.1088/1748-9326/aba135/me
  • Barnes, Michael. 1987. Killer in the Bush: The Great Fires of Northeastern Ontario. Erin: Boston Mills Press.
  • Brännlund, R., T. Ghalwash, and J. Nordström. 2007. "Increased Energy Efficiency and the Rebound Effect: Effects on Consumption and Emissions." Energy Economics 29 (1): 1–17.
  • Egan, T. 2009. The Big Burn: Teddy Roosevelt and the Fire That Saved America. Boston: Houghton Mifflin Harcourt.
  • Fraaije, A., and S. M. Flipse. 2020. "Synthesizing an Implementation Framework for Responsible Research and Innovation." Journal of Responsible Innovation 7 (1): 113–137.
  • Gigerenzer, G., R. Hertwig, E. Van Den Broek, B. Fasolo, and K. V. Katsikopoulos. 2005. "A 30% Chance of Rain Tomorrow: How Does the Public Understand Probabilistic Weather Forecasts?" Risk Analysis: An International Journal 25 (3): 623–629.
  • Gillingham, K., M. J. Kotchen, D. S. Rapson, and G. Wagner. 2013. "Energy Policy: The Rebound Effect is Overplayed." Nature 493 (7433): 475.
  • Kennedy, E. B. 2018. "Built by Fire: Wildfire Management and Policy in Canada." Doctoral dissertation, Arizona State University.
  • Kennedy, E. B. 2019a. "Values in Science: Lessons from Wildfires." Environmental Communication 13 (2): 276–280.
  • Kennedy, Eric B. 2019b. "Narratives of Fire: How Wildfire Stories Shape Blame, Causation, and Solution Spaces." Working with Co-production Workshop, Ottawa, Canada.
  • Maclean, N. 2017. Young Men and Fire. Chicago: University of Chicago Press.
  • Monteiro, M., C. Shelley-Egan, and J. Dratwa. 2017. "On Irresponsibility in Times of Crisis: Learning from the Response to the Zika Virus Outbreak." Journal of Responsible Innovation 4 (1): 71–77.
  • Neale, T., and D. May. 2018. "Bushfire Simulators and Analysis in Australia: Insights Into an Emerging Sociotechnical Practice." Environmental Hazards 17 (3): 200–218.
  • Neale, T., J. K. Weir, and T. K. McGee. 2016. "Knowing Wildfire Risk: Scientific Interactions with Risk Mitigation Policy and Practice in Victoria, Australia." Geoforum 72: 16–25.
  • Nordmann, A. 2014. "Responsible Innovation, the Art and Craft of Anticipation." Journal of Responsible Innovation 1 (1): 87–98.
  • Owen, R., P. Macnaghten, and J. Stilgoe. 2012. "Responsible Research and Innovation: From Science in Society to Science for Society, with Society." Science and Public Policy 39 (6): 751–760.
  • Panovska-Griffiths, J., C. C. Kerr, R. M. Stuart, D. Mistry, D. J. Klein, R. M. Viner, and C. Bonell. 2020. "Determining the Optimal Strategy for Reopening Schools, the Impact of Test and Trace Interventions, and the Risk of Occurrence of a Second COVID-19 Epidemic Wave in the UK: A Modelling Study." The Lancet Child & Adolescent Health 4 (11): 817–827.