
Over- and under-reaction to transboundary threats: two sides of a misprinted coin?

ABSTRACT

When states over- and under-react to perceived transboundary threats, their mistakes can have equally harmful consequences for the citizens they mean to protect. Yet studies of intelligence and conventional foreign policy tend to concentrate on cases of under-reaction to threats from states, and few studies set out criteria for identifying cases of under- and over-reaction to other kinds of threats or investigate common causes. The paper develops a typology of over- and under-reaction in foreign policy revolving around threat assessment, response proportionality and timeliness. Drawing on pilot case studies, the contribution identifies combinations of factors and conditions that make both over- and under-reaction more likely. It is hypothesized that three factors play significant causal roles across the cases: (1) institutions have learned the wrong lessons from previous related incidents; (2) decision-making is organized within institutional silos focused on only one kind of threat; and (3) actors have strong pre-existing preferences for a particular outcome.

INTRODUCTION

This contribution is concerned with two particular types of foreign policy ‘fiascos’ which appear, at first glance, very different and may therefore require distinct explanations. The first type is a foreign policy under-reaction, epitomized by states failing to deter, repel or prepare for a ‘surprise attack’ by another state, even though such an attack could have been foreseen and means were available to avoid much of the harm caused at comparatively little cost and risk. The second type is a foreign policy over-reaction, which until recently has been less frequently studied and can be illustrated by states launching highly costly and risky pre-emptive or retaliatory attacks against a perceived threat, even though the target of the attack posed no actual threat, or any potential threat could have been addressed with significantly less costly or risky means. When states and international organizations over- and under-react to perceived transboundary threats and hazards that emanate from or easily spread beyond a given state's territory, their mistakes can have equally harmful consequences for the citizens they mean to protect. We do not know empirically which kind of failure is more frequent, but the tendency to focus on warning failures and under-response can lead to problematic prescriptions: ever more warning, higher receptivity, better preparedness and commitment to act early could lead first to costly over-reaction and ultimately to paralysis, as warnings outstrip preventive capacities. It would therefore be desirable to identify a combination of factors or conditions that substantially increases the probability of both under- and over-reaction and thus gives greater confidence to take remedial action.

The argument that failures of perception may cause either under- or over-reaction in foreign policy is not new; it dates back at least to Robert Jervis's seminal work on psychological biases and recurrent errors of judgement in foreign affairs (Jervis Citation1976). However, the extensive United States- (US-) dominated strategic surprise literature still tends to concentrate on cases of under-reaction to impending attacks and treats insights about over-reaction as a by-product (Betts Citation1982; Kam Citation2010; Wohlstetter Citation1962). More recently, a number of authors have characterized the US-led war on terror as an over-reaction and highlighted its various discontents, in terms of solving the original problem, but also in terms of creating new problems along the way (Aradau and Van Munster Citation2007; Desch Citation2007). However, this literature does not offer us a systematic theory of how under- and over-reaction might be linked, nor of the difficulties of successfully navigating the boundary between them.

Moreover, there is still insufficient cross-fertilization between intelligence, security studies and foreign policy analysis, on the one hand, and, on the other, the literature on risk management, regulatory policy and emergency response to diverse types of transboundary threats such as unsafe drugs and foods, flooding, climate change, pandemics or financial system collapse (Bazerman and Watkins Citation2008; Weick and Sutcliffe Citation2007). The lack of attention to these threats is all the more problematic given the shifting and expanding nature of transboundary threats and the concomitant rise of an all-risks approach to foreign affairs visible in states’ national security strategies (Dunn Cavelty and Mauer [Citation2009]; on policy fiascos in the risk era, see Beasley [Citation2016]). The recognition of and response to such threats pose particular challenges as compared to predominantly domestic threats (De Franco and Meyer Citation2011).

This contribution aims to improve cross-fertilization between scholars working on warning failures in the area of national security and those working on risk and disaster management in international public policy. In a first step it develops a single definition of failures to deal adequately with uncertain threats whilst avoiding 20/20 hindsight. It will elaborate which performative acts are most important to what might be called ‘calibrated prevention’ and suggest a typology of failures that could lead to either over- or under-reaction. It will then discuss how to search for common causes. The second section will put these criteria into action by selecting six pilot studies and identifying three common mechanisms at play in both over- and under-reaction cases: (1) institutions have learned the wrong lessons from previous incidents; (2) decision-making is organized within institutional silos focused on only one kind of threat; and (3) actors have strong pre-existing preferences to act or not act. These hypotheses will require more extensive empirical testing in future research.

CONCEPTUALIZING OVER- AND UNDER-REACTION TO TRANSBOUNDARY THREATS

How to define and conceptualize the phenomena of over- and under-reaction in foreign policy? The existing literature on intelligence failures (Betts Citation2007; Jervis Citation2010), foreign policy mistakes (Baldwin Citation2000; Walker and Malici Citation2011), success and failure in public policy (Bovens et al. Citation2001; McDonnell Citation2010) and over- and under-reaction specifically (Maor Citation2012, Citation2014) offers useful starting points. However, the literature also has limitations for our research question and disagrees on the issue of whether objectivity in case identification and policy evaluation is possible and desirable. Constructivist approaches to policy fiascos (Bovens and ‘t Hart Citation1996: 10–11) highlight the non-linear, competitive and ideational nature of the goal-setting process in policy-making, where the meaning and valuation attached to policy goals vary amongst actors as well as over time and policy failure seems to lie ‘largely in the eye of the beholder’ as McDonnell criticizes (Citation2010: 6). In contrast, most scholarship in foreign policy analysis and intelligence studies starts from the premise that the identification of failure or success is both possible and necessary, despite criticism of using unsophisticated frameworks for such judgements (Baldwin Citation2000). Maor aims to reconcile ‘the tension between the objective and subjective dimensions of “overreaction”’ by defining it as ‘policies that impose objective and/or perceived social costs without producing objective and/or perceived benefits’ (Maor Citation2012: 235).

However, the subjective/objective divide stands for quite different research designs in terms of the sampling criteria for cases and the evaluation of mistakes and failures. While it can be illuminating to better understand how and when political actors, publics and news media ‘construct’ foreign affairs fiascos as a first cut, such an analysis needs to be juxtaposed with or followed by a scientifically sound assessment of failure rather than substituting such an assessment with the subjective views of practitioners or publics. Scholarship can and should provide a more rigorous, transparent, nuanced and cautious assessment of policy successes or failures and their causes than politicians, journalists or other experts with less time, appropriate training or awareness of cognitive biases. This is particularly true for the study of foreign policy mistakes, where the risk of unfair accusations and attribution errors is higher than in domestic policy because of the greater uncertainty affecting analytical judgements and the higher probability of unavoidable mistakes (Betts Citation1978). It is also more difficult here to identify what was known and communicated by whom, given the case for maintaining a degree of secrecy about man-made threats to safeguard intelligence sources, methods and relations with foreign governments. More encouragingly, scholars in this policy area will find it easier to identify widely shared agreement about the undesirability of the harm, given its severity and typically symmetric effects. Contestation in foreign policy tends to focus more on the threat assessment and the means to be used for a given goal, rather than the policy goal itself.

A useful starting point for identifying different kinds of ‘failure’ in foreign policy is Walker and Malici's (Citation2011) study of US presidents' foreign policy mistakes. They advance a useful typology by distinguishing between mistakes of omission (‘too little too late’) and commission (‘too much too soon’). They furthermore highlight two cross-cutting dimensions in mistakes of threat diagnosis and policy prescription (ibid.: 54). Using this distinction as a starting point, a more nuanced typology appropriate to the study of over- and under-reaction is developed below and used to select pilot cases (see Table 1).

Table 1 Typology of over- and under-reaction with cases

Walker and Malici's (Citation2011) first dimension of threat diagnosis is in principle applicable to all kinds of threats and hazards, but should be further differentiated into failures of probability assessment and misjudgements of the severity and nature of a given threat. The accuracy of threat diagnosis can only be measured post hoc, even though ex ante we can gauge experts' confidence in the quality of the available evidence, coupled with the past reliability of applicable theories or models to interpret it. Genuinely novel threats are more difficult to accurately forecast, as theories could not be previously tested and may not be applicable. Transboundary threats are more likely to be novel because of the complexity and pace of the interplay between new phenomena such as globalization, technological and demographic change, as well as the expanded range of actors who can influence outcomes (Dunn Cavelty and Mauer Citation2009; Fishbein Citation2011). Furthermore, domestic authorities face greater difficulties in identifying relevant information (because of complexity), accessing information (because of secrecy, linguistic barriers or remoteness), or validating it (because of deception and lack of experience). These problems affect not only man-made but also biological threats. In the case of swine flu, the World Health Organization (WHO) accurately assessed the probable spread of the virus, but did not recognize and communicate early enough that it was no more lethal than a normal flu virus, thus causing over-reaction in many countries. While uncertainty will always be a significant problem in foreign affairs, it does not imply that associated risk assessment is futile or that cost–benefit analysis can be dispensed with, only that the epistemological basis for probabilistic methods may be fragile, contested or highly variable over time (Posner Citation2004: 175–87).
So while it may be easy to see that an over-reaction was caused by an error of threat assessment, the real difficulty lies in deciding whether this error was avoidable and at what point mistakes can be described as ‘failures’ in terms of nature and scale.

Secondly, Walker and Malici (Citation2011) are right to attend to policy itself (‘prescription’), but their focus on defence and security is too narrow for our purposes and insufficiently sensitive to the ‘too little/too much’ dimension of under- or over-reaction. It is proposed instead to focus first on the proportionality of the response in terms of scale and scope. Some types of policy problems, such as protection from floods or vaccination programmes, require a minimum scale of response to be effective at all, whereas others may only partially fail if the response is under-scaled. Over-scaled responses in terms of resource intensity mean not just a lack of efficiency in terms of marginal utility (Baldwin Citation2000), but directly reduce a state's ability to mobilize sufficient resources to prevent or mitigate other types of foreign or indeed domestic threats or hazards to human life and health. The US War on Terror (WOT), including the US invasions of Iraq and Afghanistan, has been estimated by the Congressional Research Service to have cost US$1.6 trillion when narrowly focused on US military operations (Belasco Citation2014), whereas the academic ‘Cost of War Project’ arrived at an economic cost to the US of US$4.4 trillion (Crawford Citation2014). Even without monetizing lives or life-expectancy gained or lost, one can easily characterize this scale of spending as disproportionate in relation to the risk of terrorism and compared to alternative foreign or domestic uses of these resources. Turning to scope, a policy may be mis-designed when, in attempting to tackle a given threat, its effects are (or would have been) either counter-productive or create (would have created) significant displacement threats and risks. It has been argued, for instance, that US practices of renditions, torture, unlawful detention and drone strikes used in the WOT have damaged the US ability to find allies and boosted radicalization and recruitment to violent jihadist groups.
This conceptualization of over-reaction means also that a fair ex post assessment of the proportionality of policy needs to incorporate counterfactual reasoning about alternative consequences that result from either action or inaction with given resources (Baldwin Citation2000).

The third performative dimension that is implicit in Walker and Malici's (Citation2011) typology but not separated out is timeliness. A diagnostic judgement about the high probability of a state attack six months in advance would be very useful for maximizing policy options but also very difficult, whereas the same judgement is typically easier a couple of hours before an attack, when indications/signals are stronger but effective options will have dramatically narrowed. Similarly, whether a given policy reaction is proportional often depends on the evolution of a threat over time, its magnitude as well as its nature: an overwhelming military presence may be appropriate during a particular phase of military operations but counterproductive at earlier or later stages of conflict prevention and resolution. Likewise, countermeasures against pandemics are a race against time where the type of action depends on the spread, mutation and lethality of a virus. Hence, we propose to focus on accurate threat assessment, policy proportionality and timeliness as key challenges to avoid either under- or over-reaction to transboundary threats. In reality, cases will not map neatly onto each of the cells in Table 1, but may show the presence of both kinds of mistakes at different points in time.

Building on these considerations we can now describe both sides of the coin as failures to mobilize the available cognitive and material resources of policy-making in a way that is proportionate and time-sensitive to the severity, probability and nature of a transboundary threat. In the case of under-reaction, the failure lies in not acting early or decisively enough given available knowledge and means, whereas over-reactions are cases where action was taken in response to an exaggerated or illusory threat, or where the threat could realistically have been addressed with significantly less costly or risky means. This definition does not necessarily limit our focus to one particular actor involved in the policy process: analysts, policy-planners, decision-makers or, indeed, operatives involved in implementation. Scholars in intelligence studies have spent considerable effort distinguishing failures of the intelligence community from failures of policy-makers (Jervis Citation2010; Pillar Citation2011). These distinctions also matter to the definition of appropriate criteria for assessing whether a given action was a mere technical mistake, negligence against professional norms, gross incompetence, or outright malfeasance, for instance, when senior decision-makers consciously suppress, obscure or hide ‘inconvenient’ threat assessments.

The other important aspect of this definition lies in the words ‘available’ and ‘realistic’ in recognition of the distinction between ex ante avoidable failures and those actions or inactions that may have caused an over- or under-reaction in terms of ex post cost–benefit assessments, but which were ultimately unavoidable given the knowledge, skills, instruments and conditions at the time – a distinction often acknowledged but rarely heeded in scholarly works on mistakes and missed opportunities (Tuchman Citation1985; Zartman Citation2005). Most public inquiries launched after cases of ‘under-reaction’ revolve around two questions: attributing individual or institutional accountability (‘blame’); and learning lessons about how such harm may be avoided in the future (see Bovens and ‘t Hart Citation2016). The former task is not just hampered by the ‘politics of blame’ (Weaver Citation1986), but also by hindsight bias, as human beings tend to overestimate what was knowable and likely given their knowledge of what actually happened. A good example is the allegedly plentiful and high-quality early warnings about genocide in Rwanda quoted in the literature which, on closer examination, turn out to lack specificity and credibility, or do not satisfy basic criteria to qualify as warnings (Otto Citationforthcoming). Moreover, academic works as well as public inquiries, such as those in the area of conflict prevention, do not sufficiently acknowledge uncertainty about what works in preventing transboundary threats, including trade-offs, moral dilemmas and unintended consequences (Meyer et al. Citation2010). Indeed, some transboundary threats may be too difficult to solve for even the most powerful states, regional bodies and global institutions of governance.
It is instructive that many of the lessons learnt from the financial crisis of 2008 have yet to be fully implemented, including addressing banks that are ‘too big to fail’ and reducing global and regional imbalances in trade and capital flows.

When identifying relevant cases for the proposed pilot study we need not only to conduct ex post cost–benefit calculations in full knowledge of the consequences and an assessment of alternative courses of action, but also to consider the relationship between knowledge, means, time and threat properties. Using the typology elaborated above, three cases each of potential over- and under-reaction to transnational threats were selected according to the following criteria: (1) equal coverage of both security as well as non-conventional transboundary threats; (2) states as well as international organizations and agencies as actors (European Union [EU], Eurocontrol, WHO); (3) a significant degree of news media salience as a potential foreign policy fiasco. The first two criteria are motivated by our aspiration to maximize variation on case properties and thus increase the theoretical yield if common factors or mechanisms can be found. The third criterion reflects a best-case design, as one would expect foreign policy fiascos, especially those on the under-reaction side, to have tangible consequences that draw news media attention and trigger controversies over who (if anyone) deserves blame and how to improve (on the attribution of blame in media narratives, see Oppermann and Spencer [Citation2016]). These case features also enabled better access to relevant information about the performance of different actors and stages, available from public inquiries, official reports and subsequent analysis in the academic literature. Hidden or forgotten foreign policy fiascos may well exist, but given the novelty of this approach the contribution starts with the more visible, low-hanging fruit.

After the initial scanning for suitable cases according to the criteria above, a pilot case analysis was conducted drawing on the preliminary or final results of public inquiries (9/11 Commission Citation2004; Chilcot Inquiry Citation2010; Financial Crisis Inquiry Commission [FCIC] Citation2011; House of Lords Citation2015; Lord Butler Citation2004; WHO Review Committee Citation2011: 7), statements by the actors themselves (e.g. Eurocontrol Citation2010) and secondary literature. The evidential basis varies, but filling all the information gaps through original research would have required a highly resource-intensive process-tracing approach and, in some cases, the kind of access to documents and witnesses that only public inquiries enjoy (e.g. Chilcot Inquiry Citation2010). Space constraints do not allow listing all sources consulted or providing more empirical detail from the longer case summaries that were compiled to cover the different stages in the warning–response process: collection; forecasting; prioritization; mobilization; and implementation. The conclusions as to the type of failure should be regarded as preliminary, especially with regard to the more recent cases. The cases are not necessarily identical in their degree of failure, nor in the extent to which key actors can be held accountable for mistakes made in the process given available knowledge and resources. For instance, US intelligence assessments of the WMD threat were on the whole correct given the available information, but politicians cherry-picked and distorted intelligence to justify their preferred course of action (Fitzgerald and Lebow Citation2006; Jervis Citation2010).

WHEN TO EXPECT UNDER- AND OVER-REACTION

What do we know about the factors that cause over- and under-reaction in international public policy widely conceived? There is no shortage of scholarly works on good judgement in foreign policy (e.g., George Citation1993; Renshon and Larson Citation2003). Similarly, intelligence studies and political psychology highlight biases in information processing and analytical judgements and provide advice on how to compensate for them (Betts Citation2007; Jervis Citation2010). Also relevant, but hardly used in foreign policy analysis, is the public administration literature, which looks at crisis and disaster prevention, preparedness and management (Bazerman and Watkins Citation2008; Comfort et al. Citation2010). The literature does not currently agree on which factors matter most for appropriate responses to non-conventional transboundary threats, and is also marred by an empirical and theoretical bias towards studying cases of under-reaction. Within the pilot case studies we systematically searched for those factors (see Table 2) that, according to the evidence examined, (1) were causally important enough to have affected the scale and scope of the over- or under-reaction, although not necessarily sufficient to have avoided failure per se, and (2) included at least two of the three factors present in all of the cases across the over- and under-reaction divide.

Table 2 Overview of pilot case findings

The distinctiveness of this approach becomes apparent when we reflect on some factors with unidirectional effects. For instance, a high level of politicization or mediatization of a given risk arising either from the news media (Boin et al. Citation2005, Citation2008) or from political actors’ strategies is likely to be positively associated with over-reaction, as it tends to exaggerate risk perceptions and paves the way for extraordinary and therefore more likely disproportionate measures. Conversely, a threat that is off the radar of the news media, public and political debates, such as mass atrocities in foreign countries of no strategic significance, will attract less attention and resources from authorities and therefore makes missing warning signals and hesitant policy responses more likely (Power Citation2003). The other factor often considered detrimental to good threat assessment and proportionate response is uncertainty (Boin et al. Citation2005: 3–54). But insofar as uncertainty is a core challenge to almost any threat assessment and proportionate response in foreign affairs, it is questionable how useful this observation is. For instance, the advice to analysts and policy-makers to ‘reduce uncertainty’ by taking more time to gather more information and conduct deeper and wider analysis will simultaneously reduce the capacity of actors to act early and effectively, thus making under-reaction more likely. High uncertainty also makes it more difficult to identify genuinely avoidable mistakes.

H1: Vivid lessons learnt from recent episodes involving similar threats lead to the over-application of these lessons to threats and scenarios which are in fact significantly different.

A rich literature in international relations argues that lessons learnt from historical cases and episodes structure how human beings, including senior decision-makers and policy communities, perceive reality. They tend to focus on surface similarities (Khong Citation1992: 14; May Citation1973) between current and past cases to fill gaps in their knowledge, make sense of events and anticipate developments. Inferences drawn from past cases will inevitably sometimes turn out to be wrong, but they are more likely to be wrong when experts and decision-makers over-estimate the similarity between past and current cases. In all the cases examined, experts and decision-makers held on to assumptions rooted in lessons learnt from previous cases that turned out to be wrong: national monetary policy was not able to deal with the repercussions of the financial crisis, and markets were surprised that authorities allowed a major investment bank to fail; the swine flu virus (H1N1) was far less deadly than avian flu, but more contagious; Islamist terrorism had ceased to be solely regionally focused and had developed the level of organization, capabilities and intent to mount a major attack on the US mainland; and significant segments of Iraqi society did not respond to regime change as positively as Kosovo Albanians had in 1999.

In all these cases, experts as well as decision-makers based their assumptions on previous experiences of either successful or failed crisis management. US central bankers believed that they could handle a bursting asset bubble, given their experience of successfully handling the fall-out from the bursting of the Dotcom bubble in 2001, and paid little attention to the bubble building in mortgage-backed securities (FCIC Citation2011). The relatively successful hunt for the perpetrators of the 1993 attack on the World Trade Center, the fact that the bombing itself had failed and the lack of subsequent attacks in the US gave many experts the impression that jihadist terrorism was a nuisance but under control (9/11 Commission Citation2004). Their experience with avian flu had convinced epidemiologists that a similarly lethal virus was very dangerous, but could be contained by acting early and decisively (WHO Review Committee Citation2011). And policy-makers in the UK learned from the intervention in Kosovo that regime change can be accomplished militarily and that a successful aftermath would bring around initially hostile public opinion and opposed members of the United Nations (UN) Security Council. In the case of Russia's aggression against Ukraine, the EU Commission largely modelled its Neighbourhood Policy on the Eastern enlargement process of the EU. This resulted in over-applying a template designed for different circumstances and played an important role in underestimating the political vulnerabilities of Ukraine, as well as the risk of robust push-back from Russia (MacFarlane and Menon Citation2014: 96–7).

The experience of failure tends to lead to assumptions supportive of higher sensitivity to threats that look broadly similar, whereas success inspires confidence of being in control of these risks. The greater the sense of failure or, indeed, success in these episodes of crisis learning, the greater the probability that the lessons learnt will be over-applied to threats that may appear at first glance familiar, even when they are in fact different. This is partly because previous experiences constitute an availability heuristic that makes key actors remember more vividly those episodes that, for all kinds of reasons, caused stress and highly emotive reactions (Kahneman et al. Citation1982: 14). Furthermore, successful crisis management tends to create complacency among institutions and leaders and to prolong the tenure of key decision-makers who have seen their previous judgements validated, whereas visible failure sparks critique and can empower the previously ignored ‘doomsayers’. These may not be better at forecasting, just more disposed to pessimism. The net effect of personnel change and higher risk sensitivity could be a pendulum swing from under- to over-reaction.

H2: If threats are managed within rigid institutional silos, it is more likely that novel threats will be either missed or inappropriately dealt with by established diagnostic and policy routines.

Any system of risk governance experiences a tension between allocating the responsibility for preventive policy to one particular part of the administration and the challenge of evaluating risks that arise from either action or inaction of other units. Risk myopia can develop as a result of the inability of the existing institutional configuration to cope with the cross-cutting nature of the risks associated with either action or inaction. The problem is not the allocation of responsibility to one organizational unit per se, but the inability of that particular unit to develop ways of sharing information and consulting with relevant units within and outside the organization in order to accurately identify and assess novel risks, but also to understand the wider impact of its potential responses. The problem of institutional silos for risk management and prevention is still relatively new to the study of foreign policy, although it is well recognized in studies of disasters (Bazerman and Watkins Citation2008: 102–3; Weick and Sutcliffe Citation2007).

We have seen that bodies specialized in one particular area of risk, such as the WHO or national health ministries in the case of swine flu, failed to properly appreciate the economic and political costs of disproportionate preventive action and were thus biased in their approach to calibrating their responses. A similar phenomenon could be seen in the decision by civil aviation authorities, led by Eurocontrol, to completely close Northern European airspace in response to the eruption of the Icelandic volcano Eyjafjallajökull on 14 April 2010. Initially invoking a zero-risk regulatory approach designed for a different situation, manifested in the long-standing guidance of the International Civil Aviation Organization (ICAO) to avoid ash clouds regardless of any concentration thresholds, the authorities cut average daily flights by more than 80 per cent in three days (Alemanno Citation2011: 3–4), disrupting the travel plans of 10 million passengers and costing the industry in excess of US$1.7 billion (Eurocontrol Citation2010). As the human and economic cost of this blanket ban became increasingly apparent, the authorities changed how ash clouds and their concentration were measured and, five days after the eruption, allowed air travel to resume gradually according to three zones of ash-cloud concentration.

Similarly, defence ministries have traditionally focused on the survival of the state and its population against the risk of attacks by other states, including nuclear war. They are used to assessing the risk posed by actual or potential enemies, but they have neither the habit nor the competence to conduct a more wide-ranging risk assessment of their own actions and to consider unintended consequences systematically. By concentrating deliberation about and planning for the invasion of Iraq in the Pentagon, decision-makers missed out on relevant expertise in the State Department relating to the risks of sectarian violence in Iraq and an interest in avoiding damage to the reputation of the US. Similarly, in the case of Ukraine, the EU Commission's Directorate General (DG) for Trade had the lead role in conducting the negotiations with Ukraine over the Deep and Comprehensive Free Trade Agreement (DCFTA) as part of the overall Association Agreement (AA), treating the DCFTA as just another FTA, with attention focused on technical economic and legal issues rather than a wider appreciation of the geopolitical and security risks (Smith Citation2014: 594). This under-appreciation could have been avoided by stronger internal co-ordination with the European External Action Service (EEAS), as well as greater involvement of the Council of Ministers and representatives of EU member states with substantial expertise on Russia.

While over-reaction is more probable when institutional responsibilities for preventive policy are all allocated to the same unit, the risk of under-reaction often arises from the lack of institutional links between risk monitoring and management. A key problem in the inadequate regulation of systemic financial risk was the underlap between different national and international financial regulators in monitoring the stability of increasingly interconnected financial systems. Within financial institutions themselves, the units responsible for monitoring institutional risk exposure were often unaware of the highly specialized work done in those small units of banks that devised the highly profitable but also very risky products (Tett Citation2011). The 9/11 attacks were facilitated by an underlap in institutional responsibilities between the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) for monitoring and countering threats to the US homeland arising from international terrorism (9/11 Commission Citation2004). Recognizing and dealing with novel risks, or novel responses to risks, will always challenge existing institutional configurations, but silo mentalities within and across institutions make blind spots in risk monitoring, management and response more probable.

H3: If the consequences of acting or not acting against a particular threat are highly salient for senior decision-makers, they are likely to misinterpret threats and mis-design policy responses.

We know from experiments that human beings' judgments of a phenomenon are affected by their feelings about its ‘goodness’ or ‘badness’ (Finucane et al. Citation2000). These feelings influence judgments such that risks and benefits are perceived as inversely correlated: a phenomenon seen as very risky cannot be associated with benefits, while a phenomenon seen as beneficial leads actors to downplay the associated risks. This so-called affect heuristic, including the specific case of optimistic bias (Armor and Taylor Citation2002), plays an important role in explaining extreme outcomes such as wishful thinking or denial. In the cases of over- and under-reaction examined, we can find evidence that actors’ strong political or financial preferences affected their balancing judgments. Such motivational biases can arise not only from the impact of the threat itself, but also from internal or external incentive structures such as career advancement, anticipated blame or legal liability.

One case is the inflated ratings given by credit rating agencies (CRAs) to complex structured products involving sub-prime mortgages in the run-up to 2008 (see Kruck Citation2016). The ‘issuer-pays’ model, coupled with insufficient competition among agencies and a high fraction of income from such products, created a systemic conflict of interest: analysts became overconfident in their technical ability to devise highly rated products for satisfied clients and thus to attract further business from them (Mathis et al. Citation2009). It also made CRAs less open to internal sceptics, just as some chief executive officers (CEOs) of banks were not open to warnings that the products from which they currently made considerable profits could soon become a major source of loss (Tett Citation2011). In the case of swine flu, different factors pushed in the same direction of early and vigorous action, such as the prevailing ethos of saving lives by planning for the worst, coupled with subtle and undisclosed conflicts of interest affecting experts on influential WHO and national advisory committees (Cohen and Charter Citation2010). In the run-up to the 9/11 attacks, warnings about al-Qaeda were not welcome, since they appeared to distract from the foreign policy agenda of the new administration (Clarke Citation2004). Motivational biases are also visible in the planning of the Iraq invasion, which saw not only worst-case thinking about the risks posed by inaction, but also wishful thinking about the aftermath of regime change (Fitzgerald and Lebow Citation2006).
In the case of Ukraine, significant reasons why the EU underestimated the strain that the Association Agreement placed on the Yanukovych government and, subsequently, the Russian reaction to its fall were the desire to bring the long-standing negotiations over the Association Agreement to a successful conclusion (Smith Citation2014: 594) and the strong ideational support for the goals of the Euromaidan (House of Lords Citation2015).

We expect motivational biases to be particularly pronounced in settings where changing threat assessments, and acting or not acting on a given risk, have significant redistributive consequences, as in the case of the CRAs in the run-up to the financial crisis. In the area of conventional foreign policy, positive or negative biases in the processing of warning signals may arise from balance-of-threat calculations in which policy-makers interpret potential preventive action from the perspective of whether it will strengthen or weaken potential rivals and enemies. International regulators are concerned about the consequences of being blamed for failures to act by their principals as well as by external pressure groups. While a complete lack of external scrutiny can induce regulators to become complacent and more easily captured by stakeholders, extremely strong scrutiny that is averse to even the smallest risks can lead risk regulators to prioritize action to avoid blame, even if such action creates new kinds of risks in other areas.

CONCLUSION

This contribution has aimed to advance our understanding of over- and under-reaction to transboundary threats, which pose specific challenges in terms of diagnosis and appropriate response. In contrast to the strategic surprise literature in intelligence studies and foreign policy analysis, it has been argued that over- and under-reaction are not completely distinct phenomena that require idiosyncratic explanations, but can be understood as inter-related failures in threat assessment, proportional response and timeliness. This approach places greater emphasis on a substantive and in-depth assessment of the performative acts of various actors in foreign policy-making, rather than a narrower focus on either the legislative process, as in some of the public policy literature (Marsh and McConnell Citation2010), or on senior decision-makers, as in many studies of foreign policy performance (Walker and Malici Citation2011). The typology developed is sensitive to the risk of hindsight bias in ex post assessments of diagnostic judgements, as well as to the need for counterfactual reasoning when assessing alternative choices and displacement risks. A second advantage over most of the literature in intelligence studies and on foreign policy judgements lies in its wide applicability across institutional contexts (regulators, business, intelligence) and types of risks (violence, health, finance), and therefore its ability to highlight the generic problems modern government faces when trying to implement an all-risk approach to transboundary threats.

The second contribution of the article lies in identifying common causes of over- and under-reaction across six pilot studies. We have focused on three factors common to all the cases examined: (1) misapplied lessons learnt from recent vivid crises; (2) the rigidity of institutional silos; and (3) the strength of actor preferences in relation to the expected outcome of preventive action. It is their cumulative effect, rather than the presence of any single factor, that can be expected to tilt the key judgments in a particular direction and to substantially increase the risk of over- or under-reaction. We do not claim that any of these factors is completely novel to the study of international relations and public policy, but our approach breaks new ground by linking them specifically to the challenge of avoiding both over- and under-reaction in foreign policy and by showing their applicability to a wider range of threats.

Even though over- and under-reaction cannot be eradicated, those instances that could be classed as avoidable mistakes or outright disasters can be made less likely by monitoring for and, where possible, mitigating the three factors. As a first step, practitioners should become more reflexive about the lessons learnt from recent and vivid cases of success or failure and, consequently, more sceptical and demanding vis-à-vis arguments that posit similarities with, and widely applicable lessons for, current problems. Secondly, practitioners should recognize the strong influence of motivational biases on potential producers and consumers of warnings and examine whether such biases are caused by misaligned internal incentives or external scrutiny structures that could be altered. Thirdly, it is important to regularly review the allocation of responsibilities for risk monitoring among states and international organizations, as well as the co-ordinating and communication mechanisms between them, to allow for the better integration of relevant knowledge and the management of boundary and displacement risks.

ACKNOWLEDGEMENTS

The author expresses his gratitude for the helpful comments received from the editors, three anonymous reviewers, a discussant, as well as the participants at an ECPR workshop on Non-Proportionate Policy Response organized by Moshe Maor and Jale Tosun. An ERC Starting Grant for the FORESIGHT project (No 202022) helped fund some of the underlying research, and the author wishes to thank John Brante, Chiara de Franco and Florian Otto for helping to inform the author's thinking behind this article, as well as Nikki Ikani for advice on the Ukraine case.

Additional information

Notes on contributors

Christoph O. Meyer

Biographical note: Christoph Meyer is professor in the Department of European & International Studies, King's College London.

References

  • 9/11 Commission (2004) The 9/11 Commission Report, Washington: United States of America Government Printing Office, available at http://govinfo.library.unt.edu/911/report/911Report.pdf (accessed October 2015).
  • Alemanno, A. (ed.) (2011) Governing Disasters: The Challenges of Emergency Risk Regulation, Cheltenham: Edward Elgar.
  • Aradau, C. and Van Munster, R. (2007) ‘Governing terrorism through risk: taking precautions, (un)knowing the future’, European Journal of International Relations 13(1): 89–115. doi: 10.1177/1354066107074290
  • Armor, D.A. and Taylor, S.E. (2002) ‘When predictions fail: the dilemma of unrealistic optimism’, in T. Gilovich, D. Griffin and D. Kahneman (eds), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 334–47.
  • Baldwin, D.A. (2000) ‘Success and failure in foreign policy’, Annual Review of Political Science 3: 167–82. doi: 10.1146/annurev.polisci.3.1.167
  • Bazerman, M.H. and Watkins, M.D. (2008) Predictable Surprises: The Disasters You Should Have Seen Coming, and How to Prevent Them, Boston, MA: Harvard Business School.
  • Beasley, R. (2016) ‘Dissonance and decision-making mistakes in the age of risk’, Journal of European Public Policy, doi:10.1080/13501763.2015.1127276.
  • Belasco, A. (2014) ‘The cost of Iraq, Afghanistan, and other global War on Terror operations since 9/11’, Congressional Research Service, available at http://www.fas.org/sgp/crs/natsec/RL33110.pdf?utm_source=wordtwit&utm_medium=social&utm_campaign=wordtwit (accessed October 2015).
  • Betts, R.K. (1978) ‘Analysis, war and decision: why intelligence failures are inevitable’, World Politics 31(1): 61–89. doi: 10.2307/2009967
  • Betts, R.K. (1982) Surprise Attack, Washington, DC: Brookings Institution.
  • Betts, R.K. (2007) Enemies of Intelligence: Knowledge and Power in American National Security, New York: Columbia University Press.
  • Boin, A., ‘t Hart, P., Stern, E., and Sundelius, B. (2005) The Politics of Crisis Management: Public Leadership under Pressure, Cambridge: Cambridge University Press.
  • Boin, A., McConnell, A., and ‘t Hart, P. (2008) Governing after Crisis: The Politics of Investigation, Accountability and Learning, Cambridge: Cambridge University Press.
  • Bovens, M. and ‘t Hart, P. (1996) Understanding Policy Fiascoes, New Brunswick, NJ: Transaction.
  • Bovens, M. and ‘t Hart, P. (2016) ‘Revisiting the study of policy failures’, Journal of European Public Policy, doi:10.1080/13501763.2015.1127273.
  • Bovens, M., ‘t Hart, P. and Peters, B.G. (2001) Success and Failure in Public Governance: A Comparative Analysis, Cheltenham: Edward Elgar.
  • Chilcot-Inquiry (2010) ‘Iraq inquiry’, available at http://www.iraqinquiry.org.uk (accessed October 2015).
  • Clarke, R.A. (2004) Against all Enemies: Inside America's War on Terror, New York: Simon & Schuster.
  • Cohen, D. and Charter, P. (2010) ‘WHO and the pandemic flu “conspiracies”’, British Medical Journal 340, doi: 10.1136/bmj.c2912.
  • Comfort, L.K., Boin, A. and Demchak, C.C. (2010) Designing Resilience: Preparing for Extreme Events, Pittsburgh, PA: University of Pittsburgh Press.
  • Crawford, N.C. (2014) ‘US costs of wars through 2014’, available at http://costsofwar.org/sites/default/files/articles/20/attachments/Costs%20of%20War%20Summary%20Crawford%20June%202014.pdf (accessed October 2015).
  • De Franco, C. and Meyer, C.O. (2011) Forecasting, Warning, and Responding to Transnational Risks, Basingstoke: Palgrave.
  • Desch, M.C. (2007) ‘America's liberal illiberalism: the ideological origins of overreaction in US foreign policy’, International Security 32(3): 7–43. doi: 10.1162/isec.2008.32.3.7
  • Dunn Cavelty, M. and Mauer, V. (2009) ‘Postmodern intelligence: strategic warning in an age of reflexive intelligence’, Security Dialogue 40(2): 123–44. doi: 10.1177/0967010609103071
  • Eurocontrol (2010) ‘Ash-cloud of April and May 2010: impact on air traffic’, available at https://www.eurocontrol.int/sites/default/files/content/documents/official-documents/facts-and-figures/statfor/ash-impact-air-traffic-2010.pdf (accessed October 2015).
  • Financial Crisis Inquiry Commission (FCIC) (2011) Financial Crisis Inquiry Report: Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States, Washington, DC: Financial Crisis Inquiry Commission.
  • Finucane, M.L., Alhakami, A., Slovic, P. and Johnson, S. (2000) ‘The affect heuristic in judgments of risks and benefits’, Journal of Behavioural Decision Making 13: 1–17. doi: 10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S
  • Fishbein, W.H. (2011) ‘Prospective sense-making: a realistic approach to “foresight for prevention” in an age of complex threats’, in C. De Franco and C.O. Meyer (eds), Forecasting, Warning, and Responding to Transnational Risks: Is Prevention Possible?, Basingstoke: Palgrave, pp. 227–40.
  • Fitzgerald, M. and Lebow, R.N. (2006) ‘Iraq: the mother of all intelligence failures’, Intelligence and National Security 21(5): 884–909. doi: 10.1080/02684520600957811
  • George, A.L. (1993) Bridging the Gap: Theory and Practice in Foreign Policy, Washington, DC: United States Institute of Peace Press.
  • House of Lords (2015) The EU and Russia: Before and beyond the Crisis in Ukraine, HL Paper 115, London: House of Lords, European Union Committee, available at http://www.publications.parliament.uk/pa/ld201415/ldselect/ldeucom/115/115.pdf (accessed July 2015).
  • Jervis, R. (1976) Perception and Misperception in International Politics, Princeton, NJ: Princeton University Press.
  • Jervis, R. (2010) Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War, Ithaca, NY: Cornell University Press.
  • Kahneman, D., Slovic, P. and Tversky, A. (1982) Judgment under Uncertainty: Heuristics and Biases, Cambridge: Cambridge University Press.
  • Kam, E. (2010) Surprise Attack: The Victim's Perspective, Cambridge, MA: Harvard University Press.
  • Khong, Y.F. (1992) Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965, Princeton, NJ: Princeton University Press.
  • Kruck, A. (2016) ‘Resilient blunderers: credit rating fiascos and rating agencies’ institutionalized status as private authorities’, Journal of European Public Policy, doi: 10.1080/13501763.2015.1127274.
  • Lord Butler (2004) Review of Intelligence on Weapons of Mass Destruction: Report of a Committee of Privy Counsellors, London: House of Commons.
  • MacFarlane, N. and Menon, A. (2014) ‘The EU and Ukraine’, Survival 56(3): 95–101. doi: 10.1080/00396338.2014.920139
  • Maor, M. (2012) ‘Policy overreaction’, Journal of Public Policy 32(3): 231–59. doi: 10.1017/S0143814X1200013X
  • Maor, M. (2014) ‘Policy persistence, risk estimation and policy underreaction’, Policy Sciences 47: 425–43. doi: 10.1007/s11077-014-9203-8
  • Marsh, D. and McConnell, A. (2010) ‘Towards a framework for policy success’, Public Administration 88(2): 564–83. doi: 10.1111/j.1467-9299.2009.01803.x
  • Mathis, J., McAndrews, J. and Rochet, J.C. (2009) ‘Rating the raters: are reputation concerns powerful enough to discipline rating agencies?’, Journal of Monetary Economics 56(5): 657–74. doi: 10.1016/j.jmoneco.2009.04.004
  • May, E.R. (1973) Lessons of the Past: The Use and Misuse of History in American Foreign Policy, New York: Oxford University Press.
  • McConnell, A. (2010) ‘Policy success, policy failure and grey areas in-between’, Journal of Public Policy 30(3): 345–62. doi: 10.1017/S0143814X10000152
  • Meyer, C.O., Otto, F., Brante, J. and De Franco, C. (2010) ‘Re-casting the warning-response-problem: persuasion and preventive policy’, International Studies Review 12(4): 556–78. doi: 10.1111/j.1468-2486.2010.00960.x
  • Oppermann, K. and Spencer, A. (2016) ‘Telling stories of failure: narrative constructions of foreign policy fiascos’, Journal of European Public Policy, doi: 10.1080/13501763.2015.1127272.
  • Otto, F. (forthcoming) ‘Hindsight bias and warning: misinterpreting warnings of the Rwandan genocide‘, in C.O. Meyer, J. Brante, C. De Franco and F. Otto (eds), Heeding Warnings about War? Persuasion and Advocacy in Foreign Policy, Cambridge: Cambridge University Press.
  • Pillar, P.R. (2011) Intelligence and US Foreign Policy: Iraq, 9/11, and Misguided Reform, New York: Columbia University Press.
  • Posner, R.A. (2004) Catastrophe: Risk and Response, Oxford: Oxford University Press.
  • Power, S. (2003) A Problem from Hell: America and the Age of Genocide, New York: Harper.
  • Renshon, S.A. and Larson, D.W. (2003) Good Judgment in Foreign Policy: Theory and Application, Lanham, MD: Rowman & Littlefield.
  • Smith, N.R. (2014) ‘The underpinning realpolitik of the EU's policies towards Ukraine: an analysis of interests and norms in the EU–Ukraine Association Agreement’, European Foreign Affairs Review 19(4): 581–96.
  • Tett, G. (2011) ‘Silos and silences: the role of fragmentation in the recent financial crisis’, in C. De Franco and C.O. Meyer (eds), Forecasting, Warning and Responding to Transnational Risks: Is Prevention Possible?, Basingstoke: Palgrave, pp. 208–16.
  • Tuchman, B.W. (1985) The March of Folly: From Troy to Vietnam, New York: Ballantine Books.
  • Walker, S.G. and Malici, A. (2011) US Presidents and Foreign Policy Mistakes, Stanford, CA: Stanford Security Series.
  • Weaver, R.K. (1986) ‘The politics of blame avoidance’, Journal of Public Policy 6(4): 371–98. doi: 10.1017/S0143814X00004219
  • Weick, K.E. and Sutcliffe, K.M. (2007) Managing the Unexpected: Resilient Performance in an Age of Uncertainty, San Francisco, CA: Jossey-Bass.
  • WHO Review Committee (2011) Report of the Review Committee on the Functioning of the International Health Regulations (2005) and on Pandemic Influenza A (H1N1) 2009, Geneva: World Health Organisation.
  • Wohlstetter, R. (1962) Pearl Harbor: Warning and Decision, Stanford, CA: Stanford University Press.
  • Zartman, W.I. (2005) Cowardly Lions: Missed Opportunities to Prevent Deadly Conflict and State Collapse, Boulder, CO: Lynne Rienner.