Special Section: Psychology of Intelligence

Beyond Bias Minimization: Improving Intelligence with Optimization and Human Augmentation

Abstract

For the last half-century, the U.S. and Allied Intelligence Community (IC) has sought to minimize the ostensibly detrimental effects of cognitive biases on intelligence practice. The dominant approach has been to develop structured analytic techniques (SATs), teach them to analysts in brief training sessions, provide the means to use SATs on the job, and hope they work. The SAT approach, however, suffers from severe conceptual problems and a paucity of support from scientific research. For example, a highly promoted SAT—the analysis of competing hypotheses—was shown in several recent studies to either not improve judgment quality or to make it worse. This article recaps the key problems with the SAT approach and sketches some alternative interventions. At the core of these proposals is the idea that intelligence agencies should be focused broadly on improving intelligence and not narrowly on minimizing bias. While the latter contributes to achieving the former, overemphasis on bias minimization could inadvertently bias agencies toward a singular form of intervention, blinding them to potentially more effective interventions. Two lines of alternative intervention are sketched. The first line focuses on postanalytic statistical optimization methods such as recalibration and performance-weighted aggregation of analysts’ judgments. The second line focuses on a broad human augmentation program to optimize human cognition through better sleep, exercise, nutrition (including nootropic compounds), and biometric tracking. Both lines of effort would require substantial scientific investment by the IC to examine risks and efficacy.

Intelligence as a psychological system

Intelligence production by a network of governmental organizations (the Intelligence Community [IC]) can be considered state-sponsored higher-order cognition, akin to what would traditionally constitute thinking, reasoning, and judgment. These cognitive processes support other higher-order cognitive functions, which principally include planning and decisionmaking, and which translate to policymaking at the state level. These cognitive processes require executive functioning, including the control of memory and attention and the prioritization of competing cognitive tasks.

The IC not only performs the equivalent of cognitive tasks; it also functions as a multisensory system for the state, thus complementing its cognitive functions with perceptual functions. A vast amount of the resources allocated to the IC support this perceptual system in the form of collections capability. Parts of the IC also conduct covert operations, which serve highly specialized “motor functions” of the state. Each set of functions—perceptual, cognitive, and motor—is vital to the survival of biological organisms and, arguably, to nation-states.

This article focuses on the cognitive aspect of intelligence—what, thanks to the maverick efforts of Richards Heuer, Jr., came to be known as the psychology of intelligence analysis within the IC.Footnote1 A great deal of interest in the psychology of intelligence analysis came to be focused on minimizing cognitive bias, which is critically assessed in this article and contrasted with a more encompassing focus on improving intelligence analysis. Although the present article deals with the cognitive aspect of intelligence, as the foregoing statements hopefully make clear, the metaphor of intelligence as a psychological system could be expanded beyond the cognitive domain to more fully encompass the functions of intelligence.

The bias minimization paradigm and the SAT approach

Jack Davis, in his Introduction to Heuer’s Psychology of Intelligence Analysis, writes: “Heuer’s message to analysts can be encapsulated by quoting two sentences from Chapter 4 of this book: ‘Intelligence analysts should be self-conscious about their reasoning processes. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.’”Footnote2 Put differently, Heuer prompted analysts to become metacognitively vigilant; namely, to actively think about their thinking processes and not merely focus on the substantive topic they were analyzing. As he put it elsewhere in his book, “Another significant question concerns the extent to which analysts possess an accurate understanding of their own mental processes. How good is their insight into how they weigh evidence in making judgments?”Footnote3

Heuer did not stop with admonitions to be metacognitively self-aware. He promoted a variety of methods that came to be known as structured analytic techniques (SATs), which were intended to help analysts analyze—namely, think, reason, and judge—more effectively. SATs purportedly accomplish this by reducing one or more cognitive or motivational biases that can undermine the rigor of intelligence analysis. For instance, Heuer advocated getting the information out of the analyst’s head and into an externalized form (e.g., a spreadsheet) to reduce biases that stem from selective attention, availability of highly salient pieces of information, primacy or recency effects, and so on. As another example, he recommended various techniques aimed at getting analysts to defer zeroing in on a single hypothesis that seemed to them to be good enough. Thus, his best-known SAT—the analysis of competing hypotheses (ACH) technique, which offers an evidence (rows) × hypothesis (columns) spreadsheet with scoring rules for ranking alternative hypotheses—aims explicitly to prevent analysts from falling prey to confirmation biasFootnote4 due to a bounded-rational satisficing strategy.Footnote5
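To make the mechanics concrete, the sketch below implements an ACH-style matrix in which each piece of evidence is rated against each hypothesis and hypotheses are ranked by total inconsistency, with the least inconsistent hypothesis ranked first. The ratings, weights, and example evidence are hypothetical, and the scoring rule is a simplified stand-in for the scoring guidance found in ACH training materials rather than a faithful reproduction of it.

```python
# Minimal sketch of an ACH-style evidence x hypothesis matrix.
# Ratings: "CC" (very consistent), "C" (consistent), "N" (neutral/not applicable),
# "I" (inconsistent), "II" (very inconsistent). Weights are illustrative only.

INCONSISTENCY_WEIGHTS = {"CC": 0.0, "C": 0.0, "N": 0.0, "I": 1.0, "II": 2.0}

# Rows = evidence items, columns = hypotheses (hypothetical example data).
matrix = {
    "E1: troop movements observed": {"H1: invasion": "C",  "H2: exercise": "C"},
    "E2: no logistics buildup":      {"H1: invasion": "II", "H2: exercise": "C"},
    "E3: diplomatic talks ongoing":  {"H1: invasion": "I",  "H2: exercise": "N"},
}

def inconsistency_scores(matrix):
    """Sum inconsistency weights per hypothesis; lower totals are favored."""
    totals = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            totals[hypothesis] = totals.get(hypothesis, 0.0) + INCONSISTENCY_WEIGHTS[rating]
    return totals

scores = inconsistency_scores(matrix)
for hypothesis, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{hypothesis}: inconsistency = {score}")
# Hypotheses with the least inconsistent evidence rank first, mirroring ACH's
# emphasis on disconfirming evidence rather than confirming evidence.
```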

Although Heuer recommended that the IC invest in research aimed at improving analysts’ reasoning, an immediate consequence of the SAT approach was to entrench the view that unaided human reasoning was “intuitive,” “biased,” and likely to be error-prone, a viewpoint that was largely shaped by the heuristics and biases program of research,Footnote6 but which has shown little awareness of, or responsiveness to, conceptual and empirical challenges to that perspective.Footnote7 The treatment for this endemic condition (if not the “cure”) was largely to be taken up by a cadre of analytic tradecraft experts comprising mainly IC government workers and peppered with entrepreneurs. Both groups tended to have prior experience as analysts. Heuer published, with retired analyst Randolph H. Pherson, multiple editions of a compilation of several dozen SATs in their Structured Analytic Techniques for Intelligence Analysis, which remains a mainstay of intelligence tradecraft practitioners to this day.Footnote8

Generally, the government programs responsible for delivering SAT training to help analysts cope with their inherent biases were staffed by trainers who could digest the techniques and explain them to analyst trainees. However, few appear to have been well equipped with the research background to critically assess and scientifically test the efficacy of SATs.Footnote9 Crucially, as the SAT approach spread, organizational resources to support such research were scarce. Accordingly, the SAT approach did not productively evolve in response to scientific testing. It spread destructively because its adherents did not apply the same critical thinking they recommended analysts undertake when considering whether what they were training was effective. Although the SAT approach spread, the material and delivery of SAT training ossified in many intelligence organizations.

The SAT approach also served a vital accountability function that could not easily be dismissed. Once analysts were routinely trained in these techniques, the IC could point to such training as a process-based method by which they were instilling analytic rigor. Accountability pressure to expand and institutionalize the SAT approach ramped up with the IC restructuring after 11 September 2001 and the Iraqi weapons of mass destruction misestimate.Footnote10 In 2004, the U.S. Congress passed the Intelligence Reform and Terrorism Prevention Act (IRTPA), creating the Office of the Director of National Intelligence (ODNI). IRTPA required ODNI to implement common analytic standards across the IC, which included “ensuring that elements of the IC conduct alternative (‘red-team’) analysis of information and conclusions in IC products.”Footnote11 The following year, the CIA’s Kent School for Intelligence Analysis published an influential Tradecraft Primer comprising instruction on several SATs, including ACH.Footnote12 In 2007, the ODNI issued Intelligence Community Directive (ICD) 203, for the first time establishing community-wide standards for the production and evaluation of intelligence.Footnote13 Critically, ICD 203 required analysts to “utilize appropriate structured analytic techniques routinely in their analysis to enhance analytic rigor and critical thinking.”Footnote14 The SAT approach soon proliferated across all agencies of the U.S. IC and Allied intelligence services,Footnote15 and was further entrenched by its incorporation into postsecondary intelligence studies curricula.Footnote16

Perhaps because the last two decades have seen a push to expand and institutionalize the SAT approach as an antidote to analysts’ cognitive limitations, there has also been increased scholarship on the topic, especially over the last decade. A growing body of literature has highlighted serious conceptual problems underlying the SAT approach.Footnote17 One problem is that SATs do not account for the inherent bipolarity of most cognitive biases. That is, cognitive biases are often associated with opposing biases (e.g., overconfidence and underconfidence exist at opposite ends of the same bias spectrum) but SATs often assume bias unipolarity (e.g., by focusing exclusively on mitigating overconfidence and ignoring the possibility of, or costs associated with, underconfidence).Footnote18 Given that sound analytic judgment requires striking a balance between opposing biases by zeroing in on optimal cognitive sweet spots, the lopsided SAT approach may result in overcorrection or even augmentation of the opposing bias from the start.Footnote19 For instance, if analysts are trained to fixate on the potential perils of overconfidence, they may take active steps to reduce their confidence levels in their judgments. However, if they are already underconfident, the effect of bias “correction” will be to amplify rather than tamp down on bias. Indeed, in examining over 2,000 strategic intelligence forecasts, Mandel and Barnes found that Canadian analysts were significantly underconfident in the strategic intelligence forecasts they produced.Footnote20
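The bipolarity point can be made concrete with a simple calibration check. The sketch below uses hypothetical forecast data (not data from the cited studies) to compare mean stated confidence with the observed hit rate: a positive gap suggests overconfidence, a negative gap underconfidence, and a blanket confidence-lowering “correction” would amplify the latter.

```python
# Minimal calibration check on a set of probabilistic forecasts.
# forecasts: probability assigned to the event; outcomes: 1 if it occurred, else 0.
# Data are hypothetical, for illustration only.
forecasts = [0.60, 0.70, 0.55, 0.80, 0.65, 0.75]
outcomes  = [1,    1,    1,    1,    0,    1]

mean_confidence = sum(forecasts) / len(forecasts)
hit_rate = sum(outcomes) / len(outcomes)
gap = mean_confidence - hit_rate

print(f"mean confidence = {mean_confidence:.2f}, hit rate = {hit_rate:.2f}")
if gap > 0:
    print(f"overconfident by {gap:.2f}: lowering confidence would reduce the bias")
elif gap < 0:
    print(f"underconfident by {-gap:.2f}: lowering confidence would amplify the bias")
else:
    print("well calibrated on this (tiny) sample")
```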

Another conceptual problem underlying the SAT approach is referred to as noise neglect.Footnote21 Several SATs seek to produce more consistent assessments by facilitating the decomposition of complex judgments into simpler constituent judgments.Footnote22 For instance, using ACH, analysts are instructed to individually assess the consistency of each piece of evidence and important assumptions concerning each hypothesis under evaluation before making an overall assessment. By externalizing a serialized process, ACH (much like other SATs) conveys a veneer of objectivity. However, these techniques still depend on a wide variety of subjective analytic inputs, even when analysts are interpreting relatively objective data (e.g., electronic signatures acquired by automated sensors). This means that noise (i.e., random variation in subjective assessments of the same data) is generated at each stage of the process, fed into subsequent stages, and accumulates over the course of the multistage assessment.Footnote23 Simply put, rather than improving the consistency of an inherently noisy process, SATs can introduce more opportunities for noise generation.Footnote24 This issue is exacerbated by the fact that SAT instructions are often vague or ambiguous.Footnote25 For instance, instructions on the application of ACH invariably fail to clearly specify what the core term “consistency” means, leaving it to the analyst to operationalize. Naturally, different analysts may come to different conclusions, and the same analyst may do so on different occasions. This neglect of noise reflects a preoccupation with bias minimization, even though threats to accuracy owing to noise tend to be more widespread and severe.Footnote26
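To illustrate the noise-accumulation concern, the toy simulation below (a sketch with arbitrary, purely illustrative parameters) passes a judgment through a series of stages, each adding independent random variation; the spread of final judgments across simulated analysts grows with the number of subjective stages.

```python
# Toy Monte Carlo: variability of a multistage, decomposed judgment grows
# as independent noise is injected at each subjective stage.
# All parameters are arbitrary and illustrative.
import random
import statistics

random.seed(1)

def final_judgment(true_value: float, n_stages: int, noise_sd: float) -> float:
    """Pass a 'true' assessment through n noisy subjective stages."""
    value = true_value
    for _ in range(n_stages):
        value += random.gauss(0.0, noise_sd)  # each stage adds its own noise
    return value

TRUE_VALUE, NOISE_SD, N_ANALYSTS = 0.0, 1.0, 5000

for n_stages in (1, 4, 8):
    judgments = [final_judgment(TRUE_VALUE, n_stages, NOISE_SD) for _ in range(N_ANALYSTS)]
    spread = statistics.stdev(judgments)
    print(f"{n_stages} stages -> judgment SD ~ {spread:.2f}")
# With independent stage noise, the SD grows roughly with sqrt(n_stages), so
# more decomposition can mean more, not less, disagreement across analysts.
```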

Another critique of the SAT approach is that few techniques have ever been subjected to rigorous empirical testing.Footnote27 Instead, SATs have largely developed according to the goodness heuristic, whereby intuitively “good” ideas are assumed to be good and techniques are adopted without scientific validation, although most good ideas nevertheless fail.Footnote28 In recent years, many studies conducted outside of the IC have suggested that SATs do not deliver on the promise of enhanced rigor and judgment quality. For instance, several studies find that ACH either fails to improve judgment quality or makes it worse.Footnote29 This has been demonstrated in studies with intelligence analystsFootnote30 and in studies using information presented in forms commonly encountered in intelligence analysis, such as source reliability ratings, information credibility ratings, and verbal probabilities.Footnote31 Research on the group brainstorming technique endorsed by the CIA’s Tradecraft Primer reveals that it reduces the quality and quantity of ideas.Footnote32 In other cases, there are important conceptual flaws: for instance, the Indicators Validator technique, which is used to identify collection priorities,Footnote33 neglects relevant base-rate information and generates collection priorities that are inconsistent with normative models of information utility.Footnote34

New frontiers: statistical optimization

By parochially focusing on preanalytic “debias-analysts-prior-to-their-assessments” interventions in the form of SATs, the IC has neglected extensive research on postanalytic interventions, which have been shown to reduce bias and noise, thereby improving judgment quality. Recent studies have demonstrated how human judgments can be improved through a variety of statistical optimization procedures. One subset of such methods involves mathematically recalibrating judgments to correct for observable biases, such as under- and overconfidence. Recalibration has been shown to boost accuracy in geopolitical forecasting tournamentsFootnote35 and actual intelligence assessments.Footnote36 For instance, “coherentizing” probability judgments to conform to the axioms of probability calculus (e.g., unitarity and additivity constraints) can improve accuracy.Footnote37 Coherentizing probabilities involves renormalizing them so that they are additive and addresses the fact that probabilistic assessments of mutually exclusive and exhaustive hypotheses often do not sum to 1.Footnote38 For example, studies have shown that the judged probabilities of a terrorist attack and of no attack in the same location and timeframe add up to significantly less than 1.Footnote39 Another recalibration technique that has proven effective in several studies funded by ODNI’s Intelligence Advanced Research Projects Activity (IARPA) involves “extremizing” aggregated probability judgments by pushing them closer to the extreme probabilities of 0 or 1.Footnote40 In a real-world application, Mandel and Barnes were able to significantly reduce underconfidence in Canadian strategic intelligence forecasts using a simple extremizing procedure.Footnote41
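For concreteness, the sketch below implements simple versions of both ideas: coherentization renormalizes judged probabilities over a mutually exclusive and exhaustive hypothesis set so that they sum to 1, and extremization applies a power-law transform that pushes a probability toward 0 or 1. The exponent and the example probabilities are illustrative assumptions, not values taken from the cited studies.

```python
# Minimal sketches of two postanalytic recalibration methods.
# Coherentization: renormalize probabilities of mutually exclusive, exhaustive
# hypotheses so they respect additivity (sum to 1).
# Extremization: push a probability toward 0 or 1; the exponent a is illustrative.

def coherentize(probs):
    """Renormalize judged probabilities so they sum to 1 (simple additive repair)."""
    total = sum(probs)
    if total == 0:
        raise ValueError("probabilities cannot all be zero")
    return [p / total for p in probs]

def extremize(p, a=2.0):
    """Power-law extremizing transform: a > 1 pushes p toward 0 or 1."""
    return p**a / (p**a + (1.0 - p)**a)

# Example: judged probabilities of attack vs. no attack that sum to only 0.8.
judged = [0.30, 0.50]
print(coherentize(judged))        # -> [0.375, 0.625], now additive
print(round(extremize(0.70), 3))  # -> 0.845, a more extreme (confident) forecast
```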

Postanalytic interventions also include statistical aggregation methods. Simple aggregation methods such as computing the unweighted average of multiple assessments from independent assessors can reduce judgment error,Footnote42 whereas more sophisticated performance-weighted methods boost accuracy by leveraging individual differences in some aspect of assessor performance.Footnote43 For instance, one method that has shown promise involves posing logically related questions to assessors that allow the coherence of their judgments to be gauged.Footnote44 The individual differences in coherence are then used to weight each assessor’s contribution to an aggregated estimate. Those who are more coherent (e.g., who assign probabilities that respect the additivity constraint) receive more weight. Studies have further demonstrated the efficacy of applying ensembles of postanalytic techniques (e.g., recalibration and aggregation).Footnote45 For example, Mandel, Karvetski, and Dhami directly compared the effects of ACH versus coherence-based recalibration and aggregation on judgment quality in a sample of UK intelligence analysts who were given a probabilistic hypothesis-testing task.Footnote46 Whereas ACH did not improve (and sometimes degraded) coherence and accuracy compared to analysts in a control group who were not instructed to use ACH or any other SAT, both postanalytic interventions substantially improved judgment quality.
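A minimal sketch of coherence-weighted aggregation follows. It assumes each assessor judges both an event and its complement, measures incoherence as the deviation of that pair’s sum from 1, and down-weights less coherent assessors; the particular weighting function and data are illustrative, as published coherence-weighting methods differ in their details.

```python
# Sketch of coherence-weighted aggregation of probability judgments.
# Each assessor judges P(event) and P(not event); the further the pair's sum is
# from 1, the less coherent the assessor and the lower their weight.
# The weighting function and data below are illustrative assumptions.

assessors = [
    {"p_event": 0.70, "p_complement": 0.30},  # perfectly additive
    {"p_event": 0.60, "p_complement": 0.60},  # sums to 1.2: incoherent
    {"p_event": 0.80, "p_complement": 0.15},  # sums to 0.95: mildly incoherent
]

def incoherence(a):
    """Absolute deviation of the binary partition's probabilities from additivity."""
    return abs(a["p_event"] + a["p_complement"] - 1.0)

def coherence_weighted_average(assessors, sensitivity=5.0):
    """Weight each assessor's P(event) by a decreasing function of incoherence."""
    weights = [1.0 / (1.0 + sensitivity * incoherence(a)) for a in assessors]
    total = sum(weights)
    return sum(w * a["p_event"] for w, a in zip(weights, assessors)) / total

unweighted = sum(a["p_event"] for a in assessors) / len(assessors)
print(f"unweighted average:         {unweighted:.3f}")
print(f"coherence-weighted average: {coherence_weighted_average(assessors):.3f}")
```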

The IC should collaborate with judgment and decision scientists to trial the postanalytic interventions described above in real-world intelligence contexts. This would mark an important turning point away from the bias minimization paradigm toward an improving intelligence paradigm. As demonstrated by Mandel and Barnes, postanalytic interventions can be tested using extant intelligence assessments.Footnote47 The IC might fruitfully experiment with data from the Intelligence Community Prediction Market (ICPM), launched in 2010, which now represents the largest classified dataset tracking analytic judgment accuracy in IC history.Footnote48 Unclassified forecasting platforms, such as the UK government’s Cosmic Bazaar, could also provide data to validate postanalytic interventions.Footnote49

Additionally, much has been learned from multiple scientific programs funded by IARPA, which have focused on improving the U.S. IC’s forecasting ability (the most notable was the Aggregative Contingent Estimation program led by Jason Matheny). As noted earlier, much of the work on statistical optimization methods was funded by such programs. The research and knowledge generated by such programs, however, ought to be integrated into intelligence practice. There must be systematic monitoring of how doing so affects the quality of intelligence assessments, which requires tracking forecast accuracy in real intelligence products.Footnote50 At present, there is little work that directly compares traditional analysis to methods that rely on new architectures, such as crowdsourced forecasting platforms like ICPM, and some of it is of questionable quality.Footnote51 However, there are signs that the IC is thinking about these and even more radical machine-supported options. For instance, unclassified ODNI strategic documents suggest that efforts to augment intelligence analysis with artificial intelligence (AI) and machine learning (ML) are ongoing.Footnote52 If the IC can reorganize to permit hybrid human–machine analyses, it should be able to implement statistical optimization procedures, which are technologically less disruptive.
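Such monitoring presupposes an explicit accuracy metric. One standard choice in the forecasting research cited here is the Brier score, sketched below for a set of resolved binary forecasts (the forecast and outcome values are hypothetical).

```python
# Minimal Brier score computation for resolved binary forecasts.
# Lower is better: 0 is a perfect score, and 0.25 matches always forecasting 0.5.
# Forecast/outcome pairs are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have equal length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

analyst_forecasts = [0.80, 0.20, 0.65, 0.90, 0.40]
resolved_outcomes = [1,    0,    1,    1,    0]
print(f"Brier score: {brier_score(analyst_forecasts, resolved_outcomes):.3f}")
```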

New frontiers: human augmentation

Efforts to harness AI/ML for intelligence production coincide with growing interest in human augmentation across the broader defense and security community.Footnote53 Human augmentation broadly refers to applying science and technology for performance optimization (i.e., improving human performance within its natural limits) and performance enhancement (i.e., improving human performance beyond its natural limits).Footnote54 As noted by U.S. Air Force Chief Scientist Werner Dahm, “natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds that technologies offer or demand.”Footnote55 In this context, human augmentation represents a necessary “binding agent” for effective human–machine teaming.Footnote56

While working to leverage AI/ML and the voluminous psychological and cognitive science research produced in the decades since Psychology of Intelligence Analysis, the IC could invest in a broad human augmentation program to improve the quality of analysts’ cognition. Extensive research has identified low-risk and noninvasive methods for augmenting cognitive function,Footnote57 many of which have undergone testing and implementation within the IC-adjacent operational community,Footnote58 but have never been trialed in analytic contexts.

It has long been known, for instance, that sleep is vital to long-term memory consolidationFootnote59 and is essential both before and after learning.Footnote60 Because the quality of intelligence assessments depends on the ability of analysts to remember and piece together facts from large bodies of data, the IC could invest in optimizing analysts’ sleep. Here, the IC might draw on findings from the operational community, which has developed various methods for optimizing the duration and quality of sleep, as well as strategies for maintaining cognitive performance under conditions of partial and total sleep deprivation.Footnote61 As biometric technologies rapidly advance and options for sleep tracking proliferate,Footnote62 it is within reach of the average consumer to track multiple aspects of their sleep, including detection of blood oxygenation levels during sleep, frequency of sleep interruptions, and duration of sleep components (e.g., rapid eye movement and deep sleep). Intelligence organizations could explore ways to incentivize biometric tracking and provide health support to analysts where needed.

It is also well known that regular exercise is vital to physical and mental well-being. Acute exercise stimulates the production of brain-derived neurotrophic factor and has been linked to improvements in memory,Footnote63 and a recent meta-analysis showed that exercise could improve inhibitory control and cognitive flexibility.Footnote64 Another meta-analysis of studies examining the effect of workplace exercise on workers’ mental and general quality of life found promising results, especially when the exercise programs were unsupervised.Footnote65 Forward-looking intelligence organizations could explore incorporating individualized exercise programs into analysts’ daily schedules to improve their health and cognition.

Intelligence organizations could further enhance cognition by incentivizing good nutrition.Footnote66 For instance, a recent systematic review found that the Mediterranean-DASH [Dietary Approaches to Stop Hypertension] Intervention for Neurodegenerative Delay (MIND) diet, a hybrid of the Mediterranean and DASH diets specifically tailored to optimize neuroprotection and prevent cognitive decline, was positively associated with specific cognitive functions as well as with global cognitive functioning,Footnote67 and the MIND diet has been shown to substantially slow age-related cognitive decline in older adults.Footnote68 Within the U.S. Department of Defense, research is underway to investigate how nutritional interventions targeting the gut microbiome (i.e., the combined genetic material of microorganisms inhabiting the human gut) can enhance cognition, as well as other health and performance outcomes.Footnote69

A specialized nutrition strategy might incorporate supplementation with nootropics, broadly referring to chemical substances hypothesized to enhance human cognition.Footnote70 Nootropics typically function as agonists or antagonists of certain neurotransmitters.Footnote71 Recent systematic reviews suggest that several nootropics yield benefits across multiple domains of cognition, including complex attention, learning, and memory.Footnote72 The most commonly self-administered nootropic is, unsurprisingly, caffeine.Footnote73 Caffeine is a psychostimulant shown to improve attention, alertness, and performance. There is evidence to suggest that it works synergistically with other substances. For instance, a meta-analysis suggests that when caffeine is combined with L-theanine, an ingredient found in green tea that promotes calmness, it can boost attentional switching accuracy.Footnote74 Evidence also suggests that combining caffeine with Alpinia galanga (a plant from the ginger family) can prolong the attentional benefits of caffeine, thus delaying the notorious “caffeine crash” that many users experience with sustained caffeine consumption.Footnote75 Among the lesser-known plant compounds, Bacopa monnieri and Ginkgo biloba have been shown to improve attention, learning, memory, and executive functions, and Bacopa monnieri has also been found to improve language-related cognition.Footnote76 Withania somnifera (Ashwagandha), in addition to showing promise in improving attention, memory, and information processing speed, has been shown to reduce anxiety and depression.Footnote77 Ashwagandha also holds promise as an adaptogen, assisting in regulating cortisol in response to stress.Footnote78

The interventions described above ought to be the focus of a concerted research and development program, led by the IC in close coordination with the operational community, to improve intelligence through optimized human performance. The IC will have to systematically investigate how extant methods of human augmentation influence performance in analytic contexts while also monitoring emergent technologies, such as gene editing, cybernetics, and nanotechnology.Footnote79 A human augmentation program that prioritizes physical and mental wellness could, in addition to improving intelligence analysis, substantially contribute to the overall well-being of the IC’s workforce, with downstream benefits for society at large.

Any such research on human augmentation will naturally require careful consideration of various medical, ethical, and legal dimensions.Footnote80 For instance, most methods of human augmentation require collecting sensitive personal data (e.g., biomarkers), which could be misused internally or targeted by hostile actors.Footnote81 As well, interventions that affect such basic biological necessities as sleep, exercise, and nutrition could be viewed as a violation of personal lifestyle choices and would require a careful delineation of personal–professional boundaries. Another issue is that, during randomized clinical trials, participants may experience negative side effects as part of a treatment group, or they may be excluded from a beneficial treatment as part of a control group. To address these issues, the IC should look to medicine and other experimental fields that have developed detailed procedures to ensure the safety, privacy, and autonomy of participants while adhering to strict legal and ethical standards.

Conclusion

The psychology of intelligence analysis emerged in the last quarter of the twentieth century and was driven mainly by a focus on minimizing cognitive biases as a means of improving the quality of intelligence analysis. While bias minimization is a noble goal, the efficacy of methods directed at achieving it should be established scientifically. Good ideas that fail to yield empirical and nonanecdotal evidence of improvement in analytic products, as measured by such indicators as accuracy or adherence to logical constraints on judgments, should be cast aside, allowing for other good ideas to be tested. The ostensible goodness of an idea, however compelling it may seem, cannot be taken as proof of its efficacy.

The goal of bias minimization should also be recast as merely one part of the broader aim of improving intelligence analysis. This requires not only minimizing bias but also minimizing noise and optimizing human performance, especially as it relates to cognition and underlying brain activity. Achieving this goal is vital given the technological pressures modern societies will likely face as they move into the second quarter of the twenty-first century. Already there is growing anxiety about the encroachment of AI on areas of work once thought to represent uniquely human activity due to their reliance on thinking and creativity.Footnote82 While AI raises the specter of human irrelevance, biotechnological advances raise the prospect of achieving actuarial escape velocity—the point at which mortality rates decrease so quickly that remaining life expectancy increases with time.Footnote83 If human judgment is to play a vital role in important decisions in the future, and if humans are to have long, productive lives, human cognition will have to be optimized through all productive means. The statistical optimization of human judgments and the science of human augmentation are two avenues worth exploring.

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

David R. Mandel

David R. Mandel, Ph.D., is a senior Defence Scientist in the Intelligence, Influence, and Collaboration Section of Defence Research and Development Canada. He is also an Adjunct Professor of Psychology at York University. Mandel’s research focuses on judgment and decisionmaking as it pertains to humans in general and to intelligence analysts in particular. Mandel is a recipient of a North Atlantic Treaty Organization System Analysis and Studies Panel Excellence Award for a Research Task Group. The author can be contacted at [email protected].

Daniel Irwin

Daniel Irwin holds an M.S. in Applied Intelligence from Mercyhurst University. His research focuses on the assessment and communication of uncertainty, especially in the domain of intelligence analysis. His work has been published in journals including Risk Analysis, Judgment and Decision Making, and Intelligence and National Security. In 2020, Irwin was awarded a North Atlantic Treaty Organization System Analysis and Studies Panel Excellence Award.

Notes

1 Richards J. Heuer, Psychology of Intelligence Analysis (Washington, DC: Central Intelligence Agency Center for the Study of Intelligence, 1999).

2 Ibid., p. xiii.

3 Ibid., p. 55.

4 Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology, Vol. 2, No. 2 (1998), pp. 175–220.

5 Simon Pope and Audun Jøsang, “Analysis of Competing Hypotheses Using Subjective Logic” (The 10th Annual Command and Control Research and Technology Symposium, McLean, VA, 2005), https://apps.dtic.mil/sti/pdfs/ADA463907.pdf (accessed 16 August 2022).

6 Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), Judgment under Uncertainty: Heuristics and Biases (Cambridge, UK: Cambridge University Press, 1982).

7 Gerd Gigerenzer, “On Narrow Norms and Vague Heuristics: A Reply to Kahneman and Tversky,” Psychological Review, Vol. 103, No. 3 (1996), pp. 592–596; David R. Mandel, “Do Framing Effects Reveal Irrational Choice?” Journal of Experimental Psychology: General, Vol. 143, No. 3 (2014), pp. 1185–1198.

8 Randolph H. Pherson and Richards J. Heuer, Structured Analytic Techniques for Intelligence Analysis, 3rd ed. (Thousand Oaks, CA: SAGE, CQ Press, 2020).

9 Stephen Marrin, “Training and Educating U.S. Intelligence Analysts,” International Journal of Intelligence and CounterIntelligence, Vol. 22, No. 1 (2009), pp. 131–146.

10 Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, pp. 5–7.

11 Intelligence Reform and Terrorism Prevention Act of 2004, Pub. L. 108-458, § 1017 (2004).

12 Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, pp. 5–7. Pherson and Heuer report that the term “structured analytic technique” became official with the publication of A Tradecraft Primer, although the term “alternative analysis” persists elsewhere in doctrine.

13 “Intelligence Community Directive 203” (Washington, DC: ODNI, 2007). https://www.dni.gov/files/documents/ICD/ICD%20203%20Analytic%20Standards%20pdf-unclassified.pdf (accessed 16 August 2022).

14 Ibid., p. 7.

15 Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, pp. 5–7. For an example of Allied intelligence doctrine promoting the use of SATs, see the United Kingdom’s “Professional Development Framework for All-Source Intelligence Assessment” (London: Professional Head of Intelligence Assessment, 2019), pp. 17–18. https://cabinetofficejobs.tal.net/vx/lang-en-GB/mobile-0/appcentre-1/brand-2/candidate/download_file_opp/3977/37356/1/0/f57bbe6f26db0f14241ec961dfb89b69d8680213 (accessed 16 August 2022).

16 Marrin, “Training and Educating U.S. Intelligence Analysts”; Stephen Coulthart and Matthew Crosston, “Terra Incognita: Mapping American Intelligence Education Curriculum,” Journal of Strategic Security, Vol. 8, No. 3 (2015), pp. 46–68.

17 Welton Chang, Elissabeth Berdini, David R. Mandel, and Philip E. Tetlock, “Restructuring Structured Analytic Techniques in Intelligence,” Intelligence and National Security, Vol. 33, No. 3 (2018), pp. 337–356.

18 Ibid.; Welton Chang and Philip E. Tetlock, “Rethinking the Training of Intelligence Analysts,” Intelligence and National Security, Vol. 31, No. 6 (2016), pp. 903–920.

19 Ibid.

20 David R. Mandel and Alan Barnes, “Accuracy of Forecasts in Strategic Intelligence,” Proceedings of the National Academy of Sciences, Vol. 111, No. 30 (2014), pp. 10984–10989; David R. Mandel and Alan Barnes, “Geopolitical Forecasting Skill in Strategic Intelligence,” Journal of Behavioral Decision Making, Vol. 31, No. 1 (2018), pp. 127–137.

21 Chang et al., “Restructuring Structured Analytic Techniques in Intelligence.”

22 Ibid.; Mandeep K. Dhami, Ian K. Belton, and Kathryn E. Careless, “Critical Review of Analytic Techniques” (European Intelligence and Security Informatics Conference, Uppsala, Sweden, 2016).

23 Chang et al., “Restructuring Structured Analytic Techniques in Intelligence”; David R. Mandel, “The Occasional Maverick of Analytic Tradecraft,” Intelligence and National Security, Vol. 35, No. 3 (2020), pp. 438–443.

24 Ibid.

25 Ibid.

26 Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment, 1st ed. (New York: Little, Brown Spark, 2021).

27 Chang et al., “Restructuring Structured Analytic Techniques in Intelligence”; Mandel, “The Occasional Maverick of Analytic Tradecraft”; Chang and Tetlock, “Rethinking the Training of Intelligence Analysts”; Marrin, “Training and Educating U.S. Intelligence Analysts”; Robert Pool, Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary (Washington, DC: The National Academies Press, 2010); David R. Mandel, “Intelligence, Science, and the Ignorance Hypothesis,” in The Academic-Practitioner Divide in Intelligence Studies, edited by Rubén Arcos, Nicole K. Drumhiller, and Mark Phythian (London, UK: Rowman & Littlefield, 2022), pp. 79–94.

28 Mandel, “The Occasional Maverick of Analytic Tradecraft”; David R. Mandel, “Can Decision Science Improve Intelligence Analysis?” in Researching National Security Intelligence: Multidisciplinary Approaches, edited by Stephen Coulthart, Michael Landon-Murray, and Damien Van Puyvelde (Washington, DC: Georgetown University Press, 2019), pp. 117–140; David R. Mandel and Philip E. Tetlock, “Correcting Judgment Correctives in National Security Intelligence,” Frontiers in Psychology, Vol. 9 (2018), p. 2640.

29 David R. Mandel, Christopher W. Karvetski, and Mandeep K. Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy: What Works, What Fails?” Judgment and Decision Making, Vol. 13, No. 6 (2018), pp. 607–621; Mandeep K. Dhami, Ian K. Belton, and David R. Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis,” Applied Cognitive Psychology, Vol. 33, No. 6 (2019), pp. 1080–1090; Christopher W. Karvetski and David R. Mandel, “Coherence of Probability Judgements from Uncertain Evidence: Does ACH Help?” Judgment and Decision Making, Vol. 15, No. 6 (2020), pp. 939–958; Christopher W. Karvetski, David R. Mandel, and Daniel Irwin, “Improving Probability Judgment in Intelligence Analysis: From Structured Analysis to Statistical Aggregation,” Risk Analysis, Vol. 40, No. 5 (2020), pp. 1040–1057; Martha Whitesmith, “The Efficacy of ACH in Mitigating Serial Position Effects and Confirmation Bias in an Intelligence Analysis Scenario,” Intelligence and National Security, Vol. 34, No. 2 (2019), pp. 225–242.

30 Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy”; Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis.”

31 Karvetski and Mandel, “Coherence of Probability Judgements from Uncertain Evidence.”

32 Stephen J. Coulthart, “An Evidence-Based Evaluation of 12 Core Structured Analytic Techniques,” International Journal of Intelligence and CounterIntelligence, Vol. 30, No. 2 (2017), pp. 368–391.

33 Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis.

34 Mark A. C. Timms, David R. Mandel, and Jonathan D. Nelson, “Applying Information Theory to Validate Commanders’ Critical Information Requirements,” in Handbook of Military and Defense Operations Research, edited by Natalie M. Scala and James P. Howard (Boca Raton, FL: CRC Press, 2020), pp. 331–344.

35 Christopher W. Karvetski, Kenneth C. Olson, David R. Mandel, and Charles R. Twardy, “Probabilistic Coherence Weighting for Optimizing Expert Forecasts,” Decision Analysis, Vol. 10, No. 4 (2013), pp. 305–326.

36 Mandel and Barnes, “Accuracy of Forecasts in Strategic Intelligence.”

37 Karvetski et al., “Probabilistic Coherence Weighting for Optimizing Expert Forecasts”; Yuyu Fan, David V. Budescu, David Mandel, and Mark Himmelstein, “Improving Accuracy by Coherence Weighting of Direct and Ratio Probability Judgments,” Decision Analysis, Vol. 16, No. 3 (2019), pp. 197–217.

38 Amos Tversky and Derek J. Koehler, “Support Theory: A Nonextensional Representation of Subjective Probability,” Psychological Review, Vol. 101, No. 4 (1994), pp. 547–567.

39 David R. Mandel, “Are Risk Assessments of a Terrorist Attack Coherent?” Journal of Experimental Psychology: Applied, Vol. 11, No. 4 (2005), pp. 277–288.

40 Jonathan Baron, Barbara A. Mellers, Philip E. Tetlock, Eric Stone, and Lyle H. Ungar, “Two Reasons to make Aggregated Probability Forecasts More Extreme,” Decision Analysis, Vol. 11, No. 2 (2014), pp. 133–145; Brandon M. Turner, Mark Steyvers, Edgar C. Merkle, David V. Budescu, and Thomas S. Wallsten, “Forecast Aggregation Via Recalibration,” Machine Learning, Vol. 95, No. 3 (2014), pp. 261–289.

41 Mandel and Barnes, “Accuracy of Forecasts in Strategic Intelligence.”

42 Robert T. Clemen and Robert L. Winkler, “Combining Probability Distributions from Experts in Risk Analysis,” Risk Analysis, Vol. 19, No. 2 (1999), pp. 187–203.

43 For an overview of performance-weighted aggregation methods, see: Robert N. Collins, David R. Mandel, and David V. Budescu, “Performance-Weighted Aggregation: Ferreting out Wisdom within the Crowd,” in Judgment in Predictive Analytics, edited by Matthias Seifert (Springer, 2023).

44 Guanchun Wang, Sanjeev R. Kulkarni, Vincent H. Poor, and Daniel N. Osherson, “Aggregating Large Sets of Probabilistic Forecasts by Weighted Coherent Adjustment,” Decision Analysis, Vol. 8, No. 2 (2011), pp. 128–144; Karvetski et al., “Probabilistic Coherence Weighting for Optimizing Expert Forecasts”; Fan et al., “Improving Accuracy by Coherence Weighting of Direct and Ratio Probability Judgments”; Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy”; Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis.”

45 Karvetski et al., “Probabilistic Coherence Weighting for Optimizing Expert Forecasts”; Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy”; Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis.”

46 Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy.”

47 Mandel and Barnes, “Accuracy of Forecasts in Strategic Intelligence.”

48 Jonathan McHenry, “Three IARPA Forecasting Efforts: ICPM, HFC, and the Geopolitical Forecasting Challenge” (Federal Foresight Community of Interest 18th Quarterly Meeting), https://www.ffcoi.org/wp-content/uploads/2019/03/Three-IARPA-Forecasting-Efforts-ICPM-HFC-and-the-Geopolitical-Forescasting-Challenge_Jan-2018.pdf (accessed 16 August 2022).

49 Shashank Joshi, “How Spooks are Turning to Superforecasting in the Cosmic Bazaar,” The Economist, 15 April 2021, https://www.economist.com/science-and-technology/2021/04/15/how-spooks-are-turning-to-superforecasting-in-the-cosmic-bazaar (accessed 16 August 2022).

50 Mandel and Barnes, “Accuracy of Forecasts in Strategic Intelligence”; David R. Mandel and Daniel Irwin, “Tracking Accuracy of Strategic Intelligence Forecasts: Findings from a Long-term Canadian Study,” Futures & Foresight Science, Vol. 3, No. 3–4 (2021), p. e98.

51 Bradley J. Stastny and Paul E. Lehner, “Comparative Evaluation of the Forecast Accuracy of Analysis Reports and a Prediction Market,” Judgment and Decision Making, Vol. 13, No. 2 (2018), pp. 202–211; David R. Mandel, “Too Soon to Tell if the US Intelligence Community Prediction Market is More Accurate than Intelligence Reports: Commentary on Stastny and Lehner (2018),” Judgment and Decision Making, Vol. 14, No. 3 (2019), pp. 288–292.

52 “The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines” (Washington, DC: ODNI, 2019), https://www.dni.gov/files/ODNI/documents/AIM-Strategy.pdf (accessed 16 August 2022).

53 Tad T. Brunyé, Randy Brou, Tracy Jill Doty, Frederick D. Gregory, Erika K. Hussey, Harris R. Lieberman, Kari L. Loverro, Elizabeth S. Mezzacappa, William H. Neumeier, Debra J. Patton, et al., “A Review of US Army Research Contributing to Cognitive Enhancement in Military Contexts,” Journal of Cognitive Enhancement, Vol. 4, No. 4 (2020), pp. 453–468; Amanda Kelley, Kathryn Feltman, Emmanuel Nwala, Kyle Bernhardt, Amanda Hayes, Jared Basso, and Colby Mathews, “A Systematic Review of Cognitive Enhancement Interventions for Use in Military Operations” (USAARL Report No. 2019-11, U.S. Army Aeromedical Research Laboratory, Fort Rucker, AL, 2019), https://apps.dtic.mil/sti/pdfs/AD1083490.pdf (accessed 16 August 2022); “Human Augmentation—The Dawn of a New Paradigm” (Shrivenham, UK: Ministry of Defence, Development, Concepts and Doctrine Centre, 2021), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/986301/Human_Augmentation_SIP_access2.pdf (accessed 16 August 2022).

54 “Human Augmentation.”

55 Werner Dahm, “Technology Horizons: A Vision for Air Force Science & Technology during 2010–2030,” Vol. 1 (AF/ST-TR-10-01-PR, U.S. Air Force Chief Scientist, Washington, DC, 2010), p. viii.

56 “Human Augmentation,” p. 11.

57 “Human Augmentation”; Alexandre Marois and Daniel Lafond, “Towards Augmenting Humans in the Field: A Review of Cognitive Enhancement Methods and Applications,” in Proceedings of the 44th Annual Meeting of the Cognitive Science Society, edited by Jennifer Culbertson, Andrew Perfors, Hugh Rabagliati, and Veronica Ramenzoni (Cognitive Science Society, 2022), pp. 1891–1902; Brunyé et al., “A Review of US Army Research Contributing to Cognitive Enhancement in Military Contexts.”

58 Brunyé et al., “A Review of US Army Research Contributing to Cognitive Enhancement in Military Contexts.”

59 Björn Rasch and Jan Born, “About Sleep’s Role in Memory,” Physiological Reviews, Vol. 93, No. 2 (2013), pp. 681–766.

60 Matthew P. Walker, “Cognitive Consequences of Sleep and Sleep Loss,” Sleep Medicine, Vol. 9 (2008), pp. S29–S34.

61 Brunyé et al., “A Review of US Army Research Contributing to Cognitive Enhancement in Military Contexts.”

62 David R. Samson, “Taking the Sleep Lab to the Field: Biometric Techniques for Quantifying Sleep and Circadian Rhythms in Humans,” American Journal of Human Biology, Vol. 33, No. 6 (2021), p. e2354.

63 Aaron T. Piepmeier and Jennifer L. Etnier, “Brain-Derived Neurotrophic Factor (BDNF) as a Potential Mechanism of the Effects of Acute Exercise on Cognitive Performance,” Journal of Sport and Health Science, Vol. 4, No. 1 (2015), pp. 14–23.

64 Jan Wilke, Florian Giesche, Kristina Klier, Lutz Vogt, Eva Herrmann, and Winfried Banzer, “Acute Effects of Resistance Exercise on Cognitive Function in Healthy Adults: A Systematic Review with Multilevel Meta-Analysis,” Sports Medicine, Vol. 49, No. 6 (2019), pp. 905–916.

65 Thi Mai Nguyen, Van Huy Nguyen, and Jin Hee Kim, “Physical Exercise and Health-Related Quality of Life in Office Workers: A Systematic Review and Meta-Analysis,” International Journal of Environmental Research and Public Health, Vol. 18, No. 7 (2021), p. 3791.

66 M. J. Dauncey, “Recent Advances in Nutrition, Genes and Brain Health,” Proceedings of the Nutrition Society, Vol. 71, No. 4 (2012), pp. 581–591; Jeremy P. E. Spencer, “Food for Thought: The Role of Dietary Flavonoids in Enhancing Human Memory, Learning and Neuro-Cognitive Performance,” Proceedings of the Nutrition Society, Vol. 67, No. 2 (2008), pp. 238–252; Fernando Gómez-Pinilla, “Brain Foods: The Effects of Nutrients on Brain Function,” Nature Reviews Neuroscience, Vol. 9, No. 7 (2008), pp. 568–578.

67 Sorayya Kheirouri and Mohammad Alizadeh, “MIND Diet and Cognitive Performance in Older Adults: A Systematic Review,” Critical Reviews in Food Science and Nutrition (2021), https://doi.org/10.1080/10408398.2021.1925220

68 Martha Clare Morris, Christy C. Tangney, Yamin Wang, Frank M. Sacks, Lisa L. Barnes, David A. Bennett, and Neelum T. Aggarwal, “MIND Diet Slows Cognitive Decline with Aging,” Alzheimer’s & Dementia, Vol. 11, No. 9 (2015), pp. 1015–1022.

69 Steven Arcidiacono, Jason W. Soares, J. Philip Karl, Linda Chrisey, Blair C. R. Dancy, Michael Goodson, Fredrick Gregory, Rasha Hammamieh, Nancy Kelley Loughnane, Robert Kokoska, et al., “The Current State and Future Direction of DoD Gut Microbiome Research: A Summary of the First DoD Gut Microbiome Informational Meeting,” Standards in Genomic Sciences, Vol. 13, No. 1 (2018), pp. 1–16.

70 Cristina Lorca, María Mulet, Catalina Arévalo-Caro, M. Ángeles Sanchez, Ainhoa Perez, María Perrino, Anna Bach-Faig, Alicia Aguilar-Martínez, Elisabet Vilella, Xavier Gallart-Palau, et al., “Plant-Derived Nootropics and Human Cognition: A Systematic Review,” Critical Reviews in Food Science and Nutrition (2022), https://doi.org/10.1080/10408398.2021.2021137

71 Marois and Lafond, “Towards Augmenting Humans in the Field.”

72 Ibid.; Lorca et al., “Plant-Derived Nootropics and Human Cognition: A Systematic Review”; Kelley et al., “A Systematic Review of Cognitive Enhancement Interventions for Use in Military Operations”; Donatella Marazziti, Maria Teresa Avella, Tea Ivaldi, Stefania Palermo, Lucia Massa, Alessandra Della Vecchia, Lucia Basile, and Federico Mucci, “Neuroenhancement: State of the Art and Future Perspectives,” Clinical Neuropsychiatry, Vol. 18, No. 3 (2021), pp. 137–169.

73 J. W. Daly, J. Holmén, and B. B. Fredholm, “Is Caffeine Addictive? The Most Widely used Psychoactive Substance in the World Affects Same Parts of the Brain as Cocaine,” Läkartidningen, Vol. 95, No. 51–52 (1998), p. 5878.

74 David A. Camfield, Con Stough, Jonathon Farrimond, and Andrew B. Scholey, “Acute Effects of Tea Constituents L-theanine, Caffeine, and Epigallocatechin Gallate on Cognitive Function and Mood: A Systematic Review and Meta-Analysis,” Nutrition Reviews, Vol. 72, No. 8 (2014), pp. 507–522.

75 Shalini Srivastava, Mark Mennemeier, and Surekha Pimple, “Effect of Alpinia Galanga on Mental Alertness and Sustained Attention with or without Caffeine: A Randomized Placebo-Controlled Study,” Journal of the American College of Nutrition, Vol. 36, No. 8 (2017), pp. 631–639.

76 Lorca et al., “Plant-Derived Nootropics and Human Cognition: A Systematic Review.”

77 Ibid.

78 Jaysing Salve, Sucheta Pate, Khokan Debnath, and Deepak Langade, “Adaptogenic and Anxiolytic Effects of Ashwagandha Root Extract in Healthy Adults: A Double-Blind, Randomized, Placebo-Controlled Clinical Study,” Cureus, Vol. 11, No. 12 (2019), p. e6466.

79 “Human Augmentation.”

80 Ibid.; Brunyé et al., “A Review of US Army Research Contributing to Cognitive Enhancement in Military Contexts”; Kimberly R. Urban and Wen-Jun Gao, “Performance Enhancement at the Cost of Potential Brain Plasticity: Neural Ramifications of Nootropic Drugs in the Healthy Developing Brain,” Frontiers in Systems Neuroscience, Vol. 8 (2014), p. 38.

81 “Human Augmentation.”

82 Yuval Noah Harari, “Reboot for the AI Revolution,” Nature, Vol. 550 (2017), pp. 324–327.

83 Aubrey D. N. J. de Grey, “Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now,” PLOS Biology, Vol. 2, No. 6 (2004), p. e187.