
Critical review of the analysis of competing hypotheses technique: lessons for the intelligence community

Received 09 May 2023, Accepted 26 Dec 2023, Published online: 06 Feb 2024

ABSTRACT

Intelligence communities regularly produce important assessments that inform policymakers. The Analysis of Competing Hypotheses technique (ACH) is one of the most widely touted methods for improving the accuracy of those assessments. But does ACH work? This critical review identified seven articles describing six experiments testing ACH. The results indicate that ACH – as a whole – has little to no overall benefit for judgment quality, and may even harm it, even though some aspects of ACH might be beneficial. We consequently discourage intelligence organizations from mandating the training or use of ACH, and we recommend greater integration of science into intelligence practices in general.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Ethical Guidelines

We confirm that all the research meets ethical guidelines, including adherence to the legal requirements of the study country.

Notes

1. ODNI, “Reports & Publications”.

2. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, 3–4.

3. Pherson and Heuer.

4. Chang et al., “Restructuring Structured Analytic Techniques in Intelligence”.

5. US Congress, “Intelligence Reform and Terrorism Prevention Act of 2004”. In that act, SATs are referred to as ‘alternative analysis’. At that time, this was a common term whose use was eventually discouraged by Randolph Pherson. Pherson advocated SATs as normal techniques for good analysis, not as ‘alternatives’. Coulthart, “Why Do Analysts Use Structured Analytic Techniques?”.

6. Mandel and Irwin, “Beyond bias minimization”.

7. Heuer, Psychology of Intelligence Analysis.

8. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, 164.

9. Pherson and Heuer.

10. Pherson and Heuer, 164.

11. Pherson and Heuer, 164.

12. Heuer, “The Evolution of Structured Analytic Techniques”.

13. Jones, “Critical Epistemology for Analysis of Competing Hypotheses”; Mandel, “The Occasional Maverick of Analytic Tradecraft”; van Gelder, “Can We Do Better than ACH?”.

14. Howson and Urbach, Scientific Reasoning: The Bayesian Approach; Nola and Sankey, Theories of Scientific Method.

15. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, 157.

16. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis, 158.

17. Popper, The Logic of Scientific Discovery, chap. 3.

18. Popper, Unended Quest: An Intellectual Autobiography.

19. Strevens, The Knowledge Machine: How Irrationality Created Modern Science, 21.

20. Mandel, “The Occasional Maverick of Analytic Tradecraft”.

21. Popper, The Logic of Scientific Discovery.

22. Mandel, “Conjectures on Science and Rationality: Commentary on Fiedler et al. (2022)”.

23. Howson and Urbach, Scientific Reasoning: The Bayesian Approach; Sokal and Bricmont, Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science; Wilcox, Human Judgment: How Accurate Is It, and How Can It Get Better?

24. Chang et al., “Restructuring Structured Analytic Techniques in Intelligence”.

25. Edwards and Smith, “A Disconfirmation Bias in the Evaluation of Arguments”.

26. Tetlock and Lebow, “Poking Counterfactual Holes in Covering Laws”.

27. Jones, “Critical Epistemology for Analysis of Competing Hypotheses”.

28. Whitesmith, “Justified True Belief Theory for Intelligence Analysis”.

29. Lemay and Leblanc, “Iterative Analysis of Competing Hypotheses to Overcome Cognitive Biases in Cyber Decision-Making”.

30. US Government, “A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis”, 14.

31. Coulthart, “An Evidence-Based Evaluation of 12 Core Structured Analytic Techniques”.

32. Mandel, “Assessment and Communication of Uncertainty in Intelligence to Support Decision Making: Final Report of Research Task Group SAS-114”; Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy”.

33. Stromer-Galley et al., “Flexible versus Structured Support for Reasoning”. Their SAT departed from ACH by, for example, omitting a matrix and using ‘supportive’ evidence in ranking the hypotheses, contrary to ACH steps 3 and 6.

34. Jones, “Critical Epistemology for Analysis of Competing Hypotheses”.

35. Convertino et al., “The CACHE Study”; Kretz, Simpson, and Graham, “A Game-Based Experimental Protocol for Identifying and Overcoming Judgment Biases in Forensic Decision Analysis”; Murukannaiah et al., “Resolving Goal Conflicts via Argumentation-Based Analysis of Competing Hypotheses”.

36. Lehner et al., “Confirmation Bias in Complex Analyses”.

37. Folker Jr, “Exploiting Structured Methodologies to Improve Qualitative Intelligence Analysis”.

38. Mandel, Karvetski, and Dhami, “Boosting Intelligence Analysts’ Judgment Accuracy”, 201.

39. Heuer and Pherson, “Structured Analytic Techniques: A New Approach to Analysis”.

40. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

41. Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis”.

42. Mandel, Karvetski, and Irwin, “Improving Probability Judgment in Intelligence Analysis: From Structured Analysis to Statistical Aggregation”.

43. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”.

44. Whitesmith, Cognitive Bias in Intelligence Analysis.

45. Whitesmith, 131.

46. Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”.

47. Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”, 65.

48. Lehner et al., “Confirmation Bias in Complex Analyses”.

49. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

50. Dhami, Belton, and Mandel; Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis”; Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”.

51. Lehner et al., “Confirmation Bias in Complex Analyses”.

52. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”; Mandel, Karvetski, and Dhami, ‘Boosting Intelligence Analysts’ Judgment Accuracy’.

53. Lehner et al., “Confirmation Bias in Complex Analyses”; Whitesmith, Cognitive Bias in Intelligence Analysis.

54. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”; Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis”.

55. Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”; Whitesmith, Cognitive Bias in Intelligence Analysis.

56. Whitesmith, Cognitive Bias in Intelligence Analysis.

57. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”.

58. Karvetski and Mandel.

59. Karvetski and Mandel.

60. Karvetski and Mandel.

61. Karvetski and Mandel.

62. Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis”.

63. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

64. Dhami, Belton, and Mandel; Karvetski, Mandel, and Irwin, “Improving Probability Judgment in Intelligence Analysis”; Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”.

65. Coulthart, “Why Do Analysts Use Structured Analytic Techniques?”; Marrin, “Intelligence Analysis”; Marrin, “Is Intelligence Analysis an Art or a Science?”.

66. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

67. Karvetski et al., “What Do Forecasting Rationales Reveal about Thinking Patterns of Top Geopolitical Forecasters?”; Mandel and Irwin, “Tracking Accuracy of Strategic Intelligence Forecasts”; Wilcox, Human Judgment: How Accurate Is It, and How Can It Get Better?

68. Bandyopadhyay, Philosophy of Statistics; Hawthorne, “Supplement to Inductive Logic: Likelihood Ratios, Likelihoodism, and the Law of Likelihood”; Howson and Urbach, Scientific Reasoning: The Bayesian Approach; Sprenger and Hartmann, Bayesian Philosophy of Science.

69. Howson and Urbach, Scientific Reasoning: The Bayesian Approach.

70. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis.

71. Jang, “Seeking Congruency or Incongruency Online?”; Lehner et al., “Confirmation Bias in Complex Analyses”; Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”.

72. Hart et al., “Feeling Validated Versus Being Correct: A Meta-Analysis of Selective Exposure to Information”; Jang, “Seeking Congruency or Incongruency Online?”; Wilcox, Human Judgment: How Accurate Is It, and How Can It Get Better?

73. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

74. Chang et al., “Accountability and Adaptive Performance under Uncertainty: A Long-Term View”; Karvetski et al., “What Do Forecasting Rationales Reveal about Thinking Patterns of Top Geopolitical Forecasters?”; Mellers et al., “The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics”; Tetlock and Gardner, Superforecasting: The Art and Science of Prediction.

75. Mandel, “Instruction in Information Structuring Improves Bayesian Judgment in Intelligence Analysts”.

76. Carnap, Logical Foundations of Probability; Reichenbach, The Theory of Probability: An Inquiry into the Logical and Mathematical Foundations of the Calculus of Probability; Kyburg and Teng, Uncertain Inference; Pollock, Nomic Probability and the Foundations of Induction.

77. Stern, Cifu, and Altkorn, Symptom to Diagnosis: An Evidence-Based Guide.

78. Kahneman and Tversky, “Subjective Probability”.

79. Pherson and Heuer, Structured Analytic Techniques for Intelligence Analysis.

80. Kahneman and Lovallo, “Timid Choices and Bold Forecasts”.

81. Tikuisis and Mandel, “Is the World Deteriorating?”.

82. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”.

83. Karvetski and Mandel.

84. Wilcox, “Likelihood Neglect Bias and the Mental Simulations Approach: An Illustration Using the Old and New Monty Hall Problems”.

85. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”.

86. Lehner et al., “Confirmation Bias in Complex Analyses”.

87. Wilcox, Human Judgment: How Accurate Is It, and How Can It Get Better?

88. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know?

89. Mandel and Barnes, “Accuracy of Forecasts in Strategic Intelligence”; Mandel and Irwin, “Tracking Accuracy of Strategic Intelligence Forecasts”.

90. Mandel and Barnes, “Geopolitical Forecasting Skill in Strategic Intelligence”.

91. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

92. Maegherman et al., “Test of the Analysis of Competing Hypotheses in Legal Decision-Making”.

93. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

94. Karvetski and Mandel, “Coherence of Probability Judgments from Uncertain Evidence”.

95. Mandel, “Intelligence, Science and the Ignorance Hypothesis”.

96. Dhami et al., “Improving Intelligence Analysis with Decision Science”; Mandel, “Intelligence, Science and the Ignorance Hypothesis”.

97. Dhami, Belton, and Mandel, “The ‘Analysis of Competing Hypotheses’ in Intelligence Analysis”.

98. Gleitman, Gross, and Reisberg, Psychology.

99. Mandel, “The Occasional Maverick of Analytic Tradecraft”.

100. Lehner et al., “Confirmation Bias in Complex Analyses”; Whitesmith, Cognitive Bias in Intelligence Analysis.

101. Lehner et al., “Confirmation Bias in Complex Analyses”.

102. Faul et al., “Statistical Power Analyses Using G*Power 3.1”.

103. Coulson et al., “Confidence Intervals Permit, but Don’t Guarantee, Better Inference than Statistical Significance Testing”.

104. Abdi, “Bonferroni and Šidák Corrections for Multiple Comparisons”; Makin and Orban de Xivry, “Ten Common Statistical Mistakes to Watch out for When Writing or Reviewing a Manuscript”.

105. Krosnick, “Improving Question Design to Maximize Reliability and Validity”.

106. Cohen, Statistical Power Analysis for the Behavioral Sciences; Magnusson, “Understanding Correlations”.

107. Nosek et al., “The Preregistration Revolution”.

108. Pham and Oh, “Preregistration Is Neither Sufficient nor Necessary for Good Science”.

109. McNeish, “On Using Bayesian Methods to Address Small Sample Problems”.

110. Mandel, “Intelligence, Science and the Ignorance Hypothesis”.

111. Gleitman, Gross, and Reisberg, Psychology; Stanovich, West, and Toplak, The Rationality Quotient: Toward a Test of Rational Thinking.

112. Mandel, “Can Decision Science Improve Intelligence Analysis”; Mandel, “Intelligence, Science and the Ignorance Hypothesis”.

113. Wilcox, “Credences and Trustworthiness: A Calibrationist Account”.

114. Mandel, “Intelligence, Science and the Ignorance Hypothesis”; Mandel and Irwin, “Tracking Accuracy of Strategic Intelligence Forecasts”.

115. Himmelstein, Atanasov, and Budescu, “Forecasting Forecaster Accuracy”.

116. Wilcox, Human Judgment: How Accurate Is It, and How Can It Get Better?

117. Mandel, “Intelligence, Science and the Ignorance Hypothesis”.

118. Friedman, War and Chance: Assessing Uncertainty in International Politics.

Additional information

Funding

This material is based upon work supported in whole or in part with funding from the Department of Defense (DoD) to JW and the Canadian Safety and Security Program project CSSP-2018-TI-2394 to DRM. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DoD and/or any agency or entity of the United States Government or the Government of Canada.

Notes on contributors

John Wilcox

Dr. John Wilcox, a cognitive scientist, is a research fellow at fp21 and a visiting scholar at Stanford University. He completed an interdisciplinary PhD at Stanford in philosophy and psychology and has worked on Five Eyes projects to improve intelligence analysis. He is also the author of the book Human Judgment: How Accurate Is It, and How Can It Get Better?

David R. Mandel

David R. Mandel, a senior Defence Scientist, studies human judgment and decision-making. Mandel received the 2020 NATO SAS Panel Excellence Award for an international research activity he led. He serves on the editorial boards of Decision, Futures and Foresight Science, Intelligence and National Security, and Judgment and Decision Making.
