
Teaching intelligence in the twenty-first century: towards an evidence-based approach for curriculum design

Pages 1005-1021 | Published online: 30 Jun 2017
 

Abstract

Since 9/11, the ‘Five Eyes’ countries have seen a dramatic rise in intelligence training and education courses across the national security and law enforcement contexts. However, there remains little publicly available empirical evidence demonstrating specifically where this investment has improved workplace practices and processes. This article argues that the education sector in the intelligence discipline lacks an evaluation research agenda for validating the workplace effectiveness of training and education programs. Further, a first step in understanding whether curricula are ‘fit for purpose’ may be articulating some underlying common normative principles for evaluating programs in any intelligence context.

Notes

1. See e.g. Walsh, Intelligence and Intelligence Analysis, 255–83; Marrin, Improving Intelligence Analysis, 77–98.

2. The following sources provide good summaries of both the history and development of intelligence training and education across the US intelligence community: Campbell, “A Survey of the US Intelligence Community,” 933–48; Rudner, “Intelligence Studies in Higher Education,” 307–23; Marrin, “Training and Educating US Intelligence Analysts,” 131–46; Landon-Murray, “Moving US Academic Intelligence Education Forward,” 744–76; and, Smith, “Common Thread?” 118–42.

3. Moore, et al., “Evaluating Intelligence,” 204–20; Breckenridge, “Designing Effective Teaching and Learning Environments,” 307–23; Kreuzer, “On Professionalising Intelligence Analysis,” 579–97; Walsh, Intelligence and Intelligence Analysis, 255–83; and Marrin, Improving Intelligence Analysis, 77–98.

4. Landon-Murray, “Social Science and Intelligence Analysis,” 497.

5. Chang and Tetlock, “Rethinking the Training of Intelligence Analysts,” 904.

6. Defining what one means by positive impact is problematic. It is partly a value judgment about what political leaders, heads of ICs, educators, and analysts themselves view as a positive impact. Is it increased accuracy, timeliness, improved writing skills, improved readability of products, influencing policy, or improved critical thinking? Further modelling and evaluation of what we mean by such terms is critical to more valid evaluation studies.

7. For example, see Notes 1–3 for discussion of key developments in training and education programs across ‘Five Eyes’ countries.

8. Eisner, The Educational Imagination, 302.

9. Breckenridge argues that studies and commissions writing on intelligence analysis training as far back as 1996 suggest that the IC had no overriding concept, curriculum, plan, or procedures for assessing individual needs, achievement, and training effectiveness, “Designing Effective Teaching and Learning Environments,” 316; Landon-Murray, “Moving US Academic Intelligence Education Forward,” 744–76, and Wu, “Strengthening Intelligence Education with Information Processing,” 1–24, both provide a more recent focus on aspects of intelligence curriculum. For a general (non-intelligence) discussion of curriculum issues from a workplace learning perspective see Dochy, et al., Theories of Learning for the Workplace; Billett, “Workplace Curriculum Practice and Propositions,” 17–37.

10. Rudner, “Intelligence Studies in Higher Education,” 119–22.

11. Ibid.

12. Peeters, “Ten Ways to Blend Academic Learning within Professional Police Training,” 49.

13. Ibid.

14. Tyler, Basic Principles of Curriculum and Instruction, 52.

15. Hyland, Competence, Education and NVQs, 37.

16. Marrin, Improving Intelligence Analysis, 95; Coulthart, “Why Do Analysts Use Structured Analytical Techniques?” 933–48.

17. Tyler, Basic Principles of Curriculum and Instruction, 1.

18. Walsh and Ratcliffe, “Strategic Criminal Intelligence Education,” 152–66; Also see, Smith, “Common Thread?” 118–42.

19. Australian Qualifications Council, Australian Qualifications Framework.

20. Marrin, Improving Intelligence Analysis, 127–47; Walsh, Intelligence and Intelligence Analysis, 255–83.

21. There are other professional intelligence bodies in some ‘Five Eyes’ countries. For example, the Australian Institute of Professional Intelligence Officers (AIPIO), the New Zealand Institute of Intelligence Professionals (NZIIP), and the Canadian Association for Security and Intelligence Studies (CASIS), but they are not currently involved in certification of intelligence courses.

22. IAFIE, Standards for Intelligence Education Undergraduate and Graduate Programs, 1–3.

23. Australian Qualifications Council, Australian Qualifications Framework, 16.

24. For a full list of AQF learning outcome descriptors for bachelor and postgraduate degrees see Australian Qualifications Council, Australian Qualifications Framework, 16, 17.

25. Personal communication with Dr Larry Valero, President of IAFIE, 3 March 2017.

26. According to IALEIA’s website, as of 2013, 450 people had been certified by the association.

27. A few larger Canadian law enforcement agencies, such as the Vancouver Police and the RCMP, have nonetheless made some progress in standing up intelligence courses and having them accredited by IALEIA as local versions of the association’s Foundations of Intelligence Analysis Training (FIAT) course. Personal communication with Ryan Prox, 24 March 2017.

28. Intelligence Community Directive 203 – Analytic Standards sets out broad standards (such as objectivity, independence from policy makers, and timeliness) meant to guide the production and evaluation of analytic products, as well as provide foundations for analytic training and education.

29. ODNI, Intelligence Community Directive 610, Competency Directories for the Intelligence Community Workforce.

30. Ibid.

31. Walsh, Intelligence and Intelligence Analysis, 255–83.

32. Ibid; Marrin, Improving Intelligence Analysis, 77–98; Landon-Murray, “Moving US Academic Intelligence Education Forward,” 744–76.

33. Merriam, et al., Learning in Adulthood; also see Dochy, et al., Theories of Learning for the Workplace for other theoretical perspectives on learning.

34. Darkenwald and Merriam, Adult Education.

35. Boud, et al., Reflection, 3.

36. Schon, The Reflective Practitioner.

37. See Brockett and Hiemstra, Self-Direction in Adult Learning.

38. Fry and Kolb, “Experiential Learning Theory and Learning Experiences in Liberal Arts Education,” 79–92.

39. Lave and Wenger, Situated Learning.

40. Spracher, “National Intelligence University,” 231–43.

41. Walsh and Ratcliffe, “Strategic Criminal Intelligence Education,” 152–66.

42. Marrin, “Training and Educating US Intelligence Analysts,” 131–46; Landon-Murray, “Moving US Academic Intelligence Education Forward,” 744–76.

43. Davies, “Assessment BASE,” 721–36; Shelton, “Teaching Analysis,” 262–81.

44. See Coulthart’s recent research on this subject, “Why Do Analysts Use Structured Analytical Techniques?” 933–48.

45. Wheaton, “Teaching Strategic Intelligence Through Games.”

46. Marton and Säljö, “On Qualitative Differences in Learning,” 4–11; also see Smith and Colby, “Teaching for Deep Learning,” 205–10.

47. Here I am adopting the language of Kirkpatrick’s training evaluation model, which consists of four levels on which to focus questions: students’ reactions to training; the knowledge or skills they acquired; their behaviour after training (transferability to the workplace); and, finally, how training results transfer to or impact on society. See Kirkpatrick, “Evaluation of Training,” 87–112.

48. Kirkpatrick’s four levels of training evaluation have been adopted in health care training evaluations. They may prove useful in the intelligence context, particularly in identifying and assessing the impact of training results or outcomes at the organisational level – e.g. in quality, efficiency, ‘decision-maker satisfaction statistics’, or some other metric identified in an intelligence agency’s strategic corporate goal (or sub-goals relating to training). Evaluations of curricula also need to consider the influence of environmental variables in the workplace that limit or facilitate the transfer of learning from the classroom to the workplace; see, e.g. Baldwin and Ford’s transfer of training model, “Transfer of Training,” 63–105.

49. Chang and Tetlock argue that probabilistic reasoning training and Bayesian inference principles have been shown to boost predictive accuracy and thus may offer one way to demonstrate that gains from analytical training transfer from the classroom to the office, “Rethinking the Training of Intelligence Analysts,” 910.

50. Creating strategic approaches to continuing professional development requires sound intelligence governance and leadership, see, Walsh, “Making Future Leaders in the US Intelligence Community,” 1–19.

51. Nieuwenhuis, et al., “Goal Rationalities as a Framework for Evaluating the Learning Potential of the Workplace,” 64–83.

52. The workplace-learning literature stresses the importance of the social context in which informal learning takes place. Research suggests that if the social context is supportive, particularly around everyday activities, then this type of learning has the potential to occur more often than formal training; see Le Clus, “Informal Learning in the Workplace,” 356–73.

53. Attempts to measure and classify reflective thinking have resulted in validated instruments in some disciplines, which demonstrates that differences exist and are measurable. Generally it seems that deeper levels of reflection are reached less often and are more difficult to achieve; see, Lockyer, et al., “Knowledge Translation,” 50–6. Wald, et al., have also more recently developed and validated a rubric for fostering and evaluating reflective capacity in medical learners see, Wald, et al., “Fostering and Evaluating Reflective Capacity in Medical Education,” 41–50.

54. This kind of evaluation research would need to look at cultural facilitators of, and barriers to, participation in both workplace and external learning. On cultural and social factors and how they interact with learning, see recent doctoral research on factors that enable and deter participation in leadership development training in the U.S. IC. Consult: Stanard, Motivation to Participate in Workplace Training within the IC and Beyond, and Billett, “Toward a Workplace Pedagogy,” 27–43.

55. Biggs and Collis created the Structure of the Observed Learning Outcome (SOLO) Taxonomy in 1982 to assess teachers’ instructional approaches and students’ learning outcomes.

56. Cervai et al.’s Expero4care model has had some success in evaluating learning outcomes in training across different organisational types and different countries within the health care sector. The approach adopts multi-stakeholder perceptions of training, applying this matrix across six indicators, including such factors as competencies, transferability, applicability at the workplace, and impact. See “Evaluating the Quality of the Learning Outcome in Health Care Sector,” 611–26.
