ABSTRACT
Programs for preventing and countering violent extremism (CVE) are a frequent point of collaboration across the divide between academics and practitioners in national security. However, both groups have been hobbled by conceptual under-development in the field, due in part to the divide itself and to differing incentive structures that have discouraged data-sharing from praxis. The main challenge preventing the establishment of rigorous studies of best practices in CVE, however, has been the absence of an accepted analytic framework for measuring results. Practitioners either assess program participants' outcomes, which may not be connected to the effectiveness of the program, or evaluate programs by whether they are delivered according to design. Academics may develop theories of CVE to test but are unable to do so without causal-effect data from the programs. This article argues that both sides of the divide would benefit from adopting an education-sector framework for assessing what participants demonstrably learn in programs as outcomes. This approach would permit better hypothesis testing and more responsible program development and management.
Acknowledgements
Thank you to Chelsea Daymon and Andrew Zammit for your assistance.
Disclosure statement
No potential conflict of interest was reported by the author(s).