Abstract
In this paper, we analyse the performativity of assessment tools used for measuring the learning effects of a leadership development programme. The paper is based on an empirical study of a leadership development programme, ‘Start to Lead’, in a global organization. Through our empirical analysis, we infer two modes of ordering, which illustrate how the assessment tool produced diverse performative effects across different practices. Our findings indicate a gap between the participants’ interview-based descriptions of what they learned from the programme and the assessment tool’s operationalization of ‘the right’ leadership knowledge. We also found that managers seemed to distinguish between assessment important to themselves and their daily work, which they were highly motivated to do well, and assessment important to ‘the organizational system’, which mostly resulted in quickly ‘ticking boxes’. These findings suggest that assessment tools work as demarcations defining good leadership and legitimate learning. These demarcations risk being disconnected from the everyday practice of leadership, thereby decoupling the assessment tool from the participants’ everyday leadership practice. We end the paper by discussing the theoretical implications of this analysis.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. The interest in assessing leadership development is also fed by large consultancies, such as McKinsey & Co (Cermak and McGurk 2010; Gurdjian, Halbeisen, and Lane 2014) and Bersin by Deloitte (Johnson, Wang-Audia, and Krider 2015), who stress the ‘need’ for measuring effects and learning from leadership development programmes.
2. For the purpose of this article, the consultancy and all names except Grundfos have been changed.