Research Article

Evaluating leadership development in a changing world? Alternative models and approaches for healthcare organisations

Pages 114-150 | Received 07 Jul 2021, Accepted 11 Feb 2022, Published online: 24 Feb 2022

ABSTRACT

Internationally, healthcare is undergoing a major reconfiguration in a post-pandemic world. To make sense of this change and deliver an integrated provision of care that improves both patient outcomes and satisfaction for key stakeholders, healthcare leaders must develop insight into the context in which healthcare is delivered and leadership is enacted. Formal leadership development programmes (LDPs) are widely used for developing leaders and leadership in healthcare organizations. However, there is a paucity of rigorous evaluations of LDPs. Existing evaluations often focus on individual-level outcomes, with limited attention to long-term outcomes that might emerge across team and organizational levels. Specifically, evaluation models that are closely associated with or rely heavily on qualitative methods are seldom used in LDP evaluations, despite their relevance for capturing unanticipated outcomes, investigating learning impact over time, and studying collective outcomes at multiple levels. The purpose of this paper is to review the potential of qualitative models and approaches in healthcare leadership development evaluation. This scoping review identifies seventeen evaluation models and approaches. Findings indicate that incorporating qualitative and participatory elements in evaluation designs could offer a richer demonstration, and context-specific explanations, of programme impact in healthcare contexts.

Introduction

There is no longer a conventional, definitive context for healthcare services, as delivery systems, models, and professional practice continue to change (King's Fund 2021) in the wake of the global pandemic of 2020. Future challenges include the sustainability of health and social care systems and the support of professional staff to enable them to practice (World Health Organization (WHO) 2020). Globally, the WHO draws attention to the need for a focus on innovation created by digital transformations, calls for service improvement, and a further roll-out of accountable or integrated care by reorienting health systems towards a collaborative primary care approach with team-based care (WHO 2020). To achieve this, patient involvement and localized solutions must be at the core, with the WHO recommending that all nations embed stakeholder engagement in their healthcare and workforce planning strategies to address the complex issues faced by healthcare organizations.

Global healthcare systems are highly complex institutions. These systems serve a heterogeneous population in which individuals with complex mental and physical conditions need input from professionals, services, and systems that are interdependent yet often function separately (Aveling, Parker, and Dixon-Woods 2016). Many systems struggle to balance the operational aspects of compassionate care-giving and leadership in their hospitals (West 2021) with the necessity of answering questions about the effectiveness and efficiency of their services, quality of care, patient expectations, and the 'what works' agenda (Long 2006). In economically developed countries, the drive for patient safety and care efficiency has created moves towards the standardization of care processes. Despite support in principle, tensions between managers and clinical experts may exist, as professional judgement is viewed as being eroded and replaced by unquestioned rule-following (Martin et al. 2017). Adding further complexity is how hospitals and services operate and, within these hospitals, how different professional groups respond to leaders and leadership (Andersson 2015). For whilst global integrated care reports and policy-makers highlight the importance of collaboration in organizations and among those who work in them, continents and countries have different cultural values, spending priorities, and funding streams within their healthcare structures. For example, in North America, the United States and Canada are developing collaboration based on a matrix leadership structure (Okpala 2020), although complex bureaucracy makes practice change challenging (Kuluski and Reid 2020). In South America, a top-down leadership style is predominant, with employees avoiding conflict and tending not to speak out (Maddox and Replogle 2019). Hierarchical structures in hospitals remain prevalent in China, which is undergoing major healthcare reform (Yang et al. 2020), whereas in Africa, communalism and non-individualism based on the African values of Ubuntu (Olano 2015) are viewed as essential for twenty-first-century leadership in healthcare services. Raju (2021) notes that in Northern India there is a lack of preparation for clinicians to undertake any form of leadership role, and that a culture lacking in trust across professions is prevalent. Within Europe, leadership in healthcare is also often categorized as hierarchical and transactional (Sola et al. 2016); however, there is evidence that this is changing. In Germany, consensus amongst staff teams is considered important, and research in acute hospitals links transformational leadership behaviour with the reporting of critical safety incidents (Hillen, Pfaff, and Hammer 2017). Within healthcare organizations, Human Resource Development (HRD) planning solutions for transforming services may assume that different professional groups and managers want to come together to make integrated care happen. The reality may be more nuanced, because the day-to-day pressures of providing healthcare take priority. All these factors contribute to the different ways in which leaders and leadership are defined, leadership structures are institutionalized, and the goals of leadership development are agreed in global healthcare systems. Yet there remains a need to focus on ensuring care delivery and building support for collaboration (Nuno-Solimis 2017); leadership is a key element of collaboration and of organizing care, and HRD practitioners must therefore invest in best-fit leadership development (Sfantou et al. 2017).

To develop leaders and leadership, global healthcare organizations continue to invest in formal LDPs (Turner 2019; Ho 2016). However, very few organizations believe their LDPs are highly effective (Schwartz, Bersin, and Pelster 2014), calling into question the effectiveness of current LDPs (Lacerenza et al. 2017). As healthcare organizations are challenged to demonstrate the impact of their LDPs, they recognize that they cannot rely solely on traditional individualistic leadership development models and approaches to develop, for example, learners who come from diverse cultural and societal contexts and who are patient-focussed (McCray, Temple, and McGregor 2021). In fact, LDPs that centre on individuals' personal development are under scrutiny, as views of the 'given' competencies and characteristics of leaders in interdependent healthcare systems are reviewed. As Ham, Berwick, and Dixon (2016) note, leaders require boundary- and hierarchy-spanning skills to negotiate systems and work across care settings with other professionals, patients, and other local stakeholders. These skills are needed to drive innovation and to contribute to more equal partnerships in service improvement. Moreover, effective leaders enact their leadership through socially and situationally constructed collaboration and inter-professional partnerships (McCauley and Palus 2020). Bate et al. (2014) advise that moving away from the notion of the healthcare context as a 'fixed entity' that captures only the most predictable of outcomes, towards one that can also highlight the interactions between stakeholders (i.e., patients and staff on the care delivery pathway), enables a more holistic explanation of leadership actions. Here, the context is integral rather than something that remains unchanged throughout social transformation or other processes (Pettigrew, Woodman, and Cameron 2001). Thus, as systems, alliances, and alignments in care delivery change, LDPs have begun to change their curricula in response. HRD scholars, practitioners, and commissioners are therefore moving away from programmes built upon universal and individualistic models of leadership (Leach et al. 2021; Ford 2015; West et al. 2015, 2021; Edmonstone 2013a, 2013b, 2011), as these models are not wholly sufficient to meet the more nuanced leadership learning needs in healthcare contexts.

As traditional LDP programmes are re-designed, the evaluation models and approaches applied in LDP evaluations may also need to change. Evaluation is understood as 'a process of determining the merit, worth or value of something, or the product of that process' (Scriven 1991, 139). As many qualitative evaluation models and approaches have remained under-utilized, we undertake a scoping review to (1) consider the strengths and weaknesses of the dominant, traditional evaluation models for application in present-day healthcare organizations; (2) offer additional models of leadership development evaluation for application in LDP healthcare contexts by outlining their potential contribution during radical change; and (3) suggest how these alternative models can offer complementary tools to capture the impact of leadership development programmes in healthcare contexts. We argue that the intentional use of qualitative evaluation models and approaches, alongside traditional evaluation models such as the one proposed by Kirkpatrick (1996) and others, may help HRD practitioners and other leadership developers both to prove programme worth and to improve programmes. Our review contributes to the healthcare LDP evaluation literature by shedding light on a range of qualitative evaluation models and approaches that have previously been underutilized but merit renewed attention from HRD scholars and practitioners, and by offering an improved understanding of their strengths and weaknesses, using Mabey's (2013) leadership development discourse framework.

Theoretical background: evaluation models/approaches

In the programme evaluation literature, the terms 'model' and 'approach' have generally been used interchangeably (Bennett 2003). While the term 'approach' is used to cover an eclectic set of good evaluation practices, the term 'model' is often used for labelling 'idealized … views for conducting programme evaluations according to their authors' beliefs and experiences' (Stufflebeam and Shinkfield 2007, 135). Very often, these models and approaches offer a set of recognizable ways to design evaluations and implement them in specific contexts, thereby empowering evaluators to conduct more meaningful evaluations. To determine the merit or worth of a programme, most evaluators tend to be concerned with questions of impact (e.g., what the outcomes of the particular programme in question are). Many practitioners focus on both quantitative and qualitative factors (Finney and Jefkins 2009) that could help them demonstrate programme value. But in the context of an actual project, and under pressure to demonstrate returns on project investment, evaluators may be forced to adopt a 'functionalist mindset' (Mabey 2013) that keeps them focused only on a limited set of 'see-able' outcomes; as they direct their attention towards demonstrable short-term evidence, they tend to ignore both the context and the other possible outcomes that emerge with the passage of time. In what follows, we examine why evaluating leadership development programmes in healthcare contexts is challenging.

Evaluating leadership programmes

Identifying, measuring, and demonstrating the impact of LDPs is challenging because of the complexities inherent in programme design and delivery. Hartley, Martin, and Benington (2008) argue that 'in order for evaluation to occur with any degree of robustness, there is a need for a reasonably clear specification of what forms the basis of the leadership development, leadership, and organizational performance' (170). In practice, however, there can be ambiguity about what is being developed in these programmes, how, and why (Day 2011), and many LDPs are not guided by any leadership theory (Avolio et al. 2009). There is also a lack of empirical support for the effectiveness of the developmental methods used in LDPs (Burgoyne, Hirsh, and Williams 2004), as many LDPs fail to link learning experiences with the challenges of delivering value at work. Crucially, there is also a tendency among evaluators to overlook the influence of context on leadership learning, reflection, and application (Edwards and Turnbull 2013a, 2013b). Yet national, regional, local, and within-institution cultural contexts play a significant role in the ways leadership is understood, enacted, and developed. In many cases, the programme context, the power relations, and the dominant cultural values determine how leader identity is developed (Gagnon and Collinson 2014), and how LDPs' appropriateness and relevance are judged.

Evaluating LDPs becomes even more challenging when the leader development of individuals and the leadership development of collectives are not well understood. Mabey (2013), in presenting an insightful framework of discourses on leadership development, argues that within complex and volatile contexts such as healthcare, a solely functionalist mindset towards evaluation is problematic. A functionalist perspective refers to a fixation with (a) enhancing the under-developed qualities of individual leaders through formal LDPs, as if they are in perpetual need of trainer-centred skill development events and as if the developed individuals will be personally capable of lifting others' performance and of transforming complex healthcare organizations; and (b) a belief that evaluations can faithfully and robustly capture the knowledge, skill, and attitude gains experienced by LDP attendees, as if these outcomes are the only ones critical for the leaders who make hospitals responsive and efficient. Although an evaluation based on the functionalist mindset may be useful, this perspective emerges from a narrow view of individualistic leadership, one that has been (mistakenly) assumed to emerge in a social or cultural vacuum, and from a view that ignores the larger cultural, economic, institutional, and societal pressures that shape leaders and leadership learning. However, the current healthcare context sees this position changing. Mabey (2013) proposes that we should complement our understanding of leadership with the interpretive, dialogic, and critical perspectives on leadership and leadership development. Since Mabey's (2013) discourse framework has been recognized as having the potential to enhance our understanding of leadership development, and to make us 'better informed and critical' learners of leadership development (Carroll 2019, 127), we use his framework to categorize our findings.

The dominance of a functionalist leadership development discourse

In a challenging context, Kirkpatrick's taxonomy of four levels of outcomes (reaction, learning, behaviour, and results; see Table 1A) may be perceived as an easy-to-use standard for demonstrating the impact of complex programmes. Owing to its conceptual simplicity (Russ-Eft and Preskill 2009), prescriptive appeal, and high face validity (Arthur et al. 2003), this taxonomy has become the most widely used framework not only for supervisory training programmes (for which it was originally intended) but also for all types of learning and development programmes (Hoole and Martineau 2014; Collins and Denyer 2008), including LDPs (Ely et al. 2010; McLean and Moss 2003), particularly in healthcare organizations (King and Nesbit 2015).

Table 1A. Examples of models* using a ‘Taxonomy-of-outcomes’, based on a functionalist perspective.

Table 2. Methods/approaches aligned with interpretative perspectives on leadership and leadership development.

Despite its popularity and its significant impact on evaluation practices in healthcare, researchers (such as Holton 1996; Bates 2004; Anderson 2010) have identified several limitations of this taxonomy (and of other taxonomy-based models, such as that proposed by Swanson and Sleezer 1987). Critics are concerned with the absence of the causal link that Kirkpatrick presumed to exist between the levels (Alliger et al. 1997); they argue that the term 'learning' was conceptualized too narrowly, covering only knowledge, skills, and attitudes while ignoring the more complex, contextual learning that is enriched by ongoing reflection (De Déa Roglio and Light 2009) and by changes in mindsets (Kennedy, Carroll, and Francoeur 2013). Scholars are also concerned with the problematic assumptions that underpin this taxonomy: that all participants are the same, that every learner will complete the programme and transfer the learning, and that all this transferred learning can be measured as observable behaviour (Ford and Sinha 2008; Blume et al. 2010). The taxonomy also pays little attention to the many factors, including LDP content, attendance policy, and programme duration, that influence the effectiveness of LDPs (Lacerenza et al. 2017). In addition, it is difficult to determine how these functionalist models can take into account many of the complexities of leadership learners' behaviour and the dynamics of the radical, relational change-landscape of post-pandemic healthcare contexts.

Similarly, scholars who adopt the functionalist perspective (e.g., Avolio, Avey, and Quisenberry 2010; Phillips and Phillips 2007) have promoted Return-on-Investment (ROI) evaluation models to determine the value of LDPs, although these ROIs are 'notoriously difficult to evaluate with any tangibility' (Carroll 2019, 128). A review of 138 ROI evaluations of healthcare LDPs concludes that 'the improved outcomes/ROI indicators and metrics' associated with LDPs in the majority of these studies are 'self-reported' (Jeyaraman et al. 2018), and that 'the research designs varied quite widely' (ibid., 87), so much so that the authors could not assess quality across the studies. They call for more evidence-based approaches to assessing the ROI of LDPs in healthcare. Others adopting the same functionalist perspective towards LDPs (Weiss 1995; Watkins, Lysø, and de Marrais 2011) have recommended a theory-based approach to evaluation. For them, a theory (or an underlying logic) of how a programme is understood to produce certain outputs and outcomes guides the evaluation process. They attempt to construct a theory that specifies programme context, actual inputs, processes, and short-term outputs while illustrating key linkages with expected long-term outcomes and impact. System-based models, such as the Context, Inputs, Process, and Products (CIPP) model (Stufflebeam 1983) and the Context, Administration, Process, Inputs, Reactions, and Outcomes (CAPIRO) model (Easterby-Smith 1994), also hold an instrumentalist view of LDPs. These models (see Table 1B), with predominantly managerialist orientations, assume a deficit model of leadership development in individuals, who will eventually change organizational systems for the better because of their attendance at a formal LDP. Although system-based evaluation models emphasize the role of context more strongly than the taxonomy-based models, they too rest on the functionalist assumption that organization-sponsored, centrally regulated formal LDPs could produce heroic leaders who could transform systems single-handedly.
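For readers unfamiliar with ROI calculations, the basic formulation popularized by Phillips and Phillips is worth recalling (this is their generic formula, not one drawn from the healthcare studies reviewed here): ROI (%) = (net programme benefits ÷ fully loaded programme costs) × 100, where net benefits are the monetized programme benefits minus the costs. An LDP costing £100,000 that yields £150,000 in monetized benefits would thus report an ROI of 50%; the critiques above concern how defensibly such benefits can be isolated and monetized in complex care settings, not the arithmetic itself.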

Table 1B. Other methods/approaches aligned with functionalist perspectives on leadership and leadership development.

Historically, LDPs have over-relied on developing the skills of individual leaders whilst ignoring leadership structures and other factors, such as the tensions and power issues faced by individuals and teams (Stacey 2012, 62–65); evaluation models and approaches based on the functionalist view of leadership development may therefore also tend to ignore programme context and lead to inadequate organizational learning about the LDP. In this light, we propose a different way of conducting LDP evaluation, one that considers alternative models and approaches in harmony with functionalist tools. Next, we briefly introduce qualitative evaluation models and approaches, before presenting our review methods.

Qualitative evaluation models/approaches

Qualitative evaluation approaches have become more prevalent since the 1970s, when they were identified as being important for evaluating policy and its purpose (Tayabas, León, and Espino 2014). A qualitative evaluation can reveal deeper and unexpected outcomes from interventions and capture what happens during the intervention as well as before and after it (Patton 2015). Whilst the value of such approaches is noted (Lincoln and Guba 1985; Mertens 2015), they remain underutilized (Minshew et al. 2021; Spencer et al. 2003). Qualitative evaluation approaches may be particularly suitable in healthcare leadership contexts. For example, qualitative interpretative perspectives on leadership assume that leadership is an emergent process that is experienced in teams, groups, and communities, and that this leadership is socially constructed and distributed in the collective, not in individuals, as functionalists assume. Leadership is embedded, co-created, and enacted in most healthcare contexts and cultures. Consequently, leadership development is assumed to happen organically as participants learn, interact, build their networks, and develop their expertise in everyday practice and in specific work contexts. Within healthcare settings, leadership can be seen to emerge and develop among teams of healthcare professionals through informal means, in collaborative project environments. Formal LDPs might facilitate such leadership emergence (Turner 2019). Yet what is developed through LDPs, what happens because of any leadership that is developed, and how we might know whether the development of leadership actually improved both clinical outcomes and satisfaction for patients, providers, and other stakeholders are not fully known, in part because of the limited use of qualitative evaluation approaches.

Patton (2015) argues that qualitative findings are critical 'to enhance quality, improve programmes, generate deeper insights into the root causes of significant problems, and help prevent problems' (205). This is why qualitative models that help to distinguish the specific context, the interactions, and what goes on within these contexts are central to evaluation practice: they acknowledge and reveal the relevance of alternative interpretations of a situation to inform change. Such models have the potential to be readily applied in evaluating healthcare LDPs, in conjunction with other functionalist evaluation models and approaches, or as a stand-alone healthcare evaluation practice (Wäscher et al. 2017).

Methods

In this scoping review, two main sources are used to identify the evaluation models and approaches used for evaluating social programmes, policy, and practice within the programme evaluation literature. First, we searched for published reviews of evaluation models and approaches within databases and textbooks; then we specifically searched for impact evaluations published in evaluation-focused academic journals, as these sources cover most of the published evaluations conducted in various contexts. We draw on both sources equally to identify the models and approaches that are most important and relevant to healthcare organizations. Since scoping reviews are useful when the information on a topic has not been comprehensively reviewed or is complex and diverse (Sucharew and Macaluso 2019), as in the case of LDP evaluation, we sought qualitative models and approaches used in a range of different evaluation designs.

First, we sought to identify published reviews of evaluation models and approaches. Using the databases Business Source Premier, ABI/INFORM Global, Scopus, and Social Science Citation Index, we deployed the search terms 'leadership development programmes' and 'qualitative evaluation' in combination with the terms 'model', 'approach', 'framework', 'technique', 'tools', and 'review' to pinpoint potential review papers. With the very few papers that we identified (e.g., Patton 2015; Linzalone and Schiuma 2015; Brandon and Ah Sam 2014; Contandriopoulos and Brousselle 2012), we recognized the lack of a comprehensive collection and review of evaluation models and approaches in the literature. Then, with the support of a subject-specific librarian, and by using the same search terms on Google Books, we identified textbooks that contain a review of evaluation models and approaches (marked with an asterisk in the references). In the second stage, we examined nine journals that specialize in publishing evaluation studies: American Journal of Evaluation; New Directions for Evaluation; Canadian Journal of Program Evaluation; Evaluation: The International Journal of Theory, Research and Practice; Evaluation Review; Evaluation and Program Planning; The Evaluation Exchange; Practical Assessment, Research & Evaluation; and Journal of MultiDisciplinary Evaluation. A total of 22 models were identified. Since our purpose is to help enhance evaluation practice within healthcare organizations, we then purposively selected models and approaches using the following criteria: models and approaches that are relatively simple, flexible, and in recurrent use, and that can be applied by healthcare evaluators with relative ease and training. This resulted in a total of 17 models for appraisal, as ordered in Tables 2–4. In each table, using Mabey's (2013) framework, the models and approaches aligned with interpretivist, dialogic, and critical perspectives are grouped. These models and approaches rely on qualitative data collection methods and guide evaluators on how to go about undertaking an LDP evaluation, what steps must be taken, and how to engage with stakeholders. Each table includes the assumptions that underlie each model/approach and the necessary steps involved in applying them, whilst highlighting their strengths, weaknesses, and context suitability, along with the theorist or proponent of each.
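To make the search grid concrete, the query strings implied by these term combinations can be enumerated as in the following minimal Python sketch (illustrative only: the exact Boolean syntax and field codes differ across the databases named above, and this is not the script used in the review):

    # Sketch: enumerate candidate Boolean search strings for the databases.
    # Term lists mirror those reported above; adapt quoting and field syntax
    # to each database's own query language.
    from itertools import product

    primary = ['"leadership development programmes"', '"qualitative evaluation"']
    secondary = ["model", "approach", "framework", "technique", "tools", "review"]

    # One candidate query per (primary, secondary) pairing.
    queries = [f'{p} AND "{s}"' for p, s in product(primary, secondary)]
    for q in queries:
        print(q)

Running this yields twelve candidate query strings (e.g., "leadership development programmes" AND "model"), which can then be adapted to each database's search interface.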

Table 3. Methods/approaches aligned with dialogic perspectives on leadership and leadership development.

Table 4. Methods/approaches aligned with critical perspectives on leadership and leadership development.

Results

A range of qualitative models and approaches were identified in the programme evaluation literature (see Tables 2–4). Since the evidence for the use of these qualitative evaluation models and approaches within the healthcare literature is limited, we highlight here the potential value of their application when evaluating healthcare LDPs.

Models/approaches aligned with the interpretative perspectives of leadership development evaluation

Mabey (2013) clarifies that leadership from an interpretivist position shifts the emphasis from an individual to a shared approach, assumes that leadership is often culturally situated, and highlights that the role of leader may be a fluid one. In the interpretative position, how leaders make sense of their role, the situation, and the roles of others is also significant, and the post hoc rationalization of events through which learning develops is captured. For an LDP and its methods to mirror this, development may need to take place in real time in the workplace rather than in an external space, with leadership treated as emergent, collaborative, and not pre-determined. Exploring LDP members' responses to the actions that have occurred in an LDP, and their lived experience during and after the event, will then form a critical part of learning; for the evaluators capturing this, it can be challenging.

In the search, six evaluation models in the interpretive space were identified (see Table 2). These are Culturally Responsive evaluation (Frierson, Hood, and Hughes 2002), Culturally Competent evaluation (Chouinard and Cousins 2007), Goal-free Evaluation (Scriven 1999), Connoisseurship Evaluation (Eisner 1997), the Photovoice method (Yuan and Feng 1996), and the open-systems-based EvaluLEAD framework (Grove, Kibel, and Haas 2007).

Key themes and opportunities include the exploration, and the implications, of the culturally grounded nature of leadership development. These models offer the opportunity to discover what is going on in a situation and how leadership practice is embedded and institutionalized within a specific healthcare system in a given context (Esmail, Kalra, and Abel 2005; Kalra, Abel, and Esmail 2009). Moving beyond predetermined goals and objectives in order to engage with other outcomes and their implications can reveal the unexpected consequences of an LDP (Scriven 1999). These models can help capture key stakeholders' views on what constitutes leader excellence (Eisner 1997) and help HRD professionals understand how leadership emerges in teams, groups, and networks (Yuan and Feng 1996). The EvaluLEAD framework advocates the use of evocative forms of inquiry (employing tools such as stories, journals, visual images, and diaries) alongside evidential forms of inquiry (relying on quantitative data) to capture qualitatively different outcomes at multiple levels.

Models/approaches aligned with the dialogic perspectives of leadership development

A dialogic perspective on leadership assumes that leadership is a 'discursive accomplishment' that is 'continually in a state of becoming as opposed to anything more fixed or stable' (Mabey 2013, 366), and that leadership learners become who they are on the basis of the stories they tell of themselves and of their organization as they engage in everyday conversations. Discursive leadership points to multiple, fragmented, intertextual, and constantly shifting leadership identities that are enacted in specific socio-historical contexts. Consequently, developing such leadership is assumed to happen as individuals craft their own identities through the framing and reframing of personal and organizational stories. Leadership development then becomes a fluid, fragmented, overlapping, and sometimes contradictory growth of the self in each context.

The review identified seven evaluation methods and approaches that are aligned with the dialogic perspective (see Table 3). These are the Success Case Method (Brinkerhoff 2005), the Most Significant Change method (Dart and Davies 2003), Stakeholder-based evaluation (Mark and Shortland 1985), Collaborative evaluation (Rodriguez-Campos 2012; O'Sullivan 2012), Utilization-focused evaluation (Patton 1997), Illuminative evaluation (Parlett and Hamilton 2017), and Appreciative Inquiry (Cooperrider and Whitney 2005; Ludema, Cooperrider, and Barrett 2006).

Although these models/approaches were originally conceived and mostly used as tools for improving a programme or for learning from it, they can serve as effective tools to understand the stories that leadership learners tell of themselves. They enable the recognition of other actors that are participating in dialogic discourses on leadership.

Models/approaches aligned with the critical perspectives of leadership development

A critical perspective on leadership sees LDPs as a means of promoting a blind acceptance of norms and the status quo, and of propagating the knowledge and ideas that serve the interests of a powerful elite, while treating participants as passive consumers of what is being done to them in a formal classroom, mostly by those who represent a dominant group in a context, and in some contexts by male presenters (Sinclair 2009; Ford 2015). In some cases, LDPs, as 'a covert means of perpetuating political elite domination' (Tomlinson, O'Reilley, and Wallace 2013, 81), could even become exploitative by masking the power relations that exist in care settings (Currie and Spyridonidis 2016, 2019), and by deflecting participants' attention from the emotional, structural, and political barriers to systemic change whilst focusing exclusively on heroic, personal transformation and individual successes as markers of leader development (Willmott 1997; Alvesson and Spicer 2012).

The review identified four evaluation approaches that are aligned with the critical perspective on leadership (see Table 4). These are Empowerment Evaluation (Fetterman, Kaftarian, and Wandersman 1996), Transformative evaluation (Mertens 2009), Feminist evaluation (Bustelo 2017; Brisolara, Seigart, and SenGupta 2014), and Horizontal Evaluation (Thiele et al. 2007). These approaches help us understand 'the dialectical asymmetries, situated interrelations and intersecting practices of leaders and followers' (Collinson 2017, 272) in care contexts characterized by politics and power differences.

The key themes are power dynamics and the nature and consequences of gender inequality, whilst noting the racial, gender, and identity issues of learners and their learning (Smith and Gosling 2017; Neubert and Palmer 2004). These models help evaluators recognize the complexity of leadership learning and development and acknowledge the power differences held by various groups, including healthcare commissioners, senior management, and marginalized groups, who may be involved in formal and informal LDPs. They offer HRD practitioners further knowledge of how resistance and cynicism flow among different groups in hospital settings, and of how leadership can be seen as an emerging outcome of interdependent relationships and networks within a political context. By focusing on the dialectics of control and resistance and on the ideological aspect of leadership, these approaches enable evaluators and HRD practitioners to ensure that the unique characteristics of leadership learners are not lost in the development process, and to consider whether LDPs promote the establishment of equal rights and opportunities.

Discussion

In this paper, we argue that rather than reducing the practice of impact evaluation to the application of just one or two popular evaluation models, healthcare LDP evaluators could recognize the diversity of qualitative models and approaches that are available in other fields of practice, including that of programme evaluation. The seventeen programme evaluation models and approaches reviewed in this paper have the potential to answer a wide range of impact questions. They can assist HRD practitioners to discern how these models and approaches can be used exclusively on their own, or in conjunction with other approaches for mutual facilitation and complementarity in specific contexts. Interpretive culturally responsive evaluation (CRE) and culturally competent evaluation (CCE) models can provide HRD practitioners with data that move beyond narrow assumptions about leadership and leadership development. For example, as in the case of Kalra, Abel, and Esmail (2009), a healthcare organization may endorse and champion leaders from Black, Asian, and Minority Ethnic (BAME) backgrounds and support their attendance on an LDP. A functionalist evaluation model might show evidence of programme satisfaction. However, other workforce data show that these BAME leaders experience continued under-representation on hospital boards and racial inequality at work (O'Dwyer-Cunliffe and Russell 2020), and that they tend to leave their posts and the organization sooner than their white peers. Discovering why this is the case may be significant, and culturally responsive evaluations could help us understand how the programme influences next steps for LDP attendees (if at all), and whether there is any useful learning about the programme and the organizational context that can help address the issues faced by BAME leaders. Such methods enable different perceptions, views, and values to be made accessible for discussion, identifying the individual needs of the stakeholders (Wäscher et al. 2017) to create a shared perspective.

Dialogic models such as those used in collaborative evaluation can be important when attempting to explore leadership in teams – a key aspect of integrated healthcare that is, in reality, often under-explored or under-examined for impact or outcomes in the LDP process (Pallesen et al. 2020). Similarly, critical approaches such as empowerment evaluation have been used successfully in several healthcare quality improvement contexts: for example, to evaluate the effectiveness of HIV prevention programmes (Phillips et al. 2019), to reduce hospital admissions through improved diabetes care (Wandersman 2015), and to facilitate power-sharing and joint decision-making among nurses and families (Strober 2005). In all these studies, the chosen critical evaluation approach provided a better understanding of work processes and institutional arrangements in healthcare settings, and helped evaluators build capacity at these organizations to foster a learning-focused community.

The qualitative models and approaches reappraised here may be helpful in advancing LDP evaluation practice in healthcare contexts. Equally, the findings from such evaluations might help HRD practitioners to gain added value from, and make better sense of, leadership learning situations, along with learners' differences in beliefs, intentions, and values, and the social, cultural, and emotional factors that affect leaders and leadership development.

Implications for HRD research

Recently, HRD scholars have argued for integrating a temporal dimension in LDP evaluations (Joseph-Richard, Edwards, and Hazlett 2021) and have highlighted the advantages of such integration for designing, implementing, and using evaluations. By knowing the timing and duration of outcomes, HRD practitioners can make a more realistic estimation of the scope of personal and relational changes that could be observed in healthcare contexts and create more efficient learning and development investment strategies. Unfortunately, the models and approaches reviewed here offer limited guidance on how temporality could be integrated into evaluation designs. More theorization, research, and explicit guidance are needed to help HRD practitioners in this area. Published examples of applying these models/approaches, presenting a vivid picture of programme contexts, the evaluation questions used, the appropriateness of the methods employed, the processes of data analysis, the challenges faced, and the lessons learned by programme staff, could enhance our understanding of how leadership and leaders are developed, and how outcomes are experienced, in multi-cultural, cross-border healthcare contexts. Since the empirical evidence supporting the effectiveness of these models and approaches when applied in healthcare contexts is very limited, meta-evaluation studies that investigate the effectiveness of these tools might reveal what works, for whom, when, where, why, and how. Finally, since the under-use of evaluation findings is well recognized (Long 2006), we also need to show how evaluation findings can be utilized in healthcare contexts, so that investments in LDP evaluations can be justified in terms of the learning gains acquired at personal, professional, and organizational levels.

Implications for HRD practice

Given the wide range of models and approaches that are underutilized in current practice, it might seem useful to highlight one or two models as better tools for the job. However, considering the rich variation in changing healthcare contexts, it is challenging both to single out individual models/approaches as more effective than others and to promote them as more suitable to certain contexts. Every healthcare context is unique, and such recommendations may even be considered overly prescriptive. We endorse what Inouye, Yu, and Adefuin (2005) emphasized in their guide for evaluation commissioners: that evaluators must 'take into account potential cultural and linguistic barriers', re-examine 'established evaluation measures for cultural appropriateness', and/or incorporate 'creative strategies for ensuring culturally competent analysis and creative dissemination of findings to diverse audiences' (6), particularly when selecting methodological designs and tools. In line with Patton (2008), we believe that a useful, practical, ethical, and accurate evaluation 'emerges from the special characteristics and conditions of a particular situation – a mixture of people, politics, history, context, resources, constraints, values, needs, interests, and chance' (199). However, we can point out that combining certain complementary models and approaches may be fruitful in certain contexts.

Although the four categories are presented as distinctly different sets of evaluation models and approaches, they are by no means pure types. The categories are most distinct with respect to their key assumptions about how each should be implemented in a given context; significant differences also exist with respect to the locus of evaluators' power. However, many aspects of these approaches are quite similar, whichever category one adopts. For example, although stakeholder-based evaluation is the approach most visibly concerned with stakeholder engagement and participation, a comparable value position is at least implicit in most of the approaches. Empowerment evaluation approaches are quite explicit about the centrality of power relations, yet it is important to recognize who conducts an evaluation, when, where, for how long, and why, even when using approaches that do not explicitly emphasize the role of power.

Mixing these approaches in ways that are suitable for the given healthcare context is key. For example, at the start of an evaluation project in a small tertiary referral hospital, the goal-free evaluation approach (interpretative) could be used for exploratory purposes (as Scriven 1999 himself proposed), followed by the Success Case Method (dialogic) for collating evidence for impact and programme improvement. In a community (geriatric) hospital, mixing a culturally responsive evaluation (interpretative) with a stakeholder-based evaluation (dialogic) may be useful. Such purposeful mixing of models and approaches in evaluation designs could help HRD professionals draw on the potential strengths of each method while simultaneously mitigating their weaknesses. However, such decisions need to be taken only in full appreciation of increasingly diverse, complex, and adaptive healthcare systems, which generally vary across international borders (Plsek and Wilson 2001; Greenhalgh and Papoutsi 2018). We recommend that HRD practitioners adopt an eclectic approach when designing and implementing evaluations, one characterized by an awareness of the full range of options available in the literature and guided by a commitment to methodological appropriateness to a given situation (Patterson et al. 2017).

Limitations

This review is limited in that it focuses only on qualitative methods and approaches found primarily in the programme evaluation literature; future research could find newer tools suitable for healthcare contexts in other specialisms, including behavioural economics, neuroscience, and organizational psychology. In selecting these models and approaches, a practice orientation, applicability to healthcare contexts, and word limits guided our decisions. As a result, a few other taxonomy-based models that essentially extend Kirkpatrick's work (e.g., Hamblin 1974; Kaufman and Keller 1994) and approaches such as responsive evaluation (Stake 1975), responsive constructivist evaluation (also known as fourth-generation evaluation; Guba and Lincoln 1989), democratic evaluation (MacDonald 1993; Picciotto 2015), and developmental evaluation (Patton 1994), among others, have not been included. These approaches also rely heavily on qualitative methods and can be used to evaluate LDPs, although published examples of their use are rare. We believe that practitioner-focused descriptive studies of what actually happens when these models and approaches are utilized, either individually or in combination with other tools in health settings, would be beneficial.

Conclusions

Evaluating leadership development in healthcare contexts is difficult and complex. Rich theorizing and generative learning in the field of LDP evaluations are slowly increasing. By paying more attention to, and pragmatically adapting, the alternative models and approaches reviewed in this paper (as opposed to relying on a few popular programme evaluation models and approaches that are based on functionalist assumptions), HRD scholars and practitioners could demonstrate the value, if any, of LDPs in healthcare contexts. The seventeen evaluation models/approaches, we believe, provide a set of rich additional tools for HRD practitioners working in healthcare contexts across the world. To the extent that these tools are applied, evaluations are published, and healthcare leadership outcomes are convincingly demonstrated in terms of patient outcomes, continuing our evaluation efforts is essential and certainly to be encouraged. We acknowledge that these models and approaches seldom integrate questions about the timing, duration, and speed of leadership development outcomes in LDP evaluations. However, applying interpretative, dialogic, and critical models and approaches, either on their own or in combination with suitable tools, can not only provide much-needed direction for designing evaluations but also convey powerfully to programme stakeholders the richness of programme impact in ways that are essentially experiential, contextual, participatory, and collaborative.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The authors have no funding to report.

References

  • Alliger, G. M., S. Tannenbaum, J. W. Bennett, H. Traver, and A. Shortland. 1997. “A Meta-analysis of the Relations among Training Criteria.” Personnel Psychology 50 (2): 341–358. doi:10.1111/j.1744-6570.1997.tb00911.x.
  • Alvesson, M., and A. Spicer. 2012. “Critical Leadership Studies: The Case for Critical Performativity.” Human Relations 65 (3): 367–390. doi:10.1177/0018726711430555.
  • Anderson, L. 2010. “Talking the Talk–a Discursive Approach to Evaluating Management Development.” Human Resource Development International 13 (3): 285–298. doi:10.1080/13678868.2010.483817.
  • Andersson, T. 2015. “The Medical Leadership Challenge in Healthcare Is an Identity Challenge.” Leadership in Health Services 28 (2): 83–99. doi:10.1108/LHS-04-2014-0032.
  • Arthur, W., S. T. Bell, W. Bennett, and P. S. Edens. 2003. “Effectiveness of Training in Organizations: A Meta-analysis of Design and Evaluation Features.” Journal of Applied Psychology 88 (2): 234–245. doi:10.1037/0021-9010.88.2.234.
  • Aveling, E.-L., M. Parker, and M. Dixon-Woods. 2016. “What Is the Role of Individual Accountability in Patient Safety? A Multi-site Ethnographic Study.” Sociology of Health & Illness 38 (2): 216–232. doi:10.1111/1467-9566.12370.
  • Avolio, B. J., J. B. Avey, and D. Quisenberry. 2010. “Estimating Return on Leadership Development Investment.” The Leadership Quarterly 21 (4): 633–644. doi:10.1016/j.leaqua.2010.06.006.
  • Avolio, B. J., R. J. Reichard, S. T. Hannah, F. O. Walumbwa, and A. Chan. 2009. “A Meta-analytic Review of Leadership Impact Research: Experimental and Quasi-experimental Studies.” The Leadership Quarterly 20 (5): 764–784. doi:10.1016/j.leaqua.2009.06.006.
  • Bate, P., G. Robert, N. Fulop, J. Øvretveit, and M. Dixon-Woods. 2014. Perspectives on Context. London: Health Foundation.
  • Bates, R. 2004. “A Critical Analysis of Evaluation Practice: The Kirkpatrick Model and the Principle of Beneficence.” Evaluation and Program Planning 27 (3): 341–347. doi:10.1016/j.evalprogplan.2004.04.011.
  • Bennett, J. 2003. Evaluation Methods in Research. London: Continuum.
  • Birckmayer, J. D., and C. H. Weiss. 2000. “Theory-based Evaluation in Practice.” Evaluation Review 24 (4): 407–431. doi:10.1177/0193841X0002400404.
  • Black, A. M., and G. W. Earnest. 2009. “Measuring the Outcomes of Leadership Development Programs.” Journal of Leadership & Organizational Studies 16 (2): 184–196. doi:10.1177/1548051809339193.
  • Blume, B. D., J. K. Ford, T. T. Baldwin, and J. L. Huang. 2010. “Transfer of Training: A Meta-analytic Review.” Journal of Management 36 (4): 1065–1110. doi:10.1177/0149206309352880.
  • Brandon, P. R., and A. L. Ah Sam. 2014. “Program Evaluation.” In Oxford Handbook of Qualitative Research, edited by P. Leavy, 471–497. Oxford: Oxford University Press.
  • Brinkerhoff, R. O. 2005. “The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training.” Advances in Developing Human Resources 7 (1): 86–101. doi:10.1177/1523422304272172.
  • Brisolara, S., D. Seigart, and S. SenGupta. 2014. Feminist Evaluation and Research: Theory and Practice. 1st ed. London: Guilford Publications.
  • Burden, B. 2017. “Illuminative Evaluation.” In Frameworks for Practice in Educational Psychology: A Textbook for Trainees and Practitioners, edited by B. Kelly, L. M. Woolfson, and J. Boyle, 291–308. 2nd ed. London: Jessica Kingsley.
  • Burgoyne, J. G., W. Hirsh, and S. Williams. 2004. “The Development of Management and Leadership Capability and Its Contribution to Performance: The Evidence, the Prospects and the Research Need.” Nottingham, UK: DfES Publications.
  • Bustelo, M. 2017. “Evaluation from a Gender Perspective as a Key Element for (Re) Gendering the Policymaking Process.” Journal of Women, Politics & Policy 38 (1): 84–101. doi:10.1080/1554477X.2016.1198211.
  • Carroll, B. 2019. “Leadership Learning and Development.” In Leadership: Contemporary Critical Perspectives, edited by B. Carroll, J. Ford, and S. Taylor, 117–137. Los Angeles: Sage.
  • Chouinard, J. A., and J. B. Cousins. 2007. “Culturally Competent Evaluation for Aboriginal Communities: A Review of the Empirical Literature.” Journal of Multidisciplinary Evaluation 4 (8): 40–57.
  • Collins, J., and D. Denyer. 2008. “Leadership Learning and Development: A Framework for Evaluation.” In Leadership Learning: Knowledge into Action, edited by K. T. James and J. Collins, 161–177. Hampshire: Palgrave Macmillan.
  • Collinson, D. 2017. “Critical Leadership Studies: A Response to Learmonth and Morrell.” Leadership 13 (3): 272–284. doi:10.1177/1742715017694559.
  • Contandriopoulos, D., and A. Brousselle. 2012. “Evaluation Models and Evaluation Use.” Evaluation 18 (1): 61–77. doi:10.1177/1356389011430371.
  • Cooperrider, D. L., and D. K. Whitney. 2005. Appreciative Inquiry. 1st ed. San Francisco, CA: Berrett-Koehler.
  • Cousins, J. B., and L. M. Earl. 1992. “The Case for Participatory Evaluation.” Educational Evaluation and Policy Analysis 14 (4): 397–418. doi:10.3102/01623737014004397.
  • Currie, G., and D. Spyridonidis. 2016. “Interpretation of Multiple Institutional Logics on the Ground: Actors’ Position, Their Agency and Situational Constraints in Professionalized Contexts.” Organization Studies 37 (1): 77–97. doi:10.1177/0170840615604503.
  • Currie, G., and D. Spyridonidis. 2019. “Sharing Leadership for Diffusion of Innovation in Professionalized Settings.” Human Relations 72 (7): 1209–1233. doi:10.1177/0018726718796175.
  • Dart, J., and R. Davies. 2003. “A Dialogical, Story-based Evaluation Tool: The Most Significant Change Technique.” American Journal of Evaluation 24 (2): 137–155. doi:10.1177/109821400302400202.
  • Day, D. V. 2011. “Leadership Development.” In The Sage Handbook of Leadership, edited by A. Bryman, D. Collinson, K. Grint, M. Uhl-Bien, and B. Jackson, 37–50. London: Sage Publications.
  • De Déa Roglio, K., and G. Light. 2009. “Executive MBA Programs: The Development of the Reflective Executive.” Academy of Management Learning & Education 8 (2): 156–173. doi:10.5465/amle.2009.41788840.
  • Drath, W. H., C. D. McCauley, C. J. Palus, E. Van Velsor, P. M. G. O’Connor, and J. B. McGuire. 2008. “Direction, Alignment, Commitment: Toward a More Integrative Ontology of Leadership.” The Leadership Quarterly 19 (6): 635–653. doi:10.1016/j.leaqua.2008.09.003.
  • Easterby-Smith, M. 1994. Evaluating Management Development, Training, and Education. 2nd ed. Hampshire, UK: Gower.
  • Edmonstone, J. 2011. “Developing Leaders and Leadership in Health Care: A Case for Rebalancing?” Leadership in Health Services 24 (1): 8–18. doi:10.1108/17511871111102490.
  • Edmonstone, J. 2013a. “Healthcare Leadership: Learning from Evaluation.” Leadership in Health Services 26 (2): 148–158. doi:10.1108/17511871311319731.
  • Edmonstone, J. 2013b. “What Is Wrong with NHS Leadership Development?” British Journal of Healthcare Management 19 (11): 531–538. doi:10.12968/bjhc.2013.19.11.531.
  • Edwards, G., and S. Turnbull. 2013a. “A Cultural Approach to Evaluating Leadership Development.” Advances in Developing Human Resources 15 (1): 46–60. doi:10.1177/1523422312467144.
  • Edwards, G., and S. Turnbull. 2013b. “Special Issue on New Paradigms in Evaluating Leadership Development.” Advances in Developing Human Resources 15 (1): 3–9. doi:10.1177/1523422312467147.
  • Eisner, E. W. 1997. The Enlightened Eye: Qualitative Inquiry and the Enhancement of Educational Practice. New York: Allyn & Bacon.
  • Ely, K., L. A. Boyce, J. K. Nelson, S. J. Zaccaro, G. Hernez-Broome, and W. Whyman. 2010. “Evaluating Leadership Coaching: A Review and Integrated Framework.” The Leadership Quarterly 21 (4): 585–599. doi:10.1016/j.leaqua.2010.06.003.
  • Esmail, A., V. Kalra, and P. Abel. 2005. “A Critical Review of Leadership Interventions Aimed at People from Black and Minority Ethnic Groups.” London: Health Foundation.
  • Fetterman, D. M. 2013. “Empowerment Evaluation: Learning to Think like an Evaluator.” In Evaluation Roots: A Wider Perspective of Theorists’ Views and Influences, edited by M. Alkin, 304–322. London: Sage.
  • Fetterman, D. M., S. J. Kaftarian, and A. Wandersman. 1996. Empowerment Evaluation: Knowledge and Tools for Self-assessment & Accountability. Thousand Oaks, CA: Sage Publications.
  • Finney, L., and C. Jefkins. 2009. “Best Practice in OD Evaluation.” West Sussex, UK: Roffey Park Institute.
  • Fitz-Gibbon, C. T., and L. L. Morris. 1987. How to Analyze Data. 1st ed. London: Sage.
  • Ford, J. 2015. “Going beyond the Hero in Leadership Development: The Place of Healthcare Context, Complexity and Relationships: Comment on ‘Leadership and Leadership Development in Healthcare Settings – A Simplistic Solution to Complex Problems?’” International Journal of Health Policy and Management 4 (4): 261–263. doi:10.15171/ijhpm.2015.43.
  • Ford, K. J., and R. Sinha. 2008. “Advances in Training Evaluation Research.” In The Oxford Handbook of Personnel Psychology, edited by S. Cartwright and C. L. Cooper, 291–316. Oxford: Oxford University Press.
  • Frierson, H., S. Hood, and G. Hughes. 2002. “Strategies that Address Culturally Responsive Evaluation.” In The 2002 User-friendly Handbook for Project Evaluation, edited by J. Frechtling, 63–73. Arlington, VA: National Science Foundation.
  • Funnell, S. C., and P. J. Rogers. 2011. Purposeful Program Theory: Effective Use of Logic Models and Theories of Change. San Francisco, CA: Jossey-Bass.
  • Gagnon, S., and D. Collinson. 2014. “Rethinking Global Leadership Development Programmes: The Interrelated Significance of Power, Context and Identity.” Organization Studies 35 (5): 645–670. doi:10.1177/0170840613509917.
  • Greenhalgh, T., and C. Papoutsi. 2018. “Studying Complexity in Health Services Research: Desperately Seeking an Overdue Paradigm Shift.” BMC Medicine 16 (1): 95. doi:10.1186/s12916-018-1089-4.
  • Grove, J., B. Kibel, and T. Haas. 2007. “EvaluLEAD: An Open-systems Perspective on Evaluating Leadership Development.” In The Handbook of Leadership Development Evaluation, edited by K. M. Hannum, J. W. Martineau, and C. Reinelt, 71–110. San Francisco, CA: John Wiley & Sons.
  • Guba, E. G., and Y. S. Lincoln. 1989. Fourth Generation Evaluation. Newbury Park, CA: Sage.
  • Ham, C., D. Berwick, and J. Dixon. 2016. Improving Quality in the English NHS. London: King’s Fund.
  • Hamblin, A. C. 1974. Evaluation and Control of Training. London: McGraw-Hill.
  • Hartley, J., J. Martin, and J. Benington. 2008. “Leadership in Health Care: A Review of the Literature for Health Care Professionals, Managers and Researchers.” National Institute for Health Research HS&DR. Coventry, UK: University of Warwick. http://www.nets.nihr.ac.uk/projects/hsdr/081601148.
  • Hillen, H., H. Pfaff, and A. Hammer. 2017. “The Association between Transformational Leadership in German Hospitals and the Frequency of Events Reported as Perceived by Medical Directors.” Journal of Risk Research 20 (4): 499–515. doi:10.1080/13669877.2015.1074935.
  • Ho, M. 2016. “Investment in Learning Increases for Fourth Straight Year.” https://www.td.org/Publications/Magazines/TD/TDArchive/2016/11/Investment-in-Learning-Increases-for-Fourth-Straight-Year
  • Hofmann, R., and J. D. Vermunt. 2017. “Professional Development in Clinical Leadership: Evaluation of the Chief Residents Clinical Leadership and Management Programme.” Faculty of Education, University of Cambridge, Working Papers Series 5: 23.
  • Holton, E. F. 1996. “The Flawed Four-level Evaluation Model.” Human Resource Development Quarterly 7 (1): 5–21. doi:10.1002/hrdq.3920070103.
  • Hood, S., R. Hopson, and H. Frierson. 2005. The Role of Culture and Cultural Context in Evaluation: A Mandate for Inclusion, the Discovery of Truth and Understanding. Greenwich, CT: Information Age.
  • Hoole, E. R., and J. W. Martineau. 2014. “Evaluation Methods.” In The Oxford Handbook of Leadership and Organizations, edited by D. V. Day, 167–196. Oxford: Oxford University Press.
  • Inouye, T. E., H. C. Yu, and J. A. Adefuin. 2005. “Commissioning Multicultural Evaluation: A Foundation Resource Guide.” In partnership with Social Policy Research Associates. Oakland, CA: California Endowment’s Diversity in Health Education Project. https://www.informalscience.org/commissioning-multicultural-evaluation-foundation-research-guide
  • Jeyaraman, M. M., S. M. Z. Qadar, A. Wierzbowski, F. Farshidfar, J. Lys, G. Dickson, K. Grimes, et al. 2018. “Return on Investment in Healthcare Leadership Development Programs.” Leadership in Health Services 31 (1): 77–97. doi:10.1108/LHS-02-2017-0005.
  • Joseph-Richard, P., G. Edwards, and S. A. Hazlett. 2021. “Leadership Development Outcomes Research and the Need for a Time-sensitive Approach.” Human Resource Development International 24 (2): 173–199. doi:10.1080/13678868.2020.1815155.
  • Kalra, V. S., P. Abel, and A. Esmail. 2009. “Developing Leadership Interventions for Black and Minority Ethnic Staff: A Case Study of the National Health Service (NHS) in the UK.” Journal of Health Organization and Management 23 (1): 103–118. doi:10.1108/14777260910942588.
  • Kaufman, R., and J. M. Keller. 1994. “Levels of Evaluation: Beyond Kirkpatrick.” Human Resource Development Quarterly 5 (4): 371–380. doi:10.1002/hrdq.3920050408.
  • Kennedy, F., B. Carroll, and J. Francoeur. 2013. “Mindset Not Skill Set: Evaluating in New Paradigms of Leadership Development.” Advances in Developing Human Resources 15 (1): 10–26. doi:10.1177/1523422312466835.
  • King, E., and P. Nesbit. 2015. “Collusion with Denial: Leadership Development and Its Evaluation.” Journal of Management Development 34 (2): 134–152. doi:10.1108/JMD-02-2013-0023.
  • Kirkpatrick, D. 1996. “Great Ideas Revisited.” Training & Development 50 (1): 54–60.
  • King’s Fund. 2021. Integrated Care Systems Explained: Making Sense of Systems, Places and Neighbourhoods. London: King’s Fund.
  • Kuluski, K., R. J. Reid, and G. R. Baker. 2020. “Applying the Principles of Adaptive Leadership to Person-centred Care for People with Complex Care Needs: Considerations for Care Providers, Patients, Caregivers and Organizations.” Health Expectations 24 (2): 175–181. doi:10.1111/hex.13174.
  • Lacerenza, C. N., D. L. Reyes, S. L. Marlow, D. L. Joseph, and E. Salas. 2017. “Leadership Training Design, Delivery, and Implementation: A Meta-analysis.” Journal of Applied Psychology 102 (12): 1686–1718. doi:10.1037/apl0000241.
  • Leach, L., B. Hastings, G. Schwarz, B. Watson, D. Bouckenooghe, L. Seoane, and D. Hewett. 2021. “Distributed Leadership in Healthcare: Leadership Dyads and the Promise of Improved Hospital Outcomes.” Leadership in Health Services 34 (4): 353–374. doi:10.1108/LHS-03-2021-0011.
  • Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage Publications.
  • Linzalone, R., G. Schiuma, and H. Laihonen. 2015. “A Review of Program and Project Evaluation Models.” Measuring Business Excellence 19 (3): 90–99. doi:10.1108/MBE-04-2015-0024.
  • Long, A. 2006. “Evaluation of Health Services: Reflections on Practice.” In The Sage Handbook of Evaluation, edited by I. F. Shaw, J. C. Green, and M. M. Mark, 461–485. London: Sage.
  • Lucas, R., E. F. Goldman, A. R. Scott, and V. Dandar. 2018. “Leadership Development Programs at Academic Health Centers: Results of a National Survey.” Academic Medicine 93 (2): 229–236. doi:10.1097/ACM.0000000000001813.
  • Ludema, J. D., D. L. Cooperrider, and F. J. Barrett. 2006. “Appreciative Inquiry: The Power of the Unconditional Positive Question.” In Handbook of Action Research, edited by P. Reason and H. Bradbury, 155–165. London: Sage.
  • Mabey, C. 2013. “Leadership Development in Organizations: Multiple Discourses and Diverse Practice.” International Journal of Management Reviews 15 (4): 359–380. doi:10.1111/j.1468-2370.2012.00344.x.
  • MacDonald, B. 1993. “A Political Classification of Evaluation Studies in Education.” In Social Research: Philosophy, Politics and Practice, edited by M. Hammersley, 105–109. London: Sage. https://www.google.co.uk/books/edition/Social_Research/nMT7xt3CmgkC?hl=en&gbpv=1&printsec=frontcover
  • Maddox, J., and F. L. Replogle. 2019. “Strategic Change at a Hospital in Ecuador: Impact of Culture and Employee Engagement.” Organization Development Journal 37 (3): 21–30.
  • Mark, M. M., and R. L. Shotland. 1985. “Stakeholder-based Evaluation and Value Judgments.” Evaluation Review 9 (5): 605–626. doi:10.1177/0193841X8500900504.
  • Martin, G. P., D. Kocman, T. Stephens, C. J. Peden, and R. M. Pearse. 2017. “Pathways to Professionalism? Quality Improvement, Care Pathways, and the Interplay of Standardisation and Clinical Autonomy.” Sociology of Health & Illness 39 (8): 1314–1329. doi:10.1111/1467-9566.12585.
  • McCauley, C. D., and C. J. Palus. 2020. “Developing the Theory and Practice of Leadership Development: A Relational View.” The Leadership Quarterly 32 (5): 101456. doi:10.1016/j.leaqua.2020.101456.
  • McCray, J., P. F. Temple, and S. McGregor. 2021. “Adult Social Care Managers Speak Out: Exploring Leadership Development.” International Journal of Training and Development 25 (2): 200–216.
  • McLean, S., and G. Moss. 2003. “They’re Happy, but Did They Make a Difference? Applying Kirkpatrick’s Framework to the Evaluation of a National Leadership Program.” Canadian Journal of Program Evaluation 18 (1): 1–23.
  • Mertens, D. M. 2009. Transformative Research and Evaluation. New York, NY: Guilford Press.
  • Mertens, D. M. 2015. Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods. 4th ed. Thousand Oaks, CA: Sage Publications.
  • Minshew, L. M., J. M. Zeeman, A. A. Olsen, A. A. Bush, J. H. Patterson, and J. E. McLaughlin. 2021. “Qualitative Evaluation of a Junior Faculty Team Mentoring Program.” American Journal of Pharmaceutical Education 85 (4): 288. doi:10.5688/ajpe8281.
  • Neubert, M. J., and L. D. Palmer. 2004. “Emergence of Women in Healthcare Leadership: Transforming the Impact of Gender Differences.” The Journal of Men’s Health & Gender 1 (4): 383–387. doi:10.1016/j.jmhg.2004.09.015.
  • Nuño-Solinís, R. 2017. “Revisiting Organizational Learning in Integrated Care.” International Journal of Integrated Care 17 (4): 1–6.
  • O’Dwyer-Cunliffe, F., and J. Russell. 2020. “BAME Representation and Experience in the NHS.” NHS Providers - Inclusive Leadership (blog). https://tinyurl.com/2muk726y
  • O’Sullivan, R. G. 2012. “Collaborative Evaluation within a Framework of Stakeholder-oriented Evaluation Approaches.” Evaluation and Program Planning 35 (4): 518–522. doi:10.1016/j.evalprogplan.2011.12.005.
  • Okpala, P. 2020. “Increasing Access to Quality Healthcare through Collaborative Leadership.” International Journal of Healthcare Management 13 (3): 229–230. doi:10.1080/20479700.2017.1401276.
  • Olano, B. L. 2015. “Preparing the New Nurse Leader for Health Care Delivery in South Africa in the Twenty-first Century.” African Journal for Physical, Health Education, Recreation and Dance (AJPHERD) 1 (Supplement 2): 485–497.
  • Pallesen, K. S., L. Rogers, S. Anjara, A. De Brún, and E. McAuliffe. 2020. “A Qualitative Evaluation of Participants’ Experiences of Using Co-design to Develop a Collective Leadership Educational Intervention for Health-care Teams.” Health Expectations 23 (2): 358–367.
  • Patterson, T. E., S. Stawiski, K. M. Hannum, H. Champion, and H. Downs. 2017. Evaluating the Impact of Leadership Development. 2nd ed. Greensboro, NC: Center for Creative Leadership.
  • Patton, M. Q. 1994. “Developmental Evaluation.” Evaluation Practice 15 (3): 311–319. doi:10.1177/109821409401500312.
  • Patton, M. Q. 1997. Utilization-focused Evaluation: The New Century Text. 3rd ed. Thousand Oaks, CA: Sage Publications.
  • Patton, M. Q. 2008. Utilization-focused Evaluation: The New Century Text. 4th ed. Thousand Oaks, CA: Sage Publications.
  • Patton, M. Q. 2015. Qualitative Research and Evaluation Methods: Integrating Theory and Practice. 4th ed. Thousand Oaks, CA: Sage Publications.
  • Pettigrew, A. M., R. W. Woodman, and K. S. Cameron. 2001. “Studying Organizational Change and Development: Challenges for Future Research.” Academy of Management Journal 44 (4): 697–713.
  • Phillips, G., P. Lindeman, C. N. Adames, E. Bettin, C. Bayston, P. Stonehouse, D. Kern, A. K. Johnson, C. H. Brown, and G. J. Greene. 2019. “Empowerment Evaluation: A Case Study of Citywide Implementation within an HIV Prevention Context.” American Journal of Evaluation 40 (3): 318–334. doi:10.1177/1098214018796991.
  • Phillips, J. J., and P. Phillips. 2007. “Measuring Return on Investment in Leadership Development.” In The Handbook of Leadership Development Evaluation, edited by K. M. Hannum, J. W. Martineau, and C. Reinelt, 137–166. San Francisco, CA: John Wiley & Sons.
  • Picciotto, R. 2015. “Democratic Evaluation for the 21st Century.” Evaluation 21 (2): 150–166. doi:10.1177/1356389015577511.
  • Plsek, P. E., and T. Wilson. 2001. “Complexity, Leadership, and Management in Healthcare Organisations.” British Medical Journal 323 (7315): 746–749. doi:10.1136/bmj.323.7315.746.
  • Raju, P. G. 2021. “Healer Vs Leader: Determinants & Deterrents of Clinician Leadership in Indian Healthcare.” Indian Journal of Industrial Relations 57 (2): 253–270.
  • Ramseur, P., M. A. Fuchs, P. Edwards, and J. Humphreys. 2018. “The Implementation of a Structured Nursing Leadership Development Program for Succession Planning in a Health System.” The Journal of Nursing Administration 48 (1): 25–30. doi:10.1097/NNA.0000000000000566.
  • Rodriguez-Campos, L. 2012. “Stakeholder Involvement in Evaluation: Three Decades of the American Journal of Evaluation.” Journal of Multidisciplinary Evaluation 8 (17): 57–79.
  • Russ-Eft, D., and H. Preskill. 2009. Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change. 2nd ed. New York: Basic Books.
  • Schwartz, J., J. Bersin, and B. Pelster. 2014. Human Capital Trends 2014 Survey: Top 10 Findings. Deloitte Consulting LLP and Bersin by Deloitte. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2014/human-capital-trends-2014-survey-top-10-findings.html
  • Scriven, M. 1991. Evaluation Thesaurus. 4th ed. Thousand Oaks, CA: Sage.
  • Scriven, M. 1999. “Pros and Cons about Goal-free Evaluation.” American Journal of Evaluation 12 (1): 55–66.
  • Seigart, D. 2014. “Feminist Research Approaches to Studying School-based Health Care: A Three-country Comparison.” In Feminist Evaluation and Research: Theory and Practice, edited by S. Brisolara, D. Seigart, and S. SenGupta, 263–283. New York: The Guilford Press.
  • Sfantou, D. F., A. Laliotis, A. E. Patelarou, D. Sifaki-Pistolla, M. Matalliotakis, and E. Patelarou. 2017. “Importance of Leadership Style Towards Quality-of-care Measures in Healthcare Settings: A Systematic Review.” Healthcare 5 (4): 73. doi:10.3390/healthcare5040073.
  • Sinclair, A. 2009. “Seducing Leadership: Stories of Leadership Development.” Gender, Work, and Organization 16 (2): 266–284. doi:10.1111/j.1468-0432.2009.00441.x.
  • Smith, J., and J. Gosling. 2017. “Moving on Up: Women in Healthcare Leadership.” Practice Management 27 (2): 24–27. doi:10.12968/prma.2017.27.2.24.
  • Solà, J. G., G. J. Badia, P. D. Hito, M. A. Campo Osaba, and J. L. Del Val García. 2016. “Self-perception of Leadership Styles and Behaviour in Primary Health Care.” BMC Health Services Research 16 (1): 572. doi:10.1186/s12913-016-1819-2.
  • Spencer, L., J. Ritchie, J. Lewis, and L. Dillon. 2003. “Quality in Qualitative Evaluation: A Framework for Assessing Research Evidence.” London: Government Chief Social Researcher’s Office.
  • Stacey, R. 2012. Tools and Techniques of Leadership and Management: Meeting the Challenge of Complexity. London: Routledge.
  • Stake, R. 1975. Program Evaluation, Particularly Responsive Evaluation. Kalamazoo, MI: Western Michigan University.
  • Stame, N. 2004. “Theory-based Evaluation and Varieties of Complexity.” Evaluation 10 (1): 58–76. doi:10.1177/1356389004043135.
  • Strober, E. 2005. “Is Power-sharing Possible? Using Empowerment Evaluation with Parents and Nurses in a Pediatric Hospital Transplantation Setting.” Human Organization 64 (2): 201–210. doi:10.17730/humo.64.2.uh6u2exgheyxhxqj.
  • Stufflebeam, D. L. 1983. “The CIPP Model for Program Evaluation.” In Evaluation Models: Viewpoints on Educational and Human Services Evaluation, edited by G. F. Madaus, M. Scriven, and D. L. Stufflebeam, 279–317. Boston: Kluwer-Nijhoff Publishers.
  • Stufflebeam, D. L., and A. J. Shinkfield. 2007. Evaluation Theory, Models, and Applications. San Francisco, CA: Jossey-Bass.
  • Sucharew, H., and M. Macaluso. 2019. “Methods for Research Evidence Synthesis: The Scoping Review Approach.” Journal of Hospital Medicine 14 (7): 416–418. doi:10.12788/jhm.3248.
  • Swanson, R. A., and C. M. Sleezer. 1987. “Training Effectiveness Evaluation.” Journal of European Industrial Training 11 (4): 7–16. doi:10.1108/eb002227.
  • Tayabas, T. T., T. C. León, and J. M. Espino. 2014. “Qualitative Evaluation: A Critical and Interpretative Complementary Approach to Improve Health Programs and Services.” International Journal of Qualitative Studies on Health and Well-Being 9 (1): 1–6.
  • Thiele, G., A. Devaux, C. Velasco, and D. Horton. 2007. “Horizontal Evaluation: Fostering Knowledge Sharing and Program Improvement within a Network.” American Journal of Evaluation 28 (4): 493–508.
  • Tomlinson, M., D. O’Reilly, and M. Wallace. 2013. “Developing Leaders as Symbolic Violence: Reproducing Public Service Leadership through the (Misrecognized) Development of Leaders’ Capitals.” Management Learning 44 (1): 81–97. doi:10.1177/1350507612472151.
  • Turner, P. 2019. “Leadership Development Practices.” In Leadership in Healthcare, 295–324. Cham: Palgrave Macmillan.
  • Wandersman, A. 2015. “Getting to Outcomes: An Empowerment Evaluation Approach for Capacity Building and Accountability.” In Empowerment Evaluation: Knowledge and Tools for Self-assessment, Evaluation Capacity Building, and Accountability, edited by D. M. Fetterman, S. J. Kaftarian, and A. Wandersman, 2nd ed., 150–164. Thousand Oaks, CA: Sage.
  • Wang, C., Y. L. Yuan, and M. L. Feng. 1996. “Photovoice as a Tool for Participatory Evaluation: The Community’s View of Process and Impact.” Journal of Contemporary Health 4 (3): 47–49.
  • Wäscher, S., S. Salloch, P. Ritter, J. Vollmann, and J. Schildmann. 2017. “Methodological Reflections on the Contribution of Qualitative Research to the Evaluation of Clinical Ethics Support Services.” Bioethics 31 (4): 237–245. doi:10.1111/bioe.12347.
  • Watkins, K. E., I. H. Lysø, and K. de Marrais. 2011. “Evaluating Executive Leadership Programs: A Theory of Change Approach.” Advances in Developing Human Resources 13 (2): 208–239. doi:10.1177/1523422311415643.
  • Weiss, C. H. 1995. “Nothing as Practical as Good Theory: Exploring Theory-based Evaluation for Comprehensive Community Initiatives for Children and Families.” In New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts, edited by P. Connell, A. C. Kubisch, L. B. Schorr, and C. H. Weiss. 1st ed., 65–92. Washington DC: Aspen Institute.
  • West, M. 2021. Compassionate Leadership: Sustaining Wisdom, Humanity and Presence in Health and Social Care. London: The Swirling Leaf Press.
  • West, M., K. Armit, L. Loewenthal, R. Eckert, T. West, and A. Lee. 2015. Leadership and Leadership Development in Healthcare: The Evidence Base. London: King’s Fund.
  • Willmott, H. 1997. “Critical Management Learning.” In Management Learning: Integrating Perspectives in Theory and Practice, edited by J. Burgoyne and M. Reynolds, 161–176. London: Sage.
  • World Health Organization (WHO). 2020. Global Strategy on Human Resources for Health: Workforce 2030. Geneva: World Health Organization. https://www.who.int/publications/i/item/9789241511131
  • Yang, T., M. G. Ma, Y. Li, Y. Tian, H. Liu, Y. Chen, S. Q. Zhang, and J. Deng. 2020. “Do Job Stress, Health, and Presenteeism Differ between Chinese Healthcare Workers in Public and Private Hospitals: A Cross-sectional Study.” Psychology, Health & Medicine 25 (6): 653–666.