
‘AI coaching’: democratising coaching service or offering an ersatz?

Received 01 Mar 2024, Accepted 12 Jun 2024, Published online: 27 Jun 2024

ABSTRACT

Organisational coaching is not the only field that has recently been challenged by the extraordinary developments in artificial intelligence (AI), with the threat that machines will replace human practitioners. The challenges of this field, however, are unique because the nature of coaching remains fairly elusive, making comparison between human coaching and AI coaching a complicated exercise. In this paper we conceptualise the essential characteristics of organisational coaching and propose a number of criteria according to which interventions can be identified as coaching. Our conclusion is that AI ‘coaching’ does not meet these criteria. However, considering various genres of organisational coaching alongside simple model-based types, we identify a number of elements in human coaching that could be augmented to various degrees by the use of AI.

Implications for practitioners

  • Acknowledging the long-standing lack of consensus about defining organisational coaching, we identify six essential characteristics of coaching. In combination, they can serve as criteria for deciding whether an organisational intervention can be called coaching.

  • Stand-alone artificial intelligence coaching does not meet these criteria and requires a different name.

  • We suggest which elements of human coaching can be augmented by the use of AI in different types of coaching.

Introduction

The fast growth of artificial intelligence (AI) applications in various contexts has bewitched the coaching community, creating a huge wave of both enthusiasm and anxiety. The enthusiasm is a typical reaction of coaches to new and popular ideas that could give them a marketing advantage in the ever-competitive coaching industry. Bachkirova and Borrington (2020) describe such ideas as ‘beautiful’ but warn that they can ‘make us ill’ if accepted uncritically. Until recently neuroscience was the ‘flavour of the month’; now it is AI. The anxiety that coaches feel about AI development is also natural in response to threatening discussions in various fields that some knowledge-based professions may become redundant and be fully replaced by AI (Dwivedi, Kaur, Choudhary, Singla, & Barnwal, 2023). Even if not irrational, both of these reactions, we would argue, are based on a limited understanding of what AI coaching is, what it actually offers, what opportunities and issues it presents, and how it concurs with the rationale for coaching services, particularly in organisations. Yet again, these discussions reveal a long-standing controversy – the lack of consensus about what is meant by coaching per se (e.g., Bachkirova & Kauffman, 2009; Garvey et al., 2010; Grant et al., 2023).

The emerging literature on the use of AI in coaching is mainly descriptive (Passmore & Woodward, 2023), with some initial attempts at conceptual analysis (Graßmann & Schermuly, 2021) or empirical investigation (Passmore & Tee, 2023; Terblanche et al., 2022a, 2022b) that aim at gathering feedback and evaluating the value of the actual use of some initial devices that utilise artificial intelligence for coaching purposes. Very few authors so far discuss the conceptual issues that the introduction of AI presents to coaching as a practice. The discussions about AI in coaching communities vacillate between attention to the advantages and disadvantages of AI in comparison to human coaching, often specifying neither the problems that need to be solved nor how AI should be used, whether as augmentation or replacement. In all discussions, whether research or practitioner based, the understanding of coaching is regularly assumed to be unified and uncomplicated: in other words, ‘just coaching’ (Passmore & Tee, 2023).

The main incentive for the development of AI coaching (AIC) as a substitute for human coaching is typically identified as the democratisation of coaching (Terblanche, 2020; Terblanche et al., 2024), meaning freer access to something that is rightly recognised as not available to many people. It is important to mention parenthetically that AIC is ‘sold as a service’ to organisations by AI developers, which also restricts access to such a service for wider populations. A more serious concern with this claim, in our view, is that such ‘democratisation’ might obscure the same divide that already exists between those with and without means and power, but in this case by offering the latter a poor substitute – an ersatz – under the name of coaching.

To explore if this is the case, we aim to consider, from the position of philosophical pragmatism (Dewey, 1938; Peirce, 1878/1955), how the AI contribution to coaching conceptually coheres with the idea and practice of organisational coaching. We believe that one of the best things the ‘AI revolution’ brings to coaching is that it forces coaching scholars and practitioners to face the need to explore, and become more explicit about, what it is that constitutes coaching, what its main purpose is, and what is unique about the value it adds in comparison to other interpersonal interventions. This analysis is important not only for the initial stages of the introduction of AI but also for any future efforts in AIC development and implementation, which would likely draw on substantial resources in comparison to other potential research projects and innovations.

Critically engaging with these questions, we explore the main conceptual challenges of the interplay between AI and the discipline of coaching. We propose a number of criteria according to which services to organisations can be identified as coaching and explore whether AIC meets these criteria when considered as a replacement for human coaching. We use ‘organisational coaching’ in a similar way to ‘workplace coaching’, to specify coaching that involves a third-party sponsor (Bozer & Jones, 2018). Recognising the diversity of coaching genres and underpinning theories (Bachkirova & Borrington, 2019; Cox et al., 2023), we review how AI can be useful for the purposes of augmenting human coaching. Questioning some previously held assertions in regard to augmentation (e.g., Graßmann & Schermuly, 2021), we offer a new framework for the potential application of AI technology that can support and enrich human coaching without generating unnecessary competitive turbulence.

What do we know at this point?

To start with terminology, current AIC devices are examples of Large Language Models (LLMs). In machine learning, LLMs are artificial neural networks whose input is text accumulated by ‘web scraping’ (Krosnick & Oney, 2023) – collecting data from texts on the internet’s ever-growing world of information – as well as specific and intentionally selected training data (Moore & Lewis, 2010). They work by remixing and recombining existing writing, repeatedly predicting the next typically used word, thus generating convincing language without understanding the meaning of the language processed (Bowman, 2023). One of the puzzles is the fact that LLMs are ‘black boxes’ or ‘alien intelligence’, because it is not clear at this stage how they actually perform linguistic tasks (Frank, 2023).

A more significant puzzle is understanding, at least in general terms, what is referred to as ‘intelligence’ in LLMs. LLMs clearly demonstrate linguistic capability. However, if we believe that ‘intelligence’, as we know it, includes the capacity for making sense of what is experienced (Clark, 1998), coupled with the capacity for reasoning, then the term ‘intelligence’ is not, strictly speaking, justified in relation to LLMs (Gáti, 2023). This creates an interesting paradox: linguistic capability and intelligence do not necessarily go together. One useful way of understanding LLMs is as a ‘stochastic parrot’, a term coined by Bender et al. (2021). Stochastic in this context implies that AI ‘outputs’ are random and based on probability. Bender et al. (2021) describe a stochastic parrot as ‘haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning’ (pp. 616–617). That is why they sometimes produce obviously incorrect answers or ‘hallucinate’, as it is sometimes euphemised (Athaluri et al., 2023; Bontridder & Poullet, 2021; Passmore & Tee, 2023).
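To make the ‘stochastic parrot’ notion concrete, consider a minimal illustrative sketch (ours, purely hypothetical, and not drawn from any actual AIC product). It ‘learns’ only which words follow which in a toy training text and then generates by repeatedly sampling the next word in proportion to those counts – producing fluent-looking output with no reference to meaning:

    # A toy 'stochastic parrot': next-word prediction from bigram counts only.
    # Purely illustrative; real LLMs use large neural networks, but likewise
    # sample the next token from a probability distribution over training data.
    import random
    from collections import defaultdict

    training_text = (
        "the coach listens and the client reflects and the coach asks "
        "questions and the client sets goals and the coach listens"
    ).split()

    # Record which words follow which -- the model's only 'knowledge'.
    bigrams = defaultdict(list)
    for current_word, next_word in zip(training_text, training_text[1:]):
        bigrams[current_word].append(next_word)

    def generate(seed, length=12):
        """Stitch a sequence together by probabilistic next-word prediction."""
        words = [seed]
        for _ in range(length):
            candidates = bigrams.get(words[-1])
            if not candidates:
                break  # no continuation observed in training data
            # Duplicates in the list make frequent continuations more likely:
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the"))  # e.g. 'the client sets goals and the coach asks ...'

Because the choice at each step is sampled, repeated runs produce different, equally ‘plausible’ sequences – the stochasticity that Bender et al. (2021) describe.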

Another challenge of LLM products is that they are based on sophisticated ‘borrowing’ of material created by human authors, often without fair or correct acknowledgement of the appropriated sources, impinging on the rights of the authors. The quality of such imitation can be very impressive, but those not familiar with the original sources could be misled or accused of plagiarism (Rahman & Santacana, 2023).

In the context of AIC, we could say that an AI ‘coach’ essentially paraphrases the content provided by the client with some added noise. More sophisticated devices can ‘remember’ a limited amount of previous content and prompts by the client, which creates an impression of a continuing and joined-up conversation (see the sketch below). However, AIC cannot understand the problem or task that is presented and cannot identify what is out of context, incorrect or inappropriate. To be fair, one might argue that some human conversations also fit this description; however, it is unlikely that such conversations can sustain a viable coaching relationship.
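This apparent ‘memory’ can be illustrated with a minimal, hypothetical chat loop (again ours, not any vendor’s design; MAX_TURNS and fake_llm are illustrative stand-ins, not a real AIC system’s API). The client’s recent turns are simply re-sent to the model as context, truncated to a fixed window, so older material silently drops out:

    # Illustrative sketch of windowed 'memory' in an AI 'coach' chat loop.
    MAX_TURNS = 4  # turns older than this fall out of the window

    def fake_llm(context):
        # Stand-in for a real model: merely paraphrases the latest turn.
        return "It sounds like you are saying: " + context[-1]

    history = []

    def coach_turn(client_message):
        history.append(client_message)
        window = history[-MAX_TURNS:]  # only recent turns are 'remembered'
        return fake_llm(window)

    print(coach_turn("I feel stuck in my role."))
    print(coach_turn("My manager never gives feedback."))

Nothing is understood or retained beyond this window; the impression of a joined-up conversation is an artefact of re-supplied text.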

AI in the literature

Since the emergence of public access to web-based AI models, such as OpenAI’s ChatGPT in 2022, there has been an increasing volume of research on AI and LLMs, to the extent that meta-analyses have emerged (Blut et al., 2021; Kaplan et al., 2023; Schemmer et al., 2022). Authors consider questions relating to the utility of AI in practice – for example, how AI might assist learning and knowledge management (Jarrahi et al., 2023) and, crucially, how humans interact with AI, such as considerations of anthropomorphism (Blut et al., 2021; Einola et al., 2023) and trust (Kaplan et al., 2023). The general literature on LLMs also addresses some technical but important aspects, such as the size of the model in terms of training data (Hsieh et al., 2023), misinformation, natural language and semantics (Aladakatti & Senthil Kumar, 2023), ‘opaqueness’ (Toy, 2023) and the inherent bias of systems (Ray, 2023).

As those with various interests in AI focus on differing aspects of the technology, research in fields allied to coaching has emerged, such as therapy, psychotherapy and mental health (Knox et al., 2023; Sedlakova & Trachsel, 2023; Swartz, 2023). These publications raise issues of ethics, confidentiality, bias, utility and the appropriateness of AI application. Rather than an ‘all out’ assault on AI as inappropriate in the context of mental health, these authors raise potential issues from their professional viewpoints and often make suggestions for where humans and AI might cooperate in service of the end goals of therapy. The research and writing in these allied fields are nascent, and the debates unresolved.

In the coaching literature, AIC brings another set of considerations for this specific context (Clutterbuck, 2022, pp. 369–379). For example, Graßmann and Schermuly (2021) offer a definition of AI coaching ‘as a machine-assisted, systematic process to help clients set professional goals and construct solutions to efficiently achieve them’ (p. 109). Analysis of this definition highlights some problematic areas. What does machine-assisted mean? Is coaching solely a goal-related activity? The definition implies that goals are efficiently achieved; however, coaching is often seen not as systematic and efficient, but rather as multifaceted, ‘messy’ and experimental (Cavanagh, 2016; Cox et al., 2023). Addressing each of these questions is far from straightforward, considering that there is no universal concept of coaching and no agreement on its content and methodologies.

Furthermore, there are ongoing debates questioning the centrality of goals in coaching and goal attainment as a criterion of coaching effectiveness (e.g., Clutterbuck & Spence, 2017). This should put in perspective the definition of AIC by Graßmann and Schermuly (2021) and the results of the research by Terblanche et al. (2022b) on the effectiveness of coaching. The latter examined the AI coach ‘Vici’s’ impact on goal attainment, resilience, psychological wellbeing and perceived stress in a (short) longitudinal study. Significant results were obtained only for goal attainment, and on this basis they conclude that Vici is effective, cost-effective and scalable.

Although the wider AI literature is notable for a good deal of discussion of both the capabilities and the limitations of AI, in the coaching canon such discussions are scant and emergent. They are heavily focussed on efficacy and outcomes, with goal attainment usually at the fore. The questions not being asked in the published research so far are those around applicability in principle: should AI be used in coaching, and how might we make such decisions?

Few attempts have been made to address questions of applicability, which require considering a number of expectations, or even standards, currently applied to human coaches, i.e., whether the coach is competent, safe and useful. Determining the competence of human coaches is not simple, but frameworks and standards (credentialling, accreditation and qualification) are accepted by some professional bodies as proxies for competence. Clearly, a machine cannot ‘hold’ any of these indicators of competence, but the question remains whether those who are training these devices hold them. Passmore and Tee (2023) report that GPT-4, on being assessed by an International Coach Federation (ICF) Master Certified Coach, did not meet the standard for Associate Certified Coach (the lowest level of credential for the ICF).

To establish whether a human coach is safe to practise requires assessment by trained ‘judges’, assured by the coach’s allegiance to a set of ethical standards and supported by their reflective practice, particularly in supervisory relationships. Once again, an AI model cannot be a member of a professional association, sign up to an ethical framework, or indeed be supervised. Safety can also be compromised in AIC because LLMs are prone to ‘hallucination’ (Athaluri et al., 2023), factual inaccuracy (Bontridder & Poullet, 2021) and bias (Ray, 2023).

We would also add that the most important question not yet addressed in the literature is whether what AI is currently capable of doing can even be recognised as ‘coaching’. A first, very limited indication in the study by Passmore and Tee (2023) is not very promising: ‘In reviewing the output, the expert, an ICF master coach assessor, judged the GPT-4’s output not to be “coaching”’ (p. 8). To address such a question, we would argue, more substantial conceptual work should be undertaken in order to specify the criteria according to which an activity can be named as coaching.

How to compare human coaching and AIC?

Coaching as an organisational intervention has recently been enjoying its status without real battles of differentiation with any other practices. Only in the early stages was the need to distinguish it from therapy actively discussed (e.g., Bachkirova & Baker, 2018). This concern has now receded because the role of the organisational context became an important differentiator, and there seem to be no ‘registered’ issues of ‘crossing the boundary’. AIC, however, creates a more serious threat, questioning not just a boundary but the very existence of human coaching.

This challenge brings coaching scholars back to the drawing board, having to face the fact that the nature of coaching is not that simple to describe and is often perceived as opaque. However, in order to defend coaching, it is important to know, at least in terms of the main principles, what the essential features of coaching are that allow it to be differentiated from other organisational interventions (Bachkirova, 2024). Otherwise, it is impossible to compare what human coaching offers with what is on offer from AIC, particularly in light of the assertions that AI can do anything better in terms of skills and the replication of simple models (Terblanche et al., 2022a).

To start with, it is important to recognise that the problem of the opaqueness of coaching has been long-standing (Bachkirova & Borrington, 2019), in spite of many studies demonstrating the effectiveness of organisational coaching (e.g., De Haan & Nilsson, 2023; Theeboom et al., 2013). There are also many studies that indicate what seems to make a difference for positive outcomes of coaching (e.g., Bozer & Jones, 2018). However, it is one thing to say what aspects of coaching engagements, when measured, correlate with the outcomes of coaching, but quite a different thing to identify what describes the nature of coaching in terms of its most essential and unique characteristics. These essential and unique elements might be abstract, elusive, sometimes controversial or even esoteric, and certainly not all of them are easily measurable. It is quite possible, though, that these characteristics, such as the role of the relationship or the self of the coach, make the main difference to the results of coaching engagements (e.g., Bachkirova, 2016; De Haan & Gannon, 2017).

To identify the main principles and essential characteristics of coaching, it is useful to establish first what coaching is not, i.e., training programmes, consultancy or individual reflection with the use of various written sources of information and prompts. For example, we could agree that coaching is not an individual diagnostic and training session in which the coach is just a source of information. Incidentally, if it were, then the coach might lose to AI. It is also not useful to see coaching as a set of coach’s behaviours, as is often required during accreditation processes; similarly, AIC can emulate observable linguistic behaviours. Furthermore, we now have significant evidence that professional coaching is much more than a mechanistic application of simple models with a predetermined set of questions (e.g., Cox et al., 2014; Theeboom et al., 2013), something that AIC devices currently emulate (Terblanche et al., 2022a). Organisational coaching, in particular, is a recognised intervention, often provided by highly educated professional coaches in ways that are theoretically sophisticated, ethical and methodologically adaptable for various clientele and assignments. Therefore, the task is to identify what keeps coaching as an intervention of choice in organisations by describing its specificity and uniqueness, without fear of being seen as ‘not sufficiently scientific’ and without being embarrassed by its opaqueness.

With such an attitude, we acknowledge that the expansion of coaching in organisations is the result of a major shift in management learning from prescribed, theoretical, supplier-led provision to a customised, contextualised, participative and experiential journey called ‘naturalistic learning’ (Kempster & Iszatt-White, 2012, p. 321). This has led some coaching authors to describe organisational coaching as ‘professional development through one-to-one conversation’ (De Haan et al., 2010, p. 607) or as ‘individually facilitated learning’ (Bachkirova, 2011, p. 7) in an organisational context. However, these ‘definitions’ are not sufficient for describing the full and unique nature of organisational coaching. This might require new concepts, deliberate analysis and keeping this conversation open for better conceptualisations. For the purposes of differentiation from AI, which looms large in current conversations, we offer a preliminary conceptual description of the essential characteristics of organisational coaching.

Methodologically, this description is based on a conceptual analysis we conducted of rigorous peer-reviewed theoretical publications and empirical studies that highlight significant features of organisational coaching. These publications include, for example: Athanasopoulou and Dopson (2018), Hurlow (2022), Bachkirova and Borrington (2019), Cavanagh (2016), Cox et al. (2023), De Haan and Nilsson (2023) and Bozer and Jones (2018). Our selection of essential elements of coaching from these publications was based on the quality of the argument, the empirical evidence and the consistency of recognition of the particular characteristics identified in these studies. At the same time, it is important to acknowledge the interpretative nature of such analysis, based on our own reflexive practice in coaching and supervising coaches. This analytical and reflexive approach is not intended to produce a complete and final set of criteria but to engage the coaching community in relationally reflexive practice (Hibbert et al., 2014) that values alternative views and constructs a new interdisciplinary and critical conversation about AIC and human coaching.

Overall, we suggest that the main value for clients is the luxury of being understood in their unique and complex set of circumstances, with consideration of their personality, values and attitudes, and the unique set of challenges they therefore face in work and life. The value of coaching conversations is determined by identifying together what is important for the client to look into, how trustworthy the client perceives the coach to be, and how well the two can be attuned to each other in their joint inquiry. Following this conceptualisation of organisational coaching, in the next section we describe in a more granular way what we believe to be the essential characteristics of human coaching in principle. We argue that these characteristics in combination should be seen as minimum criteria for recognising whether any engagement can be identified as coaching, including AI-facilitated engagements.

The nature of organisational coaching with consideration of AIC

The essential characteristics of organisational coaching proposed in this section could be seen as a starting point for discussion of what can be considered criteria for recognising an organisational intervention as coaching. We argue that in combination these criteria make coaching distinct and uniquely valuable for individual clients in an organisational context and, by extension, for the organisations that commission coaching:

  • Joint inquiry

  • Making sense of experience with focus on action

  • Value-based (purposeful and ethical)

  • Highly contextual

  • Relationship based on trust

  • Contracting-based

In the following subsections, we describe each of the proposed essential elements of organisational coaching with an argument for its inclusion, mentioning the specific literature that supports it. Each argument is followed by an analysis of the extent to which AIC can meet the proposed criterion. The latter analysis builds on the literature about LLMs described earlier, representing AI experts’ positions on the current and future capabilities and limitations of the relevant devices, including those designed to emulate coaching.

Joint inquiry

Seeing coaching as a joint inquiry is a key differentiator of coaching, recognised by all professional bodies and the literature; e.g., the very first feature of the ICF definition of coaching is that it is a partnership (ICF, 2023). We believe that the concept of joint inquiry is more precise than simply ‘partnership’ because it encompasses what this partnership is for and implies a dialogue as its main means, thus including both the end and the means of coaching engagements. The philosophy of pragmatism, which has been argued to be a theoretical foundation of coaching (Bachkirova & Borrington, 2019; Humphreys, 2023; Ostrowski & Potter, 2023), describes inquiry as ‘a controlled and directed transformation’ of a puzzling, indeterminate situation, which becomes transformed into a situation that enables the ‘best solution for now’ (Dewey, 1938, p. 72). This represents the reasons people usually engage in coaching: when their beliefs about reality and corresponding habits fail to guide them successfully to what they hope to achieve. They have a sense of doubt that needs to be overcome, and through inquiry they attempt to restore their system of beliefs in such a way as to provide warranted guidance for future action (Dewey, 1938; Peirce, 1878/1955).

Bachkirova and Borrington (2019) argued that any successful coaching engagement requires a state of mind in the client ready to recognise the need for inquiry and engagement with a coach. Without such a state of mind in the client, which they called ‘disequilibrium’ (Bachkirova & Borrington, 2019, p. 348), coaching falls short of the drive and energy required for productive work. The client determines the focus of inquiry by bringing in the initial disequilibrium, and both parties explore what could be seen as the core of this state. The coach looks at this situation as an opportunity not just to solve a particular problem, but also to extend the client’s overall capacity to make meaning and address other situations (Bachkirova & Borrington, 2019; Cox, 2013). The coach’s role is to contribute to the inquiry by recognising any emotional significance in the issue, participating in a better understanding of the situation and the core of the problem, and exploring potential actions and their consequences.

AI cannot be a partner in a joint inquiry because it lacks subjective experience and a context-based understanding of human predicaments, having no personal awareness of the world. The inquiry, therefore, cannot benefit from shared experiences and perspectives. AI can only imitate the empathic response to the emotional nature of human situations that is important for building mutual, moment-by-moment deep understanding. AI cannot grasp the cultural, historical and social significance of such situations. Although joint inquiry often involves creative thinking, brainstorming and even the development of new concepts, AI lacks such capabilities, as it only replicates pre-existing ideas. The quality of engagement with AI depends on the client’s ability to be already sufficiently articulate about their topic of inquiry. Without this, AI’s contributions are neither spontaneous nor flexible enough to resemble a meaningful dialogue. Joint inquiry, in addition to being rational, depends on subjective elements that can only be brought by humans, such as emotional resonance, creativity and contextual sensitivity.

Making sense of experience with the focus on action

The process of coaching consists of continuing interaction between clients and practitioners, based on subjective experiences, with constant feedback and adjustments being made in line with these experiences. Making sense of experience involves consideration of beliefs, expectations and perceptions of local contexts and the wider environment. As all these elements are entangled, coaching is useful for a deeper understanding of the complex dynamics between them when clients wish to address their challenges (Cavanagh, 2016; Cox et al., 2014; Lawrence, 2021). Furthermore, clients’ perception of challenges is also entangled with the multiplicity of their individual properties and circumstances, such as age, background, education, psychological characteristics and current beliefs. This is one of the reasons why coaching is useful – it offers an individualised approach to working with clients, appreciating what matters to them in their specific contexts and what they wish to accomplish. The job of the coach is to distil from their professional knowledge and personal experience what might be applicable and useful for each individual client in dealing with these matters (Ostrowski & Potter, 2023).

Another fundamental aspect of coaching in an organisational context is learning with a focus on action. Coaching is concerned not necessarily with one ‘true’ understanding of the situation and the best solution to a problem, but aims at the elimination of doubt so that the client can act (Clutterbuck & Spence, 2017; Deci & Ryan, 2000; Lawrence, 2021). At the same time, it is accepted that the process of coaching provides no guarantee that the identified course of action is the right one; further adjustments may well need to take place. This often brings the coaching inquiry back to experiences and further reflection in order to arrive at a different action (Bachkirova & Borrington, 2019).

The recognition that reflecting on and making sense of experiences and actions are cornerstone activities of coaching (Athanasopoulou & Dopson, 2018; Cox et al., 2014; Jackson, 2021) renders AIC less than adequate, because AI is unable to have experiences. AI learning is based on identifying linguistic patterns and algorithms rather than on true reflection, which implies the capacity for critically examining situations and one’s own decision-making, often with moral and ethical considerations. Reflexivity, in particular, involves awareness of one’s thoughts, feelings and actions, as well as questioning one’s assumptions and beliefs (Jackson, 2021), but AI systems do not have beliefs, assumptions or criticality of their own. AI cannot replicate the depth of introspection and adaptability required for the true reflexivity of a human coach.

Value-based (purposeful and ethical)

One essential, but often taken-for-granted, feature of coaching is the value that the coach places on the person in front of them. The quality of attention that a coach provides affirms the unique importance of the client as a human being and serves as a pre-condition for a real understanding of their situations and concerns (Cox, 2013; Kjellstrom & Stalne, 2017; Midgley, 1999). This value is not the result of scientific data; it comes first, as it should (Kjellstrom & Stalne, 2017). Coaches cannot be neutral in regard to this or any other value. If they are misled into believing that they should display complete neutrality, confusing this with ‘objectivity’, then they are likely to face serious relationship issues and internal conflicts (Fatien et al., 2022).

The fact that coaching is value-based demands that coaches acknowledge the purpose of their work and examine their actions from an ethical point of view. It implies the responsibility of coaches for their learning and further development in order to provide a quality service. To monitor this quality, they abide by codes of practice and ethics (e.g., ICF, 2023) and undertake supervision (Bachkirova et al., 2021). They negotiate with clients how their values could influence their interactions and own the degree of objectivity that is possible in their contributions, especially as far as complex ethical and moral dilemmas are concerned.

Although AIC devices can be designed to operate according to specific values and ethical principles, they only imitate feeling for, and valuing, the person they interact with. Their cold and purposeless objectivity is a poor substitute for human attention based on a genuine interest in the other human being. Without self-awareness and self-criticality, AI systems do not possess sensitivity in relation to the emotionally difficult issues of clients’ concerns and cannot identify their own biases. Lacking personhood, agency and conscience, they cannot engage in ethical reflexivity or identify the risks associated with their input. Consequently, they cannot take responsibility for potential harm, as they are not concerned with any duty of care (Mayhead, 2023) and do not carry liability like human coaches (Wright & O’Connor, 2021). It is also apposite to note that their designers are often far removed from immediate and real human connections and not involved in the ethical regulation of the coaching world.

Highly contextual

The role of coaching in generating insight into the specific context and nuances of organisational clients’ lives is critical in terms of the value of this intervention (Lawrence, 2021). ‘Take a step, or don’t take a step’ is a contextual question, particularly if one is on the brink of making a significant decision. The interplay between who an individual is in the world, and the world itself, is infinitely complex. Otherwise, there would exist an excellent manual and a set of hard and fast rules for any situation, and AIC could excel in guiding an individual through the accepted pathways. Indeed, context is so crucial to coaching that many approaches are rooted in it, such as various versions of systemic coaching (Hawkins & Turner, 2019; Lawrence, 2021).

To be able to fulfil their role in complex contexts, coaches are expected to deal with ambiguous and unclear contextual information using relevant knowledge, but also common sense and intuition. They should be sensitive, rapidly adjusting their input to the real-time needs of the client, e.g., in a crisis. Contextual situations often involve interpersonal conflicts, with the need to take account of multiple cultural and individual differences between participants (Fatien et al., 2022). It is clear that AIC devices do not possess the level of understanding, adaptability and emotional intelligence that would allow them to navigate the intricacies of context in complex coaching scenarios.

One might argue that some coaching approaches are less attentive to context, e.g., solution-focused coaching (Greene & Grant, 2003), where attention is concentrated on generating possibilities and solutions, as opposed to exploring the problem deeply. If we believe that AIC in this modality may be a good fit, we have to be sure that it is capable of a partnership in innovative creation – a tall order. Needless to say, the idea that it is unnecessary to spend time on understanding the background and potential causes of problems is in opposition to many other approaches or philosophies of coaching.

Relationship based on trust

The nature of coaching requires that the coach connect with clients on a personal level, creating relationships often described in terms such as rapport, emotional connection, trust and commitment (Baron & Morin, 2009; De Haan & Gannon, 2017; Western, 2012). Moreover, many coaching studies suggest that the relationship with a client is the main contributing factor to the results of the process, rather than the specific orientation and training of the practitioner (Baron & Morin, 2009; De Haan, Duckworth, Birch, & Jones, 2013; Myers & Bachkirova, 2020). Trust, in turn, is named the most important factor in creating quality relationships between the participating parties (Bachkirova, 2016; Cox, 2013; De Haan et al., 2013). Although the concept of trust is not well defined (Rotenberg, 2019), there are distinctive conditions that appear essential for a coach aiming to develop trust in coaching relationships: professionalism (Lane, 2017) and personal trustworthiness (Bachkirova, 2023).

The features of professionalism include the required knowledge and expertise, interpersonal skills, confidentiality and accountability in terms of the process. Looking at each of these elements, serious limitations of AIC become apparent. For example, whilst AI is impressive in terms of demonstrating regurgitated knowledge, it is also prone to ‘hallucination’ and misinformation (Athaluri et al., 2023). Interpersonal skills imply the ability to engage in fluid, nuanced and contextually sensitive interactions; although some AI devices use colloquial language, humour and compliments, these are just an emulation designed to create the appearance of rapport. Confidentiality is problematic with AI in general (Frank, 2023) and is a concern with AIC: we do not know where all the data goes, in what country it is processed, how long it is kept, who has access to it and what function it plays in AI learning, even in so-called ‘walled-garden’ iterations of AI. Accountability for the process is also limited, as there is no real understanding of the relationship between technical modifications and their linguistic outcomes (Frank, 2023; Gáti, 2023).

In terms of the personal trustworthiness of AI coaches, the situation is even less optimistic. To be personally trustworthy requires the capacity for emotional connection, understanding and empathy, as well as honesty and transparency (Bachkirova, 2023; Cox, 2013; De Haan et al., 2013). This is manifested in natural, unscripted interactions aimed at meeting the needs and concerns of the client, often responding to non-verbal cues, thus providing a sense of mutual understanding and acceptance (Bachkirova, 2016; De Haan & Gannon, 2017; Western, 2012). This is particularly important in conversations involving personal beliefs, values and aspirations. Human coaches can navigate these conversations with sensitivity, exploring complex topics in depth. AI can only simulate empathy, as it lacks emotional responsiveness, and provides instead generic and superficial responses. Trustworthiness also includes openness about nuanced and immediate observations, thoughts, feelings and so forth (Rosenbaum, 2019) that help to create shared meaning between the coach and client. The coach’s own ethical principles and values become part of these exchanges, even when disagreements arise. Such honesty and transparency have significance for a trusting relationship between humans (De Haan & Gannon, 2017; Midgley, 1999; Western, 2012) but have no meaning in communication with AI. The same holds for any idea of AI authenticity.

Contracting-based

Contracting in coaching is seen as the fundamental building block of the coaching relationship (Lee, Passmore, Peterson, & Freire, 2013). Not only does it attend to the building of the relationship, with agreements about the process and content of the coaching, but it also considers the potential stakeholders outside the immediate coaching relationship. Organisational sponsors of coaching are not part of the joint inquiry unit, but the relationship with them is an integral factor in creating a meaningful coaching engagement (Athanasopoulou & Dopson, 2018; Cox et al., 2014). For human coaching, the contract is the basis of the work: the place the parties go back to, the discussion they check in on, the way in which they judge whether the process is working. The crucial importance of contracting lies in the clarification of the expectations and responsibilities of the participants, even if many legal considerations are dealt with at the level of the sponsoring organisation (Lawrence, 2021; Wright & O’Connor, 2021).

In the most basic sense, AI is unable to enter into a contract, as any contract requires mutual understanding, intent and consideration, which AI systems lack by their own admission (https://pi.ai/). As AI systems cannot enter into a meaningful contract, they cannot be held accountable for their actions or inactions. In AIC, there are predefined protocols and algorithms determined by the design of the system, about which the client can only be informed, e.g., the types of questions the AI asks, the information it provides and the coaching strategies it displays. The designers also make efforts to align the systems with ethical principles and users’ expectations, but none of this is negotiated with individual clients. Feedback mechanisms are also very limited, as they are typically based on predefined questions that are of interest to the designers.

It is recognised that contracting and regular re-contracting in human coaching do not guarantee the quality of this service; therefore, coaches undertake supervision to monitor and improve their practice (Bachkirova et al., 2021). AIC, however, not only misses individualised, negotiated contracting at the start of coaching but cannot be supervised, as these systems do not possess the self-awareness, ethical judgment or emotional capacity required for meaningful supervision. This is a serious concern, as AI cannot know whether the coaching it provides does harm, because it lacks the emotional intelligence and contextual awareness required to assess the individual impact of its actions. These systems rely on external oversight, which is currently in the hands of organisations that are unregulated and not open to examining their processes.

Our verdict about the idea of ‘replacement’

We have argued so far that the services offered by AIC are not comparable to organisational coaching as an interpersonal activity. Most importantly, this comparison does not imply ‘not yet’ as far as technological advances are concerned: we believe that the essential features of professional organisational coaching cannot be replicated by AIC in principle. Even in the future, when, as some argue, AI will pass the Turing test by exhibiting intelligent behaviour indistinguishable from that of a human, these criteria could not be met. Although AI does offer new and interesting ways of learning for individuals in organisations, it cannot replicate the process and value of human coaching.

Based on this conclusion, our suggestion is to stop naming stand-alone AI-facilitated learning as coaching. We recognise that AI, by itself, could be useful for individual training and for providing information on request in relation to a goal already identified by the client. This could include various measurement instruments that would indicate the user’s progress. If such a service is to be offered to employees in organisations as a stand-alone intervention (and if ‘coaching’ in the name of these devices is important), it would not be a misrepresentation to call it something like ‘digitally assisted self-coaching’.

The term ‘self-coaching’ can mitigate to some extent the very serious and unique legal concerns for AI development (Wright & O’Connor, 2021). However, it is important to recognise that the concept of ‘self-coaching’ is also an area in need of further exploration and conceptual analysis. With ideas about the regulation of AI lagging behind technological advances, the questions that Wright and O’Connor (2021, p. 69) ask are crucially important:

Who owns, who may access, and for what purposes may data be used? To whom is confidentiality owed and in what ways might a coachee be open to manipulation? How will liability be determined where multiple stakeholders are involved in the creation of the new technologies? What biases might be inherent in any algorithms used and what discriminatory practices and decisions might this lead to?

A final consideration against the idea of replacing human coaching is that AI systems are not as economical and benign as often presented by their developers. These systems are developed with a view to profiting from them through an influx of investment, focusing mainly on the benefits and without full consideration of the potential negative impact on relevant parties; yet there are legitimate concerns about the responsibilities of businesses towards alignment between their purpose and their wider impact (Bachkirova, 2024; Mayer, 2021). For example, it is often not transparent to what extent the infrastructure of AI can harm the environment by, e.g., increasing the demand for raw materials, generating electronic waste and consuming large amounts of energy for data processing and storage, and water for cooling (De Vries, 2023).

Potential of AI for augmentation of human coaching

Arguing for the need to be sceptical towards the idea of replacing human coaching with AIC, we are not averse to the idea of using AI in the process of human coaching. Alongside working with a human coach, clients can benefit from the uniqueness of connection with a real person while also using AI for preparation and for enhancing their reflexivity. In fact, the right design of AI-assisted self-preparation and self-monitoring can make the human interaction more valuable.

It is important to clarify at this point that in the previous section we discussed the essential nature of organisational coaching in all its types and rejected the idea of replacing it with AI. When we consider the augmentation of specific observable processes of coaching, it is important to recognise that coaching is not homogeneous. There are at least four main schools of coaching: psychodynamic, cognitive–behavioural, humanistic and systemic, each of which has many sub-types as well as combinations (Bachkirova & Borrington, 2019; Cox et al., 2023; Hurlow, 2022). However, only cognitive–behavioural approaches, primarily based on simple models such as GROW (Whitmore, 2017) and PRACTICE (Palmer, 2007), have been considered for comparison with AIC (Graßmann & Schermuly, 2021). Therefore, in this section we consider the differences between more structured, model-based (cognitive–behavioural) coaching and other types of coaching based on elaborate theoretical positions prioritising different processes of coaching (Table 1). This comparative analysis follows from the previous discussion in this paper of the essential elements of coaching. It also builds on the literature describing well-recognised theory-based approaches to coaching (e.g., Cox et al., 2023; Palmer & Whybrow, 2019) and research into coaching processes; see Myers (2017) for a review of research on the coaching process. As before, we develop this framework as a starting point for debate.

Table 1. Potential usefulness of AI for augmenting coaching process elements in different types of coaching.

In order to explore how these two groups of approaches to coaching can benefit from augmentation by AI, we selected some observable elements of coaching engagements that are well recognised in the coaching literature (Athanasopoulou & Dopson, 2018; Bozer & Jones, 2018; Cavanagh, 2016; Cox et al., 2014; De Haan & Nilsson, 2023). Some of these elements are described by Graßmann and Schermuly (2021) according to the PRACTICE algorithm (pp. 11–12). Although their task was to consider whether AI can replicate these functions rather than augment them, we agree with some of their conclusions in relation to augmentation but disagree with others (Table 1).

In Table 1, we reiterate our conviction that AI can be useful for preparation for human coaching of any kind, but express our doubt about its role in problem identification and goal setting. We see the latter elements of coaching as the most important for identifying goals that reflect the contextual nuances of clients’ situations and their individuality, whereas simple model-based types of coaching do not recognise this level of complexity and tend to take initial goals for granted. Contrary to the proposition by Graßmann and Schermuly (2021) in relation to goals, we would argue that there is currently considerable evidence that many coaching assignments tend to start with one set of goals and assumptions but soon travel somewhere completely different (Bachkirova & Jackson, 2024; Clutterbuck & Spence, 2017). Some psychometric instruments and SMART checking structures facilitated by AI can be of use, although less likely beyond the model-based approaches.

As we discussed in the previous section, the role of contracting is difficult to overestimate because of the salience of mutual agreement, and it is therefore impossible to delegate to AI. However, in model-based coaching a predetermined AI-based protocol may suffice. For the joint inquiry process, collecting additional information between sessions with the use of AI might be a helpful element in both categories of coaching. In a similar way, AI can be useful for generating some options in action planning. We agree that monitoring the client’s progress by means of AI can be useful for any type of coaching. In terms of evaluation, we believe that only model-based coaching would be satisfied with predetermined questionnaires to make it meaningful for both parties. On the whole, it appears that AI, particularly with further sophistication, can add value to various types of coaching if used for augmentation.

Conclusions

Our task was to create a solid conceptual, but also pragmatic, basis for discussion about the role of AI in replacing and augmenting human coaching in organisations. Although other authors, e.g., Graßmann and Schermuly (2021), have made similar attempts, our approach was to reflect on the nature of human coaching in all its complexity and variability. We have based our analysis on a wider conceptualisation of coaching than that provided by the behaviour-based perspective (Bachkirova & Borrington, 2019; Cox et al., 2014; Hurlow, 2022; Western, 2012), which plays only a limited role in the current coaching landscape.

We have arrived at the conclusion that replacing human coaching with AIC is not a viable solution for democratising coaching, a conclusion that also includes some doubts about AIC’s economic or environmental benefits. Coaching became an intervention of choice in organisations, and continues to be so, because of its essentially human elements, which would be eroded in AIC. No amount of technical sophistication could change that. Instead of asking whether AI will entirely replace coaching, we suggest focussing on the potential for AI to enhance and augment some elements in different coaching genres.

Our analysis highlights implications for the various stakeholders of coaching, including coaches themselves, educators, buyers of coaching, professional bodies, AI developers and funding bodies in particular. We hope they will all be less seduced by the ‘beautiful idea’ of AIC and its related promises, and will not mislead those who are in need of real coaching. More substantially, although prompted by the rise of AIC, we consider our input into the debates about the nature of organisational coaching to be our main contribution. We believe that the recognition, discussion and further exploration of the essential characteristics of coaching would influence for the better how coaching is taught, practised, governed and researched.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Tatiana Bachkirova

Tatiana Bachkirova, PhD, is Professor of Coaching Psychology with current research interests in the development of people in organisations, with a wider focus on the ethics and philosophy of the individual development of adults.

Rob Kemp

Rob Kemp, DCM, is an experienced coaching practitioner and researcher with a current interest in organisational coaching, with a particular focus on the dynamics of individual change in organisations and the use of artificial intelligence.

References

  • Aladakatti, S. S., & Senthil Kumar, S. (2023). Exploring natural language processing techniques to extract semantics from unstructured dataset which will aid in effective semantic interlinking. International Journal of Modeling, Simulation, and Scientific Computing, 14(01), Article 2243004. https://doi.org/10.1142/S1793962322430048
  • Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4). https://doi.org/10.7759/cureus.37432
  • Athanasopoulou, A., & Dopson, S. (2018). A systematic review of executive coaching outcomes: Is this the journey or a destination that matters the most? The Leadership Quarterly, 29(1), 70–88. https://doi.org/10.1016/j.leaqua.2017.11.004
  • Bachkirova, T. (2011). Developmental coaching: Working with the self. McGraw Hill.
  • Bachkirova, T. (2016). The self of the coach: Conceptualization, issues, and opportunities for practitioner development. Consulting Psychology Journal: Practice and Research, 68(2), 143–156. https://doi.org/10.1037/cpb0000055
  • Bachkirova, T. (2023). Where ethical coaching starts: Foreword. In W.-A. Smith et al. (Eds.), The ethical coaches handbook (pp. xxxvi–xliii). Routledge.
  • Bachkirova, T. (2024). The purpose of organisational coaching: Time to explore and commit. International Journal of Evidence Based Coaching and Mentoring, 22(1), 214–233.
  • Bachkirova, T., & Baker, S. (2018). Revisiting the issue of boundaries between coaching and counselling. In S. Palmer, & A. Whybrow (Eds.), Handbook of coaching psychology (2nd ed.) (pp. 487–499). Routledge.
  • Bachkirova, T., & Borrington, S. (2019). Old wine in new bottles: Exploring pragmatism as a philosophical framework for the discipline of coaching. Academy of Management Learning and Education, 18(3), 337–360. https://doi.org/10.5465/amle.2017.0268
  • Bachkirova, T., & Borrington, S. (2020). Beautiful ideas that can make us ill: Implications for coaching. Philosophy of Coaching: An International Journal, 5(1), 9–30. https://doi.org/10.22316/poc/05.1.03
  • Bachkirova, T., & Jackson, P. (2024). What do leaders really want to learn in a workplace? A study of the shifting agendas of leadership coaching. Leadership. https://doi.org/10.1177/17427150241238830
  • Bachkirova, T., Jackson, P., & Clutterbuck, D. (Eds.). (2021). Coaching & mentoring supervision: Theory and practice (2nd ed.). McGraw Hill.
  • Bachkirova, T., & Kauffman, C. (2009). The blind men and the elephant: Using criteria of universality and uniqueness in evaluating our attempts to define coaching. Coaching: An International Journal of Theory, Research and Practice, 2(2), 95–105. https://doi.org/10.1080/17521880903102381
  • Baron, L., & Morin, L. (2009). The coach-coachee relationship in executive coaching: A field study. Human Resource Development Quarterly, 20(1), 85–106.
  • Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT ‘21 (pp. 610–623). New York, NY, USA: Association for Computing Machinery.
  • Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y
  • Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, Article e32. https://doi.org/10.1017/dap.2021.20
  • Bowman, S. R. (2023). Eight things to know about large language models. arXiv:2304.00612.
  • Bozer, G., & Jones, R. (2018). Understanding the factors that determine workplace coaching effectiveness: A systematic literature review. European Journal of Work and Organizational Psychology, 27(3), 1–20. https://doi.org/10.1080/1359432X.2018.1446946
  • Cavanagh, M. J. (2016). The coaching engagement in the twenty-first century: New paradigms for complex times. In S. David, D. Clutterbuck, & D. Megginson (Eds.), Beyond goals (pp. 151–184). Routledge.
  • Clark, A. (1998). Being there: Putting brain, body, and world together again. MIT Press.
  • Clutterbuck, D. (2022). The future of AI in coaching. In S. Greif, H. Möller, W. Scholl, J. Passmore, & F. Müller (Eds.), International handbook of evidence-based coaching: Theory, research and practice (pp. 369–379). Springer International Publishing.
  • Clutterbuck, D., & Spence, G. (2017). Working with goals in coaching. In T. Bachkirova, G. Spence, & D. Drake (Eds.), The SAGE handbook of coaching (pp. 218–237). Sage.
  • Cox, E. (2013). Coaching understood: A pragmatic inquiry into the coaching process. Sage.
  • Cox, E., Bachkirova, T., & Clutterbuck, D. (2014). Theoretical traditions and coaching genres: Mapping the territory. Advances in Developing Human Resources, 16(2), 127–138. https://doi.org/10.1177/1523422313520472
  • Cox, E., Bachkirova, T., & Clutterbuck, D. (Eds.). (2023). The complete handbook of coaching (4th ed.). Sage.
  • Deci, E., & Ryan, R. (2000). The ‘what’ and ‘why’ of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. https://doi.org/10.1207/S15327965PLI1104_01
  • De Haan, E., Bertie, C., Day, A., & Sills, C. (2010). Clients’ critical moments of coaching: Toward a “client model” of executive coaching. Academy of Management Learning & Education, 9(4), 607–621.
  • De Haan, E., Duckworth, A., Birch, D., & Jones, C. (2013). Executive coaching outcome research: The contribution of common factors such as relationship, personality match and self-efficacy. Consulting Psychology Journal: Practice and Research, 65(1), 40–57.
  • De Haan, E., & Gannon, J. (2017). The coaching relationship. In T. Bachkirova, G. Spence, & D. Drake (Eds.), The SAGE handbook of coaching (pp. 195–217). Sage.
  • De Haan, E., & Nilsson, V. O. (2023). What can we know about the effectiveness of coaching? A meta-analysis based only on randomized controlled trials. Academy of Management Learning & Education, 22(4), 641–661. https://doi.org/10.5465/amle.2022.0107
  • De Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004
  • Dewey, J. (1938). Logic: The theory of inquiry. Henry Holt & Company.
  • Dwivedi, A., Kaur, K., Choudhary, C., Singla, K., & Barnwal, P. (2023). Should AI technologies replace the human jobs? In 2023 2nd International Conference for Innovation in Technology (INOCON), Bangalore, India. https://doi.org/10.1109/INOCON57975.2023.10101202
  • Einola, K., Khoreva, V., & Tienari, J. (2023). A colleague named Max: A critical inquiry into affects when an anthropomorphised AI ro(bot) enters the workplace. Human Relations. https://doi.org/10.1177/00187267231206328
  • Fatien, P., Louis, D., & Islam, G. (2022). Neutral in-tensions: Navigating neutrality in coaching. Journal of Management Studies, 60(6), 1485–1520. https://doi.org/10.1111/joms.12883
  • Frank, M. (2023). Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology, 2(8), 451–452. https://doi.org/10.1038/s44159-023-00211-x
  • Garvey, B., Stokes, P., & Megginson, D. (2010). Coaching and mentoring: Theory and practice. Sage Publications.
  • Gáti, D. (2023). Theorizing mathematical narrative through machine learning. Journal of Narrative Theory, 53(1), 139–165. https://doi.org/10.1353/jnt.2023.0003
  • Grant, A., Cavanagh, M. J., & O’Connor, S. A. (2023). The solution-focused approach to coaching. In E. Cox, T. Bachkirova, & D. Clutterbuck (Eds.), The complete handbook of coaching (pp. 53–68). Sage.
  • Graßmann, C., & Schermuly, C. C. (2021). Coaching with artificial intelligence: Concepts and capabilities. Human Resource Development Review, 20(1), 106–126. https://doi.org/10.1177/1534484320982891
  • Greene, J., & Grant, A. M. (2003). Solution-focused coaching: Managing people in a complex world. Pearson Education.
  • Hawkins, P., & Turner, E. (2019). Systemic coaching: Delivering value beyond the individual. Routledge.
  • Hibbert, P., Sillince, J., Diefenbach, T., & Cunliffe, A. (2014). Relationally reflexive practice: A generative approach to theory development in qualitative research. Organizational Research Methods, 17(3), 278–298. https://doi.org/10.1177/1094428114524829
  • Hsieh, C. Y., Li, C.-L., Yeh, C.-K., Nakhost, H., Fujii, Y., Ratner, A., Krishna, R., Lee, C.-Y., & Pfister, T. (2023). Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes. arXiv preprint arXiv:2305.02301.
  • Humphreys, J. (2023). Coaching from a place of grounded uncertainty: Richard Rorty's neo-pragmatism and the ICF's core competency model. Philosophy of Coaching: An International Journal, 8(2), 4–16. https://doi.org/10.22316/poc/08.2.02
  • Hurlow, S. (2022). Revisiting the relationship between coaching and learning: Problems and possibilities. Academy of Management Learning & Education, 21(1), 121–138. https://doi.org/10.5465/amle.2019.0345
  • ICF (International Coach Federation). (2023). Core competences. ICF. https://coachingfederation.org/credentials-and-standards/core-competences.
  • Jackson, P. (2021). Supervision for enhancing reflexivity. In T. Bachkirova, P. Jackson, & D. Clutterbuck (Eds.), Coaching and mentoring supervision: Theory and practice (pp. 28–39). McGraw Hill.
  • Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87–99. https://doi.org/10.1016/j.bushor.2022.03.002
  • Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337–359. https://doi.org/10.1177/00187208211013988
  • Kempster, S., & Iszatt-White, M. (2012). Towards co-constructed coaching: Exploring the integration of coaching and co-constructed autoethnography in leadership development. Management Learning, 44(4), 319–336. https://doi.org/10.1177/1350507612449959
  • Kjellström, S., & Stålne, K. (2017). Adult development as a lens: Application of adult development theories in research. Behavioral Development Bulletin, 22(2), 266–278. https://doi.org/10.1037/bdb0000053
  • Knox, B., Christoffersen, P., Leggitt, K., Woodruff, Z., & Haber, M. H. (2023). Justice, vulnerable populations, and the use of conversational AI in psychotherapy. The American Journal of Bioethics, 23(5), 48–50. https://doi.org/10.1080/15265161.2023.2191040
  • Krosnick, R., & Oney, S. (2023). Promises and pitfalls of using LLMs for scraping web UIs. CHI '23 workshop, April 23, Hamburg, Germany.
  • Lane, D. (2017). Trends in development of coaches (education and training): Is it valid, is it rigorous and is it relevant? In T. Bachkirova, G. Spence, & D. Drake (Eds.), The SAGE handbook of coaching (pp. 647–661). Sage.
  • Lawrence, P. (2021). Coaching systemically: Five ways of thinking about systems. Routledge.
  • Lee, R. (2013). The role of contracting in coaching: Balancing individual client and organizational issues. In J. Passmore, D. B. Peterson, & T. Freire (Eds.), The Wiley-Blackwell handbook of the psychology of coaching and mentoring (pp. 40–57). Wiley-Blackwell.
  • Mayer, C. (2021). The future of the corporation and the economics of purpose. Journal of Management Studies, 58(3), 887–901. https://doi.org/10.1111/joms.12660
  • Mayhead, B. (2023). The systemic nature of duty of care in coaching: Coach, client, customer and beyond. International Journal of Evidence Based Coaching and Mentoring, S17, 18–31.
  • Midgley, M. (1999). Being scientific about our selves. In S. Gallagher, & J. Shear (Eds.), Models of the self (pp. 467–482). Imprint Academic.
  • Moore, R. C., & Lewis, W. (2010). Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers (pp. 220–224).
  • Myers, A. (2017). Researching the coaching process. In T. Bachkirova, G. Spence, & D. Drake (Eds.), The SAGE handbook of coaching (pp. 589–609). Sage.
  • Myers, A., & Bachkirova, T. (2020). The Rashomon effect in the perception of coaching sessions and what this means for the evaluation of the quality of coaching sessions. Coaching: An International Journal of Theory, Research and Practice, 13(1), 92–105. https://doi.org/10.1080/17521882.2019.1636840
  • Ostrowski, E., & Potter, P. (2023). A call for clarity and pragmatism in coach education. International Coaching Psychology Review, 18(2), 96–107. https://doi.org/10.53841/bpsicpr.2023.18.2.96
  • Palmer, S. (2007). PRACTICE: A model suitable for coaching, counselling, psychotherapy and stress management. The Coaching Psychologist, 3(2), 71–77. https://doi.org/10.53841/bpstcp.2007.3.2.71
  • Palmer, S., & Whybrow, A. (Eds.). (2019). Handbook of coaching psychology (2nd ed.). Routledge.
  • Passmore, J., & Tee, D. (2023). The library of Babel: Assessing the powers of artificial intelligence in knowledge synthesis, learning and development, and coaching. Journal of Work-Applied Management, 16(1), 4–18. https://doi.org/10.1108/JWAM-06-2023-0057
  • Passmore, J., & Woodward, W. (2023). Coaching education: Wake up to the new digital and AI coaching revolution! International Coaching Psychology Review, 18(1), 58–72. https://doi.org/10.53841/bpsicpr.2023.18.1.58
  • Peirce, C. S. (1878/1955). How to make our ideas clear. In J. Buchler (Ed.), Philosophical writings of Peirce (pp. 23–42). Dover.
  • Rahman, N., & Santacana, E. (2023). Beyond fair use: Legal risk evaluation for training LLMs on copyrighted text. Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii.
  • Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121–154. https://doi.org/10.1016/j.iotcps.2023.04.003
  • Rotenberg, K. J. (2019). The psychology of interpersonal trust: Theory and research. Taylor & Francis.
  • Schemmer, M., Hemmer, P., Nitsche, M., Kühl, N., & Vössing, M. (2022, July). A meta-analysis of the utility of explainable artificial intelligence in human-AI decision-making. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 617–626).
  • Sedlakova, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? The American Journal of Bioethics, 23(5), 4–13. https://doi.org/10.1080/15265161.2022.2048739
  • Swartz, H. A. (2023). Artificial intelligence (AI) psychotherapy: Coming soon to a consultation room near you? American Journal of Psychotherapy, 76(2), 55–56. https://doi.org/10.1176/appi.psychotherapy.20230018
  • Terblanche, N. (2020). A design framework to create artificial intelligence coaches. International Journal of Evidence Based Coaching and Mentoring, 18(2), 152–165.
  • Terblanche, N., Molyn, J., De Haan, E., & Nilsson, V. O. (2022a). Comparing artificial intelligence and human coaching goal attainment efficacy. PLOS ONE, 17(6), 1–17. https://doi.org/10.1371/journal.pone.0270255
  • Terblanche, N., Molyn, J., De Haan, E., & Nilsson, V. O. (2022b). Coaching at scale: Investigating the efficacy of artificial intelligence coaching. International Journal of Evidence Based Coaching and Mentoring, 20(2), 20–36.
  • Terblanche, N., van Heerden, M., & Hunt, R. (2024). The influence of an artificial intelligence chatbot assistant on the human coach-client working alliance. Coaching: An International Journal of Theory, Research and Practice, 1–18. https://doi.org/10.1080/17521882.2024.2304792
  • Theeboom, T., Beersma, B., & van Vianen, A. (2013). Does coaching work? A meta-analysis on the effects of coaching on individual level outcomes in an organizational context. The Journal of Positive Psychology, 9(1), 1–18. https://doi.org/10.1080/17439760.2013.837499
  • Toy, T. (2023). Transparency in AI. AI & Society, 1–11.
  • Western, S. (2012). Coaching and mentoring: A critical text. Sage.
  • Whitmore, J. (2017). Coaching for performance (5th ed.). John Murray Press.
  • Wright, A., & O’Connor, S. (2021). Supervision for working legally. In T. Bachkirova, P. Jackson, & D. Clutterbuck (Eds.), Coaching & mentoring supervision: Theory and practice (pp. 61–72). McGraw Hill.