
Designing human-AI systems for complex settings: ideas from distributed, joint, and self-organising perspectives of sociotechnical systems and cognitive work analysis

Pages 1669-1694 | Received 10 May 2023, Accepted 06 Nov 2023, Published online: 06 Dec 2023

Abstract

Real-world events like the COVID-19 pandemic and wildfires in Australia, Europe, and America remind us that the demands of complex operational settings are met by multiple, distributed teams interwoven with a large array of artefacts and networked technologies, including automation. Yet, current models of human-automation interaction, including those intended for human-machine teaming or collaboration, tend to be dyadic in nature, assuming individual humans interacting with individual machines. Given the opportunities and challenges of emerging artificial intelligence (AI) technologies, and the growing interest of many organisations in utilising these technologies in complex operations, we suggest turning to contemporary perspectives of sociotechnical systems for a way forward. We show how ideas of distributed cognition, joint cognitive systems, and self-organisation lead to specific concepts for designing human-AI systems, and propose that design frameworks informed by contemporary views of complex work performance are needed. We discuss cognitive work analysis as an example.

Practitioner Summary

Emerging developments in AI will pose challenges for the design of human-machine systems. Contemporary perspectives of sociotechnical systems, namely distributed cognition, joint cognitive systems, and self-organisation, have design implications that are not accommodated by traditional methods. Cognitive work analysis may provide a way forward.

Abbreviation: AI: Artificial intelligence.

Introduction

Emerging artificial intelligence (AI) technologies will necessarily change the nature of human-machine systems and introduce new considerations that were far less prevalent in traditional human-automation interactions, especially in complex operations. Forty years have passed since Lisanne Bainbridge’s (Citation1983) seminal paper on the ironies of automation, which this special issue of Ergonomics celebrates, and there is no doubt that a large body of research on human-automation interaction has made significant inroads into our understanding of the functioning of such systems and the risks, opportunities, and challenges they present. However, the lack of a human-centric view in design continues to be lamented, with the implication that many technological mishaps could be avoided by adopting such a perspective.

In this paper, we observe that standard models of human-automation interaction, which are widely regarded as human-centric, tend to be dyadic in nature, focusing attention on a single human and a single machine. In addition, they direct attention to narrowly defined tasks, disregarding the physical and social reality in which those tasks are embedded; and gloss over requirements for communication, collaboration, and adaptive problem-solving. These models do not account for contemporary views of complex work, and may be just as problematic as techno-centric views of systems design. Moreover, given the rapidly growing capabilities of AI-infused machines in an expanding range of tasks, as well as tasks of increasing complexity, these problems may be exacerbated.

In this paper, we contemplate a way forward. We consider the properties of complex work settings and emerging AI technologies to assess their significance for the design of human-AI systems. We follow these observations with an exploration of the boundaries of standard human-automation interaction design methods in relation to these properties and, given their limits, we turn to three contemporary perspectives of sociotechnical systems for inspiration, specifically distributed cognition, joint cognitive systems, and self-organisation. Although these concepts were developed some time ago, they continue to have relevance and shape thinking in the field. Moreover, compared with standard models, they provide a radically different way of thinking about human-machine systems. We show how the key themes of these perspectives lead to distinct, though overlapping, implications for the design of human-AI systems and, as a starting point for discussion, we consider cognitive work analysis as an example of a design framework that can integrate these contemporary views of complex work performance.

Before proceeding, we note that there is a burgeoning literature on many important topics relating to human use of and interaction with modern AI technologies. This paper does not address all of these topics. In particular, we do not directly address broader debates on the ethical and responsible development and use of AI or whether or not AI technologies should be considered as team members (NASEM Citation2022; Shneiderman Citation2022). Although these are very important topics, this paper focuses on the nature of work and its organisation in complex operational settings, and how ideas from three perspectives of sociotechnical systems and cognitive work analysis can shape the design of human-AI systems.

We also note that this paper does not seek to make the case in general that human factors, human-computer interaction, cognitive systems engineering, or cognitive work analysis provide useful approaches for designing human-AI systems, as that has already been demonstrated (e.g. Endsley Citation2017; Hancock Citation2017; Janssen et al. Citation2019; NASEM Citation2022; Roth et al. Citation2019). In particular, the value of cognitive work analysis for designing systems with human and machine elements has long been established (Rasmussen, Pejtersen, and Goodstein Citation1994; Vicente Citation1999) and its significance for designing systems incorporating automation, and more specifically AI technologies, has already been recognised by many authors (e.g. Bisantz and Vicente Citation1994; Brady and Naikar Citation2022; Burns Citation2018; Dikmen Citation2022; Ernst et al. Citation2019; Li and Burns Citation2017; Mazaeva and Bisantz Citation2007; Naikar Citation2018; Naikar et al. Citation2021; Pritchett, Kim, and Feigh Citation2014; Roth et al. Citation2019; Salmon et al. Citation2023). Rather, this paper considers the value of cognitive work analysis as an integrating framework for the key ideas of distributed cognition, joint cognitive systems, and self-organisation, particularly in relation to the design of human-AI systems.

Complex settings

Many real-world settings are more complex than the kinds of environments generally utilised in laboratory studies of human-automation interaction, which constitute a large, and important, part of the research base (Janssen et al. Citation2019). Among other factors, complex operational settings are more dynamic, less structured or controlled, and have greater potential for unpredictable events. For example, military and emergency management operations may be conducted in areas where map data is poor or non-existent, satellite-based communications are disrupted, and up-to-date information about current and future conditions such as weather patterns, terrain conditions, or adversaries’ movements and intentions cannot be easily ascertained.

One framework for conceptualising domain complexity, which appears to resonate with many operational experts including military and business practitioners, distinguishes the types of situations that workers or decision makers may face based on the clarity of the cause-and-effect relationships (Snowden and Boone Citation2007). In more ordered circumstances, defined by simple and complicated situations, cause-and-effect relationships are perceptible or discoverable. In less ordered circumstances, or complex and chaotic situations, cause-and-effect relationships are difficult or impossible to determine. Different problem-solving strategies are required in different situations, and workers in complex operational settings may be faced with any of these kinds of circumstances.

Flach (Citation2012), following Perrow (Citation1984), presents another framework for conceptualising domain complexity in which he emphasises the dimensionality and interdependence of the problem space. Dimensionality refers to the number of variables that define the problem space, whereas interdependence refers to the nature of the interactions between the variables. As the number of variables increases, the set of possibilities for the system state increases, raising the complexity of the problem. Further, when the interdependence is high, the behaviour of any variable may change as a function of the behaviour or state of the other variables, which also raises the complexity of the problem. Therefore, systems with higher dimensionality and interdependence, such as military operations, are regarded as having more complex problem spaces than those with lower dimensionality and interdependence, such as assembly-line production (Flach Citation2012). It may be said that cause-and-effect relationships are more challenging to establish with increasing complexity because of the dimensionality and interdependence of the problem.
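
As a toy numerical illustration of these ideas (our own sketch, not part of Flach’s or Perrow’s frameworks), the following Python fragment shows how quickly the space of possible system states grows with dimensionality, which is one reason cause-and-effect relationships become harder to establish:

# Toy illustration: state-space growth with dimensionality.
# With n variables, each taking k discrete values, there are k ** n distinct
# system states, so each added variable multiplies the possibilities.

def state_space_size(n_variables: int, values_per_variable: int = 2) -> int:
    """Number of distinct states for n discrete variables."""
    return values_per_variable ** n_variables

for n in (2, 5, 10, 20):
    print(f"{n:>2} binary variables -> {state_space_size(n):,} possible states")

# Interdependence compounds the problem: when variables are coupled, each
# variable's behaviour must be reasoned about over this joint space rather
# than one variable at a time.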

The operational settings with which we are concerned, aside from having higher levels of dimensionality and interdependence, share a number of specific characteristics. These settings are generally fast-paced, placing time pressure on workers, and they have shifting rather than stable conditions, so that workers’ goals are evolving (Klein Citation1989). Information about the current circumstances, or variables, is often imperfect or incomplete (Vicente Citation1999), which creates uncertainty in the system. In addition, the future direction of events, or system states, is often unpredictable (Perrow Citation1984; Rasmussen Citation1969), whether due to the number of variables, limited information about the variables, or their interdependence. We have therefore previously described these settings as having high levels of instability, uncertainty, and unpredictability (Naikar and Elix Citation2021). Notably, in competitive settings, such conditions may be fostered deliberately by creative adversaries. Moreover, workers in these systems are tasked with making high-risk, high-stakes decisions, and responses are needed at the full spectrum of time scales, requiring both speed and endurance.

Finally, complex operational settings are highly social and technological. In particular, coordination is required not just between individuals, but also among multiple teams, working towards common purposes and goals. The networked capabilities of computers mean that team members may be dispersed geographically, and operations may be executed over long time periods, so that team members are also distributed temporally. Moreover, the constant changes in the circumstances, which may be largely unpredictable, require adaptation. Field studies show that team members adapt their tools, rules, and routines (Bigley and Roberts Citation2001) as well as their organisational structures (Rochlin, La Porte, and Roberts Citation1987) to the demands of the unfolding circumstances. These settings require a view of systems that extends beyond individuals, dyads, and co-located teams to distributed groups and how they coordinate, collaborate, and engage in adaptive problem-solving.

Emerging AI technologies

Driven by advances in computational power, availability of large data sources, and significant developments in machine learning techniques, the last decade has seen rapid advances in the ability of machines to perform a range of challenging activities (Moy et al. Citation2020). Studies have shown that deep learning techniques can outperform more traditional automation approaches in a wide variety of tasks (Shao et al. Citation2022). In addition, deep learning techniques can perform at, or close to, human levels on a number of constrained tasks in fields as diverse as image recognition (He et al. Citation2015), image generation (Elasri et al. Citation2022), natural language processing (Brown et al. Citation2020; Khurana et al. Citation2023; Zhao Citation2023), and decision making in games (Brown and Sandholm Citation2019; Vinyals et al. Citation2019). In some domains, such as game playing (Silver et al. Citation2017) and medical diagnosis (McKinney et al. Citation2020), deep learning methods can outperform humans. Moreover, in general, when tasks are significantly ‘time-limited’ or ‘data-large’, deep learning techniques can easily outperform humans through their speed and data processing capabilities. While this rapid evolution in machine capability has led to many new possibilities for automation, the use of AI technologies within complex environments still presents significant challenges, as well as opportunities.

At their foundation, deep learning methods involve machines learning statistical patterns in training data (Goodfellow, Bengio, and Courville Citation2016), with their success contingent on the ability of machines to apply these patterns to future contexts. In complex settings, which are characterised by shifting rather than stable conditions, distribution shift, where a model is used on data with fundamentally different statistical characteristics to the training data, is known to pose ‘a significant robustness challenge’ (NeurIPS 2022 Workshop on Distribution Shifts - Connecting Methods and Applications Citation2022). Moreover, current deep learning techniques require vast amounts of relevant data to learn subtle cause-and-effect relationships (Lake et al. Citation2017). In complex domains, cause-and-effect relationships can depend on infrequently observed environment configurations, making the process of training machines to handle these parameter regimes a major challenge.
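
As a minimal sketch of the distribution-shift problem (a toy example with synthetic data, not drawn from the cited works), a classifier fitted under one operating regime can degrade sharply when the data-generating conditions move:

# Toy illustration of distribution shift (synthetic data, for intuition only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two-class Gaussian data; `shift` moves both class means away from the training regime."""
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500, shift=0.0)   # conditions seen during training
X_same, y_same = make_data(500, shift=0.0)     # same regime at deployment
X_shift, y_shift = make_data(500, shift=2.0)   # shifted regime (new conditions)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, same distribution:   ", model.score(X_same, y_same))
print("accuracy, shifted distribution:", model.score(X_shift, y_shift))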

Nevertheless, despite these and other limitations, deep learning approaches also provide new opportunities that were not possible with more traditional automation techniques. For instance, the ability of deep learning systems to learn from raw data introduces the possibility of automated systems that can learn independently, or without significant human involvement, to respond to new or unexpected situations (van de Ven, Tuytelaars, and Tolias Citation2022). In addition, recent advances in self-supervised training on large datasets have seen the development of increasingly general agents, particularly in the space of natural language processing, that are able to perform a much wider range of tasks with little or no additional supervised training (OpenAI Citation2023). While still in their infancy, these machines have shown remarkable performance on newly introduced tasks on which they were not explicitly trained—so-called zero-shot performance—and even better performance on tasks where they are provided with a small number of examples using prompt engineering or few-shot evaluation (Brown et al. Citation2020; OpenAI Citation2023). These models show potential strengths in responding to unexpected or unforeseen situations common to complex environments. In addition, the end-to-end nature of modern deep learning provides opportunities for machines to demonstrate novel approaches to problems, and therefore to provide new and complementary perspectives to those developed by humans (Sadler and Regan Citation2019).
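
A minimal sketch of the difference between zero-shot and few-shot use of such models is given below; the generate function is a hypothetical placeholder for whatever model interface is available, not a reference to any particular library:

# Sketch of zero-shot vs. few-shot prompting. `generate` is a hypothetical
# stand-in for a call to a large language model, not a real API.

def generate(prompt: str) -> str:
    """Placeholder for a text-generation call; connect a model of your choice."""
    raise NotImplementedError

task = "Classify the sentiment of the sentence as positive or negative."
sentence = "The replacement pump failed within a week."

# Zero-shot: the model sees only the task description and the input.
zero_shot_prompt = f"{task}\nSentence: {sentence}\nSentiment:"

# Few-shot: a handful of worked examples is prepended to steer the model,
# with no additional training of its parameters.
examples = [
    ("The crew handled the diversion calmly and professionally.", "positive"),
    ("The display froze at the worst possible moment.", "negative"),
]
few_shot_prompt = task + "\n" + "".join(
    f"Sentence: {s}\nSentiment: {label}\n" for s, label in examples
) + f"Sentence: {sentence}\nSentiment:"

# In use: generate(zero_shot_prompt) or generate(few_shot_prompt)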

In summary, rapid developments in AI are expanding the range of tasks that machines can contribute to, as well as improving machine performance on those tasks. Consequently, many organisations are exploring opportunities for deploying larger numbers of increasingly capable machines into complex settings. That being the case, the need to identify suitable design frameworks for emerging human-AI systems is becoming increasingly pressing. Notably, Gary Klein (cited in Colmer Citation2021) argues that striving for technologies that are fully safe, reliable, and trustworthy reflects an overly sclerotic, rigid, and timid approach to design that tolerates compromises in performance. In this paper, we suggest that a principled approach to the design of human-machine systems is required that, similar to how high-reliability teams are organised, is able to guide or constrain system behaviour, even when in unanticipated states, to achieve high levels of resilience. In the following sections, we build on this idea.

Design implications

The properties of complex environments and emerging AI technologies create special challenges for the design of human-AI systems. To negotiate these challenges, we propose that human-AI systems must progressively meet the requirements of (1) contextualisation and (2) collaboration, which together define the need for self-organisation.

First, it is clear that work in complex settings is necessarily context-sensitive (Flach Citation2015; Suchman Citation1987). Actions and decisions are taken in view of the local unfolding situation, which is highly uncertain and unpredictable. Although actors may make elaborate plans, with well-defined goals, roles and responsibilities, and courses of action, which play an essential role in how the work is realised, these plans are likely to be altered, or even discarded, because of unanticipated nuances in the situation or novel events. The state of the situation is not always obvious, requiring adaptive problem-solving, and the solutions emerge in the circumstances.

This need for contextualisation, combined with the features of emerging AI technologies, suggests that the organisation of work in human-AI systems will likely need to become increasingly collaborative in nature, rather than supervisory. By referring to collaborative organisational forms, we are not necessarily according human-like team qualities to AI technologies or equating machines with humans, similar to the position taken in a recent overview of the construct of human-AI teaming (NASEM Citation2022). Instead, we are emphasising that, rather than structures in which humans are simply expected to monitor machines and intervene when problems occur, structures that promote collective problem-solving will be needed.

This type of social organisation has been observed in human teams in a variety of domains, including in healthcare (Bogdanovic et al. Citation2015), emergency management (Bigley and Roberts Citation2001), transport command and control (Luff and Heath Citation2000), and military operations (Rochlin, La Porte, and Roberts Citation1987). These field studies show that people tend to step in and out of each other’s roles as needed, as long as they have the capacity to do the work, even if they have no specific training or experience. This means that people remain sensitive to each other’s work, even while apparently focused on their own work (Luff and Heath Citation2000), which allows them to provide assistance, take over jobs, and even assume a leadership role if necessary. Such collaborative activity in the team is needed in continually unfolding situations with surprises and unforeseen events that have not been—and cannot be—anticipated by analysts, designers, or workers. For similar reasons, collaborative processes in human-automation interactions have been argued, and found, to be necessary in complex operational settings characterised by change, surprise, and unforeseen events (Christoffersen and Woods Citation2002; Klein et al. Citation2004; Roth et al. Citation2018; Woods et al. Citation2004).

As AI-infused machines continue to surpass humans on some well-defined tasks and perform acceptably on an increasing range of tasks, as outlined in the preceding section, the work organisation of human-AI systems in complex environments may need to become progressively more collaborative to preserve or enhance the resilience of the work system. Specifically, as people and machines develop overlapping capabilities, tasks may be shared among these actors. That is, tasks may be performed concurrently by people and machines or shift fluidly between them, depending on the circumstances and the local advantage of each actor. Moreover, based on the expectation that people will always be able to know, see, hear, or do things about the evolving situation that the machines cannot and vice versa, by virtue of their distinct functional capacities and processes, their collective activity will be needed on most tasks to achieve the best performance.

Taken together, the requirements of contextualisation and collaboration suggest that self-organising human-AI systems may become increasingly essential. In complex settings, external intervention, centralised coordination, and a priori planning rarely offer viable strategies for organising work. Preparing for every single contingency is infeasible and, as we have already noted, many events cannot be predicted by analysts, designers, or workers. Moreover, the workspace has too many distributed elements, time is too limited, and there is too much ambiguity in the current and future circumstances to permit effective intervention by external parties or coordination by a centralised authority. Instead, human-AI systems must be self-organising—or capable of spontaneously adapting their collective performance to fit the demands of the evolving circumstances. The concept of self-organisation is discussed in more depth later in this paper.

Human-automation interaction models

Research on human-automation interaction has a long history, and considerable understanding of human-machine systems has been gained from these studies. While this knowledge base will continue to inform approaches to the design of human-AI systems, we review current models of human-automation interaction, specifically in relation to the requirements of contextualisation and collaboration, and identify some important gaps.

An early model of human-automation interaction is the Men-are-better-at—Machines-are-better-at (MABA-MABA) classification scheme (Fitts et al. Citation1951), which is used to apportion work to humans and machines based on their respective strengths and limitations. A number of problems with this approach have been identified (Dekker and Woods Citation2002), such as the tendency for humans to be left with tasks that the machines cannot do, and, ironically, with tasks that humans are not well suited to performing, such as monitoring the automation, which requires vigilance over extended periods. However, it is also acknowledged that it remains important to consider the strengths and limitations of humans and machines in the design of autonomous technologies (Roth et al. Citation2019).

In designing human-AI systems for complex environments, the respective capabilities of humans and machines are not irrelevant. However, distinguishing between their functional capabilities and intrinsic capabilities is important. Functional capabilities in complex environments are context-dependent. That is, the respective abilities of humans and machines in performing tasks may change over time, depending on the local circumstances, which may not be predictable a priori. The MABA-MABA scheme focuses attention on fixed or inherent capabilities, thereby ignoring the dynamic nature of performance in complex environments. In addition, its focus on intrinsic capabilities has the effect of encouraging functions to be allocated either to humans or to machines, disregarding the possibility, and need, for collaboration. In other words, the scheme overlooks the potential for tasks to be carried out concurrently by people and machines, with actors making similar or distinctive contributions, as well as for tasks to shift flexibly between actors as a function of the situation.

Contemporary research on human-automation interaction relies heavily on the levels of automation taxonomy (Parasuraman, Sheridan, and Wickens Citation2000; Sheridan and Verplank Citation1978), which grades human-machine interaction into a number of discrete categories, spanning from fully manual to fully automatic (Figure 1). Studies based on this taxonomy have shown the impact of different levels of automation on human performance, such as on situation awareness and workload, and therefore on the overall capability of human-machine systems. In addition, these studies have demonstrated the out-of-the-loop performance consequences of supervisory control arrangements, showing for instance that humans tend to have difficulty in detecting problems in automation functioning, in understanding the nature of the problem, and therefore in intervening effectively when required (Endsley Citation2017).

Figure 1. Levels of automation taxonomy: 10 discrete categories graded from low (fully manual) to high (fully automatic). © 2000 IEEE. Reprinted, with permission, from Parasuraman, Sheridan, and Wickens (2000).
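
For readers unfamiliar with the taxonomy, the sketch below encodes the ten levels as a simple lookup table (our paraphrase; see Figure 1 for the exact wording). Representing it this way also makes plain the feature critiqued below: a single discrete dial describing one human and one machine:

# Levels of automation as a lookup table (paraphrased from Parasuraman,
# Sheridan, and Wickens 2000; see Figure 1 for the authoritative wording).

LEVELS_OF_AUTOMATION = {
    1: "Computer offers no assistance; the human does everything",
    2: "Computer offers a complete set of decision/action alternatives",
    3: "Computer narrows the selection down to a few alternatives",
    4: "Computer suggests one alternative",
    5: "Computer executes that suggestion if the human approves",
    6: "Computer allows the human limited time to veto before acting",
    7: "Computer acts automatically, then necessarily informs the human",
    8: "Computer informs the human after acting only if asked",
    9: "Computer informs the human after acting only if it decides to",
    10: "Computer decides everything and acts autonomously, ignoring the human",
}

def describe(level: int) -> str:
    """Return a one-line description of a given level of automation."""
    return f"Level {level}: {LEVELS_OF_AUTOMATION[level]}"

print(describe(5))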

The levels of automation taxonomy, however, does not readily accommodate the contextual and collaborative nature of performance required in complex settings. As an example, the taxonomy (levels 2–10) does not recognise that a machine is rarely capable of generating credible options independently in these environments. For instance, in the case of a scheduling problem for military transport, Roth et al. (Citation2018) found that machine-generated options are usually obsolete because of changing circumstances, including changes in the commander’s priorities. However, by providing humans with the capacity to dynamically adjust the parameters for option generation, as well as the capacity to generate their own options, a broadening was achieved, whereby the solution set considered by the human was larger than what they might have considered independently. In other words, the assumption of either-or work organisation implicit in the taxonomy, as well as in the MABA-MABA classification scheme, only holds if problem-solving is conceived narrowly, independent of its goals and context.
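
A toy sketch of this ‘broadening’ idea is given below. It is our own illustration, not Roth et al.’s (Citation2018) system: the machine generates candidate options from parameters the human can adjust, and the human adds options of their own, so the solution set finally considered is larger than either party would produce alone:

# Toy sketch of "broadening" in collaborative option generation
# (illustrative data and option wording; not an actual scheduling system).
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    description: str
    source: str  # "machine" or "human"

def machine_generate(parameters):
    """Stand-in for an automated planner driven by human-adjustable parameters."""
    max_transit = parameters["max_transit_hours"]
    return [
        Option(f"Direct lift, {max_transit} h transit window", "machine"),
        Option(f"Staged lift via hub, {max_transit + 4} h transit window", "machine"),
    ]

# Initial machine run with default parameters.
options = machine_generate({"max_transit_hours": 12})

# The human adjusts the parameters in light of changed priorities and
# asks for regeneration...
options += machine_generate({"max_transit_hours": 24})

# ...and contributes an option of their own that the machine did not produce.
options.append(Option("Hold low-priority cargo, release aircraft for urgent tasking", "human"))

for opt in options:
    print(f"[{opt.source}] {opt.description}")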

The taxonomy also does not accommodate the nature of the collaborative activity that may be required between people and machines given the properties of emerging AI technologies. For example, some AI algorithms may come up with surprising solutions, challenging conventional human wisdom and experience. In some cases, Go and chess grandmasters have been unable to assess computer-developed strategies, at least not in a timely manner, unsure whether specific moves were ingenious or mistakes (Leventi-Peetz et al. Citation2022). Humans therefore may be unable to approve or veto machine-generated solutions as suggested by levels 2–6 of the taxonomy. Although AI algorithms can come up with innovative solutions, they can also fail or ‘hallucinate’ (Ji et al. Citation2022). These machines therefore cannot be left to their own devices, especially in dynamic, unpredictable environments, bringing into question levels 7–10 of the taxonomy as well. Instead, different organisational structures for human interaction with AI are needed.

Third, the taxonomy does not accommodate the highly social and technological nature of work in complex settings. It guides designers to focus on a single human and machine, reducing the problem to simple, dyadic components and relationships. However, these environments consist of many individuals or teams, who interact with each other and a large array of technologies. In these domains, many kinds of automation are notionally relevant, which means that coordination may be required across multiple interdependent people and machines, not just pairs of actors.

As well as directing attention to human-machine dyads, the taxonomy places the focus on a single task. While each task in a complex setting may be systematically addressed, one at a time, with the taxonomy, this model cannot readily account for the coordination of many interconnecting activities. Further, the taxonomy focuses attention on narrowly defined tasks, isolated from the ecology and evolving context in which they occur. In complex settings, activity is shaped by a social and material ecology (Hutchins Citation1995) and typically unfolds in rapidly changing circumstances, characterised by uncertainty and unpredictability (Rasmussen, Pejtersen, and Goodstein Citation1994). Tasks in these settings may be shed, delayed, or adapted, and novel tasks may be needed to contend with surprises or unforeseen events. Focusing attention on individual, narrow, or well-defined tasks, devoid of the environment in which they are situated (Suchman Citation1987), is likely to produce brittle designs in which human-AI systems are optimised for performing specific tasks in tightly constrained environments, but are not well suited to juggling and adapting a multitude of interconnecting activities in a complex setting.

Finally, recent research on human-automation interaction has been shifting to dynamic or flexible rather than fixed approaches to function allocation (Calhoun Citation2022) and to teaming approaches, commonly known as human-automation or human-autonomy teaming (O’Neill et al. Citation2022). While this research addresses more directly the contextual and collaborative requirements of work in complex environments, it is still largely underpinned by the levels of automation taxonomy. In addition, by far the majority of studies focus on interactions between an individual human and machine, as for example is the case in driver-automation teaming studies. While a few studies incorporate slightly larger teams (see the review by O’Neill et al. Citation2022), studies providing greater insight into the nature of work organisation and performance in human-AI systems in more complex cases are needed.

Contemporary perspectives of sociotechnical systems

Contemporary perspectives of sociotechnical systems in complex operations offer insights into their organisation and functioning, which may inform the design of human-AI systems. We focus on three perspectives, namely distributed cognition, joint cognitive systems, and self-organisation. While there are significant commonalities between these perspectives, they also have distinct emphases and some potential differences. Whereas the theory of distributed cognition draws attention to the distribution of activity across multiple system elements, and the joint cognitive systems perspective emphasises the jointness or coagency of the system, the concept of self-organisation explains how the work of distributed elements may be organised into a coherent whole, especially in complex environments, where actors require, and show, high levels of spontaneity in their behaviours. Here, we review these perspectives to abstract the specific requirements of a design framework for emerging human-AI systems. We do not attempt an exhaustive account of these perspectives, but instead show how their defining themes can shape thinking about how to advance the design of human-AI systems. A preliminary, hypothetical example is then discussed in the final section.

Distributed cognition

In an influential monograph, provocatively titled Cognition in the Wild, Hutchins (Citation1995) articulates the theory of distributed cognition and provides a compelling argument that viewing cognition as solely the property of individual minds is limiting. In his view, cognition or computation is distributed. Computational processes are distributed across people, their artefacts, and the social and material ecology they cohabit (Hutchins Citation1999). Further, computational processes are distributed spatially and temporally, so that activities occurring in one spatial or temporal context can shape or transform the nature of activities occurring in other contexts.

Hutchins’s (Citation1995) view is informed by longitudinal field studies of shipboard navigation on a US Navy vessel, the USS Palau, in which he examined the role of people and artefacts in the practice of work. He recounts in detail his observations of how tasks at sea, such as establishing the ship’s location and projected course in two-dimensional space, are achieved. He documents the operations of all of the parts of the system that are involved in achieving the ship’s navigational goals, including how structures in the environment are ‘appropriated’ in the task. He finds that the necessary computations for the task are distributed throughout a network of interacting individuals and artefacts in a socially constructed setting. Notably, the navigation activity involves the coordination of 10 personnel, none of whom can be said to have individually determined the ship’s progress. On this foundation, Hutchins makes the case that when cognition is studied ‘in the wild’, it cannot be comprehensively understood as an individual activity, focusing on mental processes occurring within the bounds of a person’s head. Rather, it is best investigated as a collective activity extending across several actors and artefacts occupying a social and physical ecology. Notably, he recognises computers as a special class of artefact which has the capacity to ‘mimic certain aspects of human cognitive function’ (Hutchins Citation1999, 127).

Hutchins’s emphasis on cognition has the unfortunate consequence of de-emphasising the role of communication and coordination in complex work performance, at least among some audiences. However, a detailed reading of his work shows clearly that interactions among people and their artefacts are just as critical as those entities themselves. In other words, the distributed elements in task performance must be coordinated.

Although the theory of distributed cognition provides a computational account of systems, there are several variations in how the theory has been applied (Perry Citation2003). Its most common use is simply in acknowledging that work is more than the activity of a single individual working in isolation and without artefacts or tools. Stricter use of the theory is in describing how people ‘externalise’ mental activity, or create and use external structures to support their own as well as collaborative work. The most rigorous use is in ‘elucidating the architecture of cognition where cognitive activity is not simply mentally represented’ but is distributed throughout a system (Perry Citation2003, 197).

Later in this paper, we show the extent to which these three uses of the theory may be accommodated in a design framework for human-AI systems. Meanwhile, we assess the primary implications of distributed cognition for the design of human-AI systems to be as follows:

  • The unit of analysis is a network of actors, who may be human or technological, and the physical and social ecology of their interactions.

  • The ecological features of interest are significant features of the environment that may contribute to the accomplishment of tasks.

  • The analysis must accommodate the distribution of cognitive or computational activity in time and space, and support understanding of how coordination among distributed elements can be supported.

Joint cognitive systems

Hollnagel and Woods (Citation2005) propose the concept of a joint cognitive system as an approach to studying human-machine systems, where the machine represents ‘any artefact designed for a specific use’ (6). The approach places the emphasis on understanding what the human-machine system does, or how it achieves an end or goal, and stays in control, rather than on how the parts of the system communicate or interact. It therefore signifies a deliberate shift in focus from the investigation of human-machine interaction to human-machine performance and the coagency (or ‘jointness’) of humans and machines in achieving system goals.

The repeated use of the term ‘cognitive’ in the canonical texts (Hollnagel and Woods Citation2005; Woods and Hollnagel Citation2006) suggests that the approach presents a cognitive view of systems. However, this impression appears to have been created unintentionally (Hollnagel Citation2022). In this view, a cognitive system is one ‘that can modify its behaviour on the basis of experience so as to achieve specific anti-entropic ends’ (Hollnagel and Woods Citation2005, 22), rather than one that is capable of processing information or computation. Either a human or a machine can be a cognitive system, and a human and machine ‘working together’ is a joint cognitive system (as is a collection or collections of humans and machines). The focus, as noted, is not on the system’s cognitive architecture, or its internal information or computational processes, but rather on ‘understanding how the joint system performs and how it can achieve its goals and functions’ (Hollnagel and Woods Citation2005, 17).

The idea of ‘jointness’ does not deny the physical separateness of humans and machines, but it is argued that ‘it is more important to describe the functioning of the joint cognitive system, [and] hence to join the human and machine into one’ (Hollnagel and Woods Citation2005, 19). Further, the human-machine entity is seen to be dynamically coupled to the environment, and the influence of situation or context is considered to be direct, affecting the way in which the system works; for example, the way in which events are evaluated or actions are selected. Behaviour is seen as cyclical, rather than sequential, building on prior actions and anticipating future actions. These principles seem largely to have their origins in the ‘European cognitive viewpoint’ and Neisser’s perceptual cycle model (Hollnagel and Woods Citation2005; Woods and Hollnagel Citation2006).

As well as foundational principles, the joint cognitive systems approach is associated with ‘stories’ of joint cognitive systems at work and common patterns abstracted from observations of work settings ‘where people, technology and work intersect’ (Woods and Hollnagel Citation2006, 2). The stories coalesce around the question of how people cope with complexity, and specifically how they deal with change and surprise. Recurring themes relate to the challenges that are encountered, particularly how change challenges the coordination of activities, the resilience of systems, and the affordances of artefacts. Ultimately, the approach is concerned with the design of joint cognitive systems and the question of how to support adaptability and control in a dynamic, uncertain, and conflicted world with resource-bound agents, whether humans or machines.

From the preceding discussion, we summarise the primary implications of the joint cognitive systems perspective for the design of human-AI systems as follows:

  • The unit of analysis is a human-machine system, which may be a collection or collections of humans and machines, viewed as a single entity.

  • The focus is on understanding what the human-machine system does, rather than on how its parts interact or communicate or their internal cognitive or computational processes.

  • The aim is to engineer joint cognitive systems that can adapt and stay in control of the situation to achieve their goals.

Self-organisation

The concept of self-organisation (e.g. Camazine et al. Citation2001; Haken Citation1988) has been put forward as a foundation for explaining why actors in complex sociotechnical systems show, and require, both high levels of spontaneity and coordination in their work activities, and how that is achieved (Naikar and Elix Citation2021). This view recognises that affordances for, or constraints on, behaviour are distributed across people, their artefacts, and the physical and social environment, creating large spaces of possibilities for action from which coordinated work patterns may emerge spontaneously (Naikar Citation2020). Novel organisational structures emerge from individual, interacting actors’ spontaneous behaviours and, in turn, constrain and enable their behaviours, so that the system as a whole migrates towards becoming fitted to the changed circumstances. This continuous evolution of actors’ structures and behaviours, which signifies the phenomenon of self-organisation, accounts for the flexibility and coordination needed to operate successfully in complex environments, even in the face of considerable uncertainty about the current circumstances and unpredictability in the future direction of events.

This view of self-organisation is informed by field studies of complex sociotechnical systems in a variety of domains such as emergency management (Bigley and Roberts Citation2001; Lundberg and Rankin Citation2014), military operations (Hutchins Citation1990, Citation1991, Citation1995; Rochlin, La Porte, and Roberts Citation1987), commercial aviation (Hutchins and Klausen Citation1998), law enforcement (Linde Citation1988), transport (Heath and Luff Citation1998; Luff and Heath Citation2000), and healthcare (Bogdanovic et al. Citation2015; Klein et al. Citation2006). For example, in a field study of emergency management workers, Bigley and Roberts (Citation2001) observed considerable flexibility in firefighters’ use of tools, rules, and routines, with the majority of informants reporting breaches to standard operating procedures to fulfil the system’s overarching goals and values. Similarly, Rochlin, La Porte, and Roberts (Citation1987) found that the work organisation on US Navy aircraft carriers shifts—without a priori planning, external intervention, or centralised coordination—from a formal, rigid hierarchical structure to informal configurations, which are flat and distributed. The informal configurations are not defined by any simple mapping between people and roles. Instead, the mapping changes spontaneously with the circumstances.

In this particular context, the self-organisation concept suggests that a system’s formal organisational structure bounds individual actors’ degrees of freedom for action in ways that are suited to particular circumstances, usually those that are routine or familiar. Consequently, in new or different conditions, the formal structure may limit the system’s response in ways that are unsuitable or unproductive (Figure 2). However, in responding to the changed circumstances, the spontaneous actions of individual, interacting actors may result in changes in the work organisation. When a new or different structure emerges that is better suited to the present conditions, it will stabilise and constrain and enable actors’ behaviours in ways that are fitted to the local conditions. Nevertheless, with ongoing fundamental changes in the situation, the system will continue to self-organise, driven by the spontaneous actions of interacting actors (Naikar Citation2020; Naikar and Elix Citation2021).

Figure 2. The concept of self-organisation in sociotechnical systems (Naikar Citation2020): a continuous cycle of actors’ spontaneous actions and interactions and emergent work structures, bounded by the intrinsic constraints of the sociotechnical system.

Further, this concept recognises that, in complex situations, actors tend to shift away from formal structures or procedures and instead rely on the intrinsic constraints of the sociotechnical system as the principal mechanism governing their conduct (Figure 2; Naikar Citation2020). These constraints are limits or boundaries on behaviour, which must be respected by actors to achieve effective system performance (Vicente Citation1999). However, within these limits, actors still have considerable degrees of freedom for action (Rasmussen, Pejtersen, and Goodstein Citation1994). For example, organisational values and resources place limits on actors’ behaviours, but still afford actors many possibilities for action. Consequently, by committing to these fundamental constraints on behaviour, rather than to formal specifications or procedures, actors can safely and productively adjust their actions to the local conditions, such that new structures may emerge from their spontaneous behaviours. This phenomenon of self-organisation plays a critical role in the system’s ability to adapt to the changing context.
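
As a toy sketch of this idea (our own illustration, not a model from the cited studies), the fragment below lets actors volunteer for tasks based on local conditions, bounded only by the constraint that an actor must be capable of the task; the resulting allocation is an emergent structure rather than a pre-assigned one:

# Toy sketch of self-organisation within constraints: no central coordinator
# assigns tasks; capable actors volunteer based on local workload, and the
# mapping between actors and tasks emerges from those spontaneous choices.
import random

ACTORS = {
    "human_1": {"capabilities": {"triage", "navigate"}, "load": 0},
    "human_2": {"capabilities": {"triage", "treat"}, "load": 0},
    "machine_1": {"capabilities": {"navigate", "monitor"}, "load": 0},
}

TASKS = ["triage", "navigate", "monitor", "treat", "triage", "navigate"]

random.seed(1)
allocation = []
for task in TASKS:
    # Only actors capable of the task may volunteer (the intrinsic constraint);
    # within that boundary, the least-loaded volunteer steps in.
    volunteers = [name for name, a in ACTORS.items() if task in a["capabilities"]]
    chosen = min(volunteers, key=lambda n: (ACTORS[n]["load"], random.random()))
    ACTORS[chosen]["load"] += 1
    allocation.append((task, chosen))

for task, actor in allocation:
    print(f"{task:<9} -> {actor}")
# Rerunning with different loads, capabilities, or task streams yields a
# different structure: the allocation is not fixed in advance.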

Still, as Naikar and Elix (Citation2021) discuss, the processes of self-organisation are not without challenges (e.g. Lundberg and Rankin Citation2014) and may not be regarded as perfect or flawless when judged against idealised standards or benchmarks (Rochlin, La Porte, and Roberts Citation1987). Yet, many sociotechnical systems may be characterised as high-reliability organisations (La Porte Citation1996; Rochlin Citation1993; Weick and Sutcliffe Citation2001) because of their ability to balance safety and productivity goals successfully in unexpected, volatile conditions. Self-organisation processes can operate relatively smoothly, particularly in well-established systems, and, importantly, they are necessary. Alternative strategies, like a priori planning, centralised coordination, or external intervention, are often not viable in dynamic, ambiguous conditions. Instead, spontaneous actions and emergent work structures are needed to achieve the ‘proper, immediate balance’ (Rochlin, La Porte, and Roberts Citation1987, pp. 83–84) between a system’s safety and productivity imperatives.

The primary implications of the self-organisation concept for the design of human-AI systems are as follows:

  • The unit of analysis is the work system, which includes people, technology, and the physical and social environment.

  • The focus is on understanding the intrinsic behaviour-shaping constraints of the system, which both limit and afford opportunities for action.

  • The analysis must accommodate the continuous evolution of actors’ spontaneous behaviours and emergent work structures, which provide the system with the adaptive capacity to stay in control of the unfolding situation, whether routine, surprising, or novel.

Summary and example

The perspectives of distributed cognition, joint cognitive systems, and self-organisation view humans and machines through the common lens of an integrated system. However, the three perspectives have distinct emphases, and suggest an interrelated set of concepts for the design of human-AI systems. The theory of distributed cognition emphasises the distribution, and coordination, of activity across multiple human and machine elements as well as the environment of their interactions. The joint cognitive systems perspective recognises the functional unity or coagency of the human-machine system, and the need for the system to adapt to cope with change and surprise. The self-organisation concept explains how system coherence or organisation can emerge from the interactions of multiple, distributed elements, even with significant spontaneity in the actions of individual elements, to provide the adaptive capacity for staying in control of the unfolding situation.

To present a preliminary, hypothetical example of how these perspectives may inform the design of human-AI systems, we draw on the cases of emergency ambulance dispatch management described by Hajdukiewicz et al. (Citation1999) and Wong, Sallis, and Hare (1998). The goal of getting resources, or ambulances with medical personnel, equipment, and supplies, to specific locations in a geographic region involves multiple people, including a team of dispatchers in a control centre and the drivers of the vehicles. The effective use of resources across dispatchers’ different areas of responsibility must be coordinated, while getting medical aid to the incidents as quickly as possible. The dispatchers and the drivers operate in different spatial and temporal contexts, and are embedded in distinct physical and social ecologies, so that they have different affordances for action in getting resources to an incident. For instance, drivers may encounter road blockages, which dispatchers did not know about when assigning resources to an incident. Further, the system is dynamic; for example, the locations of vehicles, the status of personnel, equipment, and supplies, and the number and nature of incidents are constantly changing. Also, unanticipated events, such as new kinds of injuries, may occur. The emergencies may range from single-person incidents, such as a heart attack or a fractured leg, to major incidents involving injuries to many people.

In considering AI-infused technology for this work system, the three perspectives highlight that the distribution and coordination of activity across this entire system is relevant, as well as the context in which it occurs. The analysis must take into account the characteristics of the geographic region, the control centre, and the emergency response resources. The activity includes identifying, assessing, and prioritising emergencies; identifying, assessing, and allocating resources to emergencies; and route planning, navigation, and driving to the locations of emergencies. The potential for action of all of the actors, including the AI, given their functional capabilities and limitations within their specific ecologies of action, must be considered as an integrated system. Moreover, the spontaneous reorganisation of activity across the human-AI system must be accommodated to account for change or unforeseen events, when the system may need to adapt to achieve its goals. The overall aim, then, is not simply to pre-assign functions to either humans or machines, but to support the possibilities for action of all of the actors, and provide the opportunity for collective action and self-organisation.

For instance, in prioritising emergencies and assigning resources in managing two incidents within a particular dispatcher’s area of responsibility, one involving a fractured leg and the other a heart attack (Hajdukiewicz et al. Citation1999), a machine may identify the closest ambulance with first aid personnel for the first incident and the closest ambulance with personnel who can perform cardiopulmonary resuscitation for the second incident. While this may be an acceptable solution, especially in time-pressured situations with a high number of emergencies, in less demanding circumstances, a dispatcher may recognise that the second ambulance is further away from the heart attack victim than the first ambulance, and decide to send both ambulances to the incident, with the personnel in the first ambulance providing what assistance they can until the second ambulance arrives at the scene. Further, in managing a number of such isolated incidents as well as a new, major incident involving novel chemical burns and other injuries to hundreds of people, the team of dispatchers across the region may, in consultation with medical personnel, begin directing resources with particular kinds of equipment, supplies, and personnel to the burns victims. At the same time, the machine may continue allocating resources to the incidents in the region based on standard criteria, while accommodating the dispatchers’ decisions. The dispatchers’ decisions, in turn, may be influenced by the machine’s solutions. In these cases, the solutions are generated through a dynamic process involving both humans and AI influencing each other’s actions.
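
A toy sketch of this first case is given below (our own illustration; the distances and capability labels are invented). The machine proposes the nearest capable ambulance for each incident, and the dispatcher layers a contextual judgement on top, sending the nearer but less-equipped crew to the heart attack as well:

# Toy sketch of collaborative dispatch allocation (illustrative data only).
ambulances = {
    "A1": {"capabilities": {"first_aid"}, "distance_to": {"leg": 3, "cardiac": 4}},
    "A2": {"capabilities": {"first_aid", "cpr"}, "distance_to": {"leg": 6, "cardiac": 9}},
}
incidents = {"leg": "first_aid", "cardiac": "cpr"}

def machine_proposal(ambulances, incidents):
    """Assign each incident the nearest ambulance holding the required capability."""
    proposal = {}
    for incident, need in incidents.items():
        capable = [a for a, info in ambulances.items() if need in info["capabilities"]]
        proposal[incident] = min(capable, key=lambda a: ambulances[a]["distance_to"][incident])
    return proposal

plan = {incident: [a] for incident, a in machine_proposal(ambulances, incidents).items()}
print("machine proposal:", plan)        # {'leg': ['A1'], 'cardiac': ['A2']}

# Dispatcher's contextual judgement: A1 is closer to the cardiac incident than
# the CPR-capable A2, so send A1 as well to give first aid until A2 arrives.
if ambulances["A1"]["distance_to"]["cardiac"] < ambulances["A2"]["distance_to"]["cardiac"]:
    plan["cardiac"].append("A1")
print("after dispatcher input:", plan)  # {'leg': ['A1'], 'cardiac': ['A2', 'A1']}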

Similarly, while a machine may rapidly generate the fastest route for an ambulance to reach an incident, a dispatcher may overhear a colleague talking about a temporary blockage in the area, and therefore communicate with the driver to suggest a feasible detour at the point of the blockage. However, on encountering the blockage, the driver may see that it can be circumnavigated by driving some way through an adjacent golf course, which may be an acceptable solution given the nature of the emergency. Further, on taking this course of action, the machine may automatically reroute the driver, generating new or revised directions for reaching the incident given the driver’s changed direction of travel. In this case, the machine, the dispatcher, and the driver, as well as the dispatcher’s colleague, are all involved in route planning and navigation to an incident. Moreover, the system is self-organising within the scope of each actor’s possibilities for action.
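
The routing case can be sketched in a similar way (again a toy illustration of our own, with an invented road network): the machine computes the fastest route, the dispatcher’s knowledge of the blockage removes a road, and when the driver improvises across the golf course the machine simply replans from the new position:

# Toy sketch of route planning and replanning (invented network and times).
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a dict-of-dicts graph; returns (total_time, route)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for neighbour, t in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + t, neighbour, route + [neighbour]))
    return float("inf"), []

roads = {
    "station": {"junction": 4, "ring_road": 7},
    "junction": {"incident": 5},
    "ring_road": {"incident": 6},
    "golf_course": {"incident": 2},
}

print(shortest_path(roads, "station", "incident"))   # machine's initial route
del roads["station"]["junction"]                      # dispatcher: road blocked
print(shortest_path(roads, "station", "incident"))   # suggested detour

# Driver improvises across the golf course; the machine replans from there.
print(shortest_path(roads, "golf_course", "incident"))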

While aspects of these examples may seem relatively simple, with a small number of humans and AI components and a relatively undemanding context, they nevertheless begin to illustrate some of the anticipated design challenges present in complex environments with larger numbers of human and machine actors. The first example emphasises the need for collective problem-solving, where solutions emerge from the dynamic interplay of human and machine actions, rather than the machines simply generating options which they can execute independently or from which the humans must select. The second example further highlights the context-dependent nature of performance and the potential for self-organisation in systems that demonstrate this collective problem-solving, with the route ultimately taken by the driver emerging from the spontaneous interactions of multiple human actors and an AI component. We anticipate that, as the number, capability, and variety of AI components in future systems increase, the need for such emergent solutions, and the need to design systems that explicitly support such self-organising behaviours, will become increasingly important. In the following section, we consider the merits of different design methods for such human-AI systems, and discuss how the key ideas and implications of the three perspectives, specifically distributed cognition, joint cognitive systems, and self-organisation, can be organised into an integrated design framework.

Outline of a design framework

In designing human-AI systems, the question of what work the system must accomplish is a fundamental concern. After all, the purpose of designing the system is to undertake that work successfully. In addition, the work of the system largely determines the organisational forms that are suitable, as well as the appropriate mechanisms for communication or interaction. Yet, existing models of human-automation interaction address the nature of the work of the system relatively cursorily (Roth et al. Citation2019), focusing instead on how functions can be allocated between humans and machines or how interdependencies between humans and machines can be defined, once the tasks to be performed have been established.

Johnson, Bradshaw, and Feltovich (Citation2018) suggest that tools for designing human-machine systems should be evaluated based on ‘how well they represent both the human and machine, the work being performed, and the relationships between the human and machine throughout the work.’ (p. 77). In the case of complex environments, the three perspectives of sociotechnical systems discussed above, namely distributed cognition, joint cognitive systems, and self-organisation, have a number of specific implications for the nature of this analysis. In this section, we focus on cognitive work analysis, and consider the extent to which it can meet these requirements. To frame this discussion, we begin by examining methods that are linked with each of the three perspectives.

As we have shown above, the concepts of distributed cognition, joint cognitive systems, and self-organisation provide important insights into the work and functioning of complex sociotechnical systems. However, as one would expect of any theoretical perspective, they focus on providing conceptual or explanatory accounts, paying less attention to methods or representations for translating the theories or empirical patterns into designs. In the joint cognitive systems perspective, Woods and Hollnagel (Citation2006) describe laws or patterns of work derived from incidents occurring in intensive care units in hospitals, control centres in nuclear power plants and space missions, and other sociotechnical systems. While they discuss many methods for carrying out field observations, translating the resulting laws or patterns of work into functional designs has proved challenging for communities subscribing to such perspectives and remains largely undemonstrated (Bye and Naweed Citation2008; Naweed and Bye Citation2008; Righi, Saurin, and Wachs Citation2015). Moreover, as existing analysis methods are critiqued in the foundational texts (Hollnagel and Woods Citation2005; Woods and Hollnagel Citation2006), this perspective does not provide a clear path for how joint cognitive systems are to be designed (Naweed and Bye Citation2008).

In the distributed cognition perspective, Hutchins (Citation1995) presents the idea of an activity score, which he uses to provide a detailed, step-by-step account of how computational processes for the task of ‘fixing’ a ship’s location in two-dimensional space are distributed, and coordinated temporally, across actors and artefacts on the vessel. This description, which conveys a thorough sense of how this task is carried out, may be very well suited for the purpose of generating or substantiating theories of work. However, there are a number of problems with using such descriptive methods in design, similar to the observational methods described by Woods and Hollnagel (Citation2006).

First, such descriptive methods are event-dependent (Vicente Citation1999). The activity score that Hutchins (Citation1995) presents is likely only relevant for a single instance of position fixing—the one he observed in creating that record (Naikar Citation2020). In other cases, the content and organisation of the work are likely to be different, whether there are minor variations across instances or dramatic changes. Certainly, the activity score that Hutchins presents does not accommodate a critical incident on the ship, which he also observed, when the gyrocompass failed and the organisational structure for the task changed considerably as a result. Clearly, it is impractical to observe or record the full range of instances. Moreover, by definition, it is not possible to observe events that have not yet occurred, or future events. Inevitably, such descriptions will be incomplete, and will likely lead to brittle designs for human-AI systems that are limited to supporting patterns of work that have or can be observed.

Another issue with descriptive methods is their device-dependence (Vicente Citation1999). That is, these methods describe how work is currently performed with existing devices or machines, rather than how the work can be performed in the future with new or emerging technologies, such as AI. Such descriptions may incorporate currently unproductive ways of working, albeit unintentionally, or miss potentially productive ways of working with novel technologies. A further concern is that new technologies intended to support people’s current or described behaviours are likely to lead to changes in those behaviours once those technologies are introduced into the work setting, resulting in a never-ending task-artefact cycle (Carroll and Rosson Citation1992).

These issues of event- and device-dependence are particularly problematic in the case of human-AI systems, as people have limited experience in working with AI technologies, analysts have had few opportunities to observe people working with such technologies, and the field of AI continues to evolve rapidly.

The self-organisation perspective (Naikar Citation2020; Naikar and Elix Citation2021) is associated with the cognitive work analysis framework (Rasmussen, Pejtersen, and Goodstein Citation1994; Vicente Citation1999). This framework is both event- and device-independent and therefore formative in orientation. The focus is on understanding how work can be performed effectively, rather than how it is performed currently (descriptive) or how it should be performed ideally (normative). (While we have not discussed normative techniques, we note that such approaches are also event- and device-dependent; see Naikar Citation2013; Rasmussen Citation1997; Vicente Citation1999). The term formative conveys a focus on how work can be constructed or formed out of the constraints and affordances of the system. In the following discussion, we assess the extent to which cognitive work analysis can accommodate the distributed cognition, joint cognitive systems, and self-organisation perspectives, specifically their implications for the design of human-AI systems.

Cognitive work analysis

The cognitive work analysis framework (Rasmussen, Pejtersen, and Goodstein Citation1994; Vicente Citation1999) comprises five dimensions of analysis, each of which provides a different view of the complex work system. The five dimensions move from an ecological to a cognitive view, with precedence deliberately given to the former. Table 1 identifies the five dimensions, describing the views they provide of the system as well as their key concepts and modelling tools. The modelling tools represent the constraints or affordances associated with the relevant view. Next, we consider each dimension, exploring how these views account for the design implications of the three contemporary perspectives of sociotechnical systems. We highlight areas of convergence as well as some potential discrepancies.

Table 1. The five dimensions of cognitive work analysis.

Work domain analysis

In this dimension, as well as in the four other dimensions of cognitive work analysis, the unit of analysis is the work system, encompassing actors, artefacts, and other aspects of the environment. The work domain represents the combination of physical and social factors that form the environment in which work occurs (Naikar Citation2013). The focus is on understanding why the system exists (ends) and the constraints, or affordances, on how it can fulfil its ends or purposes (means). This analysis is therefore event-independent. It defines the functional structure of the environment, or its action-relevant, relatively permanent properties, which both limit and afford possibilities for behaviour (Vicente Citation1999). These properties of the environment are relevant to a wide range of situations including routine, unexpected, or unpredictable events. Primacy is given to understanding the work domain, as the space of possibilities for action cannot exceed the affordances or constraints of the environment.
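
To make the event- and actor-independence of this analysis concrete, the following sketch (in Python, purely for illustration) represents a work domain as means-ends links between purposes and the processes and objects that can realise them. The domain content, the level labels, and the data structure are our own illustrative assumptions, not drawn from any published work domain analysis; note that no human or machine actors appear anywhere in the model.

```python
# Illustrative sketch only: a work domain captured as means-ends links between
# abstraction levels. Level labels and domain content are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    level: str                                          # e.g. "functional purpose"
    means: list["Node"] = field(default_factory=list)   # nodes that can realise this end

def add_means(end: Node, *means: Node) -> None:
    """Record that each of `means` affords a way of achieving `end`."""
    end.means.extend(means)

# A toy bushfire-response domain: no actors, human or machine, appear anywhere.
protect_life = Node("Protect life and property", "functional purpose")
situation_picture = Node("Maintain shared situation picture", "purpose-related function")
suppression = Node("Suppress fire fronts", "purpose-related function")
sensing = Node("Sense fire spread", "object-related process")
water_drop = Node("Deliver retardant or water", "object-related process")

add_means(protect_life, situation_picture, suppression)
add_means(situation_picture, sensing)
add_means(suppression, water_drop)

def possibilities_for(end: Node, depth: int = 0) -> None:
    """Walk the means-ends structure: the affordances available for a given end."""
    print("  " * depth + f"{end.level}: {end.label}")
    for m in end.means:
        possibilities_for(m, depth + 1)

possibilities_for(protect_life)
```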

The work domain dimension is compatible with the joint cognitive systems perspective in that, at this stage of the framework, no assumptions are made about whether the actors are humans or machines. In addition, the number of actors is not assumed; there may be one actor or numerous actors. In this sense, the system is treated as a single entity, and the focus is not on understanding how the human and machine components interact, but what the system needs to achieve and the possible structural means or affordances for doing so. Moreover, as no assumptions are made about where the actors are located in time and space, the boundaries of the analysis extend beyond specific temporal and spatial contexts, accommodating the distributed cognition perspective as well. Finally, given the emphasis on the constraints and affordances of the ecology, rather than on actors’ imagined, observed, or anticipated behaviours, this dimension can accommodate spontaneous actions and emergent work structures, consistent with the self-organisation perspective. Self-organising behaviours in routine as well as in surprising or novel situations are accommodated, as the analysis is event-independent.

Strictly speaking, it may be said that the joint cognitive systems perspective’s focus on what the system does is incompatible with the work domain dimension. Norros and Salo (Citation2009) argue that this perspective focuses on the human-machine system, viewing the environment as external to the system; for instance, emphasis is placed on the affordances of the human-machine system, overlooking the affordances of the environment. However, the next dimension, activity analysis, sheds light on what the system does, and given that the relevance of work domain analysis to performance has been convincingly demonstrated in a large body of experimental studies as well as in industrial case studies (see Naikar Citation2017 and Vicente Citation2002 for reviews), we take the position that the environment is intimately interconnected with what the system does (Flach Citation2015; Gibson Citation1979; Suchman Citation1987), and is critical in any approach for designing human-AI systems. Roth et al. (Citation2019) have also argued for the value of work domain analysis in designing human-machine systems with autonomous technologies.

Similarly, it could be said that the anonymity of actors in the work domain dimension is incompatible with the distributed cognition perspective, which emphasises how computational processes are distributed in the system. However, the organisation of work is considered in the fourth dimension, social organisation and cooperation analysis, and it is clear that features of the ecology that contribute to the accomplishment of tasks are central to the theory of distributed cognition (Hutchins Citation1995). In some respects, the distributed cognition perspective appears to place greater emphasis on the affordances of the material environment, compared with those of the social ecology. For example, cultural factors are considered predominantly in the extent to which they are embedded in physical objects, thereby shaping the affordances of those objects. However, as the affordances of conceptual artefacts, such as rules, heuristics, or practices, are also considered (Hutchins Citation1999; Hutchins Citation2006), we take the view that features of the social ecology that shape the performance of tasks by defining the intentional environment of the system—or why it does what it does—are not outside the bounds of this perspective.

Finally, the self-organisation perspective views the constraints of the system as the fundamental governing mechanism for actors’ conduct, especially in challenging situations, when rules, procedures, or other formal structures may not be suitable. The work domain dimension defines the environmental constraints, which demarcate the system’s full space of possibilities for action; the possibilities cannot exceed what is acceptable given the purposes of the system or what is feasible given its resources. This dimension is therefore critical in the self-organisation perspective.

Activity (control task) analysis

This dimension of the framework focuses on the activity that is needed in the work domain. Again, no assumptions are made about whether the actors are humans or machines, or whether there is one actor or multiple actors, so that this analysis is still actor-independent. However, the analysis considers recurring spatial, temporal, or functional contexts, as these contexts shape or constrain the activity of the system. Defining the activity that is needed is possible because the goals of the system in recurring contexts can be established. In contrast, work domain analysis considers the space of possibilities for action when the goals are not known—when the actors need to work out what the goals are in the first place—for example in novel or unforeseen situations. Therefore, whereas work domain analysis accounts for emergent behaviours in a wide range of situations, including novel circumstances, activity analysis complements this analysis by accounting for emergent behaviours in broadly predictable contexts, as there are still many possibilities for action within these contexts.

In this dimension, the contexts are defined as a set of work situations and work functions or problems, using the contextual activity template (Naikar, Moylan, and Pearce Citation2006), or as a set of operating modes (Vicente Citation1999). The activity is defined as a set of control tasks or decision making tasks, using the decision ladder (Rasmussen, Pejtersen, and Goodstein Citation1994; Vicente Citation1999). While the decision ladder is often perceived as presenting an information-processing view of systems, it has been categorically stated that the decision ladder is not a model of human information processing, or the internal processes of an individual’s mind, but rather a template that is useful for design (Rasmussen, Pejtersen, and Goodstein Citation1994; Vicente Citation1999). This template is abstract enough to accommodate both human and machine contributions to the activity. The practical value of the decision ladder has been demonstrated in a number of cases (e.g. Asmayawati and Nixon Citation2020; Elix and Naikar Citation2021; Jenkins et al. Citation2010).
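
As a purely illustrative sketch of the kind of structure the contextual activity template records, the following fragment marks which work functions can occur, and typically occur, in which work situations; the situations, functions, and markings are hypothetical assumptions of ours rather than content from any published analysis.

```python
# Illustrative sketch only: a contextual-activity-template-like record of which
# work functions are possible or typical in which work situations. The template
# says nothing about who (human or machine) takes up a function in a situation.

template = {
    ("pre-deployment planning", "build situation picture"): {"possible": True,  "typical": True},
    ("pre-deployment planning", "allocate resources"):      {"possible": True,  "typical": True},
    ("pre-deployment planning", "coordinate withdrawal"):   {"possible": False, "typical": False},
    ("transit", "build situation picture"):                 {"possible": True,  "typical": True},
    ("transit", "allocate resources"):                      {"possible": True,  "typical": False},
    ("transit", "coordinate withdrawal"):                   {"possible": False, "typical": False},
    ("on-scene operations", "build situation picture"):     {"possible": True,  "typical": True},
    ("on-scene operations", "allocate resources"):          {"possible": True,  "typical": True},
    ("on-scene operations", "coordinate withdrawal"):       {"possible": True,  "typical": False},
}

def functions_possible_in(situation: str) -> list[str]:
    """All work functions that could be taken up in a given work situation."""
    return [f for (s, f), cell in template.items() if s == situation and cell["possible"]]

print(functions_possible_in("transit"))
```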

Hollnagel and Woods (Citation2005) describe an information-processing model as one that is sequential rather than cyclical in nature, and one that focuses on the internal processes by which inputs are transformed into outputs. Admittedly, the decision ladder appears to fit this description, but the substance of the decision ladder is far different from its appearance. One could easily change its presentation, for example by ‘tying’ its two ends together, so that it looks cyclical, somewhat like the cyclical model presented by Hollnagel and Woods. Indeed, Ashoori and Burns (Citation2010) present a model in which several decision ladders are arranged in a ‘decision wheel’; these authors were not motivated by criticisms of the decision ladder but rather the specific requirements of their project. Their representation shows clearly that activity need not begin at one end of the decision ladder and finish at the other end by following a sequence of steps. Rather, there is no obvious starting point, and activities may be ‘short-circuited’ or reiterated. In addition, activity is distributed across several actors, rather than taking place within the bounds of an individual’s head, and multiple activities may occur concurrently, rather than just sequentially. Finally, any aspect of this activity is affected or shaped by other activities on the wheel.

More to the point, in contrast with information-processing accounts (see Flach Citation2015 for a review), the decision ladder is not concerned with providing a structuralist account of the internal processes of the mind or computer in response to stimuli or input. Instead, it provides an ecological account in that it recognises the closed loop coupling of perception and action, which allows actors to discover and adapt to their surroundings. In this view, feedback loops are not closed within the mind but are closed through the actors’ situated actions (Flach Citation2015).

Particularly important to note in this context is that the decision ladder does not represent goals or states internal to an actor, but rather potentially relevant goals or states in the work domain. These states are represented in ways that are functionally significant; that is, in ways that are meaningful for action. For example, Elix and Naikar (Citation2008) show that activity in this dimension may be represented as a set of questions, consistent with how decision makers appear to act; that is, they seem to ask themselves a set of questions about what is going on, what they should do, what the options are, what happened before, what could happen next, and whether what they are doing is working (Bennett, Posey, and Shattuck Citation2008; Phillips et al. Citation2023). Therefore, activity in this dimension is not defined in terms of sequences of mental processes, internal to an individual’s mind, but instead in terms of recurring classes of question about the work domain, which may be distributed across multiple actors. We emphasise that the intent of the activity dimension is not to develop a process model, whether sequential or cyclical, explicating the intricacies of how a human or system behaves, performs, or acts. Rather, the practical significance of the decision ladder lies in understanding the states of the work domain that are relevant in recurring contexts. This understanding allows the design of systems, with human and machine actors, that can act on these states effectively.
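
A minimal sketch of this question-based representation is given below, with a hypothetical context, hypothetical questions, and hypothetical work-domain states; the list order implies no sequence, and nothing in the representation says which actor, human or machine, takes up a given question.

```python
# Illustrative sketch only: activity for one recurring context expressed as
# recurring classes of question about work-domain states, rather than as
# mental steps inside a single actor's head.

CONTEXT = "on-scene operations / build situation picture"

# Each entry: a recurring question about the work domain and the domain states
# that answering it makes available for action. No entry says whether a human,
# a machine, or several actors together will answer it.
activity = [
    {"question": "What is going on?",             "states": ["fire-front locations", "wind field"]},
    {"question": "What could happen next?",       "states": ["predicted spread", "threatened assets"]},
    {"question": "What are the options?",         "states": ["available crews", "aircraft on station"]},
    {"question": "Is what we are doing working?", "states": ["containment status", "resource usage"]},
]

# Questions may be taken up concurrently, revisited, or 'short-circuited';
# there is no fixed starting point or sequence implied by the list order.
for step in activity:
    print(f"{step['question']:35s} -> acts on: {', '.join(step['states'])}")
```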

Moreover, a key strength, if not the key strength, of the decision ladder is that the boxes comprising the model are truly black boxes. A fundamental concept of activity analysis is that the focus is on what needs to be done, not on how it is done or by whom (Vicente Citation1999). This allows us to think about the activity that is needed for coping with (Hollnagel Citation2012) or staying in control (Flach Citation2012) of the situation, without worrying about who is doing the controlling or how. Clearly, the processes adopted by humans and machines will be different. The analysis deliberately defers such considerations to later stages of the cognitive work analysis framework, so that who can do the work and how emerges from the analysis, rather than being fed into the analysis as assumptions.

In any case, regardless of the stance taken on the decision ladder, the activity dimension is compatible with the joint cognitive systems perspective in that it is actor-independent; there is no differentiation between humans and machines. A potential discrepancy is that, in line with a formative orientation, the activity dimension defines what activity is needed in the system, not what the system does, as the latter is limited to observable behaviours with existing tools and technologies in a relatively small number of situations. Still, the activity dimension clarifies what the system does, as that is intimately connected with the activity that is needed, and observations of what the system does may reveal the system’s constraints. In addition, many analysts have used the decision ladder for descriptive purposes, or to describe what the system does (e.g. Ashoori and Burns Citation2010; Naikar and Saunders Citation2003).

This dimension is also consistent with the distributed cognition perspective in that it recognises distinct contexts of work, which may be defined spatially, temporally, or functionally. Although this dimension is actor-independent, which may seem incompatible with this perspective, it provides a schematic for subsequently recognising that actors participating in the work may be distributed throughout these contexts, specifically in social organisation and cooperation analysis, the fourth dimension of cognitive work analysis. In addition, by allowing that the actors may be machines, as well as humans, cognitive work analysis does not dismiss outright a computational account of systems, as potential strategies for the work may in fact be suitably considered in these terms, in the case of machine actors specifically. However, the strategies are not considered until the next dimension of the framework, again with the intent of keeping the analysis as broad as possible at this stage, rather than constraining it with assumptions, in order to maximise the ‘leverage points’ for design (Vicente Citation1999).

In these respects, the analysis also accommodates the self-organisation perspective. By not limiting the focus to humans or machines, and by considering the distinct contexts of work, which present different constraints or affordances to actors, the framework maximises the space of possibilities for action that are considered in design. This approach is critical for equipping the system with the requisite variety (Ashby Citation1956) for staying in control of the situation.
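
As a brief aside for readers unfamiliar with Ashby’s (Citation1956) law of requisite variety, one common logarithmic statement of the law is reproduced below; the notation is ours and is offered only as a gloss on the argument, not as part of the cognitive work analysis framework.

```latex
% If H(D) is the variety (log count) of disturbances the environment can present,
% H(R) the variety of responses the system can deploy, and H(O) the variety of
% outcomes that can result, then, roughly,
\[
  H(O) \;\geq\; H(D) - H(R),
\]
% so outcomes can be held within acceptable bounds only if the system's repertoire
% of responses is at least as varied as the disturbances it must counter.
```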

Strategies analysis

This dimension focuses on how the work of the system can be done (Vicente Citation1999). As before, human and machine actors are not differentiated. The focus is deliberately on the set of strategies that are possible, rather than the strategies that are currently used by humans. Humans may not use certain strategies because they are resource intensive. For example, in time-pressured situations, humans tend to rely on recognition-primed strategies for decision making instead of analytical ones (Klein Citation1989). However, the use of analytical strategies may become feasible with the appropriate support, for instance with emerging AI technologies. Therefore, such possibilities are kept open as leverage points for design. Moreover, machines may use different strategies from humans for performing the same work, and this possibility is kept open as well. Again, the intent is to maximise the space of possibilities for action, or the requisite variety of the system (Ashby Citation1956), as variety in the system is needed to stay in control of the situation. Rasmussen, Pejtersen, and Goodstein (Citation1994) and Vicente (Citation1999) discuss the potential value of information flow maps for creating abstract representations of strategies, suitable for encompassing humans and machines, whereas others have proposed additional tools (Cornelissen et al. Citation2013; Hassall, Sanderson, and Cameron Citation2014). This remains an area for further research.
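
The sketch below illustrates, with hypothetical content, what keeping the set of possible strategies open might look like in practice: several functionally different strategies are recorded for one control task, together with their resource demands, without any commitment to whether humans, machines, or both would enact them.

```python
# Illustrative sketch only: strategies analysis keeps open the set of ways a
# piece of work *can* be done. Task, strategies, and resource demands are hypothetical.

STRATEGIES = {
    "diagnose engine fault": [
        {"name": "recognition-primed (match to a known pattern)",
         "demands": {"time": "low", "data": "sparse", "computation": "low"}},
        {"name": "topographic search (trace the functional structure)",
         "demands": {"time": "medium", "data": "structured model", "computation": "medium"}},
        {"name": "analytical simulation (test hypotheses against a model)",
         "demands": {"time": "high", "data": "rich telemetry", "computation": "high"}},
    ],
}

def feasible(task: str, available: dict) -> list[str]:
    """Strategies whose demands do not exceed what is currently available.
    Which actor(s) would enact a feasible strategy is left to later dimensions."""
    order = {"low": 0, "sparse": 0, "medium": 1, "structured model": 1,
             "high": 2, "rich telemetry": 2}
    return [s["name"] for s in STRATEGIES[task]
            if all(order[s["demands"][k]] <= order[v] for k, v in available.items())]

print(feasible("diagnose engine fault",
               {"time": "medium", "data": "structured model", "computation": "high"}))
```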

Meanwhile, for much the same reasons as those spelled out for the activity dimension, the concepts of strategies analysis appear to be aligned with the three perspectives of sociotechnical systems. Briefly, this dimension is still actor-independent, so that the system may be viewed as a single entity, consistent with the joint cognitive systems perspective. In addition, this dimension focuses on the strategies that are possible, regardless of whether these are ultimately enacted by humans, machines, or both in collaboration. Therefore, cognitive work analysis does not disregard computational accounts of systems, such as those considered within the distributed cognition perspective, where these may help to inform the understanding and conceptualisation of productive functional strategies for the work system. Finally, by keeping open the system’s space of possibilities for action, and consequently its possibilities for the work organisation, which is considered in the next dimension, this dimension is also compatible with the self-organisation perspective.

It may be argued that, as the joint cognitive systems perspective places the emphasis on what the system does, not on ‘how it does it’ (Hollnagel and Woods Citation2005, 22), it is incompatible with the strategies dimension. However, the focus of strategies analysis is not on the internal processes of humans or machines, which is what this perspective argues against, but instead on the functional strategies of the work system, which may be distributed across many humans and machines. It may also be argued that as this dimension does not provide a computational account of systems, it is inconsistent with the distributed cognition perspective. However, as indicated above, computational representations may be taken into account in establishing viable functional strategies.

Social organisation and cooperation analysis

This dimension finally begins to consider who might do the work, whether these are humans or machines. Notably, the analysis does not specify who should do the work, or who currently does the work, but who could do the work. Using the diagram of work organisation possibilities, a set of criteria that governs shifts in work organisation in complex sociotechnical systems is applied to rule actors in or out; in other words, it defines the constraints on who can do the work and to what extent. This usually leaves a very large space of organisational possibilities to leverage in design (Naikar and Elix Citation2016; Elix and Naikar Citation2021). Maximising the work organisation possibilities accommodated in the design of human-AI systems can significantly increase their resilience or adaptive capacity.
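
As an illustration of this style of reasoning, the sketch below rules hypothetical actors in or out of hypothetical work functions against a single criterion, access to tools and information; the actors, functions, and criterion are assumptions of ours, and a real diagram of work organisation possibilities would draw on a richer set of criteria.

```python
# Illustrative sketch only: using a criterion to rule actors in or out of work
# functions, leaving a space of organisational possibilities rather than a
# single allocation. Actors, functions, and the criterion are hypothetical.

ACTORS = {
    "crew member forward": {"location": "forward cabin", "access": {"radio", "tactical display"}},
    "crew member aft":     {"location": "aft cabin",     "access": {"sensor console"}},
    "onboard AI service":  {"location": "anywhere",      "access": {"sensor console", "tactical display"}},
}

# Constraints on each work function: the access it requires. (Further criteria,
# such as workload or competency, could be added in the same way.)
FUNCTION_REQUIREMENTS = {
    "classify surface contacts": {"sensor console"},
    "communicate with ships":    {"radio"},
}

def who_could(function: str) -> set[str]:
    """Actors not ruled out by the access criterion for this function.
    The result is a space of possibilities, not an assignment."""
    required = FUNCTION_REQUIREMENTS[function]
    return {name for name, a in ACTORS.items() if required <= a["access"]}

for f in FUNCTION_REQUIREMENTS:
    print(f"{f}: possible actors -> {sorted(who_could(f))}")
```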

This dimension is well aligned with the distributed cognition perspective in that it can accommodate multiple distributed actors participating in a variety of interconnecting activities. These actors may be dispersed in time and space, extending across large networks of systems such as those involved in emergency management and military operations. In fact, the actors may include the original designers of the machines or artefacts, or the human workers who program the machines during the planning stages of an operation (Rasmussen Citation1986), as these actors also shape the system’s performance through their actions. Thus, the boundaries of the analysis may extend across mental, physical, and cultural spaces, covering long time periods over which the design or use of artefacts may evolve, as is advocated by the distributed cognition perspective (Hutchins Citation1995).

This analysis is also compatible with the self-organisation perspective. By maximising the organisational possibilities that can be leveraged in design, the analysis accommodates spontaneity in actors’ behaviours and emergence in work structures, which is necessary for staying in control of unfolding events. In particular, we note that, in this analysis, actors are defined in a way that is event-independent. As an illustration, actors are not necessarily defined by their formal roles and responsibilities, but rather by their spatial and temporal locations for instance (Elix and Naikar Citation2021), which shape the possibilities for the work. This approach recognises that an actor who moves from one location to another has different possibilities for the work, regardless of their rank or role, allowing emergent forms of work organisation that are better suited to local contingencies.

It is unclear, however, to what extent the joint cognitive systems perspective entertains the consideration of the work organisation, as the humans and machines are viewed as a single entity. Unless humans and machines are acknowledged as distinct components at some stage, their relational structure, or how they are organised in relation to the work, cannot be contemplated. However, as the joint system architecture is an important factor in the performance of the system, it seems reasonable that this perspective must consider the couplings or arrangements of actors, which is supported by this dimension of cognitive work analysis.

Actor competencies analysis

This dimension has been called ‘user profile analysis’ by Rasmussen, Pejtersen, and Goodstein (Citation1994) and ‘worker competencies analysis’ by Vicente (Citation1999). Here, we suggest the label ‘actor competencies analysis’ instead. The term ‘user’ tends to denote a human manipulating an interface to interact with a machine, which is the focus of much of the research in human-computer interaction (Perry Citation2003). The term ‘workers’ places the emphasis on human actors. In our case, where we are concerned with the application of cognitive work analysis to the design of human-AI systems, a broader perspective is warranted. Our suggested label is consistent with the overarching philosophy of cognitive work analysis, which generally accommodates the possibility of actors being humans or machines. If we limit the analysis to humans at this stage, the focus is more likely to be narrowed to the requirements demanded of humans given the properties of machines, rather than also accommodating thinking about how machines can be designed to work with humans for effective system performance.

This dimension is important within the cognitive work analysis framework because it emphasises that the possibilities for action in a work system depend on the interplay between the features of the environment and the characteristics of the actors. This means, first, that human and machine actors may have different action possibilities within the same work domain. For example, AI technologies equipped with particular sensors may have different perceptual capabilities for detecting objects in an environment compared with humans. Second, the first three dimensions, although actor-independent, collectively define the characteristics desirable in actors for exploiting the constraints and affordances of the environment. For instance, domain requirements for speed, temperature tolerance, or creativity shape the desirable capabilities of actors.

As a first step, then, the aim of this dimension is to understand the resources or capacities that human and machine actors need to participate in the work system. This step therefore involves consolidating the work demands in the preceding dimensions to establish how they collectively shape the capacities that actors need to function effectively; that is, to act in the world, stay in control of the evolving circumstances, and fulfil the system’s goals and purposes. These requirements can then be integrated with knowledge of human and machine categories of action, and their associated strengths and limitations, to derive implications for system design, such as team, training, and interface design.
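
A coarse sketch of this consolidation step is shown below; the capacities and the human and machine characterisations are hypothetical placeholders for what a real analysis would derive from the preceding dimensions and the relevant literature.

```python
# Illustrative sketch only: consolidating demands from the earlier dimensions
# into required capacities, then comparing them with coarse, assumed human and
# machine strengths to surface design implications.

REQUIRED_CAPACITIES = {
    "sustained monitoring of many data streams",
    "reframing goals in unanticipated situations",
    "negotiating priorities with external agencies",
}

# Coarse, assumed characterisations; a real analysis would be far finer-grained.
STRENGTHS = {
    "machine": {"sustained monitoring of many data streams"},
    "human":   {"reframing goals in unanticipated situations",
                "negotiating priorities with external agencies"},
}

def design_implications(required: set) -> dict:
    """For each required capacity, which kinds of actor could supply it?
    Capacities no actor covers become training, interface, or development needs."""
    return {cap: {kind for kind, s in STRENGTHS.items() if cap in s} for cap in required}

for capacity, suppliers in design_implications(REQUIRED_CAPACITIES).items():
    print(f"{capacity}: supported by {sorted(suppliers) or ['(gap -> design need)']}")
```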

For example, Rasmussen (Citation1983) discusses three categories of cognitive control, namely skill-, rule-, and knowledge-based modes of operation. This taxonomy encompasses automated sensorimotor behaviours, rule-based activity, and analytical reasoning, respectively, and allows for interactions between these categories. Most importantly, each category encapsulates the dynamic interplay of actors with their environment. It is also recognised that an activity may simultaneously involve all three levels of cognitive control, and that actors may shift between the three modes in performing the same task, depending on the circumstances.

This dimension is compatible with the distributed cognition perspective in that it recognises that the cognitive and computational capacities of humans and machines, respectively, are important factors in the realisation of work. It is important to note, however, that in the skills, rules, and knowledge taxonomy, the focus is not on describing the internal processes or structures of humans and machines, but instead on the capacities of actors for interacting with their environment. This dimension therefore does not provide a purely cognitive or computational account of systems, but instead promotes an ecological account of how actors can stay in control of their circumstances, discussing in particular skill-, rule-, and knowledge-based modes of operation.

As is the case with the preceding dimension, it is unclear to what extent this dimension resonates with the joint cognitive systems perspective. If the system cannot be considered in terms of its components, the capacities that different actors need to bring to the job and how those capacities can be supported through design cannot be assessed. However, as with the preceding dimension, it seems reasonable that this perspective must accommodate the capacities of actors in the design of the joint system, which this dimension of cognitive work analysis supports.

Finally, this dimension is consistent with the self-organisation perspective, given that it recognises the capacities of actors to act in the environment. Actors’ spontaneous actions and interactions form the basis of emergence in the work structure, which is necessary for operating successfully in a dynamic, uncertain, and unpredictable world.

Summary

In summary, we have shown how cognitive work analysis avoids the problems of event- and device-dependence and is compatible with many of the fundamental properties of complex sociotechnical systems considered by the distributed cognition, joint cognitive systems, and self-organisation perspectives. Some potential differences have also been discussed. Table 2 summarises the key themes of this discussion. Specifically, it shows the extent to which the specific implications of the three perspectives for designing human-AI systems, which we had identified earlier in this paper, are accommodated by cognitive work analysis.

Table 2. Mapping between the design implications of the three perspectives of sociotechnical systems and the five dimensions of cognitive work analysis.

In general, cognitive work analysis provides a robust means for synthesising the core insights of the three perspectives into a coherent design framework. While there are some discrepancies in the scope and foci of these perspectives, they each provide a useful vantage point for design which, for the most part, is accommodated by different dimensions of cognitive work analysis. The inconsistencies between particular perspectives and cognitive work analysis may be attributed largely to differences in orientation. A clear example is the joint cognitive systems focus on what the system does versus the cognitive work analysis focus on what activity is needed given the system’s intrinsic constraints. While the three perspectives focus on providing descriptive accounts of human-machine systems, which is well suited for theory generation, cognitive work analysis seeks to provide a formative account that is useful for design. Such a formative orientation is particularly important in the design of human-machine systems with emerging AI technologies as there are limited opportunities for observation.

Discussion

Cognitive work analysis provides a principled way of integrating insights from distributed cognition, joint cognitive systems, and self-organisation into a coherent framework for designing human-AI systems in real-world operational settings. These are disorderly environments (Snowden and Boone Citation2007), where actors often face rapidly evolving situations, significant uncertainty about the present conditions, and limited predictability of future events, making them challenging to navigate. Work in these settings is sensitive to the context or local circumstances, and the nature of the work largely determines the organisational forms that are suitable. This may explain why human teams in these environments tend to be self-organising, even though their formal structures may be rigid and hierarchical (Naikar and Elix Citation2016, Citation2021). This form of social organisation provides the adaptive capacity to deal with change and surprise and stay in control of the unfolding situation.

The features of emerging AI technologies, assessed together with the properties of complex environments, suggest that their relationships to humans may need to become increasingly collaborative in nature. As outlined earlier in this paper, AI technologies are beginning to surpass human performance on some well-defined tasks and perform acceptably on an increasing range of tasks. At the same time, their different failure modes and the narrow scope of machine performance could lead to brittleness when deployed in complex, real-world environments. Given these characteristics, some tasks may be performed concurrently by humans and machines or may shift between the actors, depending on the circumstances. In addition, a collaborative approach recognises that, inevitably, humans will be able to know, hear, see, or do things about the situation that the machines cannot, and that the reverse is also true. Therefore, their collective activity may be required to stay in control of the situation.

Importantly, however, the need to contextualise action in complex settings means that the nature of this collaborative structure cannot be completely pre-defined or planned, as is also found to be the case in human teams (Bigley and Roberts Citation2001; Bogdanovic et al. Citation2015; Rochlin, La Porte, and Roberts Citation1987). Instead, the human-AI system must be self-organising (Naikar Citation2018). This approach differs from dynamic approaches to human-automation interaction (e.g. Parasuraman, Mouloua, and Molloy Citation1996) in that the nature of the work distribution is not pre-established based on anticipated changes in the situational parameters (Elix and Naikar Citation2021). Rather, the work organisation emerges from the spontaneous actions and interactions of actors.

Traditional models of human-automation interaction cannot easily accommodate the requirements of contextualisation, collaboration, and self-organisation. These models tend to emphasise either-or forms of work organisation and hold assumptions about human and machine contributions to the work that may be invalid with advancing AI technologies. In addition, these models tend to focus attention on human-machine dyads and single, isolated tasks, whereas complex settings are generally populated with multiple humans and machines, or networks of humans and machines, performing a variety of interconnecting activities in a rich social and physical ecology.

We therefore turned to contemporary views of sociotechnical systems, specifically distributed cognition, joint cognitive systems, and self-organisation, which recognise these fundamental aspects of work in complex settings. While there may be discrepancies between these perspectives, to the extent that these accounts are grounded in empirical observations, they may be taken as offering different views of sociotechnical systems, which each provide a useful lens for design. In this paper, we have shown that cognitive work analysis provides a strong framework for integrating the core ideas of these perspectives into a coherent design approach. Rather than simply specifying a priori assignments of functions to humans or machines, whether fixed or dynamic, this approach identifies the system’s constraints and affordances to demarcate actors’ fields of possibilities for action, which may be overlapping, and supports these possibilities in design. The aim is to promote collaborative action and self-organisation, and therefore maximise the system’s adaptive capacity or resilience.

While the use of cognitive work analysis to design self-organising human-AI systems may not differ substantially from other applications of this framework, particularly its use for designing self-organising human teams for military systems (e.g. Elix and Naikar Citation2021), it has not yet been demonstrated. Moreover, this application is not necessarily straightforward. In particular, how cognitive work analysis can adequately account for the properties of so-called ‘intelligent’ machines requires further exploration, bearing in mind that a wide variety of AI technologies may be utilised in complex operational settings, and the set of potential applications is continually expanding.

Nevertheless, while the application of cognitive work analysis to the design of self-organising human-AI systems is yet to be demonstrated, this framework has previously been used successfully in design. Experimental studies show that ecological interfaces based on some of these concepts lead to better performance by workers than interfaces based on conventional approaches (Vicente Citation2002). In addition, aspects of this framework have been applied successfully in industrial settings in a number of cases (Naikar Citation2017). Especially pertinent in this context, elements of cognitive work analysis were used to specify requirements for automation design (Bisantz et al. Citation2003), including collaborative human-automation interaction (Roth et al. Citation2018), and to design self-organising teams for airborne command and control (Naikar et al. Citation2003) and maritime surveillance aircraft (Elix and Naikar Citation2021). Such findings provide good reason to believe that cognitive work analysis may have practical value for designing self-organising human-AI systems in complex settings.

One sticking point is the practical feasibility of such an approach, given the scale of the effort required when the boundaries of the analysis are drawn around a complex sociotechnical system rather than a human-machine dyad. This issue is particularly significant in the case of cognitive work analysis, where multiple dimensions of analysis are required to meet the requirements of contextualisation, collaboration, and self-organisation. One suggestion we can offer, based on our experience, relates to the level of granularity of the analysis. While we have previously carried out extensive analyses, for instance to design self-organising human teams for military aircraft (Elix and Naikar Citation2021), it seems that our final design did not depend so much on the details we had incorporated, as on a higher-level conceptualisation of the work. Admittedly, as with any design endeavour, which is arguably as much an art as it is a science (Bennett and Flach Citation2011), it is difficult to be sure whether the details provided us with a depth of knowledge of the work and system that was important in the final design. However, we are eager to explore whether analyses at coarser granularities can lead to designs that have practical value. A second idea for managing scale, which is often mentioned (Ashoori and Burns Citation2010; Flach Citation2015; Naikar and Elix Citation2021), is seeing an actor as a team or organisation, rather than solely as an individual, where it is suitable to do so.

Finally, we note some areas for future research. First, as indicated above, while we have shown that cognitive work analysis provides a strong overarching framework for designing human-AI systems that integrates core ideas of contemporary perspectives of sociotechnical systems, the details of this framework for this particular area of application may require considerable further work. Further, case studies demonstrating the value of cognitive work analysis for designing self-organising human-AI systems are needed. Moreover, as we have highlighted in earlier work (Naikar et al. Citation2021), the interaction and communication mechanisms between humans and machines for realising such organisational structures are an important challenge to be addressed, requiring considerable further research in both artificial intelligence and human-machine interaction.

Ethical approval

The authors report that there were no human or animal participants involved in this work.

Acknowledgments

We thank four anonymous reviewers for their constructive suggestions, which helped us in strengthening our paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Ashby, W. R. 1956. An Introduction to Cybernetics. New York: John Wiley & Sons.
  • Ashoori, M., and C. Burns. 2010. “Reinventing the Wheel: Control Task Analysis for Collaboration.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 54 (4): 274–278. doi:10.1177/154193121005400402.
  • Asmayawati, S., and J. Nixon. 2020. “Modelling and Supporting Flight Crew Decision-Making during Aircraft Engine Malfunctions: Developing Design Recommendations from Cognitive Work Analysis.” Applied Ergonomics 82: 102953. doi:10.1016/j.apergo.2019.102953.
  • Bainbridge, L. 1983. “Ironies of Automation.” Automatica 19 (6): 775–779. doi:10.1016/0005-1098(83)90046-8.
  • Bennett, K. B., and J. M. Flach. 2011. Display and Interface Design: Subtle Science, Exact Art. Boca Raton, FL: CRC Press.
  • Bennett, K. B., S. M. Posey, and L. G. Shattuck. 2008. “Ecological Interface Design for Military Command and Control.” Journal of Cognitive Engineering and Decision Making 2 (4): 349–385. doi:10.1518/155534308X377829.
  • Bigley, G. A., and K. H. Roberts. 2001. “The Incident Command System: High-Reliability Organizing for Complex and Volatile Task Environments.” Academy of Management Journal 44 (6): 1281–1299. doi:10.2307/3069401.
  • Bisantz, A. M., E. Roth, B. Brickman, L. L. Gosbee, L. Hettinger, and J. McKinney. 2003. “Integrating Cognitive Analyses in a Large-Scale System Design Process.” International Journal of Human-Computer Studies 58 (2): 177–206. doi:10.1016/S1071-5819(02)00130-1.
  • Bisantz, A. M., and K. J. Vicente. 1994. “Making the Abstraction Hierarchy Concrete.” International Journal of Human-Computer Studies 40 (1): 83–117. doi:10.1006/ijhc.1994.1005.
  • Bogdanovic, J., J. Perry, M. Guggenheim, and T. Manser. 2015. “Adaptive Coordination in Surgical Teams.” BMC Health Services Research 15 (1): 128. doi:10.1186/s12913-015-0792-5.
  • Brady, A., and N. Naikar. 2022. “Development of Rasmussen’s Risk Management Framework for Analysing Multi-Level Sociotechnical Influences in the Design of Envisioned Work Systems.” Ergonomics 65 (3): 485–518. doi:10.1080/00140139.2021.2005823.
  • Brown, Noam, and Tuomas Sandholm. 2019. “Superhuman AI for Multiplayer Poker.” Science (New York, N.Y.) 365 (6456): 885–890. doi:10.1126/science.aay2400.
  • Brown, T., B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. 2020. “Language Models Are Few-Shot Learners.” NeurIPS 33: 1877–1901. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
  • Burns, C. M. 2018. “Automation and the Human Factors Race to Catch up.” Journal of Cognitive Engineering and Decision Making 12 (1): 83–85. doi:10.1177/1555343417724975.
  • Bye, R., and A. Naweed. 2008. “Book Review.” Ergonomics 51 (2): 242–244. doi:10.1080/00140130600971275.
  • Calhoun, G. 2022. “Adaptable (Not Adaptive) Automation: Forefront of Human-Automation Teaming.” Human Factors 64 (2): 269–277. doi:10.1177/00187208211037457.
  • Camazine, S., J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraula, and E. Bonabeau. 2001. Self-Organization in Biological Systems. Princeton, NJ: Princeton University Press.
  • Carroll, J. M., and M. B. Rosson. 1992. “Getting around the Task Artifact Cycle: How to Make Claims and Design by Scenario.” ACM Transactions on Information Systems 10 (2): 181–212. doi:10.1145/146802.146834.
  • Christoffersen, K., and D. D. Woods. 2002. “How to Make Automated Systems Team Players.” In Advances in Human Performance and Cognitive Engineering Research: Automation, edited by E. Salas, 1–12. Oxford, UK: Elsevier Science/JAI Press.
  • Colmer, A. 2021. “Agile Command and Control Insights Paper.” Noetic Group. Defence Science and Technology Group Emerging Disruptive Technology Assessment Symposium: https://www.dst.defence.gov.au/sites/default/files/events/documents/Agile%20C2%20Insights%20Paper%20F2.pdf.
  • Cornelissen, M., P. M. Salmon, D. P. Jenkins, and M. G. Lenne. 2013. “A Structured Approach to the Strategies Analysis Phase of Cognitive Work Analysis.” Theoretical Issues in Ergonomics Science 14 (6): 546–564. doi:10.1080/1463922X.2012.668973.
  • Dekker, S. W., and D. D. Woods. 2002. “MABA-MABA or Abracadabra? Progress on Human-Automation Coordination.” Cognition, Technology & Work 4 (4): 240–244. doi:10.1007/s101110200022.
  • Dikmen, M. 2022. “A Cognitive Work Analysis Approach to Explainable Artificial Intelligence in Non-Expert Financial Decision-Making.” Doctoral Thesis, University of Waterloo, University of Waterloo’s Institutional Repository. http://hdl.handle.net/10012/18355.
  • Elasri, Mohamed, Omar Elharrouss, Somaya Al-Maadeed, and Hamid Tairi. 2022. “Image Generation: A Review.” Neural Processing Letters 54 (5): 4609–4646. doi:10.1007/s11063-022-10777-x.
  • Elix, B., and N. Naikar. 2008. “Designing Safe and Effective Future Systems: A New Approach for Modelling Decisions in Future Systems with Cognitive Work Analysis.” 8th International Symposium of the Australian Aviation Psychology Association. VIC, Australia: Australian Aviation Psychology Association.
  • Elix, B., and N. Naikar. 2021. “Designing for Adaptation in Workers’ Individual Behaviors and Collective Structures with Cognitive Work Analysis: Case Study of the Diagram of Work Organisation Possibilities.” Human Factors 63 (2): 274–295. doi:10.1177/0018720819893510.
  • Endsley, M. R. 2017. “From Here to Autonomy: Lessons Learned from Human-Automation Research.” Human Factors 59 (1): 5–27. doi:10.1177/0018720816681350.
  • Ernst, K., E. Roth, L. Militello, C. Sushereba, J. DiIulio, S. Wonderly, E. Roth, S. Scheff, D. Klein, G. Taylor. 2019. “A Strategy for Determining Optimal Crewing in Future Vertical Lift: Human Automation Function Allocation.” Proceedings of the Vertical Lift Society’s 75th Annual Forum & Technology Display, Vertical Flight Society, May 13–16.
  • Fitts, P. M., M. S. Viteles, N. L. Barr, D. R. Brimhall, G. Finch, E. Gardner, and S. S. Stevens. 1951. Human Engineering for an Effective Air-Navigation and Traffic-Control System. Columbus, OH: Ohio State University Research Foundation.
  • Flach, J. M. 2012. “Complexity: Learning to Muddle Through.” Cognition, Technology & Work 14 (3): 187–197. doi:10.1007/s10111-011-0201-8.
  • Flach, J. M. 2015. “Situation Awareness: Context Matters. A Commentary on Endsley.” Journal of Cognitive Engineering and Decision Making 9 (1): 59–72. doi:10.1177/1555343414561087.
  • Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
  • Goodfellow, I., Y. Bengio, and A. Courville. 2016. Deep Learning (Adaptive Computation and Machine Learning Series). Cambridge: MIT Press Academic.
  • Hajdukiewicz, J. R., C. M. Burns, K. J. Vicente, and R. G. Eggleston. 1999. “Work Domain Analysis for Intentional Systems.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 43 (3): 333–337. doi:10.1177/154193129904300343.
  • Haken, H. 1988. Information and Self-Organization: A Macroscopic Approach to Complex Systems. Berlin: Springer.
  • Hancock, P. A. 2017. “Imposing Limits on Autonomous Systems.” Ergonomics 60 (2): 284–291. doi:10.1080/00140139.2016.1190035.
  • Hassall, M. E., P. M. Sanderson, and I. T. Cameron. 2014. “The Development and Testing of SAfER: A Resilience-Based Human Factors Method.” Journal of Cognitive Engineering and Decision Making 8 (2): 162–186. doi:10.1177/1555343414527287.
  • He, K., X. Zhang, S. Ren, and J. Sun. 2015. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–1034. Washington, DC: IEEE Computer Society. doi:10.1109/ICCV.2015.123.
  • Heath, C., and P. Luff. 1998. “Convergent Activities: Line Control and Passenger Information on the London Underground.” In Cognition and Communication at Work, edited by Y. Engeström and D. Middleton, 6–129. Cambridge: Cambridge University Press.
  • Hollnagel, E. 2012. “Coping with Complexity: Past, Present and Future.” Cognition, Technology & Work 14 (3): 199–205. doi:10.1007/s10111-011-0202-7.
  • Hollnagel, E. 2022. “Cognitive Systems Engineering: RIP.” Erik Hollnagel. https://erikhollnagel.com/ideas/cognitive-systems-engineering.html
  • Hollnagel, E., and D. D. Woods. 2005. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton, FL: CRC Press.
  • Hutchins, E. 1991. “Organizing Work by Adaptation.” Organization Science 2 (1): 14–39. doi:10.1287/orsc.2.1.14.
  • Hutchins, E. 1995. Cognition in the Wild. Cambridge: MIT Press.
  • Hutchins, E. 1999. “Cognitive Artifacts.” In The MIT Encyclopedia of the Cognitive Sciences, edited by R. A. Wilson and F. C. Keil, 126–128. Cambridge: The MIT Press.
  • Hutchins, E. 2006. “The Distributed Cognition Perspective on Human Interaction.” In Roots of Human Sociality: Culture, Cognition and Interaction, edited by N. J. Enfield and S. C. Levinson, 375–398. London: Routledge.
  • Hutchins, E. L. 1990. “The Technology of Team Navigation.” In Intellectual Teamwork: Social and Technological Foundations of Cooperative Work, edited by J. Galagher, R. E. Kraut, and C. Egido, 191–221. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Hutchins, E., and T. Klausen. 1998. “Distributed Cognition in an Airline Cockpit.” In Cognition and Communication at Work, edited by Y. Engeström and D. Middleton, 15–34. Cambridge: Cambridge University Press.
  • Janssen, C. P., S. F. Donker, D. P. Brumby, and A. L. Kun. 2019. “History and Future of Human-Automation Interaction.” International Journal of Human-Computer Studies 131: 99–107. doi:10.1016/j.ijhcs.2019.05.006.
  • Jenkins, D. P., N. A. Stanton, P. M. Salmon, G. H. Walker, and L. Rafferty. 2010. “Using the Decision-Ladder to Add a Formative Element to Naturalistic Decision Making Research.” International Journal of Human-Computer Interaction 26 (2–3): 132–146. doi:10.1080/10447310903498700.
  • Ji, Z., N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. Bang, A. Madotto, and P. Fung. 2022. “Survey of Hallucination in Natural Language Generation.” ACM Computing Surveys 55 (12): 1–38. doi:10.48550/arXiv.2202.03629.
  • Johnson, M., J. M. Bradshaw, and P. J. Feltovich. 2018. “Tomorrow’s Human-Machine Design Tools: From Levels of Automation to Interdependencies.” Journal of Cognitive Engineering and Decision Making 12 (1): 77–82. doi:10.1177/1555343417736462.
  • Khurana, D., A. Koli, K. Khatter, and S. Singh. 2023. “Natural Language Processing: State of the Art, Current Trends and Challenges.” Multimedia Tools and Applications 82 (3): 3713–3744. doi:10.1007/s11042-022-13428-4.
  • Klein, G. A. 1989. “Recognition-Primed Decisions.” In Advances in Man-Machine System Research, edited by W. B. Rouse, Vol. 5, 47–92. Greenwich, CT: JAI Press Inc.
  • Klein, G., D. D. Woods, J. M. Bradshaw, R. R. Hoffman, and P. J. Feltovich. 2004. “Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity.” IEEE Intelligent Systems 19 (06): 91–95. doi:10.1109/MIS.2004.74.
  • Klein, K. J., J. C. Ziegert, A. P. Knight, and Y. Xiao. 2006. “Dynamic Delegation: Shared, Hierarchical, and Deindividualized Leadership in Extreme Action Teams.” Administrative Science Quarterly 51 (4): 590–621. doi:10.2189/asqu.51.4.590.
  • La Porte, T. R. 1996. “High Reliability Organizations: Unlikely, Demanding and at Risk.” Journal of Contingencies and Crisis Management 4 (2): 60–71. doi:10.1111/j.1468-5973.1996.tb00078.x.
  • Lake, B., T. Ullman, J. Tenenbaum, and S. Gershman. 2017. “Building Machines That Learn and Think like People.” The Behavioral and Brain Sciences 40: e253. doi:10.1017/S0140525X16001837.
  • Leventi-Peetz, A.-M., T. Ostreich, W. Lennartz, and K. Weber. 2022. “Scope and Sense of Explainability for AI-Systems.” In IntelliSys 2021. Lecture Notes in Networks and Systems, 294. Berlin, Germany: Springer.
  • Li, Y., and C. M. Burns. 2017. “Modelling Automation with Cognitive Work Analysis to Support Human-Automation Coordination.” Journal of Cognitive Engineering and Decision Making 11 (4): 299–322. doi:10.1177/1555343417709669.
  • Linde, C. 1988. “Who’s in Charge Here? Cooperative Work and Authority Negotiation in Police Helicopter Missions.” In Proceedings of the Second Annual ACM Conference on Computer-Supported Cooperative Work, 52–64. Portland, OR: ACM Press.
  • Luff, P., and C. Heath. 2000. “The Collaborative Production of Computer Commands in Command and Control.” International Journal of Human-Computer Studies 52 (4): 669–699. doi:10.1006/ijhc.1999.0354.
  • Lundberg, J., and A. Rankin. 2014. “Resilience and Vulnerability of Small Flexible Crisis Response Teams: Implications for Training and Preparation.” Cognition, Technology & Work 16 (2): 143–155. doi:10.1007/s10111-013-0253-z.
  • Mazaeva, N., and A. M. Bisantz. 2007. “On the Representation of Automation Using a Work Domain Analysis.” Theoretical Issues in Ergonomics Science 8 (6): 509–530. doi:10.1080/14639220600647816.
  • McKinney, Scott Mayer, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg S. Corrado, Ara Darzi, Mozziyar Etemadi, Florencia Garcia-Vicente, Fiona J. Gilbert, Mark Halling-Brown, Demis Hassabis, Sunny Jansen, Alan Karthikesalingam, Christopher J. Kelly, Dominic King, Joseph R. Ledsam, David Melnick, Hormuz Mostofi, Lily Peng, Joshua Jay Reicher, Bernardino Romera-Paredes, Richard Sidebottom, Mustafa Suleyman, Daniel Tse, Kenneth C. Young, Jeffrey De Fauw, and Shravya Shetty. 2020. “International Evaluation of an AI System for Breast Cancer Screening.” Nature 577 (7788): 89–94. doi:10.1038/s41586-019-1799-6.
  • Moy, G., S. Shekh, M. Oxenham, and S. Ellis-Steinborner. 2020. Recent Advances in AI and Their Impact on Defence. DST-Group-TR-3716. Canberra: Defence Science and Technology Group.
  • Naikar, N. 2013. Work Domain Analysis: Concepts, Guidelines and Cases. Boca Raton, FL: CRC Press.
  • Naikar, N. 2017. “Cognitive Work Analysis: An Influential Legacy Extending beyond Human Factors and Engineering.” Applied Ergonomics 59 (Pt B): 528–540. doi:10.1016/j.apergo.2016.06.001.
  • Naikar, N. 2018. “Human-Automation Interaction in Self-Organizing Sociotechnical Systems.” Journal of Cognitive Engineering and Decision Making 12 (1): 62–66. doi:10.1177/1555343417731223.
  • Naikar, N. 2020. “Distributed Cognition in Self-Organizing Teams.” In Contemporary Research: Models, Methodologies, and Measures in Distributed Team Cognition, edited by M. McNeese, E. Salas, and M. Endsley. Boca Raton, FL: CRC Press.
  • Naikar, N., and B. Elix. 2016. “Integrated System Design: Promoting the Capacity of Sociotechnical Systems for Adaptation through Extensions of Cognitive Work Analysis.” Frontiers in Psychology 7 (962): 962. doi:10.3389/fpsyg.2016.00962.
  • Naikar, N., and B. Elix. 2021. “Designing for Self-Organisation in Sociotechnical Systems: Resilience Engineering, Cognitive Work Analysis, and the Diagram of Work Organisation Possibilities.” Cognition, Technology & Work 23 (1): 23–37. doi:10.1007/s10111-019-00595-y.
  • Naikar, N., G. Moy, H. Kwok, and A. Brady. 2021. “Designing for “Agility” in Envisioned Worlds: Concepts for Collaborative Intelligence in Human-Machine Teams.” In Naturalistic Decision Making and Resilience Engineering Symposium. Toulouse: Naturalistic Decision Making, Resilience Engineering Association, and Fondation pour une culture de sécurité industrielle.
  • Naikar, N., A. Moylan, and B. Pearce. 2006. “Analysing Activity in Complex Systems with Cognitive Work Analysis: Concepts, Guidelines and Case Study for Control Task Analysis.” Theoretical Issues in Ergonomics Science 7 (4): 371–394. doi:10.1080/14639220500098821.
  • Naikar, N., B. Pearce, D. Drumm, and P. M. Sanderson. 2003. “Designing Teams for First-of-a-Kind, Complex Systems Using the Initial Phases of Cognitive Work Analysis: Case Study.” Human Factors 45 (2): 202–217. doi:10.1518/hfes.45.2.202.27236.
  • Naikar, N., and A. Saunders. 2003. “Crossing the Boundaries of Safe Operation: An Approach for Training Technical Skills in Error Management.” Cognition, Technology & Work 5 (3): 171–180. doi:10.1007/s10111-003-0125-z.
  • NASEM (National Academies of Sciences, Engineering, and Medicine). 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press.
  • Naweed, A., and R. Bye. 2008. “Joint Cognitive Systems: Patterns in Cognitive Systems Engineering [Book Review].” Ergonomics 51 (5): 768–770. doi:10.1080/00140130701223774.
  • NeurIPS 2022 Workshop on Distribution Shifts – Connecting Methods and Applications. 2022. Accessed June 3, 2023. https://sites.google.com/view/distshift2022/
  • Norros, L., and L. Salo. 2009. “Design of Joint Systems: A Theoretical Challenge for Cognitive Systems Engineering.” Cognition, Technology & Work 11 (1): 43–56. doi:10.1007/s10111-008-0122-3.
  • O’Neill, T. A., N. J. McNeese, A. Barron, and B. Schelble. 2022. “Human-Autonomy Teaming: A Review and Analysis of the Empirical Literature.” Human Factors 64 (5): 904–938. doi:10.1177/0018720820960865.
  • OpenAI. 2023. “GPT-4 Technical Report.” arXiv preprint arXiv:2303.08774. doi:10.48550/arXiv.2303.08774.
  • Parasuraman, R., M. Mouloua, and R. Molloy. 1996. “Effects of Adaptive Task Allocation on Monitoring of Automated Systems.” Human Factors 38 (4): 665–679. doi:10.1518/001872096778827279.
  • Parasuraman, R., T. B. Sheridan, and C. D. Wickens. 2000. “A Model for Types and Levels of Human Interaction with Automation.” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 30 (3): 286–297. doi:10.1109/3468.844354.
  • Perrow, C. 1984. Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
  • Perry, M. 2003. “Distributed Cognition.” In HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science, edited by J. M. Carroll, 193–223. San Francisco: Morgan Kaufmann.
  • Phillips, R. O., H. Baharmand, N. Vandaele, C. Decouttere, and L. Boey. 2023. “How Can Authorities Support Distributed Improvisation during Major Crises? A Study of Decision Bottlenecks Arising during Local COVID-19 Vaccine Roll-out.” Journal of Cognitive Engineering and Decision Making 17 (2): 166–187. doi:10.1177/15553434221125092.
  • Pritchett, A. R., S. Y. Kim, and K. M. Feigh. 2014. “Modeling Human-Automation Function Allocation.” Journal of Cognitive Engineering and Decision Making 8 (1): 33–51. doi:10.1177/1555343413490944.
  • Rasmussen, J. 1969. Man-Machine Communication in the Light of Accident Records (Report No. S-1-69). Roskilde, Denmark: Danish Atomic Energy Commission, Research Establishment Risø.
  • Rasmussen, J. 1983. “Skills, Rules, and Knowledge: Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models.” IEEE Transactions on Systems, Man, and Cybernetics SMC-13 (3): 257–266. doi:10.1109/TSMC.1983.6313160.
  • Rasmussen, J. 1986. A Cognitive Engineering Approach to the Modelling of Decision Making and Its Organization (Report No. Risø-M-2589). Roskilde, Denmark: Risø National Laboratory.
  • Rasmussen, J. 1997. “Merging Paradigms: Decision Making, Management, and Cognitive Control.” In Decision Making Under Stress: Emerging Themes and Applications, edited by R. Flin, E. Salas, M. Strub, and L. Martin, 67–81. Aldershot: Ashgate.
  • Rasmussen, J., A. M. Pejtersen, and L. P. Goodstein. 1994. Cognitive Systems Engineering. New York: John Wiley & Sons.
  • Righi, A. W., T. A. Saurin, and P. Wachs. 2015. “A Systematic Literature Review of Resilience Engineering: Research Areas and a Research Agenda Proposal.” Reliability Engineering & System Safety 141: 142–152. doi:10.1016/j.ress.2015.03.007.
  • Rochlin, G. 1993. “Defining ‘High Reliability’ Organizations in Practice: A Taxonomic Prologue.” In New Challenges to Understanding Organizations, edited by K. H. Roberts, 11–32. New York: Macmillan.
  • Rochlin, G. I., T. R. La Porte, and K. H. Roberts. 1987. “The Self-Designing High-Reliability Organization: Aircraft Carrier Operations at Sea.” Naval War College Review 40 (4): 76–90.
  • Roth, E. M., E. P. De Pass, R. Scott, R. Truxler, S. F. Smith, and J. L. Wampler. 2018. “Designing Collaborative Planning Systems: Putting Joint Cognitive Systems Principles to Practice.” In Cognitive Systems Engineering: The Future for a Changing World, edited by P. J. Smith and R. R. Hoffman, 247–268. Boca Raton, FL: Taylor & Francis.
  • Roth, E. M., C. Sushereba, L. G. Militello, J. Diiulio, and K. Ernst. 2019. “Function Allocation Considerations in the Era of Human Autonomy Teaming.” Journal of Cognitive Engineering and Decision Making 13 (4): 199–220. doi:10.1177/1555343419878038.
  • Sadler, M., and N. Regan. 2019. Game Changer: AlphaZero’s Groundbreaking Chess Strategies and the Promise of AI. Alkmaar, Netherlands: New in Chess.
  • Salmon, Paul M., Chris Baber, Catherine Burns, Tony Carden, Nancy Cooke, Missy Cummings, Peter Hancock, Scott McLean, Gemma J. M. Read, and Neville A. Stanton. 2023. “Managing the Risks of Artificial General Intelligence: A Human Factors and Ergonomics Perspective.” Human Factors and Ergonomics in Manufacturing & Service Industries 33 (5): 366–378. doi:10.1002/hfm.20996.
  • Shao, Z., R. Zhao, S. Yuan, M. Ding, and Y. Wang. 2022. “Tracing the Evolution of AI in the Past Decade and Forecasting the Emerging Trends.” Expert Systems with Applications 209: 118221. doi:10.1016/j.eswa.2022.118221.
  • Sheridan, T. B., and W. L. Verplank. 1978. Human and Computer Control of Undersea Teleoperators. Cambridge, MA: MIT Man-Machine Systems Laboratory.
  • Shneiderman, B. 2022. Human-Centered AI. Oxford: Oxford University Press.
  • Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. 2017. “Mastering the Game of Go without Human Knowledge.” Nature 550 (7676): 354–359. doi:10.1038/nature24270.
  • Snowden, D. J., and M. E. Boone. 2007. “A Leader’s Framework for Decision Making.” Harvard Business Review 85 (11): 68–76, 149.
  • Suchman, L. A. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press.
  • van de Ven, G., T. Tuytelaars, and A. Tolias. 2022. “Three Types of Incremental Learning.” Nature Machine Intelligence 4 (12): 1185–1197. doi:10.1038/s42256-022-00568-3.
  • Vicente, K. J. 1999. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Vicente, K. J. 2002. “Ecological Interface Design: Progress and Challenges.” Human Factors 44 (1): 62–78. doi:10.1518/0018720024494829.
  • Vinyals, Oriol, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. 2019. “Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning.” Nature 575 (7782): 350–354. doi:10.1038/s41586-019-1724-z.
  • Weick, K., and K. M. Sutcliffe. 2001. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass.
  • Wong, W. B. L., P. J. Sallis, and D. O. O’Hare. 1998. “The Ecological Approach to Interface Design: Applying the Abstraction Hierarchy to Intentional Domains.” In Proceedings of the 1998 Australasian Computer Human Interaction Conference, 144–151. Los Alamitos, CA.
  • Woods, D. D., J. Tittle, M. Feil, and A. Roesler. 2004. “Envisioning Human-Robot Coordination in Future Operations.” IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 34 (2): 210–218. doi:10.1109/TSMCC.2004.826272.
  • Woods, D. D., and E. Hollnagel. 2006. Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: CRC Press.
  • Zhao, W. X. 2023. “A Survey of Large Language Models.” arXiv preprint arXiv:2303.18223.