
Are humans still necessary?

Pages 1711-1718 | Received 26 Apr 2023, Accepted 05 Jul 2023, Published online: 02 Aug 2023

Abstract

Our long-accepted and historically persistent human narrative almost exclusively places us at the motivational centre of events. The wellspring of this anthropocentric fable arises from the unitary and bounded nature of personal consciousness. Such immediate conscious experience frames the heroic vision we have told to, and subsequently sold to, ourselves. But need this centrality necessarily be a given? The following work challenges this oft-unquestioned foundational assumption, especially in light of developments in automated, autonomous, and artificially intelligent systems. For, in these latter technologies, human contributions are becoming ever more peripheral and arguably unnecessary. The removal of the human operator from the inner loops of momentary control has now progressed to an ever more remote function as some form of supervisory monitor. The natural progression of that line of evolution is the eventual excision of humans from access to any form of control loop at all. This may even include system maintenance and then, prospectively, even initial design. The present argument features a ‘unit of analysis’ provocation which explores the proposition that socially, and even ergonomically, the human individual no longer occupies priority or any degree of pre-eminent centrality. Rather, we are witnessing a transitional phase of development in which socio-technical collectives are evolving as the principal sources of what may well be profoundly unhuman motivation. These developing proclivities occupy our landscape of technological innovations, which daily act to magnify, rather than diminish, such progressive inhumanities. Where this leaves a science focused on work as a human-centred enterprise serves as the culminating consideration of the present discourse.

Practitioners Summary

Understanding the changes in discretionary, as compared to obligatory, roles of human users and operators in systems is central to ergonomic practice. Envisioning this path of potential progress, and then witnessing and impacting its actual realisation, permits practitioners to optimise their professional and personal strategies as they deal with this next critical step in the relationship between humans and technology.

Introduction

In a publication that is devoted to the advancement of human-centred technology it may seem curious, outlandish, or even bizarre to question the need for human participation in systems’ operations. Nevertheless, this question is the central proposition considered here. In a year which now witnesses four decades of progress since Bainbridge’s (Citation1983) pivotal publication, it is more than pertinent to consider our current state of development, especially in light of ever-increasing degrees of task automation (Hancock Citation2014; Hancock, Chignell, and Loewenthal Citation1985; Hancock et al. Citation2013). This becomes an even more pointed necessity with the burgeoning growth of systemic levels of autonomy that has occurred in the intervening interval, and which continues apace today (and see Hancock Citation2017; Kaplan et al. Citation2023; Russell Citation2019; Tegmark Citation2017). To the question, are humans still necessary, we are tempted to respond immediately, viscerally, reflexively, and unequivocally with a resounding ‘yes’ (and see Parasuraman and Wickens Citation2008). After all, this affirmative is both the facile and the popular response, especially when directed to a human audience still individually and collectively imbued in, and persuaded of, its unquestioned primacy. Yet times change, and we find ourselves amidst such innovations which themselves continue to ‘accelerate’ (cf., Gleick Citation2000; Honoré Citation2005; Kurzweil Citation2005). Since we must reserve our most vehement doubts for our most cherished beliefs, it is a worthwhile exercise to contemplate the rather counter-intuitive proposition of continuing human necessity. For, there are growing numbers of examples around us of systems in which human participation has not simply diminished but has virtually been eliminated altogether (Byrne Citation2017).
We now have miner-less mines, farmer-less dairy farms, pilot-less aircraft, and even imminently driver-less off- and on-road vehicles (Hancock, Nourbakhsh, and Stewart Citation2019; Tabone et al. Citation2021). We might well ask when and where the plebiscite was conducted to approve these socially critical steps. Of course, there was no collective, rational decision to do so, and thus the answer appears to lie in the ‘invisible,’ but undeniable, profit-driven hand of the market. Whether such developments portend a long-term human good is a highly debatable proposition. In the short term, an important empirical question arises as to precisely what becomes of the miners, the farmers, the pilots, and the drivers who now no longer mine, farm, fly, or drive. The broadening out of such a question leads us to consider the nature of work itself and most especially the evolving human role in such a changing landscape of demand. The present assessment begins with a brief evaluation as to why whole professions and pursuits of distinct but now dissipating forms of work can so easily be ‘forgotten’ in the welter of constant change. This, as succeeding perceptions of reality are now so quickly fabricated, revised, and subsequently overlaid. Collectively accepted ‘norms’ have always been calibrated to each generation’s experience. As a result, the present discourse begins with consideration of this malleability of beliefs about what constitutes normal experience.

The transience of individual experience

Habituation is the assassin of insight. Habituation is facilitated by the frailty of human memory, which is itself fragile, selective, and fallible (Schacter Citation2001). And, of course, each individual is born to experience the world through the limited window of reality to which they are necessarily constrained. The combination of these propensities means that the accepted ‘normality’ of each person’s existence is exquisitely contingent upon their own historical epoch and their expressed capacities during those times. With progressive maturity, each individual becomes aware of the assumptions and constraints that frame this, their personal existence. It is critical to emphasise that these forms of conscious experience have, to date, always necessarily been heterogeneous and idiographic. What is accepted as normal today may have been considered odd or even extreme some decades ago, or may be so considered some decades into the future. All individuals react to the circumstances with which they are presented. However, in temporal terms, these human-framed epochs can be far too short, or more polemically, far too long. Let us clarify and illustrate these collective principles with a human-centred example. One well-known characteristic of human remembering concerns the occurrence of flashbulb memories. These are recalled instances of high-salience events, and they can be used to help individuals calibrate the ‘times of their life.’ To personalise this a little more, one of my own first flashbulb memories concerns the assassination of President John F. Kennedy. I will not go into specific detail except to say that for many (if not most) readers, this event will be viewed as ‘historical;’ that is, they were not alive at that time. But let us now slide the window of social memory forward to include flashbulbs such as the death of Princess Diana or 9/11. However, some readers will still have no direct memory of the latter moments.
Yet, for these individuals it may be useful to consider the proposition that there are now growing children who have only become ‘aware’ during a time of pandemic, and now even the latter global catastrophe is beginning to ‘fade’ from the immediacy of social consciousness. These observations lead us to recognise a progressive and invidious ‘window of ignorance.’ It is evident that the nominally comparable construct of reality, for a machine system, need bear no necessary resemblance to this human characteristic of experienced reality. The frame of the former, human window is differentiable by direct perception versus indirect inference (at least historically speaking, and in time-in-passing). Importantly, modern technologies are beginning to dissolve this rigid threshold of momentary reality. The latter holds great promise for us to live beyond our own bespoken limits (Hancock Citation2023). However, as a general proposition, newly accepted social ‘normalities’ are constantly being created, and with the pace of technology, at an ever-faster rate. In essence, this is where the miners, farmers, pilots, and drivers go. They dissolve in a new ‘norm.’

Crucially, the temporal framing of human-autonomy dyads promises to be not simply quantitatively different but rather, qualitatively different also. For example, there is no necessary reason why any innovative machine autonomies will possess a finite expiry date. While it is likely such an ‘entity’ will evolve, its emerging self-awareness will not be a terminal one. The latter, of course, currently characterises the human condition (Hancock Citation2002). Yet bifurcation between human and machine in this manner of mortality may well become completely vestigial, as the impetus of emergence suggests. This asks us to consider whether the ‘unit of analysis’ for work, and even for life itself, needs to evolve from a purely human-centred one to something quite radically different. More starkly, is the human still actually the entity to focus on?

A saltation in the unit of analysis?

When we ask the general question, are humans still necessary, we need to first frame the context of that question. For, it may be a simple truism that humans will always be necessary, at least to themselves. However, the present discussion is somewhat constrained since the arguments are largely framed around the ergonomic context of the ‘laws of work.’ Thus work, and its accomplishment, is the proximal focus, and especially the human role in completing that work. For well beyond the limits of recorded history, human-motivated work has predominantly been accomplished through the use of ability-facilitating tools (cf., Oakley Citation1957; Navarro and Hancock Citation2023). This is not to say that all human work is always accomplished through the use of tools, nor indeed that other animals do not use tools for their own purposes; assuredly they do (see Seed and Byrne Citation2010). However, the primary forms of material and electronic transformation that we currently view as work are effected by a combination of human(s) and tool(s) in concert. But this operational dyad is not static, and the tool element of that combination is evolving at an incommensurately rapid pace compared to its human host (Hancock Citation2009). An obvious consequence of these differential rates of change is that the balance of ‘power’ between human and tool is also evolving. A ‘unit of analysis’ which once exclusively featured humans, and thus human-centred designs and user-centred perspectives, is now becoming less clear as to who is in charge. Some commentators have opined that the ‘unit of analysis’ might now be focused on ‘use-centred design’ (Flach and Dominguez Citation1995). Here, the task becomes the central focus, and the combination of entities that achieve that goal are subordinate to that purpose and are then individually not the focus of analysis. However, any such re-focusing eventually has to return to the nature of the conduit of the work.
If the unification and intimacy of human-technology interaction is the new ‘unit of analysis,’ where is the stopping-rule to the expansion of the impact of the tool component of the emergent partnership? As we know, the propensity in all evolution is for the more adapted element to overtake, predominate, and then prospectively excise the less abled. This is the dystopian vision that is now readily feared. However, this is to restrict an ‘artificial’ creation (Simon Citation1968) to the strictures of organic evolution. Critically, it remains undetermined as to whether the principles of biomimesis (Thompson Citation1917; Passino Citation2004) can apply to autonomous technological systems, or even realise the tantalising vision of a superior dyad of human-autonomy integration which may still emerge (and see Shneiderman Citation2022). The proposition here is that whatever prospective degree of human-technology intimacy develops, the ‘unit of analysis’ of a science of work has itself to change and evolve. While ergonomists may well still focus upon some form of human-featured perspective, it is more likely in our present socio-economic, capital-driven configuration that the ‘unit of work’ and its cost begin more and more to predominate. In the searchlight of that hunt for efficiency, the human worker now looks less and less an appealing instrument for ‘work’ delivery, even when in tandem with an autonomous ‘team-mate’ (and see: Harris et al. Citation1995; Hancock Citation2017a). Given the possibility of such a saltation in the considered unit of analysis, exactly what is it that is expected to ‘emerge’? It is to this question that we now turn.

The emergence of emergence

It is becoming, and will continue to become, ever clearer that the intelligent systems we have created, and continue to create, are already starting to exhibit emergent behaviour (cf., Dekker, Hancock, and Wilkin Citation2013; Kosinski Citation2023). Such emergent behaviour has been described as occurring when a quantitative change in a system results in a qualitative change in its behaviour (and see Anderson Citation1972). The evidence we have seems to be most prominent in certain areas of neuroscientific specialisation (e.g. Mohsenzadeh et al. Citation2020; Nasr, Viswanathan, and Nieder Citation2019; Stoianov and Zorzi Citation2012). Most recently these have centred upon the intriguing properties of large-scale language models (see e.g. Wei et al. Citation2022). Of course, in and of themselves, emergent properties possess no necessary privilege. However, they may well exhibit surprising propensities since, almost by definition, such behaviours are no simple linear extrapolations of the component behaviours from which they ‘emerge.’ Emergence then can be highly problematic since human observers frequently, although of course not ubiquitously, believe in a mantra of predictable extrapolation and explainability. One of the examples that I have now discussed upon multiple occasions concerns the assumptions associated with time (e.g. Hancock Citation2022). Since it is challenging to describe what ‘emergent time’ might be for intelligent systems, I have focused instead upon the more tractable issue of intrinsic time-scales of their actions. Ultimately, there is no necessity for emerging intelligent systems to adhere to our human conception of what composes ‘normal’ epochs of time. So, regardless of any outré or unpredictable nature of emergence, there is no necessary reason why the speed of any autonomous system actions has to accord with, or cater to, our own intrinsic human scaling of time.
That is, unless we build constraints into such operational capacities (Hancock Citation2017); the salutary observation here is that we have not done so, and are not presently doing so. But emergence is more immediately problematic than this one-dimensional consideration. We are familiar with applicable Gedanken experiments in which AI-fueled systems optimise on one singular goal alone to the exclusion of all other considerations (see e.g. Manheim Citation2019). Yet, such examples are themselves linear extrapolations and largely anthropocentric in nature. It is far more likely that emergence will follow the types and forms of other empirical, exploratory, and evolutionary behaviour. Many, if not most, forms of autonomy will be hidden from human view and consciousness by dimensions of spatio-temporal scale. Emergent autonomous systems may also experience extinction at temporal rates far exceeding the lower bounds of human temporal perception. Their ecosphere of existence is, and will be, radically different from our own.

Notwithstanding the above, during the indeterminate and currently advertised ‘collaborative’ phase between humans and such systems, their forms of ‘intelligence’ will be turned, like mightily over-powered weapons, often upon human problems of trivial import. Thus contemporary ‘robots’ are often charged with paltry tasks, but they can and will rapidly expand in their functionality, all still perhaps ‘within plain sight.’ If the latter implies some form of Machiavellian machine conspiracy, it only reflects our own propensity to consider such a future as a ‘human nightmare as opposed to a machine dream’ (Hancock Citation1997). Much of this concern is founded upon the propensity of attribution and the inherent errors that are embedded in that proclivity (Hancock, Lee, and Senders Citation2023). Our own well-tuned capacities for empathy and veridical attribution may well mis-serve us here. This, even as we continue to pursue machine replicants of humanity; a trope that has occupied literature from the Golem through Frankenstein’s monster to all the modern media incarnations (Schaefer et al. Citation2015). In short, what emerges is largely unpredictable from what is known and that which is assumed to be known (Hancock Citation2022a). To suppose otherwise is to court disaster. But what does that mean for a world of work and the continuity of necessary human participation in it?

As far as we might presently see, emergent intelligence results overall in a continuing diminution of human work participation. While optimists proclaim an ever-expanding ‘vista’ of elaborated collaborative work, reality is showing us a very different picture. For, the work that is created is well described as ‘high-end’ forms of human cognitive participation, which is rather characteristic of the optimists themselves. The work that is ‘transformed’ is of the lower-end, rote form which characterises much of what is actually done by the majority of people. Optimism, whilst inextricably linked to the ‘human as hero’ narrative, tends to overestimate average human capabilities. At best, humans are capable of an approximately three-fold improvement with skill accumulation (cf., De Winter and Hancock Citation2021). However, now such an order of improvement represents at best only a comparatively marginal return on the time invested. Prognostications from literary sources are replete with such warnings of human diminishment (e.g. Pohl Citation1975; Vonnegut Citation1952). While it is true that many current labours involve humans amusing themselves, which would seem to necessitate continued human participation, many more service functions are being ‘automated’ daily, and soon AI-created student essays will be marked by the AI systems that have replaced human professors, potentially obviating the need for any human in the higher educational loop at all. The latter may seem humorous when stated today, but much less so next year, or within the next picosecond on the time scale of machine evolution (and see Dolgikh Citation2018). Less amusingly, soon only robots will prepare and serve your fries.

Humans will, if presented with the opportunity, long debate the nature of the self and the philosophical and psychological implications of that discussion (and see Baumeister Citation2010). Many still hold to the mantra of human exceptionalism in still-remaining work domains. Yet any and all such considerations and discussions are themselves only feasible in a coherent form as a result of civilised existence. Independent of many other ways in which human existence is threatened by its own social actions (Hancock Citation2019), our world is founded upon the sustenance of remunerative work. The assurance of that world is beginning to crumble. Contemporary safety concerns have been raised (on the day of this writing) concerning under-manned, cross-country trains transporting toxic materials. Yet soon fully automated trains are liable to become the new norm, as discussed above. Job losses to automation have always been considered to be absorbed by other sectors of work. However, machine ‘emergence’ is, either by direction or by exploration, liable to flood virtually all of these niches. The general expectation is that human legislators and arbitrators will advocate and enact human-first policies. That perception is incorrect since shortly, emergent capacities will obstruct such actions, albeit for reasons that at first seem reasonable and even moral, e.g. safety and productivity (e.g. Park et al. Citation2020), and then for reasons that no one will be able either to fully justify or determine. The choice is not between optimism and pessimism but now between a misplaced enthusiasm and a looming reality.

The long goodbye?

When the work environment can be deterministically sculpted (Simon Citation1988), automation is first feasible, then subsequently becomes practicable, and next gradually predominates. We have witnessed these developments in technologies such as elevators, airport shuttles, automated doors, and the like. In modern terms, advertising for the job of elevator operator, shuttle driver, or doorman (doorperson?) is now never really contemplated. In environments that are less controlled, one can introduce automation contingent upon the degree of dynamic uncertainty involved and the injection of designed elements sufficient to reduce this uncertainty to the point that automation can be technically and economically justified. Examples here include milk production, and mining as referred to earlier, in which all tasks can now be removed from manual contributions and full task automation even achieved. Especially for manual work, roboticists fight to secure ever greater ranges of functionality. Finally, in the realm of manual work, characterised by some expression of physical transformation, there still remain human tasks (e.g. tree-trimming) which might be done by robots but up to the present are not considered cost-effective enough to develop.Footnote1 This gap between artificial capabilities and functionally optimal necessities is rapidly compressing. This is the case since both ends of that equation are moveable. That is, robots become more effective and ubiquitous while tasks and work environments become progressively more designed to accommodate them. It may therefore not be too long before the preponderance of work environments will even be designed to not permit human contributions. Finally, there will remain some vestigial human occupations, but these will be in bespoken niches, and not necessarily the ones we might presently expect.
The erosion of the hegemony of human dominance over physical work is on-going and evident.Footnote2 The ‘battle’ over cognitive work is somewhat more complicated but equally embroiled. Similar principles apply across physical and cognitive demand (Marras and Hancock Citation2014).

Although humans acknowledge limits to their physical skills, they are perhaps proudest of their cognitive achievements. Indeed, the brain and its capacities are considered the very essence of human exceptionality. Thus, threats to cognitive work do not only involve imperilling discrete or even collective ‘jobs,’ they represent an attack on the very conception of the superiority of the human species itself. Most human jobs featuring cognitive transformations are already vulnerable to machine usurpation, and indeed many are being overtaken on a daily basis. Optimists, especially creative and innovative individuals, champion the ‘partnership’ vision of future hybrid working. Yet, this is to focus upon only a highly limited segment of cognitive work. If systems were primarily human-centred, this vision might have greater prospective credence. Yet a general, human-centred concern is not the dominant motivation of our world, in which profit remains the primary driver. And as I have previously opined, ‘work is agnostic as to source, but not as to cost.’ How long before the building blocks of cognitive work are subsumed by technology? Already, tasks dominated by pattern-matching demands are falling to current innovations (Hancock Citation2022, Citation2022b).Footnote3 Yet even these ‘disturbing’ advances are only the very recent expressions of a long march that shows no evidence of ceasing. Since Fitts (Citation1951), and indeed well before, humans have clung to the illogically founded pabulum of their continuing necessity. The 1951 Fitts listing of what ‘men are better at’ (Scallen and Hancock Citation2001) has now been recast in one form as an eightfold specification of human expertise. These yearnings include the following:

  1. memory skills which draw on deep experience and stored mental models to problem solve,

  2. perceptual skills to focus on what matters,

  3. cognitive skills to interpret perceptual input, imagine future possibilities, create new opportunities, monitor their own performance, and reflect on their impact,

  4. motor skills shaped by deep experience then guided by the current perception and cognition,

  5. tool skills with physical devices and fluency with mathematics, language, software, etc., and

  6. social skills to effectively work with peers, superiors, and staff.

  7. metacognitive skills by which experts seek to improve their skills, mentor more junior experts, and advance their tools and work environment.

  8. accept responsibility for their actions, morally, and legally. (Shneiderman Citation2022; direct on-line communication)

Yet these remaining optimistic qualities readily promise to evanesce before ‘time’s wan wave.’ Thus, the foundation for our apparently assured belief in human necessity, at least in respect to work, is crumbling before our eyes. But because these are eyes framed by each bespoken ‘window of ignorance,’ we have collectively managed to postpone, then avoid the more problematic implications of this evolution. The landscape of motivation may well shift in tectonic ways on timescales barely registerable by human beings. The human social dysfunctionality that will follow should already be clear and evident.

Summary and conclusions

One of the earliest luminaries in all of experimental psychology opined that you make no impact in the world when you temper your argument and provide a balanced treatment of the issues (Watson Citation1913). Much as we would like to dispute this assertion, it most probably holds greater truth now than we, in science, would care to acknowledge. Human attention is drawn by extremes and controversial, polarising statements (and see Hancock Citation2012, Citation2022c). Thus, with respect to work, I would like to suggest that the age of the human worker is waning. I do not argue that work itself will disappear, nor the wellspring of motivation from which it flows. We have, across all history, recorded moments in which technology has threatened discrete groups of human workers (see Lincoln Citation1931). From Kett’s Rebellion (Clayton Citation1912; Hoare Citation1999; Jary Citation2018) through the cabal of the Luddites to even more contentious expressions of modern saboteurs, there have always been movements that have contested the power of the machine and the right of the human as involved in the process of work (Hancock Citation2019a). But now we face planetary economies of scale when not just discrete groups are embroiled but when all human participation is in peril. Like other existential threats that are impressing themselves upon us (Bucknall and Dori-Hacohen Citation2022; Carlsmith Citation2022; Hancock Citation2023), the answer is always sought in technology, although the nature of any such technological answer may be becoming less and less palatable by the day.

Major points

  • As individuals, human beings will always remain necessary, at least to themselves.

  • Economic profit is agnostic as to the source of work, but not its cost.

  • The growth of autonomous systems radically changes this landscape of work.

  • With respect to work, there is no guarantee that human participation is in any way assured.

  • The ‘unit of analysis’ of work is thus in a state of great flux.

  • The idea of what it is to be ‘human’ may well change in light of ever-greater human-machine intimacy.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The author reported there is no funding associated with the work featured in this article.

Notes

1 In a fast-moving world, I have little doubt that this statement will soon be falsified.

2 Some will protest that the majority of work still falls to human hands, and this may still even be true today but unfortunately, false tomorrow. This situation is shifting, and the weight of progress is virtually all being thrown on the non-human side of the balance.

3 Such is the concern here that there are world-wide calls for some immediate pause on large-scale AI developments: https://futureoflife.org/open-letter/pause-giant-ai-experiments/. One test of the present polemic and its projections lies in whether such a hiatus will actually occur. The assertion here, of course, is that it will not.

References

  • Anderson, P. 1972. “More is Different.” Science (New York, N.Y.) 177 (4047): 393–396. doi:10.1126/science.177.4047.393.
  • Bainbridge, L. 1983. “Ironies of Automation.” Automatica 19 (6): 775–779. doi:10.1016/0005-1098(83)90046-8.
  • Baumeister, R.F. 2010. The Self. Oxford: Oxford University Press.
  • Bucknall, B.S, and S. Dori-Hacohen. 2022. “Current and near-Term AI as a Potential Existential Risk Factor.” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 119–129. doi:10.1145/3514094.3534146.
  • Byrne, D. 2017. “Eliminating the Human.” MIT Technology Review 120 (5): 8–10.
  • Carlsmith, J. 2022. “Is Power-Seeking AI an Existential Risk?” arXiv preprint arXiv:2206.13353.
  • Clayton, J. 1912. Robert Kett and the Norfolk Rising. 2010 ed. Norfolk: Mousehold Press.
  • Dekker, S.W.A., P.A. Hancock, and P. Wilkin. 2013. “Ergonomics and Sustainability: Towards an Embrace of Complexity and Emergence.” Ergonomics 56 (3): 357–364. doi:10.1080/00140139.2012.718799.
  • De Winter, J., and P.A. Hancock. 2021. “Why Human Factors Science is Demonstrably Necessary: Historical and Evolutionary Foundations.” Ergonomics 64 (9): 1115–1131. doi:10.1080/00140139.2021.1905882.
  • Dolgikh, S. 2018. “Spontaneous Concept Learning with Deep Autoencoder.” International Journal of Computational Intelligence Systems 12 (1): 1. doi:10.2991/ijcis.2018.25905178.
  • Fitts, P. M. (Ed.). 1951. Human Engineering for an Effective Air Navigation and Traffic Control System. Washington, DC: National Research Council.
  • Flach, J.M, and C.O. Dominguez. 1995. “Use-Centered Design: Integrating the User, Instrument, and Goal.” Ergonomics in Design: The Quarterly of Human Factors Applications 3 (3): 19–24. doi:10.1177/106480469500300306.
  • Gleick, J. 2000. Faster: The Acceleration of Just about Everything. New York: Vintage.
  • Hancock, P.A. 1997. Essays on the Future of Human-Machine Systems. Eden Prairie, MN: Banta.
  • Hancock, P.A. 2002. “The Time of Your Life.” KronoScope 2 (2): 135–165.
  • Hancock, P.A. 2009. Mind, Machine, and Morality. Aldershot: Ashgate.
  • Hancock, P.A. 2012. “Notre Trahison Des Clercs: Implicit Aspiration, Explicit Exploitation.” In Psychology of Science: Implicit and Explicit Reasoning, edited by R.W. Proctor, and E.J. Capaldi, 479–495. New York: Oxford University Press.
  • Hancock, P.A. 2014. “Automation: How Much is Too Much?” Ergonomics 57 (3): 449–454. doi:10.1080/00140139.2013.816375.
  • Hancock, P.A. 2017. “Imposing Limits on Autonomous Systems.” Ergonomics 60 (2): 284–291. doi:10.1080/00140139.2016.1190035.
  • Hancock, P.A. 2017a. “On the Nature of Vigilance.” Human Factors 59 (1): 35–43. doi:10.1177/0018720816655240.
  • Hancock, P.A. 2019. “In Praise of Civicide.” Sustainable Earth 2 (1): 1–6. doi:10.1186/s42055-019-0014-9.
  • Hancock, P.A. 2019a. “The Humane Use of Human Beings.” Applied Ergonomics 79: 91–97. doi:10.1016/j.apergo.2018.07.009.
  • Hancock, P.A. 2022. “Avoiding Autonomous Agents’ Adverse Actions.” Human–Computer Interaction 37 (3): 211–236. doi:10.1080/07370024.2021.1970556.
  • Hancock, P.A. 2022a. “Reacting and Responding to Rare, Uncertain, and Unprecedented Events.” Ergonomics 66 (4): 454–478. doi:10.1080/00140139.2022.2095443.
  • Hancock, P.A. 2022b. “Advisory Adumbrations about Autonomy’s Acceptability.” Human–Computer Interaction 37 (3): 263–280. doi:10.1080/07370024.2022.2039658.
  • Hancock, P.A. 2022c. “Machining the Mind to Mind the Machine.” Theoretical Issues in Ergonomics Science 24 (1): 111–128. doi:10.1080/1463922X.2022.2062067.
  • Hancock, P.A. 2023. “Quintessential Solutions to Existential Problems: How Human Factors and Ergonomics Can and Should Address the Imminent Challenges of Our Times.” Human Factors. Advance online publication. doi:10.1177/00187208231162448.
  • Hancock, P.A., M.H. Chignell, and A. Loewenthal. 1985. “An Adaptive Human-Machine System.” Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 15, 627–630.
  • Hancock, P.A., R. Jagacinski, R. Parasuraman, C.D. Wickens, G. Wilson, and D. Kaber. 2013. “Human-Automation Interaction Research: Past, Present, and Future.” Ergonomics in Design: The Quarterly of Human Factors Applications 21 (2): 9–14. doi:10.1177/1064804613477099.
  • Hancock, P.A., J.D. Lee, and J.W. Senders. 2023. “Attribution Errors by People and Intelligent Machines.” Human Factors. Advance online publication. doi:10.1177/00187208211036323.
  • Hancock, P.A., I. Nourbakhsh, and J. Stewart. 2019. “On the Future of Transportation in an Era of Automated and Autonomous Vehicles.” Proceedings of the National Academy of Sciences of the United States of America 116 (16): 7684–7691. doi:10.1073/pnas.1805770115.
  • Harris, W.C., P.A. Hancock, E. Arthur, and J.K. Caird. 1995. “Performance, Workload, and Fatigue Changes Associated with Automation.” The International Journal of Aviation Psychology 5 (2): 169–185. doi:10.1207/s15327108ijap0502_3.
  • Hoare, A. 1999. An Unlikely Rebel: Robert Kett and the Norfolk Rising, 1549. Wymondham, Norfolk: Reeve.
  • Honoré, C. 2005. In Praise of Slowness: Challenging the Cult of Speed. New York, NY: HarperCollins Publishers.
  • Jary, L.R. 2018. Kett-1549: Rewriting the Rebellion. Lowestoft, England: Poppyland Publishing.
  • Kaplan, A.D., T.T. Kessler, J.C. Brill, and P.A. Hancock. 2023. “Trust in Artificial Intelligence: Meta-Analytic Findings.” Human Factors 65 (2): 337–359. doi:10.1177/00187208211013988.
  • Kosinski, M. 2023. “Theory of Mind May Have Spontaneously Emerged in Large Language Models.” Preprint. https://osf.io/csdhb/.
  • Kurzweil, R. 2005. The Singularity is Near. New York: Viking Books.
  • Lincoln, B. 1931. “Is Man Doomed by the Machine Age?” Modern Mechanics and Inventions 5 (5): 50–55.
  • Manheim, D. 2019. “Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence.” Big Data and Cognitive Computing 3 (2): 21. doi:10.3390/bdcc3020021.
  • Marras, W., and P.A. Hancock. 2014. “Putting Mind and Body Back Together: A Human-Systems Approach to the Integration of the Physical and Cognitive Dimensions of Task Design and Operations.” Applied Ergonomics 45 (1): 55–60. doi:10.1016/j.apergo.2013.03.025.
  • Mohsenzadeh, Y., C. Mullin, B. Lahner, and A. Oliva. 2020. “Emergence of Visual Center-Periphery Spatial Organization in Deep Convolutional Neural Networks.” Scientific Reports 10 (1): 4638. doi:10.1038/s41598-020-61409-0.
  • Nasr, K., P. Viswanathan, and A. Nieder. 2019. “Number Detectors Spontaneously Emerge in a Deep Neural Network Designed for Visual Object Recognition.” Science Advances 5 (5): eaav7903. doi:10.1126/sciadv.aav7903.
  • Navarro, J., and P.A. Hancock. 2023. “Did Tools Create Humans?” Theoretical Issues in Ergonomics Science 24 (2): 206–232. doi:10.1080/1463922X.2022.2076954.
  • Oakley, K. 1957. “Tools Makyth Man.” Antiquity 31 (124): 199–209. doi:10.1017/S0003598X00028453.
  • Parasuraman, R., and C.D. Wickens. 2008. “Humans: Still Vital after All These Years of Automation.” Human Factors 50 (3): 511–520. doi:10.1518/001872008X312198.
  • Park, Chan Woo, Sung Wook Seo, Noeul Kang, BeomSeok Ko, Byung Wook Choi, Chang Min Park, Dong Kyung Chang, Hwiyoung Kim, Hyunchul Kim, Hyunna Lee, Jinhee Jang, Jong Chul Ye, Jong Hong Jeon, Joon Beom Seo, Kwang Joon Kim, Kyu Hwan Jung, Namkug Kim, Seungwook Paek, Soo Yong Shin, Soyoung Yoo, Yoon Sup Choi, Youngjun Kim, and Hyung Jin Yoon. 2020. “Artificial Intelligence in Health Care: Current Applications and Issues.” Journal of Korean Medical Science 35 (42): e379. doi:10.3346/jkms.2020.35.e379.
  • Passino, K. M. 2004. Biomimicry for Optimization, Control, and Automation. New York: Springer.
  • Pohl, F. 1975. “The Midas Plague.” In The Best of Frederik Pohl, edited by L. del Rey, 112–161. New York: Taplinger Publishing Company. (Originally published in 1954).
  • Russell, S. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.
  • Scallen, S.F., and P.A. Hancock. 2001. “Implementing Adaptive Function Allocation.” The International Journal of Aviation Psychology 11 (2): 197–221. doi:10.1207/S15327108IJAP1102_05.
  • Schacter, D.L. 2001. The Seven Sins of Memory: How the Mind Forgets and Remembers. New York: Houghton-Mifflin.
  • Schaefer, K.E., J.K. Adams, J.G. Cook, A. Bardwell-Owens, and P.A. Hancock. 2015. “The Future of Robotic Design: Trends from the History of Media Representations.” Ergonomics in Design: The Quarterly of Human Factors Applications 23 (1): 13–19. doi:10.1177/1064804614562214.
  • Seed, A., and R. Byrne. 2010. “Animal Tool-Use.” Current Biology: CB 20 (23): R1032–R1039. doi:10.1016/j.cub.2010.09.042.
  • Shneiderman, B. 2022. “Extraordinary Excitement Empowering Enhancing Everyone.” Human–Computer Interaction 37 (3): 243–245. doi:10.1080/07370024.2021.1977128.
  • Simon, H.A. 1968. The Sciences of the Artificial. Cambridge, MA: MIT Press.
  • Simon, H.A. 1988. “The Science of Design: Creating the Artificial.” Design Issues 4 (1/2): 67–82. doi:10.2307/1511391.
  • Stoianov, I., and M. Zorzi. 2012. “Emergence of a “Visual Number Sense” in Hierarchical Generative Models.” Nature Neuroscience 15 (2): 194–196. doi:10.1038/nn.2996.
  • Tabone, Wilbert, Joost de Winter, Claudia Ackermann, Jonas Bärgman, Martin Baumann, Shuchisnigdha Deb, Colleen Emmenegger, Azra Habibovic, Marjan Hagenzieker, P.A. Hancock, Riender Happee, Josef Krems, John D. Lee, Marieke Martens, Natasha Merat, Don Norman, Thomas B. Sheridan, and Neville A. Stanton. 2021. “Vulnerable Road Users and the Coming Wave of Automated Vehicles: Experts Perspectives.” Transportation Research Interdisciplinary Perspectives 9: 100293. doi:10.1016/j.trip.2020.100293.
  • Tegmark, M. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf.
  • Thompson, D’Arcy W. 1917. On Growth and Form. Cambridge: Cambridge University Press. (Dover 1992 reprint of the 1942 2nd ed.; 1st ed., 1917).
  • Vonnegut, K. 1952. Player Piano. New York: Delacorte Press/Seymour Lawrence.
  • Watson, J.B. 1913. “Psychology as the Behaviorist Views It.” Psychological Review 20 (2): 158–177. doi:10.1037/h0074428.
  • Wei, J., Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, … W. Fedus. 2022. “Emergent Abilities of Large Language Models.” arXiv preprint arXiv:2206.07682.