Innovation: Organization & Management
Volume 23, 2021 - Issue 1
Essay

Revising the ‘science of the organisation’: theorising AI agency and actorhood

Pages 127-144 | Received 20 Jan 2020, Accepted 24 Aug 2020, Published online: 11 Sep 2020

ABSTRACT

Artificial intelligence is a central technology underpinning the fourth industrial revolution, driving dramatic changes in contemporary cyber-physical systems and challenging existing ways of theorising organisations and management. AI agency and the rise of the artificially intelligent agent are both fundamentally different from and yet increasingly similar to human agency in terms of intentionality and reflexivity. As ‘Child AI’ emerges—AI that is created by other AI—the early human design and interaction becomes increasingly distant and removed. These developments, while seemingly futuristic, change the human-technology interface through which we organise. In this essay, we explore understandings of AI agency, capability, and governance, and present implications for organisational theorising in sociomateriality, actor-network theory, institutional theory and the behavioural theory of the firm. We contribute to a growing and reflexive research agenda that can accommodate and regenerate theorising around this significant technological advancement.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. From here on, we refer simply to AI when discussing Child AI since, in the end, Child AI is also AI.

2. AI will be fundamentally different from human intelligence. Intelligence is ‘the complex expression of a complex set of principles’ (Yudkowsky, Citation2007, p. 389), consisting of multiple interdependent subsystems linked to one another. Intelligence exists due to evolution and enables humans to model, predict and manipulate reality. It enables us to reason backwards and forwards from a mental image, and to reason about desired future outcomes (Yudkowsky, Citation2007). Evolution created intelligence, but evolution itself does not possess this foresight. In fact, evolution is an unintelligent process and has left flaws in human intelligence (Yudkowsky, Citation2008). Due to various constraints (such as food availability and trade-offs with other organs or biological materials), our brains may not have evolved in the most optimised way (Armstrong et al., Citation2012). Since AI is developed by (artificially) intelligent actors with the foresight capabilities that evolution lacks (Yudkowsky, Citation2007), and uses materials and processes better suited for intelligence, it is likely that AI will consist of new forms of intelligence unfamiliar to humankind today (Armstrong et al., Citation2012; Bostrom, Citation2014).

3. It is true that almost all AI created to date is biased. After all, AI is often trained on biased data and developed by biased humans (O’Neil, Citation2016). This means that the social is very much involved in the creation of such AI. However, we are now seeing AI being trained without (biased) data at all (Deepmind, Citation2018, Citation2020) or created without (biased) developers (Le & Zoph, Citation2017). Upcoming developments, such as self-supervised learning, could create data-efficient AI systems (LeCun, Citation2020).

4. Even if an AI has been trained on biased data, it will still always improve itself from an AI perspective. That is, it becomes better at the objective it was given (for example, hiring the right candidate), but this may no longer be an improvement from a human perspective (it only hires male candidates because the biased training data showed that males were hired more often in the past). As O’Neil (Citation2016) clearly showed, an AI that discriminates has not been built correctly by the developer, but it will still always improve itself over time (Bostrom, Citation2014).
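To make this gap concrete, the toy Python sketch below is our own illustration; the data, the crude ‘model’ and its 50% hiring rule are invented assumptions, not drawn from O’Neil’s cases. It shows how a system can score highly on the objective it was given while simply reproducing the gender imbalance in its training data.

```python
# Toy illustration only: invented data and a deliberately crude "model".
# The point is the gap between the stated objective (reproducing past
# hiring decisions) and a human notion of improvement (fair hiring).
import random

random.seed(0)

# Biased historical data: 'hired' correlates almost entirely with gender.
history = (
    [{"gender": "male", "hired": random.random() < 0.9} for _ in range(500)]
    + [{"gender": "female", "hired": random.random() < 0.1} for _ in range(500)]
)

def train(data):
    """'Learn' the historical hiring rate per gender; hire any group above 50%."""
    rates = {}
    for gender in ("male", "female"):
        group = [d for d in data if d["gender"] == gender]
        rates[gender] = sum(d["hired"] for d in group) / len(group)
    return lambda candidate: rates[candidate["gender"]] > 0.5

model = train(history)
accuracy = sum(model(d) == d["hired"] for d in history) / len(history)
print(f"Accuracy on the given objective: {accuracy:.0%}")             # high, roughly 90%
print("Would hire a female candidate?", model({"gender": "female"}))  # False
```

The model ‘improves’ only in the sense of better matching past decisions; from a human perspective it has merely automated the bias in those decisions.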

5. We thank a Reviewer for this point.

6. In the case of AutoML Zero, repeated cycles aim to develop better and better learning algorithms. Two or more models are randomly selected and compete against each other. The most accurate model becomes the Parent model, which clones itself into a Child model, which is then randomly mutated. The mutated Child AI is then evaluated and paired against another model. With improved hardware and increased computing power in the coming years, it is likely that fundamentally new algorithms will be discovered, with little to no social involvement (Real et al., Citation2020).
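The cycle described above can be pictured with a minimal evolutionary-search sketch in Python. This is not the AutoML Zero implementation: the candidate ‘learning algorithms’ are reduced to coefficient vectors, the fitness function is a placeholder, and the replacement rule is simplified, all of which are assumptions made purely for illustration.

```python
# Minimal sketch of the Parent/Child evolutionary cycle (illustration only;
# candidate "algorithms" are just coefficient vectors with a placeholder fitness).
import random

def evaluate(candidate):
    """Placeholder fitness: negative squared distance to an arbitrary target."""
    target = [0.5, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate):
    """Clone the Parent into a Child and randomly perturb one component."""
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, 0.1)
    return child

def evolve(population_size=20, cycles=500):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(population_size)]
    for _ in range(cycles):
        # Two models are randomly selected and compete; the more accurate wins.
        a, b = random.sample(population, 2)
        parent = a if evaluate(a) >= evaluate(b) else b
        # The winning Parent clones itself into a Child, which is randomly mutated.
        child = mutate(parent)
        # The Child joins the population (here replacing the weakest member)
        # and is paired against other models in later cycles.
        weakest = min(range(len(population)), key=lambda i: evaluate(population[i]))
        population[weakest] = child
    return max(population, key=evaluate)

best = evolve()
print("Best evolved candidate:", best)
```

AutoML Zero itself evolves whole programs over a far larger search space; the sketch only captures the select–clone–mutate–evaluate loop the note refers to.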
