Evolving Coagency between Artists and AI in the Spatial Cocreative Process of Artmaking

Pages 2203-2218 | Received 19 Nov 2021, Accepted 02 May 2023, Published online: 07 Jul 2023

Abstract

This article applies theoretical and empirical discussions of emerging human and digital technology relations to our interest in collaborative artist–artificial intelligence (AI) artmaking processes. Thus far, the theoretical focus has largely been on mediating (code) and merged (cyborg) human–technology relations, with mutual (coagency) relations yet to be adequately explored. To address this, we nuance the theoretical discussion and extend the empirical research, analyzing the spatial cocreative artmaking process through video interviews with eighteen Finnish artists using AI. Drawing on the work of Barad, we regard humans and AI as fundamentally entwined, receiving their agencies through intra-action. Building on this, we demonstrate how the agencies of artists and AI emerge and mutually evolve across three stages of the creative process: (1) coding and data, (2) learning and training, and (3) curating the outcome. Thus, through our empirical research on how artists and AI create new material and meaningful artworlds, we are able to nuance our understanding of coagency as a spatial process. Key Words: agency, art, artificial intelligence, cocreativity, digital geographies.

Rapidly developing digital technologies are profoundly transforming our world. Automatic digital technologies have already come under extensive scrutiny in digital geographies (Thrift and French Citation2002; Dodge and Kitchin Citation2007; Kitchin and Dodge Citation2011; M. Graham, Zook, and Boulton Citation2013; Pink and Fors Citation2017; Thulin, Vilhelmson, and Schwanen Citation2020), and autonomous code-embedded technologies—such as artificial intelligence (AI) and robots—are eliciting increasing interest (Del Casino Citation2016; Batty Citation2018; Chen, Marvin, and While Citation2020; Cugurullo Citation2020; Bissell Citation2021; Macrorie et al. Citation2021). As such, there is a pressing need to investigate the creative processes taking place between humans and digital technologies, with the role of AI a particularly novel topic for geographers (Lynch and Del Casino Citation2020; Lynch Citation2022; Wingström, Hautala, and Lundman Citation2022; Lundman and Nordström Citation2023).

Rather than there being a single form of AI, a variety of AIs are employed for different purposes, with humans—or as we understand it, human–AI relations—required for such AIs to participate in creative processes (see, e.g., Del Casino et al. Citation2020). Although artificial can be taken to refer to a nonhuman, a technological subject, or both, the term intelligence has been much debated (Lynch and Del Casino Citation2020), and is often considered a human skill or capacity not yet existent in machines. Nonetheless, AI involves a display of human-like behaviors, such as learning, perception, and decision-making (Bostrom Citation2017; cf. Cugurullo Citation2020; Russell and Norvig Citation2020). Compared to automation, where digital technologies simply follow instructions (see Thrift and French Citation2002; Dodge and Kitchin Citation2007, Citation2009), AI is capable of learning autonomously (Cugurullo Citation2020; de Vries Citation2020; Forbes Citation2020), meaning actions can change during the process. Here, autonomous means human supervision of the learning process is not required (Cugurullo Citation2020; Santos et al. Citation2021; cf. Mitchell Citation1997). AI’s capacity to “exercise choice and make decisions” (Hayles Citation2017, 31) contains the potential to produce novel and surprising outputs, including within art and other creative practices (Miller Citation2019).

In this article, we study the spatial cocreative artmaking process undertaken between artists and AI. To do this, we bring digital (Ash, Kitchin, and Leszczynski Citation2018a) and creative geographies (Hawkins Citation2017a) into contact with the posthuman thinking of Barad (Citation2003, Citation2007, Citation2012, Citation2015; see also Barad and Gandorfer Citation2021). Barad’s strength lies in how she conceptualizes the relationship between agencies in world-forming processes, with her basic unit of interest being the phenomena through which agencies develop their capacities to act. Although Barad’s thinking is gaining momentum within research investigating digital technologies (Draude Citation2020; Lipp and Dickel Citation2022) and creative agencies (A. Harris Citation2018, Citation2021), it has yet to be widely applied in digital geographies (Rose Citation2017, 780).

Creativity and interaction with humans are central motivations and challenges when it comes to developing AI (Boden Citation2009; Colton and Wiggins Citation2012). As such, we propose that the cocreative artmaking process between humans and AI is a promising case with which to study human–technology relations, as well as to explore how such relations might be conceptualized in digital geographies (see also Wingström, Hautala, and Lundman Citation2022; Lundman and Nordström Citation2023). We interpret the spatialities of artmaking as artworlds, referring to the two-way relationship whereby artworks make worlds (i.e., how art performs in the world) and worlds make artworks (i.e., how the world in which art is made shapes art; Hawkins Citation2017b, xvii–xviii). The concept of cocreativity is drawn from the field of computational creativity, which is concerned with how humans and AI interact, collaborate, and facilitate within a creative process (Davis Citation2013; Feldman Citation2017). By definition, a creative process involves not just artmaking’s outcome-oriented practices, which are aimed at generating meaningful ideas or artifacts (cf. Mace and Ward Citation2002; Boden Citation2009; St-Louis and Vallerand Citation2015), but the affective, material, and relational events arising between humans and nonhumans, which hold the potential for continuous change in the world (Williams Citation2016; Nordström Citation2018; McCormack Citation2019). Hence, the cocreative process undertaken between artists and AI can lead to material, meaningful artworlds—a topic of considerable interest in creative (and) digital geographies, as well as related fields.

In this article, we advance digital geographies in two key ways. First, we nuance the theoretical discussion of what we call coagency as a mutual relation through use of Barad’s (Citation2003, Citation2007, Citation2012, Citation2015) concepts of intra-action, agential separability, topological relationalities, and mattering. In doing so, we step toward what we see as posthuman cocreative spatialities (Lundman and Nordström Citation2023). Second, we enrich the empirical research by shifting focus away from other collaborative digital technologies (e.g., screens, surveillance cameras, smartphones, and advanced visualization and rendering technologies; cf. Woodward et al. Citation2015; Rose Citation2017, 786) and toward new AI-based digital techniques. As stated, AI is different from other technologies due to its autonomous capacities. Despite this, there has been little empirical research in digital geographies on the creative process between humans and AI (except for Birtchnell and Elliott Citation2018; Wingström, Hautala, and Lundman Citation2022; Lundman and Nordström Citation2023) or other digital technologies (except for Woodward et al. Citation2015). As such, we present novel empirical findings about the cocreative processes of eighteen Finnish artists working with AI techniques, ranging from neural networks to generative adversarial networks (GANs). Based on our previous research (Wingström, Hautala, and Lundman Citation2022; Lundman and Nordström Citation2023) and the existing literature (e.g., Davis Citation2013; Kantosalo and Toivonen Citation2016), our presumption is that humans and AI are cocreative together.

In what follows, we first review the literature on human–technology relations and agencies in digital geographies, with a focus on technological productive capacities that can “make things happen” in the world (Dodge and Kitchin Citation2005a, 162). For the purposes of the review, we categorize three main approaches toward human–technology relations in the digital geography literature, namely (1) code as mediating relation, (2) cyborg as merged relation, and (3) coagency as mutual relation. Then, we develop our theoretical framework by applying the posthuman thinking of Barad to the cocreative process between artists and AI, before going on to present our methods and empirical material. Having done this, we analyze the three phases of the cocreative process as described by the artists working with AI, namely (1) entering the cocreative process with a selection of data and code, (2) training and learning during cocreation, and (3) curating creative outcomes. Finally, we summarize our main findings and offer conclusions, discussing how the creative agencies of artists and AI evolve together in the cocreative artmaking process through being connected to various spaces and times when creating new material and meaningful artworlds. By attending to the cocreative spatial process of artists and AI, we present a processual way of approaching relations between humans and technology in the era of AI—a topical issue in contemporary digital geographies.

Detecting Creativity in Human–Technology Relations

Human–Technology Relations and Agencies in Digital Geographies

We have identified three key ways of approaching human–technology relations within the existing digital geography literature: (1) code as mediating relation, (2) cyborg as merged relation, and (3) coagency as mutual relation. Although these typologies sometimes overlap and the categorization is not exhaustive, they nevertheless provide insight into how this tradition of digital geographies can be nuanced in the era of AI.

The first approach, code as mediating relation (M. Graham, Zook, and Boulton Citation2013; Leszczynski Citation2015; Pink and Fors Citation2017; Cugurullo Citation2020; Thulin, Vilhelmson, and Schwanen Citation2020), regards humans and automated code technologies as having separate agencies that affect each other, with human life influenced by “living-with technology” (Leszczynski Citation2015, 741–42). Digital geographies using this approach reveal the ways in which automated code technologies have reproduced (Thrift and French Citation2002), governed (Amin and Thrift Citation2002; S. Graham Citation2005; Dodge and Kitchin Citation2005a, Citation2005b; Leszczynski Citation2020), managed (S. Graham and Marvin Citation2001; Dodge and Kitchin Citation2005a), and augmented (M. Graham, Zook, and Boulton Citation2013) everyday life and tasks. Such technologies can alter human experience and affective response (Pink and Fors Citation2017; Leszczynski Citation2019) in a variety of environments, from gaming (Ash Citation2012, Citation2013) to urban spaces (Boulton and Zook Citation2013; Bissell Citation2020). The robot provides a prominent example of code-based agency that can facilitate human life, and has been increasingly discussed in human geography (e.g., Del Casino Citation2016; Bissell and Del Casino Citation2017; Lynch Citation2021).

The theoretical basis for many of these studies stems from philosophies of science and technology, including concepts such as “transduction” (Mackenzie Citation2002; Simondon Citation2016), “technicity” (Mackenzie Citation2002), and “technical mediation” (Latour Citation1994). Automated code technologies produce “code/spaces” where software (code) and everyday life are made through one another (Kitchin and Dodge Citation2011). Here, the potentiality for creativity manifests through transduction, which constantly creates “new means of being and acting” in everyday life (Dodge and Kitchin Citation2009, 1363), engendering new experiences, senses, and intelligences in human agencies and subjectivities (Thrift Citation2004, 596; see also Leszczynski Citation2015; Pink and Sumartojo Citation2018; Bissell Citation2021). Although human life is in many ways governed or constrained by automatic code (S. Graham Citation2005; Dodge and Kitchin Citation2005b), human agency can also be enhanced by these technologies through the possibilities they offer for movement, experimentation, and coping with uncertainty (Ash Citation2012).

The agency of automated code is limited to millions of calculations proceeding in a specific order (Thrift Citation2004, 584; cf. Thrift and French Citation2002). Thus, even as such code creates calculative and accurate actions, it might also contain errors and instabilities, sometimes careening “out of control” (Cockayne and Richardson Citation2017, 1651). Robots, which are a combination of software (code) and moving hardware, are a good example of this. Some of their functions might apply AI, making them partly autonomous agencies that can escape standard human–robot relations—for example, when drones or self-driving cars harm humans in accidents or wars (e.g., Del Casino Citation2016; Bissell and Del Casino Citation2017; Shaw Citation2017; Bissell Citation2018). Human–robot relations are usually considered hierarchical, however, and are formed in assemblages (Shaw Citation2017). As such, robot and human agency remain separate in the creative process. This is also the case for other code-based agencies that emphasize technology’s calculative, preprogrammed, and repetitive aspects. Although it is possible for code-based technologies to have creative agency—for example, when robots change their environment (Del Casino Citation2016, 846) or affect the creative agency of human beings (Lynch Citation2021)—for the most part the calculative perspective on code-based agency implicitly undermines the definition of creativity as something new and surprising (Boden Citation2009). Further research is therefore needed in digital geographies to understand the creative potentiality of new technologies, including autonomously learning AI.

The second approach within digital geographies, cyborg as merged relation, is influenced by Haraway’s (Citation1991) cyborg. Although this approach does not dwell on the separate agencies of humans and technology in processes of hybridization (Kitchin Citation1998; Schuurman Citation2002, Citation2004), some acknowledge that different subject and object positions might be formed along the way (Wilson Citation2009; Cockayne and Richardson Citation2017). With the cyborg, geographers have aimed to break the dichotomy between what is defined as human and what is defined as technology, with particular attention paid to how subject–object positions are altered when human–nonhuman boundaries are blurred (Wilson Citation2009). Haraway’s (Citation1991, 180) cyborg, especially, is both an ontological figure of the machine–organism hybrid—that is, “the machine is us”—and an epistemological figure for creating alternative knowledge. When human and machine merge to form a hybrid (Wilson Citation2009, 504), there is no separate human or technology agency but only the agency of the cyborg.

Geographers have applied the idea of a cyborg to such issues as critical urbanism (e.g., Gandy Citation2005), feminist geographic information systems (Schuurman Citation2002, Citation2004), and global online education (Sparke Citation2017). Cyborgs can see, write, and act in new ways, giving them interesting potentiality for creative agency. As a methodological tool in geography, the cyborg is used for writing counterhistories about conventional everyday situations (Henderson et al. Citation2014), building alternative ways of knowing (Schuurman Citation2002, Citation2004; Wilson Citation2009), understanding hybrid knowledge-formation processes (Sparke Citation2017), and “queering” the code/space (Cockayne and Richardson Citation2017). Regarding agency, Cockayne and Richardson (Citation2017, 1652, 1655) claimed that the potential for “social agency with technology” emerges from instabilities in the code; that is, at sites where “bodies and machines perform code/space together.” The cyborg partly liberates humans from the autonomous self, thereby providing “a means of becoming ‘posthuman’” (Gandy Citation2005, 32). Nonetheless, in terms of researching cocreativity, the cyborg as merged relation is insufficient when it comes to understanding how various agencies emerge and mutually evolve through the process, and therefore further investigations on creative relations and agencies are required in digital geographies.

More recently, a third approach, coagency as mutual relation, has emerged in digital geographies, focusing on collaboration between digital technologies and humans (Woodward et al. Citation2015; Rose Citation2017). Epistemologically, mutual coagency exists in the space between the first (separated) and second (merged) categorizations of agency. Although productive encounters regarding what humans can do or experience through technology have previously been endorsed in geography (Leszczynski Citation2015; see also Verbeek Citation2005), this emerging approach emphasizes human–technology cooperation as the starting point. In geography, Woodward et al. (Citation2015), drawing on the philosophies of Simondon, explained how technical objects can be collaborators capable of transforming human–computer collectives by presenting unique problems. As they put it, when “technical objects have the capacity to participate in relations of invention” (507), new collective individuals are created. Rose (Citation2017), following Stiegler, put forward the idea of how a human, when coconstituted with technologies, becomes a posthuman agency. Moreover, Pink and Fors (Citation2018) argued that greater attention should be paid to the coconstitution of humans and digital technologies. Lynch and Del Casino (Citation2020), meanwhile, discussed how human intelligence is augmented by AI, which, according to them, is based on data “smashed … and parsed” by algorithms formed partly by humans and partly by machine-learning technology (609). In addition, While, Marvin, and Kovacic (Citation2021, 771) recognized that robotics has become increasingly concerned with “human robotic co-evolution,” which represents more than merely replacing or replicating human actions (Chen, Marvin, and While Citation2020).

We draw on these pioneering geographical discussions in our research on how human and AI creative agencies are tied together in cocreative artmaking processes. Although recent studies in digital geographies have referred to the productive agency of digital technology (Dodge and Kitchin Citation2005b; Woodward et al. Citation2015) and changing (post)human embodiments and subjectivities (Wilson Citation2009; Pink and Fors Citation2017; Rose Citation2017), they have not examined in any great depth how the agency of digital technology changes in these relations, nor how this affects processes that generate new spatialities and subjectivities. In particular, we are interested in the changes both human and AI agencies undergo during the cocreative process, and how this evolving mutuality can transform artworlds.

Human–Technology Relations in Related Fields

Outside geography, several recent studies offer intriguing findings about relations that could be considered mutual coagencies between humans and technology. The first field of interest is human–computer interaction (HCI) research. Guzman and Lewis (Citation2020) established a research agenda related to AI’s role as communicator rather than mediator, arguing that the ontological boundaries of what constitutes human, machine, and communication are blurred, and that communication involving AI extends meaning-making beyond operations “through” the machine to those “with” the machine (81; see also Gemeinboeck and Saunders Citation2022). Draude (Citation2020), who discussed Barad and Haraway’s critical posthumanism in the context of HCI studies, argued that HCI should be thought of “as a relational, networked activity of matter and meaning intermeshing” (23). Both these studies are valuable in analyzing how artworlds are made sense of in the cocreative artmaking process between artists and AI.

In other fields, there is also a growing interest in comprehending human–digital technology relations as a process. In anthropology, Hasse (Citation2022, 146) noted this “human–machine relations as a process” when studying children’s changing perceptions about robots. We find such a processual relation underdeveloped, however, as it only acknowledges changing human agency (which, as shown earlier, had already been articulated by numerous geographers). More interesting is Lipp and Dickel’s (Citation2022) focus on how humans and machines are both separated and interconnected through “interfacing,” which for them is a “folded performativity of human/machine relations” (15), taking place here and now. We advance this thinking with our empirical research on cocreative practices between artists and AI, as well as our considerations of how the cocreative artmaking process can be interpreted from a spatial perspective.

Elsewhere, A. Harris’s (Citation2018) conceptualization of “creative ecologies” provides a noteworthy research opening, as it recognizes how humans and nonhumans can more effectively generate and evolve when facilitated by creative environments and networks. Harris put forward “creative agency” as an emergent assemblage or embodiment of intra-action and the mutual constitution of entwined agencies. We further develop this notion by studying the evolving agencies of humans and AI, and how this coagency affects artworlds. In other words, we are not only interested in how collective agency arises from creative environments and networks, but also how agencies mutually change in the artmaking process when connected to other spaces and times (cf. Massey Citation2005; Nordström Citation2018).

Developing Digital Geographies through Research on the Human–AI Cocreativity Process

The Entwinement of Humans and AI

We recognize the concept of coagency as mutual relation as being suitable for empirical research on human–AI cocreativity processes. Whereas Rose (Citation2017) discussed how (post)human agency is always sociotechnical, Barad’s concept of intra-action (discussed further later) helps us understand just how entwined humans and AI are. The agencies of humans (in our case, artists) and AI emerge as partly separated from each other, generating special capacities that act through what we call a spatial artmaking process. In our study, agency formation is related to the creative process: Artists (as humans) hold the potentiality for multiple agencies, identities, and roles, but when they enter the creative process they perform as artists, and AI emerges as a creative AI. We acknowledge, however, that human agency is different from the agency of AI, which only mimics human performance (Simonsen Citation2013). Humans can feel and experience the newness, surprise, and value of outcomes. AI, by contrast, can learn to generate novel outputs, but cannot make sense of them.

Before proceeding, it is necessary to clarify with reference to some geographical texts why humans and AI do not exist as independent entities but are fundamentally entwined. By this, we mean that humans and AI are already bonded through the ways in which AI is produced (by humans) and how this affects everyday human life. Here, we find resonance with Woodward et al. (Citation2015), who touched on Simondon’s interpretation of technical objects as entities that contain human nature in their technical being. The same applies to AI techniques: AI is a specific form of code-embedded technology aimed at replicating human intelligence, and has a capacity to learn, perceive, make sense, solve problems, and make decisions (Bostrom Citation2017; cf. Cugurullo Citation2020; Russell and Norvig Citation2020). AI is formed by algorithms consisting of explicit instructions and models of ordered action (Berry Citation2015), enabling the AI to perform tasks autonomously based on human-set preconditions (cf. Del Casino et al. Citation2020).

Kitchin and Dodge (Citation2011) wrote about “ontogenetic” algorithms, which are “always in the state of becoming” and “teased into being” (Kitchin Citation2017, 18). Such algorithms are coded by humans as part of sociotechnological assemblages and then applied and modified for various aims at multiple sites (Kitchin Citation2017; see also Dodge and Kitchin Citation2009). This idea of ontogenesis is also applicable to how AI algorithms are created and distributed collectively. Moreover, before AI can perform intelligence, it must be fed with data produced by various technologies (Janowicz et al. Citation2020; see also Wilson Citation2011); to learn autonomously, it must study input data and training material (e.g., images, sounds, texts, movements) selected by humans and often produced for some other initial purpose.

Geographers have also been interested in viewing AI as part of wider constellations that affect humans in multiple ways, including exercising power over bodies through governance (Kitchin Citation2017; Cugurullo Citation2020) and surveillance (Del Casino Citation2016; Chen, Marvin, and While Citation2020), restructuring the urban context and planning (Batty Citation2018; Macrorie, Marvin, and While Citation2021), exacerbating and improving uneven development (McDuie‐Ra and Gulson Citation2020), and managing services through digital platforms (Bissell Citation2020; Leszczynski Citation2020). In addition, numerous businesses and public organizations have taken an interest in AI-related methods, funding, and customers (Lundman and Nordström Citation2023). To sum up, from a geographical perspective, humans and AI cannot be viewed separately—rather, they are intimately entwined through the production of AI algorithms and training data, as well as through AI’s influence on humans via a variety of situated performances and societal contexts and processes.

The Cocreative Process between Humans and AI in Artmaking

Our approach to human and AI cocreativity is processual, meaning we define creativity in terms of two key aspects. The first involves a context-specific approach focused on the directional elements of the creative process, where the aim is to create concrete outcomes through creative practices (Lundman and Nordström Citation2023). The second concerns the continuous changes taking place in the world, and encompasses the potentiality for “new relations between the corporeal and material to emerge” (Williams Citation2016, 1561) and new ideas and ways of seeing to develop (Woodward et al. Citation2015; Nordström Citation2018). These might be experienced by the artist as creative moments or events. Such moments can happen between humans and nonhumans, and as such, they are linked to what has been referred to as posthuman creativity (Łapińska Citation2020; Chappell Citation2022; D. Harris and Holman Jones Citation2022; Jagodzinski Citation2022) or postanthropologic creativity (Roudavski and McCormack Citation2016; McCormack Citation2019). Interpreted in Barad’s (Citation2003) terms, we approach cocreativity as “neither a matter of strict determinism nor unconstrained freedom” (826). Accordingly, the cocreative process in artmaking revolves around directional and goal-oriented actions that arise from the creative capacities developed through artist–AI relations (Boden Citation2009), as well as uncontrollable experiments in chance, play, and surprise (Lundman and Nordström Citation2023; cf. Williams Citation2016; Hawkins Citation2017a).

The cocreative process follows what can be characterized as the general phases of artmaking. This begins with theme preparation and idea generation and selection, which might be subconscious activities, followed by the production and finally completion of the artwork (Mace and Ward Citation2002; St-Louis and Vallerand Citation2015). Artistic creation does not necessarily, however, proceed linearly from one stage to the next—instead, the developing artwork might “return to an earlier developmental phase,” or new ideas might emerge during the process (Mace and Ward Citation2002, 182). This formation of new ideas is tied to changes in ways of seeing, with artists experiencing new thoughts, insights, and perspectives throughout the process (Lundman and Nordström Citation2023). AI affects this process of artmaking in various ways, as it is capable of autonomous learning and can generate new material by following what it has learned (see also Mazzone and Elgammal Citation2019). In other words, AI changes the artistic creative process into a cocreative process between humans and AI.

In traditional algorithmic art, an artist defines the code’s rules to attain the desired aesthetic. In art involving AI, the artist works with algorithms that have the capacity to change, learn, and make decisions based on previous phases of the process (Mazzone and Elgammal Citation2019). In particular, neural networks and deep learning—AI techniques based on autonomous machine learning (Santos et al. Citation2021; cf. Mitchell Citation1997)—carry the potentiality for AI to hold creative agency of its own. Although AI can exhibit a creative agency resembling that of humans, its creativity is not solely the product of computational models—rather, it is shaped by various humans in situated encounters (Avdeeff Citation2019; Lynch Citation2022).
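
To make this contrast concrete, the following deliberately toy Python sketch is our own illustration, not drawn from the interviews: the first routine follows a rule fixed entirely by its author, whereas the second adjusts its single parameter from example data, so its behavior changes over the course of the process. All names and values are hypothetical.

```python
def rule_based_line(n_points: int, slope: float) -> list[tuple[float, float]]:
    """Traditional algorithmic art in miniature: the artist fixes the rule
    (here, a slope) and the output simply follows it."""
    return [(i / n_points, slope * i / n_points) for i in range(n_points)]


def fitted_slope(points: list[tuple[float, float]], steps: int = 500, lr: float = 0.1) -> float:
    """A learning-based routine in miniature: the parameter is not set by the artist
    but adjusted from example data by gradient descent, so the algorithm's behavior
    changes during the process rather than being fixed in advance."""
    w = 0.0
    n = len(points)
    for _ in range(steps):
        grad = sum(2.0 * (w * x - y) * x for x, y in points) / n
        w -= lr * grad
    return w


if __name__ == "__main__":
    examples = rule_based_line(100, slope=2.5)   # stand-in for training material
    print("parameter learned from the examples:", round(fitted_slope(examples), 2))
```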

Generally, AI and humans affect each other mutually in the cocreative process. In artmaking, AI can inspire artists, provide critical knowledge, generate new ideas and material, and help artists progress when creation seems obstructed (Kantosalo and Toivonen Citation2016; Karimi et al. Citation2020; Lundman and Nordström Citation2023). At the same time, artists affect AI by supplying it with data such as images, sounds, and movements. Mazzone and Elgammal (Citation2019) claimed that the key artist–AI cocreative practices occur pre- and postcuration. When precurating with visual AI, for instance, an artist selects visual material, which the AI starts imitating and changing, forming new images. In postcuration, the artist selects an AI-produced image or set of images, developing it into a final artwork. Between pre- and postcuration, the artist “tweaks” the code to acquire interesting images from the AI, which, in our work, relates to how new artworlds emerge spatially in the intra-action between artists and AI.

Cocreative Human–AI Agencies in Creating New Artworlds

Through adopting a processual perspective in the context of artmaking, we can approach human–AI cocreativity as a spatial process that stretches beyond any particular domain to new artworlds. In speaking of artworlds, we refer to the idea that art is not in space but is “of space,” and furthermore, “space is of art” (Hawkins Citation2017b, xvii). In terms of our study, this means that artmaking is affected by worldly materialities and relations (Massey Citation2005; Lysgård and Rye 2015) that are not entirely traceable, and conversely the created artworks have further influence in the world. By focusing on coagency, we can study how artists and creative AI evolve mutually, and how this influences the spatialities created—in this case, material and meaningful artworlds. To conceptualize the coagency from a spatial perspective (discussed but not explicitly theorized by Rose Citation2017 and Woodward et al. Citation2015), we now turn to Barad’s concepts of intra-action, topological relationalities, agential separability, and mattering.

Although interaction between humans and digital technologies has been widely discussed in digital geographies (e.g., Woodward et al. Citation2015; Rose Citation2017; Ash et al. Citation2018; Lynch and Del Casino Citation2020), Barad’s concept of intra-action allows us to recognize how different human–technology agencies emerge and evolve during a cocreative process. In current digital geographies, humans are seen as constantly individuating in interactions with digital technologies, understood as objects (a combination of software, hardware, and data) that can change and, in some cases, actively pose problems (Woodward et al. Citation2015; see also Rose Citation2016). The focus in collaborative relations has thus far either been on humans who are changing with technology (Rose Citation2017), or on the collective individualization of a team of humans and changing digital objects (Woodward et al. Citation2015; see also Rose Citation2016). Barad’s (Citation2007) approach zooms in on the fundamental entanglements and processes in which material differences develop, creating a variety of agencies (cf. Rose Citation2017) with diverse capacities to act in the world-generating process. Regarding digital technologies, our interest is in the active agency of AI and its entwinements with human agency.

In Barad’s (Citation2012) intra-action, agencies are “(be)coming together-apart.” Barad (Citation2007) noted that “since individually determinate entities do not exist, measurements do not entail an interaction between separate entities; rather, determinate entities emerge from their intra-action” (128). In terms of our study, this translates into humans and AI gaining their creative agencies as artists and creative AI through one another in an ongoing, entwined cocreative process. Although this entwinement of changing agencies has yet to become a focus of digital geographies, for Barad every intra-action is thoroughly spatial. As such, she described them as dynamic and topological relationalities, and (re)articulations of agencies, materialities, and meanings of the world (Barad Citation2003). Dynamic refers to change and can be interpreted as the processuality of the world (see also Williams Citation2016), whereas topological refers to the processual connectedness of emerging agencies and relations to various spaces and times. Geographers are already familiar with discussions about multiple space–times (Massey Citation2005) and topological spatiality (e.g., Allen Citation2011, Citation2016; Lysgård and Rye Citation2017). Building on this, our focus on intra-action draws on Rose’s (Citation2017) interest in the temporality and spatiality of practices and meanings used in the reinvention of the posthuman digitally mediated subject. Here, Barad’s idea of topological relationalities is useful for understanding how the emerging agencies of artists and AI are connected to each other and the world (cf. Lysgård and Rye Citation2017).

Furthermore, Barad (Citation2003) explained how we come to understand changing processes, and how such processes introduce something new to the world: “We do not obtain knowledge by standing outside of the world; we know because ‘we’ are of the world. We are part of the world in its differential becoming” (829). Thus, rather than occupying an absolutely external position to the phenomenon being investigated, the knower (in our case, an artist giving meaning to an artwork) is in a position of “exteriority within phenomena,” which Barad (Citation2007, 89, 176–77) called agential separability. This concept allows us to further identify the moments when, through meaning-making, artists articulate their creative process with AI. Barad interpreted meaning-making (Barad and Gandorfer Citation2021, 25–27) as mattering, capturing how concepts are “material configurations of the world” in the field of “spacetimemattering.” Similarly, when it comes to AI-based art, creative outputs interpreted as artworks emerge from the topological relationalities of intra-acting humans and nonhumans. Hence, creating meaningful artworlds can be seen as an act of mattering, in which giving meaning is tied to the materialities of cocreative processes that stretch beyond any single location to many spaces and times (Barad and Gandorfer Citation2021, 24–25; Massey Citation2005). To sum up, not only is the artist’s creativity mediated through working with AI (cf. Hawkins Citation2021; Wingström, Hautala, and Lundman Citation2022; Lundman and Nordström Citation2023), but AI is imbued with its own creative agency. Hence, artists and AI act as mutual cocreative agencies that together create new material and meaningful artworlds.

Material and Methods

The empirical material that forms the basis of our study consists of recorded and transcribed video interviews conducted with eighteen Finnish artists who use AI in their work. We asked the artists to demonstrate some of their AI-based artworks and, if possible, the techniques and algorithms involved. The artists were part of the group interviewed for our preceding research (Lundman and Nordström Citation2023), and their pseudonyms (A1–A27) are taken from this earlier work. For reasons of anonymity, the videos and transcriptions are not openly accessible, as the artists could be recognized based on their artworks and methods. The length of the videos varied between six and fifty minutes, in total constituting six hours and fifteen minutes of material. All but three of the interviews were conducted online due to COVID-19 restrictions. Some video recordings (A12, A13, A25) did not meet our research expectations, so they were omitted from the analysis.

The interviewees were mainly new media artists with backgrounds in the visual arts, music, sound art, or the performing arts. They had a variety of coding skills and used a wide range of data, codes, algorithms, and programs in their artworks. Most of the artists used or had experimented with AI techniques based on so-called neural networks, which involve deep-learning methods and machine-learning techniques (Santos et al. Citation2021). The artists were given the freedom to present their artworks and working techniques as they saw fit, meaning the interviews varied in terms of their content and focus. As such, we based our analysis on qualitative methods and illustrative examples of the artists’ narratives regarding their cocreative processes with AI.

We chose to use videos as our research method and recording technique to better capture the conversations in a multisensory manner (see Garrett Citation2011). This allowed the artists to mediate their experience directly (cf. Garrett Citation2011) and explain and demonstrate their creative process in a real setting—in this case via a digital interface. As such, we were able to closely follow how the artists presented their work with AI, providing us with a deeper understanding of their artistic practice (cf. Pink and Sumartojo Citation2018). We asked the artists to present their work and talk freely about the creative processes that led to the final artwork. This was followed by supplementary questions, such as what they considered the most creative moment in the code or technique they used. Using qualitative content analysis (Hsieh and Shannon Citation2005), we categorized the interview material according to which phase of the process was being referred to, after which we proceeded to form more descriptive categorizations of the cocreativity between artists and AI. This led to the following three interlinked categories: (1) entering the cocreative process with a selection of data and code, (2) training and learning during cocreation, and (3) curating creative outcomes. Using the concepts of intra-action, topological relationalities, agential separability, and mattering from Barad’s posthuman theory, we analyzed how the agencies of artists and creative AI emerged and mutually evolved through these phases.

Analyzing the Cocreative Process between Artists and AI

Entering into the Cocreative Process with Data and Code

The cocreative process begins when the creative agencies of artists and AI start evolving through data and code. Initially, the agencies are hierarchical and calculative. An artist codes the AI algorithms (or selects or alters existing ones), then selects or produces the input and training data for the AI, and then initiates the learning process and sets temporal limits on it. One artist described this phase as the “months of basic training on the neural network, that’s the way I cultivate, in the same way that an artist cultivates his own skills on top of the old” (A9). Here, we see how posthuman does not mean entirely detaching from the human—rather, human agency is reconfigured as part of different material processes (Barad Citation2007, 27) with digital technologies (Rose Citation2017). AI’s agency forms and evolves within the limits and possibilities of the selected code and data. Hence, the AI is not automated but an autonomous creative code technology (cf. Cugurullo Citation2020). Once the AI’s agency has been recognized, mutual coagencies between humans and AI can emerge (cf. Barad’s intra-action).

In this first phase of the cocreative process, the artist’s agency manifests through the selection and production of data. Many artists considered creating personal “data sets” (in computational terms, the large collections of data needed to train an AI) a critical element of the process. Here, the data set does not have determinate boundaries, as the data applied by the artists—including images (photographs, cartoons, paintings, drawings, graphics), texts, sound files, and videos—are entangled in various topological relationalities. The sources were either the artists’ own productions or ready-made material collected from newspapers, museum databases, social media platforms, sounds and audio recordings, and Internet open data sources. Some artists applied data that had already been processed by the AI or combined their own data sets with AI-processed data. A10, for instance, used this method of accelerating the AI’s learning process to “always start from somewhere.” This practice creates distance from the original source data, enabling the AI to widen its agency. A17, for example, used sounds created by one AI—which s/he calls “sequences of broken memories”—as input data for another AI that creates music in such a way as to remove any discernible trace of the original track. In these examples, the agency of some artists was directed toward “creative” outcomes, whereas others wished primarily to play and experiment with AI (see also Lundman and Nordström Citation2023). Many artists envisioned producing particular kinds of artworks, although they could not always know the outcome at this stage of the process.
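
As an illustration of this first phase, the following minimal Python sketch assembles a personal data set from the kinds of mixed sources the artists describe: their own material, collected material, and outputs already processed by another AI. The folder names and file types are our placeholders, not taken from any interview.

```python
from pathlib import Path

# Illustrative source folders only; a real project would point at the artist's own paths.
SOURCES = [
    Path("data/own_photographs"),        # the artist's own productions
    Path("data/collected_archive"),      # ready-made material gathered from elsewhere
    Path("outputs/previous_ai_run"),     # material already processed by another AI
]


def gather_dataset(sources, extensions=(".png", ".jpg", ".jpeg", ".wav")):
    """Collect every matching file from the given source folders into one training set."""
    files = []
    for source in sources:
        if source.exists():
            files.extend(p for p in source.rglob("*") if p.suffix.lower() in extensions)
    return sorted(files)


if __name__ == "__main__":
    dataset = gather_dataset(SOURCES)
    print(f"Training set: {len(dataset)} files drawn from {len(SOURCES)} sources.")
```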

The artists’ agency related to coding was realized within the assemblages produced by the coders and AI. A27 described how the code is not “completely your own” as “no one did it from start to finish.” Similarly, it is impossible to trace the origin of the code or of the data set, as rather than being produced in any one location, both are assembled in many digital and physical sites through complex human–technology relations. The AI algorithms and programs applied by the artists had often been retrieved from open-source code development platforms (e.g., GitHub) or companies developing AI software (e.g., DeepMind, OpenAI, Nvidia, Microsoft, IBM, Google). These AIs had already been created for a particular purpose and trained through specific data sets, meaning that neither the artist nor the AI entered their relation as preexisting entities—rather, their agencies emerged in different situated entanglements (cf. Barad Citation2012). A17, touching on the entanglement of the AI and artist, stated, “A human … has coded … all the AIs. … It is a combination of AI and human, they are not separated.” A1, meanwhile, described the coding process of neural networks as “we walk together, but … that machine is made by me. I do it with my materials.” Some artists built new codes and functions for the AI, whereas a few had entirely generated their own neural networks. All the interviewed artists had the skills to “tweak,” or as A19 expressed it, “poke” the AI’s code. As such, the artists could limit or enable the AI’s creative agency by changing parameters that would “make small changes in the program itself” (A2), altering the code (A9), or purposefully coding mistakes (A22, A23). A14 described this playing with the parameters as “pulling the lever.”
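
The parameter “levers” described here can be pictured with a small, hypothetical configuration sketch in Python; none of these knob names comes from the interviews, but they are typical of the open-source training scripts the artists describe tweaking, limiting, or deliberately “breaking.”

```python
# Hypothetical training "levers" an artist might pull between runs.
config = {
    "learning_rate": 2e-4,      # smaller values slow the AI's learning, larger ones destabilize it
    "latent_dim": 16,           # size of the random noise the generator draws from
    "training_steps": 10_000,   # the temporal limit the artist sets on the learning process
    "glitch_probability": 0.05, # a purposeful "mistake": randomly corrupt some training inputs
}


def tweak(base: dict, **changes) -> dict:
    """Return a copy of the configuration with a few parameters 'poked',
    leaving the original run reproducible."""
    return {**base, **changes}


experimental_run = tweak(config, learning_rate=1e-3, glitch_probability=0.3)
print(experimental_run)
```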

Although AI agency is mostly calculative and subordinate to human agency at the beginning of the cocreative process, it nevertheless holds considerable creative potentiality. In fact, rather than AI having a single agency, various AIs are equipped with different agential potentialities. The AI techniques applied by the artists were mostly based on neural networks, with GANs and neural style transfers (NSTs) being two of the most common (see Forbes Citation2020). Although many of the artists questioned the creative agency of AI alone, some considered GANs to be an exception, because they were seen to produce genuinely new and surprising (and thus creative) outcomes through their generative capacities (a technique we explain in the next section). A9 explicitly stated that GANs can be creative on their own, mimicking humans’ unplanned biological processes, and A2 gave an example of a GAN starting with random noise and proceeding to create a completely new sound.

Training and Learning during the Cocreation Process

Gradually, the cocreative agencies of the artists and AI begin to evolve. We detected this in the videos not only when an artist trained the AI, but when the AI started to train itself. Here, the agencies of the artist and creative AI evolve together-apart, with their capacities to act changing through the cocreative process of making art (cf. Barad and Gandorfer Citation2021, 15, 59). An AI’s training process represents a shared action on the part of both the artist who teaches the AI using code and data, and the AI itself, which learns to perceive images and sounds—a process we call “learning to see.” This form of seeing involves multisensuous activity (cf. Woodward et al. Citation2015; Nordström Citation2018) that in the case of AI is algorithmic in nature.

The training and learning of GANs, in particular, is intriguing in terms of cocreativity. Here, the evolving agency of AI arises from two competing neural networks: the generator and the discriminator (see Karras, Laine, and Aila Citation2020). Whereas the generator learns to produce outcomes that are increasingly “creative” in the eyes of the discriminator, the discriminator increasingly learns to recognize “faults” in these outcomes (Karras, Laine, and Aila Citation2020; Santos et al. Citation2021). The aim of the generator is to create original material that could have been in the input images or sounds. The work of A1—in which the AI (GAN) started learning by “watching TV” to create faces—provides a good example. The generator created images, and the discriminator tried to recognize the “real images” among them. Such training and learning are entwined with processes of change and iteration. A9, who had studied how AI “learns to see” animals and their body parts (e.g., hair, ears, eyes, nose), demonstrated how the learning and seeing occurs only gradually: “It is chaotically searching. … It is not correct and a little bit in that direction. Then it starts slowly to form. … It shapes, it starts to find those ears.” As A1 described, the AI’s iterative process of learning to see is slow: “A neural network may take days before it begins to understand anything because it has an awful lot of layers that it needs to learn to perceive.” This process is far from perfect: “It [AI] tries to learn. It does not always succeed very well. … It gets lost, interrupts, and tries something else” (A1). This incomplete learning includes “mistakes” (A12, A22). Moreover, AI can “get confused” (A27), which makes it more human-like and connects it with human agencies.
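
For readers unfamiliar with the technique, the following compressed sketch (assuming PyTorch, with toy network sizes and placeholder data of our own) shows the generator-versus-discriminator loop in outline; artistic GANs such as StyleGAN-type models are far larger, but the adversarial dynamic described above is the same.

```python
import torch
from torch import nn

latent_dim, data_dim = 16, 64  # illustrative sizes only
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for the artist's own training material

for step in range(1_000):
    # Discriminator step: learn to tell the artist's material from generated material.
    noise = torch.randn(32, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
              + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce material the discriminator accepts as "real".
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```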

The artist’s agency evolves through observing the AI and intervening in its internal processes. This includes paying close attention to how an AI’s seeing changes, with the artist also “learning to see” differently alongside the AI. A1 documented this process carefully, including which data and codes limited the AI’s actions, how the artist tweaked these, and at what point s/he saw intriguing images: “I taught it [AI] to scratch the images and it became really exciting. … This image here. Where did it come from? … There! Now it is there and then it vanishes as the learning progresses!” A17, meanwhile, observed how AI learns to see with sounds: “It makes the composition by itself, re-organizes the sounds, loops … like this sounds really exciting, it tries to get into the swing of it. There emerges this characteristic … squeaking sound.” A1 found the new material introduced by the AI in the middle of the learning process intriguing: “In the beginning, it’s very interesting, when it [AI] has one way to perceive and then you teach it … and it starts learning. … The effect of both [human and AI] is combined, and that’s where something interesting can be found.” Understanding how the AI behaves requires a considerable amount of experimental work, as the artists need to understand both the structure of the code and the visuality involved. To do this, the artist must follow the code on the machine, observing how the neural network changes during the learning process.
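
One way an artist might document this unfolding process is sketched below: a hypothetical helper (assuming PyTorch and a generator like the one in the previous sketch) that renders the same latent point at regular intervals, so intermediate images that later "vanish as the learning progresses" can be revisited.

```python
import torch
from pathlib import Path


def snapshot(generator: torch.nn.Module, fixed_noise: torch.Tensor, step: int,
             out_dir: str = "training_log") -> None:
    """Render the same fixed latent vector at the current stage of training and save
    the raw output, so successive snapshots show how the AI's way of 'seeing' that
    point changes as learning progresses."""
    Path(out_dir).mkdir(exist_ok=True)
    with torch.no_grad():
        sample = generator(fixed_noise)
    torch.save(sample, Path(out_dir) / f"step_{step:06d}.pt")


# Usage inside a training loop such as the one sketched earlier:
#     fixed_noise = torch.randn(1, latent_dim)   # drawn once, before training starts
#     if step % 100 == 0:
#         snapshot(generator, fixed_noise, step)
```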

Some interviewees experienced feelings of stress when intra-acting with AI in its learning process, with A27, for example, commenting that, “Sometimes it is really frustrating when you try to spin that code for many days, like ‘start working now, start working now!’—and nothing happens.” Sometimes the artists had to adjust their behavior and actions to suit the AI. This was the case for A7, who explained, “It’s been frustrating that the AI does not obey our interpretation.” The imperfection of working with AI also holds potential, though: “I thought it was a mistake. But now, seven years later, I realized that I can do art with this” (A10). Although the artists’ agency evolved simply by paying attention to the AI’s learning process, they also wanted to find and create interesting outcomes. Despite this, the results were often unexpected: “I give the prompt … ‘Tell me about x’ … And then the text comes. … It’s so terribly unpredictable what’s going to come from there” (A15).

In general, these examples highlight a profound experience of creativity that would not be possible without AI, which is why these creative moments during AI training and learning are fundamentally intra-active. Intra-action means both artist and AI evolve as coagencies responsive to each other’s actions: “The artist does something and then AI catches it and takes input … and creates something new from it or processes it in some direction … and then the artist responds to it” (A2).

Curating the Outcome of the Cocreative Process

The final stage of the cocreative process involves curating the creative outcome, for instance, paper prints, digital proofs, videos, sound, shows, narratives, or installations. Here, again, the issue of the artist’s agency is evident: “I feel more like a curator … . I don’t directly accept the answer that comes [from the AI]” (A15). In this phase, artists select and make sense of the creative outputs as meaningful works of art. In doing so, new material artworlds emerge, with creative outputs finalized and made available for further exploration by audiences. For Barad, concepts do not exist as free-floating ideas; rather, they are only imbued with meaning through mattering (cf. Barad and Gandorfer Citation2021, 24–25). Similarly, making sense of a creative output as an artwork can only be done through the dynamics of the cocreative process and the mattering involved. Barad also argued that sense-making is not an individual affair but iterative and collaborative (Barad and Gandorfer Citation2021, 28), meaning that when it comes to a work of art, the audience and other actors (e.g., art critics) continue meaning-making after it has been presented or performed for the public.

In the curating stage, the artist’s agency evolves through the materialities of working with AI and in selecting the outcomes generated by the AI (cf. Mazzone and Elgammal Citation2019). In practice, the artist is the one who makes sense of the cocreative process, finalizing the outcome as an artwork presented to an audience. Although AI can generate novel outputs, the artists in our study observed that it lacks the capacity to judge outcomes (see also Avdeeff Citation2019). For instance, while A17 recognized that AI could create good, original music—perhaps even blurring the boundaries of what constitutes “an artist”—s/he nonetheless considered the presence of human agency crucial given the artist’s role in directing the generation, selection, and curation processes. A22 commented on the limitations of AI: “It is capable of finding many different variations, but it often lacks taste with which to select the best … creative or good variation.” Moreover, only the artist can imagine what it is like to be a human spectator watching the final artwork (A27). A3 was interested in finding creative symbols in AI outputs, which, according to the artist, the audience could then attach meaning to. Although meaning-making is a strong factor in the artist’s agency, it is firmly connected with the ebbs and flows of the AI: “I’m finding more meaning in it all the time. It kind of takes me deeper and deeper into a rabbit’s hole. I look for meanings in things where there are no meanings, because for AI, they don’t exist” (A27). There can also be uncertainty and fluidity when selecting the final outcomes: “It is kind of curating, curating the visual material … I don’t know if these are artworks, though” (A9). Hence, the artist does not occupy an absolutely external position when it comes to judging the cocreative process—rather, an agential separability is generated within the artmaking (cf. Barad Citation2003, 828).

When talking about the selection of outcomes, some interviewees reflected on the entire cocreative artmaking process, including how their experiences of intra-action affected the articulation and expression of the final artwork. A11, for example, was interested in the visual input data’s influence on the final output: “A collection of images with a certain kind of light or a certain kind of feeling, content; then it is possible to bring that out in implementation.” Similarly, A17 paid attention to both the input and output material when curating the sounds made with AI. Many of those using GAN as a creative method found the data produced by the generator during the training of AI interesting, with the result that the training process itself could affect curation. For instance, both A1 and A9 described how fascinating (but imperfect) images emerge when two different data sets meet, with A9 recounting an intriguing example discovered when exploring animals through the eyes of AI: “I am not interested in when it looks perfect, but when it looks interesting.”

In the curating stage, the cocreative process between artists and AI is realized as experimental and open to possibilities, laden with potential for the creation of unforeseen artworlds (cf. Lundman and Nordström Citation2023). Although many artistic processes can result in artworks that cannot be fully envisaged beforehand, the element of surprise is even more pronounced when working with AI. As the interviews revealed, the artists could not foresee what kind of artwork they would end up finalizing.

Conclusion

As AI is a rapidly developing technology, it is crucial to understand its potential as a cocreative agency. By bringing digital and creative geographies into contact with Barad’s posthuman digital thinking, we have spotlighted mutual coagency as a way of researching human–AI relations and related spatialities. In doing so, we have demonstrated that mutual coagency and the cocreative artmaking process can be explained spatially. Here, Barad’s inherently spatial posthuman theory enriches the digital geographies endeavor, guiding geographers toward a better understanding of how humans and digital technologies are fundamentally entwined. Moreover, Barad’s thinking helps us discern how different agencies receive their capacities to act and evolve as part of processes “of the world.”

Our empirical research demonstrates how the cocreative process between artists and AI begins with the selection of materials (data and code) for artmaking (or precuration, according to Mazzone and Elgammal Citation2019). Here, the agencies of artist and AI are hierarchical and calculative, and the directional element of creativity is present. The artist’s agency primarily emerges in the fact that it is they who launch the cocreative process and define the temporal limits governing the training of the AI. Whereas the AI only exists because it has been coded in different sociotechnological assemblages (Ash, Kitchin, and Leszczynski Citation2018, 36–37; see also McLean Citation2020), the artist creates, applies, or tweaks the code. Moreover, the data introduced into the cocreative process by the artist have often been created in some other context for some other purpose. Hence, rather than being confined to the here and now (Lipp and Dickel Citation2022), the cocreative process is linked to various spaces and times (cf. Massey Citation2005; Nordström Citation2018). New artworlds develop within the topological relationalities of the selected data and code. Although less dominant in this early phase, AI agency emerges in the potential to create art alongside a human.

After data and code have been selected, the creative agencies of artists and AI begin to evolve mutually through processes of training and learning, which involve creative moments and adjustments in ways of seeing. Training and learning take place through intra-action, with the artist’s and the AI’s capacities to act changing in relation to each other. Here, AI agency evolves from potentiality to actual creative agency, as AI is able to create novel outputs through a process of “learning to see.” This ability to modify its own actions sets AI apart from being merely an artist’s tool, such as a paintbrush or hammer (Mazzone and Elgammal Citation2019), temporarily endowing it with autonomous creative agency (cf. Pink and Sumartojo Citation2018; Cugurullo Citation2020). The creative agency of AI alone is, however, restricted, as its capacity to differentiate is limited (de Vries Citation2020, 2113). In turn, the artist’s agency evolves through close collaboration with AI, allowing them to “learn to see” differently as well (cf. Woodward et al. Citation2015; Nordström Citation2018). Creative potential can also be found in the mistakes made by AI when it does not act as anticipated (cf. Cockayne and Richardson Citation2017), with feelings of joy and frustration, from the point of view of artist agency, forming a crucial part of the creative process (Simonsen Citation2013; St-Louis and Vallerand Citation2015). This phase of the cocreative process (i.e., AI creating novel outputs and artists finding them intriguing) is also when the agencies of humans and AI most clearly evolve mutually as coagencies. Such intra-actions between artist and AI hold the potential for new and surprising artworlds to emerge.

In curating the outcomes, it is the artist who selects and articulates the final creative output (an act that could also be called postcurating; cf. Mazzone and Elgammal Citation2019). Here, our understanding differs from the posthuman approach, in which humans and nonhumans hold equal weight in the creative process, without constraints (A. Harris Citation2021). Rather, by emphasizing artist agency in this phase, we touch on notions of agential separability and mattering (Barad and Gandorfer Citation2021). The artist’s agency emerges in giving meaning to creative outputs as works of art, a process that happens “together apart” with AI. This is not a return to humanism (Simonsen Citation2013), as meaning-making is closely tied to the materialities and mattering of the cocreative process with AI (see also Draude Citation2020). Nonetheless, meaning-making is also iterative and social, continuing once the artwork has been presented to the audience. The artworlds created as part of the cocreative process of artmaking become not only meaningful and material but also unforeseen and experimental.

Although Mazzone and Elgammal (Citation2019) stated that artists and AI are creative together in pre- and postcurating, our research shows that the artmaking process is much more interlinked, with the creative agencies of both artists and AI evolving during cocreation. Moreover, drawing on Barad’s thinking, we have developed an understanding of the spatialities of the cocreative process, revealing how the resultant artworlds are linked to the different spaces and times where data and code were created; how they are formed in the intra-action between artists and AI; and how they become meaningful and material in the process of curating the final outcomes.

With our focus on AI as an evolving agency, we have advanced the geographical discussion on the role of digital objects as active agents in creative processes (Woodward et al. Citation2015). In doing so, our research on actual cocreative practices between artists and AI provides crucial insights into how creative agencies mutually emerge and evolve in artmaking. All in all, our contribution to digital geographies spotlights the role of other-than-humans in creating and understanding our world(s) in the making. Even so, the topic is not straightforward, as AI is not (yet) fully autonomous, and humans can develop different capacities as agencies (Rose Citation2017). As digital technologies develop further, other (non)human creative agencies could emerge and new topological relationalities could be formed. Digital geographies should follow this development and remain open to the novel human–technology relations and agencies yet to come.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments. We are also grateful for insights and discussions with our colleagues Astrid Huopalainen and Rosa Wingström, as well as the organizers and participants in the digital session “Actually Existing Digital Geographies in the Antipodes (and Elsewhere)” at IAGNZGS 2021. Our special gratitude goes to the interviewed artists.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by Kone Foundation (grant number 201902784) and Academy of Finland (grant number 319872).

Notes on contributors

Paulina Nordström

PAULINA NORDSTRÖM is an Associate Professor in the Department of Global Development and Planning, University of Agder, PB 422, 4604 Kristiansand, Norway. E-mail: [email protected]. Her research interests include spatialities of creativity, creative processes, and art in various contexts.

Riina Lundman

RIINA LUNDMAN is a Postdoctoral Researcher in the Unit of Social Research, Tampere University, Tampere 33100, Finland. E-mail: [email protected]. Her research interests are various, including art, play, and creative geographies.

Johanna Hautala

JOHANNA HAUTALA is an Associate Professor (Regional Studies), University of Vaasa, PB 700, 65101 Vaasa, Finland. E-mail: [email protected]. Her research interests focus on spatiotemporal creativity and knowledge creation processes in the contexts of advanced technologies.

References

  • Allen, J. 2011. Topological twists: Power’s shifting geographies. Dialogues in Human Geography 1 (3):283–98. doi: 10.1177/2043820611421546.
  • Allen, J. 2016. Topologies of power. London and New York: Routledge.
  • Amin, A., and N. J. Thrift. 2002. Cities: Reimagining the urban. Cambridge: Polity Press.
  • Ash, J. 2012. Technology, technicity, and emerging practices of temporal sensitivity in videogames. Environment and Planning A: Economy and Space 44 (1):187–203. doi: 10.1068/a44171.
  • Ash, J. 2013. Technologies of captivation: Videogames and the attunement of affect. Body & Society 19 (1):27–51. doi: 10.1177/1357034X11411737.
  • Ash, J., B. Anderson, R. Gordon, and P. Langley. 2018. Digital interface design and power. Environment and Planning D: Society and Space 36 (6):1136–53. doi: 10.1177/0263775818767426.
  • Ash, J., R. Kitchin, and A. Leszczynski. 2018. Digital turn, digital geographies? Progress in Human Geography 42 (1):25–43. doi: 10.1177/0309132516664800.
  • Avdeeff, M. 2019. Artificial intelligence & popular music. Arts 8 (4):130. doi: 10.3390/arts8040130.
  • Barad, K. 2003. Posthumanist performativity. Signs: Journal of Women in Culture and Society 28 (3):801–31. doi: 10.1086/345321.
  • Barad, K. 2007. Meeting the universe halfway. Durham, NC: Duke University Press.
  • Barad, K. 2012. Interview with Karen Barad. In New materialism: Interviews and cartographies, 48–70. Ann Arbor, MI: Open Humanities Press.
  • Barad, K. 2015. On touching—The inhuman that therefore I am (version 1.1). https://www.academia.edu/7375696/On_Touching_The_Inhuman_That_Therefore_I_Am_v1_1_
  • Barad, K., and D. Gandorfer. 2021. Political desirings: Yearnings for mattering (,) differently. Theory & Event 24 (1):14–66. doi: 10.1353/tae.2021.0002.
  • Batty, M. 2018. Artificial intelligence and smart cities. Environment and Planning B: Urban Analytics and City Science 45 (1):3–6. doi: 10.1177/2399808317751169.
  • Berry, D. 2015. The philosophy of software. 2nd ed. Hampshire, UK: Palgrave MacMillan.
  • Birtchnell, T., and A. Elliott. 2018. Automating the black art. Geoforum 96:77–86. doi: 10.1016/j.geoforum.2018.08.005.
  • Bissell, D. 2018. Automation interrupted: How autonomous vehicle accidents transform the material politics of automation. Political Geography 65:57–66. doi: 10.1016/j.polgeo.2018.05.003.
  • Bissell, D. 2020. Affective platform urbanism: Changing habits of digital on-demand consumption. Geoforum 115:102–10. doi: 10.1016/j.geoforum.2020.06.026.
  • Bissell, D. 2021. Encountering automation: Redefining bodies through stories of technological change. Environment and Planning D: Society and Space 39 (2):366–84. doi: 10.1177/0263775820963128.
  • Bissell, D., and V. Del Casino. 2017. Whither labor geography and the rise of the robots? Social & Cultural Geography 18 (3):435–42. doi: 10.1080/14649365.2016.1273380.
  • Boden, M. A. 2009. Computer models of creativity. AI Magazine 30 (3):23. doi: 10.1609/aimag.v30i3.2254.
  • Bostrom, N. 2017. Superintelligence. Oxford, UK: Oxford University Press.
  • Boulton, A., and M. Zook. 2013. Landscape, locative media, and the duplicity of code. In The Wiley-Blackwell companion to cultural geography, ed. N. C. Johnson, R. H. Schein, and J. Winders, 437–51. Hoboken, NJ: Wiley.
  • Chappell, K. 2022. Researching posthumanizing creativity. Qualitative Inquiry 28 (5):496–506. doi: 10.1177/10778004211065802.
  • Chen, B., S. Marvin, and A. While. 2020. Containing COVID-19 in China: AI and the robotic restructuring of future cities. Dialogues in Human Geography 10 (2):238–41. doi: 10.1177/2043820620934267.
  • Cockayne, D., and L. Richardson. 2017. Queering code/space: The co-production of socio-sexual codes and digital technologies. Gender, Place & Culture 24 (11):1642–58. doi: 10.1080/0966369X.2017.1339672.
  • Colton, S., and G. Wiggins. 2012. Computational creativity: The final frontier? In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI), 21–26. Amsterdam: IOS Press.
  • Cugurullo, F. 2020. Urban artificial intelligence: From automation to autonomy in the smart city. Frontiers in Sustainable Cities 2 (38):1–14. doi: 10.3389/frsc.2020.00038.
  • Davis, N. 2013. Human–computer co-creativity. In The doctoral consortium at AIIDE, ed. G. Smith and A. Smith, 9–12. Palo Alto, CA: AAAI Press.
  • Del Casino, V. 2016. Social geographies II: Robots. Progress in Human Geography 40 (6):846–55. doi: 10.1177/0309132515618807.
  • Del Casino, V. J., Jr., L. House-Peters, J. W. Crampton, and H. Gerhardt. 2020. The social life of robots. Antipode 52 (3):605–18. doi: 10.1111/anti.12616.
  • de Vries, K. 2020. You never fake alone. Creative AI in action. Information, Communication & Society 23 (14):2110–27. doi: 10.1080/1369118X.2020.1754877.
  • Dodge, M., and R. Kitchin. 2005a. Code and the transduction of space. Annals of the Association of American Geographers 95 (1):162–80. doi: 10.1111/j.1467-8306.2005.00454.x.
  • Dodge, M., and R. Kitchin. 2005b. Codes of life. Environment and Planning D: Society and Space 23 (6):851–81. doi: 10.1068/d378t.
  • Dodge, M., and R. Kitchin. 2007. The automatic management of drivers and driving spaces. Geoforum 38 (2):264–75. doi: 10.1016/j.geoforum.2006.08.004.
  • Dodge, M., and R. Kitchin. 2009. Software, objects, and home space. Environment and Planning A: Economy and Space 41 (6):1344–65. doi: 10.1068/a4138.
  • Draude, C. 2020. “Boundaries do not sit still”: From interaction to agential intra-action in HCI. In Human–computer interaction, ed. M. Kurosu, Vol. 12181, 20–32. Cham: Springer.
  • Feldman, S. 2017. Co-creation: Human and AI collaboration in creative expression. Proceedings of EVA, UK, 422–29. doi: 10.14236/ewic/EVA2017.84.
  • Forbes, A. 2020. Creative AI: From expressive mimicry to critical inquiry. Artnodes 26 (26):1–10. doi: 10.7238/a.v0i26.3370.
  • Gandy, M. 2005. Cyborg urbanization. International Journal of Urban and Regional Research 29 (1):26–49. doi: 10.1111/j.1468-2427.2005.00568.x.
  • Garrett, B. 2011. Videographic geographies. Progress in Human Geography 35 (4):521–41. doi: 10.1177/0309132510388337.
  • Gemeinboeck, P., and R. Saunders. 2022. Moving beyond the mirror: Relational and performative meaning making in human–robot communication. AI & Society 37 (2):549–63. doi: 10.1007/s00146-021-01212-1.
  • Graham, M., M. Zook, and A. Boulton. 2013. Augmented reality in urban places: Contested content and the duplicity of code. Transactions of the Institute of British Geographers 38 (3):464–79. doi: 10.1111/j.1475-5661.2012.00539.x.
  • Graham, S. D. N. 2005. Software-sorted geographies. Progress in Human Geography 29 (5):562–80. doi: 10.1191/0309132505ph568oa.
  • Graham, S., and S. Marvin. 2001. Splintering urbanism: Networked infrastructures, technological mobilities and the urban condition. London and New York: Routledge.
  • Guzman, A., and S. Lewis. 2020. Artificial intelligence and communication. New Media & Society 22 (1):70–86. doi: 10.1177/1461444819858691.
  • Haraway, D. 1991. Simians, cyborgs and women. New York: Routledge.
  • Harris, A. 2018. Creative agency/creative ecologies. In Creativity policy, partnerships and practice in education, ed. K. Snepvangers, P. Thomson and A. Harris, 65–87. London: Palgrave Macmillan.
  • Harris, A. 2021. Creative agency. London: Palgrave Macmillan.
  • Harris, D., and S. Holman Jones. 2022. A manifesto for posthuman creativity studies. Qualitative Inquiry 28 (5):522–30. doi: 10.1177/10778004211066632.
  • Hasse, C. 2022. Humanism, posthumanism, and new humanism. In The Palgrave handbook of the anthropology of technology, ed. M. Hojer Bruun, A. Walhberg, R. Douglas-Jones, C. Hasse, K. Hoeyer, D. B. Kristensen, and B. R. Winthereik, 145–64. London: Palgrave Macmillan.
  • Hawkins, H. 2017a. Creativity. London and New York: Routledge.
  • Hawkins, H. 2017b. Foreword: Art works: Art worlds—Scales, spaces, spaces. In Art and the city, ed. J. Luger and J. Ren, xvi–ix. London and New York: Routledge.
  • Hawkins, H. 2021. Cultural geography I: Mediums. Progress in Human Geography 45 (6):1709–20. doi: 10.1177/03091325211000827.
  • Hayles, K. 2017. Unthought: The power of the cognitive nonconscious. Chicago: University of Chicago Press.
  • Henderson, V., J. Davidson, K. Hemsworth, and S. Edwards. 2014. Hacking the master code: Cyborg stories and the boundaries of autism. Social & Cultural Geography 15 (5):504–24. doi: 10.1080/14649365.2014.898781.
  • Hsieh, H., and S. Shannon. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15 (9):1277–88. doi: 10.1177/1049732305276687.
  • Jagodzinski, J. 2022. Quantum creativity. Qualitative Inquiry 28 (5):586–95. doi: 10.1177/10778004211066633.
  • Janowicz, K., S. Gao, G. McKenzie, Y. Hu, and B. Bhaduri. 2020. GeoAI: Spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. International Journal of Geographical Information Science 34 (4):625–36. doi: 10.1080/13658816.2019.1684500.
  • Kantosalo, A., and H. Toivonen. 2016. Modes for creative human–computer collaboration. In Proceedings of the seventh international conference on computational creativity, ed. F. Pachet, A. Cardoso, V. Corruble, and F. Ghedini, 77–84. Paris: Sony CSL.
  • Karimi, P., J. Rezwana, S. Siddiqui, M. Maher, and N. Dehbozorgi. 2020. Creative sketching partner: An analysis of human–AI co-creativity. In Proceedings of the 25th international conference on intelligent user interfaces, ed. F. Paternò and N. Oliver, 221–30. New York: Association for Computing Machinery.
  • Karras, T., S. Laine, and T. Aila. 2020. A style-based generator architecture for generative adversarial networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 43:4217–28.
  • Kitchin, R. 1998. Towards geographies of cyberspace. Progress in Human Geography 22 (3):385–406. doi: 10.1191/030913298668331585.
  • Kitchin, R. 2017. Thinking critically about and researching algorithms. Information, Communication & Society 20 (1):14–29. doi: 10.1080/1369118X.2016.1154087.
  • Kitchin, R., and M. Dodge. 2011. Code/space: Software and everyday life. Cambridge, MA: MIT Press.
  • Łapińska, J. 2020. Creativity of human and non-human matter interwoven: Autonomous sensory meridian response videos in a posthuman perspective. Creativity Studies 13 (2):336–50. doi: 10.3846/cs.2020.11703.
  • Latour, B. 1994. On technical mediation. Common Knowledge 3:29–64.
  • Leszczynski, A. 2015. Spatial media/tion. Progress in Human Geography 39 (6):729–51. doi: 10.1177/0309132514558443.
  • Leszczynski, A. 2019. Platform affects of geolocation. Geoforum 107:207–15. doi: 10.1016/j.geoforum.2019.05.011.
  • Leszczynski, A. 2020. Glitchy vignettes of platform urbanism. Environment and Planning D: Society and Space 38 (2):189–208. doi: 10.1177/0263775819878721.
  • Lipp, B., and S. Dickel. 2022. Interfacing the human/machine. Distinktion: Journal of Social Theory. Advance online publication. doi: 10.1080/1600910X.2021.2012709.
  • Lundman, R., and P. Nordström. 2023. Creative geographies in the age of AI: Co-creative spatiality and the emerging techno-material relations between artists and artificial intelligence. Transactions of the Institute of British Geographers. doi: 10.1111/tran.12608.
  • Lynch, C. 2021. Critical geographies of social robotics. Digital Geography and Society 2:100010. doi: 10.1016/j.diggeo.2021.100010.
  • Lynch, C. R. 2022. Glitch epistemology and the question of (artificial) intelligence. Dialogues in Human Geography 12 (3):379–83. doi: 10.1177/20438206221102952.
  • Lynch, C., and V. Del Casino. 2020. Smart spaces, information processing, and the question of intelligence. Annals of the American Association of Geographers 110 (2):382–90. doi: 10.1080/24694452.2019.1617103.
  • Lysgård, H. K., and S. Rye. 2017. Between striated and smooth space: Exploring the topology of transnational student mobility. Environment and Planning A: Economy and Space 49 (9):2116–34. doi: 10.1177/0308518X17711945.
  • Mace, M., and T. Ward. 2002. Modeling the creative process. Creativity Research Journal 14 (2):179–92. doi: 10.1207/S15326934CRJ1402_5.
  • Mackenzie, A. 2002. Transductions: Bodies and machines at speed. London: Continuum Press.
  • Macrorie, R., S. Marvin, and A. While. 2021. Robotics and automation in the city. Urban Geography 42 (2):197–217. doi: 10.1080/02723638.2019.1698868.
  • Massey, D. 2005. For space. London: Sage.
  • Mazzone, M., and A. Elgammal. 2019. Art, creativity, and the potential of artificial intelligence. Arts 8 (1):26. doi: 10.3390/arts8010026.
  • McCormack, J. 2019. Creative systems. In Computational creativity, ed. T. Veale and F. Cardoso, 327–52. Cham: Springer.
  • McDuie‐Ra, D., and K. Gulson. 2020. The backroads of AI: The uneven geographies of artificial intelligence and development. Area 52 (3):626–33. doi: 10.1111/area.12602.
  • McLean, J. 2020. Changing digital geographies. Cham, Switzerland: Palgrave Macmillan.
  • Miller, A. I. 2019. The artist in the machine: The world of AI-powered creativity. Cambridge, MA: MIT Press.
  • Mitchell, T. M. 1997. Machine learning. New York: McGraw-Hill.
  • Nordström, P. 2018. Creative landscapes: Events at the sites of encounter. Doctoral diss., University of Turku.
  • Pink, S., and V. Fors. 2017. Being in a mediated world. Cultural Geographies 24 (3):375–88. doi: 10.1177/1474474016684127.
  • Pink, S., and S. Sumartojo. 2018. The lit world: Living with everyday urban automation. Social & Cultural Geography 19 (7):833–52. doi: 10.1080/14649365.2017.1312698.
  • Rose, G. 2016. Rethinking the geographies of cultural “objects” through digital technologies. Progress in Human Geography 40 (3):334–51. doi: 10.1177/0309132515580493.
  • Rose, G. 2017. Posthuman agency in the digitally mediated city. Annals of the American Association of Geographers 107 (4):779–93. doi: 10.1080/24694452.2016.1270195.
  • Roudavski, S., and J. McCormack. 2016. Post-anthropocentric creativity. Digital Creativity 27 (1):3–6. doi: 10.1080/14626268.2016.1151442.
  • Russell, S. J., and P. Norvig. 2020. Artificial intelligence: A modern approach. 4th ed. Harlow, UK: Pearson Education.
  • Santos, I., L. Castro, N. Rodriguez-Fernandez, Á. Torrente-Patiño, and A. Carballal. 2021. Artificial neural networks and deep learning in the visual arts: A review. Neural Computing and Applications 33 (1):121–57. doi: 10.1007/s00521-020-05565-4.
  • Schuurman, N. 2002. Women and technology in geography: A cyborg manifesto for GIS. The Canadian Geographer/Le Géographe Canadien 46 (3):258–65. doi: 10.1111/j.1541-0064.2002.tb00748.x.
  • Schuurman, N. 2004. Databases and bodies: A cyborg update. Environment and Planning A: Economy and Space 36 (8):1337–40. doi: 10.1068/a3608_b.
  • Shaw, I. 2017. Robot wars: US empire and geopolitics in the robotic age. Security Dialogue 48 (5):451–70. doi: 10.1177/0967010617713157.
  • Simondon, G. 2016. On the mode of existence of technical objects. Vol. 39. Minneapolis: Univocal Publishing.
  • Simonsen, K. 2013. In quest of a new humanism. Progress in Human Geography 37 (1):10–26. doi: 10.1177/0309132512467573.
  • Sparke, M. 2017. Situated cyborg knowledge in not so borderless online global education. Geopolitics 22 (1):51–72. doi: 10.1080/14650045.2016.1204601.
  • Stiegler, B. 1998. Technics and time I. Palo Alto, CA: Stanford University Press.
  • St-Louis, A., and R. Vallerand. 2015. A successful creative process. Creativity Research Journal 27 (2):175–87. doi: 10.1080/10400419.2015.1030314.
  • Thrift, N. 2004. Movement-space: The changing domain of thinking resulting from the development of new kinds of spatial awareness. Economy and Society 33 (4):582–604. doi: 10.1080/0308514042000285305.
  • Thrift, N., and S. French. 2002. The automatic production of space. Transactions of the Institute of British Geographers 27 (3):309–35. doi: 10.1111/1475-5661.00057.
  • Thulin, E., B. Vilhelmson, and T. Schwanen. 2020. Absent friends? Smartphones, mediated presence, and the recoupling of online social contact in everyday life. Annals of the American Association of Geographers 110 (1):166–83. doi: 10.1080/24694452.2019.1629868.
  • Verbeek, P. P. 2005. Artifacts and attachment: A post-script philosophy of mediation. In Inside the politics of technology: Agency and normativity in the co-production of technology and society, ed. H. Harbers, 126–46. Amsterdam: Amsterdam University Press.
  • While, A., S. Marvin, and M. Kovacic. 2021. Urban robotic experimentation. Urban Studies 58 (4):769–86. doi: 10.1177/0042098020917790.
  • Williams, N. 2016. Creative processes: From interventions in art to intervallic experiments through Bergson. Environment and Planning A: Economy and Space 48 (8):1549–64. doi: 10.1177/0308518X16642769.
  • Wilson, M. 2009. Cyborg geographies: Towards hybrid epistemologies. Gender, Place and Culture 16 (5):499–516. doi: 10.1080/09663690903148390.
  • Wilson, M. W. 2011. Data matter(s): Legitimacy, coding, and qualifications-of-life. Environment and Planning D: Society and Space 29 (5):857–72. doi: 10.1068/d7910.
  • Wingström, R., J. Hautala, and R. Lundman. 2022. Redefining creativity in the era of AI? Creativity Research Journal. Advance online publication. doi: 10.1080/10400419.2022.2107850.
  • Woodward, K., J. P. Jones, L. Vigdor, S. Marston, H. Hawkins, and D. Dixon. 2015. One sinister hurricane: Simondon and collaborative visualization. Annals of the Association of American Geographers 105 (3):496–511. doi: 10.1080/00045608.2015.1018788.