Review Article

How Does Narrow AI Impact Human Creativity?

Received 25 Apr 2023, Published online: 15 Jul 2024

ABSTRACT

While Artificial Intelligence (AI) is set to transform how we live, work, and play, competencies such as creativity remain fundamentally human. However, the way we learn, develop, and deploy creativity will be impacted by AI. This paper sets out four underpinning propositions that will help to guide the integration of AI and human creativity. These propositions explain not only why AI is not independently creative, but also how AI can support and augment human creativity in areas such as education. The paper also highlights implications for the development of creativity in an educational context.

Introduction

The notion of a Fourth Industrial Revolution, characterized by rapid growth in automation, big data, artificial intelligence, and cyber-physical systems (Cropley & Cropley, Citation2021), is now established in developed economies. A key consequence of this process of digitalization is a sharp focus on the division of labor between humans and technology. Typically referred to as the Future of Work (e.g., Oppert et al., Citation2022), this concerns which jobs, and in what ways, human workers will be replaced by technology.

There is now a consensus (Chui et al., Citation2016) that the key strengths of digital technologies include the ability to perform, rapidly and accurately, algorithmic tasks (i.e., predictable physical or cognitive tasks) while the key strengths of humans lie in the ability to tackle tasks that are non-algorithmic in nature (i.e., unpredictable physical or cognitive tasks). This is reflected in the frequent analyses of 21st century skills (e.g., World Economic Forum, Citation2020) that consistently rank problem-solving and creativity, among others, as essential and uniquely human abilities. This is driving a renewed focus on developing these traditionally soft skills (or general capabilities) in school and university education (e.g., ACARA, Citation2010). Put simply, in a rapidly digitalizing world, the key to preparing students for the future of work is ensuring that they have well-developed general capabilities such as creativity. Without this preparation, students may find themselves less competitive for emerging job opportunities.

At first glance, therefore, the future of work is a straightforward reorientation. Technology (robots, AI) replaces humans in algorithmic tasks because it is better (i.e., more accurate), faster, and frequently cheaper. Humans, on the other hand, pivot to a new focus on those activities that technology cannot do, namely soft skills such as creativity and complex problem-solving. There are two problems, however, with this view. First, many jobs and tasks are not easily and cleanly split along this algorithmic/non-algorithmic line. The likely future, therefore, is that many jobs will be a complex blend of activities suited both to technology and humans. This raises the question of how technology (especially AI) and humans will collaborate. Second, as machine/deep learning has reached a new level of maturity, there are many tools and applications that developers now claim allow AI to exhibit creativity. If AI can be creative, then the fundamental premise of the future of work – humans do creativity, machines do the repetitive, algorithmic work – is false, and the prospects for human workers look very uncertain. However, before any deeper discussion of human and artificial creativity can take place, the question of what creativity is must be resolved.

Defining creativity

One issue that underpins, but frequently confounds, discussions of AI and creativity is how creativity is defined. Indeed, creativity in general remains dogged by definitional confusion: not so much among creativity researchers in psychology, where a robust definition has emerged thanks to more than seven decades of research (e.g., Plucker & Beghetto, Citation2004; Runco & Jaeger, Citation2012), as in society more broadly. In fact, the end-user application of psychological creativity research findings is often impeded by long-standing, popular myths and misconceptions (Glăveanu, Citation2014). In education, for example, persistent myths such as the conflation of creativity with art (Patston et al., Citation2018) hinder efforts to embed creativity across all parts of the curriculum.

This definitional confusion, however, is not just a phenomenon observed among non-experts. The field of computational creativity, which should be a starting point for understanding the creative potential of AI, and which first emerged in the late 1980s, frequently either ignores psychological definitions, misconstrues them, or changes the definition of creativity to fit – ex post facto – what computational creativity is able to do. Two examples illustrate this. Rowe and Partridge (Citation1993) discuss creative behaviour, but define five characteristics of computational models of creativity that are clearly anchored in the criteria of divergent thinking (e.g., Torrance, Citation1988), though seemingly arrived at independently of Torrance. Furthermore, they use highly convergent problem-solving tasks (e.g., a version of the Wisconsin Card-Sorting Test [Note 1], p. 65) as the goal of a computationally creative system. While divergent cognition is an important part of creativity, it is not its entirety.

Wiggins (Citation2006), on the other hand, sidesteps psychological definitions of creativity by defining computational creativity in terms of “… behaviour exhibited by natural and artificial systems, which would be deemed creative if exhibited by humans” (p. 210). The key here is the term behaviour. Like Rowe and Partridge (Citation1993), Wiggins (Citation2006, p. 221) keeps this anchored to the idea of divergent cognition, implying that all an artificial system needs to do to be called creative is to enumerate a conceptual space, i.e., generate alternatives.

To be fair, Cropley et al. (Citation2022) note that computational systems should not be required to replicate uniquely human aspects of creativity (e.g., attitudes and dispositions). Nevertheless, much of the computational creativity literature reduces creativity only to the replication of divergent cognition, frequently overlooking (a) the nature of the output (the product) generated (is it actually creative?) and (b) the fact that human creativity (the behaviour which computational creativity seeks to replicate) is always tied to a series of stages that explain why humans seek to be creative in the first place, and how they do so (see Cropley et al., Citation2022). Human creativity, in other words, cannot be reduced simply to one cognitive process (divergent thinking), free of any other factors.

A third element of definitional confusion also hinders discussions of AI and creativity. The verb “to create” – for example, “I created a picture” – is frequently used only to mean that something was brought into existence without any reference to the novelty or utility of the thing. The characteristics, or qualities, of the product determine whether or not it is creative (e.g., Cropley et al., Citation2011), not merely its existence.

The key here is that before doing anything else, we need to understand that there is a clear, consistent, and accepted definition of human (i.e., psychological) creativity. Furthermore, this must serve as the benchmark for any discussion about the ability (or otherwise) of AI to either replicate, or supplement, that human creativity. First articulated by Rhodes (Citation1961), and concisely summarized by Plucker and Beghetto (Citation2004, p. 90), creativity is “the interaction among aptitude, process and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context.” The latter element, focused on creative products, is often referred to as the standard definition of creativity (e.g., Runco & Jaeger, Citation2012).

Nevertheless, as Cropley et al. (Citation2022) explain, not all facets of this definition are in scope for AI and creativity. Personal factors, such as a willingness to take risks, are irrelevant. Creativity in humans boils down to the ability to generate novel and effective products, which MacKinnon (Citation1978) noted are the bedrock of all studies of creativity (p. 187). The fundamental standard for human creativity – the bottom line – is the production of novel and effective products. The rest – personality, processes and environmental factors – are simply the means, and the constraints, that accompany human attempts to generate creative products. In this light, it becomes clear that while the elements of process and person are integral to the broader understanding of creativity in humans, the more objective measures of novelty and effectiveness in the products or outcomes are paramount in evaluating creativity involving either humans or AI. This distinction is crucial for a clear and quantifiable assessment, shifting the focus from subjective interpretations of creativity to a more tangible evaluation of the outputs. Such an approach is essential in the context of AIs such as ChatGPT, where the novelty and effectiveness of its generated text provide a concrete basis to gauge its creative capacity, against a human benchmark. This methodology offers a more empirical way to discern between mere mimicry and truly original, effective contributions (i.e., creativity) produced by artificial intelligence.

Defining AI

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human cognition. This encompasses capabilities such as learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction (Xu et al., Citation2021). Particularly in the context of modern computational systems, AI can be categorized into two main types: narrow or weak AI, where the system is designed and trained for a particular task, and general or strong AI, where the system possesses generalized human cognitive abilities, allowing it to find solutions without human intervention (Sajja, Citation2021). Within this context, the examples used here focus on current AI capabilities, which fall into the category of narrow or weak AI and are therefore limited to systems trained for one specific task.

With the definition of creativity clearly articulated, it is now possible to address the key questions regarding the impact of AI on the development of human creativity. The most fundamental of these is: can AI exhibit creativity? Can AI, in other words, satisfy the standard definition of creativity and generate novel and effective (i.e., creative) products? Failing that, can AI, at the very least, mimic a core cognitive process that humans use to generate creative products? In other words, can AI think and act creatively?

Can AI exhibit creativity?

As Artificial Intelligence (AI) continues to mature, increasingly frequent claims are made that computational systems – robots, algorithms, AI – exhibit creativity. Some of these examples fall prey to the definitional problems described earlier, but some may exhibit symptoms of creativity that require more careful analysis. Cropley et al. (Citation2022) previously explored two examples. The first – dubbed the Android Author – was a story-writing AI. The second – termed the Catalysing Computer – was a laboratory robot designed to conduct chemical experiments. However, as Cropley, Medeiros, and Damadzic (Citation2022) explain, neither system succeeded in its quest for creativity, with the former merely reconstructing a story written by humans, and the latter simply executing a highly algorithmic laboratory process, albeit much faster and more efficiently than humans.

Nevertheless, while these previous examples failed to exhibit creativity consistent with the definitions given here, two recent examples of AI systems have generated a great deal of discussion. Are these recent examples of AI able to exhibit creativity?

Is ChatGPT creative?

ChatGPT, or the Chat Generative Pre-Trained Transformer, is an example of a deep-learning language model that produces human-like text in response to prompts. Whether or not the developers of this model [Note 2] claim that it is creative, many recent discussions, for example in popular media, endow it with qualities closely associated with human creativity. For example, Johnson and Iziev (Citation2022) described the earlier version of this model, GPT-3, as being able to write original material, and described the fluency of the model as equivalent to that of a human.

Given the prevalence of myths and misunderstandings of creativity, it is unwise to rely on even academic papers from outside of the discipline of creativity, let alone popular media, for any analysis of the creative ability of tools such as ChatGPT. The two facets of creativity in scope for AI – creative (i.e., novel and effective) outputs, and divergent thinking – are easily, and objectively, tested.

Furthermore, even though “process” and “person” are elements in the broader definition of creativity, the aspects of “novelty” and “effectiveness” pertaining to process and product offer a more objective framework for analysis in this context. These criteria allow for a quantifiable approach to evaluating creative outputs. In the case of AI like ChatGPT, the emphasis shifts to measuring the novelty and effectiveness of its responses, providing a clearer, more empirical basis to assess its creative capabilities.

ChatGPT and divergent thinking

Verbal tests of divergent thinking have been an important foundation in creativity research for many decades. The verbal component of the common Torrance Tests of Creative Thinking (Torrance, Citation1988) assesses the fluency (number of ideas), flexibility (number of different categories of ideas), elaboration (the development of the ideas) and originality (uncommonness of ideas) of responses to, for example, the question “what would happen if there was no gravity?” Most recently, Olson et al. (Citation2021) developed the Divergent Association Task (DAT), creating an online tool to assess verbal divergent thinking. What better way to test this aspect of creativity for AIs such as GPT-3 or ChatGPT than to give them the DAT?

Using the standard user interface [Note 3], in September 2022 we first gave GPT-3 the instructions for the DAT as a prompt [Note 4]. These consist of asking the test taker to enter ten words that are as different from each other as possible, with some constraints (e.g., no proper nouns). GPT-3 generated ten words (Chair; Table; Window; Door; Floor; House; Tree; Dog; Cat; Cow) in response to the prompt and we entered these into the DAT interface.
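The DAT’s published scoring method (Olson et al., Citation2021) is straightforward to sketch offline: the score is the mean pairwise semantic distance between the first seven valid words, scaled by 100. The minimal Python sketch below illustrates the idea; the specific embedding model and validity check are our assumptions, not the official online scorer, so exact values will differ from those reported next.

```python
# Minimal sketch of DAT-style scoring (after Olson et al., 2021): the mean
# pairwise cosine distance between word embeddings, scaled by 100.
# The GloVe model chosen here is an assumption, not the official scorer.
from itertools import combinations

import gensim.downloader
from scipy.spatial.distance import cosine

vectors = gensim.downloader.load("glove-wiki-gigaword-300")  # downloads on first use

def dat_score(words, k=7):
    """Score the first k valid words by average pairwise semantic distance."""
    valid = [w.lower() for w in words if w.lower() in vectors][:k]
    if len(valid) < k:
        raise ValueError("not enough valid words to score")
    pairs = list(combinations(valid, 2))
    return 100 * sum(cosine(vectors[a], vectors[b]) for a, b in pairs) / len(pairs)

# GPT-3's first attempt (see the result reported below); the exact value
# depends on the embedding model used.
print(dat_score(["chair", "table", "window", "door", "floor",
                 "house", "tree", "dog", "cat", "cow"]))
```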

The result scored GPT-3’s verbal creativity/divergent thinking as 56.55 (mean = 78), which was assessed as higher than only 0.07% of the sample of people (estimated in the millions) who have completed the DAT. In other words, the verbal divergent thinking of GPT-3 was extremely low. A second attempt, immediately following the first, resulted in a new set of words (Mother; Father; Sister; Brother; Cousin; Aunt; Uncle; Niece; Nephew; Grandmother) and a score of 23.82. This attempt was assessed as higher than 0% of other respondents and may, in fact, be the lowest score ever seen on the DAT.

Following the release of ChatGPT in November 2022, with reported improvements to the conversational capabilities of the AI, we re-tested the creativity of the system on the DAT. The first attempt resulted in the following set of responses: Flute; Sunflower; Window; Butterfly; Bagel; Mountain; Television; Flamingo; Balloon; Cloud. This scored 83.6 and was assessed as higher than 79.9% of respondents. A second attempt resulted in: Keyboard; Bookshelf; Omelette; Globe; Rainbow; Bubble; Kangaroo; Fireplace; Antique; Zephyr. This attempt scored 83.37 (higher than 78.92% of respondents).

Two factors contextualize these results. First, it would seem that ChatGPT is able to exhibit creativity (in the sense of verbal divergent thinking) at a high-average level, compared to humans. The results of the most recent tests (close to the 80th percentile) are close to one standard deviation above the mean (assuming the results of the DAT are normally distributed). While this is clearly more impressive than the GPT-3 results, it remains somewhat disappointing, given the enormous corpus of data (some 570GB of text) and the 175 billion parameters used to train the model.

Second, the result must be understood against the timeline of when different versions of GPT-3 were trained and when the Divergent Association Task (DAT) was first published. In simple terms, it is unlikely that the first version we tested on the DAT – namely GPT-3 – could have included, in its corpus of training data, any results related to the Divergent Association Task. GPT-3 was trained prior to its release in May 2020, while the initial study by Olson et al. (Citation2021) was submitted for publication only in October 2020, and comprised a sample of 9,000 respondents. However, by the time ChatGPT was released (November 2022), the DAT had been publicly available for some time, and had been collecting millions of responses. It is therefore likely that the responses of ChatGPT to the Divergent Association Task reflect the fact that the later version of the AI had seen many examples of responses to this test, in a way that the earlier version had not. Both versions – GPT-3 and ChatGPT – have access to the same enormous vocabulary, but whereas GPT-3 was unable to produce even average verbal divergent thinking, ChatGPT was able to do so at a modestly above-average level. The most parsimonious explanation for this sudden change is not that the AI, sometime between May 2020 and November 2022, became endowed with creativity, but merely that, by November 2022, it had consumed enough information to mimic such responses, giving the illusion of a moderate level of verbal divergent thinking. The likening, by Bender et al. (Citation2021), of large models such as GPT-3 to a “stochastic parrot” may therefore be a little unfair: GPT-3, ChatGPT and similar models are not so much stochastic as obedient and well-trained parrots. Most recently, Cropley (Citation2023) tested the verbal divergent production of two versions of ChatGPT (GPT-3.5 and GPT-4), concluding that, while both exhibit levels exceeding human means, they are subject to large variances and frequent repetition, making ChatGPT an unreliable source of verbal divergent production.
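As a worked check on the percentile claim above (and assuming, as noted, that DAT scores are normally distributed), converting the reported percentile into a standard score gives:

```latex
z = \Phi^{-1}(0.799) \approx 0.84
\quad\Longrightarrow\quad
\text{DAT score} \approx \mu + 0.84\,\sigma
```

That is, scores near the 80th percentile sit roughly 0.84 standard deviations above the mean – close to, though slightly below, one standard deviation.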

ChatGPT and creative products

The second key facet of creativity, testable for ChatGPT, is its ability to generate a novel and effective outcome (i.e., a product, a solution). This requires two things. Effectiveness anchors the outcome to the solution of a specific need or problem (Cropley et al., Citation2011), while novelty requires that the outcome is new (i.e., original, never seen before). However, it is crucial to understand that judgments of both novelty and effectiveness are not static. They are bound to a specific temporal context, as dynamic definitions of creativity suggest that what is considered novel or effective can change over time (Corazza, Citation2016). Importantly, an outcome may be effective (it satisfies the need or solves the problem) without being novel. While this is useful (and typical of many products in engineering: see Cropley, Citation2020), such an outcome cannot be called creative. Similarly, an outcome may be novel, but not effective (it fails to satisfy a need or solve a problem), in which case it is merely fantasy, and not creativity (Cropley & Cropley, Citation2009, p. 105).

We tested ChatGPT’s ability to generate creative outcomes by asking it, through the user interface, to give examples of creative products or services. ChatGPT responded with: (a) a virtual reality game that allows users to experience different cultures; (b) a service that helps people find lost pets; and (c) a social media platform for connecting people who have similar interests. While each of these is, or could be, effective, none is novel and therefore none is creative [Note 5]. While it might be argued that these ideas were, at some previous point in time, novel and therefore creative, we are not testing AI’s ability to function as an archive of past creativity.

On the basis of these examples, we therefore offer the first of four propositions governing the impact of AI on the development of human creativity:

Proposition 1:

AI cannot exhibit meaningful levels of creativity independently.

Can AI and humans collaborate in creativity?

If AI exhibits low levels of independent creativity – in other words, if AI neither produces very creative outputs, nor exhibits effective or reliable divergent thinking – can it, nevertheless, collaborate with humans in achieving creative outcomes? Can it support human creativity in some manner?

This question returns to our earlier observation that the jobs of the future, while broadly split along AI and human lines, in practice are more likely to be a blend of algorithmic and non-algorithmic tasks requiring a more nuanced collaboration between humans and AI. This exploration aligns with the manifesto by Vinchon et al. (Citation2023), which discusses various scenarios of human-machine collaboration in creative endeavors. Four distinct scenarios are presented: “Co-Cre-AI-tion,” “Organic,” “Plagiarism 3.0,” and “Shut down.” Each paints a different future, contingent on the synergy between humans and machines. To further explore this question, we turn to another recent AI tool – DALL-E 2 – and ask whether it is able to collaborate with humans and exhibit creativity.

Is DALL-E 2 creative?

DALL-E 2 [Note 6] is a machine learning model that generates digital images in response to text prompts. The images generated combine a variety of concepts (e.g., dogs riding on horses), attributes (e.g., large, brown dogs with sad eyes) and styles (e.g., in the style of an oil-painting by Leonardo da Vinci). DALL-E 2 uses the same underpinning technology as ChatGPT.

Like ChatGPT, DALL-E 2 has generated a vigorous discussion about the creative ability of AI. These discussions are frequently complicated by the prevalence of the myths and misunderstandings that plague creativity. Analyzing the creative potential of DALL-E 2 is tricky because many people still fall prey to the myth that creativity = art (Patston et al., Citation2018). With that misconception as a premise, they see DALL-E 2 produce a picture, assume that “pictures = creativity” and conclude that DALL-E 2 is creative: case closed!

It is easy to see why the “art = creativity” myth is so seductive in the case of DALL-E 2. If we prompt DALL-E 2 with the following: “an oil-painting, in the style of da Vinci’s Mona Lisa, showing a young woman scrolling through her mobile phone, and looking bored” we get the outcome shown in Figure 1. That this has never existed before (i.e., is novel) and is an appropriate response to the prompt (it is, indeed, an oil-painting, similar in style to the Mona Lisa, depicting a young woman, etc.) suggests that this output meets the key criteria (e.g., Runco & Jaeger, Citation2012) of creativity. While it may or may not be the result of divergent thinking, it certainly appears to be a novel and effective product.

Figure 1. Moderna Lisa.

However, to understand why DALL-E 2 still fails to exhibit creativity, we must return to the earlier question of what creativity really is and how humans and AI can and will collaborate in the future.

A systems view of creativity

The common, consensus definition of creativity (e.g., Plucker & Beghetto, Citation2004) highlights four key components: the person (who we are), the process (how we think), the environment (where we work) and the product (the end result). These 4Ps (Rhodes, Citation1961) tell us what are, in effect, the pre-requisites for human creativity. What they do not describe, however, is how creativity happens, in a pragmatic, problem-solving sense. For that, we need to combine the static 4Ps with a dynamic model of how creative problem-solving unfolds.

Guilford (Citation1959) describes the stages involved in using creativity to solve a problem. The process of creativity begins with Problem Definition. What is the goal of the creative process? Once the problem is defined, the process can then proceed with Idea Generation. This is the stage of divergent thinking that is often mistaken for the totality of creativity. Having defined many possible solutions to the problem, through the stage of idea generation, the process then moves to Idea Evaluation. This highly convergent stage seeks to eliminate unfeasible ideas, focusing now on finding an effective solution. Finally, the process ends with Solution Validation. This final stage ensures that not only is a solution novel, but that it satisfies the original need, and is, therefore, genuinely effective. These stages are discussed in further detail in Cropley (Citation2015) and Cropley et al. (Citation2022).

Crucially, this holistic, systems model of creative problem-solving (or innovation) allows us to understand where both AI and humans fit into the creative problem-solving process. This, in turn, both explains why even DALL-E 2 cannot, independently, exhibit creativity, and also how AI might support human creativity.

Why DALL-E 2 is not creative from a systems view

Creative problem-solving begins with Problem Definition. The simplest argument for why DALL-E 2 is not creative is that it can only function in response to a prompt. It does not spontaneously generate pictures, but only does so when prompted by a human user. Thus, the answer to the question “why did DALL-E 2 produce a particular image?” will always be, echoing Guckelsberger et al. (Citation2017), “because my [human] programmer told me to.”

Once prompted, it could be said that DALL-E 2 then engages in a process of Idea Generation that may be divergent in nature. However, as the programmers of DALL-E 2 point out, the more specific the prompt, the better DALL-E 2 is able to respond. Indeed, an open-ended prompt such as “make a picture” does not lead to a wide and divergent variety of images, but simply to images of a person “taking a picture.” DALL-E 2 can only respond to this open-ended prompt by interpreting it as narrowly and convergently as possible.

Once provided with a highly convergent prompt, DALL-E 2 then generates a set of different responses. These are, however, variations on the same underlying, and explicitly defined, theme. The next stage of the creative problem-solving process then relies on the human user engaging in Idea Evaluation – picking the image that they feel best satisfies their prompt – before engaging in Solution Validation, for example, by posting the image on social media.

DALL-E 2, in other words, is certainly not thinking divergently and, most importantly, is only exhibiting creativity in a very limited sense, entirely dependent on the human user. Indeed, the main contribution of DALL-E 2 in this process is merely its ability to save the human user the time and trouble of learning how to paint. In other words, it was the human user’s ability to recognize and define an open-ended problem (e.g., how to contrast the modern obsession with social media and mobile devices with the technological simplicity of Renaissance Italy, with the irony of representing this in an oil painting), and their ability to evaluate the resulting image as satisfactory, that encapsulates the creativity. DALL-E 2 is simply a (very handy) means to an end. It is a team member with a particular, useful skill. Nevertheless, this conclusion is important because it leads to the question of exactly how AI might support human creativity. How can we use AI’s particular skill set to augment human creativity?
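This division of labor can be summarized schematically. The sketch below is ours and purely illustrative – the function names are hypothetical stubs, not any real DALL-E 2 API – mapping the example onto Guilford’s (Citation1959) stages and showing which steps the human supplies and which the AI supplies.

```python
# Schematic sketch (ours, not a real API) of the human-AI division of labor
# in the DALL-E 2 example, mapped onto Guilford's (1959) four stages.

def human_define_problem():
    # Stage 1, Problem Definition: DALL-E 2 only acts when prompted by a human.
    return ("an oil-painting, in the style of da Vinci's Mona Lisa, showing "
            "a young woman scrolling through her mobile phone, and looking bored")

def ai_generate_variations(prompt, n=4):
    # Stage 2, Idea Generation: the AI returns variations on the
    # human-defined theme (string stand-ins here, not real images).
    return [f"candidate image {i + 1} for: {prompt!r}" for i in range(n)]

def human_evaluate(candidates):
    # Stage 3, Idea Evaluation: the human picks the image that best
    # satisfies their prompt (the first item stands in for that judgement).
    return candidates[0]

def human_validate(solution):
    # Stage 4, Solution Validation: e.g., sharing the image publicly.
    print(f"Posting {solution!r} to social media")

human_validate(human_evaluate(ai_generate_variations(human_define_problem())))
```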

Proposition 2:

AI and humans are able to collaborate in the process of creativity.

Can AI improve human creativity?

Although AI cannot (yet) be independently creative, there are still benefits to incorporating AI into the creative process. Indeed, several scholars (e.g., Cropley et al., Citation2022; Medeiros et al., Citation2023; Smith et al., Citation2017) have called for increased attention on how AI can augment, rather than replace, human creative problem-solving. One way AI can support creative problem-solving is by speeding up specific processes.

Creative problem-solving requires both convergent and divergent thinking (Guilford, Citation1959). Convergent thinking refers to searching for and identifying a single correct solution. For instance, when attempting to make a traditional Neapolitan pizza, there are specific ingredients required. In contrast, divergent thinking refers to searching for a number of potential solutions when there is no single correct answer. Returning to our pizza example, asking what toppings make for the best pizza requires divergent thinking, as there is no one correct response. Medeiros et al. (Citation2023) argued that AI’s current strength lies in convergent thinking and that, at present, it cannot successfully engage in divergent thinking. As a result, the researchers argued that AI may be particularly suited to augmenting convergent thinking processes. Two convergent thinking processes to which AI can contribute are (a) information gathering and (b) idea evaluation.

Information gathering refers to the post-problem-definition process in which problem solvers seek out information relevant to the problem at hand that assists in generating solutions (Mumford et al., Citation1991). This process is a critical component of any creative problem-solving effort, as to successfully develop a novel and workable solution, one must fully understand the problem at hand and the extant solutions (Baer, Citation2015). As information gathering is a primarily convergent process, AI is especially well placed to aid humans in this task (Cropley et al., Citation2022; Medeiros et al., Citation2023; Vinchon et al., Citation2023). For example, AI can respond to human-directed queries, quickly sourcing large amounts of information from a wide range of sources; or, as Vinchon et al. (Citation2023) note, AI can handle the “blind variation” phase of gathering information. AI has been particularly useful, for example, in detecting and classifying skin cancer: it can sort through large amounts of information on different types of skin cancers and make matches based on the symptoms presented. In comparison to human-made diagnoses, the AI process is both less expensive and faster (Takiddin et al., Citation2021). Given AI’s unlimited attention and speed of processing, it should be able to complete this task more quickly than a human. Additionally, offloading information gathering to AI may reduce a human’s time spent on activities such as downloading and saving files – activities which take away from more complex thinking tasks (Medeiros et al., Citation2023). As such, offloading the information gathering process to AI may increase the speed of the process and hasten creative problem-solving along in a way not possible with humans alone.

Once sufficient information is gathered, problem solvers begin an iterative exchange between idea generation and additional information gathering (Sawyer, Citation2021). After moving through these processes, problem solvers must then evaluate those ideas against criteria identified in the earlier problem definition process (Cropley, Citation2015). Moreover, evidence indicates that this evaluation phase is not solely cognitive; it may also be influenced by affective and metacognitive experiences (Puente-Díaz, Citation2023). The comparison to set standards, criteria, or constraints inherently implies a correct answer. For instance, evaluating whether or not a proposed solution falls within the allotted budget has a correct response – yes or no. As such, idea evaluation is, again, a largely convergent thinking process. Medeiros et al. (Citation2023) argued that, at present, AI may be capable of evaluating specific components of a creative solution. Specifically, when criteria are explicitly stated, AI should be capable of evaluating a proposed solution against those criteria. Returning to the budget example, if given a strict budget and a series of costs, AI can quickly calculate whether or not a solution falls within the prescribed budget. More complexly, IBM’s Watson demonstrated AI’s idea evaluation ability when it was asked to choose scenes from the movie Morgan to compose into a trailer. Watson was provided with a series of prototypical horror movie trailers and then evaluated scenes from Morgan according to these criteria. This dramatically reduced time spent on this activity, as IBM reported that Watson reduced the trailer scene selection from “what could be a weeks-long process to one day” (IBM).
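To make the budget example concrete, the sketch below shows this kind of convergent, criterion-based evaluation in miniature, using the three ChatGPT suggestions quoted earlier; the costs and the budget figure are hypothetical.

```python
# Toy sketch of convergent idea evaluation against an explicit criterion
# (the budget example in the text). All costs and the budget are hypothetical.
BUDGET = 10_000  # allotted budget in dollars (assumed)

idea_costs = {
    "VR game for experiencing different cultures": 14_500,
    "Service for finding lost pets": 8_200,
    "Social media platform for shared interests": 9_900,
}

# Each idea either satisfies the constraint or it does not: yes or no.
for idea, cost in idea_costs.items():
    verdict = "within budget" if cost <= BUDGET else "over budget"
    print(f"{idea}: ${cost:,} ({verdict})")
```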

Offloading processes such as information gathering and idea evaluation to AI can speed up creative problem-solving in two ways: AI simply completes these tasks more quickly than humans can, and doing so frees humans’ cognitive space to begin working on complex divergent thinking tasks sooner than if they had to complete these processes on their own. Given the potentially heavy costs associated with delayed market entry (e.g., Kalish & Lilien, Citation1986), speeding up the process can have a significant economic benefit. Bearing this in mind, AI makes a speedy partner in the creative problem-solving process.

For education, these arguments suggest that the processes we focus on in creativity education, and how they are executed, might evolve when AI is a viable team member. For instance, should information gathering transition to being primarily or even exclusively managed by AI, would there remain a significant need to emphasize traditional information-gathering techniques in creative education? If not, it might be more pertinent for education to shift its focus toward how individuals interpret, choose, and utilize AI-provided information during the creative process. An important factor to consider here is the quality of the prompt given by the user; the output’s relevance and accuracy often hinge on the clarity and precision of the initial input. This could mean a heightened attention to sensemaking – the capacity to comprehend and amalgamate information within its context (e.g., Weick et al., Citation2005) – rather than just the information search process.

In the context of idea evaluation within AI-augmented creativity, there may be a transition in priorities. Instead of predominantly assessing potential solutions based on established constraints and standards, there could be a heightened emphasis on evaluating novelty and human elements. In this dynamic, the AI can play a pivotal role in identifying novelty by analyzing the statistical rarity of an idea, while the human evaluator is influenced by a myriad of factors, including their preexisting knowledge, current state of mind, and cultural perspectives. Further, how to integrate those two pieces of information, and perhaps how to appropriately release one’s control and trust to AI, may be important factors to develop. This shift could be reflected in creativity education through a change in focus: from evaluating against standards, novelty, and human impact, to one focused primarily on the latter two. Additionally, creativity education may need to better emphasize human-AI interactions and how humans can better work with, understand, and leverage AI for creative pursuits.

Proposition 3:

AI can help speed up the creative process in humans.

Can AI support human creativity?

AI can support human creativity by addressing critical issues associated with the assessment of creativity. Scholars have long debated the best techniques to assess creativity. Objectively scored creativity tests have previously been considered the gold standard, as they are highly reliable and valid, but they are slow and expensive to administer and score. As a result, many creativity researchers default to faster self-report measures of creativity; however, there are obvious weaknesses associated with asking school children to self-assess their creativity.

Marrone, Cropley, and Wang (Citation2023) identify five critical issues with current creativity assessment techniques. These are: (a) a lack of domain/subject specificity; (b) inconsistency, leading to a lack of trust; (c) a lack of authenticity in classroom settings; (d) slowness (in providing useful results); and (e) high cost to administer. Moreover, Beaty and Johnson (Citation2021) summarize the issues of creativity assessment as subjectivity and effort. Subjectivity refers to the limitations in inter-rater agreement (p. 757). Effort is the notion that many creativity assessments require humans to invest time and money into the assessment process. Due to these limitations, recent research has begun to explore the use of computational approaches to the assessment of creativity.

A growing body of research utilizes machine learning techniques and demonstrates the practicality of using AI to support creativity assessment. Marrone, Cropley, and Wang (Citation2023) trained a Natural Language Processing algorithm to assess mathematical creativity in K-12 students. The results highlight that the algorithm was as reliable as human raters, and was much quicker and cheaper to administer. Latent Semantic Analysis (LSA) techniques are being used to automate divergent thinking-based tasks such as the Alternate Uses Task by a number of researchers (Beaty & Johnson, Citation2021; Dumas & Dunbar, Citation2014; Forster & Dunbar, Citation2009; Harbinson & Haarmann, Citation2014). These studies demonstrate that the creativity and originality of individual responses are scored as effectively by the algorithms as by human raters. Additionally, Kovalkov et al. (Citation2021) used machine learning to assess the creativity of computer programs, while Cropley and Marrone (Citation2022) used a computational neural network to assess the figural creativity of responses to the Test of Creative Thinking – Drawing Production (Jellen & Urban, Citation1986). These techniques address the issues of subjectivity and effort and speed up creativity assessment. In a world where creativity assessments should be dynamic and quick, AI allows flexibility. Similarly, whilst modern and sophisticated approaches to the measurement of twenty-first-century skills such as creativity have been proposed (see Wilson & Scalise, Citation2015), there has been less focus on authentic learning and working environments. AI allows subject-specific assessments to be conducted rapidly and in a real-world classroom context. This approach is much more impactful in a school environment, where the outputs can be used to support students and inform decisions.
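A typical way such human-machine agreement is demonstrated is to correlate the algorithm’s scores with mean human ratings across the same set of responses. The minimal sketch below shows that comparison; all numbers are invented purely for illustration.

```python
# Minimal sketch of an algorithm-vs-human reliability check: correlate
# machine-generated creativity scores with mean human ratings for the same
# responses. All values below are made up for illustration.
import numpy as np

human_ratings = np.array([1.5, 2.0, 4.5, 3.0, 4.0])        # mean rating per response
machine_scores = np.array([0.31, 0.38, 0.82, 0.55, 0.74])  # e.g., semantic distances

r = np.corrcoef(human_ratings, machine_scores)[0, 1]  # Pearson correlation
print(f"Human-machine agreement: r = {r:.2f}")
```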

Proposition 4:

AI can provide rapid, accurate feedback on human creativity.

Conclusions

The Future of Work requires a careful examination of the division of labor between humans and AI. It is widely accepted that the key advantage that humans possess is the ability, among various soft skills, to be creative. However, many people also claim that AI can exhibit creativity. If this is true, the Future of Work for humans may be very challenging. A more likely scenario, however, is that the jobs of the future will be neither explicitly human nor AI, but will require a blended, collaborative effort.

Therefore, two issues need to be explored. First, can AI exhibit creativity, or is creativity really the domain of humans? Second, in a future where the most likely scenario is one of collaboration, what does human/AI collaboration involve in the case of creativity, and how can it be optimized?

Our discussion draws four important, salient conclusions. First, current Narrow AI applications cannot be independently creative. The core elements of creativity – divergent thinking and the generation of novel and effective products – are either beyond the capacity of AI or, quite simply, require inputs that can only be provided by a human. Second, there are clear opportunities for AI and humans to collaborate in the process of creativity, such as in information gathering tasks. Third, AI can help speed up the creative process currently, or traditionally, undertaken only by humans, allowing humans to invest more time in more complex tasks. And fourth, AI can provide rapid, accurate feedback on human creativity. In an education context, this allows an increase in flexible and subject-specific assessments.

There are, of course, many other important factors that will impact the relationship between humans and AI with regard to creativity. The exact nature of human/AI co-creation – the levels of creativity that are possible, in the sense of Kaufman and Beghetto’s (Citation2009) 4Cs – is a case in point. Will human/AI co-creation focus on creativity at the level of novel personal insights (mini-c), or will it address major, paradigm-changing impacts on societies (Big-C)? This paper is limited in that it explores only narrow AI applications; there are many questions that must be addressed in future scholarship as this rapidly evolving technology becomes ubiquitous. The conversation must also develop to involve the consideration of elements of creativity including process and personality. The role of this paper was to examine whether AI can objectively be creative; the subjective nature of creativity is not addressed.

The four propositions outlined in this paper seek to assist the field of education as it grapples with the impact of AI on the future of work and life in a digital world and, as a result, with how to effectively prepare learners to collaborate with, and be supported by, AI in creative efforts.

Ethics approval statement

There are no ethical approval statements associated with this paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

There is no data associated with this paper.

Additional information

Funding

The authors declare no funding associated with this paper.

Notes

1. See Berg (Citation1948).

2. OpenAI. See: https://openai.com/.

5. An example of (a) is Boulevard Arts (blvrd.com). An example of (b) is PawBoost (pawboost.com). An example of (c) is, of course, Facebook.

6. https://openai.com/dall-e-2/.

References

  • ACARA. (2010). The shape of the National curriculum. Australian Curriculum Assessment and Reporting Authority. http://docs.acara.edu.au/resources/Shape_of_the_Australian_Curriculum.pdf
  • Baer, J. (2015). The importance of domain-specific expertise in creativity. Roeper Review, 37(3), 165–178. https://doi.org/10.1080/02783193.2015.1047480
  • Beaty, R. E., & Johnson, D. R. (2021). Automating creativity assessment with SemDis: An open platform for computing semantic distance. Behavior Research Methods, 53(2), 757–780.
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of Stochastic Parrots: Can language models be too big? [ Paper presentation]. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Canada.
  • Berg, E. A. (1948). A simple objective technique for measuring flexibility in thinking. The Journal of General Psychology, 39(1), 15–22. https://doi.org/10.1080/00221309.1948.9918159
  • Chui, M., Manyika, J., & Miremadi, M. (2016). Where machines could replace humans - and where they can’t (yet). McKinsey Quarterly, July, 1–12. https://www.mckinsey.de/~/media/McKinsey/Business%20Functions/McKinsey%20Digital/Our%20Insights/Where%20machines%20could%20replace%20humans%20and%20where%20they%20cant/Where-machines-could-replace-humans-and-where-they-cant-yet.pdf
  • Corazza, G. E. (2016). Potential originality and effectiveness: The dynamic definition of creativity. Creativity Research Journal, 28(3), 258–267. https://doi.org/10.1080/10400419.2016.1195627
  • Cropley, A. J., & Cropley, D. H. (2009). Fostering creativity: A diagnostic approach for education and organizations. Hampton Press.
  • Cropley, D. H. (2015). Promoting creativity and innovation in engineering education. Psychology of Aesthetics, Creativity, and the Arts, 9(2), 161–171. https://doi.org/10.1037/aca0000008
  • Cropley, D. H. (2020). Engineering: The ultimate expression of creativity? In M. A. Runco & S. R. Pritzker (Eds.), Encyclopedia of creativity (3 ed. Vol. 1, pp. 434–439). Academic Press.
  • Cropley, D. H. (2023). Is artificial intelligence more creative than humans? ChatGPT and the divergent association task. Learning Letters, 1, 13.
  • Cropley, D. H., & Cropley, A. J. (2021). Core capabilities for Industry 4.0 - Foundation of the cyber-psychology of engineering. Wbv Media.
  • Cropley, D. H., Kaufman, J. C., & Cropley, A. J. (2011). Measuring creativity for innovation management. Journal of Technology Management & Innovation, 6(3), 13–30. https://doi.org/10.4067/S0718-27242011000300002
  • Cropley, D. H., Medeiros, K., & Damadzic, A. (2022). The intersection of human and artificial creativity. In D. Henriksen & P. Mishra (Eds.), Creative provocations: Speculations on the future of creativity, technology & learning (pp. 19–34). Springer.
  • Cropley, D. H., & Marrone, R. L. (2022). Automated scoring of figural creativity using a convolutional neural network. Psychology of Aesthetics, Creativity, and the Arts.
  • Dumas, D., & Dunbar, K. N. (2014). Understanding fluency and originality: A latent variable perspective. Thinking Skills and Creativity, 14, 56–67.
  • Dunbar, K., & Forster, E. (2009). Creativity evaluation through latent semantic analysis. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 31, No. 31).
  • World Economic Forum. (2020). The Future of Jobs Report 2020. http://www3.weforum.org/docs/WEF_Future_of_Jobs_2020.pdf
  • Glăveanu, V. P. (2014). Revisiting the “art bias” in lay conceptions of creativity. Creativity Research Journal, 26(1), 11–20. https://doi.org/10.1080/10400419.2014.873656
  • Guckelsberger, C., Salge, C., & Colton, S. (2017). Addressing the” why?” in computational creativity: A non-anthropocentric, minimal model of intentional creative agency [ Paper presentation]. 8th International Conference on Computational Creativity, Atlanta, GA.
  • Guilford, J. P. (1959). Traits of creativity. In H. H. Anderson (Ed.), Creativity and its cultivation (pp. 142–161). Harper.
  • Harbinson, J., & Haarman, H. (2014). Automated scoring of originality using semantic representations. Proceedings of the Annual Meeting of the Cognitive Science Society, 36(36).
  • Jellen, H. G., & Urban, K. K. (1986). The TCT-DP (test for creative thinking-drawing production): An instrument that can be applied to most age and ability groups. Creative Child & Adult Quarterly.
  • Johnson, S., & Iziev, N. (2022, April 15). AI is mastering language. Should we trust what it says? New York Times.
  • Kalish, S., & Lilien, G. L. (1986). A market entry timing model for new technologies. Management Science, 32(2), 194–205. https://doi.org/10.1287/mnsc.32.2.194
  • Kaufman, J. C., & Beghetto, R. A. (2009). Beyond big and little: The four C model of creativity. Review of General Psychology, 13(1), 1–12. https://doi.org/10.1037/a0013688
  • Kovalkov, A., Paaßen, B., Segal, A., Pinkwart, N., & Gal, K. (2021). Automatic creativity measurement in scratch programs across modalities. IEEE Transactions on Learning Technologies, 14(6), 740–753.
  • MacKinnon, D. W. (1978). In search of human effectiveness: Identifying and developing creativity. Creative Education Foundation.
  • Marrone, R., Cropley, D. H., & Wang, Z. (2023). Automatic assessment of mathematical creativity using natural language processing. Creativity Research Journal, 35(4), 661–676.
  • Medeiros, K. E., Marrone, R. L., Joksimovic, S., Cropley, D. H., & Siemens, G. (2023). Promises and realities of artificial creativity. In S. Hunter & R. Reiter-Palmon (Eds.), Handbook of organizational creativity (2nd ed., pp. 275–289). Elsevier.
  • Mumford, M. D., Mobley, M. I., Reiter-Palmon, R., Uhlman, C. E., & Doares, L. M. (1991). Process analytic models of creative capacities. Creativity Research Journal, 4(2), 91–122. https://doi.org/10.1080/10400419109534380
  • Olson, J. A., Nahas, J., Chmoulevitch, D., Cropper, S. J., & Webb, M. E. (2021). Naming unrelated words predicts creativity. Proceedings of the National Academy of Sciences, 118(25). https://doi.org/10.1073/pnas.2022340118
  • Oppert, M. L., Dollard, M. F., Murugavel, V. R., Reiter-Palmon, R., Reardon, A., Cropley, D. H., & O’Keeffe, V. (2022). A mixed-methods study of creative problem solving and psychosocial safety climate: Preparing engineers for the future of work. Frontiers in Psychology, 12, 12. https://doi.org/10.3389/fpsyg.2021.759226
  • Patston, T. J., Cropley, D. H., Marrone, R. L., & Kaufman, J. C. (2018). Teacher implicit beliefs of creativity: Is there an arts bias? Teaching & Teacher Education, 75, 366–374. https://doi.org/10.1016/j.tate.2018.08.001
  • Plucker, J. A., & Beghetto, R. A. (2004). Why creativity is domain general, why it looks domain specific, and why the distinction does not matter. In R. J. Sternberg, E. L. Grigorenko, & J. L. Singer (Eds.), Creativity: From potential to realization (pp. 153–167). American Psychological Association. https://doi.org/10.1037/10692-009
  • Puente-Díaz, R. (2023). Metacognitive feelings as a source of information for the creative process: A conceptual exploration. Journal of Intelligence, 11(3), 49. https://doi.org/10.3390/jintelligence11030049
  • Rhodes, M. (1961). An analysis of creativity. Phi Delta Kappan, 42(7), 305–310.
  • Rowe, J., & Partridge, D. (1993). Creativity: A survey of AI approaches. Artificial Intelligence Review, 7(1), 43–70. https://doi.org/10.1007/BF00849197
  • Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92–96. https://doi.org/10.1080/10400419.2012.650092
  • Sajja, P. S. (2021). Introduction to artificial intelligence. Illustrated Computational Intelligence: Examples and Applications, 1–25. https://doi.org/10.1007/978-981-15-9589-9_1
  • Sawyer, R. K. (2021). The iterative and improvisational nature of the creative process. Journal of Creativity, 31, 100002. https://doi.org/10.1016/j.yjoc.2021.100002
  • Smith, J. R., Joshi, D., Huet, B., Hsu, W., & Cota, J. (2017). Harnessing A.I. for augmenting creativity: Application to movie trailer creation. Proceedings of the 25th ACM International Conference on Multimedia (pp. 1799–1808). https://doi.org/10.1145/3123266.3127906
  • Takiddin, A., Schneider, J., Yang, Y., Abd-Alrazaq, A., & Househ, M. (2021). Artificial intelligence for skin cancer detection: Scoping review. Journal of Medical Internet Research, 23(11), e22934. https://doi.org/10.2196/22934
  • Torrance, E. P. (1988). The nature of creativity as manifest in its testing. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 43–75). Cambridge University Press.
  • Vinchon, F., Lubart, T., Bartolotta, S., Gironnay, V., Botella, M., Bourgeois-Bougrine, S., Burkhardt, J., Bonnardel, N., Corazza, G. E., Glăveanu, V., Hanchett Hanson, M., Ivcevic, Z., Karwowski, M., Kaufman, J. C., Okada, T., Reiter‐Palmon, R., & Gaggioli, A. (2023). Artificial Intelligence & creativity: A manifesto for collaboration. The Journal of Creative Behavior. https://doi.org/10.1002/jocb.597
  • Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16(4), 409–421. https://doi.org/10.1287/orsc.1050.0133
  • Wiggins, G. A. (2006). Searching for computational creativity. New Generation Computing, 24(3), 209–222. https://doi.org/10.1007/BF03037332
  • Wilson, M., Scalise, K., & Gochyyev, P. (2015). Rethinking ICT literacy: From computer skills to social network settings. Thinking Skills and Creativity, 18, 65–80.
  • Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X., Wu, Y., Dong, F., Qiu, C.-W., Qiu, J., Hua, K., Su, W., Wu, J., Xu, H., Han, Y., Fu, C., Yin, Z., Liu, M., … Zhang, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. https://doi.org/10.1016/j.xinn.2021.100179