Cognitive Vulnerability, Artificial Intelligence, and the Image of God in Humans

Abstract

Recent progress in artificial intelligence (AI) opens up the possibility that machines could one day do anything a human being can do, thus raising serious questions about human distinctiveness. For theological anthropology, the prospect of human-level AI brings a fresh opportunity to clarify the definition of the image of God. Comparing human and artificial intelligence leads to replacing the Aristotelian-like interpretation of the image of God as rationality with a relational model. Instead of regarding our cognitive biases as vulnerabilities, we should see them as instrumental in bringing about our unique type of intelligence, one marked by relationality.

Animals, artificial intelligence, and the image of God

Artificial intelligence (AI) is nowadays passing milestone after milestone in the cognitive domain, outsmarting humans at intellectual tasks as varied as game playing, language translation, and medical diagnosis, and acquiring capacities once thought to be uniquely human, such as creativity, learning, and intuition. If progress in AI continues at the current pace, there is a non-trivial possibility that human-level AI, or artificial general intelligence (AGI), will emerge in the not-so-distant future. AI reaching human level would be an event of colossal proportions in earthly history, with consequences for every aspect of human life and society that are hard to overestimate.

AGI would, by definition, be capable of doing anything that a standard human being can do, at a similar or superior level. Fears that it would radically impact the global economy by taking over human jobs, or that it might even wipe humans out, accidentally or intentionally, are not totally unfounded and deserve serious attention. Beyond all these, however, there arguably looms a potential identity crisis for humanity as a whole, triggered by the emergence of an entity that could fully replicate human behavior and perhaps even be more intelligent than us. At the root of this crisis lies the age-old question of what, if anything, makes humans unique and distinctive.

In theological anthropology, the notion of human distinctiveness has traditionally been articulated in the doctrine that humans are created in the image of God (Latin, imago Dei). The creation story in Genesis 1 specifically describes humans as being created in the image and likeness of God (Gen. 1:26-27), a characteristic that presumably sets them apart from the rest of the creatures. Apart from this bold statement right at its beginning, the Hebrew Bible is largely silent with respect to what the imago Dei actually means. The New Testament, although employing the concept more extensively, does not add much clarity regarding its meaning, except for the affirmation that the person of Jesus Christ represents the best possible instantiation of the image. Ever since Patristic times, Christian anthropology has constantly wrestled with how the imago Dei should be understood. Theologians have produced a variety of interpretations in dialogue with the philosophical views of their time, a creative endeavor that is still at work today. Nonetheless, full consensus has never been reached, and contemporary debates seem to be diverging toward ever greater diversity of interpretation, instead of converging toward a more unitary view (Cortez, 2010; Grenz, 2001; Herzfeld, 2002; Van Huyssteen, 2006).

The question of human distinctiveness has so far only been asked with regard to what renders us different from the animals, the only others with which we could meaningfully compare ourselves. However, this approach does not lead very far. This is not due to any lack of difference between humans and non-human animals, but rather to the sheer perceived scope of that difference. Simply put, we seem to differ from the animals on so many dimensions that it is difficult to pinpoint any one of them as the decisive distinguishing factor. One would have a hard time choosing between complex articulated language, imagination, the capacity for abstract thinking, artistic creativity, reflexivity, reason, religiosity, or the ability for long-term planning, to name only a few of the proposed candidates. While it is true that many of these capacities have been identified in rudimentary form in at least some of the higher animals, humans exhibit and employ them on an utterly different scale. We may be part of the same evolutionary continuum as all other earthly life forms, but due to our cognitive abilities we live very different kinds of lives, and we are endowed with powers to transform the world unlike any other creature. For these reasons, human distinctiveness has never been seriously threatened so far. Similarly, finding the one feature that, in Colin Gunton's words, makes human nature "different from the nature of nonhumans and, therefore, like the nature of God" has proven very challenging, given the multitude of options (Gunton, 1991, p. 47).

Most of what differentiates us from the animals has to do with our intellectual powers. For many centuries, the dominant interpretation of the image of God in Christian anthropology equated the imago with human intellect, following the Aristotelian philosophical tradition that defines the human as the rational animal. This is one version of the substantive interpretation, which locates the divine image in some (usually intellectual) ability or set of capacities that humans have. Another proposal, the functional interpretation, understands the image as something that humans are appointed to do, namely to exercise stewardship and dominion over creation as God's representatives. Lastly, the relational interpretation understands the image of God through the prism of the covenantal I-Thou relationship that humans are called to have with God, which is the foundation of human existence. To be in the image of God would thus also imply being capable of relationship with God and with other human beings. All these interpretations have valid claims to account for the imago Dei,[1] so one is left to choose between them according to one's taste and degree of commitment to certain theological and exegetical traditions.

The emergence of human-level AI would significantly change this. For the first time we would be faced with the possibility of an equal or superior other, one that could potentially match us in our intellectual capacities, in what we can do in the world, and in our relational abilities. But the AGI scenario need not be seen only in such gloomy terms. On the contrary, instead of agonizing about AI replacing us or rendering us irrelevant, we could relish the opportunity to better understand our distinctiveness through the insights brought about by the emergence of this new other.

On the one hand, if Christian tradition is taken seriously, then there is no need to panic about losing our distinctiveness. Theologically speaking, the intuition of our uniqueness, encapsulated in the doctrine of the imago Dei, is part of divine revelation. It is therefore non-negotiable in its core claim, even if its formulation might vary as our philosophical paradigms evolve over time.[2] This implies that even if AI manages to completely emulate human behavior, we would still be distinctive somehow. Secular anthropology might be troubled by the AGI scenario, but not theological anthropology, and it is the task of the latter to figure out what would still render humans distinctive from intelligent machines.

On the other hand, the emergence of AGI might present theologians with an extraordinary opportunity to narrow down their definitions of human distinctiveness and the image of God. As argued above, a comparison with animals leaves us with a wide field of relevant differences. Looking at how humans would differ from human-level AI may provide just the right amount of conceptual constraint needed for a better definition of the imago Dei. In this respect, our encounter with AI might prove to be our best shot at comparing ourselves with a different type of intelligence, short of someday finding extra-terrestrial intelligence in the universe.

How similar would AGI be to humans?

It is far from clear whether human-level AI is possible, and any attempt to describe it now is doomed to remain speculative. But even so, by looking at past and current AI applications, we can get a glimpse of how similar AGI could be to humans.

AI research began in the 1950s with an approach known as symbolic AI. It started from the assumption that higher intelligence, of the type that humans exhibit, consists at its very basis of the manipulation of symbols through logical operations. Human knowledge, it was believed at the time, could be decomposed into a finite set of basic symbols. Just as any complex information can be expressed linguistically as a finite set of English sentences built from the same 26 letters, so too anything humans know and do could be expressed by combining basic symbols from this postulated finite set. If one could teach a machine all these symbols and the rules for combining them, then AI could potentially match and surpass human intelligence.
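
To make the symbolic approach concrete, here is a minimal sketch in Python; the facts and rules are invented for illustration and do not reproduce any historical system. Knowledge is encoded as discrete symbols, and 'thinking' is the rule-governed derivation of new symbols from old ones.

```python
# Minimal sketch of the symbolic-AI idea: knowledge as discrete symbols plus
# explicit rules for combining them. Facts and rules are invented examples.

facts = {("mammal", "cat"), ("mammal", "dog"), ("has_fur", "cat")}

# Each rule: if all premise predicates hold of an entity, conclude the head.
rules = [
    ({"mammal"}, "warm_blooded"),                  # mammals are warm-blooded
    ({"warm_blooded", "has_fur"}, "furry_mammal"),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        entities = {e for _, e in derived}
        for premises, head in rules:
            for e in entities:
                if all((p, e) in derived for p in premises) and (head, e) not in derived:
                    derived.add((head, e))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# ('furry_mammal', 'cat') appears among the conclusions: everything the
# program 'knows' is exhausted by its symbols and the rules that combine them.
```

On this view, scaling such a system up to the whole of human knowledge was thought to be merely a matter of enough symbols and enough rules.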

After some impressive initial successes in game playing and theorem proving, symbolic AI soon ran into trouble when it tried to solve more 'mundane' tasks, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously noted, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old (Moravec, 1988, p. 15). What this suggests is that symbolic thinking is not how human cognition really works. To be sure, humans are indeed capable of symbolic thinking, but it is not something we naturally excel at. Just think how much formal training one needs in order to have a shot at understanding complex mathematics, or how much of our day-to-day behavior does not seem to obey rigorous logical rules, when it is not outright irrational.

Regarding the latter point, philosopher Hubert Dreyfus, one of the fiercest critics of symbolic AI, identifies two main types of processes in human cognition, which he calls "knowing-that" and "knowing-how" (Dreyfus & Dreyfus, 1986, pp. 16–51), based on Martin Heidegger's distinction between "present-at-hand" and "ready-to-hand" (Heidegger, 1962). 'Knowing-that' and 'knowing-how' refer to two different modes of human cognition, the former conscious and the latter unconscious. The conscious mode is sequential and algorithmic in nature, operating with symbols that are independent of context. It is deployed when people execute complex planning or solve difficult logical problems. The best symbolic AI programs do manage to master this class of cognitive skills. The unconscious, 'knowing-how' mode, however, is much faster and more frequently employed in regular human activities. Our ability to quickly assess a situation based on context, for example, falls within the boundaries of this intuitive and unconscious cognitive mode. Dreyfus convincingly demonstrates that the nature of this background knowledge, or sense of context, is not symbolic.

With the failure of symbolic AI to replicate human intelligence came disappointment, but also a more complex and nuanced view of the latter. We already knew that we were, in a way, more than simply animals, due to our capacity for this higher thinking, which enabled us to develop culture, technology, and complex societies, and ultimately to ask philosophical and religious questions about ourselves and the universe. We now also know that we are not digital computers either, whose thoughts and actions can be fully described in logical terms. Between animals and AI, we seem to be a strange combination of both, and perhaps this is precisely where our distinctiveness from both non-human animals and intelligent machines lies. A more cogent account of the imago Dei could likely be conceived at the crossroads of these insights.

What has replaced symbolic AI in the past three decades or so is the connectionist paradigm, better known nowadays as machine learning (ML). Symbolic AI started from a theory of how the human mind works and tried to create computer programs that could perform similar operations. ML, on the other hand, draws inspiration from our improved knowledge of the architecture of the brain, and tries to create a crude version of an artificial brain by employing artificial neural networks. Modern ML algorithms are no longer taught how to think. Instead, they are exposed to huge sets of selected data, in the hope that they will develop their own rules for how the data should be interpreted. Instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, being 'rewarded' or 'punished' every time it makes a guess about what is in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm learns to recognize cats, without its human programmers necessarily understanding how exactly it reaches its conclusions. This is a crude and simplified description of ML, but it hopefully conveys the basic rationale at its root.
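
For readers who want to see the 'reward and punish' loop in code, here is a toy sketch, with synthetic numbers standing in for the cat pictures; real networks stack millions of such units, but the spirit of the weight update is the same.

```python
import numpy as np

# Toy sketch of the training loop described above: a single artificial neuron
# learns to separate two synthetic classes from examples alone. The data are
# invented; 'cat' vs 'non-cat' is just the label on two clouds of points.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # 200 examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the hidden rule to be learned

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # the neuron's guesses in [0, 1]
    err = p - y                              # the 'reward/punish' signal
    w -= lr * X.T @ err / len(y)             # strengthen/weaken connections
    b -= lr * err.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
# The learned rule lives in w and b; nobody wrote it down by hand.
```

The point of the sketch is that the decision rule is an emergent product of training rather than an explicit instruction, which is also why its failure modes are hard to anticipate.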

ML algorithms of this kind are behind the impressive successes of contemporary AI. Machine vision, as illustrated in the example with cats above, is one area of relative success, with remarkable applications in the security (face recognition) and medical domains. A program called LYNA, developed by Google, is reportedly capable of spotting breast cancer with 99% accuracy. For comparison, human pathologists can only achieve a score of 81% (Archer, 2018). Speech recognition, language translation, and text generation are applications of natural language processing, another area in which modern AI seems to be doing very well. Most of us are familiar with these technologies from interacting with chatbots on the Internet or with intelligent assistants, such as Siri or Alexa.

Does this new paradigm manage to close in on human intelligence? This is a rather difficult question. On the one hand, the palette of AI capabilities seems to be growing fast, so the possibility that AI might one day be able to do everything that a human does should probably not be dismissed. On the other hand, looking at how current AI programs achieve their results can be revealing as to the kind of intelligence they are developing. Even when machines manage to achieve results similar or superior to humans in cognitive tasks, they do it in a very different fashion. Not only are humans capable of learning from far fewer than hundreds of thousands of examples – sometimes as few as one – but they are also able to provide explanations for their conclusions, and can apply what they have just learned in very different settings and domains. After IBM's program Watson stunningly won the "Jeopardy!" TV contest against the best human players in its history, philosopher John Searle emphatically wrote a piece in the Wall Street Journal entitled "Watson Doesn't Know It Won on 'Jeopardy!'" (Searle, 2011).

Furthermore, the notion of common sense, so familiar to humans, is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes they do make reveal a very disturbing lack of understanding on their part. It has been shown, for example, that sticking minuscule white stickers, almost imperceptible to the human eye, on a Stop sign causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign (Eykholt et al., 2018).
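
The sticker attack is a physical-world instance of what the ML literature calls adversarial perturbations. The sketch below is a minimal digital analogue of that idea – a single gradient-sign step against a toy linear classifier – and not the considerably more sophisticated method of Eykholt et al.; the weights and inputs are invented.

```python
import numpy as np

# Minimal digital analogue of an adversarial perturbation: a small, targeted
# change to the input flips the decision of a toy classifier. The numbers are
# invented; this is not the physical sticker attack of Eykholt et al.

w, b = np.array([1.0, 1.0]), 0.0              # a 'trained' toy neuron

def score(x):
    """Probability the model assigns to the correct class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.3, 0.2])                      # correctly classified: score > 0.5
eps = 0.5                                     # perturbation budget per feature
x_adv = x - eps * np.sign(w)                  # step against the class gradient

print(f"clean: {score(x):.2f}  perturbed: {score(x_adv):.2f}")
# clean: 0.62  perturbed: 0.38 -- the decision flips even though each input
# feature moved by only a small fixed amount.
```

A human driver would barely register such a change; the model, lacking any sense of what a Stop sign means, has nothing to fall back on.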

The tentative conclusion is that even when AI does replicate human cognitive abilities, it does so in a very non-humanlike fashion. This means that if AGI is ever achieved via the current approach or something similar to it, it will very likely be a very alien type of intelligence. Even if there is a growing functional overlap between human and artificial intelligence, the two seem to be rather different types of phenomena.

Relationality, cognitive vulnerability, and human distinctiveness from animals and intelligent machines

The difference between human-level and humanlike intelligence is relevant for the theological discussion of human distinctiveness and the image of God. Having only the animals to compare ourselves with has gotten us used to drawing a strong correlation between increased intelligence and humanness. The more intelligent an animal is, the closer to us we are willing to consider it. This might say more about our tendency to anthropomorphize other creatures than about anything else, but it still produces real consequences from an ethical and legal point of view, because it often determines what kind of rights we are prepared to grant to certain non-human animals.

However, this correlation ceases to function when we try to imagine intelligent robots. The thought experiment of AGI shows that it is in principle possible to have a machine that exhibits human-level performance and even humanlike behavior without having the underlying structure of human intelligence – an entity with similar cognitive capabilities that behaves like a human, yet is not like us at the level of structure or motivation. From the point of view of the various proposals for how the imago Dei should be interpreted, this realization could be seen to produce strong arguments against the substantive and the functional interpretations. If intelligent machines could have all the intellectual capacities that we have, and if they could do everything that we can do, and still be fundamentally non-humanlike, then our deepest intuition of our distinctiveness must lie in something that has to do with how we are. Of the major proposals mentioned above – substantive, functional, and relational – only the last could perhaps still give a satisfactory account of human distinctiveness in the AGI scenario.

Furthermore, an analysis of why AI has not yet reached human level, and of what it would need in order to do so, also points toward relationality as the crux of what it means to be human and of what intelligent machines still notoriously lack. In a 2003 article, computer scientist William Clocksin makes a good case that a general intelligence comparable to the human one is not even possible without the kind of personal relationships that humans have with each other. He locates the root of the current problematic view of intelligence in Aristotelian philosophy, which speaks of rationality as the seat of intelligence. This diagnosis bears a remarkable similarity to how human distinctiveness and the image of God have traditionally been framed in substantive terms in theological anthropology, largely due to the uncritical acceptance of premises from the same philosophical tradition. Against this definition of intelligence-as-rationality, Clocksin names two characteristics of human thinking that defy logical consistency:

People can happily entertain contradictory views (even without being aware of it) and, when put to the test, human ‘rationality’ is frail and fallible. People do not rely on deductive inference in their everyday thinking. Human experience is marked by incompetence, blunders and acts of misjudgment […] Human reasoning thrives on the basis of actions and beliefs that cannot be justified nor supported logically. We often make profoundly irrational assumptions, then argue rationally to reach conclusions that are irrational but desirable. We sometimes proceed not by what we discover to be true, but by what we want to be true. Yet non-rationality works in everyday situations (Clocksin, 2003, p. 1732).

Besides these examples, human logic is marked by many other systemic glitches, widely known as cognitive biases (see Haselton et al., 2005). Their existence demonstrates that human cognition is characterized by a fundamental vulnerability, if one were to characterize the lack of rationality this way.[3] It is in fact difficult to provide a definition of what it is to be human without reference to our cognitive vulnerabilities. Most of our best experiences in life, from falling in love to enjoying a work of art to undergoing a spiritual experience, have an important irrational component.

What AI seems to be missing is precisely this side of human cognition. By employing a rather narrow definition of intelligence as problem solving, AI strives to produce an ultra-rational entity, one that will not be plagued by the biases that supposedly affect human intelligence negatively. However, as Clocksin points out, problem solving can hardly be separated from the context of relationship with others and with the world in which it emerges:

AI research has collected a body of theory about algorithms for problem solving, but these theories are not about intelligence nor are they about persons. Problem solving has been seen as an abstract capability independent from the needs and desires of the individual, and independent of the cultural and social context (Clocksin, 2003, p. 1730).

In order to come close to building human-level intelligence, Clocksin continues, AI theory should “break away from the rationalist tradition in order to do justice to alternative models of minds—models based on persons, identity and social context” (p. 1730). These concepts are all relational, strengthening the argument that human intelligence is fundamentally relational.

Our very definition of certain propensities of the human mind as cognitive vulnerabilities might be completely wrong, and a symptom of an alarming tendency that has marked Western culture for the last couple of centuries. In his book The Master and His Emissary, psychiatrist Iain McGilchrist argues this convincingly, based on the differences between the left and right hemispheres of the human brain (McGilchrist, 2009). According to him, as a society we have come to appreciate more the left-hemisphere type of intelligence, which is more abstract, rational, and linguistic, as opposed to the right-hemisphere type, which is less articulate but more intuitive, better related to the world, and endowed with more common sense. Just as an emissary might usurp the power of the master who sent him, so too the ultra-rational type of intelligence has gradually taken over the stage in the society of our age, leading to many of the problems we face today.

Perhaps we should instead switch to a view that regards our cognitive biases not solely as problems, but also as positive factors, by looking at how they are instrumental in enabling our unique type of intelligence. To begin with, the algorithms of human cognition thrive on such illogical steps, as pointed out earlier by William Clocksin. Our thinking is marked by imperfect heuristics that do not always work, but that are nonetheless crucial in enabling us to make quick decisions in the world. AI, on the other hand, is much more thorough and accurate in its decision-making process, but so far without much success in the real world. Without denying that cognitive biases sometimes lead us to bad decisions, this suggests that such vulnerabilities also have a positive role.

One such bias, called hyperactive agency detection, was arguably involved in the acquisition of the first religious ideas by our ancestors (Barrett, 2000). Briefly, it appears that humans are inclined to attribute intentions to agents in their life, sometimes more than would be warranted by rational analysis. One likely explanation is that, from an evolutionary point of view, it was beneficial to err on the side of seeing too much agency in the world rather than too little. The hypothesis goes something along these lines: early humans were inclined to attribute agency to natural elements or phenomena, such as thunder or the forest, and this is how the first religious ideas were born. In other words, we might have started to look for God due to a systemic error in our cognition! This argument has nothing to say about the existence or inexistence of God. But if God exists, and if it is through such a bias that we acquired the first intuition of God's existence, then the irony is enormous. It would be akin to solving a mathematical equation by making a series of errors that cancel each other out and finally lead to the correct answer. A perfectly rational creature, such as an intelligent robot, could never have done this. This is not the place to analyze the hyperactive agency detection hypothesis in detail. It suffices to note that, if correct, it further emphasizes the importance of cognitive vulnerability for the definition of human distinctiveness.
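
The evolutionary logic behind such over-detection can be illustrated with a toy expected-cost calculation in the spirit of error management theory (Haselton et al., 2005); all probabilities and costs below are invented for the sake of the example.

```python
# Toy illustration of why a biased agency detector can beat a calibrated one:
# missing a real predator costs far more than fleeing from the wind.
# All numbers are invented for illustration (error management theory).

p_agent = 0.05           # prior probability that a rustle is a real agent
cost_miss = 100.0        # cost of ignoring a real predator
cost_false_alarm = 1.0   # cost of fleeing from nothing

def expected_cost(hit_rate, false_alarm_rate):
    miss = p_agent * (1 - hit_rate) * cost_miss
    alarm = (1 - p_agent) * false_alarm_rate * cost_false_alarm
    return miss + alarm

calibrated = expected_cost(hit_rate=0.6, false_alarm_rate=0.05)   # 'rational'
hyperactive = expected_cost(hit_rate=0.95, false_alarm_rate=0.4)  # 'jumpy'

print(f"calibrated: {calibrated:.2f}  hyperactive: {hyperactive:.2f}")
# calibrated: 2.05  hyperactive: 0.63 -- under these asymmetric costs, the
# jumpy detector is the better survival strategy, error and all.
```

In this toy setting, at least, a tendency to over-detect agency is not a design flaw but the optimal policy.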

Most of the features that make the human way of being unique could probably be traced back, at least in part, to such cognitive biases. Having what is known as a theory of mind – inferring that other people have minds, thoughts, and intentions, just like oneself – is arguably instrumental in treating them like persons and in enhancing mutual collaboration within our species. However, it can be argued that the theory of mind is fundamentally just something we believe, without ever having solid proof that the others are not just zombies who are empty on the inside and only act as if they had a mind.[4]

This is not an argument that other people apart from oneself do not have minds, which would indeed be hard to defend, but only that our generalized belief that they do is not entirely rational. We engage in relationships with others precisely because we believe that they are essentially similar to us, without overthinking how unverifiable that assumption really is. It is not at all warranted that a hyper-rational entity, such as AGI, would make a similar assumption. Since AGI could by definition do everything that we do, it could probably simulate relational behavior quite convincingly.[5] However, our gut feeling tells us that such behavior would somehow be different from the real relationships that we have with other humans and with God. When compared to the real thing, something very fundamental seems to go missing in a simulation of love or thinking.

The topic has been extensively explored in the philosophy of mind, with John Searle's "Chinese room" thought experiment perhaps best conveying this crucial difference between real mental processes and their simulated counterparts (Searle, 1980). Nevertheless, we do not yet have a satisfying explanation of why that is, because the same doubts that we cast over AI can legitimately be raised with respect to ourselves. Whenever we deem human thinking or feelings to be real, in opposition to the simulated ones in machines, the burden of proof is indeed upon us to explain why human experience is different, in spite of allegedly arising from similar material processes and bio-chemical 'algorithms.' The only obvious reason that comes to mind has to do with the phenomenological aspect of our cognition and experience: we know it feels like something to be us, and we have an intuition that it does not feel the same way, if indeed it feels like anything at all, to be a machine. The purely intuitive nature of this argument is partly why it seems so difficult to ground human distinctiveness without appealing to Cartesian body-mind dualism.

What I am trying to suggest is that it is possible to speak of human distinctiveness without necessarily employing a dualistic view of reality, and that reflecting upon our distinction from AI is of great help in this endeavor. John Searle comes close to explaining why digital computers would forever be barred from a humanlike type of cognition, one marked by thinking: in his view, only biological brains can have minds. He is not saying that machines cannot in principle have minds, because humans are, after all, precisely such machines (Searle, 1980, p. 422). However, the only type of machine that can have a mind, according to Searle, is the biological brain, a view known as biological naturalism. The reason why humans can have mental states is that their brains are made of the right sort of material:

I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena (p. 422).

Unfortunately, Searle offers neither solid arguments for this position, nor criteria for distinguishing between the types of materials and structures that are capable of giving rise to mental properties and those that are not. This leaves biological naturalism open to the criticism of being more a dogmatic claim than a valid scientific hypothesis (Pylyshyn, 1980, p. 442).

In light of the brief earlier discussion of the differences between artificial and human intelligence, perhaps a better way of grounding human distinctiveness is to follow up on Searle's biological naturalism, but to specify it more precisely as the unique path that characterizes human evolution, one marked by both biology and culture. This would imply that in order for something to exhibit humanlike intelligence, it would not only need to have a biological brain, but it would also need to undergo a similar evolutionary history. The quirks that make us unique, those referred to as cognitive biases, are deeply rooted in the particular challenges that we had to face throughout the history of our species. Our enhanced and complex relationality, for example, is arguably related to the ways in which we had to rely on each other more than other mammals with comparable levels of intelligence did. If one were to develop AI that not only acts like us on the outside, but also thinks like us on the inside, it is hard to imagine how that could be achieved without incorporating our so-called intellectual vulnerabilities.

Vulnerability, Christ, and the image of God

What about human distinctiveness and the image of God, then? The questions brought about by the emergence of AI are a blessing in disguise for theological anthropology. We have long been used to defining ourselves in comparison with the animals, always being more of something than them. For the first time in our existence as a species, we are faced with the possibility of something else, namely AI, being more than us, at least in some of the aspects of what we define as intelligence. Our distinction from human-level AI could well consist in being less of something, rather than more. This gives rise to the intriguing idea that human intelligence is perhaps placed in a sort of Goldilocks zone,[6] neither too low nor too high to sustain meaningful personal relationships. Maybe what we are used to seeing as vulnerabilities in our cognition will prove to be the key that preserves human distinctiveness when machines outsmart us.

For the theological interpretation of the imago Dei, this can mean that what makes us in the image of God is not what we can do, but the fact that we are fundamentally relational beings, whose existence always depends upon relationships with others. At the most foundational level, humanity's very existence is ontologically rooted in its continual relationship with God. At a more pragmatic level, each of us can survive and flourish only in a community of loving relationships. An honest look at current AI and at our own psychology reveals that our unique way of being in the world as persons, in which relationality plays a critical part, is not due to our rationality, but quite the contrary. A hyper-rational being would likely find it very challenging to engage fully in relationships, making itself totally vulnerable to the loved other. It surely does not sound very smart. And still, we humans do this tirelessly, and it is such experiences that give meaning and fulfillment to our lives.

Furthermore, this relational definition is in complete agreement with the Christological dimension of the image of God. If Christ truly is the best instantiation of the divine image (Col. 1:15; 2 Cor. 4:4), then the whole process of biological evolution could be seen as directed toward producing a being in which God could incarnate. Since God did not incarnate as a crocodile, but as a human, there presumably exists a set of conditions that humans satisfy and other creatures do not. To define the image of God in this case, we could ask: what features did humans need to possess in order for the incarnation to be possible?

This question inescapably leads to the same relational aspects of human nature identified earlier. For the Christ event to be possible, humans did not need to be perfect logicians, or to exercise total domination over their environment. What human nature arguably did need was the ability to develop free, loving relationships, to recognize others as persons, and to be open to God's revelation. These faculties all belong to the relational register, not so much to the rational, problem-solving type of intelligence. It is a shocking but powerful idea that Jesus Christ would not have been the same, and indeed could not have been at all, without the supposed vulnerabilities that mark human intelligence and make us such relational beings. Robots may outsmart us, but as long as they do not share our vulnerability and capacity for personal relationship, they cannot partake in the image of God. It is perhaps for such reasons that the idea of a robot Christ, God incarnated as AI, sounds like pure absurdity.

If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable, as shown in the incarnation and in the life and teaching of Jesus Christ. Being like God does not necessarily mean being more intelligent, especially when intelligence is understood as rationality or problem solving. Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. If it is indeed true that behind such qualities are ways of thinking related more to our cognitive vulnerabilities than to our rational minds, then we should wholeheartedly join the apostle Paul in "boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong" (2 Cor. 12:9-10).

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

This publication was made possible through the support of a grant from the Templeton World Charity Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Templeton World Charity Foundation.

Notes

[1] The substantive interpretation has come under severe criticism from evolutionary science for drawing artificial boundaries between humans and non-human animals, given the lack of any ontological gap between the two (Burdett, 2015, pp. 6–8). However, as argued earlier in the paper, a compelling case can be made that the degree to which many cognitive faculties are present in humans can account for a qualitative distinction between humans and animals.

[2] Benno van den Toren draws a useful distinction between doctrine, which is the non-negotiable core of belief, and theological theory, which evolves over time with our increased knowledge of ourselves and the world (Van den Toren, 2018).

[3] Although in clinical psychology the term "cognitive vulnerability" refers only to those cognitive biases "that are hypothesized to set the stage for later psychological problems when they arise" (Riskind & Black, 2005, p. 122), there is a strong tendency throughout the scientific and popular literature to see all cognitive biases as vulnerabilities (see Ciampaglia & Menczer, 2018).

[4] In philosophy, this is known as "the problem of other minds" (Avramides, 2019).

[5] This is, in fact, the basis of the standard test for establishing whether AI has reached true intelligence, known as the Turing Test (Turing, 1950).

[6] The Goldilocks principle means having just the right amount of something in order to fulfill certain conditions. In astrobiology, for example, the Earth is considered to be in the Goldilocks zone, being simultaneously close enough to and far enough from the Sun to sustain liquid water.

References