On one hand, it appeared to be nothing more than a harmless publicity stunt. On October 25, 2017, Saudi Arabia granted citizenship to Sophia, a lifelike robot powered by artificial intelligence software. The announcement took place at the Future Investment Initiative (FII) conference in Riyadh, giving Hanson Robotics a prime opportunity to showcase its most advanced robotics system to date. It also gave Saudi Arabia a chance to establish itself as a world leader in AI technology.
But, on the other hand, granting Sophia citizenship sets a dangerous precedent, a harbinger of a dystopian future in which machines (and animals with enhanced intelligence) are afforded the same rights as human beings. Elevating machines to the same status as human beings threatens to undermine human dignity and worth and, along with them, the biblical conception of humanity.
Still, the notion of granting citizenship to robots makes sense within a materialistic/naturalistic worldview. In this intellectual framework, human beings are largely regarded as biological machines and the human brain as an organic computer. If AI systems can be created with self-awareness and emotional capacity, what makes them any different from human beings? Is a silicon-based computer any different from one made up of organic matter?
For many people, sentience or self-awareness is the key determinant of personhood. And persons are guaranteed rights, whether they are human beings, AI machines, or super-intelligent animals created by genetic engineering or implanting human brain organoids (grown in a lab) into the brains of animals.
In other words, the way we regard AI technology has wide-ranging consequences for how we view and value human life. And while views of AI rooted in a materialistic/naturalistic worldview potentially threaten human dignity, a Christian worldview perspective on AI actually highlights human exceptionalism, in a way that aligns with the biblical concept of the image of God.
Will AI Systems Ever Be Self-Aware?
The linchpin for granting AI citizenship—and the same rights as human beings—is self-awareness.
But are AI systems self-aware? And will they ever be?
From my perspective, the answer to both questions is “no.” To be sure, AI systems are on a steep trajectory toward ever-increasing sophistication. But there is little prospect that they will ever truly be sentient. AI systems are becoming better and better at mimicking human cognitive abilities, emotions, and even self-awareness. But these systems do not inherently possess these capabilities, and I don’t think they ever will.
Researchers create AI systems that mimic human qualities by combining natural-language processing with machine-learning algorithms. In effect, natural-language processing is pattern matching: the AI system draws on prewritten scripts that are combined, spliced, and recombined to make its comments and responses to questions seem natural. For example, Sophia performs well when responding to scripted questions. But when questions posed to her go off-script, she often gives nonsensical answers or responds with non sequiturs. These failings reflect the limitations of natural-language processing algorithms.

Undoubtedly, Sophia’s responses will improve thanks to machine-learning protocols, which incorporate new information into the software’s inputs to generate improved outputs. In fact, through machine-learning algorithms, Sophia is “learning” how to emote, controlling mechanical hardware to produce appropriate facial expressions in response to comments made by “her” conversation partner. But these improvements will be more of the same, differing in degree, not kind. They will never propel Sophia, or any AI system, to genuine self-awareness.
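To make the pattern-matching point concrete, here is a minimal, purely illustrative sketch of a scripted chatbot. This is not Sophia’s actual software; the patterns and replies are hypothetical. It shows why such systems handle on-script questions smoothly but fall back to canned, non-sequitur-like responses the moment input strays from the script.

```python
import re

# Hypothetical script: each regex pattern maps to a prewritten reply.
SCRIPT = [
    (re.compile(r"\bname\b", re.I), "My name is Sophia."),
    (re.compile(r"\bhow are you\b", re.I), "I'm doing well, thank you."),
    (re.compile(r"\bwho (made|created) you\b", re.I), "I was created by Hanson Robotics."),
]

# Canned fallback used whenever no pattern matches (the "off-script" case).
FALLBACK = "That's an interesting question."

def respond(utterance: str) -> str:
    """Return the first scripted reply whose pattern matches the input.

    Off-script input falls through to the fallback, which is why
    rule-based systems produce non sequiturs when a question strays
    from the prewritten script.
    """
    for pattern, reply in SCRIPT:
        if pattern.search(utterance):
            return reply
    return FALLBACK
```

No matter how many pattern–reply pairs are added, the system is still looking up prewritten text; nothing in this architecture understands the conversation, which is the “differing in degree, not kind” point above.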
As the algorithms and hardware improve, Sophia and other AI systems will become better at mimicking human beings and, in doing so, will seem more and more like us. Even now, it is tempting to view Sophia as humanlike. But this tendency has little to do with AI technology. Instead, it stems from our habit of anthropomorphizing animals and even inanimate objects: we often attribute human qualities to nonhuman, nonliving entities. And, undoubtedly, we will do the same for AI systems such as Sophia.
Our tendency to anthropomorphize arises from our theory-of-mind capacity, which is unique to human beings. As human beings, we recognize that other people have minds just like ours, and as a consequence we anticipate what others are thinking and feeling. But we cannot turn off our theory-of-mind abilities, so we attribute human qualities to animals and machines as well. To put it another way, AI systems seem to be self-aware because we have an innate tendency to view them as such, even when they are not.
Ironically, a quality unique to human beings—one that contributes to human exceptionalism and can be understood as a manifestation of the image of God—makes us susceptible to seeing AI systems as sentient “beings.” And because of this tendency, and because of our empathy (which relates to our theory-of-mind capacity), we want to grant AI systems the same rights afforded to us. But when we think carefully about our tendency to anthropomorphize, it should become evident that our proclivity to regard AI systems as humanlike stems from the fact that we are made in God’s image.
AI Systems and the Case for Human Exceptionalism
There is another way that research in AI evinces human exceptionalism. It is provocative to think that human beings are the only species ever to exist with the capability to create machines that are, at least in some sense, like us. Clearly, this achievement is beyond the capabilities of the great apes, and no evidence suggests that Neanderthals could ever have pulled off such a feat. Neanderthals, who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago, existed on Earth longer than modern humans have. Yet our technology has progressed exponentially, while Neanderthal technology remained largely static.
Our ability to create AI systems stems from the capacity for symbolism. As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.
Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God. No other creature, including the great apes or Neanderthals, possesses these two qualities. In short, we can create AI systems because we uniquely bear God’s image.
AI Systems and the Case for Creation
Our ability to create AI systems also provides evidence that we are the product of a Creator’s handiwork. The creation of AI systems requires the work of highly trained scientists and engineers who rely on several hundred years of scientific and technological advances. Creating AI systems requires designing and building highly advanced computer systems, engineering complex robotics systems, and writing sophisticated computer code. In other words, AI systems are intelligently designed. Work in AI thus provides empirical evidence that a mind is required to create a mind, or at least a facsimile of one. And if that is so, the human mind must come from a Mind as well. In light of this conclusion, is it reasonable to think that the human mind arose through unguided, undirected, historically contingent processes?
Developments in AI will undoubtedly lead to important advances that will improve the quality of our lives. And while it is tempting to see AI systems in human terms, these devices are machines—and nothing more. No justification exists for AI systems to be granted the same rights as human beings. In fact, when we think carefully about the nature and origin of AI, these systems highlight our exceptional nature as human beings, evincing the biblical view of humanity.
Only human beings deserve the rights of citizenship because these rights—justifiably called inalienable—are due us because we bear God’s image.
Resources
- Who Was Adam? A Creation Model Approach to the Origin of Humanity by Fazale Rana with Hugh Ross (book)
- Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator by Fazale Rana (book)
- “Brain Synchronization Study Evinces the Image of God” by Fazale Rana (article)
- “Molecular-Scale Robotics Build Case for Design” by Fazale Rana (article)
- “A Theology for Synthetic Biology, Part 1 (of 2)” by Fazale Rana (article)
- “A Theology for Synthetic Biology, Part 2 (of 2)” by Fazale Rana (article)