Does Transhumanism Refute Human Exceptionalism? A Response to Peter Clarke

BY FAZALE RANA – APRIL 3, 2019

I just finished binge-watching Altered Carbon. Based on the 2002 science fiction novel written by Richard K. Morgan, this Netflix original series is provocative, to say the least.

Altered Carbon takes place in the future, where humans can store their personalities as digital files in devices called stacks. These disc-like devices are implanted at the top of the spinal column. When people die, their stacks can be removed from their bodies (called sleeves) and stored indefinitely until they are re-sleeved—if and when another body becomes available to them.

In this world, people who possess extreme wealth can live indefinitely, without ever having to spend any time in storage. Referred to as Meths (after the biblical figure Methuselah, who lived 969 years), the wealthy have the financial resources to secure a continual supply of replacement bodies through cloning. Their wealth also affords them the means to back up their stacks once a day, storing the data in a remote location in case their stacks are destroyed. In effect, Meths use technology to attain a form of immortality.

Forthcoming Posthuman Reality?

The world of Altered Carbon is becoming a reality right before our eyes. Thanks to recent advances in biotechnology and bioengineering, the idea of using technology to help people live indefinitely no longer falls under the purview of science fiction. Emerging technologies such as CRISPR-Cas9 gene editing and brain-computer interfaces offer hope to people suffering from debilitating diseases and injuries. They can also be used for human enhancements—extending our physical, intellectual, and psychological capabilities beyond natural biological limits.

These futuristic possibilities give fuel to a movement known as transhumanism. After residing on the fringes of academia and culture for several decades, the movement has gone mainstream, both in the ivory tower and on the street. Sociologist James Hughes describes the transhumanist vision this way in his book Citizen Cyborg:

“In the twenty-first century the convergence of artificial intelligence, nanotechnology and genetic engineering will allow human beings to achieve things previously imagined only in science fiction. Lifespans will extend well beyond a century. Our senses and cognition will be enhanced. We will gain control over our emotions and memory. We will merge with machines, and machines will become more like humans. These technologies will allow us to evolve into varieties of “posthumans” and usher us into a “transhuman” era and society. . . . Transhuman technologies, technologies that push the boundaries of humanism, can radically improve our quality of life, and . . . we have a fundamental right to use them to control our bodies and minds. But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies.”1


Figure 1: The transhumanism symbol. Image credit: Wikimedia Commons

In short, transhumanists want us to take control of our own evolution, transforming human beings into posthumans and in the process creating a utopian future that carves out a path to immortality.

Depending on one’s philosophical or religious perspective, transhumanists’ vision and the prospects of a posthuman reality can bring excitement or concern or a little bit of both. Should we pursue the use of technology to enhance ourselves, transcending the constraints of our biology? What role should these emerging biotechnologies play in shaping our future? What are the boundaries for developing and using these technologies? Should there be any boundaries?2

All of these questions revolve around a central question: Who are we as human beings?

Are Humans Exceptional?

Prior to the rising influence of transhumanism, the answer to this question followed along one of two lines. For people who hold to a Judeo-Christian worldview, human beings are exceptional, standing apart from all other creatures on the planet. Accordingly, our exceptional nature results from the image of God. As image bearers, human beings have infinite worth and value.

On the other hand, those influenced by the evolutionary paradigm maintain that human beings are nothing more than animals—differing in degree, not kind, from other creatures. In fact, many who hold this view of humanity find the notion of human exceptionalism repugnant. In their view, to elevate the value of human beings above that of other creatures constitutes speciesism and reflects an unjustifiable arrogance.

And now transhumanism enters the fray. People on both sides of the controversy about human nature and identity argue that transhumanism brings an end, once and for all, to any notion of human exceptionalism.

One such person is Peter Clarke. In an article published on the Areo website entitled “Transhumanism and the Death of Human Exceptionalism,” Clarke says:

“As a philosophical movement, transhumanism advocates for improving humanity through genetic modifications and technological augmentations, based upon the position that there is nothing particularly sacred about the human condition. It acknowledges up front that our bodies and minds are riddled with flaws that not only can but should be fixed. Even more radically, as the name implies, transhumanism embraces the potential of one day moving beyond the human condition, transitioning our sentience into more advanced forms of life, including genetically modified humans, superhuman cyborgs, and immortal digital intelligences.”3

On the other side of the aisle is Wesley J. Smith of the Discovery Institute. In his article “Transhumanist Bill of Wrongs,” Smith writes:

“Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines. That requires denigrating natural man as unexceptional to justify our substantial deconstruction and redesign.”4

In other words, transhumanism highlights the notion that our bodies, minds, and personalities are inherently flawed and we have a moral imperative, proponents say, to correct these flaws. But this view denigrates humanity, opponents say, and with it the notion of human exceptionalism. For Clarke, this nonexceptional perspective is something to be celebrated. For Smith, transhumanism is of utmost concern and must be opposed.

Evidence of Exceptionalism

While I am sympathetic to Smith’s concern, I take a different perspective. In fact, I find that transhumanism provides one of the most powerful pieces of evidence for human exceptionalism—and along with it the image of God.

In my forthcoming book (coauthored with Ken Samples), Humans 2.0, I write:

“Ironically, progress in human enhancement technology and the prospects of a posthuman future serve as one of the most powerful arguments for human exceptionalism and, consequently, the image of God. Human beings are the only species that exists—or that has ever existed—that can create technologies to enhance our capabilities beyond our biological limits. We alone work toward effecting our own immortality, take control of evolution, and look to usher in a posthuman world. These possibilities stem from our unique and exceptional capacity to investigate and develop an understanding of nature (including human biology) through science and then turn that insight into technology.”5

Our ability to carry out the scientific enterprise and develop technology stems from four qualities that a growing number of anthropologists and primatologists think are unique to humans:

  • symbolism
  • open-ended generative capacity
  • theory of mind
  • our capacity to form complex social networks

From my perspective as a Christian, these qualities stand as scientific descriptors of the image of God.

As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

Human capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

For anthropologists and primatologists who think that human beings differ in kind—not degree—from other animals, these qualities demarcate us from the great apes and Neanderthals. The separation becomes most apparent when we consider the remarkable technological advances we have made during our tenure as a species. Psychologist Thomas Suddendorf puts it this way:

“We reflect on and argue about our present situation, our history, and our destiny. We envision wonderful harmonious worlds as easily as we do dreadful tyrannies. Our powers are used for good as they are for bad, and we incessantly debate which is which. Our minds have spawned civilizations and technologies that have changed the face of the Earth, while our closest living animal relatives sit unobtrusively in their remaining forests. There appears to be a tremendous gap between human and animal minds.”6

Moreover, no convincing evidence exists that leads us to think that Neanderthals shared the qualities that make us exceptional. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet our technology has progressed exponentially, while Neanderthal technology remained largely static.

According to paleoanthropologist Ian Tattersall and linguist Noam Chomsky (and their coauthors):

“Our species was born in a technologically archaic context, and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language. . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”7

In other words, the evolution of human technology signifies that there is something special—exceptional—about us as human beings. In this sense, transhumanism highlights our exceptional nature precisely because the prospects for controlling our own evolution stem from our ability to advance technology.

To be clear, transhumanism poses an existential risk for humanity. Unquestionably, it has the potential to strip human beings of dignity and worth. But, ironically, transhumanism is possible only because we are exceptional as human beings.

Responsibility as the Crown of Creation

Ultimately, our exceptional nature demands that we thoughtfully deliberate on how to use emerging biotechnologies to promote human flourishing, while ensuring that no human being is exploited or marginalized by these technologies. It also means that we must preserve our identity as human beings at all costs.

It is one thing to enjoy contemplating a posthuman future by binge-watching a sci-fi TV series. It is another thing altogether to live it out. May we be guided by ethical wisdom to live well.

Endnotes
  1. James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future (Cambridge, MA: Westview Press, 2004), xii.
  2. Ken Samples and I take on these questions and more in our book Humans 2.0, due to be published in July of 2019.
  3. Peter Clarke, “Transhumanism and the Death of Human Exceptionalism,” Areo (March 6, 2019), https://areomagazine.com/2019/03/06/transhumanism-and-the-death-of-human-exceptionalism/.
  4. Wesley J. Smith, “Transhumanist Bill of Wrongs,” Discovery Institute (October 23, 2018), https://www.discovery.org/a/transhumanist-bill-of-wrongs/.
  5. Fazale Rana with Kenneth Samples, Humans 2.0: Scientific, Philosophical, and Theological Perspectives on Transhumanism (Covina, CA: RTB Press, 2019) in press.
  6. Thomas Suddendorf, The Gap: The Science of What Separates Us from Other Animals (New York: Basic Books, 2013), 2.
  7. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12, no. 8 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/04/03/does-transhumanism-refute-human-exceptionalism-a-response-to-peter-clarke

Does Development of Artificial Intelligence Undermine Human Exceptionalism?


BY FAZALE RANA – JANUARY 17, 2018
In each case catalytic technologies, such as artificial wombs, the repair of brain injuries with prostheses and the enhancement of animal intelligence, will force us to choose between pre-modern human-racism and the cyborg citizenship implicit in the liberal democratic tradition.
—James Hughes, Citizen Cyborg

On one hand, it appeared to be nothing more than a harmless publicity stunt. On October 25, 2017, Saudi Arabia granted citizenship to Sophia—a lifelike robot powered by artificial intelligence software. This took place at the Future Investment Initiative (FII) conference, held in Riyadh, providing a prime opportunity for Hanson Robotics to showcase its most advanced robotics system to date. It also served as a chance for Saudi Arabia to establish itself as a world leader in AI technology.

But, on the other hand, granting Sophia citizenship establishes a dangerous precedent, serving as a harbinger of a dystopian future in which machines (and animals with enhanced intelligence) are afforded the same rights as human beings. Elevating machines to the same status as human beings threatens to undermine human dignity and worth and, along with them, the biblical conception of humanity.

Still, the notion of granting citizenship to robots makes sense within a materialistic/naturalistic worldview. In this intellectual framework, human beings are largely regarded as biological machines and the human brain as an organic computer. If AI systems can be created with self-awareness and emotional capacity, what makes them any different from human beings? Is a silicon-based computer any different from one made up of organic matter?

For many people, sentience or self-awareness is the key determinant of personhood. And persons are guaranteed rights, whether they are human beings, AI machines, or super-intelligent animals created by genetic engineering or implanting human brain organoids (grown in a lab) into the brains of animals.

In other words, the way we regard AI technology has wide-ranging consequences for how we view and value human life. And while views of AI rooted in a materialistic/naturalistic worldview potentially threaten human dignity, a Christian worldview perspective of AI actually highlights human exceptionalism—in a way that aligns with the biblical concept of the image of God.

Will AI Systems Ever Be Self-Aware?

The linchpin for granting AI citizenship—and the same rights as human beings—is self-awareness.

But are AI systems self-aware? And will they ever be?

From my perspective, the answers to both questions are “no.” To be certain, AI systems are on a steep trajectory toward ever-increasing sophistication. But there is little prospect that they will ever truly be sentient. AI systems are becoming better and better at mimicking human cognitive abilities, emotions, and even self-awareness. But these systems do not inherently possess these capabilities—and I don’t think they ever will.

Researchers create AI systems that mimic human qualities by combining natural-language processing with machine-learning algorithms. In effect, natural-language processing is pattern matching: the AI system employs prewritten scripts that are combined, spliced, and recombined to make its comments and responses to questions seem natural. For example, Sophia performs well when responding to scripted questions. But when questions posed to her go off-script, she often provides nonsensical answers or responds with non sequiturs. These failings reflect the limitations of natural-language processing algorithms.

Undoubtedly, Sophia’s responses will improve thanks to machine-learning protocols, which incorporate new information into the software’s inputs to generate improved outcomes. In fact, through machine-learning algorithms, Sophia is “learning” how to emote, controlling mechanical hardware to produce appropriate facial expressions in response to the comments made by “her” conversation partner. But these improvements will just be more of the same—differing in degree, not kind. They will never propel Sophia, or any AI system, to genuine self-awareness.
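The pattern-matching idea is easy to illustrate. What follows is a minimal, hypothetical sketch in the spirit of the classic ELIZA chatbot; it bears no relation to Sophia’s actual software, and every pattern and reply in it is invented for illustration. On-script input gets a templated reply built from pieces of the user’s own words; anything off-script falls through to a canned non-answer.

```python
import re

# Hypothetical script: each entry pairs a regular-expression pattern with a
# response template. Fragments captured from the user's input are spliced
# into the template, which makes the reply *seem* responsive.
SCRIPT = [
    (r".*\bI feel (.+)", "Why do you feel {0}?"),
    (r".*\byour name\b.*", "My name is Demo Bot."),
    (r".*\bhello\b.*", "Hello! How are you today?"),
]
FALLBACK = "Tell me more."  # off-script input gets a canned non-answer


def respond(utterance: str) -> str:
    """Return a scripted reply by pattern matching, not understanding."""
    for pattern, template in SCRIPT:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            # Splice the captured fragments into the prewritten template.
            return template.format(*match.groups())
    return FALLBACK


print(respond("I feel lonely"))          # on-script: templated reply
print(respond("What is quantum foam?"))  # off-script: canned fallback
```

A machine-learning system differs only in how the mapping from input to output is obtained: it is tuned from data rather than written by hand. In neither case does the system understand what it is saying, which is the point of the paragraph above.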

As the algorithms and hardware improve, Sophia and other AI systems will become better at mimicking human beings and, in doing so, will seem more and more like us. Even now, it is tempting to view Sophia as humanlike. But this tendency has little to do with AI technology. Instead, it reflects our habit of anthropomorphizing animals and even inanimate objects: we often attribute human qualities to nonhuman, nonliving entities. And, undoubtedly, we will do the same for AI systems such as Sophia.

Our tendency to anthropomorphize arises from our theory-of-mind capacity—unique to human beings. As human beings, we recognize that other people have minds just like ours. As a consequence of this capacity, we anticipate what others are thinking and feeling. But we can’t turn off our theory-of-mind abilities. And as a consequence, we attribute human qualities to animals and machines. To put it another way, AI systems seem to be self-aware, because we have an innate tendency to view them as such, even if they are not.

Ironically, a quality unique to human beings—one that contributes to human exceptionalism and can be understood as a manifestation of the image of God—makes us susceptible to seeing AI systems as sentient “beings.” And because of this tendency, and because of our empathy (which relates to our theory-of-mind capacity), we want to grant AI systems the same rights afforded to us. But when we think carefully about our tendency to anthropomorphize, it should become evident that our proclivity to regard AI systems as humanlike stems from the fact that we are made in God’s image.

AI Systems and the Case for Human Exceptionalism

There is another way that research in AI systems evinces human exceptionalism. It is provocative to think that human beings are the only species that has ever existed that has the capability to create machines that are like us—at least, in some sense. Clearly, this achievement is beyond the capabilities of the great apes, and no evidence exists to think that Neanderthals could have ever pulled off a feat such as creating AI systems. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet, our technology has progressed exponentially, while Neanderthal technology remained largely static.

Our ability to create AI systems stems from the capacity for symbolism. As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God. No other creature, including the great apes or Neanderthals, possesses these two qualities. In short, we can create AI systems because we uniquely bear God’s image.

AI Systems and the Case for Creation

Our ability to create AI systems also provides evidence that we are the product of a Creator’s handiwork. The creation of AI systems requires the work of highly trained scientists and engineers who rely on several hundred years of scientific and technological advances. Creating AI systems requires designing and building highly advanced computer systems, engineering complex robotics systems, and writing sophisticated computer code. In other words, AI systems are intelligently designed. Or to put it another way, work in AI provides empirical evidence that a mind is required to create a mind—or, at least, a facsimile of a mind. And this conclusion means that the human mind must come from a Mind, as well. In light of this conclusion, is it reasonable to think that the human mind arose through unguided, undirected, historically contingent processes?

Developments in AI will undoubtedly lead to important advances that will improve the quality of our lives. And while it is tempting to see AI systems in human terms, these devices are machines—and nothing more. No justification exists for AI systems to be granted the same rights as human beings. In fact, when we think carefully about the nature and origin of AI, these systems highlight our exceptional nature as human beings, evincing the biblical view of humanity.

Only human beings deserve the rights of citizenship because these rights—justifiably called inalienable—are due us because we bear God’s image.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/01/17/does-development-of-artificial-intelligence-undermine-human-exceptionalism