Does Animal Planning Undermine the Image of God?

BY FAZALE RANA – JANUARY 23, 2019

A few years ago, we had an all-white English Bulldog named Archie. He would lumber toward even complete strangers, eager to befriend them and earn their affections. And people happily obliged this playful pup.

Archie wasn’t just an adorable dog. He was also well trained. We taught him to ring a bell hanging from a sliding glass door in our kitchen so he could let us know when he wanted to go out. He rarely rang the bell, though. Instead, he would just sit by the door and wait . . . unless the neighbor’s cat was in the backyard. Then, Archie would bang on the bell repeatedly and with great urgency. He had to get to that cat at all costs. Clearly, he understood the bell’s purpose. He simply chose to use it to his own ends.

Anyone who has owned a cat or dog knows that these animals do remarkable things. Animals truly are intelligent creatures.

But some people go so far as to argue that animal intelligence is much more like human intelligence than we might initially believe. They base this claim, in part, on a handful of high-profile studies indicating that some animals, such as great apes and ravens, can problem-solve and even plan for the future—behaviors that make them like us in some important ways.

Great Apes Plan for the Future

In 2006, two anthropologists working in Germany conducted a set of experiments on captive bonobos and orangutans that seemingly demonstrated that these creatures can plan for the future. Specifically, the test subjects selected, transported, and saved tools for use 1 hour and 14 hours later, respectively.1

To begin the study, the researchers trained both bonobos and orangutans to use a tool to get a reward from an apparatus. In the first experiment, the researchers blocked access to the apparatus and laid out eight tools for the apes to choose from—two suitable for the task and six unsuitable. After the apes made their selections, they were ushered into another room, where they were kept for 1 hour before being allowed back in and granted access to the apparatus. To gain the reward, the apes had to select the correct tool and carry it to and from the waiting area. The anthropologists observed that the apes successfully obtained the reward in 70 percent of the trials by selecting the correct tool and hanging on to it as they moved from room to room.

In the second experiment, the delay between tool selection and access to the apparatus was extended to 14 hours. This experiment focused on a single female individual. Instead of taking the test subject to the waiting room, the researchers took her to a sleeping room one floor above the waiting room before returning her to the room with the apparatus. She selected and held on to the tool for the full 14 hours as she moved from room to room in 11 of the 12 trials—each time successfully obtaining the reward.

On the basis of this study, the researchers concluded that great apes have the ability to plan for the future. They also argued that this ability emerged in the common ancestor of humans and great apes around 14 million years ago. So, even though we like to think of planning for the future as one of the “most formidable human cognitive achievements,”2 it doesn’t appear to be unique to human beings.

Ravens Plan for the Future

In 2017, two cognitive scientists from Lund University in Sweden demonstrated that ravens are capable of flexible planning, just as the great apes are.3 They conducted a series of experiments showing that the large black birds can plan for future events and exert self-control for up to 17 hours before using a tool or bartering with humans for a reward. (Self-control is crucial for successfully planning for the future.)

The researchers taught ravens to use a tool to gain a reward from an apparatus. As part of the training phase, the test subjects also learned that other objects wouldn’t work on the apparatus.

In the first experiment, the ravens were exposed to the apparatus without access to tools, so they couldn’t gain the reward. Then the researchers removed the apparatus. One hour later, the ravens were taken to a different location and offered tools; 15 minutes after that, the researchers presented them with the apparatus. On average, the ravens selected and used tools to gain the reward in approximately 80 percent of the trials.

In the next experiment, the ravens were trained to barter by exchanging a token for a food reward. After the training, the ravens were taken to a different location, where a researcher who had no history of bartering with them presented a tray containing the token and three distractor objects. As in the tool-selection experiment, the ravens selected and used the token to successfully barter for food in approximately 80 percent of the trials.

When the scientists modified the experimental design to increase the time delay from 15 minutes to 17 hours between tool or token selection and access to the reward, the ravens successfully completed the task in nearly 90 percent of the trials.

Next, the researchers wanted to determine whether the ravens could exercise self-control as part of their planning for the future. First, they presented the ravens with trays that contained a small food reward. Of course, all of the ravens took the reward. Next, the researchers offered the ravens trays that held the small food reward alongside either tools or tokens plus distractor items. Selecting the tool or the token guaranteed a raven a larger food reward about 15 minutes later. The researchers observed that, instead of taking the small morsel of food, the ravens selected the tool in 75 percent of the trials and the token in about 70 percent.

The researchers concluded that, like the great apes, ravens can plan for the future. Moreover, they argue that this insight opens up greater possibilities for animal cognition because, from an evolutionary perspective, ravens are regarded as avian dinosaurs, and mammals (including the great apes) are thought to have last shared a common ancestor with the dinosaur lineage around 320 million years ago.

Are Humans Exceptional?

In light of these studies (and others like them), it becomes difficult to maintain that human beings are exceptional. Self-control and the ability to flexibly plan for future events are considered by many to be cornerstones of human cognition. Planning for the future requires the mental representation of temporally distant events, the ability to set aside current sensory inputs in favor of unobservable future ones, and an understanding of which current actions lead to a future goal.

For many Christians, myself included, the loss of human exceptionalism is concerning because if this idea is untenable, so, too, is the biblical view of human nature. According to Scripture, human beings stand apart from all other creatures because we bear God’s image. And, because every human being possesses the image of God, every human being has intrinsic worth and value. But if, in essence, human beings are no different from animals, it is challenging to maintain that we are the crown of creation, as Scripture teaches.

Yet recent work by biologist Johan Lind of Stockholm University in Sweden indicates that the results of these two studies (and others like them) may be misleading. In effect, when properly interpreted, these studies pose no threat to human exceptionalism. According to Lind, animals can produce behavior that merely resembles flexible planning through a simpler mechanism: associative learning.4 If so, this insight preserves the case for human exceptionalism and the image of God, because it means that only humans engage in genuine flexible planning for the future through higher-order cognitive processes.

Associative Learning and Planning for the Future

Lind points out that researchers working in artificial intelligence (AI) have long known that associative learning can produce complex behaviors in AI systems that give the appearance of a capacity for planning. (Associative learning is the process that animals [and AI systems] use to establish an association between two stimuli or events, usually through rewards or punishments.)


Figure 1: An illustration of associative learning in dogs. Image credit: Shutterstock
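To make the definition concrete, here is a minimal sketch of this kind of learning, using the textbook Rescorla-Wagner update to model a bell-food pairing like the one in figure 1. The numbers are arbitrary and purely illustrative; this sketch is not taken from Lind's paper.

```python
# A minimal sketch of Pavlovian-style associative learning:
# each bell-food pairing nudges the association strength toward
# the value of the reward (Rescorla-Wagner update).

alpha = 0.3        # learning rate (arbitrary)
reward = 1.0       # value of the food that follows the bell
association = 0.0  # current bell-food association strength

for pairing in range(1, 11):
    association += alpha * (reward - association)
    print(f"pairing {pairing}: association strength = {association:.2f}")
```

The association strength climbs toward the reward value over repeated pairings: no foresight is involved, only an incremental adjustment after each experience.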

Lind wonders why researchers studying animal cognition ignore this work in AI. Applying insights from the work on AI systems, he developed mathematical models based on associative learning and used them to simulate the results of the studies on the great apes and ravens. He discovered that associative learning reproduced the very behaviors the two research teams observed in the great apes and ravens. In other words, planning-like behavior can emerge through associative learning. The same kinds of processes that give AI systems the capacity to beat humans at chess can, through associative learning, account for the planning-like behavior of animals.
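To see how planning-like behavior can fall out of associative learning, consider a toy two-stage simulation loosely inspired by the raven tool task. Everything here (the state names, items, and learning parameters) is invented for illustration; this is not Lind's published model. The key ingredient is conditioned reinforcement: because the tool reliably precedes food, the tool itself acquires value, so the agent learns to pick it at the tray even though that choice is never directly rewarded with food.

```python
# Toy two-stage associative learner (illustrative only, not Lind's model).
# Stage 1: at the "tray," pick one of several items.
# Stage 2: only the tool works on the apparatus and yields food.
# The tray choice is reinforced by the learned value of the tool,
# not by any representation of the future.

import random

ALPHA, EPSILON = 0.2, 0.1
ITEMS = ["tool", "stone", "stick", "ball"]
v = {}  # value estimates for (state, action) pairs

def value(state, action):
    return v.get((state, action), 0.0)

def choose(state, actions):
    if random.random() < EPSILON:                 # occasional exploration
        return random.choice(actions)
    return max(actions, key=lambda a: value(state, a))

def learn(state, action, target):
    v[(state, action)] = value(state, action) + ALPHA * (target - value(state, action))

for trial in range(2000):
    item = choose("tray", ITEMS)                  # stage 1: selection
    if item == "tool":
        learn("apparatus", "use_tool", 1.0)       # stage 2: tool yields food
        # Conditioned reinforcement: the tool's learned value, not food,
        # reinforces the tray-stage choice.
        learn("tray", item, value("apparatus", "use_tool"))
    else:
        learn("tray", item, 0.0)                  # wrong item: no food later

picks = [choose("tray", ITEMS) for _ in range(100)]
print("tool chosen in", picks.count("tool"), "of 100 test trials")
```

In runs of this sketch, the agent ends up choosing the tool in roughly 90 percent of test trials, on the same order as the success rates reported for the animals, without anything resembling mental time travel.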

The results of Lind’s simulations suggest that animals most likely “plan” for the future in ways that are entirely different from the way humans do. In effect, the planning-like behavior of animals is an outworking of associative learning. Humans, on the other hand, uniquely engage in bona fide flexible planning through advanced cognitive processes such as mental time travel.

Humans Are Exceptional

Even though the idea of human exceptionalism is continually under assault, it remains intact, as the latest work by Johan Lind illustrates. When the entire body of evidence is carefully weighed, there really is only one reasonable conclusion: Human beings uniquely possess advanced cognitive abilities that make possible our capacity for symbolism, open-ended generative capacity, theory of mind, and complex social interactions—scientific descriptors of the image of God.


Endnotes
  1. Nicholas J. Mulcahy and Josep Call, “Apes Save Tools for Future Use,” Science 312 (May 19, 2006): 1038–40, doi:10.1126/science.1125456.
  2. Mulcahy and Call, “Apes Save Tools for Future Use.”
  3. Can Kabadayi and Mathias Osvath, “Ravens Parallel Great Apes in Flexible Planning for Tool-Use and Bartering,” Science 357 (July 14, 2017): 202–4, doi:10.1126/science.aam8138.
  4. Johan Lind, “What Can Associative Learning Do for Planning?” Royal Society Open Science 5 (November 28, 2018): 180778, doi:10.1098/rsos.180778.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/01/23/does-animal-planning-undermine-the-image-of-god

Does Development of Artificial Intelligence Undermine Human Exceptionalism?


BY FAZALE RANA – JANUARY 17, 2018
In each case catalytic technologies, such as artificial wombs, the repair of brain injuries with prostheses and the enhancement of animal intelligence, will force us to choose between pre-modern human-racism and the cyborg citizenship implicit in the liberal democratic tradition.
—James Hughes, Citizen Cyborg

On one hand, it appeared to be nothing more than a harmless publicity stunt. On October 25, 2017, Saudi Arabia granted Sophia—a lifelike robot powered by artificial intelligence software—citizenship. This took place at the Future Investment Initiative (FII) conference held in Riyadh, providing a prime opportunity for Hanson Robotics to showcase its most advanced robotics system to date. It also served as a chance for Saudi Arabia to establish itself as a world leader in AI technology.

But, on the other hand, granting Sophia citizenship establishes a dangerous precedent, acting as a harbinger of a dystopian future in which machines (and animals with enhanced intelligence) are afforded the same rights as human beings. Elevating machines to the same status as human beings threatens to undermine human dignity and worth and, along with them, the biblical conception of humanity.

Still, the notion of granting citizenship to robots makes sense within a materialistic/naturalistic worldview. In this intellectual framework, human beings are largely regarded as biological machines and the human brain as an organic computer. If AI systems can be created with self-awareness and emotional capacity, what makes them any different from human beings? Is a silicon-based computer any different from one made up of organic matter?

For many people, sentience or self-awareness is the key determinant of personhood. And persons are guaranteed rights, whether they are human beings, AI machines, or superintelligent animals created by genetic engineering or by implanting lab-grown human brain organoids into animal brains.

In other words, the way we regard AI technology has wide-ranging consequences for how we view and value human life. And while views of AI rooted in a materialistic/naturalistic worldview potentially threaten human dignity, a Christian worldview perspective of AI actually highlights human exceptionalism—in a way that aligns with the biblical concept of the image of God.

Will AI Systems Ever Be Self-Aware?

The linchpin for granting AI citizenship—and the same rights as human beings—is self-awareness.

But are AI systems self-aware? And will they ever be?

From my perspective, the answers to both questions are “no.” To be sure, AI systems are on a steep trajectory toward ever-increasing sophistication, but there is little prospect that they will ever truly be sentient. AI systems are becoming better and better at mimicking human cognitive abilities, emotions, and even self-awareness. But these systems do not inherently possess these capabilities—and I don’t think they ever will.

Researchers create AI systems that mimic human qualities by combining natural-language processing with machine-learning algorithms. In effect, natural-language processing is pattern matching: the AI system employs prewritten scripts that are combined, spliced, and recombined to make the system’s comments and responses to questions seem natural. For example, Sophia performs well when responding to scripted questions. But when the questions posed to her are off-script, she often provides nonsensical answers or responds with non sequiturs. These failings reflect the limitations of the natural-language processing algorithms.

Undoubtedly, Sophia’s responses will improve thanks to machine-learning protocols, which incorporate new information into the software’s inputs to generate improved outcomes. In fact, through machine-learning algorithms, Sophia is “learning” how to emote, controlling mechanical hardware to produce appropriate facial expressions in response to the comments made by “her” conversation partner. But these improvements will just be more of the same—differing in degree, not kind. They will never propel Sophia, or any AI system, to genuine self-awareness.
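A deliberately crude sketch, in the style of early script-based chatbots, illustrates the point. Sophia's actual software is proprietary and far more elaborate; the patterns and replies below are invented. The takeaway is simply that any input the scripts do not anticipate falls through to a canned response, which is where the non sequiturs come from.

```python
# A toy script-matching "chatbot" (invented patterns and replies).
# On-script input hits a prewritten response; off-script input falls
# through to a canned fallback, producing non sequiturs.

import re

SCRIPTS = [
    (r"\bhello\b|\bhi\b", "Hello! It is a pleasure to meet you."),
    (r"\bhow are you\b",  "I am functioning well, thank you for asking."),
    (r"\byour name\b",    "My name is Sophia."),
]
FALLBACK = "That is very interesting. Tell me more."

def respond(utterance: str) -> str:
    for pattern, reply in SCRIPTS:
        if re.search(pattern, utterance.lower()):
            return reply  # on-script: the reply seems natural
    return FALLBACK       # off-script: canned, often a non sequitur

print(respond("Hi there!"))                              # scripted reply
print(respond("What do you think of quantum gravity?"))  # fallback
```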

As the algorithms and hardware improve, Sophia and other AI systems will become better at mimicking human beings and, in doing so, will seem more and more like us. Even now, it is tempting to view Sophia as humanlike. But this tendency has little to do with AI technology. Instead, it reflects our tendency to anthropomorphize animals and even inanimate objects: we often attribute human qualities to nonhuman, nonliving entities. Undoubtedly, we will do the same for AI systems such as Sophia.

Our tendency to anthropomorphize arises from our theory-of-mind capacity, which is unique to human beings. As human beings, we recognize that other people have minds just like ours, and, as a consequence, we anticipate what others are thinking and feeling. But we can’t turn off our theory-of-mind abilities, so we attribute human qualities to animals and machines as well. To put it another way, AI systems seem to be self-aware because we have an innate tendency to view them as such, even when they are not.

Ironically, a quality unique to human beings—one that contributes to human exceptionalism and can be understood as a manifestation of the image of God—makes us susceptible to seeing AI systems as sentient “beings.” Because of this tendency, and because of our empathy (which relates to our theory-of-mind capacity), we want to grant AI systems the same rights afforded to us. But when we think carefully about our tendency to anthropomorphize, it becomes evident that our proclivity to regard AI systems as humanlike stems from the fact that we are made in God’s image.

AI Systems and the Case for Human Exceptionalism

There is another way that research in AI evinces human exceptionalism. It is provocative to think that human beings are the only species that has ever existed with the capability to create machines that are like us—at least, in some sense. Clearly, this achievement is beyond the capabilities of the great apes, and no evidence suggests that Neanderthals could ever have pulled off such a feat. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet our technology has progressed exponentially, while Neanderthal technology remained largely static.

Our ability to create AI systems stems from our capacity for symbolism. As human beings, we effortlessly represent the world with discrete symbols, and we denote abstract concepts with symbols as well. This ability to represent the world symbolically has interesting consequences when coupled with our ability to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to share the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God. No other creature, including the great apes or Neanderthals, possesses these two qualities. In short, we can create AI systems because we uniquely bear God’s image.

AI Systems and the Case for Creation

Our ability to create AI systems also provides evidence that we are the product of a Creator’s handiwork. The creation of AI systems requires the work of highly trained scientists and engineers who rely on several hundred years of scientific and technological advances. Creating AI systems requires designing and building highly advanced computer systems, engineering complex robotics systems, and writing sophisticated computer code. In other words, AI systems are intelligently designed. Or to put it another way, work in AI provides empirical evidence that a mind is required to create a mind—or, at least, a facsimile of a mind. And this conclusion means that the human mind must come from a Mind, as well. In light of this conclusion, is it reasonable to think that the human mind arose through unguided, undirected, historically contingent processes?

Developments in AI will undoubtedly lead to important advances that will improve the quality of our lives. And while it is tempting to see AI systems in human terms, these devices are machines—and nothing more. No justification exists for AI systems to be granted the same rights as human beings. In fact, when we think carefully about the nature and origin of AI, these systems highlight our exceptional nature as human beings, evincing the biblical view of humanity.

Only human beings deserve the rights of citizenship because these rights—justifiably called inalienable—are due us because we bear God’s image.


Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/01/17/does-development-of-artificial-intelligence-undermine-human-exceptionalism