Resurrected Proteins and the Case for Biological Evolution

BY FAZALE RANA – OCTOBER 14, 2013

Recently, a team of researchers from Spain resurrected a 4-billion-year-old version of a protein that belongs to a class of proteins known as thioredoxins. The ability to resurrect ancient proteins using principles integral to the evolutionary paradigm is the type of advance that many scientists point to as evidence for biological evolution. In this article, I discuss how this work can be seamlessly accommodated within a creation/design paradigm.

Presumably, the protein they “brought back to life” matches the version that existed 4 billion years ago.1 By studying the structure and function of the ancient thioredoxin, the research team was able to gain insight into the biology of some of the first life-forms on Earth. This is not the first time biochemists have pulled off this feat. Over the last several years, life scientists have announced the re-creation of a number of ancient proteins.2

The procedure for resurrecting ancient proteins makes use of evolutionary trees that are built using the amino acid sequences of extant proteins. From these trees, scientists infer the structure of the ancestral protein. They then go into the lab and make that protein—and more often than not, the molecule adopts a stable structure with discernible function. It is remarkable to think that scientists can use evolutionary trees to interpolate the probable structure of an ancestral protein and then make a biomolecule that displays function. I truly understand why people would point to this type of work as evidence for biological evolution.

So, how does someone who advocates for intelligent design/creationism make sense of scientists’ ability to resurrect ancient proteins?

For the sake of brevity, I will provide a quick response to this question. For a more detailed discussion of the production of ancient thioredoxins and how I view resurrected proteins from a design/creation model perspective, listen to the August 12, 2013, episode of Science News Flash.

To appreciate a design/creation interpretation of this work, it is important to first understand how scientists determine the amino acid sequence for ancient proteins. Evolutionary biologists make an inference by comparing amino acid sequences of extant proteins. (In this most recent study, scientists compared around 200 thioredoxins from organisms representing all three domains of life.) Based on the patterns of similarities and differences in the sequences, they propose evolutionary relationships among the proteins.

The assumption is that the differences in the amino acid sequences of extant proteins stem from mutations to the genes encoding the proteins. Accordingly, these mutations would be passed on to subsequent generations. As the different lineages diverge, different types of mutations would accrue in the protein-coding genes of the distinct lineages. The branch points, or nodes, in the evolutionary tree would then represent the ancestral protein shared by all proteins found in the lineages that split from that point. Researchers then infer the most likely amino acid sequence of the ancestral protein by working backwards from the extant amino acid sequences of proteins that fall along the branches stemming from the node.
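
To give a feel for how this backwards inference works, consider a minimal sketch in Python of one classic approach, Fitch parsimony, applied to a single position in a protein alignment. This is a toy illustration of my own devising (the thioredoxin researchers used more sophisticated maximum-likelihood methods), and the tree topology and amino acids below are invented.

```python
# Toy ancestral-state inference (Fitch parsimony) for one alignment
# column on a fixed tree. Real studies use maximum-likelihood methods
# over whole alignments; this sketch only conveys the backwards logic.

def fitch(node):
    """Return the parsimony state set for a node.

    A node is either a leaf (a set holding one observed amino acid) or
    a tuple of two child nodes. The intersection of the child sets is
    preferred; when it is empty, fall back to the union (each such
    fallback implies one inferred substitution).
    """
    if isinstance(node, set):  # leaf: observed amino acid
        return node
    left, right = (fitch(child) for child in node)
    return (left & right) or (left | right)

# Hypothetical column: four extant thioredoxins carry D, D, E, and D
# at the same aligned position, on the tree ((1,2),(3,4)).
tree = (({"D"}, {"D"}), ({"E"}, {"D"}))
print(fitch(tree))  # {'D'} -- the inferred ancestral residue
```

Working leaf by leaf toward the root reconstructs the most parsimonious residue at each node; repeating this for every aligned position yields a full candidate sequence for the ancestral protein.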

At this juncture, it is important to note that evolutionary biologists actively choose to interpret the similarities and differences in the amino acid sequences of extant proteins from an evolutionary perspective. I maintain that it is equally valid to interpret the sequence similarities and differences from a design/creation standpoint. With this approach, the archetype takes the place of the common ancestor, and the differences in the amino acid sequences represent variations around an archetypical design shared by all the proteins that are members of a particular family, such as the thioredoxins. In light of this concept, it is interesting that the researchers discovered the structure of ancient thioredoxins to be highly conserved moving back through time, with only limited variation around a core design.

What about the process for determining the ancestral/archetypical sequence from an evolutionary tree? Doesn’t this fact run contrary to a design explanation?

Not necessarily. Consider the variety of automobiles that exist. These vehicles are all variants of an archetypical design. Even though automobiles are the products of intelligent agents, they can be organized into an “evolutionary tree” based on design similarities and differences. In this case, the nodes in the tree represent the core design of the automobiles that are found on the branches that arise from the node.

By analogy, one could also regard the extant members of a protein family as the work of a Designer. Just like automobiles, the protein variants can be organized into a tree-like diagram. In this case the nodes correspond to the common design elements of the proteins found on the branches.
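
To see how design-neutral the tree-building step is, here is a brief sketch (my own construction, with invented sequences) showing how standard hierarchical clustering organizes any set of variants, designed or otherwise, into a tree-like diagram.

```python
# Build a tree-like grouping from pairwise differences among variants.
# The four "protein" sequences are invented for illustration; real
# analyses use full alignments of hundreds of sequences.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

names = ["variant_A", "variant_B", "variant_C", "variant_D"]
seqs = ["ACDEFGHIK", "ACDEFGHIR", "ACNEYGHIR", "ACNEYGHIK"]

# Encode each sequence as a vector of character codes and measure the
# Hamming distance (fraction of differing positions) between vectors.
X = np.array([[ord(c) for c in s] for s in seqs])
Z = linkage(pdist(X, metric="hamming"), method="average")

# Each internal node of the resulting tree corresponds to a shared core
# design (or, under the evolutionary reading, a common ancestor).
tree = dendrogram(Z, labels=names, no_plot=True)
print(tree["ivl"])  # leaf ordering after grouping
```

Nothing in this procedure presupposes descent with modification; it only requires that the variants share a common pattern, which is precisely what an archetype provides.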

In my view, when evolutionary biologists uncover what they believe to be the ancestral sequence of a protein family, they are really identifying the archetypical design around which the members of that family vary.

Endnotes

  1. Alvaro Ingles-Prieto et al., “Conservation of Protein Structure over Four Billion Years,” Structure 21 (September 3, 2013): 1690–97.
  2. For example, see Michael J. Harms and Joseph W. Thornton, “Analyzing Protein Structure and Function Using Ancestral Gene Reconstruction,” Current Opinion in Structural Biology 20 (June 2010): 360–66.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/todays-new-reason-to-believe/read/tnrtb/2013/10/15/resurrected-proteins-and-the-case-for-biological-evolution

Endosymbiont Hypothesis and the Ironic Case for a Creator

BY FAZALE RANA – DECEMBER 12, 2018

i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Wryly enough, though, this has become the case. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But, they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, the strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes a transporter protein that can pump thiamin into the cell from the exterior environment, the bacterium can grow if an external supply of thiamin is available.) Once the bacteria are incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes that exogenous source, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, E. coli becomes established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cell.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from the E. coli endosymbionts, which had the engineered capacity to transport ATP from their cellular interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doublings. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply deficient E. coli strains (unable to make thiamin, NAD, or serine) survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.

Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

Resources

Endnotes

  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018), doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/12/endosymbiont-hypothesis-and-the-ironic-case-for-a-creator

Did Neanderthals Start Fires?

BY FAZALE RANA – DECEMBER 5, 2018

It is one of the most iconic Christmas songs of all time.

Written by Bob Wells and Mel Torme in the summer of 1945, “The Christmas Song” (subtitled “Chestnuts Roasting on an Open Fire”) was crafted in less than an hour. As the story goes, Wells and Torme were trying to stay cool during the blistering summer heat by thinking cool thoughts and then jotting them down on paper. And, in the process, “The Christmas Song” was born.

Many of the song’s lyrics evoke images of winter, particularly around Christmastime. But none has come to exemplify the quiet peace of a Christmas evening more than the song’s first line, “Chestnuts roasting on an open fire . . . ”

Gathering around the fire to stay warm, to cook food, and to share in a community has been an integral part of the human experience throughout history—including human prehistory. Most certainly our ability to master fire played a role in our survival as a species and in our ability as human beings to occupy and thrive in some of the world’s coldest, harshest climates.

But fire use is not limited to modern humans. There is strong evidence that Neanderthals made use of fire. But did these creatures have control over fire in the same way we do? In other words, did Neanderthals master fire? Or did they merely make opportunistic use of natural fires? These questions are hotly debated by anthropologists today, and they contribute to a broader discussion about the cognitive capacity of Neanderthals. Part of that discussion includes whether these creatures were cognitively inferior to us or whether they were our intellectual equals.

In an attempt to answer these questions, a team of researchers from the Netherlands and France characterized the microwear patterns on bifacial (having opposite sides that have been worked on to form an edge) tools made from flint recovered from Neanderthal sites, and concluded that the wear patterns suggest that these hominins used pyrite to repeatedly strike the flint. This process generates sparks that can be used to start fires.1 To put it another way, the researchers concluded that Neanderthals had mastery over fire because they knew how to start fires.

Figure 1: Biface tools for cutting or scraping. Image credit: Shutterstock

However, a closer examination of the evidence along with results of other studies, including recent insight into the cause of Neanderthal extinction, raises significant doubts about this conclusion.

What Do the Microwear Patterns on Flint Say?

The investigators focused on the microwear patterns of flint bifaces recovered from Neanderthal sites as a marker for fire mastery because of the well-known practice among hunter-gatherers and pastoralists of striking flint with pyrite (an iron disulfide mineral) to generate sparks to start fires. Presumably, the first modern humans also used this technique to start fires.

Figure 2: Starting a fire with pyrite and flint. Image credit: Shutterstock

The research team reasoned that if Neanderthals started fires, they would use a similar tactic. Careful examination of the microwear patterns on the bifaces led the research team to conclude that these tools were repeatedly struck by hard materials, with the strikes all occurring in the same direction along the bifaces’ long axis.

The researchers then tried to experimentally recreate the microwear pattern in a laboratory setting. To do so, they struck biface replicas with a number of different types of materials, including pyrites, and concluded that the patterns produced by the pyrite strikes most closely matched the patterns on the bifaces recovered from Neanderthal sites. On this basis, the researchers claim that they have found evidence that Neanderthals deliberately started fires.
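
In computational terms, this matching step resembles a nearest-neighbor comparison between wear-pattern features. The sketch below is my own construction, with invented feature names and numbers; it illustrates the general logic, not the analysis pipeline the study actually used.

```python
# Compare a wear-feature vector measured on an artifact against
# reference vectors produced by striking replicas with known materials,
# then report the closest match. All values here are invented.
import numpy as np

references = {
    "pyrite": np.array([0.9, 0.7, 0.2]),  # hypothetical features, e.g.
    "bone": np.array([0.2, 0.3, 0.8]),    # striation depth,
    "flint": np.array([0.5, 0.9, 0.4]),   # directionality, polish
}
artifact = np.array([0.85, 0.65, 0.25])

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(references, key=lambda name: cosine(artifact, references[name]))
print(best)  # -> pyrite, for this invented example
```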

Did Neanderthals Master Fire?

While this conclusion is possible, at best this study provides circumstantial, not direct, evidence for Neanderthal mastery of fire. In fact, other evidence counts against this conclusion. For example, bifaces with the same type of microwear patterns have been found at other Neanderthal sites, locales that show no evidence of fire use. These bifaces would have had a range of usages, including butchery of the remains of dead animals. So, it is possible that these tools were never used to start fires—even at sites with evidence for fire usage.

Another challenge to the conclusion comes from the failure to detect any pyrite on the bifaces recovered from the Neanderthal sites. Flint recovered from modern human sites shows visible evidence of pyrite. And yet the research team failed to detect even trace amounts of pyrite on the Neanderthal bifaces during the course of their microanalysis.

This observation raises further doubt about whether the flint from the Neanderthal sites was used as a fire starter tool. Rather, it points to the possibility that Neanderthals struck the bifaces with materials other than pyrite for reasons not yet understood.

The conclusion that Neanderthals mastered fire also does not square with results from other studies. For example, a careful assessment of archaeological sites in southern France occupied by Neanderthals from about 100,000 to 40,000 years ago indicates that Neanderthals could not create fire. Instead, these hominins made opportunistic use of natural fire when it was available to them.2

These French sites do show clear evidence of Neanderthal fire use, but when researchers correlated the archaeological layers displaying evidence for fire use with the paleoclimate data, they found an unexpected pattern. Neanderthals used fire during warm climate conditions and failed to use fire during cold periods—the opposite of what would be predicted if Neanderthals had mastered fire.

Lightning strikes that would generate natural fires are much more likely to occur during warm periods. Instead of creating fire, Neanderthals most likely harnessed natural fire and cultivated it as long as they could before it extinguished.

Another study also raises questions about the ability of Neanderthals to start fires.3 This research indicates that cold climates triggered Neanderthal extinctions. By studying the chemical composition of stalagmites in two Romanian caves, an international research team concluded that there were two prolonged and extremely cold periods between 44,000 and 40,000 years ago. (The chemical composition of stalagmites varies with temperature.)

The researchers also noted that during these cold periods, the archaeological record for Neanderthals disappears. They interpret this disappearance to reflect a dramatic reduction in Neanderthal population numbers. Researchers speculate that when this population downturn took place during the first cold period, modern humans made their way into Europe. Being better suited for survival in the cold climate, modern human numbers increased. When the cold climate abated, Neanderthals were unable to recover their numbers because of the growing populations of modern humans in Europe. Presumably, after the second cold period, Neanderthal numbers dropped to the point that they couldn’t recover, and, hence, they became extinct.

But why would modern humans be more capable than Neanderthals of surviving under extremely cold conditions? It seems as if it should be the other way around. Neanderthals had a hyper-polar body design that made them ideally suited to withstand cold conditions. Neanderthal bodies were stout and compact, with barrel-shaped torsos and shorter limbs that helped them retain body heat. Their noses were long and their sinus cavities extensive, which helped them warm the cold air they breathed before it reached their lungs. But despite this advantage, Neanderthals died out and modern humans thrived.

Some anthropologists believe that the survival discrepancy could be due to dietary differences. Some data indicates that modern humans had a more varied diet than Neanderthals. Presumably, these creatures primarily consumed large herbivores—animals that disappeared when the climatic conditions turned cold, thereby threatening Neanderthal survival. On the other hand, modern humans were able to adjust to the cold conditions by shifting their diets.

But could there be a different explanation? Could it be that with their mastery of fire, modern humans were able to survive cold conditions? And did Neanderthals die out because they could not start fires?

Taken in its entirety, the data seems to indicate that Neanderthals lacked mastery of fire but could use it opportunistically. And, in a broader context, the data indicates that Neanderthals were cognitively inferior to humans.

What Difference Does It Make?

One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again. Now it’s time to light a fire in my fireplace and enjoy a few contemplative moments thinking about the real meaning of Christmas.

Resources

Endnotes

  1. A. C. Sorensen, E. Claud, and M. Soressi, “Neanderthal Fire-Making Technology Inferred from Microwear Analysis,” Scientific Reports 8 (July 19, 2018): 10065, doi:10.1038/s41598-018-28342-9.
  2. Dennis M. Sandgathe et al., “Timing of the Appearance of Habitual Fire Use,” Proceedings of the National Academy of Sciences, USA 108 (July 19, 2011), E298, doi:10.1073/pnas.1106759108; Paul Goldberg et al., “New Evidence on Neandertal Use of Fire: Examples from Roc de Marsal and Pech de l’Azé IV,” Quaternary International 247 (2012): 325–40, doi:10.1016/j.quaint.2010.11.015; Dennis M. Sandgathe et al., “On the Role of Fire in Neandertal Adaptations in Western Europe: Evidence from Pech de l’Azé IV and Roc de Marsal, France,” PaleoAnthropology (2011): 216–42, doi:10.4207/PA.2011.ART54.
  3. Michael Staubwasser et al., “Impact of Climate Change on the Transition of Neanderthals to Modern Humans in Europe,” Proceedings of the National Academy of Sciences, USA 115 (September 11, 2018): 9116–21, doi:10.1073/pnas.1808647115.

Spider Silk Inspires New Technology and the Case for a Creator

BY FAZALE RANA – NOVEMBER 28, 2018

Mark your calendars!

On December 14th (2018), Columbia Pictures—in collaboration with Sony Pictures Animation—will release a full-length animated feature: Spider-Man: Into the Spider-Verse. The story features Miles Morales, an Afro-Latino teenager, as Spider-Man.

Morales accidentally becomes transported from his universe to ours, where Peter Parker is Spider-Man. Parker meets Morales and teaches him how to be Spider-Man. Along the way, they encounter different versions of Spider-Man from alternate dimensions. All of them team up to save the multiverse and to find a way to return to their own versions of reality.

What could be better than that?

In 1962, Spider-Man’s creators, Stan Lee and Steve Ditko, drew inspiration for their superhero from the amazing abilities of spiders. And today, engineers find similar inspiration, particularly when it comes to spider silk. The remarkable properties of spider silk are leading to the creation of new technologies.

Synthetic Spider Silk

Engineers are fascinated by spider silk because this material displays astonishingly high tensile strength and ductility (pliability), properties that allow it to absorb huge amounts of energy before breaking. Only one-sixth the density of steel, spider silk can be up to four times stronger, on a per weight basis.
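
A quick back-of-the-envelope calculation shows how those two figures combine into a per-weight comparison. The material values below are representative assumptions of mine (both silk and steel vary widely from source to source), not measurements from any particular study.

```python
# Specific strength (tensile strength divided by density) for spider
# silk versus steel. All four values are assumed, representative
# figures chosen only to illustrate the per-weight comparison.
silk_strength_pa = 1.1e9    # ~1.1 GPa dragline silk (assumed)
silk_density = 1300         # kg/m^3, roughly one-sixth of steel
steel_strength_pa = 1.6e9   # high-strength steel wire (assumed)
steel_density = 7850        # kg/m^3

silk_specific = silk_strength_pa / silk_density     # ~8.5e5 (Pa·m^3/kg)
steel_specific = steel_strength_pa / steel_density  # ~2.0e5 (Pa·m^3/kg)
print(f"per-weight ratio: {silk_specific / steel_specific:.1f}x")  # ~4x
```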

By studying this remarkable substance, engineers hope that they can gain insight and inspiration to engineer next-generation materials. According to Northwestern University researcher Nathan C. Gianneschi, who is attempting to produce synthetic versions of spider silk, “One cannot overstate the potential impact on materials and engineering if we can synthetically replicate the natural process to produce artificial fibers at scale. Simply put, it would be transformative.”1

Gregory P. Holland of San Diego State University, one of Gianneschi’s collaborators, states, “The practical applications for materials like this are essentially limitless.”2 As a case in point, synthetic versions of spider silk could be used to make textiles for military personnel and first responders and to make construction materials such as cables. They would also have biomedical utility and could be used to produce environmentally friendly plastics.

The Quest to Create Synthetic Spider Silk

But things aren’t that simple. Even though life scientists and engineers understand the chemical structure of spider silk and how its structural features influence its mechanical properties, they have not been able to create synthetic versions of it with the same set of desired properties.

Figure 1: The Molecular Architecture of Spider Silk. Fibers of spider silk consist of proteins that contain crystalline regions separated by amorphous regions. The crystals form from regions of the protein chain that fold into structures called beta-sheets. These beta-sheets stack together to give the spider silk its tensile strength. The amorphous regions give the silk fibers ductility. Image credit: Chen-Pan Liao.

Researchers working to create synthetic spider silk speculate that the process by which the spider spins the silk may play a critical role in establishing the biomaterial’s tensile strength and ductility. Before it is extruded, silk exists in a precursor form in the silk gland. Researchers think that the key to generating synthetic spider silk with the same properties as naturally formed spider silk may be found by mimicking the structure of the silk proteins in precursor form.

Previous work suggests that the proteins that make up spider silk exist as simple micelles in the silk gland and that when spun from this form, fibers with greater-than-steel strength are formed. But researchers’ attempts to apply this insight in a laboratory setting failed to yield synthetic silk with the desired properties.

The Structure of Spider Silk Precursors

Hoping to help unravel this problem, a team of American collaborators led by Gianneschi and Holland recently provided a detailed characterization of the structure of the silk protein precursors in spider glands.3 They discovered that the silk proteins form micelles, but the micelles aren’t simple. Instead, they assemble into a complex structure comprised of a hierarchy of subdomains. Researchers also learned that when they sheared these nanoassemblies of precursor proteins, fibers formed. If they can replicate these hierarchical nanostructures in the lab, researchers believe they may be able to construct synthetic spider silk with the long-sought-after tensile strength and ductility.

Biomimetics and Bioinspiration

Attempts to find inspiration for new technology are not limited to spider silk. It has become rather commonplace for engineers to employ insights from arthropod biology (which includes spiders and insects) to solve engineering problems and to inspire the invention of new technologies—even technologies unlike anything found in nature. In fact, I discuss this practice in an essay I contributed to the book God and the World of Insects.

This activity falls under the domain of two relatively new and exciting areas of engineering known as biomimetics and bioinspiration. As the names imply, biomimetics involves direct mimicry of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise.

The Converse Watchmaker Argument for God’s Existence

The idea that biological designs can inspire engineering and technology advances is highly provocative. It highlights the elegant designs found throughout the living realm. In the case of spider silk, design elegance is not limited to the structure of spider silk but extends to its manufacturing process as well—one that still can’t be duplicated by engineers.

The elegance of these designs makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument. (For a detailed discussion see the essay I contributed to the book Building Bridges, entitled, “The Inspirational Design of DNA.”)

The argument can be stated like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models for inspiring the development of new technologies. Indeed, this scenario is what scientists observe in nature. Therefore, it becomes reasonable to think that biological designs are the work of a Creator.

Biomimetics and the Challenge to the Evolutionary Paradigm

From my perspective, the use of biological designs to guide engineering efforts seems fundamentally at odds with evolutionary theory. Generally speaking, evolutionary biologists view biological systems as the products of an unguided, historically contingent process that co-opts preexisting systems to cobble together new ones. Evolutionary mechanisms can optimize these systems, but even then they are, in essence, still kludges.

Given the unguided nature of evolutionary mechanisms, does it make sense for engineers to rely on biological systems to solve problems and inspire new technologies? Is it in alignment with evolutionary beliefs to build an entire subdiscipline of engineering upon mimicking biological designs? I would argue that these engineering subdisciplines do not fit with the evolutionary paradigm.

On the other hand, biomimetics and bioinspiration naturally flow out of a creation model approach to biology. Using designs in nature to inspire engineering only makes sense if these designs arose from an intelligent Mind, whether in this universe or in any of the dimensions of the Spider-Verse.

Resources

Endnotes

  1. Northwestern University, “Mystery of How Black Widow Spiders Create Steel-Strength Silk Webs further Unravelled,” Phys.org, Science X, October 22, 2018, https://phys.org/news/2018-10-mystery-black-widow-spiders-steel-strength.html.
  2. Northwestern University, “Mystery of How Black Widow Spiders Create.”
  3. Lucas R. Parent et al., “Hierarchical Spidroin Micellar Nanoparticles as the Fundamental Precursors of Spider Silks,” Proceedings of the National Academy of Sciences, USA (October 2018), doi:10.1073/pnas.1810203115.

Vocal Signals Smile on the Case for Human Exceptionalism

BY FAZALE RANA – NOVEMBER 21, 2018

Before Thanksgiving each year, those of us who work at Reasons to Believe (RTB) headquarters take part in an annual custom. We put our work on pause and use that time to call donors, thanking them for supporting RTB’s mission. (It’s a tradition we have all come to love, by the way.)

Before we start making our calls, our ministry advancement team leads a staff meeting to organize our efforts. And each year at these meetings, they remind us to smile when we talk to donors. I always found this to be an odd piece of advice, but they insist that when we talk to people, our smiles come across over the phone.

Well, it turns out that the helpful advice of our ministry advancement team has scientific merit, based on a recent study from a team of neuroscientists and psychologists from France and the UK.1 This research highlights the importance of vocal signaling for communicating emotions between people. And from my perspective, the work also supports the notion of human exceptionalism and the biblical concept of the image of God.

We Can Hear Smiles

The research team was motivated to perform this study in order to learn the role vocal signaling plays in social cognition. They chose to focus on auditory “smiles,” because, as these researchers point out, smiles are among the most powerful facial expressions and one of the earliest to develop in children. As I am sure we all know, smiles express positive feelings and are contagious.

When we smile, our zygomaticus major muscle contracts bilaterally and causes our lips to stretch. This stretching alters the sounds of our voices. So, the question becomes: Can we hear other people when they smile?

Figure 1: Zygomaticus major. Image credit: Wikipedia

To determine if people can “hear” smiles, the researchers recorded actors who spoke a range of French phonemes, with and without smiling. Then, they modeled the changes in the spectral patterns that occurred in the actors’ voices when they smiled while they spoke.

The researchers used this model to manipulate recordings of spoken sentences so that they would sound like they were spoken by someone who was smiling (while keeping other features such as pitch, content, speed, gender, etc., unchanged). Then, they asked volunteers to rate the “smiley-ness” of voices before and after manipulation of the recordings. They found that the volunteers could distinguish the transformed phonemes from those that weren’t altered.
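
For readers curious about what such a manipulation might look like computationally, here is a deliberately crude stand-in of my own. The researchers’ actual transformation modeled the spectral consequences of lip stretching in detail; the toy filter below merely applies a gentle high-frequency emphasis, capturing only the rough idea that smiling shifts spectral energy upward. The function name and all parameter values are assumptions.

```python
# Toy "smile" filter: boost high frequencies with a gentle spectral
# tilt. This is NOT the transformation used in the study, only a rough
# stand-in for the idea that smiling raises spectral energy.
import numpy as np

def smileify(signal, sample_rate, tilt_db_per_octave=2.0):
    """Apply a mild high-frequency emphasis (assumed parameter values)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Gain grows with log2 of frequency; unity gain at the 500 Hz anchor.
    octaves = np.log2(np.maximum(freqs, 1.0) / 500.0)
    gain = 10 ** (tilt_db_per_octave * octaves / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Usage, with a synthetic tone standing in for recorded speech:
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)
smiley_voice = smileify(voice, sr)
```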

Next, they asked the volunteers to mimic the sounds of the “smiley” phonemes. The researchers noted that for the volunteers to do so, they had to smile.

Following these preliminary experiments, the researchers asked volunteers to describe their emotions when listening to transformed phonemes compared to those that weren’t transformed. They found that when volunteers heard the altered phonemes, they expressed a heightened sense of joy and irony.

Lastly, the researchers used electromyography to monitor the volunteers’ facial muscles so that they could detect smiling and frowning as the volunteers listened to a set of 60 sentences—some manipulated (to sound as if they were spoken by someone who was smiling) and some unaltered. They found that when the volunteers judged speech to be “smiley,” they were more likely to smile and less likely to frown.

In other words, people can detect auditory smiles and respond by mimicking them with smiles of their own.

Auditory Signaling and Human Exceptionalism

This research demonstrates that both the visual and auditory clues we receive from other people help us to understand their emotional state and to become influenced by it. Our ability to see and hear smiles helps us develop empathy toward others. Undoubtedly, this trait plays an important role in our ability to link our minds together and to form complex social structures—two characteristics that some anthropologists believe contribute to human exceptionalism.

The notion that human beings differ in degree, not kind, from other creatures has been a mainstay concept in anthropology and primatology for over 150 years. And it has been the primary reason why so many people have abandoned the belief that human beings bear God’s image.

Yet, this stalwart view in anthropology is losing its mooring, with the concept of human exceptionalism taking its place. A growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not merely degree, from other creatures—including Neanderthals. Ironically, the scientists who argue for this updated perspective have developed evidence for human exceptionalism in their attempts to understand how the human mind evolved. And, yet, these new insights can be used to marshal support for the biblical conception of humanity.

Anthropologists identify at least four interrelated qualities that make us exceptional: (1) symbolism, (2) open-ended generative capacity, (3) theory of mind, and (4) our capacity to form complex social networks.

Human beings effortlessly represent the world with discrete symbols and use those symbols to denote abstract concepts. Our ability to represent the world symbolically and to combine and recombine those symbols in countless ways to create alternate possibilities has interesting consequences. Human capacity for symbolism manifests in the form of language, art, music, and body ornamentation. And humans alone desire to communicate the scenarios we construct in our minds with other people.

But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together and we can do so because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also possess the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks.

Thus, I would contend that our ability to hear people’s smiles plays a role in theory of mind and our sophisticated social capacities. It contributes to human exceptionalism.

In effect, these four qualities could be viewed as scientific descriptors of the image of God. In other words, evidence for human exceptionalism is evidence that human beings bear God’s image.

So, even though many people in the scientific community promote a view of humanity that denigrates the image of God, scientific evidence and common-day experience continually support the notion that we are unique and exceptional as human beings. It makes me grin from ear to ear to know that scientific investigations into our cognitive and behavioral capacities continue to affirm human exceptionalism and, with it, the image of God.

Indeed, we are the crown of creation. And that makes me thankful!

Resources

Endnotes

  1. Pablo Arias, Pascal Belin, and Jean-Julien Aucouturier, “Auditory Smiles Trigger Unconscious Facial Imitation,” Current Biology 28 (July 23, 2018): R782–R783, doi:10.1016/j.cub.2018.05.084.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/11/21/vocal-signals-smile-on-the-case-for-human-exceptionalism

When Did Modern Human Brains—and the Image of God—Appear?

BY FAZALE RANA – NOVEMBER 14, 2018

When I was a kid, I enjoyed reading Ripley’s Believe It or Not! I couldn’t get enough of the bizarre facts described in the pages of this comic.

I was especially drawn to the panels depicting people who had oddly shaped heads. I found it fascinating to learn about people whose skulls were purposely forced into unnatural shapes by a practice known as intentional cranial deformation.

For the most part, this practice is a thing of the past. It is rarely performed today (though there are still a few people groups who carry out this procedure). But for much of human history, cultures all over the world have artificially deformed people’s crania (often for reasons yet to be fully understood). They accomplished this feat by binding the heads of infants, which distorts the normal growth of the skull. Through this practice, the shape of the human head can be readily altered to be abnormally flat, elongated, rounded, or conical.

Figure 1: Deformed ancient Peruvian skull. Image credit: Shutterstock.

It is remarkable that the human skull is so malleable. Believe it, or not!

Figure 2: Parts of the human skull. Image credit: Shutterstock.

For physical anthropologists, the normal shape of the modern human skull is just as bizarre as the conical-shaped skulls found among the remains of the Nazca culture of Peru. Compared to other hominins (such as Neanderthals and Homo erectus), modern humans have oddly shaped skulls. The skulls of these other hominins were elongated along the anterior-posterior axis, but the skull shape of modern humans is globular, with bulging and enlarged parietal and cerebellar areas. The modern human skull also has another distinctive feature: the face is retracted and relatively small.

Figure 3: Comparison of modern human and Neanderthal skulls. Image credit: Wikipedia.

Anthropologists believe that the difference in skull shape (and hence, brain shape) has profound significance and helps explain the advanced cognitive abilities of modern humans. The parietal lobe of the brain is responsible for:

  • Perception of stimuli
  • Sensorimotor transformation (which plays a role in planning)
  • Visuospatial integration (which provides hand-eye coordination needed for throwing spears and making art)
  • Imagery
  • Self-awareness
  • Working and long-term memory

Human beings seem to uniquely possess these capabilities. They make us exceptional compared to other hominins. Thus, for paleoanthropologists, two key questions are: when and how did the globular human skull appear?

Recently, a team of researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, addressed these questions. And their answers add evidence for human exceptionalism while unwittingly providing support for the RTB human origins model.1

The Appearance of the Modern Human Brain

To characterize the mode and tempo for the origin of the unusual morphology (shape) of the modern human skull, the German researchers generated and analyzed the CT scans of 20 fossil specimens representing three windows of time: (1) 300,000 to 200,000 years ago; (2) 130,000 to 100,000 years ago; and (3) 35,000 to 10,000 years ago. They also included 89 cranially diverse skulls from present-day modern humans, 8 Neanderthal skulls, and 8 from Homo erectus in their analysis.

The first group consisted of three specimens: (1) Jebel Irhoud 1 (dating to 315,000 years in age); (2) Jebel Irhoud 2 (also dating to 315,000 years in age); and (3) Omo Kibish (dating to 195,000 years in age). The specimens that comprise this group are variously referred to as near anatomically modern humans or archaic Homo sapiens.

The second group consisted of four specimens: (1) LH 18 (dating to 120,000 years in age); (2) Skhul (dating to 115,000 years in age); (3) Qafzeh 6; and (4) Qafzeh 9 (both dating to about 115,000 years in age). This group consists of specimens typically considered to be anatomically modern humans. The third group consisted of thirteen specimens that are all considered to be anatomically and behaviorally modern humans.

Researchers discovered that the group one specimens had facial features like that of modern humans. They also had brain sizes that were similar to Neanderthals and modern humans. But their endocranial shape was unlike that of modern humans and appeared to be intermediate between H. erectus and Neanderthals.

On the other hand, the specimens from group two displayed endocranial shapes that clustered with the group three specimens and the present-day samples. In short, modern human skull morphology (and brain shape) appeared between 130,000 to 100,000 years ago.
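
The analysis behind a claim like this reduces each skull’s shape to a vector of measurements and then asks which specimens cluster together. Here is a minimal sketch of that logic using principal component analysis; every number below is invented, whereas the actual study used Procrustes-aligned 3D landmarks derived from the CT scans.

```python
# Sketch of shape clustering: project per-skull "shape vectors" onto
# their first principal component and look for separation. The data
# are simulated; the real study used 3D endocranial landmarks.
import numpy as np

rng = np.random.default_rng(0)

# Invented 5-dimensional shape vectors: archaic specimens scattered
# around one centroid, modern specimens around another.
archaic = rng.normal(loc=0.0, scale=0.3, size=(6, 5))
modern = rng.normal(loc=1.5, scale=0.3, size=(8, 5))
X = np.vstack([archaic, modern])

# Principal component analysis via singular value decomposition.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]  # each specimen's score on the first component

# The scores fall into two well-separated groups, mirroring how modern
# endocranial shapes cluster apart from archaic ones in the paper.
print(pc1.round(2))
```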

Confluence of Evidence Locates Humanity’s Origin

This result aligns with several recent archaeological finds that place the origin of symbolism in the same window of time represented by the group two specimens. (See the Resources section for articles detailing some of these finds.) Symbolism—the capacity to represent the world and abstract ideas with symbols—appears to be an ability that is unique to modern humans and is most likely a manifestation of the modern human brain shape, specifically an enlarged parietal lobe.

Likewise, this result coheres with the most recent dates for mitochondrial Eve and Y-chromosomal Adam around 120,000 to 150,000 years ago. (Again, see the Resources section for articles detailing some of these finds.) In other words, the confluence of evidence (anatomical, behavioral, and genetic) pinpoints the origin of modern humans (us) between 150,000 to 100,000 years ago, with the appearance of modern human anatomy coinciding with the appearance of modern human behavior.

What Does This Finding Mean for the RTB Human Origins Model?

To be clear, the researchers carrying out this work interpret their results within the confines of the evolutionary framework. Therefore, they conclude that the globular skulls—characteristic of modern humans—evolved recently, only after the modern human facial structure had already appeared in archaic Homo sapiens around 300,000 years ago. They also conclude that the globular skull of modern humans had fully emerged by the time humans began to migrate around the world (around 40,000 to 50,000 years ago).

Yet, the fossil evidence doesn’t show the gradual emergence of skull globularity. Instead, modern human specimens form a distinct cluster isolated from the distinct clusters formed by H. erectus, Neanderthals, and archaic H. sapiens. There are no intermediate globular specimens between archaic and modern humans, as would be expected if this trait evolved. Alternatively, the distinct clusters are exactly as expected if modern humans were created.

It appears that the globularity of our skull distinguishes modern humans from H. erectus, Neanderthals, and archaic Homo sapiens (near anatomically modern humans). This globularity of the modern human skull has implications for when modern human behavior and advanced cognitive abilities emerged.

For this reason, I see this work as offering support for the RTB human origins creation model (and, consequently, the biblical account of human origins and the biblical conception of human nature). RTB’s model (1) views human beings as cognitively superior and distinct from other hominins, and (2) posits that human beings uniquely possess a quality called the image of God that I believe manifests as human exceptionalism.

This work supports both predictions by highlighting the uniqueness and exceptional qualities of modern humans compared to H. erectus, Neanderthals, and archaic H. sapiens, calling specific attention to our unusual skull and brain morphology. As noted, anthropologists believe that this unusual brain morphology supports our advanced cognitive capabilities—abilities that I believe reflect the image of God. Because archaic H. sapiens, Neanderthals, and H. erectus did not possess this brain morphology, it makes it unlikely that these creatures had the sophisticated cognitive capacity displayed by modern humans.

In light of RTB’s model, it is gratifying to learn that the origin of anatomically modern humans coincides with the origin of modern human behavior.

Believe it or not, our oddly shaped head is part of the scientific case that can be made for the image of God.

Resources

Endnotes

  1. Simon Neubauer, Jean-Jacques Hublin, and Philipp Gunz, “The Evolution of Modern Human Brain Shape,” Science Advances 4 (January 24, 2018): eaao5961, doi:10.1126/sciadv.aao5961.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/11/14/when-did-modern-human-brains-and-the-image-of-god-appear

Is Raising Children with Religion Child Abuse?

BY FAZALE RANA – NOVEMBER 7, 2018

“Horrible as sexual abuse no doubt was, the damage was arguably less than the long-term psychological damage inflicted by bringing the child up Catholic in the first place.”

—Richard Dawkins, atheist and evolutionary biologist

Image: Richard Dawkins. Image credit: Shutterstock

With his typical flair for provocation, on more than one occasion Richard Dawkins has asserted that indoctrinating children in religion is a form of child abuse. In fact, he argues that the mental torment inflicted by religion on children is worse than sexual abuse carried out by priests—or by any other adult, for that matter. By way of support, he cites a conversation he had with someone who was molested by a Catholic priest. According to Dawkins, a woman told him “that while being abused by a priest was a ‘yucky’ experience, being told as a child that a Protestant friend who died would ‘roast in Hell’ was more distressing.”1

Of course, every time Dawkins has made this proclamation, people from nearly every philosophical and religious perspective have condemned his outlandish statements. But are condemnations enough to keep him from making the assertion? What about evidence?

A study recently published by researchers from Harvard T. H. Chan School of Public Health demonstrates that when Dawkins claims that indoctrinating children with religion is worse than child molestation, he is not only being outlandish, but wrong. The Harvard researchers discovered that children raised with religion are mentally and physically healthier than children raised without religion.2

The study’s conclusions are based on analysis of data from the Nurses’ Health Study II and the Growing Up Today Study. Sampling between 5,500 and 7,500 individuals (depending on the question asked), the researchers discovered that compared to no attendance, children and adolescents (between 12 and 19 years of age) who attended weekly religious services displayed:

  • Greater life satisfaction
  • Greater sense of mission
  • Greater volunteerism
  • Greater forgiveness
  • Fewer depressive symptoms
  • Lower likelihood of PTSD
  • Lower drug use
  • Reduced cigarette smoking
  • Lower sexual initiation
  • Lower levels of STIs (sexually transmitted infections)
  • Reduced incidences of abnormal Pap smears

The team noticed that when regular attendance of religious services was combined with prayer and meditation, the effects were slightly diminished. At this juncture, they don’t understand this counterintuitive finding.

They also discovered mental and physical health benefits for children and adolescents who did not attend religious services but prayed and/or meditated.

Apparently, attending religious services regularly and praying keeps young people from engaging in risky behaviors, makes them more disciplined, and helps develop their character. All of this translates into healthier, better adjusted, more resilient young men and women.

The results of this study align with results of previous studies. Study after study consistently shows that people who practice religion enjoy numerous mental and physical health benefits compared to those who don’t. (See the Resources section for more on this topic.) Previous studies focused on adults, but as the study by the Harvard School of Public Health team reveals, the benefits are realized for children and adolescents, too.

Ying Chen, one of the study’s authors, concludes, “These findings are important for both our understanding of health and our understanding of parenting practices. Many children are raised religiously, and our study shows that this can powerfully affect their health behaviors, mental health, and overall happiness and well-being.”3

Far from being abusive, raising children with religion comprises one facet of responsible parenting. And if Richard Dawkins is truly a man of science, he should be willing to acknowledge the real benefits of teaching religion to our children.

Resources

Endnotes

  1. Rob Cooper, “Forcing a Religion on Your Children Is as Bad as Child Abuse, Claims Atheist Professor Richard Dawkins,” The Daily Mail (April 23, 2013), dailymail.co.uk/news/article-2312813/Richard-Dawkins-Forcing-religion-children-child-abuse-claims-atheist-professor.html.
  2. Ying Chen and Tyler J. VanderWeele, “Associations of Religious Upbringing with Subsequent Health and Well-Being from Adolescence to Young Adulthood: An Outcome-Wide Analysis,” American Journal of Epidemiology (2018): kwy142, doi:10.1093/aje/kwy142.
  3. Alice G. Walton, “Raising Kids with Religion or Spirituality May Protect Their Mental Health,” Forbes (September 17, 2018), forbes.com/sites/alicegwalton/2018/09/17/raising-kids-with-religion-or-spirituality-may-protect-their-mental-health-study/#7555d89f3287.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/11/07/is-raising-children-with-religion-child-abuse

Is Fossil-Associated Cholesterol a Biomarker for a Young Earth?

BY FAZALE RANA – OCTOBER 24, 2018

Like many Americans, I receive a yearly physical. Even though I find these exams to be a bit of a nuisance, I recognize their importance. These annual checkups allow my doctor to get a read on my overall health.

An important part of any physical exam is blood work. Screening a patient’s blood for specific biomarkers gives physicians data that allows them to assess a patient’s risk for various diseases. For example, the blood levels of total cholesterol and the ratio of HDLs to LDLs serve as useful biomarkers for cardiovascular disease.


Figure 1: Cholesterol. Image credit: BorisTM. Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholesterol.svg.

As it turns out, physicians aren’t the only ones who use cholesterol as a diagnostic biomarker. So, too, do paleontologists. In fact, a team of paleontologists recently used cholesterol biomarkers to determine the identity of an enigmatic fossil recovered from Precambrian rock formations dating to 588 million years in age.1 This diagnosis was possible because they were able to extract low levels of cholesterol derivatives from the fossil. Based on the chemical profile of the extracts, the researchers concluded that Dickinsonia specimens are the fossil remains of some of the oldest animals on Earth.

Without question, this finding has important implications for how we understand the origin and history of animal life on Earth. But young-earth creationists (YECs) think it has important implications for another reason. They believe that the recovery of cholesterol derivatives from Dickinsonia provides compelling evidence that the earth is only a few thousand years old and that the fossil record results from a worldwide flood event. They argue that there is no way organic materials such as cholesterol could survive for hundreds of millions of years in the geological column. Consequently, they conclude that the methods used to date fossils such as Dickinsonia must not be reliable, calling into question the age of the earth determined by radiometric techniques.

Is this claim valid? Is the recovery of cholesterol derivatives from fossils dating to hundreds of millions of years evidence for a young earth? Or can the recovery of cholesterol derivatives from 588-million-year-old fossils be explained in an old-earth paradigm?

How Can Cholesterol Derivatives Survive for Millions of Years?

Despite the protests of YECs, for several converging reasons the isolation of cholesterol derivatives from the Dickinsonia specimen is easily explained—even if the specimen dates to 588 million years in age.

The research team did not recover high levels of cholesterol from the Dickinsonia specimen (which would be expected if the fossils were only 3,000 years old), but trace levels of cholestane (which would be expected if the fossils were hundreds of millions of years old). Cholestane is a chemical derivative of cholesterol that is produced when cholesterol undergoes diagenetic changes.


Figure 2: Cholestane. Image credit: Calvero. (Self-made with ChemDraw.) Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholestane.svg.

Cholestane is a chemically inert hydrocarbon that is expected to be stable for vast periods of time. In fact, geochemists have recovered steranes (the broader class of hydrocarbon biomarkers to which cholestane belongs) from rock formations dating to 2.8 billion years in age.

The Dickinsonia specimens that yielded cholestanes were exceptionally well-preserved. Specifically, they were unearthed from the White Sea Cliffs in northwest Russia. This rock formation has escaped deep burial and geological heating, making it all the more reasonable that compounds such as cholestanes could survive for nearly 600 million years.

In short, the recovery of cholesterol derivatives from Dickinsonia does not reflect poorly on the health of the old-earth paradigm. When the chemical properties of cholesterol and cholestane are considered, and given the preservation conditions of the Dickinsonia specimens, the interpretation that these materials were recovered from 588-million-year-old fossil specimens passes the physical exam.

Resources

Featured image: Dickinsonia costata. Image credit: https://commons.wikimedia.org/wiki/File:DickinsoniaCostata.jpg.

Endnotes

  1. Ilya Bobrovskiy et al., “Ancient Steroids Establish the Ediacaran Fossil Dickinsonia as One of the Earliest Animals,” Science 361 (September 21, 2018): 1246–49, doi:10.1126/science.aat7228.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/24/is-fossil-associated-cholesterol-a-biomarker-for-a-young-earth

Further Review Overturns Neanderthal Art Claim

furtherreviewoverturns

BY FAZALE RANA – OCTOBER 17, 2018

As I write this blog post, the 2018–19 NFL season is just underway.

During the course of any NFL season, several key games are decided by a controversial call made by the officials. Nobody wants the officials to determine the outcome of a game, so the NFL has instituted a way for coaches to challenge calls on the field. When a call is challenged, part of the officiating crew looks at a computer tablet on the sidelines—reviewing the game footage from a number of different angles in an attempt to get the call right. After two minutes of reviewing the replays, the senior official makes his way to the middle of the field and announces, “Upon further review, the call on the field . . .”

Recently, a team of anthropologists from Spain and the UK created quite a bit of controversy with a “call” they made from the field. Using a new U-Th dating method, these researchers age-dated cave art in Iberia. Based on the ages of a few of their samples, they concluded that Neanderthals produced cave paintings.1 But new work by three independent research teams challenges the “call” from the field—overturning the conclusion that Neanderthals made art and displayed symbolism like modern humans.

U-Th Dating Method

The new dating method under review measures the age of calcite deposits beneath cave paintings and of those formed over the artwork after the paintings were created. As water flows down cave walls, it deposits calcite. When calcite forms, it contains trace amounts of U-238, which decays in several steps (through U-234) into Th-230. Normally, detecting such low quantities of these isotopes would require extremely large samples. Researchers discovered that by using accelerator mass spectrometry, they could get by with 10-milligram samples. By dating the calcite samples with this technique, they produced minimum and maximum ages for the cave paintings.2
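For readers who want to see the arithmetic behind the method, here is a minimal sketch (in Python) of the standard closed-system U-Th age equation used in calcite dating, inverted numerically to turn a measured activity ratio into an age. The decay constants are published approximations; the activity ratios in the example are hypothetical values chosen for illustration, not measurements from the Iberian study.

```python
# Minimal sketch of closed-system U-Th dating (illustrative values only).
import math

LAMBDA_230 = math.log(2) / 75_584    # Th-230 decay constant (1/yr); half-life ~75.6 kyr
LAMBDA_234 = math.log(2) / 245_620   # U-234 decay constant (1/yr); half-life ~245.6 kyr

def th_u_activity(t, u234_u238=1.10):
    """Predicted (230Th/238U) activity ratio after t years in a closed system
    that formed with no Th-230, given the initial (234U/238U) activity ratio."""
    ingrowth = 1 - math.exp(-LAMBDA_230 * t)
    excess_u234 = ((u234_u238 - 1)
                   * LAMBDA_230 / (LAMBDA_230 - LAMBDA_234)
                   * (1 - math.exp(-(LAMBDA_230 - LAMBDA_234) * t)))
    return ingrowth + excess_u234

def solve_age(th_u_measured, t_lo=0.0, t_hi=600_000.0):
    """Invert the age equation by bisection; the predicted ratio grows
    monotonically with age, so we search for the matching t."""
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if th_u_activity(t_mid) < th_u_measured:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# A hypothetical calcite sample with a measured (230Th/238U) activity of 0.45:
print(f"age ≈ {solve_age(0.45):,.0f} years")   # roughly 57,000 years
```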

Call from the Field: Neanderthals Are Artists

The team applied their dating method to the art found at three cave sites on the Iberian Peninsula (all in modern-day Spain): (1) La Pasiega, which houses paintings of animals, linear signs, claviform signs, and dots; (2) Ardales, which contains about 1,000 paintings of animals, along with dots, discs, lines, geometric shapes, and hand stencils; and (3) Maltravieso, which displays a set of hand stencils and geometric designs. The research team took a total of 53 samples from 25 carbonate formations associated with the cave art at these three sites. While most of the samples dated to 40,000 years old or less (which would indicate that modern humans were the artists), three measurements produced minimum ages of around 65,000 years: (1) a red scalariform sign from La Pasiega, (2) red-painted areas from Ardales, and (3) a hand stencil from Maltravieso. On the basis of these three measurements, the team concluded that the art must have been made by Neanderthals, because modern humans had not yet made their way into Iberia at that time. In other words, Neanderthals made art, just like modern humans did.


Figure: Maltravieso Cave Entrance, Spain. Image credit: Shutterstock

Shortly after the findings were published, I wrote a piece expressing skepticism about this claim for two reasons.

First, I questioned the reliability of the method. Once the calcite deposit forms, the U-Th method will only yield reliable results if none of the U or Th moves in or out of the deposit. Based on the work of researchers from France and the US, it does not appear that the calcite films are closed systems. The calcite deposits on the cave wall formed because of hydrological activity in the cave. Once a calcite film forms, water will continue to flow over its surface, leaching out U (because U is much more water soluble than Th). By removing U, water flowing over the calcite will make it seem as if the deposit and, hence, the underlying artwork are much older than they actually are.3
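A toy calculation makes this concern concrete. If later water flow leaches away a fraction f of the uranium while the thorium stays behind, the measured (230Th/238U) activity ratio is scaled up by a factor of 1/(1 − f), and an age computed under the closed-system assumption overshoots the true age. The numbers below are assumed purely for illustration (the helper functions repeat the earlier sketch so the snippet runs on its own):

```python
# Toy model (assumed values) of how post-depositional U loss inflates a U-Th age.
import math

l230 = math.log(2) / 75_584    # Th-230 decay constant (1/yr)
l234 = math.log(2) / 245_620   # U-234 decay constant (1/yr)

def ratio(t, g=1.10):  # closed-system (230Th/238U) activity ratio at age t
    return ((1 - math.exp(-l230 * t))
            + (g - 1) * l230 / (l230 - l234) * (1 - math.exp(-(l230 - l234) * t)))

def age(r):  # invert ratio() by bisection (the ratio grows monotonically with t)
    lo, hi = 0.0, 600_000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

true_age = 40_000
for f in (0.0, 0.10, 0.25, 0.40):          # fraction of U leached after deposition
    measured = ratio(true_age) / (1 - f)   # removing U scales the measured ratio up
    print(f"U lost {f:4.0%}: apparent age ≈ {age(measured):,.0f} yr (true: {true_age:,} yr)")
```

With these assumed values, losing 40 percent of the uranium pushes the apparent age from 40,000 years to nearly 78,000 years, which is exactly the kind of inflation at issue here.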

Second, I expressed concern that the 65,000-year-old dates measured for a few samples are outliers. Of the 53 samples measured, only three gave age-dates of 65,000 years. The remaining samples dated much younger, typically around 40,000 years in age. So why should we give so much credence to three measurements, particularly if we know that the calcite deposits are open systems?

Upon Further Review: Neanderthals Are Not Artists

Within a few months, three separate research groups published papers challenging the reliability of the U-Th method for dating cave art and, along with it, the claim that Neanderthals produced cave art.4 It is not feasible to detail all their concerns in this article, but I will highlight six of the most significant complaints. In several instances, the research teams independently raised the same concerns.

  1. The U-Th method is unreliable because the calcite deposits are an open system. Two of the research teams reiterated the concern I raised, for the same reason. The U-Th dating technique can only yield reliable results if no U or Th moves in or out of the system once the calcite film forms. Continued water flow over the calcite deposits will preferentially leach U from the deposit, making the deposit appear older than it is.
  2. The U-Th method is unreliable because it fails to account for nonradiogenic Th. Thorium present in the source water that produced the calcite deposits would already be in the calcite at the time of formation. This inherited Th makes the samples appear older than they actually are. (A toy calculation after this list illustrates the effect.)
  3. The 65,000-year-old dates for the three measurements from La Pasiega, Ardales, and Maltravieso are likely outliers. Just as I pointed out before, two of the research groups expressed concern that only 3 of the 53 measurements came in at 65,000 years in age. This discrepancy suggests that these dates are outliers, most likely reflecting the fact that the calcite deposits are an open system that formed with Th already present. Yet, the researchers from Spain and the UK who reported these results emphasized the few older dates while downplaying the younger dates.
  4. Multiple measurements on the same piece of art yielded discordant ages. For example, the researchers made five age-date measurements of the hand stencil at Maltravieso. These dates (66.7 kya [thousand years ago], 55.2 kya, 35.3 kya, 23.1 kya, and 14.7 kya) were all over the place. And yet, the researchers selected the oldest date for the age of the hand stencil, without justification.
  5. Some of the red “markings” on cave walls that were dated may not be art. Red markings are commonplace on cave walls and can be produced by microorganisms that secrete organic materials or iron oxide deposits. It is possible that some of the markings that were dated were not art at all.
  6. The method used by the researchers to sample the calcite deposits may have been flawed. One team expressed concern that the sampling technique may have unwittingly produced dates for the cave surface on which the paintings were made rather than the pigments used to make the art itself. If the researchers inadvertently dated the cave surface, it could easily be older than the art.
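To illustrate point 2 with assumed numbers: if the calcite inherits some Th-230 from its source water at formation, that inherited activity adds to the radiogenic ingrowth, and an age computed under the no-initial-Th assumption overshoots the true age. The sketch below is a toy model with hypothetical values (the helpers repeat the earlier sketches so the snippet runs on its own):

```python
# Toy model (assumed values) of how Th-230 present at formation biases a U-Th age.
import math

l230 = math.log(2) / 75_584    # Th-230 decay constant (1/yr)
l234 = math.log(2) / 245_620   # U-234 decay constant (1/yr)

def ratio(t, g=1.10):  # closed-system (230Th/238U) activity ratio at age t
    return ((1 - math.exp(-l230 * t))
            + (g - 1) * l230 / (l230 - l234) * (1 - math.exp(-(l230 - l234) * t)))

def age(r):  # invert ratio() by bisection
    lo, hi = 0.0, 600_000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

true_age = 30_000
r0 = 0.10   # assumed (230Th/238U) activity inherited from the source water
measured = ratio(true_age) + r0 * math.exp(-l230 * true_age)  # inherited Th-230 decays too
print(f"true age {true_age:,} yr -> apparent age ≈ {age(measured):,.0f} yr")  # ~40,000 yr
```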

In light of these many shortcomings, it is questionable whether the U-Th method for dating cave art is reliable. Upon further review, the call from the field is overturned. There is no conclusive evidence that Neanderthals made art.

Why Does This Matter?

Artistic expression reflects a capacity for symbolism. And many people view symbolism as a quality unique to human beings that contributes to our advanced cognitive abilities and exemplifies our exceptional nature. In fact, as a Christian, I see symbolism as a manifestation of the image of God. If Neanderthals possessed symbolic capabilities, such a quality would undermine human exceptionalism (and with it the biblical view of human nature), rendering human beings nothing more than another hominin. At this juncture, every claim for Neanderthal symbolism has failed to withstand scientific scrutiny.

Now, it is time for me to go back to the game.

Who dey! Who dey! Who dey think gonna beat dem Bengals!

Resources:

Endnotes

  1. D. L. Hoffmann et al., “U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,” Science 359 (February 23, 2018): 912–15, doi:10.1126/science.aap7778.
  2. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art in 11 Caves in Spain,” Science 336 (June 15, 2012): 1409–13, doi:10.1126/science.1219957.
  3. Georges Sauvet et al., “Uranium-Thorium Dating Method and Palaeolithic Rock Art,” Quaternary International 432 (2017): 86–92, doi:10.1016/j.quaint.2015.03.053.
  4. Ludovic Slimak et al., “Comment on ‘U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,’” Science 361 (September 21, 2018): eaau1371, doi:10.1126/science.aau1371; Maxime Aubert, Adam Brumm, and Jillian Huntley, “Early Dates for ‘Neanderthal Cave Art’ May Be Wrong,” Journal of Human Evolution (2018), doi:10.1016/j.jhevol.2018.08.004; David G. Pearce and Adelphine Bonneau, “Trouble on the Dating Scene,” Nature Ecology and Evolution 2 (June 2018): 925–26, doi:10.1038/s41559-018-0540-4.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/17/further-review-overturns-neanderthal-art-claim

Can Evolution Explain the Origin of Language?

canevolutionexplain

BY FAZALE RANA – OCTOBER 10, 2018

Oh honey hush, yes you talk too much
Oh honey hush, yes you talk too much
Listenin’ to your conversation is just about to separate us

—Albert Collins

He was called the “Master of the Telecaster.” He was also known as the “Iceman,” because his guitar playing was so hot, he was cold. Albert Collins (1932–93) was an electric blues guitarist and singer whose distinct style of play influenced the likes of Stevie Ray Vaughan and Robert Cray.


Image: Albert Collins in 1990. Image Credit: Masahiro Sumori [GFDL (https://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (https://creativecommons.org/licenses/by-sa/3.0/) or CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], from Wikimedia Commons.

Collins was known for his sense of humor, and it often came through in his music. In one of Collins’s signature songs, “Honey Hush,” the bluesman complains about his girlfriend who never stops talking: “You start talkin’ in the morning; you’re talkin’ all day long.” Collins finds his girlfriend’s nonstop chatter so annoying that he contemplates ending their relationship.

While Collins may have found his girlfriend’s unending conversation irritating, the capacity for conversation is a defining feature of human beings (modern humans). As human beings, we can’t help ourselves—we “talk too much.”

What does our capacity for language tell us about human nature and our origins?

Language and Human Exceptionalism

Human language flows out of our capacity for symbolism. Humans have the innate ability to represent the world (and abstract ideas) using symbols. And we can embed symbols within symbols to construct alternative possibilities and then link our scenario-building minds together through language, music, art, etc.

As a Christian, I view our symbolism as a facet of the image of God. While animals can communicate, as far as we know only human beings possess abstract language. And despite widespread claims about Neanderthal symbolism, the scientific case for symbolic expression among these hominids keeps coming up short. To put it another way, human beings appear to be uniquely exceptional in ways that align with the biblical concept of the image of God, with our capacity for language serving as a significant contributor to the case for human exceptionalism.

Recent insights into the mode and tempo of language’s emergence strengthen the scientific case for the biblical view of human nature. As I have written in previous articles (see Resources) and in Who Was Adam?, language appears to have emerged suddenly—and it coincides with the appearance of anatomically modern humans. Additionally, when language first appeared, it was syntactically as complex as contemporary language. That is, there was no evolution of language—proceeding from a proto-language through simple language and then to complex language. Language emerges all at once as a complete package.

From my vantage point, the sudden appearance of language that uniquely coincides with the first appearance of humans is a signature for a creation event. It is precisely what I would expect if human beings were created in God’s image, as Scripture describes.

Darwin’s Problem

This insight into the origin of language also poses significant problems for the evolutionary paradigm. As linguist Noam Chomsky and anthropologist Ian Tattersall admit, “The relatively sudden origin of language poses difficulties that may be called ‘Darwin’s problem.’”1

Anthropologist Chris Knight’s insights compound “Darwin’s problem.” He concludes that “language exists, but for reasons which no currently accepted theoretical paradigm can explain.”2 Knight arrives at this conclusion by surveying the work of three scientists (Noam Chomsky, Amotz Zahavi, and Dan Sperber) who study language’s origin using three distinct approaches. All three converge on the same conclusion; namely, evolutionary processes should not produce language or any form of symbolic communication.

Chris Knight writes:

Language evolved in no other species than humans, suggesting a deep-going obstacle to its evolution. One possibility is that language simply cannot evolve in a Darwinian world—that is, in a world based ultimately on competition and conflict. The underlying problem may be that the communicative use of language presupposes anomalously high levels of mutual cooperation and trust—levels beyond anything which current Darwinian theory can explain . . . suggesting a deep-going obstacle to its evolution.3

To support this view, Knight synthesizes the insights of linguist Noam Chomsky, ornithologist and theoretical biologist Amotz Zahavi, and anthropologist Dan Sperber, each of whom determined, for a distinct reason, that language cannot evolve from animal communication.

Three Reasons Why Language Is Unique to Humans

Chomsky views animal minds as capable of only bounded ranges of expression. Human language, by contrast, uses a finite set of symbols to communicate an infinite array of thoughts and ideas. For Chomsky, there are no intermediate steps between bounded and infinite expression; the capacity to express an unlimited array of thoughts and ideas must have appeared all at once. This capacity must also be supported by brain structures and the ability to vocalize, which either had to be in place already when language appeared (having been selected by the evolutionary process for entirely different purposes) or had to arise simultaneously with the capacity to conceive of infinite thoughts and ideas. To put it another way, language could not have emerged from animal communication through a stepwise evolutionary process. It had to appear all at once and be fully intact at its genesis. No one knows of any mechanism that can effect that type of transformation.

Zahavi’s work centers on understanding the evolutionary origin of signaling in the animal world. Central to his approach, Zahavi divides natural selection into two components: utilitarian selection (which describes selection for traits that improve the efficiency of some biological process, enhancing the organism’s fitness) and signal selection (which involves the selection of traits that are wasteful). Though counterintuitive, signal selection contributes to the fitness of the organism because it communicates the organism’s fitness to other animals (whether members of the same species or of different species). The example Zahavi uses to illustrate signal selection is the unusual behavior of gazelles. These creatures stot (jump up and down, stomp the ground, snort loudly) when they detect a predator, calling attention to themselves. This behavior is counterintuitive. Shouldn’t these creatures use their energy to run away, getting the biggest jump they can on the pursuing predator? As it turns out, the “wasteful and costly” behavior communicates the gazelle’s fitness to the predator. In the face of danger, the gazelle is willing to take on risk because it is so fit. The gazelle’s behavior dissuades the predator from attacking. Observations in the wild confirm Zahavi’s ideas: predators most often go after gazelles that don’t stot or that display limited stotting behavior.

Animal signaling is effective and reliable only when actual costly handicaps are communicated, and that requires presenting a limited, bounded range of signals; such a constraint is the only way to communicate the handicap. In contrast, language is open-ended and infinite. Given the constraints on animal signaling, it cannot evolve into language. Natural selection prevents animal communication from evolving into language because, in principle, when the infinite can be communicated, in practice nothing is communicated at all.

Based in part on fieldwork he conducted in Ethiopia among the Dorze people, Dan Sperber concluded that people use language primarily to communicate alternative possibilities and realities—falsehoods—rather than information that is true about the world. To be sure, people use language to convey brute facts about the world. But most often language is used to communicate institutional facts—agreed-upon truths—that don’t necessarily reflect the world as it actually is. According to Sperber, symbolic communication is characterized by extravagant imagery and metaphor. Human beings often build metaphor upon metaphor—and falsehood upon falsehood—when we communicate. For Sperber, this type of communication can’t evolve from animal signaling. What evolutionary advantage arises by transforming communication about reality (animal signaling) into communication about alternative realities (language)?

Synthesizing the insights of Chomsky, Zahavi, and Sperber, Knight concludes that language is impossible in a Darwinian world. He states, “The Darwinian challenge remains real. Language is impossible not simply by definition, but—more interestingly—because it presupposes unrealistic levels of trust. . . . To guard against the very possibility of being deceived, the safest strategy is to insist on signals that just cannot be lies. This rules out not only language, but symbolic communication of any kind.”4

Signal for Creation

And yet, human beings possess language (along with other forms of symbolism, such as art and music). Our capacity for abstract language is one of the defining features of human beings.

For Christians like me, our language abilities reflect the image of God. And what appears as a profound challenge and mystery for the evolutionary paradigm finds ready explanation in the biblical account of humanity’s origin.

Is it time for our capacity for conversation to separate us from the evolutionary explanation for humanity’s origin?

Resources:

Endnotes

  1. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12 (August 2014): e1001934, doi:10.1371/journal.pbio.1001934.
  2. Chris Knight, “Puzzles and Mysteries in the Origins of Language,” Language and Communication 50 (September 2016): 12–21, doi:10.1016/j.langcom.2016.09.002.
  3. Knight, “Puzzles and Mysteries,” 12–21.
  4. Knight, “Puzzles and Mysteries,” 12–21.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/10/can-evolution-explain-the-origin-of-language