Self-Assembly of Protein Machines: Evidence for Evolution or Creation?

BY FAZALE RANA – APRIL 17, 2019

I finally upgraded my iPhone a few weeks ago from a 5s to an 8 Plus. I had little choice. The battery on my cell phone would no longer hold a charge.

I’d put off getting a new one for as long as possible. It just didn’t make sense to spend money chasing the latest and greatest technology when current cell phone technology worked perfectly fine for me. Apart from the battery life and a less-than-ideal camera, I was happy with my iPhone 5s. Now I am really glad I made the switch.

Then, the other day I caught myself wistfully eyeing the iPhone X. And, today, I learned that Apple is preparing the release of the iPhone 11 (or XI or XT). Where will Apple’s technology upgrades take us next? I can’t wait to find out.

Have I become a technology junkie?

It is remarkable how quickly cell phone technology advances. It is also remarkable how alluring new technology can be. The next thing you know, Apple will release an iPhone that will assemble itself when it comes out of the box. . . . Probably not.

But, if the work of engineers at MIT ever reaches fruition, it is possible that smartphone manufacturers one day just might rely on a self-assembly process to produce cell phones.

A Self-Assembling Cell Phone

The Self-Assembly Lab at MIT has developed a pilot process to manufacture cell phones by self-assembly.

To do this, they designed their cell phone to consist of six parts that fit together in a lock-in-key manner. By placing the cell phone pieces into a tumbler that turns at the just-right speed, the pieces automatically combine with one another, bit by bit, until the cell phone is assembled.

Few errors occur during the assembly process. Only pieces designed to fit together combine with one another because of the lock-in-key fabrication.
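
To make the idea concrete, here is a toy sketch of lock-in-key self-assembly written in Python. The part names, interfaces, and tumbling loop are invented for illustration and are not the Self-Assembly Lab's actual design; the point is simply that random agitation plus precisely matched interfaces is enough for an assembly to build itself, with mismatched collisions rejected automatically.

```python
import random

# Toy model of lock-in-key self-assembly (invented part names and interfaces;
# not the Self-Assembly Lab's actual design). Each loose part carries a "plug"
# that fits exactly one open "socket" on the growing assembly.
PARTS = {
    "frame":   {"plug": None,      "sockets": ["screen", "board", "battery"]},
    "screen":  {"plug": "screen",  "sockets": []},
    "board":   {"plug": "board",   "sockets": ["camera", "speaker"]},
    "battery": {"plug": "battery", "sockets": []},
    "camera":  {"plug": "camera",  "sockets": []},
    "speaker": {"plug": "speaker", "sockets": []},
}

def tumble(parts, steps=10_000, seed=1):
    """Randomly 'collide' loose parts with the assembly; join only on a match."""
    rng = random.Random(seed)
    loose = [name for name in parts if name != "frame"]
    open_sockets = set(parts["frame"]["sockets"])  # assembly starts as the bare frame
    for _ in range(steps):
        if not loose:
            return True                            # fully assembled
        part = rng.choice(loose)                   # a random collision in the tumbler
        if parts[part]["plug"] in open_sockets:    # lock-in-key match?
            open_sockets.remove(parts[part]["plug"])
            open_sockets.update(parts[part]["sockets"])
            loose.remove(part)
    return False

print(tumble(PARTS))  # True: random agitation + matched interfaces -> self-assembly
```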

Self-Assembly and the Case for a Creator

It is quite likely that the work of MIT’s Self-Assembly Lab (and other labs like it) will one day revolutionize manufacturing—not just for iPhones, but for other types of products as well.

As alluring as this new technology might be, I am more intrigued by its implications for the creation-evolution controversy. What do self-assembly processes have to do with the creation-evolution debate? More than we might realize.

I believe self-assembly processes strengthen the Watchmaker argument for God’s existence (and role in the origin of life). Namely, this cutting-edge technology makes it possible to respond to a common objection leveled against this design argument.

To understand why this engineering breakthrough is so important for the Watchmaker argument, a little background is necessary.

The Watchmaker Argument

Anglican natural theologian William Paley (1743–1805) posited the Watchmaker argument in the eighteenth century. It went on to become one of the best-known arguments for God’s existence. The argument hinges on the comparison Paley made between a watch and a rock. He argued that a rock’s existence can be explained by the outworking of natural processes—not so for a watch.

The characteristics of a watch—specifically the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Employing an analogy, Paley asserted that just as a watch requires a watchmaker, so too, life requires a Creator. Paley noted that biological systems display a wide range of features characterized by the precise interplay of complex parts designed to interact for specific purposes. In other words, biological systems have much more in common with a watch than a rock. This similarity being the case, it logically follows that life must stem from the work of a Divine Watchmaker.

Biochemistry and the Watchmaker Argument

As I discuss in my book The Cell’s Design, advances in biochemistry have reinvigorated the Watchmaker argument. The hallmark features of biochemical systems are precisely the same properties displayed in objects, devices, and systems designed and crafted by humans.

Cells contain protein complexes that are structured to operate as biomolecular motors and machines. Some molecular-level biomachines are strict analogs to machinery produced by human designers. In fact, in many instances, a one-to-one relationship exists between the parts of manufactured machines and the molecular components of biomachines. (A few examples of these biomolecular machines are discussed in the articles listed in the Resources section.)

We know that machines originate in human minds that comprehend and then implement designs. So, when scientists discover example after example of biomolecular machines inside the cell with an eerie and startling similarity to the machines we produce, it makes sense to conclude that these machines and, hence, life, must also have originated in a Mind.

A Skeptic’s Challenge

As you might imagine, skeptics have leveled objections against the Watchmaker argument since its introduction in the 1700s. Today, when skeptics criticize the latest version of the Watchmaker argument (based on biochemical designs), the influence of Scottish skeptic David Hume (1711–1776) can be seen and felt.

In his 1779 work Dialogues Concerning Natural Religion, Hume presented several criticisms of design arguments. The foremost centered on the nature of analogical reasoning. Hume argued that the conclusions resulting from analogical reasoning are only sound when the things compared are highly similar to each other. The more similar, the stronger the conclusion. The less similar, the weaker the conclusion.

Hume dismissed the original version of the Watchmaker argument by maintaining that organisms and watches are nothing alike. They are too dissimilar for a good analogy. In other words, what is true for a watch is not necessarily true for an organism and, therefore, it doesn’t follow that organisms require a Divine Watchmaker, just because a watch does.

In effect, this is one of the chief reasons why some skeptics today dismiss the biochemical Watchmaker argument. For example, philosopher Massimo Pigliucci has insisted that Paley’s analogy is purely metaphorical and does not reflect a true analogical relationship. He maintains that any similarity between biomolecular machines and human designs reflects merely illustrative analogies that life scientists use to communicate the structure and function of these protein complexes via familiar concepts and language. In other words, it is illegitimate to use the “analogies” between biomolecular machines and manufactured machines to make a case for a Creator.1

A Response Based on Insights from Nanotechnology

I have responded to this objection by pointing out that nanotechnologists have isolated biomolecular machines from the cell and incorporated these protein complexes into nanodevices and nanosystems for the explicit purpose of taking advantage of their machine-like properties. These transplanted biomachines power motion and movements in the devices, which otherwise would be impossible with current technology. In other words, nanotechnologists view these biomolecular systems as actual machines and utilize them as such. Their work demonstrates that biomolecular machines are literal, not metaphorical, machines. (See the Resources section for articles describing this work.)

Is Self-Assembly Evidence of Evolution or Design?

Another criticism—inspired by Hume—is that machines designed by humans don’t self-assemble, but biochemical machines do. Skeptics say this undermines the Watchmaker analogy. I have heard this criticism in the past, but it came up recently in a dialogue I had with a skeptic in a Facebook group.

I wrote that “What we discover when we work out the structure and function of protein complexes are features that are akin to an automobile engine, not an outcropping of rocks.”

A skeptic named Maurice responded: “Your analogy is false. Cars do not spontaneously self-assemble—in that case there is a prohibitive energy barrier. But hexagonal lava rocks can and do—there is no energy barrier to prohibit that from happening.”

Maurice argues that my analogy is a poor one because protein complexes in the cell self-assemble, whereas automobile engines can’t. For Maurice (and other skeptics), this distinction serves to make manufactured machines qualitatively different from biomolecular machines. On the other hand, hexagonal patterns in lava rocks give the appearance of design but are actually formed spontaneously. For skeptics like Maurice, this feature indicates that the design displayed by protein complexes in the cell is apparent, not true, design.

Maurice added: “Given that nature can make hexagonal lava blocks look ‘designed,’ it can certainly make other objects look ‘designed.’ Design is not a scientific term.”

Self-Assembly and the Watchmaker Argument

This is where the MIT engineers’ fascinating work comes into play.

Engineers continue to make significant progress toward developing self-assembly processes for manufacturing purposes. It very well could be that in the future a number of machines and devices will be designed to self-assemble. Based on the researchers’ work, it becomes evident that part of the strategy for designing machines that self-assemble centers on creating components that not only contribute to the machine’s function, but also precisely interact with the other components so that the machine assembles on its own.

The operative word here is designed. For machines to self-assemble they must be designed to self-assemble.

This requirement holds true for biochemical machines, too. The protein subunits that interact to form the biomolecular machines appear to be designed for self-assembly. Protein-protein binding sites on the surface of the subunits mediate this self-assembly process. These binding sites require high-precision interactions to ensure that the binding between subunits takes place with a high degree of accuracy—in the same way that the MIT engineers designed the cell phone pieces to precisely combine through lock-in-key interactions.


Figure: ATP synthase is a biomolecular machine that literally functions as an electrically powered rotary motor. This biomachine is assembled from protein subunits. Credit: Shutterstock

The level of design required to ensure that protein subunits interact precisely to form machine-like protein complexes is only beginning to come into full view.2 Biochemists who work in the area of protein design still don’t fully understand the biophysical mechanisms that dictate the assembly of protein subunits. And, while they can design proteins that will self-assemble, they struggle to replicate the complexity of the self-assembly process that routinely takes place inside the cell.

Thanks to advances in technology, biomolecular machines’ ability to self-assemble should no longer count against the Watchmaker argument. Instead, self-assembly becomes one more feature that strengthens Paley’s point.

The Watchmaker Prediction

Advances in self-assembly also satisfy the Watchmaker prediction, further strengthening the case for a Creator. In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed.

The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction. As human designers develop new technologies, examples of these technologies, though previously unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker argument truly serves as evidence for a Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

In effect, the developments in self-assembly technology and its prospective use in future manufacturing operations fulfill the Watchmaker prediction. Along these lines, it’s even more provocative to think that cellular self-assembly processes are providing insight to engineers who are working to develop similar technology.

Maybe I am a technology junkie, after all. I find it remarkable that as we develop new technologies we discover that they already exist in the cell, and because they do the Watchmaker argument becomes more and more compelling.

Can you hear me now?

Resources

The Biochemical Watchmaker Argument

Challenges to the Biochemical Watchmaker Argument

Endnotes
  1. Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science and Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
  2. For example, see Christoffer H. Norn and Ingemar André, “Computational Design of Protein Self-Assembly,” Current Opinion in Structural Biology 39 (August 2016): 39–45, doi:10.1016/j.sbi.2016.04.002.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/04/17/self-assembly-of-protein-machines-evidence-for-evolution-or-creation

Does Transhumanism Refute Human Exceptionalism? A Response to Peter Clarke

BY FAZALE RANA – APRIL 3, 2019

I just finished binge-watching Altered Carbon. Based on the 2002 science fiction novel written by Richard K. Morgan, this Netflix original series is provocative, to say the least.

Altered Carbon takes place in the future, where humans can store their personalities as digital files in devices called stacks. These disc-like devices are implanted at the top of the spinal column. When people die, their stacks can be removed from their bodies (called sleeves) and stored indefinitely until they are re-sleeved—if and when another body becomes available to them.

In this world, people who possess extreme wealth can live indefinitely, without ever having to spend any time in storage. Referred to as Meths (after the biblical figure Methuselah, who lived 969 years), the wealthy have the financial resources to secure a continual supply of replacement bodies through cloning. Their wealth also affords them the means to back up their stacks once a day, storing the data in a remote location in case their stacks are destroyed. In effect, Meths use technology to attain a form of immortality.

Forthcoming Posthuman Reality?

The world of Altered Carbon is becoming a reality right before our eyes. Thanks to recent advances in biotechnology and bioengineering, the idea of using technology to help people live indefinitely no longer falls under the purview of science fiction. Emerging technologies such as CRISPR-Cas9 gene editing and brain-computer interfaces offer hope to people suffering from debilitating diseases and injuries. They can also be used for human enhancements—extending our physical, intellectual, and psychological capabilities beyond natural biological limits.

These futuristic possibilities give fuel to a movement known as transhumanism. After residing on the fringe of the academy and culture for several decades, the movement has gone mainstream, both in the ivory tower and on the street. Sociologist James Hughes describes the transhumanist vision this way in his book Citizen Cyborg:

“In the twenty-first century the convergence of artificial intelligence, nanotechnology and genetic engineering will allow human beings to achieve things previously imagined only in science fiction. Lifespans will extend well beyond a century. Our senses and cognition will be enhanced. We will gain control over our emotions and memory. We will merge with machines, and machines will become more like humans. These technologies will allow us to evolve into varieties of “posthumans” and usher us into a “transhuman” era and society. . . . Transhuman technologies, technologies that push the boundaries of humanism, can radically improve our quality of life, and . . . we have a fundamental right to use them to control our bodies and minds. But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies.”1


Figure 1: The transhumanism symbol. Image credit: Wikimedia Commons

In short, transhumanists want us to take control of our own evolution, transforming human beings into posthumans and in the process creating a utopian future that carves out a path to immortality.

Depending on one’s philosophical or religious perspective, transhumanists’ vision and the prospects of a posthuman reality can bring excitement or concern or a little bit of both. Should we pursue the use of technology to enhance ourselves, transcending the constraints of our biology? What role should these emerging biotechnologies play in shaping our future? What are the boundaries for developing and using these technologies? Should there be any boundaries?2

All of these questions revolve around a central question: Who are we as human beings?

Are Humans Exceptional?

Prior to the rising influence of transhumanism, the answer to this question followed along one of two lines. For people who hold to a Judeo-Christian worldview, human beings are exceptional, standing apart from all other creatures on the planet. Accordingly, our exceptional nature results from the image of God. As image bearers, human beings have infinite worth and value.

On the other hand, those influenced by the evolutionary paradigm maintain that human beings are nothing more than animals—differing in degree, not kind, from other creatures. In fact, many who hold this view of humanity find the notion of human exceptionalism repugnant. In their view, to elevate the value of human beings above that of other creatures constitutes speciesism and reflects an unjustifiable arrogance.

And now transhumanism enters into the fray. People on both sides of the controversy about human nature and identity argue that transhumanism brings an end to any notion about human exceptionalism, once and for all.

One is Peter Clarke. In an article published on the Areo website entitled “Transhumanism and the Death of Human Exceptionalism,” Clarke says:

“As a philosophical movement, transhumanism advocates for improving humanity through genetic modifications and technological augmentations, based upon the position that there is nothing particularly sacred about the human condition. It acknowledges up front that our bodies and minds are riddled with flaws that not only can but should be fixed. Even more radically, as the name implies, transhumanism embraces the potential of one day moving beyond the human condition, transitioning our sentience into more advanced forms of life, including genetically modified humans, superhuman cyborgs, and immortal digital intelligences.”3

On the other side of the aisle is Wesley J. Smith of the Discovery Institute. In his article “Transhumanist Bill of Wrongs,” Smith writes:

“Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines. That requires denigrating natural man as unexceptional to justify our substantial deconstruction and redesign.”4

In other words, transhumanism highlights the notion that our bodies, minds, and personalities are inherently flawed and we have a moral imperative, proponents say, to correct these flaws. But this view denigrates humanity, opponents say, and with it the notion of human exceptionalism. For Clarke, this nonexceptional perspective is something to be celebrated. For Smith, transhumanism is of utmost concern and must be opposed.

Evidence of Exceptionalism

While I am sympathetic to Smith’s concern, I take a different perspective. I find that transhumanism provides one of the most powerful pieces of evidence for human exceptionalism—and along with it the image of God.

In my forthcoming book (coauthored with Ken Samples), Humans 2.0, I write:

“Ironically, progress in human enhancement technology and the prospects of a posthuman future serve as one of the most powerful arguments for human exceptionalism and, consequently, the image of God. Human beings are the only species that exists—or that has ever existed—that can create technologies to enhance our capabilities beyond our biological limits. We alone work toward effecting our own immortality, take control of evolution, and look to usher in a posthuman world. These possibilities stem from our unique and exceptional capacity to investigate and develop an understanding of nature (including human biology) through science and then turn that insight into technology.”5

Our ability to carry out the scientific enterprise and develop technology stems from four qualities that a growing number of anthropologists and primatologists think are unique to humans:

  • symbolism
  • open-ended generative capacity
  • theory of mind
  • our capacity to form complex social networks

From my perspective as a Christian, these qualities stand as scientific descriptors of the image of God.

As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

Human capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

For anthropologists and primatologists who think that human beings differ in kind—not degree—from other animals, these qualities demarcate us from the great apes and Neanderthals. The separation becomes most apparent when we consider the remarkable technological advances we have made during our tenure as a species. Primatologist Thomas Suddendorf puts it this way:

“We reflect on and argue about our present situation, our history, and our destiny. We envision wonderful harmonious worlds as easily as we do dreadful tyrannies. Our powers are used for good as they are for bad, and we incessantly debate which is which. Our minds have spawned civilizations and technologies that have changed the face of the Earth, while our closest living animal relatives sit unobtrusively in their remaining forests. There appears to be a tremendous gap between human and animal minds.”6

Moreover, no convincing evidence exists that leads us to think that Neanderthals shared the qualities that make us exceptional. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet our technology has progressed exponentially, while Neanderthal technology remained largely static.

According to paleoanthropologist Ian Tattersall and linguist Noam Chomsky (and their coauthors):

“Our species was born in a technologically archaic context, and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language. . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”7

In other words, the evolution of human technology signifies that there is something special—exceptional—about us as human beings. In this sense, transhumanism highlights our exceptional nature precisely because the prospects for controlling our own evolution stem from our ability to advance technology.

To be clear, transhumanism poses an existential risk for humanity. Unquestionably, it has the potential to strip human beings of dignity and worth. But, ironically, transhumanism is possible only because we are exceptional as human beings.

Responsibility as the Crown of Creation

Ultimately, our exceptional nature demands that we thoughtfully deliberate on how to use emerging biotechnologies to promote human flourishing, while ensuring that no human being is exploited or marginalized by these technologies. It also means that we must preserve our identity as human beings at all costs.

It is one thing to enjoy contemplating a posthuman future by binge-watching a sci-fi TV series. But, it is another thing altogether to live it out. May we be guided by ethical wisdom to live well.

Resources

Endnotes
  1. James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Humans of the Future (Cambridge, MA: Westview Press, 2004), xii.
  2. Ken Samples and I take on these questions and more in our book Humans 2.0, due to be published in July of 2019.
  3. Peter Clarke, “Transhumanism and the Death of Human Exceptionalism,” Areo (March 6, 2019), https://areomagazine.com/2019/03/06/transhumanism-and-the-death-of-human-exceptionalism/.
  4. Wesley J. Smith,“Transhumanist Bill of Wrongs,” Discovery Institute (October 23, 2018), https://www.discovery.org/a/transhumanist-bill-of-wrongs/.
  5. Fazale Rana with Kenneth Samples, Humans 2.0: Scientific, Philosophical, and Theological Perspectives on Transhumanism (Covina, CA: RTB Press, 2019) in press.
  6. Thomas Suddendorf, The Gap: The Science of What Separates Us from Other Animals (New York: Basic Books, 2013), 2.
  7. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12, no.8 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/04/03/does-transhumanism-refute-human-exceptionalism-a-response-to-peter-clarke

Timing of Neanderthals’ Disappearance Makes Art Claims Unlikely

BY FAZALE RANA – MARCH 27, 2019

In Latin it literally means, “somewhere else.”

Legal experts consider an alibi to be one of the most effective legal defenses available in a court of law because it has the potential to prove a defendant’s innocence. It goes without saying: if a defendant has an alibi, it means that he or she was somewhere else when the crime was committed.

As it turns out, paleoanthropologists have discovered that Neanderthals have an alibi, of sorts. Evidence indicates that they weren’t the ones to scratch up the floor of Gorham’s Cave.

Based on recent radiocarbon dates measured for samples from Bajondillo Cave (located on the southern part of the Iberian Peninsula—southwest corner of Europe), a research team from the Japan Agency for Marine-Earth Science and Technology and several Spanish institutions determined that modern humans made their way to the southernmost tip of Iberia around 43,000 years ago, displacing Neanderthals.1

Because Neanderthals disappeared from Iberia at that time, it becomes unlikely that they were responsible for hatch marks (dated to be 39,000 years in age) made on the floor of Gorham’s Cave in Gibraltar. These scratches have been interpreted by some paleoanthropologists as evidence that Neanderthals possessed symbolic capabilities.

But how could Neanderthals have made the hatch marks if they weren’t there? Ladies and gentlemen of the jury: the perfect alibi. Instead, it looks as if modern humans were the culprits who marked up the cave floor.


Figure 1: Gorham’s Cave. Image credit: Wikipedia

The Case for Neanderthal Exceptionalism

Two of the biggest questions in anthropology today relate to Neanderthals:

  • When did these creatures disappear from Europe?
  • Did they possess symbolic capacity like modern humans, thus putting their cognitive abilities on par with ours as a species?

For paleoanthropologists, these two questions have become inseparable. With regard to the second question, some paleoanthropologists are convinced that Neanderthals displayed symbolic capabilities.

It is important to note that the case for Neanderthal symbolism is largely based on correlations between the archaeological and fossil records. Toward this end, some anthropologists have concluded that Neanderthals possessed symbolism because researchers have recovered artifacts (presumably reflecting symbolic capabilities) from the same layers that harbored Neanderthal fossils. Unfortunately, this approach is complicated by other studies showing that cave layers are often mixed, whether by the caves’ occupants (hominid or modern human) or by animals living in the caves. This mixing leads to the accidental association of fossil and archaeological remains. In other words, the mixing of layers raises questions about who the manufacturers of these artifacts were.

Because we know modern humans possess the capacity for symbolism, it is much more likely that modern humans, not Neanderthals, made the symbolic artifacts in these instances. Then, only through an upheaval of the cave layers did the artifacts mix with Neanderthal remains. (See the Resources section for articles that elaborate this point.)

More often than not, archaeological remains are unearthed by themselves with no corresponding fossil specimens. This is the reason why understanding the timing of Neanderthals’ disappearance and modern humans’ arrival in different regions of Europe becomes so important (and why the two questions interrelate). Paleoanthropologists believe that if they can show that Neanderthals lived in a locale at the time symbolic artifacts were produced, then it becomes conceivable that these creatures made the symbolic items. This interpretation increases in plausibility if no modern humans were around at the time.

Some researchers have argued along these lines regarding the hatch marks found on the floor of Gorham’s Cave.2 The markings were made in the bedrock of the cave floor. The layers above the bedrock date to between 30,000 and 39,000 years in age. Some paleoanthropologists argue that Neanderthals must have made the markings. Why? Because, even though modern humans were already in Europe by that time, these paleoanthropologists think that modern humans had not yet made their way to the southern part of the Iberian Peninsula. These same researchers also think that Neanderthals survived in Iberia until about 32,000 years ago, even though their counterparts in other parts of Europe had already disappeared. So, on this basis, paleoanthropologists conclude that Neanderthals produced the hatch marks and, thus, displayed symbolic capabilities.


Figure 2: Hatch marks on the floor of Gorham’s Cave. Image credit: Wikipedia

When Did Neanderthals Disappear from Iberia?

But recent work challenges this conclusion. The Spanish and Japanese team took 17 new radiocarbon measurements from layers of the Bajondillo Cave (located in southern Iberia, near Gorham’s Cave) with the hopes of precisely documenting the change in technology from Mousterian (made by Neanderthals) to Aurignacian (made by modern humans). This transition corresponds to the replacement of Neanderthals by modern humans elsewhere in Europe.

The researchers combined the data from their samples with previous measurements made at the site to pinpoint this transition at around 43,000 years ago—not 32,000 years ago. In other words, modern humans occupied Iberia at the same time they occupied other places in Europe. This result also means that Neanderthals had disappeared from Iberia well before the hatch marks in Gorham’s Cave were made.
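
As an aside, the arithmetic that turns a radiocarbon measurement into an age is straightforward: the fraction of carbon-14 remaining in a sample decays with a half-life of about 5,730 years. Here is a back-of-envelope sketch; the 0.55 percent figure is an invented illustration, and real studies calibrate these raw ages against independent records, a step skipped here.

```python
import math

# Back-of-envelope radiocarbon arithmetic: the surviving carbon-14 fraction
# decays with a half-life of about 5,730 years. (The 0.0055 figure is an
# invented illustration; real studies also calibrate raw ages, skipped here.)
HALF_LIFE = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Uncalibrated age in years implied by the surviving 14C fraction."""
    return -HALF_LIFE * math.log(fraction_remaining) / math.log(2)

print(round(radiocarbon_age(0.0055)))  # ~43,000 years
```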

Were Neanderthals Exceptional Like Modern Humans?

Though claims of Neanderthal exceptionalism abound in the scientific literature and in popular science articles, the claims universally fail to withstand ongoing scientific scrutiny, as this latest discovery attests. Simply put, based on the archaeological record, there are no good reasons to think that Neanderthals displayed symbolism.

From my perspective, the case for Neanderthal symbolism seems to be driven more by ideology than actual scientific evidence.

It is also worth noting that comparative studies on Neanderthal and modern human brain structures also lead to the conclusion that humans displayed symbolism and Neanderthals did not. (See the Resources section for articles that describe this work in more detail.)

Why Does It Matter?

Questions about Neanderthal symbolic capacity and, hence, exceptionalism have bearing on how we understand human beings. Are human beings unique in our capacity for symbolism or is this quality displayed by other hominins? If humans are not alone in our capacity for symbolism, then we aren’t exceptional. And, if we aren’t exceptional then it becomes untenable to embrace the biblical concept of human beings as God’s image bearers. (As a Christian, I see symbolism as a manifestation of the image of God.)

But, based on the latest scientific evidence, the verdict is in: modern humans are the only species to display the capacity for symbolism. In this way, scientific advance affirms that humans are exceptional in a way that aligns with the biblical concept of the image of God.

The Neanderthals’ alibi holds up. They weren’t there, but humans were. Case closed.

Resources

Endnotes
  1. Miguel Cortés-Sánchez et al., “An Early Aurignacian Arrival in Southwestern Europe,” Nature Ecology and Evolution 3 (January 21, 2019): 207–12, doi:10.1038/s41559-018-0753-6.
  2. Joaquín Rodríguez-Vidal et al., “A Rock Engraving Made by Neanderthals in Gibraltar,” Proceedings of the National Academy of Sciences USA 111, no. 37 (September 16, 2014): 13301–6, doi:10.1073/pnas.1411529111.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/27/timing-of-neanderthals-disappearance-makes-art-claims-unlikely

Origins of Monogamy Cause Evolutionary Paradigm Breakup

BY FAZALE RANA – MARCH 20, 2019

Gregg Allman fronted the Allman Brothers Band for over 40 years until his death in 2017 at the age of 69. Writer Mark Binelli described Allman’s voice as “a beautifully scarred blues howl, old beyond its years.”1

A rock legend who helped pioneer southern rock, Allman was as well known for his chaotic, dysfunctional personal life as for his accomplishments as a musician. Allman struggled with drug abuse and addiction. He was also married six times, with each marriage ending in divorce and, at times, in a public spectacle.

In a 2009 interview with Binelli for Rolling Stone, Allman reflected on his failed marriages: “To tell you the truth, it’s my sixth marriage—I’m starting to think it’s me.”2

Allman isn’t the only one to have trouble with marriage. As it turns out, so do evolutionary biologists—but for different reasons than Gregg Allman.

To be more exact, evolutionary biologists have made an unexpected discovery about the evolutionary origin of monogamy (a single mate for at least a season) in animals—an insight that raises questions about the evolutionary explanation. Based on recent work headed by a large research team of investigators from the University of Texas (UT), Austin, it looks like monogamy arose independently, multiple times, in animals. And these origin events were driven, in each instance, by the same genetic changes.3

In my view, this remarkable example of evolutionary convergence highlights one of the many limitations of evolutionary theory. It also contributes to my skepticism (and that of other intelligent design proponents/creationists) about the central claim of the evolutionary paradigm: namely, that the origin, design, history, and diversity of life can be fully explained by evolutionary mechanisms.

At the same time, the independent origins of monogamy—driven by the same genetic changes—(as well as other examples of convergence) find a ready explanation within a creation model framework.

Historical Contingency

To appreciate why I believe this discovery is problematic for the evolutionary paradigm, it is necessary to consider the nature of evolutionary mechanisms. According to the evolutionary biologist Stephen Jay Gould (1941–2002), evolutionary transformations occur in a historically contingent manner.4 This means that the evolutionary process consists of an extended sequence of unpredictable, chance events. If any of these events were altered, it would send evolution down a different trajectory.

To help clarify this concept, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time. In other words, the evolutionary process should not repeat itself. And rarely should it arrive at the same end point.

Gould based the concept of historical contingency on his understanding of the mechanisms that drive evolutionary change. Since the time of Gould’s original description of historical contingency, several studies have affirmed his view. (For descriptions of some representative studies, see the articles listed in the Resources section.) In other words, researchers have experimentally shown that the evolutionary process is, indeed, historically contingent.

A Failed Prediction of the Evolutionary Paradigm

Given historical contingency, it seems unlikely that distinct evolutionary pathways would lead to identical or nearly identical outcomes. Yet, when viewed from an evolutionary standpoint, it appears as if repeated evolutionary outcomes are a common occurrence throughout life’s history. This phenomenon—referred to as convergence—is widespread. Evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, that identical evolutionary outcomes are a characteristic feature of the biological realm.5 Scientists see these repeated outcomes at the ecological, organismal, biochemical, and genetic levels. In fact, in my book The Cell’s Design, I describe 100 examples of convergence at the biochemical level.

In other words, biologists have made two contradictory observations within the evolutionary framework: (1) evolutionary processes are historically contingent and (2) evolutionary convergence is widespread. Since the publication of The Cell’s Design, many new examples of convergence have been unearthed, including the recent origin of monogamy discovery.

Convergent Origins of Monogamy

Working within the framework of the evolutionary paradigm, the UT research team sought to understand the evolutionary transition to monogamy. To achieve this insight, they compared the gene expression profiles in the neural tissues of reproductive males for closely related pairs of species, with one species displaying monogamous behavior and the other nonmonogamous reproduction.

The species pairs spanned the major vertebrate groups and included mice, voles, songbirds, frogs, and cichlids. From an evolutionary perspective, these organisms would have shared a common ancestor 450 million years ago.

Monogamous behavior is remarkably complex. It involves the formation of bonds between males and females, care of offspring by both parents, and increased territorial defense. Yet, the researchers discovered that in each instance of monogamy the gene expression profiles in the neural tissues of the monogamous species were identical and distinct from the gene expression patterns for their nonmonogamous counterparts. Specifically, they observed the same differences in gene expression for the same 24 genes. Interestingly, genes that played a role in neural development, cell-cell signaling, synaptic activity, learning and memory, and cognitive function displayed enhanced gene expression. Genes involved in gene transcription and AMPA receptor regulation were down-regulated.

So, how do the researchers account for this spectacular example of convergence? They conclude that a “universal transcriptomic mechanism” exists for monogamy and speculate that the gene modules needed for monogamous behavior already existed in the last common ancestor of vertebrates. When needed, these modules were independently recruited at different times in evolutionary history to yield monogamous species.
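
To picture what finding “the same differences in gene expression” looks like in practice, here is a minimal sketch of the comparison. The gene names and fold-change values are invented for illustration; the actual study surveyed expression profiles across thousands of genes in each species pair.

```python
# Hypothetical fold-changes (monogamous vs. nonmonogamous) for a few genes in
# each species pair; positive = up-regulated in the monogamous species.
# Gene names and values are invented for illustration, not the study's data.
fold_changes = {
    "mouse":    {"geneA": +2.1, "geneB": -1.8, "geneC": +0.1},
    "vole":     {"geneA": +1.7, "geneB": -2.2, "geneC": -0.9},
    "songbird": {"geneA": +2.5, "geneB": -1.5, "geneC": +1.3},
    "frog":     {"geneA": +1.9, "geneB": -2.0, "geneC": -0.2},
    "cichlid":  {"geneA": +2.3, "geneB": -1.7, "geneC": +0.4},
}

def convergent_genes(data, threshold=1.0):
    """Genes shifted in the same direction (beyond threshold) in every lineage."""
    genes = next(iter(data.values())).keys()
    shared = {}
    for gene in genes:
        values = [profile[gene] for profile in data.values()]
        if all(v >= threshold for v in values):
            shared[gene] = "up"
        elif all(v <= -threshold for v in values):
            shared[gene] = "down"
    return shared

print(convergent_genes(fold_changes))  # {'geneA': 'up', 'geneB': 'down'}
```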

Yet, given the number of genes involved and the specific changes in gene expression needed to produce the complex behavior associated with monogamous reproduction, it seems unlikely that this transformation would happen a single time, let alone multiple times, in the exact same way. In fact, Rebecca Young, the lead author of the journal article detailing the UT research team’s work, notes that “Most people wouldn’t expect that across 450 million years, transitions to such complex behaviors would happen the same way every time.”6

So, is there another way to explain convergence?

Convergence and the Case for a Creator

Prior to Darwin (1809–1882), biologists referred to shared biological features found in organisms that cluster into disparate biological groups as analogies. (In an evolutionary framework, analogies are referred to as evolutionary convergences.) They viewed analogous systems as designs conceived by the Creator that were then physically manifested in the biological realm and distributed among unrelated organisms.

In light of this historical precedent, I interpret convergent features (analogies) as the handiwork of a Divine mind. The repeated origins of biological features equate to the repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

Thus, the idea of monogamous convergence seems to divorce itself from the evolutionary framework, but it makes for a solid marriage in a creation model framework.

Resources

Endnotes
  1. Mark Binelli, “Gregg Allman: The Lost Brother,” Rolling Stone, no. 1082/1083 (July 9–23, 2009), https://www.rollingstone.com/music/music-features/gregg-allman-the-lost-brother-108623/.
  2. Binelli, “Gregg Allman: The Lost Brother.”
  3. Rebecca L. Young et al., “Conserved Transcriptomic Profiles Underpin Monogamy across Vertebrates,” Proceedings of the National Academy of Sciences, USA 116, no. 4 (January 22, 2019): 1331–36, doi:10.1073/pnas.1813775116.
  4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
  5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
  6. University of Texas at Austin, “Evolution Used Same Genetic Formula to Turn Animals Monogamous,” ScienceDaily (January 7, 2019), www.sciencedaily.com/releases/2019/01/1901071507.htm.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/20/origins-of-monogamy-cause-evolutionary-paradigm-breakup

Biochemical Synonyms Restate the Case for a Creator

BY FAZALE RANA – MARCH 13, 2019

Sometimes I just can’t help myself. I know it’s clickbait but I click on the link anyway.

A few days ago, as a result of momentary weakness, I found myself reading an article from the ScoopWhoop website, “16 Things Most of Us Think Are the Same but Actually Aren’t.”

OK. OK. Now that you saw the title you want to click on the link, too.

To save you from wasting five minutes of your life, here is the ScoopWhoop list:

  • Weather and Climate
  • Turtle and Tortoise
  • Jam and Jelly
  • Eraser and Rubber
  • Great Britain and the UK
  • Pill and Tablet
  • Shrimp and Prawn
  • Butter and Margarine
  • Orange and Tangerine
  • Biscuits and Cookies
  • Cupcakes and Muffins
  • Mushrooms and Toadstools
  • Tofu and Paneer
  • Rabbits and Hares
  • Alligators and Crocodiles
  • Rats and Mice

And there you have it. Not a very impressive list, really.

If I were putting together a biochemist’s version of this list, I would start with synonymous mutations. Even though many life scientists think they are the same, studies indicate that they “actually aren’t.”

If you have no idea what I am talking about or what this insight has to do with the creation/evolution debate, let me explain by starting with some background information, beginning with the central dogma of molecular biology and the genetic code.

Central Dogma of Molecular Biology

According to this tenet of molecular biology, the information stored in DNA is functionally expressed through the activities of proteins. When it is time for the cell’s machinery to produce a particular protein, it copies the appropriate information from the DNA molecule through a process called transcription and produces a molecule called messenger RNA (mRNA). Once assembled, mRNA migrates to the ribosome, where it directs the synthesis of proteins through a process known as translation.


Figure 1: The central dogma of molecular biology. Image credit: Shutterstock
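
In computational terms, transcription amounts to copying the DNA template strand into a complementary RNA string. A minimal sketch, using an arbitrary example sequence:

```python
# Minimal sketch of transcription: the template DNA strand is read and a
# complementary mRNA copy is built (A->U, T->A, G->C, C->G). The sequence
# below is an arbitrary example.
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_dna: str) -> str:
    """Copy a DNA template strand into messenger RNA."""
    return "".join(DNA_TO_MRNA[base] for base in template_dna)

print(transcribe("TACAAAGAA"))  # 'AUGUUUCUU', ready for translation at the ribosome
```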

The Genetic Code

At first glance, there appears to be a mismatch between the stored information in DNA and the information expressed in proteins. A one-to-one relationship cannot exist between the four different nucleotides that make up DNA and the twenty different amino acids used to assemble proteins. The cell handles this mismatch by using a code comprised of groupings of three nucleotides, called codons, to specify the twenty different amino acids.

Figure 2: Codons. Image credit: Wikipedia

The cell uses a set of rules to relate these nucleotide triplet sequences to the twenty amino acids that comprise proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets represent the fundamental units of the genetic code. The code uses each combination of nucleotide triplets to signify an amino acid. This code is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode twenty amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.1


Figure 3: The genetic code. Image credit: Shutterstock
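
The genetic code, in other words, works like a lookup table. The sketch below encodes a small excerpt of the standard table (the full table has 64 entries, including three stop codons) and uses it to translate a short mRNA, reading three bases at a time. Notice the redundancy: six different codons all specify leucine, while methionine is specified by just one.

```python
# A small excerpt of the standard genetic code (mRNA codon -> amino acid).
# The full table has 64 entries, including three stop codons.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",                              # two codons for Phe
    "UUA": "Leu", "UUG": "Leu",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",  # six codons for Leu
    "AUG": "Met",                                            # one codon for Met
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA three bases at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUCUUGGAUAA"))  # ['Met', 'Phe', 'Leu', 'Gly']
```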

A little more background information about mutations will help fill out the picture.

Mutations

A mutation refers to any change that takes place in the DNA nucleotide sequence. DNA can experience several different types of mutations. Substitution mutations are one common type. When a substitution mutation occurs, one (or more) of the nucleotides in the DNA strand is replaced by another nucleotide. For example, an A may be replaced by a G, or a C may be replaced by a T. This substitution changes the codon. Interestingly, the genetic code is structured in such a way that when substitution mutations take place, the resulting codon often specifies the same amino acid (due to redundancy) or an amino acid that has similar chemical and physical properties to the amino acid originally encoded.

Synonymous and Nonsynonymous Mutations

When substitution mutations generate a new codon that specifies the same amino acid as initially encoded, it’s referred to as a synonymous mutation. However, when a substitution produces a codon that specifies a different amino acid, it’s called a nonsynonymous mutation.
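
Classifying a substitution is then a simple comparison: change one base in a codon and check whether the encoded amino acid changes. A sketch, again using only a tiny excerpt of the code table:

```python
# Classifying a single-base substitution using a tiny excerpt of the genetic
# code (a real tool would carry all 64 codons).
CODE = {"UUU": "Phe", "UUC": "Phe", "CUU": "Leu", "GUU": "Val"}

def classify(codon: str, position: int, new_base: str) -> str:
    """Mutate one base and report whether the encoded amino acid changes."""
    mutant = codon[:position] + new_base + codon[position + 1:]
    before, after = CODE[codon], CODE[mutant]
    return "synonymous" if before == after else f"nonsynonymous ({before} -> {after})"

print(classify("UUU", 2, "C"))  # synonymous: UUU and UUC both specify Phe
print(classify("UUU", 0, "C"))  # nonsynonymous (Phe -> Leu)
print(classify("UUU", 0, "G"))  # nonsynonymous (Phe -> Val)
```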

Nonsynonymous mutations can be deleterious if they affect a critical amino acid or if they significantly alter the chemical and physical profile along the protein chain. If the substituted amino acid possesses dramatically different physicochemical properties from the native amino acid, it may cause the protein to fold improperly. Improper folding impacts the protein’s structure, yielding a biomolecule with reduced or even lost function.

On the other hand, biochemists have long thought that synonymous mutations have no effect on protein structure and function because these types of mutations don’t change the amino acid sequences of proteins. Even though biochemists think that synonymous mutations are silent—having no functional consequences—evolutionary biologists find ways to use them, including using patterns of synonymous mutations to establish evolutionary relationships.

Patterns of Synonymous Mutations and the Case for Biological Evolution

Evolutionary biologists consider shared genetic features found in organisms that naturally group together as compelling evidence for common descent. One feature of particular interest is the identical (or nearly identical) DNA sequence patterns found in genomes. According to this line of reasoning, the shared patterns arose as a result of a series of substitution mutations that occurred in the common ancestor’s genome. Presumably, as the varying evolutionary lineages diverged from the nexus point, they carried with them the altered sequences created by the primordial mutations.

Synonymous mutations play a significant role in this particular argument for common descent. Because synonymous mutations don’t alter the amino acid sequence of proteins, their effects are considered to be inconsequential. So, when the same (or nearly the same) patterns of synonymous mutations are observed in genomes of organisms that cluster together into the same group, most life scientists interpret them as compelling evidence of the organisms’ common evolutionary history.
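
In practice, this line of reasoning involves aligning gene sequences and looking for positions where related species share the same substitution relative to an inferred ancestral sequence. A minimal sketch with invented sequences (here the shared change, TTT to TTC, happens to be synonymous, since both codons specify phenylalanine):

```python
# Spotting a shared substitution in aligned gene sequences (sequences are
# invented for illustration; real analyses use genome-scale alignments).
ancestral = "ATGTTTCTTGGA"   # stand-in for the inferred ancestral sequence
species_1 = "ATGTTCCTTGGA"   # carries a T -> C substitution at position 5
species_2 = "ATGTTCCTTGGA"   # carries the very same substitution

shared = [
    i for i, (a, s1, s2) in enumerate(zip(ancestral, species_1, species_2))
    if s1 == s2 != a   # both species differ from the ancestor in the same way
]
print(shared)  # [5] -- TTT -> TTC, a synonymous change (both specify Phe)
```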

It is conceivable that nonsynonymous mutations, which alter the protein amino acid sequences, may impart some type of benefit and, therefore, shared patterns of nonsynonymous changes could be understood as evidence for shared design. (See the last section of this article.) But this is not the case when it comes to synonymous mutations, which raises the question: Why would a Creator intentionally introduce new codons that code for the same amino acid into genes when these changes have no functional utility?

Apart from invoking a Creator, the shared patterns of synonymous mutations make perfect sense if genomes have been shaped by evolutionary processes and an evolutionary history. However, this argument for biological evolution (shared ancestry) and challenge to a creation model interpretation (shared design) hinges on the underlying assumption that synonymous mutations have no functional consequence.

But what if this assumption no longer holds?

Synonymous Mutations Are Not Interchangeable

Biochemists used to think that synonymous mutations had no impact whatsoever on protein structure and, hence, function, but this view is changing thanks to studies such as the one carried out by researchers at the University of Colorado, Boulder.2

These researchers discovered synonymous mutations that increase the translational efficiency of a gene (found in the genome of Salmonella enterica). This gene codes for an enzyme that plays a role in the biosynthetic pathway for the amino acid arginine. (This enzyme also plays a role in the biosynthesis of proline.) They believe that these mutations alter the three-dimensional structure of the DNA sequence near the beginning of the coding portion of the gene. They also think that the synonymous mutations improve the stability of the messenger RNA molecule. Both effects would lead to greater translational efficiency at the ribosome.

As radical (and unexpected) as this finding may seem to be, it follows on the heels of other recent discoveries that also recognize the functional importance of synonymous mutations.3 Generally speaking, biochemists have discovered that synonymous mutations influence not only the rate and efficiency of translation (as the scientists from the University of Colorado, Boulder learned) but also the folding of the proteins after they are produced at the ribosome.

Even though synonymous mutations leave the amino acid sequence of the protein unchanged, they can exert influence by altering the:

  • regulatory regions of the gene that influence the transcription rate
  • secondary and tertiary structure of messenger RNA that influences the rate of translation
  • stability of messenger RNA that influences the amount of protein produced
  • translation rate that influences the folding of the protein as it exits the ribosome

Biochemists are just beginning to come to terms with the significance of these discoveries, but it is already clear that synonymous mutations have biomedical consequences.4 They also impact models for molecular evolution. But for now, I want to focus on the impact these discoveries have on the creation/evolution debate.

Patterns of Synonymous Mutations and the Case for Creation

As noted, many people consider the most compelling evidence for common descent to be the shared genetic features displayed by organisms that naturally cluster together. But if life is the product of a Creator’s handiwork, the shared genetic features could be understood as shared designs deployed by a Creator. In fact, a historical precedent exists for the common design interpretation. Prior to Darwin, biologists viewed shared biological features as manifestations of archetypical designs that existed in the Creator’s mind.

But the common design interpretation requires that the shared features be functional. (Or, that they arise independently in a nonrandom manner.) For those who view life from the framework of the evolutionary paradigm, the shared patterns of synonymous mutations invalidate the common design explanation—because these mutations are considered to be functionally insignificant.

But in the face of mounting evidence for the functional importance of synonymous mutations, this objection to common design has begun to erode. Though many life scientists are quick to dismiss the common design interpretation of biology, advances in molecular biology continue to strengthen this explanation and, with it, the case for a Creator.

Resources

Endnotes
  1. As I discuss in The Cell’s Design, the rules of the genetic code and the nature of the redundancy appear to be designed to minimize errors in translating information from DNA into proteins that would occur due to substitution mutations. This optimization stands as evidence for the work of an intelligent Agent.
  2. JohnCarlo Kristofich et al., “Synonymous Mutations Make Dramatic Contributions to Fitness When Growth Is Limited by Weak-Link Enzyme,” PLoS Genetics 14, no. 8 (August 27, 2018): e1007615, doi:10.1371/journal.pgen.1007615.
  3. Here are a few representative studies that ascribe functional significance to synonymous mutations: Anton A. Komar, Thierry Lesnik, and Claude Reiss, “Synonymous Codon Substitutions Affect Ribosome Traffic and Protein Folding during in vitro Translation,” FEBS Letters 462, no. 3 (November 30, 1999): 387–91, doi:10.1016/S0014-5793(99)01566-5; Chung-Jung Tsai et al., “Synonymous Mutations and Ribosome Stalling Can Lead to Altered Folding Pathways and Distinct Minima,” Journal of Molecular Biology 383, no. 2 (November 7, 2008): 281–91, doi:10.1016/j.jmb.2008.08.012; Florian Buhr et al., “Synonymous Codons Direct Cotranslational Folding toward Different Protein Conformations,” Molecular Cell Biology 61, no. 3 (February 4, 2016): 341–51, doi:10.1016/j.molcel.2016.01.008; Chien-Hung Yu et al., “Codon Usage Influences the Local Rate of Translation Elongation to Regulate Co-translational Protein Folding,” Molecular Cell Biology 59, no. 5 (September 3, 2015): 744–55, doi:10.1016/j.molcel.2015.07.018.
  4. Zubin E. Sauna and Chava Kimchi-Sarfaty, “Understanding the Contribution of Synonymous Mutations to Human Disease,” Nature Reviews Genetics 12 (August 31, 2011): 683–91, doi:10.1038/nrg3051.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/13/biochemical-synonyms-restate-the-case-for-a-creator

Discovery of Intron Function Interrupts Evolutionary Paradigm

BY FAZALE RANA – MARCH 6, 2019

Nobody likes to be interrupted when they are talking. It feels disrespectful and can be frustrating. Interruptions derail the flow of a conversation.

The editors tell me that I need to interrupt this lead to provide a “tease” for what is to come. So, here goes: Interruptions happen in biochemical systems, too. Life scientists long thought that these interruptions disrupted the flow of biochemical information. But it turns out that these interruptions serve an important function, offering a rejoinder to a common argument against intelligent design.

Now back to the lead.

Perhaps it is no surprise that some psychologists study interruptions1 with the hope of discovering answers to questions such as:

  • Why do people interrupt?
  • Who is most likely to interrupt?
  • Do we all perceive interruptions in the same way?

While there is still much to learn about the science of interruptions, psychologists have discovered that men interrupt more often than women. Ironically, men often view women who interrupt as ruder and less intelligent than men who interrupt during conversations.

Researchers have also found that a person’s cultural background influences the likelihood that he or she will interrupt during a discourse. Personality also plays a role. Some people are more sensitive to pauses in conversation and, therefore, find themselves interrupting more often than those who are more comfortable with periods of silence.

Psychologists have learned that not all interruptions are the same. Some people interrupt because they want the “floor.” These people are called intrusive interrupters. Cooperative interrupters help move the conversation along by agreeing with the speaker and finishing the speaker’s thoughts.

Interruptions are not confined to conversations. They are a part of life, including the biochemical operations that take place inside the cell.

In fact, biochemists have discovered that the information harbored in genes, which contains the instructions to build proteins—the workhorse molecules of the cell—experiences interruptions in its coding sequences. These intrusive interruptions would disrupt the flow of information in the cell during protein synthesis if the interrupting sequences weren’t removed by the cell’s machinery.

Molecular biologists have long viewed these genetic “interruptions” (called introns) as serving no useful purpose for the cell, with introns comprising a portion of the junk DNA found in the genomes of eukaryotic organisms. But it turns out that introns—like cooperative interruptions during a conversation—serve a useful purpose, according to the recent work of two independent teams of molecular biologists.

Introns Are Abundant

Introns are noncoding regions within genes: DNA sequences that interrupt a gene’s coding regions (called exons). Introns are pervasive in the genomes of eukaryotic organisms. For example, about 90 percent of mammalian genes contain introns, with an average of 8 introns per gene.

After the information stored in a gene is copied into messenger RNA, the intron sequences are excised, and the exons spliced together by a protein-RNA complex known as a spliceosome.
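
To picture the bookkeeping involved, consider a minimal Python sketch of the excise-and-join step. It is purely illustrative: the sequence and intron coordinates below are invented, and in the cell this work is performed by the spliceosome recognizing splice-site signals, not by string slicing.

    def splice(pre_mrna, introns):
        """Remove intron spans (start, end) and join the remaining exons."""
        exons = []
        pos = 0
        for start, end in sorted(introns):
            exons.append(pre_mrna[pos:start])  # exon preceding this intron
            pos = end                          # skip over the intron itself
        exons.append(pre_mrna[pos:])           # final exon
        return "".join(exons)

    # Two made-up introns interrupting three exons
    pre_mrna = "AUGGCU" + "GUAAGU" + "CCAGAA" + "GUCAGU" + "UGGUAA"
    print(splice(pre_mrna, [(6, 12), (18, 24)]))  # AUGGCUCCAGAAUGGUAA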

Figure 1: Drawing of pre-mRNA to mRNA. Image credit: Wikipedia

Molecular biologists have long wondered why eukaryotic genes would be riddled with introns. Introns seemingly make the structure and expression of eukaryotic genes unnecessarily complicated. What possible purpose could introns serve? Researchers also thought that once the introns were spliced out of the messenger RNA sequences, they were discarded as genetic debris.

Introns Serve a Functional Purpose

But recent work by two independent research teams from Sherbrooke University in Quebec, Canada, and MIT, respectively, indicates that molecular biologists have been wrong about introns. They have learned that once spliced from messenger RNA, these fragments play a role in helping cells respond to stress.

Both research teams studied baker’s yeast. One advantage of using yeast as a model organism relates to the relatively small number of introns (295) in its genome.

Figure 2: A depiction of baker’s yeast. Image credit: Shutterstock

Taking advantage of the limited number of introns in baker’s yeast, the team from Sherbrooke University created hundreds of yeast strains—each one missing just one of its introns. When grown under normal conditions with a ready supply of available nutrients, the strains missing a single intron grew normally—suggesting that introns aren’t of much importance. But when the researchers grew the yeast cells under conditions of food scarcity, the yeast with the deleted introns frequently died.2

The MIT team observed something similar. They noticed that during the stationary phase of growth (when nutrients become depleted, slowing down growth), introns spliced from RNA accumulated in the growth medium. The researchers deleted the specific introns that they found in the growth medium from the baker’s yeast genome and discovered that the resulting yeast strains struggled to survive under nutrient-poor conditions.3

At this point, it isn’t clear how introns help cells respond to stress caused by a lack of nutrients, but researchers have some clues. The Sherbrooke University team thinks that the spliced-out introns play a role in repressing the production of proteins that help form ribosomes, the biochemical machines that manufacture proteins. Because protein synthesis requires building-block materials and energy, protein production slows down when nutrients are scarce. Ratcheting down protein synthesis impedes cell growth but gives cells a better chance of surviving a nutrient shortage. One way cells can achieve this objective is to stop making ribosomes.

The MIT team thinks that some spliced-out introns interact with spliceosomes, preventing them from splicing out other introns. When this disruption happens, it slows down protein synthesis.

Both research groups believe that in times when nutrients are abundant, the spliced-out introns are broken down by the cell’s machinery. But when nutrients are scarce, that condition triggers intron accumulation.

At this juncture, it isn’t clear if the two research teams have uncovered distinct mechanisms that work collaboratively to slow down protein production, or if they are observing facets of the same mechanism. Regardless, it is evident that introns display functional utility. It’s a surprising insight that has important ramifications for our understanding of the structure and function of genomes. This insight has potential biomedical utility and theological implications, as well.

Intron Function and the Case for Creation

Scientists who view biology through the lens of the evolutionary paradigm are quick to conclude that the genomes of organisms reflect the outworking of evolutionary history. Their perspective causes them to see the features of genomes, such as introns, as little more than the remnants of an unguided evolutionary process. Within this framework, there is no reason to think that any particular DNA sequence element, including introns, harbors function. In fact, many life scientists regard the “evolutionary vestiges” in the genome as junk DNA. This clearly has been the case for introns.

Yet, a growing body of data indicates that virtually every category of so-called junk DNA displays function. We can now add introns—cooperative interrupters—to the list. And based on the data on hand, we can make a strong case that most of the sequence elements in genomes possess functional utility.

Could it be that scientists really don’t understand the biology of genomes? Or maybe we have the wrong paradigm?

It seems to me that science is in the midst of a revolution in our understanding of genome structure and function. Instead of being a wasteland of evolutionary debris, most of the genome appears to be functional. And the architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

But what if the genome is viewed from a creation model framework?

The elegance and sophistication of genomes are features that are increasingly coming into scientific view. And this is precisely what I would expect if genomes were the product of a Mind—the handiwork of a Creator.

Now that is a discovery worth talking about.

Endnotes
  1. Teal Burrell, “The Science behind Interrupting: Gender, Nationality and Power, and the Roles They Play,” Post Magazine (March 14, 2018), https://www.scmp.com/magazines/post-magazine/long-reads/article/2137023/science-behind-interrupting-gender-nationality; Alex Shashkevich, “Why Do People Interrupt? It Depends on Whom You’re Talking To,” The Guardian (May 18, 2018), https://www.theguardian.com/lifeandstyle/2018/may/18/why-do-people-interrupt-it-depends-on-whom-youre-talking-to.
  2. Julie Parenteau et al., “Introns Are Mediators of Cell Response to Starvation,” Nature 565 (January 16, 2019): 612–17, doi:10.1038/s41586-018-0859-7.
  3. Jeffrey T. Morgan, Gerald R. Fink, and David P. Bartel, “Excised Linear Introns Regulate Growth in Yeast,” Nature 565 (January 16, 2019): 606–11, doi:10.1038/s41586-018-0828-1.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/06/discovery-of-intron-function-interrupts-evolutionary-paradigm

Does Animal Planning Undermine the Image of God?

BY FAZALE RANA – JANUARY 23, 2019

A few years ago, we had an all-white English Bulldog named Archie. He would lumber toward even complete strangers, eager to befriend them and earn their affections. And people happily obliged this playful pup.

Archie wasn’t just an adorable dog. He was also well trained. We taught him to ring a bell hanging from a sliding glass door in our kitchen so he could let us know when he wanted to go out. He rarely would ring the bell. Instead, he would just sit by the door and wait . . . unless the neighbor’s cat was in the backyard. Then, Archie would repeatedly bang on the bell with great urgency. He had to get the cat at all costs. Clearly, he understood the bell’s purpose. He just chose to use it for his own devices.

Anyone who has owned a cat or dog knows that these animals do remarkable things. Animals truly are intelligent creatures.

But there are some people who go so far as to argue that animal intelligence is much more like human intelligence than we might initially believe. They base this claim, in part, on a handful of high-profile studies that indicate that some animals such as great apes and ravens can problem-solve and even plan for the future—behaviors that make them like us in some important ways.

Great Apes Plan for the Future

In 2006, two German anthropologists conducted a set of experiments on bonobos and orangutans in captivity that seemingly demonstrated that these creatures can plan for the future. Specifically, the test subjects selected, transported, and saved tools for use 1 hour and 14 hours later, respectively.1

To begin the study, the researchers trained both bonobos and orangutans to use a tool to get a reward from an apparatus. In the first experiment, the researchers blocked access to the apparatus. They laid out eight tools for the apes to select—two were suitable for the task and six were unsuitable. After selecting the tools, the apes were ushered into another room where they were kept for 1 hour. The apes were then allowed back into the room and granted access to the apparatus. To gain the reward, the apes had to select the correct tool and transport it to and from the waiting area. The anthropologists observed that the apes successfully obtained the reward in 70 percent of the trials by selecting and hanging on to the correct tool as they moved from room to room.

In the second experiment, the delay between tool selection and access to the apparatus was extended to 14 hours. This experiment focused on a single female individual. Instead of taking the test subject to the waiting room, the researchers took her to a sleeping room one floor above the waiting room before returning her to the room with the apparatus. She selected and held on to the tool for 14 hours while she moved from room to room in 11 of the 12 trials—each time successfully obtaining the reward.

On the basis of this study, the researchers concluded that great apes have the ability to plan for the future. They also argued that this ability emerged in the common ancestor of humans and great apes around 14 million years ago. So, even though we like to think of planning for the future as one of the “most formidable human cognitive achievements,”2 it doesn’t appear to be unique to human beings.

Ravens Plan for the Future

In 2017, two researchers from Lund University in Sweden demonstrated that ravens are capable of flexible planning just like the great apes.3 These cognitive scientists conducted a series of experiments with ravens, demonstrating that the large black birds can plan for future events and exert self-control for up to 17 hours prior to using a tool or bartering with humans for a reward. (Self-control is crucial for successfully planning for the future.)

The researchers taught ravens to use a tool to gain a reward from an apparatus. As part of the training phase, the test subjects also learned that other objects wouldn’t work on the apparatus.

In the first experiment, the ravens were exposed to the apparatus without access to tools. As such, they couldn’t gain the reward. Then the researchers removed the apparatus. One hour later, the ravens were taken to a different location and offered tools. Then, the researchers presented them with the apparatus 15 minutes later. On average, the raven test subjects selected and used tools to gain the reward in approximately 80 percent of the trials.

In the next experiment, the ravens were trained to barter by exchanging a token for a food reward. After the training, the ravens were taken to a different location and presented with a tray containing the token and three distractor objects by a researcher who had no history of bartering with the ravens. As with the results of the tool selection experiment, the ravens selected and used the token to successfully barter for food in approximately 80 percent of the trials.

When the scientists modified the experimental design to increase the time delay from 15 minutes to 17 hours between tool or token selection and access to the reward, the ravens successfully completed the task in nearly 90 percent of the trials.

Next, the researchers wanted to determine if the ravens could exercise self-control as part of their planning for the future. First, they presented the ravens with trays that contained a small food reward. Of course, all of the ravens took the reward. Next, the researchers offered the ravens trays that had the food reward and either tokens or tools and distractor items. By selecting the token or the tools, the ravens were ensured a larger food reward in the future. The researchers observed that the ravens selected the tool in 75 percent of the trials and the token in about 70 percent, instead of taking the small morsel of food. After selecting the tool or token, the ravens were given the opportunity to receive the reward about 15 minutes later.

The researchers concluded that, like the great apes, ravens can plan for the future. Moreover, these researchers argue that this insight opens up greater possibilities for animal cognition because, from an evolutionary perspective, ravens are regarded as avian dinosaurs. And mammals (including the great apes) are thought to have shared an evolutionary ancestor with dinosaurs 320 million years ago.

Are Humans Exceptional?

In light of these studies (and others like them), it becomes difficult to maintain that human beings are exceptional. Self-control and the ability to flexibly plan for future events are considered by many to be cornerstones of human cognition. Planning for the future requires mental representation of temporally distant events, the ability to set aside current sensory inputs in favor of unobservable future ones, and an understanding of which current actions will achieve a future goal.

For many Christians, such as me, the loss of human exceptionalism is concerning because if this idea is untenable, so, too, is the biblical view of human nature. According to Scripture, human beings stand apart from all other creatures because we bear God’s image. And, because every human being possesses the image of God, every human being has intrinsic worth and value. But if, in essence, human beings are no different from animals, it is challenging to maintain that we are the crown of creation, as Scripture teaches.

Yet recent work by biologist Johan Lind of Stockholm University (Sweden) indicates that the results of these two studies, and others like them, may be misleading. When properly interpreted, they pose no threat to human exceptionalism. According to Lind, animals can produce behavior that resembles flexible planning through a different process: associative learning.4 If so, this insight preserves the case for human exceptionalism and the image of God, because it means that only humans engage in genuine flexible planning for the future through higher-order cognitive processes.

Associative Learning and Planning for the Future

Lind points out that researchers working in artificial intelligence (AI) have long known that associative learning can produce complex behaviors in AI systems that give the appearance of having the capacity for planning. (Associative learning is the process that animals [and AI systems] use to establish an association between two stimuli or events, usually by the use of punishments or rewards.)

Figure 1: An illustration of associative learning in dogs. Image credit: Shutterstock

Lind wonders why researchers studying animal cognition ignore the work in AI. Applying insights from the work on AI systems, Lind developed mathematical models based on associative learning and used them to simulate the results of the studies on the great apes and ravens. He discovered that associative learning produced the same behaviors that the two research teams observed. In other words, planning-like behavior can emerge through associative learning alone. The same kind of process that gives AI systems the capacity to beat humans at chess can account for the planning-like behavior of animals.
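
Lind’s actual models are more detailed, but a minimal simulation conveys the idea. In the toy Python sketch below (my illustration, with invented learning rates and reward sizes, not Lind’s published model), an agent that merely updates learned stimulus values comes to prefer a tool over an immediate snack, because the tool choice is reinforced by the learned value of the state it leads to.

    import random

    ALPHA = 0.2  # learning rate (invented value)
    EPS = 0.1    # exploration rate (invented value)

    # Learned values of the two options offered on each trial.
    v = {"snack": 0.0, "tool": 0.0}
    # Learned value of the later state "holding the tool at the apparatus,"
    # where the large reward is actually delivered.
    v_holding_tool = 0.0

    def pick():
        """Epsilon-greedy choice between the snack and the tool."""
        if random.random() < EPS:
            return random.choice(list(v))
        return max(v, key=v.get)

    for trial in range(5000):
        choice = pick()
        if choice == "snack":
            v["snack"] += ALPHA * (1.0 - v["snack"])  # small, immediate reward
        else:
            # No food now; the tool choice is reinforced by the learned value
            # of the state it leads to (conditioned reinforcement).
            v["tool"] += ALPHA * (v_holding_tool - v["tool"])
            v_holding_tool += ALPHA * (4.0 - v_holding_tool)  # later big reward

    print(v)  # v["tool"] climbs toward 4.0 and overtakes v["snack"]

Nothing in this loop represents a future event. The agent ends up passing over the immediate morsel, much as the ravens did, yet the behavior emerges from simple value updating rather than mental time travel.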

The results of Lind’s simulations mean that it is most likely that animals “plan” for the future in ways that are entirely different from humans. In effect, the planning-like behavior of animals is an outworking of associative learning. On the other hand, humans uniquely engage in bona fide flexible planning through advanced cognitive processes such as mental time travel, among others.

Humans Are Exceptional

Even though the idea of human exceptionalism is continually under assault, it remains intact, as the latest work by Johan Lind illustrates. When the entire body of evidence is carefully weighed, there really is only one reasonable conclusion: Human beings uniquely possess advanced cognitive abilities that make possible our capacity for symbolism, open-ended generative capacity, theory of mind, and complex social interactions—scientific descriptors of the image of God.

Endnotes
  1. Nicholas J. Mulcahy and Josep Call, “Apes Save Tools for Future Use,” Science 312 (May 19, 2006): 1038–40, doi:10.1126/science.1125456.
  2. Mulcahy and Call, “Apes Save Tools for Future Use.”
  3. Can Kabadayi and Mathias Osvath, “Ravens Parallel Great Apes in Flexible Planning for Tool-Use and Bartering,” Science 357 (July 14, 2017): 202–4, doi:10.1126/science.aam8138.
  4. Johan Lind, “What Can Associative Learning Do for Planning?” Royal Society Open Science 5 (November 28, 2018): 180778, doi:10.1098/rsos.180778.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/01/23/does-animal-planning-undermine-the-image-of-god

Prebiotic Chemistry and the Hand of God

BY FAZALE RANA – JANUARY 16, 2019

“Many of the experiments designed to explain one or other step in the origin of life are either of tenuous relevance to any believable prebiotic setting or involve an experimental rig in which the hand of the researcher becomes for all intents and purposes the hand of God.”

Simon Conway Morris, Life’s Solution

If you could time travel, would you? Would you travel to the past or the future?

If asked this question, I bet many origin-of-life researchers would want to travel to the time in Earth’s history when life originated. Given the many scientifically impenetrable mysteries surrounding life’s genesis, I am certain many of the scientists working on these problems would love to see firsthand how life got its start.

It is true, origin-of-life researchers have some access to the origin-of-life process through the fossil and geochemical records of the oldest rock formations on Earth—yet this evidence only affords them a glimpse through the glass, dimly.

Because of these limitations, origin-of-life researchers have to carry out most of their work in laboratory settings, where they try to replicate the myriad steps they think contributed to the origin-of-life process. Pioneered by the late Stanley Miller in the early 1950s, this approach—dubbed prebiotic chemistry—has become a scientific subdiscipline in its own right.

Figure 1: Chemist Stanley Miller, circa 1980. Image credit: Wikipedia

Prebiotic Chemistry

In effect, the goals of prebiotic chemistry are threefold.

  • Proof of principle. The objective of these types of experiments is to determine—in principle—if a chemical or physical process that could potentially contribute to one or more steps in the origin-of-life pathway even exists.
  • Mechanism studies. Once processes have been identified that could contribute to the emergence of life, researchers study them in detail to get at the mechanisms undergirding these physicochemical transformations.
  • Geochemical relevance. Perhaps the most important goal of prebiotic studies is to establish the geochemical relevance of the physicochemical processes believed to have played a role in life’s start. In other words, how well do the chemical and physical processes identified and studied in the laboratory translate to early Earth’s conditions?

Without question, over the last 6 to 7 decades, origin-of-life researchers have been wildly successful with respect to the first two objectives. It is safe to say that origin-of-life investigators have demonstrated that—in principle—the chemical and physical processes needed to generate life through chemical evolutionary pathways exist.

But when it comes to the third objective, origin-of-life researchers have experienced frustration—and, arguably, failure.

Researcher Intervention and Prebiotic Chemistry

In an ideal world, humans would not intervene at all in any prebiotic study. But this ideal isn’t possible. Researchers involve themselves in the experimental design out of necessity, but also to ensure that the results of the study are reproducible and interpretable. If researchers didn’t set up the experimental apparatus, adjust the starting conditions, add the appropriate reactants, and analyze the products, the experiment would never happen. Utilizing carefully controlled conditions and chemically pure reagents is necessary for reproducibility and for making sense of the results. In fact, this level of control is essential for proof-of-principle and mechanistic prebiotic studies—and perfectly acceptable.

However, when it comes to prebiotic chemistry’s third goal, geochemical relevance, the highly controlled conditions of the laboratory become a liability. Here researcher intervention becomes potentially unwarranted. It goes without saying that the conditions of early Earth were uncontrolled and chemically and physically complex. Chemically pristine and physically controlled conditions didn’t exist. And, of course, origin-of-life researchers weren’t present to oversee the processes and guide them to their desired end. Yet it is rare for prebiotic simulation studies to take the actual conditions of early Earth fully into account in the experimental design. It is rarer still for origin-of-life investigators to acknowledge this limitation.

Figure 2: Laboratory technician. Image credit: Shutterstock

This complication means that many prebiotic studies designed to simulate processes on early Earth accomplish nothing of the sort because of excessive researcher intervention. Yet it isn’t always clear, when examining an experimental design, whether researcher involvement is legitimate or unwarranted.

As I point out in my book Creating Life in the Lab (Baker, 2011), one main reason for the lack of progress relates to the researcher’s role in the experimental design—a role not often recognized when experimental results are reported. Origin-of-life investigator Clemens Richert from the University of Stuttgart in Germany now acknowledges this very concern in a recent comment piece published by Nature Communications.1

As Richert points out, the role of researcher intervention and a clear assessment of geochemical relevance is rarely acknowledged or properly explored in prebiotic simulation studies. To remedy this problem, Richert calls for origin-of-life investigators to do three things when they report the results of prebiotic studies.

  • State explicitly the number of instances in which researchers engaged in manual intervention.
  • Describe precisely the prebiotic scenario a particular prebiotic simulation study seeks to model.
  • Reduce the number of steps involving manual intervention in whatever way possible.

Still, as Richert points out, it is not possible to provide a quantitative measure (a score) of geochemical relevance. And, hence, there will always be legitimate disagreement about the geochemical relevance of any prebiotic experiment.

Yet Richert’s commentary represents an important first step toward encouraging more realistic prebiotic simulation studies and a more cautious approach to interpreting their results. Hopefully, it will also lead to a more circumspect assessment of how well these types of studies account for the various steps in the origin-of-life process.

Researcher Intervention and the Hand of God

One concern not addressed by Richert in his commentary piece is the fastidiousness of many of the physicochemical transformations origin-of-life researchers deem central to chemical evolution. As I discuss in Creating Life in the Lab, mechanistic studies indicate that these processes are often dependent upon exacting conditions in the laboratory. To put it another way, these processes only take place—even under the most ideal laboratory conditions—because of human intervention. As a corollary, these processes would be unproductive on early Earth. They often require chemically pristine conditions, unrealistically high concentrations of reactants, carefully controlled order of additions, carefully regulated temperature, pH, salinity levels, etc.

As Richert states, “It’s not easy to see what replaced the flasks, pipettes, and stir bars of a chemistry lab during prebiotic evolution, let alone the hands of the chemist who performed the manipulations. (And yes, most of us are not comfortable with the idea of divine intervention.)”2

Sadly, since I made the point about researcher intervention nearly a decade ago, it has often been ignored, dismissed, and even ridiculed by many in the scientific community—simply because I have the temerity to think that a Creator brought life into existence.

Even though Richert and his many colleagues in the origin-of-life research community do whatever they can to eschew a Creator’s role in the origin of life, could it be that abiogenesis (life from nonlife) required the hand of God—divine intervention?

I would argue that this conclusion follows from nearly seven decades of work in prebiotic chemistry and the consistent demonstration of the central role that origin-of-life researchers play in the success of prebiotic simulation studies. It is becoming increasingly evident for whoever will “see” that the hand of the researcher serves as the analog for the hand of God.

Endnotes
  1. Clemens Richert, “Prebiotic Chemistry and Human Intervention,” Nature Communications 9 (December 12, 2018): 5177, doi:10.1038/s41467-018-07219-5.
  2. Richert, “Prebiotic Chemistry.”

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/01/16/prebiotic-chemistry-and-the-hand-of-god

Soft Tissue Preservation Mechanism Stabilizes the Case for Earth’s Antiquity

BY FAZALE RANA – DECEMBER 19, 2018

One of the highlights of the year at Reasons to Believe (well, it’s a highlight for some of us, anyway) is the white elephant gift exchange at our staff Christmas party. It is great fun to laugh together as a staff as we take turns unwrapping gifts—some cheesy, some useless, and others highly prized—and then “stealing” from one another those two or three gifts that everyone seems to want.

Over the years, I have learned a few lessons about choosing a white elephant gift to unwrap. Avoid large gifts. If the gift is a dud, large items are more difficult to find a use for than small ones. Also, more often than not, the most beautifully wrapped gifts turn out to be the biggest letdowns of all.

Giving and receiving gifts isn’t just limited to Christmas. People exchange all types of gifts with one another for all sorts of reasons.

Gifting is even part of the scientific enterprise—with the gifts taking on the form of scientific discoveries and advances. Many times, discoveries lead to new beneficial insights and technologies—gifts for humanity. Other times, these breakthroughs are gifts for scientists, signaling a new way to approach a scientific problem or opening up new vistas of investigation.

Soft Tissue Remnants Preserved in Fossils

One such gift was given to the scientific community over a decade ago by Mary Schweitzer, a paleontologist at North Carolina State University. Schweitzer and her team of collaborators recovered flexible, hollow, and transparent blood vessels from the remains of a T. rex specimen after removing the mineral component of the fossil.1 These blood vessels harbored microstructures with a cell-like morphology (form and structure) that she and her collaborators interpreted to be the remnants of red blood cells. This work showed conclusively that soft tissue materials could be preserved in fossil remains.

Though unexpected, the discovery was a landmark achievement for paleontology. Since Schweitzer’s discovery, paleontologists have unearthed the remnants of all sorts of soft tissue materials from fossils representing a wide range of organisms. (For a catalog of some of these finds, see my book Dinosaur Blood and the Age of the Earth.)

With access to soft tissue materials in fossils, paleontologists have a new window into the biology of Earth’s ancient life.

The Scientific Case for a Young Earth

Some Christians also saw Schweitzer’s discovery as a gift. But for them the value of this scientific present wasn’t the insight it provides about past life on Earth. Instead, they viewed this discovery (and others like it) as evidence that the earth must be no more than a few thousand years old. From a young-earth creationist (YEC) perspective, the survival of soft tissue materials in fossils indicates that these remains can’t be millions of years old. As a case in point, at the time Schweitzer reported her findings, John Morris, a young-earth proponent from the Institute for Creation Research, wrote:

Indeed, it is hard to imagine how soft tissue could have lasted even 5,000 years or so since the Flood of Noah’s day when creationists propose the dinosaur was buried. Such a thing could hardly happen today, for soft tissue decays rather quickly under any condition.2

In other words, from a YEC perspective, it is impossible for fossils to contain soft tissue remnants and be millions of years old. Soft tissues shouldn’t survive that long; they should readily degrade in a few thousand years. From a YEC view, soft tissue discoveries challenge the reliability of radiometric dating methods used to determine the fossils’ ages and, consequently, Earth’s antiquity. Furthermore, these breakthrough discoveries provide compelling scientific evidence for a young earth and support the idea that the fossil record results from a recent global (worldwide) flood.

Admittedly, on the surface the argument carries some weight. At first glance, it is hard to envision how soft tissue materials could survive for vast periods of time, given the wide range of mechanisms that drive the degradation of biological materials.

Preservation of Soft Tissues in Fossil Remains

Despite this first impression, over the last decade or so paleontologists have identified a number of mechanisms that can delay the degradation of soft tissues long enough for them to become entombed within a mineral shell. When this entombment happens, the soft tissue materials escape further degradation (for the most part). In other words, it is a race against time. Can mineral entombment take place before the soft tissue materials fully decompose? If so, then soft tissue remnants can survive for hundreds of millions of years. And any chemical or physical process that can delay the degradation will contribute to soft tissue survival by giving the entombment process time to take place.
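
To make the race concrete, here is a back-of-the-envelope Python sketch (my illustration, not a model drawn from the research literature). If decay and entombment are treated as competing first-order processes, the probability that entombment wins is k_entomb / (k_entomb + k_decay), so any mechanism that slows decay raises the odds of preservation. The rate constants are invented for illustration.

    # Toy competing-rates model of soft tissue preservation.
    k_decay = 1e-3   # tissue degradation rate (per year, invented)
    k_entomb = 1e-4  # mineral entombment rate (per year, invented)

    def p_entombed_first(k_e, k_d):
        """P(entombment before decay) for competing first-order processes."""
        return k_e / (k_e + k_d)

    print(f"Unstabilized tissue: {p_entombed_first(k_entomb, k_decay):.1%}")  # ~9.1%

    # A stabilizing mechanism (such as the cross-linking chemistry described
    # below) that slows decay tenfold dramatically boosts the survival odds.
    print(f"Stabilized tissue: {p_entombed_first(k_entomb, k_decay / 10):.1%}")  # 50.0%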

In Dinosaur Blood and the Age of the Earth, I describe several mechanisms that likely promote soft tissue survival. Since the book’s publication (2016), researchers have deepened their understanding of the processes that make it possible for soft tissues to survive. The recent work of an international team of collaborators headed by researchers from Yale University provides an example of this growing insight.3

These researchers discovered that the deposition environment during the fossilization process plays a significant role in soft tissue preservation, and they have identified the chemical reactions that contribute to this preservation. The team examined 24 specimens of biomineralized vertebrate tissues ranging in age from modern to the Late Jurassic (approximately 163–145 million years ago) time frame. These specimens were taken from both chemically oxidative and reductive environments.

After demineralizing the samples, the researchers discovered that all modern specimens yielded soft tissues. Among the fossils, however, demineralization yielded soft tissues only for specimens formed under oxidative conditions. Fossils formed under reductive conditions failed to yield any soft tissue material whatsoever. The soft tissues from the oxidative settings (which included extracellular matrices, cell remnants, blood vessel remnants, and nerve materials) were stained brown. Researchers noted that the brown color of the soft tissue materials increased in intensity as a function of the fossil’s age, with older specimens displaying greater browning than younger specimens.

The team was able to reproduce this brown color in soft tissues taken from modern-day specimens by heating the samples and exposing them to air. This process converted the soft tissues from translucent white to brown in appearance.

Using Raman spectroscopy, the researchers detected spectral signatures for proteins and N-heterocycle pyridine rings in the soft tissue materials. They believe that the N-heterocycle pyridine rings arise from the formation of advanced glycoxidation end-products (AGEs) and advanced lipoxidation end-products (ALEs). AGEs and ALEs are the by-products of the reactions that take place between proteins and sugars (AGEs) and proteins and lipids or fats (ALEs). (As an aside, AGEs and ALEs form when foods are cooked, and they occur at high levels when food is burnt, giving overly cooked foods their brownish color.) The researchers noted that spectral features for N-heterocycle pyridine rings become more prominent for soft tissues isolated from older fossil specimens, with the spectral features for the proteins becoming less pronounced.

AGEs and ALEs are heavily cross-linked compounds. This chemical property makes them extremely difficult to break down once they form. In other words, the formation of AGEs and ALEs in soft tissue remnants delays their decomposition long enough for mineral entombment to take place.

Iron from the environment or released from red blood cells promotes the formation of AGEs and ALEs. So do alkaline conditions.

In addition to stabilizing soft tissues from degradation because of the cross-links, AGEs and ALEs protect adjacent proteins from breakdown because of their hydrophobic (water repellent) nature. Water promotes soft tissue breakdown through a chemical process called hydrolysis. But because AGEs and ALEs are hydrophobic, they inhibit the hydrolytic reactions that would otherwise break down proteins that escape glycoxidation and lipoxidation reactions.

Finally, AGEs and ALEs are also resistant to microbial attack, further adding to the stability of the soft tissue materials. In other words, soft tissue materials recovered from fossil specimens are not the original, intact material, because they have undergone extensive chemical alteration. As it turns out, this alteration stabilized the soft tissue remnants long enough for mineral entombment to occur.

In short, this research team has made significant strides toward understanding the process by which soft tissue materials become preserved in fossil remains. The recovery of soft tissue materials from the ancient fossil remains makes perfect sense within an old-earth framework. These insights also undermine what many people believe to be one of the most compelling scientific arguments for a young earth.

Why Does It Matter?

In my experience, many skeptics and seekers alike reject Christian truth claims because of the misperception that Genesis 1 teaches that the earth is only 6,000 years old. This misperception becomes reinforced by vocal (and well-meaning) YECs who not only claim the only valid interpretation of Genesis 1 is the calendar-day view, but also maintain that ample scientific evidence—such as the recovery of soft tissue remnants in fossils—exists for a young earth.

Yet, as the latest work headed by scientists from Yale University demonstrates, soft tissue remnants associated with fossils find a ready explanation from an old-earth standpoint. This work has been a gift to science, advancing our understanding of a sophisticated preservation process.

Unfortunately, for YECs the fossil-associated soft tissues have turned out to be little more than a bad white elephant gift.

Endnotes
  1. Mary H. Schweitzer et al., “Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex,” Science 307 (March 25, 2005): 1952–55, doi:10.1126/science.1108397.
  2. John D. Morris, “Dinosaur Soft Parts,” Acts & Facts (June 1, 2005), icr.org/article/2032/.
  3. Jasmina Wiemann et al., “Fossilization Transforms Vertebrate Hard Tissue Proteins into N-Heterocyclic Polymers,” Nature Communications 9 (November 9, 2018): 4741, doi:10.1038/s41467-018-07013-3.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/19/soft-tissue-preservation-mechanism-stabilizes-the-case-for-earth-s-antiquity