Origins of Monogamy Cause Evolutionary Paradigm Breakup

BY FAZALE RANA – MARCH 20, 2019

Gregg Allman fronted the Allman Brothers Band for over 40 years until his death in 2017 at the age of 69. Writer Mark Binelli described Allman’s voice as “a beautifully scarred blues howl, old beyond its years.”1

A rock legend who helped pioneer southern rock, Allman was as well known for his chaotic, dysfunctional personal life as for his accomplishments as a musician. Allman struggled with drug abuse and addiction. He was also married six times, with each marriage ending in divorce and, at times, in a public spectacle.

In a 2009 interview with Binelli for Rolling Stone, Allman reflected on his failed marriages: “To tell you the truth, it’s my sixth marriage—I’m starting to think it’s me.”2

Allman isn’t the only one to have trouble with marriage. As it turns out, so do evolutionary biologists—but for different reasons than Gregg Allman.

To be more exact, evolutionary biologists have made an unexpected discovery about the evolutionary origin of monogamy (a single mate for at least a season) in animals—an insight that raises questions about the evolutionary explanation. Based on recent work headed by a large research team of investigators from the University of Texas (UT), Austin, it looks like monogamy arose independently, multiple times, in animals. And these origin events were driven, in each instance, by the same genetic changes.3

In my view, this remarkable example of evolutionary convergence highlights one of the many limitations of evolutionary theory. It also contributes to my skepticism (and that of other intelligent design proponents/creationists) about the central claim of the evolutionary paradigm; namely, that the origin, design, history, and diversity of life can be fully explained by evolutionary mechanisms.

At the same time, the independent origins of monogamy—driven by the same genetic changes—(as well as other examples of convergence) find a ready explanation within a creation model framework.

Historical Contingency

To appreciate why I believe this discovery is problematic for the evolutionary paradigm, it is necessary to consider the nature of evolutionary mechanisms. According to the evolutionary biologist Stephen Jay Gould (1941–2002), evolutionary transformations occur in a historically contingent manner.4 This means that the evolutionary process consists of an extended sequence of unpredictable, chance events. If any of these events were altered, it would send evolution down a different trajectory.

To help clarify this concept, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time. In other words, the evolutionary process should not repeat itself. And rarely should it arrive at the same end point.

Gould based the concept of historical contingency on his understanding of the mechanisms that drive evolutionary change. Since the time of Gould’s original description of historical contingency, several studies have affirmed his view. (For descriptions of some representative studies, see the articles listed in the Resources section.) In other words, researchers have experimentally shown that the evolutionary process is, indeed, historically contingent.

A Failed Prediction of the Evolutionary Paradigm

Given historical contingency, it seems unlikely that distinct evolutionary pathways would lead to identical or nearly identical outcomes. Yet, when viewed from an evolutionary standpoint, it appears as if repeated evolutionary outcomes are a common occurrence throughout life’s history. This phenomenon—referred to as convergence—is widespread. Evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, that identical evolutionary outcomes are a characteristic feature of the biological realm.5 Scientists see these repeated outcomes at the ecological, organismal, biochemical, and genetic levels. In fact, in my book The Cell’s Design, I describe 100 examples of convergence at the biochemical level.

In other words, biologists have made two contradictory observations within the evolutionary framework: (1) evolutionary processes are historically contingent and (2) evolutionary convergence is widespread. Since the publication of The Cell’s Design, many new examples of convergence have been unearthed, including the recent origin of monogamy discovery.

Convergent Origins of Monogamy

Working within the framework of the evolutionary paradigm, the UT research team sought to understand the evolutionary transition to monogamy. To gain this insight, they compared the gene expression profiles in the neural tissues of reproductive males for closely related pairs of species, with one species of each pair displaying monogamous behavior and the other nonmonogamous behavior.

The species pairs spanned the major vertebrate groups and included mice, voles, songbirds, frogs, and cichlids. From an evolutionary perspective, these organisms would have shared a common ancestor 450 million years ago.

Monogamous behavior is remarkably complex. It involves the formation of bonds between males and females, care of offspring by both parents, and increased territorial defense. Yet, the researchers discovered that in each instance of monogamy the gene expression profiles in the neural tissues of the monogamous species were identical to one another and distinct from the gene expression patterns of their nonmonogamous counterparts. Specifically, they observed the same differences in gene expression for the same 24 genes. Interestingly, genes that played a role in neural development, cell-cell signaling, synaptic activity, learning and memory, and cognitive function displayed enhanced gene expression. Genes involved in gene transcription and AMPA receptor regulation were down-regulated.

So, how do the researchers account for this spectacular example of convergence? They conclude that a “universal transcriptomic mechanism” exists for monogamy and speculate that the gene modules needed for monogamous behavior already existed in the last common ancestor of vertebrates. When needed, these modules were independently recruited at different times in evolutionary history to yield monogamous species.

Yet, given the number of genes involved and the specific changes in gene expression needed to produce the complex behavior associated with monogamous reproduction, it seems unlikely that this transformation would happen a single time, let alone multiple times, in the exact same way. In fact, Rebecca Young, the lead author of the journal article detailing the UT research team’s work, notes that “Most people wouldn’t expect that across 450 million years, transitions to such complex behaviors would happen the same way every time.”6

So, is there another way to explain convergence?

Convergence and the Case for a Creator

Prior to Darwin (1809–1882), biologists referred to shared biological features found in organisms that cluster into disparate biological groups as analogies. (In an evolutionary framework, analogies are referred to as evolutionary convergences.) They viewed analogous systems as designs conceived by the Creator that were then physically manifested in the biological realm and distributed among unrelated organisms.

In light of this historical precedent, I interpret convergent features (analogies) as the handiwork of a Divine mind. The repeated origins of biological features equate to repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

Thus, the idea of monogamous convergence seems to divorce itself from the evolutionary framework, but it makes for a solid marriage in a creation model framework.

Resources

Endnotes
  1. Mark Binelli, “Gregg Allman: The Lost Brother,” Rolling Stone, no. 1082/1083 (July 9–23, 2009), https://www.rollingstone.com/music/music-features/gregg-allman-the-lost-brother-108623/.
  2. Binelli, “Gregg Allman: The Lost Brother.”
  3. Rebecca L. Young et al., “Conserved Transcriptomic Profiles Underpin Monogamy across Vertebrates,” Proceedings of the National Academy of Sciences, USA 116, no. 4 (January 22, 2019): 1331–36, doi:10.1073/pnas.1813775116.
  4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
  5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
  6. University of Texas at Austin, “Evolution Used Same Genetic Formula to Turn Animals Monogamous,” ScienceDaily (January 7, 2019), www.sciencedaily.com/releases/2019/01/1901071507.htm.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/20/origins-of-monogamy-cause-evolutionary-paradigm-breakup

Biochemical Synonyms Restate the Case for a Creator

BY FAZALE RANA – MARCH 13, 2019

Sometimes I just can’t help myself. I know it’s clickbait but I click on the link anyway.

A few days ago, as a result of momentary weakness, I found myself reading an article from the ScoopWhoop website, “16 Things Most of Us Think Are the Same but Actually Aren’t.”

OK. OK. Now that you’ve seen the title, you want to click on the link, too.

To save you from wasting five minutes of your life, here is the ScoopWhoop list:

  • Weather and Climate
  • Turtle and Tortoise
  • Jam and Jelly
  • Eraser and Rubber
  • Great Britain and the UK
  • Pill and Tablet
  • Shrimp and Prawn
  • Butter and Margarine
  • Orange and Tangerine
  • Biscuits and Cookies
  • Cupcakes and Muffins
  • Mushrooms and Toadstools
  • Tofu and Paneer
  • Rabbits and Hares
  • Alligators and Crocodiles
  • Rats and Mice

And there you have it. Not a very impressive list, really.

If I were putting together a biochemist’s version of this list, I would start with synonymous mutations. Even though many life scientists think they are the same, studies indicate that they “actually aren’t.”

If you have no idea what I am talking about or what this insight has to do with the creation/evolution debate, let me explain by starting with some background information, beginning with the central dogma of molecular biology and the genetic code.

Central Dogma of Molecular Biology

According to this tenet of molecular biology, the information stored in DNA is functionally expressed through the activities of proteins. When it is time for the cell’s machinery to produce a particular protein, it copies the appropriate information from the DNA molecule through a process called transcription and produces a molecule called messenger RNA (mRNA). Once assembled, mRNA migrates to the ribosome, where it directs the synthesis of proteins through a process known as translation.


Figure 1: The central dogma of molecular biology. Image credit: Shutterstock

The Genetic Code

At first glance, there appears to be a mismatch between the stored information in DNA and the information expressed in proteins. A one-to-one relationship cannot exist between the four different nucleotides that make up DNA and the twenty different amino acids used to assemble proteins. The cell handles this mismatch by using a code made up of groupings of three nucleotides, called codons, to specify the twenty different amino acids.
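The triplet arrangement is easy to motivate with a little arithmetic: a single nucleotide offers only 4 possibilities and pairs offer only 16, while triplets offer 64, more than enough to cover twenty amino acids. A quick sketch in Python:

```python
# With 4 nucleotides (A, T, C, G), how long must a codon be
# to distinguish 20 amino acids?
n_nucleotides = 4
n_amino_acids = 20

for length in (1, 2, 3):
    combinations = n_nucleotides ** length  # 4, 16, 64
    verdict = "enough" if combinations >= n_amino_acids else "not enough"
    print(f"codon length {length}: {combinations} combinations ({verdict})")
```

Running it shows that triplets are the shortest codons that suffice, which is exactly the scheme the cell uses.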


Figure 2: Codons. Image credit: Wikipedia

The cell uses a set of rules to relate these nucleotide triplet sequences to the twenty amino acids that comprise proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets represent the fundamental units of the genetic code. The code uses each combination of nucleotide triplets to signify an amino acid. This code is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode twenty amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.1


Figure 3: The genetic code. Image credit: Shutterstock
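The redundancy just described can be tallied directly. Below is a minimal Python sketch of the standard genetic code (the amino acid string lists the code in TCAG codon order, with * marking stop codons); it is illustrative only, not taken from the article:

```python
from collections import Counter

# Standard genetic code, with the 64 codons enumerated in TCAG order.
bases = "TCAG"
codons = [a + b + c for a in bases for b in bases for c in bases]
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = dict(zip(codons, amino_acids))  # '*' marks stop codons

print(len(codon_table))  # 64 codons in total

# Count how many codons specify each amino acid.
redundancy = Counter(codon_table.values())
print(redundancy["L"])  # leucine: 6 codons
print(redundancy["M"])  # methionine: 1 codon (ATG)
print(redundancy["W"])  # tryptophan: 1 codon (TGG)
```

Leucine, serine, and arginine each get six codons, while methionine and tryptophan get only one apiece.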

A little more background information about mutations will help fill out the picture.

Mutations

A mutation refers to any change that takes place in the DNA nucleotide sequence. DNA can experience several different types of mutations. Substitution mutations are one common type. When a substitution mutation occurs, one (or more) of the nucleotides in the DNA strand is replaced by another nucleotide. For example, an A may be replaced by a G, or a C may be replaced by a T. This substitution changes the codon. Interestingly, the genetic code is structured in such a way that when substitution mutations take place, the resulting codon often specifies the same amino acid (due to redundancy) or an amino acid that has similar chemical and physical properties to the amino acid originally encoded.

Synonymous and Nonsynonymous Mutations

When substitution mutations generate a new codon that specifies the same amino acid as initially encoded, it’s referred to as a synonymous mutation. However, when a substitution produces a codon that specifies a different amino acid, it’s called a nonsynonymous mutation.
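The distinction can be made mechanical: translate the codon before and after the substitution and compare. Here is a minimal Python sketch using the standard genetic code (the helper function name is my own, for illustration):

```python
# Standard genetic code, with codons enumerated in TCAG order.
bases = "TCAG"
codons = [a + b + c for a in bases for b in bases for c in bases]
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = dict(zip(codons, amino_acids))

def classify_substitution(codon, position, new_base):
    """Return 'synonymous' if the mutated codon encodes the same
    amino acid as the original, 'nonsynonymous' otherwise."""
    mutated = codon[:position] + new_base + codon[position + 1:]
    same = codon_table[mutated] == codon_table[codon]
    return "synonymous" if same else "nonsynonymous"

# CTT -> CTC: both encode leucine, so the change is silent.
print(classify_substitution("CTT", 2, "C"))  # synonymous
# CTT -> CAT: leucine becomes histidine.
print(classify_substitution("CTT", 1, "A"))  # nonsynonymous
```

Because of the code’s redundancy, many third-position substitutions come out synonymous, while first- and second-position substitutions usually do not.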

Nonsynonymous mutations can be deleterious if they affect a critical amino acid or if they significantly alter the chemical and physical profile along the protein chain. If the substituted amino acid possesses dramatically different physicochemical properties from the native amino acid, it may cause the protein to fold improperly. Improper folding impacts the protein’s structure, yielding a biomolecule with reduced or even lost function.

On the other hand, biochemists have long thought that synonymous mutations have no effect on protein structure and function because these types of mutations don’t change the amino acid sequences of proteins. Even though biochemists regard synonymous mutations as silent—having no functional consequences—evolutionary biologists have found uses for them, such as relying on patterns of synonymous mutations to establish evolutionary relationships.

Patterns of Synonymous Mutations and the Case for Biological Evolution

Evolutionary biologists consider shared genetic features found in organisms that naturally group together as compelling evidence for common descent. One feature of particular interest is the identical (or nearly identical) DNA sequence patterns found in genomes. According to this line of reasoning, the shared patterns arose as a result of a series of substitution mutations that occurred in the common ancestor’s genome. Presumably, as the varying evolutionary lineages diverged from the nexus point, they carried with them the altered sequences created by the primordial mutations.

Synonymous mutations play a significant role in this particular argument for common descent. Because synonymous mutations don’t alter the amino acid sequence of proteins, their effects are considered to be inconsequential. So, when the same (or nearly the same) patterns of synonymous mutations are observed in genomes of organisms that cluster together into the same group, most life scientists interpret them as compelling evidence of the organisms’ common evolutionary history.

It is conceivable that nonsynonymous mutations, which alter the protein amino acid sequences, may impart some type of benefit and, therefore, shared patterns of nonsynonymous changes could be understood as evidence for shared design. (See the last section of this article.) But this is not the case when it comes to synonymous mutations, which raises the question: Why would a Creator intentionally introduce new codons that code for the same amino acid into genes when these changes have no functional utility?

Apart from invoking a Creator, the shared patterns of synonymous mutations make perfect sense if genomes have been shaped by evolutionary processes and an evolutionary history. However, this argument for biological evolution (shared ancestry) and challenge to a creation model interpretation (shared design) hinges on the underlying assumption that synonymous mutations have no functional consequence.

But what if this assumption no longer holds?

Synonymous Mutations Are Not Interchangeable

Biochemists used to think that synonymous mutations had no impact whatsoever on protein structure and, hence, function, but this view is changing thanks to studies such as the one carried out by researchers at the University of Colorado, Boulder.2

These researchers discovered synonymous mutations that increase the translational efficiency of a gene (found in the genome of Salmonella enterica). This gene codes for an enzyme that plays a role in the biosynthetic pathway for the amino acid arginine. (This enzyme also plays a role in the biosynthesis of proline.) They believe that these mutations alter the three-dimensional structure of the DNA sequence near the beginning of the coding portion of the gene. They also think that the synonymous mutations improve the stability of the messenger RNA molecule. Both effects would lead to greater translational efficiency at the ribosome.

As radical (and unexpected) as this finding may seem, it follows on the heels of other recent discoveries that also recognize the functional importance of synonymous mutations.3 Generally speaking, biochemists have discovered that synonymous mutations influence not only the rate and efficiency of translation (as the scientists from the University of Colorado, Boulder learned) but also the folding of the proteins after they are produced at the ribosome.

Even though synonymous mutations leave the amino acid sequence of the protein unchanged, they can exert influence by altering the:

  • regulatory regions of the gene that influence the transcription rate
  • secondary and tertiary structure of messenger RNA that influences the rate of translation
  • stability of messenger RNA that influences the amount of protein produced
  • translation rate that influences the folding of the protein as it exits the ribosome

Biochemists are just beginning to come to terms with the significance of these discoveries, but it is already clear that synonymous mutations have biomedical consequences.4 They also impact models for molecular evolution. But for now, I want to focus on the impact these discoveries have on the creation/evolution debate.

Patterns of Synonymous Mutations and the Case for Creation

As noted, many people consider the most compelling evidence for common descent to be the shared genetic features displayed by organisms that naturally cluster together. But if life is the product of a Creator’s handiwork, the shared genetic features could be understood as shared designs deployed by a Creator. In fact, a historical precedent exists for the common design interpretation. Prior to Darwin, biologists viewed shared biological features as manifestations of archetypical designs that existed in the Creator’s mind.

But the common design interpretation requires that the shared features be functional. (Or, that they arise independently in a nonrandom manner.) For those who view life from the framework of the evolutionary paradigm, the shared patterns of synonymous mutations invalidate the common design explanation—because these mutations are considered to be functionally insignificant.

But in the face of mounting evidence for the functional importance of synonymous mutations, this objection to common design has begun to erode. Though many life scientists are quick to dismiss the common design interpretation of biology, advances in molecular biology continue to strengthen this explanation and, with it, the case for a Creator.

Resources

Endnotes
  1. As I discuss in The Cell’s Design, the rules of the genetic code and the nature of the redundancy appear to be designed to minimize errors in translating information from DNA into proteins that would occur due to substitution mutations. This optimization stands as evidence for the work of an intelligent Agent.
  2. JohnCarlo Kristofich et al., “Synonymous Mutations Make Dramatic Contributions to Fitness When Growth Is Limited by Weak-Link Enzyme,” PLoS Genetics 14, no. 8 (August 27, 2018): e1007615, doi:10.1371/journal.pgen.1007615.
  3. Here are a few representative studies that ascribe functional significance to synonymous mutations: Anton A. Komar, Thierry Lesnik, and Claude Reiss, “Synonymous Codon Substitutions Affect Ribosome Traffic and Protein Folding during in vitro Translation,” FEBS Letters 462, no. 3 (November 30, 1999): 387–91, doi:10.1016/S0014-5793(99)01566-5; Chung-Jung Tsai et al., “Synonymous Mutations and Ribosome Stalling Can Lead to Altered Folding Pathways and Distinct Minima,” Journal of Molecular Biology 383, no. 2 (November 7, 2008): 281–91, doi:10.1016/j.jmb.2008.08.012; Florian Buhr et al., “Synonymous Codons Direct Cotranslational Folding toward Different Protein Conformations,” Molecular Cell 61, no. 3 (February 4, 2016): 341–51, doi:10.1016/j.molcel.2016.01.008; Chien-Hung Yu et al., “Codon Usage Influences the Local Rate of Translation Elongation to Regulate Co-translational Protein Folding,” Molecular Cell 59, no. 5 (September 3, 2015): 744–55, doi:10.1016/j.molcel.2015.07.018.
  4. Zubin E. Sauna and Chava Kimchi-Sarfaty, “Understanding the Contribution of Synonymous Mutations to Human Disease,” Nature Reviews Genetics 12 (August 31, 2011): 683–91, doi:10.1038/nrg3051.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/13/biochemical-synonyms-restate-the-case-for-a-creator

Discovery of Intron Function Interrupts Evolutionary Paradigm

BY FAZALE RANA – MARCH 6, 2019

Nobody likes to be interrupted when they are talking. It feels disrespectful and can be frustrating. Interruptions derail the flow of a conversation.

The editors tell me that I need to interrupt this lead to provide a “tease” for what is to come. So, here goes: Interruptions happen in biochemical systems, too. Life scientists long thought that these interruptions disrupted the flow of biochemical information. But it turns out these interruptions serve an important function, offering a rejoinder to a common argument against intelligent design.

Now back to the lead.

Perhaps it is no surprise that some psychologists study interruptions1 with the hope of discovering answers to questions such as:

  • Why do people interrupt?
  • Who is most likely to interrupt?
  • Do we all perceive interruptions in the same way?

While there is still much to learn about the science of interruptions, psychologists have discovered that men interrupt more often than women. Ironically, men often view women who interrupt as ruder and less intelligent than men who interrupt during conversations.

Researchers have also found that a person’s cultural background influences the likelihood that he or she will interrupt during a discourse. Personality also plays a role. Some people are more sensitive to pauses in conversation and, therefore, find themselves interrupting more often than those who are comfortable with periods of silence.

Psychologists have learned that not all interruptions are the same. Some people interrupt because they want the “floor.” These people are called intrusive interrupters. Cooperative interrupters help move the conversation along by agreeing with the speaker and finishing the speaker’s thoughts.

Interruptions are not confined to conversations. They are a part of life, including the biochemical operations that take place inside the cell.

In fact, biochemists have discovered that the information harbored in genes, which contains the instructions to build proteins—the workhorse molecules of the cell—experiences interruptions in its coding sequences. These intrusive interruptions would disrupt the flow of information in the cell during protein synthesis if the interrupting sequences weren’t removed by the cell’s machinery.

Molecular biologists have long viewed these genetic “interruptions” (called introns) as serving no useful purpose for the cell, with introns comprising a portion of the junk DNA found in the genomes of eukaryotic organisms. But it turns out that introns—like cooperative interruptions during a conversation—serve a useful purpose, according to the recent work of two independent teams of molecular biologists.

Introns Are Abundant

Introns are noncoding DNA sequences that interrupt the coding regions (called exons) of a gene. Introns are pervasive in the genomes of eukaryotic organisms. For example, 90 percent of genes in mammals contain introns, with an average of 8 per gene.

After the information stored in a gene is copied into messenger RNA, the intron sequences are excised, and the exons spliced together by a protein-RNA complex known as a spliceosome.


Figure 1: Drawing of pre-mRNA to mRNA. Image credit: Wikipedia
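Conceptually, splicing amounts to deleting the intron spans from the pre-mRNA and joining the flanking exons end to end. Here is a minimal Python sketch; the toy sequence and coordinates are invented for illustration (the introns begin with GU and end with AG, mirroring the canonical splice-site signals):

```python
def splice(pre_mrna, introns):
    """Excise intron spans (0-based, end-exclusive coordinates) and
    concatenate the remaining exons, as the spliceosome does."""
    exons = []
    prev_end = 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[prev_end:start])
        prev_end = end
    exons.append(pre_mrna[prev_end:])
    return "".join(exons)

# Toy pre-mRNA: three exons separated by two introns.
exon1, exon2, exon3 = "AUGGCU", "CCAGAU", "UGGUAA"
intron1, intron2 = "GUAAGUUUUAG", "GUCCGUUUUAG"
pre_mrna = exon1 + intron1 + exon2 + intron2 + exon3

intron_spans = [(6, 17), (23, 34)]  # positions of the two introns
print(splice(pre_mrna, intron_spans))  # AUGGCUCCAGAUUGGUAA
```

The spliced product contains only the exon sequence, ready for translation; the excised introns are the fragments whose fate the two research teams investigated.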

Molecular biologists have long wondered why eukaryotic genes would be riddled with introns. Introns seemingly make the structure and expression of eukaryotic genes unnecessarily complicated. What possible purpose could introns serve? Researchers also thought that once the introns were spliced out of the messenger RNA sequences, they were discarded as genetic debris.

Introns Serve a Functional Purpose

But recent work by two independent research teams from Sherbrooke University in Quebec, Canada, and MIT, respectively, indicates that molecular biologists have been wrong about introns. They have learned that once spliced from messenger RNA, these fragments play a role in helping cells respond to stress.

Both research teams studied baker’s yeast. One advantage of using yeast as a model organism relates to the relatively small number of introns (295) in its genome.


Figure 2: A depiction of baker’s yeast. Image credit: Shutterstock

Taking advantage of the limited number of introns in baker’s yeast, the team from Sherbrooke University created hundreds of yeast strains—each one missing just one of its introns. When grown under normal conditions with a ready supply of available nutrients, the strains missing a single intron grew normally—suggesting that introns aren’t of much importance. But when the researchers grew the yeast cells under conditions of food scarcity, the yeast with the deleted introns frequently died.2

The MIT team observed something similar. They noticed that during the stationary phase of growth (when nutrients become depleted, slowing down growth), introns spliced from RNA accumulated in the growth medium. The researchers deleted the specific introns that they found in the growth medium from the baker’s yeast genome and discovered that the resulting yeast strains struggled to survive under nutrient-poor conditions.3

At this point, it isn’t clear how introns help cells respond to stress caused by a lack of nutrients, but researchers have some clues. The Sherbrooke University team thinks that the spliced-out introns play a role in repressing the production of proteins that help form ribosomes. These biochemical machines manufacture proteins. Because protein synthesis requires building-block materials and energy, protein production slows down in cells during periods when nutrients are scarce. Ratcheting down protein synthesis impedes cell growth but affords cells a better chance to survive a lack of nutrients. One way cells can achieve this objective is to stop making ribosomes.

The MIT team thinks that some spliced-out introns interact with spliceosomes, preventing them from splicing out other introns. When this disruption happens, it slows down protein synthesis.

Both research groups believe that in times when nutrients are abundant, the spliced-out introns are broken down by the cell’s machinery. But when nutrients are scarce, that condition triggers intron accumulation.

At this juncture, it isn’t clear if the two research teams have uncovered distinct mechanisms that work collaboratively to slow down protein production, or if they are observing facets of the same mechanism. Regardless, it is evident that introns display functional utility. It’s a surprising insight that has important ramifications for our understanding of the structure and function of genomes. This insight has potential biomedical utility and theological implications, as well.

Intron Function and the Case for Creation

Scientists who view biology through the lens of the evolutionary paradigm are quick to conclude that the genomes of organisms reflect the outworking of evolutionary history. Their perspective causes them to see the features of genomes, such as introns, as little more than the remnants of an unguided evolutionary process. Within this framework, there is no reason to think that any particular DNA sequence element, including introns, harbors function. In fact, many life scientists regard the “evolutionary vestiges” in the genome as junk DNA. This clearly has been the case for introns.

Yet, a growing body of data indicates that virtually every category of so-called junk DNA displays function. We can now add introns—cooperative interrupters—to the list. And based on the data on hand, we can make a strong case that most of the sequence elements in genomes possess functional utility.

Could it be that scientists really don’t understand the biology of genomes? Or maybe we have the wrong paradigm?

It seems to me that science is in the midst of a revolution in our understanding of genome structure and function. Instead of being a wasteland of evolutionary debris, most of the genome appears to be functional. And the architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

But what if the genome is viewed from a creation model framework?

The elegance and sophistication of genomes are features that are increasingly coming into scientific view. And this is precisely what I would expect if genomes were the product of a Mind—the handiwork of a Creator.

Now that is a discovery worth talking about.

Resources

Endnotes
  1. Teal Burrell, “The Science behind Interrupting: Gender, Nationality and Power, and the Roles They Play,” Post Magazine (March 14, 2018), https://www.scmp.com/magazines/post-magazine/long-reads/article/2137023/science-behind-interrupting-gender-nationality; Alex Shashkevich, “Why Do People Interrupt? It Depends on Whom You’re Talking To,” The Guardian (May 18, 2018), https://www.theguardian.com/lifeandstyle/2018/may/18/why-do-people-interrupt-it-depends-on-whom-youre-talking-to.
  2. Julie Parenteau et al., “Introns Are Mediators of Cell Response to Starvation,” Nature 565 (January 16, 2019): 612–17, doi:10.1038/s41586-018-0859-7.
  3. Jeffrey T. Morgan, Gerald R. Fink, and David P. Bartel, “Excised Linear Introns Regulate Growth in Yeast,” Nature 565 (January 16, 2019): 606–11, doi:10.1038/s41586-018-0828-1.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/06/discovery-of-intron-function-interrupts-evolutionary-paradigm

Does Animal Planning Undermine the Image of God?

BY FAZALE RANA – JANUARY 23, 2019

A few years ago, we had an all-white English Bulldog named Archie. He would lumber toward even complete strangers, eager to befriend them and earn their affections. And people happily obliged this playful pup.

Archie wasn’t just an adorable dog. He was also well trained. We taught him to ring a bell hanging from a sliding glass door in our kitchen so he could let us know when he wanted to go out. He rarely would ring the bell. Instead, he would just sit by the door and wait . . . unless the neighbor’s cat was in the backyard. Then, Archie would repeatedly bang on the bell with great urgency. He had to get the cat at all costs. Clearly, he understood the bell’s purpose. He just chose to use it for his own ends.

Anyone who has owned a cat or dog knows that these animals do remarkable things. Animals truly are intelligent creatures.

But there are some people who go so far as to argue that animal intelligence is much more like human intelligence than we might initially believe. They base this claim, in part, on a handful of high-profile studies that indicate that some animals such as great apes and ravens can problem-solve and even plan for the future—behaviors that make them like us in some important ways.

Great Apes Plan for the Future

In 2006, two anthropologists working in Germany conducted a set of experiments on bonobos and orangutans in captivity that seemingly demonstrated that these creatures can plan for the future. Specifically, the test subjects selected, transported, and saved tools for use 1 hour and 14 hours later, respectively.1

To begin the study, the researchers trained both bonobos and orangutans to use a tool to get a reward from an apparatus. In the first experiment, the researchers blocked access to the apparatus. They laid out eight tools for the apes to select—two were suitable for the task and six were unsuitable. After selecting the tools, the apes were ushered into another room where they were kept for 1 hour. The apes were then allowed back into the room and granted access to the apparatus. To gain the reward, the apes had to select the correct tool and transport it to and from the waiting area. The anthropologists observed that the apes successfully obtained the reward in 70 percent of the trials by selecting and hanging on to the correct tool as they moved from room to room.

In the second experiment, the delay between tool selection and access to the apparatus was extended to 14 hours. This experiment focused on a single female individual. Instead of taking the test subject to the waiting room, the researchers took her to a sleeping room one floor above the waiting room before returning her to the room with the apparatus. She selected and held on to the tool for 14 hours while she moved from room to room in 11 of the 12 trials—each time successfully obtaining the reward.

On the basis of this study, the researchers concluded that great apes have the ability to plan for the future. They also argued that this ability emerged in the common ancestor of humans and great apes around 14 million years ago. So, even though we like to think of planning for the future as one of the “most formidable human cognitive achievements,”2 it doesn’t appear to be unique to human beings.

Ravens Plan for the Future

In 2017, two researchers from Lund University in Sweden demonstrated that ravens are capable of flexible planning just like the great apes.3 These cognitive scientists conducted a series of experiments with ravens, demonstrating that the large black birds can plan for future events and exert self-control for up to 17 hours prior to using a tool or bartering with humans for a reward. (Self-control is crucial for successfully planning for the future.)

The researchers taught ravens to use a tool to gain a reward from an apparatus. As part of the training phase, the test subjects also learned that other objects wouldn’t work on the apparatus.

In the first experiment, the ravens were exposed to the apparatus without access to tools. As such, they couldn’t gain the reward. Then the researchers removed the apparatus. One hour later, the ravens were taken to a different location and offered tools. Then, the researchers presented them with the apparatus 15 minutes later. On average, the raven test subjects selected and used tools to gain the reward in approximately 80 percent of the trials.

In the next experiment, the ravens were trained to barter by exchanging a token for a food reward. After the training, the ravens were taken to a different location and presented with a tray containing the token and three distractor objects by a researcher who had no history of bartering with the ravens. As with the results of the tool selection experiment, the ravens selected and used the token to successfully barter for food in approximately 80 percent of the trials.

When the scientists modified the experimental design to increase the time delay from 15 minutes to 17 hours between tool or token selection and access to the reward, the ravens successfully completed the task in nearly 90 percent of the trials.

Next, the researchers wanted to determine if the ravens could exercise self-control as part of their planning for the future. First, they presented the ravens with trays that contained a small food reward. Of course, all of the ravens took the reward. Next, the researchers offered the ravens trays that held the food reward along with either tokens or tools and distractor items. By selecting the token or the tools, the ravens guaranteed themselves a larger food reward in the future. The researchers observed that the ravens selected the tool in 75 percent of the trials and the token in about 70 percent, instead of taking the small morsel of food. After selecting the tool or token, the ravens were given the opportunity to receive the reward about 15 minutes later.

The researchers concluded that, like the great apes, ravens can plan for the future. Moreover, these researchers argue that this insight opens up greater possibilities for animal cognition because, from an evolutionary perspective, ravens are regarded as avian dinosaurs. And mammals (including the great apes) are thought to have shared an evolutionary ancestor with dinosaurs 320 million years ago.

Are Humans Exceptional?

In light of these studies (and others like them), it becomes difficult to maintain that human beings are exceptional. Self-control and the ability to flexibly plan for future events are considered by many to be cornerstones of human cognition. Planning for the future requires a mental representation of temporally distant events, the ability to set aside current sensory inputs in favor of unobservable future ones, and an understanding of which current actions will achieve a future goal.

For many Christians, such as me, the loss of human exceptionalism is concerning because if this idea is untenable, so, too, is the biblical view of human nature. According to Scripture, human beings stand apart from all other creatures because we bear God’s image. And, because every human being possesses the image of God, every human being has intrinsic worth and value. But if, in essence, human beings are no different from animals, it is challenging to maintain that we are the crown of creation, as Scripture teaches.

Yet recent work by biologist Johan Lind from Stockholm University (Sweden) indicates that the results of these two studies and others like them may be misleading. In effect, when properly interpreted, these studies pose no threat to human exceptionalism. According to Lind, animals can produce behavior that resembles flexible planning through a much simpler process: associative learning.4 If so, this insight preserves the case for human exceptionalism and the image of God, because it means that only humans engage in genuine flexible planning for the future through higher-order cognitive processes.

Associative Learning and Planning for the Future

Lind points out that researchers working in artificial intelligence (AI) have long known that associative learning can produce complex behaviors in AI systems that give the appearance of having the capacity for planning. (Associative learning is the process that animals [and AI systems] use to establish an association between two stimuli or events, usually by the use of punishments or rewards.)

Figure 1: An illustration of associative learning in dogs. Image credit: Shutterstock

Lind wonders why researchers studying animal cognition have ignored this work in AI. Applying insights from AI systems, Lind developed mathematical models based on associative learning and used them to simulate the results of the studies on the great apes and ravens. He discovered that associative learning produced the same behaviors that the two research teams observed in the great apes and ravens. In other words, planning-like behavior can emerge through associative learning alone. The same kind of processes that give AI systems the capacity to beat humans at chess can account for the planning-like behavior of animals.

The results of Lind’s simulations mean that it is most likely that animals “plan” for the future in ways that are entirely different from humans. In effect, the planning-like behavior of animals is an outworking of associative learning. On the other hand, humans uniquely engage in bona fide flexible planning through advanced cognitive processes such as mental time travel, among others.
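
To make the idea concrete, here is a minimal sketch of how a pure value-update rule can produce planning-like choices. This is not Lind's actual model; the items, rewards, and learning rate are invented for illustration only.

```python
import random

# A minimal associative learner: each item accrues a learned value v[item]
# from the rewards that eventually follow choosing it. No representation of
# the future is involved -- only value updates after outcomes. Items,
# rewards, and the learning rate are illustrative assumptions.

ALPHA = 0.2          # learning rate (assumed)
items = ["tool", "distractor", "small_food"]
v = {item: 0.0 for item in items}

def choose(v, epsilon=0.1):
    """Pick the highest-valued item, exploring occasionally."""
    if random.random() < epsilon:
        return random.choice(items)
    return max(items, key=lambda i: v[i])

random.seed(0)
for trial in range(500):
    pick = choose(v)
    # Outcomes: a small immediate morsel, or a larger reward delivered
    # later only if the tool was kept. The delay plays no role in the
    # update rule; the learned association alone carries the behavior.
    reward = {"tool": 1.0, "distractor": 0.0, "small_food": 0.3}[pick]
    v[pick] += ALPHA * (reward - v[pick])

# After training, the learner reliably forgoes the immediate morsel for
# the tool, mimicking "self-control" and "planning" without either.
print(max(items, key=lambda i: v[i]))  # → tool
```

The point of the sketch is simply that value updates driven by past rewards can make an agent pass up a small immediate payoff for a tool that pays off later, with no mental model of the future at all.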

Humans Are Exceptional

Even though the idea of human exceptionalism is continually under assault, it remains intact, as the latest work by Johan Lind illustrates. When the entire body of evidence is carefully weighed, there really is only one reasonable conclusion: Human beings uniquely possess advanced cognitive abilities that make possible our capacity for symbolism, open-ended generative capacity, theory of mind, and complex social interactions—scientific descriptors of the image of God.

Endnotes
  1. Nicholas J. Mulcahy and Josep Call, “Apes Save Tools for Future Use,” Science 312 (May 19, 2006): 1038–40, doi:10.1126/science.1125456.
  2. Mulcahy and Call, “Apes Save Tools for Future Use.”
  3. Can Kabadayi and Mathias Osvath, “Ravens Parallel Great Apes in Flexible Planning for Tool-Use and Bartering,” Science 357 (July 14, 2017): 202–4, doi:10.1126/science.aam8138.
  4. Johan Lind, “What Can Associative Learning Do for Planning?” Royal Society Open Science 5 (November 28, 2018): 180778, doi:10.1098/rsos.180778.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/01/23/does-animal-planning-undermine-the-image-of-god

Prebiotic Chemistry and the Hand of God

BY FAZALE RANA – JANUARY 16, 2019

“Many of the experiments designed to explain one or other step in the origin of life are either of tenuous relevance to any believable prebiotic setting or involve an experimental rig in which the hand of the researcher becomes for all intents and purposes the hand of God.”

Simon Conway Morris, Life’s Solution

If you could time travel, would you? Would you travel to the past or the future?

If asked this question, I bet many origin-of-life researchers would want to travel to the time in Earth’s history when life originated. Given the many scientifically impenetrable mysteries surrounding life’s genesis, I am certain many of the scientists working on these problems would love to see firsthand how life got its start.

It is true, origin-of-life researchers have some access to the origin-of-life process through the fossil and geochemical records of the oldest rock formations on Earth—yet this evidence only affords them a glimpse through the glass, dimly.

Because of these limitations, origin-of-life researchers have to carry out most of their work in laboratory settings, where they try to replicate the myriad steps they think contributed to the origin-of-life process. Pioneered by the late Stanley Miller in the early 1950s, this approach—dubbed prebiotic chemistry—has become a scientific subdiscipline in its own right.

Figure 1: Chemist Stanley Miller, circa 1980. Image credit: Wikipedia

Prebiotic Chemistry

In effect, the goals of prebiotic chemistry are threefold.

  • Proof of principle. The objective of these types of experiments is to determine—in principle—if a chemical or physical process that could potentially contribute to one or more steps in the origin-of-life pathway even exists.
  • Mechanism studies. Once processes have been identified that could contribute to the emergence of life, researchers study them in detail to get at the mechanisms undergirding these physicochemical transformations.
  • Geochemical relevance. Perhaps the most important goal of prebiotic studies is to establish the geochemical relevance of the physicochemical processes believed to have played a role in life’s start. In other words, how well do the chemical and physical processes identified and studied in the laboratory translate to early Earth’s conditions?

Without question, over the last 6 to 7 decades, origin-of-life researchers have been wildly successful with respect to the first two objectives. It is safe to say that origin-of-life investigators have demonstrated that—in principle—the chemical and physical processes needed to generate life through chemical evolutionary pathways exist.

But when it comes to the third objective, origin-of-life researchers have experienced frustration—and, arguably, failure.

Researcher Intervention and Prebiotic Chemistry

In an ideal world, humans would not intervene at all in any prebiotic study. But this ideal isn’t possible. Researchers involve themselves in the experimental design out of necessity, but also to ensure that the results of the study are reproducible and interpretable. If researchers don’t set up the experimental apparatus, adjust the starting conditions, add the appropriate reactants, and analyze the product, then by definition the experiment would never happen. Utilizing carefully controlled conditions and chemically pure reagents is necessary for reproducibility and to make sense of the results. In fact, this level of control is essential for proof-of-principle and mechanistic prebiotic studies—and perfectly acceptable.

However, when it comes to prebiotic chemistry’s third goal, geochemical relevance, the highly controlled conditions of the laboratory become a liability. Here researcher intervention becomes potentially unwarranted. It goes without saying that the conditions of early Earth were uncontrolled and chemically and physically complex. Chemically pristine and physically controlled conditions didn’t exist. And, of course, origin-of-life researchers weren’t present to oversee the processes and guide them to their desired end. Yet, it is rare for prebiotic simulation studies to fully take the actual conditions of early Earth into account in the experimental design. It is rarer still for origin-of-life investigators to acknowledge this limitation.

Figure 2: Laboratory technician. Image credit: Shutterstock

This complication means that many prebiotic studies designed to simulate processes on early Earth seldom accomplish anything of the sort due to excessive researcher intervention. Yet, it isn’t always clear when examining an experimental design if researcher involvement is legitimate or unwarranted.

As I point out in my book Creating Life in the Lab (Baker, 2011), one main reason for the lack of progress relates to the researcher’s role in the experimental design—a role not often recognized when experimental results are reported. Origin-of-life investigator Clemens Richert from the University of Stuttgart in Germany now acknowledges this very concern in a recent comment piece published by Nature Communications.1

As Richert points out, the role of researcher intervention and a clear assessment of geochemical relevance is rarely acknowledged or properly explored in prebiotic simulation studies. To remedy this problem, Richert calls for origin-of-life investigators to do three things when they report the results of prebiotic studies.

  • State explicitly the number of instances in which researchers engaged in manual intervention.
  • Describe precisely the prebiotic scenario a particular prebiotic simulation study seeks to model.
  • Reduce the number of steps involving manual intervention in whatever way possible.

Still, as Richert points out, it is not possible to provide a quantitative measure (a score) of geochemical relevance. Hence, there will always be legitimate disagreement about the geochemical relevance of any prebiotic experiment.

Yet, Richert’s commentary represents an important first step toward encouraging more realistic prebiotic simulation studies and a more cautious approach to interpreting the results of these studies. Hopefully, it will also lead to a more circumspect assessment of the importance of these types of studies in accounting for the various steps of the origin-of-life process.

Researcher Intervention and the Hand of God

One concern not addressed by Richert in his commentary piece is the fastidiousness of many of the physicochemical transformations origin-of-life researchers deem central to chemical evolution. As I discuss in Creating Life in the Lab, mechanistic studies indicate that these processes are often dependent upon exacting conditions in the laboratory. To put it another way, these processes only take place—even under the most ideal laboratory conditions—because of human intervention. As a corollary, these processes would be unproductive on early Earth. They often require chemically pristine conditions, unrealistically high concentrations of reactants, carefully controlled order of additions, carefully regulated temperature, pH, salinity levels, etc.

As Richert states, “It’s not easy to see what replaced the flasks, pipettes, and stir bars of a chemistry lab during prebiotic evolution, let alone the hands of the chemist who performed the manipulations. (And yes, most of us are not comfortable with the idea of divine intervention.)”2

Sadly, since I made the point about researcher intervention nearly a decade ago, it has often been ignored, dismissed, and even ridiculed by many in the scientific community—simply because I have the temerity to think that a Creator brought life into existence.

Even though Richert and his many colleagues in the origin-of-life research community do whatever they can to eschew a Creator’s role in the origin-of-life, could it be that abiogenesis (life from nonlife) required the hand of God—divine intervention?

I would argue that this conclusion follows from nearly seven decades of work in prebiotic chemistry and the consistent demonstration of the central role that origin-of-life researchers play in the success of prebiotic simulation studies. It is becoming increasingly evident for whoever will “see” that the hand of the researcher serves as the analog for the hand of God.

Endnotes
  1. Clemens Richert, “Prebiotic Chemistry and Human Intervention,” Nature Communications 9 (December 12, 2018): 5177, doi:10.1038/s41467-018-07219-5.
  2. Richert, “Prebiotic Chemistry and Human Intervention.”

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/01/16/prebiotic-chemistry-and-the-hand-of-god

Soft Tissue Preservation Mechanism Stabilizes the Case for Earth’s Antiquity

BY FAZALE RANA – DECEMBER 19, 2018

One of the highlights of the year at Reasons to Believe (well, it’s a highlight for some of us, anyway) is the white elephant gift exchange at our staff Christmas party. It is great fun to laugh together as a staff as we take turns unwrapping gifts—some cheesy, some useless, and others highly prized—and then “stealing” from one another those two or three gifts that everyone seems to want.

Over the years, I have learned a few lessons about choosing a white elephant gift to unwrap. Avoid large gifts. If the gift is a dud, large items are more difficult to find a use for than small ones. Also, more often than not, the most beautifully wrapped gifts turn out to be the biggest letdowns of all.

Giving and receiving gifts isn’t just limited to Christmas. People exchange all types of gifts with one another for all sorts of reasons.

Gifting is even part of the scientific enterprise—with the gifts taking on the form of scientific discoveries and advances. Many times, discoveries lead to new beneficial insights and technologies—gifts for humanity. Other times, these breakthroughs are gifts for scientists, signaling a new way to approach a scientific problem or opening up new vistas of investigation.

Soft Tissue Remnants Preserved in Fossils

One such gift was given to the scientific community over a decade ago by Mary Schweitzer, a paleontologist at North Carolina State University. Schweitzer and her team of collaborators recovered flexible, hollow, and transparent blood vessels from the remains of a T. rex specimen after removing the mineral component of the fossil.1 These blood vessels harbored microstructures with a cell-like morphology (form and structure) that she and her collaborators interpreted to be the remnants of red blood cells. This work showed conclusively that soft tissue materials could be preserved in fossil remains.

Though unexpected, the discovery was a landmark achievement for paleontology. Since Schweitzer’s discovery, paleontologists have unearthed the remnants of all sorts of soft tissue materials from fossils representing a wide range of organisms. (For a catalog of some of these finds, see my book Dinosaur Blood and the Age of the Earth.)

With access to soft tissue materials in fossils, paleontologists have a new window into the biology of Earth’s ancient life.

The Scientific Case for a Young Earth

Some Christians also saw Schweitzer’s discovery as a gift. But for them the value of this scientific present wasn’t the insight it provides about past life on Earth. Instead, they viewed this discovery (and others like it) as evidence that the earth must be no more than a few thousand years old. From a young-earth creationist (YEC) perspective, the survival of soft tissue materials in fossils indicates that these remains can’t be millions of years old. As a case in point, at the time Schweitzer reported her findings, John Morris, a young-earth proponent from the Institute for Creation Research, wrote:

Indeed, it is hard to imagine how soft tissue could have lasted even 5,000 years or so since the Flood of Noah’s day when creationists propose the dinosaur was buried. Such a thing could hardly happen today, for soft tissue decays rather quickly under any condition.2

In other words, from a YEC perspective, it is impossible for fossils to contain soft tissue remnants and be millions of years old. Soft tissues shouldn’t survive that long; they should readily degrade in a few thousand years. From a YEC view, soft tissue discoveries challenge the reliability of radiometric dating methods used to determine the fossils’ ages and, consequently, Earth’s antiquity. Furthermore, these breakthrough discoveries provide compelling scientific evidence for a young earth and support the idea that the fossil record results from a recent global (worldwide) flood.

Admittedly, on the surface the argument carries some weight. At first glance, it is hard to envision how soft tissue materials could survive for vast periods of time, given the wide range of mechanisms that drive the degradation of biological materials.

Preservation of Soft Tissues in Fossil Remains

Despite this first impression, over the last decade or so paleontologists have identified a number of mechanisms that can delay the degradation of soft tissues long enough for them to become entombed within a mineral shell. When this entombment happens, the soft tissue materials escape further degradation (for the most part). In other words, it is a race against time. Can mineral entombment take place before the soft tissue materials fully decompose? If so, then soft tissue remnants can survive for hundreds of millions of years. And any chemical or physical process that can delay the degradation will contribute to soft tissue survival by giving the entombment process time to take place.
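
This "race against time" can be sketched with a simple exponential-decay model. Every number below (half-lives, entombment time, detection threshold) is hypothetical, chosen only to illustrate the logic, not a measured value.

```python
# Back-of-the-envelope sketch: soft tissue survives if mineral entombment
# occurs before decay runs to completion. All constants are hypothetical.

def fraction_remaining(half_life_years, elapsed_years):
    """Exponential decay: fraction of the original tissue left after a time."""
    return 0.5 ** (elapsed_years / half_life_years)

ENTOMBMENT_YEARS = 100.0   # hypothetical time for a mineral shell to form
DETECTION_LIMIT = 0.01     # hypothetical: 1% of tissue must survive

for label, half_life in [("unprotected tissue", 5.0),
                         ("stabilized (cross-linked) tissue", 50.0)]:
    left = fraction_remaining(half_life, ENTOMBMENT_YEARS)
    survives = left >= DETECTION_LIMIT
    print(f"{label}: {left:.2%} left at entombment -> "
          f"{'preserved' if survives else 'lost'}")
```

Under these toy numbers, a tenfold slowdown in degradation is the difference between nothing surviving to entombment and a quarter of the tissue surviving, which is the sense in which any delaying mechanism "wins the race" for preservation.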

In Dinosaur Blood and the Age of the Earth, I describe several mechanisms that likely promote soft tissue survival. Since the book’s publication (2016), researchers have deepened their understanding of the processes that make it possible for soft tissues to survive. The recent work of an international team of collaborators headed by researchers from Yale University provides an example of this growing insight.3

These researchers discovered that the deposition environment during the fossilization process plays a significant role in soft tissue preservation, and they have identified the chemical reactions that contribute to this preservation. The team examined 24 specimens of biomineralized vertebrate tissues ranging in age from modern to the Late Jurassic (approximately 163–145 million years ago) time frame. These specimens were taken from both chemically oxidative and reductive environments.

After demineralizing the samples, the researchers discovered that all modern specimens yielded soft tissues. However, demineralization only yielded soft tissues for fossils formed under oxidative conditions. Fossils formed under reductive conditions failed to yield any soft tissue material, whatsoever. The soft tissues from the oxidative settings (which included extracellular matrices, cell remnants, blood vessel remnants, and nerve materials) were stained brown. Researchers noted that the brown color of the soft tissue materials increased in intensity as a function of the fossil’s age, with older specimens displaying greater browning than younger specimens.

The team was able to reproduce this brown color in soft tissues taken from modern-day specimens by heating the samples and exposing them to air. This process converted the soft tissues from translucent white to brown in appearance.

Using Raman spectroscopy, the researchers detected spectral signatures for proteins and N-heterocycle pyridine rings in the soft tissue materials. They believe that the N-heterocycle pyridine rings arise from the formation of advanced glycoxidation end-products (AGEs) and advanced lipoxidation end-products (ALEs). AGEs and ALEs are the by-products of the reactions that take place between proteins and sugars (AGEs) and proteins and lipids or fats (ALEs). (As an aside, AGEs and ALEs form when foods are cooked, and they occur at high levels when food is burnt, giving overly cooked foods their brownish color.) The researchers noted that spectral features for N-heterocycle pyridine rings become more prominent for soft tissues isolated from older fossil specimens, with the spectral features for the proteins becoming less pronounced.

AGEs and ALEs are heavily cross-linked compounds. This chemical property makes them extremely difficult to break down once they form. In other words, the formation of AGEs and ALEs in soft tissue remnants delays their decomposition long enough for mineral entombment to take place.

Iron from the environment or released from red blood cells promotes the formation of AGEs and ALEs. So do alkaline conditions.

In addition to stabilizing soft tissues from degradation because of the cross-links, AGEs and ALEs protect adjacent proteins from breakdown because of their hydrophobic (water repellent) nature. Water promotes soft tissue breakdown through a chemical process called hydrolysis. But because AGEs and ALEs are hydrophobic, they inhibit the hydrolytic reactions that would otherwise break down proteins that escape glycoxidation and lipoxidation reactions.

Finally, AGEs and ALEs are also resistant to microbial attack, further adding to the stability of the soft tissue materials. In other words, soft tissue materials recovered from fossil specimens are not the original, intact material, because they have undergone extensive chemical alteration. As it turns out, this alteration stabilized the soft tissue remnants long enough for mineral entombment to occur.

In short, this research team has made significant strides toward understanding the process by which soft tissue materials become preserved in fossil remains. The recovery of soft tissue materials from the ancient fossil remains makes perfect sense within an old-earth framework. These insights also undermine what many people believe to be one of the most compelling scientific arguments for a young earth.

Why Does It Matter?

In my experience, many skeptics and seekers alike reject Christian truth claims because of the misperception that Genesis 1 teaches that the earth is only 6,000 years old. This misperception becomes reinforced by vocal (and well-meaning) YECs who not only claim the only valid interpretation of Genesis 1 is the calendar-day view, but also maintain that ample scientific evidence—such as the recovery of soft tissue remnants in fossils—exists for a young earth.

Yet, as the latest work headed by scientists from Yale University demonstrates, soft tissue remnants associated with fossils find a ready explanation from an old-earth standpoint. This work is a gift to science, one that advances our understanding of a sophisticated preservation process.

Unfortunately, for YECs the fossil-associated soft tissues have turned out to be little more than a bad white elephant gift.

Endnotes
  1. Mary H. Schweitzer et al., “Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex,” Science 307 (March 25, 2005): 1952–55, doi:10.1126/science.1108397.
  2. John D. Morris, “Dinosaur Soft Parts,” Acts & Facts (June 1, 2005), icr.org/article/2032/.
  3. Jasmina Wiemann et al., “Fossilization Transforms Vertebrate Hard Tissue Proteins into N-Heterocyclic Polymers,” Nature Communications 9 (November 9, 2018): 4741, doi:10.1038/s41467-018-07013-3.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/19/soft-tissue-preservation-mechanism-stabilizes-the-case-for-earth-s-antiquity

Resurrected Proteins and the Case for Biological Evolution

BY FAZALE RANA – OCTOBER 14, 2013

Recently, a team of biochemists from Spain resurrected an ancient version of a protein, known as a thioredoxin. The successful restoration of this antiquated protein is the kind of advance that many scientists point to as evidence for the evolutionary paradigm.

Presumably, the protein they “brought back to life” would have been as it was 4 billion years ago.1 By studying the structure and function of the ancient thioredoxin, the research team was able to gain insight into the biology of some of the first life-forms on Earth. This is not the first time biochemists have pulled off this feat. Over the last several years, life scientists have announced the re-creation of a number of ancient proteins.2

The procedure for resurrecting ancient proteins makes use of evolutionary trees built from the amino acid sequences of extant proteins. From these trees, scientists infer the sequence of the ancestral protein. They then go into the lab and make that protein—and more often than not, the molecule adopts a stable structure with a discernible function. It is remarkable to think that scientists can use evolutionary trees to infer the probable sequence of an ancestral protein and then make a biomolecule that displays function. I truly understand why people would point to this type of work as evidence for biological evolution.

So, how does someone who advocates for intelligent design/creationism make sense of scientists’ ability to resurrect ancient proteins?

For the sake of brevity, I will provide a quick response to this question. For a more detailed discussion of the production of ancient thioredoxins and how I view resurrected proteins from a design/creation model perspective, listen to the August 12, 2013, episode of Science News Flash.

To appreciate a design/creation interpretation of this work, it is important to first understand how scientists determine the amino acid sequence for ancient proteins. Evolutionary biologists make an inference by comparing amino acid sequences of extant proteins. (In this most recent study, scientists compared around 200 thioredoxins from organisms representing all three domains of life.) Based on the patterns of similarities and differences in the sequences, they propose evolutionary relationships among the proteins.

The assumption is that the differences in the amino acid sequences of extant proteins stem from mutations to the genes encoding the proteins. Accordingly, these mutations would be passed on to subsequent generations. As the different lineages diverge, different types of mutations would accrue in the protein-coding genes in the distinct lineages. The branch points, or nodes, in the evolutionary tree, would then represent the ancestral protein shared by all proteins found in the lineages that split from that point. Researchers then infer the most likely amino acid sequence of the ancestral protein by working their way backwards from extant amino acid sequences of proteins which fall along the branches that stem from the node.
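To make the backward-inference step concrete, here is a deliberately simplified sketch in Python. Real studies use maximum-likelihood or Bayesian reconstruction over a full phylogenetic tree; this toy version, with invented sequence fragments, simply takes the majority amino acid at each alignment position:

```python
from collections import Counter

def consensus_ancestor(aligned_seqs):
    """Infer a crude ancestral sequence by majority rule at each
    alignment position. Actual ancestral-sequence reconstruction
    weights each position by the tree topology and a substitution
    model; this consensus only illustrates the idea of working
    backwards from extant sequences."""
    length = len(aligned_seqs[0])
    ancestor = []
    for i in range(length):
        column = [seq[i] for seq in aligned_seqs]
        residue, _ = Counter(column).most_common(1)[0]
        ancestor.append(residue)
    return "".join(ancestor)

# Hypothetical short fragments of extant thioredoxin-like sequences
extant = ["MVKQI", "MVKEI", "MAKQI", "MVRQI"]
print(consensus_ancestor(extant))  # -> MVKQI
```

Each extant fragment differs from the inferred ancestor at a single position, mirroring the way mutations accumulating along separate lineages are “undone” when working back toward a node.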

At this juncture, it is important to note that evolutionary biologists actively choose to interpret the similarities and differences in the amino acid sequences of extant proteins from an evolutionary perspective. I maintain that it is equally valid to interpret the sequence similarities and differences from a design/creation standpoint as well. With this approach, the archetype takes the place of the common ancestor. And the differences in the amino acid sequences represent variations around an archetypical design shared by all the proteins that are members of a particular family, such as the thioredoxins. In light of this concept, it is interesting that the researchers discovered the structure of ancient thioredoxins to be highly conserved through time, with only limited variation around a core design.

What about the process for determining the ancestral/archetypical sequence from an evolutionary tree? Doesn’t this procedure run contrary to a design explanation?

Not necessarily. Consider the variety of automobiles that exist. These vehicles are all variants of an archetypical design. Even though automobiles are the products of intelligent agents, they can be organized into an “evolutionary tree” based on design similarities and differences. In this case, the nodes in the tree represent the core design of the automobiles that are found on the branches that arise from the node.

By analogy, one could also regard the extant members of a protein family as the work of a Designer. Just like automobiles, the protein variants can be organized into a tree-like diagram. In this case the nodes correspond to the common design elements of the proteins found on the branches.
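The analogy can even be run as an algorithm. The sketch below (the vehicles and their feature sets are invented for illustration) clusters designed artifacts into a nested tree purely from design similarity, showing that tree-building does not by itself presuppose common descent:

```python
# Toy hierarchical clustering: organizing designed artifacts into a
# tree by feature similarity. The vehicles and their features are
# invented for illustration.
cars = {
    "roadster": {"two_door", "convertible", "rear_drive"},
    "coupe":    {"two_door", "hardtop", "rear_drive"},
    "sedan":    {"four_door", "hardtop", "front_drive"},
    "wagon":    {"four_door", "hardtop", "front_drive", "hatch"},
}

def distance(a, b):
    """Jaccard distance between two feature sets (0 = identical)."""
    return 1 - len(a & b) / len(a | b)

# Agglomerative clustering: repeatedly merge the two closest
# clusters until one tree remains. Each merge point is a node,
# here read as a shared core design rather than a common ancestor.
clusters = [(name, feats) for name, feats in cars.items()]
while len(clusters) > 1:
    i, j = min(
        ((a, b) for a in range(len(clusters))
         for b in range(a + 1, len(clusters))),
        key=lambda pair: distance(clusters[pair[0]][1], clusters[pair[1]][1]),
    )
    (label_i, feats_i), (label_j, feats_j) = clusters[i], clusters[j]
    merged = (f"({label_i}, {label_j})", feats_i | feats_j)
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

print(clusters[0][0])  # -> ((sedan, wagon), (roadster, coupe))
```

The same machinery applied to protein sequences yields a tree whose nodes can be read either as ancestors (evolutionary interpretation) or as shared design elements (design interpretation); the data structure itself is neutral between the two.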

In my view, when evolutionary biologists uncover what they believe to be the ancestral sequence of a protein family, they are really identifying the archetypical design shared by the members of that family.

Endnotes

  1. Alvaro Ingles-Prieto et al., “Conservation of Protein Structure over Four Billion Years,” Structure 21 (September 3, 2013): 1690–97.
  2. For example see Michael J. Harms and Joseph W. Thornton, “Analyzing Protein Structure and Function Using Ancestral Gene Reconstruction,” Current Opinion in Structural Biology 20 (June 2010): 360–66.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/todays-new-reason-to-believe/read/tnrtb/2013/10/15/resurrected-proteins-and-the-case-for-biological-evolution

Endosymbiont Hypothesis and the Ironic Case for a Creator

BY FAZALE RANA – DECEMBER 12, 2018

i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Wryly enough, it now does. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But, they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, this strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes for a transporter protein that can pump thiamin into the cell from the exterior environment, it can grow if an external supply of thiamin is available.) When incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes the source of the exogenous thiamin, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoebae. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, E. coli becomes established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cell.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from E. coli endosymbionts with the engineered capacity to transport ATP from their cellular interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doubling. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply deficient E. coli strains (unable to make thiamin, NAD, or serine) survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.

Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

Resources

Endnotes

  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018): doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/12/endosymbiont-hypothesis-and-the-ironic-case-for-a-creator

Did Neanderthals Start Fires?

BY FAZALE RANA – DECEMBER 5, 2018

It is one of the most iconic Christmas songs of all time.

Written by Bob Wells and Mel Torme in the summer of 1945, “The Christmas Song” (subtitled “Chestnuts Roasting on an Open Fire”) was crafted in less than an hour. As the story goes, Wells and Torme were trying to stay cool during the blistering summer heat by thinking cool thoughts and then jotting them down on paper. And, in the process, “The Christmas Song” was born.

Many of the song’s lyrics evoke images of winter, particularly around Christmastime. But none has come to exemplify the quiet peace of a Christmas evening more than the song’s first line, “Chestnuts roasting on an open fire . . . ”

Gathering around the fire to stay warm, to cook food, and to share in a community has been an integral part of the human experience throughout history—including human prehistory. Most certainly our ability to master fire played a role in our survival as a species and in our ability as human beings to occupy and thrive in some of the world’s coldest, harshest climates.

But fire use is not limited only to modern humans. There is strong evidence that Neanderthals made use of fire. But, did these creatures have control over fire in the same way we do? In other words, did Neanderthals master fire? Or, did they merely make opportunistic use of natural fires? These questions are hotly debated by anthropologists today and they contribute to a broader discussion about the cognitive capacity of Neanderthals. Part of that discussion includes whether these creatures were cognitively inferior to us or whether they were our intellectual equals.

In an attempt to answer these questions, a team of researchers from the Netherlands and France characterized the microwear patterns on bifacial (having opposite sides that have been worked on to form an edge) tools made from flint recovered from Neanderthal sites, and concluded that the wear patterns suggest that these hominins used pyrite to repeatedly strike the flint. This process generates sparks that can be used to start fires.1 To put it another way, the researchers concluded that Neanderthals had mastery over fire because they knew how to start fires.

Figure 1: Biface tools for cutting or scraping. Image credit: Shutterstock

However, a closer examination of the evidence along with results of other studies, including recent insight into the cause of Neanderthal extinction, raises significant doubts about this conclusion.

What Do the Microwear Patterns on Flint Say?

The investigators focused on the microwear patterns of flint bifaces recovered from Neanderthal sites as a marker for fire mastery because of the well-known practice among hunter-gatherers and pastoralists of striking flint with pyrite (an iron disulfide mineral) to generate sparks to start fires. Presumably, the first modern humans also used this technique to start fires.

Figure 2: Starting a fire with pyrite and flint. Image credit: Shutterstock

The research team reasoned that if Neanderthals started fires, they would use a similar tactic. Careful examination of the microwear patterns on the bifaces led the research team to conclude that these tools were repeatedly struck by hard materials, with the strikes all occurring in the same direction along the bifaces’ long axis.

The researchers then tried to experimentally recreate the microwear pattern in a laboratory setting. To do so, they struck biface replicas with a number of different types of materials, including pyrites, and concluded that the patterns produced by the pyrite strikes most closely matched the patterns on the bifaces recovered from Neanderthal sites. On this basis, the researchers claim that they have found evidence that Neanderthals deliberately started fires.

Did Neanderthals Master Fire?

While this conclusion is possible, at best this study provides circumstantial, not direct, evidence for Neanderthal mastery of fire. In fact, other evidence counts against this conclusion. For example, bifaces with the same type of microwear patterns have been found at other Neanderthal sites, locales that show no evidence of fire use. These bifaces would have had a range of usages, including butchery of the remains of dead animals. So, it is possible that these tools were never used to start fires—even at sites with evidence for fire usage.

Another challenge to the conclusion comes from the failure to detect any pyrite on the bifaces recovered from the Neanderthal sites. Flint recovered from modern human sites shows visible evidence of pyrite. And yet the research team failed to detect even trace amounts of pyrite on the Neanderthal bifaces during the course of their microanalysis.

This observation raises further doubt about whether the flint from the Neanderthal sites was used as a fire starter tool. Rather, it points to the possibility that Neanderthals struck the bifaces with materials other than pyrite for reasons not yet understood.

The conclusion that Neanderthals mastered fire also does not square with results from other studies. For example, a careful assessment of archaeological sites in southern France occupied by Neanderthals from about 100,000 to 40,000 years ago indicates that Neanderthals could not create fire. Instead, these hominins made opportunistic use of natural fire when it was available to them.2

These French sites do show clear evidence of Neanderthal fire use, but when researchers correlated the archaeological layers displaying evidence for fire use with the paleoclimate data, they found an unexpected pattern. Neanderthals used fire during warm climate conditions and failed to use fire during cold periods—the opposite of what would be predicted if Neanderthals had mastered fire.

Lightning strikes that would generate natural fires are much more likely to occur during warm periods. Instead of creating fire, Neanderthals most likely harnessed natural fire and cultivated it as long as they could before it extinguished.

Another study also raises questions about the ability of Neanderthals to start fires.3 This research indicates that cold climates triggered Neanderthal extinctions. By studying the chemical composition of stalagmites in two Romanian caves, an international research team concluded that there were two prolonged and extremely cold periods between 44,000 and 40,000 years ago. (The chemical composition of stalagmites varies with temperature.)

The researchers also noted that during these cold periods, the archaeological record for Neanderthals disappears. They interpret this disappearance to reflect a dramatic reduction in Neanderthal population numbers. Researchers speculate that when this population downturn took place during the first cold period, modern humans made their way into Europe. Being better suited for survival in the cold climate, modern human numbers increased. When the cold climate abated, Neanderthals were unable to recover their numbers because of the growing populations of modern humans in Europe. Presumably, after the second cold period, Neanderthal numbers dropped to the point that they couldn’t recover, and hence, became extinct.

But why would modern humans be more capable than Neanderthals of surviving under extremely cold conditions? It seems as if it should be the other way around. Neanderthals had a hyper-polar body design that made them ideally suited to withstand cold conditions. Neanderthal bodies were stout and compact, composed of barrel-shaped torsos and shorter limbs, which helped them retain body heat. Their noses were long and their sinus cavities extensive, which helped them warm the cold air they breathed before it reached their lungs. But, despite this advantage, Neanderthals died out and modern humans thrived.

Some anthropologists believe that the survival discrepancy could be due to dietary differences. Some data indicates that modern humans had a more varied diet than Neanderthals. Presumably, Neanderthals primarily consumed large herbivores—animals that disappeared when the climatic conditions turned cold, thereby threatening Neanderthal survival. On the other hand, modern humans were able to adjust to the cold conditions by shifting their diets.

But could there be a different explanation? Could it be that with their mastery of fire, modern humans were able to survive cold conditions? And did Neanderthals die out because they could not start fires?

Taken in its entirety, the data seems to indicate that Neanderthals lacked mastery of fire but could use it opportunistically. And, in a broader context, the data indicates that Neanderthals were cognitively inferior to humans.

What Difference Does It Make?

One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again. Now it’s time to light a fire in my fireplace and enjoy a few contemplative moments thinking about the real meaning of Christmas.

Resources

Endnotes

  1. A. C. Sorensen, E. Claud, and M. Soressi, “Neanderthal Fire-Making Technology Inferred from Microwear Analysis,” Scientific Reports 8 (July 19, 2018): 10065, doi:10.1038/s41598-018-28342-9.
  2. Dennis M. Sandgathe et al., “Timing of the Appearance of Habitual Fire Use,” Proceedings of the National Academy of Sciences, USA 108 (July 19, 2011), E298, doi:10.1073/pnas.1106759108; Paul Goldberg et al., “New Evidence on Neandertal Use of Fire: Examples from Roc de Marsal and Pech de l’Azé IV,” Quaternary International 247 (2012): 325–40, doi:10.1016/j.quaint.2010.11.015; Dennis M. Sandgathe et al., “On the Role of Fire in Neandertal Adaptations in Western Europe: Evidence from Pech de l’Azé IV and Roc de Marsal, France,” PaleoAnthropology (2011): 216–42, doi:10.4207/PA.2011.ART54.
  3. Michael Staubwasser et al., “Impact of Climate Change on the Transition of Neanderthals to Modern Humans in Europe,” Proceedings of the National Academy of Sciences, USA 115 (September 11, 2018): 9116–21, doi:10.1073/pnas.1808647115.

Spider Silk Inspires New Technology and the Case for a Creator

BY FAZALE RANA – NOVEMBER 28, 2018
Mark your calendars!

On December 14th (2018), Columbia Pictures—in collaboration with Sony Pictures Animation—will release a full-length animated feature: Spider-Man: Into the Spider-Verse. The story features Miles Morales, an Afro-Latino teenager, as Spider-Man.

Morales accidentally becomes transported from his universe to ours, where Peter Parker is Spider-Man. Parker meets Morales and teaches him how to be Spider-Man. Along the way, they encounter different versions of Spider-Man from alternate dimensions. All of them team up to save the multiverse and to find a way to return back to their own versions of reality.

What could be better than that?

In 1962, Spider-Man’s creators, Stan Lee and Steve Ditko, drew inspiration for their superhero from the amazing abilities of spiders. And today, engineers find similar inspiration, particularly when it comes to spider silk. The remarkable properties of spider silk are leading to the creation of new technologies.

Synthetic Spider Silk

Engineers are fascinated by spider silk because this material displays astonishingly high tensile strength and ductility (pliability), properties that allow it to absorb huge amounts of energy before breaking. Only one-sixth the density of steel, spider silk can be up to four times stronger, on a per weight basis.
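For a rough sense of the per-weight comparison, one can compute specific strength (tensile strength divided by density). The numbers below are ballpark literature values chosen for illustration, not figures taken from this article:

```python
# Ballpark material properties (assumed values for illustration)
steel_density = 7850.0   # kg/m^3
steel_strength = 1.5e9   # Pa, high-strength steel
silk_density = 1300.0    # kg/m^3, roughly one-sixth that of steel
silk_strength = 1.0e9    # Pa, dragline spider silk

# Specific strength = tensile strength / density
steel_specific = steel_strength / steel_density
silk_specific = silk_strength / silk_density

print(f"steel: {steel_specific:,.0f} Pa*m^3/kg")
print(f"silk:  {silk_specific:,.0f} Pa*m^3/kg")
print(f"silk/steel ratio: {silk_specific / steel_specific:.1f}x")  # -> 4.0x
```

Even though the silk’s absolute tensile strength here is below that of the steel, dividing by density recovers the roughly fourfold per-weight advantage cited above.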

By studying this remarkable substance, engineers hope that they can gain insight and inspiration to engineer next-generation materials. According to Northwestern University researcher Nathan C. Gianneschi, who is attempting to produce synthetic versions of spider silk, “One cannot overstate the potential impact on materials and engineering if we can synthetically replicate the natural process to produce artificial fibers at scale. Simply put, it would be transformative.”1

Gregory P. Holland of San Diego State University, one of Gianneschi’s collaborators, states, “The practical applications for materials like this are essentially limitless.”2 As a case in point, synthetic versions of spider silk could be used to make textiles for military personnel and first responders and to make construction materials such as cables. They would also have biomedical utility and could be used to produce environmentally friendly plastics.

The Quest to Create Synthetic Spider Silk

But things aren’t that simple. Even though life scientists and engineers understand the chemical structure of spider’s silk and how its structural features influence its mechanical properties, they have not been able to create synthetic versions of it with the same set of desired properties.


Figure 1: The Molecular Architecture of Spider Silk. Fibers of spider silk consist of proteins that contain crystalline regions separated by amorphous regions. The crystals form from regions of the protein chain that fold into structures called beta-sheets. These beta-sheets stack together to give the spider silk its tensile strength. The amorphous regions give the silk fibers ductility. Image credit: Chen-Pan Liao.

Researchers working to create synthetic spider silk speculate that the process by which the spider spins the silk may play a critical role in establishing the biomaterial’s tensile strength and ductility. Before it is extruded, silk exists in a precursor form in the silk gland. Researchers think that the key to generating synthetic spider silk with the same properties as naturally formed spider silk may be found by mimicking the structure of the silk proteins in precursor form.

Previous work suggests that the proteins that make up spider silk exist as simple micelles in the silk gland and that when spun from this form, fibers with greater-than-steel strength are formed. But researchers’ attempts to apply this insight in a laboratory setting failed to yield synthetic silk with the desired properties.

The Structure of Spider Silk Precursors

Hoping to help unravel this problem, a team of American collaborators led by Gianneschi and Holland recently provided a detailed characterization of the structure of the silk protein precursors in spider glands.3 They discovered that the silk proteins form micelles, but the micelles aren’t simple. Instead, they assemble into a complex structure comprised of a hierarchy of subdomains. Researchers also learned that when they sheared these nanoassemblies of precursor proteins, fibers formed. If they can replicate these hierarchical nanostructures in the lab, researchers believe they may be able to construct synthetic spider silk with the long-sought-after tensile strength and ductility.

Biomimetics and Bioinspiration

Attempts to find inspiration for new technology are not limited to spider silk. It has become rather commonplace for engineers to employ insights from arthropod biology (which includes spiders and insects) to solve engineering problems and to inspire the invention of new technologies—even technologies unlike anything found in nature. In fact, I discuss this practice in an essay I contributed to the book God and the World of Insects.

This activity falls under the domain of two relatively new and exciting areas of engineering known as biomimetics and bioinspiration. As the names imply, biomimetics involves direct mimicry of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise.

The Converse Watchmaker Argument for God’s Existence

The idea that biological designs can inspire engineering and technology advances is highly provocative. It highlights the elegant designs found throughout the living realm. In the case of spider silk, design elegance is not limited to the structure of spider silk but extends to its manufacturing process as well—one that still can’t be duplicated by engineers.

The elegance of these designs makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument. (For a detailed discussion see the essay I contributed to the book Building Bridges, entitled, “The Inspirational Design of DNA.”)

The argument can be stated like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models for inspiring the development of new technologies. Indeed, this scenario is what scientists observe in nature. Therefore, it becomes reasonable to think that biological designs are the work of a Creator.

Biomimetics and the Challenge to the Evolutionary Paradigm

From my perspective, the use of biological designs to guide engineering efforts seems fundamentally at odds with evolutionary theory. Generally speaking, evolutionary biologists view biological systems as the products of an unguided, historically contingent process that co-opts preexisting systems to cobble together new ones. Evolutionary mechanisms can optimize these systems, but even then they are, in essence, still kludges.

Given the unguided nature of evolutionary mechanisms, does it make sense for engineers to rely on biological systems to solve problems and inspire new technologies? Is it in alignment with evolutionary beliefs to build an entire subdiscipline of engineering upon mimicking biological designs? I would argue that these engineering subdisciplines do not fit with the evolutionary paradigm.

On the other hand, biomimetics and bioinspiration naturally flow out of a creation model approach to biology. Using designs in nature to inspire engineering only makes sense if these designs arose from an intelligent Mind, whether in this universe or in any of the dimensions of the Spider-Verse.

Resources

Endnotes

  1. Northwestern University, “Mystery of How Black Widow Spiders Create Steel-Strength Silk Webs further Unravelled,” Phys.org, Science X, October 22, 2018, https://phys.org/news/2018-10-mystery-black-widow-spiders-steel-strength.html.
  2. Northwestern University, “Mystery of How Black Widow Spiders Create.”
  3. Lucas R. Parent et al., “Hierarchical Spidroin Micellar Nanoparticles as the Fundamental Precursors of Spider Silks,” Proceedings of the National Academy of Sciences USA (October 2018), doi:10.1073/pnas.1810203115.