Why Would God Create a World Where Animals Eat Their Offspring?

BY FAZALE RANA – MAY 22, 2019

What a book a Devil’s chaplain might write on the clumsy, wasteful, blundering, low and horridly cruel works of nature!

–Charles Darwin, “Letter to J. D. Hooker,” Darwin Correspondence Project

You may never have heard of him, but he played an important role in ushering in the Darwinian revolution in biology. His name was Asa Gray.

Gray (1810–1888) was a botanist at Harvard University. He was among the first scientists in the US to adopt Darwin’s theory of evolution. Asa Gray was also a devout Christian.


Asa Gray in 1864. Image credit: John Adams Whipple, Wikipedia

Gray was convinced that Darwin’s theory of evolution was sound. He was also convinced that nature displayed unmistakable evidence for design. Accordingly, he concluded that God must have used evolution as the means to create, and in doing so, Gray may have been the first person to espouse theistic evolution.

In his book Darwiniana, Asa Gray presents a number of essays defending Darwin’s theory. Yet he also expresses his deep conviction that nature is filled with indicators of design. He attributed that design to a God-ordained, God-guided process, arguing that God is the source of all evolutionary change.


Gray and Darwin struck up a friendship and exchanged around 300 letters. In the midst of their correspondence, Gray asked Darwin if he thought it possible that God used evolution as the means to create. Darwin’s reply revealed that he wasn’t very impressed with this idea.

I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice. Not believing this, I see no necessity in the belief that the eye was expressly designed. On the other hand I cannot anyhow be contented to view this wonderful universe & especially the nature of man, & to conclude that everything is the result of brute force. I am inclined to look at everything as resulting from designed laws, with the details, whether good or bad, left to the working out of what we may call chance. Not that this notion at all satisfies me. I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope & believe what he can.1

Darwin could not embrace Gray’s theistic evolution because of the cruelty he saw in nature that seemingly causes untold pain and suffering in animals. Darwin—along with many skeptics today—couldn’t square a world characterized by that much suffering with the existence of a God who is all-powerful, all-knowing, and all-good.

Filial Cannibalism

The widespread occurrence of filial cannibalism (when animals eat their young or consume their eggs after laying them) and offspring abandonment (which leads to the offspring’s death) exemplifies such cruelty in animals. These behaviors seem like especially low and brutal features of nature.

Why would God create animals that eat their offspring and abandon their young?

Is Cruelty in Nature Really Evil?

But what if there are good reasons for God to allow pain and suffering in the animal kingdom? I have written about good scientific reasons to think that a purpose exists for animal pain and suffering (see “Scientists Uncover a Good Purpose for Long-Lasting Pain in Animals” by Fazale Rana).

And, what if animal death is a necessary feature of nature? Other studies indicate that animal death promotes biodiversity and ecosystem stability (see “Of Weevils and Wasps: God’s Good Purpose in Animal Death” by Maureen Moser, and “Animal Death Prevents Ecological Meltdown” by Fazale Rana).

There also appears to be a reason for filial cannibalism and offspring abandonment, at least based on a study by researchers from Oxford University (UK) and the University of Tennessee.2 These researchers demonstrated that filial cannibalism and offspring abandonment can function as forms of parental care.

What? How is that conclusion possible?

It turns out that when animals eat some of their offspring or abandon some of their young, the reduction promotes the survival of the remaining offspring. To arrive at this conclusion, the researchers performed mathematical modeling of a generic egg-laying species. They discovered that when animals sacrificed a few of their young, the culling led to more surviving offspring overall than when animals did not engage in filial cannibalism or egg abandonment.

These behaviors become important when animals lay too many eggs. In order to properly care for their eggs (protect, incubate, feed, and clean them), animals confine egg laying to a relatively small space. This practice leads to a high density of eggs. But high density has drawbacks: it makes the offspring more vulnerable to disease and to shortages of food and oxygen. Filial cannibalism reduces the density, ensuring a greater chance of survival for the eggs that remain. So, ironically, when egg density is too high for the environmental conditions, more offspring survive when the parents consume some, rather than none, of the eggs.

So, why lay so many eggs in the first place?

In general, the more eggs that are laid, the greater the number of surviving offspring—assuming there are unlimited resources and no threats of disease. But it is difficult for animals to know how many eggs to lay because the environment is unpredictable and constantly changing. A better way to ensure reproductive fitness is to lay more eggs and remove some of them if the environment can’t sustain the egg density.
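
This logic can be illustrated with a toy model (a sketch of my own in Python, not the researchers’ actual model). Assume each egg’s survival probability falls off exponentially as the clutch outgrows what the nest site can sustain. Under that assumption, a parent that culls an oversized clutch leaves more survivors than one that keeps every egg:

```python
import math

def expected_survivors(eggs_kept: int, capacity: float = 20.0) -> float:
    """Expected surviving offspring under a toy density-dependent model:
    per-egg survival falls off exponentially as the clutch exceeds what
    the nest site can sustain."""
    per_egg_survival = math.exp(-eggs_kept / capacity)
    return eggs_kept * per_egg_survival

eggs_laid = 50
best_clutch = max(range(1, eggs_laid + 1), key=expected_survivors)

print(f"Keep all {eggs_laid} eggs: {expected_survivors(eggs_laid):.1f} expected survivors")
print(f"Cull down to {best_clutch} eggs: {expected_survivors(best_clutch):.1f} expected survivors")
# Keeping all 50 eggs yields ~4 survivors; culling the clutch down to 20
# yields ~7 -- under these assumptions, partial culling raises fitness.
```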

So, it appears as if there is a good reason for God to create animals that eat their young. In fact, you might even argue that filial cannibalism leads to a world with less cruelty and suffering than a world where filial cannibalism doesn’t exist at all. This feature of nature is consistent with the idea of an all-powerful, all-knowing, and all-good God who has designed the creation for his good purposes.

Resources

Endnotes
  1. “To Asa Gray 22 May [1860],” Darwin Correspondence Project, University of Cambridge, accessed May 15, 2019, https://www.darwinproject.ac.uk/letter/DCP-LETT-2814.xml.
  2. Mackenzie E. Davenport, Michael B. Bonsall, and Hope Klug, “Unconventional Care: Offspring Abandonment and Filial Cannibalism Can Function as Forms of Parental Care,” Frontiers in Ecology and Evolution 7 (April 17, 2019): 113, doi:10.3389/fevo.2019.00113.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/05/22/why-would-god-create-a-world-where-animals-eat-their-offspring

Origins of Monogamy Cause Evolutionary Paradigm Breakup

BY FAZALE RANA – MARCH 20, 2019

Gregg Allman fronted the Allman Brothers Band for over 40 years until his death in 2017 at the age of 69. Writer Mark Binelli described Allman’s voice as “a beautifully scarred blues howl, old beyond its years.”1

A rock legend who helped pioneer southern rock, Allman was as well known for his chaotic, dysfunctional personal life as for his accomplishments as a musician. Allman struggled with drug abuse and addiction. He was also married six times, with each marriage ending in divorce and, at times, in a public spectacle.

In a 2009 interview with Binelli for Rolling Stone, Allman reflected on his failed marriages: “To tell you the truth, it’s my sixth marriage—I’m starting to think it’s me.”2

Allman isn’t the only one to have trouble with marriage. As it turns out, so do evolutionary biologists—but for different reasons than Gregg Allman.

To be more exact, evolutionary biologists have made an unexpected discovery about the evolutionary origin of monogamy (a single mate for at least a season) in animals—an insight that raises questions about the evolutionary explanation. Based on recent work headed by a large research team of investigators from the University of Texas (UT), Austin, it looks like monogamy arose independently, multiple times, in animals. And these origin events were driven, in each instance, by the same genetic changes.3

In my view, this remarkable example of evolutionary convergence highlights one of the many limitations of evolutionary theory. It also contributes to my skepticism (and that of other intelligent design proponents/creationists) about the central claim of the evolutionary paradigm; namely, that the origin, design, history, and diversity of life can be fully explained by evolutionary mechanisms.

At the same time, the independent origins of monogamy—driven by the same genetic changes—(as well as other examples of convergence) find a ready explanation within a creation model framework.

Historical Contingency

To appreciate why I believe this discovery is problematic for the evolutionary paradigm, it is necessary to consider the nature of evolutionary mechanisms. According to the evolutionary biologist Stephen Jay Gould (1941–2002), evolutionary transformations occur in a historically contingent manner.4 This means that the evolutionary process consists of an extended sequence of unpredictable, chance events. If any of these events were altered, it would send evolution down a different trajectory.

To help clarify this concept, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time. In other words, the evolutionary process should not repeat itself. And rarely should it arrive at the same end point.

Gould based the concept of historical contingency on his understanding of the mechanisms that drive evolutionary change. Since the time of Gould’s original description of historical contingency, several studies have affirmed his view. (For descriptions of some representative studies, see the articles listed in the Resources section.) In other words, researchers have experimentally shown that the evolutionary process is, indeed, historically contingent.
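
A toy simulation helps illustrate Gould’s metaphor (this is an illustrative model of my own construction, not drawn from the studies cited here). Ten “replays” begin at the identical genotype on the same rugged fitness landscape; the only difference between them is the chance choice among available beneficial mutations, yet the replays typically end on different fitness peaks:

```python
import itertools
import random

N = 12  # genome length (bit-string genotypes)

# One fixed, rugged fitness landscape shared by every replay:
# each genotype receives a random fitness value.
landscape_rng = random.Random(42)
fitness = {g: landscape_rng.random() for g in itertools.product((0, 1), repeat=N)}

def replay(seed: int) -> tuple:
    """Replay life's tape once: a greedy adaptive walk that starts from
    the same genotype each time but picks among beneficial mutations at random."""
    rng = random.Random(seed)
    genotype = (0,) * N  # identical starting point for every replay
    while True:
        neighbors = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
                     for i in range(N)]
        better = [g for g in neighbors if fitness[g] > fitness[genotype]]
        if not better:
            return genotype            # reached a local fitness peak
        genotype = rng.choice(better)  # the contingent, chance event

endpoints = {replay(seed) for seed in range(10)}
print(f"10 replays reached {len(endpoints)} distinct endpoints")
```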

A Failed Prediction of the Evolutionary Paradigm

Given historical contingency, it seems unlikely that distinct evolutionary pathways would lead to identical or nearly identical outcomes. Yet, when viewed from an evolutionary standpoint, it appears as if repeated evolutionary outcomes are a common occurrence throughout life’s history. This phenomenon—referred to as convergence—is widespread. Evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, that identical evolutionary outcomes are a characteristic feature of the biological realm.5 Scientists see these repeated outcomes at the ecological, organismal, biochemical, and genetic levels. In fact, in my book The Cell’s Design, I describe 100 examples of convergence at the biochemical level.

In other words, biologists have made two contradictory observations within the evolutionary framework: (1) evolutionary processes are historically contingent and (2) evolutionary convergence is widespread. Since the publication of The Cell’s Design, many new examples of convergence have been unearthed, including the recent origin of monogamy discovery.

Convergent Origins of Monogamy

Working within the framework of the evolutionary paradigm, the UT research team sought to understand the evolutionary transition to monogamy. To gain this insight, they compared gene expression profiles in the neural tissues of reproductive males from closely related pairs of species, with one species in each pair displaying monogamous behavior and the other nonmonogamous behavior.

The species pairs spanned the major vertebrate groups and included mice, voles, songbirds, frogs, and cichlids. From an evolutionary perspective, these organisms would have shared a common ancestor 450 million years ago.

Monogamous behavior is remarkably complex. It involves the formation of bonds between males and females, care of offspring by both parents, and increased territorial defense. Yet, the researchers discovered that in each instance of monogamy, the gene expression profiles in the neural tissues of the monogamous species matched one another while remaining distinct from the gene expression patterns of their nonmonogamous counterparts. Specifically, they observed the same differences in gene expression for the same 24 genes. Interestingly, genes that play a role in neural development, cell-cell signaling, synaptic activity, learning and memory, and cognitive function displayed enhanced expression. Genes involved in gene transcription and AMPA receptor regulation were down-regulated.
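
To make the notion of a shared expression signature concrete, here is a minimal sketch with made-up gene names and fold-change values (it is not the UT team’s data or analysis pipeline). It flags genes that shift in the same direction in every monogamous species relative to its nonmonogamous partner:

```python
# Hypothetical log2 fold-changes (monogamous relative to nonmonogamous
# partner) for a few genes across five species pairs; positive values
# mean up-regulation in the monogamous species.
fold_changes = {
    "synaptic_gene_A":  [1.2, 0.8, 1.5, 0.9, 1.1],
    "neurodev_gene_B":  [0.7, 1.1, 0.6, 1.3, 0.8],
    "ampa_gene_C":      [-0.9, -1.2, -0.7, -1.0, -0.8],
    "unrelated_gene_D": [0.4, -0.6, 1.0, -0.2, 0.3],
}

def concordant(values, min_effect=0.5):
    """True if the gene shifts in the same direction, beyond a minimum
    effect size, in every species pair."""
    return (all(v >= min_effect for v in values)
            or all(v <= -min_effect for v in values))

signature = [gene for gene, v in fold_changes.items() if concordant(v)]
print("Concordantly regulated genes:", signature)
# -> the first three genes form a shared signature; unrelated_gene_D,
#    which flips direction between pairs, does not.
```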

So, how do the researchers account for this spectacular example of convergence? They conclude that a “universal transcriptomic mechanism” exists for monogamy and speculate that the gene modules needed for monogamous behavior already existed in the last common ancestor of vertebrates. When needed, these modules were independently recruited at different times in evolutionary history to yield monogamous species.

Yet, given the number of genes involved and the specific changes in gene expression needed to produce the complex behavior associated with monogamous reproduction, it seems unlikely that this transformation would happen a single time, let alone multiple times, in the exact same way. In fact, Rebecca Young, the lead author of the journal article detailing the UT research team’s work, notes that “Most people wouldn’t expect that across 450 million years, transitions to such complex behaviors would happen the same way every time.”6

So, is there another way to explain convergence?

Convergence and the Case for a Creator

Prior to Darwin (1809–1882), biologists referred to shared biological features found in organisms that cluster into disparate biological groups as analogies. (In an evolutionary framework, analogies are referred to as evolutionary convergences.) They viewed analogous systems as designs conceived by the Creator that were then physically manifested in the biological realm and distributed among unrelated organisms.

In light of this historical precedent, I interpret convergent features (analogies) as the handiwork of a Divine mind. The repeated origins of biological features equate to repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

Thus, the idea of monogamous convergence seems to divorce itself from the evolutionary framework, but it makes for a solid marriage in a creation model framework.

Resources

Endnotes
  1. Mark Binelli, “Gregg Allman: The Lost Brother,” Rolling Stone, no. 1082/1083 (July 9–23, 2009), https://www.rollingstone.com/music/music-features/gregg-allman-the-lost-brother-108623/.
  2. Binelli, “Gregg Allman: The Lost Brother.”
  3. Rebecca L. Young et al., “Conserved Transcriptomic Profiles Underpin Monogamy across Vertebrates,” Proceedings of the National Academy of Sciences, USA 116, no. 4 (January 22, 2019): 1331–36, doi:10.1073/pnas.1813775116.
  4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
  5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
  6. University of Texas at Austin, “Evolution Used Same Genetic Formula to Turn Animals Monogamous,” ScienceDaily (January 7, 2019), www.sciencedaily.com/releases/2019/01/1901071507.htm.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/20/origins-of-monogamy-cause-evolutionary-paradigm-breakup

Biochemical Synonyms Restate the Case for a Creator

BY FAZALE RANA – MARCH 13, 2019

Sometimes I just can’t help myself. I know it’s clickbait but I click on the link anyway.

A few days ago, as a result of momentary weakness, I found myself reading an article from the ScoopWhoop website, “16 Things Most of Us Think Are the Same but Actually Aren’t.”

OK, OK. Now that you’ve seen the title, you want to click on the link, too.

To save you from wasting five minutes of your life, here is the ScoopWhoop list:

  • Weather and Climate
  • Turtle and Tortoise
  • Jam and Jelly
  • Eraser and Rubber
  • Great Britain and the UK
  • Pill and Tablet
  • Shrimp and Prawn
  • Butter and Margarine
  • Orange and Tangerine
  • Biscuits and Cookies
  • Cupcakes and Muffins
  • Mushrooms and Toadstools
  • Tofu and Paneer
  • Rabbits and Hares
  • Alligators and Crocodiles
  • Rats and Mice

And there you have it. Not a very impressive list, really.

If I were putting together a biochemist’s version of this list, I would start with synonymous mutations. Even though many life scientists think they are the same, studies indicate that they “actually aren’t.”

If you have no idea what I am talking about or what this insight has to do with the creation/evolution debate, let me explain by starting with some background information, beginning with the central dogma of molecular biology and the genetic code.

Central Dogma of Molecular Biology

According to this tenet of molecular biology, the information stored in DNA is functionally expressed through the activities of proteins. When it is time for the cell’s machinery to produce a particular protein, it copies the appropriate information from the DNA molecule through a process called transcription and produces a molecule called messenger RNA (mRNA). Once assembled, mRNA migrates to the ribosome, where it directs the synthesis of proteins through a process known as translation.

blog__inline--biochemical-synonyms-restate-1

Figure 1: The central dogma of molecular biology. Image credit: Shutterstock

The Genetic Code

At first glance, there appears to be a mismatch between the stored information in DNA and the information expressed in proteins. A one-to-one relationship cannot exist between the four different nucleotides that make up DNA and the twenty different amino acids used to assemble proteins. The cell handles this mismatch by using a code consisting of groupings of three nucleotides, called codons, to specify the twenty different amino acids.



Figure 2: Codons. Image credit: Wikipedia

The cell uses a set of rules to relate these nucleotide triplet sequences to the twenty amino acids that comprise proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets represent the fundamental units of the genetic code. The code uses each combination of nucleotide triplets to signify an amino acid. This code is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode twenty amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.1
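
The code’s redundancy is easy to see by tabulating it. The following sketch builds the standard genetic code (NCBI translation table 1) and counts how many codons specify each amino acid; the asterisks mark the three stop codons, which terminate translation rather than encode an amino acid:

```python
from collections import Counter
from itertools import product

# Standard genetic code (NCBI translation table 1), DNA alphabet.
# Codons are enumerated with bases in the order T, C, A, G;
# '*' marks the three stop codons.
BASES = "TCAG"
AMINO_ACIDS = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
               "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")

CODON_TABLE = {"".join(codon): aa
               for codon, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

# Count how many codons specify each amino acid (the code's redundancy).
degeneracy = Counter(CODON_TABLE.values())
for amino_acid, n_codons in sorted(degeneracy.items()):
    print(f"{amino_acid}: {n_codons} codon(s)")
# Leucine (L), serine (S), and arginine (R) each have six codons;
# methionine (M) and tryptophan (W) have only one.
```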


Figure 3: The genetic code. Image credit: Shutterstock

A little more background information about mutations will help fill out the picture.

Mutations

A mutation refers to any change that takes place in the DNA nucleotide sequence. DNA can experience several different types of mutations. Substitution mutations are one common type. When a substitution mutation occurs, one (or more) of the nucleotides in the DNA strand is replaced by another nucleotide. For example, an A may be replaced by a G, or a C may be replaced by a T. This substitution changes the codon. Interestingly, the genetic code is structured in such a way that when substitution mutations take place, the resulting codon often specifies the same amino acid (due to redundancy) or an amino acid that has similar chemical and physical properties to the amino acid originally encoded.

Synonymous and Nonsynonymous Mutations

When substitution mutations generate a new codon that specifies the same amino acid as initially encoded, it’s referred to as a synonymous mutation. However, when a substitution produces a codon that specifies a different amino acid, it’s called a nonsynonymous mutation.
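
Given the codon table from the sketch above, classifying a substitution amounts to a simple lookup. The snippet below is illustrative only (not a bioinformatics library API):

```python
def classify_substitution(codon: str, position: int, new_base: str) -> str:
    """Label a single-nucleotide substitution as synonymous or nonsynonymous,
    reusing CODON_TABLE from the sketch above."""
    mutant = codon[:position] + new_base + codon[position + 1:]
    same = CODON_TABLE[mutant] == CODON_TABLE[codon]
    return "synonymous" if same else "nonsynonymous"

# CTT and CTC both encode leucine, so T->C at the third position is silent;
# CTT -> CAT swaps leucine for histidine, a nonsynonymous change.
print(classify_substitution("CTT", 2, "C"))  # synonymous
print(classify_substitution("CTT", 1, "A"))  # nonsynonymous
```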

Nonsynonymous mutations can be deleterious if they affect a critical amino acid or if they significantly alter the chemical and physical profile along the protein chain. If the substituted amino acid possesses dramatically different physicochemical properties from the native amino acid, it may cause the protein to fold improperly. Improper folding impacts the protein’s structure, yielding a biomolecule with reduced or even lost function.

On the other hand, biochemists have long thought that synonymous mutations have no effect on protein structure and function because these types of mutations don’t change the amino acid sequences of proteins. Even though biochemists regard synonymous mutations as silent—having no functional consequences—evolutionary biologists have put them to use, for example, by using patterns of synonymous mutations to establish evolutionary relationships.

Patterns of Synonymous Mutations and the Case for Biological Evolution

Evolutionary biologists consider shared genetic features found in organisms that naturally group together as compelling evidence for common descent. One feature of particular interest is the identical (or nearly identical) DNA sequence patterns found in genomes. According to this line of reasoning, the shared patterns arose as a result of a series of substitution mutations that occurred in the common ancestor’s genome. Presumably, as the varying evolutionary lineages diverged from the nexus point, they carried with them the altered sequences created by the primordial mutations.

Synonymous mutations play a significant role in this particular argument for common descent. Because synonymous mutations don’t alter the amino acid sequence of proteins, their effects are considered to be inconsequential. So, when the same (or nearly the same) patterns of synonymous mutations are observed in genomes of organisms that cluster together into the same group, most life scientists interpret them as compelling evidence of the organisms’ common evolutionary history.

It is conceivable that nonsynonymous mutations, which alter the protein amino acid sequences, may impart some type of benefit and, therefore, shared patterns of nonsynonymous changes could be understood as evidence for shared design. (See the last section of this article.) But this is not the case when it comes to synonymous mutations, which raises the question: Why would a Creator intentionally introduce new codons that code for the same amino acid into genes when these changes have no functional utility?

Apart from invoking a Creator, the shared patterns of synonymous mutations make perfect sense if genomes have been shaped by evolutionary processes and an evolutionary history. However, this argument for biological evolution (shared ancestry) and challenge to a creation model interpretation (shared design) hinges on the underlying assumption that synonymous mutations have no functional consequence.

But what if this assumption no longer holds?

Synonymous Mutations Are Not Interchangeable

Biochemists used to think that synonymous mutations had no impact whatsoever on protein structure and, hence, function, but this view is changing thanks to studies such as the one carried out by researchers at the University of Colorado, Boulder.2

These researchers discovered synonymous mutations that increase the translational efficiency of a gene (found in the genome of Salmonella enterica). This gene codes for an enzyme that plays a role in the biosynthetic pathway for the amino acid arginine. (This enzyme also plays a role in the biosynthesis of proline.) They believe that these mutations alter the three-dimensional structure of the DNA sequence near the beginning of the coding portion of the gene. They also think that the synonymous mutations improve the stability of the messenger RNA molecule. Both effects would lead to greater translational efficiency at the ribosome.

As radical (and unexpected) as this finding may seem, it follows on the heels of other recent discoveries that also recognize the functional importance of synonymous mutations.3 Generally speaking, biochemists have discovered that synonymous mutations influence not only the rate and efficiency of translation (as the scientists from the University of Colorado, Boulder learned) but also the folding of proteins after they are produced at the ribosome.

Even though synonymous mutations leave the amino acid sequence of the protein unchanged, they can exert influence by altering the:

  • regulatory regions of the gene that influence the transcription rate
  • secondary and tertiary structure of messenger RNA that influences the rate of translation
  • stability of messenger RNA that influences the amount of protein produced
  • translation rate that influences the folding of the protein as it exits the ribosome

Biochemists are just beginning to come to terms with the significance of these discoveries, but it is already clear that synonymous mutations have biomedical consequences.4 They also impact models of molecular evolution. But for now, I want to focus on the impact these discoveries have on the creation/evolution debate.

Patterns of Synonymous Mutations and the Case for Creation

As noted, many people consider the most compelling evidence for common descent to be the shared genetic features displayed by organisms that naturally cluster together. But if life is the product of a Creator’s handiwork, the shared genetic features could be understood as shared designs deployed by a Creator. In fact, a historical precedent exists for the common design interpretation. Prior to Darwin, biologists viewed shared biological features as manifestations of archetypical designs that existed in the Creator’s mind.

But the common design interpretation requires that the shared features be functional. (Or, that they arise independently in a nonrandom manner.) For those who view life from the framework of the evolutionary paradigm, the shared patterns of synonymous mutations invalidate the common design explanation—because these mutations are considered to be functionally insignificant.

But in the face of mounting evidence for the functional importance of synonymous mutations, this objection to common design has begun to erode. Though many life scientists are quick to dismiss the common design interpretation of biology, advances in molecular biology continue to strengthen this explanation and, with it, the case for a Creator.

Resources

Endnotes
  1. As I discuss in The Cell’s Design, the rules of the genetic code and the nature of the redundancy appear to be designed to minimize errors in translating information from DNA into proteins that would occur due to substitution mutations. This optimization stands as evidence for the work of an intelligent Agent.
  2. JohnCarlo Kristofich et al., “Synonymous Mutations Make Dramatic Contributions to Fitness When Growth Is Limited by Weak-Link Enzyme,” PLoS Genetics 14, no. 8 (August 27, 2018): e1007615, doi:10.1371/journal.pgen.1007615.
  3. Here are a few representative studies that ascribe functional significance to synonymous mutations: Anton A. Komar, Thierry Lesnik, and Claude Reiss, “Synonymous Codon Substitutions Affect Ribosome Traffic and Protein Folding during in vitro Translation,” FEBS Letters 462, no. 3 (November 30, 1999): 387–91, doi:10.1016/S0014-5793(99)01566-5; Chung-Jung Tsai et al., “Synonymous Mutations and Ribosome Stalling Can Lead to Altered Folding Pathways and Distinct Minima,” Journal of Molecular Biology 383, no. 2 (November 7, 2008): 281–91, doi:10.1016/j.jmb.2008.08.012; Florian Buhr et al., “Synonymous Codons Direct Cotranslational Folding toward Different Protein Conformations,” Molecular Cell 61, no. 3 (February 4, 2016): 341–51, doi:10.1016/j.molcel.2016.01.008; Chien-Hung Yu et al., “Codon Usage Influences the Local Rate of Translation Elongation to Regulate Co-translational Protein Folding,” Molecular Cell 59, no. 5 (September 3, 2015): 744–55, doi:10.1016/j.molcel.2015.07.018.
  4. Zubin E. Sauna and Chava Kimchi-Sarfaty, “Understanding the Contribution of Synonymous Mutations to Human Disease,” Nature Reviews Genetics 12 (August 31, 2011): 683–91, doi:10.1038/nrg3051.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/13/biochemical-synonyms-restate-the-case-for-a-creator

Endosymbiont Hypothesis and the Ironic Case for a Creator


BY FAZALE RANA – DECEMBER 12, 2018

i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Wryly enough, it now does. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But, they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.


Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, this strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes for a transporter protein that can pump thiamin into the cell from the exterior environment, it can grow if an external supply of thiamin is available.) When incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes the source of the exogenous thiamin, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, E. coli becomes established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cell.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from the E. coli endosymbionts, with their engineered capacity to transport ATP from the bacterial interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doublings. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply-deficient E. coli strains survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.


Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

Resources

Endnotes

  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018): doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/12/endosymbiont-hypothesis-and-the-ironic-case-for-a-creator

Did Neanderthals Start Fires?


BY FAZALE RANA – DECEMBER 5, 2018

It is one of the most iconic Christmas songs of all time.

Written by Bob Wells and Mel Torme in the summer of 1945, “The Christmas Song” (subtitled “Chestnuts Roasting on an Open Fire”) was crafted in less than an hour. As the story goes, Wells and Torme were trying to stay cool during the blistering summer heat by thinking cool thoughts and then jotting them down on paper. And, in the process, “The Christmas Song” was born.

Many of the song’s lyrics evoke images of winter, particularly around Christmastime. But none has come to exemplify the quiet peace of a Christmas evening more than the song’s first line, “Chestnuts roasting on an open fire . . . ”

Gathering around the fire to stay warm, to cook food, and to share in a community has been an integral part of the human experience throughout history—including human prehistory. Most certainly our ability to master fire played a role in our survival as a species and in our ability as human beings to occupy and thrive in some of the world’s coldest, harshest climates.

But fire use is not limited only to modern humans. There is strong evidence that Neanderthals made use of fire. But, did these creatures have control over fire in the same way we do? In other words, did Neanderthals master fire? Or, did they merely make opportunistic use of natural fires? These questions are hotly debated by anthropologists today and they contribute to a broader discussion about the cognitive capacity of Neanderthals. Part of that discussion includes whether these creatures were cognitively inferior to us or whether they were our intellectual equals.

In an attempt to answer these questions, a team of researchers from the Netherlands and France characterized the microwear patterns on bifacial tools (tools with opposite sides worked to form an edge) made from flint recovered from Neanderthal sites. They concluded that the wear patterns suggest these hominins used pyrite to repeatedly strike the flint, a process that generates sparks that can be used to start fires.1 To put it another way, the researchers concluded that Neanderthals had mastery over fire because they knew how to start fires.


Figure 1: Biface tools for cutting or scraping. Image credit: Shutterstock

However, a closer examination of the evidence along with results of other studies, including recent insight into the cause of Neanderthal extinction, raises significant doubts about this conclusion.

What Do the Microwear Patterns on Flint Say?

The investigators focused on the microwear patterns of flint bifaces recovered from Neanderthal sites as a marker for fire mastery because of the well-known practice among hunter-gatherers and pastoralists of striking flint with pyrite (an iron disulfide mineral) to generate sparks to start fires. Presumably, the first modern humans also used this technique to start fires.


Figure 2: Starting a fire with pyrite and flint. Image credit: Shutterstock

The research team reasoned that if Neanderthals started fires, they would use a similar tactic. Careful examination of the microwear patterns on the bifaces led the research team to conclude that these tools were repeatedly struck by hard materials, with the strikes all occurring in the same direction along the bifaces’ long axis.

The researchers then tried to experimentally recreate the microwear pattern in a laboratory setting. To do so, they struck biface replicas with a number of different types of materials, including pyrites, and concluded that the patterns produced by the pyrite strikes most closely matched the patterns on the bifaces recovered from Neanderthal sites. On this basis, the researchers claim that they have found evidence that Neanderthals deliberately started fires.

Did Neanderthals Master Fire?

While this conclusion is possible, at best this study provides circumstantial, not direct, evidence for Neanderthal mastery of fire. In fact, other evidence counts against this conclusion. For example, bifaces with the same type of microwear patterns have been found at other Neanderthal sites, locales that show no evidence of fire use. These bifaces would have had a range of usages, including butchery of the remains of dead animals. So, it is possible that these tools were never used to start fires—even at sites with evidence for fire usage.

Another challenge to the conclusion comes from the failure to detect any pyrite on the bifaces recovered from the Neanderthal sites. Flint recovered from modern human sites shows visible evidence of pyrite. And yet the research team failed to detect even trace amounts of pyrite on the Neanderthal bifaces during the course of their microanalysis.

This observation raises further doubt about whether the flint from the Neanderthal sites was used as a fire starter tool. Rather, it points to the possibility that Neanderthals struck the bifaces with materials other than pyrite for reasons not yet understood.

The conclusion that Neanderthals mastered fire also does not square with results from other studies. For example, a careful assessment of archaeological sites in southern France occupied by Neanderthals from about 100,000 to 40,000 years ago indicates that Neanderthals could not create fire. Instead, these hominins made opportunistic use of natural fire when it was available to them.2

These French sites do show clear evidence of Neanderthal fire use, but when researchers correlated the archaeological layers displaying evidence for fire use with the paleoclimate data, they found an unexpected pattern. Neanderthals used fire during warm climate conditions and failed to use fire during cold periods—the opposite of what would be predicted if Neanderthals had mastered fire.

Lightning strikes that would generate natural fires are much more likely to occur during warm periods. Instead of creating fire, Neanderthals most likely harnessed natural fire and cultivated it as long as they could before it extinguished.

Another study also raises questions about the ability of Neanderthals to start fires.3 This research indicates that cold climates triggered Neanderthal extinctions. By studying the chemical composition of stalagmites in two Romanian caves, an international research team concluded that there were two prolonged and extremely cold periods between 44,000 and 40,000 years ago. (The chemical composition of stalagmites varies with temperature.)

The researchers also noted that during these cold periods, the archaeological record for Neanderthals disappears. They interpret this disappearance to reflect a dramatic reduction in Neanderthal population numbers. Researchers speculate that when this population downturn took place during the first cold period, modern humans made their way into Europe. Being better suited for survival in the cold climate, modern human numbers increased. When the cold climate mitigated, Neanderthals were unable to recover their numbers because of the growing populations of modern humans in Europe. Presumably, after the second cold period, Neanderthal numbers dropped to the point that they couldn’t recover, and hence, became extinct.

But why would modern humans be more capable than Neanderthals of surviving under extremely cold conditions? It seems as if it should be the other way around. Neanderthals had a hyper-polar body design that made them ideally suited to withstand cold conditions. Neanderthal bodies were stout and compact, comprised of barrel-shaped torsos and shorter limbs, which helped them retain body heat. Their noses were long and sinus cavities extensive, which helped them warm the cold air they breathed before it reached their lungs. But, despite this advantage, Neanderthals died out and modern humans thrived.

Some anthropologists believe that the survival discrepancy could be due to dietary differences. Some data indicates that modern humans had a more varied diet than Neanderthals. Presumably, these creatures primarily consumed large herbivores—animals that disappeared when the climatic conditions turned cold, thereby threatening Neanderthal survival. On the other hand, modern humans were able to adjust to the cold conditions by shifting their diets.

But could there be a different explanation? Could it be that with their mastery of fire, modern humans were able to survive cold conditions? And did Neanderthals die out because they could not start fires?

Taken in its entirety, the data seems to indicate that Neanderthals lacked mastery of fire but could use it opportunistically. And, in a broader context, the data indicates that Neanderthals were cognitively inferior to humans.

What Difference Does It Make?

One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again. Now it’s time to light a fire in my fireplace and enjoy a few contemplative moments thinking about the real meaning of Christmas.

Resources

Endnotes

  1. A. C. Sorensen, E. Claud, and M. Soressi, “Neanderthal Fire-Making Technology Inferred from Microwear Analysis,” Scientific Reports 8 (July 19, 2018): 10065, doi:10.1038/s41598-018-28342-9.
  2. Dennis M. Sandgathe et al., “Timing of the Appearance of Habitual Fire Use,” Proceedings of the National Academy of Sciences, USA 108 (July 19, 2011), E298, doi:10.1073/pnas.1106759108; Paul Goldberg et al., “New Evidence on Neandertal Use of Fire: Examples from Roc de Marsal and Pech de l’Azé IV,” Quaternary International 247 (2012): 325–40, doi:10.1016/j.quaint.2010.11.015; Dennis M. Sandgathe et al., “On the Role of Fire in Neandertal Adaptations in Western Europe: Evidence from Pech de l’Azé IV and Roc de Marsal, France,” PaleoAnthropology (2011): 216–42, doi:10.4207/PA.2011.ART54.
  3. Michael Staubwasser et al., “Impact of Climate Change on the Transition of Neanderthals to Modern Humans in Europe,” Proceedings of the National Academy of Sciences, USA 115 (September 11, 2018): 9116–21, doi:10.1073/pnas.1808647115.

Further Review Overturns Neanderthal Art Claim


BY FAZALE RANA – OCTOBER 17, 2018

As I write this blog post, the 2018–19 NFL season is just underway.

During the course of any NFL season, several key games are decided by a controversial call made by the officials. Nobody wants the officials to determine the outcome of a game, so the NFL has instituted a way for coaches to challenge calls on the field. When a call is challenged, part of the officiating crew looks at a computer tablet on the sidelines—reviewing the game footage from a number of different angles in an attempt to get the call right. After two minutes of reviewing the replays, the senior official makes his way to the middle of the field and announces, “Upon further review, the call on the field . . .”

Recently, a team of anthropologists from Spain and the UK created quite a bit of controversy based on a “call” they made from working in the field. Using a new U-Th dating method, these researchers age-dated the artwork in caves from Iberia. Based on the age of a few of their samples, they concluded that Neanderthals produced cave paintings.1 But new work by three independent research teams challenges the “call” from the field—overturning the conclusion that Neanderthals made art and displayed symbolism like modern humans.

U-Th Dating Method

The new dating method under review measures the age of calcite deposits beneath cave paintings and those formed over the artwork after the paintings were created. As water flows down cave walls, it deposits calcite. When calcite forms, it contains trace amounts of U-238. This isotope decays into Th-230. Normally, detection of such low quantities of the isotopes would require extremely large samples. Researchers discovered that by using accelerator mass spectrometry, they could get by with 10-milligram samples. And by dating the calcite samples with this technique, they produced minimum and maximum ages for the cave paintings.2
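
The age calculation behind the method can be illustrated with a simplified decay relation. If the calcite forms with uranium but no thorium, and the uranium isotopes sit in secular equilibrium, the 230Th/234U activity ratio R grows toward 1 as R = 1 - e^(-λt), so the age is t = -ln(1 - R)/λ, where λ is the decay constant of Th-230. The sketch below is my back-of-the-envelope illustration, not the fully corrected method used in the published studies:

```python
import math

TH230_HALF_LIFE_YR = 75_600  # approximate half-life of thorium-230
DECAY_CONSTANT = math.log(2) / TH230_HALF_LIFE_YR

def uth_age(activity_ratio: float) -> float:
    """Age in years from a measured 230Th/234U activity ratio, assuming
    no initial thorium and uranium isotopes in secular equilibrium."""
    return -math.log(1.0 - activity_ratio) / DECAY_CONSTANT

print(f"R = 0.31 -> {uth_age(0.31):,.0f} years")  # roughly 40,000 years
print(f"R = 0.45 -> {uth_age(0.45):,.0f} years")  # roughly 65,000 years

# The open-system worry discussed below: if water later leaches uranium
# out of the calcite, the measured ratio R rises, and the computed age
# is inflated beyond the painting's true age.
```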

Call from the Field: Neanderthals Are Artists

The team applied their dating method to the art found at three cave sites in Iberia (in modern-day Spain): (1) La Pasiega, which houses paintings of animals, linear signs, claviform signs, and dots; (2) Ardales, which contains about 1,000 paintings of animals, along with dots, discs, lines, geometric shapes, and hand stencils; and (3) Maltravieso, which displays a set of hand stencils and geometric designs. The research team took a total of 53 samples from 25 carbonate formations associated with the cave art at these three sites. While most of the samples dated to 40,000 years old or less (which would indicate that modern humans were the artists), three measurements produced minimum ages of around 65,000 years: (1) a red scalariform sign from La Pasiega, (2) red-painted areas from Ardales, and (3) a hand stencil from Maltravieso. On the basis of these three measurements, the team concluded that the art must have been made by Neanderthals because modern humans had not yet made their way into Iberia at that time. In other words, Neanderthals made art, just like modern humans did.


Figure: Maltravieso Cave Entrance, Spain. Image credit: Shutterstock

Shortly after the findings were published, I wrote a piece expressing skepticism about this claim for two reasons.

First, I questioned the reliability of the method. Once the calcite deposit forms, the U-Th method will only yield reliable results if none of the U or Th moves in or out of the deposit. Based on the work of researchers from France and the US, it does not appear as if the calcite films are closed systems. The calcite deposits on the cave wall formed because of hydrological activity in the cave. Once a calcite film forms, water will continue to flow over its surface, leaching out U (because U is much more water soluble than Th). By removing U, water flowing over the calcite will make it seem as if the deposit and, hence, the underlying artwork is much older than it actually is.3

Second, I expressed concern that the 65,000-year-old dates measured for a few samples are outliers. Of the 53 samples measured, only three gave ages of around 65,000 years. The remaining samples dated much younger, typically around 40,000 years. So why should we give so much credence to three measurements, particularly if we know that the calcite deposits are open systems?

Upon Further Review: Neanderthals Are Not Artists

Within a few months, three separate research groups published papers challenging the reliability of the U-Th method for dating cave art and, along with it, the claim that Neanderthals produced cave art.4 It is not feasible to detail all their concerns in this article, but I will highlight six of the most significant complaints. In several instances, the research teams independently raised the same concerns.

  1. The U-Th method is unreliable because the calcite deposits are an open system. The concern that I raised was reiterated by two of the research teams for the same reason I expressed. The U-Th dating technique can only yield reliable results if no U or Th moves in or out of the system once the calcite film forms. The continued water flow over the calcite deposits will preferentially leach U from the deposit, making the deposit appear to be older than it is.
  2. The U-Th method is unreliable because it fails to account for nonradiogenic Th. This isotope would have been present in the source water producing the calcite deposits. As a result, Th would already be present in calcite at the time of formation. This nonradiogenic Th would make the samples appear to be older than they actually are.
  3. The 65,000-year-old dates for the three measurements from La Pasiega, Ardales, and Maltravieso are likely outliers. Just as I pointed out before, two of the research groups expressed concern that only 3 of the 53 measurements came in at 65,000 years in age. This discrepancy suggests that these dates are outliers, most likely reflecting the fact that the calcite deposits are an open system that formed with Th already present. Yet, the researchers from Spain and the UK who reported these results emphasized the few older dates while downplaying the younger dates.
  4. Multiple measurements on the same piece of art yielded discordant ages. For example, the researchers made five age-date measurements of the hand stencil at Maltravieso. These dates (66.7 kya [thousand years ago], 55.2 kya, 35.3 kya, 23.1 kya, and 14.7 kya) were all over the place. And yet, the researchers selected the oldest date for the age of the hand stencil, without justification.
  5. Some of the red “markings” on cave walls that were dated may not be art. Red markings are commonplace on cave walls and can be produced by microorganisms that secrete organic materials or iron oxide deposits. It is possible that some of the markings that were dated were not art at all.
  6. The method used by the researchers to sample the calcite deposits may have been flawed. One team expressed concern that the sampling technique may have unwittingly produced dates for the cave surface on which the paintings were made rather than the pigments used to make the art itself. If the researchers inadvertently dated the cave surface, it could easily be older than the art.

In light of these many shortcomings, it is questionable whether the U-Th method is reliable for dating cave art. After review, the call from the field is overturned: there is no conclusive evidence that Neanderthals made art.

Why Does This Matter?

Artistic expression reflects a capacity for symbolism. And many people view symbolism as a quality unique to human beings that contributes to our advanced cognitive abilities and exemplifies our exceptional nature. In fact, as a Christian, I see symbolism as a manifestation of the image of God. If Neanderthals possessed symbolic capabilities, such a quality would undermine human exceptionalism (and with it the biblical view of human nature), rendering human beings nothing more than another hominin. At this juncture, every claim for Neanderthal symbolism has failed to withstand scientific scrutiny.

Now, it is time for me to go back to the game.

Who dey! Who dey! Who dey think gonna beat dem Bengals!

Resources:

Endnotes

  1. D. L. Hoffmann et al., “U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,” Science 359 (February 23, 2018): 912–15, doi:10.1126/science.aap7778.
  2. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art in 11 Caves in Spain,” Science 336 (June 15, 2012): 1409–13, doi:10.1126/science.1219957.
  3. Georges Sauvet et al., “Uranium-Thorium Dating Method and Palaeolithic Rock Art,” Quaternary International 432 (2017): 86–92, doi:10.1016/j.quaint.2015.03.053.
  4. Ludovic Slimak et al., “Comment on ‘U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,’” Science 361 (September 21, 2018): eaau1371, doi:10.1126/science.aau1371; Maxime Aubert, Adam Brumm, and Jillian Huntley, “Early Dates for ‘Neanderthal Cave Art’ May Be Wrong,” Journal of Human Evolution (2018), doi:10.1016/j.jhevol.2018.08.004; David G. Pearce and Adelphine Bonneau, “Trouble on the Dating Scene,” Nature Ecology and Evolution 2 (June 2018): 925–26, doi:10.1038/s41559-018-0540-4.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/17/further-review-overturns-neanderthal-art-claim

Differences in Human and Neanderthal Brains Explain Human Exceptionalism


BY FAZALE RANA – SEPTEMBER 19, 2018

When I was a little kid, my mom went through an Agatha Christie phase. She was a huge fan of the murder mystery writer and she read all of Christie’s books.

Agatha Christie was caught up in a real-life mystery of her own when she disappeared for 11 days in December 1926 under highly suspicious circumstances. Her car was found near her home, close to the edge of a cliff, but she was nowhere to be found. It looked as if she had disappeared without a trace and without any explanation. Eleven days after her disappearance, she turned up in a hotel room registered under an alias.

Christie never offered an explanation for her disappearance. To this day, it remains an enduring mystery. Some think it was a callous publicity stunt. Some say she suffered a nervous breakdown. Others think she suffered from amnesia. Some people suggest more sinister reasons. Perhaps, she was suicidal. Or maybe she was trying to frame her husband and his mistress for her murder.

Perhaps we will never know.

Like Christie’s fictional detectives Hercule Poirot and Miss Marple, paleoanthropologists are every bit as eager to solve a mysterious disappearance of their own. They want to know why Neanderthals vanished from the face of the earth. And what role did human beings (Homo sapiens) play in the Neanderthal disappearance, if any? Did we kill off these creatures? Did we outcompete them or did Neanderthals just die off on their own?

Anthropologists have proposed various scenarios to account for the Neanderthals’ disappearance. Some paleoanthropologists think that differences in the cognitive capabilities of modern humans and Neanderthals help explain the creatures’ extinction. According to this model, superior reasoning abilities allowed humans to thrive while Neanderthals faced inevitable extinction. As a consequence, we replaced Neanderthals in the Middle East, Europe, and Asia when we first migrated to these parts of the world.

Computational Neuroanatomy

Innovative work by researchers from Japan offers support for this scenario.1 Using a technique called computational neuroanatomy, researchers reconstructed the brain shape of Neanderthals and modern humans from the fossil record. In their study, the researchers used four Neanderthal specimens:

  • Amud 1 (50,000 to 70,000 years in age)
  • La Chapelle-aux-Saints 1 (47,000 to 56,000 years in age)
  • La Ferrassie 1 (43,000 to 45,000 years in age)
  • Forbes’ Quarry 1 (no age dates)

They also worked with four Homo sapiens specimens:

  • Qafzeh 9 (90,000 to 120,000 years in age)
  • Skhūl 5 (100,000 to 135,000 years in age)
  • Mladeč 1 (35,000 years in age)
  • Cro-Magnon 1 (32,000 years in age)

Researchers used computed tomography scans to construct virtual endocasts (cranial cavity casts) of the fossil brains. After generating endocasts, the team determined the 3D brain structure of the fossil specimens by deforming the 3D structure of the average human brain so that it fit into the fossil crania and conformed to the endocasts.

This technique appears to be valid, based on control studies carried out on chimpanzee and bonobo brains. Using computational neuroanatomy, researchers can deform a chimpanzee brain to accurately yield the bonobo brain, and vice versa.

Brain Differences, Cognitive Differences

The Japanese team learned that the chief difference between human and Neanderthal brains is the size and shape of the cerebellum. The cerebellar hemispheres project more toward the interior in the human brain than in the Neanderthal brain, and the volume of the human cerebellum is larger. Researchers also noticed that the right side of the Neanderthal cerebellum is significantly smaller than the left side—a phenomenon called volumetric laterality. This discrepancy doesn’t exist in the human brain. Finally, the Japanese researchers observed that the parietal regions in the human brain were larger than those regions in Neanderthals’ brains.

Image credit: Shutterstock
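
One common way to quantify the kind of left-right asymmetry the researchers describe is a simple laterality index. The paper’s exact metric isn’t reproduced here, so treat this as an illustrative formula applied to hypothetical volumes.

```python
def laterality_index(left_volume, right_volume):
    """Signed left-right asymmetry of hemispheric volumes.
    0 means perfect symmetry; positive means the left side is larger."""
    return (left_volume - right_volume) / (left_volume + right_volume)

# Hypothetical cerebellar hemisphere volumes (cm^3), for illustration only.
print(laterality_index(65.0, 64.5))   # human-like: near zero
print(laterality_index(66.0, 58.0))   # Neanderthal-like: right side smaller
```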

Because of these brain differences, the researchers argue that humans were socially and cognitively more sophisticated than Neanderthals. Neuroscientists have discovered that the cerebellum supports motor function and higher cognition, contributing to language, working memory, thought, and social abilities. Hence, the researchers argue that the reduced size of the right cerebellar hemisphere in Neanderthals limited the connection to the prefrontal regions—a connection critical for language processing. Neuroscientists have also discovered that the parietal lobe plays a role in visuo-spatial imagery, episodic memory, self-related mental representations, coordination between self and external spaces, and sense of agency.

On the basis of this study, it seems that humans either outcompeted Neanderthals for limited resources—driving them to extinction—or simply were better suited to survive than Neanderthals because of superior mental capabilities. Or perhaps their demise occurred for more sinister reasons. Maybe we used our sophisticated reasoning skills to kill off these creatures.

Did Neanderthals Make Art, Music, Jewelry, etc.?

Recently, a flurry of reports has appeared in the scientific literature claiming that Neanderthals possessed the capacity for language and the ability to make art, music, and jewelry. Other studies claim that Neanderthals ritualistically buried their dead, mastered fire, and used plants medicinally. All of these claims rest on highly speculative interpretations of the archaeological record. In fact, other studies present evidence that refutes every one of these claims (see Resources).

Comparisons of human and Neanderthal brain morphology and size become increasingly important in the midst of this controversy. This recent study—along with previous work—indicates that Neanderthals did not have the brain architecture and, hence, cognitive capacity to communicate symbolically through language, art, music, and body ornamentation. Nor did they have the brain capacity to engage in complex social interactions. In short, Neanderthal brain anatomy does not support any interpretation of the archaeological record that attributes advanced cognitive abilities to these creatures.

While this study provides important clues about the disappearance of Neanderthals, we still don’t know why they went extinct. Nor do we know any of the mysterious details surrounding their demise as a species.

Perhaps we will never know.

But we do know that in terms of our cognitive and social capacities, human beings stand apart from Neanderthals and all other creatures. Human brain biology and behavior render us exceptional, one-of-a-kind, in ways consistent with the image of God.

Resources

Endnotes

  1. Takanori Kochiyama et al., “Reconstructing the Neanderthal Brain Using Computational Anatomy,” Scientific Reports 8 (April 26, 2018): 6296, doi:10.1038/s41598-018-24331-0.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/09/19/differences-in-human-and-neanderthal-brains-explain-human-exceptionalism

The Endosymbiont Hypothesis: Things Aren’t What They Seem to Be


BY FAZALE RANA – AUGUST 29, 2018

Sometimes, things just aren’t what they seem to be. For example, when it comes to the world of biology:

  • Fireflies are not flies; they are beetles
  • Prairie dogs are not dogs; they are rodents
  • Horned toads are not toads; they are lizards
  • Douglas firs are not firs; they are pines
  • Silkworms are not worms; they are caterpillars
  • Peanuts are not nuts; they are legumes
  • Koala bears are not bears; they are marsupials
  • Guinea pigs are not from Guinea and they are not pigs; they are rodents from South America
  • Banana trees are not trees; they are herbs
  • Cucumbers are not vegetables; they are fruit
  • Mexican jumping beans are not beans; they are seeds with a larva inside

And . . . mitochondria are not alphaproteobacteria. In fact, evolutionary biologists don’t know what they are—at least, if recent work by researchers from Uppsala University in Sweden is to be taken seriously.1

As silly as this list may be, evolutionary biologists are not amused by this latest insight about the identity of mitochondria. Uncertainty about the evolutionary origin of mitochondria removes from the table one of the most compelling pieces of evidence for the endosymbiont hypothesis.

A cornerstone idea within the modern evolutionary framework, the endosymbiont hypothesis is often presented in biology textbooks as a well-evidenced, well-established explanation for the origin of complex cells (eukaryotic cells). Yet, confusion and uncertainty surround this idea, as this latest discovery attests. To put it another way: when it comes to the evolutionary explanation for the origin of complex cells in biology textbooks, things aren’t what they seem.

The Endosymbiont Hypothesis

Most evolutionary biologists believe that the endosymbiont hypothesis is the best explanation for one of the key transitions in life’s history—namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis to explain the origin of eukaryotic cells in the 1960s.

Since that time, Margulis’s ideas on the origin of complex cells have become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this hypothesis compelling; consequently, they view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

The Evolution of Eukaryotic Cells According to the Endosymbiont Hypothesis (image source: Wikipedia)

Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once taken inside the host cell, the endosymbionts took up permanent residence, with the endosymbiont growing and dividing inside the host. Over time, endosymbionts and hosts became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved machinery to produce proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

Evidence for the Endosymbiont Hypothesis

The morphological similarities between organelles and bacteria serve as one line of evidence for the endosymbiont hypothesis. For example, mitochondria are about the same size and shape as a typical bacterium, and they have a double-membrane structure like that of gram-negative cells. These organelles also divide in a way that is reminiscent of bacterial cells.

Biochemical evidence also seems to support the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They also take the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

The presence of the unique lipid cardiolipin in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. Cardiolipin is an important lipid component of bacterial inner membranes. Yet, it is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.

But, as compelling as these observations may be, for many evolutionary biologists phylogenetic analysis provides the most convincing evidence for the endosymbiont hypothesis. Evolutionary trees built from the DNA sequences of mitochondria, bacteria, and archaea place these organelles among a group of microbes called alphaproteobacteria. And, in many (but not all) evolutionary trees, mitochondria cluster with the bacterial order Rickettsiales. For evolutionary biologists, these results mean that the endosymbionts that eventually became the first mitochondria were alphaproteobacteria. If mitochondria were not evolutionarily derived from alphaproteobacteria, why would the DNA sequences of these organelles group with these bacteria in evolutionary trees?
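
For readers unfamiliar with how such trees are built: sequences are compared, pairwise distances (or likelihoods) are computed, and the most similar taxa are clustered together. Here is a toy sketch using UPGMA (average-linkage) clustering as a stand-in for the far more sophisticated maximum-likelihood methods the actual studies use; the taxa and distance values are invented purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical pairwise sequence distances (substitutions per site); the
# numbers are invented to illustrate the logic, not taken from real data.
taxa = ["Rickettsiales", "other alphaproteobacterium", "mitochondrion",
        "gammaproteobacterium", "archaeon"]
D = np.array([
    [0.00, 0.30, 0.35, 0.60, 0.90],
    [0.30, 0.00, 0.40, 0.60, 0.90],
    [0.35, 0.40, 0.00, 0.65, 0.90],
    [0.60, 0.60, 0.65, 0.00, 0.90],
    [0.90, 0.90, 0.90, 0.90, 0.00],
])

# UPGMA joins the closest taxa first, so with these distances the
# mitochondrion groups inside the alphaproteobacteria.
tree = linkage(squareform(D), method="average")
print(dendrogram(tree, labels=taxa, no_plot=True)["ivl"])
```

The point is not the toy method but the inference it exposes: where a taxon groups depends entirely on which other taxa were sampled, which is exactly the limitation the Uppsala team set out to address.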

But . . . Mitochondria Are Not Alphaproteobacteria

Even though evolutionary biologists seem certain about the phylogenetic positioning of mitochondria among the alphaproteobacteria, there has been an ongoing dispute about the precise position of mitochondria in evolutionary trees, specifically whether or not mitochondria group with Rickettsiales. Looking to bring an end to this dispute, the Uppsala University research team developed a more comprehensive data set to build their evolutionary trees, with the hope that they could more precisely locate mitochondria among the alphaproteobacteria. The researchers point out that the alphaproteobacterial genomes used to construct evolutionary trees stem from microbes found in clinical and agricultural settings, a small sampling of the alphaproteobacteria found in nature. Researchers knew this was a limitation, but, up to this point, these were the only DNA sequence data available to them.

To avoid the bias that arises from this limited data set, the researchers screened databases of DNA sequences collected from the Pacific and Atlantic Oceans for undiscovered alphaproteobacteria. They uncovered twelve new groups of alphaproteobacteria. In turn, they included these new genome sequences along with DNA sequences from previously known alphaproteobacterial genomes to build a new set of evolutionary trees. To their surprise, their analysis indicates that mitochondria are not alphaproteobacteria.

Instead, it looks like mitochondria belong to a side branch that separated from the evolutionary tree before alphaproteobacteria emerged. Adding to their surprise, the research team was unable to identify any bacterial species alive today that would group with mitochondria.

To put it another way: the latest study indicates that evolutionary biologists have no candidate for the evolutionary ancestor of mitochondria.

Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

Evolutionary biologists suggest that there’s compelling evidence for the endosymbiont hypothesis. But when researchers attempt to delineate the details of this presumed evolutionary transition, such as the identity of the original endosymbiont, it becomes readily apparent that biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells.

As I have written previously, the problems with the endosymbiont hypothesis are not limited to the identity of the evolutionary ancestor of mitochondria. They are far more pervasive, confounding each evolutionary step that life scientists envision to be part of the emergence of complex cells. (For more examples, see the Resources section.)

When it comes to the endosymbiont hypothesis, things are not what they seem to be. If mitochondria are not alphaproteobacteria, and if evolutionary biologists have no candidate for their evolutionary ancestor, could it be possible that they are the handiwork of the Creator?

Resources

Endnotes

  1. Joran Martijn et al., “Deep Mitochondrial Origin Outside the Sampled Alphaproteobacteria,” Nature 557 (May 3, 2018): 101–5, doi:10.1038/s41586-018-0059-5.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/08/29/the-endosymbiont-hypothesis-things-aren-t-what-they-seem-to-be

Evolution’s Flawed Approach to Science


BY FAZALE RANA – AUGUST 8, 2018

One of the things I find most troubling about the evolutionary paradigm is the view it fosters about the nature of biological systems—including human beings.

Evolution’s mechanisms, it is said, generate biological innovations by co-opting existing designs and cobbling them together to create new ones. As a result, many people in the scientific community regard biological systems as fundamentally flawed.

As biologist Ken Miller explains in an article for Technology Review:

Evolution . . . does not produce perfection. The fact that every intermediate stage in the development of an organ must confer a selective advantage means that the simplest and most elegant design for an organ cannot always be produced by evolution. In fact, the hallmark of evolution is the modification of pre-existing structures. An evolved organism, in short, should show the tell-tale signs of this modification.1

So, instead of regarding humans as “fearfully and wonderfully made” (as Scripture teaches), the evolutionary paradigm denigrates human beings as a logical entailment of its mechanisms, rendering us nothing more than creatures cobbled together by evolutionary processes.

Adding to this concern is the impact that the evolutionary paradigm has on scientific advance. Because many in the scientific community view biological systems as fundamentally flawed, they are predisposed to conclude—oftentimes, prematurely—that biological systems lack function or purpose when initial investigations into these systems fail to uncover any obvious rationale for why these systems are the way they are. And, once these investigators conclude that a biological system is flawed, the motivation to continue studying the system dissipates. Why try to understand a flawed design? Why focus attention on biological systems that lack function? Why invest research dollars studying systems that serve no purpose?

I would contend that viewing biological systems as the Creator’s handiwork provides a superior framework for promoting scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. If biological systems have been created, then there must be good reasons why these systems are structured and function the way they do. And this expectation drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

Recent history validates the creation model approach. During the course of the last couple of decades, the scientific community has made discovery after discovery demonstrating (1) function for biological systems long thought to be useless evolutionary vestiges, or (2) an ingenious rationale for the architecture and operation of systems long regarded as flawed designs. (For examples, see the articles listed in the Resources section.)

These discoveries were made not because of the evolutionary paradigm but in spite of it.

So often, creationists and intelligent design proponents are accused of standing in the way of scientific advance. Skeptics of creation claim that if we conclude that God created biological systems, then science grinds to a halt. If God made it, then why continue to investigate the system in question?

But, I would assert that the opposite is true. The evolutionary paradigm stultifies science by viewing biological systems as flawed and vestigial. Yet, for the biological systems discussed in the articles listed in the Resources section, the view spawned by the evolutionary paradigm delayed important advances that could have been leveraged for biomedical purposes sooner, alleviating a lot of pain and suffering.

Because a creation model perspective regards designs in nature as part of God’s handiwork, it provides the motivation to keep pressing forward, seeking a rationale for systems that seemingly lack purpose. In the handful of instances in which the scientific community has adopted this mindset, it has been rewarded, paving the way for new scientific insight that leads to biomedical breakthroughs.

Resources

Endnotes

  1. Kenneth R. Miller, “Life’s Grand Design,” Technology Review 97 (February/March 1994): 24–32.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/08/08/evolution-s-flawed-approach-to-science

Do Plastic-Eating Bacteria Dump the Case for Creation?


BY FAZALE RANA – JULY 18, 2018

At the risk of stating the obvious: Plastics are an indispensable part of our modern world. Yet, plastic materials cause untold problems for the environment. One of the properties that makes plastics so useful also makes them harmful. Plastics don’t readily degrade.

Researchers recently discovered a new strain of bacteria that evolved the ability to degrade plastics. These microbes may help solve some of the environmental problems caused by plastics, but their evolution seemingly creates new problems for people who hold the view that a Creator is responsible for life’s origin and design. But is this really the case? To find out, we need to break down this discovery.

One plastic in widespread use today is polyethylene terephthalate (PET). This polymer was patented in the 1940s and became widely used in the 1970s. Most people are familiar with PET because it is used to make drinking bottles.

This material is produced by reacting ethylene glycol with terephthalic acid (both produced from petroleum). Crystalline in nature, this plastic is durable and difficult to break down because the ester linkages between the terephthalic acid and ethylene glycol subunits of the polymer backbone are largely inaccessible.

PET can be recycled, thereby mitigating its harmful effects on the environment. A significant portion of PET is mechanically recycled by converting it into fibers used to manufacture carpets.

In principle, PET could be recycled by chemically breaking the ester linkages holding the polymer together. When the ester linkages are cleaved, ethylene glycol and terephthalic acid are the breakdown products. These recovered starting materials could be reused to make more PET. Unfortunately, chemical recycling of PET is expensive and difficult to carry out because of the inaccessibility of the ester linkages. In fact, it is cheaper to produce PET from petroleum products than from the recycled monomers.
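
In outline, the chemical recycling step is simply ester hydrolysis run to completion. A simplified overall stoichiometry, written per repeat unit of the polymer (a sketch of the balanced reaction, not a description of any particular industrial process):

```latex
% Simplified overall stoichiometry of PET hydrolysis, per polymer repeat
% unit: each repeat unit consumes two water molecules as its two ester
% bonds are cleaved, regenerating the two monomers.
\mathrm{(C_{10}H_{8}O_{4})_{n}} + 2n\,\mathrm{H_{2}O}
  \longrightarrow
  n\,\mathrm{C_{8}H_{6}O_{4}}\ \text{(terephthalic acid)}
  + n\,\mathrm{C_{2}H_{6}O_{2}}\ \text{(ethylene glycol)}
```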

Can Bacteria Recycle PET?

An interesting advance took place in 2016 that has important implications for PET recycling. A team of Japanese researchers discovered a strain of the bacterium Ideonella sakaiensis that could break down PET into terephthalic acid and ethylene glycol.1 This strain was discovered by screening wastewater, soil, sediments, and sludge from a PET recycling facility. The microbe produces two enzymes, dubbed PETase and MHETase, that work in tandem to convert PET into its constituent monomers.

Evolution in Action

Researchers think that this microbe acquired DNA from the environment or another microbe via horizontal gene transfer. Presumably, this DNA fragment harbored the genes for cutinase, an enzyme that breaks down ester linkages. Once the I. sakaiensis strain picked up the DNA and incorporated it into its genome, the cutinase gene must have evolved so that it now encodes the information to produce two enzymes with the capacity to break down PET. Plus, this new capability must have evolved rather quickly, over the span of a few decades.

PETase Structure and Evolution

In an attempt to understand how PETase and MHETase evolved and how these two enzymes might be engineered for recycling and bioremediation purposes, a team of investigators from the University of Plymouth determined the structure of PETase with atomic-level detail.2 They learned that this enzyme has the structural components characteristic of a family of enzymes called alpha/beta hydrolases. Based on the amino acid sequence of PETase, the researchers concluded that its closest match among known enzymes is a cutinase produced by the bacterium Thermobifida fusca. One of the most significant differences between the two enzymes is found at their active sites. (The active site is the location on the enzyme surface that binds the compounds that the enzyme chemically alters.) The active site of PETase is broader than that of the T. fusca cutinase, allowing it to accommodate PET polymers.

As researchers sought to understand how PETase evolved from cutinase, they engineered amino acid changes in PETase, hoping to revert it to a cutinase. To their surprise, the resulting enzyme was even more effective at degrading PET than the PETase found in nature.

This insight does not help explain the evolutionary origin of PETase, but the serendipitous discovery does point the way toward using engineered PETases for recycling and bioremediation. One could envision spraying the enzyme (or the bacterium I. sakaiensis) onto a landfill or onto patches of plastic floating in the earth’s oceans. Alternatively, the enzyme could be used at recycling facilities to regenerate the PET monomers.

As a Christian, I find this discovery exciting. Advances such as these will help us do a better job as planetary caretakers and as stewards of God’s creation, in accord with the mandate given to us in Genesis 1.

But, this discovery does raise a question: Does the evolution of a PET-eating bacterium prove that evolution is true? Does this discovery undermine the case for creation? After all, it is evolution happening right before our eyes.

Is Evolution in Action Evidence for Evolution?

To answer this question, we need to recognize that the term “evolution” can take on a variety of meanings. Each one reflects a different type of biological transformation (or presumed transformation).

It is true that organisms can change as their environment changes. This occurs through mutations to the genetic material. In rare circumstances, these mutations can create new biochemical and biological traits, such as the ones that produced the strain of I. sakaiensis that can degrade PET. If these new traits help the organism survive, it will reproduce more effectively than organisms lacking the trait. Over time, this new trait will take hold in the population, causing a transformation of the species.
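
To make “take hold in the population” concrete, here is a toy haploid selection model. The starting frequency and selection coefficient are invented for illustration and are not drawn from the I. sakaiensis studies.

```python
def sweep(p0, s, generations):
    """Frequency trajectory of a beneficial variant under a toy haploid
    selection model: p' = p(1 + s) / (p(1 + s) + (1 - p))."""
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        freqs.append(p * (1 + s) / (p * (1 + s) + (1 - p)))
    return freqs

# A rare variant (0.1%) with a hypothetical 5% advantage approaches
# fixation within ~300 generations; for fast-dividing bacteria with huge
# population sizes, that can be a matter of months.
trajectory = sweep(p0=0.001, s=0.05, generations=300)
print(f"final frequency: {trajectory[-1]:.3f}")   # ~0.999
```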

And this is precisely what happened with I. sakaiensis. However, microbial evolution is not controversial. Most creationists and intelligent design proponents acknowledge evolution at this scale. In a sense, it is not surprising that single-celled microbes can evolve, given their extremely large population sizes and capacity to take up large pieces of DNA from their surroundings and incorporate it into their genomes.

Yet, I. sakaiensis is still I. sakaiensis. In fact, the similarity between PETase and cutinases indicates that only a few amino acid changes are needed to explain the evolutionary origin of the new enzymes. Along these lines, it is important to note that both cutinase and PETase cleave ester linkages. The difference between the two enzymes involves subtle structural differences triggered by altering a few amino acids. In other words, the evolution of a PET-degrading bacterium is easy to accomplish through a form of biochemical microevolution.

But just because microbes can undergo limited evolution at a biochemical level does not mean that evolutionary mechanisms can account for the origin of biochemical systems and the origin of life. That is an unwarranted leap. This study is evidence for microbial evolution, nothing more.

Though this advance can help us in our planetary stewardship role, this study does not provide the type of evidence needed to explain the origin of biochemistry and, hence, the origin of life through evolutionary means. Nor does it provide the type of evidence needed to explain the evolutionary origin of life’s major groups. Evolutionary biologists must develop appropriate evidence for these putative transformations, and so far, they haven’t.

Evidence of microbial evolution in action is not evidence for the evolutionary paradigm.

Resources:

Endnotes

  1. Shosuke Yoshida et al., “A Bacterium That Degrades and Assimilates Poly(ethylene terephthalate),” Science 351 (March 11, 2016): 1196–99, doi:10.1126/science.aad6359.
  2. Harry P. Austin et al., “Characterization and Engineering of a Plastic-Degrading Aromatic Polyesterase,” Proceedings of the National Academy of Sciences, USA (April 17, 2018): preprint, doi:10.1073/pnas.1718804115.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/07/18/do-plastic-eating-bacteria-dump-the-case-for-creation