Endosymbiont Hypothesis and the Ironic Case for a Creator


BY FAZALE RANA – DECEMBER 12, 2018

i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would expect it to have a place in science. Ironically, though, that is just what has happened. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But, they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the investigators created a non-natural version of the bacterium E. coli designed to take up permanent residence inside yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) The researchers hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history: namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.


Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that can no longer produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the compound’s biosynthesis. Without this metabolic capacity, the strain depends on an exogenous source of thiamin to survive. (Because the E. coli genome encodes a transporter protein that can pump thiamin into the cell from the exterior environment, the bacterium can grow if an external supply of thiamin is available.) When the bacteria are incorporated into yeast cells, thiamin in the yeast cytoplasm becomes that exogenous source, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In this way, the E. coli cells become established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cells and the yeast cells providing thiamin to the bacterial cells.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cells, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grew slowly and stood to benefit from E. coli endosymbionts with the engineered capacity to transport ATP from their cellular interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doubling. To demonstrate the potential utility of this system for studying symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the endosymbiont’s capacity to make the key metabolic intermediate NAD and the amino acid serine. These triply deficient E. coli strains survived in the yeast cells by taking up the needed nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.


Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiont hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

Resources

Endnotes

  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018): doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/12/endosymbiont-hypothesis-and-the-ironic-case-for-a-creator

The Optimal Design of the Genetic Code


BY FAZALE RANA – OCTOBER 3, 2018

Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

–William Paley, Natural Theology

In his classic work Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about through the work of human agents, that is, intelligent designers. And because biological systems are also contrivances, they, too, must come about via the work of a Creator.

For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. For Paley, if the only example of a biological contrivance available to us were the eye, its sophisticated design and elegant complexity alone would justify the “necessity of an intelligent creator” to explain its origin.

As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

As with Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet, in particular, I am struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified to conclude that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)

Proteins

The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.


Figure 1: The Amino Acids. Image credit: Shutterstock

Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.
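
To make the idea of a chemical and physical profile along the chain a bit more concrete, here is a minimal Python sketch of my own (not from the article). It assumes the standard Kyte-Doolittle hydropathy scale and an invented peptide sequence, and it averages hydropathy over a sliding window, giving a crude picture of which stretches of a chain are water-avoiding and which are water-loving, one of the properties that influences folding.

```python
# Kyte-Doolittle hydropathy values for the 20 protein amino acids (one-letter codes).
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
              "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
              "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
              "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_profile(sequence, window=5):
    """Average hydropathy over a sliding window moved along the chain."""
    half = window // 2
    values = [HYDROPATHY[aa] for aa in sequence]
    return [round(sum(values[i - half:i + half + 1]) / window, 2)
            for i in range(half, len(values) - half)]

# A short, invented peptide purely for illustration.
print(hydropathy_profile("MKTAYIAWPLLVG"))
```

The five-residue window here is arbitrary; published hydropathy plots typically use wider windows.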

DNA

The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like these biomolecules, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands are arranged parallel to one another with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.


Figure 2: The Structure of DNA. Image credit: Shutterstock
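
As a small illustration of the antiparallel arrangement described above, the sketch below (my own, not the author's) constructs the partner of a short DNA strand using the standard Watson-Crick pairing rules, A with T and G with C. Because the two strands run in opposite directions, the partner written in the conventional 5'-to-3' direction is the reverse complement.

```python
# Standard Watson-Crick base pairing: A pairs with T, G pairs with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the antiparallel partner strand, written 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGGCTGAA"))  # prints TTCAGCCAT
```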

As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

The Genetic Code

A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides, and pairs of nucleotides fall short as well, since they yield only 4 × 4 = 16 combinations. The cell addresses this mismatch by using a code made up of groupings of three nucleotides (4 × 4 × 4 = 64 possible triplets) to specify the 20 different amino acids.

The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.
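
The redundancy is easy to see by tallying the standard codon table. In the short Python sketch below (my illustration, not part of the article), the standard genetic code is written as a 64-letter string, one amino acid letter per codon in the conventional T, C, A, G ordering, and the codons assigned to each amino acid are counted.

```python
import itertools
from collections import Counter

BASES = "TCAG"
# The standard genetic code: one amino acid letter per codon, codons ordered
# TTT, TTC, TTA, TTG, TCT, ... ("*" marks the three stop codons).
AA_STRING = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
CODE = dict(zip(CODONS, AA_STRING))

counts = Counter(CODE.values())
print(counts["L"], counts["S"], counts["R"])  # 6 6 6: leucine, serine, arginine
print(counts["M"], counts["W"])               # 1 1: methionine, tryptophan
print(counts["*"])                            # 3 stop codons
```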

Interestingly, some codons, called stop codons or nonsense codons, specify no amino acids. (For example, the codon UGA is a stop codon.) These codons always occur at the end of the gene, informing the cell where the protein chain ends.

Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids but also “tell” the cell where a protein chain begins. For example, the codon GUG encodes the amino acid valine and can also specify the starting point of a protein.


Figure 3: The Genetic Code. Image credit: Shutterstock

The Optimal Genetic Code

Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—a mere “frozen accident.” Instead, it looked to them as though a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.2
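
The flavor of this kind of calculation can be captured in a short simulation. The sketch below is my own minimal version, not the published procedure: it scores a code by the mean squared change in Kyte-Doolittle hydropathy across all single-base substitutions between sense codons, and it generates alternative codes by shuffling which amino acid each synonymous codon block encodes, which preserves the code's redundancy pattern. The cited studies used amino acid polar requirement values and, later, weighted mutation schemes, so the numbers produced here are only illustrative.

```python
import itertools
import random

BASES = "TCAG"
# Standard genetic code as one amino acid letter per codon, in TCAG order.
AA_STRING = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
CODE = dict(zip(CODONS, AA_STRING))

# Kyte-Doolittle hydropathy, used here as a stand-in for the polar
# requirement values used in the published analyses.
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
              "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
              "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
              "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def error_cost(code):
    """Mean squared hydropathy change over all single-base substitutions
    that turn one sense codon into another (stop codons are skipped)."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbor = code[codon[:pos] + base + codon[pos + 1:]]
                if neighbor == "*":
                    continue
                total += (HYDROPATHY[aa] - HYDROPATHY[neighbor]) ** 2
                count += 1
    return total / count

def shuffled_code(rng):
    """Randomly reassign the 20 amino acids among the 20 synonymous codon
    blocks, keeping the block structure and the stop codons fixed."""
    amino_acids = sorted(set(AA_STRING) - {"*"})
    reassigned = amino_acids[:]
    rng.shuffle(reassigned)
    mapping = dict(zip(amino_acids, reassigned))
    return {codon: (aa if aa == "*" else mapping[aa]) for codon, aa in CODE.items()}

rng = random.Random(0)
natural = error_cost(CODE)
samples = [error_cost(shuffled_code(rng)) for _ in range(10000)]
better = sum(cost <= natural for cost in samples)
print(f"natural code cost: {natural:.2f}")
print(f"random codes doing at least as well: {better} of {len(samples)}")
```

The percentile such a sketch reports depends heavily on the property scale and the mutation weighting chosen, which is precisely why the published studies took care over both.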

Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor in their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

It could be argued that the genetic code’s error-minimization properties are even more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution, with the naturally occurring genetic code’s capacity falling outside that distribution. Researchers estimate that 10^18 (a quintillion) possible genetic codes possess the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of the 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
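
A number of that magnitude falls out naturally if "the same type and degree of redundancy" is read as keeping the synonymous codon blocks fixed and permuting which of the 20 amino acids each block encodes; that reading is my own gloss on where the estimate comes from, not a derivation given in the article.

```python
import math

# Ways to reassign the 20 amino acids among 20 fixed synonymous codon blocks.
print(math.factorial(20))  # 2432902008176640000, roughly 2.4 x 10^18
```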

Frameshift Mutations

Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.


Figure 4: Types of Mutations. Image credit: Shutterstock
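
A toy example makes the reading-frame shift described above easy to see. The sketch below (my illustration, using an invented sequence) inserts a single base and reprints the codon groupings: every codon downstream of the insertion changes, and in this case the second codon even becomes the stop codon TGA.

```python
def codons(seq):
    """Split a nucleotide sequence into successive triplets (the reading frame)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCTGAAACCTTG"
mutated = original[:4] + "T" + original[4:]  # insert one base after the 4th position

print(codons(original))  # ['ATG', 'GCT', 'GAA', 'ACC', 'TTG']
print(codons(mutated))   # ['ATG', 'GTC', 'TGA', 'AAC', 'CTT']
```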

The Genetic Code Is Optimized to Withstand Frameshift Mutations

Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

The triple optimization of the genetic code arises from its redundancy and the specific codon assignments. Over 10^18 possible genetic codes exist, and any one of them could have been “selected” as the code in nature. Yet the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.

Resources

Endnotes

  1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
  2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
  3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online publication, doi:10.1101/gr.5987307.
  5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/03/the-optimal-design-of-the-genetic-code

Protein Amino Acids Form a “Just-Right” Set of Biological Building Blocks


BY FAZALE RANA – FEBRUARY 21, 2018

Like most kids, I had a set of Lego building blocks. But, growing up in the 1960s, the Lego sets were nothing like the ones today. I am amazed at how elaborate and sophisticated Legos have become, consisting of interlocking blocks of various shapes and sizes, gears, specialty parts, and figurines—a far cry from the square and rectangular blocks that made up the Lego sets of my youth. The most imaginative things I could ever hope to build were long walls and high towers.

It goes to show: the set of building blocks makes all the difference in the world.

This truism applies to the amino acid building blocks that make up proteins. As it turns out, proteins are built from a specialty set of amino acids that have the just-right set of properties to make life possible, as recent work by researchers from Germany attests.1 From my vantage point as a biochemist and a Christian, the just-right amino acid composition of proteins evinces intelligent design and is part of the reason I think a Creator must have played a direct role in the origin and design of life.

Why is the Same Set of Twenty Amino Acids Used to Build Proteins?

It stands as one of the most important insights about protein structure discovered by biochemists: The set of amino acids used to build proteins is universal. In other words, the proteins found in every organism on Earth are made up of the same 20 amino acids.

Yet, hundreds of amino acids exist in nature. And, this abundance prompts the question: Why these 20 amino acids? From an evolutionary standpoint, the set of amino acids used to build proteins should reflect:

1) the amino acids available on early Earth, generated by prebiotic chemical reactions;

2) the historically contingent outworking of evolutionary processes.

In other words, evolutionary mechanisms would have cobbled together an amino acid set that works “just good enough” for life to survive, but nothing more. No one would expect evolutionary processes to piece together a “just-right,” optimal set of amino acids. If evolutionary processes shaped the amino acid set used to build proteins, these biochemical building blocks should be much like the unsophisticated Lego sets little kids played with in the 1960s.

An Optimal Set of Amino Acids

But, contrary to this expectation, in the early 1980s biochemists discovered that an exquisite molecular rationale undergirds the amino acid set used to make proteins. Every aspect of the amino acid structure has to be precisely the way it is for life to be possible. On top of that, researchers from the University of Hawaii have conducted a quantitative comparison of the range of chemical and physical properties possessed by the 20 protein-building amino acids versus random sets of amino acids that could have been selected from early Earth’s hypothetical prebiotic soup.2 They concluded that the set of 20 amino acids is optimal. It turns out that the set of amino acids found in biological systems possesses the “just-right” properties that evenly and uniformly vary across a broad range of size, charge, and hydrophobicity. They also showed that the amino acids selected for proteins are a “highly unusual set of 20 amino acids; a maximum of 0.03% random sets outperformed the standard amino acid alphabet in two properties, while no single random set exhibited greater coverage in all three properties simultaneously.”3

A New Perspective on the 20 Protein Amino Acids

Beyond charge, size, and hydrophobicity, the German researchers wondered if quantum mechanical effects play a role in dictating the universal set of 20 protein amino acids. To address this question, they examined the gap between the HOMO (highest occupied molecular orbital) and the LUMO (lowest unoccupied molecular orbital) for the protein amino acids. The HOMO-LUMO gap is one of the quantum mechanical determinants of chemical reactivity. More reactive molecules have smaller HOMO-LUMO gaps than molecules that are relatively nonreactive.

The German biochemists discovered that the HOMO-LUMO gap is small for 7 of the 20 amino acids (among them histidine, phenylalanine, cysteine, methionine, tyrosine, and tryptophan), and hence these molecules display a high level of chemical reactivity. Interestingly, some biochemists think that these 7 amino acids are not necessary to build proteins. Previous studies have demonstrated that a wide range of foldable, functional proteins can be built from only 13 amino acids (glycine, alanine, valine, leucine, isoleucine, proline, serine, threonine, aspartic acid, glutamic acid, asparagine, lysine, and arginine). As it turns out, this subset of 13 amino acids has a relatively large HOMO-LUMO gap and, therefore, is relatively unreactive. This suggests that the reactivity of the remaining amino acids, including histidine, phenylalanine, cysteine, methionine, tyrosine, and tryptophan, may be part of the reason for the inclusion of the 7 in the universal set of 20.

As it turns out, these amino acids readily react with the peroxy free radical, a highly corrosive chemical species that forms when oxygen is present in the atmosphere. The German biochemists believe that when these 7 amino acids reside on the surface of proteins, they play a protective role, keeping the proteins from oxidative damage.

As I discussed in a previous article, these 7 amino acids contribute in specific ways to protein structure and function. And they contribute to the optimal set of chemical and physical properties displayed by the universal set of 20 amino acids. And now, based on the latest work by the German researchers, it seems that the amino acids’ newly recognized protective role against oxidative damage adds to their functional and structural significance in proteins.

Interestingly, because of the universal nature of biochemistry, these 7 amino acids must have been present in the proteins of the last universal common ancestor (LUCA) of all life on Earth. And yet, there was little or no oxygen present on early Earth, rendering the protective effect of these amino acids unnecessary. The importance of their small HOMO-LUMO gaps would not have been realized until much later in life’s history, when oxygen levels became elevated in Earth’s atmosphere. In other words, the inclusion of these amino acids in the universal set at life’s start seemingly anticipates future events in Earth’s history.

Protein Amino Acids Chosen by a Creator

The optimality, foresight, and molecular rationale undergirding the universal set of protein amino acids is not expected if life had an evolutionary origin. But, it is exactly what I would expect if life stems from a Creator’s handiwork. As I discuss in The Cell’s Design, objects and systems created and produced by human designers are typically well thought out and optimized. Both are indicative of intelligent design. In human designs, optimization is achieved through foresight and planning. Optimization requires inordinate attention to detail and careful craftsmanship. By analogy, the optimized biochemistry, epitomized by the amino acid set that makes up proteins, rationally points to the work of a Creator.

Resources

Endnotes

  1. Matthias Granhold et al., “Modern Diversification of the Amino Acid Repertoire Driven by Oxygen,” Proceedings of the National Academy of Sciences USA 115 (January 2, 2018): 41–46, doi:10.1073/pnas.1717100115.
  2. Gayle K. Philip and Stephen J. Freeland, “Did Evolution Select a Nonrandom ‘Alphabet’ of Amino Acids?” Astrobiology 11 (April 2011): 235–40, doi:10.1089/ast.2010.0567.
  3. Philip and Freeland, “Did Evolution Select,” 235–40.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/02/21/protein-amino-acids-form-a-just-right-set-of-biological-building-blocks

Is the Laminin “Cross” Evidence for a Creator?


BY FAZALE RANA – JANUARY 31, 2018

As I interact with people on social media and travel around the country to speak on the biochemical evidence for a Creator, I am frequently asked to comment on laminin.1 The people who mention this protein are usually quite excited, convinced that its structure provides powerful scientific evidence for the Christian faith. Unfortunately, I don’t agree.

Motivating this unusual question is the popularized claim of a well-known Christian pastor that laminin’s structure provides physical evidence that the God of the Bible created human beings and also sustains our lives. While I wholeheartedly believe God did create and does sustain human life, laminin’s apparent cross-shape does not make the case.

Laminin is one of the key components of the basal lamina, a thin sheet-like structure that surrounds cells in animal tissue. The basal lamina is part of the extracellular matrix (ECM). This structure consists of a meshwork of fibrous proteins and polysaccharides secreted by the cells. It forms the space between cells in animal tissue. The ECM carries out a wide range of functions that include providing anchor points and support for cells.

Laminin is a relatively large protein made of three different protein subunits that combine to form a t-shaped structure when the flexible rod-like regions of laminin are fully extended. Each of the four “arms” of laminin contains sites that allow this biomolecule to bind to other laminin molecules, other proteins (like collagen), and large polysaccharides. Laminin also provides a binding site for proteins called integrins, which are located in the cell membrane.


Figure: The structure of laminin. Image credit: Wikipedia

Laminin’s architecture and binding sites make this protein ideally suited to interact with other proteins and polysaccharides to form a network called the basal reticulum and to anchor cells to its biochemical scaffolding. The basal reticulum helps hold cells together to form tissues and, in turn, helps cement that tissue to connective tissues.

The cross-like shape of laminin and the role it plays in holding tissues together has prompted the claim that this biomolecule provides scientific support for passages such as Colossians 1:15–17 and shows how the God of the Bible must have made humans and continues to sustain them.

I would caution Christians against using this “argument.” I see a number of problems with it. (And so do many skeptics.)

First, the cross shape is a simple structure found throughout nature. So, it’s probably not a good idea to attach too much significance to laminin’s shape. The t configuration makes laminin ideally suited to connect proteins to each other and cells to the basal reticulum. This is undoubtedly the reason for its structure.

Second, the cross shape of laminin is an idealized illustration of the molecule. Portraying complex biomolecules in simplified ways is a common practice among biochemists. Depicting laminin in this extended form helps scientists visualize and catalog the binding sites along its four arms. This configuration should not be interpreted to represent its actual shape in biological systems. In the basal reticulum, laminin adopts all sorts of shapes that bear no resemblance to a cross. In fact, it’s much more common to observe laminin in a swastika configuration than in a cross-like one. Even electron micrographs of isolated laminin molecules that appear cross-shaped may be misleading. Their shape is likely an artifact of sample preparation. I have seen other electron micrographs that show laminin adopting a variety of twisted shapes that, again, bear no resemblance to a cross.

Finally, laminin is not the only molecule “holding things together.” A number of other proteins and polysaccharides are also indispensable components of the basal reticulum. None of these molecules is cross-shaped.

As I argue in my book, The Cell’s Design, the structure and operation of biochemical systems provide some of the most potent support for a Creator’s role in fabricating living systems. Instead of pointing to superficial features of biomolecules such as the “cross-shaped” architecture of laminin, there are many more substantive ways to use biochemistry to argue for the necessity of a Creator and for the value he places on human life. As a case in point, the salient characteristics of biochemical systems identically match those features we would recognize immediately as evidence for the work of a human design engineer. The close similarity between biochemical systems and the devices produced by human designers logically compels this conclusion: life’s most fundamental processes and structures stem from the work of an intelligent, intentional Agent.

When Christians invest the effort to construct a careful case for the Creator, skeptics and seekers find it difficult to deny the powerful evidence from biochemistry and other areas of science for God’s existence.

Resources:

Endnotes

  1. This article was originally published in the April 1, 2009, edition of New Reasons to Believe.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/01/31/is-the-laminin-cross-evidence-for-a-creator

Fatty Acids Are Beautiful


BY FAZALE RANA – NOVEMBER 22, 2017

Who says that fictions onely and false hair
Become a verse? Is there in truth no beauty?
Is all good structure in a winding stair?
May no lines passe, except they do their dutie
Not to a true, but painted chair?

George Herbert, “Jordan (I)”

I doubt the typical person would ever think fatty acids are a thing of beauty. In fact, most people try to do everything they can to avoid them—at least in their diets. But, as a biochemist who specializes in lipids (a class of biomolecules that includes fatty acids) and cell membranes, I am fascinated by these molecules—and by the biochemical and cellular structures they form.

I know, I know—I’m a science geek. But for me, the chemical structures and the physicochemical properties of lipids are as beautiful as an evening sunset. As an expert, I thought I knew most of what there is to know about fatty acids, so I was surprised to learn that researchers from Germany recently uncovered an elegant mathematical relationship that explains the structural makeup of fatty acids.1 From my vantage point, this newly revealed mathematical structure boggles my mind, providing new evidence for a Creator’s role in bringing life into existence.

Fatty Acids

To a first approximation, fatty acids are relatively simple compounds, consisting of a carboxylic acid head group and a long-chain hydrocarbon tail.


Structure of two typical fatty acids
Image credit: Edgar181/Wikimedia Commons

Despite their structural simplicity, a bewildering number of fatty acid species exist. For example, the hydrocarbon chain of fatty acids can vary in length from 1 carbon atom to over 30. One or more double bonds can occur at varying positions along the chain, and the double bonds can be either cis or trans in geometry. The hydrocarbon tails can be branched and can be modified by carbonyl groups and by hydroxyl substituents at varying points along the chain. As the hydrocarbon chains become longer, the number of possible structural variants increases dramatically.

How Many Fatty Acids Exist in Nature?

This question takes on an urgency today because advances in analytical techniques now make it possible for researchers to identify and quantify the vast number of lipid species found in biological systems, birthing the discipline of lipidomics. Investigators are interested in understanding how lipid compositions vary spatially and temporally in biological systems and how these compositions change in response to altered physiological conditions and pathologies.

To process and make sense of the vast amount of data generated in lipidomics studies, biochemists need some understanding of the number of lipid species that are theoretically possible. Recently, researchers from Friedrich Schiller University in Germany took on this challenge—at least in part—by attempting to calculate the number of chemical species that exist for fatty acids varying in size from 1 to 30 carbon atoms.

Fatty Acids and Fibonacci Numbers

To accomplish this objective, the German researchers developed mathematical equations that relate the number of carbon atoms in fatty acids to the number of structural variants (isomers). They discovered that this relationship conforms to the Fibonacci series, with the number of possible fatty acid species increasing by a factor of 1.618—the golden mean—for each carbon atom added to the fatty acid. Though not immediately evident when first examining the wide array of fatty acids found in nature, deeper analysis reveals that a beautiful yet simple mathematical structure underlies the seemingly incomprehensible structural diversity of these biomolecules.
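
One simple combinatorial model shows how a Fibonacci count can arise from a local chemical constraint. Whether this is exactly the enumeration the German team used is my assumption, so treat the sketch below as illustrative only: mark each carbon-carbon bond along the chain as single or double, forbid two double bonds from sitting next to each other (no cumulated double bonds), and count the allowed arrangements. The count obeys the Fibonacci recurrence, and the ratio of successive counts approaches the golden mean.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def arrangements(n_bonds):
    """Ways to mark n consecutive C-C bonds single or double when two double
    bonds may not be adjacent. Satisfies a(n) = a(n-1) + a(n-2)."""
    if n_bonds == 0:
        return 1   # only the fully saturated chain
    if n_bonds == 1:
        return 2   # the lone bond can be single or double
    # Last bond single: anything allowed on the first n-1 bonds.
    # Last bond double: the bond before it must be single.
    return arrangements(n_bonds - 1) + arrangements(n_bonds - 2)

counts = [arrangements(n) for n in range(2, 16)]
print(counts)                    # 3, 5, 8, 13, 21, ... the Fibonacci numbers
print(counts[-1] / counts[-2])   # ~1.618, the golden mean
```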

This discovery indicates that it is unlikely that the fatty acid compositions found in nature reflect the haphazard outcome of an undirected, historically contingent evolutionary history, as many biochemists are prone to think. Instead, the fatty acids found throughout the biological realm appear to be fundamentally dictated by the laws of nature. It is provocative to me that the fatty acid diversity produced by those laws comprises precisely the isomers needed for life to be possible—a fitness to purpose, if you will.

Understanding this mathematical relationship and knowing the theoretical number of fatty acid species will certainly aid biochemists working in lipidomics. But for me, the real significance of these results lies in the philosophical and theological arenas.

The Mathematical Beauty of Fatty Acids

The golden mean occurs throughout nature. It describes, for example, the spiral patterns found in snail shells and the arrangement of flowers and leaves in plants, highlighting the pervasiveness of mathematical structures and patterns that underlie many aspects of the world in which we live.

But there is more. As it turns out, we perceive the golden mean to be a thing of beauty. In fact, architects and artists often make use of the golden mean in their work because of its deeply aesthetic qualities.

Everywhere we look in nature—whether the spiral arms of galaxies, the shell of a snail, or the petals of a flower—we see a grandeur so great that we are often moved to our very core. This grandeur is not confined to the elements of nature we perceive with our senses; it also exists in the underlying mathematical structure of nature, such as the widespread occurrence of the Fibonacci sequence and the golden mean. And it is remarkable that this beautiful mathematical structure even extends to the relationship between the number of carbon atoms in a fatty acid and the number of its isomers.

As a Christian, nature’s beauty—including the elegance exemplified by the mathematically dictated composition of fatty acids—prompts me to worship the Creator. But this beauty also points to the reality of God’s existence and supports the biblical view of humanity. If God created the universe, then it is reasonable to expect it to be a beautiful universe. Yet, if the universe came into existence through mechanism alone, there is no reason to think it would display beauty. In other words, the beauty in the world around us signifies the Divine.

Furthermore, if the universe originated through uncaused physical mechanisms, there is no reason to think that humans would possess an aesthetic sense. But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

Resources to Dig Deeper

Endnotes

  1. Stefan Schuster, Maximilian Fichtner, and Severin Sasso, “Use of Fibonacci Numbers in Lipidomics—Enumerating Various Classes of Fatty Acids,” Scientific Reports 7 (January 2017): 39821, doi:10.1038/srep39821.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2017/11/22/fatty-acids-are-beautiful

The Human Genome: Copied by Design


BY FAZALE RANA – SEPTEMBER 19, 2017

The days my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.

But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.

This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers around the question: Is the biochemical activity measured for the human genome merely biochemical noise or is it productive for the cell? Or to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?

The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?

The ENCODE Project

The ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome—reported phase II results in the fall of 2012. To the surprise of many, the ENCODE project reported that around 80% of the human genome displays biochemical activity, and hence function, with the expectation that this percentage should increase with phase III of the project.

If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).

Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE results, citing technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints go here, here, and here.)

Is Biochemical Activity the Same Thing As Function?

One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.

When I first learned how many evolutionary biologists were interpreting the ENCODE results, I was skeptical. As a biochemist, I am well aware that living systems could not tolerate such high levels of transcriptional noise.

Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

Most RNA Transcripts Are Functional

Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.

For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece between June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2

Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4

Copied by Design

Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.

We have come a long way from the early days of the Human Genome Project. When the project was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. And in doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.

But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.

Resources

Endnotes

  1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
  2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
  3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
  4. RIKEN, “Improved Gene Expression Atlas Shows that Many Human Long Non-Coding RNAs May Actually Be Functional,” ScienceDaily, March 1, 2017, www.sciencedaily.com/releases/2017/03/170301132018.htm.

Dollo’s Law at Home with a Creation Model, Reprised*


BY FAZALE RANA – SEPTEMBER 12, 2017

*This article is an expanded and updated version of an article published in 2011 on reasons.org.

Published posthumously, Thomas Wolfe’s 1940 novel, You Can’t Go Home Again—considered by many to be his most significant work—explores how brutally unfair the passage of time can be. In the finale, George Webber (the story’s protagonist) concedes, “You can’t go back home” to family, childhood, familiar places, dreams, and old ways of life.

In other words, there’s an irreversible quality to life. Call it the arrow of time.

Like Wolfe, most evolutionary biologists believe there is an irreversibility to life’s history and the evolutionary process. In fact, this idea is codified in Dollo’s Law, which states that an organism cannot return, even partially, to a previous evolutionary stage occupied by one of its ancestors. Yet, several recent studies have uncovered what appear to be violations of Dollo’s Law. These violations call into question the sufficiency of the evolutionary paradigm to fully account for life’s history. On the other hand, the return to “ancestral states” finds an explanation in an intelligent design/creation model approach to life’s history.

Dollo’s Law

French paleontologist Louis Dollo formulated the law that bears his name in 1893 before the advent of modern-day genetics, basing it on patterns he unearthed from the fossil record. Today, his idea finds undergirding in contemporary understanding of genetics and developmental biology.

Evolutionary biologist Richard Dawkins explains the modern-day conception of Dollo’s Law this way:

“Dollo’s Law is really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice . . . in either direction. A single mutational step can easily be reversed. But for larger numbers of mutational steps . . . mathematical space of all possible trajectories is so vast that the chance of two trajectories ever arriving at the same point becomes vanishingly small.”1

If a biological trait is lost during the evolutionary process, then the genes and developmental pathways responsible for that feature will eventually degrade, because they are no longer under selective pressure. In 1994, using mathematical modeling, researchers from Indiana University determined that once a biological trait is lost, the corresponding genes can be “reactivated” with reasonable probability over time scales of five hundred thousand to six million years. But once a time span of ten million years has transpired, unexpressed genes and dormant developmental pathways become permanently lost.2

In 2000, a scientific team from the University of Oregon offered a complementary perspective on the timescale for evolutionary reversals when they calculated how long it takes for a duplicated gene to lose function.3 (Duplicated genes serve as a proxy for dormant genes rendered useless because the trait they encode has been lost.) According to the evolutionary paradigm, once a gene becomes duplicated, it is no longer under the influence of natural selection. That is, it undergoes neutral evolution, and eventually becomes silenced as mutations accrue. As it turns out, the half-life for this process is approximately four million years. To put it another way, sixteen to twenty-four million years after the duplication event, the duplicated gene will have completely lost its function. Presumably, this result applies to dormant, unexpressed genes rendered unnecessary because the trait they specify is lost.
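
A back-of-the-envelope calculation shows how the quoted half-life translates into those longer figures, assuming (as a simplification of the cited modeling) that the silencing of duplicates follows simple exponential decay.

```python
half_life_myr = 4.0  # approximate half-life for a duplicated gene losing function

for t in (4, 8, 16, 24):
    fraction_functional = 0.5 ** (t / half_life_myr)
    print(f"after {t:2d} million years: ~{fraction_functional:.1%} still functional")
```

Four to six half-lives leave only a few percent or less of duplicates functional, consistent with the sixteen-to-twenty-four-million-year figure quoted above.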

Both scenarios assume neutral evolution and the accumulation of mutations in a clocklike manner. But what if the loss of gene function is advantageous? Collaborative work by researchers from Harvard University and NYU in 2007 demonstrated that loss of gene function can take place on the order of about one million years if natural selection influences gene loss.4 This research team studied the loss of eyes in a cave fish, the Mexican tetra. Because they live in a dark cave environment, eyes serve no benefit for these creatures. The team discovered that eye reduction offers an advantage for these fish because of the high metabolic cost associated with maintaining eyes. The reduced metabolic cost associated with eye loss accelerates the loss of gene function through the operation of natural selection.

Based on these three studies, it is reasonable to conclude that once a trait has been lost, the time limit for evolutionary reversals is on the order of about 20 million years.

The very nature of evolutionary mechanisms and the constraints of genetic mutations make it extremely improbable that evolutionary processes would allow an organism to revert to an ancestral state or to recover a lost biological trait. You can’t go home again.

Violations of Dollo’s Law

Despite this expectation, over the course of the last several years, researchers have uncovered several instances in which Dollo’s Law has been violated. A brief description of a handful of these occurrences follows:

The re-evolution of mandibular teeth in the frog genus Gastrotheca. This group is the only one that includes living frogs with true teeth on the lower jaw. When examined from an evolutionary framework, mandibular teeth were present in ancient frogs and then lost in the ancestor of all living frogs. It also looks as if teeth had been absent in frogs for 225 million years before they reappeared in Gastrotheca.5

The re-evolution of oviparity in sand boas. When viewed from an evolutionary perspective, it appears as if live birth (viviparity) evolved from egg-laying (oviparity) behaviors in reptiles several times. For example, estimates indicate that this evolutionary transition has occurred in snakes at least thirty times. As a case in point, there are 41 species of boas in the Old and New Worlds that give live birth. Yet two recently described sand boas, the Arabian sand boa (Eryx jayakari) and the Saharan sand boa (Eryx muelleri), lay eggs. Phylogenetic analysis carried out by researchers from Yale University indicates that egg-laying in these two species of sand boas re-evolved 60 million years after the transition to viviparity took place.6

The re-evolution of rotating sex combs in Drosophila. Sex combs are modified bristles unique to male fruit flies, used for courtship and mating. Rotating sex combs form when several rows of bristles rotate ninety degrees relative to the transverse orientation. In the ananassae fruit fly group, most of the twenty or so species have simple transverse sex combs; the two exceptions, Drosophila bipectinata and Drosophila parabipectinata, possess rotating sex combs. Phylogenetic analysis conducted by investigators from the University of California, Davis indicates that the rotating sex combs in these two species re-evolved twelve million years after being lost.7

The re-evolution of sexuality in mites belonging to the taxon Crotoniidae. Mites exhibit a wide range of reproductive modes, including parthenogenesis. In fact, this means of reproduction is prominent in the group Oribatida, which clusters into two subgroups that reproduce almost exclusively by parthenogenesis. However, residing within one of these clusters is the taxon Crotoniidae, which reproduces sexually. Based on an evolutionary analysis, a team of German researchers concluded that this group re-evolved the capacity for sexual reproduction.8

The re-evolution of shell coiling in limpets. From an evolutionary perspective, the coiled shell has been lost in gastropod lineages numerous times, producing the limpet form: a cap-shaped shell and a large foot. Evolutionary biologists have long thought that the loss of the coiled shell represents an evolutionary dead end. However, researchers from Venezuela have shown that coiled shell morphology re-evolved at least once in the calyptraeids, 20 to 100 million years after its loss.9

This short list gives just a few recently discovered examples of Dollo’s Law violations. Surveying the scientific literature, evolutionary biologist J. J. Wiens identified an additional eight examples in which Dollo’s Law was violated and determined that in all cases the lost trait reappeared after at least 20 million years had passed and in some instances after 120 million years had transpired.10

Violation of Dollo’s Law and the Theory of Evolution

Given that the evolutionary paradigm predicts that re-evolution of traits should not occur after the trait has been lost for twenty million years, the numerous discoveries of Dollo’s Law violations provide a basis for skepticism about the capacity of the evolutionary paradigm to fully account for life’s history. The problem is likely worse than it initially appears. J. J. Wiens points out that Dollo’s Law violations may be more widespread than imagined, but difficult to detect for methodological reasons.11

In response to this serious problem, evolutionary biologists have offered two ways to account for Dollo’s Law violations.12 The first is to question the validity of the evolutionary analyses that expose the violations. To put it another way, these scientists claim that the recently identified Dollo’s Law violations are artifacts of the analysis and not real. However, this work-around is unconvincing. The evolutionary biologists who discovered the various examples of Dollo’s Law violations were aware of this complication and took pains to ensure the validity of the analyses they performed.

Other evolutionary biologists argue that some genes and developmental modules serve more than one function. So, even though the trait specified by a gene or a developmental module is lost, the gene or module remains intact because it serves other roles. This retention makes it possible for traits to re-evolve, even after a hundred million years. Though reasonable, this explanation must still be viewed as speculative. Evolutionary biologists have yet to apply the same mathematical rigor to it as they have to estimating the timescale for loss of function in dormant genes. Such calculations are critical, given the expansive timescales involved in some of the Dollo’s Law violations.

Considering the nature of evolutionary processes, this response neglects the fact that genes and developmental pathways will continue to evolve under the auspices of natural selection once a trait is lost. Freed from the constraints of the lost function, the genes and developmental modules experience new evolutionary possibilities previously unavailable to them. The more functional roles a gene or developmental module assumes, the less freedom it has to evolve. Shedding one of those roles increases the likelihood that these genes and developmental pathways will become modified as the evolutionary process explores the new space available to it. In this scenario, it is reasonable to think that natural selection could modify the genes and developmental modules to such an extent that the lost trait would be just as unlikely to re-evolve as it would be if gene loss were a consequence of neutral evolution. In fact, the study of eye loss in the Mexican tetra suggests that the modification of these genes and developmental modules could occur at a faster rate when governed by natural selection rather than neutral evolution.

Violation of Dollo’s Law and the Case for Creation

While Dollo’s Law violations are problematic for the evolutionary paradigm, the re-evolution—or perhaps, more appropriately, the reappearance—of the same biological traits after their disappearance makes sense from a creation model/intelligent design perspective. The reappearance of biological systems could be understood as the work of the Creator. It is not unusual for engineers to reuse the same design or to revisit a previously used design feature in a new prototype. While there is an irreversibility to the evolutionary process, designers are not constrained in that way and can freely return to old designs.

Dollo’s Law violations are at home in a creation model, highlighting the value of this approach to understanding life’s history.

Endnotes

  1. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (New York: W.W. Norton, 2015), 94.
  2. Charles R. Marshall, Elizabeth C. Raff, and Rudolf A. Raff, “Dollo’s Law and the Death and Resurrection of Genes,” Proceedings of the National Academy of Sciences USA 91 (December 6, 1994): 12283–87.
  3. Michael Lynch and John S. Conery, “The Evolutionary Fate and Consequences of Duplicate Genes,” Science 290 (November 10, 2000): 1151–54, doi:10.1126/science.290.5494.1151.
  4. Meredith Protas et al., “Regressive Evolution in the Mexican Cave Tetra, Astyanax mexicanus,” Current Biology 17 (March 6, 2007): 452–54, doi:10.1016/j.cub.2007.01.051.
  5. John J. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs after More than 200 Million Years, and Re-evaluating Dollo’s Law,” Evolution 65 (May 2011): 1283–96, doi:10.1111/j.1558-5646.2011.01221.x.
  6. Vincent J. Lynch and Günter P. Wagner, “Did Egg-Laying Boas Break Dollo’s Law? Phylogenetic Evidence for Reversal to Oviparity in Sand Boas (Eryx: Boidae),” Evolution 64 (January 2010): 207–16, doi:10.1111/j.1558-5646.2009.00790.x.
  7. Thaddeus D. Seher et al., “Genetic Basis of a Violation of Dollo’s Law: Re-Evolution of Rotating Sex Combs in Drosophila bipectinata,” Genetics 192 (December 1, 2012): 1465–75, doi:10.1534/genetics.112.145524.
  8. Katja Domes et al., “Reevolution of Sexuality Breaks Dollo’s Law,” Proceedings of the National Academy of Sciences USA 104 (April 24, 2007): 7139–44, doi:10.1073/pnas.0700034104.
  9. Rachel Collin and Roberto Cipriani, “Dollo’s Law and the Re-Evolution of Shell Coiling,” Proceedings of the Royal Society B 270 (December 22, 2003): 2551–55, doi:10.1098/rspb.2003.2517.
  10. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
  11. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
  12. Rachel Collin and Maria Pia Miglietta, “Reversing Opinions on Dollo’s Law,” Trends in Ecology and Evolution 23 (November 2008): 602–9, doi:10.1016/j.tree.2008.06.013.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2017/09/12/dollos-law-at-home-with-a-creation-model-reprised

Is 75% of the Human Genome Junk DNA?

is75percentofthehumangenome
BY FAZALE RANA – AUGUST 29, 2017

By the rude bridge that arched the flood,
Their flag to April’s breeze unfurled,
Here once the embattled farmers stood,
And fired the shot heard round the world.

–Ralph Waldo Emerson, Concord Hymn

Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists about the amount of functional DNA sequences in the human genome.

Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

Junk DNA and the Creation vs. Evolution Battle

Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

When the draft sequence was first published in 2000, researchers thought only around 2–5% of the human genome consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists claim that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge against intelligent design/creationism.

But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

ENCODE Skeptics

The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

Calculating the Percentage of Functional DNA in the Human Genome

Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

Graur argues that junk DNA functions as a “sponge” absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

Historically, the replacement-level fertility rate for human beings has been two to three children per couple. Based on Graur’s modeling, maintaining a constant population size at this fertility rate requires 85–90% of the human genome to be junk DNA available to absorb deleterious mutations, which caps the functional portion of the genome at no more than 25%.

Graur also calculated that, if 80% of the human genome were functional, a minimum fertility rate of 15 children per couple would be required to maintain a constant population size. And according to his calculations, if 100% of the human genome displayed function, the minimum replacement-level fertility rate would have to be 24 children per couple.
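The logic driving these numbers can be illustrated with a toy mutation-load calculation. To be clear, the sketch below is not Graur’s actual model; it assumes a simple multiplicative-fitness treatment in which mean fitness falls off exponentially with the expected number of new deleterious mutations per offspring, and the parameter values are my own illustrative choices, picked only so that the outputs land in the same general ballpark as the figures quoted above.

```python
import math

# Illustrative, assumed parameters (not taken from Graur's paper):
NEW_MUTATIONS_PER_OFFSPRING = 100  # rough, commonly cited order of magnitude
FRACTION_DELETERIOUS = 0.025       # assumed share of mutations in functional DNA that are harmful

def required_fertility(functional_fraction):
    """Children per couple needed so that, on average, two offspring carry no new
    deleterious mutations, under a simple multiplicative-fitness (mutation-load) model."""
    u = NEW_MUTATIONS_PER_OFFSPRING * functional_fraction * FRACTION_DELETERIOUS
    unaffected = math.exp(-u)   # Poisson probability that an offspring escapes new deleterious mutations
    return 2.0 / unaffected     # a couple must replace itself with two such offspring

for f in (0.10, 0.25, 0.50, 0.80, 1.00):
    print(f"functional fraction {f:.0%}: about {required_fertility(f):.0f} children per couple")
```

Under these assumptions the required fertility climbs steeply as the functional fraction grows, which is the qualitative point Graur presses against the ENCODE results.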

He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.

Response to Graur

So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical.

1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results theoretically don’t make sense in light of the evolutionary paradigm, that is not a reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions. (Go here and here for two recent examples.)

To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE project determined that a significant fraction of the human genome is transcribed. They also measured high levels of protein binding.

ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Without minimizing these disruptive interactions, biochemical processes in the cell would grind to a halt. It is reasonable to think that the same considerations would apply to transcription factor binding with DNA.

2. Graur’s model employs some questionable assumptions.

Graur uses an unrealistically high rate for deleterious mutations in his calculations.

Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other regions of the genome that display function—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as binding sites for transcription factors, and (3) serve as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

3. The way Graur determines if DNA sequence elements are functional is questionable. 

Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is only functional if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, these sequences, if they are functional, will resist evolutionary change (due to natural selection) because any alteration would compromise the function of the sequence and endanger the organism. If deleterious, the sequence variations would be eliminated from the population due to the reduced survivability and reproductive success of organisms possessing those variants. Hence, functional sequences are those under the effects of selection.

In contrast, the ENCODE project employed a causal-role definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

  • transcription,
  • binding of transcription factors to DNA,
  • histone binding to DNA,
  • DNA binding by modified histones,
  • DNA methylation, and
  • three-dimensional interactions between enhancer sequences and genes.

In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequences must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.

So why does Graur insist on a selected-effect definition of function? For no other reason than a causal definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and increases the replacement level fertility rate.

4. Buffering against deleterious mutations is a function.

As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertion events in the genome. If insertion events are random, then the offending DNA is much more likely to insert itself into “junk DNA” regions instead of coding and regulatory sequences, thus protecting information-harboring regions of the genome.

If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be “junk.”

In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

Resources to Go Deeper

Endnotes

  1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.

DNA Replication Winds Up the Case for Intelligent Design

dnareplicationwindsup
BY FAZALE RANA – AUGUST 8, 2017

One of my classmates and friends in high school was a kid we nicknamed “Radar.” He was a cool kid who had special needs. He was mentally challenged. He was also funny and as good-hearted as they come, never causing any real problems—other than playing hooky from school, for days on end. Radar hated going to school.

When he eventually showed up, he would be sent to the principal’s office to explain his unexcused absences to Mr. Reynolds. And each time, Radar would offer the same excuse: his grandmother died. But Mr. Reynolds didn’t buy it—for obvious reasons. It didn’t require much investigation on the principal’s part to know that Radar was lying.

Skeptics have something in common with my friend Radar. They use the same tired excuse when presented with compelling evidence for design from biochemistry. Inevitably, they dismiss the case for a Creator by pointing out all the “flawed” designs in biochemical systems. But this excuse never sticks. Upon further investigation, claimed instances of bad design turn out to be elegant in virtually every case, as recent work by scientists from UC Davis illustrates.

These researchers accomplished an important scientific milestone by using single molecule techniques to observe the replication of a single molecule of DNA.1 Their unexpected insights have bearing on how we understand this key biochemical operation. The work also has important implications for the case for biochemical design.

For those familiar with DNA’s structure and replication process, you can skip the next two sections. But for those of you who are not, a little background information is necessary to appreciate the research team’s findings and their relevance to the creation-evolution debate.

DNA’s Structure

DNA consists of two molecular chains (called “polynucleotides”) aligned in an antiparallel fashion. (The two strands are arranged parallel to one another with the starting point of one strand of the polynucleotide duplex located next to the ending point of the other strand, and vice versa.) The paired molecular chains twist around each other, forming the well-known DNA double helix. The cell’s machinery generates the polynucleotide chains using four different nucleotides: adenosine, guanosine, cytidine, and thymidine, abbreviated as A, G, C, and T, respectively.

A special relationship exists between the nucleotide sequences of the two DNA strands. Biochemists say the DNA sequences of the two strands are complementary. When the DNA strands align, the adenine (A) side chains of one strand always pair with thymine (T) side chains from the other strand. Likewise, the guanine (G) side chains from one DNA strand always pair with cytosine (C) side chains from the other strand. Biochemists refer to these relationships as “base-pairing rules.” Consequently, if biochemists know the sequence of one DNA strand, they can readily determine the sequence of the other strand. Base-pairing plays a critical role in DNA replication.
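Because the base-pairing rules fully determine one strand from the other, the relationship can be expressed in a few lines of code. Here is a minimal sketch in Python; the sequence is an arbitrary example, and the function returns the partner strand written in its own (antiparallel) direction.

```python
# Base-pairing rules: A pairs with T, G pairs with C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(sequence):
    """Return the complementary DNA strand, read in the antiparallel direction."""
    return "".join(PAIRING[base] for base in reversed(sequence))

parent = "ATGGCTTAC"                 # arbitrary example sequence
print(complementary_strand(parent))  # prints GTAAGCCAT
```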

 

Image 1: DNA’s Structure

DNA Replication

Biochemists refer to DNA replication as a “template-directed, semiconservative process.” By “template-directed,” biochemists mean that the nucleotide sequences of the “parent” DNA molecule function as a template, directing the assembly of the DNA strands of the two “daughter” molecules using the base-pairing rules. By “semiconservative,” biochemists mean that after replication, each daughter DNA molecule contains one newly formed DNA strand and one strand from the parent molecule.

 

Image 2: Semiconservative DNA Replication

Conceptually, template-directed, semiconservative DNA replication entails the separation of the parent DNA double helix into two single strands. By using the base-pairing rules, each strand serves as a template for the cell’s machinery to use when it forms a new DNA strand with a nucleotide sequence complementary to the parent strand. Because each strand of the parent DNA molecule directs the production of a new DNA strand, two daughter molecules result. Each one possesses an original strand from the parent molecule and a newly formed DNA strand produced by a template-directed synthetic process.
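To make the template-directed, semiconservative logic concrete, here is another minimal sketch. Each daughter duplex keeps one parent strand and gains one newly synthesized complementary strand. The sequence is arbitrary, and the 5'/3' directionality bookkeeping is deliberately ignored in this toy model.

```python
# Base-pairing rules: A-T, G-C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize_complement(template):
    """Build a new strand against the template using the base-pairing rules
    (directionality bookkeeping is ignored in this toy model)."""
    return "".join(PAIRING[base] for base in template)

def replicate(parent_duplex):
    """Semiconservative replication: each daughter duplex contains one parent
    strand plus one newly made complementary strand."""
    strand_1, strand_2 = parent_duplex
    return (strand_1, synthesize_complement(strand_1)), (strand_2, synthesize_complement(strand_2))

parent = ("ATGGCTTAC", synthesize_complement("ATGGCTTAC"))
for daughter in replicate(parent):
    print(daughter)   # each daughter pairs an old parent strand with a new strand
```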

DNA replication begins at specific sites along the DNA double helix, called “replication origins.” Typically, prokaryotic cells have only a single origin of replication. More complex eukaryotic cells have multiple origins of replication.

The DNA double helix unwinds locally at the origin of replication to produce what biochemists call a “replication bubble.” During the course of replication, the bubble expands in both directions from the origin. Once the individual strands of the DNA double helix unwind and are exposed within the replication bubble, they are available to direct the production of the daughter strand. The site where the DNA double helix continuously unwinds is called the “replication fork.” Because DNA replication proceeds in both directions away from the origin, there are two replication forks within each bubble.

 

Image 3: DNA Replication Bubble

DNA replication can only proceed in a single direction, from the top of the DNA strand to the bottom. Because the strands that form the DNA double helix align in an antiparallel fashion with the top of one strand juxtaposed with the bottom of the other strand, only one strand at each replication fork has the proper orientation (bottom-to-top) to direct the assembly of a new strand, in the top-to-bottom direction. For this strand—referred to as the “leading strand”—DNA replication proceeds rapidly and continuously in the direction of the advancing replication fork.

DNA replication cannot proceed along the strand with the top-to-bottom orientation until the replication bubble has expanded enough to expose a sizable stretch of DNA. When this happens, DNA replication moves away from the advancing replication fork. DNA replication can only proceed a short distance for the top-to-bottom-oriented strand before the replication process has to stop and wait for more of the parent DNA strand to be exposed. When a sufficient length of the parent DNA template is exposed a second time, DNA replication can proceed again, but only briefly before it has to stop again and wait for more DNA to be exposed. The process of discontinuous DNA replication takes place repeatedly until the entire strand is replicated. Each time DNA replication starts and stops, a small fragment of DNA is produced.

Biochemists refer to these pieces of DNA (that will eventually compose the daughter strand) as “Okazaki fragments”—after the biochemist who discovered them. Biochemists call the strand produced discontinuously the “lagging strand” because DNA replication for this strand lags behind the more rapidly produced leading strand. One additional point: the leading strand at one replication fork is the lagging strand at the other replication fork since the replication forks at the two ends of the replication bubble advance in opposite directions.
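A toy simulation can also make the leading-strand/lagging-strand asymmetry concrete. In the sketch below, the fork exposes the parent strands a chunk at a time; the leading strand grows continuously, while the lagging strand accumulates as separate pieces (stand-ins for Okazaki fragments) that are only stitched together at the end. The chunk size and sequences are arbitrary, and directionality, primers, and the enzymes involved are all glossed over.

```python
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    # 5'/3' orientation details are deliberately ignored in this toy model.
    return "".join(PAIRING[b] for b in seq)

def replicate_fork(leading_template, lagging_template, chunk=4):
    """Toy model of one replication fork: as each stretch of parent DNA is exposed,
    the leading strand is extended continuously, while the lagging strand is made
    as separate fragments that are joined only at the end (the 'ligase' step)."""
    leading = ""
    okazaki_fragments = []
    for start in range(0, len(leading_template), chunk):
        leading += complement(leading_template[start:start + chunk])
        okazaki_fragments.append(complement(lagging_template[start:start + chunk]))
    lagging = "".join(okazaki_fragments)
    return leading, okazaki_fragments, lagging

leading, fragments, lagging = replicate_fork("ATGGCTTACGGA", "TACCGAATGCCT")
print(leading)    # TACCGAATGCCT  (continuous new strand)
print(fragments)  # ['ATGG', 'CTTA', 'CGGA']  (lagging strand made in pieces)
print(lagging)    # ATGGCTTACGGA  (fragments joined into one strand)
```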

An ensemble of proteins is needed to carry out DNA replication. Once the origin recognition complex (which consists of several different proteins) identifies the replication origin, a protein called “helicase” unwinds the DNA double helix to form the replication fork.

 

Image 4: DNA Replication Proteins

Once the replication fork is established and stabilized, DNA replication can begin. Before the newly formed daughter strands can be produced, a small RNA primer must be produced. The protein that synthesizes new DNA by reading the parent DNA template strand—DNA polymerase—can’t start production from scratch. It must be primed. A massive protein complex, called the “primosome,” which consists of over 15 different proteins, produces the RNA primer needed by DNA polymerase.

Once primed, DNA polymerase will continuously produce DNA along the leading strand. However, for the lagging strand, DNA polymerase can only generate DNA in spurts to produce Okazaki fragments. Each time DNA polymerase generates an Okazaki fragment, the primosome complex must produce a new RNA primer.

Once DNA replication is completed, the RNA primers are removed from the continuous DNA of the leading strand and from the Okazaki fragments that make up the lagging strand. A protein with 5'-to-3' exonuclease activity removes the RNA primers. A different DNA polymerase fills in the gaps created by the removal of the RNA primers. Finally, a protein called a “ligase” connects all the Okazaki fragments together to form a continuous piece of DNA out of the lagging strand.

Are Leading and Lagging Strand Polymerases Coordinated?

Biochemists had long assumed that the activities of the leading and lagging strand DNA polymerase enzymes were coordinated. If not, then DNA replication of one strand would get too far ahead of the other, increasing the likelihood of mutations.

As it turns out, the research team from UC Davis discovered that the activities of the two polymerases are not coordinated. Instead, the leading and lagging strand DNA polymerase enzymes replicate DNA autonomously. To the researchers’ surprise, they learned that the leading strand DNA polymerase replicated DNA in bursts, suddenly stopping and starting. And when it did replicate DNA, the rate of production varied by a factor of ten. On the other hand, the researchers discovered that the rate of DNA replication on the lagging strand depended on the rate of RNA primer formation.

The researchers point out that if not for single molecule techniques—in which replication is characterized for individual DNA molecules—the autonomous behavior of leading and lagging strand DNA polymerases would not have been detected. Up to this point, biochemists have studied the replication process using a relatively large number of DNA molecules. These samples yield average replication rates for leading and lagging strand replication, giving the sense that replication of both strands is coordinated.
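The role of averaging is easy to demonstrate with a simulation. In the toy model below, each simulated polymerase switches at random between stalled and bursting states; this is only a caricature of the stop-and-start behavior reported in the study, with rates and switching probability chosen arbitrarily rather than fitted to the data. Any single trace is erratic, yet the average over many molecules looks smooth and steady, which is essentially what ensemble experiments report.

```python
import random

random.seed(1)

def single_polymerase_trace(steps=200, burst_rate=10, p_switch=0.05):
    """Toy single-molecule trace: the polymerase is either stalled (rate 0) or
    bursting (fixed rate), switching between the two states at random."""
    bursting = False
    position = 0
    trace = []
    for _ in range(steps):
        if random.random() < p_switch:
            bursting = not bursting
        position += burst_rate if bursting else 0
        trace.append(position)
    return trace

traces = [single_polymerase_trace() for _ in range(1000)]

# Ensemble average at each time point: a smooth, steady apparent rate,
# even though every individual trace starts and stops abruptly.
ensemble = [sum(t[i] for t in traces) / len(traces) for i in range(len(traces[0]))]

print("one molecule, last 5 points:  ", traces[0][-5:])
print("ensemble average, last 5 points:", [round(x, 1) for x in ensemble[-5:]])
```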

According to the researchers, this discovery is a “real paradigm shift, and undermines a great deal of what’s in the textbooks.”2 Because the DNA polymerase activity is not coordinated but autonomous, they conclude that the DNA replication process is a flawed design, driven by stochastic (random) events. The lack of coordination between the leading and lagging strands also means that leading strand replication can get ahead of the lagging strand, yielding long stretches of vulnerable single-stranded DNA.

Diminished Design or Displaced Design?

Even though this latest insight appears to undermine the elegance of the DNA replication process, other observations made by the UC Davis research team indicate that the evidence for design isn’t diminished, just displaced.

These investigators discovered that the activity of helicase—the enzyme that unwinds the double helix at the replication fork—somehow senses the activity of the DNA polymerase on the leading strand. When the DNA polymerase stalls, the activity of the helicase slows down by a factor of five until the DNA polymerase catches up. The researchers believe that another protein (called the “tau protein”) mediates the interaction between the helicase and DNA polymerase molecules. In other words, the interaction between DNA polymerase and the helicase compensates for the stochastic behavior of the leading strand polymerase, pointing to a well-designed process.

As already noted, the research team also learned that the rate of lagging strand replication depends on primer production. They determined that the rate of primer production exceeds the rate of DNA replication on the leading strand. This fortuitous coincidence ensures that as soon as enough of the bubble opens for lagging strand replication to continue, the primase can immediately lay down the RNA primer, restarting the process. It turns out that the rate of primer production is controlled by the primosome concentration in the cell, with primer production increasing as the number of primosome copies increases. The primosome concentration appears to be fine-tuned. If the concentration of this protein complex is too large, the replication process becomes “gummed up”; if too small, the disparity between leading and lagging strand replication becomes too great, exposing single-stranded DNA. Again, the fine-tuning of primosome concentration highlights the design of this cellular operation.

It is remarkable how two people can see things so differently. For scientists influenced by the evolutionary paradigm, the tendency is to dismiss evidence for design and, instead of seeing elegance, become conditioned to see flaws. Though DNA replication takes place in a haphazard manner, other features of the replication process appear to be engineered to compensate for the stochastic behavior of the DNA polymerases and, in the process, elevate the evidence for design.

And, that’s no lie.

Resources

Endnotes

  1. James E. Graham et al., “Independent and Stochastic Action of DNA Polymerases in the Replisome,” Cell 169 (June 2017): 1201–13, doi:10.1016/j.cell.2017.05.041.
  2. Bec Crew, “DNA Replication Has Been Filmed for the First Time, and It’s Not What We Expected,” ScienceAlert, June 19, 2017, https://sciencealert.com/dna-replication-has-been-filmed-for-the-first-time-and-it-s-stranger-than-we-thought.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2017/08/08/dna-replication-winds-up-the-case-for-intelligent-design

Can Intelligent Design Be Part of the Construct of Science?

canintelligentdesignbepart

BY FAZALE RANA – JUNE 27, 2017

“If this result stands up to scrutiny, it does indeed change everything we thought we knew about the earliest human occupation of the Americas.”1

This was the response of Christopher Stringer—a highly-regarded paleoanthropologist at the Natural History Museum in London—to the recent scientific claim that Neanderthals made their way to the Americas 100,000 years before the first modern humans.2

At this point, many anthropologists have expressed skepticism about this claim, because it requires them to abandon long-held ideas about the way the Americas were populated by modern humans. As Stringer cautions, “Many of us will want to see supporting evidence of this ancient occupation from other sites before we abandon the conventional model.”3

Yet, the archaeologists making the claim have amassed an impressive cache of evidence that points to Neanderthal occupation of North America.

As Stringer points out, this work has radical implications for anthropology. But, in my view, the importance of the work extends beyond questions relating to human migrations around the world. It demonstrates that intelligent design/creation models have a legitimate place in science.

The Case for Neanderthal Occupation of North America

In the early 1990s, road construction crews working near San Diego, CA, uncovered the remains of a single mastodon. Though the site was excavated from 1992 to 1993, scientists were unable to date the remains. Both radiocarbon and luminescence dating techniques failed.

Recently, researchers turned failure into success, age-dating the site to be about 130,000 years old, using uranium-series disequilibrium methods. This result shocked them because analysis at the site indicated that the mastodon remains were deliberately processed by hominids, most likely Neanderthals.

The researchers discovered that the mastodon bones displayed spiral fracture patterns that looked as if a creature, such as a Neanderthal, struck the bone with a rock—most likely to extract nutrient-rich marrow from the bones. The team also found rocks (called cobble) with the mastodon bones that bear markings consistent with having been used to strike bones and other rocks.

To confirm this scenario, the archaeologists took elephant and cow bones and broke them open with a hammerstone. In doing so, they produced the same type of spiral fracture patterns in the bones and the same type of markings on the hammerstone as those found at the archaeological site. The researchers also ruled out other possible explanations, such as wild animals creating the fracture patterns on the bones while scavenging the mastodon carcass.

Despite this compelling evidence, some anthropologists remain skeptical that Neanderthals—or any other hominid—modified the mastodon remains. Why? Not only does this claim fly in the face of the conventional explanation for the populating of the Americas by humans, but the sophistication of the tool kit does not match that produced by Neanderthals 130,000 years ago based on archaeological sites in Europe and Asia.

So, did Neanderthals make their way to the Americas 100,000 years before modern humans? An interesting debate will most certainly ensue in the years to come.

But, this work does make one thing clear: intelligent design/creation is a legitimate part of the construct of science.

A Common Skeptical Response to the Case for a Creator

Based on my experience, when confronted with scientific evidence for a Creator, skeptics will often summarily dismiss the argument by asserting that intelligent design/creation isn’t science and, therefore, it is not legitimate to draw the conclusion that a Creator exists from scientific advances.

Undergirding this objection is the conviction that science is the best, and perhaps the only, way to discover truth. By dismissing the evidence for God’s existence—insisting that it is nonscientific—they hope to undermine the argument, thereby sidestepping the case for a Creator.

There are several ways to respond to this objection. One way is to highlight the fact that intelligent design is part of the construct of science. This response is not motivated by a desire to “reform” science, but by a desire to move the scientific evidence into a category that forces skeptics to interact with it properly.

The Case for a Creator’s Role in the Origin of Life

It is interesting to me that the line of reasoning the archaeologists use to establish the presence of Neanderthals in North America equates to the line of reasoning I use to make the case that the origin of life reflects the product of a Creator’s handiwork, as presented in my three books: The Cell’s Design, Origins of Life, and Creating Life in the Lab. There are three facets to this line of reasoning.

The Appearance of Design

The archaeologists argued that: (1) the arrangement of the bones and the cobble and (2) the markings on the cobble and the fracture patterns on the bones appear to result from the intentional activity of a hominid. To put it another way, the archaeological site shows the appearance of design.

In The Cell’s Design I argue that the analogies between biochemical systems and human designs evince the work of a Mind, serving to revitalize Paley’s Watchmaker argument for God’s existence. In other words, biochemical systems display the appearance of design.

Failure to Explain the Evidence through Natural Processes

The archaeologists explored and rejected alternative explanations—such as scavenging by wild animals—for the arrangement, fracture patterns, and markings of the bones and stones.

In Origins of Life, Hugh Ross (my coauthor) and I explore and demonstrate the deficiency of natural process, mechanistic explanations (such as replicator-first, metabolism-first, and membrane-first scenarios) for the origin of life and, hence, biological systems.

Reproduction of the Design Patterns

The archaeologists confirmed—by striking elephant and cow bones with a rock—that the markings on the cobble and the fracture patterns on the bone were made by a hominid. That is, through experimental work in the laboratory, they demonstrated that the design features were, indeed, produced by intelligent agency.

In Creating Life in the Lab, I describe how work in synthetic biology and prebiotic chemistry empirically demonstrates the necessary role intelligent agency plays in transforming chemicals into living cells. In other words, when scientists go into the lab and create protocells, they are demonstrating that the design of biochemical systems is intelligent design.

So, is it legitimate for skeptics to reject the scientific case for a Creator by dismissing it as nonscientific?

Work in archaeology illustrates that intelligent design is an integral part of science, and it highlights the fact that the same scientific reasoning used to interpret the mastodon remains discovered near San Diego, likewise, undergirds the case for a Creator.

Resources

Endnotes

  1. Colin Barras, “First Americans May Have Been Neanderthals 130,000 Years Ago,” New Scientist, April 26, 2017, https://www.newscientist.com/article/2129042-first-americans-may-have-been-neanderthals-130000-years-ago/.
  2. Steven R. Holen et al., “A 130,000-Year-Old Archaeological Site in Southern California, USA,” Nature 544 (April 27, 2017): 479–83, doi:10.1038/nature22065.
  3. Barras, “First Americans.”