Pseudogene Discovery Pains Evolutionary Paradigm


It was one of the most painful experiences I ever had. A few years ago, I had two back-to-back bouts of kidney stones. I remember it as if it were yesterday. Man, did it hurt when I passed the stones! All I wanted was for the emergency room nurse to keep the Demerol coming.


Figure 1: Schematic Depiction of Kidney Stones Moving through the Urinary Tract. Image Credit: Shutterstock

When all that misery was going down, I wished I was one of those rare individuals who don’t experience pain. There are some people who, due to genetic mutations, live pain-free lives. This condition is called hypoalgesia. (Of course, there is a serious downside to hypoalgesia. Pain lets us know when our body is hurt or sick. Because hypoalgesics can’t experience pain, they are prone to serious injuries and untreated illnesses.)

Biomedical researchers possess a keen interest in studying people with hypoalgesia. Identifying the mutations responsible for this genetic condition helps investigators understand the physiological processes that undergird the pain sensation. This insight then becomes indispensable to guiding efforts to develop new drugs and techniques to treat pain.

By studying the genetic profile of a 66-year-old woman whose injuries throughout her life had been painless, a research team from the UK recently discovered a novel genetic mutation that causes hypoalgesia.1 The mutation responsible for this patient’s hypoalgesia occurred in a pseudogene, a region of the genome considered nonfunctional “junk DNA.”

This discovery adds to the mounting evidence that shows junk DNA is functional. At this point, molecular geneticists have demonstrated that virtually every class of junk DNA has function. This notion undermines the best evidence for common descent and, hence, undermines an evolutionary interpretation of biology. More importantly, the discovery adds support for the competitive endogenous RNA hypothesis, which can be marshaled to support RTB’s genomics model. It is becoming more and more evident to me that genome structure and function reflect the handiwork of a Creator.

The Role of a Pseudogene in Mediating Hypoalgesia

To identify the genetic mutation responsible for the 66-year-old’s hypoalgesia, the research team scanned her DNA along with samples taken from her mother and two children. The team discovered two genetic changes: (1) mutations to the FAAH gene that reduced its expression, and (2) deletion of part of the FAAH pseudogene.

The FAAH gene encodes a protein called fatty acid amide hydrolase (FAAH). This protein breaks down fatty acid amides. Some of these compounds interact with cannabinoid receptors. These receptors are located in the membranes of cells found in tissues throughout the body. They mediate pain sensation, among other things. When fatty acid amide concentrations become elevated in the circulatory system, the effect is analgesic.

Researchers found elevated fatty acid amide levels in the patient’s blood, consistent with reduced expression of the FAAH gene. It appears that both mutations are required for the complete hypoalgesia observed in the patient. The patient’s mother, daughter, and son all display only partial hypoalgesia. The mother and daughter have the same mutation in the FAAH gene but an intact FAAH pseudogene. The patient’s son is missing the FAAH pseudogene, but has a “normal” FAAH gene.

Based on the data, it looks like proper expression levels of the FAAH gene require an intact FAAH pseudogene. This is not the first time that biomedical researchers have observed the same effect. There are a number of gene-pseudogene pairs in which both must be intact and transcribed for the gene to be expressed properly. In 2011, researchers from Harvard University proposed that the competitive endogenous RNA hypothesis explains why transcribed pseudogenes are so important for gene expression.2

The Competitive Endogenous RNA Hypothesis

Biochemists and molecular biologists have long believed that the primary mechanism for regulating gene expression centers on controlling the frequency and amount of mRNA produced during transcription. For housekeeping genes, mRNA is produced continually, while for genes that specify situational proteins, it is produced as needed. Greater amounts of mRNA are produced for genes expressed at high levels and limited amounts for genes expressed at low levels.

Researchers long thought that once mRNA was produced, it would invariably be translated into protein, but recent discoveries indicate that this is not always the case. Instead, an elaborate mechanism exists that selectively degrades mRNA transcripts before they can direct protein production at the ribosome. This mechanism dictates the amount of protein produced by permitting or preventing mRNA from being translated. The selective degradation of mRNA thus regulates gene expression in a manner complementary to transcriptional control.

Another class of RNA molecules, called microRNAs, mediates the selective degradation of mRNA. In the early 2000s, biochemists recognized that by binding to mRNA (in the 3′ untranslated region of the transcript), microRNAs play a crucial role in gene regulation. Through binding, microRNAs flag the mRNA for destruction by RNA-induced silencing complex (RISC).


Figure 2: Schematic of the RNA-Induced Silencing Mechanism. Image Credit: Wikipedia

Various distinct microRNA species in the cell bind to specific sites in the 3′ untranslated region of mRNA transcripts. (These binding locations are called microRNA response elements.) The selective binding by the population of microRNAs explains the role that duplicated pseudogenes play in regulating gene expression.

The sequence similarity between the duplicated pseudogene and the corresponding “intact” gene means that the same microRNAs will bind to both mRNA transcripts. (It is interesting to note that most duplicated pseudogenes are transcribed.) When microRNAs bind to the transcript of the duplicated pseudogene, it allows the transcript of the “intact” gene to escape degradation. In other words, the transcript of the duplicated pseudogene is a decoy. The mRNA transcript can then be translated and, hence, the “intact” gene expressed.

It is not just “intact” and duplicated pseudogenes that harbor the same microRNA response elements. Other genes share the same set of microRNA response elements in the 3′ untranslated region of the transcripts and, consequently, will bind the same set of microRNAs. These genes form a network that, when transcribed, will influence the expression of all genes in the network. This relationship means that all the mRNA transcripts in the network can function as decoys. This recognition accounts for the functional utility of unitary pseudogenes.

One important consequence of this hypothesis is that mRNA has dual functions inside the cell. First, it encodes information needed to make proteins. Second, it helps regulate the expression of other transcripts that are part of its network.
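
To make the decoy logic concrete, here is a minimal toy simulation of the hypothesis. It is my own illustrative sketch, not a model from the cited studies: microRNAs are assigned at random to matching transcripts, any transcript that captures a microRNA is degraded, and transcribed pseudogene decoys (the counts are arbitrary) soak up microRNAs that would otherwise silence the gene.

```python
import random

def surviving_gene_transcripts(n_gene, n_pseudo, n_mirna, seed=0):
    """Count gene transcripts that escape degradation.

    Each microRNA binds one transcript at random (gene or pseudogene decoy,
    since both carry the same microRNA response elements). Any transcript
    that captures at least one microRNA is flagged for destruction by RISC.
    """
    rng = random.Random(seed)
    n_transcripts = n_gene + n_pseudo
    flagged = {rng.randrange(n_transcripts) for _ in range(n_mirna)}
    # Indices [0, n_gene) are gene transcripts; the rest are decoys.
    return sum(1 for i in range(n_gene) if i not in flagged)

# Without decoys, 300 microRNAs silence nearly all 100 gene transcripts.
print(surviving_gene_transcripts(n_gene=100, n_pseudo=0, n_mirna=300))
# With 100 transcribed pseudogene decoys sequestering microRNAs,
# far more gene transcripts survive to be translated.
print(surviving_gene_transcripts(n_gene=100, n_pseudo=100, n_mirna=300))
```

The numbers are arbitrary, but the behavior mirrors the patient data described above: remove the decoy transcripts, and more microRNAs find the gene’s mRNA, driving its expression down.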

Junk DNA and the Case for Creation

Evolutionary biologists have long maintained that identical (or nearly identical) pseudogene sequences found in corresponding locations in genomes of organisms that naturally group together (such as humans and the great apes) provide compelling evidence for shared ancestry. This interpretation was persuasive because molecular geneticists regarded pseudogenes as nonfunctional, junk DNA. Presumably, random biochemical events transformed functional DNA sequences (genes) into nonfunctional garbage.

Creationists and intelligent design proponents had little to offer by way of evidence for the intentional design of genomes. But all this changed with the discovery that virtually every class of junk DNA has function, including all three types of pseudogenes (processed, duplicated, and unitary).

If junk DNA is functional, then the sequences previously thought to show common descent could be understood as shared designs. The competitive endogenous RNA hypothesis supports this interpretation. This model provides an elegant rationale for the structural similarity between gene-pseudogene pairs and also makes sense of the widespread presence of unitary pseudogenes in genomes.

Of course, this insight also supports the RTB genomics model. And that sure feels good to me.


  1. Abdella M. Habib et al., “Microdeletion in a FAAH Pseudogene Identified in a Patient with High Anandamide Concentrations and Pain Insensitivity,” British Journal of Anaesthesia, advanced access publication, doi:10.1016/j.bja.2019.02.019.
  2. Ana C. Marques, Jennifer Tan, and Chris P. Ponting, “Wrangling for microRNAs Provokes Much Crosstalk,” Genome Biology 12, no. 11 (November 2011): 132, doi:10.1186/gb-2011-12-11-132; Leonardo Salmena et al., “A ceRNA Hypothesis: The Rosetta Stone of a Hidden RNA Language?”, Cell 146, no. 3 (August 5, 2011): 353–58, doi:10.1016/j.cell.2011.07.014.

Reprinted with permission by the author

Why Mitochondria Make My List of Best Biological Designs


A few days ago, I ran across a BuzzFeed list that catalogs 24 of the most poorly designed things in our time. Some of the items that stood out from the list for me were:

  • serial-wired Christmas lights
  • economy airplane seats
  • clamshell packaging
  • juice cartons
  • motion sensor faucets
  • jewel CD packaging
  • umbrellas

What were people thinking when they designed these things? It’s difficult to argue with BuzzFeed’s list, though I bet you might add a few things of your own to their list of poor designs.

If biologists were to make a list of poorly designed things, many would probably include…everything in biology. Most life scientists are influenced by an evolutionary perspective. Thus, they view biological systems as inherently flawed vestiges cobbled together by a set of historically contingent mechanisms.

Yet as our understanding of biological systems improves, evidence shows that many “poorly designed” systems are actually exquisitely assembled. It also becomes evident that many biological designs reflect an impeccable logic that explains why these systems are the way they are. In other words, advances in biology reveal that it makes better sense to attribute biological systems to the work of a Mind, not to unguided evolution.

Based on recent insights by biochemist and origin-of-life researcher Nick Lane, I would add mitochondria to my list of well-designed biological systems. Lane argues that complex cells and, ultimately, multicellular organisms would be impossible if it weren’t for mitochondria.1 (These organelles generate most of the ATP molecules used to power the operations of eukaryotic cells.) Toward this end, Lane has demonstrated that mitochondria’s properties are just-right for making complex eukaryotic cells possible. Without mitochondria, life would be limited to prokaryotic cells (bacteria and archaea).

To put it another way, Nick Lane has shown that prokaryotic cells could never evolve the complexity found in the eukaryotic cells required for multicellular organisms. The reason has to do with the bioenergetic constraints placed on prokaryotic cells. According to Lane, the advent of mitochondria allowed life to break free from these constraints, paving the way for complex life.


Figure 1: A Mitochondrion. Image credit: Shutterstock

Through Lane’s discovery, mitochondria reveal exquisite design and a logical architecture and mode of operation. Yet this is not necessarily what I (or many others) would have expected if mitochondria were the result of evolution. Rather, we’d expect biological systems to appear haphazard and purposeless, just good enough for the organism to survive and nothing more.

To understand why I (and many evolutionary biologists) would hold this view about mitochondria and eukaryotic cells (assuming that they were the product of evolutionary processes), it is necessary to review the current evolutionary explanation for their origins.

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis is the best explanation for the origin of complex eukaryotic cells. This hypothesis states that complex cells originated when single-celled microbes formed symbiotic relationships. “Host” microbes (most likely archaea) engulfed other archaea and/or bacteria, which then existed inside the host as endosymbionts.

The presumption, then, is that organelles, including mitochondria, were once endosymbionts. Evolutionary biologists believe that, once engulfed, the endosymbionts took up permanent residency within the host cell and even grew and divided inside the host. Over time, the endosymbionts and the host became mutually interdependent. For example, the endosymbionts provided a metabolic benefit for the host cell, such as serving as a source of ATP. In turn, the host cell provided nutrients to the endosymbionts. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

Based on this scenario, there is no real rationale for the existence of mitochondria (and eukaryotic cells). They are the way they are because they just wound up that way.

But Nick Lane’s insights suggest otherwise.

Lane’s analysis identifies a deep-seated rationale that accounts for the features of mitochondria (and eukaryotic cells) related to their contribution to cellular bioenergetics. To understand why mitochondria and eukaryotic cells are the way they are, we first need to understand why prokaryotic cells can never evolve into large complex cells, a necessary step for the advent of complex multicellular organisms.

Bioenergetics Constraints on Prokaryotic Cells

Lane has discovered that bioenergetics constraints keep bacterial and archaeal cells trapped at their current size and complexity. Key to discovering this constraint is a metric Lane devised called Available Energy per Gene (AEG). It turns out that AEG in eukaryotic cells can be as much as 200,000 times larger than the AEG in prokaryotic cells. This extra energy allows eukaryotic cells to engage in a wide range of metabolic processes that support cellular complexity. Prokaryotic cells simply can’t afford such processes.

An average eukaryotic cell has between 20,000 and 40,000 genes; a typical bacterial cell has about 5,000 genes. Each gene encodes the information the cell’s machinery needs to make a distinct protein. And proteins are the workhorse molecules of the cell. More genes mean a more diverse suite of proteins, which means greater biochemical complexity.

So, what is so special about eukaryotic cells? Why don’t prokaryotic cells have the same AEG? Why do eukaryotic cells have an expanded repertoire of genes and prokaryotic cells don’t?

In short, the answer is: mitochondria.

On average, the volume of eukaryotic cells is about 15,000 times larger than that of prokaryotic cells. Eukaryotic cells’ larger size allows for their greater complexity. Lane estimates that for a prokaryotic cell to scale up to this volume, its radius would need to increase 25-fold and its surface area 625-fold.

Because the plasma membrane of bacteria is the site for ATP synthesis, increases in the surface area would allow the hypothetically enlarged bacteria to produce 625 times more ATP. But this increased ATP production doesn’t increase the AEG. Why is that?

The bacterium would have to produce 625 times more proteins to support the increased ATP production. Because the cell’s machinery must access the bacterium’s DNA to make these proteins, a single copy of the genome is insufficient to support the synthesis of that many proteins. In fact, Lane estimates that for a bacterium to increase its ATP production 625-fold, it would require 625 copies of its genome. In other words, even though the bacterium increases in size, the AEG effectively remains unchanged.


Figure 2: ATP Production at the Cell Membrane Surface. Image credit: Shutterstock

Things become more complicated when factoring in cell volume. When the surface area (and concomitant ATP production) increases by a factor of 625, the volume of the cell expands about 15,000 times. To satisfy the demands of a larger cell, even more copies of the genome would be required, perhaps as many as 15,000. But energy production tops out at a 625-fold increase. This mismatch means that the AEG drops by a factor of 25. For a genome consisting of 5,000 genes, this drop means that a bacterium the size of a eukaryotic cell would have about 125,000 times less AEG than a typical eukaryotic cell and 200,000 times less AEG when compared to eukaryotes with genome sizes approaching 40,000 genes.
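
The arithmetic behind these figures is easy to verify. Here is a back-of-the-envelope restatement in code (my own sketch of the numbers quoted above, not Lane’s published model):

```python
# Hypothetical scale-up of a bacterium to eukaryotic-cell volume.
radius_factor = 25
surface_factor = radius_factor ** 2  # membrane area, and thus ATP output: 625x
volume_factor = radius_factor ** 3   # cell contents, and protein demand: 15,625x (~15,000)

# Genome copies must track volume, but energy only tracks surface area,
# so the available energy per genome copy (and hence per gene) falls:
aeg_drop = volume_factor / surface_factor
print(aeg_drop)          # 25.0 -> AEG drops by a factor of 25

# Across a 5,000-gene genome, the shortfall relative to a eukaryotic cell
# compounds to the ~125,000-fold figure quoted above.
print(aeg_drop * 5_000)  # 125000.0
```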

Bioenergetic Freedom for Eukaryotic Cells

Thanks to mitochondria, eukaryotic cells are free from the bioenergetic constraints that ensnare prokaryotic cells. Mitochondria generate the same amount of ATP as a bacterial cell. However, their genome encodes only 13 proteins, so the organelle’s ATP demand is low. The net effect is that the mitochondria’s AEG skyrockets. Furthermore, mitochondrial membranes come equipped with an ATP transport protein that pumps the vast excess of ATP from the organelle interior into the cytoplasm for the eukaryotic cell to use.

To summarize, mitochondria’s small genome plus its prodigious ATP output are the keys to eukaryotic cells’ large AEG.

Of course, this raises a question: Why do mitochondria have genomes at all? Well, as it turns out, mitochondria need genomes for several reasons (which I’ve detailed in previous articles).

Other features of mitochondria are also essential for ATP production. For example, cardiolipin in the organelle’s inner membrane plays a role in stabilizing and organizing specific proteins needed for cellular energy production.

From a creation perspective, it seems that if a Creator were to design a eukaryotic cell from scratch, he would have to create an organelle just like a mitochondrion to provide the energy needed to sustain the cell’s complexity with a high AEG. Far from being an evolutionary “kludge job,” mitochondria appear to be an elegantly designed feature of eukaryotic cells with a just-right set of properties that allow for the cellular complexity needed to sustain complex multicellular life. It is eerie to think that unguided evolutionary events just happened to traverse the just-right evolutionary path to yield such an organelle.

As a Christian, I see the rationale that undergirds the design of mitochondria as the signature of the Creator’s handiwork in biology. I also view the anthropic coincidence associated with the origin of eukaryotic cells as reason to believe that life’s history has purpose and meaning, pointing toward the advent of complex life and humanity.

So, now you know why mitochondria make my list.


  1. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6, no. 5 (May 2014): a015982, doi:10.1101/cshperspect.a015982.

Reprinted with permission by the author

Self-Assembly of Protein Machines: Evidence for Evolution or Creation?


I finally upgraded my iPhone a few weeks ago from a 5s to an 8 Plus. I had little choice. The battery on my cell phone would no longer hold a charge.

I’d put off getting a new one for as long as possible. It just didn’t make sense to spend money chasing the latest and greatest technology when current cell phone technology worked perfectly fine for me. Apart from the battery life and a less-than-ideal camera, I was happy with my iPhone 5s. Now I am really glad I made the switch.

Then, the other day I caught myself wistfully eyeing the iPhone X. And, today, I learned that Apple is preparing the release of the iPhone 11 (or XI or XT). Where will Apple’s technology upgrades take us next? I can’t wait to find out.

Have I become a technology junkie?

It is remarkable how quickly cell phone technology advances. It is also remarkable how alluring new technology can be. The next thing you know, Apple will release an iPhone that will assemble itself when it comes out of the box. . . . Probably not.

But, if the work of engineers at MIT ever reaches fruition, it is possible that smartphone manufacturers one day just might rely on a self-assembly process to produce cell phones.

A Self-Assembling Cell Phone

The Self-Assembly Lab at MIT has developed a pilot process to manufacture cell phones by self-assembly.

To do this, they designed their cell phone to consist of six parts that fit together in a lock-and-key manner. By placing the cell phone pieces into a tumbler that turns at the just-right speed, the pieces automatically combine with one another, bit by bit, until the cell phone is assembled.

Few errors occur during the assembly process. Because of the lock-and-key fabrication, only pieces designed to fit together combine with one another.
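
A toy simulation illustrates why so few errors occur. The sketch below is my own (the part names and fit map are invented, not MIT’s actual design): parts collide at random in a virtual tumbler, and a collision produces a bond only when the two pieces were designed to fit.

```python
import random

# Six invented parts and a hypothetical map of which pairs fit together.
PARTS = ["frame", "board", "battery", "screen", "antenna", "shell"]
FITS = {("frame", "board"), ("board", "battery"), ("frame", "screen"),
        ("board", "antenna"), ("frame", "shell")}

def tumble(parts, fits, seed=0):
    """Collide random pairs of clusters; merge only on a lock-and-key fit."""
    rng = random.Random(seed)
    clusters = [{p} for p in parts]
    collisions = 0
    while len(clusters) > 1:
        i, j = rng.sample(range(len(clusters)), 2)
        collisions += 1
        # A collision bonds two clusters only if some pair of exposed parts
        # was designed to fit; mismatched pieces simply bounce apart.
        if any((a, b) in fits or (b, a) in fits
               for a in clusters[i] for b in clusters[j]):
            clusters[i] |= clusters[j]
            del clusters[j]
    return collisions

print(tumble(PARTS, FITS))  # random collisions needed for one assembled phone
```

Random motion supplies the collisions, but the designed interfaces dictate the outcome: every run ends in the same correct assembly because mismatched parts can never bond.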

Self-Assembly and the Case for a Creator

It is quite likely that the work of MIT’s Self-Assembly Lab (and other labs like it) will one day revolutionize manufacturing—not just for iPhones, but for other types of products as well.

As alluring as this new technology might be, I am more intrigued by its implications for the creation-evolution controversy. What do self-assembly processes have to do with the creation-evolution debate? More than we might realize.

I believe self-assembly processes strengthen the Watchmaker argument for God’s existence (and role in the origin of life). Namely, this cutting-edge technology makes it possible to respond to a common objection leveled against this design argument.

To understand why this engineering breakthrough is so important for the Watchmaker argument, a little background is necessary.

The Watchmaker Argument

Anglican natural theologian William Paley (1743–1805) posited the Watchmaker argument in the eighteenth century. It went on to become one of the best-known arguments for God’s existence. The argument hinges on the comparison Paley made between a watch and a rock. He argued that a rock’s existence can be explained by the outworking of natural processes—not so for a watch.

The characteristics of a watch—specifically the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Employing an analogy, Paley asserted that just as a watch requires a watchmaker, so too, life requires a Creator. Paley noted that biological systems display a wide range of features characterized by the precise interplay of complex parts designed to interact for specific purposes. In other words, biological systems have much more in common with a watch than a rock. This similarity being the case, it logically follows that life must stem from the work of a Divine Watchmaker.

Biochemistry and the Watchmaker Argument

As I discuss in my book The Cell’s Design, advances in biochemistry have reinvigorated the Watchmaker argument. The hallmark features of biochemical systems are precisely the same properties displayed in objects, devices, and systems designed and crafted by humans.

Cells contain protein complexes that are structured to operate as biomolecular motors and machines. Some molecular-level biomachines are strict analogs to machinery produced by human designers. In fact, in many instances, a one-to-one relationship exists between the parts of manufactured machines and the molecular components of biomachines. (A few examples of these biomolecular machines are discussed in the articles listed in the Resources section.)

We know that machines originate in human minds that comprehend and then implement designs. So, when scientists discover example after example of biomolecular machines inside the cell with an eerie and startling similarity to the machines we produce, it makes sense to conclude that these machines and, hence, life, must also have originated in a Mind.

A Skeptic’s Challenge

As you might imagine, skeptics have leveled objections against the Watchmaker argument since its introduction in the 1700s. Today, when skeptics criticize the latest version of the Watchmaker argument (based on biochemical designs), the influence of Scottish skeptic David Hume (1711–1776) can be seen and felt.

In his 1779 work Dialogues Concerning Natural Religion, Hume presented several criticisms of design arguments. The foremost centered on the nature of analogical reasoning. Hume argued that the conclusions resulting from analogical reasoning are only sound when the things compared are highly similar to each other. The more similar, the stronger the conclusion. The less similar, the weaker the conclusion.

Hume dismissed the original version of the Watchmaker argument by maintaining that organisms and watches are nothing alike. They are too dissimilar for a good analogy. In other words, what is true for a watch is not necessarily true for an organism and, therefore, it doesn’t follow that organisms require a Divine Watchmaker, just because a watch does.

In effect, this is one of the chief reasons why some skeptics today dismiss the biochemical Watchmaker argument. For example, philosopher Massimo Pigliucci has insisted that Paley’s analogy is purely metaphorical and does not reflect a true analogical relationship. He maintains that any similarity between biomolecular machines and human designs reflects merely illustrative analogies that life scientists use to communicate the structure and function of these protein complexes via familiar concepts and language. In other words, it is illegitimate to use the “analogies” between biomolecular machines and manufactured machines to make a case for a Creator.1

A Response Based on Insights from Nanotechnology

I have responded to this objection by pointing out that nanotechnologists have isolated biomolecular machines from the cell and incorporated these protein complexes into nanodevices and nanosystems for the explicit purpose of taking advantage of their machine-like properties. These transplanted biomachines power motion and movements in the devices, which otherwise would be impossible with current technology. In other words, nanotechnologists view these biomolecular systems as actual machines and utilize them as such. Their work demonstrates that biomolecular machines are literal, not metaphorical, machines. (See the Resources section for articles describing this work.)

Is Self-Assembly Evidence of Evolution or Design?

Another criticism—inspired by Hume—is that machines designed by humans don’t self-assemble, but biochemical machines do. Skeptics say this undermines the Watchmaker analogy. I have heard this criticism in the past, but it came up recently in a dialogue I had with a skeptic in a Facebook group.

I wrote that “What we discover when we work out the structure and function of protein complexes are features that are akin to an automobile engine, not an outcropping of rocks.”

A skeptic named Maurice responded: “Your analogy is false. Cars do not spontaneously self-assemble—in that case there is a prohibitive energy barrier. But hexagonal lava rocks can and do—there is no energy barrier to prohibit that from happening.”

Maurice argues that my analogy is a poor one because protein complexes in the cell self-assemble, whereas automobile engines can’t. For Maurice (and other skeptics), this distinction serves to make manufactured machines qualitatively different from biomolecular machines. On the other hand, hexagonal patterns in lava rocks give the appearance of design but are actually formed spontaneously. For skeptics like Maurice, this feature indicates that the design displayed by protein complexes in the cell is apparent, not true, design.

Maurice added: “Given that nature can make hexagonal lava blocks look ‘designed,’ it can certainly make other objects look ‘designed.’ Design is not a scientific term.”

Self-Assembly and the Watchmaker Argument

This is where the MIT engineers’ fascinating work comes into play.

Engineers continue to make significant progress toward developing self-assembly processes for manufacturing purposes. It very well could be that in the future a number of machines and devices will be designed to self-assemble. Based on the researchers’ work, it becomes evident that part of the strategy for designing machines that self-assemble centers on creating components that not only contribute to the machine’s function, but also precisely interact with the other components so that the machine assembles on its own.

The operative word here is designed. For machines to self-assemble they must be designed to self-assemble.

This requirement holds true for biochemical machines, too. The protein subunits that interact to form the biomolecular machines appear to be designed for self-assembly. Protein-protein binding sites on the surface of the subunits mediate this self-assembly process. These binding sites require high-precision interactions to ensure that the binding between subunits takes place with a high degree of accuracy—in the same way that the MIT engineers designed the cell phone pieces to combine precisely through lock-and-key interactions.


Figure: ATP synthase is a biomolecular machine that is literally an electrically powered rotary motor. This biomachine is assembled from protein subunits. Credit: Shutterstock

The level of design required to ensure that protein subunits interact precisely to form machine-like protein complexes is only beginning to come into full view.2 Biochemists who work in the area of protein design still don’t fully understand the biophysical mechanisms that dictate the assembly of protein subunits. And, while they can design proteins that will self-assemble, they struggle to replicate the complexity of the self-assembly process that routinely takes place inside the cell.

Thanks to advances in technology, biomolecular machines’ ability to self-assemble should no longer count against the Watchmaker argument. Instead, self-assembly becomes one more feature that strengthens Paley’s point.

The Watchmaker Prediction

Advances in self-assembly also satisfy the Watchmaker prediction, further strengthening the case for a Creator. In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed.

The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction. As human designers develop new technologies, examples of these technologies, though previously unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker argument truly serves as evidence for a Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

In effect, the developments in self-assembly technology and its prospective use in future manufacturing operations fulfill the Watchmaker prediction. Along these lines, it’s even more provocative to think that cellular self-assembly processes are providing insight to engineers who are working to develop similar technology.

Maybe I am a technology junkie, after all. I find it remarkable that as we develop new technologies we discover that they already exist in the cell, and because they do the Watchmaker argument becomes more and more compelling.

Can you hear me now?


The Biochemical Watchmaker Argument

Challenges to the Biochemical Watchmaker Argument

  1. Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science & Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
  2. For example, see Christoffer H. Norn and Ingemar André, “Computational Design of Protein Self-Assembly,” Current Opinion in Structural Biology 39 (August 2016): 39–45, doi:10.1016/

Reprinted with permission by the author

Endosymbiont Hypothesis and the Ironic Case for a Creator



i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Ironically, though, that is just what has happened. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.


Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, this strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes for a transporter protein that can pump thiamin into the cell from the exterior environment, it can grow if an external supply of thiamin is available.) When incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes the source of the exogenous thiamin, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, E. coli becomes established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cell.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from the E. coli endosymbionts, with their engineered capacity to transport ATP from the bacterial interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doublings. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply-deficient E. coli strains survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.


Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.



  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018): doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author

The Optimal Design of the Genetic Code



Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

–William Paley, Natural Theology

In his classic work Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about via the work of human agents, intelligent designers. And because biological systems are contrivances, they, too, must come about via the work of a Creator.

For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. For Paley, if the only example of a biological contrivance available to us were the eye, its sophisticated design and elegant complexity alone would justify the “necessity of an intelligent creator” to explain its origin.

As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

As with Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet, in particular, I am struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified to conclude that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)


Proteins

The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.


Figure 1: The Amino Acids. Image credit: Shutterstock

Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.


DNA

The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like proteins, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands are arranged parallel to one another with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.


Figure 2: The Structure of DNA. Image credit: Shutterstock

As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

The Genetic Code

A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides. The cell addresses this mismatch by using a code comprised of groupings of three nucleotides to specify the 20 different amino acids.

The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.

Interestingly, some codons, called stop codons or nonsense codons, do not code for any amino acid. (For example, the codon UGA is a stop codon.) These codons always occur at the end of the gene’s protein-coding sequence, informing the cell where the protein chain ends.

Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids, but also “tell” the cell where a protein chain begins. For example, the codon GUG encodes the amino acid valine and can also specify the starting point of a protein.


Figure 3: The Genetic Code. Image credit: Shutterstock

The Optimal Genetic Code

Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—a frozen accident. Instead it looked to them like a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time, scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.2

Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor in their analysis, they discovered that the naturally occurring genetic code performed better than all but one of a million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

It could be argued that the genetic code’s error-minimization properties are more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution, with the naturally occurring genetic code’s capacity falling outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of the 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
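
The flavor of these error-minimization calculations can be reproduced in a few dozen lines. The sketch below follows the general recipe of the cited studies (score every single-nucleotide substitution by the change in an amino acid property, then compare the natural code against random codes that preserve its redundancy structure). It is only an approximation: I use the Kyte-Doolittle hydropathy scale in place of the polar-requirement measure from the published work and ignore transition/transversion weighting, so the exact percentage will differ from the published figures.

```python
import random
from itertools import product

BASES = "TCAG"
# The universal genetic code, codon positions ordered T, C, A, G;
# "*" marks the three stop codons.
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODE = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}

# Kyte-Doolittle hydropathy, standing in for the polar-requirement scale.
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

def error_score(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    total = count = 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                mut = code[codon[:pos] + base + codon[pos + 1:]]
                if base != codon[pos] and mut != "*":
                    total += (HYDRO[aa] - HYDRO[mut]) ** 2
                    count += 1
    return total / count

def shuffled_code(rng):
    """Permute which amino acid each synonymous codon block encodes,
    preserving the natural code's pattern of redundancy."""
    aas = sorted(set(AMINO) - {"*"})
    relabel = dict(zip(aas, rng.sample(aas, len(aas))))
    return {codon: relabel.get(aa, aa) for codon, aa in CODE.items()}

rng = random.Random(0)
natural = error_score(CODE)
trials = 10_000  # takes a few seconds in CPython
better = sum(error_score(shuffled_code(rng)) < natural for _ in range(trials))
print(f"natural code error score: {natural:.2f}")
print(f"random codes that beat it: {better} of {trials}")
```

How far the natural code lands into the favorable tail of this distribution depends on the property scale and mutation weights chosen; the published studies, using polar requirement and transition/transversion weighting, arrived at the one-in-a-million result quoted above.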

Frameshift Mutations

Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.
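
The destructiveness of a frameshift is easy to demonstrate with a codon table in hand. A minimal example (the gene sequence is an arbitrary illustration of mine):

```python
from itertools import product

BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODE = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}

def translate(seq):
    """Read a DNA sequence three bases at a time, as the cell frames codons."""
    return "".join(CODE[seq[i:i + 3]] for i in range(0, len(seq) - 2, 3))

gene = "ATGGCTGAAGTTCTGTAA"  # arbitrary example gene
print(translate(gene))       # MAEVL* (the intended protein; "*" = stop)
# A single inserted nucleotide regroups every codon from that point onward,
# scrambling the rest of the protein and here introducing a premature stop.
print(translate(gene[:4] + "T" + gene[4:]))  # MV*SSV
```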


Figure 4: Types of Mutations. Image credit: Shutterstock

The Genetic Code Is Optimized to Withstand Frameshift Mutations

Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

The triple optimization of the genetic code arises from its redundancy and the specific codon assignments. Over 10^18 possible genetic codes exist, and any one of them could have been “selected” for the code in nature. Yet, the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.



  1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
  2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
  3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online publication, doi:10.1101/gr.5987307.
  5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
Reprinted with permission by the author

Protein Amino Acids Form a “Just-Right” Set of Biological Building Blocks



Like most kids, I had a set of Lego building blocks. But, growing up in the 1960s, the Lego sets were nothing like the ones today. I am amazed at how elaborate and sophisticated Legos have become, consisting of interlocking blocks of various shapes and sizes, gears, specialty parts, and figurines—a far cry from the square and rectangular blocks that made up the Lego sets of my youth. The most imaginative things I could ever hope to build were long walls and high towers.

It goes to show: the set of building blocks makes all the difference in the world.

This truism applies to the amino acid building blocks that make up proteins. As it turns out, proteins are built from a specialty set of amino acids that have the just-right set of properties to make life possible, as recent work by researchers from Germany attests.1 From my vantage point as a biochemist and a Christian, the just-right amino acid composition of proteins evinces intelligent design and is part of the reason I think a Creator must have played a direct role in the origin and design of life.

Why Is the Same Set of Twenty Amino Acids Used to Build Proteins?

It stands as one of the most important insights about protein structure discovered by biochemists: The set of amino acids used to build proteins is universal. In other words, the proteins found in every organism on Earth are made up of the same 20 amino acids.

Yet, hundreds of amino acids exist in nature. And, this abundance prompts the question: Why these 20 amino acids? From an evolutionary standpoint, the set of amino acids used to build proteins should reflect:

1) the amino acids available on early Earth, generated by prebiotic chemical reactions;

2) the historically contingent outworking of evolutionary processes.

In other words, evolutionary mechanisms would have cobbled together an amino acid set that works “just good enough” for life to survive, but nothing more. No one would expect evolutionary processes to piece together a “just-right,” optimal set of amino acids. If evolutionary processes shaped the amino acid set used to build proteins, these biochemical building blocks should be much like the unsophisticated Lego sets little kids played with in the 1960s.

An Optimal Set of Amino Acids

But, contrary to this expectation, in the early 1980s biochemists discovered that an exquisite molecular rationale undergirds the amino acid set used to make proteins. Every aspect of the amino acid structure has to be precisely the way it is for life to be possible. On top of that, researchers from the University of Hawaii have conducted a quantitative comparison of the range of chemical and physical properties possessed by the 20 protein-building amino acids versus random sets of amino acids that could have been selected from early Earth’s hypothetical prebiotic soup.2 They concluded that the set of 20 amino acids is optimal. It turns out that the set of amino acids found in biological systems possesses the “just-right” properties that evenly and uniformly vary across a broad range of size, charge, and hydrophobicity. They also showed that the amino acids selected for proteins are a “highly unusual set of 20 amino acids; a maximum of 0.03% random sets outperformed the standard amino acid alphabet in two properties, while no single random set exhibited greater coverage in all three properties simultaneously.”3
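To make the kind of comparison the Hawaii researchers describe more concrete, here is a minimal Monte Carlo sketch in Python. It is illustrative only: the candidate pool, the property values, and the range-based “coverage” score are hypothetical stand-ins, not Philip and Freeland’s actual data or metric, so the fraction it prints demonstrates the procedure rather than reproducing their 0.03% result.

```python
# Toy Monte Carlo comparison of amino acid "alphabets" (illustrative only;
# hypothetical property values, not Philip and Freeland's data or metric).
import random

random.seed(42)

# Hypothetical pool of 50 prebiotically plausible amino acids, each assigned
# made-up (size, charge, hydrophobicity) values.
pool = [(random.uniform(0, 1), random.uniform(-1, 1), random.uniform(0, 1))
        for _ in range(50)]

def coverage(alphabet):
    """Crude coverage score: the range spanned in each of the three properties."""
    return [max(aa[i] for aa in alphabet) - min(aa[i] for aa in alphabet)
            for i in range(3)]

# Stand-in for the standard 20-amino-acid alphabet: one fixed 20-member subset.
standard_score = coverage(pool[:20])

# Sample random 20-member alphabets; count how many beat the standard set
# in all three properties simultaneously.
trials = 100_000
wins = sum(
    1 for _ in range(trials)
    if all(c > s for c, s in zip(coverage(random.sample(pool, 20)), standard_score))
)

print(f"{wins} of {trials} random alphabets outperformed in all three properties")
```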

A New Perspective on the 20 Protein Amino Acids

Beyond charge, size, and hydrophobicity, the German researchers wondered if quantum mechanical effects play a role in dictating the universal set of 20 protein amino acids. To address this question, they examined the gap between the HOMO (highest occupied molecular orbital) and the LUMO (lowest unoccupied molecular orbital) for the protein amino acids. The HOMO-LUMO gap is one of the quantum mechanical determinants of chemical reactivity. More reactive molecules have smaller HOMO-LUMO gaps than molecules that are relatively nonreactive.
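One standard way to quantify this relationship comes from conceptual density functional theory; this is a textbook relation, not one introduced in the paper under discussion. The chemical hardness η is defined as half the HOMO-LUMO gap:

$$\eta = \frac{E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}}{2}$$

A molecule with a small gap has a low hardness and is chemically reactive (“soft”), while a large gap corresponds to a relatively inert (“hard”) molecule.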

The German biochemists discovered that the HOMO-LUMO gap was small for 7 of the 20 amino acids (histidine, phenylalanine, cysteine, methionine, tyrosine, tryptophan, and glutamine), and hence, these molecules display a high level of chemical reactivity. Interestingly, some biochemists think that these 7 amino acids are not necessary to build proteins. Previous studies have demonstrated that a wide range of foldable, functional proteins can be built from only 13 amino acids (glycine, alanine, valine, leucine, isoleucine, proline, serine, threonine, aspartic acid, glutamic acid, asparagine, lysine, and arginine). As it turns out, this subset of 13 amino acids has a relatively large HOMO-LUMO gap and, therefore, is relatively unreactive. This suggests that the reactivity of the other 7 amino acids may be part of the reason for their inclusion in the universal set of 20.

As it turns out, these amino acids readily react with the peroxy free radical, a highly corrosive chemical species that forms when oxygen is present in the atmosphere. The German biochemists believe that when these 7 amino acids reside on the surface of proteins, they play a protective role, keeping the proteins from oxidative damage.

As I discussed in a previous article, these 7 amino acids contribute in specific ways to protein structure and function, and they help give the universal set of 20 its optimal range of chemical and physical properties. Now, based on the latest work by the German researchers, it seems that their newly recognized protective role against oxidative damage adds to their structural and functional significance in proteins.

Interestingly, because of the universal nature of biochemistry, these 7 amino acids must have been present in the proteins of the last universal common ancestor (LUCA) of all life on Earth. And yet, there was little or no oxygen present on early Earth, rendering the protective effect of these amino acids unnecessary. The importance of their small HOMO-LUMO gaps would not have been realized until much later in life’s history, when oxygen levels became elevated in Earth’s atmosphere. In other words, inclusion of these amino acids in the universal set at life’s start seemingly anticipates future events in Earth’s history.

Protein Amino Acids Chosen by a Creator

The optimality, foresight, and molecular rationale undergirding the universal set of protein amino acids is not expected if life had an evolutionary origin. But, it is exactly what I would expect if life stems from a Creator’s handiwork. As I discuss in The Cell’s Design, objects and systems created and produced by human designers are typically well thought out and optimized. Both are indicative of intelligent design. In human designs, optimization is achieved through foresight and planning. Optimization requires inordinate attention to detail and careful craftsmanship. By analogy, the optimized biochemistry, epitomized by the amino acid set that makes up proteins, rationally points to the work of a Creator.



  1. Matthias Granold et al., “Modern Diversification of the Amino Acid Repertoire Driven by Oxygen,” Proceedings of the National Academy of Sciences USA 115 (January 2, 2018): 41–46, doi:10.1073/pnas.1717100115.
  2. Gayle K. Philip and Stephen J. Freeland, “Did Evolution Select a Nonrandom ‘Alphabet’ of Amino Acids?” Astrobiology 11 (April 2011): 235–40, doi:10.1089/ast.2010.0567.
  3. Philip and Freeland, “Did Evolution Select,” 235–40.
Reprinted with permission by the author
Original article at:

Is the Laminin “Cross” Evidence for a Creator?



As I interact with people on social media and travel around the country to speak on the biochemical evidence for a Creator, I am frequently asked to comment on laminin.1 The people who mention this protein are usually quite excited, convinced that its structure provides powerful scientific evidence for the Christian faith. Unfortunately, I don’t agree.

Motivating this unusual question is the popularized claim of a well-known Christian pastor that laminin’s structure provides physical evidence that the God of the Bible created human beings and also sustains our lives. While I wholeheartedly believe God did create and does sustain human life, laminin’s apparent cross-shape does not make the case.

Laminin is one of the key components of the basal lamina, a thin sheet-like structure that surrounds cells in animal tissue. The basal lamina is part of the extracellular matrix (ECM). This structure consists of a meshwork of fibrous proteins and polysaccharides secreted by the cells. It fills the space between cells in animal tissue. The ECM carries out a wide range of functions that include providing anchor points and support for cells.

Laminin is a relatively large protein made of three different protein subunits that combine to form a t-shaped structure when the flexible rod-like regions of laminin are fully extended. Each of the four “arms” of laminin contains sites that allow this biomolecule to bind to other laminin molecules, other proteins (like collagen), and large polysaccharides. Laminin also provides a binding site for proteins called integrins, which are located in the cell membrane.


Figure: The structure of laminin. Image credit: Wikipedia

Laminin’s architecture and binding sites make this protein ideally suited to interact with other proteins and polysaccharides to form a network called the basal reticulum and to anchor cells to its biochemical scaffolding. The basal reticulum helps hold cells together to form tissues and, in turn, helps cement that tissue to connective tissues.

The cross-like shape of laminin and the role it plays in holding tissues together has prompted the claim that this biomolecule provides scientific support for passages such as Colossians 1:15–17 and shows how the God of the Bible must have made humans and continues to sustain them.

I would caution Christians against using this “argument.” I see a number of problems with it. (And so do many skeptics.)

First, the cross shape is a simple structure found throughout nature. So, it’s probably not a good idea to attach too much significance to laminin’s shape. The t configuration makes laminin ideally suited to connect proteins to each other and cells to the basal reticulum. This is undoubtedly the reason for its structure.

Second, the cross shape of laminin is an idealized illustration of the molecule. Portraying complex biomolecules in simplified ways is a common practice among biochemists. Depicting laminin in this extended form helps scientists visualize and catalog the binding sites along its four arms. This configuration should not be interpreted to represent its actual shape in biological systems. In the basal reticulum, laminin adopts all sorts of shapes that bear no resemblance to a cross. In fact, it’s much more common to observe laminin in a swastika configuration than in a cross-like one. Even electron micrographs of isolated laminin molecules that appear cross-shaped may be misleading. Their shape is likely an artifact of sample preparation. I have seen other electron micrographs that show laminin adopting a variety of twisted shapes that, again, bear no resemblance to a cross.

Finally, laminin is not the only molecule “holding things together.” A number of other proteins and polysaccharides are also indispensable components of the basal reticulum. None of these molecules is cross-shaped.

As I argue in my book, The Cell’s Design, the structure and operation of biochemical systems provide some of the most potent support for a Creator’s role in fabricating living systems. Instead of pointing to superficial features of biomolecules such as the “cross-shaped” architecture of laminin, there are many more substantive ways to use biochemistry to argue for the necessity of a Creator and for the value he places on human life. As a case in point, the salient characteristics of biochemical systems identically match those features we would recognize immediately as evidence for the work of a human design engineer. The close similarity between biochemical systems and the devices produced by human designers logically compels this conclusion: life’s most fundamental processes and structures stem from the work of an intelligent, intentional Agent.

When Christians invest the effort to construct a careful case for the Creator, skeptics and seekers find it difficult to deny the powerful evidence from biochemistry and other areas of science for God’s existence.



  1. This article was originally published in the April 1, 2009, edition of New Reasons to Believe.
Reprinted with permission by the author
Original article at:

Fatty Acids Are Beautiful



Who says that fictions onely and false hair
Become a verse? Is there in truth no beauty?
Is all good structure in a winding stair?
May no lines passe, except they do their dutie
Not to a true, but painted chair?

George Herbert, “Jordan (I)”

I doubt the typical person would ever think fatty acids are a thing of beauty. In fact, most people try to do everything they can to avoid them—at least in their diets. But, as a biochemist who specializes in lipids (a class of biomolecules that includes fatty acids) and cell membranes, I am fascinated by these molecules—and by the biochemical and cellular structures they form.

I know, I know—I’m a science geek. But for me, the chemical structures and the physicochemical properties of lipids are as beautiful as an evening sunset. As an expert, I thought I knew most of what there is to know about fatty acids, so I was surprised to learn that researchers from Germany recently uncovered an elegant mathematical relationship that explains the structural makeup of fatty acids.1 From my vantage point, this newly revealed mathematical structure boggles my mind, providing new evidence for a Creator’s role in bringing life into existence.

Fatty Acids

To a first approximation, fatty acids are relatively simple compounds, consisting of a carboxylic acid head group and a long-chain hydrocarbon tail.


Structure of two typical fatty acids
Image credit: Edgar181/Wikimedia Commons

Despite their structural simplicity, a bewildering number of fatty acid species exist. For example, the hydrocarbon chain of fatty acids can vary in length from 1 carbon atom to over 30. One or more double bonds can occur at varying positions along the chain, and the double bonds can be either cis or trans in geometry. The hydrocarbon tails can be branched and can be modified by carbonyl groups and by hydroxyl substituents at varying points along the chain. As the hydrocarbon chains become longer, the number of possible structural variants increases dramatically.

How Many Fatty Acids Exist in Nature?

This question takes on an urgency today because advances in analytical techniques now make it possible for researchers to identify and quantify the vast number of lipid species found in biological systems, birthing the discipline of lipidomics. Investigators are interested in understanding how lipid compositions vary spatially and temporally in biological systems and how these compositions change in response to altered physiological conditions and pathologies.

To process and make sense of the vast amount of data generated in lipidomics studies, biochemists need to have some understanding of the number of lipid species that are theoretically possible. Recently, researchers from Friedrich Schiller University in Germany took on this challenge—at least, in part—by attempting to calculate the number of chemical species that exist for fatty acids varying in size from 1 to 30 atoms.

Fatty Acids and Fibonacci Numbers

To accomplish this objective, the German researchers developed mathematical equations that relate the number of carbon atoms in fatty acids to the number of structural variants (isomers). They discovered that this relationship conforms to the Fibonacci series, with the number of possible fatty acid species increasing by a factor of 1.618—the golden mean—for each carbon atom added to the fatty acid. Though not immediately evident when first examining the wide array of fatty acids found in nature, deeper analysis reveals that a beautiful yet simple mathematical structure underlies the seemingly incomprehensible structural diversity of these biomolecules.
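The reported growth factor is easy to verify numerically. The short Python sketch below assumes only the defining Fibonacci property described above (each count is the sum of the two preceding counts; the seed values are placeholders, since the article does not give the paper’s initial conditions) and shows the per-carbon ratio converging to the golden mean, 1.618.

```python
# Illustrative sketch: any sequence obeying the Fibonacci recurrence
# a(n) = a(n-1) + a(n-2) grows by a factor converging to the golden mean,
# phi = (1 + 5**0.5) / 2 ~ 1.6180. Seed values below are placeholders.

def fibonacci_counts(n_max, seed1=1, seed2=1):
    """Counts for chain lengths 1..n_max under the Fibonacci recurrence."""
    counts = [seed1, seed2]
    while len(counts) < n_max:
        counts.append(counts[-1] + counts[-2])
    return counts

counts = fibonacci_counts(30)
for n in range(20, 30):
    ratio = counts[n] / counts[n - 1]
    print(f"{n + 1:2d} carbons: {counts[n]:>7d} variants "
          f"(x{ratio:.4f} per added carbon)")
```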

This discovery indicates it is unlikely that the fatty acid compositions found in nature reflect the haphazard outcome of an undirected, historically contingent evolutionary history, as many biochemists are prone to think. Instead, the fatty acids found throughout the biological realm appear to be fundamentally dictated by the laws of nature. It is provocative to me that the fatty acid diversity produced by the laws of nature comprises precisely the isomers needed for life to be possible—a fitness to purpose, if you will.

Understanding this mathematical relationship and knowing the theoretical number of fatty acid species will certainly aid biochemists working in lipidomics. But for me, the real significance of these results lies in the philosophical and theological arenas.

The Mathematical Beauty of Fatty Acids

The golden mean occurs throughout nature, describing, for example, the spiral patterns found in snail shells and in the flowers and leaves of plants. Its ubiquity highlights the pervasiveness of mathematical structures and patterns that describe many aspects of the world in which we live.

But there is more. As it turns out, we perceive the golden mean to be a thing of beauty. In fact, architects and artists often make use of the golden mean in their work because of its deeply aesthetic qualities.

Everywhere we look in nature—whether the spiral arms of galaxies, the shell of a snail, or the petals of a flower—we see a grandeur so great that we are often moved to our very core. This grandeur is not confined to the elements of nature we perceive with our senses; it also exists in the underlying mathematical structure of nature, such as the widespread occurrence of the Fibonacci sequence and the golden mean. And it is remarkable that this beautiful mathematical structure even extends to the relationship between the number of carbon atoms in a fatty acid and the number of isomers.

As a Christian, I find that nature’s beauty—including the elegance exemplified by the mathematically dictated composition of fatty acids—prompts me to worship the Creator. But this beauty also points to the reality of God’s existence and supports the biblical view of humanity. If God created the universe, then it is reasonable to expect it to be a beautiful universe. Yet, if the universe came into existence through mechanism alone, there is no reason to think it would display beauty. In other words, the beauty in the world around us signifies the Divine.

Furthermore, if the universe originated through uncaused physical mechanisms, there is no reason to think that humans would possess an aesthetic sense. But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.



  1. Stefan Schuster, Maximilian Fichtner, and Severin Sasso, “Use of Fibonacci Numbers in Lipidomics—Enumerating Various Classes of Fatty Acids,” Scientific Reports 7 (January 2017): 39821, doi:10.1038/srep39821.
Reprinted with permission by the author
Original article at:

The Human Genome: Copied by Design



The days my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.

But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.

This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers on the question: Is the biochemical activity measured for the human genome merely biochemical noise, or is it productive for the cell? Or, to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?

The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?

The ENCODE Project

The ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome—reported phase II results in the fall of 2012. To the surprise of many, the ENCODE project reported that around 80% of the human genome displays biochemical activity, and hence function, with the expectation that this percentage should increase with phase III of the project.

If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).

Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE results, citing technical issues with the way the study was designed and the way the results were interpreted. (For responses to these complaints, go here, here, and here.)

Is Biochemical Activity the Same Thing As Function?

One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.

When I first learned of the way many evolutionary biologists interpreted the ENCODE results, I was skeptical. As a biochemist, I am well aware that living systems could not tolerate such high levels of transcriptional noise.

Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

Most RNA Transcripts Are Functional

Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.

For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece between June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2

Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4

Copied by Design

Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.

We have come a long way from the early days of the Human Genome Project. When the project was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. In doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.

But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.



  1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
  2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
  3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
  4. RIKEN, “Improved Gene Expression Atlas Shows that Many Human Long Non-Coding RNAs May Actually Be Functional,” ScienceDaily, March 1, 2017,

Dollo’s Law at Home with a Creation Model, Reprised*



*This article is an expanded and updated version of an article published in 2011 on

Published posthumously, Thomas Wolfe’s 1940 novel, You Can’t Go Home Again—considered by many to be his most significant work—explores how brutally unfair the passage of time can be. In the finale, George Webber (the story’s protagonist) concedes, “You can’t go back home” to family, childhood, familiar places, dreams, and old ways of life.

In other words, there’s an irreversible quality to life. Call it the arrow of time.

Like Wolfe, most evolutionary biologists believe there is an irreversibility to life’s history and the evolutionary process. In fact, this idea is codified in Dollo’s Law, which states that an organism cannot return, even partially, to a previous evolutionary stage occupied by one of its ancestors. Yet, several recent studies have uncovered what appear to be violations of Dollo’s Law. These violations call into question the sufficiency of the evolutionary paradigm to fully account for life’s history. On the other hand, the return to “ancestral states” finds an explanation in an intelligent design/creation model approach to life’s history.

Dollo’s Law

French-born Belgian paleontologist Louis Dollo formulated the law that bears his name in 1893, before the advent of modern-day genetics, basing it on patterns he unearthed from the fossil record. Today, his idea is undergirded by the contemporary understanding of genetics and developmental biology.

Evolutionary biologist Richard Dawkins explains the modern-day conception of Dollo’s Law this way:

“Dollo’s Law is really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice . . . in either direction. A single mutational step can easily be reversed. But for larger numbers of mutational steps . . . mathematical space of all possible trajectories is so vast that the chance of two trajectories ever arriving at the same point becomes vanishingly small.”1

If a biological trait is lost during the evolutionary process, then the genes and developmental pathways responsible for that feature will eventually degrade, because they are no longer under selective pressure. In 1994, using mathematical modeling, researchers from Indiana University determined that once a biological trait is lost, the corresponding genes can be “reactivated” with reasonable probability over time scales of five hundred thousand to six million years. But once a time span of ten million years has transpired, unexpressed genes and dormant developmental pathways become permanently lost.2

In 2000, a scientific team from the University of Oregon offered a complementary perspective on the timescale for evolutionary reversals when they calculated how long it takes for a duplicated gene to lose function.3 (Duplicated genes serve as a proxy for dormant genes rendered useless because the trait they encode has been lost.) According to the evolutionary paradigm, once a gene becomes duplicated, it is no longer under the influence of natural selection. That is, it undergoes neutral evolution, and eventually becomes silenced as mutations accrue. As it turns out, the half-life for this process is approximately four million years. To put it another way, sixteen to twenty-four million years after the duplication event, the duplicated gene will have completely lost its function. Presumably, this result applies to dormant, unexpressed genes rendered unnecessary because the trait they specify is lost.
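The arithmetic behind that restatement is worth spelling out; this is a back-of-the-envelope reading of the numbers as given here, not a calculation taken from the original paper. If a dormant duplicated gene retains function with probability

$$P(t) = \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} \approx 4 \text{ million years},$$

then after 16 million years (four half-lives) only about $(1/2)^4 \approx 6\%$ of such genes would remain functional, and after 24 million years (six half-lives) only about $(1/2)^6 \approx 1.6\%$, which amounts to effectively complete loss.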

Both scenarios assume neutral evolution and the accumulation of mutations in a clocklike manner. But what if the loss of gene function is advantageous? Collaborative work by researchers from Harvard University and NYU in 2007 demonstrated that loss of gene function can take place on the order of about one million years if natural selection influences gene loss.4 This research team studied the loss of eyes in the Mexican tetra, a cave fish. Because these fish live in a dark cave environment, eyes offer them no benefit. The team discovered that eye reduction offers an advantage for these fish because of the high metabolic cost associated with maintaining eyes. The reduced metabolic cost associated with eye loss accelerates the loss of gene function through the operation of natural selection.

Based on these three studies, it is reasonable to conclude that once a trait has been lost, the time limit for evolutionary reversals is on the order of about 20 million years.

The very nature of evolutionary mechanisms and the constraints of genetic mutations make it extremely improbable that evolutionary processes would allow an organism to revert to an ancestral state or to recover a lost biological trait. You can’t go home again.

Violations of Dollo’s Law

Despite this expectation, over the course of the last several years, researchers have uncovered several instances in which Dollo’s Law has been violated. A brief description of a handful of these occurrences follows:

The re-evolution of mandibular teeth in the frog genus Gastrotheca. This group is the only one that includes living frogs with true teeth on the lower jaw. When examined from an evolutionary framework, mandibular teeth were present in ancient frogs and then lost in the ancestor of all living frogs. It also looks as if teeth had been absent in frogs for 225 million years before they reappeared in Gastrotheca.5

The re-evolution of oviparity in sand boas. When viewed from an evolutionary perspective, it appears as if live birth (viviparity) evolved from egg-laying (oviparity) behaviors in reptiles several times. For example, estimates indicate that this evolutionary transition has occurred in snakes at least thirty times. As a case in point, there are 41 species of boas in the Old and New Worlds that give live birth. Yet, two recently described sand boas, the Arabian sand boa (Eryx jayakari) and the Saharan sand boa (Eryx muelleri), lay eggs. Phylogenetic analysis carried out by researchers from Yale University indicates that egg-laying in these two species of sand boas re-evolved 60 million years after the transition to viviparity took place.6

The re-evolution of rotating sex combs in Drosophila. Sex combs are modified bristles unique to male fruit flies, used for courtship and mating. Unlike simple transverse sex combs, rotating sex combs form when several rows of bristles undergo a rotation of ninety degrees. In the ananassae fruit fly group, most of the twenty or so species have simple transverse sex combs, with Drosophila bipectinata and Drosophila parabipectinata the two exceptions. These fruit fly species possess rotating sex combs. Phylogenetic analysis conducted by investigators from the University of California, Davis indicates that the rotating sex combs in these two species re-evolved twelve million years after being lost.7

The re-evolution of sexuality in mites belonging to the taxon Crotoniidae. Mites exhibit a wide range of reproductive modes, including parthenogenesis. In fact, this means of reproduction is prominent in the group Oribatida, clustering into two subgroups that display parthenogenesis almost exclusively. However, residing within one of these clusters is the taxon Crotoniidae, which displays sexual reproduction. Based on an evolutionary analysis, a team of German researchers concluded that this group re-evolved the capacity for sexual reproduction.8

The re-evolution of shell coiling in limpets. From an evolutionary perspective, the coiled shell has been lost in gastropod lineages numerous times, producing a limpet shape consisting of a cap-shaped shell and a large foot. Evolutionary biologists have long thought that the loss of the coiled shell represents an evolutionary dead end. However, researchers from Venezuela have shown that coiled shell morphology re-evolved at least once in calyptraeids, 20 to 100 million years after its loss.9

This short list gives just a few recently discovered examples of Dollo’s Law violations. Surveying the scientific literature, evolutionary biologist J. J. Wiens identified an additional eight examples in which Dollo’s Law was violated and determined that in all cases the lost trait reappeared after at least 20 million years had passed and in some instances after 120 million years had transpired.10

Violation of Dollo’s Law and the Theory of Evolution

Given that the evolutionary paradigm predicts that re-evolution of traits should not occur after the trait has been lost for twenty million years, the numerous discoveries of Dollo’s Law violations provide a basis for skepticism about the capacity of the evolutionary paradigm to fully account for life’s history. The problem is likely worse than it initially appears. J. J. Wiens points out that Dollo’s Law violations may be more widespread than imagined, but difficult to detect for methodological reasons.11

In response to this serious problem, evolutionary biologists have offered two ways to account for Dollo’s Law violations.12 The first is to question the validity of the evolutionary analyses that expose the violations. To put it another way, these scientists claim that the recently identified Dollo’s Law violations are artifacts of the evolutionary analysis, not real violations. However, this work-around is unconvincing. The evolutionary biologists who discovered the different examples of Dollo’s Law violations were aware of this complication and made painstaking efforts to ensure the validity of the analyses they performed.

Other evolutionary biologists argue that some genes and developmental modules serve more than one function. So, even though the trait specified by a gene or a developmental module is lost, the gene or module remains intact because it serves other roles. This retention makes it possible for traits to re-evolve, even after a hundred million years. Though reasonable, this explanation must still be viewed as speculative. Evolutionary biologists have yet to apply the same mathematical rigor to this explanation as they have when estimating the timescale for loss of function in dormant genes. Such calculations are critical given the expansive timescales involved in some of the Dollo’s Law violations.

This response also neglects the fact that, given the nature of evolutionary processes, genes and developmental pathways will continue to evolve under the auspices of natural selection once a trait is lost. Freed from the constraints of the lost function, the genes and developmental modules experience new evolutionary possibilities previously unavailable to them. The more functional roles a gene or developmental module assumes, the less freely these systems can evolve. Shedding one of their roles increases the likelihood that these genes and developmental pathways will become modified as the evolutionary process explores the new space available to it. In this scenario, it is reasonable to think that natural selection could modify the genes and developmental modules to such an extent that the lost trait would be just as unlikely to re-evolve as it would be if gene loss were a consequence of neutral evolution. In fact, the study of eye loss in the Mexican tetra suggests that the modification of these genes and developmental modules could occur at a faster rate if governed by natural selection rather than neutral evolution.

Violation of Dollo’s Law and the Case for Creation

While Dollo’s Law violations are problematic for the evolutionary paradigm, the re-evolution—or perhaps, more appropriately, the reappearance—of the same biological traits after their disappearance makes sense from a creation model/intelligent design perspective. The reappearance of biological systems could be understood as the work of the Creator. It is not unusual for engineers to reuse the same design or to revisit a previously used design feature in a new prototype. While there is an irreversibility to the evolutionary process, designers are not constrained in that way and can freely return to old designs.

Dollo’s Law violations are at home in a creation model, highlighting the value of this approach to understanding life’s history.


  1. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (New York: W.W. Norton, 2015), 94.
  2. Charles R. Marshall, Elizabeth C. Raff, and Rudolf A. Raff, “Dollo’s Law and the Death and Resurrection of Genes,” Proceedings of the National Academy of Sciences USA 91 (December 6, 1994): 12283–87.
  3. Michael Lynch and John S. Conery, “The Evolutionary Fate and Consequences of Duplicate Genes,” Science 290 (November 10, 2000): 1151–54, doi:10.1126/science.290.5494.1151.
  4. Meredith Protas et al., “Regressive Evolution in the Mexican Cave Tetra, Astyanax mexicanus,” Current Biology 17 (March 6, 2007): 452–54, doi:10.1016/j.cub.2007.01.051.
  5. John J. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs after More than 200 Million Years, and Re-evaluating Dollo’s Law,” Evolution 65 (May 2011): 1283–96, doi:10.1111/j.1558-5646.2011.01221.x.
  6. Vincent J. Lynch and Günter P. Wagner, “Did Egg-Laying Boas Break Dollo’s Law? Phylogenetic Evidence for Reversal to Oviparity in Sand Boas (Eryx: Boidae),” Evolution 64 (January 2010): 207–16, doi:10.1111/j.1558-5646.2009.00790.x.
  7. Thaddeus D. Seher et al., “Genetic Basis of a Violation of Dollo’s Law: Re-Evolution of Rotating Sex Combs in Drosophila bipectinata,” Genetics 192 (December 1, 2012): 1465–75, doi:10.1534/genetics.112.145524.
  8. Katja Domes et al., “Reevolution of Sexuality Breaks Dollo’s Law,” Proceedings of the National Academy of Sciences USA 104 (April 24, 2007): 7139–44, doi:10.1073/pnas.0700034104.
  9. Rachel Collin and Roberto Cipriani, “Dollo’s Law and the Re-Evolution of Shell Coiling,” Proceedings of the Royal Society B 270 (December 22, 2003): 2551–55, doi:10.1098/rspb.2003.2517.
  10. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
  11. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
  12. Rachel Collin and Maria Pia Miglietta, “Reversing Opinions on Dollo’s Law,” Trends in Ecology and Evolution 23 (November 2008): 602–9, doi:10.1016/j.tree.2008.06.013.
Reprinted with permission by the author
Original article at: