But Do Watches Replicate? Addressing a Logical Challenge to the Watchmaker Argument

By Fazale Rana – January 22, 2020

Were things better in the past than they are today? It depends on who you ask.

Without question, there are some things that were better in years gone by. And, clearly, there are some historical attitudes and customs that, today, we find hard to believe our ancestors considered to be an acceptable part of daily life.

It isn’t just attitudes and customs that change over time. Ideas change, too—some for the better, some for the worse. Consider the way doing science has evolved, particularly the study of biological systems. Was the way we approached the study of biological systems better in the past than it is today?

It depends on who you ask.

As an old-earth creationist and intelligent design proponent, I think the approach biologists took in the past was better than the one they take today, for one simple reason: prior to Darwin, teleology was central to biology. In the late 1700s and early to mid-1800s, life scientists viewed biological systems as the product of a Mind. Consequently, design was front and center in biology.

As part of the Darwinian revolution, teleology was cast aside. Mechanism replaced agency and design was no longer part of the construct of biology. Instead of reflecting the purposeful design of a Mind, biological systems were now viewed as the outworking of unguided evolutionary mechanisms. For many people in today’s scientific community, biology is better for it.

Prior to Darwin, the ideas shaped by thinkers (such as William Paley) and biologists (such as Sir Richard Owen) took center stage. Today, their ideas have been abandoned and are often lampooned.

But, advances in my areas of expertise (biochemistry and origins-of-life research) justify a return to the design hypothesis, indicating that there may well be a role for teleology in biology. In fact, as I argue in my book The Cell’s Design, the latest insights into the structure and function of biomolecules bring us full circle to the ideas of William Paley (1743-1805), revitalizing his Watchmaker argument for God’s existence.

In my view, many examples of molecular-level biomachinery stand as strict analogs to human-made machinery in terms of architecture, operation, and assembly. The biomachines found in the cell’s interior reveal a diversity of form and function that mirrors the diversity of designs produced by human engineers. The one-to-one relationship between the parts of man-made machines and the molecular components of biomachines is startling (e.g., the flagellum’s hook). I believe Paley’s case continues to gain strength as biochemists continue to discover new examples of biomolecular machines.

The Skeptics’ Challenge

Despite the powerful analogy that exists between machines produced by human designers and biomolecular machines, many skeptics continue to challenge the revitalized Watchmaker argument on logical grounds, arguing in the same vein as David Hume.1 These skeptics assert that significant and fundamental differences exist between biomachines and human creations.

In a recent interaction on Twitter, a skeptic raised just such an objection. Here is what he wrote:

“Do [objects and machines designed by humans] replicate with heritable variation? Bad analogy, category mistake. Same one Paley made with his watch on the heath centuries ago.”

In other words, biological systems replicate, whereas devices and artifacts made by human beings don’t. For the skeptic, this difference is so fundamental that it undermines the analogy between human designs and biological systems (in general) and biomolecular machines (in particular), invalidating the conclusion that life must stem from a Mind.

This is not the first time I have encountered this objection. Still, I don’t find it compelling because it fails to take into account man-made machines that do, indeed, replicate.

Von Neumann’s Universal Constructor

In the 1940s, mathematician, physicist, and computer scientist John von Neumann (1903–1957) designed a hypothetical machine called a universal constructor. This machine is a conceptual apparatus that can take materials from the environment and build any machine, including itself. The universal constructor requires instructions to build the desired machines and to build itself. It also requires a supervisory system that can switch back and forth between using the instructions to build other machines and copying the instructions prior to the replication of the universal constructor.

Von Neumann’s universal constructor is a conceptual apparatus, but today researchers are actively trying to design and build self-replicating machines.2 Much work needs to be done before self-replicating machines are a reality. Nevertheless, one day machines will be able to reproduce, making copies of themselves. To put it another way, reproduction isn’t necessarily a quality that distinguishes machines from biological systems.
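The key trick in von Neumann’s scheme—using the instructions in two ways, first interpreting them to build and then copying them verbatim—has a well-known software analog: a quine, a program whose output is its own source code. The sketch below is purely illustrative and is not part of von Neumann’s formalism; the choice of Python is mine.

```python
# A minimal self-replicating program (a "quine"). The string plays the
# role of von Neumann's "instructions": it is first interpreted (formatted
# against itself) to rebuild the program, and in doing so it is also
# copied verbatim into the output. Running this program prints its own
# two lines of code.
source = 'source = %r\nprint(source %% source)'
print(source % source)
```

The same string serves as both the blueprint and part of the thing built, which is the structural move the universal constructor relies on to avoid an infinite regress of descriptions.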

It is interesting to me that a description of von Neumann’s universal constructor bears remarkable similarity to a description of a cell. In fact, in the context of the origin-of-life problem, astrobiologists Paul Davies and Sara Imari Walker noted the analogy between the cell’s information systems and von Neumann’s universal constructor.3 Davies and Walker think that this analogy is key to solving the origin-of-life problem. I would agree. However, Davies and Walker support an evolutionary origin of life, whereas I maintain that the analogy between cells and von Neumann’s universal constructor adds vigor to the revitalized Watchmaker argument and, in turn, the scientific case for a Creator.

In other words, the reproduction objection to the Watchmaker argument has little going for it. Self-replication is not the basis for viewing biomolecular machines as fundamentally dissimilar to machines created by human designers. Instead, self-replication stands as one more machine-like attribute of biochemical systems. It also highlights the sophistication of biological systems compared to systems produced by human designers. We are a far distance away from creating machines that are as sophisticated as the machines found inside the cell. Nevertheless, as we continue to move in that direction, I think the case for a Creator will become even more compelling.

Who knows? With insights such as these maybe one day we will return to the good old days of biology, when teleology was paramount.


Biomolecular Machines and the Watchmaker Argument

Responding to Challenges to the Watchmaker Argument

  1. “Whenever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty.” David Hume, “Dialogues Concerning Natural Religion,” in Classics of Western Philosophy, 3rd ed., ed. Steven M. Cahn, (1779; repr., Indianapolis: Hackett, 1990), 880.
  2. For example, Daniel Mange et al., “Von Neumann Revisited: A Turing Machine with Self-Repair and Self-Reproduction Properties,” Robotics and Autonomous Systems 22 (1997): 35–58, https://doi.org/10.1016/S0921-8890(97)00015-8; Jean-Yves Perrier, Moshe Sipper, and Jacques Zahnd, “Toward a Viable, Self-Reproducing Universal Computer,” Physica D: Nonlinear Phenomena 97, no. 4 (October 15, 1996): 335–52, https://doi.org/10.1016/0167-2789(96)00091-7; Umberto Pesavento, “An Implementation of von Neumann’s Self-Reproducing Machine,” Artificial Life 2, no. 4 (Summer 1995): 337–54, https://doi.org/10.1162/artl.1995.2.4.337.
  3. Sara Imari Walker and Paul C. W. Davies, “The Algorithmic Origins of Life,” Journal of the Royal Society Interface 10 (2013), doi:10.1098/rsif.2012.0869.

Reprinted with permission by the author

Original article at:


The Flagellum’s Hook Connects to the Case for a Creator

By Fazale Rana – January 8, 2020

What would you say is the most readily recognizable scientific icon? Is it DNA, a telescope, or maybe a test tube?


Figure 1: Scientific Icons. Image credit: Shutterstock

Marketing experts recognize the power of icons. When used well, icons prompt consumers to instantly identify a brand or product. They can also communicate a powerful message with a single glance.

Though many skeptics question whether it’s science at all, the intelligent design movement has identified a powerful icon that communicates its message. Today, when most people see an image of the bacterial flagellum, they immediately think: Intelligent Design.

This massive protein complex powerfully communicates sophisticated engineering that could only come from an Intelligent Agent, and along these lines it serves as a compelling piece of evidence for a Creator’s handiwork. Careful study of its molecular architecture and operation provides detailed evidence that an Intelligent Agent must be responsible for biochemical systems and, hence, the origin of life. As it turns out, the more we learn about the bacterial flagellum, the more evident it becomes that a Creator must have played a role in the origin and design of life—at least at the biochemical level—as new research from Japan illustrates.1

The Bacterial Flagellum

This massive protein complex looks like a whip extending from the bacterial cell surface. Some bacteria have only a single flagellum; others possess several. Rotation of the flagella allows the bacterial cell to navigate its environment in response to various chemical signals.


Figure 2: Typical Bacteria with Flagella. Image credit: Shutterstock

An ensemble of 30 to 40 different proteins makes up the typical bacterial flagellum. These proteins function in concert as a literal rotary motor. The flagellum’s components include a rotor, stator, drive shaft, bushing, universal joint, and propeller. It is essentially a molecular-sized electrical motor directly analogous to human-produced rotary motors. The rotation is powered by positively charged hydrogen ions flowing through the motor proteins embedded in the inner membrane.


Figure 3: The Bacterial Flagellum. Image credit: Wikipedia

The Bacterial Flagellum and the Revitalized Watchmaker Argument

Typically, when intelligent design proponents/creationists use the bacterial flagellum to make the case for a Creator, they focus the argument on its irreducibly complex nature. I prefer a different tack. I like to emphasize the eerie similarity between rotary motors created by human designers and nature’s bacterial flagella.

The bacterial flagellum is just one of a large number of protein complexes with machine-like attributes. (I devote an entire chapter to biomolecular machines in my book The Cell’s Design.) Collectively, these biomolecular machines can be deployed to revitalize the Watchmaker argument.

Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so too, life requires a Creator. Following Paley’s line of reasoning, a machine is emblematic of systems produced by intelligent agents. Biomolecular machines display the same attributes as human-crafted machines. Therefore, if the work of intelligent agents is necessary to explain the genesis of machines, shouldn’t the same be true for biochemical systems?

Skeptics inspired by atheist philosopher David Hume have challenged this simple, yet powerful, analogy. They argue that the analogy would be compelling only if there is a high degree of similarity between the objects that form the analogy. Skeptics have long argued that biochemical systems and machines are too dissimilar to make the Watchmaker argument work.

However, the striking similarity between the machine parts of the bacterial flagellum and human-made machines causes this objection to evaporate. New work on flagella by Japanese investigators lends yet more support to the Watchmaker analogy.

New Insights into the Structure and Function of the Flagellum’s Universal Joint

The flagellum’s universal joint (sometimes referred to as the hook) transfers the torque generated by the motor to the propeller. The research team wanted to develop a deeper understanding of the relationship between the molecular structure of the hook and how the structural features influence its function as a universal joint.

Composed of nearly 100 copies (monomers) of a protein called FlgE, the hook is a curved, tube-like structure with a hollow interior. FlgE monomers stack on top of each other to form a protofilament. Eleven protofilaments organize to form the hook’s tube, with the long axes of the protofilaments aligned along the long axis of the hook.

Each FlgE monomer consists of three domains, called D0, D1, and D2. The researchers discovered that when the FlgE monomers stack to form a protofilament, the D0, D1, and D2 domains of each of the monomers align along the length of the protofilament to form three distinct regions in the hook. These layers have been labeled the tube layer, the mesh layer, and the spring layer.

During the rotation of the flagellum, the protofilaments experience compression and extension. The movement of the domains, which changes their spatial arrangement relative to one another, mediates the compression and extension. These domain movements allow the hook to function as a universal joint that maintains a rigid tube shape against a twisting “force,” while concurrently transmitting torque from the motor to the flagellum’s filament as it bends along its axis.

Regardless of one’s worldview, it is hard not to marvel at the sophisticated and elegant design of the flagellum’s hook!

The Bacterial Flagellum and the Case for a Creator

If the Watchmaker argument holds validity, it seems reasonable to think that the more we learn about protein complexes, such as the bacterial flagellum, the more machine-like they should appear to be. This work by the Japanese biochemists bears out this assumption. The more we characterize biomolecular machines, the more reason we have to think that life stems from a Creator’s handiwork.

The dynamic properties of the hook assembly add to the Watchmaker argument (as applied to the bacterial flagellum). This structure is much more sophisticated and ingenious than the design of a typical universal joint crafted by human designers. Such elegance and ingenuity are exactly the attributes I would expect if a Creator played a role in the origin and design of life.

Message received, loud and clear.


The Bacterial Flagellum and the Case for a Creator

Can Intelligent Design Be Part of the Scientific Construct?

  1. Takayuki Kato et al., “Structure of the Native Supercoiled Flagellar Hook as a Universal Joint,” Nature Communications 10 (2019): 5295, doi:10.1038/s4146.

Reprinted with permission by the author

Original article at:


Mutations, Cancer, and the Case for a Creator

By Fazale Rana – December 11, 2019

Cancer. Perhaps no other word evokes more fear, anger, and hopelessness.

It goes without saying that cancer is an insidious disease. People who get cancer often die way too early. And even though a cancer diagnosis is no longer an immediate death sentence—thanks to biomedical advances—there are still many forms of cancer that are difficult to manage, let alone effectively treat.

Cancer also causes quite a bit of consternation for those of us who use insights from science to make a case for a Creator. From my vantage point, one of the most compelling reasons to think that a Creator exists and played a role in the origin and design of life is the elegant, sophisticated, and ingenious designs of biochemical systems. And yet, when I share this evidence with skeptics—and even seekers—I am often met with resistance in the form of the question: What about cancer?

Why Would God Create a World Where Cancer Is Possible?

In effect, this question typifies one of the most common—and significant—objections to the design argument. If a Creator is responsible for the designs found in biochemistry, then why are so many biochemical systems seemingly flawed, inelegant, and poorly designed?

The challenge cancer presents for the design argument carries an added punch. It’s one thing to cite the inefficiency of protein synthesis or the error-prone nature of the rubisco enzyme, but it’s quite another to describe the suffering of a loved one who died from cancer. There’s an emotional weight to the objection. These deaths feel horribly unjust.

Couldn’t a Creator design biochemistry so that a disease as horrific as cancer would never be possible—particularly if this Creator is all-powerful, all-knowing, and all-good?

I think it’s possible to present a good answer to the challenge that cancer (and other so-called bad designs) poses for the design argument. Recent insights published by a research duo from Cambridge University in the UK help make the case.1

A Response to the Bad Designs in Biochemistry and Biology

Because the “bad designs” challenge is so significant (and so frequently expressed), I devoted an entire chapter in The Cell’s Design to addressing the apparent imperfections of biochemical systems. My goal in that chapter was to erect a framework that comprehensively addresses this pervasive problem for the design argument.

In the face of this challenge it is important to recognize that many so-called biochemical flaws are not genuine flaws at all. Instead, they arise as the consequences of trade-offs. In their cellular roles, many biochemical systems face two (or more) competing objectives. Effectively managing these opposing objectives means that it is impossible for every aspect of the system to perform at an optimal level. Some features must be carefully rendered suboptimal to ensure that the overall system performs robustly under a wide range of conditions.

Cancer falls into this category. It is not a consequence of flawed biochemical designs. Instead, cancer reflects a trade-off between DNA repair and cell survival.

DNA Damage and Cancer

The etiology (cause) of most cancers is complex. While about 10 percent of cancers have a hereditary basis, the vast majority result from mutations to DNA caused by environmental factors.

Some of the damage to DNA stems from endogenous (internal) factors, such as water and oxygen in the cell. These materials cause hydrolysis and oxidative damage to DNA, respectively. Both types of damage can introduce mutations into this biomolecule. Exogenous chemicals (genotoxins) from the environment can also interact with DNA and cause damage leading to mutations. So does exposure to ultraviolet radiation and radioactivity from the environment.

Infectious agents such as viruses can also cause cancer. Again, these infectious agents cause genomic instability, which leads to DNA mutations.


Figure: Tumor Formation Process. Image credit: Shutterstock

In effect, DNA mutations are an inevitable consequence of the laws of nature, specifically the first and second laws of thermodynamics. These laws make possible the chemical structures and operations necessary for life to even exist. But, as a consequence, these same life-giving laws also undergird chemical and physical processes that damage DNA.

Fortunately, cells have the capacity to detect and repair damage to DNA. These DNA repair pathways are elaborate and sophisticated. They are the type of biochemical features that seem to support the case for a Creator. DNA repair pathways counteract the deleterious effects of DNA mutation by correcting the damage and preventing the onset of cancer.

Unfortunately, these DNA repair processes function incompletely. They fail to fully compensate for all of the damage that occurs to DNA. Consequently, over time, mutations accrue in DNA, leading to the onset of cancer. The inability of the cell’s machinery to repair all of the mutation-causing DNA damage and, ultimately, protect humans (and other animals) from cancer is precisely the thing that skeptics and seekers alike point to as evidence that counts against intelligent design.

Why would a Creator make a world where cancer is possible and then design cancer-preventing processes that are only partially effective?

Cancer: The Result of a Trade-Off

Even though mutations to DNA cause cancer, it is rare that a single mutation leads to the formation of a malignant cell type and, subsequently, tumor growth. Biomedical researchers have discovered that the onset of cancer involves a series of mutations to highly specific genes (dubbed cancer genes). The mutations that cause cells to transform into cancer cells are referred to as driver mutations. Researchers have also learned that most cells in the body harbor a vast number of mutations that have little or no biological consequence. These mutations are called passenger mutations.

As it turns out, there are thousands of passenger mutations in a typical cancer cell and only about ten driver mutations to so-called cancer genes. Biomedical investigators have also learned that many normal cells harbor both passenger and driver mutations without ever transforming. (It appears that other factors unrelated to DNA mutation play a role in causing a cancer cell to undergo extensive clonal expansion, leading to the formation of a tumor.)

What this means is that mutations to DNA are quite extensive, even in normal, healthy cells. But this factor prompts the question: Why is the DNA repair process so lackluster?

The research duo from Cambridge University speculates that DNA repair is so costly to cells—making extensive use of energy and cell resources—that to maintain pristine genomes would compromise cell survival. These researchers conclude that “DNA quality control pathways are fully functional but naturally permissive of mutagenesis even in normal cells.”2 And it seems as if the permissiveness of the DNA repair processes generally has little consequence, given that a vast proportion of the human genome consists of noncoding DNA.

Biomedical researchers have uncovered another interesting feature about the DNA repair processes. The processes are “biased,” with repairs taking place preferentially on the DNA strand (of the double helix) that codes for proteins and, hence, is transcribed. In other words, when DNA repair takes place it occurs where it counts the most. This bias displays an elegant molecular logic and rationale, strengthening the case for design.

Given that driver mutations are not in and of themselves sufficient to lead to tumor formation, the researchers conclude that cancer prevention pathways are quite impressive in the human body. They conclude, “Considering that an adult human has ~30 trillion cells, and only one cell develops into a cancer, human cells are remarkably robust at preventing cancer.”3

So, what about cancer?

Though cancer ravages the lives of so many people, it is not because of poorly designed, substandard biochemical systems. Given that we live in a universe that conforms to the laws of thermodynamics, cancer is inevitable. Despite this inevitability, organisms are designed to effectively ward off cancer.

Ironically, as we gain a better understanding of the process of oncogenesis (the development of tumors), we are uncovering more—not less—evidence for the remarkably elegant and ingenious designs of biochemical systems.

The insights by the research team from Cambridge University provide us with a cautionary lesson. We are often quick to declare a biochemical (or biological) feature as poorly designed based on incomplete understanding of the system. Yet, inevitably, as we learn more about the system we discover an exquisite rationale for why things are the way they are. Such knowledge is consistent with the idea that these systems stem from a Creator’s handiwork.

Still, this recognition does little to dampen the fear and frustration associated with a cancer diagnosis and the pain and suffering experienced by those who battle cancer (and their loved ones who stand on the sidelines watching the fight take place). But, whether we are a skeptic or a believer, we all should be encouraged by the latest insights developed by the Cambridge researchers. The more we understand about the cause and progression of cancers, the closer we are to one day finding cures to a disease that takes so much from us.

We can also take added encouragement from the powerful scientific case for a Creator’s existence. The Old and New Testaments teach us that the Creator revealed by scientific discovery has suffered on our behalf and will suffer alongside us—in the person of Christ—as we walk through the difficult circumstances of life.


Examples of Biochemical Trade-Offs

Evidence that Nonfunctional DNA Serves as a Mutational Buffer

  1. Serena Nik-Zainal and Benjamin A. Hall, “Cellular Survival over Genomic Perfection,” Science 366, no. 6467 (November 15, 2019): 802–03, doi:10.1126/science.aax8046.
  2. Nik-Zainal and Hall, 802–03.
  3. Nik-Zainal and Hall, 802–03.

Reprinted with permission by the author

Original article at:

Origin and Design of the Genetic Code: A One-Two Punch for Creation


By Fazale Rana – October 23, 2019

So, in the spirit of the endless debates that take place on sports talk radio, I ask: What duo is the greatest one-two punch in NBA history? Is it:

  • Kareem and Magic?
  • Kobe and Shaq?
  • Michael and Scottie?

Another confession: I am a science-faith junkie. I never tire when it comes to engaging in discussions about the interplay between science and the Christian faith. From my perspective, the most interesting facet of this conversation centers around the scientific evidence for God’s existence.

So, toward this end, I ask: What is the most compelling biochemical evidence for God’s existence? Is it:

  • The complexity of biochemical systems?
  • The eerie similarity between biomolecular motors and machines designed by human engineers?
  • The information found in DNA?

Without hesitation I would say it is actually another feature: the origin and design of the genetic code.

The genetic code is a biochemical code that consists of a set of rules defining the information stored in DNA. These rules specify the sequence of amino acids used by the cell’s machinery to synthesize proteins. The genetic code makes it possible for the biochemical apparatus in the cell to convert the information formatted as nucleotide sequences in DNA into information formatted as amino acid sequences in proteins.
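As a rough illustration of the rule set just described, the genetic code can be modeled as a lookup table from three-nucleotide codons to amino acids. This is my own toy sketch, not from the article; the table covers only a handful of the 64 codons (the assignments shown are the standard ones), and the function names are arbitrary.

```python
# A tiny excerpt of the standard genetic code: mRNA codons mapped to
# one-letter amino acid abbreviations. None marks a stop signal.
CODON_TABLE = {
    "AUG": "M",                                       # methionine (also "start")
    "UUU": "F", "UUC": "F",                           # phenylalanine (redundant codons)
    "GGU": "G", "GGC": "G", "GGA": "G", "GGG": "G",   # glycine
    "UAA": None, "UAG": None, "UGA": None,            # stop codons
}

def translate(mrna: str) -> str:
    """Convert a nucleotide sequence into an amino acid sequence,
    reading three letters at a time and halting at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3])
        if aa is None:  # stop codon (or a codon absent from this toy table)
            break
        protein.append(aa)
    return "".join(protein)
```

For example, `translate("AUGUUUGGAUAA")` reads AUG, UUU, GGA, then halts at the stop codon UAA. The redundancy visible in the table (several codons per amino acid) is part of what makes the code’s optimization discussed below so striking.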


Figure: A Depiction of the Genetic Code. Image credit: Shutterstock

In previous articles (see the Resources section), I discussed the code’s most salient feature, the one that I think points to a Creator’s handiwork: its multidimensional optimization. That optimization is so extensive that evolutionary biologists struggle to account for its origin, as illustrated by the work of biologist Steven Massey.1

Both the optimization of the genetic code and the failure of evolutionary processes to account for its design form a potent one-two punch, evincing the work of a Creator. Optimization is a marker of design, and if it can’t be accounted for through evolutionary processes, the design must be authentic—the product of a Mind.

Can Evolutionary Processes Generate the Genetic Code?

For biochemists working to understand the origin of the genetic code, its extreme optimization means that it is not the “frozen accident” that Francis Crick proposed in a classic paper titled “The Origin of the Genetic Code.”2

Many investigators now think that natural selection shaped the genetic code, producing its optimal properties. However, I question whether natural selection could evolve a genetic code with the degree of optimality displayed in nature. In The Cell’s Design (published in 2008), I cite the work of the late biophysicist Hubert Yockey in support of my claim.3 Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the universal genetic code found in nature. Yockey estimated that 6.3 x 10^15 seconds (200 million years) is the maximum time available for the code to originate. Natural selection would have to evaluate roughly 10^55 codes per second to find the universal genetic code. And even if the search time were extended for the entire duration of the universe’s existence, it would still require searching through 10^52 codes per second to find nature’s genetic code. Put simply, natural selection lacks the time to find the universal genetic code.
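Yockey’s figures amount to a simple division, which is easy to check. In the sketch below, the code count and search time are the numbers quoted in the text; the universe-age value is my own rough approximation, not a figure from the article.

```python
# Back-of-the-envelope check of the search-rate figures quoted above.
codes_to_search = 1.40e70   # alternative genetic codes (Yockey's estimate)
max_search_time = 6.3e15    # seconds, roughly 200 million years

# Codes natural selection would need to evaluate each second:
rate = codes_to_search / max_search_time
# about 2 x 10^54, i.e., on the order of 10^54-10^55 codes per second

# Even over the universe's entire history (~13.8 billion years,
# an approximate value supplied here, not taken from the article):
universe_age = 13.8e9 * 365.25 * 24 * 3600   # ~4.4 x 10^17 seconds
rate_universe = codes_to_search / universe_age
# about 3 x 10^52 codes per second
```

Either way, the required evaluation rate dwarfs anything a physical search process could plausibly sustain, which is the point of the argument.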

Researchers from Germany raised the same difficulty for evolution recently. Because of the genetic code’s multidimensional optimality, they concluded that “the optimality of the SGC [standard genetic code] is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.”4

Two More Evolutionary Mechanisms Considered

Massey reached a similar conclusion through a detailed analysis of two possible evolutionary mechanisms, both based on natural selection.5

If the genetic code evolved, then alternate genetic codes would have to have been generated and evaluated until the optimal genetic code found in nature was discovered. This process would require that coding assignments change. Biochemists have identified two mechanisms that could contribute to coding reassignments: (1) codon capture and (2) an ambiguous intermediate mechanism. Massey tested both mechanisms.

Massey discovered that neither mechanism can evolve the optimal genetic code. When he ran computer simulations of the evolutionary process using codon capture as a mechanism, they all ended in failure, unable to find a highly optimized genetic code. When Massey ran simulations with the ambiguous intermediate mechanism, he could evolve an optimized genetic code. But he didn’t view this result as a success: he learned that it takes 20 to 30 codon reassignments to produce a genetic code with the same degree of optimization as the genetic code found in nature.

The problem with this evolutionary mechanism is that codon reassignments appear to be rare in nature, judging from the few deviant genetic codes thought to have evolved since the origin of the last common ancestor. On top of this problem, the structure of the optimized codes that evolved via the ambiguous intermediate mechanism differs from the structure of the genetic code found in nature. In short, the result obtained via the ambiguous intermediate mechanism is unrealistic.

As Massey points out, “The evolution of the SGC remains to be deciphered, and constitutes one of the greatest challenges in the field of molecular evolution.”6

Making Sense of Explanatory Models

In the face of these discouraging results for the evolutionary paradigm, Massey concludes that perhaps another evolutionary force apart from natural selection shaped the genetic code. One idea Massey thinks has merit is the Coevolution Theory proposed by J. T. Wong. Wong argued that the genetic code evolved in conjunction with the evolution of biosynthetic pathways that produce amino acids. Yet, Wong’s theory doesn’t account for the extreme optimization of the genetic code in nature. And, in fact, the relationships between coding assignments and amino acid biosynthesis appear to result from a statistical artifact, and nothing more.7 In other words, Wong’s ideas don’t work.

That brings us back to the question of how to account for the genetic code’s optimization and design.

As I see it, in the same way that two NBA superstars work together to help produce a championship-caliber team, the genetic code’s optimization and the failure of every evolutionary model to account for it form a potent one-two punch that makes a case for a Creator.

And that is worth talking about.


  1. Steven E. Massey, “Searching of Code Space for an Error-Minimized Genetic Code via Codon Capture Leads to Failure, or Requires at Least 20 Improving Codon Reassignments via the Ambiguous Intermediate Mechanism,” Journal of Molecular Evolution 70, no. 1 (January 2010): 106–15, doi:10.1007/s00239-009-9313-7.
  2. F. H. C. Crick, “The Origin of the Genetic Code,” Journal of Molecular Biology 38, no. 3 (December 28, 1968): 367–79, doi:10.1016/0022-2836(68)90392-6.
  3. Hubert P. Yockey, Information Theory and Molecular Biology (Cambridge, UK: Cambridge University Press, 1992), 180–83.
  4. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
  5. Massey, “Searching of Code Space.”
  6. Massey, “Searching of Code Space.”
  7. Ramin Amirnovin, “An Analysis of the Metabolic Theory of the Origin of the Genetic Code,” Journal of Molecular Evolution 44, no. 5 (May 1997): 473–76, doi:10.1007/PL00006170.

Reprinted with permission by the author

Original article at:

New Insights into Genetic Code Optimization Signal Creator’s Handiwork


By Fazale Rana – October 16, 2019

I knew my career as a baseball player would be short-lived when, as a thirteen-year-old, I made the transition from Little League to the Babe Ruth League, which uses official Major League Baseball rules. Suddenly there were a whole lot more rules for me to follow than I ever had to think about in Little League.

Unlike in Little League, at the Babe Ruth level the hitter and base runners have to know what the other is going to do. Usually, the third-base coach is responsible for this communication. Before each pitch is thrown, the third-base coach uses a series of hand signs to relay instructions to the hitter and base runners.


Credit: Shutterstock

My inability to pick up the signs from the third-base coach was a harbinger of my doomed baseball career. I did okay when I was on base, but I struggled to pick up his signs when I was at bat.

The issue wasn’t that there were too many signs for me to memorize. Rather, I struggled to recognize the indicator sign.

To prevent the opposing team from stealing the signs, it is common for the third-base coach to use an indicator sign. Each time he relays instructions, the coach randomly runs through a series of signs. At some point in the sequence, the coach gives the indicator sign. When he does that, it means that the next signal is the actual sign.
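The indicator-sign protocol is, in effect, a small decoding algorithm: scan the sequence for the indicator, then take the sign that immediately follows it. A minimal sketch in Python, with hypothetical sign names:

```python
def decode_signs(sequence, indicator):
    """Return the live sign: the one given immediately after the indicator."""
    for i, sign in enumerate(sequence):
        if sign == indicator and i + 1 < len(sequence):
            return sequence[i + 1]
    return None  # no indicator seen, so every sign was a decoy

# Hypothetical signs: the coach touches cap, belt, ear, cap, then wrist.
# "ear" is the indicator, so the sign that counts is the one right after it.
print(decode_signs(["cap", "belt", "ear", "cap", "wrist"], indicator="ear"))
# -> cap
```

Everything before the indicator (and everything after the live sign) is noise meant to defeat sign-stealing, which is exactly what made the scheme hard for a thirteen-year-old to follow in real time.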

All of this activity was simply too much for me to process. When I was at the plate, I couldn’t consistently keep up with the third-base coach. It got so bad that a couple of times the third-base coach had to call time-out and have me walk up the third-base line, so he could whisper to me what I was to do when I was at the plate. It was a bit humiliating.

Codes Come from Intelligent Agents

The signs relayed by a third-base coach to the hitter and base runners are a type of code—a set of rules used to convert and convey information across formats.

Experience teaches us that it takes intelligent agents, such as baseball coaches, to devise codes, even those that are rather basic in their design. The more sophisticated a code, the greater the level of ingenuity required to develop it.

Perhaps the most sophisticated codes of all are those that can detect errors during data transmission.

I sure could have used a code like that when I played baseball. It would have helped me if the hand signals used by the third-base coach were designed in such a way that I could always understand what he wanted, even if I failed to properly pick up the indicator signal.

The Genetic Code

As it turns out, just such a code exists in nature. It is one of the most sophisticated codes known to us—far more sophisticated than the best codes designed by the brightest computer engineers in the world. In fact, this code resides at the heart of biochemical systems. It is the genetic code.

This biochemical code consists of a set of rules that define the information stored in DNA. These rules specify the sequence of amino acids that the cell’s machinery uses to build proteins. In this process, information formatted as nucleotide sequences in DNA is converted into information formatted as amino acid sequences in proteins.
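This set of rules can be pictured as a lookup table. The entries below are a small excerpt of the real 64-entry codon table (written in the DNA alphabet), used to translate a short stretch of sequence into amino acids:

```python
# Five entries from the real codon table; the full table has 64,
# covering every three-nucleotide combination.
CODON_TABLE = {
    "ATG": "M",  # methionine (also the start signal)
    "TGG": "W",  # tryptophan
    "GCT": "A",  # alanine
    "AAA": "K",  # lysine
    "TAA": "*",  # stop
}

def translate(dna):
    """Convert a nucleotide sequence to an amino acid sequence, codon by codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":  # stop signal ends the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGGCTAAATGGTAA"))  # -> MAKW
```

Reading three letters at a time and stopping at the stop codon mirrors, in miniature, what the cell’s translation machinery does with messenger RNA.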

Moreover, the genetic code is universal, meaning that all life on Earth uses it.1

Biochemists marvel at the design of the genetic code, in part because its structure displays exquisite optimization. This optimization includes the capacity to dramatically curtail errors that result from mutations.

Recently, a team from Germany identified another facet of the genetic code that is highly optimized, further highlighting its remarkable qualities.2

The Optimal Genetic Code

As I describe in The Cell’s Design, scientists from Princeton University and the University of Bath (UK) quantified the error-minimization capacity of the genetic code during the 1990s. Their work indicated that the universal genetic code is optimized to withstand the potentially harmful effects of substitution mutations better than virtually any other conceivable genetic code.3
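The logic of such comparisons can be sketched in a toy form. The code below is an illustration only, not the published analysis: the amino acid property values are hypothetical stand-ins for a measure such as polar requirement, and the six-codon code is far smaller than the real one. It scores a fixed miniature code against random reassignments by the average property change caused by single-nucleotide substitutions:

```python
import itertools
import random
import statistics

BASES = "ACGU"
# Hypothetical property values standing in for a real measure of
# amino acid character (e.g., polar requirement).
PROPERTY = {"F": 5.0, "L": 4.9, "S": 7.5, "Y": 5.4, "C": 4.8, "W": 5.2}

def error_cost(code):
    """Mean squared property change over all single-nucleotide substitutions
    that turn one codon in the code into another codon in the code."""
    costs = []
    for codon, amino_acid in code.items():
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                if mutant in code:
                    diff = PROPERTY[amino_acid] - PROPERTY[code[mutant]]
                    costs.append(diff * diff)
    return statistics.mean(costs)

# Fix one toy code, then score many random reassignments of the same
# amino acids to the same codons.
codons = ["".join(c) for c in itertools.product(BASES, repeat=3)][:6]
code = dict(zip(codons, "FLSYCW"))
random.seed(0)
amino_acids = list("FLSYCW")
trials = 1000
lower = 0
for _ in range(trials):
    random.shuffle(amino_acids)
    if error_cost(dict(zip(codons, amino_acids))) < error_cost(code):
        lower += 1
print(f"{lower} of {trials} random codes had a lower error cost")
```

In the published studies, the same kind of tally, run over enormous samples of conceivable codes with realistic property measures, is what showed the natural code outperforming virtually all alternatives.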

In 2018, another team of researchers from Germany demonstrated that the universal genetic code is also optimized to withstand the harmful effects of frameshift mutations—again, better than other conceivable codes.4

In 2007, researchers from Israel showed that the genetic code is also optimized to harbor overlapping codes.5 This is important because, in addition to the genetic code, regions of DNA harbor other overlapping codes that direct the binding of histone proteins, transcription factors, and the machinery that splices genes after they have been transcribed.

The Robust Optimality of the Genetic Code

With these previous studies serving as a backdrop, the German research team wanted to probe more deeply into the genetic code’s optimality. These researchers focused on potential optimality of three properties of the genetic code: (1) resistance to harmful effects of substitution mutations, (2) resistance to harmful effects of frameshift mutations, and (3) capacity to support overlapping genes.

As with earlier studies, the team assessed the optimality of the naturally occurring genetic code by comparing its performance with sets of random codes that are conceivable alternatives. For all three property comparisons, they discovered that the natural (or standard) genetic code (SGC) displays a high degree of optimality. The researchers write, “We find that the SGC’s optimality is very robust, as no code set with no optimised properties is found. We therefore conclude that the optimality of the SGC is a robust feature across all evolutionary hypotheses.”6

On top of this insight, the research team adds another dimension to the multidimensional optimality of the genetic code: its capacity to support overlapping genes.

Interestingly, the researchers also note that the results of their work raise significant challenges to evolutionary explanations for the genetic code, pointing to the code’s multidimensional optimality that is extreme in all dimensions. They write:

We conclude that the optimality of the SGC is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.7

While natural selection isn’t omnipotent, a transcendent Creator would be, and could account for the genetic code’s extreme optimality.

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that our common experience teaches us that codes come from minds. It’s true on the baseball diamond and true in the computer lab. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind—a conclusion that gains additional support when we consider the code’s sophistication and exquisite optimization.

The genetic code’s ability to withstand errors that arise from substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes and overlapping genes, seems to defy naturalistic explanation.

As a neophyte playing baseball, I could barely manage the simple code the third-base coach used. How mind-boggling it is for me when I think of the vastly superior ingenuity and sophistication of the universal genetic code.

And, just like the hitter and base runner work together to produce runs in baseball, the elegant design of the genetic code and the inability of evolutionary processes to account for its extreme multidimensional optimization combine to make the case that a Creator played a role in the origin and design of biochemical systems.

With respect to the case for a Creator, the insight from the German research team hits it out of the park.


  1. Some organisms have a genetic code that deviates from the universal code in one or two of the coding assignments. Presumably, these deviant codes originate when the universal genetic code evolves, altering coding assignments.
  2. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
  3. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33, no. 5 (November 1991): 412–17, doi:10.1007/BF02103132; Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281, no. 5375 (July 17, 1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47, no. 3 (September 1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17, no. 4 (April 2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (May 21, 2018): e4825, doi:10.7717/peerj.4825.
  5. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research 17, no. 4 (April 2007): 405–12, doi:10.1101/gr.5987307.
  6. Wichmann and Ardern, “Optimality.”
  7. Wichmann and Ardern, “Optimality.”

Reprinted with permission by the author

Original article at:

Is the Optimal Set of Protein Amino Acids Purposed by a Mind?


By Fazale Rana – October 9, 2019

To get our assays to work properly, we had to carefully design and optimize each test before executing it with exacting precision in the laboratory. Optimizing these assays was no easy feat. It could take weeks of painstaking effort to get the protocols just right.

My experiences working in the lab taught me some important lessons that I carry with me today as a Christian apologist. One of these lessons has to do with optimization. Optimized systems don’t just happen, whether they are laboratory procedures, manufacturing operations, or well-designed objects or devices. Instead, optimization results from the insights and efforts of intelligent agents, and therefore serves as a sure indicator of intelligent design.

As it turns out, nearly every biochemical system appears to be highly optimized. For me, this fact indicates that life stems from a Mind. And as life scientists continue to characterize biochemical systems, they keep discovering more and more examples of biochemical optimization, as recent work by a large team of collaborators working at the Earth-Life Science Institute (ELSI) in Tokyo, Japan, illustrates.1

These researchers uncovered more evidence that the twenty amino acids encoded by the genetic code possess the optimal set of physicochemical properties. If not for these properties, it would not be possible for the cell to build proteins that could support the wide range of activities required to sustain living systems. This insight gives us important perspective into the structure-function relationships of proteins. It also has theological significance, adding to the biochemical case for a Creator.

Before describing the ELSI team’s work and its theological implications, a little background might be helpful for some readers. For those who are familiar with basic biochemistry, just skip ahead to Why These Twenty Amino Acids?

Background: Protein Structure

Proteins are large, complex molecules that play a key role in virtually all of the cell’s operations. Biochemists have long known that the three-dimensional structure of a protein dictates its function. Because proteins are such large, complex molecules, biochemists categorize protein structure into four different levels: primary, secondary, tertiary, and quaternary structures.


Figure 1: The Four Levels of Protein Structure. Image credit: Shutterstock

  • A protein’s primary structure is the linear sequence of amino acids that make up each of its polypeptide chains.
  • The secondary structure refers to short-range three-dimensional arrangements of the polypeptide chain’s backbone arising from the interactions between chemical groups that make up its backbone. Three of the most common secondary structures are the random coil, alpha (α) helix, and beta (β) pleated sheet.
  • Tertiary structure describes the overall shape of the entire polypeptide chain and the location of each of its atoms in three-dimensional space. The structure and spatial orientation of the chemical groups that extend from the protein backbone are also part of the tertiary structure.
  • Quaternary structure arises when several individual polypeptide chains interact to form a functional protein complex.

Background: Amino Acids

The building blocks of proteins are amino acids. These compounds are characterized by having both an amino group and a carboxylic acid bound to a central carbon atom. Also bound to this carbon are a hydrogen atom and a substituent that biochemists call an R group.


Figure 2: The Structure of a Typical Amino Acid. Image credit: Shutterstock

The R group determines the amino acid’s identity. For example, if the R group is hydrogen, the amino acid is called glycine. If the R group is a methyl group, the amino acid is called alanine.

Close to 150 amino acids are found in proteins. But only 19 amino acids (plus 1 imino acid, called proline) are specified by the genetic code. Biochemists refer to these 20 as the canonical set.


Figure 3: The Protein-Forming Amino Acids. Image credit: Shutterstock

A protein’s primary structure forms when amino acids react with each other to form a linear chain, with the amino group of one amino acid combining with the carboxylic acid of another to form an amide linkage. (Sometimes biochemists call the linkage a peptide bond.)


Figure 4: The Chemical Linkage between Amino Acids. Image credit: Shutterstock

The repeating amide linkages along the amino acid chain form the protein’s backbone. The amino acids’ R groups extend from the backbone, creating a distinct physicochemical profile along the protein chain for each unique amino acid sequence. To a first approximation, this unique physicochemical profile dictates the protein’s higher-order structures and, hence, its function.

Why These Twenty Amino Acids?

Research has revealed that the set of amino acids used to build proteins is universal. In other words, the proteins found in every organism on Earth are made up of the same canonical set.

Biochemists have long wondered: Why these 20 amino acids?

In the early 1980s, biochemists discovered that an exquisite molecular rationale undergirds the amino acid set used to make proteins.2 Every aspect of amino acid structure has to be precisely the way it is for life to be possible. On top of that, biochemists concluded that the set of 20 amino acids possesses the “just-right” physical and chemical properties that vary evenly and uniformly across a broad range of size, charge, and hydrophobicity (aversion to water). In fact, the amino acids selected for proteins appear to form a uniquely optimal set of 20 compared to random sets of amino acids.3

With these previous studies as a backdrop, the ELSI investigators wanted to develop a better understanding of the optimal nature of the universal set of amino acids used to build proteins. They also wanted to gain insight into the origin of the canonical set.

To do this they used a library of 1,913 amino acids (including the 20 amino acids that make up the canonical set) to construct random sets of amino acids. The researchers varied the set sizes from 3 to 20 amino acids and evaluated the performance of the random sets in terms of their capacity to support: (1) the folding of protein chains into three-dimensional structures; (2) protein catalytic activity; and (3) protein solubility.

They discovered that if a random set of amino acids included even a single amino acid from the canonical set, it dramatically outperformed random sets of the same size without any canonical amino acids. Based on these results, the researchers concluded that each of the 20 amino acids used to build proteins stands out, possessing highly unusual properties that make it ideally suited for its biochemical role, confirming the results of previous studies.
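The set-comparison logic can be sketched in miniature. Everything below is hypothetical: invented property values, a made-up "canonical" set spaced evenly across one property axis, and a simple broad-and-even coverage score standing in for the team's measures of foldability, catalysis, and solubility:

```python
import random

random.seed(1)
# Hypothetical property values: 20 "canonical" amino acids spaced evenly
# across a 0-10 property axis, plus 1,893 "alternatives" placed at random,
# for a library of 1,913 (matching the study's library size).
canonical = {f"can{i}": i * (10.0 / 19) for i in range(20)}
alternatives = {f"alt{i}": random.uniform(0.0, 10.0) for i in range(1893)}
library = {**canonical, **alternatives}

def coverage(values):
    """Broad-and-even score: total range divided by the largest gap
    between neighboring values. Higher = broader and more even."""
    vs = sorted(values)
    largest_gap = max(b - a for a, b in zip(vs, vs[1:]))
    return (vs[-1] - vs[0]) / largest_gap if largest_gap > 0 else 0.0

# Score the canonical set against many random 20-member sets.
canonical_score = coverage(list(canonical.values()))
wins = 0
for _ in range(500):
    sample = random.sample(sorted(library), 20)
    if coverage([library[name] for name in sample]) < canonical_score:
        wins += 1
print(f"canonical set beat {wins} of 500 random sets")
```

An evenly spaced set maximizes this toy score by construction; the study's finding is the nontrivial real-world analogue, namely that the actual canonical 20 score exceptionally well on genuine physicochemical measures.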

An Evolutionary Origin for the Canonical Set?

The ELSI researchers believe that—from an evolutionary standpoint—these results also shed light as to how the canonical set of amino acids emerged. Because of the unique adaptive properties of the canonical amino acids, the researchers speculate that “each time a CAA [canonical amino acid] was discovered and embedded during evolution, it provided an adaptive value unusual among many alternatives, and each selective step may have helped bootstrap the developing set to include still more CAAs.”4

In other words, the researchers conjecture that whenever the evolutionary process stumbled upon one of the amino acids in the canonical set and incorporated it into nascent biochemical systems, the addition offered such a significant evolutionary advantage that it became instantiated into the biochemistry of the emerging cellular systems. Presumably, as this selection process repeated over time, members of the canonical set would be added, one by one, to the evolving amino acid set, eventually culminating in the full canonical set.

Scientists find further support for this scenario in the following observation: some of the canonical amino acids seemingly play a more important role in optimizing smaller sets of amino acids, some play a more important role in optimizing intermediate size sets of amino acids, and others play a more prominent role in optimizing larger sets. They argue that this difference may reflect the sequence by which amino acids were added to the evolving set of amino acids as life emerged.

On the surface, this evolutionary explanation is not unreasonable. But more careful consideration of the idea raises concerns. For example, just because a canonical amino acid becomes incorporated into a set of amino acids and improves its adaptive value doesn’t mean that the resulting set could produce the range of proteins with the solubility, foldability, and catalytic range needed to support life processes. Intuitively, it seems to me, as a biochemist, that there must be a threshold number of canonical amino acids in any set before it has the range of physicochemical properties needed to build all the proteins required to support minimal life.

I also question this evolutionary scenario because some of the amino acids that optimize smaller sets would not have been present initially on the early Earth, because they cannot be made by prebiotic reactions. Instead, many of the amino acids that optimize smaller sets can only be generated through biosynthetic routes that must have emerged much later in any evolutionary scenario for the origin of life.5 This limitation means that the only way for some of the canonical amino acids to join the canonical set is for multistep biosynthetic routes to those amino acids to evolve first. But if the full canonical set isn’t available, it is questionable whether the proteins needed to catalyze the biosynthesis of these amino acids would exist, resulting in a chicken-and-egg dilemma.

In light of these concerns, is there a better explanation for the highly optimized canonical set of amino acids?

A Creator’s Role?

Optimality of the universal set of protein amino acids finds explanation if life stems from a Creator’s handiwork. As noted, optimization is an indicator of intelligent design, achieved through foresight and preplanning. Optimization requires inordinate attention to detail and careful craftsmanship. By analogy, the optimized biochemistry epitomized by the amino acid set that makes up proteins rationally points to the work of a Creator.

Is There a Biochemical Anthropic Principle?

This discovery also leads to another philosophical implication: It lends support to the existence of a biochemical anthropic principle.

The ELSI researchers speculate that no matter the starting point in the evolutionary process, the pathways will all converge at the canonical set of amino acids because of the acids’ unusual adaptive properties. In other words, the amino acids that make up the universal set of protein-coding amino acids are not the outworking of an historically contingent evolutionary process, but instead seem to be fundamentally prescribed by the laws of nature. To put it differently, it appears as if the canonical set of amino acids has been preordained in some way.6 One of the study’s authors, Rudrarup Bose, suggests that “Life may not be just a set of accidental events. Rather, there may be some universal laws governing the evolution of life.”7

Though I prefer to see the origin of life as a creation event, it is important to recognize that even if one were to adopt an evolutionary perspective on life’s origin, it looks as if a Mind is responsible for rigging the process to arrive at a predetermined endpoint. It looks as if a Mind purposed for life to be present in the universe and structured the laws of nature so that, in this case, the uniquely optimal canonical set of amino acids would inevitably emerge.

Along these lines, it is remarkable to think that the canonical set of amino acids has the precise properties needed for life to exist. This “coincidence” is eerie, to say the least. As a biochemist, I interpret this coincidence as evidence that our universe has been designed for a purpose. It is provocative to think that regardless of one’s perspective on the origin of life, the evidence converges toward a single conclusion: namely that life manifests from an intelligent agent—God.


Resources

The Optimality of Biochemical Systems

The Biochemical Anthropic Principle

  1. Melissa Ilardo et al., “Adaptive Properties of the Genetically Encoded Amino Acid Alphabet Are Inherited from Its Subset,” Scientific Reports 9, no. 12468 (August 28, 2019), doi:10.1038/s41598-019-47574-x.
  2. Arthur L. Weber and Stanley L. Miller, “Reasons for the Occurrence of the Twenty Coded Protein Amino Acids,” Journal of Molecular Evolution 17, no. 5 (September 1981): 273–84, doi:10.1007/BF01795749; H. James Cleaves II, “The Origin of the Biologically Coded Amino Acids,” Journal of Theoretical Biology 263, no. 4 (April 2010): 490–98, doi:10.1016/j.jtbi.2009.12.014.
  3. Gayle K. Philip and Stephen J. Freeland, “Did Evolution Select a Nonrandom ‘Alphabet’ of Amino Acids?” Astrobiology 11, no. 3 (April 2011), 235–40, doi:10.1089/ast.2010.0567; Matthias Granhold et al., “Modern Diversification of the Amino Acid Repertoire Driven by Oxygen,” Proceedings of the National Academy of Sciences, USA 115, no. 1 (January 2, 2018): 41–46, doi:10.1073/pnas.1717100115.
  4. Ilardo et al., “Adaptive Properties.”
  5. J. Tze-Fei Wong and Patricia M. Bronskill, “Inadequacy of Prebiotic Synthesis as Origin of Proteinous Amino Acids,” Journal of Molecular Evolution 13, no. 2 (June 1979): 115–25, doi:10.1007/BF01732867.
  6. Tokyo Institute of Technology, “Scientists Find Biology’s Optimal ‘Molecular Alphabet’ May Be Preordained,” ScienceDaily, September 10, 2019, http://www.sciencedaily.com/releases/2019/09/190910080017.htm.
  7. Tokyo Institute, “Scientists Find.”

Reprinted with permission by the author

Original article at:

Does Information Come from a Mind?


By Fazale Rana – August 14, 2019

Imagine you’re flying over the desert, and you notice a pile of rocks down below. Most likely, you would think little of it. But suppose the rocks were arranged to spell out a message. I bet you would conclude that someone had arranged those rocks to communicate something to you and others who might happen to fly over the desert.

You reach that conclusion because experience has taught you that messages come from people—or, rather, that information comes from a mind. In that respect, information serves as a marker for the work of an intelligent agent.


Image credit: Shutterstock

Recently, a skeptic challenged me on this point, arguing that we can identify numerous examples of natural systems that harbor information, but that the information in these systems arose through natural processes—not a mind.

So, does information truly come from a mind? And can this claim be used to make a case for a Creator’s existence and role in life’s origin and design?

I think it can. And my reasons are outlined below.

Information and the Case for a Creator

In light of the (presumed) relationship between information and minds, I find it provocative that biochemical systems are information systems.

Two of the most important classes of information-harboring molecules are nucleic acids (DNA and RNA) and proteins. In both cases, the information content of these molecules arises from the nucleotide and amino acid sequences, respectively, that make up these two types of biomolecules.

The information harbored in the nucleotide sequences of nucleic acids and the amino acid sequences of proteins is digital information. Digital information is represented by a succession of discrete units, just like the ones and zeroes that encode the information manipulated by electronic devices. In this respect, sequences of nucleotides and amino acids form discrete informational units that encode the information in DNA and RNA and in proteins, respectively.
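Because each position in a DNA sequence holds one of four discrete symbols, a nucleotide carries exactly two bits of information, just as a binary digit carries one. A minimal sketch of this digital character:

```python
# Each of the four nucleotides maps to a distinct two-bit pattern,
# so any DNA sequence can be rewritten losslessly as binary.
BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def to_bits(dna):
    """Pack a DNA sequence into a binary string, two bits per nucleotide."""
    return "".join(BITS[base] for base in dna)

print(to_bits("GATTACA"))  # -> 10001111000100
```

The particular bit assignments here are an arbitrary convention (any one-to-one mapping works); the point is that the sequence is a string of discrete symbols, which is what makes it digital information in the strict sense.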

But the information in nucleic acids and proteins also has analog characteristics. Analog information varies in a continuous, uninterrupted manner, like the radio waves used for broadcasting. Analog information in nucleic acids and proteins is expressed through the three-dimensional structures adopted by both classes of biomolecules. (For more on the nature of biochemical information, see Resources.)

If our experience teaches us that information comes from minds, then the fact that key classes of biomolecules carry both digital and analog information makes it reasonable to conclude that life itself stems from the work of a Mind.

Is Biochemical Information Really Information?

Skeptics, such as philosopher Massimo Pigliucci, often dismiss this particular design argument, maintaining that biochemical information is not genuine information. Instead, they maintain that when scientists refer to biomolecules as harboring information, they are employing an illustrative analogy—a scientific metaphor—and nothing more. They accuse creationists and intelligent design proponents of misconstruing scientists’ use of analogical language to make the case for a Creator.1

In light of this criticism, it is worth noting that the case for a Creator doesn’t merely rest on the presence of digital and analog information in biomolecules, but gains added support from work in information theory and bioinformatics.

For example, information theorist Bernd-Olaf Küppers points out in his classic work Information and the Origin of Life that the structure of the information housed in nucleic acids and proteins closely resembles the hierarchical organization of human language.2 This is what Küppers writes:

The analogy between human language and the molecular genetic language is quite strict. . . . Thus, central problems of the origin of biological information can adequately be illustrated by examples from human language without the sacrifice of exactitude.3

Added to this insight is work by a team from the NIH, who discovered that the information content of proteins bears the same mathematical structure as human language. Specifically, they found that a universal grammar exists that defines the structure of the biochemical information in proteins. (For more details on the NIH team’s work, see Resources.)

In other words, the discovery that biochemical information shares the same features as human language deepens the analogy between biochemical information and the type of information we create as human designers. And, in doing so, it strengthens the case for a Creator.

Further Studies that Strengthen the Case for a Creator

So, too, do other studies, such as work on DNA barcoding. Biologists have been able to identify, catalog, and monitor animal and plant species using relatively short, standardized segments of DNA within genomes. They refer to these sequences as DNA barcodes, which are analogous to the barcodes merchants use to price products and monitor inventory.

Typically, barcodes harbor information in the form of parallel dark lines on a white background, creating areas of high and low reflectance that can be read by a scanner and interpreted as binary numbers. Barcoding with DNA is possible because this biomolecule, at its essence, is an information-based system. To put it another way, this work demonstrates that the information in DNA is not merely metaphorical; it is genuine information. (For more details on DNA barcoding, see “DNA Barcodes Used to Inventory Plant Biodiversity” in Resources.)

Work in nanotechnology also strengthens the analogy between biochemical information and the information we create as human designers. For example, a number of researchers are exploring DNA as a data storage medium. Again, this work demonstrates that biochemical information is information. (For details on DNA as a data storage medium, see Resources.)

Finally, researchers have learned that the protein machines that operate on DNA during processes such as transcription, replication, and repair literally operate like a computer system. In fact, the similarity is so strong that this insight has spawned a new area of nanotechnology called DNA computing. In other words, the cell’s machinery manipulates information in the same way human designers manipulate digital information. (For more details, take a look at the article “Biochemical Turing Machines ‘Reboot’ the Watchmaker Argument” in Resources.)

The bottom line is this: The more we learn about the architecture and manipulation of biochemical information, the stronger the analogy becomes.

Does Information Come from a Mind?

Other skeptics challenge this argument in a different way. They assert that information can originate without a mind. For example, a skeptic recently challenged me this way:

“A volcano can generate information in the rocks it produces. From [the] information we observe, we can work out what it means. Namely, in this example, that the rock came from the volcano. There was no Mind in information generation, but rather minds at work, generating meaning.

Likewise, a growing tree can generate information through its rings. Humans can also generate information by producing sound waves.

However, I don’t think that volcanoes have minds, nor do trees—at least not the way we have minds.”

–Roland W. via Facebook

I find this to be an interesting point. But, I don’t think this objection undermines the case for a Creator. Ironically, I think it makes the case stronger. Before I explain why, though, I need to bring up an important clarification.

In Roland’s examples, he conflates two different types of information. When I refer to the analogy between human languages and biochemical information, I am specifically referring to semantic information, which consists of combinations of symbols that communicate meaning. In fact, Roland’s point about humans generating information with sound waves is an example of semantic information, with the sounds serving as combinations of ephemeral symbols.

The type of information found in volcanic rocks and tree rings is different from the semantic information found in human languages. It is actually algorithmic information, meaning that it consists of a set of instructions. And technically, the rocks and tree rings don’t contain this information—they result from it.

The reason why we can extract meaning and insight from rocks and tree rings is because of the laws of nature, which correspond to algorithmic information. We can think of these laws as instructions that determine the way the world works. Because we have discovered these laws, and because we have also discovered nature’s algorithms, we can extract insight and meaning from studying rocks and tree rings.

In fact, Bernd-Olaf Küppers points out that biochemical systems also consist of sets of instructions instantiated within the biomolecules themselves. These instructions direct the activities of biomolecular systems and, hence, the cell’s operations. To put it another way, biochemical information is also algorithmic information.

From an algorithmic standpoint, the information content relates to the complexity of the instructions. The more complex the instructions, the greater the information content. To illustrate, consider a DNA sequence that consists of alternating nucleotides, AGAGAGAG . . . and so on. The instructions needed to generate this sequence are:

  1. Add an A
  2. Add a G
  3. Repeat steps 1 and 2, x number of times, where x corresponds to the length of the DNA sequence divided by 2

But what about a DNA sequence that corresponds to a typical gene? In effect, because there is no pattern to that sequence, the set of instructions needed to create that sequence is the sequence itself. In other words, a much greater amount of algorithmic information resides in a gene than in a repetitive DNA sequence.
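This difference can be made concrete with a rough computational proxy: general-purpose compression. A compressor finds the short "instructions" behind a repetitive sequence but cannot shrink a patternless one, so compressed size serves as a crude stand-in for algorithmic information content (a sketch of my own, not part of the original argument):

```python
import random
import zlib

def compressed_size(seq: str) -> int:
    """Compressed length as a rough stand-in for algorithmic
    information content (the length of a short description)."""
    return len(zlib.compress(seq.encode()))

# The repetitive sequence: "add A, add G, repeat" - short instructions
repetitive = "AG" * 500

# A patternless, gene-like sequence: no description much shorter
# than the sequence itself
random.seed(0)
gene_like = "".join(random.choice("ACGT") for _ in range(1000))

print(compressed_size(repetitive))   # compresses to a handful of bytes
print(compressed_size(gene_like))    # stays close to its full length
```

Both strings are 1,000 bases long, yet the repetitive one compresses to a tiny fraction of the gene-like one, mirroring the point that a gene harbors far more algorithmic information than a repetitive sequence.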

And, of course, our common experience teaches us that information—whether it’s found in a gene, a rock pile, or a tree ring—comes from a Mind.


  1. For example, see Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science and Education 20, no. 5–6 (May 2011): 453–71; doi:10.1007/s11191-010-9267-6.
  2. Bernd-Olaf Küppers, Information and the Origin of Life (Boston, MA: MIT Press, 1990), 24–25.
  3. Küppers, Information, 23.

Reprinted with permission by the author

Original article at:

Satellite DNA: Critical Constituent of Chromosomes


By Fazale Rana – June 26, 2019

Let me explain.

Recently, I wound up with a disassembled cabinet in the trunk of my car. Neither my wife Amy nor I could figure out where to put the cabinet in our home and we didn’t want to store it in the garage. The cabinet had all its pieces and was practically new. So, I offered it to a few people, but there were no takers. It seemed that nobody wanted to assemble the cabinet.

Getting Rid of the Junk

After driving around with the cabinet pieces in my trunk for a few days, I channeled my inner Marie Kondo. This cabinet wasn’t giving me any joy by taking up valuable space in the trunk. So, I made a quick detour on my way home from the office and donated the cabinet to a charity.

When I told Amy what I had done, she expressed surprise and a little disappointment. If she had known I was going to donate the cabinet, she would have kept it for its glass doors. In other words, if I hadn’t donated the cabinet, it would have eventually wound up in our garage because it has nice glass doors that Amy thinks she could have repurposed.

There is a point to this story: The cabinet was designed for a purpose and, at one time, it served a useful function. But once it was disassembled and put in the trunk of my car, nobody seemed to want it. Disassembling the cabinet transformed it into junk. And since my wife loves to repurpose things, she saw a use for it. She didn’t perceive the cabinet as junk at all.

The moral of my little story also applies to the genomes of eukaryotic organisms. Specifically, is it time for evolutionary scientists to view some kinds of DNA not as junk, but rather as purposeful genetic elements?

Junk in the Genome

Many biologists hold the view that a vast proportion of eukaryotic genomes is junk, just like the disassembled cabinet I temporarily stored in my car. They believe that many of the different types of “junk” DNA in genomes originated from DNA sequences that at one time performed useful functions but, like the disassembled cabinet, were eventually transformed into nonfunctional elements.

Evolutionary biologists consider the existence of “junk” DNA as one of the most potent pieces of evidence for biological evolution. According to this view, junk DNA results when undirected biochemical processes and random chemical and physical events transform a functional DNA segment into a useless molecular artifact. Junk pieces of DNA remain part of an organism’s genome, persisting from generation to generation as a vestige of evolutionary history.

Evolutionary biologists highlight the fact that, in many instances, identical (or nearly identical) segments of junk DNA appear in a wide range of related organisms. Frequently, the identical junk DNA segments reside in corresponding locations in these genomes—and for many biologists, this feature clearly indicates that these organisms shared a common ancestor. Accordingly, the junk DNA segment arose prior to the time that the organisms diverged from their shared evolutionary ancestor and then persisted in the divergent evolutionary lines.

One challenging question these scientists ask is, Why would a Creator purposely introduce nonfunctional, junk DNA at the exact location in the genomes of different, but seemingly related, organisms?

Satellite DNA

Satellite DNA, which consists of nucleotide sequences that repeat over and over again, is one class of junk DNA. This highly repetitive DNA occurs within the centromeres of chromosomes and also in the chromosomal regions adjacent to centromeres (referred to as pericentromeric regions).


Figure: Chromosome Structure. Image credit: Shutterstock

Biologists have long regarded satellite DNA as junk because it doesn’t encode any useful information. Satellite DNA sequences vary extensively from organism to organism. For evolutionary biologists, this variability is a sure sign that these DNA sequences can’t be functional; if they were, natural selection would have prevented them from changing. On top of that, molecular biologists think that satellite DNA’s highly repetitive nature leads to chromosomal instability, which can result in genetic disorders.

A second challenging question is, Why would a Creator intentionally introduce satellite DNA into the genomes of eukaryotic organisms?

What Was Thought to Be Junk Turns Out to Have Purpose

Recently, a team of biologists from the University of Michigan (UM) adopted a different stance regarding the satellite DNA found in pericentromeric regions of chromosomes. In the same way that my wife Amy saw a use for the cabinet doors, the researchers saw potential use for satellite DNA. According to Yukiko Yamashita, the UM research head, “We were not quite convinced by the idea that this is just genomic junk. If we don’t actively need it, and if not having it would give us an advantage, then evolution probably would have gotten rid of it. But that hasn’t happened.”1

With this mindset—refreshingly atypical for most biologists who view satellite DNA as junk—the UM research team designed a series of experiments to determine the function of pericentromeric satellite DNA.2 Typically, when molecular biologists seek to understand the functional role of a region of DNA, they either alter it or splice it out of the genome. But, because the pericentromeric DNA occupies such a large proportion of chromosomes, neither option was available to the research team. Instead, they made use of a protein found in the fruit fly Drosophila melanogaster, called D1. Previous studies demonstrated that this protein binds to satellite DNA.

The researchers disabled the gene that encodes D1 and discovered that fruit fly germ cells died. They observed that without the D1 protein, the germ cells formed micronuclei. These structures reflect chromosomal instability and they form when a chromosome or a chromosomal fragment becomes dislodged from the nucleus.

The team repeated the study, but this time they used a mouse model system. The mouse genome encodes a protein called HMGA1 that is homologous to the D1 protein in fruit flies. When they damaged the gene encoding HMGA1, the mouse cells also died, forming micronuclei.

As it turns out, both D1 and HMGA1 play a crucial role in ensuring that chromosomes remain bundled in the nucleus. These proteins accomplish this feat by binding to the pericentromeric satellite DNA. Both proteins have multiple binding sites and can therefore bind several chromosomes at once. The multiple binding interactions collect chromosomes into a bundle to form an association site called a chromocenter.

The researchers aren’t quite sure how chromocenter formation prevents micronuclei formation, but they speculate that these structures must somehow stabilize the nucleus and the chromosomes housed in its interior. They believe that this functional role is universal among eukaryotic organisms because they observed the same effects in fruit flies and mice.

This study teaches us two additional lessons. One, so-called junk DNA may serve a structural role in the cell. Most molecular biologists are quick to overlook this possibility because they are hyper-focused on the informational role (encoding the instructions to make proteins) DNA plays.

Two, just because regions of the genome readily mutate without consequences doesn’t mean these sequences aren’t serving some kind of functional role. In the case of pericentromeric satellite DNA, the sequences vary from organism to organism. Most molecular biologists assume that because the sequences vary, they must not be functionally important. For if they were, natural selection would have prevented them from changing. But this study demonstrates that DNA sequences can vary—particularly if DNA is playing a structural role—as long as they don’t compromise DNA’s structural utility. In the case of pericentromeric DNA, apparently the nucleotide sequence can vary quite a bit without compromising its capacity to bind chromocenter-forming proteins (such as D1 and HMGA1).

Is the Evolutionary Paradigm the Wrong Framework to Study Genomes?

Scientists who view biology through the lens of the evolutionary paradigm are often quick to conclude that the genomes of organisms reflect the outworking of evolutionary history. Their perspective causes them to see the features of genomes, such as satellite DNA, as little more than the remnants of an unguided evolutionary process. Within this framework, there is no reason to think that any particular DNA sequence element harbors function. In fact, many life scientists regard these “evolutionary vestiges” as junk DNA. This clearly was the case for satellite DNA.

Yet, a growing body of data indicates that virtually every category of so-called junk DNA displays function. In fact, based on the available data, a strong case can be made that most sequence elements in genomes possess functional utility. Based on these insights, and the fact that pericentromeric satellite DNA persists in eukaryotic genomes, the team of researchers assumed that it must be functional. It’s a clear departure from the way most biologists think about genomes.

Based on this study (and others like it), I think it is safe to conclude that we really don’t understand the molecular biology of genomes.

It seems to me that we live in the midst of a revolution in our understanding of genome structure and function. Instead of being a wasteland of evolutionary debris, the architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

This insight also leads me to wonder if we have been using the wrong paradigm all along to think about genome structure and function. I contend that viewing biological systems as the Creator’s handiwork provides a superior framework for promoting scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. This framework also addresses the two challenging questions: if biological systems have been created, then there must be good reasons why these systems are structured and function the way they do. And this expectation drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

Though committed to an evolutionary interpretation of biology, the UM researchers were rewarded with success when they broke ranks with most evolutionary biologists and assumed junk regions of the genome were functional. Their stance illustrates the power of a creation model approach to biology.

Sadly, most evolutionary biologists are like me when it comes to old furniture. We lack vision and are quick to see it as junk, when in fact a treasure lies in front of us. And, if we let it, this treasure will bring us joy.


  1. University of Michigan, “Scientists Discover a Role for ‘Junk’ DNA,” ScienceDaily (April 11, 2018), www.sciencedaily.com/releases/2018/04/180411131659.htm.
  2. Madhav Jagannathan, Ryan Cummings, and Yukiko M. Yamashita, “A Conserved Function for Pericentromeric Satellite DNA,” eLife 7 (March 26, 2018): e34122, doi:10.7554/eLife.34122.

Reprinted with permission by the author

Original article at:

Biochemical Grammar Communicates the Case for Creation


As I get older, I find myself forgetting things—a lot. But, thanks to smartphone technology, I have learned how to manage my forgetfulness by using the “Notes” app on my iPhone.


Figure 1: The Apple Notes app icon. Image credit: Wikipedia

This app makes it easy for me to:

  • Jot down ideas that suddenly come to me
  • List books I want to read and websites I want to visit
  • Make note of musical artists I want to check out
  • Record “to do” and grocery lists
  • Write down details I need to have at my fingertips when I travel
  • List new scientific discoveries with implications for the RTB creation model that I want to blog about, such as the recent discovery of a protein grammar calling attention to the elegant design of biochemical systems

And the list goes on. I will never forget again!

On top of that, I can use the Notes app to categorize and organize all my notes and house them in a single location. Thus, I don’t have to manage scraps of paper that invariably wind up getting scattered all over the place—and often lost.

And, as a bonus, the Notes app anticipates the next word I am going to use even before I type it. I find myself relying on this feature more and more. It is much easier to select a word than type it out. In fact, the more I use this feature, the better the app becomes at anticipating the next word I want to type.

Recently, a team of bioinformaticians from the University of Alabama at Birmingham (UAB) and the National Institutes of Health (NIH) used the same algorithm the Notes app uses to anticipate word usage to study protein architectures.1 Their analysis reveals new insight into the structural features of proteins and also highlights the analogy between the information housed in these biomolecules and human language. This analogy contributes to the revitalized Watchmaker argument presented in my book The Cell’s Design.

N-Gram Language Modeling

The algorithm used by the Notes app to anticipate the next word the user will likely type is called n-gram language modeling. This algorithm determines the probability of a word being used based on the preceding word (or words) typed. (If the probability is conditioned on the single preceding word, it is called a bigram probability; if it is conditioned on the two preceding words, it is called a trigram probability, and so on.) This algorithm “trains” the Notes app so that the more I use it, the more reliable the calculated probabilities—and, hence, the better the word recommendations.
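A minimal sketch of the bigram idea follows. The toy corpus is my own stand-in for a user's typing history; a production keyboard trains on far more text and smooths its estimates, but the core calculation, counting word pairs and dividing by the count of the first word, is the same.

```python
from collections import Counter

def bigram_probs(corpus: list[str]) -> dict:
    """Estimate P(next_word | previous_word) from word-pair counts."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

# Toy "training" text standing in for a user's typing history
words = "the cat sat on the mat the cat ran".split()
probs = bigram_probs(words)

# "the" is followed by "cat" in 2 of its 3 occurrences
print(probs[("the", "cat")])
```

A word predictor would then rank candidate next words by these probabilities, which is why the suggestions improve as more text accumulates.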

N-Gram Language Modeling and the Case for a Creator

To understand why the work of the research team from UAB and NIH provides evidence for a Creator’s role in the origin and design of life, a brief review of protein structure is in order.

Protein Structure

Proteins are large complex molecules that play a key role in virtually all of the cell’s operations. Biochemists have long known that the three-dimensional structure of a protein dictates its function.

Because proteins are such large complex molecules, biochemists categorize protein structure into four different levels: primary, secondary, tertiary, and quaternary structures. A protein’s primary structure is the linear sequence of amino acids that make up each of its polypeptide chains.

The secondary structure refers to short-range three-dimensional arrangements of the polypeptide chain’s backbone arising from the interactions between chemical groups that make up its backbone. Three of the most common secondary structures are the random coil, alpha (α) helix, and beta (β) pleated sheet.

Tertiary structure describes the overall shape of the entire polypeptide chain and the location of each of its atoms in three-dimensional space. The structure and spatial orientation of the chemical groups that extend from the protein backbone are also part of the tertiary structure.

Quaternary structure arises when several individual polypeptide chains interact to form a functional protein complex.



Figure 2: The four levels of protein structure. Image credit: Shutterstock

Protein Domains

Within the tertiary structure of proteins, biochemists have discovered compact, self-contained regions that fold independently. These three-dimensional regions of the protein’s structure are called domains. Some proteins consist of a single compact domain, but many proteins possess several domains. In effect, domains can be thought of as the fundamental units of a protein’s tertiary structure. Each domain possesses a distinct biochemical function. Biochemists refer to the spatial arrangement of domains as a protein’s domain architecture.

Researchers have discovered several thousand distinct protein domains. Many of these domains recur in different proteins, with each protein’s tertiary structure composed of a mix-and-match combination of protein domains. Biochemists have also learned that an organism’s complexity correlates with both the number of unique domains found in its protein set and the number of multi-domain proteins encoded by its genome.


Figure 3: Pyruvate kinase, an example of a protein with three domains. Image credit: Wikipedia

The Key Question in Protein Chemistry

As much progress as biochemists have made characterizing protein structure over the last several decades, they still lack a fundamental understanding of the relationship between primary structure (the amino acid sequence) and tertiary structure and, hence, protein function. In order to develop this insight, they need to determine the “rules” that dictate the way proteins fold. Treating proteins as information systems can help determine some of these rules.

Proteins as Information Systems

Proteins are not only large, complex molecules but also information-harboring systems. The amino acid sequence that defines a protein’s primary structure is a type of information—biochemical information—with the individual amino acids analogous to the letters that make up an alphabet.

N-Gram Analysis of Proteins

To gain insight into the relationship between a protein’s primary structure and its tertiary structure, the researchers from UAB and NIH carried out an n-gram analysis of the 23 million protein domains found in the protein sets of 4,800 species across all three domains of life.

These researchers point out that an individual amino acid in a protein’s primary structure doesn’t convey information, just as an individual letter in an alphabet doesn’t harbor any meaning. In human language, the most basic unit that conveys meaning is a word. And, in proteins, the most basic unit that conveys biochemical meaning is a domain.

To decipher the “grammar” used by proteins, the researchers treated adjacent pairs of protein domains in the tertiary structure of each protein in the sample set as a bigram (similar to two words together). Surveying the proteins found in their data set of 4,800 species, they discovered that 95% of all the possible domain combinations don’t exist!

This finding is key. It indicates that there are, indeed, rules that dictate the way domains interact. In other words, just like certain word combinations never occur in human languages because of the rules of grammar, there appears to be a protein “grammar” that constrains the domain combinations in proteins. This insight implies that physicochemical constraints (which define protein grammar) dictate a protein’s tertiary structure, preventing 95% of conceivable domain-domain interactions.
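The counting at the heart of this analysis can be sketched on a toy scale. The domain names and architectures below are hypothetical examples of my own; the actual study surveyed 23 million domains across 4,800 species. The logic is the same: tally the domain pairs that actually occur and compare against all pairs that could occur.

```python
from itertools import product

# Hypothetical domain architectures (one list of domain names per
# protein); the real study surveyed millions of domains.
proteins = [
    ["kinase", "SH2"],
    ["kinase", "SH2", "SH3"],
    ["PH", "kinase"],
    ["SH3", "PH"],
]

# All distinct domains, the adjacent pairs (bigrams) actually observed,
# and every pair that could occur in principle
domains = {d for p in proteins for d in p}
observed = {pair for p in proteins for pair in zip(p, p[1:])}
possible = set(product(domains, repeat=2))

fraction_absent = 1 - len(observed) / len(possible)
print(f"{fraction_absent:.0%} of possible domain bigrams never occur")
```

In this miniature data set, 75% of the conceivable pairs never appear; in the real protein data, the corresponding figure was the 95% the researchers reported.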

Entropy of Protein Grammar

In thermodynamics, entropy is often used as a measure of the disorder of a system. Information theorists borrow the concept of entropy and use it to measure the information content of a system. For information theorists, the entropy of a system is inversely related to the amount of information contained in a sequence of symbols. As the information content increases, the entropy of the sequence decreases, and vice versa. Using this concept, the UAB and NIH researchers calculated the entropy of the protein domain combinations.

In human language, the entropy increases as the vocabulary increases. This makes sense because, as the number of words increases in a language, the likelihood that random word combinations would harbor meaning decreases. In like manner, the research team discovered that the entropy of the protein grammar increases as the number of domains increases. (This increase in entropy likely reflects the physicochemical constraints—the protein grammar, if you will—on domain interactions.)
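The entropy measure itself is standard Shannon entropy over the frequency distribution of symbol combinations. A brief sketch with made-up pair distributions shows the intuition: a constrained "grammar" that reuses a few allowed combinations heavily has lower entropy than one in which combinations occur freely.

```python
from collections import Counter
from math import log2

def shannon_entropy(items) -> float:
    """Shannon entropy (in bits) of the frequency distribution of items."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Made-up distributions of domain pairs: a constrained "grammar" reuses
# a few allowed combinations; an unconstrained one spreads over many.
constrained = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "B"), ("B", "C")]
unconstrained = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C"), ("B", "A")]

print(shannon_entropy(constrained))    # lower entropy
print(shannon_entropy(unconstrained))  # higher entropy
```

The researchers' comparison of entropy values across species is, in effect, a comparison of how tightly such a grammar constrains the observed combinations.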

Human languages all carry the same amount of information. That is to say, they all display the same entropy content. Information theorists interpret this observation as an indication that a universal grammar undergirds all human languages. It is intriguing that the researchers discovered that the protein “languages” across prokaryotes and eukaryotes all display the same level of entropy and, consequently, the same information content. This relationship holds despite the diversity and differing complexity of the organisms in their data set. By analogy, this finding indicates that a universal grammar exists for proteins. Or to put it another way, the same set of physicochemical constraints dictates the way protein domains interact in all organisms.

At this point, the researchers don’t know what the grammatical rules are for proteins, but knowing that they exist paves the way for future studies. It also generates hope that one day biochemists might understand them and, in turn, use them to predict protein structure from amino acid sequences.

This study also illustrates how fruitful it can be to treat biochemical systems as information systems. The researchers conclude that “The similarities between natural languages and genomes are apparent when domains are treated as functional analogs of words in natural languages.”2

In my view, it is this relationship that points to a Creator’s role in the origin and design of life.

Protein Grammar and the Case for a Creator

As discussed in The Cell’s Design, the recognition that biochemical systems are information-based systems has interesting philosophical ramifications. Common, everyday experience teaches that information derives solely from the activity of human beings. So, by analogy, biochemical information systems, too, should come from a divine Mind. Or at least it is rational to hold that view.

But the case for a Creator strengthens when we recognize that it’s not merely the presence of information in biomolecules that contributes to this version of a revitalized Watchmaker analogy. Added vigor comes from the UAB and NIH researchers’ discovery that the mathematical structure of human languages and biochemical languages is identical.

Skeptics often dismiss the updated Watchmaker argument by arguing that biochemical information is not genuine information. Instead, they maintain that when scientists refer to biomolecules as harboring information, they are employing an illustrative analogy—a scientific metaphor—and nothing more. They accuse creationists and intelligent design proponents of misconstruing their use of analogical language to make the case for design.3

But the UAB and NIH scientists’ work questions the validity of this objection. Biochemical information has all of the properties of human language. It really is information, just like the information we conceive and use to communicate.

Is There a Biochemical Anthropic Principle?

This discovery also yields another interesting philosophical implication. It lends support to the existence of a biochemical anthropic principle. The discovery of a protein grammar means that there are physicochemical constraints on protein structure. It is remarkable to think that protein tertiary structures may be fundamentally dictated by the laws of nature, instead of being the outworking of a historically contingent evolutionary process. To put it differently, the discovery of a protein grammar reveals that the structure of biological systems may reflect some deep, underlying principles that arise from the very nature of the universe itself. And yet these structures are precisely the types of structures life needs to exist.

I interpret this “coincidence” as evidence that our universe has been designed for a purpose. And as a Christian, I find that notion to resonate powerfully with the idea that life manifests from an intelligent Agent—namely, God.

Resources to Dig Deeper

  1. Lijia Yu et al., “Grammar of Protein Domain Architectures,” Proceedings of the National Academy of Sciences, USA 116, no. 9 (February 26, 2019): 3636–45, doi:10.1073/pnas.1814684116.
  2. Yu et al., 3636–45.
  3. For example, see Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science and Education 20, no. 5–6 (May 2011): 453–71; doi:10.1007/s11191-010-9267-6.

Reprinted with permission by the author
Original article at:

Why Mitochondria Make My List of Best Biological Designs


A few days ago, I ran across a BuzzFeed list that catalogs 24 of the most poorly designed things in our time. Some of the items that stood out from the list for me were:

  • serial-wired Christmas lights
  • economy airplane seats
  • clamshell packaging
  • juice cartons
  • motion sensor faucets
  • jewel CD packaging
  • umbrellas

What were people thinking when they designed these things? It’s difficult to argue with BuzzFeed’s list, though I bet you might add a few things of your own to their list of poor designs.

If biologists were to make a list of poorly designed things, many would probably include…everything in biology. Most life scientists are influenced by an evolutionary perspective. Thus, they view biological systems as inherently flawed vestiges cobbled together by a set of historically contingent mechanisms.

Yet as our understanding of biological systems improves, evidence shows that many “poorly designed” systems are actually exquisitely assembled. It also becomes evident that many biological designs reflect an impeccable logic that explains why these systems are the way they are. In other words, advances in biology reveal that it makes better sense to attribute biological systems to the work of a Mind, not to unguided evolution.

Based on recent insights by biochemist and origin-of-life researcher Nick Lane, I would add mitochondria to my list of well-designed biological systems. Lane argues that complex cells and, ultimately, multicellular organisms would be impossible if it weren’t for mitochondria.1 (These organelles generate most of the ATP molecules used to power the operations of eukaryotic cells.) Toward this end, Lane has demonstrated that mitochondria’s properties are just right for making complex eukaryotic cells possible. Without mitochondria, life would be limited to prokaryotic cells (bacteria and archaea).

To put it another way, Nick Lane has shown that prokaryotic cells could never evolve complexity akin to that of the eukaryotic cells required for multicellular organisms. The reason has to do with the bioenergetic constraints placed on prokaryotic cells. According to Lane, the advent of mitochondria allowed life to break free from these constraints, paving the way for complex life.


Figure 1: A Mitochondrion. Image credit: Shutterstock

Lane’s work reveals that mitochondria display an exquisite design and a logical architecture in their operations. Yet this is not necessarily what I (or many others) would have expected if mitochondria were the result of evolution. Rather, we’d expect biological systems to appear haphazard and purposeless, just good enough for the organism to survive and nothing more.

To understand why I (and many evolutionary biologists) would hold this view about mitochondria and eukaryotic cells (assuming that they were the product of evolutionary processes), it is necessary to review the current evolutionary explanation for their origins.

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis is the best explanation for the origin of complex eukaryotic cells. This hypothesis states that complex cells originated when single-celled microbes formed symbiotic relationships. “Host” microbes (most likely archaea) engulfed other archaea and/or bacteria, which then existed inside the host as endosymbionts.

The presumption, then, is that organelles, including mitochondria, were once endosymbionts. Evolutionary biologists believe that, once engulfed, the endosymbionts took up permanent residency within the host cell and even grew and divided inside the host. Over time, the endosymbionts and the host became mutually interdependent. For example, the endosymbionts provided a metabolic benefit for the host cell, such as serving as a source of ATP. In turn, the host cell provided nutrients to the endosymbionts. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

Based on this scenario, there is no real rationale for the existence of mitochondria (and eukaryotic cells). They are the way they are because they just wound up that way.

But Nick Lane’s insights suggest otherwise.

Lane’s analysis identifies a deep-seated rationale that accounts for the features of mitochondria (and eukaryotic cells) related to their contribution to cellular bioenergetics. To understand why mitochondria and eukaryotic cells are the way they are, we first need to understand why prokaryotic cells can never evolve into large complex cells, a necessary step for the advent of complex multicellular organisms.

Bioenergetic Constraints on Prokaryotic Cells

Lane has discovered that bioenergetic constraints keep bacterial and archaeal cells trapped at their current size and complexity. Key to identifying this constraint is a metric Lane devised: the available energy per gene (AEG). It turns out that the AEG in eukaryotic cells can be as much as 200,000 times larger than the AEG in prokaryotic cells. This extra energy allows eukaryotic cells to engage in a wide range of metabolic processes that support cellular complexity. Prokaryotic cells simply can’t afford such processes.

An average eukaryotic cell has between 20,000 and 40,000 genes; a typical bacterial cell has about 5,000 genes. Each gene encodes the information the cell’s machinery needs to make a distinct protein. And proteins are the workhorse molecules of the cell. More genes mean a more diverse suite of proteins, which means greater biochemical complexity.

So, what is so special about eukaryotic cells? Why don’t prokaryotic cells have the same AEG? Why do eukaryotic cells have an expanded repertoire of genes while prokaryotic cells don’t?

In short, the answer is: mitochondria.

On average, the volume of eukaryotic cells is about 15,000 times larger than that of prokaryotic cells. Eukaryotic cells’ larger size allows for their greater complexity. Lane estimates that for a prokaryotic cell to scale up to this volume, its radius would need to increase 25-fold and its surface area 625-fold.

Because the plasma membrane of bacteria is the site of ATP synthesis, this increase in surface area would allow the hypothetically enlarged bacterium to produce 625 times more ATP. But this increased ATP production doesn’t increase the AEG. Why is that?

The bacterium would have to produce 625 times more proteins to support the increased ATP production. Because the cell’s machinery must access the bacterium’s DNA to make these proteins, a single copy of the genome is insufficient to support the synthesis of that many proteins. In fact, Lane estimates that for a bacterium to increase its ATP production 625-fold, it would require 625 copies of its genome. In other words, even though the bacterium increased in size, the AEG would remain effectively unchanged.


Figure 2: ATP Production at the Cell Membrane Surface. Image credit: Shutterstock

Things become more complicated when factoring in cell volume. When the surface area (and, with it, ATP production) increases by a factor of 625, the volume of the cell expands roughly 15,000 times. To satisfy the demands of a larger cell, even more copies of the genome would be required, perhaps as many as 15,000. But energy production tops out at a 625-fold increase. This mismatch means that the AEG drops roughly 25-fold. For a genome consisting of 5,000 genes, this drop means that a bacterium the size of a eukaryotic cell would have about 125,000 times less AEG than a typical eukaryotic cell, and 200,000 times less when compared to eukaryotes with genomes approaching 40,000 genes.
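The scaling argument above can be checked with a few lines of arithmetic. This is a sketch using the round figures quoted in the text (a 25-fold radius increase, a 5,000-gene genome); Lane’s published values differ slightly in detail:

```python
# Scaling a roughly spherical prokaryote up to eukaryotic volume.
# Round figures from the text; real cells are not perfect spheres.
radius_factor = 25
surface_factor = radius_factor ** 2   # ATP synthesis scales with membrane area
volume_factor = radius_factor ** 3    # protein demand scales with cell volume

print(surface_factor)   # 625  -> 625-fold more ATP production
print(volume_factor)    # 15625 -> ~15,000-fold larger volume

# Energy supply grows only 625-fold, but genome copies (and protein
# demand) must grow with volume, so available energy per gene falls:
aeg_drop = volume_factor / surface_factor
print(aeg_drop)         # 25.0 -> roughly 25-fold drop in AEG

# For a 5,000-gene genome, the resulting gap versus a eukaryote:
genes = 5_000
print(aeg_drop * genes) # 125000.0 -> about 125,000-fold less AEG
```

The cube-over-square ratio is the whole story here: volume (demand) outruns surface area (supply) by one factor of the radius.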

Bioenergetic Freedom for Eukaryotic Cells

Thanks to mitochondria, eukaryotic cells are free from the bioenergetic constraints that ensnare prokaryotic cells. A mitochondrion generates about the same amount of ATP as a bacterial cell. However, its genome encodes only 13 proteins, so the organelle’s own ATP demand is low. The net effect is that the mitochondrion’s AEG skyrockets. Furthermore, mitochondrial membranes come equipped with an ATP transport protein that pumps the vast excess of ATP from the organelle’s interior into the cytoplasm for the eukaryotic cell to use.

To summarize, the mitochondrion’s small genome plus its prodigious ATP output are the keys to eukaryotic cells’ large AEG.
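To see why a tiny genome matters so much, here is a deliberately simplified back-of-the-envelope comparison. The inputs (a 5,000-gene bacterium, 13 mitochondrially encoded proteins, equal ATP output) come from the text; real AEG calculations involve many more factors, so treat the ratio as illustrative only:

```python
# Toy comparison: same ATP output, very different gene counts.
bacterial_atp_output = 1.0        # normalize a bacterium's ATP output to 1
bacterial_genes = 5_000           # typical bacterial gene count (from the text)
mito_encoded_proteins = 13        # proteins encoded by the mitochondrial genome

aeg_bacterium = bacterial_atp_output / bacterial_genes
aeg_mitochondrion = bacterial_atp_output / mito_encoded_proteins

# Same energy budget spread over ~385x fewer genes:
print(round(aeg_mitochondrion / aeg_bacterium))  # 385
```

By offloading nearly all of its genes to the host nucleus, the organelle keeps its energy output but sheds almost all of its per-gene cost.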

Of course, this raises a question: Why do mitochondria have genomes at all? Well, as it turns out, mitochondria need genomes for several reasons (which I’ve detailed in previous articles).

Other features of mitochondria are also essential for ATP production. For example, cardiolipin in the organelle’s inner membrane plays a role in stabilizing and organizing specific proteins needed for cellular energy production.

From a creation perspective, it seems that if a Creator were going to design a eukaryotic cell from scratch, he would have to create an organelle just like a mitochondrion to provide, with a high AEG, the energy needed to sustain the cell’s complexity. Far from being an evolutionary “kludge job,” mitochondria appear to be an elegantly designed feature of eukaryotic cells with a just-right set of properties that allow for the cellular complexity needed to sustain complex multicellular life. It is eerie to think that unguided evolutionary events just happened to traverse the just-right evolutionary path to yield such an organelle.

As a Christian, I see the rationale that undergirds the design of mitochondria as the signature of the Creator’s handiwork in biology. I also view the anthropic coincidence associated with the origin of eukaryotic cells as reason to believe that life’s history has purpose and meaning, pointing toward the advent of complex life and humanity.

So, now you know why mitochondria make my list.


  1. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6, no. 5 (May 2014): a015982, doi:10.1101/cshperspect.a015982.

Reprinted with permission by the author
Original article at: