Mutations, Cancer, and the Case for a Creator


By Fazale Rana – December 11, 2019

Cancer. Perhaps no other word evokes more fear, anger, and hopelessness.

It goes without saying that cancer is an insidious disease. People who get cancer often die way too early. And even though a cancer diagnosis is no longer an immediate death sentence—thanks to biomedical advances—there are still many forms of cancer that are difficult to manage, let alone effectively treat.

Cancer also causes quite a bit of consternation for those of us who use insights from science to make a case for a Creator. From my vantage point, one of the most compelling reasons to think that a Creator exists and played a role in the origin and design of life is the elegant, sophisticated, and ingenious designs of biochemical systems. And yet, when I share this evidence with skeptics—and even seekers—I am often met with resistance in the form of the question: What about cancer?

Why Would God Create a World Where Cancer Is Possible?

In effect, this question typifies one of the most common—and significant—objections to the design argument. If a Creator is responsible for the designs found in biochemistry, then why are so many biochemical systems seemingly flawed, inelegant, and poorly designed?

The challenge cancer presents for the design argument carries an added punch. It’s one thing to cite the inefficiency of protein synthesis or the error-prone nature of the rubisco enzyme, but it’s quite another to describe the suffering of a loved one who died from cancer. There’s an emotional weight to the objection. These deaths feel horribly unjust.

Couldn’t a Creator design biochemistry so that a disease as horrific as cancer would never be possible—particularly if this Creator is all-powerful, all-knowing, and all-good?

I think it’s possible to present a good answer to the challenge that cancer (and other so-called bad designs) poses for the design argument. Recent insights published by a research duo from Cambridge University in the UK help make the case.1

A Response to the Bad Designs in Biochemistry and Biology

Because the “bad designs” challenge is so significant (and so frequently expressed), I devoted an entire chapter in The Cell’s Design to addressing the apparent imperfections of biochemical systems. My goal in that chapter was to erect a framework that comprehensively addresses this pervasive problem for the design argument.

In the face of this challenge it is important to recognize that many so-called biochemical flaws are not genuine flaws at all. Instead, they arise as the consequences of trade-offs. In their cellular roles, many biochemical systems face two (or more) competing objectives. Effectively managing these opposing objectives means that it is impossible for every aspect of the system to perform at an optimal level. Some features must be carefully rendered suboptimal to ensure that the overall system performs robustly under a wide range of conditions.

Cancer falls into this category. It is not a consequence of flawed biochemical designs. Instead, cancer reflects a trade-off between DNA repair and cell survival.

DNA Damage and Cancer

The etiology (cause) of most cancers is complex. While about 10 percent of cancers have a hereditary basis, the vast majority result from mutations that accumulate in DNA over a person’s lifetime.

Some of the damage to DNA stems from endogenous (internal) factors, such as water and oxygen in the cell. These materials cause hydrolysis and oxidative damage to DNA, respectively. Both types of damage can introduce mutations into this biomolecule. Exogenous chemicals (genotoxins) from the environment can also interact with DNA and cause damage leading to mutations. So can exposure to ultraviolet radiation and radioactivity from the environment.

Infectious agents such as viruses can also cause cancer. Again, these infectious agents cause genomic instability, which leads to DNA mutations.


Figure: Tumor Formation Process. Image credit: Shutterstock

In effect, DNA mutations are an inevitable consequence of the laws of nature, specifically the first and second laws of thermodynamics. These laws make possible the chemical structures and operations necessary for life to even exist. But, as a consequence, these same life-giving laws also undergird chemical and physical processes that damage DNA.

Fortunately, cells have the capacity to detect and repair damage to DNA. These DNA repair pathways are elaborate and sophisticated. They are the type of biochemical features that seem to support the case for a Creator. DNA repair pathways counteract the deleterious effects of DNA mutation by correcting the damage and preventing the onset of cancer.

Unfortunately, these DNA repair processes function incompletely. They fail to fully compensate for all of the damage that occurs to DNA. Consequently, over time, mutations accrue in DNA, leading to the onset of cancer. The inability of the cell’s machinery to repair all of the mutation-causing DNA damage and, ultimately, protect humans (and other animals) from cancer is precisely the thing that skeptics and seekers alike point to as evidence that counts against intelligent design.

Why would a Creator make a world where cancer is possible and then design cancer-preventing processes that are only partially effective?

Cancer: The Result of a Trade-Off

Even though mutations to DNA cause cancer, it is rare that a single mutation leads to the formation of a malignant cell type and, subsequently, tumor growth. Biomedical researchers have discovered that the onset of cancer involves a series of mutations to highly specific genes (dubbed cancer genes). The mutations that cause cells to transform into cancer cells are referred to as driver mutations.

Researchers have also learned that most cells in the body harbor a vast number of mutations that have little or no biological consequence. These mutations are called passenger mutations. As it turns out, there are thousands of passenger mutations in a typical cancer cell and only about ten driver mutations to so-called cancer genes.

Biomedical investigators have also learned that many normal cells harbor both passenger and driver mutations without ever transforming. (It appears that other factors unrelated to DNA mutation play a role in causing a cancer cell to undergo extensive clonal expansion, leading to the formation of a tumor.)
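To see why passengers so heavily outnumber drivers, it helps to run the numbers. Here is a minimal Python sketch of the target-size logic; the genome size is real, but the driver-target size and the mutation burden are assumptions chosen purely for illustration:

```python
import random

# Toy Monte Carlo: mutations land uniformly at random across the genome,
# and only hits inside a small "driver target" count as driver mutations.
# DRIVER_TARGET and N_MUTATIONS are illustrative assumptions, not data.
GENOME_SIZE = 3_000_000_000   # ~3 billion base pairs
DRIVER_TARGET = 3_000_000     # assumed driver-sensitive sites (~0.1% of genome)
N_MUTATIONS = 5_000           # assumed mutation burden of one cell lineage

random.seed(42)
drivers = sum(1 for _ in range(N_MUTATIONS)
              if random.randrange(GENOME_SIZE) < DRIVER_TARGET)
print(f"drivers: {drivers}, passengers: {N_MUTATIONS - drivers}")
# Expect only a handful of drivers among thousands of mutations:
# passengers dominate simply because the driver target is so small.
```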

What this means is that mutations to DNA are quite extensive, even in normal, healthy cells. This observation prompts the question: Why is the DNA repair process so lackluster?

The research duo from Cambridge University speculates that DNA repair is so costly to cells—making extensive use of energy and cell resources—that maintaining pristine genomes would compromise cell survival. These researchers conclude that “DNA quality control pathways are fully functional but naturally permissive of mutagenesis even in normal cells.”2 And it seems that the permissiveness of the DNA repair processes generally has little consequence, given that a vast proportion of the human genome consists of noncoding DNA.

Biomedical researchers have uncovered another interesting feature of the DNA repair processes. The processes are “biased,” with repairs taking place preferentially on the transcribed strands of protein-coding genes. In other words, when DNA repair takes place, it occurs where it counts the most. This bias displays an elegant molecular logic and rationale, strengthening the case for design.

Given that driver mutations are not in and of themselves sufficient to lead to tumor formation, the researchers conclude that the body’s cancer prevention pathways are quite impressive: “Considering that an adult human has ~30 trillion cells, and only one cell develops into a cancer, human cells are remarkably robust at preventing cancer.”3
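The arithmetic behind that robustness claim is easy to reproduce. A quick sketch, assuming (as the quotation implies) that a tumor traces back to a single founding cell:

```python
# The cell count comes from the quoted paper; the single-founding-cell
# assumption is how the paper frames a tumor's clonal origin.
cells = 30e12                    # ~30 trillion cells in an adult human
per_cell_failure = 1 / cells     # one cell, out of all of them, transforms
print(f"per-cell failure probability: {per_cell_failure:.1e}")  # ~3.3e-14
```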

So, what about cancer?

Though cancer ravages the lives of so many people, it is not because of poorly designed, substandard biochemical systems. Given that we live in a universe that conforms to the laws of thermodynamics, cancer is inevitable. Despite this inevitability, organisms are designed to effectively ward off cancer.

Ironically, as we gain a better understanding of the process of oncogenesis (the development of tumors), we are uncovering more—not less—evidence for the remarkably elegant and ingenious designs of biochemical systems.

The insights by the research team from Cambridge University provide us with a cautionary lesson. We are often quick to declare a biochemical (or biological) feature as poorly designed based on incomplete understanding of the system. Yet, inevitably, as we learn more about the system we discover an exquisite rationale for why things are the way they are. Such knowledge is consistent with the idea that these systems stem from a Creator’s handiwork.

Still, this recognition does little to dampen the fear and frustration associated with a cancer diagnosis and the pain and suffering experienced by those who battle cancer (and their loved ones who stand on the sidelines watching the fight take place). But, whether we are a skeptic or a believer, we all should be encouraged by the latest insights developed by the Cambridge researchers. The more we understand about the cause and progression of cancers, the closer we are to one day finding cures to a disease that takes so much from us.

We can also take added encouragement from the powerful scientific case for a Creator’s existence. The Old and New Testaments teach us that the Creator revealed by scientific discovery has suffered on our behalf and will suffer alongside us—in the person of Christ—as we walk through the difficult circumstances of life.

Resources

Examples of Biochemical Trade-Offs

Evidence that Nonfunctional DNA Serves as a Mutational Buffer

Endnotes
  1. Serena Nik-Zainal and Benjamin A. Hall, “Cellular Survival over Genomic Perfection,” Science 366, no. 6467 (November 15, 2019): 802–03, doi:10.1126/science.aax8046.
  2. Nik-Zainal and Hall, 802–03.
  3. Nik-Zainal and Hall, 802–03.

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

New Insights into Genetic Code Optimization Signal Creator’s Handiwork


By Fazale Rana – October 16, 2019

I knew my career as a baseball player would be short-lived when, as a thirteen-year-old, I made the transition from Little League to the Babe Ruth League, which uses official Major League Baseball rules. Suddenly there were a whole lot more rules for me to follow than I ever had to think about in Little League.

Unlike in Little League, at the Babe Ruth level the hitter and base runners have to know what the other is going to do. Usually, the third-base coach is responsible for this communication. Before each pitch is thrown, the third-base coach uses a series of hand signs to relay instructions to the hitter and base runners.


Credit: Shutterstock

My inability to pick up the signs from the third-base coach was a harbinger of my doomed baseball career. I did okay when I was on base, but I struggled to pick up his signs when I was at bat.

The issue wasn’t that there were too many signs for me to memorize. I struggled to recognize the indicator sign.

To prevent the opposing team from stealing the signs, it is common for the third-base coach to use an indicator sign. Each time he relays instructions, the coach randomly runs through a series of signs. At some point in the sequence, the coach gives the indicator sign. When he does that, it means that the next signal is the actual sign.
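In effect, the indicator-sign scheme is a tiny communication protocol. A minimal sketch of the decoding rule (the sign names here are invented):

```python
def decode(sign_sequence, indicator):
    """Return the live sign: the one immediately after the indicator.

    Every sign before the indicator (and after the live sign) is a decoy.
    Returns None if the coach never flashed the indicator.
    """
    for i, sign in enumerate(sign_sequence):
        if sign == indicator and i + 1 < len(sign_sequence):
            return sign_sequence[i + 1]
    return None

# Example: "belt" is the agreed indicator, so "steal" is the live sign.
signs = ["cap", "nose", "belt", "steal", "ear", "cap"]
print(decode(signs, indicator="belt"))  # -> steal
```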

All of this activity was simply too much for me to process. When I was at the plate, I couldn’t consistently keep up with the third-base coach. It got so bad that a couple of times the third-base coach had to call time-out and have me walk up the third-base line, so he could whisper to me what I was to do when I was at the plate. It was a bit humiliating.

Codes Come from Intelligent Agents

The signs relayed by a third-base coach to the hitter and base runners are a type of code—a set of rules used to convert and convey information across formats.

Experience teaches us that it takes intelligent agents, such as baseball coaches, to devise codes, even those that are rather basic in their design. The more sophisticated a code, the greater the level of ingenuity required to develop it.

Perhaps the most sophisticated codes of all are those that can detect errors during data transmission.
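The simplest illustration of this idea is a parity bit, which lets a receiver detect any single flipped bit. A short sketch:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_parity):
    """Return True if no single-bit error is detected."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
print(check_parity(word))         # True: transmission looks clean
word[2] ^= 1                      # flip one bit in transit
print(check_parity(word))         # False: the error is detected
```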

I sure could have used a code like that when I played baseball. It would have helped me if the hand signals used by the third-base coach were designed in such a way that I could always understand what he wanted, even if I failed to properly pick up the indicator signal.

The Genetic Code

As it turns out, just such a code exists in nature. It is one of the most sophisticated codes known to us—far more sophisticated than the best codes designed by the brightest computer engineers in the world. In fact, this code resides at the heart of biochemical systems. It is the genetic code.

This biochemical code consists of a set of rules that define the information stored in DNA. These rules specify the sequence of amino acids that the cell’s machinery uses to build proteins. In this process, information formatted as nucleotide sequences in DNA is converted into information formatted as amino acid sequences in proteins.
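A short sketch makes these rules concrete. The codon table here is deliberately abbreviated (the real code assigns all 64 codons), but the conversion logic mirrors what the cell’s machinery does:

```python
# Abbreviated codon table: a handful of the 64 real assignments,
# shown for illustration only.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GCT": "Ala", "GAA": "Glu",
    "AAA": "Lys", "TGG": "Trp", "TAA": "STOP",
}

def translate(coding_strand):
    """Convert a DNA coding-strand sequence into an amino acid chain."""
    protein = []
    for i in range(0, len(coding_strand) - 2, 3):
        residue = CODON_TABLE[coding_strand[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("ATGTTTGCTGAATAA"))  # -> Met-Phe-Ala-Glu
```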

Moreover, the genetic code is universal, meaning that all life on Earth uses it.1

Biochemists marvel at the design of the genetic code, in part because its structure displays exquisite optimization. This optimization includes the capacity to dramatically curtail errors that result from mutations.

Recently, a team from Germany identified another facet of the genetic code that is highly optimized, further highlighting its remarkable qualities.2

The Optimal Genetic Code

As I describe in The Cell’s Design, scientists from Princeton University and the University of Bath (UK) quantified the error-minimization capacity of the genetic code during the 1990s. Their work indicated that the universal genetic code is optimized to withstand the potentially harmful effects of substitution mutations better than virtually any other conceivable genetic code.3

In 2018, another team of researchers from Germany demonstrated that the universal genetic code is also optimized to withstand the harmful effects of frameshift mutations—again, better than other conceivable codes.4

In 2007, researchers from Israel showed that the genetic code is also optimized to harbor overlapping codes.5 This is important because, in addition to the genetic code, regions of DNA harbor other overlapping codes that direct the binding of histone proteins, transcription factors, and the machinery that splices genes after they have been transcribed.
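To make the logic of these optimality comparisons concrete, here is a toy recreation in the spirit of the Freeland and Hurst analysis cited above: hydropathy stands in for the amino acid similarity measure, and candidate codes are generated by shuffling the amino acid assignments among the natural code’s synonymous codon blocks. The published studies use more refined cost functions and mutation weightings, but the approach is the same:

```python
import itertools, random

# Standard genetic code, encoded compactly (codons in TCAG order).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AA[i] for i, c in enumerate(itertools.product(BASES, repeat=3))}

# Kyte-Doolittle hydropathy values for the 20 amino acids.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def cost(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":          # ignore substitutions involving stop codons
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbor = code[codon[:pos] + base + codon[pos + 1:]]
                if neighbor != "*":
                    total += (HYDRO[aa] - HYDRO[neighbor]) ** 2
                    n += 1
    return total / n

def random_code():
    """Shuffle which amino acid each synonymous codon block encodes."""
    aas = sorted(set(AA) - {"*"})
    relabel = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else relabel[a]) for c, a in CODE.items()}

random.seed(0)
natural = cost(CODE)
better = sum(cost(random_code()) <= natural for _ in range(1000))
print(f"natural code cost: {natural:.2f}")
print(f"random codes at least as good: {better} of 1000")
```

Comparisons of this kind are what produced the “one in a million” characterization of the natural code.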

The Robust Optimality of the Genetic Code

With these previous studies serving as a backdrop, the German research team wanted to probe more deeply into the genetic code’s optimality. These researchers focused on the potential optimality of three properties of the genetic code: (1) resistance to the harmful effects of substitution mutations, (2) resistance to the harmful effects of frameshift mutations, and (3) the capacity to support overlapping genes.

As with earlier studies, the team assessed the optimality of the naturally occurring genetic code by comparing its performance with sets of random codes that are conceivable alternatives. For all three property comparisons, they discovered that the natural (or standard) genetic code (SGC) displays a high degree of optimality. The researchers write, “We find that the SGC’s optimality is very robust, as no code set with no optimised properties is found. We therefore conclude that the optimality of the SGC is a robust feature across all evolutionary hypotheses.”6

On top of this insight, the research team adds one other dimension to the multidimensional optimality of the genetic code: its capacity to support overlapping genes.

Interestingly, the researchers also note that their results raise significant challenges to evolutionary explanations for the genetic code, because the code’s multidimensional optimality is extreme in every dimension. They write:

We conclude that the optimality of the SGC is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.7

While natural selection isn’t omnipotent, a transcendent Creator would be, and could account for the genetic code’s extreme optimality.

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that our common experience teaches us that codes come from minds. It’s true on the baseball diamond and true in the computer lab. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind—a conclusion that gains additional support when we consider the code’s sophistication and exquisite optimization.

The genetic code’s ability to withstand errors that arise from substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes and overlapping genes, seems to defy naturalistic explanation.

As a neophyte playing baseball, I could barely manage the simple code the third-base coach used. How mind-boggling it is for me when I think of the vastly superior ingenuity and sophistication of the universal genetic code.

And, just like the hitter and base runner work together to produce runs in baseball, the elegant design of the genetic code and the inability of evolutionary processes to account for its extreme multidimensional optimization combine to make the case that a Creator played a role in the origin and design of biochemical systems.

With respect to the case for a Creator, the insight from the German research team hits it out of the park.


Endnotes
  1. Some organisms have a genetic code that deviates from the universal code in one or two of the coding assignments. Presumably, these deviant codes originate when the universal genetic code evolves, altering coding assignments.
  2. Stefan Wichmann and Zachery Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
  3. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33, no. 5 (November 1991): 412–17, doi:10.1007/BF02103132; Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281, no. 5375 (July 17, 1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47, no. 3 (September 1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17, no. 4 (April 2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (May 21, 2018): e4825, doi:10.7717/peerj.4825.
  5. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research 17, no. 4 (April 2007): 405–12, doi:10.1101/gr.5987307.
  6. Wichmann and Ardern, “Optimality.”
  7. Wichmann and Ardern, “Optimality.”

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

Is 75% of the Human Genome Junk DNA?

By Fazale Rana – August 29, 2017

By the rude bridge that arched the flood,
Their flag to April’s breeze unfurled,
Here once the embattled farmers stood,
And fired the shot heard round the world.

–Ralph Waldo Emerson, Concord Hymn

Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists about the amount of functional DNA sequences in the human genome.

Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

Junk DNA and the Creation vs. Evolution Battle

Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

When the draft sequence of the human genome was first announced in 2000, researchers thought only around 2–5% of it consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists claim that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge against intelligent design/creationism.

But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

ENCODE Skeptics

The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

Calculating the Percentage of Functional DNA in the Human Genome

Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

Graur argues that junk DNA functions as a “sponge” absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

Historically, the replacement-level fertility rate for human beings has been two to three children per couple. Based on Graur’s modeling, maintaining a constant population size at this fertility rate requires 85–90% of the human genome to be junk DNA that absorbs deleterious mutations, capping the functional fraction of the genome at 25%.

Graur also calculated that if 80% of the human genome were functional, a fertility rate of at least 15 children per couple would be needed to maintain a constant population size. And if 100% of the human genome displayed function, the minimum replacement-level fertility rate would climb to 24 children per couple.
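The logic of such a mutational-load calculation can be reconstructed in a few lines. The sketch below is a simplified stand-in for Graur’s model, not his actual calculation; the parameter values (MU, DELETERIOUS) are assumptions chosen so that this toy version lands near the figures quoted above:

```python
import math

# Simplified mutational-load arithmetic: new harmful mutations per
# offspring follow a Poisson distribution with mean U, so the chance
# an offspring carries none is exp(-U). MU and DELETERIOUS are
# illustrative assumptions, not Graur's published inputs.
MU = 100             # assumed new mutations per offspring per generation
DELETERIOUS = 0.025  # assumed harmful fraction of mutations in functional DNA

def required_fertility(functional_fraction):
    """Children per couple needed for two mutation-free offspring on average."""
    U = MU * functional_fraction * DELETERIOUS
    return 2 / math.exp(-U)

for f in (0.15, 0.80, 1.00):
    print(f"functional fraction {f:.0%}: "
          f"~{required_fertility(f):.0f} children/couple")
# -> roughly 3, 15, and 24 children per couple, echoing the figures above
```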

He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.

Response to Graur

So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical.

1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results theoretically don’t make sense in light of the evolutionary paradigm, that is not a reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions.

To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE project determined that a significant fraction of the human genome is transcribed. They also measured high levels of protein binding.

ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Without minimizing these disruptive interactions, biochemical processes in the cell would grind to a halt. It is reasonable to think that the same considerations would apply to transcription factor binding with DNA.

2. Graur’s model employs some questionable assumptions.

Graur uses an unrealistically high rate for deleterious mutations in his calculations.

Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other regions of the genome that display function—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as transcription factor binding sites, and (3) serve as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

3. The way Graur determines if DNA sequence elements are functional is questionable. 

Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is only functional if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, these sequences, if they are functional, will resist evolutionary change (due to natural selection) because any alteration would compromise the function of the sequence and endanger the organism. If deleterious, the sequence variations would be eliminated from the population due to the reduced survivability and reproductive success of organisms possessing those variants. Hence, functional sequences are those under the effects of selection.

In contrast, the ENCODE project employed a causal definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

  • transcription,
  • binding of transcription factors to DNA,
  • histone binding to DNA,
  • DNA binding by modified histones,
  • DNA methylation, and
  • three-dimensional interactions between enhancer sequences and genes.

In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequence must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.

So why does Graur insist on a selected-effect definition of function? For no other reason than a causal definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and increases the replacement level fertility rate.

4. Buffering against deleterious mutations is a function.

As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertion events in the genome. If insertion events are random, then the offending DNA is much more likely to insert itself into “junk DNA” regions instead of coding and regulatory sequences, thus protecting information-harboring regions of the genome.
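The sponge argument is, at bottom, simple probability: the share of insertions absorbed by junk DNA tracks the junk fraction of the genome. A minimal sketch, with an assumed functional fraction chosen for illustration:

```python
import random

# If insertions land uniformly at random, the junk fraction of the
# genome absorbs a proportional share of the hits.
# FUNCTIONAL_FRACTION is an assumption for illustration.
FUNCTIONAL_FRACTION = 0.10
N_INSERTIONS = 100_000

random.seed(1)
hits = sum(random.random() < FUNCTIONAL_FRACTION for _ in range(N_INSERTIONS))
print(f"insertions striking functional DNA: {hits / N_INSERTIONS:.1%}")
# ~10%: with 90% junk, nine out of ten insertions are harmlessly absorbed.
```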

If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be “junk.”

In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

Resources to Go Deeper

Endnotes

  1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.