Genome Code Builds the Case for Creation

By Fazale Rana – December 18, 2019

A few days ago, I was doing a bit of Christmas shopping for my grandkids and happened across some really cool construction kits designed to teach children engineering principles while encouraging imaginative play. These building block sets are a far cry from the simple Lego kits I played with as a kid.

As cool as these construction toys may be, they don’t come close to the sophisticated construction kit cells use to build the higher-order structures of chromosomes. This point is powerfully illustrated by the insights of Italian investigator Giorgio Bernardi. Over the course of the last several years, Bernardi’s research teams have uncovered design principles that account for chromosome structure, a set of rules that he refers to as the genome code.1

To appreciate these principles and their theological implications, a little background information is in order. (For those readers familiar with chromosome structure, skip ahead to The Genome Code.)


DNA and proteins interact to make chromosomes. Each chromosome consists of a single DNA molecule wrapped around a series of globular protein complexes. These complexes repeat to form a supramolecular structure resembling a string of beads. Biochemists refer to the “beads” as nucleosomes.


Figure 1: Nucleosome Structure. Image credit: Shutterstock

The chain of nucleosomes further coils to form a structure called a solenoid. In turn, the solenoid condenses to form higher-order structures that constitute the chromosome.


Figure 2: Chromosome Structure. Image credit: Shutterstock

Between cell division events (called the interphase of the cell cycle), the chromosome exists in an extended diffuse form that is not readily detectable when viewed with a microscope. Just prior to and during cell division, the chromosome condenses to form its readily recognizable compact structures.

Biologists have discovered that chromosomes in the diffuse state contain two distinct regions, labeled euchromatin and heterochromatin. Euchromatin resists staining with the dyes researchers use to view chromosomes under a microscope. On the other hand, heterochromatin stains readily. Biologists believe that heterochromatin is more tightly packed (and, hence, more readily stained) than euchromatin. They have also learned that heterochromatin associates with the nuclear envelope.


Figure 3: Structure of the Nucleus Showing the Distribution of Euchromatin and Heterochromatin. Image credit: Wikipedia

The Genome Code

Historically, biologists have viewed chromosomes as consisting of compositionally distinct units called isochores. Vertebrate genomes contain five isochore families (L1, L2, H1, H2, and H3), which differ in their proportion of guanine- and cytosine-containing deoxyribonucleotides (two of the four building blocks of DNA). GC composition increases from L1 to H3. Gene density also increases, with the H3 isochores possessing the greatest number of genes. On the other hand, the length of compositionally homogeneous DNA stretches decreases from L1 to H3.
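The isochore classification sketched above amounts to binning DNA stretches by GC fraction. Here is a toy illustration in Python; the boundary values are approximate figures commonly quoted for the five families, chosen here for illustration rather than taken as exact definitions.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def isochore_family(gc):
    """Classify a GC fraction into one of the five isochore
    families. Boundary values are approximate, for illustration."""
    for name, upper in zip(("L1", "L2", "H1", "H2"),
                           (0.37, 0.41, 0.46, 0.53)):
        if gc < upper:
            return name
    return "H3"
```

For example, a stretch that is 55 percent GC would land in H3, consistent with the gene-dense euchromatin described above.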

Bernardi and his collaborators have developed evidence that the isochores reflect a fundamental unit of chromosome organization. The H isochores correspond to GC-rich euchromatin (containing most of the genes) and the L isochores correspond to GC-poor heterochromatin (characterized by gene deserts).

Bernardi’s research teams have demonstrated that the two groups of isochores are characterized by different distributions of DNA sequence elements. GC-poor isochores contain a disproportionately high level of oligo A sequences while GC-rich isochores harbor a disproportionately high level of oligo G sequences. These two different types of DNA sequence elements form stiff structures that mold the overall three-dimensional architecture of chromosomes. For example, oligo A sequences introduce curvature to the DNA double helix. This topology allows the double helix to wrap around the protein core that forms nucleosomes. The oligo G sequence elements adopt a topology that weakens binding to the proteins that form the nucleosome core. As Bernardi points out, “There is a fundamental link between DNA structure and chromatin structure, the genomic code.”2
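The oligo A and oligo G elements Bernardi describes are simple homopolymer tracts, so scanning a sequence for them is straightforward. A minimal sketch; the minimum tract length of 4 is an arbitrary cutoff chosen for illustration, not a value taken from Bernardi's papers.

```python
import re

def count_tracts(seq, base, min_len=4):
    """Count runs of `base` at least `min_len` long in a DNA
    sequence. min_len=4 is an illustrative cutoff only."""
    return len(re.findall(f"{base}{{{min_len},}}", seq.upper()))
```

Comparing such tract counts between GC-poor and GC-rich regions is one simple way to see the skewed distributions described above.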

In other words, the genomic code refers to a set of DNA sequence elements that:

  1. Directly encodes and molds chromosome structure (while defining nucleosome binding),
  2. Is pervasive throughout the genome, and
  3. Overlaps the genetic code by constraining sequence composition and gene structure.

Because of the genomic code, sequence variations introduced by mutations can alter the structure of chromosomes and produce deleterious effects.

The bottom line: most of the genomic sequence plays a role in establishing the higher-order structures necessary for chromosome formation.

Genomic Code Challenges the Junk DNA Concept

According to Bernardi, the discovery of the genomic code explains the high levels of noncoding DNA sequences in genomes. Many people view such sequences as vestiges of an evolutionary history. Because of the existence and importance of the genomic code, the vast proportion of noncoding DNA found in vertebrate genomes must be viewed as functionally vital. According to Bernardi:

Ohno, mostly focusing on pseudo-genes, proposed that non-coding DNA was “junk DNA.” Doolittle and Sapienza and Orgel and Crick suggested the idea of “selfish DNA,” mainly involving transposons visualized as molecular parasites rather than having an adaptive function for their hosts. In contrast, the ENCODE project claimed that the majority (~80%) of the genome participated “in at least one biochemical RNA-and/or chromatin-associated event in at least one cell type.”…At first sight, the pervasive involvement of isochores in the formation of chromatin domains and spatial compartments seems to leave little or no room for “junk” or “selfish” DNA.3

The ENCODE Project

Over the last decade or so, ENCODE Project scientists have been seeking to identify the functional DNA sequence elements in the human genome. The most important landmark for the project came in the fall of 2012 when the ENCODE Project reported phase II results. (Currently, ENCODE is in phase IV.) To the surprise of many, the project reported that around 80 percent of the human genome displays biochemical activity—hence, function—with many scientists anticipating that the percentage would increase as phases III and IV moved toward completion.

The ENCODE results have generated quite a bit of controversy, to say the least. Some researchers accept the ENCODE conclusions. Others vehemently argue that the conclusions fly in the face of the evolutionary paradigm and, therefore, can’t be valid. Of course, if the ENCODE Project conclusions are correct, they become a boon for creationists and intelligent design advocates.

One of the most prominent complaints about the ENCODE conclusions relates to the way the consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. These critics assert that, at most, about ten percent of the human genome is truly functional, with the remainder of the activity reflecting biochemical noise and experimental artifacts.

However, as Bernardi points out, his work (independent of the ENCODE Project) affirms the project’s conclusions. In this case, the so-called junk DNA plays a critical role in molding the structures of chromosomes and must be considered functional.

Function for “Junk DNA”

Bernardi’s work is not the first to recognize pervasive function in noncoding DNA. Other researchers have identified additional functional attributes. To date, researchers have identified at least five distinct functional roles that noncoding DNA plays in genomes:

  1. Helps in gene regulation
  2. Functions as a mutational buffer
  3. Forms a nucleoskeleton
  4. Serves as an attachment site for the mitotic apparatus
  5. Dictates three-dimensional architecture of chromosomes

A New View of Genomes

These types of insights are forcing us to radically rethink our view of the human genome. It appears that genomes are incredibly complex, sophisticated biochemical systems, and that most of their sequences serve useful and necessary functions.

We have come a long way from the early days of the human genome project. Just 15 years ago, many scientists estimated that around 95 percent of the human genome consists of junk. That acknowledgment seemingly provided compelling evidence that humans must be the product of an evolutionary history. Today, the evidence suggests that the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. It is quite possible that most of the human genome is functional.

For creationists and intelligent design proponents, this changing view of the human genome provides reasons to think that it is the handiwork of our Creator. A skeptic might wonder why a Creator would make genomes littered with so much junk. But if a vast proportion of genomes consists of functional sequences, then this challenge no longer carries weight and it becomes more and more reasonable to interpret genomes from within a creation model/intelligent design framework.

What a Christmas gift!



  1. Giorgio Bernardi, “The Genomic Code: A Pervasive Encoding/Molding of Chromatin Structures and a Solution of the ‘Non-Coding DNA’ Mystery,” BioEssays 41, no. 12 (November 8, 2019), doi:10.1002/bies.201900106.
  2. Bernardi, “The Genomic Code.”
  3. Bernardi, “The Genomic Code.”

Reprinted with permission by the author

Original article at:

Pseudogene Discovery Pains Evolutionary Paradigm


It was one of the most painful experiences I ever had. A few years ago, I had two back-to-back bouts of kidney stones. I remember it as if it were yesterday. Man, did it hurt when I passed the stones! All I wanted was for the emergency room nurse to keep the Demerol coming.


Figure 1: Schematic Depiction of Kidney Stones Moving through the Urinary Tract. Image Credit: Shutterstock

When all that misery was going down, I wished I was one of those rare individuals who doesn’t experience pain. There are some people who, due to genetic mutations, live pain-free lives. This condition is called hypoalgesia. (Of course, there is a serious downside to hypoalgesia. Pain lets us know when our body is hurt or sick. Because hypoalgesics can’t experience pain, they are prone to serious injury, etc.)

Biomedical researchers possess a keen interest in studying people with hypoalgesia. Identifying the mutations responsible for this genetic condition helps investigators understand the physiological processes that undergird the pain sensation. This insight then becomes indispensable to guiding efforts to develop new drugs and techniques to treat pain.

By studying the genetic profile of a 66-year-old woman who has lived a lifetime of pain-free injuries, a research team from the UK recently discovered a novel genetic mutation that causes hypoalgesia.1 The mutation responsible for this patient’s hypoalgesia occurred in a pseudogene, a region of the genome considered nonfunctional “junk DNA.”

This discovery adds to the mounting evidence that shows junk DNA is functional. At this point, molecular geneticists have demonstrated that virtually every class of junk DNA has function. This notion undermines the best evidence for common descent and, hence, undermines an evolutionary interpretation of biology. More importantly, the discovery adds support for the competitive endogenous RNA hypothesis, which can be marshaled to support RTB’s genomics model. It is becoming more and more evident to me that genome structure and function reflect the handiwork of a Creator.

The Role of a Pseudogene in Mediating Hypoalgesia

To identify the genetic mutation responsible for the 66-year-old’s hypoalgesia, the research team scanned her DNA along with samples taken from her mother and two children. The team discovered two genetic changes: (1) mutations to the FAAH gene that reduced its expression, and (2) deletion of part of the FAAH pseudogene.

The FAAH gene encodes a protein called fatty acid amide hydrolase (FAAH). This protein breaks down fatty acid amides, some of which interact with cannabinoid receptors. These receptors are located in the membranes of cells found in tissues throughout the body and mediate pain sensation, among other things. When fatty acid amide concentrations become elevated in the circulatory system, they produce an analgesic effect.

Researchers found elevated fatty acid amide levels in the patient’s blood, consistent with reduced expression of the FAAH gene. It appears that both mutations are required for the complete hypoalgesia observed in the patient. The patient’s mother, daughter, and son all display only partial hypoalgesia. The mother and daughter have the same mutation in the FAAH gene but an intact FAAH pseudogene. The patient’s son is missing the FAAH pseudogene, but has a “normal” FAAH gene.

Based on the data, it looks like proper expression levels of the FAAH gene require an intact FAAH pseudogene. This is not the first time that biomedical researchers have observed the same effect. There are a number of gene-pseudogene pairs in which both must be intact and transcribed for the gene to be expressed properly. In 2011, researchers from Harvard University proposed that the competitive endogenous RNA hypothesis explains why transcribed pseudogenes are so important for gene expression.2

The Competitive Endogenous RNA Hypothesis

Biochemists and molecular biologists have long believed that the primary mechanism for regulating gene expression centered around controlling the frequency and amount of mRNA produced during transcription. For housekeeping genes, mRNA is produced continually, while for genes that specify situational proteins, it is produced as needed. Greater amounts of mRNA are produced for genes expressed at high levels and limited amounts for genes expressed at low levels.

Researchers long thought that once the mRNA was produced it would be translated into proteins, but recent discoveries indicate this is not the case. Instead, an elaborate mechanism exists that selectively degrades mRNA transcripts before they can be used to direct the protein production at the ribosome. This mechanism dictates the amount of protein produced by permitting or preventing mRNA from being translated. The selective degradation of mRNA also plays a role in gene expression, functioning in a complementary manner to the transcriptional control of gene expression.

Another class of RNA molecules, called microRNAs, mediates the selective degradation of mRNA. In the early 2000s, biochemists recognized that by binding to mRNA (in the 3′ untranslated region of the transcript), microRNAs play a crucial role in gene regulation. Through binding, microRNAs flag the mRNA for destruction by RNA-induced silencing complex (RISC).


Figure 2: Schematic of the RNA-Induced Silencing Mechanism. Image Credit: Wikipedia

Various distinct microRNA species in the cell bind to specific sites in the 3′ untranslated region of mRNA transcripts. (These binding locations are called microRNA response elements.) The selective binding by the population of microRNAs explains the role that duplicated pseudogenes play in regulating gene expression.

The sequence similarity between the duplicated pseudogene and the corresponding “intact” gene means that the same microRNAs will bind to both mRNA transcripts. (It is interesting to note that most duplicated pseudogenes are transcribed.) When microRNAs bind to the transcript of the duplicated pseudogene, it allows the transcript of the “intact” gene to escape degradation. In other words, the transcript of the duplicated pseudogene is a decoy. The mRNA transcript can then be translated and, hence, the “intact” gene expressed.

It is not just “intact” and duplicated pseudogenes that harbor the same microRNA response elements. Other genes share the same set of microRNA response elements in the 3′ untranslated region of the transcripts and, consequently, will bind the same set of microRNAs. These genes form a network that, when transcribed, will influence the expression of all genes in the network. This relationship means that all the mRNA transcripts in the network can function as decoys. This recognition accounts for the functional utility of unitary pseudogenes.
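The decoy logic described above can be captured with a toy calculation: a fixed pool of microRNAs distributes over all transcripts carrying the shared response element, so adding pseudogene transcripts lets more gene transcripts escape degradation. This is a deliberately crude proportional model of my own construction, not the quantitative treatment in the cited papers.

```python
def surviving_gene_transcripts(gene_mrna, decoy_mrna, microrna_pool):
    """Toy ceRNA model: microRNAs bind gene and decoy transcripts
    in proportion to abundance; bound transcripts are degraded.
    Returns the number of gene transcripts left untouched."""
    total = gene_mrna + decoy_mrna
    bound_fraction = min(1.0, microrna_pool / total)
    return gene_mrna * (1.0 - bound_fraction)

# With no decoys, 80 microRNAs silence 80% of 100 gene transcripts;
# an equal pool of pseudogene decoys soaks up half the microRNAs.
no_decoy = surviving_gene_transcripts(100, 0, 80)      # about 20
with_decoy = surviving_gene_transcripts(100, 100, 80)  # about 60
```

Deleting the pseudogene in this toy model collapses the decoy pool and drives gene expression down, mirroring the partial hypoalgesia seen in the son who lacks the FAAH pseudogene.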

One important consequence of this hypothesis is that mRNA has dual functions inside the cell. First, it encodes information needed to make proteins. Second, it helps regulate the expression of other transcripts that are part of its network.

Junk DNA and the Case for Creation

Evolutionary biologists have long maintained that identical (or nearly identical) pseudogene sequences found in corresponding locations in genomes of organisms that naturally group together (such as humans and the great apes) provide compelling evidence for shared ancestry. This interpretation was persuasive because molecular geneticists regarded pseudogenes as nonfunctional, junk DNA. Presumably, random biochemical events transformed functional DNA sequences (genes) into nonfunctional garbage.

Creationists and intelligent design proponents had little to offer by way of evidence for the intentional design of genomes. But all this changed with the discovery that virtually every class of junk DNA has function, including all three types of pseudogenes (processed, duplicated, and unitary).

If junk DNA is functional, then the sequences previously thought to show common descent could be understood as shared designs. The competitive endogenous RNA hypothesis supports this interpretation. This model provides an elegant rationale for the structural similarity between gene-pseudogene pairs and also makes sense of the widespread presence of unitary pseudogenes in genomes.

Of course, this insight also supports the RTB genomics model. And that sure feels good to me.


  1. Abdella M. Habib et al., “Microdeletion in a FAAH Pseudogene Identified in a Patient with High Anandamide Concentrations and Pain Insensitivity,” British Journal of Anaesthesia, advance access publication, doi:10.1016/j.bja.2019.02.019.
  2. Ana C. Marques, Jennifer Tan, and Chris P. Ponting, “Wrangling for microRNAs Provokes Much Crosstalk,” Genome Biology 12, no. 11 (November 2011): 132, doi:10.1186/gb-2011-12-11-132; Leonardo Salmena et al., “A ceRNA Hypothesis: The Rosetta Stone of a Hidden RNA Language?”, Cell 146, no. 3 (August 5, 2011): 353–58, doi:10.1016/j.cell.2011.07.014.

Reprinted with permission by the author
Original article at:

Is 75% of the Human Genome Junk DNA?


By the rude bridge that arched the flood,
Their flag to April’s breeze unfurled,
Here once the embattled farmers stood,
And fired the shot heard round the world.

–Ralph Waldo Emerson, Concord Hymn

Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists about the amount of functional DNA sequences in the human genome.

Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

Junk DNA and the Creation vs. Evolution Battle

Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

When the draft sequence was first published in 2000, researchers thought only around 2–5% of the human genome consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists claim that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge against intelligent design/creationism.

But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

ENCODE Skeptics

The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

Calculating the Percentage of Functional DNA in the Human Genome

Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

Graur argues that junk DNA functions as a “sponge” absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

Historically, the replacement-level fertility rate for human beings has been two to three children per couple. Based on Graur’s modeling, maintaining a constant population size at this fertility rate requires 85–90% of the human genome to be junk DNA that absorbs deleterious mutations, which caps the functional fraction of the genome at no more than 25%.

Graur also calculated a fertility rate of 15 children per couple, at minimum, to maintain a constant population size, assuming 80% of the human genome is functional. According to Graur’s calculations, if 100% of the human genome displayed function, the minimum replacement level fertility rate would have to be 24 children per couple.
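A rough sketch of the mutational-load arithmetic behind figures like these: under the classic Haldane/Kondrashov-style argument, each couple must average about 2·e^U children to hold population size constant, where U is the deleterious mutation rate per generation. The parameter values below are assumptions I chose so the outputs land near the numbers quoted above; they are not Graur's actual inputs.

```python
import math

def min_replacement_fertility(functional_fraction,
                              mutations_per_generation=100,
                              prop_deleterious=0.025):
    """Minimum children per couple to maintain a constant
    population, using required_fertility = 2 * e**U, where U is
    the deleterious mutation rate per generation. The default
    parameter values are illustrative assumptions only."""
    U = (mutations_per_generation * functional_fraction
         * prop_deleterious)
    return 2.0 * math.exp(U)

# 80% functional -> roughly 15 children; 100% -> roughly 24
print(round(min_replacement_fertility(0.8)))
print(round(min_replacement_fertility(1.0)))
```

Because U grows linearly with the functional fraction while fertility grows exponentially with U, even modest increases in assumed function drive the required fertility up sharply.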

He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.

Response to Graur

So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical.

1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results don’t make sense in light of the evolutionary paradigm, that is no reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions.

To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE project determined that a significant fraction of the human genome is transcribed. They also measured high levels of protein binding.

ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Without minimizing these disruptive interactions, biochemical processes in the cell would grind to a halt. It is reasonable to think that the same considerations would apply to transcription factor binding with DNA.

2. Graur’s model employs some questionable assumptions.

Graur uses an unrealistically high rate for deleterious mutations in his calculations.

Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other functional regions of the genome—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as transcription factor binding sites, and (3) act as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

3. The way Graur determines if DNA sequence elements are functional is questionable. 

Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is only functional if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, these sequences, if they are functional, will resist evolutionary change (due to natural selection) because any alteration would compromise the function of the sequence and endanger the organism. If deleterious, the sequence variations would be eliminated from the population due to the reduced survivability and reproductive success of organisms possessing those variants. Hence, functional sequences are those under the effects of selection.

In contrast, the ENCODE project employed a causal definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

  • transcription,
  • binding of transcription factors to DNA,
  • histone binding to DNA,
  • DNA binding by modified histones,
  • DNA methylation, and
  • three-dimensional interactions between enhancer sequences and genes.

In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequence must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.

So why does Graur insist on a selected-effect definition of function? For no other reason than a causal definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and inflates the calculated replacement-level fertility rate.

4. Buffering against deleterious mutations is a function.

As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertion events in the genome. If insertion events are random, then the offending DNA is much more likely to insert itself into “junk DNA” regions instead of coding and regulatory sequences, thus protecting information-harboring regions of the genome.

If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be “junk.”

In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

Resources to Go Deeper


  1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.