The Optimal Design of the Genetic Code


BY FAZALE RANA – OCTOBER 3, 2018

Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

–William Paley, Natural Theology

In his classic work Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about through the work of intelligent agents. And because biological systems are also contrivances, they, too, must come about via the work of a Creator.

For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. Even if the eye were the only example of a biological contrivance available to us, Paley argued, its sophisticated design and elegant complexity would alone justify the “necessity of an intelligent Creator” to explain its origin.

As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

Like Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet I am especially struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified in concluding that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)

Proteins

The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.


Figure 1: The Amino Acids. Image credit: Shutterstock

Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.

DNA

The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like proteins, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands run in opposite directions, with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.


Figure 2: The Structure of DNA. Image credit: Shutterstock

As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

The Genetic Code

A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides. Nucleotide pairs fall short as well, yielding only 16 (4 × 4) combinations. The cell addresses this mismatch by using a code composed of groupings of three nucleotides, which provide 64 (4 × 4 × 4) combinations, more than enough to specify the 20 different amino acids.

The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.
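The arithmetic behind the code’s size and redundancy can be sketched in a few lines of Python. The leucine and methionine sets below are drawn from the standard codon table; this is only an illustrative slice of the code, not a full implementation of it.

```python
from itertools import product

BASES = "UCAG"  # the four RNA bases (DNA's T is read as U in messenger RNA)

# With 4 bases, pairs give only 4^2 = 16 combinations (too few for 20
# amino acids), while triplets give 4^3 = 64 (more than enough).
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))  # 64

# Redundancy in the standard code: leucine is specified by six
# different codons, methionine by just one.
leucine_codons = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}
methionine_codons = {"AUG"}
```

The surplus of 64 codons over 20 amino acids is what makes the redundancy described above possible in the first place.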

Interestingly, some codons, called stop codons or nonsense codons, do not code for any amino acid. (For example, the codon UGA is a stop codon.) These codons always occur at the end of the gene, informing the cell where the protein chain ends.

Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids but also “tell” the cell where a protein chain begins. For example, the codon GUG encodes the amino acid valine and can also specify the starting point of a protein chain.


Figure 3: The Genetic Code. Image credit: Shutterstock

The Optimal Genetic Code

Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—a frozen accident. Instead, it looked to them as though a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time, scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.2

Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor in their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

It could be argued that the genetic code’s error-minimization properties are more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code’s capacity occurred outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
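The method behind these studies can be illustrated with a deliberately tiny toy code: two made-up bases, doublet codons, and invented property values standing in for a real measure such as hydrophobicity. Only the scoring idea (average property change across all single-base substitutions, with redundancy held fixed) mirrors the published approach; every name and number below is fabricated for illustration.

```python
from itertools import permutations, product

BASES = "XY"  # a toy two-letter alphabet, not real nucleotides
CODONS = ["".join(p) for p in product(BASES, repeat=2)]  # XX, XY, YX, YY

# Invented property values for three toy "amino acids".
PROPS = {"a": 0.0, "b": 1.0, "c": 5.0}

def error_score(code):
    """Mean squared property change over all single-base substitutions."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        for pos in range(len(codon)):
            for base in BASES:
                if base == codon[pos]:
                    continue  # not a mutation
                mutant = codon[:pos] + base + codon[pos + 1:]
                total += (PROPS[aa] - PROPS[code[mutant]]) ** 2
                count += 1
    return total / count

# Every code with the same redundancy pattern ('a' gets two codons,
# 'b' and 'c' one each), mirroring how the studies held redundancy fixed.
all_scores = [
    error_score(dict(zip(CODONS, assignment)))
    for assignment in set(permutations("aabc"))
]

# A code that places the two 'a' codons one substitution apart scores
# as well as any alternative with the same redundancy.
natural = {"XX": "a", "XY": "a", "YX": "b", "YY": "c"}
print(error_score(natural) == min(all_scores))  # True
```

Scaling this exhaustive comparison from 12 toy codes to samples drawn from the 10^18 real candidates is, in essence, what the cited studies did.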

Frameshift Mutations

Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.
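The regrouping of codons is easy to see with a short DNA sequence. The sequence below is hypothetical, invented purely for illustration:

```python
# A hypothetical coding sequence, read in non-overlapping triplets.
seq = "ATGGCTGAAACTCCG"
codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
print(codons)  # ['ATG', 'GCT', 'GAA', 'ACT', 'CCG']

# Inserting a single nucleotide (a number not divisible by three)
# shifts the reading frame: every codon downstream of the insertion
# point is regrouped.
mutated = seq[:3] + "A" + seq[3:]
shifted = [mutated[i:i + 3] for i in range(0, len(mutated), 3)]
print(shifted)  # ['ATG', 'AGC', 'TGA', 'AAC', 'TCC', 'G']
```

Note that the shifted frame happens to produce TGA (the DNA equivalent of the stop codon UGA), illustrating how a frameshift can even truncate the protein prematurely.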


Figure 4: Types of Mutations. Image credit: Shutterstock

The Genetic Code Is Optimized to Withstand Frameshift Mutations

Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

The triple optimization of the genetic code arises from its redundancy and its specific codon assignments. Over 10^18 possible genetic codes exist, and any one of them could have been “selected” as the code in nature. Yet the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.


Endnotes

  1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
  2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
  3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online publication, doi:10.1101/gr.5987307.
  5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/03/the-optimal-design-of-the-genetic-code

Yeast Gene Editing Study Raises Questions about the Evolutionary Origin of Human Chromosome 2


BY FAZALE RANA – SEPTEMBER 12, 2018

As a biochemist and a skeptic of the evolutionary paradigm, I am often asked two interrelated questions:

  1. What do you think are the greatest scientific challenges to the evolutionary paradigm?
  2. How do you respond to all the compelling evidence for biological evolution?

When it comes to the second question, people almost always ask about the genetic similarity between humans and chimpanzees. Unexpectedly, new research on gene editing in brewer’s yeast helps answer these questions more definitively than ever.

The genetic comparisons between the two species convince many people that human evolution is true. Presumably, the shared genetic features in the human and chimpanzee genomes reflect the species’ shared evolutionary ancestry.

One high-profile example of these similarities is the structural features human chromosome 2 shares with two chimpanzee chromosomes labeled chromosome 2A and chromosome 2B. When the two chimpanzee chromosomes are placed end to end, they look remarkably like human chromosome 2. Evolutionary biologists interpret this genetic similarity as evidence that human chromosome 2 arose when chromosome 2A and chromosome 2B underwent an end-to-end fusion. They claim that this fusion took place in the human evolutionary lineage at some point after it separated from the lineage that led to chimpanzees and bonobos. Therefore, the similarity in these chromosomes provides strong evidence that humans and chimpanzees share an evolutionary ancestry.


Figure 1: Human and Chimpanzee Chromosomes Compared

Image credit: Who Was Adam? (Covina, CA: RTB Press, 2015), p. 210.

Yet, new work by two separate teams of synthetic biologists from the United States and China raises questions about this evolutionary scenario. Working independently, both research teams devised similar gene editing techniques that, in turn, they used to fuse chromosomes in the yeast species Saccharomyces cerevisiae (brewer’s yeast).1 Their work demonstrates the central role intelligent agency must play in end-on-end chromosome fusion, thereby countering the evolutionary explanation while supporting a creation model interpretation of human chromosome 2.

The Structure of Human Chromosome 2

Chromosomes are large structures visible in the nucleus during the cell division process. These structures consist of DNA combined with proteins to form the chromosome’s highly condensed, hierarchical architecture.

Figure 2: Chromosome Structure

Image credit: Shutterstock

Each species has a characteristic number of chromosomes that differ in size and shape. For example, humans have 46 chromosomes (23 pairs); chimpanzees and other apes have 48 (24 pairs).

When exposed to certain dyes, chromosomes stain, producing a diagnostic pattern of bands along the length of each chromosome. The bands vary in number, location, thickness, and intensity, and the unique banding profile of each chromosome helps geneticists identify it under a microscope.

In the early 1980s, evolutionary biologists compared the chromosomes of humans, chimpanzees, gorillas, and orangutans for the first time.2 These studies revealed an exceptional degree of similarity between human and chimp chromosomes. When aligned, the human and corresponding chimpanzee chromosomes display near-identical banding patterns, band locations, band sizes, and band stain intensities. To evolutionary biologists, this resemblance provides powerful evidence for human and chimpanzee shared ancestry.

The most noticeable difference between human and chimp chromosomes is the quantity: 46 for humans and 48 for chimpanzees. As I pointed out, evolutionary biologists account for this difference by suggesting that two chimp chromosomes (2A and 2B) fused. This fusion event would have reduced the number of chromosome pairs from 24 to 23, and the chromosome number from 48 to 46.

As noted, evidence for this fusion comes from the close similarity of the banding patterns for human chromosome 2 and chimp chromosomes 2A and 2B when the two are oriented end on end. The case for fusion also gains support from the presence of: (1) two centromeres in human chromosome 2, one functional, the other inactive; and (2) an internal telomere sequence within human chromosome 2.3 The location of the two centromeres and internal telomere sequences corresponds to the expected locations if, indeed, human chromosome 2 arose from a fusion event.4

Evidence for Evolution or Creation?

Even though human chromosome 2 looks like it is a fusion product, it seems unlikely to me that its genesis resulted from undirected natural processes. Instead, I would argue that a Creator intervened to create human chromosome 2 because combining chromosomes 2A and 2B end to end to form it would have required a succession of highly improbable events.

I describe the challenges to the evolutionary explanation in some detail in a previous article:

  • End-to-end fusion of two chromosomes at the telomeres faces nearly insurmountable hurdles.
  • And, if somehow the fusion did occur, it would alter the number of chromosomes and lead to one of three possible scenarios: (1) nonviable offspring, (2) viable offspring that suffers from a diseased state, or (3) viable but infertile offspring. Each of these scenarios would prevent the fused chromosome from entering and becoming entrenched in the human gene pool.
  • Finally, if chromosome fusion took place and if the fused chromosome could be passed on to offspring, the event would have had to create such a large evolutionary advantage that it would rapidly sweep through the population, becoming fixed.

This succession of highly unlikely events makes more sense, from my vantage point, if we view the structure of human chromosome 2 as the handiwork of a Creator instead of the outworking of evolutionary processes. But why would these chromosomes appear to be so similar, if they were created? As I discuss elsewhere, I think the similarity between human and chimpanzee chromosomes reflects shared design, not shared evolutionary ancestry. (For more details, see my article “Chromosome 2: The Best Evidence for Evolution?”)

Yeast Chromosome Studies Offer Insight

Recent work by two independent teams of synthetic biologists from the US and China corroborates my critique of the evolutionary explanation for human chromosome 2. Working within the context of the evolutionary framework, both teams were interested in understanding the influence that chromosome number and organization have on an organism’s biology and how chromosome fusion shapes evolutionary history. To pursue this insight, both research groups carried out similar experiments using CRISPR/Cas9 gene editing to reduce the number of chromosomes in brewer’s yeast from 16 to 1 (for the Chinese team) and from 16 to 2 (for the team from the US) through a succession of fusion events.

Both teams reduced the number of chromosomes in stages by fusing pairs of chromosomes. The first round of fusions reduced the number from 16 to 8. In the next round, they fused pairs of the newly created chromosomes to reduce the number from 8 to 4, and so on.
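The staged halving can be sketched as a simple bookkeeping model, where each chromosome is a list of markers and a "fusion" joins two lists end to end. The marker names are invented, and this models only the experimental design, not the CRISPR/Cas9 chemistry:

```python
# Sixteen toy chromosomes, each carrying one hypothetical marker.
chromosomes = [[f"chr{i}"] for i in range(1, 17)]

counts_per_round = []
while len(chromosomes) > 1:
    # Fuse adjacent pairs end to end in each round.
    chromosomes = [a + b for a, b in zip(chromosomes[::2], chromosomes[1::2])]
    counts_per_round.append(len(chromosomes))

print(counts_per_round)  # [8, 4, 2, 1]
# The final chromosome carries all sixteen original markers.
```

Four successive rounds of pairwise fusion take the count from 16 to 1, matching the staged reduction both teams pursued (the US team stopped at 2).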

To their surprise, the yeast seemed to tolerate this radical genome editing quite well—although their growth rate slowed and the yeast failed to thrive under certain laboratory conditions. Gene expression was altered in the modified yeast genomes, but only for a few genes. Most of the 5,800 genes in the brewer’s yeast genome were expressed normally compared to the wild-type strain.

For synthetic biology, this work is a milestone. It currently stands as one of the most radical genome reconfigurations ever achieved. This discovery creates an exciting new research tool to address fundamental questions about chromosome biology. It also may have important applications in biotechnology.

The experiment also ranks as a milestone for the RTB human origins creation model because it helps address questions about the origin of human chromosome 2. Specifically, the work with brewer’s yeast provides empirical evidence that human chromosome 2 must have been shaped by an Intelligent Agent. This research also reinforces my concerns about the capacity of evolutionary mechanisms to generate human chromosome 2 via the fusion of chimpanzee chromosomes 2A and 2B.

Chromosome fusion demonstrates the critical role intelligent agency plays.

Both research teams had to carefully design the gene editing system they used so that it would precisely delete two distinct regions in the chromosomes. This process effected end-on-end chromosome fusions in a way that would allow the yeast cells to survive. Specifically, they had to delete regions of the chromosomes near the telomeres, including the highly repetitive telomere-associated sequences. While they carried out this deletion, they carefully avoided deleting DNA sequences near the telomeres that harbored genes. They also simultaneously deleted one of the centromeres of the fused chromosomes to ensure that the fused chromosome would properly replicate and segregate during cell division. Finally, they had to make sure that when the two chromosomes fused, the remaining centromere was positioned near the center of the resulting chromosome.

In addition to the high-precision gene editing, they had to carefully construct the sequence of donor DNA that accompanied the CRISPR/Cas9 gene editing package so that the chromosomes with the deleted telomeres could be directed to fuse end on end. Without the donor DNA, the fusion would have been haphazard.

In other words, to fuse the chromosomes so that the yeast survived, the research teams needed a detailed understanding of chromosome structure and biology and a strategy to use this knowledge to design precise gene editing protocols. Such planning would ensure that chromosome fusion occurred without the loss of key genetic information and without disrupting key processes such as DNA replication and chromosome segregation during cell division. The researchers’ painstaking effort is a far cry from the unguided, undirected, haphazard events that evolutionary biologists think caused the end-on-end chromosome fusion that created human chromosome 2. In fact, given the high-precision gene editing required to create fused chromosomes, it is hard to envision how evolutionary processes could ever produce a functional fused chromosome.

A discovery by both research teams further complicates the evolutionary explanation for the origin of human chromosome 2. Namely, the yeast cells could not replicate unless the centromere of one of the chromosomes was deleted at the time the chromosomes fused. The researchers learned that if this step was omitted, the fused chromosomes weren’t stable. Because centromeres serve as the point of attachment for the mitotic spindle, if a chromosome possesses two centromeres, mistakes occur in the chromosome segregation step during cell division.

It is interesting that human chromosome 2 has two centromeres, but one of them has been inactivated. (In the evolutionary scenario, this inactivation would have happened through a series of mutations in the centromeric DNA sequences that accrued over time.) However, if human chromosome 2 resulted from the fusion of two chimpanzee chromosomes, the initial fusion product would have possessed two functional centromeres. In the evolutionary scenario, it would have taken millennia for one of the centromeres to become inactivated. Yet the yeast studies indicate that centromere loss must take place simultaneously with end-to-end fusion, and evolutionary mechanisms have no way to ensure the two events coincide.

Chromosome fusion in yeast leads to a loss of fitness.

Perhaps one of the most remarkable outcomes of this work is the discovery that the yeast cells lived after undergoing so many successive chromosome fusions. In fact, experts in synthetic biology such as Gianni Liti (who commented on this work for Nature) expressed surprise that the yeast survived this radical genome restructuring.5

Though both research teams claimed that the fusion had little effect on the fitness of the yeast, the data suggest otherwise. The yeast cells with the fused chromosomes grew more slowly than wild-type cells and struggled to grow under certain culture conditions. In fact, when the Chinese research team cultured the yeast carrying the single fused chromosome together with the wild-type strain, the wild-type yeast cells outcompeted the cells with the fused chromosome.

Although researchers observed changes in gene expression only for a small number of genes, this result appears to be a bit misleading. The genes with changed expression patterns are normally located near telomeres. The activity of these genes is normally turned down low because they usually are needed only under specific growth conditions. But with the removal of telomeres in the fused chromosomes, these genes are no longer properly regulated; in fact, they may be over-expressed. And, as a consequence of chromosome fusion, some genes that normally reside at a distance from telomeres find themselves close to telomeres, leading to reduced activity.

This altered gene expression pattern helps explain the slower growth rate of the yeast strain with fused chromosomes and the yeast cells’ difficulty growing under certain conditions. The finding also raises more questions about the evolutionary scenario for the origin of human chromosome 2. Based on the yeast studies, it seems reasonable to think that the end-to-end fusion of chromosomes 2A and 2B would have reduced the fitness of the offspring that first inherited the fused chromosome 2, making it less likely that the fusion would have taken hold in the human gene pool.

Chromosome fusion in yeast leads to a loss of fertility.

Normally, yeast cells reproduce asexually. But they can also reproduce sexually. When yeast cells mate, they fuse. As a result of this fusion event, the resulting cell has two sets of chromosomes. In this state, the yeast cells can divide or form spores. In many respects, the sexual reproduction of yeast cells resembles the sexual reproduction in humans, in which egg and sperm cells, each with one set of chromosomes, fuse to form a zygote with two sets of chromosomes.


Figure 3: Yeast Cell Reproduction

Image credit: Shutterstock

Both research groups discovered that genetically engineered yeast cells with fused chromosomes could mate and form spores, but spore viability was lower than for wild-type yeast.

They also discovered that after the first round of chromosome fusion when the genetically engineered yeast possessed 8 chromosomes, mating normal yeast cells with those harboring fused chromosomes resulted in low fertility. When wild-type yeast cells were mated with yeast strains that had been subjected to additional rounds of chromosome fusion, spore formation failed altogether.

The synthetic biologists find this result encouraging because it means that if they use yeast with fused chromosomes for biotechnology applications, there is little chance that the genetically engineered yeast will mate with wild-type yeast. In other words, the loss of fertility serves as a safeguard.

However, this loss of fertility does not bode well for evolutionary explanations for the origin of human chromosome 2. The yeast studies indicate that chromosome fusion leads to a loss of fertility because of the mismatch in chromosome number, which makes it difficult for chromosomes to align and properly segregate during cell division. So, why wouldn’t this loss of fertility have occurred when chromosomes 2A and 2B fused?


Figure 4: Cell Division

Image credit: Shutterstock

In short, the theoretical concerns I expressed about the evolutionary origin of human chromosome 2 find experimental support in the yeast studies. And the indisputable role intelligent agency plays in designing and executing the protocols to fuse yeast chromosomes provides empirical evidence that a Creator must have intervened in some capacity to design human chromosome 2.

Of course, there are a number of outstanding questions that remain for a creation model interpretation of human chromosome 2, including:

  • Why would a Creator seemingly fuse together two chromosomes to create human chromosome 2?
  • Why does this chromosome possess internal telomere sequences?
  • Why does human chromosome 2 harbor seemingly nonfunctional centromere sequences?

We predict that as we learn more about the biology of human chromosome 2, we will discover a compelling rationale for the structural features of this chromosome, in a way that befits a Creator.

But at this juncture, the fusion of yeast chromosomes in the lab makes it hard to think that unguided evolutionary processes could ever successfully fuse two chromosomes end on end, including the fusion proposed to have produced human chromosome 2. Creation appears to make more sense.


Endnotes

  1. Jingchuan Luo et al., “Karyotype Engineering by Chromosome Fusion Leads to Reproductive Isolation in Yeast,” Nature 560 (2018): 392–96, doi:10.1038/s41586-018-0374-x; Yangyang Shao et al., “Creating a Functional Single-Chromosome Yeast,” Nature 560 (2018): 331–35, doi:10.1038/s41586-018-0382-x.
  2. Jorge J. Yunis, J. R. Sawyer, and K. Dunham, “The Striking Resemblance of High-Resolution G-Banded Chromosomes of Man and Chimpanzee,” Science 208 (1980): 1145–48, doi:10.1126/science.7375922; Jorge J. Yunis and Om Prakash, “The Origin of Man: A Chromosomal Pictorial Legacy,” Science 215 (1982): 1525–30, doi:10.1126/science.7063861.
  3. The centromere is a region of the DNA molecule near the center of the chromosome that serves as the point of attachment for the mitotic spindle during the cell division process. Telomeres are DNA sequences located at the tip ends of chromosomes designed to stabilize the chromosome and prevent it from undergoing degradation.
  4. J. W. Ijdo et al., “Origin of Human Chromosome 2: An Ancestral Telomere-Telomere Fusion,” Proceedings of the National Academy of Sciences USA 88 (1991): 9051–55, doi:10.1073/pnas.88.20.9051; Rosamaria Avarello et al., “Evidence for an Ancestral Alphoid Domain on the Long Arm of Human Chromosome 2,” Human Genetics 89 (1992): 247–49, doi:10.1007/BF00217134.
  5. Gianni Liti, “Yeast Chromosome Numbers Minimized Using Genome Editing,” Nature 560 (August 1, 2018): 317–18, doi:10.1038/d41586-018-05309-4.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/09/12/yeast-gene-editing-study-raises-questions-about-the-evolutionary-origin-of-human-chromosome-2

 

The Endosymbiont Hypothesis: Things Aren’t What They Seem to Be

theendosymbionthypothesis

BY FAZALE RANA – AUGUST 29, 2018

Sometimes, things just aren’t what they seem to be. For example, when it comes to the world of biology:

  • Fireflies are not flies; they are beetles
  • Prairie dogs are not dogs; they are rodents
  • Horned toads are not toads; they are lizards
  • Douglas firs are not firs; they are pines
  • Silkworms are not worms; they are caterpillars
  • Peanuts are not nuts; they are legumes
  • Koala bears are not bears; they are marsupials
  • Guinea pigs are not from Guinea and they are not pigs; they are rodents from South America
  • Banana trees are not trees; they are herbs
  • Cucumbers are not vegetables; they are fruit
  • Mexican jumping beans are not beans; they are seeds with a larva inside

And . . . mitochondria are not alphaproteobacteria. In fact, evolutionary biologists don’t know what they are—at least, if recent work by researchers from Uppsala University in Sweden is to be taken seriously.1

As silly as this list may be, evolutionary biologists are not amused by this latest insight about the identity of mitochondria. Uncertainty about the evolutionary origin of mitochondria removes from the table one of the most compelling pieces of evidence for the endosymbiont hypothesis.

The endosymbiont hypothesis is a cornerstone idea within the modern evolutionary framework, and biology textbooks often present it as a well-evidenced, well-established evolutionary explanation for the origin of complex cells (eukaryotic cells). Yet, confusion and uncertainty surround this idea, as this latest discovery attests. To put it another way: when it comes to the evolutionary explanation for the origin of complex cells in biology textbooks, things aren’t what they seem.

The Endosymbiont Hypothesis

Most evolutionary biologists believe that the endosymbiont hypothesis is the best explanation for one of the key transitions in life’s history—namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis to explain the origin of eukaryotic cells in the 1960s.

Since that time, Margulis’s ideas on the origin of complex cells have become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this hypothesis compelling; consequently, they view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

the-endosymbiont-hypothesis

The Evolution of Eukaryotic Cells According to the Endosymbiont Hypothesis

Image source: Wikipedia

Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once taken inside the host cell, the endosymbionts took up permanent residence, with the endosymbiont growing and dividing inside the host. Over time, endosymbionts and hosts became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved machinery to produce proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

Evidence for the Endosymbiont Hypothesis

The morphological similarity between organelles and bacteria serves as one line of evidence for the endosymbiont hypothesis. For example, mitochondria are about the same size and shape as a typical bacterium, and they have a double membrane structure like gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.

Biochemical evidence also seems to support the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. Biologists also take the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

The presence of the unique lipid cardiolipin in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. Cardiolipin is an important lipid component of bacterial inner membranes. Yet, it is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.

But, as compelling as these observations may be, for many evolutionary biologists phylogenetic analysis provides the most convincing evidence for the endosymbiont hypothesis. Evolutionary trees built from the DNA sequences of mitochondria, bacteria, and archaea place these organelles among a group of microbes called alphaproteobacteria. And, for many (but not all) evolutionary trees, mitochondria cluster with the bacterial group Rickettsiales. For evolutionary biologists, these results mean that the endosymbionts that eventually became the first mitochondria were alphaproteobacteria. If mitochondria were not evolutionarily derived from alphaproteobacteria, why would the DNA sequences of these organelles group with these bacteria in evolutionary trees?

But . . . Mitochondria Are Not Alphaproteobacteria

Even though evolutionary biologists seem certain about the phylogenetic positioning of mitochondria among the alphaproteobacteria, there has been an ongoing dispute as to the precise positioning of mitochondria in evolutionary trees, specifically whether or not mitochondria group with Rickettsiales. Looking to bring an end to this dispute, the Uppsala University research team developed a more comprehensive data set to build their evolutionary trees, with the hope that they could more precisely locate mitochondria among the alphaproteobacteria. The researchers point out that the alphaproteobacterial genomes used to construct evolutionary trees stem from microbes found in clinical and agricultural settings, which represent only a small sampling of the alphaproteobacteria found in nature. Researchers knew this was a limitation, but, up to this point, this was the only DNA sequence data available to them.

To avoid the bias that arises from this limited data set, the researchers screened databases of DNA sequences collected from the Pacific and Atlantic Oceans for undiscovered alphaproteobacteria. They uncovered twelve new groups of alphaproteobacteria. In turn, they included these new genome sequences along with DNA sequences from previously known alphaproteobacterial genomes to build a new set of evolutionary trees. To their surprise, their analysis indicates that mitochondria are not alphaproteobacteria.

Instead, it looks like mitochondria belong to a side branch that separated from the evolutionary tree before alphaproteobacteria emerged. Adding to their surprise, the research team was unable to identify any bacterial species alive today that would group with mitochondria.

To put it another way: the latest study indicates that evolutionary biologists have no candidate for the evolutionary ancestor of mitochondria.

Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

Evolutionary biologists suggest that there’s compelling evidence for the endosymbiont hypothesis. But when researchers attempt to delineate the details of this presumed evolutionary transition, such as the identity of the original endosymbiont, it becomes readily apparent that biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells.

As I have written previously, the problems with the endosymbiont hypothesis are not limited to the identity of the evolutionary ancestor of mitochondria. They are far more pervasive, confounding each evolutionary step that life scientists envision to be part of the emergence of complex cells. (For more examples, see the Resources section.)

When it comes to the endosymbiont hypothesis, things are not what they seem to be. If mitochondria are not alphaproteobacteria, and if evolutionary biologists have no candidate for their evolutionary ancestor, could it be possible that they are the handiwork of the Creator?

Resources

Endnotes

  1. Joran Martijn et al., “Deep Mitochondrial Origin Outside the Sampled Alphaproteobacteria,” Nature 557 (May 3, 2018): 101–5, doi:10.1038/s41586-018-0059-5.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/08/29/the-endosymbiont-hypothesis-things-aren-t-what-they-seem-to-be

Do Plastic-Eating Bacteria Dump the Case for Creation?

doplasticeatingbacteria

BY FAZALE RANA – JULY 18, 2018

At the risk of stating the obvious: Plastics are an indispensable part of our modern world. Yet, plastic materials cause untold problems for the environment. One of the properties that makes plastics so useful also makes them harmful. Plastics don’t readily degrade.

Recently, researchers discovered a new strain of bacteria that has evolved the ability to degrade plastics. These microbes may help solve some of the environmental problems caused by plastics, but their evolution seemingly causes new problems for people who hold the view that a Creator is responsible for life’s origin and design. But is this really the case? To find out, we need to break down this discovery.

One plastic in widespread use today is polyethylene terephthalate (PET). This polymer was patented in the 1940s and became widely used in the 1970s. Most people are familiar with PET because it is used to make drinking bottles.

This material is produced by reacting ethylene glycol with terephthalic acid (both produced from petroleum). Crystalline in nature, this plastic is a durable material that is difficult to break down, because of the inaccessibility of the ester linkages that form between the terephthalic acid and ethylene glycol subunits that make up the polymer backbone.

PET can be recycled, thereby mitigating its harmful effects on the environment. A significant portion of PET is mechanically recycled by converting it into fibers used to manufacture carpets.

In principle, PET could be recycled by chemically breaking the ester linkages holding the polymer together. When the ester linkages are cleaved, ethylene glycol and terephthalic acid are the breakdown products. These recovered starting materials could be reused to make more PET. Unfortunately, chemical recycling of PET is expensive and difficult to carry out because of the inaccessibility of the ester linkages. In fact, it is cheaper to produce PET from petroleum products than from the recycled monomers.

Can Bacteria Recycle PET?

An interesting advance took place in 2016 that has important implications for PET recycling. A team of Japanese researchers discovered a strain of the bacterium Ideonella sakaiensis that could break down PET into terephthalic acid and ethylene glycol.1 This strain was discovered by screening wastewater, soil, sediments, and sludge from a PET recycling facility. The microbe produces two enzymes, dubbed PETase and MHETase, that work in tandem to convert PET into its constituent monomers.

Evolution in Action

Researchers think that this microbe acquired DNA from the environment or another microbe via horizontal gene transfer. Presumably, this DNA fragment harbored the genes for cutinase, an enzyme that breaks down ester linkages. Once the I. sakaiensis strain picked up the DNA and incorporated it into its genome, the cutinase gene must have evolved so that it now encodes the information to produce two enzymes with the capacity to break down PET. Plus, this new capability must have evolved rather quickly, over the span of a few decades.

PETase Structure and Evolution

In an attempt to understand how PETase and MHETase evolved and how these two enzymes might be engineered for recycling and bioremediation purposes, a team of investigators from the University of Plymouth determined the structure of PETase with atomic-level detail.2 They learned that this enzyme has the structural components characteristic of a family of enzymes called alpha/beta hydrolases. Based on the amino acid sequence of PETase, the researchers concluded that its closest match among known enzymes is a cutinase produced by the bacterium Thermobifida fusca. One of the most significant differences between these two enzymes is found at their active sites. (The active site is the location on the enzyme surface that binds the compounds that the enzyme chemically alters.) The active site of PETase is broader than that of the T. fusca cutinase, allowing it to accommodate PET polymers.

As researchers sought to understand how PETase evolved from cutinase, they engineered amino acid changes in PETase, hoping to revert it to a cutinase. To their surprise, the resulting enzyme was even more effective at degrading PET than the PETase found in nature.

This insight does not help explain the evolutionary origin of PETase, but the serendipitous discovery does point the way to using engineered PETases for recycling and bioremediation. One could envision spraying this enzyme (or the bacterium I. sakaiensis) onto a landfill or onto patches of plastic floating in the Earth’s oceans. Alternatively, the enzyme could be used at recycling facilities to generate PET monomers.

As a Christian, I find this discovery exciting. Advances such as these will help us do a better job as planetary caretakers and as stewards of God’s creation, in accord with the mandate given to us in Genesis 1.

But, this discovery does raise a question: Does the evolution of a PET-eating bacterium prove that evolution is true? Does this discovery undermine the case for creation? After all, it is evolution happening right before our eyes.

Is Evolution in Action Evidence for Evolution?

To answer this question, we need to recognize that the term “evolution” can take on a variety of meanings. Each one reflects a different type of biological transformation (or presumed transformation).

It is true that organisms can change as their environment changes. This occurs through mutations to the genetic material. In rare circumstances, these mutations can create new biochemical and biological traits, such as the ones that produced the strain of I. sakaiensis that can degrade PET. If these new traits help the organism survive, it will reproduce more effectively than organisms lacking the trait. Over time, this new trait will take hold in the population, causing a transformation of the species.

And this is precisely what happened with I. sakaiensis. However, microbial evolution is not controversial. Most creationists and intelligent design proponents acknowledge evolution at this scale. In a sense, it is not surprising that single-celled microbes can evolve, given their extremely large population sizes and capacity to take up large pieces of DNA from their surroundings and incorporate them into their genomes.

Yet, I. sakaiensis is still I. sakaiensis. In fact, the similarity between PETase and cutinases indicates that only a few amino acid changes can explain the evolutionary origin of new enzymes. Along these lines, it is important to note that both cutinase and PETase cleave ester linkages. The difference between these two enzymes involves subtle structural differences triggered by altering a few amino acids. In other words, the evolution of a PET-degrading bacterium is easy to accomplish through a form of biochemical microevolution.

But just because microbes can undergo limited evolution at a biochemical level does not mean that evolutionary mechanisms can account for the origin of biochemical systems and the origin of life. That is an unwarranted leap. This study is evidence for microbial evolution, nothing more.

Though this advance can help us in our planetary stewardship role, this study does not provide the type of evidence needed to explain the origin of biochemistry and, hence, the origin of life through evolutionary means. Nor does it provide the type of evidence needed to explain the evolutionary origin of life’s major groups. Evolutionary biologists must develop appropriate evidence for these putative transformations, and so far, they haven’t.

Evidence of microbial evolution in action is not evidence for the evolutionary paradigm.

Resources

Endnotes

  1. Shosuke Yoshida et al., “A Bacterium that Degrades and Assimilates Poly(ethylene terephthalate),” Science 351 (March 11, 2016): 1196–99, doi:10.1126/science.aad6359.
  2. Harry P. Austin et al., “Characterization and Engineering of a Plastic-Degrading Aromatic Polyesterase,” Proceedings of the National Academy of Sciences, USA (April 17, 2018): preprint, doi:10.1073/pnas.1718804115.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/07/18/do-plastic-eating-bacteria-dump-the-case-for-creation

Believing Impossible Things: Convergent Origins of Functional Junk DNA Sequences

believingimpossiblethings

BY FAZALE RANA – MARCH 14, 2018

In a classic scene from Alice in Wonderland, the story’s heroine informs the White Queen, “One can’t believe impossible things,” to which, the White Queen—scolding Alice—replies, “I daresay you haven’t had much practice. When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

If recent work by researchers from UC Santa Cruz and the University of Rochester (New York) is to be taken as true, it would require evolutionary biologists to believe two impossible things—before, during, and after breakfast. These scientific investigators have discovered something that is hard to believe about the role SINE DNA plays in gene regulation, raising questions about the validity of the evolutionary explanation for the architecture of the human genome.1 In fact, considering the implications of this work, it would be easier to believe that the human genome was shaped by a Creator’s handiwork than by evolutionary forces.

SINE DNA

Short interspersed elements, or SINEs, are one of the many classes of noncoding or junk DNA. SINE sequences range in size from 100 to 300 base pairs (genetic letters). In primates, the most common SINEs are the Alu sequences. There are about 1.1 million Alu copies in the human genome (roughly 12 percent of the genome).
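As a quick sanity check on these numbers, a back-of-envelope calculation puts the Alu fraction of the genome in the same ballpark as the cited figure. The average element length and genome size used below are assumed round values, not figures from the study:

```python
# Back-of-envelope check on the Alu share of the human genome.
# The copy number comes from the text; the average element length
# and genome size are assumed round figures.
alu_copies = 1.1e6          # ~1.1 million Alu copies (from the text)
avg_alu_length_bp = 300     # assumed average Alu length, in base pairs
genome_size_bp = 3.2e9      # assumed human genome size, in base pairs

alu_fraction = alu_copies * avg_alu_length_bp / genome_size_bp
print(f"Estimated Alu fraction of the genome: {alu_fraction:.1%}")
```

The estimate lands near 10 percent, in the same ballpark as the roughly 12 percent figure cited above; the exact value depends on the average element length assumed.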

SINE DNA sequences (including Alu sequences) contain a DNA segment used by the cell’s machinery to produce an RNA message. This feature allows SINEs to be transcribed, and because of it, molecular biologists also categorize SINE DNA as a retroposon. Molecular biologists believe that SINE sequences can multiply in number within an organism’s genome through the activity of the enzyme reverse transcriptase. Presumably, once SINE DNA is transcribed, reverse transcriptase converts SINE RNA back into DNA. The reconverted DNA sequence then randomly reintegrates into the genome. It is through this duplication and reintegration mechanism that SINE sequences proliferate as they move around, or retrotranspose, throughout the genome. In other words, molecular biologists believe that over time, transcription of SINE DNA and reverse transcription of SINE RNA increase the copy number of SINE sequences and randomly disperse them throughout an organism’s genome.

Molecular biologists have discovered numerous instances in which nearly identical SINE segments occur at corresponding locations in the genomes of humans, chimpanzees, and other primates. Because the duplication and movement of SINE DNA appear to be random, evolutionary biologists think it unlikely that SINE sequences would independently appear in the same locations in the genomes of humans and chimpanzees (and other primates). And given their supposed nonfunctional nature, shared SINE DNA in humans and chimpanzees seemingly reflects their common evolutionary ancestry. In fact, evolutionary biologists have gone one step further, using SINE Alu sequences to construct primate evolutionary trees.

SINE DNA Is Functional

Even though many people view shared junk DNA sequences as the most compelling evidence for biological evolution, the growing recognition that virtually every class of junk DNA has function undermines this conclusion. For if these shared sequences are functional, then one could argue that they reflect the Creator’s common design, not shared evolutionary ancestry and common descent. As a case in point, in recent years, molecular biologists have learned that SINE DNA plays a vital role in gene regulation through a variety of distinct mechanisms.2

Staufen-Mediated mRNA Decay

One way SINE sequences regulate gene expression is through a pathway called Staufen-mediated messenger RNA (mRNA) decay (SMD). Critical to an organism’s development, SMD plays a key role in cellular differentiation. SMD is characterized by a complex mechanism centered on the destruction of mRNA. When this degradation takes place, it down-regulates gene expression. The SMD pathway involves binding of a protein called Staufen-1 to one end of the mRNA molecule (dubbed the 3′ untranslated region). Staufen-1 binds specifically to double-stranded structures in the 3′ untranslated region. This double-stranded structure forms when Alu sequences in the 3′ untranslated region bind to long noncoding RNA molecules containing Alu sequences. This binding event triggers a cascade of additional events that leads to the breakdown of the messenger RNA.

Common Descent or Common Design?

As an old-earth creationist, I see the functional role played by noncoding DNA sequences as a reflection of God’s handiwork, defending the case for design from a significant evolutionary challenge. To state it differently: these findings mean that it is just as reasonable to conclude that the shared SINE sequences in the genomes of humans and the great apes reflect common design, not a shared evolutionary ancestry.

In fact, I would maintain that it is more reasonable to think that functional SINE DNA sequences reflect common design, rather than common descent, given the pervasive role these sequence elements play in gene regulation. Because Alu sequences are only found in primates, they must have originated fairly recently (when viewed from an evolutionary framework). Yet, they play an integral and far-reaching role in gene regulation.

And herein lies the first impossible thing evolutionary biologists must believe: Somehow Alu sequences arose and then quickly assumed a central place in gene regulation. According to Carl Schmid, a researcher who uncovered some of the first evidence for the functional role played by SINE DNA, “Since Alus have appeared only recently within the primate lineage, this proposal [of SINE DNA function] provokes the challenging question of how Alu RNA could have possibly assumed a significant role in cell physiology.”3

How Does Junk DNA Acquire Function?

Still, those who subscribe to the evolutionary framework do not view functional junk DNA as incompatible with common descent. They argue that junk DNA acquired function through a process called neofunctionalization. In the case of SMD mediated by Alu sequences in the human genome, evolutionary biologists maintain that occasionally these DNA elements become incorporated into the 3′ untranslated regions of genes and into regions of the human genome that produce long noncoding RNAs. Occasionally, by chance, some of the Alu sequences in long noncoding RNAs will have the capacity to pair with the 3′ untranslated region of specific mRNAs. When this happens, these Alu sequences trigger SMD-mediated gene regulation. And if this regulation confers any advantage, it will persist, so that over time some Alu sequences eventually assume a role in SMD-mediated gene regulation.

Is Neofunctionalization the Best Explanation for SINE Function?

At some level, this evolutionary scenario seems reasonable (the concerns expressed by Carl Schmid notwithstanding). Still, neofunctionalization events should be relatively rare. And because of the chance nature of neofunctionalization, it would be rational to think that the central role SINE sequences play in SMD gene regulation would be unique to humans.

Why would I make this claim? Based on the nature of evolutionary mechanisms, chance should govern biological and biochemical evolution at its most fundamental level (assuming it occurs). Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which also consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.

The concept of historical contingency embodies this idea and is the theme of Stephen Jay Gould’s book Wonderful Life. According to Gould,

“No finale can be specified at the start, none would ever occur a second time in the same way, because any pathway proceeds through thousands of improbable stages. Alter any early event, ever so slightly, and without apparent importance at the time, and evolution cascades into a radically different channel.”4

To help clarify the concept of historical contingency, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and let the tape run again, the results would be completely different each time. The very essence of the evolutionary process renders evolutionary outcomes nonrepeatable.
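Gould’s metaphor can be illustrated with a minimal, purely hypothetical simulation: the same starting “genome,” mutated at random positions under neutral drift, follows a different path on every replay. Nothing here models a real organism; the genome length, generation count, and seed values are arbitrary choices for illustration only:

```python
import random

# Toy "replaying life's tape" sketch: two replays begin from an identical
# genome, but each accumulates its own random (neutral) mutations, so the
# runs wander down different paths. All parameters are invented.
def replay(seed: int, length: int = 20, generations: int = 100) -> str:
    rng = random.Random(seed)        # each replay gets its own chance events
    genome = [0] * length            # identical starting point for every run
    for _ in range(generations):
        genome[rng.randrange(length)] ^= 1   # one random bit-flip per generation
    return "".join(map(str, genome))

run1, run2 = replay(seed=1), replay(seed=2)
print(run1)
print(run2)
```

With different seeds standing in for different rolls of history’s dice, the two runs almost never end in the same state, even though they started identically.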

Gould’s perspective on the evolutionary process has been affirmed by other researchers, who have produced data indicating that if evolutionary processes explain the origin of biochemical systems, they must be historically contingent.

Did SMD Evolve Twice?

Yet, collaborators from UC Santa Cruz and the University of Rochester discovered that SINE-mediated SMD appears to have evolved independently—two separate times—in humans and mice, the second impossible thing evolutionary biologists have to believe.

Though rodents don’t possess Alu sequences, they do possess several other SINE elements, labeled B1, B2, B4, and ID. Remarkably, these B/ID sequences occur in regions of the mouse genome corresponding to the regions of the human genome harboring Alu sequences. And, when the B/ID sequences are associated with the 3′ untranslated regions of genes, the mRNA produced from these genes is down-regulated, suggesting that these genes are under the influence of the SMD-mediated pathway—an unexpected result.

But, this finding is not nearly as astonishing as something else the research team discovered. By comparing about 1,200 human-mouse gene pairs in myoblasts, the researchers discovered 24 genes in this cell type that were identical in the human and mouse genomes. These identical genes performed the same physiological role and possessed SINE elements (Alu and B/ID, respectively) and were regulated by the SMD mechanism.

Evolutionary biologists believe that Alu and B/ID SINE sequences emerged independently in the rodent and human lineages. If so, this means that evolutionary processes must have independently produced the identical outcome—SINE-mediated SMD gene regulation—separately for each of the 24 identical genes. As the researchers point out, chance alone cannot explain their findings. Yet, evolutionary mechanisms are historically contingent and should not yield identical outcomes. This impossible scenario causes me to question whether neofunctionalization is the explanation for functional SINE DNA.
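The chance argument above can be made concrete with a toy calculation. If each gene pair independently had some probability p of acquiring the same SMD regulation by chance in both lineages, the joint probability across 24 genes is p raised to the 24th power. The p values below are hypothetical placeholders chosen purely for illustration, not measured quantities:

```python
# Toy illustration of why 24 independent chance coincidences are implausible.
# p is a hypothetical per-gene probability, not a measured value.
def joint_probability(p: float, n_genes: int = 24) -> float:
    """Probability that n independent gene pairs each show the same outcome by chance."""
    return p ** n_genes

for p in (0.5, 0.1, 0.01):
    print(f"per-gene p = {p}: joint probability = {joint_probability(p):.3g}")
```

Even with a generous per-gene probability of 0.5, the joint probability across 24 genes is on the order of one in ten million, which is why the researchers conclude that chance alone cannot account for the pattern.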

And yet, this is not the first time that life scientists have discovered the independent emergence of identical function for junk DNA sequences.

So, which is the better explanation for functional junk DNA sequences: neofunctionalization through historically contingent evolutionary processes or the work of a Mind?

As Alice emphatically complained, “One can’t believe impossible things.”

Resources

Endnotes

  1. Bronwyn A. Lucas et al., “Evidence for Convergent Evolution of SINE-Directed Staufen-Mediated mRNA Decay,” Proceedings of the National Academy of Sciences, USA Early Edition (January 2018): doi:10.1073/pnas.1715531115.
  2. Reyad A. Elbarbary et al., “Retrotransposons as Regulators of Gene Function,” Science 351 (February 12, 2016): doi:10.1126/science.aac7247.
  3. Carl W. Schmid, “Does SINE Evolution Preclude Alu Function?” Nucleic Acids Research 26 (October 1998): 4541–50, doi:10.1093/nar/26.20.4541.
  4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1989), 51.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/03/14/believing-impossible-things-convergent-origins-of-functional-junk-dna-sequences

Did Neanderthals Self-Medicate?

neanderthalselfmedicate

BY FAZALE RANA – JANUARY 24, 2018

Calculus is hard.

But it is worth studying because it is such a powerful tool.

Oh, wait!

You don’t think I’m referring to math, do you? I’m not. I’m referring to dental calculus, the hardened plaque that forms on teeth.

Recently, researchers from Australia and the UK studied the calculus scraped from the teeth of Neanderthals and compared it to the calculus taken from the teeth of modern humans and chimpanzees (captured from the wild) with the hope of understanding the diets and behaviors of these hominins.1 The researchers concluded that this study supports the view that Neanderthals had advanced cognitive abilities like that of modern humans. If so, this conclusion creates questions and concerns about the credibility of the biblical view of humanity; specifically, the idea that we stand apart from all other creatures on Earth because we are uniquely made in God’s image. Ironically, careful assessment of this work actually supports the notion of human exceptionalism, and with it provides scientific evidence that human beings are made in God’s image.

This study built upon previous work in which researchers discovered that they could extract trace amounts of different types of compounds from the dental calculus of Neanderthals and garner insights about their dietary practices.2 Scientists have learned that when plaque forms, it traps food particles and microbes from the mouth and respiratory tract. In the most recent study, Australian and British scientists extracted ancient DNA from the plaque samples isolated from the teeth of Neanderthals recovered in Spy Cave (Belgium) and El Sidrón (Spain). These specimens date to between 42,000 and 50,000 years ago. By sequencing the ancient DNA in the samples and comparing the sequences to known sequences in databases, the research team determined the types of food Neanderthals ate and the microorganisms that infected their mouths.

Neanderthal Diets

Based on the ancient DNA recovered from the calcified dental plaque, the researchers concluded that the Neanderthals unearthed at Spy Cave and El Sidrón consumed different diets. The calculus samples taken from the Spy Cave specimens harbored DNA from the woolly rhinoceros and European wild sheep, as well as mushroom DNA. On the other hand, the ancient DNA taken from the dental plaque of the El Sidrón specimens came from pine nuts, moss, mushrooms, and tree bark. These results suggest that the Spy Neanderthals consumed a diet composed largely of meat, while the El Sidrón hominins ate a vegetarian diet.

The microbial DNA recovered from the dental calculus confirmed the dietary differences between the two Neanderthal groups. In Neanderthals, as in modern humans, the composition of the microbiota in the mouth is dictated in part by diet, varying in predictable ways between meat-based and plant-based diets.

Did Neanderthals Consume Medicinal Plants?

One of the Neanderthals from El Sidrón—a teenage boy—had a large dental abscess. The researchers recovered DNA from his dental calculus showing that he also suffered from a gut parasite that causes diarrhea. But, instead of suffering without any relief, it looks as if this sick individual was consuming plants with medicinal properties. Researchers recovered DNA from poplar plants, which produce salicylic acid, a painkiller, and DNA from a fungus that produces penicillin, an antibiotic. Interestingly, the other El Sidrón specimen showed no evidence of ancient DNA from poplar or the fungus, Penicillium.

If Neanderthals were able to self-medicate, the researchers conclude, these hominins must have had advanced cognitive abilities similar to those of modern humans. One of the members of the research team, Alan Cooper, muses, “Apparently, Neandertals possessed a good knowledge of medicinal plants and their various anti-inflammatory and pain-relieving properties, and seem to be self-medicating. The use of antibiotics would be very surprising, as this is more than 40,000 years before we developed penicillin. Certainly, our findings contrast markedly with the rather simplistic view of our ancient relatives in popular imagination.”3

Though intriguing, one could argue that the research team’s conclusion about Neanderthals self-medicating is a bit of an overreach, particularly the idea that Neanderthals were consuming a specific fungus as a source of antibiotics. Given that the El Sidrón Neanderthals ate a vegetarian diet, it isn’t surprising that they occasionally consumed fungus, because Penicillium grows naturally on plant material when it becomes moldy. Moreover, the conclusion rests on a single Neanderthal specimen; it could simply be a coincidence that the sick Neanderthal teenager consumed the fungus. In fact, it would have been virtually impossible for Neanderthals to intentionally eat penicillin-producing fungi because, according to anthropologist Hannah O’Regan from the University of Nottingham, “It’s difficult to tell these specific moulds apart unless you have a hand lens.”4

Zoopharmacognosy

But even if Neanderthals were self-medicating, this behavior is not as remarkable as it might initially seem. Many animals self-medicate. In fact, this phenomenon is called zoopharmacognosy.5 For example, chimpanzees will consume the leaves of certain plants to make themselves vomit, in order to rid themselves of intestinal parasites. So, instead of viewing the consumption of poplar plants and fungus by Neanderthals as evidence for advanced behavior, perhaps it would be better to regard it as one more instance of zoopharmacognosy.

Medicine and Human Exceptionalism

The difference between the development and use of medicine by modern humans and the use of medicinal plants by Neanderthals (assuming they did employ plants for medicinal purposes) is staggering. Neanderthals existed on Earth longer than modern humans have. And at the point of their extinction, the best these creatures could do was to incorporate into their diets a few plants that produced compounds that were natural painkillers or antibiotics. On the other hand, though on Earth for only around 150,000 years, modern humans have created an industrial-pharmaceutical complex that routinely develops and dispenses medicines based on a detailed understanding of chemistry and biology.

As paleoanthropologist Ian Tattersall and linguist Noam Chomsky (along with other collaborators) put it:

“Our species was born in a technologically archaic context . . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”6

And biomedical advance has yielded an unimaginably large number of drugs that improve the quality of our lives. In other words, comparing the trajectories of Neanderthal and modern human technologies highlights profound differences between us—differences that affirm modern humans really are exceptional, echoing the biblical view that human beings are truly made in God’s image.

Endnotes

  1. Laura S. Weyrich et al., “Neanderthal Behavior, Diet, and Disease Inferred from Ancient DNA in Dental Calculus,” Nature 544 (April 20, 2017): 357–61, doi:10.1038/nature21674.
  2. Karen Hardy et al., “Neanderthal Medics? Evidence for Food, Cooking, and Medicinal Plants Entrapped in Dental Calculus,” Naturwissenschaften 99 (August 2012): 617–26, doi:10.1007/s00114-012-0942-0.
  3. “Dental Plaque DNA Shows Neandertals Used ‘Aspirin,’” Phys.org, updated March 8, 2017, https://phys.org/print408199421.html.
  4. Colin Barras, “Neanderthals May Have Medicated with Penicillin and Painkillers,” New Scientist, March 8, 2017, https://www.newscientist.com/article/2123669-neanderthals-may-have-medicated-with-penicillin-and-painkillers/.
  5. Shrivastava Rounak et al., “Zoopharmacognosy (Animal Self Medication): A Review,” International Journal of Research in Ayurveda and Pharmacy 2 (2011): 1510–12.
  6. Johan J. Bolhuis et al., “How Could Language Have Evolved?,” PLoS Biology 12 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.
Reprinted with permission by the author
Original Article:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/01/24/did-neanderthals-self-medicate

DNA: Digitally Designed

BY FAZALE RANA – MAY 24, 2017

We live in uncertain and frightening times.

There seems to be no end to the serious risks confronting humanity. In fact, in 2014, USA Today published an article identifying the 10 greatest threats facing our world:

  • Fiscal crises in key economies
  • Structurally high unemployment/underemployment
  • Water crises
  • Severe income disparity
  • Failure of climate change mitigation and adaptation
  • Greater incidence of extreme weather events (e.g., floods, storms, fires)
  • Global governance failure
  • Food crises
  • Failure of a major financial mechanism/institution
  • Profound political and social instability

If this list isn’t bad enough, another crisis looms in our near future: a data storage crisis.

Thanks to the huge volume of scientific data generated by disciplines such as genomics and the explosion of YouTube videos, 44 trillion gigabytes of digital data currently exist in the world. To put this in context, each person in a worldwide population of 10 billion people would have to store over 6,000 CDs to house this data. Estimates are that if we keep generating data at this pace, we will run out of high-quality silicon needed to make data storage devices by 2040.1
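
The figures above are easy to sanity-check with a little arithmetic. The sketch below assumes a standard 700 MB CD (a figure not given in the article):

```python
# Back-of-the-envelope check of the data-storage figures in the text.
# Assumption (not from the article): one CD holds 700 MB = 0.7 GB.
total_data_gb = 44e12        # 44 trillion gigabytes of digital data
population = 10e9            # hypothetical world population of 10 billion
cd_capacity_gb = 0.7         # ~700 MB per CD

gb_per_person = total_data_gb / population
cds_per_person = gb_per_person / cd_capacity_gb

print(f"{gb_per_person:,.0f} GB per person")    # 4,400 GB
print(f"{cds_per_person:,.0f} CDs per person")  # ~6,286, i.e. "over 6,000"
```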

Compounding this problem are the limitations of current data storage technology. Because of degradative processes, hard disks have a lifetime of about 3 years and magnetic tapes about 10 years. These storage systems must be kept in controlled environments—which makes data storage an expensive proposition.

Digital Data Storage in DNA

Because of DNA’s role as a biochemical data storage system (in which the data is digitized), researchers are exploring the use of this biomolecule as the next-generation digital data storage technology. As proof of principle, a team of researchers from Harvard University headed up by George Church coded the entire contents of a 54,000-word book (including 11 JPEG images) into DNA fragments.

The researchers chose to encode the book’s contents into small DNA fragments—devoting roughly two-thirds of the sequence for data and the remainder for information that can be used to locate the content within the entire data block. In this sense, their approach is analogous to using page numbers to order and locate the contents of a book.
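
The page-number idea can be sketched in a few lines. In the Church team’s scheme each base encoded one bit (A or C for 0, G or T for 1); the parameters below (16-bit addresses, 96-bit payloads, and a fixed 0 → A, 1 → G mapping) are illustrative simplifications, not the paper’s actual values:

```python
# Toy sketch of DNA data storage with per-fragment address blocks.
# Illustrative parameters only; the Church et al. scheme differs in detail.
ADDRESS_BITS = 16            # the "page number" prefix on each fragment
PAYLOAD_BITS = 96            # data bits carried per fragment

def bits_to_bases(bits):
    # Simplified one-bit-per-base mapping: '0' -> A, '1' -> G.
    return ''.join('A' if b == '0' else 'G' for b in bits)

def encode(data: bytes):
    bitstream = ''.join(f'{byte:08b}' for byte in data)
    fragments = []
    for index, start in enumerate(range(0, len(bitstream), PAYLOAD_BITS)):
        payload = bitstream[start:start + PAYLOAD_BITS]
        address = f'{index:0{ADDRESS_BITS}b}'   # fragment's position in the book
        fragments.append(bits_to_bases(address + payload))
    return fragments

fragments = encode(b"Hello, DNA storage!")
print(len(fragments), "fragments")
print(fragments[0][:ADDRESS_BITS], "<- address block of fragment 0")
```

As with page numbers, the fragments can be sequenced in any order and reassembled by sorting on the address prefix.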

Since then, researchers have encoded computer programs, operating systems, and even movies into DNA.

Because DNA is so highly optimized to store information, it is an ideal data storage medium. (For details regarding the optimal nature of DNA’s structure, see The Cell’s Design.) Researchers think that DNA has the capacity to store data near the theoretical maximum. About one-half pound of DNA can store all the data that exists in the world today.
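
The half-pound claim is plausible on paper. The rough check below uses assumed constants not given in the article (an average nucleotide mass of ~330 g/mol within a DNA strand, and the theoretical maximum of 2 bits per base for a four-letter alphabet):

```python
# Rough plausibility check: how much data could half a pound of DNA hold?
# Assumed constants (not from the article): ~330 g/mol per nucleotide,
# 2 bits per base (log2 of a 4-letter alphabet).
AVOGADRO = 6.022e23
NT_MOLAR_MASS = 330.0        # g/mol, approximate
BITS_PER_BASE = 2

grams = 0.5 * 453.6          # half a pound, in grams
bases = grams / NT_MOLAR_MASS * AVOGADRO
zettabytes = bases * BITS_PER_BASE / 8 / 1e21

# Comfortably above the ~44 ZB of worldwide data cited in the text.
print(f"~{zettabytes:.0f} ZB theoretical capacity")
```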

Limitations of DNA Data Storage

Despite its promise, there are some significant technical hurdles to overcome before DNA can serve as a data storage system. Cost and time are two limitations. It is expensive and time-consuming to produce and read the synthetic DNA used to store information. As technology advances, the cost and time requirements associated with DNA data storage will likely improve. Still, because of these limitations, most technologists think that the best use of DNA will be for archival storage of data.

Another concern is the long-term stability of DNA. Over time, DNA degrades. Researchers believe that redundancy may be one way around this problem. By encoding the same data in multiple pieces of DNA, data lost because of DNA degradation can be recovered.
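
A minimal sketch of why replication works, with illustrative numbers (the per-copy loss probability is assumed, and real DNA archives use proper error-correcting codes rather than plain replication):

```python
# Replication-based redundancy against DNA degradation: data survives as
# long as at least one copy of each fragment remains readable.
# Illustrative assumption: each copy is independently lost with prob. 0.3.
loss_rate = 0.3
copies = 5                   # number of redundant copies per fragment
fragments = 10               # fragments in the archive

p_fragment_lost = loss_rate ** copies           # all copies of one fragment gone
p_archive_ok = (1 - p_fragment_lost) ** fragments

print(f"P(one fragment unrecoverable) = {p_fragment_lost:.4%}")
print(f"P(entire archive recoverable) = {p_archive_ok:.2%}")
```

Even with 30 percent of molecules assumed lost, five-fold replication leaves each fragment unrecoverable only about 0.24 percent of the time.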

The processes of making and reading synthetic DNA also suffer from error. Current technology has an error rate of 1 in 100. Recently, researchers from Columbia University achieved a breakthrough that allows them to elegantly address loss of information from DNA due to degradation or miscoding that takes place when DNA is made and read. These researchers successfully applied techniques used for “noisy communication” operations to DNA data storage.2
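
The simplest way to see why coding tames a 1-in-100 error rate is a three-way majority vote, which fails only when two or three of the reads are wrong. (The Columbia team’s DNA Fountain approach uses far more efficient fountain codes; this is just the most elementary illustration of the principle.)

```python
# Why redundancy tames a 1-in-100 per-base error rate: with three independent
# reads and a majority vote, an error survives only if >= 2 reads are wrong.
p = 0.01                                     # per-base error rate from the text
p_majority_wrong = 3 * p**2 * (1 - p) + p**3  # exactly-2-wrong + all-3-wrong

print(f"raw error rate:      {p:.4f}")
print(f"after majority vote: {p_majority_wrong:.6f}")   # ~3e-4, a ~33x reduction
```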

With these types of advances, the prospects of using DNA to store digital data may soon become a reality. And unlike other data storage technologies, DNA will never become obsolete.

Biomimetics and Bioinspiration

The use of biological designs to drive technological advance is one of the most exciting areas in engineering. This area of study—called biomimetics and bioinspiration—presents us with new reasons to believe that life stems from a Creator. As the names imply, biomimetics involves direct copying (or mimicry) of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise. DNA’s capacity to inspire engineering efforts to develop new data storage technology highlights this biomolecule’s elegant, sophisticated design and, at the same time, raises a troubling question for the evolutionary paradigm.

The Converse Watchmaker Argument

Biomimetics and bioinspiration pave the way for a new type of design argument I dub the converse Watchmaker argument: If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

At some level, I find the converse Watchmaker argument more compelling than the classical Watchmaker analogy. It is remarkable to me that biological designs can inspire engineering efforts.

It is even more astounding to think that biomimetics and bioinspiration programs could be so successful if biological systems were truly generated by an unguided, historically contingent process, as evolutionary biologists claim.

Biomimetics and Bioinspiration: The Challenge to the Evolutionary Paradigm

To appreciate why work in biomimetics and bioinspiration challenges the evolutionary paradigm, we need to discuss the nature of the evolutionary process.

Evolutionary biologists view biological systems as the outworking of unguided, historically contingent processes that co-opt preexisting designs to cobble together new systems. Once these designs are in place, evolutionary mechanisms can optimize them, but still, these systems remain—in essence—kludges.

Most evolutionary biologists are quick to emphasize that evolutionary processes and pathways seldom yield perfect designs. Instead, most biological designs are flawed in some way. To be certain, most biologists would concede that natural selection has produced biological designs that are well-adapted, but they would maintain that biological systems are not well-designed. Why? Because evolutionary processes do not produce biological systems from scratch, but from preexisting systems that are co-opted through a process dubbed exaptation and then modified by natural selection to produce new designs. Once formed, these new structures can be fine-tuned and optimized through natural selection to produce well-adapted designs, but not well-designed systems.

If biological systems are, in effect, kludged together, why would engineers and technologists turn to them for inspiration? If produced by evolutionary processes—even if these processes operated over the course of millions of years—biological systems should make unreliable muses for technology development. Does it make sense for engineers to rely on biological systems—historically contingent and exapted in their origin—to solve problems and inspire new technologies, much less build an entire subdiscipline of engineering around mimicking biological designs?

Using biological designs to guide engineering efforts seems to be fundamentally incompatible with an evolutionary explanation for life’s origin and history. On the other hand, biomimetics and bioinspiration naturally flow out of an intelligent design/creation model approach to biology. Using biological systems to inspire engineering makes better sense if the designs in nature arise from a Mind.

Resources

The Cell’s Design: How Chemistry Reveals the Creator’s Artistry by Fazale Rana (book)
“iDNA: The Next Generation of iPods?” by Fazale Rana (article)
“Harvard Scientists Write the Book on Intelligent Design—in DNA” by Fazale Rana (article)
“Digital and Analog Information Housed in DNA” by Fazale Rana (article)
“Engineer’s Muse: The Design of Biochemical Systems” by Fazale Rana (article)

Endnotes

  1. Andy Extance, “How DNA Could Store All the World’s Data,” Nature 537 (September 2, 2016): 22–24, doi:10.1038/537022a.
  2. Yaniv Erlich and Dina Zielinski, “DNA Fountain Enables a Robust and Efficient Storage Architecture,” Science 355 (March 3, 2017): 950–54, doi:10.1126/science.aaj2038.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2017/05/24/dna-digitally-designed

Protein-Binding Sites ENCODEd into the Design of the Human Genome

BY FAZALE RANA – MARCH 15, 2017

At last year’s AMP Conference, I delivered a talk titled: “How the Greatest Challenges Can Become the Greatest Opportunities for the Gospel.” I illustrated this point by describing three scientific concepts related to the origin of humanity that 20 years ago stood as insurmountable challenges to the traditional biblical view of human origins. But, thanks to scientific advances, these concepts have been replaced with new insights that turn these challenges into evidence for the Christian faith.

The Challenge of Junk DNA

One of the challenges I discussed centered on junk DNA—nonfunctional DNA littering the genomes of most organisms. Presumably, these nonfunctional DNA sequences arose through random biochemical, chemical, and physical events, with functional DNA in some instances converted into useless junk. In fact, when the scientific community declared the human genome sequence complete in 2003, estimates indicated that around 95 percent of the human genome consists of junk sequences.

In the roughly 20 years I have been involved in apologetics, skeptics (and believers) have regarded the high percentage of junk DNA in genomes as a significant problem for intelligent design and creation models. Why would an all-powerful, all-knowing, and all-good God create organisms with so much junk in their genomes? The shared junk DNA sequences found among the genomes of humans and the great apes compound this challenge. For many, these shared sequences serve as compelling evidence for common ancestry among humans and the other primates. Why would a Creator introduce nonfunctional DNA sequences into corresponding locations in the genomes of humans and the great apes?

But what if the junk DNA sequences are functional? It would undermine the case for common descent, because these shared sequences could reasonably be interpreted as evidence for common design.

The ENCODE Project

In recent years, numerous discoveries indicate that virtually every class of junk DNA displays function, providing mounting support for a common-design interpretation of junk DNA. (For a summary, see the expanded and updated edition of Who Was Adam?) Perhaps the most significant advance toward that end came in the fall of 2012 with the publication of phase II results of the ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome.

To the surprise of many, the ENCODE project reported that around 80 percent of the human genome displays function, with the expectation that this percentage should increase with phase III of the project. Many of the newly recognized functional elements play a central role in regulating gene expression. Others serve critical roles in establishing and maintaining the three-dimensional hierarchical structure of chromosomes.

If valid, the ENCODE results would force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences, the human genome (and the genome of other organisms) would have to be viewed as replete with functional elements, pointing to a system far more complex and sophisticated than ever imagined—befitting a Creator’s handiwork. (See the articles listed in the Resources section below for more details.)

ENCODE Skeptics

Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE project, citing a number of technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints, see here, here, and here.)

These technical complaints continue today, igniting the junk DNA war between evolutionary biologists and genomics scientists. Though the concerns expressed by evolutionary biologists are technical, some scientists have suggested the real motivation behind the criticisms of the ENCODE project is philosophical—even theological—in nature. For example, molecular biologists John Mattick and Marcel Dinger write:

There may also be another factor motivating the Graur et al. and related articles (van Bakel et al. 2010; Scanlan 2012), which is suggested by the sources and selection of quotations used at the beginning of the article, as well as in the use of the phrase ‘evolution-free gospel’ in its title (Graur et al. 2013): the argument of a largely non-functional genome is invoked by some evolutionary theorists in the debate against the proposition of intelligent design of life on earth, particularly with respect to the origin of humanity. In essence, the argument posits that the presence of non-protein-coding or so-called ‘junk DNA’ that comprises >90% of the human genome is evidence for the accumulation of evolutionary debris by blind Darwinian evolution, and argues against intelligent design, as an intelligent designer would presumably not fill the human genetic instruction set with meaningless information (Dawkins 1986; Collins 2006). This argument is threatened in the face of growing functional indices of noncoding regions of the genome, with the latter reciprocally used in support of the notion of intelligent design and to challenge the conception that natural selection accounts for the existence of complex organisms (Behe 2003; Wells 2011).1

Is DNA-Binding Activity Functional?

Even though there may be nonscientific reasons for the complaints leveled against the ENCODE project, it is important to address the technical concerns. One relates to how biochemical function was determined by the ENCODE project. Critics argued that ENCODE scientists conflated biochemical activity with function. As a case in point, three of the assays employed by the ENCODE consortium measure binding of proteins to the genome, with the assumption that binding of transcription factors and histones to DNA indicates a functional role for the target sequences. On the other hand, ENCODE skeptics argue that most of the measured protein binding to the genome was random.

Most DNA-binding proteins recognize and bind to short stretches of DNA (4 to 10 base pairs in length) composed of highly specific nucleotide sequences. But given the massive size of the human genome (3.2 billion genetic letters), nonfunctional binding sites will occur randomly throughout the genome, for statistical reasons alone. To illustrate: many DNA-binding proteins target roughly between 1 and 100 sites in the genome. Yet, the genome potentially harbors between 1 million and 1 billion binding sites. The hundreds of sites that are slight variants of the target sequence will have a strong affinity for the DNA-binding proteins, with thousands more having weaker affinities. Hence, the ENCODE critics maintain that much of the protein binding measured by the ENCODE team was random and nonfunctional. To put it differently, much of the protein binding measured in the ENCODE assays is merely a consequence of random biochemical activity.
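
The statistics behind this criticism are easy to reproduce. For a recognition motif of length k, a random genome of 3.2 billion bases is expected to contain roughly 3.2 × 10⁹ / 4ᵏ exact matches; the sketch below ignores strand and base-composition effects, and counting near-match variants inflates the totals further:

```python
# Expected number of exact chance matches for a length-k motif in a random
# genome: genome_size / 4**k (four equally likely bases assumed; strand and
# base-composition biases ignored).
GENOME_SIZE = 3.2e9          # 3.2 billion base pairs, as in the text

for k in range(4, 11):       # the 4-10 bp recognition-site range from the text
    expected = GENOME_SIZE / 4**k
    print(f"{k:2d}-bp motif: ~{expected:,.0f} chance occurrences")
```

Even a highly specific 10-base-pair motif is expected to appear by chance about 3,000 times, dwarfing the 1 to 100 functional targets of a typical regulatory protein.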

Nonfunctional Protein Binding to DNA Is Rare

This challenge has some merit, but the criticism may not be decisive. In an earlier response to this challenge, I acknowledged that some protein binding in genomes will be random and nonfunctional. Yet, based on my intuition as a biochemist, I argued that random binding of proteins throughout the genome would be disruptive to DNA metabolism and, from an evolutionary perspective, would have been eliminated by natural selection. (From an intelligent design/creation model vantage point, it is reasonable to expect that a Creator would design genomes with minimal nonfunctional protein-binding sites.)

As it happens, new work by researchers from NYU affirms my assessment.2 These investigators demonstrated that protein binding in genomes is not random but highly specific. As a corollary, the human genome (and genomes of other organisms) contains very few nonfunctional protein-binding sites.

To reach this conclusion, these researchers looked for nonfunctional protein-binding sites in the genomes of 75 organisms, representative of nearly every major biological group, and assessed the strength of their interaction with DNA-binding proteins. The researchers began their project by measuring the binding affinity of a sample of regulatory proteins (from humans, mice, fruit flies, and yeast) for every possible 8-base-pair sequence combination (32,896 in all). Based on the binding affinity data, the NYU scientists discovered that nonfunctional binding sites with a high affinity for DNA-binding proteins occur infrequently in genomes. To use scientific jargon to describe their findings: the researchers discovered a negative correlation between protein-binding affinity and the frequency of nonfunctional binding sites in genomes. Using statistical methods, they demonstrated that this pattern holds for all 75 genomes in their study.
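
The figure of 32,896 corresponds to the number of distinct 8-base-pair double-stranded sites when a sequence and its reverse complement are counted once, since a double-helical site can be read from either strand: (4⁸ + 4⁴) / 2 = 32,896, the 4⁴ term covering the 256 palindromes that pair with themselves. This can be verified by enumeration:

```python
from itertools import product

# Verify the 32,896 figure: distinct 8-bp double-stranded sites, counting a
# sequence and its reverse complement as one. Palindromic sites (4**4 = 256
# of them) are their own reverse complement, so the count is (4**8 + 4**4)/2.
COMP = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def revcomp(seq):
    return ''.join(COMP[b] for b in reversed(seq))

canonical = {min(s, revcomp(s))                 # one representative per pair
             for s in (''.join(p) for p in product('ATGC', repeat=8))}
print(len(canonical))   # 32896
```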

They attempted to account for the frequency of nonfunctional binding sequences in genomes by modeling the evolutionary process, assuming neutral evolution in which random mutations accrue over time free from the influence of natural selection. They discovered that this modeling failed to account for the sequence distributions they observed in the genomes, concluding that natural selection must have weeded out high-affinity nonfunctional binding sites.

These results make sense. The NYU scientists point out that protein mis-binding would be catastrophic for two reasons: (1) it would interfere with several key processes, such as transcription, gene regulation, replication, and DNA repair (the interference effect); and (2) it would create inefficiencies by rendering DNA-binding proteins unavailable to bind at functional sites (the titration effect). Though these problems may be insignificant for a given DNA-binding protein, the cumulative effects would be devastating because there are 100 to 1,000 DNA-binding proteins per genome with 10 to 10,000 copies of each protein.

The Human Genome Is ENCODEd for Design

Though the NYU researchers conducted their work from an evolutionary perspective, their results also make sense from an intelligent design/creation model vantage point. If genome sequences are truly the product of a Creator’s handiwork, then it is reasonable to think that the sequences comprising genomes would be optimized—in this case, to minimize protein mis-binding. Though evolutionary biologists maintain that natural selection shaped genomes for optimal protein binding, as a creationist I contend that genomes were shaped by an intelligent Agent—a Creator.

These results also have important implications for how we interpret the results of the ENCODE project. Given that the NYU researchers discovered that high-affinity nonfunctional binding sites rarely occur in genomes (and provided a rationale for why that is the case), it is difficult for critics of the ENCODE project to argue that the transcription factor and histone binding assays were measuring mostly random binding. Considering this recent work, it makes the most sense to interpret the protein-binding activity in the human genome as functionally significant, bolstering the original conclusion of the ENCODE project—namely, that most of the human genome consists of functional DNA sequence elements. It goes without saying: if the original conclusion of the ENCODE project stands, the best evidence for the evolutionary paradigm unravels.

Our understanding of genomes is in its infancy. Forced by their commitment to the evolutionary paradigm, many biologists see genomes as the cobbled-together product of an unguided evolutionary history. But as this recent study attests, the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to believe that genomes are the handiwork of our Creator.

Endnotes

  1. John S. Mattick and Marcel E. Dinger, “The Extent of Functionality in the Human Genome,” The HUGO Journal 7 (July 2013): doi:10.1186/1877-6566-7-2.
  2. Long Qian and Edo Kussell, “Genome-Wide Motif Statistics Are Shaped by DNA Binding Proteins over Evolutionary Time Scales,” Physical Review X 6 (October–December 2016): id. 041009, doi:10.1103/PhysRevX.6.041009.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2017/03/15/protein-binding-sites-encoded-into-the-design-of-the-human-genome

DNA: Designed for Flexibility

BY FAZALE RANA – AUGUST 17, 2016

Over the years I’ve learned that flexibility is key to a happy and successful life. If you are too rigid, it can create problems for you and others and rob you of joy.

Recently, a team of collaborators from Duke University and several universities in the US discovered that DNA displays unexpected structural flexibility. As it turns out, this property appears to be key to life.1 In contrast, the researchers showed that RNA (DNA’s biochemical cousin) is extremely rigid, highlighting another one of DNA’s unique structural properties that make it ideal as the cell’s information storage system.

To appreciate DNA’s uniquely optimal properties, a review of this important biomolecule’s structure is in order.

DNA

DNA consists of two chain-like molecules (polynucleotides) that twist around each other to form the DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. DNA is built from the nucleotides adenosine, guanosine, cytidine, and thymidine, famously abbreviated A, G, C, and T, respectively.

In turn, the nucleotide molecules that make up the strands of DNA are complex molecules, each consisting of a phosphate moiety and a nucleobase (adenine, guanine, cytosine, or thymine) joined to a 5-carbon sugar (deoxyribose). (In RNA, the five-carbon sugar ribose replaces deoxyribose.)

Image 1: Nucleotide Structure

The backbone of the DNA strand is formed when the cell’s machinery repeatedly links the phosphate group of one nucleotide to the deoxyribose unit of another nucleotide. The nucleobases extend as side chains from the backbone of the DNA molecule and serve as interaction points (like ladder rungs) when the two DNA strands align and twist to form the double helix.

Image 2: The DNA Backbone

When the two DNA strands align, the adenine (A) side chains of one strand always pair with thymine (T) side chains from the other strand. Likewise, the guanine (G) side chains from one DNA strand always pair with cytosine (C) side chains from the other strand.

When the side chains pair, they form cross bridges between the two DNA strands. The length of the A–T and G–C cross bridges is nearly identical. Adenine and guanine are both composed of two rings, while thymine (uracil in RNA) and cytosine are composed of one ring. Each cross bridge therefore consists of three rings.

When A pairs with T, two hydrogen bonds mediate the interaction between these two nucleobases. Three hydrogen bonds accommodate the interaction between G and C. The specificity of the hydrogen bonding interactions accounts for the A-T and G-C base-pairing rules.
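
The pairing rules just described can be captured in a few lines of code. This sketch (purely illustrative; the sequence is made up) computes the complementary strand and tallies the hydrogen bonds holding the two strands together:

```python
# Watson-Crick pairing rules as a lookup table: A<->T (2 hydrogen bonds),
# G<->C (3 hydrogen bonds). The complementary strand runs antiparallel to
# the original, hence the reversal.
PAIR = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
H_BONDS = {'A': 2, 'T': 2, 'G': 3, 'C': 3}   # bonds contributed by each pair

def complementary_strand(seq):
    return ''.join(PAIR[base] for base in reversed(seq))

strand = "ATGCGT"                             # made-up example sequence
print(complementary_strand(strand))           # ACGCAT
print(sum(H_BONDS[b] for b in strand), "hydrogen bonds across the duplex")
```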

Image 3: Watson-Crick Base Pairs

Watson-Crick and Hoogsteen Base Pairing

In DNA (and in RNA double helixes), the base pairing interactions occur at precise locations between the A and T nucleobases and the G and C nucleobases, respectively. Biochemists refer to these exacting interactions as Watson-Crick base pairing. However, in 1959—six years after Francis Crick and James Watson published their structure for DNA—a biochemist named Karst Hoogsteen discovered another way—albeit rare—that the A and T nucleobases and the G and C nucleobases pair, called Hoogsteen base pairing.

Hoogsteen base pairing results when the nucleobase attached to the sugar rotates by 180°. Because of the dynamics of the DNA molecule, this nucleobase rotation occurs occasionally, converting a Watson-Crick base pair into a Hoogsteen base pair. However, the same dynamics will eventually revert the Hoogsteen base pair to a Watson-Crick pairing. Hoogsteen base pairs aren’t preferred because they cause a distortion in the DNA double helix. For a “naked” piece of DNA in a test tube, at any point in time, about 1 percent of the base pairs are of the Hoogsteen variety.


Image 4: Watson-Crick and Hoogsteen Base Pairs
Image Credit: Wikimedia Commons

While rare in naked DNA, biochemists have recently discovered that the Hoogsteen configuration occurs frequently when: 1) proteins bind to DNA; 2) DNA is methylated; and 3) DNA is damaged. Biochemists now think that Hoogsteen base pairing is important to maintain the stability of the DNA double helix, ensuring the integrity of the information stored in the DNA molecule.

According to Hashim Al-Hashimi, “There is an amazing complexity built into these simple beautiful structures, whole new layers or dimensions that we have been blinded to because we didn’t have the tools to see them, until now.”2

It looks like the capacity to form Hoogsteen base pairs is a unique property of DNA. Al-Hashimi and his team failed to detect any evidence for Hoogsteen base pairs in double helixes made up of two strands of RNA. When they chemically attached a methyl group to the nucleobases of RNA to block the formation of Watson-Crick base pairs and force Hoogsteen base pairing, they discovered that the RNA double helix fell apart. Unlike the DNA double helix—which is flexible—the RNA double helix is rigid and cannot tolerate a distortion to its structure. Instead, the RNA strands can only dissociate.

It turns out that the flexibility of DNA and the rigidity of RNA are explained by a single structural difference between their sugars: the deoxyribose of DNA lacks a hydroxyl group at the 2’ position, while the ribose of RNA carries one. The presence or absence of the 2’ hydroxyl group makes all the difference. The deoxyribose ring can adopt alternate conformations (called puckering) more freely than the ribose ring can, and this difference in ring flexibility accounts for the differing flexibility of the two double helixes.


Image 5: Difference between Deoxyribose and Ribose

This difference makes DNA ideally suited as an information storage molecule. Because of its ability to form Hoogsteen base pairs, the DNA double helix remains intact, even when the molecule becomes chemically damaged. It also makes it possible for the cell’s machinery to control the expression of the genetic information harbored in DNA through protein binding and DNA methylation.

It is intriguing that DNA’s closest biochemical analogue lacks this property.

It appears that DNA has been optimized for data storage and retrieval. This property is critical to DNA’s capacity to store genetic information. DNA harbors the information the cell’s machinery needs to make proteins, and it houses the genetic information passed on to subsequent generations. If DNA weren’t stable, the information it harbors would become distorted or lost, with disastrous consequences for the cell’s day-to-day operations and for the long-term survival of life.

As I discuss in The Cell’s Design, flexibility is not the only feature of DNA that has been optimized. Other chemical and biochemical features appear to be carefully chosen to ensure its stability; again, a necessary property for a molecule that harbors the genetic information.

Optimized biochemical systems constitute evidence for biochemical intelligent design. Optimization of an engineered system doesn’t just happen—it results from engineers carefully developing their designs. It requires forethought, planning, and careful attention to detail. In the same way, the optimized features of DNA logically point to the work of a Divine engineer.

Resources
“DNA Soaks Up Sun’s Rays” by Fazale Rana (Article)
The Cell’s Design by Fazale Rana (Book)
“The Cell’s Design: The Proper Arrangement of Elements” by Fazale Rana (Podcast)

Endnotes

  1. Huiqing Zhou et al., “m1A and m1G Disrupt A-RNA Structure through the Intrinsic Instability of Hoogsteen Base Pairs,” Nature Structural & Molecular Biology, published electronically August 1, 2016, doi:10.1038/nsmb.3270.
  2. Duke University, “DNA’s Dynamic Nature Makes It Well-Suited to Serve as the Blueprint of Life,” Science News (blog), ScienceDaily, August 1, 2016, www.sciencedaily.com/releases/2016/08/160801113823.htm.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2016/08/17/dna-designed-for-flexibility