Do Goosebumps Send a Chill Down the Spine of the Creation Model?

By Fazale Rana – September 2, 2020

I think few would be surprised to learn that J. K. Rowling’s Harry Potter titles are the best-selling children’s books of all time, but do you know which works take second place in that category? It’s the Goosebumps series by R. L. Stine.

From 1992 to 1997, Stine wrote and published 62 Goosebumps books. To date, these books have been printed in over 30 languages, have sold over 400 million copies worldwide (not counting Stine’s numerous spin-off books), and have been adapted for television and film. Each book in the Goosebumps lineup features different child characters who find themselves in scary situations that often involve encounters with the bizarre and supernatural.

The title of the series is apropos. Humans get goosebumps whenever we are afraid. We also get goosebumps when we are moved by something beautiful and awe-inspiring. And, of course, we get goosebumps when we are cold.

Goosebumps are caused by a process dubbed piloerection. When we feel cold, tiny smooth muscles (called the arrector pili) deep within our skin contract. Because these muscles are attached to hair follicles, this contraction causes our hairs to stand on end. Getting goosebumps is one of our quirks as human beings. Most biologists don’t think this phenomenon serves any useful purpose, making it that much more of an oddity. So, if goosebumps have no obvious utility, then why do we experience them at all?

Evolutionary Explanation for Goosebumps

Many life scientists view goosebumps as a vestige of our evolutionary history. So, while goosebumps serve no apparent function in modern humans, evolutionary biologists believe they did have utility for our evolutionary ancestors, who were covered with a lot of hair. Presumably, when our ancestors were cold, the contraction of the arrector pili muscles created pockets of air near the surface of the skin when the hairs stood on end, serving as insulation from the chill. And when our ancestors were frightened, contraction of the arrector pili muscles caused their hair to puff up, making them seem larger and more menacing.

These two behaviors are observed in other mammals and even in some bird species. For evolutionary biologists, this shared behavior attests to our evolutionary connection to animal life.

In other words, many life scientists see goosebumps as compelling evidence that human beings have an evolutionary origin because: (1) goosebumps serve no useful purpose in humans today and (2) the same physiological process that causes goosebumps in humans causes hair and fur of other animals to stand erect when they feel cold or threatened.

So, one theological question creationists need to address is this: Why would God create human beings to have a useless response to the cold or to being frightened? For those of us who hold to a creation model/design perspective, goosebumps in humans cause us a bit of a fright. But is there any reason to be scared?

What if goosebumps in humans serve a useful function? If they do, that function undermines the idea that goosebumps are a vestige of our evolutionary history and, at the same time, makes it reasonable to think that human beings are the handiwork of a Creator. Accordingly, all facets of our anatomy and physiology are intentionally designed for a purpose, including goosebumps. And this is precisely what a research team from Harvard University has discovered. These investigators identified an unexpected function performed by arrector pili muscles, beyond causing hairs to stand erect.1 This new insight suggests a reason why humans get goosebumps, making it reasonable to interpret this physiological feature of human beings within a creation model/design framework.

Multiple Roles of the Arrector Pili Muscle

To carry out its function, the arrector pili muscle forms an intimate association with nerves of the sympathetic nervous system. This component of the nervous system contributes to homeostasis, allowing the bodies of animals (including humans) to maintain constant and optimal conditions. As part of this activity, animals receive sensory input from their surroundings and respond to environmental changes. So, in this case, when a mammal experiences cold, the sympathetic nervous system transmits the sensation to the arrector pili muscles, causing them to contract and helping the animal stay warm.

Recently, the Harvard research team, working with mice, discovered that the arrector pili muscle also plays a structural role, with the individual nerve fibers of the sympathetic nervous system wrapping around the muscle. This architecture positions the nerves next to a bed of stem cells near hair follicles, providing the sympathetic nervous system with a direct connection to the hair follicle stem cells.

Normally, the hair follicle stem cells are in a quiescent (inactive) state. Under conditions of prolonged cold, however, the sympathetic nerves release the neurotransmitter norepinephrine. This release stimulates the stem cells to replicate and develop into new hair. In other words, the interplay between the arrector pili and the sympathetic nerves provides both a short-term (contraction of the arrector pili) and a long-term (hair growth) response to cold.

The researchers discovered that when they removed the arrector pili muscles, the sympathetic nerves retracted, losing their connection to the hair follicle stem cells. In the retracted state, the sympathetic nerves could not stimulate the activity of the hair follicle stem cells. In short, the arrector pili plays an integral role in coupling stem cell regeneration, and hence hair growth, to changes in the environment by functioning as scaffolding.

Goosebumps and the Case for Creation

In mammals (which have a coat of fur or bodies heavily covered with hair), the dual role played by the arrector pili muscles in mounting both rapid and long-term responses to the cold highlights the elegance, sophistication, and ingenuity of biological systems—features befitting the work of a Creator. But does this insight have any bearing on why humans experience goosebumps if they are created by God?

Consider: if the arrector pili muscles served no true function, evolutionary theory predicts that they should atrophy, perhaps even disappear. Yet the work of the Harvard scientists makes it plain that if the arrector pili muscles became more diminutive or were lost, the overall function of the sympathetic nervous system in human skin could very well be compromised, because the scaffolding for the sympathetic nerves would be lost.

The recognition that the arrector pili muscles prevent the sympathetic nerves from retracting away from hair follicles in mice suggests that this muscle functions in the same way in human skin. In mice and other mammals, the positioning of the sympathetic nerve is critical to stimulate the growth of new hair in response to ongoing exposure to cold. The same should be true in humans. Still, it is not clear at this juncture if hair growth in humans under these conditions would have any real benefit. On the other hand, there is no evidence to the contrary. We don’t know.

What we do know is that without the arrector pili muscles the sympathetic nerves would lose their positioning anchor in human skin. It seems perfectly reasonable to think that the proper positioning of the sympathetic nerve in the skin, in general, plays an overarching role in communicating changes in the environment to our bodies, helping us to maintain a homeostatic state.

In other words, the fact that the muscle serves multiple purposes helps explain why these intact, fully functional muscles are found in human skin, with goosebumps produced as a by-product of the arrector pili’s association with hair follicles and sympathetic nerves. And who knows, maybe these muscles have added functions yet to be discovered.

There may be other reasons why we get goosebumps. They help us to pay close attention to the happenings in our environment. And, of course, this heightened awareness provides a survival advantage. On top of that use, goosebumps also provide social cues to others, signaling to them that we are cold or frightened, with the hope that these cues would encourage them to step in and help us—again, a survival advantage.

The cold truth is this: gaining a better understanding about the anatomy and physiology of the skin makes goosebumps less frightening for those of us who embrace a creation model approach to humanity’s origin.

Resources

Endnotes
  1. Yuli Schwartz et al., “Cell Types Promoting Goosebumps Form a Niche to Regulate Hair Follicle Stem Cells,” Cell 182, no. 3 (August 6, 2020): 578–93, doi:10.1016/j.cell.2020.06.031.

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

Answering Scientific Questions on Neanderthal-Human Interbreeding


By Fazale Rana – August 5, 2020

So don’t ask me no questions
And I won’t tell you no lies
And don’t ask me about my business
And I won’t tell you good-bye

“Don’t Ask Me No Questions”

—Ronnie Van Zant and Gary Robert Rossington

One of my favorite rock bands of all time is Lynyrd Skynyrd. (That’s right…Skynyrd, baby!) I know their musical catalog forward and backward. I don’t know if it is a good thing or not, but I am conversant with the history of most of the songs recorded by the band’s original lineup.

“Don’t Ask Me No Questions” was the first single released from their second studio album, Second Helping. The album also included “Sweet Home Alabama.” When juxtaposed with the success of “Sweet Home Alabama,” it’s ironic that “Don’t Ask Me No Questions” never even broke the charts.

An admonition to family and friends not to pry into their personal affairs, this song describes the exhaustion the band members felt after spending months on tour. All they want is peace and respite when they return home. Instead, they find themselves continuously confronted by unrelenting and inappropriate questions about the rock ‘n’ roll lifestyle.

As a Christian apologist, people ask me questions all the time. Yet, rarely do I find the questions annoying and inappropriate. I am happy to do my best to answer most of the questions asked of me—even the snarky ones posed by internet trolls. As of late, one topic that comes up often is interbreeding between modern humans and Neanderthals:

  • Is it true that modern humans and Neanderthals interbred?
  • If interbreeding took place, what does that mean for the credibility of the biblical account of human origins?
  • Did the children resulting from these interbreeding events have a soul? Did they bear the image of God?

Recently, an international team of investigators looking to catalog Neanderthal genetic contributions surveyed a large sampling of Icelander genomes. This work generated new and unexpected insights about interbreeding between hominins and modern humans.1

No lie.

It came as little surprise to me when the headlines announcing this discovery triggered another round of questions about interbreeding between modern humans and Neanderthals. I will address the first two questions above in this article and the third one in a future post.

RTB’s Human Origins Model in 2005

To tell the truth, for a number of years I resisted the idea that modern humans interbred with Neanderthals and Denisovans. When Hugh Ross and I published the first edition of our book, Who Was Adam? (2005), there was no real evidence that modern humans and Neanderthals interbred. We took this absence of evidence as support for the RTB human origins model.

According to our model, Neanderthals have no evolutionary connection to modern humans. The RTB model posits that the hominins, such as Neanderthals and Denisovans, were creatures made by God that existed for a time and went extinct. These creatures had intelligence and emotional capacity (like most mammals), which enabled them to establish a culture. However, unlike modern humans, these creatures lacked the image of God. Accordingly, they were cognitively inferior to modern humans. In this sense, the RTB human origins model regards the hominins in the same vein as the great apes: intelligent, fascinating creatures in their own right that share some biological and behavioral attributes with modern humans (reflecting common design). Yet, no one would confuse a great ape and a modern human because of key biological distinctions and, more importantly, because of profound cognitive and behavioral differences.

When we initially proposed our model, we predicted that the biological differences between modern humans and Neanderthals would have made interbreeding unlikely. And if they did interbreed, then these differences would have prohibited the production of viable, fertile offspring.

Did Humans and Neanderthals Interbreed?

In 2010, researchers produced a rough draft sequence of the Neanderthal genome and compared it to modern human genomes. They discovered a closer statistical association of the Neanderthal genome with those from European and Asian people groups than with genomes from African people groups.2 The researchers maintained that this effect could be readily explained if a limited number of interbreeding events took place between humans and Neanderthals in the eastern portion of the Middle East, roughly 45,000 to 80,000 years ago, just as humans began to migrate around the world. This would explain why non-African populations display what appears to be a 1 to 4 percent genetic contribution from Neanderthals while African people groups have no contribution whatsoever.

At that time, I wasn’t entirely convinced that modern humans and Neanderthals interbred because there were other ways to explain the statistical association. Additionally, studies of Neanderthal genomes indicate that these hominins lived in small, insular groups. I argued that the low population densities of Neanderthals would have greatly reduced the likelihood of encounters with modern humans migrating in small populations. It seemed to me that it was unlikely that interbreeding occurred.

Other studies demonstrated that Neanderthals most likely were extinct before modern humans made their way into Europe. Once again, I argued that the earlier extinction of Neanderthals makes it impossible for them to have interbred with humans in Europe. Extinction also raises questions about whether the two species interbred at all.

The Case for Interbreeding

Despite these concerns, in the last few years I have become largely convinced that modern humans and Neanderthals interbred. Studies such as the one cataloging the Neanderthal contribution to the genomes of Icelanders leave me little choice in the matter.

Thanks to the deCODE project, the genome sequences for nearly half the Icelandic population have been determined. An international team of collaborators made use of this data set, analyzing over 27,500 Icelander genomes for Neanderthal contribution using a newly developed algorithm. They detected over 14.4 million fragments of Neanderthal DNA in their data set. Of these, 112,709 were unique sequences that collectively constituted 48 percent of the Neanderthal genome.

This finding has important implications. Even though individual Icelanders have about a 1 to 4 percent Neanderthal contribution to their genomes, the precise contribution differs from person to person. And when these individual contributions are combined it yields Neanderthal DNA sequences that cover nearly 50 percent of the Neanderthal genome. This finding aligns with previous studies which demonstrate that, collectively, across the human population Neanderthal sequences are distributed throughout 20 percent of the human genome. And 40 percent of the Neanderthal genome can be reconstructed from Neanderthal sequences found in a sampling of Eurasian genomes.3
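
To build intuition for this arithmetic, consider a toy simulation (my own illustration, not the method used in the Icelander study): if each person carries a different, random ~2 percent of the Neanderthal genome, and Neanderthal ancestry survives in only about half of the genome, then pooling fragments across thousands of genomes quickly tiles nearly all of that surviving half.

```python
import random

# Toy model of pooled Neanderthal ancestry. All parameters are illustrative
# assumptions, not values taken from the Skov et al. study.
rng = random.Random(0)
GENOME = 10_000          # Neanderthal genome, discretized into bins
SURVIVING = 5_000        # bins where archaic DNA persists in modern humans
                         # (assumes selection purged it from the rest)
FRAGS, FRAG_LEN = 50, 4  # fragments per person -> ~2% of the genome each

covered = set()          # union of all fragments seen so far
for person in range(1, 27_501):
    for start in rng.sample(range(SURVIVING - FRAG_LEN), FRAGS):
        covered.update(range(start, start + FRAG_LEN))
    if person in (1, 100, 27_500):
        pct = 100 * len(covered) / GENOME
        print(f"{person:>6} genomes -> union covers {pct:4.1f}% of the genome")
```

A single genome contributes about 2 percent, but the pooled union saturates near the fraction of the genome where Neanderthal ancestry survives at all, which is why sampling ever more genomes recovers roughly half of the Neanderthal genome and no more.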

Adding to this evidence for interbreeding are studies that characterized ancient DNA recovered from several modern human fossil remains unearthed in Europe, dating between about 35,000 and 45,000 years in age. The genomes of these ancient modern humans contain much longer stretches of Neanderthal DNA than what’s found in contemporary modern humans, which is exactly what would be expected if modern humans interbred with these hominins.4

As I see it, interbreeding is the only way to make sense of these results.

Are Humans and Neanderthals the Same Species?

Because the biological species concept (BSC) defines a species as an interbreeding population, some people argue that modern humans and Neanderthals must belong to the same species. This perspective is common among young-earth creationists who see Neanderthals as a subset of humanity.

This argument fails to take into account the limitations of the BSC, one being the phenomenon of hybridization. Mammals that belong to separate species have been known to interbreed and produce viable—even fertile—offspring called hybrids. For example, lions and tigers in captivity have interbred successfully, yet the two are still classified as separate species. I would argue that the concept of hybridization applies to the interbreeding that took place between modern humans and Neanderthals.

Even though it appears that modern humans and Neanderthals interbred, other lines of evidence indicate that these two hominins were distinct species. Significant anatomical differences exist between the two. The most profound difference is skull anatomy and, consequently, brain structure.

Figure: Anatomical Differences between Human and Neanderthal Skulls. Image credit: Wikipedia.

Additionally, Neanderthals possessed a hyper-polar body design, consisting of a stout, barrel-shaped body with shortened limbs to help with heat retention. Neanderthals and modern humans display significant developmental differences as well. Neanderthals, for example, spent minimal time in adolescence compared to modern humans. The two hominins also exhibit significant genetic differences (including differences in gene expression patterns), most notably for genes that play a role in cognition and cognitive development. Most critically, modern humans and Neanderthals display significant behavioral differences that stem from substantial differences in cognitive capacity.

Along these lines, it is important to note that researchers believe that the resulting human-Neanderthal hybrids lacked fecundity.5 As geneticist David Reich notes, “Modern humans and Neanderthals were at the edge of biological compatibility.”6

In other words, even though modern humans and Neanderthals interbred, they displayed biological differences extensive enough to justify classifying the two as distinct species, just as the RTB model predicts. The extensive behavioral differences also validate the view that modern humans are exceptional and unique in ways that align with the image of God—again, in accord with RTB model predictions.

Is the RTB Human Origins Model Invalid?

It is safe to say that most paleoanthropologists view modern humans and Neanderthals as distinct species (or at least distinct populations that were isolated from one another for 500,000 to 600,000 years). From an evolutionary perspective, modern humans and Neanderthals share a common evolutionary ancestor, perhaps Homo heidelbergensis, and arose as separate species as the two lineages diverged from this ancestral population. In the evolutionary framework, the capacity of Neanderthals and modern humans to interbreed reflects their shared evolutionary heritage. For this reason, some critics have pointed to the interbreeding between modern humans and other hominins as a devastating blow to the RTB model and as clear-cut evidence for human evolution.

In light of this concern, it is important to recognize that the RTB human origins model readily accommodates the evidence for interbreeding between modern humans and Neanderthals. Instead of reflecting a shared evolutionary ancestry, within a creation model framework, the capacity for interbreeding is a consequence of the biological designs shared by modern humans and Neanderthals.

The RTB model’s stance that shared biological features represent common design taps into a rich tradition within the history of biology. Prior to Charles Darwin, life scientists, such as the preeminent biologist Sir Richard Owen, routinely viewed homologous systems as manifestations of archetypal designs that resided in the Mind of the First Cause. The RTB human origins model co-opts Owen’s ideas and applies them to the biological features modern humans share with other creatures, including the hominins.

Without question, the discovery that modern humans interbred with other hominins stands as a failed prediction of the initial version of the RTB human origins model. However, this discovery can be accommodated by revising the model, as is often done in science. Of course, this leads to the next set of questions.

  • Is there biblical warrant to think that modern humans interbred with other creatures?
  • Did the modern human-Neanderthal hybrid have a soul? Did it bear God’s image?

I will take on these questions in the next article. And I am telling you no lie.

Resources

Biological Differences between Humans and Neanderthals

Archetype Biology

Endnotes
  1. Laurits Skov et al., “The Nature of Neanderthal Introgression Revealed by 27,566 Icelandic Genomes,” Nature (published online April 22, 2020), doi:10.1038/s41586-020-2225-9.
  2. Fazale Rana with Hugh Ross, Who Was Adam? A Creation Model Approach to the Origin of Humanity, 10-Year Update (Covina, CA: RTB Press, 2015), 301–12.
  3. Sriram Sankararaman et al., “The Genomic Landscape of Neanderthal Ancestry in Present-Day Humans,” Nature 507 (2014): 354–57, doi:10.1038/nature12961; Benjamin Vernot and Joshua M. Akey, “Resurrecting Surviving Neandertal Lineages from Modern Human Genomes,” Science 343 (2014): 1017–21, doi:10.1126/science.1245938.
  4. Rana with Ross, Who Was Adam?, 304–5.
  5. Sankararaman et al., “Genomic Landscape,” 354–57, Vernot and Akey, “Resurrecting Surviving Neandertal Lineages,” 1017–21.
  6. Ewen Callaway, “Modern Human Genomes Reveal Our Inner Neanderthal,” Nature News (January 29, 2014), https://www.nature.com/news/modern-human-genomes-reveal-our-inner-neanderthal-1.14615.

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

Is Cruelty in Nature Really Evil?

By Fazale Rana – July 8, 2020

How many are your works, Lord!
In wisdom you made them all;
the earth is full of your creatures.

Psalm 104:24

I don’t remember who pointed out thedarksideofnature Instagram account to me, but their description was intriguing enough that I had to check it out. After perusing a few posts, I ended up adding myself to the list of followers.

I can’t say I enjoy the photos and videos posted by thedarksideofnature—which depict nature “red in tooth and claw”—but I do find them mesmerizing. Their website states that it is “all about showing the world a different side of nature. A side that may not be the prettiest, but it is the realist.”

The posts from thedarksideofnature are a stark reminder of the dichotomy in the animal kingdom, simultaneously beautiful and brutal, highlighting the majesty and the power—and danger—of the world of nature. For many people the beauty, majesty, and power of nature evince a Creator’s handiwork. For others, nature’s brutality serves as justifiable cause for rejecting God’s existence. Why would an all-powerful, all-knowing, and all-good God create a world in which animal pain and suffering appears to be gratuitous?

Perhaps nothing exemplifies the seemingly senseless cruelty of nature more so than the widespread occurrence of filial (relating to offspring) cannibalism and filial abandonment among animals. Many animals eat their young or consume eggs after laying them. Others abandon their young, condemning them to certain death.

What an unimaginably cruel feature of nature. Why would God create animals that eat their offspring or abandon their young?

Is Cruelty in Nature Really Evil?

What if there are good reasons for God to permit pain and suffering to exist in the animal kingdom? Scientific research seems to offer several reasons.

For example, some studies reveal purpose for animal pain and suffering. Others demonstrate that animal death promotes biodiversity and ecosystem stability. There are even studies that provide reasons for filial cannibalism and offspring abandonment (see the Resources section, below). Most recently, a team of investigators from Europe and Australia provided additional reasons why animals would consume their own offspring.1

These researchers didn’t set out to study filial cannibalism. Instead, they sought to understand why the comb jelly, native to the Atlantic coast of North America, has been so successful at colonizing new habitats. For example, this invasive species has made its way into the Baltic Sea, which has longer periods of low food availability compared to the comb jelly’s native habitat. The comb jelly has adapted to the food shortage by engaging in behavior that, at first blush, is counterintuitive and seems to be counterproductive. As it enters into the late season, when the prey field begins to empty, the comb jelly makes a massive investment in reproduction, even though the larval offspring have virtually no chance of survival. In fact, after three weeks the comb jelly progeny stop growing, then shrink in size, and die.


Figure: Comb Jelly. Credit: Shutterstock

As it turns out, the late-season wave of reproduction explains the comb jelly’s success as an invasive species. The researchers learned that the bloom of offspring serves as a food source for the comb jelly adults, replacing the disappearing prey. In other words, as the comb jelly’s available prey begins to decline in number, the jellies reproduce on a large scale, with the juveniles serving as a nutrient store that lasts for an additional three weeks beyond the collapse of the prey fields. While this short duration may not seem like much, it affords the comb jelly an opportunity to outcompete other marine life during this window of time, ecologically making the difference between the flourishing and the decline of the species.

Instead of viewing the filial cannibalism among the comb jelly in sinister terms, the investigators found it to be an ingenious design. They argue that the comb jelly population appears to be working together as a single organism. According to research team member Thomas Larsen:

“In some ways, the whole jelly population is acting like a single organism, with the younger groups supporting the adults through times of nutrient stress. Overall, it enables jellies to persist through extreme events and low food periods, colonizing further than climate conditions and other conditions would usually allow.”2

In effect, the filial cannibalism observed in the comb jelly is no different from the autophagy and apoptosis observed in multicellular organisms, in which individual cells are consumed for the overall benefit of the organism.

Filial Cannibalism and the Logical Problem of Evil

These insights into the adaptive value of filial cannibalism for the comb jelly help address the logical problem of natural evil. As part of the problem of natural evil, questions arise about God’s existence and goodness because of brutality in the animal kingdom. Many skeptics view the problem of evil as an insurmountable challenge for Christian theism:

  1. God is all-powerful, all-knowing, and all-good.
  2. Therefore, we would expect good designs in nature.
  3. Yet, nature is brutal, with animals experiencing an undue amount of pain and suffering.
  4. Therefore, God either does not exist or is not good.

Skeptics argue that this final observation about nature is logically incompatible with God’s existence, or, minimally, with God’s goodness. In other words, because of natural evil either God doesn’t exist or He isn’t good. Either way, Christian theism is undermined. But what if there is a good reason—as research shows—for pain and suffering to exist in nature? We could modify the syllogism this way:

  1. God is all-powerful, all-knowing, and all-good.
  2. Therefore, we would expect good designs in nature.
  3. There are good reasons for God to allow pain and suffering in the animal realm.
  4. Animal death, pain, and suffering are part of nature.
  5. Therefore, animal death, pain, and suffering are logically compatible with God’s existence and goodness.

In other words, if there are good reasons for animal pain and suffering, then God’s existence and goodness are logically coherent with animal pain and suffering. Also, who is to say that pain and suffering in the animal kingdom is excessive? How could anyone possibly know?
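
To make the logical point explicit, here is a minimal formalization of the reply (my own sketch, following the standard treatment of the logical problem of evil; the letters G, S, and R are my labels, not the author's):

```latex
% G: an all-powerful, all-knowing, all-good God exists
% S: animal pain and suffering exist
% R: there is a morally sufficient reason for God to permit S

\text{Skeptic's claim:}\quad \lnot\,\Diamond\,(G \land S)

\text{Reply:}\quad \Diamond\,(G \land R \land S),
\quad (G \land R \land S) \rightarrow (G \land S),
\quad \therefore\;\; \Diamond\,(G \land S)
```

Because the possibility of a morally sufficient reason R entails the possibility of G and S holding together, the mere possibility of such a reason defeats the incompatibility claim; the skeptic would have to show that no such reason could exist.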

The God of Skepticism or the God of the Bible?

When considering the problem of natural evil, it is important to distinguish between the God of naturalistic philosophy and the God of the Bible. Though some philosophers may see pain and suffering in the animal realm as a reason to question God’s existence and goodness, the authors of Scripture had a different perspective. They saw animal death as part of the good creation and a reason to praise and celebrate God as Creator and Provider.3 The insights from science about the importance of animal death to ecosystems and the adaptive value of pain and suffering provide the rationale for calling these features of nature “good.”

All creatures look to you
to give them their food at the proper time.
When you give it to them,
they gather it up;
when you open your hand,
they are satisfied with good things.

When you hide your face,
they are terrified;
when you take away their breath,
they die and return to the dust.
When you send your Spirit,
they are created,
and you renew the face of the ground.

Psalm 104:27–30

Resources

Animal Death and the Problem of Evil

The Argument from Beauty

Endnotes
  1. Jamileh Javidpour et al., “Cannibalism Makes Invasive Comb Jelly, Mnemiopsis leidyi, Resilient to Unfavorable Conditions,” Communications Biology 3 (2020): 212, doi:10.1038/s42003-020-0940-2.

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

Meteorite Protein Discovery: Does It Validate Chemical Evolution?


By Fazale Rana – June 10, 2020

“I’ll toss my coins in the fountain,
Look for clovers in grassy lawns
Search for shooting stars in the night
Cross my fingers and dream on.”

—Tracy Chapman

Like most little kids, I was regaled with tales about genies and wizards who used their magical powers to grant people the desires of their heart. And for a time, I was obsessed with finding some way to make my own wishes become a reality, too. I blew on dandelions, hunted for four-leaf clovers, tried to catch fairy insects, and looked for shooting stars in the night sky. Unfortunately, nothing worked.

But, that didn’t mean that I gave up on my hopes and dreams. In time, I realized that sometimes my imagination outpaced reality.

I still have hopes and dreams today. Hopefully, they are more realistic than the ones I held to in my youth. I even have hopes and dreams about what I might accomplish as a scientist. All scientists do. It’s part of what drives us. Scientists like to solve problems and extend the frontiers of knowledge. And, they hope that they will make discoveries that do that very thing, even if their hopes sometimes outpace reality.

Recently, a team of biochemists turned to a meteorite—a small piece of a shooting star—with the hope that their dream of finding meaningful insights into the evolutionary origin-of-life question would be realized. Using state-of-the-art analytical methods, the Harvard University researchers uncovered the first-ever evidence for proteins in meteorites.1 Their work is exemplary—science at its best. These biochemists view this discovery as offering an important clue to the chemical evolutionary origin of life. Yet, a careful analysis of their claims leads to the nagging doubt that origin-of-life researchers really aren’t any closer to understanding the origin of life and realizing their dream.

Meteorites and the Origin of Life

Origin-of-life researchers have long turned to meteorites for insight into the chemical evolutionary processes they believe spawned life on Earth. It makes sense. Meteorites represent a sampling of the materials that formed during the time our solar system came together and, therefore, provide a window into the physical and chemical processes that shaped the earliest stages of our solar system’s history and would have played a potential role in the origin of life.

One group of meteorites that origin-of-life researchers find especially valuable toward this end is the carbonaceous chondrites. Some classes of carbonaceous chondrites contain relatively high levels of organic compounds that formed from materials that existed in our early solar system. Many of these meteorites have undergone chemical and physical alterations since the time of their formation. Because of this metamorphosis, these meteorites offer clues about the types of prebiotic chemical processes that could reasonably have transpired on early Earth. However, they don’t give a clear picture of what the chemical and physical environment of the early solar system was like.

Fortunately, researchers have discovered a unique type of carbonaceous chondrite: the CV3 class. These meteorites have escaped metamorphosis, undergoing virtually no physical or chemical alterations since they formed. For this reason, these meteorites prove to be exceptionally valuable because they provide a pristine, unadulterated view of the nascent solar system.

The Discovery of Proteins in Meteorites

Origin-of-life investigators have catalogued a large inventory of organic compounds from carbonaceous chondrites, including some of the building blocks of life, such as amino acids, the constituents of proteins. Even though amino acids have been recovered from meteorites, there have been no reports of amino acid polymers (protein-like materials) in meteorites—at least until the Harvard team began their work.

Figure: Reaction of Amino Acids to Form Proteins. Credit: Shutterstock

The team’s pursuit of proteins in meteorites started in 2014 when they carried out a theoretical study that indicated to them that amino acids could polymerize to form protein-like materials in the gas nebulae that condense to form solar systems.2 In an attempt to provide experimental support for this claim, the research team analyzed two CV3 class carbonaceous chondrites: the Allende and Acfer 086 meteorites.

Instead of extracting these meteorites for 24 hours with water at 100°C (which is the usual approach taken by origin-of-life investigators), the research team adopted a different strategy. They reasoned that the protein-like materials that would form from amino acids in gaseous nebulae would be hydrophobic. (Hydrophobic materials are water-repellent materials that are insoluble in aqueous systems.) These types of materials wouldn’t be extracted by hot water. Alternatively, these hydrophobic protein-like substances would be susceptible to breaking down into their constituent amino acids (through a process called hydrolysis) under the standard extraction method. Either way, the protein-like materials would escape detection.

So, the researchers employed a Folch extraction at room temperature. This technique is designed to extract materials with a range of solubility properties while avoiding hydrolytic reactions. Using this approach, the Harvard researchers were able to detect evidence for amino acid polymers consisting of glycine and hydroxyglycine in extracts taken from the two meteorites.3

In their latest work, the research team performed a detailed structural characterization of the amino acid polymers from the Acfer 086 meteorite, thanks to access to a state-of-the-art mass spectrometer capable of analyzing low levels of materials in the meteorite extracts.

The Harvard scientists determined that a distribution of amino acid polymer species existed in the meteorite sample. The most prominent one was a duplex formed from two protein-like chains that were 16 amino acids in length, composed of glycine and hydroxyglycine residues. They also detected lithium ions associated with some of the hydroxyglycine subunits. Bound to both ends of the duplex was an unusual iron oxide moiety formed from two atoms of iron and three oxygen atoms. Lithium atoms were also associated with the iron oxide moiety.
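
As a rough illustration of the bookkeeping behind a structural assignment like this, a peptide’s mass is the sum of its residue masses, plus one water per chain for the termini, plus any bound adducts. The 8/8 residue split and the adduct counts below are placeholders of my own, not the published hemolithin structure:

```python
# Mass bookkeeping for a candidate glycine/hydroxyglycine duplex.
# Residue and adduct masses are standard; the composition is an
# illustrative assumption, not the McGeoch team's actual assignment.
MASS = {
    "Gly": 57.021,      # glycine residue, Da
    "HO-Gly": 73.016,   # hydroxyglycine residue (glycine + O), Da
    "H2O": 18.011,      # one water per chain for the termini
    "Fe2O3": 159.688,   # iron oxide moiety (2 Fe + 3 O)
    "Li": 6.941,        # lithium adduct
}

def chain_mass(n_gly: int, n_hogly: int) -> float:
    """Mass of one peptide chain: residues plus one water for the termini."""
    return n_gly * MASS["Gly"] + n_hogly * MASS["HO-Gly"] + MASS["H2O"]

# Two 16-residue chains (assumed 8 Gly + 8 HO-Gly each) plus assumed adducts:
duplex = 2 * chain_mass(8, 8) + 2 * MASS["Fe2O3"] + 4 * MASS["Li"]
print(f"candidate duplex mass: {duplex:.0f} Da")  # prints ~2464 Da
```

Comparing a tallied mass like this against the peaks the mass spectrometer actually records is how candidate compositions are accepted or ruled out.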

Researchers are confident that this protein-like material—which they dub hemolithin—is not due to terrestrial contamination for two reasons. First, hydroxyglycine is a non-protein amino acid. Second, the protein duplex is enriched in deuterium—a signature that indicates it stems from an extraterrestrial source. In fact, the deuterium enrichment is so extreme that the researchers think the material may have formed in the gas nebula before it condensed to form our solar system.

Origin-of-Life Implications

If these results stand, they represent an important scientific milestone—the first-ever protein-like material recovered from an extraterrestrial source. A dream come true for the Harvard scientists. Beyond this acclaim, origin-of-life researchers view this work as having important implications for the origin-of-life question.

For starters, this work affirms that chemical complexification can take place in prebiotic settings, providing support for chemical evolution. The Harvard scientists also speculate that the iron oxide complex at the ends of the amino acid polymer chains could serve as an energy source for prebiotic chemistry. This complex can absorb photons of light and, in turn, use that absorbed energy to drive chemical processes, such as cleaving water molecules.

More importantly, this work indicates that amino acids can form and polymerize in gaseous nebulae prior to the time that these structures collapse and condense into solar systems. In other words, this work suggests that prebiotic chemistry may have been well under way before Earth formed. If so, it means that prebiotic materials could have been endogenous to (produced within) the solar system, forming an inventory of building block materials that could have jump-started the chemical evolutionary process. Alternatively, the formation of prebiotic materials prior to solar system formation opens up the possibility that these critical compounds for the origin of life didn’t have to form on early Earth. Instead, prebiotic compounds could have been delivered to the early Earth by asteroids and comets—again, contributing to the early Earth’s cache of prebiotic substances.

Does the Protein-in-Meteorite Discovery Evince Chemical Evolution?

In many respects, the discovery of protein species in carbonaceous chondrites is not surprising. If amino acids are present in meteorites (or gaseous nebulae), it stands to reason that, under certain conditions, these materials will react to form amino acid polymers. But, even so, a protein-like material made up of glycine and hydroxyglycine residues has questionable biochemical utility, and this singular compound is a far cry from the minimal biochemical complexity needed for life. Chemical evolutionary processes must traverse a long road to move from the simplest amino acid building blocks (and the polymers formed from these compounds) to a minimal cell.

More importantly, it is questionable whether the amino acid polymers in carbonaceous chondrites (or in gaseous nebulae) made much of a contribution to the inventory of prebiotic materials on early Earth. Detection and characterization of the amino acid polymer in the Acfer 086 meteorite was only possible thanks to cutting-edge analytical instrumentation (the mass spectrometer) with the capability to detect and characterize low levels of materials. This requirement means that the proteins found in the Acfer 086 meteorite samples must exist at relatively low levels. Once delivered to the early Earth, these materials would have been further diluted to even lower levels as they were introduced into the environment. In other words, these compounds most likely would have melded into the chemical background of early Earth, making little, if any, contribution to chemical evolution. And once the amino acid polymers dissolved into the early Earth’s oceans, a significant proportion may well have undergone hydrolysis (decomposition) into their constituent amino acids.
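
A back-of-the-envelope dilution calculation shows the scale of the problem. Every number below is an assumption of mine, chosen generously and using the modern ocean as a stand-in for the early Earth’s:

```python
# Back-of-the-envelope dilution of meteoritic polymer into an ocean-sized
# reservoir. Every input is an illustrative assumption, chosen generously.
OCEAN_MASS_KG = 1.4e21       # approximate mass of Earth's oceans
DELIVERY_KG_PER_YR = 1.0e6   # assumed polymer delivery rate
YEARS = 1.0e8                # accumulation window, ignoring hydrolysis

conc = DELIVERY_KG_PER_YR * YEARS / OCEAN_MASS_KG  # kg polymer per kg water
print(f"~{conc:.0e} kg polymer per kg seawater")   # ~7e-08, tens of ppb
```

Even with a generous flux, no losses, and 100 million years of accumulation, the polymer tops out at parts-per-billion levels, and hydrolysis would push the real figure far lower.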

Earth’s geological record affirms my assessment of the research team’s claims. Geochemical evidence from the oldest rock formations on Earth, dating to around 3.8 billion years ago, makes it clear that neither endogenous organic materials nor prebiotic materials delivered to early Earth via comets and asteroids (including amino acids and protein-like materials) made any contribution to the prebiotic inventory of early Earth. If these materials did add to the prebiotic store, the carbonaceous deposits in the oldest rocks on Earth would display a carbon-13 and deuterium enrichment. But they don’t. Instead, these deposits display a carbon-13 and deuterium depletion, indicating that these carbonaceous materials result from biological activity, not extraterrestrial mechanisms.
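
The terms enrichment and depletion here refer to standard isotope delta notation, in which a sample’s isotope ratio is compared against a reference standard. A quick sketch (the sample ratios are illustrative stand-ins, not measured values from these rock formations):

```python
# Isotope ratios in per-mil delta notation relative to a standard.
# The sample ratios below are illustrative, not measured values.
def delta_permil(r_sample: float, r_standard: float) -> float:
    """Positive values are 'enriched', negative values 'depleted'."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

biogenic = delta_permil(0.0109, R_VPDB)    # assumed biology-like ratio
meteoritic = delta_permil(0.0116, R_VPDB)  # assumed infall-like ratio
print(f"biology-like carbon: {biogenic:+.0f} per mil (depleted)")
print(f"infall-like carbon:  {meteoritic:+.0f} per mil (enriched)")
```

Carbonaceous deposits with negative delta values for carbon-13 and deuterium point to biological fractionation, whereas substantial extraterrestrial infall would have left the positive, enriched signature the geological record lacks.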

So, even though the Harvard investigators accomplished an important milestone in origin-of-life research, the scientific community’s dream of finding a chemical evolutionary pathway to the origin of life remains unfulfilled.

Resources

Endnotes
  1. Malcolm W. McGeoch, Sergei Dikler, and Julie E. M. McGeoch, “Hemolithin: A Meteoritic Protein Containing Iron and Lithium,” (February 22, 2020), preprint, https://arxiv.org/abs/2002.11688.
  2. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide as an Early Topology,” PLoS ONE 9, no. 7 (July 21, 2014): e103036, doi:10.1371/journal.pone.0103036.
  3. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide in the Allende and Murchison Meteorites,” Meteoritics and Planetary Science 50 (November 5, 2015): 1971–83, doi:10.1111/maps.12558; Julie E. M. McGeoch and Malcolm W. McGeoch, “A 4641Da Polymer of Amino Acids in Acfer 086 and Allende Meteorites,” (July 28, 2017), preprint, https://arxiv.org/pdf/1707.09080.pdf.

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

The Argument from Beauty: Can Evolution Explain Our Aesthetic Sense?

 

By Fazale Rana – May 13, 2020

Lately, I find myself spending more time in front of the TV than I normally would, thanks to the COVID-19 pandemic. I’m not sure investing more time watching TV is a good thing, but it has allowed me to catch up on some of my favorite TV shows.

One program that is near the top of my favorites list these days is the Canadian sitcom Kim’s Convenience. Based on the 2011 play of the same name written by Ins Choi, this sitcom is about a family of Korean immigrants who live in Toronto, where they run a convenience store.

In the episode “Best Before” Appa, the traditional, opinionated, and blunt family patriarch, argues with his 20-year-old daughter about selling cans of ravioli that have expired. Janet, an art student frustrated by her parents’ commitment to Korean traditions and their tendency to parent her excessively, implores her father not to sell the expired product because it could make people sick. But Mr. Kim asserts that the ravioli isn’t bad, reasoning that the label states, “best before this date. After this date, not the best, but still pretty good.”

The assessment “not the best, but still pretty good” applies to more than just expired cans of foods. It also applies to explanations.

Often, competing explanations exist for a set of facts, an event in life’s history, or some phenomenon in nature. And, each explanation has merits and weaknesses. In these circumstances, it’s not uncommon to seek the best explanation among the contenders. Yet, as I have learned through experience, identifying the best explanation isn’t as easy as it might seem. For example, whether or not one considers an explanation to be the “best” or “not the best, but pretty good” depends on a number of factors, including one’s worldview.

I have found this difference in perspective to be true as I have interacted with skeptics about the argument for God from beauty.

Nature’s Beauty, God’s Existence, and the Biblical View of Humanity

Every place we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so compelling that we are often moved to our very core. For theists, nature’s beauty points to the reality of God’s existence.

As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”1 In other words, the best explanation for the beauty in the world around us is divine agency.


Image: Richard Swinburne. Credit: Wikipedia

Moreover, our response to the beauty in the world around us supports the biblical view of human nature. As human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws . . . would bring about aesthetic sensibilities in human beings.”2 But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty. In other words, Swinburne and others who share his worldview find God to be the best explanation for the beauty that surrounds us.

Humanity’s Aesthetic Sense

Our appreciation of beauty stands as one of humanity’s defining features. And it extends beyond our fascination with nature’s beauty. Because of our aesthetic sense, we strive to create beautiful things ourselves, such as paintings and figurative art. We adorn ourselves with body ornaments. We write and perform music. We sing songs. We dance. We create fiction and tell stories. Much of the art we produce involves depictions of imaginary worlds. And, after we create these imaginary worlds, we contemplate them. We become absorbed in them.

What is the best explanation for our aesthetic sense? Following Swinburne, I maintain that the biblical view of human nature accounts for our aesthetic sense. For, if we are made in God’s image, then we are creators ourselves. And the art, music, and stories we create arise as a manifestation of God’s image within us.

As a Christian theist, I am skeptical that the evolutionary paradigm can offer a compelling explanation for our aesthetic sense.

Though sympathetic to an evolutionary approach as a way to explain our sense of beauty, philosopher Mohan Matthen helps frame the problem confronting the evolutionary paradigm: “But why is this good, from an evolutionary point of view? Why is it valuable to be absorbed in contemplation, with all the attendant dangers of reduced vigilance? Wasting time and energy puts organisms at an evolutionary disadvantage. For large animals such as us, unnecessary activity is particularly expensive.”3

Our response to beauty includes the pleasure we experience when we immerse ourselves in nature’s beauty, a piece of art or music, or a riveting fictional account. But, the pleasure we derive from contemplating beauty isn’t associated with a drive that supports our survival, such as thirst, hunger, or sexual urges. When these desires are satisfied we experience pleasure, but that pleasure displays a time-dependent profile. For example, it is unpleasant when we are hungry, yet those unpleasant feelings turn into pleasure when we eat. In turn, the pleasure associated with assuaging our hunger is short-lived, soon replaced with the discomfort of our returning hunger.

In contrast, the pleasure associated with our aesthetic sense varies little over time. The sensory and intellectual pleasure we experience from contemplating things we deem beautiful continues without end.

On the surface it appears our aesthetic sense defies explanation within an evolutionary framework. Yet, many evolutionary biologists and evolutionary psychologists have offered possible evolutionary accounts for its origin.

Evolutionary Accounts for Humanity’s Aesthetic Sense

Evolutionary scenarios for the origin of human aesthetics adopt one of three approaches, viewing it as either (1) an adaptation, (2) an evolutionary by-product, or (3) the result of genetic drift.4

1. Theories that involve adaptive mechanisms claim our aesthetic sense emerged as an adaptation that assumed a central place in our survival and reproductive success as a species.

2. Theories that view our aesthetic sense as an evolutionary by-product maintain that it is the accidental, unintended consequence of other adaptations that evolved to serve other critical functions—functions with no bearing on our capacity to appreciate beauty.

3. Theories that appeal to genetic drift consider our aesthetic sense to be the accidental, chance outcome of evolutionary history that just happened upon a gene network that makes our appreciation of beauty possible.

For many people, these evolutionary accounts function as better explanations for our aesthetic sense than one relying on a Creator’s existence and role in creating a beautiful universe, including creatures who bear his image and are designed to enjoy his handiwork. Yet, for me, none of the evolutionary approaches seem compelling. The mere fact that a plethora of differing scenarios exist to explain the origin of our aesthetic sense indicates that none of these approaches has much going for it. If there truly was a compelling way to explain the evolutionary origin of our aesthetic sense, then I would expect that a singular theory would have emerged as the clear front-runner.

Genetic Drift and Evolutionary By-Product Models

In effect, evolutionary models that regard our aesthetic sense to be an unintended by-product or the consequence of genetic drift are largely untestable. And, of course, this concern prompts the question: Are any of these approaches genuinely scientific explanations?

On top of that, both types of scenarios suffer from the same overarching problem; namely, human activities that involve our aesthetic sense are central to almost all that we do. According to evolutionary biologists John Tooby and Leda Cosmides:

Aesthetically-driven activities are not marginal phenomena or elite behavior without significance in ordinary life. Humans in all cultures spend a significant amount of time engaged in activities such as listening to or telling fictional stories, participating in various forms of imaginative pretense, thinking about imaginary worlds, experiencing the imaginary creations of others, and creating public representations designed to communicate fictional experiences to others. Involvement in fictional, imagined worlds appears to be a cross-culturally universal, species-typical phenomenon . . . Involvement in the imaginative arts appears to be an intrinsically rewarding activity, without apparent utilitarian payoff.5

As human beings we prefer to occupy imaginary worlds. We prefer absorbing ourselves in the beauty of the world or in the creations we make. Yet, as Tooby and Cosmides point out, obsession with the imaginary world detracts from our survivability.6 The ultimate rewards we receive should be those leading to our survival and reproductive success and these rewards should come from the time we spend acquiring and acting on true information about the world. In fact, we should have an appetite for accurate information about the world and a willingness to cast aside false, imaginary information.

In effect, our obsession with aesthetics could properly be seen as maladaptive. It would be one thing if our obsession with creating and admiring beauty were an incidental part of our nature. But, because it is at the forefront of everything we think and do, its “maladaptive” character should have resulted in its adaptive elimination. Instead, we see the opposite. Our aesthetic sense is one of our most dominant traits as human beings.

Evolutionary Adaptation Models

This significant shortcoming pushes to the forefront evolutionary scenarios that explain our aesthetic sense as an adaptation. Yet, generally speaking, these evolutionary scenarios leave much to be desired. For example, one widely touted model explains our attraction to natural beauty as a capacity that helped humans identify the best habitats when we were hunter-gatherers. This aesthetic sense causes us to admire idyllic settings with water and trees. And, because we admire these settings, we want to live in them, promoting our survivability and reproductive success. Yet this model doesn’t account for our attraction to settings that would make it nearly impossible to live, let alone thrive: snow-covered mountains with sparse vegetation, the crashing waves of an angry ocean, or molten lava flowing from a volcanic eruption. These settings are hostile, yet we are enamored with their majestic beauty. This adaptive model also doesn’t explain our attraction to animals that would be deadly to us: lions and tigers or brightly colored snakes, for example.

Another, more sophisticated model explains our aesthetic sense as a manifestation of our ability to discern patterns. The capacity to discern patterns plays a key role in our ability to predict future events, promoting our survival and reproductive success. Our perception of patterns is innate, yet it needs to be developed and trained. So, our contemplation of beauty and our creation of art, music, literature, etc. are perceptual play—fun and enjoyable activities that develop our perceptual skills.7 If this model were valid, then I would expect perceptual play (and, consequently, fascination with beauty) to be most evident in children and teenagers. Yet we see that our aesthetic sense continues into adulthood. In fact, it becomes more elaborate and sophisticated as we grow older. Adults are much more likely to spend an exorbitant amount of time admiring and contemplating beauty and creating art and music.

This model also fails to explain why we feel compelled to develop our perceptual abilities and aesthetic capacities far beyond the basic skills needed to survive and reproduce. As human beings, we are obsessed with becoming aesthetic experts. The drive to develop expert skill in the aesthetic arts detracts from our survivability. This drive for perfection is maladaptive. To become an expert requires time and effort. It involves difficulty—even pain—and sacrifice. It’s effort better spent trying to survive and reproduce.

At the end of the day, evolutionary models that appeal to the adaptive value of our aesthetic sense, though elaborate and sophisticated, seem little more than evolutionary just-so stories.

So, what is the best explanation for our aesthetic sense? It likely depends on your worldview.

Which explanatory model is best? And which is not the best, but still pretty good? If you are a Christian theist, you most likely find the argument from beauty compelling. But, if you are a skeptic you most likely prefer evolutionary accounts for the origin of our aesthetic sense.

So, like beauty, the best explanation may lie in the eye of the beholder.

Resources

Endnotes
  1. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
  2. Swinburne, The Existence of God, 190–91.
  3. Mohan Matthen, “Eye Candy,” Aeon (March 24, 2014), https://aeon.co/amp/essays/how-did-evolution-shape-the-human-appreciation-of-beauty.
  4. John Tooby and Leda Cosmides, “Does Beauty Build Adaptive Minds? Towards an Evolutionary Theory of Aesthetics, Fiction and the Arts,” SubStance 30, no. 1&2 (2001): 6–27, doi:10.1353/sub.2001.0017.
  5. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
  6. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
  7. Matthen, “Eye Candy.”

Reprinted with permission by the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

Another Disappointment for the Evolutionary Model for the Origin of Eukaryotic Cells?

By Fazale Rana – April 29, 2020

We all want to be happy.

And there is no shortage of advice on what we need to do to lead happy, fulfilled lives. There are even “experts” who offer advice on what we shouldn’t do, if we want to be happy.

As a scientist, one thing makes me (and most other scientists) giddy with delight: learning how things in nature work.

Most scientists have a burning curiosity to understand the world around them, myself included. Like most of my colleagues, I derive an enormous amount of joy and satisfaction when I gain insight into the inner workings of some feature of nature. And, like most in the scientific community, I feel frustrated and disappointed when I don’t know why things are the way they are. Side by side, this combination of joy and frustration serves as one of the driving forces for my work as a scientist.

And, because many of the most interesting questions in science can appear at times to be nearly impenetrable mysteries, new discoveries typically bring me (and most other scientists) a mixture of hope and consternation.

Trying to Solve a Mystery

These mixed emotions are clearly evident in the life scientists who strive to understand the evolutionary origin of complex, eukaryotic cells. As science journalist Carl Zimmer rightly points out, the evolutionary process that produced eukaryotic cells from simpler microbes stands as “one of the deepest mysteries in biology.”1 And while researchers continue to accumulate clues about the origin of eukaryotic cells, they remain stymied when it comes to offering a robust, reliable evolutionary account of one of life’s key transitions.

The leading explanation for the evolutionary origin of eukaryotic cells is the endosymbiont hypothesis. On the surface, this idea appears to be well evidenced. But digging a little deeper into the details of this model exposes gaping holes. And each time researchers present new insights into this presumed evolutionary transition, they expose even more flaws in the model, turning the joy of discovery into frustration, as the latest work by a team of Japanese microbiologists attests.2

Before we unpack the work by the Japanese investigators and its implications for the endosymbiont hypothesis, a quick review of this cornerstone idea in evolutionary theory is in order. (If you are familiar with the endosymbiont hypothesis and the evidence in support of the model, please feel free to skip ahead to The Discovery of Lokiarchaeota.)

The Endosymbiont Hypothesis

According to this idea, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe.

Much of the endosymbiont hypothesis centers around the origin of the mitochondrion. Presumably, this organelle started as an endosymbiont. Evolutionary biologists believe that once engulfed by the host cell, this microbe took up permanent residency, growing and dividing inside the host. Over time, the endosymbiont and the host became mutually interdependent, with the endosymbiont providing a metabolic benefit for the host cell, such as supplying a source of ATP. In turn, the host cell provided nutrients to the endosymbiont. Presumably, the endosymbiont gradually evolved into an organelle through a process referred to as genome reduction. This reduction resulted when genes from the endosymbiont’s genome were transferred into the genome of the host organism.


Figure 1: A Depiction of the Endosymbiont Hypothesis. Image credit: Shutterstock

Evidence for the Endosymbiont Hypothesis

At least three lines of evidence bolster the hypothesis:

  • The similarity of mitochondria to bacteria. Most of the evidence for the endosymbiont hypothesis centers around the fact that mitochondria are about the same size and shape as a typical bacterium and have a double-membrane structure like gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.
  • Mitochondrial DNA. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.
  • The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider cardiolipin a signature lipid for mitochondria and another relic from its evolutionary past.

The Discovery of Lokiarchaeota

Evolutionary biologists have also developed other lines of evidence in support of the endosymbiont hypothesis. For example, biochemists have discovered that the genetic core (DNA replication and the transcription and translation of genetic information) of eukaryotic cells resembles that of the Archaea. This similarity suggests to many biologists that a microbe belonging to the archaeal domain served as the host cell that gave rise to eukaryotic cells.

Life scientists think they may have made strides toward identifying the archaeal host. In 2015, a large international team of collaborators reported the discovery of Lokiarchaeota, a new phylum belonging to the Archaea. This phylum groups with eukaryotes on the evolutionary tree. Analysis of Lokiarchaeota genomes reveals the presence of genes that encode the so-called eukaryotic signature proteins (ESPs), genes previously thought to be unique to eukaryotic organisms.3

As exciting as the discovery has been for evolutionary biologists, it has also been a source of frustration. Researchers didn’t discover this group by isolating microbes and culturing them in the lab. Instead, they discovered them by recovering DNA fragments from the environment (a hydrothermal vent system in the Atlantic Ocean called Loki’s Castle, after Loki, the ancient Norse god of trickery) and assembling them into genome sequences. Through this process, they learned that Lokiarchaeota belong to a newly recognized group of Archaea, the Asgard archaea. The reconstructed Lokiarchaeota “genome” is low quality (1.4-fold coverage) and incomplete (8 percent of the genome is missing).
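
To get an intuition for why such shallow sequencing leaves an incomplete genome, here is a minimal sketch in Python (my illustration, not part of the study; the 8-percent figure reflects the actual assembly, which a simple model won’t reproduce exactly). It uses the classic Lander-Waterman rule of thumb, in which the expected fraction of a genome covered by at least one read at average depth c is 1 - e^(-c):

    import math

    def expected_covered_fraction(depth: float) -> float:
        """Lander-Waterman estimate: chance a given base is hit by at least one read."""
        return 1.0 - math.exp(-depth)

    for depth in (1.4, 5.0, 30.0):
        print(f"{depth:>4.1f}-fold coverage -> ~{expected_covered_fraction(depth):.1%} of the genome covered")

On this idealized rule, 1.4-fold coverage leaves roughly a quarter of the genome untouched, which is why reconstructions like this one are treated as rough drafts rather than finished genomes.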

Mystery Solved?

So, without actual microbes to study, the best that life scientists could do was infer the cell biology of Lokiarchaeota from its genome. But this frustrating limitation recently turned into excitement when a team of Japanese microbiologists isolated and cultured the first microbe that belongs to this group of archaeons, dubbed Prometheoarchaeum syntrophicum. It took researchers nearly 12 years of laboratory work to isolate this slow-growing microbe from sediments in the Pacific Ocean and culture it in the laboratory. (It takes 14 to 25 days for the microbe to double.) But the effort is now paying off, because the research team is able to get a glimpse into what many life scientists believe to be a representative of the host microbe that spawned the first eukaryotic cells.

P. syntrophicum is spherically shaped and about 550 nm in size. In culture, this microbe forms aggregates around an extracellular polymeric material it secretes. It also has unusual membrane-based tentacle-like protrusions (of about 80 to 100 nm in length) that extend from the cell surface.

Researchers were unable to produce a pure culture of P. syntrophicum because it forms a close association with other microbes. The team learned that P. syntrophicum lives a syntrophic lifestyle, meaning that it forms interdependent relationships with other microbes in the environment. Specifically, P. syntrophicum produces hydrogen and formate as metabolic by-products that, in turn, are scavenged for nutrients by partner microbes. Researchers also discovered that P. syntrophicum consumes amino acids externally supplied in the growth medium. Presumably, this observation means that in the ocean floor sediments, P. syntrophicum feeds on organic materials released by its microbial counterpart.

P. syntrophicum and Failed Predictions of the Endosymbiont Hypothesis

Availability of P. syntrophicum cells now affords researchers an unprecedented chance to study a microbe that they believe stands in as a representative for the archaeal host in the endosymbiont hypothesis. Has the mystery been solved? Instead of affirming the scientific predictions of leading versions of the endosymbiont hypothesis, the biology of this organism adds to the frustration and confusion surrounding the evolutionary account. Scientific analysis raises three problems for the evolutionary view:

  • First, this microbe has no internal cellular structures. This observation stands as a failed prediction. Because Lokiarchaeota (and other members of the Asgard archaea) have a large number of ESPs present in their genomes, some biologists speculated that the Asgardian microbes would have complex subcellular structures. Yet, this expectation has not been realized for P. syntrophicum, even though this microbe has around 80 ESPs in its genome.
  • Second, this microbe can’t engulf other microbes. This inability also serves as a failed prediction. Prior to the cultivation of P. syntrophicum, analysis of the genomes of Lokiarchaeota identified a number of genes involved in membrane-related activities, suggesting that this microbe may well have possessed the ability to engulf other microbes. Again, this expectation wasn’t realized for P. syntrophicum. This observation is a significant blow to the endosymbiont hypothesis, which requires the host cell to have cellular processes in place to engulf other microbes.
  • Third, the membranes of this microbe are composed of typical archaeal lipids and lack the enzymatic machinery to make typical bacterial lipids. This also serves as a failed prediction. Evolutionary biologists had hoped that P. syntrophicum would provide a solution to the lipid divide (next section). It doesn’t.

What Is the Lipid Divide?

The lipid divide refers to the difference in the chemical composition of the cell membranes found in bacteria and archaea. Phospholipids comprise the cell membranes of both sorts of microbes. But that’s where the similarity ends: the chemical makeup of the phospholipids is distinct in each domain.

Bacterial phospholipids are built around a D-glycerol backbone, which has a phosphate moiety bound to the glycerol at the sn-3 position. Two fatty acids are bound to the D-glycerol backbone via ester linkages at the sn-1 and sn-2 positions. In water, these phospholipids assemble into bilayer structures.

Archaeal phospholipids are constructed around an L-glycerol backbone (which produces membrane lipids with different stereochemistry than bacterial phospholipids). The phosphate moiety is attached at the sn-1 position of glycerol. Two isoprenoid chains are bound to the sn-2 and sn-3 positions of L-glycerol via ether linkages. Some archaeal membranes are formed from phospholipid bilayers, while others are formed from phospholipid monolayers.
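
To make the contrast easy to see at a glance, here is a minimal sketch in Python (my own summary of the two paragraphs above, not code from any study) that lays the two architectures side by side:

    from dataclasses import dataclass

    @dataclass
    class Phospholipid:
        backbone: str        # glycerol stereochemistry
        phosphate_site: str  # position where the phosphate attaches
        chains: str          # hydrocarbon chains and their positions
        linkage: str         # bond joining the chains to glycerol

    bacterial = Phospholipid("D-glycerol", "sn-3",
                             "two fatty acids at sn-1 and sn-2", "ester")
    archaeal = Phospholipid("L-glycerol", "sn-1",
                            "two isoprenoid chains at sn-2 and sn-3", "ether")

    # Every field differs; that across-the-board mismatch is the lipid divide.
    for field in ("backbone", "phosphate_site", "chains", "linkage"):
        print(f"{field:>14}: {getattr(bacterial, field):<33} | {getattr(archaeal, field)}")

Note that the two lipid types differ in every field at once, which is why a gradual evolutionary transition between them is so hard to envision.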

Presumably, the structural features of the archaeal phospholipids serve as an adaptation that renders them ideally suited to form stable membranes in the physically and chemically harsh environments in which many archaea find themselves.

The Lipid Divide Frustrates the Endosymbiont Hypothesis

If the host cell in the endosymbiont evolutionary mechanism is an archaeal cell, it logically follows that the membrane composition of eukaryotic cells should be archaeal-like. As it turns out, this expectation is not met. The cell membranes of eukaryotic cells closely resemble bacterial, not archaeal, membranes.

Can Lokiarchaeota Traverse the Lipid Divide?

Researchers had hoped that the discovery of Lokiarchaeota would shed light on the evolutionary origin of eukaryotic cell membranes. In the absence of actual organisms to study, researchers screened the Lokiarchaeota genome for enzymes that would take part in phospholipid synthesis, with the hope of finding clues about how this transition may have occurred.4

Based on their analysis, they argued that Lokiarchaeota could produce some type of hybrid phospholipid with features of both archaeal and bacterial phospholipids. Still, their conclusion remained speculative at best. The only way to establish Lokiarchaeota membranes as transitional between those found in archaea and bacteria is to perform chemical analysis of the membranes themselves. With the isolation and cultivation of P. syntrophicum, this analysis is now possible. Yet its results only serve to disappoint evolutionary biologists, because this microbe has typical archaeal lipids in its membranes and displays no evidence of being capable of making archaeal/bacterial hybrid lipids.

A New Model for the Endosymbiont Hypothesis?

Undeterred by these disappointing results, the Japanese researchers propose a new version of the endosymbiont hypothesis, consistent with P. syntrophicum biology. For this model, they envision the archaeal host entangling an oxygen-metabolizing, ATP-producing bacterium in the tentacle-like structures that emanate from its cellular surface. Over time, the entangled organism forms a mutualistic relationship with the archaeal host. Eventually, the host encapsulates the entangled microbe in an extracellular structure that forms the body of the eukaryotic cell, with the host cell forming a proto-nucleus.

Though this model is consistent with P. syntrophicum biology, it is highly speculative and lacks supporting evidence. To be fair, the Japanese researchers make this very point when they state, “further evidence is required to support this conjecture.”5

This work shows how scientific advances help validate or invalidate models. Even though many biologists view the endosymbiont hypothesis as a compelling, well-established theory, significant gaps in our understanding of the origin of eukaryotic cells persist. (For a more extensive discussion of these gaps, see the Resources section.) In my view as a biochemist, some of these gaps are unbridgeable chasms that motivate my skepticism about the endosymbiont hypothesis, specifically, and the evolutionary approach to explaining the origin of eukaryotic cells, generally.

Of course, my skepticism leads to another question: Is it possible that the origin of eukaryotic cells reflects a Creator’s handiwork? I am happy to say that the answer is “yes.”

Resources

Challenges to the Endosymbiont Hypothesis

In Support of A Creation Model for the Origin of Eukaryotic Cells

Endnotes
  1. Carl Zimmer, “This Strange Microbe May Mark One of Life’s Great Leaps,” The New York Times (January 16, 2020), https://www.nytimes.com/2020/01/15/science/cells-eukaryotes-archaea.html.
  2. Hiroyuki Imachi et al., “Isolation of an Archaeon at the Prokaryote-Eukaryote Interface,” Nature 577 (January 15, 2020): 519–25, doi:10.1038/s41586-019-1916-6.
  3. Anja Spang et al., “Complex Archaea That Bridge the Gap between Prokaryotes and Eukaryotes,” Nature 521 (May 14, 2015): 173–79, doi:10.1038/nature14447; Katarzyna Zaremba-Niedzwiedzka et al., “Asgard Archaea Illuminate the Origin of Eukaryotic Cellular Complexity,” Nature 541 (January 19, 2017): 353–58, doi:10.1038/nature21031.
  4. Laura Villanueva, Stefan Schouten, and Jaap S. Sinninghe Damsté, “Phylogenomic Analysis of Lipid Biosynthetic Genes of Archaea Shed Light on the ‘Lipid Divide,’” Environmental Microbiology 19 (January 2017): 54–69, doi:10.1111/1462-2920.13361.
  5. Imachi et al., “Isolation of an Archaeon.”

Reprinted by permission of the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

How Can DNA Survive for 75 Million Years? Implications for the Age of the Earth

By Fazale Rana – April 15, 2020

My family’s TV viewing habits have changed quite a bit over the years. It doesn’t seem that long ago that we would gather around the TV, each week at the exact same time, to watch an episode of our favorite show, broadcast live by one of the TV networks. In those days, we had no choice but to wait another week for the next installment in the series.

Now, thanks to the availability of streaming services, my wife and I find ourselves binge-watching our favorite TV programs from beginning to end, in one sitting. I’m almost embarrassed to admit this, but we rarely sit down to watch TV with the idea that we are going to binge watch an entire season at a time. Usually, we just intend to take a break and watch a single episode of our favorite program before we get back to our day. Inevitably, however, we find ourselves so caught up with the show we are watching that we end up viewing one episode after another, after another, as the hours of our day melt away.

One program we couldn’t stop watching was Money Heist (available through Netflix). This Spanish TV series is a crime drama that was originally intended to span two seasons. (Because of its popularity, Netflix ordered two more seasons.) Money Heist revolves around a group of robbers led by a brilliant strategist called the Professor. The Professor and his brother nicknamed Berlin devise an ambitious, audacious plan to take control of the Royal Mint of Spain in order to print and then escape with 2.5 billion euros.

Because their plan is so elaborate, it takes the team of robbers five months to prepare for their multi-day takeover of the Royal Mint. As you might imagine, their scheme consists of a sequence of ingenious, well-timed, and difficult-to-execute steps requiring everything to come together in the just-right way for their plan to succeed and for the robbers to make it out of the mint with a treasure trove of cash.

Recently, a team of paleontologists uncovered their own treasure trove—a haul of soft tissue materials from the 75-million-year-old fossilized skull fragments of a juvenile duck-billed dinosaur (Hypacrosaurus stebingeri).1 Included in this cache of soft tissue materials were the remnants of the dinosaur’s original DNA—the ultimate paleontological treasure. What a steal!

This surprising discovery has people asking: How is it possible for DNA to survive for such a long period of time?

Common wisdom states that DNA shouldn’t survive for more than 1 million years, much less 75 million. Thus, young-earth creationists (YECs) claim that this soft-tissue discovery provides the most compelling reason to think that the earth is young and that the fossil record resulted from a catastrophic global deluge (Noah’s flood).

But is their claim valid?

Hardly. The team that made the soft-tissue discovery proposes a set of mechanisms and processes that could enable DNA to survive for 75 million years. All it takes is the just-right set of conditions and a sequence of well-timed, just-right events all coming together in the just-right way, and DNA will persist in fossil remains.

Baby Dinosaur Discovery

The team of paleontologists who made this discovery—co-led by Mary Schweitzer at North Carolina State University and Alida M. Bailleul of the Chinese Academy of Sciences—unwittingly stumbled upon these DNA remnants as part of another study. They were investigating pieces of fossilized skull and leg fragments of a juvenile Hypacrosaurus recovered from a nesting site. Because of the dinosaur’s young age, the researchers hoped to extend the current understanding of dinosaur growth by carrying out a detailed microscopic characterization of these fossil pieces. In one of the skull fragments they observed well-defined and well-preserved calcified cartilage that was part of a growth plate when the juvenile was alive.

A growth plate is a region in a growing skeleton where bone replaces cartilage. At this chondro-osseous junction, chondrocytes (cells found in cartilage) reside within lacunae (spaces in the matrix of skeletal tissues). Here, chondrocytes secrete an extracellular matrix made up of type II collagen and glycosaminoglycans. These cells rapidly divide and then enlarge (a process called hypertrophy). Eventually, the cells die, leaving the lacunae empty. Afterwards, bone fills in the cavities.

The team of paleontologists detected lacunae in the translucent, well-preserved cartilage of the dinosaur skull fragment. A more careful examination of the spaces revealed several cell-like structures sharing the same lacunae. The team interpreted these cell-like structures as the remnants of chondrocytes. In some instances, the cell-like structures appeared to be doublets, presumably resulting from the final stages of cell division. In the doublets, they observed darker regions that appeared to be the remnants of nuclei and, within the nuclei, dark-colored materials that were elongated and aligned to mirror each other. They interpreted these features as the leftover remnants of chromosomes, which form condensed structures during the later stages of cell division.

Given the remarkable degree of preservation, the investigators wondered if any biomolecular remnants persisted within these microscopic structures. To test this idea, they exposed a piece of the fossil to Alcian blue, a dye that stains cartilage of extant animals. The fact that the fossilized cartilage picked up the stain indicated to the research team that soft tissue materials still persisted in the fossils.

Using an antibody binding assay (an analytic test), the research team detected the remnants of collagen II in the lacunae. Moreover, as a scientific first, the researchers isolated the cell-like remnants of the original chondrocytes. Exposing the chondrocyte remnants to two different dyes (PI and DAPI) produced staining in the cell interior near the nuclei. These two dyes both intercalate between the base pairs that form DNA’s interior region. This step indicated the presence of DNA remnants in the fossils, specifically in the dark regions that appear to be the nuclei.

Implications of This Find

This discovery adds to the excitement of previous studies that describe soft tissue remnants in fossils. These types of finds are money for paleontologists because they open up new windows into the biology of extinct life. According to Bailleul:

“These exciting results add to growing evidence that cells and some of their biomolecules can persist in deep-time. They suggest DNA can preserve for tens of millions of years, and we hope that this study will encourage scientists working on ancient DNA to push current limits and to use new methodology in order to reveal all the unknown molecular secrets that ancient tissues have.”2

Those molecular secrets are even more exciting and surprising for paleontologists because kinetic and modeling studies indicate that DNA should have completely degraded within the span of 1 million years.
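
Where does the 1-million-year figure come from? Here is a minimal back-of-the-envelope sketch in Python (my illustration, not from the study). It assumes strand breaks accumulate randomly at a fixed per-bond rate and uses the per-site rate constant (roughly 5.5 × 10^-6 breaks per bond per year at about 13°C) often quoted from a widely cited kinetic study of fossil moa bones; as discussed below, the true rate depends strongly on burial conditions:

    import math

    K_PER_BOND = 5.5e-6  # assumed breaks per phosphodiester bond per year (~13 degrees C)

    def mean_fragment_bp(years: float, k: float = K_PER_BOND) -> float:
        """Expected DNA fragment length if strand breaks accumulate randomly."""
        p_broken = 1.0 - math.exp(-k * years)  # chance any given bond has snapped
        return 1.0 / p_broken

    for years in (10_000, 100_000, 1_000_000, 75_000_000):
        print(f"after {years:>10,} years: fragments of ~{mean_fragment_bp(years):.1f} bp")

On these assumptions, average fragments shrink to just a few base pairs well before the 1-million-year mark, and to single bases long before 75 million years. That is exactly why the Hypacrosaurus result demands the kind of condition-dependent preservation mechanisms described below.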

The YEC Response

The surprising persistence of DNA in the dinosaur fossil remains is like bars of gold for YECs, and they don’t want to hoard this treasure for themselves. YECs assert that this find is the “last straw” for the notion of deep time (the view that Earth is 4.5 billion years old and life has existed on it for upwards of 3.8 billion years). For example, YEC author David Coppedge insists that “something has to give. Either DNA can last that long, or dinosaur bones are not that old.”3 He goes on to remind us that “creation research has shown that there are strict upper limits on the survival of DNA. It cannot be tens of millions of years old.”4 For YECs, this type of discovery becomes prima facie evidence that the fossil record must be the result of a global flood that occurred only a few thousand years ago.

Yet, in my book Dinosaur Blood and the Age of the Earth, I explain why there is absolutely no reason to think that the radiometric dating techniques used to determine the ages of geological formations and fossils are unreliable. The reliability of radiometric dating methods means that there must be mechanisms that work together to promote DNA’s survival in fossil remains. Fortunately, we don’t have to wait for the next season of our favorite program to be released by Netflix to learn what those mechanisms and processes might be.


Preservation Mechanisms for Soft Tissues in Fossils

Even though common wisdom says that DNA can’t survive for tens of millions of years, a word of caution is in order. When I worked in R&D for a Fortune 500 company, I participated in a number of stability studies. I quickly learned an important lesson: the stability of chemical compounds can’t be predicted. The stability profile for a material only applies to the specific set of conditions used in the study. Under a different set of conditions chemical stability can vary quite extensively, even if the conditions differ only slightly from the ones employed in the study.

So, even though researchers have performed kinetic and modeling studies on DNA during fossilization, it’s best to exercise caution before we apply their results to the Hypacrosaurus fossils. To put it differently, the only way to know what the DNA stability profile should be in the Hypacrosaurus fragments is to study it under the precise set of taphonomic (burial, decay, preservation) conditions that led to fossilization. And, of course, this type of study isn’t realistic.

This limitation doesn’t mean that we can’t produce a plausible explanation for DNA’s survival for 75 million years in the Hypacrosaurus fossil fragments. Here are some clues as to why and how DNA persisted in the young dinosaur’s remains:

  • The fossilized cartilage and chondrocytes appear to be exceptionally well-preserved. For this reason, it makes sense to think that soft tissue material could persist in these remains. So, while we don’t know the taphonomic conditions that contributed to the fossilization process, it is safe to assume that these conditions came together in the just-right way to preserve remnants of the biomolecules that make up the soft tissues, including DNA.
  • Soft tissue material is much more likely to survive in cartilage than in bone. The extracellular matrix that makes up cartilage has no vascularization (channels). This property makes it less porous and reduces the surface area compared to bone. Both properties inhibit groundwater and microorganisms from gaining access to the bulk of the soft tissue materials in the cartilage. At the growth plate, cartilage actually has a higher mineral-to-organic ratio than bone. Minerals inhibit the activity of environmental enzymes and microorganisms. Minerals also protect the biomolecules that make up the organic portion of cartilage because they serve as adsorption sites, stabilizing even fragile molecules. Also, minerals can form cross-links with biomolecules. Cross-linking slows down the degradation of biopolymers. Because the chondrocytes in the cartilage lacunae were undergoing rapid cell division at the time of the creature’s death, they consumed most of the available oxygen in their local environment. This consumption would have created localized hypoxia (oxygen deficiency) that would have minimized oxidative damage to the tissue in the lacunae.
  • The preserved biomolecules are not the original, unaltered materials, but are fragmented remnants that have undergone chemical alteration. Even with the molecules in this altered, fragmented state, many of the assays designed to detect the original, unaltered materials will produce positive results. For example, the antibody binding assays the research team used to detect collagen II could easily detect small fragmented pieces of collagen. These assays depend upon the binding of antibodies to the target molecule. The antibody binding site consists of a relatively small region of the molecular target. This feature of antibody binding means that the antibodies designed to target collagen II will also bind to small peptide fragments of only a few amino acids in length—as long as they are derived from collagen II.

The dyes used to detect DNA can bind to double-stranded regions of DNA that are only six base pairs in length. Again, this feature means that the dye molecules will intercalate just as readily into relatively small fragments derived from the original material as into intact DNA molecules. (The sketch after this list illustrates the point.)

  • The biochemical properties of collagen II and condensed chromosomes explain the persistence of this protein and DNA. Collagen is a heavily cross-linked material. Cross-linking imparts a high degree of stability to proteins, accounting for their long-term durability in fossil remains.
In the later stages of cell division, chromosomes (which consist of DNA and proteins) exist in a highly compact, condensed phase. In this phase, chromosomal DNA would be protected and much more resistant to chemical breakdown than if the chromosomes existed in a more diffuse state, as is the case in other stages of the cell cycle.
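
To see why heavy fragmentation doesn’t defeat dye-based detection, here is a toy Monte Carlo simulation in Python (my own, with invented parameters; only the six-base-pair binding threshold comes from the discussion above). It randomly snaps the backbone of a long double-stranded sequence and asks how much of the material still sits in fragments long enough to pick up the dye:

    import random

    def detectable_share(n_bases=100_000, p_break=0.12, min_len=6, seed=1):
        """Share of bases left in fragments of at least min_len bp (dye-detectable)
        after each backbone bond snaps independently with probability p_break."""
        random.seed(seed)
        detectable, frag = 0, 1
        for _ in range(n_bases - 1):
            if random.random() < p_break:  # this bond breaks: close out the fragment
                if frag >= min_len:
                    detectable += frag
                frag = 1
            else:
                frag += 1
        if frag >= min_len:
            detectable += frag
        return detectable / n_bases

    print(f"{detectable_share():.0%} of bases lie in dye-detectable fragments")

With these made-up numbers, the average fragment is only around 8 base pairs, yet roughly 85 percent of the bases still sit in stretches of 6 or more. A positive stain, in other words, testifies to surviving fragments, not to intact genomes.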

In other words, a confluence of factors worked together to promote a set of conditions that allows small pieces of collagen II and DNA to survive long enough for these materials to become entombed within a mineral encasement. At this point in the preservation process, the materials can survive for indefinite periods of time.

More Historical Heists to Come

Nevertheless, some people find it easier to believe that a team of robbers could walk out of the Royal Mint of Spain with 2.5 billion euros than to think that DNA could persist in 75-million-year-old fossils. Their disbelief causes them to question the concept of deep time. Yet, it is possible to devise a scientifically plausible scheme to explain DNA’s survival for tens of millions of years, if several factors all work together in the just-right way. This appears to be the case for the duck-billed dinosaur specimen characterized by Schweitzer and Bailleul’s team.

As this latest study demonstrates, if the just-right sequence of events occurs in the just-right way with the just-right timing, scientists have the opportunity to walk out of the fossil record vault with the paleontological steal of the century.

It is exciting to think that more discoveries of this type are just around the corner. Stay tuned!

Resources

Responding to Young Earth Critics

Mechanism of Soft Tissue Preservation

Recovery of a Wide Range of Soft Tissue Materials in Fossils

Detection of Carbon-14 in Fossils

Endnotes
  1. Alida M. Bailleul et al., “Evidence of Proteins, Chromosomes and Chemical Markers for DNA in Exceptionally Preserved Dinosaur Cartilage,” National Science Review, nwz206 (January 12, 2020), doi:10.1093/nsr/nwz206, https://academic.oup.com/nsr/advance-article/doi/10.1093/nsr/nwz206/5762999.
  2. Science China Press, “Cartilage Cells, Chromosomes and DNA Preserved in 75 Million-Year-Old Baby Duck-Billed Dinosaur,” Phys.org, posted February 28, 2020, https://phys.org/news/2020-02-cartilage-cells-chromosomes-dna-million-year-old.html.
  3. David F. Coppedge, “Dinosaur DNA Found!”, Creation-Evolution Headlines (website), posted February 28, 2020, https://crev.info/2020/02/dinosaur-dna-found/.
  4. Coppedge, “Dinosaur DNA Found!”

Reprinted by permission of the author

Original article at:
https://reasons.org/explore/blogs/the-cells-design

No Joke: New Pseudogene Function Smiles on the Case for Creation

By Fazale Rana – April 1, 2020

Time to confess. I now consider myself an evolutionary creationist. I have no choice. The evidence for biological evolution is so overwhelming…

…Just kidding! April Fool’s!

I am still an old-earth creationist. Even though the evolutionary paradigm is the prevailing framework in biology, I am skeptical about facets of it. I am more convinced than ever that a creation model approach is the best way to view life’s origin, design, and history. That’s not to say that there isn’t evidence for common descent; there is. Still, even with this evidence, I prefer old-earth creationism for three reasons.

  • First, a creation model approach can readily accommodate the evidence for common descent within a design framework.
  • Second, the evolutionary paradigm struggles to adequately explain many of the key transitions in life’s history.
  • Third, the impression of design in biology is overwhelming—and it’s becoming more so every day.

And that is no joke.

Take the human genome as an example. When it comes to understanding its structure and function, we are in our infancy. As we grow in our knowledge and insight, it becomes increasingly apparent that the structural and functional features of the human genome (and the genomes of other organisms) display more elegance and sophistication than most life scientists could have ever imagined—at least, those operating within the evolutionary framework. On the other hand, the elegance and sophistication of genomes is expected for creationists and intelligent design advocates. To put it simply, the more we learn about the human genome, the more it appears to be the product of a Mind.

In fact, the advances in genomics over the last decade have forced life scientists to radically alter their views of genome biology. When the human genome was first sequenced in 2000, biologists considered most of its sequence elements to be nonfunctional, useless DNA. Now biologists recognize that virtually every class of these so-called junk DNA sequences serves key functional roles.

If most of the DNA sequence elements in the human genome were truly junk, then I’d agree that it makes sense to view them as evolutionary leftovers, especially because these junk DNA sequences appear in corresponding locations of the human and primate genomes. It is for these reasons that biologists have traditionally interpreted these shared sequences as the most convincing evidence for common descent.

However, now that we have learned that these sequences are functional, I think it is reasonable to regard them as the handiwork of a Creator, intentionally designed to contribute to the genome’s biology. In this framework, the shared DNA sequences in the human and primate genomes reflect common design, not common descent.

Still, many biologists reject the common design interpretation, while continuing to express confidence in the evolutionary model. Their certainty reflects a commitment to methodological naturalism, but there is another reason for their confidence. They argue that the human genome (and the genomes of other organisms) display other architectural and operational features that the evolutionary framework explains best—and, in their view, these features tip the scales toward the evolutionary interpretation.

Yet, researchers continue to make discoveries about junk DNA that counterbalance the evidence for common descent, including these structural and functional features. Recent insights into pseudogene biology nicely illustrate this trend.

Pseudogenes

Most life scientists view pseudogenes as the remnants of once-functional genes. Along these lines, biologists have identified three categories of pseudogenes (unitary, duplicated, and processed) and proposed three distinct mechanisms to explain the origin of each class. These mechanisms produce distinguishing features that allow investigators to identify certain DNA sequences as pseudogenes. However, a pre-commitment to the evolutionary paradigm can influence many biologists to declare too quickly that pseudogenes are nonfunctional based on their sequence characteristics.1

The Mechanisms of Pseudogene Formation.
Image credit: Wikipedia.

As the old adage goes: theories guide, experiments decide. And a growing body of experimental data indicates that pseudogenes from all three classes have utility.

A number of research teams have demonstrated that the cell’s machinery transcribes processed pseudogenes and, in turn, these transcripts are translated into proteins. Both duplicated and unitary pseudogenes are also transcribed. However, except for a few rare cases, these transcripts are not translated into proteins. Instead, most duplicated and unitary pseudogene transcripts serve a regulatory role, described by the competitive endogenous RNA hypothesis.
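
For readers unfamiliar with the competitive endogenous RNA (ceRNA) idea, here is a toy model in Python (my own illustration with invented numbers, not from any cited study). A pseudogene transcript that shares microRNA binding sites with its “parent” mRNA acts as a decoy, soaking up repressive microRNAs and leaving more parent transcripts free to be translated:

    def free_parent_mrna(parent: float, pseudo: float, mirna: float) -> float:
        """Parent transcripts left unrepressed when a fixed microRNA pool
        distributes across all transcripts carrying the shared binding site."""
        bound_to_parent = mirna * (parent / (parent + pseudo))
        return max(parent - bound_to_parent, 0.0)

    # With no pseudogene transcript, the microRNA pool hits the parent hard...
    print(free_parent_mrna(parent=100, pseudo=0, mirna=80))    # -> 20.0
    # ...but a transcribed pseudogene sponges up microRNAs, de-repressing it.
    print(free_parent_mrna(parent=100, pseudo=100, mirna=80))  # -> 60.0

In this toy example, adding the decoy transcripts triples the unrepressed parent mRNA, which is the sense in which a transcribed pseudogene can regulate its intact counterpart.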

In other words, the experimental support for pseudogene function seemingly hinges on the transcription of these sequences. That leads to a question: What about pseudogene sequences that aren’t transcribed? A number of pseudogenic sequences in genomes seemingly sit dormant. They aren’t transcribed and, presumably, have no utility whatsoever.

For many life scientists, this supports the evolutionary account for pseudogene origins, making it the preferred explanation over any model that posits the intentional design of pseudogene sequences. After all, why would a Creator introduce mutationally damaged genes that serve no function? Isn’t it better to explain the presence of functional processed pseudogenes as the result of neofunctionalization, whereby evolutionary mechanisms co-opt processed pseudogenes and use them as the raw material to evolve DNA sequence elements into new genes?

Or, perhaps, is it better to view the transcripts of regulatory unitary and duplicated pseudogenes as the functional remnants of the original genes whose transcripts played a role in regulatory networks with other RNA transcripts? Even though these pseudogenes no longer direct protein production, they can still take part in the regulatory networks comprised of RNA transcripts.

Are Untranscribed Pseudogenes Really Untranscribed?

Again, remember that support for the evolutionary interpretation of pseudogenes rests on the belief that some pseudogenes are not transcribed. What happens to this support if these DNA sequences are transcribed, meaning we simply haven’t detected or identified their transcripts experimentally?

As a case in point, in a piece for Nature Reviews Genetics, a team of collaborators from Australia argues that failure to detect pseudogene transcripts experimentally does not confirm the absence of transcription.2 For example, the transcripts of a pseudogene transcribed at a low level may fall below the experimental detection limit. This particular pseudogene would appear inactive to researchers when, in fact, the opposite is the case. Additionally, pseudogene expression may be tissue-specific or may take place only at certain points in growth and development. If the assay doesn’t take these possibilities into account, then failure to detect pseudogene transcripts could simply mean that the experimental protocol is flawed.
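
Here is a minimal sketch in Python of that detection-limit logic (my own illustration; the tissue names, abundances, and one-transcript-per-million floor are all invented). A pseudogene expressed weakly, or only in one tissue, gets labeled “untranscribed” by any survey that misses the right tissue or can’t see below its sensitivity floor:

    DETECTION_LIMIT_TPM = 1.0  # assumed assay floor, in transcripts per million

    # Hypothetical expression profile for one pseudogene: tissue -> abundance
    pseudogene_tpm = {"liver": 0.0, "kidney": 0.3, "testis": 7.5}

    def classify(tpm_by_tissue, tissues_assayed):
        """Label the pseudogene using only the tissues actually surveyed."""
        detected = any(tpm_by_tissue[t] >= DETECTION_LIMIT_TPM for t in tissues_assayed)
        return "transcribed" if detected else "apparently untranscribed"

    print(classify(pseudogene_tpm, ["liver", "kidney"]))            # misses it
    print(classify(pseudogene_tpm, ["liver", "kidney", "testis"]))  # finds it

The pseudogene’s status flips from “apparently untranscribed” to “transcribed” the moment the right tissue enters the survey, which is the authors’ point: absence of evidence here is not evidence of absence.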

The similarity of the DNA sequences of pseudogenes and their corresponding “sister” genes causes another complication. It can be hard to experimentally distinguish between a pseudogene and its “intact” sister gene. This limitation means that, in some instances, pseudogene transcripts may be misidentified as the transcripts of the “intact” gene. Again, this can lead researchers to conclude mistakenly that the pseudogene isn’t transcribed.

Are Untranscribed Pseudogenes Really Nonfunctional?

These very real experimental challenges notwithstanding, there are pseudogenes that indeed are not transcribed, but it would be wrong to conclude that they have no role in gene regulation. For example, a large team of international collaborators demonstrated that a pseudogene sequence contributes to the specific three-dimensional architecture of chromosomes. By doing so, this sequence exerts influence over gene expression, albeit indirectly.3

Another research team determined that a different pseudogene plays a role in maintaining chromosomal stability. In laboratory experiments, they discovered that deleting the DNA region that harbors this pseudogene increases chromosomal recombination events that result in the deletion of pieces of DNA. This deletion is catastrophic and leads to DiGeorge/velocardiofacial syndrome.4

To be clear, these two studies focused on single pseudogenes. We need to be careful about extrapolating the results to all untranscribed pseudogenes. Nevertheless, at minimum, these findings open up the possibility that other untranscribed pseudogene sequences function in the same way. If history is anything to go by when it comes to junk DNA, these two discoveries are most likely harbingers of what is to come. Simply put, we continue to uncover unexpected functions for pseudogenes (and other classes of junk DNA).

Common Design or Common Descent?

Not that long ago, shared nonfunctional, junk DNA sequences in the human and primate genomes were taken as prima facie evidence for our shared evolutionary history with the great apes. There was no way to genuinely respond to the challenge junk DNA posed to creation models, other than to express the belief that we would one day discover function for junk DNA sequences.

Subsequently, discoveries have fulfilled a key scientific prediction made by creationists and intelligent design proponents alike. The initial discoveries involved single, isolated pseudogenes. Later studies demonstrated that pseudogene function is pervasive, leading to new scientific ideas, such as the competitive endogenous RNA hypothesis, that connect the sequence similarity of pseudogenes and “intact” genes to pseudogene function. Researchers are now beginning to identify functional roles for untranscribed pseudogenes. I predict that it is only a matter of time before biologists concede that the utility of untranscribed pseudogenes is pervasive and commonplace.

The creation model interpretation of shared junk DNA sequences becomes stronger and stronger with each step forward, which leads me to ask: When are life scientists going to stop fooling around and give a creation model approach a seat at the biology table?

Endnotes
  1. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (December 17, 2019): 191–201, doi:10.1038/s41576-019-0196-1.
  2. Cheetham et al., 191–201.
  3. Peng Huang et al., “Comparative Analysis of Three-Dimensional Chromosomal Architecture Identifies a Novel Fetal Hemoglobin Regulatory Element,” Genes & Development 31, no. 16 (August 15, 2017): 1704–13, doi:10.1101/gad.303461.117.
  4. Laia Vergés et al., “An Exploratory Study of Predisposing Genetic Factors for DiGeorge/Velocardiofacial Syndrome,” Scientific Reports 7 (January 6, 2017): id. 40031, doi:10.1038/srep40031.

Reprinted by permission of the author

Original article at:

https://reasons.org/explore/blogs/the-cells-design

Does Evolutionary Bias Create Unhealthy Stereotypes about Pseudogenes?

By Fazale Rana – March 18, 2020

Truth be told, we all hold to certain stereotypes, whether we want to admit it or not. Though unfair, these stereotypes, more often than not, cause little real damage.

Yet, there are instances when stereotypes can be harmful—even deadly. As a case in point, researchers have shown that stereotyping disrupts the healthcare received by members of so-called disadvantaged groups, such as African Americans, Latinos, and the poor.1

Healthcare providers are frequently guilty of bias towards underprivileged people. Often, the stereotyping is unconscious and unintentional. Still, this bias compromises the medical care received by people in these ethnic and socioeconomic groups.

Underprivileged patients are also guilty of stereotyping. It is not uncommon for these patients to perceive themselves as the victims of prejudice, even when their healthcare providers are genuinely unbiased. As a result, these patients don’t trust healthcare workers and, consequently, withhold information that is vital for a proper diagnosis.

Fortunately, psychologists have developed best practices that can reduce stereotyping by both healthcare practitioners and patients. Hopefully, by implementing these practices, the impact of stereotyping on the quality of healthcare can be minimized over time.

Recently, a research team from Australia identified another form of stereotyping that holds the potential to negatively impact healthcare outcomes.2 In this case, the impact of this stereotyping isn’t limited to disadvantaged people; it affects all of us.

A Bias Against Pseudogenes

These researchers have uncovered a bias in the way life scientists view the human genome (and the genomes of other organisms). Too often they regard the human genome as a repository of useless, nonfunctional DNA that arises as a vestige of evolutionary history. Because of this view, life scientists and the biomedical research community eschew studying regions of the human genome they deem to be junk DNA. This posture is not unreasonable. It doesn’t make sense to invest precious scientific resources to study nonfunctional DNA.

Many life scientists are unaware of their bias. Unfortunately, this stereotyping hinders scientific advance by delaying discoveries that could be translated into the clinical setting. Quite often, supposed junk DNA has turned out to serve a vital purpose. Failure to recognize this function not only compromises our understanding of genome biology, but also hinders biomedical researchers from identifying defects in these genomic regions that contribute to genetic diseases and disorders.

As psychologists will point out, acknowledging bias is the first step to solving the problems that stereotyping causes. This is precisely what these researchers have done by publishing an article in Nature Reviews Genetics.3 The team focused on DNA sequence elements called pseudogenes. Traditionally, life scientists have viewed pseudogenes as the remnants of once-functional genes. Biologists have identified three categories of pseudogenes: (1) unitary, (2) duplicated, and (3) processed.

Researchers categorize DNA sequences as pseudogenes based on structural features. Such features indicate to the investigators that these sequence elements were functional genes at one time in evolutionary history, but eventually lost function due to mutations or other biochemical processes, such as reverse transcription and DNA insertion. Once a DNA sequence is labeled a pseudogene, bias sets in and researchers just assume that it lacks function—not because it has been experimentally demonstrated to be nonfunctional, but because of the stereotyping that arises out of the evolutionary paradigm.

The authors of the piece acknowledge that “the annotation of genomic regions as pseudogenes constitutes an etymological signifier that an element has no function and is not a gene. As a result, pseudogene-annotated regions are largely excluded from functional screens and genomic analyses.”4 In other words, the “pseudogene” moniker biases researchers to such a degree that they ignore these sequence elements as they study genome structure and function, without ever doing the hard experimental work to determine whether the elements are actually nonfunctional.

This approach is clearly misguided and detracts from scientific discovery. As the authors admit, “However, with a growing number of instances of pseudogene-annotated regions later found to exhibit biological function, there is an emerging risk that these regions of the genome are prematurely dismissed as pseudogenic and therefore regarded as void of function.”5

Discovering Function Despite Bias

The harmful effects of this bias become evident as biomedical researchers unexpectedly stumble upon function for pseudogenes, time and time again, not because of the evolutionary paradigm but despite it. These authors point out that many processed pseudogenes are transcribed and, of those, many are translated to produce proteins. Many unitary and duplicated pseudogenes are also transcribed. Some are also translated into proteins, but a majority are not. Instead, they play a role in gene regulation, as described by the competitive endogenous RNA hypothesis.

Still, there are some pseudogenes that aren’t transcribed and, thus, could rightly be deemed nonfunctional. However, the researchers point out that the current experimental approaches for identifying transcribed regions are less than ideal, and many of these methods may fail to detect pseudogene transcripts. Moreover, even if a pseudogene isn’t transcribed, it still may serve a functional role (e.g., contributing to chromosome three-dimensional structure and stability).

This Nature Reviews Genetics article raises a number of questions and concerns for me as a biochemist:

  • How widespread is this bias?
  • If this type of stereotyping exists toward pseudogenes, does it exist for other classes of junk DNA?
  • How well do we really understand genome structure and function?
  • Do we have the wrong perspective on the genome, one that stultifies scientific advance?
  • Does this bias delay the understanding and alleviation of human health concerns?

Is the Evolutionary Paradigm the Wrong Framework to Study Genomes?

Based on this article, I think it is safe to conclude that we really don’t understand the molecular biology of genomes. We are living in the midst of a scientific revolution that is radically changing our view of genome structure and function. The architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

This insight also leads me to question if the evolutionary paradigm is the proper framework for thinking about genome structure and function. From my perspective, treating biological systems as the Creator’s handiwork provides a superior approach to understanding the genome. A creation model approach promotes scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. This expectation forces researchers to keep an open mind and drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

Over the last several years, I have raised concerns about the bias life scientists have harbored as they have worked to characterize the human genome (and genomes of other organisms). It is gratifying to me to see that there are life scientists who, though committed to the evolutionary paradigm, are beginning to recognize this bias as well.

The first step to addressing the problem of stereotyping—in any sector of society—is to acknowledge that it exists. Often, this step is the hardest one to take. The next step is to put in place structures to help overcome its harmful influence. Could it be that part of the solution to this instance of scientific stereotyping is to grant a creation model approach access to the scientific table?

Resources

Pseudogene Function

The Evolutionary Paradigm Hinders Scientific Advance

Endnotes
  1. For example, see Joshua Aronson et al., “Unhealthy Interactions: The Role of Stereotype Threat in Health Disparities,” American Journal of Public Health 103 (January 1, 2013): 50–56, doi:10.2105/AJPH.2012.300828.
  2. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (March 2020): 191–201, doi:10.1038/s41576-019-0196-1.
  3. Cheetham, Faulkner, and Dinger, 191–201.
  4. Cheetham, Faulkner, and Dinger, 191–201.
  5. Cheetham, Faulkner, and Dinger, 191–201.

Reprinted by permission of the author

Original article at:

https://reasons.org/explore/blogs/the-cells-design

New Genetic Evidence Affirms Human Uniqueness

By Fazale Rana – March 4, 2020

It’s a remarkable discovery—and a bit gruesome, too.

It is worth learning a bit about some of its unseemly details because this find may have far-reaching implications that shed light on our origins as a species.

In 2018, a group of locals discovered the remains of a two-year-old male puppy in the frozen mud (permafrost) of eastern Siberia. The remains date to 18,000 years ago. Remarkably, the skeleton, teeth, head, fur, lashes, and whiskers of the specimen are still intact.

Of Dogs and People

The Russian scientists studying this find (affectionately dubbed Dogor) are excited by the discovery. They think Dogor can shed light on the domestication of wolves into dogs. Biologists believe that this transition occurred around 15,000 years ago. Is Dogor a wolf? A dog? Or a transitional form? To answer these questions, the researchers have isolated DNA from one of Dogor’s ribs, which they think will provide them with genetic clues about Dogor’s identity—and clues concerning the domestication process.

Biologists study the domestication of animals because this process played a role in helping to establish human civilization. But biologists are also interested in animal domestication for another reason. They think this insight will tell us something about our identity as human beings.

In fact, in a separate study, a team of researchers from the University of Milan in Italy used insights about the genetic changes associated with the domestication of dogs, cats, sheep, and cattle to identify genetic features that make human beings (modern humans) stand apart from Neanderthals and Denisovans.1 They conclude that modern humans share some of the same genetic characteristics as domesticated animals, accounting for our unique and distinct facial features (compared to other hominins). They also conclude that our high level of cooperativeness and lack of aggression can be explained by these same genetic factors.

This work in comparative genomics demonstrates that significant anatomical and behavioral differences exist between humans and hominins, supporting the concept of human exceptionalism. Though the University of Milan researchers carried out their work from an evolutionary perspective, I believe their insights can be recast as scientific evidence for the biblical conception of human nature; namely, creatures uniquely made in God’s image.

Biological Changes that Led to Animal Domestication

Biologists believe that during the domestication process, many of the same biological changes took place in dogs, cats, sheep, and cattle. For example, they think that domestication produced mild deficits in neural crest cells. In other words, once animals are domesticated, they produce fewer, less-active neural crest cells. These stem cells play a role in neural development; thus, neural crest cell deficits tend to make animals friendlier and less aggressive. The deficits also impact physical features, yielding smaller skulls and teeth, floppy ears, and shorter, curlier tails.

Life scientists studying the domestication process have identified several genes of interest. One of these is BAZ1B. This gene plays a role in the maintenance of neural crest cells and controls their migration during embryological development. Presumably, changes in the expression of BAZ1B played a role in the domestication process.

Neural Crest Deficits and Williams Syndrome

As it turns out, there are two genetic disorders in modern humans that involve neural crest cells: Williams-Beuren syndrome (also called Williams syndrome) and Williams-Beuren region duplication syndrome. These genetic disorders involve the deletion or duplication, respectively, of a region of chromosome 7 (7q11.23). This chromosomal region harbors 28 genes. Craniofacial defects and altered cognitive and behavioral traits characterize these disorders. Specifically, people with these syndromes have cognitive limitations, smaller skulls, and elf-like faces, and they display excessive friendliness.

Among the 28 genes impacted by the two disorders is the human version of BAZ1B. This gene codes for a type of protein called a transcription factor. (Transcription factors play a role in regulating gene expression.)

The Role of BAZ1B in Neural Crest Cell Biology

To gain insight into the role BAZ1B plays in neural crest cell biology, the European research team developed induced pluripotent stem cell lines from (1) four patients with Williams syndrome, (2) three patients with Williams-Beuren region duplication syndrome, and (3) four people without either disorder. Then, they coaxed these cells in the laboratory to develop into neural crest cells.

Using a technique called RNA interference, they down-regulated BAZ1B in all three types of neural crest cells. By doing this, the researchers learned that changes in the expression of this gene altered the migration rates of the neural crest cells. Specifically, they discovered that neural crest cells developed from patients with Williams-Beuren region duplication syndrome migrated more slowly than control cells (generated from test subjects without either syndrome) and neural crest cells derived from patients with Williams syndrome migrated more rapidly than control cells.
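
The pattern is a dosage effect: migration speed tracks inversely with BAZ1B-region copy number. Here is a schematic sketch in Python (my own way of summarizing the reported direction of the effect, not the researchers’ model; the numbers are purely illustrative):

    CONTROL_COPIES = 2  # copies of the 7q11.23 region (and thus BAZ1B) in most people

    def relative_migration(copies: int) -> float:
        """Toy inverse-dosage rule: fewer BAZ1B copies -> faster migration."""
        return CONTROL_COPIES / copies

    for label, copies in [("Williams syndrome (1 copy)", 1),
                          ("typical genome (2 copies)", 2),
                          ("duplication syndrome (3 copies)", 3)]:
        print(f"{label}: {relative_migration(copies):.2f}x control migration speed")

The 2.00x and 0.67x figures are placeholders; the actual readouts were migration rates measured in culture, but the inverse ordering across the three genotypes is the finding that matters.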

The discovery that the BAZ1B gene influences neural crest cell migration is significant because these cells have to migrate to precise locations in the developing embryo to give rise to distinct cell types and tissues, including those that form craniofacial features.

Because BAZ1B encodes a transcription factor, altering its expression also alters the expression of the genes under its control. The team discovered that down-regulating BAZ1B affected 448 genes, many of which play a role in craniofacial development. By querying databases of genes that correlate with genetic disorders, the researchers also learned that some of the affected genes, when defective, cause disorders involving altered facial development and intellectual disabilities.
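
A hedged sketch of what this downstream analysis might look like in code: keep the genes whose expression changed after BAZ1B knockdown, then cross-reference them against a disorder-gene table. The gene names, thresholds, and database entries below are illustrative stand-ins, not data from the study.

```python
# A sketch of the downstream analysis: keep genes whose expression changed
# after BAZ1B knockdown, then cross-reference them against a disorder-gene
# table. Gene names, thresholds, and database entries are illustrative.

knockdown_results = {        # gene -> (log2 fold-change, adjusted p-value)
    "COL2A1": (-1.8, 0.001),
    "EDN3":   ( 1.2, 0.004),
    "SOX9":   (-0.9, 0.030),
    "ACTB":   ( 0.1, 0.900),  # housekeeping gene, essentially unchanged
}

disorder_genes = {           # gene -> associated disorder (toy database)
    "COL2A1": "a skeletal dysplasia with craniofacial involvement",
    "SOX9":   "a dysplasia affecting facial development",
}

# "Affected" genes: meaningful fold-change at a conventional significance cutoff
affected = {gene for gene, (lfc, p) in knockdown_results.items()
            if abs(lfc) >= 0.5 and p < 0.05}

for gene in sorted(affected & disorder_genes.keys()):
    print(f"{gene}: affected by knockdown and linked to {disorder_genes[gene]}")
print(f"{len(affected)} genes responded to BAZ1B down-regulation in this toy set")
```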

Lastly, the researchers determined that the BAZ1B protein (again, a transcription factor) targets genes that influence the development of dendrites and axons, the structures that carry signals between nerve cells.

BAZ1B Gene Expression in Modern and Archaic Humans

With these findings in place, the researchers wondered if differences in BAZ1B gene expression could account for anatomical and cognitive differences between modern humans and archaic humans, hominins such as Neanderthals and Denisovans. To carry out this query, the researchers compared the genomes of modern humans to those of Neanderthals and Denisovans, paying close attention to DNA sequence differences in genes under the influence of BAZ1B.

This comparison uncovered differences in the regulatory regions of genes targeted by the BAZ1B transcription factor, including genes that control neural crest cell activities and craniofacial anatomy. In other words, modern humans differ from Neanderthals and Denisovans in how these genes are regulated, and those regulatory differences strongly suggest that anatomical and cognitive differences existed between modern and archaic humans.
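
Conceptually, the comparison boils down to scanning the regulatory regions of BAZ1B-target genes for sequence differences between modern and archaic genomes. The sketch below uses toy sequences and hypothetical gene names to illustrate that logic.

```python
# A sketch of the comparative step: scan the promoter (regulatory region)
# of each BAZ1B-target gene for differences between the modern-human and
# archaic sequences. Sequences and gene names are toy data.

targets = {
    #  gene       modern-human promoter  Neanderthal/Denisovan promoter
    "TARGET_A": ("ATGCGTACGT",           "ATGCGTACGT"),  # identical
    "TARGET_B": ("ATGCGTACGT",           "ATGCATACCT"),  # two substitutions
}

def count_differences(seq_a: str, seq_b: str) -> int:
    """Count positions at which two aligned sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

for gene, (modern, archaic) in targets.items():
    diffs = count_differences(modern, archaic)
    status = "regulatory divergence" if diffs else "conserved"
    print(f"{gene}: {diffs} difference(s) -> {status}")
```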

Did Humans Domesticate Themselves?

The researchers interpret their findings as evidence for the self-domestication hypothesis—the idea that we domesticated ourselves after the evolutionary lineage that led to modern humans split from the Neanderthal/Denisovan line (around 600,000 years ago). In other words, just as modern humans domesticated dogs, cats, cattle, and sheep, we domesticated ourselves, leading to changes in our anatomical features that parallel changes (such as friendlier faces) in the features of animals we domesticated. Along with these anatomical changes, our self-domestication led to the high levels of cooperativeness characteristic of modern humans.

On the one hand, this is an interesting account that seems to have some experimental support. On the other, it is hard to escape the feeling that self-domestication as an explanation for the origin of modern humans is little more than an evolutionary just-so story.

It is worth noting that some evolutionary biologists find this account unconvincing. One is William Tecumseh Fitch III—an evolutionary biologist at the University of Vienna. He is skeptical of the precise parallels between animal domestication and human self-domestication. He states, “These are processes with both similarities and differences. I also don’t think that mutations in one or a few genes will ever make a good model for the many, many genes involved in domestication.”2

Adding to this skepticism is the fact that nobody has anything beyond a speculative explanation for why humans would domesticate themselves in the first place.

Genetic Differences Support the Idea of Human Exceptionalism

Regardless of the mechanism that produced the genetic differences between modern and archaic humans, this work can be enlisted in support of human uniqueness and exceptionalism.

Though the claim of human exceptionalism is controversial, a minority of scientists operating within the scientific mainstream embrace the idea that modern humans stand apart from all other extant and extinct creatures, including Neanderthals and Denisovans. These anthropologists argue that the following suite of capacities uniquely possessed by modern humans accounts for our exceptional nature:

  • symbolism
  • open-ended generative capacity
  • theory of mind
  • capacity to form complex social systems

As human beings, we effortlessly represent the world with discrete symbols, and we denote abstract concepts with symbols as well. Our ability to represent the world symbolically has interesting consequences when coupled with our ability to combine and recombine those symbols in countless ways to create alternate possibilities. Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds to other human beings.

But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together. And we can do this because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also have the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks. Forming these relationships requires friendliness and cooperativeness.

In effect, these qualities could be viewed as scientific descriptors of the image of God, if one adopts a resemblance view of the divine image.

This study demonstrates that, at a genetic level, modern humans appear to be uniquely designed to be friendlier, more cooperative, and less aggressive than other hominins—in part accounting for our capacity to form complex hierarchical social structures.

To put it differently, the unique capability of modern humans to form complex social hierarchies no longer needs to be inferred from the fossil and archaeological records. It has been robustly established by comparative genomics in combination with laboratory studies.

A Creation Model Perspective on Human Origins

This study not only supports human exceptionalism but also affirms RTB’s human origins model.

RTB’s biblical creation model identifies hominins such as Neanderthals and Denisovans as animals created by God. These extraordinary creatures possessed enough intelligence to assemble crude tools and even adopt some level of “culture.” However, the RTB model maintains that these hominins were not spiritual creatures. They were not made in God’s image. RTB’s model reserves this status exclusively for Adam and Eve and their descendants (modern humans).

Our model predicts many biological similarities will be found between the hominins and modern humans, but so too will significant differences. The greatest distinction will be observed in cognitive capacity, behavioral patterns, technological development, and culture—especially artistic and religious expression.

The results of this study fulfill these two predictions. Or, to put it another way, the RTB model’s interpretation of the hominins and their relationship to modern humans aligns with “mainstream” science.

But what about the similarities between the genetic fingerprint of modern humans and the genetic changes responsible for animal domestication that involve BAZ1B and genes under its influence?

Instead of viewing these features as traits that emerged through parallel and independent evolutionary histories, the RTB human origins model regards the shared traits as reflecting shared designs. In this case, through the process of domestication, modern humans stumbled upon the means (breeding through artificial selection) to effect genetic changes in wild animals that resemble some of the designed features of our genome that contribute to our unique and exceptional capacity for cooperation and friendliness.

It is true: studying the domestication process does, indeed, tell us something exceptionally important about who we are.

Endnotes
  1. Matteo Zanella et al., “Dosage Analysis of the 7q11.23 Williams Region Identifies BAZ1B as a Major Human Gene Patterning the Modern Human Face and Underlying Self-Domestication,” Science Advances 5, no. 12 (December 4, 2019): eaaw7908, doi:10.1126/sciadv.aaw7908.
  2. Michael Price, “Early Humans Domesticated Themselves, New Genetic Evidence Suggests,” Science (December 4, 2019), doi:10.1126/science.aba4534.

Reprinted with permission by the author

Original article at:

https://reasons.org/explore/blogs/the-cells-design