Biochemical Grammar Communicates the Case for Creation

BY FAZALE RANA – MAY 29, 2019

As I get older, I find myself forgetting things—a lot. But, thanks to smartphone technology, I have learned how to manage my forgetfulness by using the “Notes” app on my iPhone.


Figure 1: The Apple Notes app icon. Image credit: Wikipedia

This app makes it easy for me to:

  • Jot down ideas that suddenly come to me
  • List books I want to read and websites I want to visit
  • Make note of musical artists I want to check out
  • Record “to do” and grocery lists
  • Write down details I need to have at my fingertips when I travel
  • List new scientific discoveries with implications for the RTB creation model that I want to blog about, such as the recent discovery of a protein grammar calling attention to the elegant design of biochemical systems

And the list goes on. I will never forget again!

On top of that, I can use the Notes app to categorize and organize all my notes and house them in a single location. Thus, I don’t have to manage scraps of paper that invariably wind up getting scattered all over the place—and often lost.

And, as a bonus, the Notes app anticipates the next word I am going to use even before I type it. I find myself relying on this feature more and more. It is much easier to select a word than type it out. In fact, the more I use this feature, the better the app becomes at anticipating the next word I want to type.

Recently, a team of bioinformaticists from the University of Alabama at Birmingham (UAB) and the National Institutes of Health (NIH) used the same algorithm the Notes app uses to anticipate word usage to study protein architectures.1 Their analysis reveals new insight into the structural features of proteins and also highlights the analogy between the information housed in these biomolecules and human language. This analogy contributes to the revitalized Watchmaker argument presented in my book The Cell’s Design.

N-Gram Language Modeling

The algorithm used by the Notes app to anticipate the next word the user will likely type is called n-gram language modeling. This algorithm determines the probability of a word being used based on the previous word (or words) typed. (If the probability is based on a single word, it is called a unigram probability. If the calculation is based on the previous two words, it is called a bigram probability, and so on.) This algorithm “trains” the Notes app so that the more I use it, the more reliable the calculated probabilities—and, hence, the better the word recommendations.
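To make the idea concrete, here is a minimal Python sketch of how a bigram model is trained and queried. (This illustrates the general technique, not Apple's actual implementation; the sample notes and function names are invented.)

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count adjacent word pairs to estimate P(next word | previous word)."""
    pair_counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            pair_counts[prev][nxt] += 1
    return pair_counts

def predict_next(pair_counts, prev):
    """Rank candidate next words by their bigram probability."""
    counts = pair_counts[prev]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

# The more notes we "type," the better the counts, and thus the predictions.
notes = ["buy milk and eggs", "buy milk and bread", "buy stamps"]
model = train_bigram_model(notes)
print(predict_next(model, "buy"))   # 'milk' is twice as likely as 'stamps'
```

Extending the counts to the previous two words would give a trigram model; the training principle stays the same.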

N-Gram Language Modeling and the Case for a Creator

To understand why the work of the research team from UAB and NIH provides evidence for a Creator’s role in the origin and design of life, a brief review of protein structure is in order.

Protein Structure

Proteins are large, complex molecules that play a key role in virtually all of the cell’s operations. Biochemists have long known that the three-dimensional structure of a protein dictates its function.

Because proteins are such large, complex molecules, biochemists categorize protein structure into four different levels: primary, secondary, tertiary, and quaternary structures. A protein’s primary structure is the linear sequence of amino acids that make up each of its polypeptide chains.

The secondary structure refers to short-range three-dimensional arrangements of the polypeptide chain’s backbone arising from the interactions between chemical groups that make up its backbone. Three of the most common secondary structures are the random coil, alpha (α) helix, and beta (β) pleated sheet.

Tertiary structure describes the overall shape of the entire polypeptide chain and the location of each of its atoms in three-dimensional space. The structure and spatial orientation of the chemical groups that extend from the protein backbone are also part of the tertiary structure.

Quaternary structure arises when several individual polypeptide chains interact to form a functional protein complex.

 


Figure 2: The four levels of protein structure. Image credit: Shutterstock

Protein Domains

Within the tertiary structure of proteins, biochemists have discovered compact, self-contained regions that fold independently. These three-dimensional regions of the protein’s structure are called domains. Some proteins consist of a single compact domain, but many proteins possess several domains. In effect, domains can be thought to be the fundamental units of a protein’s tertiary structure. Each domain possesses a unique biochemical function. Biochemists refer to the spatial arrangement of domains as a protein’s domain architecture.

Researchers have discovered several thousand distinct protein domains. Many of these domains recur in different proteins, with each protein’s tertiary structure composed of a mix-and-match combination of domains. Biochemists have also learned that a relationship exists between the complexity of an organism and both the number of unique domains found in its protein set and the number of multi-domain proteins encoded by its genome.


Figure 3: Pyruvate kinase, an example of a protein with three domains. Image credit: Wikipedia

The Key Question in Protein Chemistry

As much progress as biochemists have made characterizing protein structure over the last several decades, they still lack a fundamental understanding of the relationship between primary structure (the amino acid sequence) and tertiary structure and, hence, protein function. In order to develop this insight, they need to determine the “rules” that dictate the way proteins fold. Treating proteins as information systems can help determine some of these rules.

Protein as Information Systems

Proteins are not only large, complex molecules but also information-harboring systems. The amino acid sequence that defines a protein’s primary structure is a type of information—biochemical information—with the individual amino acids analogous to the letters that make up an alphabet.

N-Gram Analysis of Proteins

To gain insight into the relationship between a protein’s primary structure and its tertiary structure, the researchers from UAB and NIH carried out an n-gram analysis of the 23 million protein domains found in the protein sets of 4,800 species across all three domains of life.

These researchers point out that an individual amino acid in a protein’s primary structure doesn’t contain information, just as an individual letter in an alphabet doesn’t harbor any meaning. In human language, the most basic unit that conveys meaning is a word. And, in proteins, the most basic unit that conveys biochemical meaning is a domain.

To decipher the “grammar” used by proteins, the researchers treated adjacent pairs of protein domains in the tertiary structure of each protein in the sample set as a bigram (similar to two words together). Surveying the proteins found in their data set of 4,800 species, they discovered that 95% of all the possible domain combinations don’t exist!

This finding is key. It indicates that there are, indeed, rules that dictate the way domains interact. In other words, just like certain word combinations never occur in human languages because of the rules of grammar, there appears to be a protein “grammar” that constrains the domain combinations in proteins. This insight implies that physicochemical constraints (which define protein grammar) dictate a protein’s tertiary structure, preventing 95% of conceivable domain-domain interactions.
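The kind of tally behind that 95% figure can be sketched in a few lines. (The domain names below are invented for illustration; this is not the researchers' data set or code.)

```python
def domain_bigram_coverage(proteins):
    """Given each protein as an ordered list of domain names, report how many
    of the conceivable ordered domain pairs (bigrams) actually occur."""
    domains = {d for p in proteins for d in p}
    observed = {pair for p in proteins for pair in zip(p, p[1:])}
    possible = len(domains) ** 2  # every ordered pairing of known domains
    return len(observed), possible, 1 - len(observed) / possible

# Toy data: three proteins described by their domain architectures.
proteins = [
    ["kinase", "SH2"],
    ["kinase", "SH2", "SH3"],
    ["PDZ", "kinase"],
]
observed, possible, fraction_absent = domain_bigram_coverage(proteins)
print(observed, possible, fraction_absent)  # 3 of 16 pairs occur; ~81% absent
```

Scaled up to several thousand known domains and 23 million domain instances, the same bookkeeping yields the researchers' result that 95% of conceivable pairings never appear.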

Entropy of Protein Grammar

In thermodynamics, entropy is often used as a measure of the disorder of a system. Information theorists borrow the concept of entropy and use it to measure the information content of a system. For information theorists, the entropy of a system is inversely proportional to the amount of information contained in a sequence of symbols. As the information content increases, the entropy of the sequence decreases, and vice versa. Using this concept, the UAB and NIH researchers calculated the entropy of the protein domain combinations.
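To illustrate how entropy is computed for a sequence of symbols, here is a small sketch using Shannon's standard formula. (The research team's exact normalization may differ; the symbol sequences are invented.)

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy, in bits per symbol, of a sequence of symbols."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A tightly constrained "grammar" reuses a few combinations over and over;
# an unconstrained one spreads evenly across many combinations.
constrained = ["AB", "AB", "AB", "CD"]
unconstrained = ["AB", "CD", "EF", "GH"]
print(shannon_entropy(constrained))    # lower: about 0.81 bits
print(shannon_entropy(unconstrained))  # higher: 2.0 bits
```

The skewed distribution scores lower entropy than the uniform one, which is the sense in which constraints on allowed combinations leave a statistical fingerprint.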

In human language, the entropy increases as the vocabulary increases. This makes sense because, as the number of words increases in a language, the likelihood that random word combinations would harbor meaning decreases. In like manner, the research team discovered that the entropy of the protein grammar increases as the number of domains increases. (This increase in entropy likely reflects the physicochemical constraints—the protein grammar, if you will—on domain interactions.)

Human languages all carry the same amount of information. That is to say, they all display the same entropy content. Information theorists interpret this observation as an indication that a universal grammar undergirds all human languages. It is intriguing that the researchers discovered that the protein “languages” across prokaryotes and eukaryotes all display the same level of entropy and, consequently, the same information content. This relationship holds despite the diversity and differences in complexity of the organisms in their data set. By analogy, this finding indicates that a universal grammar exists for proteins. Or to put it another way, the same set of physicochemical constraints dictates the way protein domains interact for all organisms.

At this point, the researchers don’t know what the grammatical rules are for proteins, but knowing that they exist paves the way for future studies. It also generates hope that one day biochemists might understand them and, in turn, use them to predict protein structure from amino acid sequences.

This study also illustrates how fruitful it can be to treat biochemical systems as information systems. The researchers conclude that “The similarities between natural languages and genomes are apparent when domains are treated as functional analogs of words in natural languages.”2

In my view, it is this relationship that points to a Creator’s role in the origin and design of life.

Protein Grammar and the Case for a Creator

As discussed in The Cell’s Design, the recognition that biochemical systems are information-based systems has interesting philosophical ramifications. Common, everyday experience teaches that information derives solely from the activity of human beings. So, by analogy, biochemical information systems, too, should come from a divine Mind. Or at least it is rational to hold that view.

But the case for a Creator strengthens when we recognize that it’s not merely the presence of information in biomolecules that contributes to this version of a revitalized Watchmaker analogy. Added vigor comes from the UAB and NIH researchers’ discovery that the mathematical structure of human languages and biochemical languages is identical.

Skeptics often dismiss the updated Watchmaker argument by arguing that biochemical information is not genuine information. Instead, they maintain that when scientists refer to biomolecules as harboring information, they are employing an illustrative analogy—a scientific metaphor—and nothing more. They accuse creationists and intelligent design proponents of misconstruing their use of analogical language to make the case for design.3

But the UAB and NIH scientists’ work questions the validity of this objection. Biochemical information has all of the properties of human language. It really is information, just like the information we conceive and use to communicate.

Is There a Biochemical Anthropic Principle?

This discovery also yields another interesting philosophical implication. It lends support to the existence of a biochemical anthropic principle. Discovery of a protein grammar means that there are physicochemical constraints on protein structure. It is remarkable to think that protein tertiary structures may be fundamentally dictated by the laws of nature rather than being the outworking of a historically contingent evolutionary process. To put it differently, the discovery of a protein grammar reveals that the structure of biological systems may reflect some deep, underlying principles that arise from the very nature of the universe itself. And yet these structures are precisely the types of structures life needs to exist.

I interpret this “coincidence” as evidence that our universe has been designed for a purpose. And as a Christian, I find that notion to resonate powerfully with the idea that life manifests from an intelligent Agent—namely, God.

Resources to Dig Deeper

Endnotes
  1. Lijia Yu et al., “Grammar of Protein Domain Architectures,” Proceedings of the National Academy of Sciences, USA 116, no. 9 (February 26, 2019): 3636–45, doi:10.1073/pnas.1814684116.
  2. Yu et al., 3636–45.
  3. For example, see Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science & Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/05/29/biochemical-grammar-communicates-the-case-for-creation

Why Mitochondria Make My List of Best Biological Designs

BY FAZALE RANA – MAY 1, 2019

A few days ago, I ran across a BuzzFeed list that catalogs 24 of the most poorly designed things in our time. Some of the items that stood out from the list for me were:

  • serial-wired Christmas lights
  • economy airplane seats
  • clamshell packaging
  • juice cartons
  • motion sensor faucets
  • jewel CD packaging
  • umbrellas

What were people thinking when they designed these things? It’s difficult to argue with BuzzFeed’s list, though I bet you might add a few things of your own to their list of poor designs.

If biologists were to make a list of poorly designed things, many would probably include…everything in biology. Most life scientists are influenced by an evolutionary perspective. Thus, they view biological systems as inherently flawed vestiges cobbled together by a set of historically contingent mechanisms.

Yet as our understanding of biological systems improves, evidence shows that many “poorly designed” systems are actually exquisitely assembled. It also becomes evident that many biological designs reflect an impeccable logic that explains why these systems are the way they are. In other words, advances in biology reveal that it makes better sense to attribute biological systems to the work of a Mind, not to unguided evolution.

Based on recent insights by biochemist and origin-of-life researcher Nick Lane, I would add mitochondria to my list of well-designed biological systems. Lane argues that complex cells and, ultimately, multicellular organisms would be impossible if it weren’t for mitochondria.1 (These organelles generate most of the ATP molecules used to power the operations of eukaryotic cells.) Toward this end, Lane has demonstrated that mitochondria’s properties are just-right for making complex eukaryotic cells possible. Without mitochondria, life would be limited to prokaryotic cells (bacteria and archaea).

To put it another way, Nick Lane has shown that prokaryotic cells could never evolve the complexity of the eukaryotic cells required for multicellular organisms. The reason has to do with bioenergetic constraints placed on prokaryotic cells. According to Lane, the advent of mitochondria allowed life to break free from these constraints, paving the way for complex life.


Figure 1: A Mitochondrion. Image credit: Shutterstock

Through Lane’s discovery, mitochondria reveal exquisite design and logical architecture and operations. Yet this is not necessarily what I (or many others) would have expected if mitochondria were the result of evolution. Rather, we’d expect biological systems to appear haphazard and purposeless, just good enough for the organism to survive and nothing more.

To understand why I (and many evolutionary biologists) would hold this view about mitochondria and eukaryotic cells (assuming that they were the product of evolutionary processes), it is necessary to review the current evolutionary explanation for their origins.

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis is the best explanation for the origin of complex eukaryotic cells. This hypothesis states that complex cells originated when single-celled microbes formed symbiotic relationships. “Host” microbes (most likely archaea) engulfed other archaea and/or bacteria, which then existed inside the host as endosymbionts.

The presumption, then, is that organelles, including mitochondria, were once endosymbionts. Evolutionary biologists believe that, once engulfed, the endosymbionts took up permanent residency within the host cell and even grew and divided inside the host. Over time, the endosymbionts and the host became mutually interdependent. For example, the endosymbionts provided a metabolic benefit for the host cell, such as serving as a source of ATP. In turn, the host cell provided nutrients to the endosymbionts. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

Based on this scenario, there is no real rationale for the existence of mitochondria (and eukaryotic cells). They are the way they are because they just wound up that way.

But Nick Lane’s insights suggest otherwise.

Lane’s analysis identifies a deep-seated rationale that accounts for the features of mitochondria (and eukaryotic cells) related to their contribution to cellular bioenergetics. To understand why mitochondria and eukaryotic cells are the way they are, we first need to understand why prokaryotic cells can never evolve into large complex cells, a necessary step for the advent of complex multicellular organisms.

Bioenergetic Constraints on Prokaryotic Cells

Lane has discovered that bioenergetic constraints keep bacterial and archaeal cells trapped at their current size and complexity. Key to discovering this constraint is a metric Lane devised called Available Energy per Gene (AEG). It turns out that the AEG in eukaryotic cells can be as much as 200,000 times larger than the AEG in prokaryotic cells. This extra energy allows eukaryotic cells to engage in a wide range of metabolic processes that support cellular complexity. Prokaryotic cells simply can’t afford such processes.

An average eukaryotic cell has between 20,000 and 40,000 genes; a typical bacterial cell has about 5,000 genes. Each gene encodes the information the cell’s machinery needs to make a distinct protein. And proteins are the workhorse molecules of the cell. More genes mean a more diverse suite of proteins, which means greater biochemical complexity.

So, what is so special about eukaryotic cells? Why don’t prokaryotic cells have the same AEG? Why do eukaryotic cells have an expanded repertoire of genes and prokaryotic cells don’t?

In short, the answer is: mitochondria.

On average, the volume of eukaryotic cells is about 15,000 times larger than that of prokaryotic cells. Eukaryotic cells’ larger size allows for their greater complexity. Lane estimates that for a prokaryotic cell to scale up to this volume, its radius would need to increase 25-fold and its surface area 625-fold.

Because the plasma membrane of bacteria is the site for ATP synthesis, increases in the surface area would allow the hypothetically enlarged bacteria to produce 625 times more ATP. But this increased ATP production doesn’t increase the AEG. Why is that?

The bacteria would have to produce 625 times more proteins to support the increased ATP production. Because the cell’s machinery must access the bacterium’s DNA to make these proteins, a single copy of the genome is insufficient to support all of the activity centered around the synthesis of that many proteins. In fact, Lane estimates that for a bacterium to increase its ATP production 625-fold, it would require 625 copies of its genome. In other words, even though the bacterium increased in size, the AEG remains effectively unchanged.


Figure 2: ATP Production at the Cell Membrane Surface. Image credit: Shutterstock

Things become more complicated when factoring in cell volume. When the surface area (and concomitant ATP production) increases by a factor of 625, the volume of the cell expands 15,000 times. To satisfy the demands of a larger cell, even more copies of the genome would be required, perhaps as many as 15,000. But energy production tops off at a 625-fold increase. This mismatch means that the AEG drops roughly 25-fold per gene. For a genome consisting of 5,000 genes, this drop means that a bacterium the size of a eukaryotic cell would have about 125,000 times less AEG than a typical eukaryotic cell and 200,000 times less AEG when compared to eukaryotes with genome sizes approaching 40,000 genes.
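The scaling numbers in this section hang together arithmetically. The sketch below is my reconstruction from the figures quoted above (a 25-fold larger radius and a 5,000-gene genome), not Lane's own calculation:

```python
# A back-of-the-envelope check of the scaling figures quoted above.
radius_factor = 25                          # hypothetical 25-fold larger radius
surface_factor = radius_factor ** 2         # membrane area, and thus ATP output
volume_factor = radius_factor ** 3          # cell volume, and thus genome copies

print(surface_factor)                       # 625-fold more ATP
print(volume_factor)                        # 15,625, i.e., roughly 15,000-fold

# Energy available per genome copy falls by the mismatch between the two:
aeg_drop = volume_factor / surface_factor   # 25-fold
genes_per_genome = 5_000
print(aeg_drop * genes_per_genome)          # on the order of the 125,000-fold gap
```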

Bioenergetic Freedom for Eukaryotic Cells

Thanks to mitochondria, eukaryotic cells are free from the bioenergetic constraints that ensnare prokaryotic cells. Mitochondria generate the same amount of ATP as a bacterial cell. However, their genome encodes only 13 proteins, so the organelle’s ATP demand is low. The net effect is that the mitochondria’s AEG skyrockets. Furthermore, mitochondrial membranes come equipped with an ATP transport protein that can pump the vast excess of ATP from the organelle interior into the cytoplasm for the eukaryotic cell to use.

To summarize, mitochondria’s small genome plus its prodigious ATP output are the keys to eukaryotic cells’ large AEG.

Of course, this raises a question: Why do mitochondria have genomes at all? Well, as it turns out, mitochondria need genomes for several reasons (which I’ve detailed in previous articles).

Other features of mitochondria are also essential for ATP production. For example, cardiolipin in the organelle’s inner membrane plays a role in stabilizing and organizing specific proteins needed for cellular energy production.

From a creation perspective it seems that if a Creator was going to design a eukaryotic cell from scratch, he would have to create an organelle just like a mitochondrion to provide the energy needed to sustain the cell’s complexity with a high AEG. Far from being an evolutionary “kludge job,” mitochondria appear to be an elegantly designed feature of eukaryotic cells with a just-right set of properties that allow for the cellular complexity needed to sustain complex multicellular life. It is eerie to think that unguided evolutionary events just happened to traverse the just-right evolutionary path to yield such an organelle.

As a Christian, I see the rationale that undergirds the design of mitochondria as the signature of the Creator’s handiwork in biology. I also view the anthropic coincidence associated with the origin of eukaryotic cells as reason to believe that life’s history has purpose and meaning, pointing toward the advent of complex life and humanity.

So, now you know why mitochondria make my list.

Resources

Endnotes
  1. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6, no. 5 (May 2014): a015982, doi:10.1101/cshperspect.a015982.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/05/01/why-mitochondria-make-my-list-of-best-biological-designs

Self-Assembly of Protein Machines: Evidence for Evolution or Creation?

BY FAZALE RANA – APRIL 17, 2019

I finally upgraded my iPhone a few weeks ago from a 5s to an 8 Plus. I had little choice. The battery on my cell phone would no longer hold a charge.

I’d put off getting a new one for as long as possible. It just didn’t make sense to spend money chasing the latest and greatest technology when current cell phone technology worked perfectly fine for me. Apart from the battery life and a less-than-ideal camera, I was happy with my iPhone 5s. Now I am really glad I made the switch.

Then, the other day I caught myself wistfully eyeing the iPhone X. And, today, I learned that Apple is preparing the release of the iPhone 11 (or XI or XT). Where will Apple’s technology upgrades take us next? I can’t wait to find out.

Have I become a technology junkie?

It is remarkable how quickly cell phone technology advances. It is also remarkable how alluring new technology can be. The next thing you know, Apple will release an iPhone that will assemble itself when it comes out of the box. . . . Probably not.

But, if the work of engineers at MIT ever reaches fruition, it is possible that smartphone manufacturers one day just might rely on a self-assembly process to produce cell phones.

A Self-Assembling Cell Phone

The Self-Assembly Lab at MIT has developed a pilot process to manufacture cell phones by self-assembly.

To do this, they designed their cell phone to consist of six parts that fit together in a lock-and-key manner. By placing the cell phone pieces into a tumbler that turns at the just-right speed, the pieces automatically combine with one another, bit by bit, until the cell phone is assembled.

Few errors occur during the assembly process. Only pieces designed to fit together combine with one another because of the lock-and-key fabrication.

Self-Assembly and the Case for a Creator

It is quite likely that the work of MIT’s Self-Assembly Lab (and other labs like it) will one day revolutionize manufacturing—not just for iPhones, but for other types of products as well.

As alluring as this new technology might be, I am more intrigued by its implications for the creation-evolution controversy. What do self-assembly processes have to do with the creation-evolution debate? More than we might realize.

I believe self-assembly processes strengthen the watchmaker argument for God’s existence (and role in the origin of life). Namely, this cutting-edge technology makes it possible to respond to a common objection leveled against this design argument.

To understand why this engineering breakthrough is so important for the Watchmaker argument, a little background is necessary.

The Watchmaker Argument

Anglican natural theologian William Paley (1743–1805) posited the Watchmaker argument in the eighteenth century. It went on to become one of the best-known arguments for God’s existence. The argument hinges on the comparison Paley made between a watch and a rock. He argued that a rock’s existence can be explained by the outworking of natural processes—not so for a watch.

The characteristics of a watch—specifically the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Employing an analogy, Paley asserted that just as a watch requires a watchmaker, so too, life requires a Creator. Paley noted that biological systems display a wide range of features characterized by the precise interplay of complex parts designed to interact for specific purposes. In other words, biological systems have much more in common with a watch than a rock. This similarity being the case, it logically follows that life must stem from the work of a Divine Watchmaker.

Biochemistry and the Watchmaker Argument

As I discuss in my book The Cell’s Design, advances in biochemistry have reinvigorated the Watchmaker argument. The hallmark features of biochemical systems are precisely the same properties displayed in objects, devices, and systems designed and crafted by humans.

Cells contain protein complexes that are structured to operate as biomolecular motors and machines. Some molecular-level biomachines are strict analogs to machinery produced by human designers. In fact, in many instances, a one-to-one relationship exists between the parts of manufactured machines and the molecular components of biomachines. (A few examples of these biomolecular machines are discussed in the articles listed in the Resources section.)

We know that machines originate in human minds that comprehend and then implement designs. So, when scientists discover example after example of biomolecular machines inside the cell with an eerie and startling similarity to the machines we produce, it makes sense to conclude that these machines and, hence, life, must also have originated in a Mind.

A Skeptic’s Challenge

As you might imagine, skeptics have leveled objections against the Watchmaker argument since its introduction in the 1700s. Today, when skeptics criticize the latest version of the Watchmaker argument (based on biochemical designs), the influence of Scottish skeptic David Hume (1711–1776) can be seen and felt.

In his 1779 work Dialogues Concerning Natural Religion, Hume presented several criticisms of design arguments. The foremost centered on the nature of analogical reasoning. Hume argued that the conclusions resulting from analogical reasoning are only sound when the things compared are highly similar to each other. The more similar, the stronger the conclusion. The less similar, the weaker the conclusion.

Hume dismissed the original version of the Watchmaker argument by maintaining that organisms and watches are nothing alike. They are too dissimilar for a good analogy. In other words, what is true for a watch is not necessarily true for an organism and, therefore, it doesn’t follow that organisms require a Divine Watchmaker, just because a watch does.

In effect, this is one of the chief reasons why some skeptics today dismiss the biochemical Watchmaker argument. For example, philosopher Massimo Pigliucci has insisted that Paley’s analogy is purely metaphorical and does not reflect a true analogical relationship. He maintains that any similarity between biomolecular machines and human designs reflects merely illustrative analogies that life scientists use to communicate the structure and function of these protein complexes via familiar concepts and language. In other words, it is illegitimate to use the “analogies” between biomolecular machines and manufactured machines to make a case for a Creator.1

A Response Based on Insights from Nanotechnology

I have responded to this objection by pointing out that nanotechnologists have isolated biomolecular machines from the cell and incorporated these protein complexes into nanodevices and nanosystems for the explicit purpose of taking advantage of their machine-like properties. These transplanted biomachines power motion and movements in the devices, which otherwise would be impossible with current technology. In other words, nanotechnologists view these biomolecular systems as actual machines and utilize them as such. Their work demonstrates that biomolecular machines are literal, not metaphorical, machines. (See the Resources section for articles describing this work.)

Is Self-Assembly Evidence of Evolution or Design?

Another criticism—inspired by Hume—is that machines designed by humans don’t self-assemble, but biochemical machines do. Skeptics say this undermines the Watchmaker analogy. I have heard this criticism in the past, but it came up recently in a dialogue I had with a skeptic in a Facebook group.

I wrote that “What we discover when we work out the structure and function of protein complexes are features that are akin to an automobile engine, not an outcropping of rocks.”

A skeptic named Maurice responded: “Your analogy is false. Cars do not spontaneously self-assemble—in that case there is a prohibitive energy barrier. But hexagonal lava rocks can and do—there is no energy barrier to prohibit that from happening.”

Maurice argues that my analogy is a poor one because protein complexes in the cell self-assemble, whereas automobile engines can’t. For Maurice (and other skeptics), this distinction serves to make manufactured machines qualitatively different from biomolecular machines. On the other hand, hexagonal patterns in lava rocks give the appearance of design but are actually formed spontaneously. For skeptics like Maurice, this feature indicates that the design displayed by protein complexes in the cell is apparent, not true, design.

Maurice added: “Given that nature can make hexagonal lava blocks look ‘designed,’ it can certainly make other objects look ‘designed.’ Design is not a scientific term.”

Self-Assembly and the Watchmaker Argument

This is where the MIT engineers’ fascinating work comes into play.

Engineers continue to make significant progress toward developing self-assembly processes for manufacturing purposes. It very well could be that in the future a number of machines and devices will be designed to self-assemble. The researchers’ work makes evident that part of the strategy for designing self-assembling machines centers on creating components that not only contribute to the machine’s function but also interact precisely with the other components so that the machine assembles on its own.

The operative word here is designed. For machines to self-assemble they must be designed to self-assemble.

This requirement holds true for biochemical machines, too. The protein subunits that interact to form the biomolecular machines appear to be designed for self-assembly. Protein-protein binding sites on the surface of the subunits mediate this self-assembly process. These binding sites require high-precision interactions to ensure that the binding between subunits takes place with a high degree of accuracy—in the same way that the MIT engineers designed the cell phone pieces to precisely combine through lock-and-key interactions.
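The lock-and-key idea can be illustrated with a toy sketch. The code below is hypothetical and not from the article: made-up "parts" carry made-up interface codes, and a chain assembles only when every adjacent pair of interfaces is complementary, the way protein subunits must be shaped to find their partners.

```python
# Toy sketch (hypothetical, not from the article): parts self-assemble only if
# each part's "binding site" is complementary to its neighbor's.
def complementary(a, b):
    """A crude lock-and-key rule: a site pairs only with its designated partner."""
    pairs = {"A": "a", "B": "b", "C": "c"}
    return pairs.get(a) == b

def self_assemble(parts):
    """Chain parts together; fail if any adjacent interfaces don't match."""
    for left, right in zip(parts, parts[1:]):
        if not complementary(left["right_site"], right["left_site"]):
            return None  # mis-designed interfaces: no machine forms
    return "-".join(p["name"] for p in parts)

# Parts whose interfaces were "designed" to match in sequence
designed = [
    {"name": "rotor",  "left_site": None, "right_site": "A"},
    {"name": "stator", "left_site": "a",  "right_site": "B"},
    {"name": "shaft",  "left_site": "b",  "right_site": None},
]
print(self_assemble(designed))        # rotor-stator-shaft
print(self_assemble(designed[::-1]))  # None: same parts, undesigned order
```

The point of the sketch is that assembly succeeds only because the interfaces were specified in advance; randomize the arrangement and nothing forms.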

blog__inline--self-assembly-of-protein-machines

Figure: ATP synthase is a biomolecular machine that functions as a literal electrically powered rotary motor. It is assembled from protein subunits. Credit: Shutterstock

The level of design required to ensure that protein subunits interact precisely to form machine-like protein complexes is only beginning to come into full view.2 Biochemists who work in the area of protein design still don’t fully understand the biophysical mechanisms that dictate the assembly of protein subunits. And, while they can design proteins that will self-assemble, they struggle to replicate the complexity of the self-assembly process that routinely takes place inside the cell.

Thanks to advances in technology, biomolecular machines’ ability to self-assemble should no longer count against the Watchmaker argument. Instead, self-assembly becomes one more feature that strengthens Paley’s point.

The Watchmaker Prediction

Advances in self-assembly also satisfy the Watchmaker prediction, further strengthening the case for a Creator. In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed.

The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction. As human designers develop new technologies, examples of these technologies, though previously unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker argument truly serves as evidence for a Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

In effect, the developments in self-assembly technology and its prospective use in future manufacturing operations fulfill the Watchmaker prediction. Along these lines, it’s even more provocative to think that cellular self-assembly processes are providing insight to engineers who are working to develop similar technology.

Maybe I am a technology junkie, after all. I find it remarkable that as we develop new technologies we discover that they already exist in the cell, and because they do, the Watchmaker argument becomes more and more compelling.

Can you hear me now?

Resources

The Biochemical Watchmaker Argument

Challenges to the Biochemical Watchmaker Argument

Endnotes
  1. Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors are Bad for Science and Science Education,” Science and Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
  2. For example, see Christoffer H. Norn and Ingemar André, “Computational Design of Protein Self-Assembly,” Current Opinion in Structural Biology 39 (August 2016): 39–45, doi:10.1016/j.sbi.2016.04.002.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/04/17/self-assembly-of-protein-machines-evidence-for-evolution-or-creation

Origins of Monogamy Cause Evolutionary Paradigm Breakup

BY FAZALE RANA – MARCH 20, 2019

Gregg Allman fronted the Allman Brothers Band for over 40 years until his death in 2017 at the age of 69. Writer Mark Binelli described Allman’s voice as “a beautifully scarred blues howl, old beyond its years.”1

A rock legend who helped pioneer southern rock, Allman was as well known for his chaotic, dysfunctional personal life as for his accomplishments as a musician. Allman struggled with drug abuse and addiction. He was also married six times, with each marriage ending in divorce and, at times, in a public spectacle.

In a 2009 interview with Binelli for Rolling Stone, Allman reflected on his failed marriages: “To tell you the truth, it’s my sixth marriage—I’m starting to think it’s me.”2

Allman isn’t the only one to have trouble with marriage. As it turns out, so do evolutionary biologists—but for different reasons than Gregg Allman.

To be more exact, evolutionary biologists have made an unexpected discovery about the evolutionary origin of monogamy (a single mate for at least a season) in animals—an insight that raises questions about the evolutionary explanation. Based on recent work by a large team of investigators from the University of Texas at Austin (UT), it looks like monogamy arose independently, multiple times, in animals. And these origin events were driven, in each instance, by the same genetic changes.3

In my view, this remarkable example of evolutionary convergence highlights one of the many limitations of evolutionary theory. It also contributes to my skepticism (and that of other intelligent design proponents/creationists) about the central claim of the evolutionary paradigm: namely, that the origin, design, history, and diversity of life can be fully explained by evolutionary mechanisms.

At the same time, the independent origins of monogamy—driven by the same genetic changes—(as well as other examples of convergence) find a ready explanation within a creation model framework.

Historical Contingency

To appreciate why I believe this discovery is problematic for the evolutionary paradigm, it is necessary to consider the nature of evolutionary mechanisms. According to the evolutionary biologist Stephen Jay Gould (1941–2002), evolutionary transformations occur in a historically contingent manner. This means that the evolutionary process consists of an extended sequence of unpredictable, chance events. If any of these events were altered, it would send evolution down a different trajectory.

To help clarify this concept, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time. In other words, the evolutionary process should not repeat itself. And rarely should it arrive at the same end point.
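Gould's tape-replay metaphor can be made concrete with a toy simulation (my illustration, not Gould's): model a lineage as a long run of chance steps, and "replay the tape" by rerunning the same process with different random histories. The replays rarely end up in the same place.

```python
import random

def replay_tape(seed, steps=1000):
    """Toy model of a historically contingent process: each 'generation'
    takes an unpredictable chance step, so the endpoint depends on the
    entire history of events."""
    rng = random.Random(seed)  # a different seed = a different history
    position = 0
    for _ in range(steps):
        position += rng.choice([-1, 1])
    return position

# "Replay life's tape" ten times, each with a different history
endpoints = [replay_tape(seed) for seed in range(10)]
print(endpoints)  # the replays rarely agree on an endpoint
```

The sketch captures only the contingency claim itself: identical starting conditions plus different chance events yield divergent outcomes.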

Gould based the concept of historical contingency on his understanding of the mechanisms that drive evolutionary change. Since the time of Gould’s original description of historical contingency, several studies have affirmed his view. (For descriptions of some representative studies, see the articles listed in the Resources section.) In other words, researchers have experimentally shown that the evolutionary process is, indeed, historically contingent.

A Failed Prediction of the Evolutionary Paradigm

Given historical contingency, it seems unlikely that distinct evolutionary pathways would lead to identical or nearly identical outcomes. Yet, when viewed from an evolutionary standpoint, it appears as if repeated evolutionary outcomes are a common occurrence throughout life’s history. This phenomenon—referred to as convergence—is widespread. Evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, that identical evolutionary outcomes are a characteristic feature of the biological realm.5 Scientists see these repeated outcomes at the ecological, organismal, biochemical, and genetic levels. In fact, in my book The Cell’s Design, I describe 100 examples of convergence at the biochemical level.

In other words, biologists have made two contradictory observations within the evolutionary framework: (1) evolutionary processes are historically contingent and (2) evolutionary convergence is widespread. Since the publication of The Cell’s Design, many new examples of convergence have been unearthed, including the recent origin of monogamy discovery.

Convergent Origins of Monogamy

Working within the framework of the evolutionary paradigm, the UT research team sought to understand the evolutionary transition to monogamy. To achieve this insight, they compared the gene expression profiles in the neural tissues of reproductive males for closely related pairs of species, with one species displaying monogamous behavior and the other displaying nonmonogamous behavior.

The species pairs spanned the major vertebrate groups and included mice, voles, songbirds, frogs, and cichlids. From an evolutionary perspective, these organisms would have shared a common ancestor 450 million years ago.

Monogamous behavior is remarkably complex. It involves the formation of bonds between males and females, care of offspring by both parents, and increased territorial defense. Yet, the researchers discovered that in each instance of monogamy the gene expression profiles in the neural tissues of the monogamous species were identical and distinct from the gene expression patterns for their nonmonogamous counterparts. Specifically, they observed the same differences in gene expression for the same 24 genes. Interestingly, genes that played a role in neural development, cell-cell signaling, synaptic activity, learning and memory, and cognitive function displayed enhanced gene expression. Genes involved in gene transcription and AMPA receptor regulation were down-regulated.

So, how do the researchers account for this spectacular example of convergence? They conclude that a “universal transcriptomic mechanism” exists for monogamy and speculate that the gene modules needed for monogamous behavior already existed in the last common ancestor of vertebrates. When needed, these modules were independently recruited at different times in evolutionary history to yield monogamous species.

Yet, given the number of genes involved and the specific changes in gene expression needed to produce the complex behavior associated with monogamous reproduction, it seems unlikely that this transformation would happen a single time, let alone multiple times, in the exact same way. In fact, Rebecca Young, the lead author of the journal article detailing the UT research team’s work, notes that “Most people wouldn’t expect that across 450 million years, transitions to such complex behaviors would happen the same way every time.”6

So, is there another way to explain convergence?

Convergence and the Case for a Creator

Prior to Darwin (1809–1882), biologists referred to shared biological features found in organisms that cluster into disparate biological groups as analogies. (In an evolutionary framework, analogies are referred to as evolutionary convergences.) They viewed analogous systems as designs conceived by the Creator that were then physically manifested in the biological realm and distributed among unrelated organisms.

In light of this historical precedent, I interpret convergent features (analogies) as the handiwork of a Divine Mind. The repeated origins of biological features equate to repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

Thus, the idea of monogamous convergence seems to divorce itself from the evolutionary framework, but it makes for a solid marriage in a creation model framework.

Resources

Endnotes
  1. Mark Binelli, “Gregg Allman: The Lost Brother,” Rolling Stone, no. 1082/1083 (July 9–23, 2009), https://www.rollingstone.com/music/music-features/gregg-allman-the-lost-brother-108623/.
  2. Binelli, “Gregg Allman: The Lost Brother.”
  3. Rebecca L. Young et al., “Conserved Transcriptomic Profiles Underpin Monogamy across Vertebrates,” Proceedings of the National Academy of Sciences, USA 116, no. 4 (January 22, 2019): 1331–36, doi:10.1073/pnas.1813775116.
  4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
  5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
  6. University of Texas at Austin, “Evolution Used Same Genetic Formula to Turn Animals Monogamous,” ScienceDaily (January 7, 2019), www.sciencedaily.com/releases/2019/01/1901071507.htm.

Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2019/03/20/origins-of-monogamy-cause-evolutionary-paradigm-breakup

Endosymbiont Hypothesis and the Ironic Case for a Creator


BY FAZALE RANA – DECEMBER 12, 2018

i·ro·ny

The use of words to express something different from and often opposite to their literal meaning.
Incongruity between what might be expected and what actually occurs.

—The Free Dictionary

People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Ironically, though, it does. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

The Endosymbiont Hypothesis

Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

endosymbiont-hypothesis-and-the-ironic-case-for-a-creator-1

Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

Engineering Endosymbiosis

First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, this strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes a transporter protein that can pump thiamin into the cell from the exterior environment, the strain can grow if an external supply of thiamin is available.) When the bacteria are incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes that exogenous source, rendering E. coli dependent on the yeast cell’s metabolic processes.

Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

The team then exposed yeast cells (that were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, E. coli becomes established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cell.

Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from the E. coli endosymbionts, which were engineered to transport ATP from their cellular interior to the exterior environment (the yeast cytoplasm).

The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doublings. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply deficient E. coli strains (unable to make thiamin, NAD, or serine) survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

Evolution or Intentional Design?

The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.

endosymbiont-hypothesis-and-the-ironic-case-for-a-creator-2

Figure 2: Lab researcher. Image credit: Shutterstock.

Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

Resources

Endnotes

  1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018), doi:10.1073/pnas.1813143115.
  2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/12/12/endosymbiont-hypothesis-and-the-ironic-case-for-a-creator

Spider Silk Inspires New Technology and the Case for a Creator


BY FAZALE RANA – NOVEMBER 28, 2018
Mark your calendars!

On December 14th (2018), Columbia Pictures—in collaboration with Sony Pictures Animation—will release a full-length animated feature: Spider-Man: Into the Spider-Verse. The story features Miles Morales, an Afro-Latino teenager, as Spider-Man.

Morales accidentally becomes transported from his universe to ours, where Peter Parker is Spider-Man. Parker meets Morales and teaches him how to be Spider-Man. Along the way, they encounter different versions of Spider-Man from alternate dimensions. All of them team up to save the multiverse and to find a way to return to their own versions of reality.

What could be better than that?

In 1962, Spider-Man’s creators, Stan Lee and Steve Ditko, drew inspiration for their superhero from the amazing abilities of spiders. And today, engineers find similar inspiration, particularly when it comes to spider silk. The remarkable properties of spider silk are leading to the creation of new technologies.

Synthetic Spider Silk

Engineers are fascinated by spider silk because this material displays astonishingly high tensile strength and ductility (pliability), properties that allow it to absorb huge amounts of energy before breaking. At only one-sixth the density of steel, spider silk can be up to four times stronger on a per-weight basis.
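The "one-sixth the density, up to four times stronger per weight" comparison can be checked with a quick back-of-the-envelope calculation. The numbers below are representative textbook values that I am assuming for illustration, not figures taken from the article; the key quantity is specific strength, i.e., tensile strength divided by density.

```python
# Representative (assumed) material properties
steel_density = 7850.0   # kg/m^3
steel_strength = 1.5e9   # Pa, high-strength steel tensile strength
silk_density = 1300.0    # kg/m^3, roughly one-sixth that of steel
silk_strength = 1.0e9    # Pa, dragline spider silk tensile strength

# Specific strength = tensile strength / density (higher is stronger per weight)
steel_specific = steel_strength / steel_density
silk_specific = silk_strength / silk_density

print(silk_density / steel_density)    # ~0.17, about one-sixth the density
print(silk_specific / steel_specific)  # ~4, about four times stronger per weight
```

With these assumed inputs, silk loses to steel in absolute tensile strength but beats it roughly fourfold once weight is taken into account, which is what the per-weight claim means.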

By studying this remarkable substance, engineers hope that they can gain insight and inspiration to engineer next-generation materials. According to Northwestern University researcher Nathan C. Gianneschi, who is attempting to produce synthetic versions of spider silk, “One cannot overstate the potential impact on materials and engineering if we can synthetically replicate the natural process to produce artificial fibers at scale. Simply put, it would be transformative.”1

Gregory P. Holland of San Diego State University, one of Gianneschi’s collaborators, states, “The practical applications for materials like this are essentially limitless.”2 As a case in point, synthetic versions of spider silk could be used to make textiles for military personnel and first responders and to make construction materials such as cables. They would also have biomedical utility and could be used to produce environmentally friendly plastics.

The Quest to Create Synthetic Spider Silk

But things aren’t that simple. Even though life scientists and engineers understand the chemical structure of spider silk and how its structural features influence its mechanical properties, they have not been able to create synthetic versions of it with the same set of desired properties.

 

blog__inline--spider-silk-inspires-new-technology

Figure 1: The Molecular Architecture of Spider Silk. Fibers of spider silk consist of proteins that contain crystalline regions separated by amorphous regions. The crystals form from regions of the protein chain that fold into structures called beta-sheets. These beta-sheets stack together to give the spider silk its tensile strength. The amorphous regions give the silk fibers ductility. Image credit: Chen-Pan Liao.

Researchers working to create synthetic spider silk speculate that the process by which the spider spins the silk may play a critical role in establishing the biomaterial’s tensile strength and ductility. Before it is extruded, silk exists in a precursor form in the silk gland. Researchers think that the key to generating synthetic spider silk with the same properties as naturally formed spider silk may be found by mimicking the structure of the silk proteins in precursor form.

Previous work suggests that the proteins that make up spider silk exist as simple micelles in the silk gland and that when spun from this form, fibers with greater-than-steel strength are formed. But researchers’ attempts to apply this insight in a laboratory setting failed to yield synthetic silk with the desired properties.

The Structure of Spider Silk Precursors

Hoping to help unravel this problem, a team of American collaborators led by Gianneschi and Holland recently provided a detailed characterization of the structure of the silk protein precursors in spider glands.3 They discovered that the silk proteins form micelles, but the micelles aren’t simple. Instead, they assemble into a complex structure comprised of a hierarchy of subdomains. Researchers also learned that when they sheared these nanoassemblies of precursor proteins, fibers formed. If they can replicate these hierarchical nanostructures in the lab, researchers believe they may be able to construct synthetic spider silk with the long-sought-after tensile strength and ductility.

Biomimetics and Bioinspiration

The attempt to find inspiration for new technology is not limited to spider silk. It has become rather commonplace for engineers to employ insights from arthropod biology (which includes spiders and insects) to solve engineering problems and to inspire the invention of new technologies—even technologies unlike anything found in nature. In fact, I discuss this practice in an essay I contributed to the book God and the World of Insects.

This activity falls under the domain of two relatively new and exciting areas of engineering known as biomimetics and bioinspiration. As the names imply, biomimetics involves direct mimicry of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise.

The Converse Watchmaker Argument for God’s Existence

The idea that biological designs can inspire engineering and technology advances is highly provocative. It highlights the elegant designs found throughout the living realm. In the case of spider silk, design elegance is not limited to the structure of spider silk but extends to its manufacturing process as well—one that still can’t be duplicated by engineers.

The elegance of these designs makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument. (For a detailed discussion see the essay I contributed to the book Building Bridges, entitled, “The Inspirational Design of DNA.”)

The argument can be stated like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models for inspiring the development of new technologies. Indeed, this scenario is what scientists observe in nature. Therefore, it becomes reasonable to think that biological designs are the work of a Creator.

Biomimetics and the Challenge to the Evolutionary Paradigm

From my perspective, the use of biological designs to guide engineering efforts seems fundamentally at odds with evolutionary theory. Generally speaking, evolutionary biologists view biological systems as the products of an unguided, historically contingent process that co-opts preexisting systems to cobble together new ones. Evolutionary mechanisms can optimize these systems, but even then they are, in essence, still kludges.

Given the unguided nature of evolutionary mechanisms, does it make sense for engineers to rely on biological systems to solve problems and inspire new technologies? Is it in alignment with evolutionary beliefs to build an entire subdiscipline of engineering upon mimicking biological designs? I would argue that these engineering subdisciplines do not fit with the evolutionary paradigm.

On the other hand, biomimetics and bioinspiration naturally flow out of a creation model approach to biology. Using designs in nature to inspire engineering only makes sense if these designs arose from an intelligent Mind, whether in this universe or in any of the dimensions of the Spider-Verse.

Resources

Endnotes

  1. Northwestern University, “Mystery of How Black Widow Spiders Create Steel-Strength Silk Webs further Unravelled,” Phys.org, Science X, October 22, 2018, https://phys.org/news/2018-10-mystery-black-widow-spiders-steel-strength.html.
  2. Northwestern University, “Mystery of How Black Widow Spiders Create.”
  3. Lucas R. Parent et al., “Hierarchical Spidroin Micellar Nanoparticles as the Fundamental Precursors of Spider Silks,” Proceedings of the National Academy of Sciences USA (October 2018), doi:10.1073/pnas.1810203115.

Vocal Signals Smile on the Case for Human Exceptionalism


BY FAZALE RANA – NOVEMBER 21, 2018

Before Thanksgiving each year, those of us who work at Reasons to Believe (RTB) headquarters take part in an annual custom. We put our work on pause and use that time to call donors, thanking them for supporting RTB’s mission. (It’s a tradition we have all come to love, by the way.)

Before we start making our calls, our ministry advancement team leads a staff meeting to organize our efforts. And each year at these meetings, they remind us to smile when we talk to donors. I always found this to be an odd piece of advice, but they insist that when we talk to people, our smiles come across over the phone.

Well, it turns out that the helpful advice of our ministry advancement team has scientific merit, based on a recent study from a team of neuroscientists and psychologists from France and the UK.1 This research highlights the importance of vocal signaling for communicating emotions between people. And from my perspective, the work also supports the notion of human exceptionalism and the biblical concept of the image of God.

We Can Hear Smiles

The research team was motivated to perform this study in order to learn the role vocal signaling plays in social cognition. They chose to focus on auditory “smiles,” because, as these researchers point out, smiles are among the most powerful facial expressions and one of the earliest to develop in children. As I am sure we all know, smiles express positive feelings and are contagious.

When we smile, our zygomaticus major muscle contracts bilaterally and causes our lips to stretch. This stretching alters the sounds of our voices. So, the question becomes: Can we hear other people when they smile?


Figure 1: Zygomaticus major. Image credit: Wikipedia

To determine if people can “hear” smiles, the researchers recorded actors who spoke a range of French phonemes, with and without smiling. Then, they modeled the changes in the spectral patterns that occurred in the actors’ voices when they smiled while they spoke.

The researchers used this model to manipulate recordings of spoken sentences so that they would sound like they were spoken by someone who was smiling (while keeping other features such as pitch, content, speed, gender, etc., unchanged). Then, they asked volunteers to rate the “smiley-ness” of voices before and after manipulation of the recordings. They found that the volunteers could distinguish the transformed phonemes from those that weren’t altered.

Next, they asked the volunteers to mimic the sounds of the “smiley” phonemes. The researchers noted that for the volunteers to do so, they had to smile.

Following these preliminary experiments, the researchers asked volunteers to describe their emotions when listening to transformed phonemes compared to those that weren’t transformed. They found that when volunteers heard the altered phonemes, they expressed a heightened sense of joy and irony.

Lastly, the researchers used electromyography to monitor the volunteers’ facial muscles so that they could detect smiling and frowning as the volunteers listened to a set of 60 sentences—some manipulated (to sound as if they were spoken by someone who was smiling) and some unaltered. They found that when the volunteers judged speech to be “smiley,” they were more likely to smile and less likely to frown.

In other words, people can detect auditory smiles and respond by mimicking them with smiles of their own.

Auditory Signaling and Human Exceptionalism

This research demonstrates that both the visual and auditory cues we receive from other people help us understand their emotional state and be influenced by it. Our ability to see and hear smiles helps us develop empathy toward others. Undoubtedly, this trait plays an important role in our ability to link our minds together and to form complex social structures—two characteristics that some anthropologists believe contribute to human exceptionalism.

The notion that human beings differ in degree, not kind, from other creatures has been a mainstay concept in anthropology and primatology for over 150 years. And it has been the primary reason why so many people have abandoned the belief that human beings bear God’s image.

Yet, this stalwart view in anthropology is losing its mooring, with the concept of human exceptionalism taking its place. A growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not merely degree, from other creatures—including Neanderthals. Ironically, the scientists who argue for this updated perspective have developed evidence for human exceptionalism in their attempts to understand how the human mind evolved. And, yet, these new insights can be used to marshal support for the biblical conception of humanity.

Anthropologists identify at least four interrelated qualities that make us exceptional: (1) symbolism, (2) open-ended generative capacity, (3) theory of mind, and (4) our capacity to form complex social networks.

Human beings effortlessly represent the world with discrete symbols and use those symbols to denote abstract concepts. Our ability to represent the world symbolically and to combine and recombine those symbols in countless ways to create alternate possibilities has interesting consequences. The human capacity for symbolism manifests in the form of language, art, music, and body ornamentation. And humans alone desire to communicate the scenarios we construct in our minds with other people.

But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together and we can do so because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also possess the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks.

Thus, I would contend that our ability to hear people’s smiles plays a role in theory of mind and our sophisticated social capacities. It contributes to human exceptionalism.

In effect, these four qualities could be viewed as scientific descriptors of the image of God. In other words, evidence for human exceptionalism is evidence that human beings bear God’s image.

So, even though many people in the scientific community promote a view of humanity that denigrates the image of God, scientific evidence and everyday experience continually support the notion that we are unique and exceptional as human beings. It makes me grin from ear to ear to know that scientific investigations into our cognitive and behavioral capacities continue to affirm human exceptionalism and, with it, the image of God.

Indeed, we are the crown of creation. And that makes me thankful!

Resources

Endnotes

  1. Pablo Arias, Pascal Belin, and Jean-Julien Aucouturier, “Auditory Smiles Trigger Unconscious Facial Imitation,” Current Biology 28 (July 23, 2018): R782–R783, doi:10.1016/j.cub.2018.05.084.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/11/21/vocal-signals-smile-on-the-case-for-human-exceptionalism

The Optimal Design of the Genetic Code


BY FAZALE RANA – OCTOBER 3, 2018

Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

–William Paley, Natural Theology

In his classic work Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about via the work of human agents: intelligent designers. And because biological systems are contrivances, they, too, must come about via the work of a Creator.

For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. For Paley, if the eye were the only example of a biological contrivance available to us, its sophisticated design and elegant complexity alone would justify the “necessity of an intelligent creator” to explain its origin.

As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

As with Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet, in particular, I am struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified to conclude that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)

Proteins

The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.


Figure 1: The Amino Acids. Image credit: Shutterstock

Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.

DNA

The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like proteins, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands are arranged parallel to one another with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.
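The antiparallel arrangement follows from the familiar base-pairing rules: A pairs with T and G with C across the two strands. A minimal Python sketch (illustrative only) shows how one strand determines its antiparallel partner:

```python
# Base-pairing rules of the DNA double helix: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def other_strand(strand: str) -> str:
    """Return the antiparallel partner of a DNA strand.

    Because the strands run in opposite directions, the partner is read
    in reverse while complementing each base (the reverse complement).
    """
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(other_strand("ATGGCT"))  # AGCCAT
```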


Figure 2: The Structure of DNA. Image credit: Shutterstock

As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

The Genetic Code

A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides. The cell addresses this mismatch by using a code comprised of groupings of three nucleotides to specify the 20 different amino acids.
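The counting argument behind the triplet code can be made explicit. A short Python sketch (simple arithmetic, nothing beyond what the paragraph states) shows why codons of one or two nucleotides fall short while triplets suffice:

```python
# With 4 nucleotides, a codon of length n can distinguish 4**n messages.
# The cell needs at least 20 (one per amino acid), so triplets are the
# shortest codons that work.
NUCLEOTIDES = 4
AMINO_ACIDS = 20

for codon_length in (1, 2, 3):
    combinations = NUCLEOTIDES ** codon_length
    verdict = "enough" if combinations >= AMINO_ACIDS else "not enough"
    print(f"codon length {codon_length}: {combinations} combinations ({verdict})")
# codon length 1: 4 combinations (not enough)
# codon length 2: 16 combinations (not enough)
# codon length 3: 64 combinations (enough)
```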

The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.

Interestingly, some codons, called stop codons or nonsense codons, don’t code for any amino acid. (For example, the codon UGA is a stop codon.) These codons always occur at the end of the gene, informing the cell where the protein chain ends.

Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids, but also “tell” the cell where a protein chain begins. For example, the codon GUG encodes the amino acid valine and can also specify the starting point of a protein chain.
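The way codons, stop codons, and reading order work together can be sketched in a few lines of Python. The codon table below is a hypothetical fragment of the full 64-codon code, included only to illustrate the mechanics:

```python
# Illustrative fragment of the genetic code (None marks a stop codon).
CODON_TABLE = {
    "AUG": "Met", "GUG": "Val", "GGC": "Gly", "UUU": "Phe",
    "CUG": "Leu", "UAA": None, "UAG": None, "UGA": None,
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA sequence codon by codon until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon: the protein chain ends here
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGGGCUUUUGA"))  # ['Met', 'Gly', 'Phe']
```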


Figure 3: The Genetic Code. Image credit: Shutterstock

The Optimal Genetic Code

Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—a mere frozen accident. Instead, it looked to them as if a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time, scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.2

Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor in their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

It could be argued that the genetic code’s error-minimization properties are more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code’s capacity occurred outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.

Frameshift Mutations

Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.
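The effect of a frameshift is easy to demonstrate. The toy sequence below is hypothetical; the point is that a single insertion regroups every downstream triplet:

```python
def codons(seq: str) -> list[str]:
    """Group a nucleotide sequence into triplets (its reading frame)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

gene = "ATGGGCTTTAAA"
mutant = gene[:4] + "C" + gene[4:]  # insert one nucleotide (not divisible by 3)

print(codons(gene))    # ['ATG', 'GGC', 'TTT', 'AAA']
print(codons(mutant))  # ['ATG', 'GCG', 'CTT', 'TAA']
# Every codon from the insertion point onward has changed.
```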


Figure 4: Types of Mutations. Image credit: Shutterstock

The Genetic Code Is Optimized to Withstand Frameshift Mutations

Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

The Genetic Code and the Case for a Creator

In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

The triple optimization of the genetic code arises from its redundancy and the specific codon assignments. Over 10^18 possible genetic codes exist, and any one of them could have been “selected” for the code in nature. Yet, the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.

Resources

Endnotes

  1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
  2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
  3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
  4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online, doi:10.1101/gr.5987307.
  5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/10/03/the-optimal-design-of-the-genetic-code

The Multiplexed Design of Neurons


BY FAZALE RANA – AUGUST 22, 2018

In 1910, Major General George Owen Squier developed a technique to increase the efficiency of data transmission along telephone lines that is still used today in telecommunications and computer networks. This technique, called multiplexing, allows multiple signals to be combined and transmitted along a single cable, making it possible to share a scarce resource (available phone lines, in Squier’s day).

Today, there are a number of ways to carry out multiplexing. One of them is called time-division multiplexing. While other forms of multiplexing can be used for analog data, this technique applies only to digital data. Data are transmitted as groups of bits along a single channel, with each group occupying its own time slot so that the groups can be directed to the appropriate receivers.
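The idea can be sketched in a few lines of Python (a toy round-robin scheme, not any particular telecom protocol): each signal is assigned a recurring time slot on the shared channel, and the receiver reads its slots back out:

```python
def multiplex(signals: list[list[int]]) -> list[int]:
    """Interleave the signals onto one channel, one slot per signal per frame."""
    channel = []
    for frame in zip(*signals):  # each frame carries one bit from each signal
        channel.extend(frame)
    return channel

def demultiplex(channel: list[int], n_signals: int) -> list[list[int]]:
    """Recover each signal by reading every n-th time slot."""
    return [channel[i::n_signals] for i in range(n_signals)]

a, b = [1, 1, 0, 1], [0, 1, 0, 0]
line = multiplex([a, b])
print(line)                            # [1, 0, 1, 1, 0, 0, 1, 0]
print(demultiplex(line, 2) == [a, b])  # True
```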

Researchers from Duke University have discovered that neurons employ time-division multiplexing to transmit multiple electrical signals along a single axon.1 The remarkable similarity between data transmission techniques used by neurons and telecommunication systems and computer networks is provocative. It can also be marshaled to add support to the revitalized Watchmaker argument for God’s existence and role in the origin and design of life.

A brief primer on neurons will help us better appreciate the work of the Duke research team.

Neurons

The primary component of the nervous system (the brain, spinal cord, and the peripheral system of nerves), neurons are electrically excitable cells that rely on electrochemical processes to receive and send electrical signals. By connecting to each other through specialized structures called synapses, neurons form pathways that transmit information throughout the nervous system.

Neurons consist of the soma or cell body, along with several outward extending projections called dendrites and axons.

Image credit: Wikipedia

Dendrites are “tree-like” projections that extend from the soma into the synaptic space. Receptors on the surface of dendrites bind neurotransmitters deposited by adjacent neurons in the synapse. These binding events trigger an electrical signal that travels along the length of the dendrites to the soma. In contrast, axons conduct electrical impulses away from the soma toward the synapse, where this signal triggers the release of neurotransmitters into the extracellular medium, initiating electrical activity in the dendrites of adjacent neurons.

Sensory Neurons

In the world around us, many things happen at the same time. And we need to be aware of all of these events. Sensory neurons react to stimuli, communicating information about the environment to our brains. Many different types of sensory neurons exist, making possible our sense of sight, smell, taste, hearing, touch, and temperature. These sensory neurons have to be broadly tuned and may have to respond to more than one environmental stimulus at the same time. An example of this scenario would be carrying on a conversation with a friend at an outdoor café while the sounds of the city surround us.

The Duke University researchers wanted to understand the mechanism neurons employ when they transmit information about two or more environmental stimuli at the same time. To accomplish this, the scientists trained two macaques (monkeys) to look in the direction of two distinct sounds produced at two different locations in the room. After achieving this step, the researchers implanted electrodes in the inferior colliculus of the monkeys’ brains and used these electrodes to record the activity of single neurons as the monkeys responded to auditory stimuli.

The researchers discovered that each sound produced a unique firing rate along single neurons and that when the two sounds were presented at the same time, the neuron transmitting the electrical signals alternated back and forth between the two firing rates. In other words, the neurons employed time-division multiplexing to transmit the two signals.

Neuron Multiplexing and the Case for Creation

The capacity of neurons to multiplex signals generated by environmental stimuli exemplifies the elegance and sophistication of biological designs. And it is discoveries such as these that compel me to believe that life must stem from the work of a Creator.

But the case for a Creator extends beyond the intuition of design. Discoveries like this one breathe new life into the Watchmaker argument.

British natural theologian William Paley (1743–1805) advanced this argument by pointing out that the characteristics of a watch—with the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Paley asserted by analogy that just as a watch requires a watchmaker, so too, does life require a Creator, since organisms display a wide range of features characterized by the precise interplay of complex parts for specific purposes.

Over the centuries, skeptics have maligned this argument by claiming that biological systems only bear a superficial similarity to human designs. That is, the analogy between human designs and biological systems is weak and, therefore, undermines the conclusion that a Divine Watchmaker exists. But, as I discuss in The Cell’s Design, the discovery of molecular motors, biochemical watches, and DNA computers—biochemical complexes with machine-like characteristics—energizes the argument. These systems are identical to the highly sophisticated machines and devices we build as human designers. In fact, these biochemical systems have been directly incorporated into nanotechnologies. And, we recognize that motors and computers, not to mention watches, come from minds. So, why wouldn’t we conclude that these biochemical systems come from a mind, as well?

Analogies between human machines and biological systems are not confined to biochemical systems. We see them at the biological level as well, as the latest work by the research team from Duke University illustrates.

It is fascinating to me that as we learn more about living systems, whether at the molecular scale, the cellular level, or the systems stage, we discover more and more instances in which biological systems bear eerie similarities to human designs. This learning strengthens the Watchmaker argument and the case for a Creator.

Resources

Endnotes

  1. Valeria C. Caruso et al., “Single Neurons May Encode Simultaneous Stimuli by Switching between Activity Patterns,” Nature Communications 9 (2018): 2715, doi:10.1038/s41467-018-05121-8.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/08/22/the-multiplexed-design-of-neurons

Design Principles Explain Neuron Anatomy


BY FAZALE RANA – AUGUST 15, 2018

It’s one of the classic episodes of I Love Lucy. Originally aired on September 15, 1952, the episode entitled “Job Switching” finds Lucy and Ethel working at a candy factory. They have been assigned to an assembly line, where they are supposed to pick up pieces of candy from a moving conveyor belt, wrap them, and place the candy back on the assembly line. But the conveyor belt moves too fast for Lucy and Ethel to keep up. Eventually, they both start stuffing pieces of candy into their mouths, under their hats, and in their blouses, as fast as they can as pieces of candy on the assembly line quickly move beyond their reach—a scene of comedic brilliance.

This chaotic (albeit hilarious) scene is a good analogy for how neurons would transmit electrical signals throughout the nervous system if not for the clever design of the axons that project from the nerve cell’s soma, or cell body.

The principles that undergird the design of axons were recently discovered by a team of bioengineers at the University of California, San Diego (UCSD).1 Insights such as this highlight the elegant designs that characterize biological systems—designs worthy to be called the Creator’s handiwork—no joke.

Neurons

The primary component of the nervous system (the brain, spinal cord, and the peripheral system of nerves), neurons are electrically excitable cells, thanks to electrochemical processes that take place across their cell membranes. These electrochemical activities allow the cells to receive and send electrical signals. By connecting to each other through specialized structures called synapses, neurons form pathways that transmit information throughout the nervous system. Neurologists refer to these pathways as neural circuits.

The heart of a neuron is the soma or cell body. This portion of the cell harbors the nucleus. Two sets of projections emanate from the soma: dendrites and axons. Dendrites are “tree-like” projections that extend from the soma into the synaptic space. Receptors on the surface of dendrites bind neurotransmitters deposited by adjacent neurons in the synapse. These binding events trigger an electrical signal that travels along the length of the dendrites to the soma. On the other hand, axons conduct electrical impulses away from the soma toward the synapse, where this signal triggers the release of neurotransmitters into the extracellular medium, initiating electrical activity in the dendrites of adjacent neurons.

Many dendrites feed the soma, but the soma gives rise to only a single axon, though the axon can branch extensively in some types of nerve cells. Axons vary significantly in both diameter and length. Their diameter ranges from 1 to 20 microns, and they can be quite long, up to a meter in length.


Image: A Neuron. Image source: Wikipedia

The electrical excitability of neurons stems from the charge separation across the cell (plasma) membrane that arises from concentration differences in positively charged sodium, potassium, and calcium ions between the cell’s interior and its exterior surroundings. This charge difference sets up a voltage across the membrane that is maintained by the activity of proteins embedded within the membrane called ion pumps. This voltage is called the resting potential.

When the neuron binds neurotransmitters, this event triggers membrane-bound proteins called ion channels to open, allowing ions to flow across the membrane. This causes a localized change in the membrane voltage that propagates along the length of the dendrite or axon. This propagating voltage change is called an action potential. When the action potential reaches the end of the axon, it triggers the release of neurotransmitters into the synaptic space.

Why Are Neurons the Way They Are?

The UCSD researchers wanted to understand the principles that undergird the neuron design, specifically why the length and diameter of axons vary so much. Previous studies indicate that axons aren’t structured to minimize the use of cellular material—otherwise they wouldn’t be so long and convoluted. Nor are they structured for speed, because axons don’t propagate electrical signals as fast as they theoretically could.

Even though the UCSD bioengineers adhere to the evolutionary paradigm, they were convinced that design principles must exist that explain the anatomy and physiology of neurons. From my perspective, their conviction is uncharacteristic of many life scientists because of the nature of evolutionary mechanisms (unguided, historically contingent processes that co-opt and cobble together existing designs to create new biological systems). Based on these mechanisms, there need not be any rationale for why things are the way they are. In fact, many evolutionary biologists view most biological systems as flawed, imperfect systems that are little more than kludge jobs.

But their conviction paid off. They discovered an elegant rationale that explains the variation in axon lengths.

Refraction Ratio

The UCSD investigators reasoned that the cellular architecture of axons may reflect a trade-off between (1) the speed of signal transduction along the axon, and (2) the time it takes the axon to reset the resting potential after the action potential propagates along the length of the axon and to ready the cell for the next round of neurotransmitter release.

To test this idea, the research team defined a quantity they dubbed the refraction ratio: the ratio of the refractory period of a neuron to the time it takes the electrical signal to travel the length of the axon. The researchers calculated the refraction ratio for 12,000 axon branches of rat basket cells (a special type of neuron with heavily branched axons), finding the information they needed for these calculations in the NeuroMorpho database. They determined that the average value for the refraction ratio was 0.92, close to the ideal value of 1.0, which reflects optimal efficiency. In other words, the refraction ratio appears to be nearly optimal.
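As a rough illustration of the quantity described above, the sketch below computes a refraction ratio from a refractory period and a transmission time (axon length divided by conduction velocity). The function name and the numerical values are hypothetical, chosen only for illustration; the study’s actual calculations drew on morphological data for the 12,000 basket-cell axon branches in the NeuroMorpho database.

```python
# Illustrative sketch of the refraction ratio described above.
# All numbers are hypothetical, not taken from the study.

def refraction_ratio(refractory_period_ms, axon_length_mm, velocity_mm_per_ms):
    """Ratio of the neuron's refractory period to the time an action
    potential takes to travel the length of the axon branch."""
    transmission_time_ms = axon_length_mm / velocity_mm_per_ms
    return refractory_period_ms / transmission_time_ms

# Hypothetical branch: 2 ms refractory period, 1.1 mm long, 0.5 mm/ms velocity
ratio = refraction_ratio(refractory_period_ms=2.0,
                         axon_length_mm=1.1,
                         velocity_mm_per_ms=0.5)
print(f"refraction ratio = {ratio:.2f}")  # values near 1.0 indicate near-optimal timing
```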

If not for this optimization, then signal transmission along axons would suffer the same fate as the pieces of candy on the assembly line manned by Lucy and Ethel. Things would become a jumbled mess along the length of the axons and at the synaptic terminus. And, if this happened, the information transmitted by the neurons would be lost.

The researchers concluded that the axon diameter—and, more importantly, its length—are varied to ensure that the refraction ratio remains as close to 1.0 as possible. This design principle explains why the shape, length, and width of axons vary so much. The reset time (refractory period) cannot be substantially altered, but the axon geometry can be, and this variation controls the transmission time of the electrical signal along the axon. To put it another way, adjusting axon geometry is analogous to slowing down or speeding up the conveyor belt to ensure that the candy factory workers can wrap as many pieces of candy as possible, without having to eat any or tuck them under their hats.
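The trade-off described above can also be run in reverse: if the refractory period and conduction velocity are fixed, the axon length that makes the refraction ratio exactly 1.0 follows directly from the ratio’s definition. This is a back-of-the-envelope sketch with hypothetical numbers, not the researchers’ method.

```python
# Illustrative sketch (not the study's method): for a fixed refractory
# period and conduction velocity, the axon length that yields a
# refraction ratio of exactly 1.0 satisfies
#   length = velocity * refractory_period.

def ideal_axon_length_mm(velocity_mm_per_ms, refractory_period_ms):
    """Axon length (mm) at which transmission time equals the refractory period."""
    return velocity_mm_per_ms * refractory_period_ms

# Hypothetical numbers: 0.5 mm/ms velocity, 2 ms refractory period
print(ideal_axon_length_mm(0.5, 2.0))  # → 1.0 (mm)
```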

The Importance of Axon Geometry

The researchers from UCSD think that the design principles they have uncovered may be helpful in understanding some neurological disorders. They reason that if a disease leads to changes in neuronal anatomy, the axon geometry may no longer be optimized (causing the refraction ratio to deviate from its ideal value). This deviation will lead to loss of information when nerve cells transmit electrical signals through neural circuits, potentially contributing to the etiology of neurological diseases.

This research team also thinks that their insights might find use in computer technology. Understanding the importance of the refraction ratio should benefit the design of machine-learning systems based on brain-like neural networks. At present, the design of machine-learning systems doesn’t account for the time it takes for signals to reach neural network nodes. By incorporating this temporal parameter into the design, the researchers believe they can dramatically improve the power of neural networks. In fact, the team is now building new types of machine-learning architectures based on these insights.2

Axon Geometry and the Case for Creation

The elegant, optimized, sophisticated, and ingenious design displayed by axon geometry is the type of evidence that convinced me, as an agnostic graduate student studying biochemistry, that life must stem from the work of a Creator. The designs we observe in biology (and biochemistry) are precisely the types of designs that we would expect to see if a Creator was responsible for life’s origin, history, and design.

On the other hand, evolutionary mechanisms (based on unguided, directionless processes that rely on co-opting and modifying existing designs to create biological innovation) are expected to yield biological designs that are inherently limited and flawed. For many life scientists, the varying length and meandering, convoluted paths taken by axons serve as a reminder that evolution produces imperfect designs, just good enough for survival, but nothing more.

And, in spite of this impoverished view of biology, the UCSD bioengineers were convinced that there must be a design principle that explained the variable length of axons. And herein lies the dilemma faced by many life scientists. The paradigm they embrace demands that they view biological systems as flawed and imperfect. Yet, biological systems appear to be designed for a purpose. Hence, biologists can’t keep from using design language when they describe the structure and function of these systems. Nor can they keep themselves from seeking design principles when they study the architecture and operation of these systems. In other words, many life scientists operate as if life were the product of a Creator’s handiwork, though they might vehemently deny God’s influence in shaping biology—and even go as far as denying God’s existence. In this particular case, the researchers’ commitment to a de facto design paradigm paid off handsomely for them—and for scientific advance.

The Converse Watchmaker Argument

Along these lines, it is provocative that the insights the researchers gleaned regarding axon geometry and the refraction ratio may well translate into improved designs for neural networks and machine-learning systems. The idea that biological designs can inspire engineering and technology advances makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument.

The argument goes something like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

At some level, I find the converse Watchmaker argument more compelling than the classical Watchmaker analogy. Again, it is remarkable to me that biological designs can inspire engineering efforts.

It is even more astounding to think that engineers would turn to biological designs to inspire their work if biological systems were truly generated by an unguided, historically contingent process, as evolutionary biologists claim.

Using biological designs to guide engineering efforts seems to be fundamentally incompatible with an evolutionary explanation for life’s origin and history. To think otherwise is only possible after taking a few swigs of “Vitameatavegamin” mix.

Resources

Endnotes

  1. Francesca Puppo, Vivek George, and Gabriel A. Silva, “An Optimized Structure-Function Design Principle Underlies Efficient Signaling Dynamics in Neurons,” Scientific Reports 8 (2018): 10460, doi:10.1038/s41598-018-28527-2.
  2. Katherine Connor, “Why Are Neuron Axons Long and Spindly? Study Shows They’re Optimizing Signaling Efficiency,” UC San Diego News Center, July 11, 2018, https://ucsdnews.ucsd.edu/pressrelease/why_are_neuron_axons_long_and_spindly.
Reprinted with permission by the author
Original article at:
https://www.reasons.org/explore/blogs/the-cells-design/read/the-cells-design/2018/08/15/design-principles-explain-neuron-anatomy