The Information Enigma: A Closer Look

Posted: June 20, 2020 at 10:39 am

The first video in our Intelligent Design YouTube Festival, The Information Enigma, consolidates the research and writing of Stephen Meyer and Douglas Axe into a 21-minute video. The presentation demonstrates how the information present in life points unambiguously to intelligent design. This topic is central to intelligent design arguments and to the ID research program. Here I will flesh out the concept of biological information in greater detail and explain why significant quantities of it cannot be generated through natural processes.

A pioneer in the field of information theory was Claude Shannon, who connected the concept of information to the reduction of uncertainty, and thus to probability. As an example, knowing the five-digit ZIP Code for an address eliminates uncertainty about a building's location. And the four-digit extension to the ZIP Code provides additional information that reduces the uncertainty even further. In probabilistic terms, randomly generating the correct five-digit ZIP Code corresponds to a probability of 1 in 100,000, while generating the correct nine-digit ZIP Code corresponds to a probability of 1 in a billion. The latter is far less probable, so the nine-digit code contains more information.
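To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original article) that converts these guessing probabilities into bits:

```python
import math

def bits(p: float) -> float:
    """Shannon information, in bits, of an event with probability p."""
    return math.log2(1 / p)

# Five-digit ZIP Code: 1 chance in 100,000 of a correct random guess.
print(f"5-digit ZIP: {bits(1 / 100_000):.1f} bits")        # ~16.6 bits
# Nine-digit ZIP+4 code: 1 chance in a billion.
print(f"9-digit ZIP: {bits(1 / 1_000_000_000):.1f} bits")  # ~29.9 bits
```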

Shannon quantified the amount of information in a pattern with what he defined as the Shannon measure of information. In the simplest case, the quantity is proportional to the log of 1/p, where p is the probability of the pattern occurring by chance. For the five-digit code, p would be 1/100,000, and 1/p would be 100,000. This measure can be thought of as the minimum number of yes-no questions required to identify 1 out of N equally likely choices. To illustrate, imagine attempting to identify a pre-chosen famous actor out of eight possible people. If the answer to each question about the mystery individual eliminated half of the remaining options, the correct answer could be determined with three questions. Therefore, learning the answer corresponds to acquiring 3 bits of information. Note that 2 to the power of 3 is 8, or equivalently, log (base 2) of 8 is 3.
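The question-asking procedure amounts to a binary search. In the sketch below, the list of eight actors is hypothetical, standing in for any eight equally likely options; each question halves the field, so three questions always suffice:

```python
import math

actors = ["A", "B", "C", "D", "E", "F", "G", "H"]  # 8 hypothetical candidates
print(f"log2({len(actors)}) = {math.log2(len(actors)):.0f} bits")

def identify(options, target):
    """Halve the candidate list with each yes-no question."""
    questions = 0
    while len(options) > 1:
        half = options[:len(options) // 2]
        questions += 1  # ask: "Is the mystery person in this half?"
        options = half if target in half else options[len(options) // 2:]
    return options[0], questions

winner, q = identify(actors, "F")
print(f"Identified {winner!r} in {q} questions")  # 3 questions for 8 options
```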

Information theory has been applied to biology by such figures as Hubert Yockey. In this context, Shannon's definition had to be modified to distinguish between arbitrary patterns and those that perform some function. Shannon's measure was modified to quantify functional information, which corresponds to the probability of a random pattern achieving some target goal. For instance, if 1 in 1024 amino acid sequences formed a structure that accelerated a specific reaction, the functional information associated with that sequence would be 10 bits, since 10 yes-no questions would have to be asked to select 1 entity out of 1024 possibilities. Mathematically, 2 to the power of 10 is 1024, or log (base 2) of 1024 is 10. More advanced measures of functional information have been developed, including algorithmic specified complexity and the more generalized canonical specified complexity, but they follow the same basic logic. These measures help relate the information content of biological molecules and structures to their functional capacities.
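As a sketch of the calculation just described (assuming, as in the example, that the fraction of functional sequences is known exactly):

```python
import math

def functional_bits(n_functional: int, n_total: int) -> float:
    """Functional information: -log2 of the fraction of sequences
    that achieve the target function."""
    return -math.log2(n_functional / n_total)

# The example above: 1 in 1024 sequences accelerates the reaction.
print(f"{functional_bits(1, 1024):.0f} bits")  # 10 bits, since 2**10 == 1024
```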

The information content of a protein's amino acid sequence directly relates to its ability to control chemical reactions or other processes. In general, the higher the information content, the higher the level of fine-grained control over outcomes and the greater the capacity for elaborate molecular manipulations. Amino acid sequences with higher information content are more specified, so they can fold into three-dimensional shapes of greater precision and complexity. In turn, the higher specificity requirement means such proteins are more susceptible to mutations: a few amino acid changes will often completely disable them. Functional sequences are consequently less probable, which is another signature of greater information content. This connection between information, sequence rarity, and complexity of function has profound implications for Doug Axe's protein research.
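A toy model (my own illustration, not drawn from Axe's work) shows the relationship: the more positions of a sequence that must match a specific residue, the smaller the fraction of random sequences that qualify, and the higher the bit count:

```python
import math

# Toy model: a "functional" sequence must carry a fixed residue at k of
# its positions; the rest are unconstrained. With a 20-letter amino acid
# alphabet, the fraction of random sequences that qualify is 20**-k.
for k in (1, 5, 10):
    p = 20.0 ** -k
    print(f"{k:2d} constrained positions: p = {p:.2e}, "
          f"{math.log2(1 / p):.1f} bits")
```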

Axe demonstrated that the probability of a random amino acid sequence folding into one section (domain) of a functional β-lactamase protein is far too small for that event ever to occur by chance. Therefore, the information content is too great to originate through a random search. Yet β-lactamase performs the relatively simple task of breaking apart an antibiotic molecule. In contrast, many of the proteins required for the origin of life perform much more complex operations (see here, here, and here). The same holds true for many proteins required in the construction of new groups of organisms (e.g., animal phyla). Therefore, these proteins' information content must be even greater, and the probability of their originating through a random search even smaller.
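Assuming the text has in mind the 1-in-10^77 prevalence estimate usually cited from Axe's 2004 β-lactamase study, the implied information content can be computed directly:

```python
import math

# Assumed figure: ~1 in 10**77 random sequences folds into a functional
# beta-lactamase domain (the estimate commonly cited from Axe 2004).
p = 10.0 ** -77
print(f"{math.log2(1 / p):.0f} bits")  # ~256 bits of functional information
```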

This conclusion is deeply problematic for evolutionary theory, since no natural process can generate quantities of information substantially larger than what could result from a random search. The limitation follows from the No Free Lunch theorems, as demonstrated by the research of Robert J. Marks, Winston Ewert, and William Dembski (see here and here). It is further supported by theorems derived from research in computer science. For instance, computer scientist Leonid Levin demonstrated the conservation of independence in information-bearing systems. He stated the following:

The information I(x:y) has a remarkable invariance; it cannot be increased by random or deterministic (recursive) processing of x or y. This is natural, since if x contains no information about y then there is little hope to find out something about y by processing x. (Torturing an uninformed witness cannot give information about the crime!)

The conservation law simply means that the information I(x:y) that one system, x, carries about another, y, cannot increase through any natural process acting on either system alone. A nearly identical conclusion comes from information theory in what is known as the data processing inequality. It states that no local processing of a signal can increase the information it carries about its source: if X, Y, and Z form a Markov chain X → Y → Z, then I(X;Z) ≤ I(X;Y).
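The inequality can be checked empirically. The sketch below (an illustration with arbitrary parameters, not taken from the cited work) builds a Markov chain X → Y → Z, where Z is computed from Y alone, and estimates the mutual information at each stage:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

random.seed(0)
# Markov chain X -> Y -> Z: Y is a noisy copy of X; Z is computed from
# Y alone, so Z can carry no more information about X than Y does.
xs = [random.randint(0, 3) for _ in range(100_000)]
ys = [x if random.random() < 0.9 else random.randint(0, 3) for x in xs]
zs = [y % 2 for y in ys]  # deterministic "local processing" of Y

print(f"I(X;Y) = {mutual_information(list(zip(xs, ys))):.3f} bits")
print(f"I(X;Z) = {mutual_information(list(zip(xs, zs))):.3f} bits")
# The data processing inequality guarantees I(X;Z) <= I(X;Y).
```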

In terms of evolution, the first system (the signal) could be a duplicated gene or a nonfunctional section of DNA mutating freely, and the second could be any functional protein sequence into which the gene or section could potentially evolve. The theorems mandate that a DNA sequence (x) could never appreciably increase in functional information, such as coming to more closely resemble a new enzyme (y). This constraint makes the evolution of most novel proteins entirely implausible.

In the video, Stephen Meyer explains how information points to intelligent design by the same logic used in the historical sciences. In addition, the information-processing machinery of life demonstrates unmistakable evidence of foresight, coordination, and goal direction. And these signatures unambiguously point to intelligent agency. The same arguments hold true to an even greater degree for the origin of life. The only pressing question is to what extent critics can continue to allow their philosophical bias to override biological information's clear design implications.

Image: A scene from The Information Enigma, via Discovery Institute.
