Breakout Paper in Journal of Theoretical Biology Explicitly Supports Intelligent Design – Discovery Institute

Photo: Red poppy, Auckland Botanic Gardens, Auckland, New Zealand, by Sandy Millar via Unsplash.

As John West noted here last week, the Journal of Theoretical Biology has published an explicitly pro-intelligent design article, "Using statistical methods to model the fine-tuning of molecular machines and systems." Let's take a closer look at the contents. The paper is math-heavy, discussing statistical models for making inferences, but it is also groundbreaking for this crucial reason: it considers and proposes intelligent design, by name, as a viable explanation for the origin of fine-tuning in biology. This is a major breakthrough for science, but also for freedom of speech. If the paper is any indication, appearing as it does in a prominent peer-reviewed journal, some of the suffocating constraints on ID advocacy may be coming off.

The authors are Steinar Thorvaldsen, a professor of information science at the University of Tromsø in Norway, and Ola Hössjer, a professor of mathematical statistics at Stockholm University. The paper, which is open access, begins by noting that while fine-tuning is widely discussed in physics, it needs to be considered more in the context of biology:

Fine-tuning has received much attention in physics, and it states that the fundamental constants of physics are finely tuned to precise values for a rich chemistry and life permittance. It has not yet been applied in a broad manner to molecular biology.

The authors explain the paper's main thrust:

However, in this paper we argue that biological systems present fine-tuning at different levels, e.g. functional proteins, complex biochemical machines in living cells, and cellular networks. This paper describes molecular fine-tuning, how it can be used in biology, and how it challenges conventional Darwinian thinking. We also discuss the statistical methods underpinning fine-tuning and present a framework for such analysis.

They explain how fine-tuning is defined. The definition is essentially equivalent to specified complexity:

We define fine-tuning as an object with two properties: it must a) be unlikely to have occurred by chance, under the relevant probability distribution (i.e. complex), and b) conform to an independent or detached specification (i.e. specific).
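To see what this two-part criterion amounts to in practice, here is a minimal sketch (mine, not the authors') of how it could be encoded. The bit threshold and the uniform chance model for the example are illustrative assumptions, not values from the paper.

```python
import math

def is_fine_tuned(p_outcome, matches_spec, threshold_bits=100):
    """Crude check of the paper's two-part definition of fine-tuning:
    (a) 'complex'  -- improbable under the assumed chance distribution, and
    (b) 'specific' -- conforming to an independent, detached specification.
    threshold_bits is an arbitrary illustrative cutoff, not the paper's."""
    surprisal_bits = -math.log2(p_outcome)                  # improbability in bits
    return surprisal_bits >= threshold_bits and matches_spec

# Example: a 100-residue chain drawn uniformly from the 20 amino acids
# (about 432 bits of surprisal) that also matches a pre-specified motif.
p_chain = (1.0 / 20.0) ** 100
print(is_fine_tuned(p_chain, matches_spec=True))            # True
```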

They then introduce the concept of design, and explain how humans are innately able to recognize it:

A design is a specification or plan for the construction of an object or system, or the result of that specification or plan in the form of a product. The very term "design" is from the Medieval Latin word "designare" (denoting "mark out, point out, choose"); from "de" (out) and "signum" (identifying mark, sign). Hence, a public notice that advertises something or gives information. The design usually has to satisfy certain goals and constraints. It is also expected to interact with a certain environment, and thus be realized in the physical world. Humans have a powerful intuitive understanding of design that precedes modern science. Our common intuitions invariably begin with recognizing a pattern as a mark of design. The problem has been that our intuitions about design have been unrefined and pre-theoretical. For this reason, it is relevant to ask ourselves whether it is possible to turn the tables on this disparity and place those rough and pre-theoretical intuitions on a firm scientific foundation.

That last sentence is key: the purpose is to understand if there is a scientific method by which design can be inferred. They propose that design can be identified by uncovering fine-tuning. The paper explicates statistical methods for understanding fine-tuning, which they argue reflects design:

Fine-tuning and design are related entities. Fine-tuning is a bottom-up method, while design is more like a top-down approach. Hence, we focus on the topic of fine-tuning in the present paper and address the following questions: Is it possible to recognize fine-tuning in biological systems at the levels of functional proteins, protein groups and cellular networks? Can fine-tuning in molecular biology be formulated using state of the art statistical methods, or are the arguments just in the eyes of the beholder?

They cite the work of multiple leading theorists in the ID research community.

They return to physics and the anthropic principle, the idea that the laws of nature are precisely suited for life:

Suppose the laws of physics had been a bit different from what they actually are, what would the consequences be? (Davies, 2006). The chances that the universe should be life permitting are so infinitesimal as to be incomprehensible and incalculable. The finely tuned universe is like a panel that controls the parameters of the universe with about 100 knobs that can be set to certain values. If you turn any knob just a little to the right or to the left, the result is either a universe that is inhospitable to life or no universe at all. If the Big Bang had been just slightly stronger or weaker, matter would not have condensed, and life never would have existed. The odds against our universe developing were enormous and yet here we are, a point that equates with religious implications.

However, rather than getting into religion, they apply statistics to consider the possibility of design as an explanation for the fine-tuning of the universe. They cite ID theorist William Dembski:

William Dembski regards the fine-tuning argument as suggestive, as pointers to underlying design. We may describe this inference as abductive reasoning or inference to the best explanation. This reasoning yields a plausible conclusion that is relatively likely to be true, compared to competing hypotheses, given our background knowledge. In the case of fine-tuning of our cosmos, design is considered to be a better explanation than a set of multi-universes that lacks any empirical or historical evidence.

The article offers additional reasons why the multiverse is an unsatisfying explanation for fine-tuning, namely that multiverse hypotheses "do not predict fine-tuning for this particular universe any better than a single universe hypothesis" and that "we should prefer those theories which best predict (for this or any universe) the phenomena we observe in our universe."

The paper reviews the lines of evidence for fine-tuning in biology, including information, irreducible complexity, protein evolution, and the waiting-time problem. Along the way it considers the arguments of many ID theorists, starting with a short review showing how the literature uses words such as "sequence code," "information," and "machine" to describe life's complexity:

One of the surprising discoveries of modern biology has been that the cell operates in a manner similar to modern technology, while biological information is organized in a manner similar to plain text. Words and terms like "sequence code", and "information", and "machine" have proven very useful in describing and understanding molecular biology (Wills, 2016). The basic building blocks of life are proteins, long chain-like molecules consisting of varied combinations of 20 different amino acids. Complex biochemical machines are usually composed of many proteins, each folded together and configured in a unique 3D structure dependent upon the exact sequence of the amino acids within the chain. Proteins employ a wide variety of folds to perform their biological function, and each protein has a highly specified shape with some minor variations.

The paper cites and reviews the work of Michael Behe, Douglas Axe, Stephen Meyer, and Günter Bechly. Some of these discussions are quite extensive. First, the article contains a lucid explanation of irreducible complexity and the work of Michael Behe:

Michael Behe and others presented ideas of design in molecular biology, and published evidence of irreducibly complex biochemical machines in living cells. In his argument, some parts of the complex systems found in biology are exceedingly important and do affect the overall function of their mechanism. The fine-tuning can be outlined through the vital and interacting parts of living organisms. In Darwin's Black Box (Behe, 1996), Behe exemplified systems, like the flagellum bacteria use to swim and the blood-clotting cascade, that he called irreducibly complex, configured as a remarkable teamwork of several (often dozen or more) interacting proteins. Is it possible on an incremental model that such a system could evolve for something that does not yet exist? Many biological systems do not appear to have a functional viable predecessor from which they could have evolved stepwise, and the occurrence in one leap by chance is extremely small. To rephrase the first man on the moon: That's no small steps of proteins, no giant leap for biology.

[…]

A Behe-system of irreducible complexity was mentioned in Section 3. It is composed of several well-matched, interacting modules that contribute to the basic function, wherein the removal of any one of the modules causes the system to effectively cease functioning. Behe does not ignore the role of the laws of nature. Biology allows for changes and evolutionary modifications. Evolution is there, irreducible design is there, and they are both observed. The laws of nature can organize matter and force it to change. Behe's point is that there are some irreducibly complex systems that cannot be produced by the laws of nature:

If a biological structure can be explained in terms of those natural laws [reproduction, mutation and natural selection] then we cannot conclude that it was designed… however, I have shown why many biochemical systems cannot be built up by natural selection working on mutations: no direct, gradual route exists to these irreducibly complex systems, and the laws of chemistry work strongly against the undirected development of the biochemical systems that make molecules such as AMP (Behe, 1996, p. 203).

Then, even if the natural laws work against the development of these irreducible complexities, they still exist. The strong synergy within the protein complex makes it irreducible to an incremental process. They are rather to be acknowledged as fine-tuned initial conditions of the constituting protein sequences. These structures are biological examples of nano-engineering that surpass anything human engineers have created. Such systems pose a serious challenge to a Darwinian account of evolution, since irreducibly complex systems have no direct series of selectable intermediates, and in addition, as we saw in Section 4.1, each module (protein) is of low probability by itself.
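The logic summarized here can be made concrete with a toy calculation. The sketch below is my own illustration, not the paper's model: it assumes, purely for the sake of the example, that each of a dozen required modules is individually improbable under the chance hypothesis and that the modules arise independently.

```python
# Toy model of a Behe-style irreducibly complex system: function requires
# every one of k well-matched modules, and removing any single module
# abolishes the function.
def system_functions(modules_present: set, required: set) -> bool:
    return required.issubset(modules_present)

required = {f"protein_{i}" for i in range(12)}               # "often dozen or more" parts
print(system_functions(required, required))                   # True: all parts present
print(system_functions(required - {"protein_3"}, required))   # False: one part removed

# Under the (illustrative) independence assumption, the joint chance
# probability of all modules is the product of the individual ones.
p_single = 1e-20       # assumed per-module probability, for illustration only
print(p_single ** 12)  # 1e-240
```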

The article also reviews the peer-reviewed research of protein scientist Douglas Axe, as well as his 2016 book Undeniable, on the evolvability of protein folds:

An important goal is to obtain an estimate of the overall prevalence of sequences adopting functional protein folds, i.e. the right folded structure, with the correct dynamics and a precise active site for its specific function. Douglas Axe worked on this question at the Medical Research Council Centre in Cambridge. The experiments he performed showed a prevalence between 1 in 10^50 to 1 in 10^74 of protein sequences forming a working domain-sized fold of 150 amino acids (Axe, 2004). Hence, functional proteins require highly organised sequences, as illustrated in Fig. 2. Though proteins tolerate a range of possible amino acids at some positions in the sequence, a random process producing amino-acid chains of this length would stumble onto a functional protein only about one in every 10^50 to 10^74 attempts due to genetic variation. This empirical result is quite analogous to the inference from fine-tuned physics.

[…]

The search space turns out to be too impossibly vast for blind selection to have even a slight chance of success. The contrasting view is innovations based on ingenuity, cleverness and intelligence. An element of this is what Axe calls functional coherence, which always involves hierarchical planning, hence is a product of fine-tuning. He concludes: "Functional coherence makes accidental invention fantastically improbable and therefore physically impossible" (Axe, 2016, p. 160).

They conclude that the literature shows the probability of finding a functional protein in sequence space "can vary broadly, but commonly remains far beyond the reach of Darwinian processes" (Axe, 2010a).
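A back-of-envelope calculation (mine, not the paper's) shows what a prevalence in the quoted range means for a blind search; the number of trials below is a deliberately generous, hypothetical figure chosen only for illustration.

```python
import math

# Size of the sequence space for a 150-residue chain built from 20 amino acids.
print(150 * math.log10(20))   # ~195, i.e. roughly 10^195 possible sequences

# With prevalence p of functional folds and N independent random trials, the
# chance of hitting at least one functional sequence is approximately N*p
# whenever N*p << 1.
p = 1e-74    # the less favorable end of the quoted 10^-50 to 10^-74 range
N = 1e40     # hypothetical number of random trials, chosen generously
print(N * p) # ~1e-34: still negligible even after 10^40 attempts
```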

Citing the work of Günter Bechly and Stephen Meyer, the paper also reviews the question of whether sufficient time is allowed by the fossil record for complex systems to arise via Darwinian mechanisms. This is known as the waiting-time problem:

Achieving fine-tuning in a conventional Darwinian model: The waiting time problem

In this section we will elaborate further on the connection between the probability of an event and the time available for that event to happen. In the context of living systems, we need to ask the question whether conventional Darwinian mechanisms have the ability to achieve fine-tuning during a prescribed period of time. This is of interest in order to correctly interpret the fossil record, which is often interpreted as having long periods of stasis interrupted by very sudden abrupt changes (Bechly and Meyer, 2017). Examples of such sudden changes include the origin of photosynthesis, the Cambrian explosion, the evolution of complex eyes and the evolution of animal flight. The accompanying genetic changes are believed to have happened very rapidly, at least on a macroevolutionary timescale, during a time period of length t. In order to test whether this is possible, a mathematical model is needed in order to estimate the prevalence P(A) of the event A that the required genetic changes in a species take place within a time window of length t.
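One simple way to parameterize such a P(A) is with a waiting-time (Poisson arrival) model. The sketch below is an illustration under that assumption, with made-up rate and window values; it is not the model developed in the paper.

```python
import math

def prob_within_window(rate_per_generation: float, generations: float) -> float:
    """P(A): probability that an event arriving as a Poisson process with the
    given per-generation rate occurs at least once within the time window."""
    return 1.0 - math.exp(-rate_per_generation * generations)

# Illustrative numbers only: a coordinated change arising at an assumed rate of
# 1e-15 per generation has essentially no chance of appearing within a window
# of 5 million generations.
print(prob_within_window(1e-15, 5e6))   # ~5e-9
```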

Throughout the discussions are multiple citations of BIO-Complexity, a journal dedicated to investigating the scientific evidence for intelligent design.

Lastly, the authors consider intelligent design as a possible explanation of biological fine-tuning, citing heavily the work of William Dembski, Winston Ewert, Robert J. Marks, and other ID theorists:

Intelligent Design (ID) has gained a lot of interest and attention in recent years, mainly in USA, by creating public attention as well as triggering vivid discussions in the scientific and public world. ID aims to adhere to the same standards of rational investigation as other scientific and philosophical enterprises, and it is subject to the same methods of evaluation and critique. ID has been criticized, both for its underlying logic and for its various formulations (Olofsson, 2008; Sarkar, 2011).

William Dembski originally proposed what he called an explanatory filter for distinguishing between events due to chance, lawful regularity or design (Dembski, 1998). Viewed on a sufficiently abstract level, its logic is based on well-established principles and techniques from the theory of statistical hypothesis testing. However, it is hard to apply to many interesting biological applications or contexts, because a huge number of potential but unknown scenarios may exist, which makes it difficult to phrase a null hypothesis for a statistical test (Wilkins and Elsberry, 2001; Olofsson, 2008).

The re-formulated version of a complexity measure published by Dembski and his coworkers is named Algorithmic Specified Complexity (ASC) (Ewert et al., 2013; 2014). ASC incorporates both Shannon and Kolmogorov complexity measures, and it quantifies the degree to which an event is improbable and follows a pattern. Kolmogorov complexity is related to compression of data (and hence patterns), but suffers from the property of being unknowable as there is no general method to compute it. However, it is possible to give upper bounds for the Kolmogorov complexity, and consequently ASC can be bounded without being computed exactly. ASC is based on context and is measured in bits. The same authors have applied this method to natural language, random noise, folding of proteins, images, etc. (Marks et al., 2017).

[…]

The laws, constants, and primordial initial conditions of nature present the flow of nature. These purely natural objects discovered in recent years show the appearance of being deliberately fine-tuned. Functional proteins, molecular machines and cellular networks are both unlikely when viewed as outcomes of a stochastic model, with a relevant probability distribution (having a small P(A)), and at the same time they conform to an independent or detached specification (the set A being defined in terms of specificity). These results are important and deduced from central phenomena of basic science. In both physics and molecular biology, fine-tuning emerges as a uniting principle and synthesis, an interesting observation by itself.

In this paper we have argued that a statistical analysis of fine-tuning is a useful and consistent approach to model some of the categories of design: irreducible complexity (Michael Behe), and specified complexity (William Dembski). As mentioned in Section 1, this approach requires a) that a probability distribution for the set of possible outcomes is introduced, and b) that a set A of fine-tuned events, or more generally a specificity function f, is defined. Here b) requires some a priori understanding of what fine-tuning means, for each type of application, whereas a) requires a naturalistic model for how the observed structures would have been produced by chance. The mathematical properties of such a model depend on the type of data that is analyzed. Typically a stochastic process should be used that models a dynamic feature such as stellar, chemical or biological (Darwinian) evolution. In the simplest case the state space of such a stochastic process is a scalar (one nucleotide or amino acid), a vector (a DNA or amino acid string) or a graph (protein complexes or cellular networks).

A major conclusion of our work is that fine-tuning is a clear feature of biological systems. Indeed, fine-tuning is even more extreme in biological systems than in inorganic systems. It is detectable within the realm of scientific methodology. Biology is inherently more complicated than the large-scale universe and so fine-tuning is even more a feature. Still more work remains in order to analyze more complicated data structures, using more sophisticated empirical criteria. Typically, such criteria correspond to a specificity function f that is not merely a helpful abstraction of an underlying pattern, such as biological fitness. Rather, one needs a specificity function that, although of non-physical origin, can be quantified and measured empirically in terms of physical properties such as functionality. In the long term, these criteria are necessary to make the explanations both scientifically and philosophically legitimate. However, we have enough evidence to demonstrate that fine-tuning and design deserve attention in the scientific community as a conceptual tool for investigating and understanding the natural world. The main agenda is to explore some fascinating possibilities for science and create room for new ideas and explorations. Biologists need richer conceptual resources than the physical sciences until now have been able to initiate, in terms of complex structures having non-physical information as input (Ratzsch, 2010). Yet researchers have more work to do in order to establish fine-tuning as a sustainable and fully testable scientific hypothesis, and ultimately a Design Science.
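The ASC measure described in the excerpt above lends itself to a small computational illustration. The sketch below is my own, not code from the paper: it uses zlib-compressed length as the computable upper bound on Kolmogorov complexity that the authors mention, so the value it returns is a lower bound on ASC, and the uniform-random null model in the example is an assumption chosen for illustration.

```python
import zlib

def asc_lower_bound(data: bytes, surprisal_bits: float) -> float:
    """Lower bound on Algorithmic Specified Complexity (ASC), in bits.

    ASC(x) = -log2 P(x) - K(x | context).  The compressed length is an upper
    bound on the Kolmogorov complexity K(x | context), so substituting it
    yields a lower bound on ASC.  surprisal_bits must be -log2 P(x) under the
    caller's chance model (an assumption supplied from outside)."""
    k_upper_bits = 8 * len(zlib.compress(data))
    return surprisal_bits - k_upper_bits

# Example: a highly repetitive 1000-byte string under a uniform-random byte
# model, where -log2 P(x) = 8 bits per byte = 8000 bits.  The string is both
# improbable under that model and strongly patterned (very compressible).
x = b"AB" * 500
print(asc_lower_bound(x, surprisal_bits=8 * len(x)))   # thousands of bits
```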

This is a significant development. The article gives the arguments of intelligent design theorists a major hearing in a mainstream scientific journal. And don't miss the purpose of the article, which is stated in its final sentence: to work towards "establish[ing] fine-tuning as a sustainable and fully testable scientific hypothesis, and ultimately a Design Science." The authors present compelling arguments that biological fine-tuning cannot arise via unguided Darwinian mechanisms. Some explanation is needed to account for why biological systems show the appearance of being deliberately fine-tuned. Despite the noise that often surrounds this debate, for ID arguments to receive such a thoughtful and positive treatment in a prominent journal is itself convincing evidence that ID has intellectual merit. Claims of ID's critics notwithstanding, design science is being taken seriously by scientists.

