Intelligent Design – Dembski

William Dembski, a mathematician and philosopher, attempted to give a rigorous, quasi-mathematical foundation for the theory of intelligent design. In his technical book The Design Inference, revised from his Ph.D. dissertation in philosophy and published by Cambridge University Press in a series on probability theory, he proposes three explanatory categories: law, chance, and design. If an event is regular and necessary (or highly probable), then it is the result of law. If an event has an intermediate probability, or if it has a very low probability but is not a particularly special event, then it is the result of chance. If an event has a very low probability and matches an independently given specification, then it is the result of design. To describe low probability together with an independent specification, Dembski uses the term specified complexity, or complex specified information. Dembski was by no means the first to use the notion of specified complexity. Richard Dawkins himself, explaining why animals seem designed, employed the same concept: “complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.”1 Dembski’s innovation is his attempt to use this notion of complexity to exclude origination through law and chance. In The Design Inference itself, Dembski did not apply the principle to natural organisms and events, but in later writings he sought to apply the design inference to nature.
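
Dembski’s filter amounts to a decision procedure, and its logic can be sketched in a few lines of code. The sketch below is only an illustration under simplifying assumptions: the probability thresholds and the test for matching a specification are placeholder parameters (the lower cutoff loosely echoes Dembski’s universal probability bound of roughly 1 in 10^150), and his formal treatment of specification is considerably more elaborate.

```python
def explanatory_filter(probability, is_specified,
                       high_p=0.5, low_p=1e-150):
    """Toy version of the law/chance/design filter.

    The thresholds and the is_specified test are placeholders;
    Dembski's own account of specification is far more elaborate.
    """
    if probability >= high_p:
        return "law"      # regular, necessary, or highly probable
    if probability > low_p or not is_specified:
        return "chance"   # intermediate probability, or improbable but unspecified
    return "design"       # very improbable AND independently specified


# One particular long random coin-toss sequence is wildly improbable
# but matches no independent specification, so it is assigned to chance ...
print(explanatory_filter(2 ** -500, is_specified=False))  # chance
# ... while the same improbability plus an independent specification
# yields the design verdict.
print(explanatory_filter(2 ** -500, is_specified=True))   # design
```

The point of the sketch is simply to show where the weight of the argument falls: everything depends on whether very low probability and an independent specification can jointly be established for the event in question.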

There are several weaknesses in Dembski’s argument. Simply showing that a large quantity of information or complexity is present is insufficient, since complexity can be produced by chance. (An attempt to memorize a random series of numbers quickly shows that randomness and complexity go together.) Even showing that the complexity somehow fits an independent pattern is insufficient, since chance together with law can do this. A computer can take random input and transform it by a regular method, or law, so that the result is unique, or highly complex, on account of the randomness involved, and also highly specified, on account of the regularity involved. Examples of this are solutions to problems found by computer genetic algorithms, or unique music written by computers. In some cases, computers have even found better solutions to problems than humans have. Dr. Adrian Thompson, for example, used a genetic algorithm to evolve a device that could distinguish between the words “go” and “stop” using only 37 logic gates, far fewer than a human engineer would need to solve the problem.2 And while computer-generated music may not yet be great music, it is certainly not mere noise. According to any purely mathematical definition of information, such programs can produce information.3
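
The general principle can be made concrete with a deliberately tiny genetic algorithm. The sketch below is an illustration only, not a reconstruction of Thompson’s experiment: random mutation supplies the novelty (chance), while a fixed scoring rule applied generation after generation (law) supplies the specification, which here is simply “behave like a three-input majority gate.”

```python
import random

INPUTS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

def fitness(table):
    # Specification: how many of the 8 input cases does this
    # 8-bit truth table get right?
    return sum(table[i] == majority(*INPUTS[i]) for i in range(8))

def mutate(table, rate=0.2):
    # Chance: each bit flips with a small probability.
    return [bit ^ (random.random() < rate) for bit in table]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 8:
        break
    # Law: the same selection rule every generation -- keep the better
    # half and refill the population with mutated copies of it.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]

best = max(population, key=fitness)
print("generation", generation, "best table", best, "score", fitness(best))
```

Nothing in the program “knows” the answer in advance; the final truth table matches an independently given specification, yet it was produced by nothing more than random variation filtered by a regular rule.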

In order to apply the design inference to nature, Dembski needs to exclude just such a combination of chance and law: the possibility that new variations are introduced by chance and become specified information through the regular process of natural selection (organisms come to match their environment because those that match it reproduce more). The only way he can do this is to fall back on Behe’s notion of irreducible complexity.4 That is, he has to posit, implicitly or explicitly, that the specified information must be introduced in one fell swoop; thus it cannot be attributed to chance, since the probabilities are far too small, nor can it be attributed to law, since there is no set law to produce it. Dembski begins his argument with a different concept than Behe does, namely that of “information,” but when it comes to applying the argument to real biological systems, his argument more or less coincides with Behe’s.
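
The combination Dembski must rule out can be illustrated with a simple cumulative-selection simulation in the spirit of Dawkins’ well-known “weasel” demonstration (offered here as an illustration of the present point, not an example drawn from Dembski’s text): a specified 28-character target that single-step chance would essentially never produce is reached quickly once chance variation is filtered, generation by generation, by a fixed selection rule.

```python
import random, string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the independently given specification
ALPHABET = string.ascii_uppercase + " "   # 27 possible characters per position

def score(s):
    # How many positions already match the target?
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Chance: each offspring copies the parent with occasional random errors ...
    offspring = ["".join(c if random.random() > 0.04 else random.choice(ALPHABET)
                         for c in parent)
                 for _ in range(100)]
    # ... Law: the best-matching offspring is kept, every generation.
    parent = max(offspring, key=score)

print("target reached in", generation, "generations")
print("probability of hitting it in one step:", (1 / 27) ** 28)
```

The single-step probability, about 10^-40, is exactly the sort of number Dembski’s filter treats as a sure sign of design; the simulation shows why that verdict cannot be drawn unless the stepwise route of chance plus law has first been excluded.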

1. Dawkins, The Blind Watchmaker (New York: W.W. Norton & Co., 1986), 9.

2. Clive Davidson, “Creatures from primordial silicon,” New Scientist 156, no. 2108 (November 15, 1997): 30–35.

3. Dembski addresses a genetic algorithm that learned to play checkers at the expert level, arguing that the information was inserted from the beginning, that the programs were “guided,” because the programmers “kept the criterion of winning constant” (No Free Lunch, 2nd edition [Lanham, MD: Rowman & Littlefield, 2007], 223). But while keeping the “criterion of winning” constant may be part of the regularity such an algorithm presupposes, there is nothing more natural than that the criterion of winning in checkers should remain constant; its constancy does not indicate any design of the solution to the problem by the programmers.

4. See, for example, Dembski, Intelligent Design: The Bridge Between Science & Theology (Downers Grove, IL: InterVarsity Press, 2002), 177, and No Free Lunch, 287 ff. Though Dembski relies on the argument from irreducible complexity, it is not clear whether he perceived its strict necessity for the validity of his argument.
