Book Summary: Science and Evidence for Design in the Universe, by the Discovery Institute scholars

Posted: October 4, 2016 in The Universe and the Origin of Life, Atheism, Book Summaries

In the name of God, the Most Gracious, the Most Merciful

Science and Evidence for Design in the Universe

By: Behe, Dembski & Meyer

Download: (PDF) (DOC)

Prepared by: Mostafa Nasr Qadeeh


1. Introduction

· Philosophers and scientists have disagreed not only about how to distinguish these modes of explanation but also about their very legitimacy. The Epicureans, for instance, gave pride of place to chance. The Stoics, on the other hand, emphasized necessity and design but rejected chance. In the Middle Ages Moses Maimonides contended with the Islamic interpreters of Aristotle who viewed the heavens as, in Maimonides’ words, “the necessary result of natural laws”. Where the Islamic philosophers saw necessity, Maimonides saw design. [Moses Maimonides, The Guide for the Perplexed, trans. M. Friedlander (New York: Dover, 1956), p. 188.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.17]

· The Islamic philosophers, intent on keeping Aristotle pure of theology, said no. Maimonides, arguing from observed contingency in nature, said yes. His argument focused on the distribution of stars in the night sky: What determined that the one small part [of the night sky] should have ten stars, and the other portion should be without any star? . . . The answer to [this] and similar questions is very difficult and almost impossible, if we assume that all emanates from God as the necessary result of certain permanent laws, as Aristotle holds. But if we assume that all this is the result of design, there is nothing strange or improbable; the only question to be asked is this: What is the cause of this design? The answer to this question is that all this has been made for a certain purpose, though we do not know it; there is nothing that is done in vain, or by chance. . . . How, then, can any reasonable person imagine that the position, magnitude, and number of the stars, or the various courses of their spheres, are purposeless, or the result of chance? There is no doubt that every one of these things is . . . in accordance with a certain design; and it is extremely improbable that these things should be the necessary result of natural laws, and not that of design. [Moses Maimonides, The Guide for the Perplexed, trans. M. Friedlander (New York: Dover, 1956), p. 188.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.18]

· In the General Scholium to his Principia, Newton claimed that the stability of the planetary system depended not only on the regular action of the universal law of gravitation but also on the precise initial positioning of the planets and comets in relation to the sun. As he explained: Though these bodies may, indeed, persevere in their orbits by the mere laws of gravity, yet they could by no means have at first derived the regular position of the orbits themselves from those laws. . . . [Thus] this most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful being. [Isaac Newton, Mathematical Principles of Natural Philosophy, trans. A. Motte, ed. F. Cajori (Berkeley, Calif.: University of California Press, 1978), pp. 543-44] Like Maimonides, Newton saw both necessity and design as legitimate explanations but gave short shrift to chance. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.18-19]

· When asked by Napoleon where God fit into his equations of celestial mechanics, Laplace famously replied, “Sire, I have no need of that hypothesis.” In place of a designing intelligence that precisely positioned the heavenly bodies, Laplace proposed his nebular hypothesis, which accounted for the origin of the solar system strictly through natural gravitational forces [Pierre Simon de Laplace, Celestial Mechanics, 4 vols., trans. N. Bowditch (New York: Chelsea, 1966).] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.19]

· Since Laplace’s day, science has largely dispensed with design. Certainly Darwin played a crucial role here by eliminating design from biology. Yet at the same time science was dispensing with design, it was also dispensing with Laplace’s vision of a deterministic universe (recall Laplace’s famous demon who could predict the future and retrodict the past with perfect precision provided that present positions and momenta of particles were fully known). [See the introduction to Pierre Simon de Laplace, A Philosophical Essay on Probabilities, trans. F. W. Truscott and F. L. Emory (New York: Dover, 1996)] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.19]

· To sum up, contemporary science allows a principled distinction between necessity and chance but repudiates design as a possible explanation for natural phenomena. [Owen Gingerich: God’s Planet. Harvard university press, London, 2014. P.20]

2. Rehabilitating Design

· For Aristotle, to understand any phenomenon properly, one had to understand its four causes, namely, its material, efficient, formal, and final cause. [See Aristotle, Metaphysics, bk. 5, chap. 2, in The Basic Works of Aristotle, ed. R. McKeon (New York: Random House, 1941), p. 752.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.20]

· If design is so readily detectable outside science, and if its detectability is one of the key factors keeping scientists honest, why should design be barred from the actual content of science? There is a worry here. The worry is that when we leave the constricted domain of human artifacts and enter the unbounded domain of natural objects, the distinction between design and nondesign cannot be reliably drawn. Consider, for instance, the following remark by Darwin in the concluding chapter of his Origin of Species: Several eminent naturalists have of late published their belief that a multitude of reputed species in each genus are not real species; but that other species are real, that is, have been independently created. . . . Nevertheless they do not pretend that they can define, or even conjecture, which are the created forms of life, and which are those produced by secondary laws. They admit variation as a vera causa in one case, they arbitrarily reject it in another, without assigning any distinction in the two cases.[Charles Darwin, On the Origin of Species (1859; reprint, Cambridge: Harvard University Press, 1964), p. 482] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.22-23]

· There does in fact exist a rigorous criterion for discriminating intelligently from unintelligently caused objects. Many special sciences already use this criterion, though in a pretheoretic form (for example, forensic science, artificial intelligence, cryptography, archeology, and the Search for Extraterrestrial Intelligence). In The Design Inference I identify and make precise this criterion. I call it the complexity-specification criterion. When intelligent agents act, they leave behind a characteristic trademark or signature—what I define as specified complexity. The complexity-specification criterion detects design by identifying this trademark of designed objects. [Strictly speaking, in The Design Inference I develop a “specification / small probability criterion”. This criterion is equivalent to the complexity-specification criterion described here] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.23]

3. The Complexity-Specification Criterion

· To sum up, the complexity-specification criterion detects design by establishing three things: contingency, complexity, and specification. When called to explain an event, object, or structure, we have a decision to make—are we going to attribute it to necessity, chance, or design? According to the complexity-specification criterion, to answer this question is to answer three simpler questions: Is it contingent? Is it complex? Is it specified? Consequently, the complexity-specification criterion can be represented as a flow chart with three decision nodes. I call this flow chart the Explanatory Filter. [See figure on p. 32.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.31]
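
The three decision nodes described above can be sketched as a simple procedure. This is only an illustrative sketch: the predicates are stand-ins for whatever tests an investigator actually applies, since the book presents the Explanatory Filter as a flow chart rather than as code.

```python
# A minimal sketch of the Explanatory Filter as a decision procedure.
# The three boolean inputs stand in for the informal tests described in the text.

def explanatory_filter(is_contingent, is_complex, is_specified):
    """Route an event to necessity, chance, or design via three decision nodes."""
    if not is_contingent:   # node 1: not contingent -> explained by natural law
        return "necessity"
    if not is_complex:      # node 2: not improbable enough -> attribute to chance
        return "chance"
    if not is_specified:    # node 3: complex but patternless -> still chance
        return "chance"
    return "design"         # contingent, complex, and specified

# A repetitive sequence is contingent but not complex:
print(explanatory_filter(True, False, False))  # chance
# A meaningful English sentence is contingent, complex, and specified:
print(explanatory_filter(True, True, True))    # design
```

Note that design is only ever reached by elimination: both necessity and chance must fail first, which is exactly the ordering the flow chart imposes.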

5. False Negatives and False Positives

· Consider first the problem of false negatives. When the complexity-specification criterion fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer is No. For determining that something is not designed, this criterion is not reliable. False negatives are a problem for it. This problem of false negatives, however, is endemic to detecting intelligent causes. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.33]

· One difficulty is that intelligent causes can mimic necessity and chance, thereby rendering their actions indistinguishable from such unintelligent causes. A bottle of ink may fall off a cupboard and spill onto a sheet of paper. Alternatively, a human agent may deliberately take a bottle of ink and pour it over a sheet of paper. The resulting inkblot may look identical in both instances but, in the one case, results by chance, in the other by design. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.33]

· Another difficulty is that detecting intelligent causes requires background knowledge on our part. It takes an intelligent cause to know an intelligent cause. But if we do not know enough, we will miss it. Consider a spy listening in on a communication channel whose messages are encrypted. Unless the spy knows how to break the cryptosystem used by the parties on whom he is eavesdropping, any messages passing the communication channel will be unintelligible and might in fact be meaningless. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.33-34]

· This objection is readily met. The fact is that the complexity-specification criterion does not yield design all that easily, especially if the complexities are kept high (or, correspondingly, the probabilities are kept small). It is simply not the case that unusual and striking coincidences automatically yield design. Martin Gardner is no doubt correct when he notes, “The number of events in which you participate for a month, or even a week, is so huge that the probability of noticing a startling correlation is quite high, especially if you keep a sharp outlook.” [Martin Gardner, “Arthur Koestler: Neoplatonism Rides Again”, World, August 1, 1972, pp. 87-89.] The implication he means to draw, however, is incorrect, namely, that therefore startling correlations / coincidences may uniformly be relegated to chance. Yes, the fact that the Shoemaker-Levy comet crashed into Jupiter exactly twenty-five years to the day after the Apollo 11 moon landing is a coincidence best referred to chance. But the fact that Mary Baker Eddy’s writings on Christian Science bear a remarkable resemblance to Phineas Parkhurst Quimby’s writings on mental healing is a coincidence that cannot be explained by chance and is properly explained by positing Quimby as a source for Eddy.[Walter Martin, The Kingdom of the Cults, rev. ed. (Minneapolis: Bethany House, 1985), pp. 127-30.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.37]
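
Gardner's point can be put in numbers. Given many independent opportunities for a rare match, the chance that *some* startling coincidence turns up is high; the figures below (events per month, number of coincidence types) are illustrative assumptions, not Gardner's.

```python
# Probability that at least one of n independent opportunities produces a match,
# each with small probability p: P = 1 - (1 - p)^n

def prob_some_coincidence(p, n):
    """Chance of observing at least one rare coincidence across n opportunities."""
    return 1 - (1 - p) ** n

# A "one in a million" coincidence, checked across ~10,000 noticed events and
# ~100 kinds of possible match (illustrative numbers):
print(round(prob_some_coincidence(1e-6, 10_000 * 100), 3))  # 0.632
```

This is why an unusual coincidence, by itself, does not trigger the design inference: the filter demands that the improbability be small even after accounting for all the opportunities for a match.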

· There is one last potential counterexample we need to consider, and that is the possibility of an evolutionary algorithm producing specified complexity. By an evolutionary algorithm I mean any clearly defined procedure that generates contingency via some chance process and then sifts the so-generated contingency via some law-like (that is, necessitarian) process. The Darwinian mutation-selection mechanism, neural nets, and genetic algorithms all fall within this definition of evolutionary algorithms. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.37-38]

· Now, it is widely held that evolutionary algorithms are just the means for generating specified complexity apart from design. Yet this widely held view is incorrect. The problem is that evolutionary algorithms cannot generate complexity. This may seem counterintuitive, but consider a well-known example by Richard Dawkins in which he purports to show how a cumulative selection process acting on chance can generate specified complexity. [Richard Dawkins, The Blind Watchmaker (New York: Norton, 1986), pp. 47-48] He starts with the target sequence. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.38]

METHINKS·IT·IS·LIKE·A·WEASEL

· If we tried to attain this target sequence by pure chance (for example, by randomly shaking out Scrabble pieces), the probability of getting it on the first try would be around 1 in 10^40, and correspondingly it would take on average about 10^40 tries to stand a better than even chance of getting it. Thus, if we depended on pure chance to attain this target sequence, we would in all likelihood be unsuccessful (granted, this 1 in 10^40 improbability falls short of my universal probability bound of 1 in 10^150, but for practical purposes 1 in 10^40 is small enough to preclude chance and, yes, implicate design). As a problem for pure chance, attaining Dawkins’ target sequence is an exercise in generating specified complexity, and it becomes clear that pure chance simply is not up to the task. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.38]
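
The 1-in-10^40 figure can be checked directly, assuming the standard 27-symbol alphabet (26 capital letters plus a space) for the 28-character target:

```python
# Checking the "1 in 10^40" figure: 28 characters drawn blindly from a
# 27-symbol alphabet gives 27^28 equally likely sequences.
import math

target = "METHINKS IT IS LIKE A WEASEL"
alphabet_size = 27                      # A-Z plus the space character
trials = alphabet_size ** len(target)   # number of equally likely sequences

print(len(target))             # 28
print(math.log10(trials))      # ~40.1, i.e. about 1 chance in 10^40
```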

· But a probability amplifier is also a complexity attenuator. Recall that the “complexity” in the complexity-specification criterion coincides with improbability. Dawkins’ evolutionary algorithm vastly increases the probability of getting the target sequence but in so doing vastly decreases the complexity inherent in the target sequence. The target sequence, if it had to be obtained by randomly throwing Scrabble pieces, would be highly improbable and on average would require a vast number of iterations before it could be obtained. But with Dawkins’ evolutionary algorithm, the probability of obtaining the target sequence is high given only a few iterations. In effect, Dawkins’ evolutionary algorithm skews the probabilities so that what at first blush seems highly improbable or complex is nothing of the sort. It follows that evolutionary algorithms cannot generate true complexity but only the appearance of complexity. And since they cannot generate complexity, they cannot generate specified complexity either. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.40]
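
The cumulative-selection procedure under discussion can be reconstructed as follows. This is a sketch, not Dawkins' published program: the population size and mutation rate are illustrative assumptions, and retaining the parent each generation (so progress is never lost) is a common variant of the algorithm.

```python
# A minimal reconstruction of the "weasel" evolutionary algorithm:
# chance generates variants, a law-like rule keeps the best match.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Law-like sifting step: count positions that match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Chance step: copy the parent, altering each letter with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(population=100, seed=0):
    """Return the number of generations cumulative selection needs."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(population)]
        # Keep the best of parent and offspring, so matches are never lost.
        parent = max(offspring + [parent], key=score)
        generations += 1
    return generations

print(weasel())  # far fewer generations than the ~10^40 expected blind draws
```

The probability amplification the text describes is visible here: the target is locked in piecewise, which is precisely why the author argues the procedure does not generate the improbability it appears to overcome.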

6. Why the Criterion Works

· My second argument for showing that specified complexity reliably detects design considers the nature of intelligent agency and, specifically, what it is about intelligent agents that makes them detectable. Even though induction confirms that specified complexity is a reliable criterion for detecting design, induction does not explain why this criterion works. To see why the complexity-specification criterion is exactly the right instrument for detecting design, we need to understand what it is about intelligent agents that makes them detectable in the first place. The principal characteristic of intelligent agency is choice. Even the etymology of the word “intelligent” makes this clear. “Intelligent” derives from two Latin words, the preposition inter, meaning between, and the verb lego, meaning to choose or select. Thus, according to its etymology, intelligence consists in choosing between. For an intelligent agent to act is therefore to choose from a range of competing possibilities. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, P.41]

7. Conclusion

· Albert Einstein once said that in science things should be made as simple as possible but no simpler. The materialistic philosophy of science that dominated the end of the nineteenth and much of the twentieth century insists that all phenomena can be explained simply by reference to chance and / or necessity. Nevertheless, this essay has suggested, in effect, that materialistic philosophy portrays reality too simply. There are some entities and events that we cannot and, indeed, do not explain by reference to these twin modes of materialistic causation. Specifically, I have shown that when we encounter entities or events that manifest the joint properties of complexity and specification we routinely, and properly, attribute them, not to chance and / or physical / chemical necessity, but to intelligent design, that is, to mind rather than matter. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.44]

· Clearly, we find the complexity-specification criteria in objects that other human minds have designed. Nevertheless, this essay has not sought to answer the question of whether the criteria that reliably indicate the activity of a prior intelligent mind exist in the natural world, that is, in things that we know humans did not design, such as living organisms or the fundamental architecture of the cosmos. In short, I have not addressed the empirical question of whether the natural world, as opposed to the world of human technology, also bears evidence of intelligent design. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.44-45]

Evidence for Design in Physics and Biology

From The Origin of the Universe to The Origin Of Life

1. Introduction

· As an illustration of the concepts of complexity and specification, consider the following three sets of symbols:

“inetehnsdysk]idmhcpew, ms.s/a”

“Time and tide wait for no man.”

“ABABABABABABABABABAB”

Both the first and second sequences shown above are complex because both defy reduction to a simple rule. Each represents a highly irregular, aperiodic, and improbable sequence of symbols. The third sequence is not complex but is instead highly ordered and repetitive. Of the two complex sequences, only the second exemplifies a set of independent functional requirements—that is, only the second sequence is specified. English has a number of functional requirements. For example, to convey meaning in English one must employ existing conventions of vocabulary (associations of symbol sequences with particular objects, concepts, or ideas), syntax, and grammar (such as “every sentence requires a subject and a verb”). When arrangements of symbols “match” or utilize existing vocabulary and grammatical conventions (that is, functional requirements) communication can occur. Such arrangements exhibit “specification”. The second sequence (“Time and tide wait for no man”) clearly exhibits such a match between itself and the preexisting requirements of English vocabulary and grammar. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, pp.53-54]

· Thus, of the three sequences above only the second manifests complexity and specification, both of which must be present for us to infer a designed system according to Dembski’s theory. The third sequence lacks complexity, though it does exhibit a simple pattern, a specification of sorts. The first sequence is complex but not specified, as we have seen. Only the second sequence, therefore, exhibits both complexity and specification. Thus, according to Dembski’s theory, only the second sequence indicates an intelligent cause—as indeed our intuition tells us. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.54]
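
The contrast among the three sequences can be roughly illustrated in code. This is my informal proxy, not Dembski's formal measure: a general-purpose compressor distinguishes the ordered sequence from the two complex ones, while a toy English word list stands in for the "independent functional requirements" that mark specification.

```python
# Rough proxies: zlib compression ratio for order vs. complexity, and a toy
# lexicon as a stand-in for independent functional requirements (specification).
import zlib

sequences = {
    "random":    "inetehnsdysk]idmhcpew, ms.s/a",
    "specified": "Time and tide wait for no man.",
    "ordered":   "ABABABABABABABABABAB",
}

ENGLISH_WORDS = {"time", "and", "tide", "wait", "for", "no", "man"}  # toy lexicon

for name, s in sequences.items():
    # Highly ordered text compresses well; complex text does not.
    ratio = len(zlib.compress(s.encode())) / len(s)
    words = [w.strip(".,").lower() for w in s.split()]
    specified = bool(words) and all(w in ENGLISH_WORDS for w in words)
    print(f"{name:9s} compression ratio {ratio:.2f}  specified={specified}")
```

Only the "specified" sequence is both incompressible (complex) and a match to the independent lexicon, which mirrors the conclusion in the text: compression alone separates order from complexity, but specification requires a match against a pattern given independently of the sequence itself.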

· Dembski’s work shows that detecting the activity of intelligent agency (“inferring design”) represents an indisputably common form of rational activity. His work also suggests that the properties of complexity and specification reliably indicate the prior activity of an intelligent cause. This essay will build on this insight to address another question. It will ask: Are the criteria that indicate intelligent design present in features of nature that clearly preexist the advent of humans on earth? Are the features that indicate the activity of a designing intelligence present in the physical structure of the universe or in the features of living organisms? If so, does intelligent design still constitute the best explanation of these features, or might naturalistic explanations based upon chance and / or physico-chemical necessity constitute a better explanation? This paper will evaluate the merits of the design argument in light of developments in physics and biology as well as Dembski’s work on “the design inference”. I will employ Dembski’s comparative explanatory method (the “explanatory filter”) to evaluate the competing explanatory power of chance, necessity, and design with respect to evidence in physics and biology. I will argue that intelligent design (rather than chance, necessity, or a combination of the two) constitutes the best explanation of these phenomena. I will, thus, suggest an empirical, as well as a theoretical, basis for resuscitating the design argument. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, pp.55-56]

2.1 Evidence of Design in Physics: Anthropic “Fine Tuning”

· During the last forty years, however, developments in physics and cosmology have placed the word “design” back in the scientific vocabulary. Beginning in the 1960s, physicists unveiled a universe apparently fine-tuned for the possibility of human life. They discovered that the existence of life in the universe depends upon a highly improbable but precise balance of physical factors. [K. Giberson, “The Anthropic Principle”, Journal of Interdisciplinary Studies 9 (1997): 63-90, and response by Steven Yates, pp. 91-104.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.56]

· The constants of physics, the initial conditions of the universe, and many other of its features appear delicately balanced to allow for the possibility of life. Even very slight alterations in the values of many factors, such as the expansion rate of the universe, the strength of gravitational or electromagnetic attraction, or the value of Planck’s constant, would render life impossible. Physicists now refer to these factors as “anthropic coincidences” (because they make life possible for man) and to the fortunate convergence of all these coincidences as the “fine tuning of the universe”. Given the improbability of the precise ensemble of values represented by these constants, and their specificity relative to the requirements of a life-sustaining universe, many physicists have noted that the fine tuning strongly suggests design by a preexistent intelligence. As well-known British physicist Paul Davies has put it, “the impression of design is overwhelming.” [P. Davies, The Cosmic Blueprint (New York: Simon and Schuster, 1988), p. 203] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.56-57]

· Imagine that you are a cosmic explorer who has just stumbled into the control room of the whole universe. There you discover an elaborate “universe-creating machine”, with rows and rows of dials, each with many possible settings. As you investigate, you learn that each dial represents some particular parameter that has to be calibrated with a precise value in order to create a universe in which life can exist. One dial represents the possible settings for the strong nuclear force, one for the gravitational constant, one for Planck’s constant, one for the ratio of the neutron mass to the proton mass, one for the strength of electromagnetic attraction, and so on. As you, the cosmic explorer, examine the dials, you find that they could easily have been tuned to different settings. Moreover, you determine by careful calculation that if any of the dial settings were even slightly altered, life would cease to exist. Yet for some reason each dial is set at just the exact value necessary to keep the universe running. What do you infer about the origin of these finely tuned dial settings? [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe, Ignatius Press, San Francisco 2000, p.57]

· Not surprisingly, physicists have been asking the same question. As astronomer George Greenstein mused, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that suddenly, without intending to, we have stumbled upon scientific proof for the existence of a Supreme Being? Was it God who stepped in and so providentially crafted the cosmos for our benefit?”[G. Greenstein, The Symbiotic Universe: Life and Mind in the Cosmos (New York: Morrow, 1988), pp. 26-27] For many scientists,[Greenstein himself does not favor the design hypothesis. Instead, he favors the so-called “participatory universe principle” or “PAP”. PAP attributes the apparent design of the fine tuning of the physical constants to the universe’s (alleged) need to be observed in order to exist. As he says, the universe “brought forth life in order to exist . . . that the very Cosmos does not exist unless observed” (ibid., p. 223).] the design hypothesis seems the most obvious and intuitively plausible answer to this question. As Sir Fred Hoyle commented, “a commonsense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature.”[F. Hoyle, “The Universe: Past and Present Reflections”, Annual Review of Astronomy and Astrophysics 20 (1982): 16.] Many physicists now concur. They would argue that, given the improbability and yet the precision of the dial settings, design seems the most plausible explanation for the anthropic fine tuning. Indeed, it is precisely the combination of the improbability (or complexity) of the settings and their specificity relative to the conditions required for a life-sustaining universe that seems to trigger the “commonsense” recognition of design. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.57-58]

2.2 Anthropic Fine Tuning and the Explanatory Filter

· Yet several other types of interpretations have been proposed: (1) the so-called weak anthropic principle, which denies that the fine tuning needs explanation; (2) explanations based upon natural law; and (3) explanations based upon chance. Each of these approaches denies that the fine tuning of the universe resulted from an intelligent agent. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.58]

· Of the three options above, perhaps the most popular approach, at least initially, was the “weak anthropic principle” (WAP). Nevertheless, the WAP has recently encountered severe criticism from philosophers of physics and cosmology. Advocates of WAP claimed that if the universe were not fine-tuned to allow for life, then humans would not be here to observe it. Thus, they claimed, the fine tuning requires no explanation. Yet as John Leslie and William Craig have argued, the origin of the fine tuning does require explanation. [W. Craig, “Cosmos and Creator”, Origins & Design 20, no. 2 (spring 1996): 23.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.58-59]

· Though we humans should not be surprised to find ourselves living in a universe suited for life (by definition), we ought to be surprised to learn that the conditions necessary for life are so vastly improbable. Leslie likens our situation to that of a blindfolded man who has discovered that, against all odds, he has survived a firing squad of one hundred expert marksmen.[J. Leslie, “Anthropic Principle, World Ensemble, Design”, American Philosophical Quarterly 19, no. 2 (1982): 150.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.59]

· Though his continued existence is certainly consistent with all the marksmen having missed, it does not explain why the marksmen actually did miss. In essence, the weak anthropic principle wrongly asserts that the statement of a necessary condition of an event eliminates the need for a causal explanation of that event. Oxygen is a necessary condition of fire, but saying so does not provide a causal explanation of the San Francisco fire. Similarly, the fine tuning of the physical constants of the universe is a necessary condition for the existence of life, but that does not explain, or eliminate the need to explain, the origin of the fine tuning. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.59]

· While some scientists have denied that the fine-tuning coincidences require explanation (with the WAP), others have tried to find various naturalistic explanations for them. Of these, appeals to natural law have proven the least popular for a simple reason. The precise “dial settings” of the different constants of physics are specific features of the laws of nature themselves. For example, the gravitational constant G determines just how strong gravity will be, given two bodies of known mass separated by a known distance. The constant G is a term within the equation that describes gravitational attraction. In this same way, all the constants of the fundamental laws of physics are features of the laws themselves. Therefore, the laws cannot explain these features; they comprise the features that we need to explain. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.59]
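
The point that G is a term *within* the law can be made concrete with Newton's law of gravitation. The law's form fixes how force varies with mass and distance; the value of G is a separate "dial setting" that the law itself does not explain. (The Earth-Moon figures below are standard rounded values.)

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# The functional form is the law; the value of G is a constant inside it.

G = 6.674e-11  # gravitational constant, N*m^2/kg^2 (rounded)

def gravitational_force(m1, m2, r):
    """Force in newtons between masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

# Force between the Earth and the Moon:
earth, moon, distance = 5.972e24, 7.348e22, 3.844e8
print(f"{gravitational_force(earth, moon, distance):.2e} N")  # ~1.98e20 N
```

Changing G while keeping the equation's form intact yields a different universe governed by the "same" law, which is exactly why appeals to natural law cannot explain the constant's particular value.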

· As Davies has observed, the laws of physics “seem themselves to be the product of exceedingly ingenious design”. [P. Davies, Superforce: The Search for a Grand Unified Theory of Nature (New York: Simon and Schuster, 1984), p. 243] Further, natural laws by definition describe phenomena that conform to regular or repetitive patterns. Yet the idiosyncratic values of the physical constants and initial conditions of the universe constitute a highly irregular and nonrepetitive ensemble. It seems unlikely, therefore, that any law could explain why all the fundamental constants have exactly the values they do—why, for example, the gravitational constant should have exactly the value 6.67 x 10^-11 Newton-meters^2 per kilogram^2 and the permittivity constant in Coulomb’s law the value 8.85 x 10^-12 Coulombs^2 per Newton-meter^2, and the electron charge to mass ratio 1.76 x 10^11 Coulombs per kilogram, and Planck’s constant 6.63 x 10^-34 Joule-seconds, and so on.[D. Halliday, R. Resnick, and G. Walker, Fundamentals of Physics, 5th ed. (New York: John Wiley and Sons, 1997), p. A23.] These values specify a highly complex array. As a group, they do not seem to exhibit a regular pattern that could in principle be subsumed or explained by natural law. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.60]

· Explaining anthropic coincidences as the product of chance has proven more popular, but this has several severe liabilities as well. First, the immense improbability of the fine tuning makes straightforward appeals to chance untenable. Physicists have discovered more than thirty separate physical or cosmological parameters that require precise calibration in order to produce a life-sustaining universe.[J. Barrow and F. Tipler, The Anthropic Cosmological Principle (Oxford: Oxford University Press, 1986), pp. 295-356, 384-444, 510-56; J. Gribbin and M. Rees, Cosmic Coincidences(London: Black Swan, 1991), pp. 3-29, 241-69; H. Ross, “The Big Bang Model Refined by Fire”, in W. A. Dembski, ed., Mere Creation: Science, Faith and Intelligent Design (Downers Grove, Ill.: InterVarsity Press, 1998), pp. 372-81.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.60]

· Michael Denton, in his book Nature’s Destiny (1998), has documented many other necessary conditions for specifically human life from chemistry, geology, and biology. Moreover, many individual parameters exhibit an extraordinarily high degree of fine tuning. The expansion rate of the universe must be calibrated to one part in 10^60.[A. Guth and M. Sher, “Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems”, Physical Review 23, no. 2 (1981): 348.] A slightly more rapid rate of expansion—by one part in 10^60—would have resulted in a universe too diffuse in matter to allow stellar formation.[For those unfamiliar with exponential notation, the number 10^60 is the same as 10 multiplied by itself 60 times, or 1 with 60 zeros written after it.] An even slightly less rapid rate of expansion—by the same factor—would have produced an immediate gravitational recollapse. The force of gravity itself requires fine tuning to one part in 10^40.[P. Davies, God and the New Physics (New York: Simon and Schuster, 1983), p. 188.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.60]

· Thus, our cosmic explorer finds himself confronted not only with a large ensemble of separate dial settings but with very large dials containing a vast array of possible settings, only very few of which allow for a life-sustaining universe. In many cases, the odds of arriving at a single correct setting by chance, let alone all the correct settings, turn out to be virtually infinitesimal. Oxford physicist Roger Penrose has noted that a single parameter, the so-called “original phase-space volume”, required such precise fine tuning that the “Creator’s aim must have been [precise] to an accuracy of one part in 10^(10^123)” (that is, 10 raised to the power 10^123). Penrose goes on to remark that “one could not possibly even write the number down in full . . . [since] it would be ‘1’ followed by 10^123 successive ‘0’s!”—more zeros than the number of elementary particles in the entire universe. Such is, he concludes, “the precision needed to set the universe on its course”.[R. Penrose, The Emperor’s New Mind (New York: Oxford, 1989), p. 344.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.60-61]
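Numbers of this magnitude can only be handled through their exponents. A minimal Python sketch, assuming the commonly cited order-of-magnitude estimate of ~10^80 elementary particles in the observable universe (a figure not stated in the passage itself):

```python
# Penrose's estimate is 1 part in 10^(10^123). The number itself cannot
# be written out or stored; we can only compare exponents.

# Number of zeros needed to write 10^(10^123) in full: 10^123 zeros.
zeros_needed_exponent = 123        # exponent of the zero count, i.e. 10^123

# Rough estimate of elementary particles in the observable universe
# (assumed order of magnitude, for illustration only): ~10^80.
particles_exponent = 80

# Even the *count of zeros* (10^123) dwarfs the particle count (10^80):
assert zeros_needed_exponent > particles_exponent
print(f"Zeros required: 10^{zeros_needed_exponent}; "
      f"particles available: ~10^{particles_exponent}")
```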

· To circumvent such vast improbabilities, some scientists have postulated the existence of a quasi-infinite number of parallel universes. By doing so, they increase the amount of time and number of possible trials available to generate a life-sustaining universe and thus increase the probability of such a universe arising by chance. In these “many worlds” or “possible worlds” scenarios—which were originally developed as part of the “Everett interpretation” of quantum physics and the inflationary Big Bang cosmology of Andrei Linde—any event that could happen, however unlikely it might be, must happen somewhere in some other parallel universe.[A. Linde, “The Self-Reproducing Inflationary Universe”, Scientific American 271 (November 1994): 48-55] So long as life has a positive (greater than zero) probability of arising, it had to arise in some possible world. Therefore, sooner or later some universe had to acquire life-sustaining characteristics. Clifford Longley explains that according to the many-worlds hypothesis: There could have been millions and millions of different universes created each with different dial settings of the fundamental ratios and constants, so many in fact that the right set was bound to turn up by sheer chance. We just happened to be the lucky ones.[C. Longley, “Focusing on Theism”, London Times, January 21, 1989, p. 10] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.61]

· According to the many-worlds hypothesis, our existence in the universe only appears vastly improbable, since calculations about the improbability of the anthropic coincidences arising by chance only consider the “probabilistic resources” (roughly, the amount of time and the number of possible trials) available within our universe and neglect the probabilistic resources available from the parallel universes. According to the many-worlds hypothesis, chance can explain the existence of life in the universe after all. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.62]
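The notion of "probabilistic resources" can be made precise: with n independent trials, each succeeding with probability p, the chance of at least one success is 1 − (1 − p)^n. A hedged sketch (the specific numbers below are illustrative, not figures from the book):

```python
import math

def prob_at_least_one(p: float, n_trials: float) -> float:
    """Probability of at least one success in n independent trials,
    computed in log-space to avoid floating-point underflow."""
    # 1 - (1 - p)^n  ==  -expm1(n * log1p(-p))
    return -math.expm1(n_trials * math.log1p(-p))

p = 1e-60  # a tiny per-trial probability, as in fine-tuning discussions

# With only the trials available inside one universe, success is hopeless:
print(prob_at_least_one(p, 1e9))    # effectively zero

# Multiply the trials (many parallel universes) and success becomes likely:
print(prob_at_least_one(p, 1e61))   # close to 1
```

This is exactly the move the many-worlds hypothesis makes: it leaves p untouched and inflates n until the product is no longer small.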

· The many-worlds hypothesis now stands as the most popular naturalistic explanation for the anthropic fine tuning and thus warrants detailed comment. Though clearly ingenious, the many-worlds hypothesis suffers from an overriding difficulty: we have no evidence for any universes other than our own. Moreover, since possible worlds are by definition causally inaccessible to our own world, there can be no evidence for their existence except that they allegedly render probable otherwise vastly improbable events. Of course, no one can observe a designer directly either, although a theistic designer—that is, God—is not causally disconnected from our world. Even so, recent work by philosophers of science such as Richard Swinburne, John Leslie, Bill Craig,[W. Craig, “Barrow and Tipler on the Anthropic Principle v. Divine Design”, British Journal for the Philosophy of Science 38 (1988): 389-95] Jay Richards,[J. W. Richards, “Many Worlds Hypotheses: A Naturalistic Alternative to Design”, Perspectives on Science and Christian Belief 49, no. 4 (1997): 218-27.] and Robin Collins have established several reasons for preferring the (theistic) design hypothesis to the naturalistic many-worlds hypothesis. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.62]

2.3 Theistic Design: A Better Explanation?

· First, all current cosmological models involving multiple universes require some kind of mechanism for generating universes. Yet such a “universe generator” would itself require precisely configured physical states, thus begging the question of its initial design. As Collins describes the dilemma: In all currently worked out proposals for what this universe generator could be—such as the oscillating big bang and the vacuum fluctuation models . . .—the “generator” itself is governed by a complex set of laws that allow it to produce universes. It stands to reason, therefore, that if these laws were slightly different the generator probably would not be able to produce any universes that could sustain life.[R. Collins, “The Fine-Tuning Design Argument: A Scientific Argument for the Existence of God”, in M. Murray, ed., Reason for the Hope Within (Grand Rapids, Mich.: Eerdmans, 1999), p. 61] Indeed, from experience we know that some machines (or factories) can produce other machines. But our experience also suggests that such machine-producing machines themselves require intelligent design. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.62-63]

· Second, as Collins argues, all things being equal, we should prefer hypotheses “that are natural extrapolations from what we already know” about the causal powers of various kinds of entities. Yet when it comes to explaining the anthropic coincidences, the multiple-worlds hypothesis fails this test, whereas the theistic-design hypothesis does not. To illustrate, Collins asks his reader to imagine a paleontologist who posits the existence of an electromagnetic “dinosaur-bone-producing field”, as opposed to actual dinosaurs, as the explanation for the origin of large fossilized bones. While certainly such a field qualifies as a possible explanation for the origin of the fossil bones, we have no experience of such fields or of their producing fossilized bones. Yet we have observed animal remains in various phases of decay and preservation in sediments and sedimentary rock. Thus, most scientists rightly prefer the actual dinosaur hypothesis over the apparent dinosaur hypothesis (that is, the “dinosaur-bone-producing-field” hypothesis) as an explanation for the origin of fossils. In the same way, Collins argues, we have no experience of anything like a “universe generator” (that is not itself designed; see above) producing finely tuned systems or infinite and exhaustively random ensembles of possibilities. Yet we do have extensive experience of intelligent agents producing finely tuned machines such as Swiss watches. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.63]

· Thus, Collins concludes, when we postulate “a supermind” (God) to explain the fine tuning of the universe, we are extrapolating from our experience of the causal powers of known entities (that is, intelligent humans), whereas when we postulate the existence of an infinite number of separate universes, we are not. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.63-64]

· Third, as Craig has shown, for the many-worlds hypothesis to suffice as an explanation for anthropic fine tuning, it must posit an exhaustively random distribution of physical parameters and thus an infinite number of parallel universes to ensure that a life-producing combination of factors will eventually arise. Yet neither of the physical models that allow for a multiple-universe interpretation—Everett’s quantum-mechanical model or Linde’s inflationary cosmology—provides a compelling justification for believing that such an exhaustively random and infinite number of parallel universes exists; each instead yields only a finite and nonrandom set. [Craig, “Cosmos”, p. 24] The Everett model, for example, only generates an ensemble of material states, each of which exists within a parallel universe that has the same set of physical laws and constants as our own. Since the physical constants do not vary “across universes”, Everett’s model does nothing to increase the probability of the precise fine tuning of constants in our universe arising by chance. Though Linde’s model does envision a variable ensemble of physical constants in each of his individual “bubble universes”, his model fails to generate either an exhaustively random set of such conditions or the infinite number of universes required to render probable the life-sustaining fine tuning of our universe. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.64]

· Fourth, Richard Swinburne argues that the theistic-design hypothesis constitutes a simpler and less ad hoc hypothesis than the many-worlds hypothesis. [R. Swinburne, “Argument from the Fine-Tuning of the Universe”, in J. Leslie, ed., Physical Cosmology and Philosophy (New York: Macmillan, 1990), pp. 154-73] He notes that virtually the only evidence for many worlds is the very anthropic fine tuning the hypothesis was formulated to explain. On the other hand, the theistic-design hypothesis, though also only supported by indirect evidences, can explain many separate and independent features of the universe that the many-worlds scenario cannot, including the origin of the universe itself, the mathematical beauty and elegance of physical laws, and personal religious experience. Swinburne argues that the God hypothesis is a simpler as well as a more comprehensive explanation because it requires the postulation of only one explanatory entity, rather than the multiple entities—including the finely tuned universe generator and the infinite number of causally separate universes—required by the many-worlds hypothesis. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.64-65]

· Swinburne’s and Collins’s arguments suggest that few reasonable people would accept such an unparsimonious and farfetched explanation as the many-worlds hypothesis in any other domain of life. That some scientists dignify the many-worlds hypothesis with serious discussion may speak more to an unimpeachable commitment to naturalistic philosophy than to any compelling merit for the idea itself. As Clifford Longley noted in the London Times in 1989,[Originally the many-worlds hypothesis was proposed for strictly scientific reasons as a solution to the so-called quantum-measurement problem in physics. Though its efficacy as an explanation within quantum physics remains controversial among physicists, its use there does have an empirical basis. More recently, however, it has been employed to serve as an alternative non-theistic explanation for the fine tuning of the physical constants. This use of the MWH does seem to betray a metaphysical desperation.] the use of the many-worlds hypothesis to avoid the theistic-design argument often seems to betray a kind of special pleading and metaphysical desperation. As Longley explains: The [anthropic-design argument] and what it points to is of such an order of certainty that in any other sphere of science, it would be regarded as settled. To insist otherwise is like insisting that Shakespeare was not written by Shakespeare because it might have been written by a billion monkeys sitting at a billion keyboards typing for a billion years. So it might. But the sight of scientific atheists clutching at such desperate straws has put new spring in the step of theists.[Longley, “Focusing”, p. 10.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.65]

· Indeed, it has. As the twentieth century comes to a close, the design argument has reemerged from its premature retirement at the hands of biologists in the nineteenth century. Physics, astronomy, cosmology, and chemistry have each revealed that life depends on a very precise set of design parameters, which, as it happens, have been built into our universe. The fine-tuning evidence has led to a persuasive reformulation of the design hypothesis, even if it does not constitute a formal deductive proof of God’s existence. Physicist John Polkinghorne has written that, as a result, “we are living in an age where there is a great revival of natural theology taking place. That revival of natural theology is taking place not on the whole among theologians, who have lost their nerve in that area, but among the scientists.” [J. Polkinghorne, “So Finely Tuned a Universe of Atoms, Stars, Quanta & God”, Commonweal, August 16, 1996, p. 16.] Polkinghorne also notes that this new natural theology generally has more modest ambitions than the natural theology of the Middle Ages. Indeed, scientists arguing for design based upon evidence of anthropic fine tuning tend to do so by inferring an intelligent cause as a “best explanation”, rather than by making a formal deductive proof of God’s existence. (See Appendix, pp. 213-34, “Fruitful Interchange or Polite Chitchat: The Dialogue between Science and Theology”.) Indeed, the foregoing analysis of competing types of causal explanations for the anthropic fine tuning suggests intelligent design precisely as the best explanation for its origin. Thus, fine-tuning evidence may support belief in God’s existence, even if it does not “prove” it in a deductively certain way. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.65-66]

3.2 Molecular Machines

· Nevertheless, the interest in design has begun to spread to biology. For example, in 1998 the leading journal Cell featured a special issue on “Macromolecular Machines”. Molecular machines are incredibly complex devices that all cells use to process information, build proteins, and move materials back and forth across their membranes. Bruce Alberts, President of the National Academy of Sciences, introduced this issue with an article entitled “The Cell as a Collection of Protein Machines”. In it, he stated: We have always underestimated cells. . . . The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. [B. Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists”, Cell 92 (February 8, 1998): 291.] Alberts notes that molecular machines strongly resemble machines designed by human engineers, although as an orthodox neo-Darwinian he denies any role for actual, as opposed to apparent, design in the origin of these systems. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.66-67]

· In recent years, however, a formidable challenge to this view has arisen within biology. In his book Darwin’s Black Box (1996), Lehigh University biochemist Michael Behe shows that neo-Darwinists have failed to explain the origin of complex molecular machines in living systems. For example, Behe looks at the ion-powered rotary engines that turn the whip-like flagella of certain bacteria.[M. Behe, Darwin’s Black Box (New York: Free Press, 1996), pp. 51-73.] He shows that the intricate machinery in this molecular motor—including a rotor, a stator, O-rings, bushings, and a drive shaft—requires the coordinated interaction of some forty complex protein parts. Yet the absence of any one of these proteins results in the complete loss of motor function. To assert that such an “irreducibly complex” engine emerged gradually in a Darwinian fashion strains credulity. According to Darwinian theory, natural selection selects functionally advantageous systems.[According to the neo-Darwinian theory of evolution, organisms evolved by natural selection acting on random genetic mutations. If these genetic mutations help the organism to survive better, they will be preserved in subsequent generations, while those without the mutation will die off faster. For instance, a Darwinian might hypothesize that giraffes born with longer necks were able to reach the leaves of trees more easily, and so had greater survival rates, than giraffes with shorter necks. With time, the necks of giraffes grew longer and longer in a step-by-step process because natural selection favored longer necks. But the intricate machine-like systems in the cell could not have been selected in such a step-by-step process, because not every step in the assembly of a molecular machine enables the cell to survive better. Only when the molecular machine is fully assembled can it function and thus enable a cell to survive better than cells that do not have it.] 
Yet motor function only ensues after all the necessary parts have independently self-assembled—an astronomically improbable event. Thus, Behe insists that Darwinian mechanisms cannot account for the origin of molecular motors and other “irreducibly complex systems” that require the coordinated interaction of multiple independent protein parts. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.67-68]

· To emphasize his point, Behe has conducted a literature search of relevant technical journals. [Behe, Darwin’s, pp. 165-86.] He has found a complete absence of gradualistic Darwinian explanations for the origin of the systems and motors that he discusses. Behe concludes that neo-Darwinists have not explained, or in most cases even attempted to explain, how the appearance of design in “irreducibly complex” systems arose naturalistically. Instead, he notes that we know of only one cause sufficient to produce functionally integrated, irreducibly complex systems, namely, intelligent design. Indeed, whenever we encounter irreducibly complex systems and we know how they arose, they were invariably designed by an intelligent agent. Thus, Behe concludes (on strong uniformitarian grounds) that the molecular machines and complex systems we observe in cells must also have had an intelligent source. In brief, molecular motors appear designed because they were designed. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.68]

3.3 The Complex Specificity of Cellular Components

· During the 1950s, scientists quickly realized that proteins possess another remarkable property. In addition to their complexity, proteins also exhibit specificity, both as one-dimensional arrays and as three-dimensional structures. Whereas proteins are built from rather simple chemical building blocks known as amino acids, their function—whether as enzymes, signal transducers, or structural components in the cell—depends crucially upon the complex but specific sequencing of these building blocks.[B. Alberts, D. Bray, J. Lewis, M. Raff, K. Roberts, and J. D. Watson, Molecular Biology of the Cell (New York: Garland, 1983), pp. 91-141.] Molecular biologists such as Francis Crick quickly likened this feature of proteins to a linguistic text. Just as the meaning (or function) of an English text depends upon the sequential arrangement of letters in a text, so too does the function of a polypeptide (a sequence of amino acids) depend upon its specific sequencing. Moreover, in both cases, slight alterations in sequencing can quickly result in loss of function. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.69]

3.4 The Sequence Specificity of DNA

· The discovery of the complexity and specificity of proteins has raised an important question. How did such complex but specific structures arise in the cell? This question recurred with particular urgency after Sanger revealed his results in the early 1950s. Clearly, proteins were too complex and functionally specific to arise “by chance”. Moreover, given their irregularity, it seemed unlikely that a general chemical law or regularity governed their assembly. Instead, as Nobel Prize winner Jacques Monod recalled, molecular biologists began to look for some source of information within the cell that could direct the construction of these highly specific structures. As Monod would later recall, to explain the presence of the specific sequencing of proteins, “you absolutely needed a code.”[Judson, Eighth Day, p. 611] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.70]

· According to Crick’s hypothesis, the specific arrangement of the nucleotide bases on the DNA molecule generates the specific arrangement of amino acids in proteins.[Judson, Eighth Day, pp. 335-36] The sequence hypothesis suggested that the nucleotide bases in DNA functioned like letters in an alphabet or characters in a machine code. Just as alphabetic letters in a written language may perform a communication function depending upon their sequencing, so too, Crick reasoned, the nucleotide bases in DNA may result in the production of a functional protein molecule depending upon their precise sequential arrangement. In both cases, function depends crucially upon sequencing. The nucleotide bases in DNA function in precisely the same way as symbols in a machine code or alphabetic characters in a book. In each case, the arrangement of the characters determines the function of the sequence as a whole. As Dawkins notes, “The machine code of the genes is uncannily computer-like.” [R. Dawkins, River out of Eden (New York: Basic Books, 1995), p. 10] Or, as software innovator Bill Gates explains, “DNA is like a computer program, but far, far more advanced than any software we’ve ever created.” [B. Gates, The Road Ahead (Boulder, Col.: Blue Penguin, 1996), p. 228] In the case of a computer code, the specific arrangement of just two symbols (0 and 1) suffices to carry information. In the case of an English text, the twenty-six letters of the alphabet do the job. In the case of DNA, the complex but precise sequencing of the four nucleotide bases—adenine, thymine, guanine, and cytosine (A, T, G, and C)—stores and transmits genetic information, information that finds expression in the construction of specific proteins. Thus, the sequence hypothesis implied not only the complexity but also the functional specificity of DNA base sequencing. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.71]
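The sequence hypothesis can be illustrated with a toy translation routine. The handful of codon assignments below are drawn from the standard genetic code (written on the DNA coding strand, with T where RNA has U); the function itself is only a sketch, not a model of real cellular translation:

```python
# A few entries of the standard genetic code: nucleotide triplets
# (codons) map to amino acids, much as characters in a machine code
# map to instructions.
CODON_TABLE = {
    "ATG": "Met",  # methionine (also the start codon)
    "TTT": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "AAA": "Lys",  # lysine
    "TAA": None,   # stop codon
}

def translate(dna: str) -> list:
    """Read a DNA string three bases at a time and return the amino-acid
    sequence; a stop codon (or any codon missing from this toy table)
    ends translation."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None:
            break
        protein.append(aa)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

As in the passage: the same four characters, differently ordered, yield entirely different products, so the function resides in the sequence.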

4.1 The Origin of Life and the Origin of Biological Information

· Developments in molecular biology have led scientists to ask how the specific sequencing—the information content or specified complexity—in both DNA and proteins originated. These developments have also created severe difficulties for all strictly naturalistic theories of the origin of life. Since the late 1920s, naturalistically minded scientists have sought to explain the origin of the very first life as the result of a completely undirected process of “chemical evolution”. In The Origin of Life (1938), Alexander I. Oparin, a pioneering chemical evolutionary theorist, envisioned life arising by a slow process of transformation starting from simple chemicals on the early earth. Unlike Darwinism, which sought to explain the origin and diversification of new and more complex living forms from simpler, preexisting forms, chemical evolutionary theory seeks to explain the origin of the very first cellular life. Yet since the late 1950s, naturalistic chemical evolutionary theories have been unable to account for the origin of the complexity and specificity of DNA base sequencing necessary to build a living cell.[For a good summary and critique of different naturalistic models, see especially K. Dose, “The Origin of Life: More Questions than Answers”, Interdisciplinary Science Reviews 13, no. 4 (1988): 348-56; H. P. Yockey, Information Theory and Molecular Biology (Cambridge: Cambridge University Press, 1992), pp. 259-93; C. Thaxton, W. Bradley, and R. Olsen, The Mystery of Life’s Origin (Dallas: Lewis and Stanley, 1992); C. Thaxton and W. Bradley, “Information and the Origin of Life”, in The Creation Hypothesis: Scientific Evidence for an Intelligent Designer, ed. J. P. Moreland (Downers Grove, Ill.: InterVarsity Press, 1994), pp. 173-210; R. Shapiro, Origins (London: Heinemann, 1986), pp. 97-189; S. C. Meyer, “The Explanatory Power of Design: DNA and the Origin of Information”, in Dembski, Mere Creation, pp. 119-34. For a contradictory hypothesis, see S. Kauffman, The Origins of Order (Oxford: Oxford University Press, 1993), pp. 287-341.] This section will, using the categories of Dembski’s explanatory filter, evaluate the competing types of naturalistic explanations for the origin of specified complexity or information content necessary to the first living cell. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.72]

4.2 Beyond the Reach of Chance

· While some scientists may still invoke “chance” as an explanation, most biologists who specialize in origin-of-life research now reject chance as a possible explanation for the origin of the information in DNA and proteins.[C. de Duve, Blueprint for a Cell: The Nature and Origin of Life (Burlington, N.C.: Neil Patterson Publishers, 1991), p. 112; F. Crick, Life Itself (New York: Simon and Schuster, 1981), pp. 89-93; H. Quastler, The Emergence of Biological Organization (New Haven: Yale University Press, 1964), p. 7.] Since molecular biologists began to appreciate the sequence specificity of proteins and nucleic acids in the 1950s and 1960s, many calculations have been made to determine the probability of formulating functional proteins and nucleic acids at random. Various methods of calculating probabilities have been offered by Morowitz, [H. J. Morowitz, Energy Flow in Biology (New York: Academic Press, 1968), pp. 5-12.] Hoyle and Wickramasinghe,[F. Hoyle and C. Wickramasinghe, Evolution from Space (London: J. M. Dent, 1981), pp. 24-27.] Cairns-Smith,[A. G. Cairns-Smith, The Life Puzzle (Edinburgh: Oliver and Boyd, 1971), pp. 91-96] Prigogine,[I. Prigogine, G. Nicolis, and A. Babloyantz, “Thermodynamics of Evolution”, Physics Today, November 1972, p. 23] and Yockey.[Yockey, Information Theory, pp. 246-58; H. P. Yockey, “Self Organization, Origin of Life Scenarios and Information Theory”, Journal of Theoretical Biology 91 (1981): 13-31; see also Shapiro, Origins, pp. 117-31.] For the sake of argument, such calculations have often assumed extremely favorable prebiotic conditions (whether realistic or not), much more time than there was actually available on the early earth, and theoretically maximal reaction rates among constituent monomers (that is, the constituent parts of proteins, DNA and RNA). 
Such calculations have invariably shown that the probability of obtaining functionally sequenced biomacromolecules at random is, in Prigogine’s words, “vanishingly small . . . even on the scale of . . . billions of years”.[Prigogine, Nicolis, and Babloyantz, “Thermodynamics”, p. 23.] As Cairns-Smith wrote in 1971: Blind chance . . . is very limited. Low-levels of cooperation he [blind chance] can produce exceedingly easily (the equivalent of letters and small words), but he becomes very quickly incompetent as the amount of organization increases. Very soon indeed long waiting periods and massive material resources become irrelevant.[Cairns-Smith, Life Puzzle, p. 95.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.73]

· Consider the probabilistic hurdles that must be overcome to construct even one short protein molecule of about one hundred amino acids in length. (A typical protein consists of about three hundred amino acid residues, and many crucial proteins are very much longer.) [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.73-74]

· First, all amino acids must form a chemical bond known as a peptide bond so as to join with other amino acids in the protein chain. Yet in nature many other types of chemical bonds are possible between amino acids; in fact, peptide and nonpeptide bonds occur with roughly equal probability. Thus, at any given site along a growing amino acid chain the probability of having a peptide bond is roughly 1/2. The probability of attaining four peptide bonds is: (1/2 x 1/2 x 1/2 x 1/2) = 1/16, or (1/2)^4. The probability of building a chain of one hundred amino acids in which all linkages are peptide linkages is (1/2)^99, or roughly 1 chance in 10^30. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.74]
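The arithmetic in this step is easy to verify; a minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Probability that all four of four linkages are peptide bonds:
assert Fraction(1, 2) ** 4 == Fraction(1, 16)

# Probability that all 99 linkages in a 100-residue chain are peptide
# bonds, each independently with probability 1/2:
p_all_peptide = Fraction(1, 2) ** 99
print(float(p_all_peptide))  # ≈ 1.6e-30, i.e. roughly 1 chance in 10^30
```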

· Secondly, in nature every amino acid has a distinct mirror image of itself, one left-handed version, or L-form, and one right-handed version, or D-form. These mirror-image forms are called optical isomers. Functioning proteins use only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain one hundred amino acids long is (1/2)^100, or again roughly 1 chance in 10^30. The probability of building a one hundred-amino-acid-length chain at random in which all bonds are peptide bonds and all amino acids are L-form would be roughly 1 chance in 10^60. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.74]

· Finally, functioning proteins have a third independent requirement, which is the most important of all; their amino acids must link up in a specific sequential arrangement, just as the letters in a sentence must be arranged in a specific sequence to be meaningful. In some cases, even changing one amino acid at a given site can result in a loss of protein function. Moreover, because there are twenty biologically occurring amino acids, the probability of getting a specific amino acid at a given site is small, that is, 1/20. (Actually the probability is even lower because there are many nonproteineous amino acids in nature.) On the assumption that all sites in a protein chain require one particular amino acid, the probability of attaining a particular protein one hundred amino acids long would be (1/20)^100, or roughly 1 chance in 10^130. We know now, however, that some sites along the chain do tolerate several of the twenty proteineous amino acids, while others do not. The biochemist Robert Sauer of MIT has used a technique known as “cassette mutagenesis” to determine just how much variance among amino acids can be tolerated at any given site in several proteins. His results have shown that, even taking the possibility of variance into account, the probability of achieving a functional sequence of amino acids [Actually, Sauer counted sequences that folded into stable three-dimensional configurations as functional, though many sequences that fold are not functional. Thus, his results actually underestimate the probabilistic difficulty.] in several known proteins at random is still “vanishingly small”, roughly 1 chance in 10^65—an astronomically large number. [J. Reidhaar-Olson and R. Sauer, “Functionally Acceptable Solutions in Two Alpha-Helical Regions of Lambda Repressor”, Proteins, Structure, Function, and Genetics 7 (1990): 306-10; J. Bowie and R. Sauer, “Identifying Determinants of Folding and Activity for a Protein of Unknown Sequences: Tolerance to Amino Acid Substitution”, Proceedings of the National Academy of Sciences, USA 86 (1989): 2152-56; J. Bowie, J. Reidhaar-Olson, W. Lim, and R. Sauer, “Deciphering the Message in Protein Sequences: Tolerance to Amino Acid Substitution”, Science 247 (1990): 1306-10; M. Behe, “Experimental Support for Regarding Functional Classes of Proteins to Be Highly Isolated from Each Other”, in J. Buell and G. Hearns, eds., Darwinism: Science or Philosophy? (Dallas: Haughton Publishers, 1994), pp. 60-71; Yockey, Information Theory, pp. 246-58] (There are 10^65 atoms in our galaxy.) [See also D. Axe, N. Foster, and A. Ferst, “Active Barnase Variants with Completely Random Hydrophobic Cores”, Proceedings of the National Academy of Sciences, USA 93 (1996): 5590.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.74-75]
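Both sequence-specificity figures in this excerpt can be reproduced numerically. In this sketch the implied per-site tolerance is my back-calculation from Sauer's 10^65 figure, not a number stated in the excerpt:

```python
from math import log10

# Worst case described in the text: one specific amino acid required at each
# of 100 sites, drawn from the 20 proteineous amino acids.
p_exact = (1 / 20) ** 100
print(f"(1/20)^100 ~ 10^{log10(p_exact):.0f}")   # → ~10^-130

# Back-calculation (mine): Sauer's ~1 chance in 10^65 over ~100 sites implies
# an average per-site acceptance probability of 10^(-65/100).
print(f"implied per-site probability: {10 ** (-65 / 100):.2f}")   # → 0.22
```

That implied per-site probability of about 0.22, i.e. roughly 1 in 4.5, is consistent with the "less than 1 in 4 (1 in 4.4)" figure quoted later in the text.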

· Moreover, if one also factors in the need for proper bonding and homochirality (the first two factors discussed above), the probability of constructing a rather short functional protein at random becomes so small (1 chance in 10^125) as to approach the universal probability bound of 1 chance in 10^150, the point at which appeals to chance become absurd given the “probabilistic resources” of the entire universe. [W. Dembski, The Design Inference: Eliminating Chance through Small Probabilities (Cambridge: Cambridge University Press, 1998), pp. 67-91, 175-223. Dembski’s universal probability bound actually reflects the “specificational” resources, not the probabilistic resources, in the universe. Dembski’s calculation determines the number of specifications possible in finite time. It nevertheless has the effect of limiting the “probabilistic resources” available to explain the origin of any specified event of small probability. Since living systems are precisely specified systems of small probability, the universal probability bound effectively limits the probabilistic resources available to explain the origin of specified biological information (ibid., 1998, pp. 175-229)] Further, making the same calculations for even moderately longer proteins easily pushes these numbers well beyond that limit. For example, the probability of generating a protein of only 150 amino acids in length exceeds (using the same method as above) [Cassette mutagenesis experiments have usually been performed on proteins of about one hundred amino acids in length. Yet extrapolations from these results can generate reasonable estimates for the improbability of longer protein molecules. For example, Sauer’s results on the proteins lambda repressor and arc repressor suggest that, on average, the probability at each site of finding an amino acid that will maintain functional sequencing (or, more accurately, that will produce folding) is less than 1 in 4 (1 in 4.4). Multiplying 1/4 by itself 150 times (for a protein 150 amino acids in length) yields a probability of roughly 1 chance in 10^91. For a protein of that length the probability of attaining both exclusive peptide bonding and homochirality is also about 1 chance in 10^91. Thus, the probability of achieving all the necessary conditions of function for a protein 150 amino acids in length exceeds 1 chance in 10^180.] 1 chance in 10^180, well beyond the most conservative estimates for the small probability bound given our multi-billion-year-old universe. [Dembski, Design Inference, pp. 67-91, 175-214; cf. E. Borel, Probabilities and Life, trans. M. Baudin (New York: Dover, 1962), p. 28.] In other words, given the complexity of proteins, it is extremely unlikely that a random search through all the possible amino acid sequences could generate even a single relatively short functional protein in the time available since the beginning of the universe (let alone the time available on the early earth). Conversely, to have a reasonable chance of finding a short functional protein in such a random search would require vastly more time than either cosmology or geology allows. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.75-76]
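The footnote's extrapolation to a 150-residue protein can be sketched directly; this uses the footnote's rounded 1-in-4 per-site figure, so the printed exponents come out one order of magnitude below the text's rounded "10^91":

```python
from math import log10

# Footnote's extrapolation: per-site probability of a function-preserving
# residue taken as ~1/4 (Sauer's data suggest ~1 in 4.4).
p_sequence = (1 / 4) ** 150            # functional sequencing over 150 sites
p_bonding = 0.5 ** 149 * 0.5 ** 150    # 149 peptide bonds plus 150 L-form residues

print(f"sequence:  ~10^{log10(p_sequence):.0f}")               # → ~10^-90 (text: ~10^-91)
print(f"bonding:   ~10^{log10(p_bonding):.0f}")                # → ~10^-90 (text: ~10^-91)
print(f"all three: ~10^{log10(p_sequence * p_bonding):.0f}")   # → ~10^-180
```

The combined figure of roughly 1 chance in 10^180 matches the footnote's conclusion and sits well beyond Dembski's universal probability bound of 1 in 10^150.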

· More realistic calculations (taking into account the probable presence of nonproteineous amino acids, the need for vastly longer functional proteins to perform specific functions such as polymerization, and the need for multiple proteins functioning in coordination) only compound these improbabilities—indeed, almost beyond computability. For example, recent theoretical and experimental work on the so-called “minimal complexity” required to sustain the simplest possible living organism suggests a lower bound of some 250 to 400 genes and their corresponding proteins. [E. Pennisi, “Seeking Life’s Bare Genetic Necessities”, Science 272 (1996): 1098-99; A. Mushegian and E. Koonin, “A Minimal Gene Set for Cellular Life Derived by Comparison of Complete Bacterial Genomes”, Proceedings of the National Academy of Sciences, USA 93 (1996): 10268-73; C. Bult et al., “Complete Genome Sequence of the Methanogenic Archaeon, Methanococcus Jannaschi”, Science 273 (1996): 1058-72.] The nucleotide sequence space corresponding to such a system of proteins exceeds 4^300,000. The improbability corresponding to this measure of molecular complexity again vastly exceeds 1 chance in 10^150, and thus the “probabilistic resources” of the entire universe. [Dembski, Design Inference, pp. 67-91, 175-223.] Thus, when one considers the full complement of functional biomolecules required to maintain minimal cell function and vitality, one can see why chance-based theories of the origin of life have been abandoned. What Mora said in 1963 still holds: Statistical considerations, probability, complexity, etc., followed to their logical implications suggest that the origin and continuance of life is not controlled by such principles. An admission of this is the use of a period of practically infinite time to obtain the derived result. Using such logic, however, we can prove anything. [P. T. Mora, “Urge and Molecular Biology”, Nature 199 (1963): 212-19.]
[Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.76]
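To put the 4^300,000 sequence space in perspective, a one-line calculation converts it to a power of ten (a sketch; the text gives only the figure 4^300,000, i.e. four possible bases at each of roughly 300,000 nucleotide positions):

```python
from math import log10

# Four possible bases per nucleotide position, ~300,000 positions.
positions = 300_000
exponent = positions * log10(4)
print(f"4^{positions:,} ~ 10^{exponent:,.0f}")   # → ~10^180,618
```

A number with over 180,000 digits dwarfs the 10^150 bound the excerpt invokes.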

· Though the probability of assembling a functioning biomolecule or cell by chance alone is exceedingly small, origin-of-life researchers have not generally rejected the chance hypothesis merely because of the vast improbabilities associated with these events. Many improbable things occur every day by chance. Any hand of cards or any series of rolled dice will represent a highly improbable occurrence. Yet observers often justifiably attribute such events to chance alone. What justifies the elimination of the chance is not just the occurrence of a highly improbable event, but the occurrence of a very improbable event that also conforms to an independently given or discernible pattern. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.76-77]

· If someone repeatedly rolls two dice and turns up a sequence such as: 9, 4, 11, 2, 6, 8, 5, 12, 9, 2, 6, 8, 9, 3, 7, 10, 11, 4, 8 and 4, no one will suspect anything but the interplay of random forces, though this sequence does represent a very improbable event given the number of combinatorial possibilities that correspond to a sequence of this length. Yet rolling twenty (or certainly two hundred!) consecutive sevens will justifiably arouse suspicion that something more than chance is in play. Statisticians have long used a method for determining when to eliminate the chance hypothesis that involves prespecifying a pattern or “rejection region”. [I. Hacking, The Logic of Statistical Inference (Cambridge: Cambridge University Press, 1965), pp. 74-75] In the dice example above, one could prespecify the repeated occurrence of seven as such a pattern in order to detect the use of loaded dice, for example. Dembski’s work discussed above has generalized this method to show how the presence of any conditionally independent pattern, whether temporally prior to the observation of an event or not, can help (in conjunction with a small probability event) to justify rejecting the chance hypothesis.[Dembski, Design Inference, pp. 47-55.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.77]
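The dice example lends itself to a direct calculation. This sketch contrasts the probability of the prespecified "all sevens" pattern with the point the excerpt makes about arbitrary sequences:

```python
from fractions import Fraction

# With two fair dice, P(sum == 7) = 6/36 = 1/6 (six of the 36 ordered
# outcomes sum to seven).
p_seven = Fraction(6, 36)

# Any *particular* 20-roll sequence of sums is improbable; what warrants
# rejecting the chance hypothesis is matching the prespecified pattern
# ("rejection region") of twenty consecutive sevens.
p_twenty_sevens = p_seven ** 20
print(f"P(20 consecutive sevens) = {float(p_twenty_sevens):.1e}")   # → 2.7e-16
```

The improbability alone does not distinguish the two cases; the independently given pattern does, which is the statistical point Dembski generalizes.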

· Origin-of-life researchers have tacitly, and sometimes explicitly, employed precisely this kind of statistical reasoning to justify the elimination of scenarios that rely heavily on chance. Christian de Duve, for example, has recently made this logic explicit in order to explain why chance fails as an explanation for the origin of life: A single, freak, highly improbable event can conceivably happen. Many highly improbable events—drawing a winning lottery number or the distribution of playing cards in a hand of bridge—happen all the time. But a string of improbable events—drawing the same lottery number twice, or the same bridge hand twice in a row—does not happen naturally.[C. de Duve, “The Beginnings of Life on Earth”, American Scientist 83 (1995): 437.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.77]

· De Duve and other origin-of-life researchers have long recognized that the cell represents not only a highly improbable but also a functionally specified system. For this reason, by the mid-1960s most researchers had eliminated chance as a plausible explanation for the origin of the information content or specified complexity necessary to build a cell.[H. Quastler, The Emergence of Biological Organization (New Haven, Conn.: Yale University Press, 1964), p. 7.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.77-78]

5.1 The Return of the Design Hypothesis

· Our experientially based knowledge of information confirms that systems with large amounts of specified complexity or information content (especially codes and languages) always originate from an intelligent source—that is, from mental or personal agents. Moreover, this generalization holds not only for the specified complexity or information present in natural languages but also for other forms of specified complexity, whether present in machine codes, machines, or works of art. Like the letters in a section of meaningful text, the parts in a working engine represent a highly improbable and functionally specified configuration. Similarly, the highly improbable shapes in the rock on Mount Rushmore in the United States conform to an independently given pattern—the face of American presidents known from books and paintings. Thus, both these systems have a large amount of specified complexity or information content. Not coincidentally, they also resulted from intelligent design, not chance and / or physical-chemical necessity. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.92]

· This generalization about the cause of specified complexity or information has, ironically, received confirmation from origin-of-life research itself. During the last forty years, every naturalistic model (see n. 44 above) proposed has failed precisely to explain the origin of the specified genetic information required to build a living cell. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.92]

· Thus, mind or intelligence, or what philosophers call “agent causation”, now stands as the only cause known to be capable of generating large amounts of specified complexity or information content (from nonbiological precursors). [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.92-93]

· Indeed, because large amounts of specified complexity or information content must be caused by a mind or intelligent design, one can detect the past action of an intelligent cause from the presence of an information-rich effect, even if the cause itself cannot be directly observed.[Meyer, Clues, pp. 77-140] For instance, visitors to the gardens of Victoria harbor in Canada correctly infer the activity of intelligent agents when they see a pattern of red and yellow flowers spelling “Welcome to Victoria”, even if they did not see the flowers planted and arranged. Similarly, the specifically arranged nucleotide sequences—the complex but functionally specified sequences—in DNA imply the past action of an intelligent mind, even if such mental agency cannot be directly observed. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.93]

· Scientists in many fields recognize the connection between intelligence and specified complexity and make inferences accordingly. Archaeologists assume a mind produced the inscriptions on the Rosetta Stone. Evolutionary anthropologists argue for the intelligence of early hominids by showing that certain chipped flints are too improbably specified to have been produced by natural causes. NASA’s search for extraterrestrial intelligence (SETI) presupposed that the presence of functionally specified information imbedded in electromagnetic signals from space (such as the prime number sequence) would indicate an intelligent source.[T. R. McDonough, The Search for Extraterrestrial Intelligence: Listening for Life in the Cosmos (New York: Wiley, 1987).] As yet, however, radio-astronomers have not found such information-bearing signals coming from space. But closer to home, molecular biologists have identified specified complexity or informational sequences and systems in the cell, suggesting, by the same logic, an intelligent cause. Similarly, what physicists refer to as the “anthropic coincidences” constitute precisely a complex and functionally specified array of values. Given the inadequacy of the cosmological explanations based upon chance and law discussed above, and the known sufficiency of intelligent agency as a cause of specified complexity, the anthropic fine-tuning data would also seem best explained by reference to an intelligent cause. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.93-94]

5.2 An Argument from Ignorance?

· Of course, many would object that any such arguments from evidence to design constitute arguments from ignorance. Since, these objectors say, we do not yet know how specified complexity in physics and biology could have arisen, we invoke the mysterious notion of intelligent design. On this view, intelligent design functions, not as an explanation, but as a kind of placeholder for ignorance. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.94]

· And yet, we often infer the causal activity of intelligent agents as the best explanation for events and phenomena. As Dembski has shown, we do so rationally, according to clear theoretic criteria. Intelligent agents have unique causal powers that nature does not. When we observe effects that we know from experience only intelligent agents produce, we rightly infer the antecedent presence of a prior intelligence even if we did not observe the action of the particular agent responsible. When these criteria are present, as they are in living systems and in the contingent features of physical law, design constitutes a better explanation than either chance and / or deterministic natural processes. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.94-95]

· While admittedly the design inference does not constitute a proof (nothing based upon empirical observation can), it most emphatically does not constitute an argument from ignorance. Instead, the design inference from biological information constitutes an “inference to the best explanation.” Recent work on the method of “inference to the best explanation” suggests that we determine which among a set of competing possible explanations constitutes the best one by assessing the causal powers of the competing explanatory entities. Causes that can produce the evidence in question constitute better explanations of that evidence than those that do not. In this essay, I have evaluated and compared the causal efficacy of three broad categories of explanation—chance, necessity (and chance and necessity combined), and design—with respect to their ability to produce large amounts of specified complexity or information content. As we have seen, neither explanations based upon chance nor those based upon necessity, nor (in the biological case) those that combine the two, possess the ability to generate the large amounts of specified complexity or information content required to explain either the origin of life or the origin of the anthropic fine tuning. This result comports with our ordinary and uniform human experience. Brute matter—whether acting randomly or by necessity—does not have the capability to generate novel information content or specified complexity. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.95]

· Yet it is not correct to say that we do not know how specified complexity or information content arises. We know from experience that conscious intelligent agents can and do create specified information-rich sequences and systems. Furthermore, experience teaches that whenever large amounts of specified complexity or information content are present in an artifact or entity whose causal story is known, invariably creative intelligence—design—has played a causal role in the origin of that entity. Thus, when we encounter such information in the biomacromolecules necessary to life, or in the fine tuning of the laws of physics, we may infer based upon our present knowledge of established cause-effect relationships that an intelligent cause operated in the past to produce the specified complexity or information content necessary to the origin of life. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.95-96]

· Thus, we do not infer design out of ignorance but because of what we know about the demonstrated causal powers of natural entities and agency, respectively. We infer design using the standard uniformitarian method of reasoning employed in all historical sciences. These inferences are no more based upon ignorance than well-grounded inferences in geology, archeology, or paleontology are—where provisional knowledge of cause-effect relationships derived from present experience guides inferences about the causal past. Recent developments in the information sciences merely help formalize knowledge of these relationships, allowing us to make inferences about the causal histories of various artifacts, entities, or events based upon the complexity and information-theoretic signatures they exhibit. In any case, present knowledge of established cause-effect relationships, not ignorance, justifies the design inference as the best explanation for the origin of specified complexity in both physics and biology. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.96]

5.3 Intelligent Design: A Vera Causa?

· Yet some would still insist that we cannot legitimately postulate such an agent as an explanation for the origin of specified complexity in life since living systems, as organisms rather than simple machines, far exceed the complexity of systems designed by human agents. Thus, such critics argue, invoking an intelligence similar to that which humans possess would not suffice to explain the exquisite complexity of design present in biological systems. To explain that degree of complexity would require a “superintellect” (to use Fred Hoyle’s phrase). Yet, since we have no experience or knowledge of such a super-intelligence, we cannot invoke one as a possible cause for the origin of life. Indeed, we have no knowledge of the causal powers of such a hypothetical agent. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.97-98]

· This objection derives from the so-called vera causa principle—an important methodological guideline in the historical sciences. The vera causa principle asserts that historical scientists seeking to explain an event in the distant past (such as the origin of life) should postulate (or prefer in their postulations) only causes that are sufficient to produce the effect in question and that are known to exist by observation in the present. Darwin, for example, marshaled this methodological consideration as a reason for preferring his theory of natural selection over special creation. Scientists, he argued, can observe natural selection producing biological change; they cannot observe God creating new species. [V. Kavalovski, The Vera Causa Principle: A Historico-Philosophical Study of a Meta-theoretical Concept from Newton through Darwin (Ph.D. diss., University of Chicago, 1974), p. 104] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.98]

5.4 But Is It Science?

· Of course, many simply refuse to consider the design hypothesis on the grounds that it does not qualify as “scientific”. Such critics affirm an extraevidential principle known as “methodological naturalism”.[M. Ruse, “Witness Testimony Sheet: McLean v. Arkansas”, in M. Ruse, ed., But Is It Science? (Buffalo, N.Y.: Prometheus Books, 1988), p. 301; R. Lewontin, “Billions and Billions of Demons”, The New York Review of Books, January 9, 1997, p. 31; Meyer, “Equivalence”, pp. 69-71] Methodological naturalism (MN) asserts that for a hypothesis, theory, or explanation to qualify as “scientific” it must invoke only naturalistic or materialistic causes. Clearly, on this definition, the design hypothesis does not qualify as “scientific”. Yet, even if one grants this definition, it does not follow that some nonscientific (as defined by MN) or metaphysical hypothesis may not constitute a better, more causally adequate, explanation. Indeed, this essay has argued that, whatever its classification, the design hypothesis does constitute a better explanation than its naturalistic rivals for the origin of specified complexity in both physics and biology. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.99-100]

· Surely, simply classifying this argument as metaphysical does not refute it. In any case, methodological naturalism now lacks a compelling justification as a normative definition of science. First, attempts to justify methodological naturalism by reference to metaphysically neutral (that is, non-question begging) demarcation criteria have failed.[Meyer, “Laws”, pp. 29-40; Meyer, “Equivalence”, pp. 67-112; S. C. Meyer, “Demarcation and Design: The Nature of Historical Reasoning”, in Jitse van der Meer, ed., Facets of Faith and Science, vol. 4, Interpreting God’s Action in the World (Lanham, Md.: University Press of America, 1996), pp. 91-130; L. Laudan, “The Demise of the Demarcation Problem”, in Ruse, Science?, pp. 337-50; L. Laudan, “Science at the Bar—Causes for Concern”, in Ruse, Science?, pp. 351-55; A. Plantinga, “Methodological Naturalism”, Origins & Design 18, no. 1 (1997): 18-27, and no. 2 (1997): 22-34] (See Appendix, pp. 151-211, “The Scientific Status of Intelligent Design”.) Second, asserting methodological naturalism as a normative principle for all of science has a negative effect on the practice of certain scientific disciplines. In origin-of-life research, methodological naturalism artificially restricts inquiry and prevents scientists from seeking the most truthful, best, or even most empirically adequate explanation. The question that must be asked about the origin of life is not “Which materialistic scenario seems most adequate?” but “What actually caused life to arise on earth?” Clearly, one of the possible answers to this latter question is “Life was designed by an intelligent agent that existed before the advent of humans.” Yet if one accepts methodological naturalism as normative, scientists may not consider this possibly true causal hypothesis. 
Such an exclusionary logic diminishes the claim to theoretical superiority for any remaining hypothesis and raises the possibility that the best “scientific” explanation (as defined by methodological naturalism) may not, in fact, be the best. As many historians and philosophers of science now recognize, evaluating scientific theories is an inherently comparative enterprise. Theories that gain acceptance in artificially constrained competitions can claim to be neither “most probably true” nor “most empirically adequate”. Instead, at best they can achieve the status of the “most probably true or adequate among an artificially limited set of options”. Openness to the design hypothesis, therefore, seems necessary to a fully rational historical biology—that is, to one that seeks the truth “no holds barred”.[P. W. Bridgman, Reflections of a Physicist, 2d ed. (New York: Philosophical Library, 1955), p. 535.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.100-101]

5.5 Conclusion

· For almost 150 years many scientists have insisted that “chance and necessity”—happenstance and law—jointly suffice to explain the origin of life and the features of the universe necessary to sustain it. We now find, however, that materialistic thinking—with its reliance upon chance and necessity—has failed to explain the specificity and complexity of both the contingent features of physical law and the biomacromolecules upon which life depends. Even so, many scientists insist that to consider another possibility would constitute a departure from both science and reason itself. Yet ordinary reason, and much scientific reasoning that passes under the scrutiny of materialist sanction, not only recognizes but requires us to recognize the causal activity of intelligent agents. The sculptures of Michelangelo, the software of the Microsoft corporation, the inscribed steles of Assyrian kings—each bespeaks the prior action of intelligent agents. Indeed, everywhere in our high-tech environment, we observe complex events, artifacts, and systems that impel our minds to recognize the activity of other minds—minds that communicate, plan, and design. But to detect the presence of mind, to detect the activity of intelligence in the echo of its effects, requires a mode of reasoning—indeed, a form of knowledge—the existence of which science, or at least official biology, has long excluded. Yet recent developments in the information sciences now suggest a way to rehabilitate this lost way of knowing. Perhaps, more importantly, evidence from biology and physics now strongly suggests that mind, not just matter, played an important role in the origin of our universe and in the origin of the life that it contains. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.101]

Michael J. Behe: Evidence for Design At The Foundation Of Life, Urea and Purpose

Explaining the Eye

· And if evolution could explain the eye . . . well, what could it not explain? But there was a question left unaddressed by Darwin’s scheme—where did the light-sensitive spot come from? It seems an odd starting point, since most objects are not light sensitive. Nonetheless, Darwin decided not even to attempt to address the question. He wrote that: “How a nerve comes to be sensitive to light hardly concerns us more than how life itself originated.” [C. Darwin, On the Origin of Species (1876; reprint, New York: New York University Press, 1988), p. 151.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.116]

· He wrote in the Origin of Species that: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” [Darwin, Origin, p. 154] But what sort of organ or system could not be formed by “numerous, successive, slight modifications”? Well, to begin with, one that is irreducibly complex. “Irreducibly complex” is a fancy phrase, but it stands for a very simple concept. As I wrote in Darwin’s Black Box: The Biochemical Challenge to Evolution, an irreducibly complex system is: “a single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.” [M. J. Behe, Darwin’s Black Box: The Biochemical Challenge to Evolution (New York: Free Press, 1996), p. 39.] Less formally, the phrase “irreducibly complex” just means that a system has a number of components that interact with each other, and if any are taken away the system no longer works. A good illustration of an irreducibly complex system from our everyday world is a simple mechanical mousetrap. The mousetraps that one buys at the hardware store generally have a wooden platform to which all the other parts are attached. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.119]

The Scientific Status of Intelligent Design

The Methodological Equivalence of Naturalistic and Non-Naturalistic Origins Theories.

· Thus, Darwin would emphatically dismiss the creationist account of homology, for example, by saying “but that is not a scientific explanation.” [Charles Darwin, The Origin of Species by Means of Natural Selection (1859, reprint, Harmondsworth: Penguin Books, 1984), p. 334; N. C. Gillespie, Charles Darwin and the Problem of Creation (Chicago: University of Chicago Press, 1979), pp. 67-81.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.151-152]

· Underlying Darwin’s repudiation of creationist legitimacy lay an entirely different conception of science than had prevailed among earlier naturalists.[Gillespie, Darwin, pp. 1-18, 41-66, 146-56.] Darwin’s attacks on his creationist and idealist opponents in part expressed and in part established an emerging positivistic “episteme” in which the mere mention of unverifiable “acts of divine will” or “the plan of creation” would increasingly serve to disqualify theories from consideration as science qua science. This decoupling of theology from science and the redefinition of science that underlay it was justified less by argument than by an implicit assumption about the characteristic features of all scientific theories—features that presumably could distinguish theories of a properly scientific (that is, positivistic) bent from those tied to unwelcome metaphysical or theological moorings. Thus, both in the Origin and in subsequent letters one finds Darwin invoking a number of ideas about what constitutes a properly scientific explanation in order to characterize creationist theories as inherently “unscientific”. For Darwin the in-principle illegitimacy of creationism was demonstrated by perceived deficiencies in its method of inquiry, such as its failure to explain by reference to natural law and its postulation of unobservable causes and explanatory entities such as mind, purpose, or “the plan of creation”.[Darwin, Origin, pp. 201, 430, 453; V. Kavalovski, The Vera Causa Principle: A Historico-Philosophical Study of a Meta-Theoretical Concept from Newton through Darwin (Ph.D. diss., University of Chicago, Chicago, Illinois, 1974), pp. 104-29.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.152]

Part 1: The General Failure of Demarcation Arguments

· As problems with using methodological considerations grew, demarcationists shifted their focus again. Beginning in the 1920s, philosophy of science took a linguistic or semantic turn. The logical positivist tradition held that scientific theories could be distinguished from nonscientific theories, not because scientific theories had been produced via unique or superior methods, but because such theories were more meaningful. Logical positivists asserted that all meaningful statements are either empirically verifiable or logically undeniable. According to this “verificationist criterion of meaning”, scientific theories are more meaningful than philosophical or religious ideas, for example, because scientific theories refer to observable entities such as planets, minerals, and birds, whereas philosophy and religion refer to such unobservable entities as God, truth, and morality. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.157-158]

· With the death of positivism in the 1950s, demarcationists took a different tack. Other semantic criteria emerged, such as Sir Karl Popper’s falsifiability. According to Popper, scientific theories were more meaningful than nonscientific ideas because they referred only to empirically falsifiable entities. [Laudan, “Demise”] Yet this, too, proved to be a problematic criterion. First, falsification turns out to be difficult to achieve. Rarely are the core commitments of theories directly tested via prediction. Instead, predictions occur when core theoretical commitments are conjoined with auxiliary hypotheses, thus always leaving open the possibility that auxiliary hypotheses, not core commitments, are responsible for failed predictions. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.158]

· The “demise of the demarcation problem”, as Laudan calls it, implies that the use of positivistic demarcationist arguments by evolutionists is, at least prima facie, on very slippery ground. Laudan’s analysis suggests that such arguments are not likely to succeed in distinguishing the scientific status of descent vis-à-vis design or anything else for that matter. As Laudan puts it, “If we could stand up on the side of reason, we ought to drop terms like ‘pseudo-science.’. . . They do only emotive work for us.”[Laudan, “Demise”, p. 349] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.159-160]

· If philosophers of science such as Laudan are correct, a stalemate exists in our analysis of design and descent. Neither can automatically qualify as science; neither can be necessarily disqualified either. The a priori methodological merits of design and descent are indistinguishable if no agreed criteria exist by which to judge them. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.160]

Part 2: Specific Demarcation Arguments against Design

· Unfortunately, to establish this conclusively would require an examination of all the demarcation arguments that have been used against design. And indeed, an examination of evolutionary polemic reveals many such arguments. Design or creationist theories have been alleged to be necessarily unscientific because they (a) do not explain by reference to natural law, [Ruse, “Witness”, p. 301; Ruse, “Philosopher’s Day”, p. 26; Ruse, “Darwinism”, pp. 1-6.] (b) invoke unobservables, [Skoog, “View”; Root-Bernstein, “Creationism Considered”, p. 74.] (c) are not testable, [Gould, “Genesis”, pp. 129-30; Ruse, “Witness”, p. 305; Ebert et al., Science, pp. 8-10.] (d) do not make predictions, [Root-Bernstein, “Creationism Considered”, p. 73; Ruse, “Philosopher’s Day”, p. 28; Ebert et al., Science, pp. 8-10.] (e) are not falsifiable, [Kline, “Theories”, p. 42; Gould, “Evolution”, p. 120; Root-Bernstein, “Creationism Considered”, p. 72.] (f) provide no mechanisms, [Ruse, Darwinism, p. 59; Ruse, “Witness”, p. 305; Gould, “Evolution”, p. 121; Root-Bernstein, “Creationism Considered”, p. 74.] (g) are not tentative, [A. Kehoe, “Modern Anti-evolutionism: The Scientific Creationists”, in What Darwin Began, ed. L. R. Godfrey (Boston: Allyn and Bacon, 1985), pp. 173-80; Ruse, “Witness”, p. 305; Ruse, “Philosopher’s Day”, p. 28; Ebert et al., Science, pp. 8-10.] and (h) have no problem-solving capability. [Kitcher, Abusing Science, pp. 126-27, 176-77.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.161-162]

· Explanation via natural law. Now let us examine the first, and according to Michael Ruse [Ruse, “Philosopher’s Day”, pp. 21-26.] most fundamental, of the arguments against the possibility of a scientific theory of design. This argument states: “Scientific theories must explain by natural law. Because design or creationist theories do not do so, they are necessarily unscientific.” [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.162]

· Many contemporary philosophers disagree with Ruse and Lewontin about this, as have a number of good scientists over the years—Isaac Newton and Robert Boyle, for example. The action of agency (whether divine or human) need not violate the laws of nature; in most cases it merely changes the initial and boundary conditions on which the laws of nature operate. [This dichotomy between “unbroken law” and the action of agency is merely a species of the same genus of confusion that led Ruse and others to insist that science always explains via laws. In Ruse’s case the dichotomy is manifest in his assertion that invoking the action of a divine agent constitutes a “violation of natural law”. I disagree. Pitting the action of agents (whether seen or unseen) against natural law creates a false opposition. The reason for this is simple. Agents can change initial and boundary conditions, yet in so doing they do not violate laws. Most scientific laws have the form “If A, then B will follow, given conditions X.” If X is altered or if A did not obtain, then it constitutes no violation of the laws of nature to say that B did not occur, even if we expected it to. Agents may alter the course of events or produce novel events that contradict our expectations without violating the laws of nature. To assert otherwise is merely to misunderstand the distinction between antecedent conditions and laws. C. S. Lewis, God in the Dock (London: Collins, 1979), pp. 51-55. See R. Swinburne, The Concept of Miracle (London: Macmillan, 1970), pp. 23-32, and G. Colwell, “On Defining Away the Miraculous”, Philosophy 57 (1982): 327-37, for other defenses of the possibility of miracles that assume and respect the integrity of natural laws.] But this issue must be set aside for the moment. For now it will suffice merely to note that the criterion of demarcation has subtly shifted. 
No longer does the demarcationist repudiate design as unscientific because it does not “explain via natural law”; now the demarcationist rejects intelligent design because it does not “explain naturalistically”. To be scientific a theory must be naturalistic. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.167-168]

· Unobservables and testability. At this point evolutionary demarcationists must offer other demarcation criteria. One that appears frequently both in conversation and in print finds expression as follows: “Miracles are unscientific because they cannot be studied empirically.[See also Kavalovski, Vera Causa, pp. 104-29, for a discussion of the so-called vera causa principle, a nineteenth-century methodological principle invoked by Darwin to eliminate from consideration creationist explanations judged to be unobservable (Darwin, Origin, pp. 201, 430, 453).] Design invokes miraculous events; therefore design is unscientific. Moreover, since miraculous events can’t be studied empirically, they can’t be tested.[Skoog, “View”; Gould, “Genesis”, pp. 129-30; Ruse, “Witness”, p. 305.] Since scientific theories must be testable, design is, again, not scientific.” Molecular biologist Fred Grinnell has argued, for example, that intelligent design cannot be a scientific concept because if something “can’t be measured, or counted, or photographed, it can’t be science”.[Grinnell, “Radical Intersubjectivity: Why Naturalism Is an Assumption Necessary for Doing Science”, paper presented at the conference on “Darwinism: Scientific Inference or Philosophical Preference?” Southern Methodist University, Dallas, March 26-28, 1993] Gerald Skoog amplifies this concern: “The claim that life is the result of a design created by an intelligent cause can not be tested and is not within the realm of science.” [Skoog, “View”.] This reasoning was invoked in a 1993 case at San Francisco State University as a justification for removing Professor Dean Kenyon from his classroom. Kenyon is a biophysicist who has embraced intelligent design after years of work on chemical evolution. 
Some of his critics at SFSU argued that his theory fails to qualify as scientific because it refers to an unseen Designer that cannot be tested or, as Eugenie Scott said, “You can’t use supernatural explanations because you can’t put an omnipotent deity in a test tube. As soon as creationists invent a ‘theo-meter’ maybe we could test for miraculous intervention.” [S. C. Meyer, “A Scopes Trial for the ‘90s”, The Wall Street Journal, December 6, 1993, p. A14; S. C. Meyer, “Open Debate on Life’s Origin”, Insight, February 21, 1994, pp. 27-29. Eugenie Scott, “Keep Science Free from Creationism”, Insight, February 21, 1994, p. 30.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.168-169]

· It turns out, however, that both parts of this formula fail. First, observability and testability are not both necessary to scientific status, because observability at least is not necessary to scientific status, as theoretical physics has abundantly demonstrated. Many entities and events cannot be directly observed or studied—in practice or in principle. The postulation of such entities is no less the product of scientific inquiry for that. Many sciences are in fact directly charged with the job of inferring the unobservable from the observable. Forces, fields, atoms, quarks, past events, mental states, subsurface geological features, molecular biological structures—all are unobservables inferred from observable phenomena. Nevertheless, most are unambiguously the result of scientific inquiry. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.169-170]

· Second, unobservability does not preclude testability: claims about unobservables are routinely tested in science indirectly against observable phenomena. That is, the existence of unobservable entities is established by testing the explanatory power that would result if a given hypothetical entity (that is, an unobservable) were accepted as actual. This process usually involves some assessment of the established or theoretically plausible causal powers of a given unobservable entity. In any case, many scientific theories must be evaluated indirectly by comparing their explanatory power against competing hypotheses. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.170]

· During the race to elucidate the structure of the genetic molecule, both a double helix and a triple helix were considered, since both could explain the photographic images produced via X-ray crystallography.[H. Judson, The Eighth Day of Creation (New York: Simon and Schuster, 1979), pp. 157-90.] While neither structure could be observed (even indirectly through a microscope), the double helix of Watson and Crick eventually won out because it could explain other observations that the triple helix could not. The inference to one unobservable structure—the double helix—was accepted because it was judged to possess a greater explanatory power than its competitors with respect to a variety of relevant observations. Such attempts to infer to the best explanation, where the explanation presupposes the reality of an unobservable entity, occur frequently in many fields already regarded as scientific, including physics, geology, geophysics, molecular biology, genetics, physical chemistry, cosmology, psychology, and, of course, evolutionary biology. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.170]

· The prevalence of unobservables in such fields raises difficulties for defenders of descent who would use observability criteria to disqualify design. Darwinists have long defended the apparently unfalsifiable nature of their theoretical claims by reminding critics that many of the creative processes to which they refer occur at rates too slow to observe. Further, the core historical commitment of evolutionary theory—that present species are related by common ancestry—has an epistemological character that is very similar to many present design theories. The transitional life forms that ostensibly occupy the nodes on Darwin’s branching tree of life are unobservable, just as the postulated past activity of a Designer is unobservable.[Meyer, Of Clues, p. 120; Darwin, Origin, p. 398; D. Hull, Darwin and His Critics (Chicago: University of Chicago Press, 1973), p. 45.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.170]

· Transitional life forms are theoretical postulations that make possible evolutionary accounts of present biological data. An unobservable designing agent is, similarly, postulated to explain features of life such as its information content and irreducible complexity. Darwinian transitionals, neo-Darwinian mutational events, punctuationalism’s “rapid branching” events, the past action of a designing agent—none of these is directly observable. With respect to direct observability, each of these theoretical entities is equivalent. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.170-171]

· Each is roughly equivalent with respect to testability as well. Origins theories generally must make assertions about what happened in the past to cause present features of the universe (or the universe itself) to arise. They must reconstruct unobservable causal events from present clues or evidences. Positivistic methods of testing, therefore, that depend upon direct verification or repeated observation of cause-effect relationships have little relevance to origins theories, as Darwin himself understood. Though he complained repeatedly about the creationist failure to meet the vera causa criterion—a nineteenth-century methodological principle that favored theories postulating observed causes—he chafed at the application of rigid positivistic standards to his own theory. As he complained to Joseph Hooker: “I am actually weary of telling people that I do not pretend to adduce direct evidence of one species changing into another, but that I believe that this view in the main is correct because so many phenomena can be thus grouped and explained”[C. Darwin, More Letters of Charles Darwin, ed. F. Darwin, 2 vols. (London: Murray, 1903), 1:184.] (emphasis added).[Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.171]

· The preceding considerations suggest that neither evolutionary descent with modification nor intelligent design is ultimately untestable. Instead, both theories seem testable indirectly, as Darwin explained of descent, by a comparison of their explanatory power with that of their competitors. As Philip Kitcher—no friend of creationism—has acknowledged, the presence of unobservable elements in theories, even ones involving an unobservable Designer, does not mean that such theories cannot be evaluated empirically. He writes, “Even postulating an unobserved Creator need be no more unscientific than postulating unobserved particles. What matters is the character of the proposals and the ways in which they are articulated and defended.” [Kitcher, Abusing Science, p. 125. While Kitcher allows for the possibility of a testable theory of divine creation, he believes creationism was tested and found wanting in the nineteenth century.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.172-173]

· Similarly, the requirement that a scientific theory must provide a causal mechanism fails to provide a metaphysically neutral standard of demarcation for several reasons. First, as we have already noted, many theories in science are not mechanistic theories. Many theories that explicate what regularly happens in nature either do not or need not explain why those phenomena occur mechanically. Newton’s universal law of gravitation was no less a scientific theory because Newton failed—indeed refused—to postulate a mechanistic cause for the regular pattern of attraction his law described. Also, as noted earlier, many historical theories about what happened in the past may stand on their own without any mechanistic theory about how the events to which such theories attest could have occurred. The theory of common descent is generally regarded as a scientific theory even though scientists have not agreed on a completely adequate mechanism to explain how transmutation between lines of descent can be achieved. In the same way, there seems little justification for asserting that the theory of continental drift became scientific only after the advent of plate tectonics. While the mechanism provided by plate tectonics certainly helped render continental drift a more persuasive theory,[The same could be said of the neo-Darwinian selection-mutation mechanism vis-à-vis the theory of common descent. In both cases, however, issues of warrant and issues of scientific status should not be confused.] it was nevertheless not strictly necessary to know the mechanism by which continental drift occurs (1) to know or theorize that drift had occurred or (2) to regard the continental drift theory as scientific. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.173-174]

Part 3: The Methodological Character of Historical Science

· In other words, a fundamental methodological equivalence between design and descent derives from a common concern with history—that is, with historical questions, historical inferences, and historical explanations. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.176]

· So the evolutionary demarcation arguments above seem to fail in part because they attempt to impose (as normative) criteria of method that ignore the historical character of origins research. Indeed, each one of the demarcationist arguments listed above fails because it overlooks a specific characteristic of the historical sciences. But what are these characteristics? And could they provide grounds for distinguishing the scientific, or at least methodological, status of design and descent? [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.178]

· The nature of historical science. Answering these questions will require briefly summarizing the results of my doctoral research on the logical and methodological features of the historical sciences.[Meyer, Of Clues.] Through that research I have identified three general features of historical scientific disciplines. These features derive from a concern to reconstruct the past and to explain the present by reference to the past. They distinguish disciplines motivated by historical concerns from disciplines motivated by a concern to discover, classify, or explain unchanging laws and properties of nature. These latter disciplines may be called “inductive” or “nomological” (from the Greek word nomos, for law); the former type may be called “historical”. I contend that historical sciences generally can be distinguished from nonhistorical scientific disciplines by virtue of the three following features:
1. The historical interest or questions motivating their practitioners: Those in the historical sciences generally seek to answer questions of the form “What happened?” or “What caused this event or that natural feature to arise?” On the other hand, those in the nomological or inductive sciences generally address questions of the form “How does nature normally operate or function?”
2. The distinctively historical types of inference used: The historical sciences use inferences with a distinctive logical form. Unlike many nonhistorical disciplines, which typically attempt to infer generalizations or laws from particular facts, historical sciences make what C. S. Peirce has called “abductive inferences” in order to infer a past event from a present fact or clue. These inferences have also been called “retrodictive” because they are temporally asymmetric—that is, they seek to reconstruct past conditions or causes from present facts or clues. For example, detectives [A. C. Doyle, “The Boscombe Valley Mystery”, in The Sign of Three: Dupin, Holmes, Peirce, ed. T. Sebeok (Bloomington: Indiana University Press, 1983), p. 145.] use abductive or retrodictive inferences to reconstruct the circumstances of a crime after the fact. In so doing they function as historical scientists. As Gould has put it, the historical scientist proceeds by “inferring history from its results”.[S. J. Gould, “Evolution and the Triumph of Homology: Or, Why History Matters”, American Scientist 74 (1986): 61.]
3. The distinctively historical types of explanations used: In the historical sciences one finds causal explanations of particular events, not nomological descriptions or theories of general phenomena. In historical explanations, past causal events, not laws, do the primary explanatory work. The explanations cited earlier of the Himalayan orogeny and the beginning of World War I exemplify such historical explanations.
In addition, the historical sciences share with many other types of science a fourth feature:
4. Indirect methods of testing such as inference to the best explanation: As discussed earlier, many disciplines cannot test theories by direct observation, prediction, or repeated experiment. Instead, testing must be done indirectly through comparison of the explanatory power of competing theories. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.178-179]

· As already discussed, Darwin also (with respect to feature 4 above) employed a method of indirect testing of his theory by assessing its relative explanatory power. Recall his statement that “this hypothesis [that is, common descent] must be tested . . . by trying to see whether it explains several large and independent classes of facts.”[Quoted in Gould, “Darwinism”, p. 70.] He makes this indirect and comparative method of testing even more explicit in a letter to Asa Gray: I . . . test this hypothesis [common descent] by comparison with as many general and pretty well-established propositions as I can find—in geographical distribution, geological history, affinities &c., &c. And it seems to me that, supposing that such a hypothesis were to explain such general propositions, we ought, in accordance with the common way of following all science, to admit it till some better hypothesis be found out [emphasis added]. [F. Darwin, ed., Life and Letters of Charles Darwin, 2 vols. (London: D. Appleton, 1896), 1:437.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.182]

· Design as historical science. The foregoing suggests that evolutionary biology, or at least Darwin’s version of it, does conform to the pattern of inquiry described above as historically scientific. To show that design and descent are methodologically equivalent with respect to the historical mode of inquiry outlined above, it now remains to show that a design argument or theory could exemplify this same historical pattern of inquiry. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.182-183]

· In the case of feature 1, this equivalence is quite obvious. As just noted, a clear logical distinction exists between questions of the form “How does nature normally operate or function?” and those of the form “How did this or that natural feature arise?” or “What caused this or that event to occur?” Those who postulate the past activity of an intelligent Designer do so as an answer, or partial answer, to questions of the latter historical type. Whatever the evidential merits or liabilities of design theories, such theories undoubtedly represent attempts to answer questions about what caused certain features in the natural world to come into existence. With respect to an interest in origins questions, design and descent are clearly equivalent. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.183]

· Design and descent are also equivalent with respect to feature 2. Inferences to intelligent design are clearly abductive and retrodictive. They seek to infer a past unobservable cause (an instance of creative mental action or agency) from present facts or clues in the natural world, such as the informational content in DNA, the irreducible complexity of molecular machines, the hierarchical top-down pattern of appearance in the fossil record, and the fine tuning of physical laws and constants. [Denton, Evolution, pp. 338-42; C. Thaxton, W. Bradley, and R. Olsen, The Mystery of Life’s Origin (New York: Philosophical Library, 1984), pp. 113-65, 201-4, 209-12] Moreover, just as Darwin sought to strengthen the retrodictive inferences that he made by showing that many facts or classes of facts could be explained on the supposition of common descent, so too may proponents of design seek to muster a wide variety of clues to demonstrate the explanatory power of their theory. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.183]

· With respect to feature 3, design inferences, once made, may also serve as causal explanations. The same reciprocal relationship between inference and explanation that exists in arguments for descent can exist in arguments for design. Thus, as noted, an inference to intelligent design may gain support because it could, if accepted, explain many diverse classes of facts. Clearly, once adopted it will provide corresponding explanatory resources. Moreover, theories of design involving the special creative act of an agent conceptualize that act as a causal event,[Thaxton, Bradley, and Olsen, Mystery, pp. 201-12] albeit involving mental rather than purely physical antecedents. Indeed, design theories—whether posited by young-earth Genesis literalists, old-earth progressive creationists, theistic macromutationalists, or religiously agnostic biologists—refer to antecedent causal events or express some kind of causal scenario just as, for example, chemical evolutionary theories do. As a matter of method, advocates of design and descent alike seek to postulate antecedent causal events or event scenarios in order to explain the origin of present phenomena. With respect to feature 3, design and descent again appear methodologically equivalent. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.183-184]

· Much has already been said to suggest that with respect to feature 4 design may be tested indirectly in the same way as descent. Certainly, advocates of design may seek to test their ideas as Darwin did—against a wide class of relevant facts and by comparing the explanatory power of their hypotheses against those of competitors. Indeed, many biologists who favor design now make their case for it on the basis of its ability to explain the same evidences that descent can as well as some that descent allegedly cannot (such as the presence of specified complexity or information content in DNA).[E. J. Ambrose, The Nature and Origin of the Biological World (New York: Halstead, 1982); Denton, Evolution; R. Augros and G. Stanciu, The New Biology (Boston: Shambhala, 1987); D. Kenyon and P. W. Davis, Of Pandas and People: The Central Question of Biological Origins (Dallas: Haughton, 1993)] Thus design and descent again seem methodologically equivalent. Both seek to answer characteristically historical questions; both rely upon abductive inferences; both postulate antecedent causal events or scenarios as explanations of present data; and both are tested indirectly by comparing their explanatory power against that of competing theories. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.184]

· The preceding considerations suggest that allowing the design hypothesis as the best explanation for some events in the history of the cosmos will not cause science to come grinding to a halt. While design does have the required features of some scientific (historical) explanations, it cannot be invoked appropriately in all scientific contexts. Furthermore, because effective postulations of design are constrained by empirical considerations of causal precedence and adequacy, and by extraevidential considerations such as simplicity and theological plausibility, concerns about design theory functioning as a “theory of everything” or “providing cover for ignorance” or “putting scientists out of work” can be shown to be unfounded. [Following Sober, I regard simplicity as a notion that cannot be formally explicated but which, nevertheless, plays a role in scientific theory evaluation. Like Sober I believe that intuitive notions of simplicity, economy, or elegance express or are informed by tacit background assumptions. I see no reason that theistic explanations could not be either commended or disqualified on the basis of such judgments just as materialistic explanations are. Sober, Reconstructing, pp. 36-69] Many important scientific questions would remain to be answered if one adopted a theory of design. Indeed, all questions about how nature normally operates without the special assistance of divine agency remain unaffected by whatever view of origins one adopts. And that, perhaps, is yet another equivalence between design and descent.[Theists who invoke the special assistance or activity of divine agency to explain an origin event or biblical miracle, for example, are not, as is commonly asserted, guilty of semideism. Those who infer that God has acted in a discrete, special, and perhaps more easily discernible way in one case do not deny that he is constantly acting to “uphold the universe by the word of his power” at all other times. 
The medievals resisted this false dichotomy by affirming two powers of God, or two ways by which he interacts with the world. The ordinary power of God they called his potentia ordinata and the special or fiat power they called his potentia absoluta. W. Courtenay, “The Dialectic of Omnipotence in the High and Late Middle Ages”, in Divine Omniscience and Omnipotence in Medieval Philosophy, ed. T. Rudavsky (Norwell: Kluwer Academic Publishers, 1984), pp. 243-69. Many modern theists who affirm the special action of God at a discrete point in history have this type of distinction in mind.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.192]

Conclusion: Toward a Scientific Theory of Creation

· So what should we make of these methodological equivalencies? Can there be a scientific theory of intelligent design? At the very least it seems we can conclude that we have not yet encountered any good reason in principle to exclude design from science. Design seems to be just as scientific (or unscientific) as its naturalistic competitors when judged according to the methodological criteria examined above. Moreover, if the antidemarcationists are correct, our lack of universal demarcation criteria implies there cannot be a negative a priori case against the scientific status of design—precisely because there is not an agreed standard as to what constitutes the properly scientific. To say that some discipline or activity qualifies as scientific is to imply the existence of a standard by which the scientific status of an activity or discipline can be assessed or adjudicated. If no such standard presently exists, then nothing positive (or negative) can be said about the scientific status of intelligent design (or any other theory, for that matter). [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.192-193]

· But there is another approach that can be taken to the question. If (1) there exists a distinctively historical pattern of inquiry, and (2) a program of origins research committed to design theory could or does instantiate that pattern, and (3) many other fields such as evolutionary biology also instantiate that pattern, and (4) these other fields are already regarded by convention as science, there can be a very legitimate if convention-dependent sense in which design may be considered scientific. In other words, the conjunction of the methodological equivalence of design and descent and the existence of a convention that regards theories of descent as scientific implies that design should—by that same convention—be regarded as scientific too. Thus, one might quite legitimately say that both design and descent are historically scientific research programs, since they instantiate the same pattern of inquiry. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.193]

· So the question is not whether there can be a scientific theory of design or creation. The question is whether design should be considered as a competing hypothesis alongside descent in serious origins research (call it what you will). Once issues of demarcation are firmly behind us, understood as the red herrings they are, the answer to this question must clearly be yes—that is, if origins biology is to have standing as a fully rational enterprise, rather than just a game played according to rules convenient to philosophical materialists. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.195]

· Naturalism: the only game in town? G. K. Chesterton once said that “behind every double standard lies a single hidden agenda.”[G. K. Chesterton, Orthodoxy (London: John Lane, 1909)] Advocates of descent have used demarcation arguments to erect double standards against design, suggesting that the real methodological criterion they have in mind is naturalism. Of course for many the equation of science with the strictly materialistic or naturalistic is not at all a hidden agenda. Scientists generally treat “naturalistic” as perhaps the most important feature of their enterprise.[As Basil Willey put it: “Science must be provisionally atheistic or cease to be itself” (“Darwin’s Place”, p. 15). See also Ruse, Darwinism, p. 59; Ruse, “Witness”, p. 305; Gould, “Evolution”, p. 121; Root-Bernstein, “Creationism Considered”, p. 74; Ruse, “Darwinism”, pp. 1-13.] Clearly, if naturalism is regarded as a necessary feature of all scientific hypotheses, then design will not be considered a scientific hypothesis. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.195]

· But must all scientific hypotheses be entirely naturalistic? Must scientific origins theories, in particular, limit themselves to materialistic causes? Thus far none of the arguments advanced in support of a naturalistic definition of science has provided a noncircular justification for such a limitation. Nevertheless, perhaps such arguments are irrelevant. Perhaps scientists should just accept the definition of science that has come down to them. After all, the search for natural causes has served science well. What harm can come from continuing with the status quo? What compelling reasons can be offered for overturning the prohibition against nonnaturalistic explanation in science? [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.195-196]

· An openness to empirical arguments for design is therefore a necessary condition of a fully rational historical biology. A rational historical biology must not only address the question “Which materialistic or naturalistic evolutionary scenario provides the most adequate explanation of biological complexity?” but also the question “Does a strictly materialistic evolutionary scenario or one involving intelligent agency or some other theory best explain the origin of biological complexity, given all relevant evidence?” To insist otherwise is to insist that materialism holds a metaphysically privileged position. Since there seems no reason to concede that assumption, I see no reason to concede that origins theories must be strictly naturalistic. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.197-198]

William A. Dembski and Stephen C. Meyer

Fruitful Interchange or Polite Chitchat?

The Dialogue Between Science and Theology

· The difficulties attendant on the interdisciplinary conversation between physics and philosophy, and between the humanities and the natural sciences more generally, often pale by comparison to those encountered in the interdisciplinary dialogue between theology and the natural sciences. Distinct disciplines have a hard time communicating, even those which prima facie we might think would want to communicate, for example, philosophy and physics. How much more difficult it is, then, to get theology and science communicating when, especially over the last one hundred years, they have been increasingly characterized in terms of either a warfare or a partition metaphor (that is, either they are in unresolvable conflict or they are so thoroughly compartmentalized that no possibility of meaningful communication exists). [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.214]

· Failure to distinguish between a strong and a weak form of epistemic support has led to confusion in the dialogue between science and theology. Consider, for instance, what Ernan McMullin means when he denies that the relation between the Big Bang and the creation of the universe by God can be characterized in terms of epistemic support: “What one could say . . . is that if the universe began in time through the act of a Creator, from our vantage point it would look something like the Big Bang that cosmologists are talking about. What one cannot say is, first, that the Christian doctrine of Creation ‘supports’ the Big Bang model, or, second, that the Big Bang model ‘supports’ the Christian doctrine of Creation.”[Ernan McMullin, “How Should Cosmology Relate to Theology?” in The Sciences and Theology in the Twentieth Century, ed. Arthur R. Peacocke (Notre Dame, Ind.: University of Notre Dame Press, 1981), p. 39.] Contra McMullin, we insist that the Big Bang model does support the Christian doctrine of Creation, and vice versa. Yet we will develop a more liberalized notion of epistemic support that allows fruitful interdisciplinary dialogue without requiring that scientific evidence compel religious beliefs or the reverse. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.216]

Rational Compulsion

· We wish to stress that both strict and partial entailment yield what we have been calling rational compulsion. This is immediately obvious for strict entailment. Indeed, if it is impossible for B to be false if A is true, then if we affirm A we surely had better affirm B also. Still, we may wonder why partial entailment should also yield rational compulsion. Whereas strict entailment leaves no room for either (1) fallibility or (2) contingency or (3) degree or (4) doubt, partial entailment leaves room for all of these. If A strictly entails B, then (1) there is no possibility of being wrong about B if we are right about A; (2) B follows necessarily from A; (3) A epistemically supports B to the utmost and cannot be made to support B to a still higher degree; and (4) not only need we not but we also ought not to doubt B if we trust A. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.218]

· On the other hand, none of these properties holds in general for partial entailment. Consider the following two claims: A: There will be a heavy snowfall tonight. B: Schools will be closed tomorrow. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.218-219]

· Suppose that nine times out of ten when there is a heavy snowfall at night, schools close on the next day. Then if we see heavy snow accumulating tonight, we have good reason to expect that school will be closed tomorrow. Nevertheless, the four claims we just made about strict entailment in the last paragraph fail to hold for partial entailment. Thus (1) even though A may hold, we may still be mistaken for holding B; (2) there is no necessary connection between A and B; (3) the relation of support between A and B admits of degrees (for instance, the relation would be still stronger if ninety-nine times out of a hundred school were closed following a heavy snowfall, weaker if only two times out of three); and (4) we are entitled to invest B with a measure of doubt even if we know A to be true. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.219]

· Nevertheless, partial entailment and rational compulsion remain inextricably linked. To see this, consider the following rumination by C. S. Peirce: If a man had to choose between drawing a card from a pack containing twenty-five red cards and a black one, or from a pack containing twenty-five black cards and a red one, and if the drawing of a red card were destined to transport him to eternal felicity, and that of a black one to consign him to everlasting woe, it would be folly to deny that he ought to prefer the pack containing the larger portion of red cards, although, from the nature of the risk, it could not be repeated. . . . But suppose he should choose the red pack, and should draw the wrong card, what consolation would he have?[Charles S. Peirce, “The Red and the Black” (1878), in The World of Mathematics, ed. J. R. Newman, 4 vols. (Redmond, Wash.: Tempus, 1988), pp. 1313-14.] [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.219]
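The two numerical illustrations above — the snowfall frequencies and Peirce’s two packs of cards — amount to elementary probability comparisons. A minimal sketch (our own illustration, not from the book; all names are ours) makes the arithmetic explicit:

```python
from fractions import Fraction

# Degrees of partial entailment in the snowfall example: A ("heavy
# snowfall tonight") supports B ("schools closed tomorrow") more or
# less strongly depending on the observed frequency P(B | A).
support = {
    "99 in 100": Fraction(99, 100),
    "9 in 10":   Fraction(9, 10),
    "2 in 3":    Fraction(2, 3),
}
# Support admits of degrees, unlike strict entailment:
assert support["99 in 100"] > support["9 in 10"] > support["2 in 3"]

# Peirce's card example: each pack holds twenty-six cards.
p_red_from_red_pack   = Fraction(25, 26)  # 25 red + 1 black
p_red_from_black_pack = Fraction(1, 26)   # 1 red + 25 black

# Choosing the red-heavy pack is rationally compelled even though the
# draw happens only once and may still turn out badly.
assert p_red_from_red_pack > p_red_from_black_pack
print(float(p_red_from_red_pack))  # ≈ 0.9615
```

The point of the sketch is Peirce’s: the higher-probability choice remains the rational one even for an unrepeatable draw, which is exactly why partial entailment can compel assent without guaranteeing truth.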

Explanatory Power

This section summarizes the second author’s treatment of explanation in his doctoral dissertation (Stephen C. Meyer, Of Clues and Causes: A Methodological Interpretation of Origin of Life Studies [diss., University of Cambridge, 1990]).

· To sum up, whereas in the logic of deduction, A epistemically supports B because A logically entails and therefore rationally compels B, in the logic of explanation, A epistemically supports B because B provides a good explanation of A. As Peirce showed, both logics provide legitimate inference patterns and underwrite robust relations of epistemic support. Yet although these logics often work in tandem, they are nevertheless distinct. Moreover, the logic of explanation suggests an important role for theology in enhancing our understanding of some scientific data, results, or theories. Unlike the logic of entailment, which left theology little to do beyond (in the most negative case) questioning the empirical findings of science, the logic of explanation suggests that theology might provide science with a source of (albeit in many cases metaphysical) hypotheses and explanations for its empirical findings and results. This logic further suggests a way that scientific data might provide epistemic support for theological propositions or doctrines. In particular, it suggests that scientific data can provide epistemic support for theological propositions just in case those propositions suggest a better explanation for the data than do the alternatives under consideration. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.225-226]

The Big Bang and the Divine Creation

· With explanatory power rather than rational compulsion characterizing epistemic support, the cosmological theory of the Big Bang and the Christian doctrine of divine Creation can now be brought into a relation of mutual epistemic support. To show this in detail far exceeds the scope of this modest essay. Still, a few brief observations will suggest how the Big Bang and the divine Creation might provide epistemic support for each other, once epistemic support is reconceptualized by reference to the logic of explanation. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.229]

· Curiously, in the very passage in which he denies that relations of epistemic support obtain between the Big Bang model and the Christian doctrine of Creation, Ernan McMullin actually opens the door to such relations. In a passage already quoted, McMullin remarks, “What one could say . . . is that if the universe began in time through the act of a Creator, from our vantage point it would look something like the Big Bang that cosmologists are talking about. What one cannot say is, first, that the Christian doctrine of Creation ‘supports’ the Big Bang model, or, second, that the Big Bang model ‘supports’ the Christian doctrine of Creation.”[McMullin, “Cosmology”, p. 39.] Yet if we take explanatory power as our basis for epistemic support, it seems that what McMullin denies in the second part of this quotation he actually affirms in the first. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.229]

· For consider what it means to say, “If the universe began in time through the act of a Creator, from our vantage point it would look something like the Big Bang that cosmologists are talking about.”[McMullin, “Cosmology”, p. 39.] Does this not simply mean that if we assume the Christian doctrine of Creation as a kind of metaphysical hypothesis, then the Big Bang is the kind of cosmological theory we have reason to expect? Does this not also mean that the Christian doctrine of Creation is consonant with the Big Bang? We submit that the answer is yes to both questions. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.229]

· Suppose now that we take the Big Bang as given (= data) and pose the question of how we might best explain the Big Bang in metaphysical terms. The playing field is potentially quite large. Metaphysics offers a multitude of competing explanations for the nature and origin of the material universe, everything from solipsism to idealism to naturalism to theism. Nevertheless, in practice we tend to consider only the competing explanations advocated by parties in a dispute. Since McMullin’s foil is the scientific naturalist, let us limit the competition to Christian theism and scientific naturalism. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.229-230]

· If we limit our attention to these two choices, Christian theism and its doctrine of Creation may with some justification be regarded as providing a more causally adequate explanation of the Big Bang than any of the explanations offered to date by scientific naturalism. Since the naturalist assumes that, in Carl Sagan’s formulation, “the Cosmos is all that is, or ever was or ever will be”,[Carl Sagan, Cosmos (New York: Random House, 1980), p. 4] naturalism denies the existence of any entity with the causal powers capable of explaining the origin of the universe as a whole. Since the Big Bang (in conjunction with general relativity) implies a singular beginning for matter, space, time, and energy,[Stephen Hawking and Roger Penrose, “The Singularities of Gravitational Collapse and Cosmology”, Proceedings of the Royal Society of London, series A, 314 (1970): 529-48.] it follows that any entity capable of explaining this singular event must transcend these four dimensions or domains. Insofar as God as conceived by Christian theism possesses precisely such transcendent causal powers, theism provides a better explanation than naturalism for the putative singularity affirmed by the Big Bang cosmology. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.230]

· To be sure, the argument that the Big Bang provides epistemic support for the Christian doctrine of Creation can be more fully developed and nuanced. Still, the general idea of how a fruitful interdisciplinary dialogue between theology and science may proceed should be clear. Note that in the example involving the Big Bang and the Christian doctrine of Creation, we only examined the case of a scientific claim (that is, the Big Bang) providing epistemic support for a theological claim (the Christian doctrine of Creation). We could, of course, turn this around. Thus, we could fix the Christian doctrine of Creation as data and ask which cosmological theory of the origin of the universe is best supported by the Christian doctrine of Creation. The answer to this question is left as an exercise to the reader. [Behe, Dembski & Meyer: Science and Evidence for Design in the Universe. Ignatius Press, San Francisco 2000, p.231-232]

Praise be to God, by whose grace righteous deeds are completed.
