
The Scientific Status of Design Inferences

by Bruce L. Gordon, Ph.D.
History and Philosophy of Physics
Baylor University

Scientific practice assumes that the universe, in both its origin and function, is a closed system of undirected physical processes. While many scientists reject this assumption as the ultimate truth, they still think that it is essential for science to function as if it were true. This means that they have accepted methodological naturalism as a necessary constraint on their practice as scientists. Methodological naturalism is the doctrine that in order to be scientific, an explanation must be naturalistic, that is, it must only appeal to entities, causes, events, and processes contained within the material universe. Even if we grant that this restriction on permissible explanations has been a fruitful strategy for science, we must still ask whether it is methodologically required by science. Arbitrarily rejecting methodological naturalism may be unwise as an explanatory strategy within science. But perhaps there is a perfectly rigorous method for ascertaining when such restrictions cannot be applied if a correct explanation for something is to be given. Would such a principled decision, subject to a strict and objective methodology, not also conform to the canons of scientific explanation?

A number of philosophers of science have attempted to give an account of what it means to offer a scientific explanation for a phenomenon. We briefly consider three such accounts: the deductive-nomological model, the causal-statistical (statistical-relevance) model, and the pragmatic model.

The deductive-nomological (D-N) model was the earliest model of scientific explanation and has been very influential. It postulates four criteria for scientific explanations:

1.  The explanation must be expressible as a valid deductive argument, with the thing to be explained as its conclusion.
2.  The explanation must contain at least one general law that is required for the derivation of this conclusion.
3.  The explanation must have empirical content that can be tested.
4.  The premises of the argument constituting the explanation must be true.

Subsequently, it became clear that the D-N model had irremediable shortcomings falling into two categories: (a) there are arguments meeting the criteria of the D-N model that fail to be genuine scientific explanations; and (b) there are genuine scientific explanations that fail to meet the criteria of the D-N model. In short, these four criteria are neither sufficient nor necessary to guarantee that an explanation is scientific. To see this, I offer two standard counterexamples: the man and the pill, and the explanation of paresis.

That the D-N model is insufficient as an account of scientific explanation can be illustrated by this humorous counterexample. A man explains his failure to become pregnant over the last year, despite an amorous relationship with his wife, on the ground that he has regularly consumed her birth control pills. He appeals to the law-like generalization that every man who regularly takes oral contraceptives will not get pregnant. This example conforms to the D-N pattern of explanation. The problem is that his use of birth control is irrelevant because men do not get pregnant. So it is possible to construct valid arguments with true premises in which some fact asserted by the premises is irrelevant to the real explanation of the phenomenon in question.

To see that the model does not provide conditions that are necessary for a proper scientific explanation, consider the explanation for the development of paresis (a form of tertiary syphilis characterized by progressive physical paralysis and loss of mental function). In order to develop paresis, it is necessary to have untreated latent syphilis, but only about twenty-five percent of the people in this situation ever develop it. So we have a necessary condition for the development of the disease, but we cannot use it to derive, or even to predict, the conclusion that paresis will develop in an individual case. In fact, we are better off predicting that it will not develop, since it does not in seventy-five percent of the cases. Even so, the proper scientific explanation for paresis is that it results from untreated latent syphilis. This is just one example of a genuinely scientific explanation that does not conform to the D-N model.

To remedy the defects of the D-N model of scientific explanation, the causal-statistical or statistical-relevance model was proposed. Advocates of this model stress the role of causal components in scientific explanations and generally deny that explaining something scientifically must involve rigorous deductive or inductive arguments. Because they recognize that there are rational explanations for unexpected events (like the onset of paresis after untreated latent syphilis), they reject the idea that universal or statistical laws and empirical facts must provide conditions of adequacy for a scientific explanation of the occurrence of events.

The positive idea behind the causal-statistical model is that a scientific explanation presents two things: (1) the set of factors statistically relevant to the occurrence of that event; and (2) the causal framework or link connecting those factors with the event to be explained. Statistical relevance may be defined as follows: factor B is statistically relevant to factor A if and only if the probability of A, given that B has already occurred, is different from the probability of A occurring on its own, that is, P(A|B) ≠ P(A). The causal network or link connecting the factors with an event is simply an account of the underlying causal processes and interactions that bring it about. A causal process is a continuous spatio-temporal process; a causal interaction is a relatively brief event in which two or more causal processes intersect. The causal-statistical theory arose from the conviction that legitimate scientific explanations have to explain events in terms of the things that actually caused them to happen.
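The statistical-relevance condition P(A|B) ≠ P(A) can be checked directly against frequency data. The following is a minimal illustrative sketch (the `statistical_relevance` helper and the toy frequencies, loosely echoing the paresis example, are mine, not the article's):

```python
# Illustrative sketch: estimating P(A) and P(A|B) from observed cases
# to test statistical relevance, i.e. whether P(A|B) != P(A).
from fractions import Fraction

def statistical_relevance(events, a, b):
    """Return (P(A), P(A|B)) from a list of observation dicts.

    Each element of `events` maps factor names to booleans.
    Factor B is statistically relevant to A iff P(A|B) != P(A).
    """
    total = len(events)
    a_count = sum(1 for e in events if e[a])
    b_events = [e for e in events if e[b]]
    p_a = Fraction(a_count, total)
    p_a_given_b = Fraction(sum(1 for e in b_events if e[a]), len(b_events))
    return p_a, p_a_given_b

# Toy frequencies echoing the paresis case: A = develops paresis,
# B = has untreated latent syphilis; about 25% of B-cases develop A.
data = (
    [{"paresis": True,  "syphilis": True}] * 1 +
    [{"paresis": False, "syphilis": True}] * 3 +
    [{"paresis": False, "syphilis": False}] * 16
)
p_a, p_a_given_b = statistical_relevance(data, "paresis", "syphilis")
print(p_a, p_a_given_b)    # 1/20 1/4
print(p_a_given_b != p_a)  # True: syphilis is statistically relevant
```

Note that relevance holds even though P(A|B) is only 1/4: the factor raises the probability of the outcome without making it likely, which is exactly why the model can accommodate explanations of unexpected events like paresis.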

While the causal-statistical model seems fairly solid, it nonetheless finds a counterexample in quantum mechanics, the theory describing the behavior of atomic and subatomic particles. The details of why the model fails for quantum mechanics are complicated. Roughly, the causal-statistical account appeals to processes that are deterministic and continuous in space-time, while it is generally accepted that quantum mechanics is not consistent with this view of the world. Since quantum mechanics is regarded as one of the triumphs of twentieth-century science, we have a reason for thinking that this model of explanation is too narrow. Of course, we also have the option of saying that quantum mechanics provides us with a mathematical description of quantum phenomena that enables amazingly accurate predictions, but it does not explain these phenomena at all: a complete explanation would get to the root cause of experimental outcomes, not just predict them.

The shortcomings of the D-N and causal-statistical models led to a third proposal for scientific explanations, the pragmatic model. This approach not only denies that scientific explanations have a characteristic form (as in the D-N model), but also denies that they supply distinctive information (as in the causal-statistical model) outside of that provided by the theories, facts, and procedures of science itself. Calling an explanation "scientific" means nothing more than saying it draws on what gets recognized as science to provide an explanation, and whether this criterion is satisfied is something determined by the community of scientists themselves. Beyond this, the pragmatic theory is highly contextual.

Bas van Fraassen, the originator of the pragmatic theory, maintains that a scientific explanation is a telling response to a why-question that is identifiable by its topics of concern, contrast classes, and explanatory relevance conditions. An explanation is a telling response simply if it favors the occurrence of the state of affairs to be explained. The topic of concern is the thing to be explained. The contrast class is the set of alternative possibilities, of which the topic of concern is a member, for which an explanation might be requested in a particular context. The explanatory relevance conditions are the respects in which an answer might be given. For example, to borrow one of van Fraassen's illustrations, our topic of concern might be why an electrical conductor is warped. In this case, the contrast class might consist of other nearby conductors that are not warped, the warping of the conductor as opposed to its retaining its original shape, etc. The explanatory relevance conditions might be the presence of a particularly strong magnetic field, the presence of moisture on the conductor, and so on. All of these things are highly dependent on context.

The pragmatic theory is relatively simple and direct in comparison with the other two models. It also is capable of accommodating the special aspects of the other two theories of explanation, and it has a very broad range of application. Critics of the pragmatic model have questioned whether every why-question asked by a scientist requires a contrast class, whether scientific questions might sometimes involve explanations of how as well as why (for example, the question of how genes replicate), whether a telling response must always favor the topic of concern, and whether the theory is too broad and would legitimize as scientific those explanations that the community of scientists might wish to exclude (though actual acceptance by the scientific community seems to be built into the criteria of legitimacy in this case).

Notice that none of the foregoing theories of scientific explanation mentions methodological naturalism as a constraint (though it is perhaps implicit in the definition of a causal process used by the causal-statistical approach). Some philosophers of science would say this absence points to its status as a grounding assumption for any theory of scientific explanation; others would maintain that this absence shows it is not an essential part of scientific explanation and that its relevance as a condition is context-dependent. A brief consideration of the role that a rigorous theory of design inferences can play in science reveals the latter attitude to be the more reasonable.

As William Dembski points out, drawing design inferences is already an essential and uncontroversial part of various scientific activities ranging from the detection of fabricated experimental data, to forensic science, cryptography, and even the search for extra-terrestrial intelligence (SETI). He identifies two criteria as necessary and sufficient for inferring intelligence or design: complexity and specification. Complexity ensures that the event in question is not so simple that it can readily be explained by chance. It is an essentially probabilistic concept. Specification ensures that the event in question exhibits the trademarks of intelligence. The notion of specification amounts to this: if, independently of the small probability of the event in question, we are somehow able to circumscribe and define it so as to render its reconstruction tractable, then we are justified in eliminating chance as the proper explanation for the event. Dembski calls such an event one of specified small probability.

If an event of small probability fails to satisfy the specification criterion, it is still attributable to chance, as is the case, for example, with any sequence of heads and tails produced by 1,000 tosses of a fair coin. But if an event is genuinely one of specified small probability, then the proper conclusion is that the cause of that event is intelligent agency. A brief example will suffice to clarify the notion. Suppose that a bank vault lock has a quadrillion possible combinations. Each of the quadrillion possible combinations is equally improbable, yet one of them in fact opens the lock. The actual combination that opens the vault is an event of specified small probability. If a person given one chance to open the vault succeeds in doing so, then the proper conclusion is that he opened the vault by design, namely by having prior knowledge of the right combination. One of Dembski's important contributions has been to render the notion of specification mathematically rigorous in a way that places design inferences on a solid foundation.
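The contrast between the two cases above is a matter of simple arithmetic. In this illustrative sketch, the quadrillion figure and the 1,000 tosses come from the examples in the text; the variable names are mine:

```python
# Comparing two small-probability events: the vault combination
# (specified in advance) and a particular 1,000-toss coin sequence
# (unspecified). Exact arithmetic via fractions.
from fractions import Fraction

vault_combinations = 10**15             # "a quadrillion possible combinations"
p_vault = Fraction(1, vault_combinations)

p_coin_sequence = Fraction(1, 2**1000)  # any specific 1,000-toss sequence

print(float(p_vault))                   # 1e-15
# The coin sequence is far *less* probable than the vault combination...
assert p_coin_sequence < p_vault
# ...yet only the vault opening is specified (the one combination that
# opens the lock was circumscribed in advance), so on Dembski's criteria
# only it licenses a design inference; the coin sequence remains chance.
```

The point of the sketch is that small probability alone does no work: what separates the two cases is specification, not the size of the number.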

The mathematical analysis used to determine whether an event is one of specified small probability rests on empirical observations set in the context of the theoretical models used to study the domain (quantum-theoretic, molecular biological, developmental biological, cosmological, etc.) under investigation, but the design inference itself can be formulated as a valid deductive argument. One of its premises is a mathematical result that Dembski calls the law of small probability. That the design inference lends itself to this precision of expression is significant because it enables us to see that a rigorous approach to design inferences conforms to even the most restrictive theory of scientific explanation, the D-N model. In fact, even though the accounts of scientific explanation we considered were inadequate as universal theories, all three of them captured important intuitions. Furthermore, it is short work to see that rigorous design inferences satisfy the conditions imposed by all of them.
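The deductive shape of the inference can be sketched as a decision procedure. This is only an illustration of the logical form, not Dembski's actual formalism: the function name is mine, and the probability bound (his proposed universal bound of roughly 1 in 10^150 is borrowed for concreteness) stands in for the case-specific bounds a real analysis would derive:

```python
# Hedged sketch of the logical form of a design inference:
# eliminate chance only for events of *specified* small probability.
def design_inference(probability, specified, bound=1e-150):
    """Return the favored explanation for an observed event.

    `bound` is an illustrative small-probability bound; deriving an
    appropriate bound is the substantive mathematical work and is
    not reproduced here.
    """
    if probability >= bound:
        return "chance"   # not improbable enough to eliminate chance
    if not specified:
        return "chance"   # improbable but unspecified events remain chance
    return "design"       # specified small probability -> intelligent agency

print(design_inference(0.5, False))     # chance
print(design_inference(1e-200, False))  # chance (no specification)
print(design_inference(1e-200, True))   # design
```

Note that both conditions are doing work: dropping either branch collapses the inference, which mirrors the text's insistence that complexity and specification are jointly necessary.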

Design inferences conform to the requirements of a deductive-nomological explanation because they satisfy all four criteria of this explanatory model.

1.  The explanation they offer can be put in the form of a valid deductive argument.
2.  It contains at least one general law (the law of small probability), and this law is required for the derivation of the thing to be explained (in this case the nature of the cause of the event in question).
3.  It has empirical content because it depends on both the observation of the event and the empirical facts relevant to determining the objective probability of its occurrence.
4.  The sentences constituting the explanation are true to the best of our knowledge, because they take into account all of the relevant factors in principle available to us prior to the event we are seeking to explain.

Design inferences also satisfy the requirements of the causal-statistical model of explanation by isolating the factors statistically relevant to the explanation of the event under investigation. This is accomplished by determining that the event in question is one of small probability and by ensuring that the criterion of specification is satisfied, thereby eliminating natural law and chance as possible explanations. The design inference also makes manifest the causal network undergirding this statistical regularity, since it causally connects the relevant explanatory factor (intelligent agency) to the occurrence of the event (though not necessarily by means of a mechanism).

Finally, design inferences satisfy the pragmatic model of explanation because they provide telling answers to why questions, where those questions are identifiable by their topics of concern, their contrast classes, and their explanatory relevance conditions. The topic of concern in a design inference is the observed occurrence of an improbable event that bears prima facie evidence of specification. The contrast class is constituted by the set of alternatives of which the topic of concern is a member. For example, the contrast class might include the occurrence of other more probable events in the causal context under consideration, or equally improbable events in that context that bear no evidence of specification, etc. The explanatory relevance conditions might be the presence of highly particular initial conditions in the physical system, indications of thermodynamic counter-flow, the presence of apparently intelligent informational content, and so on. All of these things are dependent upon the context, but what is sought is a correct account of the cause of the event in question. The response provided by the design inference is therefore a telling one by the standards of the pragmatic model, because when an event of demonstrably specified small probability occurs, this state of affairs is favored by design-theoretic explanations.

Since design inferences satisfy all three models of scientific explanation, there seems little reason to bar their legitimacy as a mode of scientific explanation. Indeed, when generating scientific conclusions in cryptography or forensics, the design inference is not controversial. The sticking point centers on the issue of methodological naturalism. What happens if design-theoretic analysis, when applied to certain natural phenomena, yields the conclusion that these phenomena are the result of intelligent design? And what if this state of affairs implies that there is an intelligent cause that transcends our universe? Nothing but the unacknowledged operation of a questionable double standard would bar using design-theoretic tools in this context when their employment would be uncontroversial if no such implications were in view. So can design inferences, when applied to nature, be a form of scientific explanation? Considered without prejudice, this question requires an affirmative answer.
