Wednesday, April 14, 2004

The Limits of Intelligence

One of the chief arguments offered by ID proponent William Dembski begins with the premise that intelligent agency can cause things to happen that would be effectively impossible via natural causes alone. For example, weathering and erosion are unlikely to turn a mountain into Mt. Rushmore. He then argues that the hallmark of intelligent agency is what he calls “specified complexity,” which is a fancy term for events that are highly improbable and conform to some independently describable pattern. He then goes on to argue that certain biological structures exhibit specified complexity, and therefore must have been created by an intelligent agent. This claim is dubious to say the least, since Dembski's definition of specificity is hopelessly vague and he has no credible way of carrying out the probability calculations that are essential to his method.
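The arithmetic at the core of Dembski's method is at least easy to state. He proposes a “universal probability bound” of roughly 1 in 10^150, and treats a pre-specified 500-bit pattern as the threshold case, since a specific 500-bit string has chance probability 2^-500 under a uniform model. A quick sketch of that calculation (this is just the bookkeeping, not an endorsement of the method):

```python
# Bookkeeping behind Dembski's "universal probability bound" of ~1 in 10^150:
# under a uniform chance model, one pre-specified 500-bit string has
# probability 2^-500, which falls just below that bound.
from math import log10

bits = 500
log_prob = -bits * log10(2)  # log base 10 of 2^(-500)
print(f"P(one specific {bits}-bit string) = 10^{log_prob:.1f}")
print("Below the 10^-150 bound:", log_prob < -150)
```

The calculation itself is trivial; the contested part is everything else, namely showing that a uniform chance model is the right null hypothesis for any actual biological structure.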

There is a further problem with his argument, however. Dembski is fond of arguing that natural causes can only bring about certain sorts of effects, and that intelligent agency can bring about other sorts of effects not attainable by nature alone. The trouble is that he never thinks to turn this argument around. Sure, intelligent agency can craft Mt. Rushmore, whereas natural causes cannot. But all of our experience with intelligent agents tells us that there are certain limitations on what intelligence can achieve. We have no experience of intelligent agents being able to tinker with the fundamental constants of nature, for example. We have no experience of intelligent agents being able to create life from nothing. Intelligent agents can discover the law of gravitation, but they are completely unable to change it to suit their whim. Genetic engineering is still in its infancy, and we have no experience of intelligent agents being able to perform the sort of microengineering that the designer in ID theory has apparently performed.

In light of this, we can fairly say that all of our experience with intelligent agents tells us that they are incapable of the feats attributed to them by Dembski and his ilk. Therefore, ID proponents are not simply extrapolating a known sort of explanation to cover a new situation. They are actually positing a fundamentally new sort of creative force in the world, one for which we have no direct evidence.

Against the action of this hypothesized designer, evolutionists offer the mechanism of random variation sifted through natural selection. Since even Dembski concedes that such a mechanism can, in principle, lead to great complexity, ID proponents offer only a single argument for claiming that there are features of organisms that fundamentally cannot be explained by recourse to this mechanism. Specifically, they claim that if a biochemical system is composed of several well-matched, indispensable parts (if it is “irreducibly complex,” or IC) then it could not have formed gradually. This argument is laughably false, since, as a simple matter of logic, IC systems can evolve gradually, and we have countless artificial life simulations to prove it. On top of that, for many biochemical systems quite a lot is known about how they likely evolved.
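The power of cumulative selection over blind chance is easy to see in miniature. Here is a Dawkins-style “weasel” sketch (the target phrase, mutation rate, and population size are illustrative choices of mine, not any published artificial-life model): random variation plus selection reaches a pre-specified 28-character target in a few dozen generations, where single-step chance would need on the order of 27^28 tries.

```python
# Cumulative selection toy model: mutate a string, keep the fittest
# offspring each generation, repeat until the target is reached.
import random

random.seed(0)  # fixed seed so the run is reproducible
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # number of positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # 100 offspring, each character mutated with probability 0.05
    offspring = [
        "".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                for c in parent)
        for _ in range(100)
    ]
    parent = max(offspring, key=score)

print(f"Reached the target in {generations} generations")
```

This is only an illustration of cumulative selection, of course, not a model of any real biochemical system; the published simulation work (and the biochemical literature) does the serious lifting.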

So that is our choice: Explain the complexity of organisms via a mechanism with the proven ability to craft complex structures or conjure out of whole cloth an intelligent agent with powers fundamentally different from any other intelligent agent we have ever encountered. Which is more reasonable?
