Consciousness and evolution

by Neil Rickert

From time to time the question of consciousness comes up in the debates over creationism, intelligent design (ID), and evolution.  The proponent of creationism or ID raises consciousness as something that science has not explained, and uses that to argue that consciousness could only arise from the actions of a divine intelligent designer.  For a recent example of this kind of argument, see Granville Sewell’s post on Human Consciousness at the “Uncommon Descent” blog.

This is really a typical “God of the gaps” argument.  It is a mystery to me that theists continue to make such arguments.  By now it should be obvious that if they define their god by the gaps in human knowledge, then sooner or later that god will be exposed as a charlatan.

In this post, I shall argue that consciousness actually poses a far greater problem for the theist than it does for the evolutionist.  The attempts to explain consciousness have all been attempts to understand how we would go about designing a conscious agent.  That such attempts have not succeeded would seem to pose a problem for the idea of design.

Here is how Sewell finishes his argument:

And if you don’t believe that intelligent engineers could ever cause machines to attain consciousness, how can you believe that random mutations could accomplish this?

Sewell’s argument appears to be that since we have failed in our attempts to understand how to design consciousness, consciousness must be designed.  Perhaps he did not notice the incongruity of such an argument.  An alternative conclusion, and I think a more plausible one, might be that consciousness is only possible in an evolved system, and could not be present in a designed system.

There are three problems that seem to arise in attempts to understand consciousness.

  1. How can a designed system have free will?  It always seems that a designed system (a robot, for example) will be carrying out the intentions or purposes of its designer, and therefore cannot be said to be able to act of its own free will.
  2. How can a designed system be said to appreciate that the data it collects from the world is about something (i.e. is about part of the world)?  This is the intentionality problem that John Searle raised in his famous “Chinese Room” argument.  It is easy enough to see how the robot can collect data that is meaningful to its designer, but it is not clear how that data can be meaningful to the designed system (or robot) itself.
  3. How can the designed agent actually experience the world?  This is approximately the qualia question.  Roughly speaking, and using “through the eyes” metaphorically, it is the question of how the robot can see the world through its own eyes, instead of it merely behaving in accordance with the way the world is seen through the designer’s eyes.

The clear solution to these problems would be to have an agent that, in some sense, designs itself.  Then to say that the agent acts in accordance with the will of its designer is to say that it acts of its own free will.  To say that the data collected is meaningful to the agent’s designer is to say that the data is meaningful to the agent itself.  And to say that the agent behaves in accordance with how the world is seen through the designer’s eyes is to say that the agent behaves in accordance with how the world is seen through its own eyes.

With an evolved creature, we cannot quite literally say that it designed itself.  But the evolved creature comes as close to that as we could hope.  Up until conception, the creature can be said to be designed by its parents, as part of an interbreeding population.  The biological development that follows conception can reasonably be considered self-design.

My personal conclusion: consciousness is only possible in evolved systems.  If God had wanted the world to have conscious creatures, he would have created a system of evolution as a way to produce such creatures.
