September 4, 2010

Consciousness and evolution

by Neil Rickert

From time to time the question of consciousness comes up in debates over creationism, ID (intelligent design), and evolution.  The proponent of creationism or ID raises consciousness as something that science has not explained, and uses that gap to argue that consciousness could only arise from the actions of a divine intelligent designer.  For a recent example of this kind of argument, see Granville Sewell’s post on Human Consciousness in the “Uncommon Descent” blog.

It is really a typical “God of the gaps” argument.  It is a mystery to me that theists continue to make such arguments.  By now it should be obvious that if they are going to define their god based on the gaps in human knowledge, then sooner or later that god will be exposed as a charlatan.

In this post, I shall argue that consciousness actually poses a far greater problem for the theist than it does for the evolutionist.  The attempts to explain consciousness have all been attempts to understand how we would go about designing a conscious agent.  That such attempts have not succeeded would seem to pose a problem for the idea of design.

Here is how Sewell finishes his argument:

And if you don’t believe that intelligent engineers could ever cause machines to attain consciousness, how can you believe that random mutations could accomplish this?

Sewell’s argument appears to be that since we have failed in our attempts to understand how to design consciousness, consciousness must itself be designed.  Perhaps he did not notice the incongruity of such an argument.  An alternative conclusion, and I think a more plausible one, might be that consciousness is only possible in an evolved system, and could not be present in a designed system.

There are three problems that seem to arise in attempts to understand consciousness.

  1. How can a designed system have free will?  It always seems that a designed system (a robot, for example) will be carrying out the intentions or purposes of its designer, and therefore cannot be said to be able to act of its own free will.
  2. How can a designed system be said to appreciate that the data it collects from the world is about something (i.e. is about part of the world)?  This is the intentionality problem that John Searle raised in his famous “Chinese Room” argument.  It is easy enough to see how the robot can collect data that is meaningful to its designer, but it is not clear how that data can be meaningful to the designed system (or robot) itself.
  3. How can the designed agent actually experience the world?  This is approximately the qualia question.  Roughly speaking, and using “through the eyes” metaphorically, it is the question of how the robot can see the world through its own eyes, instead of it merely behaving in accordance with the way the world is seen through the designer’s eyes.

The clear solution to these problems would be to have an agent that, in some sense, designs itself.  Then to say that the agent acts in accordance with the will of its designer is to say that it acts of its own free will.  To say that the data collected is meaningful to the agent’s designer is to say that the data is meaningful to the agent itself.  To say that the agent behaves in accordance with how the world is seen through the designer’s eyes is to say that the agent behaves in accordance with how the world is seen through its own eyes.

With an evolved creature, we cannot quite literally say that it designed itself.  But the evolved creature does come as close to that as we could hope.  Up until the time of conception, the creature can be said to be designed by its parents as part of an inter-breeding group.  The biological development that follows conception can reasonably be considered self-design.

My personal conclusion: consciousness is only possible in evolved systems.  If God had wanted the world to have conscious creatures, he would have created a system of evolution as a way to produce such creatures.

September 4, 2010

On similarity and partitioning

by Neil Rickert

John Wilkins has an interesting post on similarity.  Since this is closely related to my ideas about evolutionary epistemology, I am adding my two cents.

A great deal of philosophical explanation is based on similarity.  For example, it is often said that we apply names such as “cat” and “dog” based on their similarity to some sort of archetype that we are supposed to carry in our heads.  However, it is never explained how that similarity judgment works, nor how the archetype gets into our heads in the first place.

My own view is that philosophy has much of this almost exactly backwards.  The view of philosophers is that we organize the world by grouping or categorizing similar things together.  My apparently heretical alternative view is that we organize the world according to whatever ways we can find that work, and that prove useful to us.  Of particular importance is that our ways of organizing the world must be reliable, repeatable, and useful (pragmatic).  That is to say, we organize the world so that if we went back and did it again in the same way, we would come up with the same result, and we do it in order to make it easier to negotiate our way around our world.

Once we have a reliable way of organizing the world, we can call things “similar” to the extent that our method of organization groups them into common categories.

Allow me to redescribe this in terms of what I call “partitioning”.

What is immediately available to us, or to a newborn infant, is a world, but perhaps a world with no known structure.  In order to better apprehend the world, we try to find reliable ways of dividing that world up into parts.  For example, the newborn infant eventually learns how to partition the world into daytime and nighttime, an ability welcomed by parents when it finally allows them to get a good night’s sleep.  Once we have partitioned, we can then further subdivide those partitions into finer sub-partitions.  The result is a scheme of nested partitioning.  The further down this hierarchy two things still share a partition, the more similar they will seem to us.
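To make the partitioning picture concrete, here is a minimal sketch in Python.  It is only an illustration under my own assumptions: the partition tree and the category names (outdoors, indoors, and so on, beyond the daytime/nighttime example above) are hypothetical, and the similarity score is simply the depth of the deepest partition that contains both items.

```python
# A sketch of nested partitioning: similarity is judged by how deep in the
# hierarchy two things still share a partition.  The tree and the category
# names are hypothetical illustrations, not taken from the post itself.

world = {
    "daytime": {
        "outdoors": {"park", "garden"},
        "indoors": {"kitchen", "office"},
    },
    "nighttime": {
        "outdoors": {"street", "sky"},
        "indoors": {"bedroom"},
    },
}

def contains(tree, x):
    """True if x appears anywhere inside this partition."""
    if isinstance(tree, set):
        return x in tree
    return any(contains(sub, x) for sub in tree.values())

def shared_depth(tree, a, b, depth=0):
    """Depth of the deepest partition containing both a and b (-1 if none)."""
    best = depth if contains(tree, a) and contains(tree, b) else -1
    if isinstance(tree, dict):
        for sub in tree.values():
            best = max(best, shared_depth(sub, a, b, depth + 1))
    return best

print(shared_depth(world, "park", "garden"))   # 2: share daytime -> outdoors
print(shared_depth(world, "park", "kitchen"))  # 1: share only "daytime"
print(shared_depth(world, "park", "bedroom"))  # 0: share only the world itself
```

Note that in this sketch the similarity score is read off from the organization after the fact; it plays no role in building the partitions, which is exactly the reversal being argued for here.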

The overall point: our idea of similarity is a consequence of how we organize the world, rather than the underlying principle that we use to carry out that organizing.