Consciousness 3: Qualia

by Neil Rickert

I don’t much like the word “qualia”.  I don’t find it useful.  People who use that word (and its singular form “quale”) hope to be able to discuss questions about conscious experience.  In this post, I’ll try to address those topics without the assumptions that seem to be built into the use of “qualia.”

I’ll start with a review of earlier posts in this series.

  • Experience: I have identified this with the internal activity of homeostatic processes, such as are commonly found in biological systems.  In particular, I have identified “experience” (with the scare quotes) with internal events to which the system reacts, so that it can be said to be reactively aware.  How we become conscious, and not merely reactively aware, I take to be related to our ability to have thoughts.  I expect to discuss thought in a future post.
  • Information: I have suggested that an organism acquires information about the world, and represents this information as internal events of which the organism is reactively aware.  This reactive awareness of represented information mediates the organism’s awareness or consciousness of the external world.  Perhaps one could think of information as being represented by biochemical events or by neural events.  I prefer not to be that specific, because I am discussing principles rather than implementation details.  AI proponents will want to consider whether computational events could be used instead of neural events.

Information and representation

I’ll start by discussing information and its representation.  The term “representation” is contentious.  Some people — representationalists — insist that the brain forms internal representations of the world, and that we literally perceive those internal representations.  Others — direct realists — insist that we see the world directly.  And they often deny that there are representations.  However, they do talk about picking up information.  And, at least as I use the term, “information” is abstract and can only exist in the form of representations.  So you might say that I’m a direct realist who does not deny representations, though I do deny the kind of representations that the representationalists argue for.

I use “information” in about the sense defined by Shannon.  That is, information is communicated in the form of discrete symbols.

Natural languages are a well known example of the use of information.  We represent something (often said to be semantics) in terms of symbols (words).  And we communicate sequences of words as a way of transmitting the represented semantic information.  I will use the term “semantic information” for what is represented in those symbols, and “syntactic information” for the symbols themselves.  And I’ll note that Shannon’s theory was primarily concerned with syntactic information (the communication of symbols) rather than with semantic information.
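
To make that distinction concrete, here is a minimal sketch in Python (a toy illustration, nothing more): Shannon’s entropy depends only on the statistics of the symbols, so it measures syntactic information while being entirely blind to what, if anything, the message means.

```python
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Shannon entropy of the symbol distribution, in bits per symbol.

    The value depends only on how often each symbol occurs (syntax),
    not on what the message means (semantics).
    """
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

meaningful = "the cat sat"
scrambled = "".join(sorted(meaningful))  # same symbols, meaning destroyed

print(round(entropy_bits(meaningful), 6))  # about 2.663533 bits per symbol
print(round(entropy_bits(scrambled), 6))   # same value: only symbol statistics matter
```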

What becomes clear, from looking at natural languages, is that there are many of them.  Apparently it takes some arbitrary choices to represent semantics as syntactic information.

We see something similar if we look at technology.  The letters of the alphabet have been represented in ASCII in some computers and in EBCDIC in other computers.  Again, we have apparently arbitrary choices which are then systematically followed.
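
Here is a small illustration in Python (cp037 is one of the EBCDIC code pages shipped with Python’s standard codecs): the very same letter comes out as different bytes under the two schemes, and either scheme works as long as it is followed systematically.

```python
letter = "A"

ascii_byte = letter.encode("ascii")    # b'A'  -> hex 41 (decimal 65)
ebcdic_byte = letter.encode("cp037")   # cp037 is an EBCDIC variant -> hex c1 (decimal 193)

print(ascii_byte.hex(), ebcdic_byte.hex())  # 41 c1

# Two arbitrary conventions, each systematically followed,
# both carrying the same letter.
assert ascii_byte.decode("ascii") == ebcdic_byte.decode("cp037") == "A"
```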

Having semantic information depends on physical representation, and physical representation depends on arbitrary choices having been made.

Qualities of experience (qualia)

What users of the term “qualia” want to do is connect the qualities of experience with physical details.  That is, they want a physicalist reduction of the qualities of experience.  If the way that we represent the world as internal events happens to be the same for all people, then perhaps such a reduction could be found.  However, if the way we represent information depends on arbitrary choices, then there may be no two people who do it in the same way.

It seems likely to me that the DNA does not prescribe a particular representation scheme.  Rather, it generates a drive to represent.  So a developing organism will be driven to make it up as it goes along, to invent its own ways of representing information.  And, in that case, even two identical twins are likely to come up with different ways of internally representing information about the world.

If I am correct about that, then the “hard problem” is hopeless.  There will be no way of relating the qualities of experience to the details of internal physical representation.  The details of physical representation might have little relevance.

Why are our experiences similar?

If we represent the world differently, how is it that we experience it similarly?  That is a natural question.  But perhaps it is a mistaken question.  We cannot compare qualities of experience.  Our ability to compare is limited to our ability to describe things in the world that we are experiencing.  And the similarity there is surely because we are experiencing the same world.

This suggests that the qualities of experience that we can talk about and compare are due to the semantics of the information we represent, rather than to the syntactic form of representation.  Roughly speaking, seeing the world is having immediate semantic information about the world that we are seeing.  Two people might have very different ways of internally representing that information, yet have about the same semantic information even though the syntactic details are very different.
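
A rough analogy from computing, with the obvious caveat that brains are not character codecs: the same semantic content can sit behind byte-for-byte different syntactic representations, and the agreement only shows up when each is read back as a description of the shared world.

```python
description = "the tomato is red"   # one piece of semantic content

rep_a = description.encode("utf-8")      # one internal representation
rep_b = description.encode("utf-16-le")  # a syntactically very different one

print(rep_a == rep_b)  # False: the representations differ byte for byte

# Yet both stand for the same description of the world.
print(rep_a.decode("utf-8") == rep_b.decode("utf-16-le"))  # True
```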

If I am correct about this, then the program of physical reductionism is hopeless.  We need to study the cognitive principles of how an organism deals with semantic information, rather than the physical principles of how it processes signals.

The problem for AI

The “hard problem” can be rephrased for AI as “how can we design an artificial intelligence so that it has sensory experience similar to that of people?”  If experience depends on the semantics of the information, then the real problem for AI is that our computing systems deal primarily with syntactic information, and it has proved difficult to get at the semantics.

A final note

I will try to say more about semantics in a future post.  I have previously touched on the topic in “A semantic conception of mind”, though that might come across as too abstract.
