In prior posts (here and here), I have illustrated representational methods and direct methods. The illustrations were from science, because science is more public and so makes the contrast easier to demonstrate. I believe they illustrate well enough the distinction between direct and indirect perception. Both aim to provide the same sort of information about the world; the methods differ, though perhaps the differences are small enough to be confusing.
The primary distinction here is that direct perception is simpler and more direct, and does not rely on computation or inference. This is why I see direct perception as more likely to be what has evolved, and thus a more likely candidate for explaining human perception.
One way of seeing the distinction is to look at it in terms of categorization. Here, I use “categorization” to refer to the dividing up of the world into parts (or categories). This comes from the old idea (from Plato?) of carving the world at its seams, though the seams might actually be man-made.
In the case of indirect measurement, we categorized by dividing the world up in accordance with the calibration marks on the ruler used to measure the height of the mercury column. Or, in simpler terms, we categorized into intervals of length (height). Using those length categories, together with the appropriate knowledge, we then made an inference to yield a secondary categorization in terms of intervals of temperature. So that’s a double categorization: a primary categorization done directly with the physical world, so as to give a representation, and then a secondary categorization done inferentially from the first.
Direct perception eliminates the double categorization. Instead, we categorize directly into the desired intervals of temperature.
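The contrast can be sketched in code. In this toy model of reading a mercury thermometer, the calibration constants and function names are made up for illustration; the indirect route builds an intermediate length representation and then infers temperature, while the direct route reads straight into temperature categories:

```python
# Toy contrast between indirect (double) and direct categorization.
# All calibration constants here are made up for illustration.

MM_PER_DEGREE = 1.8   # hypothetical expansion rate of the mercury column
MM_AT_ZERO = 40.0     # hypothetical column height at 0 degrees C

def indirect_temperature(column_mm: float) -> int:
    """Double categorization: first categorize into 1 mm length
    intervals (the primary representation), then infer a temperature
    category from that, using the calibration knowledge."""
    length_category = round(column_mm)                  # primary categorization
    degrees = (length_category - MM_AT_ZERO) / MM_PER_DEGREE
    return round(degrees)                               # secondary categorization

def direct_temperature(column_mm: float) -> int:
    """Direct categorization: the scale is marked in degrees, so the
    reading falls straight into a temperature category, the
    calibration knowledge being built into the instrument."""
    return round((column_mm - MM_AT_ZERO) / MM_PER_DEGREE)

print(indirect_temperature(76.0))  # both print 20
print(direct_temperature(76.0))
```

Both yield the same information; the difference is that the indirect version builds, and then discards, an intermediate length representation.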
We see the same idea at play with accounts of visual perception. According to representationalists, we first form a pixel map of the immediate environment (or the visual field). Forming a pixel map is categorization: it divides the world up into tiny pixel-sized pieces. And then, according to representationalists, we use inference to further categorize into things such as cats, dogs, etc. Gibson argued that, instead, we directly perceive cats. We do this by having what Gibson called a transducer, tuned to the invariant properties of cats. The transducer thus directly categorizes into cats and the non-cat remainder of the world.
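As a toy sketch of the two architectures: here the visual field is a short list of numbers, and the cat “invariant” is reduced to a made-up threshold. Nothing in this snippet is meant as a serious model of vision; it only shows where the intermediate representation sits.

```python
# Toy contrast: pixel map + inference vs. a tuned transducer.
# The "invariant" here is a made-up numeric threshold, purely illustrative.

scene = [0.1, 0.2, 0.9, 0.94, 0.88, 0.3]  # stand-in for the visual field

def representational_route(scene):
    """First categorize into pixels (a representation), then apply
    stored knowledge to infer which regions are cat."""
    pixel_map = [round(x, 1) for x in scene]           # primary categorization
    # inference step: prior knowledge says cats look like values > 0.8
    return [i for i, p in enumerate(pixel_map) if p > 0.8]

def cat_transducer(scene):
    """Directly categorize: respond only where the tuned invariant
    is present, with no intermediate pixel representation."""
    return [i for i, x in enumerate(scene) if x > 0.8]

print(representational_route(scene))  # [2, 3, 4]
print(cat_transducer(scene))          # [2, 3, 4]
```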
“Direct” then, means avoiding double categorization. It means directly dividing into the desired categories instead of categorizing in some other way, followed by inference.
As my illustrative examples show, representational methods of perception require additional knowledge. If we take seriously the basic ideas of empiricism, that our knowledge of the world comes from what we learn through experience, then representationalism seems unlikely because it would seem to require a lot of innate knowledge. In order to see a cat, according to representationalists, I categorize the world into a pixel map, and then I perform an inference (or computation) to determine what is a cat. That presupposes a lot of prior knowledge about cats and about how cats can be distinguished in pixel maps.
The direct perception viewpoint is that, via a process of perceptual learning, our brains build cat transducers and tune them to do an adequately good job of picking out cats. As our brains build these transducers we are, in effect, gaining implicit knowledge of what distinguishes cats from other things. I take it that this implicit knowledge becomes part of our system of meanings. The learning needed to build these transducers is, I believe, part of what explains intentionality.
In the prior posts, I dismissed the relevance of Plantinga’s argument to direct perception. My dismissal was because I did not see, for direct perception, the same knowledge and truth requirements that seem to be needed for representational perception.
I expect that Plantinga might argue that, if I directly categorize things as cats, there is still a truth requirement that what I categorize as cats truly are cats. While that at first appears to be a good argument, it does not fit with how I see the problems that perception must solve.
A traditional view would be that the world is metaphysically divided into things such as cats and dogs. I disagree with that. As I see it, the world consists of stuff. We humans divide it into things like cats and dogs. So I do not see a role for ontology as a branch of metaphysics.
I can re-express that last paragraph in a different way. The classical ontologist says that certain things, such as cats and dogs, exist, and that it is up to us to form true statements about them. The way I see it, instead, is that, of the uncountable infinity of things that could be said to exist, our problem is to decide which ones are important enough for us to name and to develop methods of recognizing (or to develop suitable transducers for).
If we came upon a person who was unable to distinguish between cats and dogs, but instead considered them all to be small animals, we would not say that was a failure of truth. We would say that it was a lack of the ability to discriminate between cats and dogs.
I have, in effect, implied that our distinguishing between cats and dogs, rather than just calling them all small animals, is a human decision and is not dictated by metaphysics. I take this to be the kind of view that Wittgenstein was getting at, when he said “If a lion could talk, we could not understand him.”
Examples from technology
When you shop at the supermarket your grocery bill is determined, in part, using a bar-code scanner. The bar-code scanner uses the methods of direct perception, in the sense that it avoids double categorization. It directly seeks the bar code. It does not pixelize the visual field, and then infer that there is a bar code.
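As a rough sketch of what “directly seeking the bar code” might look like: the snippet below hunts a one-dimensional scanline for the bar-space-bar start guard that EAN/UPC symbols begin with. The scanline values and the threshold are fabricated for illustration.

```python
# Toy direct seek: look for the EAN/UPC start guard (bar, space, bar)
# in a 1-D scanline, rather than building and analyzing a full image.
# The scanline brightness values (0-255) are fabricated.

scanline = [250, 248, 30, 240, 25, 244, 30, 28, 240, 251]

def find_start_guard(scanline, threshold=128):
    """Return the index where a dark-light-dark run begins, i.e. the
    1-0-1 guard pattern, treating dark as 1 and light as 0."""
    bits = [1 if v < threshold else 0 for v in scanline]
    for i in range(len(bits) - 2):
        if bits[i:i + 3] == [1, 0, 1]:
            return i
    return -1

print(find_start_guard(scanline))  # prints 2, where the guard begins
```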
When your computer uses a network connection, the ethernet card is using direct methods. It does not double-categorize the electrical signal. Rather, it directly seeks the signal variation pattern that is what we expect in an ethernet frame. In Gibson’s way of speaking, it is tuned to the invariant properties of ethernet frames.
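That hunt for an invariant pattern can be sketched as follows: an Ethernet receiver watches the incoming bit stream for the alternating preamble bits ending in the start-frame delimiter’s final 1-1, and only then treats what follows as frame data. This sketch omits the clock recovery and noise tolerance that real hardware needs.

```python
# Simplified Ethernet-style hunt: scan a bit stream for the invariant
# preamble/SFD tail (alternating 1,0,... ending in 1,1) and return the
# index where the frame data begins. Real silicon also handles clock
# recovery and noise, omitted here.

SFD_TAIL = [1, 0, 1, 0, 1, 0, 1, 1]  # end of preamble + delimiter bits

def find_frame_start(bits):
    for i in range(len(bits) - len(SFD_TAIL) + 1):
        if bits[i:i + len(SFD_TAIL)] == SFD_TAIL:
            return i + len(SFD_TAIL)  # frame data starts after the SFD
    return -1

# some line noise, then the preamble, the delimiter, then frame data
stream = [0, 1, 1, 0] + [1, 0] * 8 + [1, 1] + [1, 0, 1, 1, 0, 0, 1, 0]
print(find_frame_start(stream))  # prints 22
```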
Those are just two examples. Technologists are pragmatists. They try to find a solution that is simple and reliable. And they often seem to settle on using the methods that I take to be those of direct perception.
Implications for cognitive science
As I see it, there isn’t much computing going on in the brain. In keeping with my earlier post on direct measurement, what I see as important is the maintaining of consistent calibration. I take Hebbian learning to be what we should expect to see if the brain is involved in maintaining consistent calibration.
It might seem that calibration is merely mundane. However, if brain processes are trying to keep the calibration consistent, so that what we see with the left eye is the same as what we see with the right eye, then those processes will come upon some discrepancies that cannot be eliminated. That counts as a discovery of new information that turns out to be what is needed for depth perception. So a program of maintaining consistent calibration can discover new kinds of information.
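The depth-from-discrepancy point matches the standard stereo relation, depth = focal length × baseline / disparity: the residual disparity between two consistently calibrated views cannot be calibrated away, and it is precisely what encodes depth. A sketch with made-up numbers:

```python
# Stereo depth from an irreducible calibration discrepancy.
# The focal length, baseline, and pixel positions are made up.

FOCAL_LENGTH_PX = 800.0   # hypothetical focal length, in pixels
BASELINE_M = 0.065        # hypothetical eye separation, in metres

def depth_from_disparity(left_x: float, right_x: float) -> float:
    """The discrepancy (disparity) between where the two calibrated
    views place the same point is exactly what encodes its depth."""
    disparity = left_x - right_x     # cannot be calibrated away
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

print(depth_from_disparity(412.0, 386.0))  # nearer point, larger disparity
print(depth_from_disparity(405.0, 398.0))  # farther point, smaller disparity
```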
Fifty years ago, a radio technician would connect a signal generator to a radio, and use that to adjust (calibrate) the various circuits. This was a kind of fine tuning. I suspect that the brain might be doing something similar. It is a standard engineering method. So suppose that the brain is generating test signals to tune up the circuits and keep those consistently calibrated. We might well expect to experience that as dreams, or REM sleep.
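The technician’s procedure is a feedback loop: inject a known test signal, compare the circuit’s response with the expected response, and nudge the adjustment until they agree. A minimal sketch of that loop, with a made-up linear “circuit”:

```python
# Toy calibration-by-test-signal loop: inject a known signal, compare
# the response to the expected one, and nudge the gain until they match.
# The "circuit" is a made-up linear model, purely illustrative.

TEST_SIGNAL = 1.0          # known injected amplitude
TARGET_RESPONSE = 5.0      # what a well-tuned circuit should output

def circuit(signal: float, gain: float) -> float:
    return gain * signal   # stand-in for the real circuit

def calibrate(gain: float = 1.0, step: float = 0.1) -> float:
    """Repeatedly inject the test signal and adjust the gain in the
    direction that reduces the error, stopping when it is small."""
    for _ in range(1000):
        error = circuit(TEST_SIGNAL, gain) - TARGET_RESPONSE
        if abs(error) < 0.01:
            break
        gain -= step * error   # nudge against the error
    return gain

print(calibrate())  # converges close to the target gain of 5.0
```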