Representationalism and computationalism

by Neil Rickert

A commenter to a recent post said, in part:

The consensus in science is that objective data (like photons, matter, etc.) interact with our body via the sensory system (nervous system, etc.) converting a truncated amount of incoming data (due to limitations on nervous system processing speed and resolution) into even further truncated streams of information (due to nervous system compression before & as a result of space limitations in the body/spinal cord), eventually leading to the brain where that truncated data is translated into what we perceive (perceptions).

That is a view known as “representationalism”, and computationalism is the particular version of representationalism which says that what the brain is mainly doing is computation. I’m not sure that is the consensus view in science, though it probably is the consensus view in philosophy of mind, in cognitive science and in cognitive psychology.  Many behaviorist psychologists would disagree.

The view of representationalists is that we don’t experience the world at all.  Rather, received stimuli form a representation of the world, and we then experience that representation.  This includes the view that perception is passive, and that our cognitive processes apply only to what perception has given us.  Steve Lehar has an illustrated page where he argues the case for representationalism.

The alternative view is that we directly perceive and interact with the world, rather than with representations of that world.  Perhaps the best known advocate of direct perception was J.J. Gibson.

For myself, I favor direct perception, though my own view is a little different from that of Gibson.  The problem with representationalism is that it posits a layer of representation that insulates us from the world.  Throughout our entire lifetimes, we would access only finitely many data points.  Granted, that would be a very large finite quantity.  But it still would not be enough for us to infer what we seem to know about the world.  This is sometimes described as the “poverty of the stimulus” problem.  It would seem to require that most of our knowledge of the world comes to us through our genes.  But it is not clear how that would help, because the processes involved in biological evolution would also have been restricted to finitely many data points.  Moreover, it is doubtful that the capacity of DNA is sufficient to carry all of the required innate knowledge.

I often hear that direct perception is impossible, that it depends on magic.  I have seen arguments that purport to show that.  However, when I visit the grocery, my purchases are scanned by a bar code scanner.  And the operation of the bar code scanner is based on the methods of direct perception.  So arguments that it could not work seem unconvincing.
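
To make that concrete, here is a minimal sketch of the kind of processing I have in mind. It is written in Python, with a made-up brightness sweep and a made-up threshold, so it is purely illustrative and is not how any particular scanner firmware actually works. The scanner watches a single sweep of brightness for boundary crossings and measures the widths of the resulting bars and spaces, without first storing a picture of the scene and then analyzing that picture.

```python
# Toy illustration: scanning by detecting boundary crossings in a 1-D
# sweep of brightness samples, rather than by first building a stored
# image and then analysing it.  The data and threshold are invented.

def run_lengths(samples, threshold=0.5):
    """Return (is_dark, width) pairs for each run of bars and spaces."""
    runs = []
    current = samples[0] < threshold   # True means a dark bar
    width = 0
    for s in samples:
        dark = s < threshold
        if dark == current:
            width += 1
        else:
            runs.append((current, width))   # a boundary crossing
            current, width = dark, 1
    runs.append((current, width))
    return runs

# One sweep across "space, bar, space, wide bar, space, bar":
sweep = [0.9, 0.1, 0.1, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.9, 0.1, 0.1]
print(run_lengths(sweep))
# [(False, 1), (True, 2), (False, 2), (True, 4), (False, 1), (True, 2)]
```

The widths of those runs are what carry the information; nothing like a stored internal image of the checkout counter is ever formed and then analyzed.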

11 Comments to “Representationalism and computationalism”

  1. In what sense is our perception direct, without completely distorting the distinction between the meanings of the words direct and indirect?

    When a photon strikes an object in the world, is absorbed, and eventually causes another to be emitted, then what hits the surface of the eye is about as concrete an example of a direct connection as there can be between the eye and the object. We only ever receive, in this sense, a very small direct part of an object. From then on, that photon is absorbed by the eye, and in every respect from then on the information going into the brain consists of activity in neurons – as you described from the commenter.

    With touch, there is, at the surface of the skin, about as direct an interaction between the body and an object. But from then on the activity of the nerves is entirely internal. All immediate and direct contact with the object has been lost.

    Unless we reject current physics and sign up to some unsupported philosophical metaphysics, there is nothing direct about perception.

    Direct perception was only ever a philosophical notion that was meant to express the opinion that our internal ‘picture’ of the world was so good a ‘representation’ that it reflected what the world was actually like; as opposed to some views that we were so far detached from the world that our perceptions could be grossly way off the mark – e.g. when I see my cat, what is actually there is a sixteen-legged, two-headed monster.

    Another expression of direct perception was that when we perceive we are in some sense projecting out into the world, as if our minds are out there experiencing objects directly. This leads to odd ideas like those of Rupert Sheldrake, who thought (and maybe still does) that there is some sense in which we project some sort of beam or rays to interact with the world.

    These direct perception ideas seem to be stimulated by what appears to us to be a very rich perceptual experience when we see the world. And it’s always related to how discriminating our sight is. If you close your eyes and try to perceive the world through sound alone you soon come to realise that we have only minimally rich access to the world, and that it is far from direct.

    I think that the physical distinction is clear: our perceptions must be indirect, in the very physical sense; we do not have direct contact with objects, and so our perception is not direct in that sense.

    The more general notional concept, that our perception is a pretty true representation of the world, is more reasonable. Geometrically there may be a good mapping between what we see and what’s out there (allowing for the fact that our macro view is some approximation to a microscopic view, down to whatever level of detail).

    But we know so much about vision that at very best we can say our visual perception is an interpolated, made-up representation of a very disjointed input. Inputs come and go in fleeting instants, so that even when we contemplate what we are looking at there is a very real sense in which we are manipulating a combination of visual memory (from fractions of a second to several seconds ago) and fabrication from past experience. We know that our field of vision is limited, but supported by saccades to give us the impression that it is richer than it is.

    “This is sometimes described as the “poverty of the stimulus” problem.” – This misses the point, by being so anthropocentric. If we were to suggest to Nagel’s bat that, relying on ultrasound, it must have a very poverty-stricken perception, I think it would look astonished at such a suggestion. Any animal that has evolved with several senses, one of which is more acute than the others, will have no idea what it feels like to have any sense perceptions that are any more capable than that. To such an animal its dominant sense will feel as naturally rich as sight does to us.

    Before microscopes, telescopes, even spectacles, it must have seemed that our natural sense of sight was fantastically rich, compared to touch and hearing. We feel sight is so rich because that’s the best of what we know. We even mistake our colour limitations for glorious richness. When we look at natural light images from astronomy they often seem quite dull compared to the artificial colours that they are sometimes replaced with. But in artificially compressing the range of colour in those images they are being diminished. Any creature that could see our colour range as we do, but could also see into other bands of the electromagnetic spectrum, would feel our artificial colour images were on a par with our black and white ones.

    Our most common natural perceptions from our environment depended mostly on where we lived and when. Someone who grew up in desert lands would be exposed mostly to those colours. In the UK the most common colours would be the green of the trees and fields, and the blue (or, here in Manchester, the gray) of the skies. This is why, when townspeople drive out into the country, we wow at the spectacle of a field of rapeseed in full glow; why landlocked people are taken aback when they come upon the sea; why spring is always a treat of colour after a dark winter. We have norms of visual stimulation and glory in the exceptions. When this is expressed in vivid man-made art, or in a glorious sunset, we marvel at the richness, not for its absolute richness, but for its comparative richness.

    By having an evolved natural range, it is also natural to perceive that range as rich, especially at the extremes. Our perception of our perceptual capabilities is misguided, due to our familiarity – it seems rich because that’s as rich as we get.

    That we don’t have direct experience is easily demonstrated. For those who have very poor sight or no sight, it is possible to stimulate the tongue with electrical impulses from an array that is fed from a camera, to provide a substitute form of vision. Sighted people who are blindfolded and try such devices soon learn to ‘see’ images – though obviously not as rich as the evolved mechanism they are used to. But they do ‘see’; and it is so very clearly not direct. The mechanism by which such devices work need not even be local. It would be possible to navigate a maze that was actually in another room, based solely on transmitted signals, without any clues from the current room.

    • I think you have the wrong meaning for “direct perception.”

      Nobody denies that perception is mediated by neurons. “Direct perception” is supposed to contrast with “indirect perception” and the latter claims that we first form an internal representation and then perceive that representation.

  2. Hi Neil,

    First, a criticism of ‘direct perception’ itself.

    From your Gibson link and on to wiki on Naive Realism.

    1. There exists a world of material objects.

    Indirect perception can agree with this: indirect perception does not need to be a pure idealism that suggests the objects we see don’t exist.

    2. Statements about these objects can be known to be true through sense-experience.

    A sneaky use of ‘can’ here.

    Statements about objects are often wrong, and always incomplete. The incompleteness may be simple and uncontroversial – we can’t see the rear or internal aspects of a solid opaque object. We may have a good geometric mapping from line of sight, enhanced by telescopic vision, but that really is incomplete. And the range of colour vision and scale of vision is very limited.

    Illusions and delusions are examples of direct perception being dead wrong. So are the mental pictures we build from saccades.

    3. These objects exist not only when they are being perceived but also when they are not perceived. The objects of perception are largely perception-independent.

    Again, not for optical illusions and delusions. And, we must not be fooled into thinking only of visual perception. Our auditory system can easily perceive sounds that have no direct source – as when independent compound noise sources can stimulate the perception of beats.

    4. These objects are also able to retain properties of the types we perceive them as having, even when they are not being perceived. Their properties are perception-independent.

    No. We perceive solidity quite differently from the way science tells us it is. We have never directly perceived the world of atoms. We don’t perceive the relativistic effects of moving objects at normal speeds. We perceive colours differently under different light sources, which leads us to believe the colour property of an object is a perceptual invention. A colour-blind person and a non-colour-blind person even have different perceptions at the very same time.

    5. By means of our senses, we perceive the world directly, and pretty much as it is. In the main, our claims to have knowledge of it are justified.

    OK. But the same applies to indirect perception to an extent, if we allow such vague notions as ‘pretty much as it is’.

    The use of the term ‘direct perception’ itself is pretty vague, because it is contrasted with other incomplete notions like ‘scientific realism’. But this is wiki on scientific realism: Scientific realism is, at the most general level, the view that the world described by science (perhaps ideal science) is the real world, as it is, independent of what we might take it to be. Now that seems pretty much like some of the descriptions of direct perception above. So where is the real distinction?

    So I still can’t see what ‘direct perception’ offers that is both (a) distinct from ‘indirect perception’ and (b) at the same time not clearly wrong.

    • From your Gibson link and on to wiki on Naive Realism.

      There’s a philosophical position known as “Direct Realism” that comes from objectivism (the philosophy of Ayn Rand). As far as I know, it is mostly nonsense. The Randians do tend to cite Gibson, but much of what they say does not come from Gibson.

      In his books, Gibson discusses perception. He does not get into the philosophy of realism. I have seen an interview where he admits to being a naive realist. But he does not get into details such as you list. So I think you are seeing a lot of Randian stuff mixed in with mentions of Gibson’s positions. I guess that’s one of the problems of using Wikipedia links.

      Gibson, himself, actually doesn’t say much about how perception works, in terms of details of what is happening in the brain. I consider that a defect in his work. He does point out that the eye is in motion (as saccades), which sure seems inconsistent with the idea of forming internal images before perceiving.

      Here’s the distinction I think is important:

      Representationalism: Start by forming an internal image (a pixel map), and then analyze that internal image to implement perception.

      Direct perception: Start by finding boundary crossings using motion of the eye (saccades), and analyze the results before forming any internal representations.
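
      To make the contrast concrete, here is a toy sketch in Python. The “sample” function stands in for whatever the eye reports at a given position; all of the names are hypothetical, and this is only an illustration of the two slogans, not a claim about how the brain or any real system works.

```python
# A cartoon of the two slogans above, not a model of the brain.
# sample(x) stands in for whatever the eye reports at position x.

def perceive_via_representation(sample, width):
    # (R): first build a complete internal "pixel map" ...
    pixel_map = [sample(x) for x in range(width)]
    # ... and only then analyse the stored map for boundaries.
    return [x for x in range(1, width)
            if abs(pixel_map[x] - pixel_map[x - 1]) > 0.5]

def perceive_directly(sample, width):
    # (DP): analyse while scanning; report each boundary crossing
    # as it is found, keeping no stored image at all.
    boundaries, previous = [], sample(0)
    for x in range(1, width):          # the "saccade"
        current = sample(x)
        if abs(current - previous) > 0.5:
            boundaries.append(x)
        previous = current
    return boundaries

scene = lambda x: 1.0 if 3 <= x < 7 else 0.0   # a bright patch on a dark field
print(perceive_via_representation(scene, 10))  # [3, 7]
print(perceive_directly(scene, 10))            # [3, 7]
```

      In this toy case both report the same boundaries; the difference I care about is whether a complete stored map ever exists as an intermediate step.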

  3. The latter [indirect perception] claims that we **first** form an internal representation and then perceive that representation?

    It does not. And that contrasts with what you wrote in your post: “The view of representationalists is that we don’t experience the world at all. Rather, received stimuli form a representation of the world, and we then experience that representation.”

    The most basic notion of indirect perception that comes close to current science and isn’t clouded by pre-scientific philosophical debates and definitions is that the ‘mental pictures’ we have in our heads are fabrications that have some correspondence to the outside world, but which may vary in accuracy.

    And your bar code example is a good representation of this. But the nature of the problem is exemplified even more so by other attempts at producing computer perception systems: visual systems, speech recognition. Trying to get the semantic information (our perceptions) out of images and sounds has very little to do with the immediate **direct** data stream but comes from masses of background data accumulated over time. We can’t do pattern recognition (e.g. seeing the face of Jesus on toast) without having that accumulated capability.

    If direct perception had anything going for it at all then visual and speech recognition systems would have been up and running long ago. And, because artificial systems can actually receive more immediate detail from an image (i.e. no saccades, no blind spot, no small field of focus), the direct perception model should have produced systems even better than us.

    The brain does all the hard work in constructing perceptions from very simple data accumulated over time. So, your other description of representationalism is also wrong: “This includes the view that perception is passive, and that our cognitive processes apply only to what perception has given us.” How can it be passive if the brain is doing the constructing? If anything it’s the ‘direct perception’ notion that’s passive, because it seems to claim we don’t do anything, we just perceive directly, as-is.

    “The problem with representationalism, is that it posits a layer of representation that insulates us from the world. Throughout our entire lifetimes, we would access only finitely many datapoints. Granted, that would be a very large finite quantity. But it still would not be enough for us to infer what we seem to know about the world.”

    Yes it would, and obviously so. When I’m working here at my desk and I hear a particular noise and I perceive that a refuse collection is in progress, I have a mental image that corresponds to the truck and the guys moving along the street emptying bins. I infer from a limited amount of data what is going on, based on similar but different data received previously, including visual images that I am not receiving at the time. Most of my world is a reconstruction. And I get that wrong sometimes too. I have actually ‘perceived’ a refuse collection, which occurs every week, but gone to the window only to see it’s the drain cleaning team, which is a rare occurrence. The similarity of the slow moving and pausing truck in both cases, and a few noises, such as the metal drain being lifted, were enough for me to perceive a refuse collection.

    We have a black cat that I often find sleeping in many different places in the house. But I can’t tell you how often I’ve started talking to something that isn’t the cat: a black hat, a black t-shirt, and even a small black ball of wool my mother had left after a visit. I have often, with incomplete immediate data but with plenty of fabricated data, perceived the cat where it isn’t.

    Again, this whole topic is clouded by traditional philosophical concentration on immediate visual perception and the mistaken belief that what we are perceiving is, in total, what is actually out there.

    • If direct perception had anything going for it at all then visual and speech recognition systems would have been up and running long ago. And, because artificial systems can actually receive more immediate detail from an image (i.e. no saccades, no blind spot, no small field of focus), the direct perception model should have produced systems even better than us.

      That makes no sense. You seem to be addressing the Randian Objectivist position, that we don’t even have to explain perception at all. That’s not what I am talking about.

  4. Speech and vision recognition (SR/VR) is directly relevant to perception. Take your distinction:

    Representationalism (R): Start by forming an internal image (a pixel map), and then analyze that internal image to implement perception.

    Direct perception (DP): Start by finding boundary crossings using motion of the eye (saccades), and analyze the results before forming any internal representations.

    If analysis of direct input were the primary means of forming representations then, as I said, SR/VR systems would have been able to work on analysis alone. But despite tons of analysis they only produce different data, no perceptual meaning, no semantics. At the very least your DP description presupposes something capable of doing the analysis **in context**.

    I agree there is a context for the analysis, but that context is whatever the brain does as it learns to acquire patterns that it recognises: mostly through learning from infancy, but also with some genetic component that creates the type of brain that has the capacity to learn and grow.

    Without the genetics the brain wouldn’t be able to grow into one with the perceptual capabilities we come to have. This is clear when infant brains go drastically wrong.

    Without the stimulating environment for learning, the learning of patterns that we recognise wouldn’t be possible. Sight that is obscured at crucial stages of development fails to develop – not, note, that the retina and the neurons don’t work, but that the perceptual pre-conditions, the prior learning used to fabricate perceptions, have not developed.

    This is why I don’t see your distinction. Both statements seem like part descriptions of the same thing.

    In a sense the brain does, once it has started to develop, as (R) implies, use internal information, pattern recognition. That recognition occurs because of patterns, right down to individual neurons that are involved in learning, which has been demonstrated. Neurons can be stimulated; they change state (form memories) in complex ways, by growing connections, switching genes on and off inside the neuron; and subsequent stimulation reinforces activity because of the change of state. Repeated over many neurons, neuron groups and networks, memories are formed. The nature of these patterns does not have a one-to-one mapping with the outside world, so a neuron that is easily excited by the sight of a family member might also be involved in recognising a favourite car; and on different occasions a particular neuron may or may not be triggered by a familiar stimulus. Add to that the great variety of views, lighting, clothes and environments in which we see a family member, and the association is vague and fluid. In this sense any perception is not ‘direct’, but is a vague representation.

    So, regarding R, this does not constitute an ‘internal image’, using the common notion of that phrase, much less a pixel map, so such a representation of representationalism is misguided. The objection that there is no one to see this representation is irrelevant because it is not a representation to be seen; it is just a pattern of neuron stimulation. It’s the apparently familiar subjective nature of the experience that fools us into thinking of it as an image.

    But the building up of patterns does not always require immediate input for analysis (dreams and visions, for example), so your DP describes only the early stages of sight input, not perception as such. And it’s not clear what analysis goes on, or what does the analysis, and there’s no evidence to support the claim that the process is:

    Eyes -> Direct Neuron Excitation + Immediate Analysis -> Image.

    All indications are that visual perception relies on earlier learning – patterns that are already there:

    Eyes -> Direct Neuron Excitation + Immediate Analysis + Pattern Recognition -> Image.

    What neither of these simplistic ‘diagrams’ expresses is the time-related feedback, which relies on previous image experiences for recognition. Even for very short term experiences our perception is relying on the predictive nature of past experience, particularly where any co-ordination with motor actions is required. Take reading – we can absorb the meaning of whole chunks of words without focusing on individual ones, because we have predictive, fabricated expectations of what words follow. In other words, the fabrication of representations and further matching of them to incoming data.

    So, to summarise, I don’t see that you have described any distinction of significance. Further, all evidence shows that our prior perceptions contribute so much to our current perceptions that it is quite reasonable to say we fabricate current perceptions from past ones, but with new data (which is itself also used for later perceptions). And given that the brain can fabricate illusory and delusional perceptions, with or without direct stimulation, and dreams, I really don’t see what the description of DP is offering. And if, as you suggest, DP is mixed up with views on philosophical realism, then that’s the nature of the presentation of DP. It just seems like a useless concept that is confused and does not account for much of the science of the brain.

    • If analysis of direct input was the primary means of forming representations then, as I said, the SR/VR would have been able to work on analysis alone.

      That already highlights our disagreement.

      We are not analyzing input. We are analyzing the world. And, as part of that analysis, we can seek additional information as needed. I am specifically disagreeing with the idea of perception as passive. We actively seek whatever information we need.

  5. Neil,

    I agree with Ron on many points. As I mentioned in my infamous comment, incoming sensory data has to travel through many mediums in the body before it gets to the brain and is further processed. The fact that the sensory impulses have to undergo data compression due to space limitations in the body means that the data has already changed (much is lost) from the moment your body senses something to the moment your brain perceives that sensation. Once the sensation (nerve impulse) gets to the brain, it is further processed into our self experience. You are not feeling/perceiving electric impulses when you sense things, even though your brain is receiving electric impulses. Rather, your brain translates these electric impulses into what you perceive (i.e. taste, smell, touch, sound, sight, etc.). This is the representation that you are given of the world (the nerve-to-brain-to-mind translation). On top of the translation that occurred, your brain also modifies certain perceptions based on the presence or absence of others. Your brain has already learned to identify many patterns (feature recognition, etc.) from the sensory stimuli and translate them into perceptions that “matter most”. In other words, your brain also seeks out or ends up with certain perceptions even if they are not accurately representative of what’s really “out there” (they are still representations).

    Aocdcrnig to rseecrah at Cmabrigde Uinervtisy, it dseno’t mttaer in waht oderr the lterets in a wrod are, the olny irpoamtnt tihng is taht the frsit and lsat ltteer be in the rhgit pclae.

    In this case, you may have had a mixed perception of seeing jumbled words, but ALSO seeing the correct words. This is because your perception isn’t just dependent on what’s really “out there”. It is because your brain forms a representation of the sensory stimuli and in this case, it changed your perception (in some way) in order to make sense (a form of gestaltism). All optical illusions are another example of what you perceive as being different (thus a representation) from the actual sensory stimuli hitting your sense organs (and thus your perception is different from the stimuli transmitted into the brain). Mind you, we do have some of these pattern recognition processes occurring in the feature detection cells located in the retina (among other places), but many of these representations are created by the brain. The fact that the brain produces a unified feeling “self” from the complex combination of interactions between various sensory stimuli is a demonstration of the representation.

    As Ron pointed out, “Again, this whole topic is clouded by traditional philosophical concentration on immediate visual perception and the mistaken belief that what we are perceiving is, in total, what is actually out there.”

    I couldn’t agree more. What is actually “out there”, a concept I’ve repeated over and over (and with good reason), is not what we perceive, for many reasons even before we consider the brain’s influence — because we are limited by our visible light spectrum (roughly 400-700 nm photons), our audible sonic spectrum (20 Hz-20 kHz), a finite number of chemical receptors in the nose and tongue, and a minimum force required for detecting touch (let alone the minimum threshold intensity for all of these senses). Once we’ve compressed the data from the organs through a finite number of nerves connected to the brain, the brain receives these electric impulses (action potentials, etc., which we could consider our initial “pixelated image”) and converts them to what we experience (our perception). So we have the incoming stimuli, which are a representation of the world; then the conversion of those stimuli into a complex nervous representation of what those stimuli are, along with neurons that discriminate between certain stimuli based on feature detection cells; then a brain representation of what the individual nervous representations mean; and finally a brain representation of the unification of these sub-representations.

    • Aocdcrnig to rseecrah at Cmabrigde Uinervtisy, it dseno’t mttaer in waht oderr the lterets in a wrod are, the olny irpoamtnt tihng is taht the frsit and lsat ltteer be in the rhgit pclae.

      Well, it does matter. It is far harder to read in that form.

      “… and the mistaken belief that what we are perceiving is, in total, what is actually out there.”

      You are assuming that “what is actually out there” is meaningful.

      Perception is making sense of the world, as best we can. There isn’t any external (or metaphysical) standard that it must conform with.

  6. “Well, it does matter. It is far harder to read in that form.”

    It’s only slightly harder for me (almost the same as regular reading), and so you must be the exception to the norm (according to Cambridge University). I don’t know if this is a mental anomaly or why you’d have trouble — and I can only speculate about your mental acuity.

    “You are assuming that “what is actually out there” is meaningful.
    Perception is making sense of the world, as best we can. There isn’t any external (or metaphysical) standard that it must conform with.”

    I need not speculate that what is “out there” is meaningful. I need only speculate that it is NOT THE SAME as what we perceive. That’s the whole point here. It doesn’t matter what meaning there is “out there”, only that our perception is a representation of what’s “out there”.
