While reading Searle’s perception book, I came across this passage:
Think of the problem from a designer point of view. Suppose you are God or evolution and you are designing organisms capable of coping with their environment in spectacularly successful ways. First, you create an environment that has objects with shapes, sizes, movements, etc. Furthermore, you create an environment with differential light reflectances. Then you create organisms with spectacularly rich visual capacities. Within certain limits, the whole world is open to their visual awareness. But now you need to create a specific set of perceptual organizations where specific visual experiences are internally tied to specific features of the world, such that being those features involves the capacity to produce those sorts of experiences. Reality is not dependent on experience, but conversely. The concept of the reality in question already involves the causal capacity to produce certain sorts of experiences. So the reason that these experiences present red objects is that the very fact of being a red object involves a capacity to produce this sort of experience. Being a straight line involves the capacity to produce this other sort of experience. The upshot is that organisms cannot have these experiences without it seeming to them that they are seeing a red object or a straight line, and that “seeming to them” marks the intrinsic intentionality of the perceptual experience. (page 129)
I’m not surprised by that kind of design thinking. I have long thought that such design thinking is the background to much of philosophy. It is, however, a little strange to be calling on evolution as a designer and as having a designer point of view. Even worse is the idea of evolution wanting to “create organisms with spectacularly rich visual capacities.”
Such organisms might arise via evolution, but the idea that evolution has some kind of specific goal is surely a misunderstanding of evolution.
Well, okay, Searle is not a scientist. So perhaps I shouldn’t be too concerned about this kind of mistake. But there is something else about this that raises concerns.
The Chinese Room
Searle is well known for his “Chinese Room” thought experiment, in which he claims to refute the possibility of AI (artificial intelligence). AI researchers are typically motivated by a kind of design thinking similar to what Searle suggests.
Searle’s argument against AI is that AI systems will lack intentionality. But here, in his perception book, Searle is using just the same kind of design thinking to explain intentionality.
In his CR argument, Searle is particularly objecting to computation. But many AI researchers see computation as just an implementation detail in carrying out the kind of designing that Searle describes. The answer to Searle, on this, is the “Systems Reply” — that intentionality is in the system as a whole, rather than in the computation. Searle’s own design thinking would seem to be an example of the “Systems Reply,” particularly if the design is implemented with the use of computation.
Searle dismissed the Systems Reply. Yet here, in his perception book, he is suggesting that something similar explains intrinsic intentionality.
Critics of AI often argue that an AI-based robotic system could exhibit only derived intentionality. That is, it could seem to have intentionality, but that would really only be a projection of the intentionality of the designer and not anything intrinsic to the robot. It is my sense that Searle’s design thinking has the same problem. Searle says that it accounts for intrinsic intentionality, but I don’t see how you get anything other than a projection of the intentionality of the designer.
WordStar on the wall
In his book “The Rediscovery of the Mind”, Searle argues that the WordStar program is running in the molecular state transitions on his office wall. His point was to question whether it is even meaningful to talk of computation. He apparently thinks that the idea of a state transition (as used in the theory of finite state automata) is vapid. Yet, in his perception book, he often mentions “states of affairs” as part of an explanation of perception or of intentionality.
For myself, I consider “states of affairs” to be far more vapid than the state transition accounts of computation.
Summary
Overall, I do not find Searle’s explanation of intentionality at all satisfying.