Coel has a recent post
about Searle’s “Chinese Room” argument. My response is a bit long for a comment, so I’ll respond here.
Understanding
Here’s how Coel frames the issue:
You’ve just bought the latest in personal-assistant robots. You say to it: “Please put the dirty dishes in the dishwasher, then hoover the lounge, and then take the dog for a walk”. The robot is equipped with a microphone, speech-recognition software, and extensive programming on how to do tasks. It responds to your speech by doing exactly as requested, and ends up taking hold of the dog’s leash and setting off out of the house. All of this is well within current technological capability.
Did the robot understand the instructions?
My answer would be “obviously not.” So, according to Coel, that makes me a Searlite. If I had agreed that the robot understood, then he would say that I’m a Dennettite.
One advantage of being a Searlite is that I can pronounce that without accidentally biting my tongue. But, actually, I am not a Searlite, nor am I a Dennettite. I've never been a fan of Searle's CR argument. And I do agree with the "Systems Reply" to Searle.
The rest of us can’t see the problem. We — let’s call ourselves Dennettites — ask what is missing from the above robot such that it falls short of “understanding”. We point out that our own brains are doing the same sort of information processing in a material network, just to a vastly greater degree. We might suspect the Searlites of hankering after a “soul” or some other form of dualism.
Well, no. Our brains are not doing the same sort of information processing. Yes, they are, in some sense, doing information processing. But it is a very different kind of information processing.
A thought experiment
Let me illustrate with a thought experiment.
I have two packages to send to the hardware store at 37 Coel Boulevard, in a town some distance south of here. So I commission a robot to deliver one of them, and I use a human messenger to deliver the other. I give them both the same delivery instructions.
Checking, later in the day, I find that both packages were delivered to the proper location.
Later that night, there was an earthquake in that town. Remarkably, most of the buildings were not seriously damaged. But they had moved about 1/4 of a mile from their previous locations.
The next morning, I ask the robot and the human messenger to deliver two more packages to the same location. Checking later, I find that the human messenger has delivered the package to the hardware store. But the robot has left its package in a parking lot that now sits where the hardware store stood before the earthquake.
The difference, I suggest, is that the human messenger understood the instructions. But the robot was just following mechanical rules without any understanding.
Intentional vs. mechanical
We have an intentional way of talking about things. And we have a mechanical way of talking about things. In our intentional way of talking, we might say "take the toll road, but follow the signs toward New York City." In a mechanical language, we might say "go in the direction 90.2 degrees, but veer 2.3 degrees to the south after traveling 153 feet."
Humans normally use intentional language. But we program our robots to use a mechanical language. When we have a robot respond to natural language commands, it does this by translating the intentional natural language expression into a mechanical language expression. And then it follows the mechanical language instructions.
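To make the contrast concrete, here is a minimal sketch of that translate-then-execute pattern. Everything in it is hypothetical and made up for illustration (the function names, the coordinates, the address lookup); it is not any real robot's API. The point is only that the robot acts on a frozen mechanical translation, not on the intentional instruction itself.

```python
# Minimal illustrative sketch, not a real robot API. All names here
# (translate_to_mechanical, drive_to, drop_package) are hypothetical.

def translate_to_mechanical(instruction: str) -> list[tuple[float, float]]:
    """Map an intentional instruction onto mechanical steps: a fixed
    list of waypoint coordinates, produced once, at translation time."""
    # A real system would involve speech recognition, parsing, and a
    # geocoding lookup; here the result is simply hard-coded.
    if "hardware store at 37 Coel Boulevard" in instruction:
        return [(41.2500, -73.9800), (41.2431, -73.9765)]
    raise ValueError("instruction not recognized")

def drive_to(lat: float, lon: float) -> None:
    print(f"driving to ({lat}, {lon})")

def drop_package() -> None:
    print("dropping package")

def robot_deliver(instruction: str) -> None:
    # The robot never acts on the intentional instruction itself,
    # only on its frozen mechanical translation.
    for lat, lon in translate_to_mechanical(instruction):
        drive_to(lat, lon)
    drop_package()

robot_deliver("Deliver this package to the hardware store at 37 Coel Boulevard.")
```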
If there were an exact translation between intentional language and mechanical language, this would work perfectly. But there is no exact translation. The relation between intentional language and mechanical language is dynamic. We see this in the thought experiment above, where the earthquake changed the relation between the intentional language instructions and the mechanical language instructions. The human messenger followed the intentional language, and correctly left the package at the hardware store. The robot followed the pre-earthquake mechanical language instructions, and left the package at the location where the hardware store had been before the earthquake.
Of course, if the robot had a post-earthquake way of translating the intentional instructions, then it would have reached the right destination. But it depends on human translators for that, and translation takes time. So a post-earthquake translation would not have been available when needed. The human messenger, by contrast, works directly with the intentional instructions and does not need other humans to translate them into a mechanical form.
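Put in terms of the earlier sketch, the earthquake is the translation going stale. Continuing with the same hypothetical stand-ins (a toy `world` table and a `geocode` function, neither of them real APIs): the robot translates once and executes the cached result, while the messenger, in effect, re-resolves the intentional description against the world as it is at delivery time.

```python
# Continuing the illustrative sketch; `world` and `geocode` are stand-ins.
# The earthquake changes the relation between the intentional description
# and its mechanical translation.

world = {"hardware store at 37 Coel Boulevard": (41.2431, -73.9765)}

def geocode(place: str) -> tuple[float, float]:
    """Map an intentional description onto a mechanical location,
    as the world stands right now."""
    return world[place]

# Day 1: the robot's instruction is translated and the result is frozen.
robot_target = geocode("hardware store at 37 Coel Boulevard")

# That night: the earthquake moves the buildings about a quarter mile.
world["hardware store at 37 Coel Boulevard"] = (41.2467, -73.9765)

# Day 2: the robot follows its stale translation; the messenger resolves
# the intentional description afresh at delivery time.
messenger_target = geocode("hardware store at 37 Coel Boulevard")

print(robot_target)      # pre-earthquake coordinates: now a parking lot
print(messenger_target)  # where the hardware store actually is today
```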
In short, the robot did not understand the intentional language instructions. It “understood” the mechanical language instructions, but those were wrong because of the earthquake. The human messenger did understand the intentional instructions, and was able to follow them in spite of the changes caused by the earthquake.