In the previous post, I criticized Searle’s design thinking. Today I want to suggest an alternative.
The trouble with design thinking
Design thinking seems to be common in philosophy and in AI. The problem is that we end up attempting to design ourselves. We look at ourselves as the intended finished product. And we want what we design to have the same concepts, the same beliefs, the same ideas of truth.
There is a lot of talk about autonomous agents. But can an agent be truly autonomous if we require it to have our own concepts and our own beliefs? This, I think, is why we often have the intuition that an AI system won’t really be making decisions — it will, instead, be a mechanization of the designer’s intended decision making.
An alternative
The alternative is to try to understand the problem that an organism or a perceptual system is attempting to solve. Then, once we understand the problem, we can look into ways of solving it.
As an analogy, consider the investigation of flight. Some of the early attempts were aimed at designing a bird, flapping wings and all. By now, we understand that the real problem was one of aerodynamics: providing sufficient lift (vertical force) to keep the flying system aloft.
As I see it, a newborn child (or other organism) finds itself in a strange world. The problem facing that child is to find ways of coming to make sense of that strange world, and of making sufficient sense that the child can find ways of meeting its biological needs (food, for example).
It is my assessment that a newborn child cannot start with innate knowledge of what exists. If human cognition depended on such innate knowledge, then European children would have had innate knowledge of kangaroos and koalas long before the European discovery of Australia. But there is no evidence of that. So it seems that making sense of the world includes working out what exists.
As a child begins to make sense of the world, that child is developing knowledge. If the child cannot innately know what exists, then it must be able to develop knowledge without ontology being a required starting point. It must be that ontology emerges from knowledge, rather than ontology being prerequisite to knowledge.
There is a similar problem for truth. If truth is correspondence to reality, and if the child has not yet learned how to make sense of reality, then the child could not have that kind of correspondence truth. So truth, too, must be something that emerges from knowledge rather than a prerequisite to acquiring knowledge. And, of course, if the child starts without ontology or truth, then it cannot have justified true beliefs about things in the world. So justified true beliefs must also be something that emerges from that growing knowledge.
Summary
To understand human cognition, we need to examine the problem that it solves. And it looks as if this will require a completely different approach to philosophy.
My next post will be about how to make sense of a strange world.