An alternative to design thinking

by Neil Rickert

In the previous post, I criticized Searle’s design thinking.  Today I want to suggest an alternative.

The trouble with design thinking

Design thinking seems to be common in philosophy and in AI.  The problem is that we end up attempting to design ourselves.  We look at ourselves as the intended finished product.  And we want what we design to have the same concepts, the same beliefs, the same ideas of truth.

There is a lot of talk about autonomous agents.  But can an agent be truly autonomous if we require it to have our own concepts and our own beliefs?  This, I think, is why we often have the intuition that an AI system won’t really be making decisions — it will, instead, be a mechanization of the designer’s intended decision making.

An alternative

The alternative is to try to understand the problem that an organism or a perceptual system is attempting to solve.  And then, once we understand the problem, we can look into ways of solving it.

As an analogy, consider the investigation of flight.  Some of the early attempts were aimed at designing a bird, flapping wings and all.  By now, we understand that the real problem was one of aerodynamics: providing sufficient lift (vertical force) to keep the flying system aloft.

As I see it, a newly born child (or other organism) finds itself in a strange world.  The problem facing that child is to find ways of making sense of that strange world, and of making sufficient sense that it can find ways of meeting its biological needs (food, nutrition, etc.).

It is my assessment that a newborn child cannot start with innate knowledge of what exists.  If human cognition depended on such innate knowledge, then European children should have had innate knowledge of kangaroos and koalas long before the discovery of Australia.  But there is no evidence of that.  So it seems that making sense of the world includes working out what exists as part of that making sense.

As a child begins to make sense of the world, that child is developing knowledge.  If the child cannot innately know what exists, then it must be able to develop knowledge without ontology being a required starting point.  It must be that ontology emerges from knowledge, rather than ontology being prerequisite to knowledge.

There is a similar problem for truth.  If truth is correspondence to reality, and if the child has not yet learned how to make sense of reality, then the child could not have that kind of correspondence truth.  So truth, too, must be something that emerges from knowledge rather than a prerequisite to acquiring knowledge.  And, of course, if the child starts without ontology or truth, then it cannot have justified true beliefs about things in the world.  So justified true beliefs must also be something that emerges from that growing knowledge.

Summary

To understand human cognition, we need to examine the problem that it solves.  And it looks as if this will require a completely different approach to philosophy.

My next post will be about how to make sense of a strange world.

16 Responses to “An alternative to design thinking”

  1. “And we want what we design to have the same concepts, the same beliefs, the same ideas of truth.”

    Well, if we were to design the systems analogously to our cognitive structures, that is, with sets of innate concepts or fundamental patterns (if we have any) or complex patterns in nature, then I would expect the systems to arrive at the same beliefs, and the same ideas of truth. This would include designing the systems with the same flaws that we have, analogs to our cognitive biases, etc., if we wanted a true match.

    “There is a lot of talk about autonomous agents. But can an agent be truly autonomous if we require it to have our own concepts and our own beliefs? This, I think, is why we often have the intuition that an AI system won’t really be making decisions — it will, instead, be a mechanization of the designer’s intended decision making.”

    I’m not sure what you mean by “our own beliefs”. Do you mean the programmer’s beliefs, or fairly generalized or universal human beliefs? If the latter, then merely making a system that arrives at the same beliefs as all humans would arrive at (by its learning and reprogramming) wouldn’t negate the possibility that the system is just as autonomous as we are, and it would even be able to change its known concepts and beliefs if designed with the same kind of flexibility that our own brains have. I’d also be interested in knowing what you mean by autonomy, or how you define it.

    “It is my assessment, that a newborn child cannot start with innate knowledge of what exists. If human cognition depended on such innate knowledge, then European children should have had innate knowledge of kangaroos and koalas long before the discovery of Australia. But there is no evidence of that. So it seems that making sense of the world includes working out what exists as part of that making sense.”

    Your comment about kangaroos and koalas pertains to a very specific type of knowledge that we must learn, and its absence doesn’t negate the possibility that children have other forms of innate knowledge. What you mention here just shows that children don’t have innate knowledge of much more specific and more complex types of patterns in their brains. It doesn’t negate the possibility of their having simpler fundamental patterns about some properties of the physical world, such as matter, space, and time, which they could then use as a foundation for the higher-level patterns that they experience and remember later on (such as koalas and kangaroos). In any case, I’d agree and say that, at the very least, making sense of the world includes working out MOST (though not necessarily all) of what exists as part of that making sense.

    “If the child cannot innately know what exists, then it must be able to develop knowledge without ontology being a required starting point. It must be that ontology emerges from knowledge, rather than ontology being prerequisite to knowledge.”

    It seems that, at the very least, the brain has to have the ability to process sensations and perceptions, and thus it has to have physical hardware that allows for that processing. One could also argue that any kind of instinctual behavior (perhaps deemed implicit or unconscious knowledge), possibly including the suckling reflex of a newborn baby, is based on an innate knowledge, even if implicit or unconscious, of some aspects of the world. One question that becomes important here is whether the innate configuration of the brain is just a random hodgepodge of neural connections (like a massive 3D net of various interconnections and hierarchical connections) that is then modified to produce a structure that facilitates filtering and eventually produces an understanding of the world and its properties (which would imply some kind of blank slate). Or is the innate configuration of the brain at least partially composed of neural connections that result from genetically transmitted information, naturally selected to represent or reflect at least some properties of the world (such as very fundamental and simplistic patterns pertaining to space, time, causality, etc.)? There is a lot of evidence to suggest that our brains don’t begin as “blank slates” (see Pinker et al.), although how “blank slate” is being defined within that research and those claims definitely needs more clarification.

    Like

    • So almost complete disagreement about what cognitive systems are and do.

      Like

      • Actually I think we agree that learning is at least a major if not exclusive component in the acquisition of knowledge and ontology. You take that one step further and posit that learning is the ONLY way we acquire knowledge, whereas I merely stipulate a level of uncertainty about learning being the exclusive route to knowledge, and include the possibility that it is a combination of mostly learning plus some innate neural configuration (or type of configuration) serving as extremely fundamental building blocks, which help to produce the more complex elements involving many hierarchical relationships between those elements. For example, the brain of a conscious fetus, before it has had any experience at all, starts with a hierarchical net of neuronal connections from which it can prune and shape in order to make sense of the world and create an ontology. One could argue that the mere fact that this configuration is a huge set of connections with various hierarchies of neural connection, reflecting the common ontology of fracturing the world into a set of hierarchical concepts, properties, and causal relationships, suggests that it is very plausible that this serves as an innate contributor to some form of innate knowledge or ontology.

        You may think this is a mere coincidence that both have the property of complex hierarchical connections or relationships, but I find that to be evidence in favor of an innate type of ontological foundation mimicked by our physical neurological configuration. Perhaps I’ll expand on this topic in my next blog post. It is interesting to say the least. Good topic!

        Like

      • Actually I think we agree that learning is at least a major if not exclusive component in the acquisition of knowledge and ontology.

        Sure. But we don’t agree on what learning is.

        Like

  2. There must be some innate knowledge. Children have the innate knowledge of how to see, how to hear, how to walk, etc.

    Also, Noam Chomsky, an expert on linguistics, has put forward a theory that language is innate. If two babies are fed, and are never taught language, they will create their own.

    I think there is quite a bit of evidence now that children are not born with a “blank slate”.

    As for truth or reality, I’m of the school that holds those are purely based on experience, and I would agree that they are learnt, and not necessarily objective. We are all Homo sapiens, and reality for us is from the perspective of our make-up. It would certainly be interesting to know how a bat views reality.

    Like

    • There must be some innate knowledge.

      I agree. I only suggested that there were no innate beliefs and no innate ontology.

      Another commenter has argued as if I denied any innate knowledge. But don’t jump to conclusions based on what other commenters write.

      Sure, there is some innate know-how. However, I don’t agree that a child has innate knowledge on how to walk. They have to learn that. They perhaps have an innate drive to learn to walk.

      Sorry, I am not a Chomsky fan. I consider his linguistics to be absurd — and obviously absurd at that.

      Like

      • “Another commenter has argued as if I denied any innate knowledge. But don’t jump to conclusions based on what other commenters write.”

        I didn’t argue as if you denied any innate knowledge, as we were talking about an innate ontology specifically. So if other commenters jump to these conclusions, it may be because they inferred it from your post.

        “It must be that ontology emerges from knowledge, rather than ontology being prerequisite to knowledge.”

        Although if you do think that some forms of innate knowledge exist, then you should also specify that you think “ontology emerges from knowledge, but not from the innate types of knowledge that we have…” Statements like the one quoted suggest that you believe ontology emerges from knowledge generally, not from some specific type of knowledge (i.e. non-innate knowledge). If you believe that any kinds of innate knowledge exist, then your position would be clearer if you stipulated that ontology emerges from some types of knowledge but not others; otherwise, one could include the possibility that ontology emerges from innate knowledge, which would arguably make ontology innate as well.

        “However, I don’t agree that a child has innate knowledge on how to walk. They have to learn that. They perhaps have an innate drive to learn to walk.”

        I agree with you here. There doesn’t appear to be any evidence that children have innate knowledge of how to walk. Definitely a learned skill so far as the data suggests.

        Like
