Can AI be made more human?

by Neil Rickert

According to a recent news report in The Boston Globe, there is a research effort beginning at MIT, aimed at coming up with some of the more human elements that have, until now, been missing from AI projects (h/t Walter).

So here is my prediction.  This project will fail.  The project may come up with a lot that is interesting and perhaps valuable.  It may be deemed to have been worth the cost.  But I expect that it will fail to achieve the stated goal.  In a way, this is an easy prediction.  Thus far, AI research has a 100% perfect record of failure, when it comes to producing something that looks like human intelligence.

From the report:

At a new center based at the Massachusetts Institute of Technology, researchers will seek to craft intelligence that includes not just knowledge but also an infant’s ability to intuit basic concepts of psychology or physics.

There we already see problems.  In the usual use of the verb “to craft”, a craftsman crafts an artifact.  We look for intelligence in the craftsman, not in the artifact.  I seriously doubt that intelligence can be crafted.  That’s not to suggest that intelligence is in any way mystical.  It’s just that anything we can craft has a kind of rigidity to it, a rigidity that derives from the blueprint that the craftsman is following.  Intelligence, by contrast, seems to require a great deal of flexibility and adaptability such as we find only in evolved organisms.

A little later in the report, we read:

For Winston, what makes human intelligence most stand apart from machines — and from the rest of the animal world — is our ability to tell and comprehend stories.

And that is my biggest concern.  That quote expresses the view sometimes known as “human exceptionalism.”  Humans are seen as exceptional among the biological organisms, and it is those exceptional attributes that make up intelligence.

This human exceptionalism is, in my opinion, a mistaken assumption of AI.  For that matter, it is a mistaken assumption of philosophy of mind.  If we really want something approaching human level intelligence, then what is missing from our AI systems is that underlying animal intelligence.  The exceptional part — what distinguishes humans from other animals — can possibly be provided by computers.  But that underlying animal intelligence will still be missing.

If we want human-like AI, then we have to look into animal level intelligence.  That is what is missing.

That I hold this view is why I am a heretic.

11 Comments to “Can AI be made more human?”

  1. It was also once an easy prediction that human-aided flight would be a failure, because it had been a 100% failure up to that point. That is the nature of discovery in a field where the principles are not understood. There’s lots of fishing around trying to understand what it is you’re trying to understand. The failures you see now are no more than the failed attempts to succeed at aided manned flight. Naysayers are always 100% right, until they are not. Keep up the good work.

    • That is the nature of discovery in a field where the principles are not understood.

      In this case, I understand the principles quite well. That’s why it is so clear that the research program is heading in a wrong direction.

  2. Your objection to AI on the basis that it is a crafted artefact doesn’t hold up.

    There are plenty of examples where engineers create artefacts that mimic the walking and flying of insects. The brain is evolved and not designed as an artefact, sure, but that applies equally to these other examples.

    You see no mystery in the brain? Therefore the only barrier must be technological. It may be reasonable to counter the overly optimistic projections of some AI people with a reality check, or even an opposite, extreme pessimism; but why do you suppose AI will not be achieved in, say, 50 years, 100 years, 1000 years?

    The Dick Tracy wrist TV was once laughable, because the technology wasn’t up to it. New technologies cause paradigm shifts not only in pure science but also in engineering.

    Intelligence seems to require a great deal of flexibility and adaptability such as we find only in evolved organisms, so far. Is it really beyond intelligence to design in adaptability? I don’t see the rigidity of an artefact as a problem, because flexibility can be designed in.

    Evolution takes a few billion years, bumbling along with no specific purposes. It seems very pessimistic to assume that we can’t reach a similar result by being more directed towards a goal.

    I agree that humans are seen as exceptional among the biological organisms. So we have a sample set of N = 1. Not really much to go on either way. But I still see no reasoned explanation for why it cannot be done, either in principle, or in practice. Exceptionalism is a rather mysterian objection.

    “But that underlying animal intelligence will still be missing.”

    And what is that exactly? It isn’t just the animal brain that we observe in action, it’s the whole animal. It is we humans that infer intelligence and attribute it to a mind. We fail first to attribute it to the brain, and then we fail further to attribute it to the rest of the animal: http://www.ted.com/talks/robert_full_on_engineering_and_evolution.html

    The human brain seems to be a mash-up of an animal brain+body, which does all the basic stuff that all animals do, plus what we really mean by ‘intelligence’ when we talk of AI. But in a human brain all that unconscious animal stuff is chipping in all the time, and our conscious aspects don’t get to see so much of it, except as emotions, feelings, un-asked-for biological interjections that alter what we think consciously.

    The whole human is all of that, and so I agree there are many aspects of human intelligence that go beyond the mere conscious calculating human intelligence. But then the question has to be, which bits of this are we working on? Why do we need to have a fully human artefact from the outset? What is wrong with the reductive science that breaks it down and tries to address all the parts?

    Take ASIMO, but forget the conscious human intelligence. Look at the motor action, and look at the TED video of insect mimics. What is a human-like AI other than the combination of the traditional thinking-and-computing AI, which is still the main struggle, and the lower-level automatic systems of insects and other animals that are already being implemented? Do you not see the potential? Are you that convinced that this is not possible?

    You may hold that view, but I don’t see any reasoning for it. It seems more a declaration, closer to a mysterian assertion.

    • There are plenty of examples where engineers create artefacts that mimic the walking and flying of insects.

      But we are not talking about physical behavior. We are talking about cognitive behavior. That is very different.

      It may be reasonable to counter the overly optimistic projections of some AI people with a reality check, or even an opposite, extreme pessimism; but why do you suppose AI will not be achieved in, say, 50 years, 100 years, 1000 years?

      It’s a good question, whether people would try to build an artificial person if they really understood the principles. I don’t know the answer — my guess is that they would limit themselves to simpler projects that could demonstrate that they have the principles right.

      In this case, however, we have researchers heading in a wrong direction.

      The human brain seems to be a mash-up of an animal brain+body, which does all the basic stuff that all animals do, plus what we really mean by ‘intelligence’ when we talk of AI.

      It’s in that “what we really mean by ‘intelligence’” part that people are getting it wrong.

      But in a human brain all that unconscious animal stuff is chipping in all the time, and our conscious aspects don’t get to see so much of it, except as emotions, feelings, un-asked-for biological interjections that alter what we think consciously.

      The emotions are far more important than people realize.

  3. “In this case, I understand the principles quite well.”

    OK, but I don’t recollect you making it clear in any earlier posts what principles apply that support your belief that human-like AI is either not possible or very unlikely.

    “But we are not talking about physical behavior. We are talking about cognitive behavior. That is very different.”

    But cognitive behaviour is physical behaviour. There’s not that much difference between the sensory and motor peripheral neurons and those in the brain. The main difference is what they connect to, and in the brain they connect physically, through chemistry, to other neurons. What specifically is different about it that prevents emulation in some technology?

    And I thought the problem was the simpler animal aspects, which are surely more straightforward (compared to the higher-level cognition of humans, not compared to mechanical systems). And I thought the important point was the animal behaviour of the brain – animal intelligence. This is what some groups are working on. But what they are finding is that some of the intelligence we perceive is actually a natural mechanistic part of the system – or rather, that simple sensing and commands are enough to make a mechanical system work well enough. Rather than having to program each leg movement moment by moment in an artificial brain using a fixed algorithm, it needs only some general processing of balance and direction, and motor stimulation, and the mechanical system will ramble along in the right direction. I appreciate that mammal brains are far more complex than insect brains, and far less capable than human brains, but there is no indication that there is some impossible barrier to overcome, just a great deal of difficulty, and some understanding of which problems are the important ones to solve.
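
    To make that concrete, here is a minimal Python sketch of one common way to realise that idea, a central pattern generator: coupled oscillators produce the stepping rhythm, and the only high-level inputs are a heading bias and a balance gain. Everything here (the class, the leg layout, the gains) is hypothetical and illustrative, not any research group’s actual controller.

        import math

        class HexapodCPG:
            """Six phase-offset oscillators, one per leg, in an alternating-tripod gait."""

            def __init__(self, frequency_hz=2.0):
                self.frequency = frequency_hz
                # Opposite tripods are half a cycle out of phase.
                self.phase_offsets = [0.0, math.pi, 0.0, math.pi, 0.0, math.pi]
                self.t = 0.0

            def step(self, dt, heading_bias=0.0, balance_gain=1.0):
                """Advance time by dt; return one stride signal per leg.

                heading_bias: left/right asymmetry used to steer.
                balance_gain: scales stride when a tilt sensor reports instability.
                """
                self.t += dt
                signals = []
                for i, offset in enumerate(self.phase_offsets):
                    phase = 2 * math.pi * self.frequency * self.t + offset
                    stride = balance_gain * math.sin(phase)
                    # Assumed layout: legs 0-2 on the left, 3-5 on the right.
                    stride += heading_bias if i < 3 else -heading_bias
                    signals.append(stride)
                return signals

        # The high-level controller only nudges heading and balance; the
        # rhythmic leg coordination falls out of the oscillators themselves.
        cpg = HexapodCPG()
        for _ in range(5):
            print([round(s, 2) for s in cpg.step(dt=0.05, heading_bias=0.1, balance_gain=0.9)])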

    “It’s a good question, whether people would try to build an artificial person if they really understood the principles. I don’t know the answer – my guess is that they would limit themselves to simpler projects that could demonstrate that they have the principles right.”

    But you say you do understand the principles, at least enough to dismiss the idea. This is what I find puzzling. This has a parallel in religion, whereby the religious claim to understand God just enough to say he is ineffable – if he’s ineffable, where do they get the understanding to say so with conviction? It seems irrational to be too certain about making claims about what cannot happen, unless there is a very clear logical or mathematical reason, and even then those reasons are dependent on the axioms they rely on. That’s why I wonder what principle prevents you being convinced that human-like AI is possible.

    “In this case, however, we have researchers heading in a wrong direction.”

    Do you have a specific research program in mind? I mean specific in the sense that you can name a research group doing the wrong thing? It’s just that AI now encompasses so much. As we have come to understand brains more, it has become clear that there are more problems to solve. But there is no indication that this set of problems is growing exponentially beyond reach.

    “It’s in that “what we really mean by ‘intelligence’” part that people are getting it wrong.”

    How do you know they are getting it wrong and you are not? There are many ideas about intelligence out there. How about the work of Jeff Hawkins and the research groups he’s been involved in? You might be onto something if you could say categorically, this is what intelligence is, and this is why we cannot emulate it. But you seem to be hiding behind the vague term ‘intelligence’.

    Personally I think many researchers have a far better grasp on the many features of brain-body systems that go into what we call ‘intelligent’ systems, and they vary a great deal in capability – from insects to humans. But I still think we do not fully understand intelligence – we don’t understand it well enough to be ready to create human-like AI; but nor do we understand it well enough to be able to rule out human-like AI.

    A part of the problem is the over-attribution of specialness to consciousness and intelligence that has come to us through a history and tradition of theology and philosophy. Instead of looking for the magic we should be looking for the mechanistic. Sure, the mechanisms in human brains are more difficult to understand. But if one accepts the principles of evolution, the only difference between life and non-life is the autonomy and dynamic nature of life. The difference between the various types of intelligence, from very simple mechanistic to human mechanistic cognition, is only complexity. Try this: Stefano Mancuso: http://www.ted.com/talks/stefano_mancuso_the_roots_of_plant_intelligence.html. Note incidentally the theme I touch on later about cross-discipline work, and his final points on hybrids.

    So part of the problem with intelligence is that we often don’t recognise the various components, the many mechanisms, that are really mechanistic, but which come together in humans to such a complex degree we have the tendency to attribute magic to it. Intelligence is not some one thing that we have to emulate – we’ve never been able to define it as one thing; all such definitions seem hopeless. Just as the definitions for life are not very helpful, when looking at some entities that appear life-like. Instead, life is a collection of dynamic systems interacting. Intelligence is a collection of dynamic processes solving problems. We need to identify what processes are significant and we need to create the technologies that implement them. And make them work together.

    “The emotions are far more important than people realize.”

    I think at least for the last 5 to 10 years it has become very clear how important the emotions are. But it has also become clear that they are essentially mechanistic drivers. A visual stimulus arrives, evokes some memory or vague idea, possibly subconsciously, which triggers releases of hormones, increased heart rate, an urgency, still possibly subconscious, which makes one suddenly ‘decide’ to do something, as if it were a decision of free will out of the blue. In terms of personal experience we only recognise the biology of emotions when they are at their most insistent, when they do make us consciously ‘feel’ something. But the same mechanisms are in process all the time.

    The core of the problem, and the reason we require emotion, is that no computational system or brain can perform recursive analysis for every problem it has to solve in order to get a clear logical answer. We always take short cuts. Even in engineering we introduce limits on measurements, not just because there are physical limits to what we can measure, but because it is efficient to measure only well enough. Tolerances vary depending on the engineering problem. But in more general human tasks we mostly do what is good enough; and somewhere in our decision processes there is some low-level biology going on in the brain, pressed by the need to do other tasks too, that makes us choose; and the choices are driven by this low-level mechanistic ‘intuition’ and not by some magical free-floating mind.
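
    As a minimal illustration of that short-cut idea (what Herbert Simon called satisficing; the scoring function and threshold below are made up): rather than exhaustively scoring every option, stop at the first one that is good enough.

        import random

        def satisfice(options, score, good_enough):
            """Return the first option whose score clears the threshold."""
            for option in options:
                if score(option) >= good_enough:
                    return option  # short cut: stop early rather than optimise
            return None  # nothing cleared the bar

        random.seed(0)
        candidates = [random.random() for _ in range(1000)]
        best = max(candidates)  # exhaustive: examines all 1000 options
        quick = satisfice(candidates, score=lambda x: x, good_enough=0.9)
        print(f"exhaustive best: {best:.3f}; good-enough pick: {quick:.3f}")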

    “There’s not enough information there for me to evaluate. It might have possibilities.”

    That sounds like you’re not quite as aware of all the research that is feeding into AI from many different directions as is suggested by your earlier claim that you understood the principles. As I said, I’m not sure what principles you are appealing to. I don’t think the collective of AI research understands all the many principles involved, and that too is why I can’t see how you can rule it out.

    It isn’t necessary that all these researchers feel they are working on some grand AI program. Many are trying to solve very specific engineering problems. But often in that environment we see cross-discipline work that pulls it all together. So, while the bug emulators are dealing with what amounts to motor learning and problem solving, and others are looking at the intuitive learning of brains that process data and learn from it, or focus on tasks like autonomous road vehicles, there is enough going on to make human-like robots feasible – even if we can’t say when they will appear.

    • OK, but I don’t recollect you making it clear in any earlier posts what principles apply that support your belief that human-like AI is either not possible or very unlikely.

      This blog is primarily about the principles of human cognition, which are roughly what an AI system would need to implement. And then there’s the more explicit “AI Skepticism”.

      But cognitive behaviour is physical behaviour.

      That’s not even wrong.

      And I thought the problem was the simpler animal aspects, which are surely more straightforward (compared to the higher-level cognition of humans, not compared to mechanical systems).

      Animals have perceptual systems. AI has not achieved that.

      But you say you do understand the principles, at least enough to dismiss the idea. This is what I find puzzling. This has a parallel in religion, whereby the religious claim to understand God just enough to say he is ineffable – if he’s ineffable, where do they get the understanding to say so with conviction?

      I am not suggesting that anything is ineffable. I cannot explain the principles, because they are incompatible with conventional philosophy, and people won’t let go of that. What did you think my objections to “knowledge = justified true belief” and to metaphysics were all about? (rhetorical question — no need to answer)

      How about the work of Jeff Hawkins and the research groups he’s been involved in?

      It seems to be based on the same mistaken thinking as other AI projects.

      There’s a problem that Hawkins is trying to solve. And the way he is trying to solve it may be different from the way others are trying to solve that problem. But trying to solve the problem in a different way won’t help if it is entirely the wrong problem.

    • Try this: Stefano Mancuso: http://www.ted.com/talks/stefano_mancuso_the_roots_of_plant_intelligence.html

      I’m listening to that right now. The speaker’s accent makes that hard.

      I actually did spend time thinking about the intelligence of plants, so I have not ruled that out.

  4. “I am not suggesting that anything is ineffable. I cannot explain the principles, because they are incompatible with conventional philosophy, and people won’t let go of that.”

    I didn’t ask for a philosophical principle. What scientific principles make you think it’s not possible? You say you do understand the principles, so are they non-standard philosophical principles, or are they scientific principles, or what? You state that you understand the principles; I’d think you could at least say what type of principles they are.

    “But trying to solve the problem in a different way won’t help if it is entirely the wrong problem.”

    Although you state in broad terms what your objection is, it seems so broad that it doesn’t say anything. That’s why I was interested specifically in the principles you understand, and how they persuade you that everyone else (most others?) is doing it wrong.

    Cognitive behaviour is physical behaviour.

    “That’s not even wrong.”

    If you’re not a dualist then isn’t all behaviour physical? Isn’t behaviour the dynamic change of state of physical systems? What specifically about cognitive behaviour makes you think it isn’t physical? What clues are there that cognitive behaviour is not physical? Prod the brain physically, with probes, electromagnetic waves, or chemicals, and behaviour changes. What else is there?

    Take your dualist-looking statement from your previous post:

    “Most people make a clear distinction between a disease of the mind and a disease of the brain.”

    Well, yes they do. And that may be a useful distinction when working with different models of the brain. But it’s not a distinction that has any meaning at the level of chemicals and atoms, so there is no physical distinction.

    “Saying that there is a distinction is not to deny that the brain chemistry is involved in either case.”

    Well, yes. Physical.

    “For example, they might consider nicotine addiction a disease of the mind. But schizophrenia and bipolar disorder they would see as diseases of the brain. The distinction is that with schizophrenia and bipolar disorder there are clearly identified organic problems in the brain that account for the disease. It will take medical treatment to deal with those diseases.”

    Addiction may not be considered a ‘disease’ of the brain, perhaps because of the history of treating addiction, to which most humans are probably susceptible to some degree, whereas more socially severe disorders have historically been treated as diseases or other malfunctions. The occasional person who behaves with wild paranoia in public has been seen quite differently from the communal smokers who habitually light up a chain of cigarettes. But both perceptions are changing. And there are biological causes of addiction, which may be complex manifestations of simple brain learning mechanisms such as habituation at the level of the neuron, as Kandel and others demonstrated with organisms like Aplysia.
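
    For illustration, habituation at the level of a single synapse can be caricatured in a few lines of Python (the decay factor is made up; the decrementing response to a repeated harmless stimulus is the pattern Kandel and colleagues observed in Aplysia):

        def habituate(trials, response=1.0, depression=0.7):
            """Yield the reflex response to each repetition of a harmless stimulus."""
            for _ in range(trials):
                yield response
                response *= depression  # synaptic depression weakens the next response

        for trial, strength in enumerate(habituate(8), start=1):
            print(f"trial {trial}: response {strength:.2f}")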

    It is our familiarity with the ‘mind’ model that sees mental problems as being different from brain problems, but the differences are only in the type and method by which the anomalous behaviour is acquired.

    It may be handy for psychology departments to treat the brain as a ‘mental’ black box, while neurology departments look inside the brain, but that too is a matter of the history of those disciplines. Many ‘mental’ disorders have become ‘brain’ disorders as brain biology and physiology have developed.

    Part of the problem is the complexity. The same brain biology and chemistry in a specific part of the brain in two different people may result in quite different behaviours. As such it might be tempting to call that a ‘mental’ difference. But the real difference is caused by the complex interaction of many other factors – and that doesn’t make the complex set of causes non-physical; it just means we only have an externally observable ‘mental’ model of the behaviour difference, while the real physical ‘brain’ behaviour is too complex to discern.

    And of course people with quite different brains can be taught, coerced, indoctrinated, into behaving the same way mentally: just watch the similar behaviour of religious door-steppers, or the trained behaviour of soldiers. On the outside a group of soldiers may behave in what appear to be identical ways – that is the intention of the training; and yet differences in internal biochemical turmoil may be physically present in the brain, just not externally observable.

    I don’t see anything that we know that suggests cognitive behaviour is not physical.

    • I didn’t ask for a philosophical principle. What scientific principles make you think it’s not possible?

      I don’t know what it is you are asking for. And I don’t think you do, either.

      You state that you understand the principles; I’d think you could at least say what type of principles they are.

      They have to do with such issues as:
      What is data?
      What is an observation?
      What connects a statement to the world, such that the statement can be said to be a description of that world?

      If you’re not a dualist then isn’t all behaviour physical?

      No.

      I write something on a sheet of paper. The physical behavior is the motion of molecules resulting in particular ink marks. The cognitive behavior is in the semantic information conveyed.

      Physicists do not study the conveying of semantic information. Even Shannon’s theory of communication studies only the syntactic details, not the semantic details. So the conveying of semantic information cannot count as physical behavior. I have no doubt that there is a physical basis for it all. But when we talk of physical behavior, we are talking about motions of molecules and about ink marks; we are not talking about conveying semantics.
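
      To illustrate the point, here is a minimal sketch (the example string is made up): Shannon entropy is computed entirely from symbol frequencies, so a sentence and a meaning-destroying rearrangement of the same characters receive exactly the same score.

          import math
          from collections import Counter

          def entropy_bits_per_char(text):
              """Empirical Shannon entropy of a string, in bits per character."""
              n = len(text)
              return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

          meaningful = "the cat sat on the mat"
          scrambled = "".join(sorted(meaningful))  # same characters, meaning destroyed

          print(entropy_bits_per_char(meaningful))  # identical values: the measure
          print(entropy_bits_per_char(scrambled))   # never sees the semantics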

      Take your dualist-looking statement from your previous post:

      “Most people make a clear distinction between a disease of the mind and a disease of the brain.”

      There’s nothing dualist about it.

      And that may be a useful distinction when working with different models of the brain. But it’s not a distinction that has any meaning at the level of chemicals and atoms, so there is no physical distinction.

      Then there is no point in my replying to you. For there is no chance that you will be able to understand this reply at the level of chemicals and atoms.
