On David Deutsch on AI

by Neil Rickert

Physicist David Deutsch has an interesting article on AI in Aeon magazine.  I thank Ant for bringing it to my attention in a comment on another blog.  My view of AI is rather different from that of Deutsch, though I agree with some of what he has to say.

I started this blog in order to discuss some of what I have learned about human intelligence as a result of my own study of AI.  It turns out that I have not actually posted much that is directly on the topic of AI.  So I am using this post mainly as a vehicle to present my own views, though I will present them in the form of commentary on Deutsch’s article.  I’ll note that Deutsch uses the acronym AGI for Artificial General Intelligence, by which he means something like human-level intelligence created artificially.

From ape to human

I will be jumping around in Deutsch’s article.  So I shall start with the last paragraph, where Deutsch writes:

Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

With that thought, we can divide the problem of designing an AGI system into two stages:

  • Stage 1: Build something with the intelligence of a chimpanzee;
  • Stage 2: Provide the capability to go from chimp-level intelligence to human-level intelligence.

Clearly, Deutsch takes stage 2 to be the important step, to be where our research efforts need to go.  By contrast, I see stage 1 as the really hard part, and the part that is holding back progress in AI.

There’s a long tradition of identifying intelligence with the ability to use logic.  This view has always seemed to me to be mistaken.  It is an understandable mistake, because the people generally recognized as intelligent make a lot of use of logic, and their activities that we consider most intelligent often involve the use of logic.  However, it seems to me that people have drawn the wrong conclusion here.

Thinking back to my school days, I recall that the so-called “word problems” were often the ones that students found particularly difficult.  And the solutions to these word problems often seemed to use logic.  There are typically two steps to solving one of those word problems:

  • Step 1: Use the natural-language description (which is the word problem) to formulate the problem as a logic problem.  This typically involves forming some sort of logical or mathematical model of a hypothetical real-world problem.
  • Step 2: Solve the resulting logic problem.

It was always step 1 that most students found to be difficult.  But step 1 is not really a question of logic at all.  Rather, it is a question of knowing the real world well, and being able to model it.  So the hard part was always the real-world knowledge and the skill of applying that knowledge.
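To make the two steps concrete, here is a small example of my own (the problem and the numbers are invented for illustration; they are not from Deutsch’s article).  Take the word problem: Alice is twice as old as Bob, and their ages sum to 36; how old is each?  Step 1, the modelling step, turns the prose into equations:

a = 2b,  a + b = 36

Step 2 is routine: substituting the first equation into the second gives 3b = 36, so b = 12 and a = 24.  The only genuinely difficult move was knowing that “twice as old as” translates into a = 2b, and that is knowledge of the real world, not logic.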

The chimpanzee may not know logic.  But it does have a lot of knowledge of its world, and chimps can be quite skillful at manipulating their world.  So it seems to me that most of what I consider to be intelligence is already there in chimps, and comes from stage 1 rather than stage 2.

The implications of physics

Deutsch’s article has, as a subtitle:

The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

I don’t know whether Deutsch wrote that, or whether his copy editors added that as a way of inspiring interest from the reader.  However, it seems to have been an unwise thing to say.  When we use “imply” we are typically talking about logical implication.  If there were a known logical implication from the laws of physics to AI, then we would already have the knowledge of how to construct an AI system, at least in principle.  But we do not have that knowledge.  In fact, much of Deutsch’s article is about the fact that we do not have that knowledge.  So, if there is such an implication from the laws of physics, then we surely do not know that.  At best, we are guessing.

My own view is this.  We see intelligent systems around us (such as other people).  They seem to be a part of nature.  We have done pretty well at understanding other parts of nature.  So it seems likely that we should eventually understand intelligence well enough that, at least in principle, we could construct an AGI.  Note, however, that “seems likely” is a lot weaker than claiming that there is an implication from the laws of physics.

For myself, I see intelligence as related to an ability to adapt.  And thus it seems to me that a system with general intelligence should be able to adapt to whatever the laws of physics happen to be.  So I am inclined to think that details of the laws of physics have relatively little to do with whether intelligence is possible.

Deutsch goes on to say a bit more about this when he writes:

Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.

And there, we see why my view is so different from that of Deutsch.  I am quite skeptical as to whether computation has any significance at all to intelligence, other than that intelligent systems are able to make use of computation.

The really big difference here is that Deutsch is an internalist.  That is, he sees intelligence as entirely an internal matter of what is happening in the brain.  By contrast, I am an externalist or interactionist, in that I see intelligence as having to do with how we adaptively interact with the world.

Brains or persons?

It is a very common view, and one taken by Deutsch, that it is brains that are intelligent:

It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

I am not so sure that is uncontroversial, though many people take that view.  But it is an internalist view.  It credits intelligence to what goes on in the brain.  As an interactionist, I of course disagree.  I see intelligence as something to be credited to the person as a whole, and not to just the brain.  It is far from clear that a disembodied brain in a vat would be intelligent.  It seems more likely that it would suffer from severe sensory deprivation.

So I see persons, not brains, as intelligent, though of course their intelligent activity requires that they use their brains.  My view is that persons, not brains, think.  Of course, people use their brains when thinking.  But I am doubtful that a disembodied brain in a vat would be capable of thinking.

Perhaps I am being a bit of a pedant, when I insist that it is persons rather than brains that think or that are intelligent.  However, I think it is not just pedantry.  Rather, it reflects a deep difference in how we understand intelligence and thinking.

The role of philosophy

Deutsch goes on to discuss some of the history of thinking about intelligence and computation, leading up to Turing’s 1950 paper where he suggested that a computer could think.

This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular.

Deutsch is wrong about that.  There are three camps, not two.  There is the camp that Deutsch mentions that assumes something supernatural is involved in intelligence.  There is the computationalist camp to which Deutsch belongs.  And there is the third camp to which I belong.  And I am not alone in that camp.

Those of us in the third camp agree that intelligence is entirely natural.  Yet we doubt that intelligence is computational.  At least for me, and perhaps for others in this camp, the important distinction is between the internalist view (that intelligence is what happens in the brain) and the interactionist view (that intelligence is in our interactions with the world).

In his breakdown of the world into what he sees as two camps, Deutsch says:

What is needed is nothing less than a breakthrough in philosophy, a theory that explains how brains create explanations.

And there, I completely agree with Deutsch.  And I have been working on that “breakthrough”.  In fact, that is what this blog is chiefly about.  From my point of view, philosophy is too strongly tied to tradition and is far too internalist, to the point of being almost solipsistic.  I want a philosophy that is far more concerned with our interactions with the world than with natural-language word play.

Deutsch sees that part of what is missing is creativity:

I call the core functionality in question creativity: the ability to produce new explanations.

I agree with Deutsch on the importance of creativity.  But when we look at humans for examples of creativity, the emotions seem to be a big part of what generates that creativity.  So I see it as a mistake to think of creativity as algorithmic, as something that could come from computation.  What is missing seems to be an ability that the chimpanzee already has, even though it lacks our abilities at logic and mathematics.

Popper and epistemology

Deutsch continues with:

What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile. Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers).

I am puzzled by this praise of Popper, whom I see as overrated, not as underrated.  And I am not even a real philosopher.  Looking at it from a scientist’s perspective, I don’t see that Popper has contributed much that is of use.

For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable.

I agree with Deutsch in his questioning of the “justified true belief” characterization of knowledge.  Long-time readers of this blog will know that I have often questioned that characterization.  I am a little surprised that Deutsch is particularly critical of the justification part of it.  And part of why I am surprised is that, when I have discussed (or criticized) Popper, I have been told by the real philosophers in the discussion that Popper was primarily concerned with justification.  So here is Deutsch saying that Popper is underrated, while at the same time criticizing the justification that seems to be part of what concerned Popper.

The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.

Deutsch is right to criticize induction.  However, belief in induction seems to be deeply entrenched in western philosophy.  When I have questioned it, I have been told in no uncertain terms that I am wrong.  And when I have asked philosophers for justification, say for actual evidence that induction is used, I am at most given very vague statements.

Inductionism seems to be an origins myth for philosophy (particularly for epistemology).  The role that inductionism plays in philosophy seems somewhat analogous to the role that the Adam and Eve story plays in Christian theology.

Deutsch goes on to criticize Bayesianism:

Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act.
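To spell out what the doctrine amounts to (this gloss is mine, not Deutsch’s): at its core is Bayes’ theorem, which prescribes how the probability assigned to a hypothesis H should be revised in the light of evidence E:

P(H | E) = P(E | H) · P(H) / P(E)

On the Bayesian picture, learning is just the repeated application of that update rule, with yesterday’s posterior serving as today’s prior.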

I again agree with Deutsch on this.  The idea that our current laws of physics could have been derived by Bayesian inference is laughably absurd.  I continue to wonder how some philosophers can be so confused about this.

Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified, logic. Hence the term ‘general’ in AGI.

Here, I disagree with Deutsch.  I don’t see Popperian conjecture and criticism as any kind of core method.  Conjectures in science are not made out of whole cloth.  They come from a background of research and experimentation.  The conjectures may make for good dramatic events that help to market the science, but it is the preliminary research that precedes the conjecture that is the core of science.

On the apparent failure of AI

Deutsch contemplates what might have gone wrong.

In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI.

This also does not surprise those of us in the third camp.  For we never believed that intelligence was purely a matter of what goes on internally.  If intelligence has to do with how we interact with the world, then computation is automatically ruled out.  For computation itself is entirely solipsistic.  Computation is independent of the world.

Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.


For example, the mere fact that it is not the computer but the running program that is a person, raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist.

I think this is a faulty analysis.  I still remember my first computer programming job.  It was at WRE, in 1958 (give or take a year), where I had a summer job during my undergraduate years.  The computer that I programmed consisted of a bunch of people working with mechanical calculators at their desks.  So it was very much a human computer.

The particular problem that I was programming had no moral implications.  But, if it did have moral implications, the people doing the computation would have been held blameless.  Any blame would have been pinned on me (as programmer) and on my manager.  For we would have been the ones expected to know the moral implications.  That would not have been expected of the people doing the computation.

Again, we see the same distinction as I have been making.  Computation is entirely internalist.  Moral implications arise from externalist or interactionist activity.  Moral questions are about the real-world effects of what we do.  They are not about pushing abstract symbols around as part of a computation.

Conclusion

My primary aim here has been to present my own views on the nature of intelligence.  In particular, I have tried to explain why I disagree with the conventional wisdom.  Deutsch also disagrees somewhat with the conventional wisdom.  However, my disagreement is more radical than that of Deutsch.

5 Comments to “On David Deutsch on AI”

  1. You say: “If there were a known logical implication from the laws of physics to AI, then we would already have the knowledge of how to construct an AI system, at least in principle.”

    However, it is possible to prove that something exists without being able to find an actual example. For instance, there are two irrational numbers a and b such that a^b is rational. There is a proof of this that nevertheless leaves you without any actual example.
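    The standard argument runs as follows: consider √2^√2. If that number is rational, take a = b = √2 and we are done. If it is irrational, take a = √2^√2 and b = √2; then

    a^b = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2,

    which is rational. Either way a suitable pair exists, yet the proof never tells you which of the two cases actually holds.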

    “The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?” – David Deutsch

    This is a very important point and is backed by a rigorous argument. As an expert in hypercomputation, I’m sure that Deutsch is correct. The only nicety is that he doesn’t mention Malament-Hogarth spacetimes. But that is OK. We can be pretty sure that we don’t live in a universe that involves such spacetimes within individual brains.


    • Let’s put it this way:

      If there were a known implication from the laws of physics to intelligence, we would have a reasonably precise definition of “intelligence”. We don’t currently have that. At present, there is considerable disagreement over what is meant by “intelligent”.


      • I disagree. Humans exhibit intelligence. This shows that machines (albeit imprecise biological ones) can exhibit intelligence. The only possible objection is that Turing machines (plus randomness) cannot perform calculations that human brains can (i.e., the type of machine we have is not powerful enough). But the laws of physics ensure that they can.

        We can utterly fail to know what algorithms (internal or interactional) result in intelligence, but nevertheless still know that they exist.


  2. “But step 1 is not really a question of logic at all. Rather, it is a question of knowing the real world well, and being able to model it.”

    Modelling something is the hard part, and it requires creativity and logic to do! This is the understanding part. Deutsch argues in his most recent book that the reason creativity evolved is so that existing explanations/ideas could better be passed from person to person. This process of understanding using creativity also happens to be the same process that is required to extract data from our surroundings and create explanations of our world: nature/a-person has an objective-truth/idea-in-their-head that can only be accessed via observations/words; these observations/words do not themselves provide direct access to the objective-truths/ideas, and they contain implicit information. So by evolving the ability to learn existing ideas from other people and pass them on better, we also (accidentally?) gained the ability to comprehend/explain our world. It was a magnificent leap in evolution or, as Deutsch would say, a beginning of infinity.

    “The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

    “I don’t know whether Deutsch wrote that, or whether his copy editors added that as a way of inspiring interest from the reader. However, it seems to have been an unwise thing to say. When we use “imply” we are typically talking about logical implication. If there were a known logical implication from the laws of physics to AI, then we would already have the knowledge of how to construct an AI system, at least in principle.”

    No, you’ve got this wrong. “Imply” doesn’t mean that we have a full explanation of exactly what is happening. The laws of physics determine what is and isn’t possible. Humans are creative machines that exist within the laws of physics. So artificial intelligence must be possible. Whether humans will create it is another question.

    “My primary aim here has been to present my own views on the nature of intelligence. In particular, I have tried to explain why I disagree with the conventional wisdom. Deutsch also disagrees somewhat with the conventional wisdom. However, my disagreement is more radical than that of Deutsch.”

    Your description of intelligence as only being about interactions with the world cannot possibly be correct. If you put my brain – with all my current memories – in a vat and removed all sensory inputs, then there is no reason why I couldn’t still be intelligent and create new explanations of the world from my existing memories.


    • If you put my brain – with all my current memories – in a vat and removed all sensory inputs, then there is no reason why I couldn’t still be intelligent and create new explanations of the world from my existing memories.

      We obviously have very different ideas about what the brain is and does.

      I expect that a brain in a vat would very quickly start to reorganize itself to deal with the lack of an attached body. And the memories would fade as that reorganization proceeds.

