AI Skepticism

by Neil Rickert

I am sometimes asked to explain why I am skeptical about the possibility of AI (artificial intelligence). In this post, I shall discuss where I see the problems. I sometimes express my skepticism as doubt about computationalism, the view of mind summed up in the slogan “cognition is computation.”

Terminology

I’ll start by clarifying what I mean by AI.

Suppose that we could give a complete map or specification of a person, listing all of the atoms in that person’s body and their exact arrangement. Then, armed with that map, we set about creating an exact replica. Would the result be a living, thinking person? My personal opinion is that it would indeed be a living, thinking person, a created twin or clone of the original person that was mapped.

Let’s use the term “synthetic person” for an entity constructed in that way.  It is synthetic because we have put it together (synthesized it) from parts.  You could summarize my view as saying that a synthetic person is possible in principle, though it would be extremely difficult in practice.

To build a synthetic person, we would not need to know how it functions.  Simply copying a real biological person would do the trick.  However, if we wanted to create some sort of “person” with perhaps different materials and without it being an exact copy, then we would need to understand the principles on which it operates.  We can use the term “artificial person” for an entity so constructed.

My own opinion is that an artificial person is possible in principle, but would be very difficult to produce in practice.  And to be clear, I am saying that even if we have full knowledge of all of the principles, we would still find it very difficult to construct such an artificial person.

As I shall use the term in this post, an artificial intelligence, or an AI, is an artificial person built primarily using computation.  In the usual version, there are peripheral sensors (input devices) and effectors (output devices), but most of the work is done by a central computer, so most of the work can be said to be computation.

It’s important to be clear here that by “central computer” I just mean something that does the same kind of things as the devices that we call computers.  Some folk say that computers don’t really compute, that they are electrical appliances that we model as if they were computing.  I’m not interested in those arguments.  Whether the central computer really computes doesn’t matter.  If it behaves in a way that we can describe as computing, that is sufficient for us to say that it is using computation in the sense assumed in that definition of artificial intelligence.

So now, my opinion.  I am skeptical of the possibility of AI.  And the reason for my skepticism is that I doubt that the principles needed for an artificial person can be mapped into computation.

The “purpose” problem

I spent some time studying learning, trying to find ways to design a computer program that could learn.  And the major difficulty I encountered was one of setting direction.

Let me illustrate from a mathematical point of view.  One of the things that a mathematician does is explore axioms and their consequences.  We could have a computer produce axiom systems, perhaps randomly.  And we could program the computer to explore what can be proved from those axiom systems.  Some AI research has been done in that direction.
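To make that concrete, here is a toy sketch of such a program, of my own construction for illustration (the atoms, the rule format, and the forward-chaining loop are illustrative assumptions, not any actual research system). It randomly generates a small propositional axiom system, where each axiom says “these premises imply this conclusion”, and then mechanically derives everything that follows.

```python
import random

ATOMS = ["p", "q", "r", "s"]

def random_axiom_system(n_axioms=5, seed=None):
    """Generate a toy axiom system: each axiom is (premises -> conclusion)."""
    rng = random.Random(seed)
    axioms = []
    for _ in range(n_axioms):
        premises = frozenset(rng.sample(ATOMS, rng.randint(0, 2)))  # 0 premises = a plain fact
        conclusion = rng.choice(ATOMS)
        axioms.append((premises, conclusion))
    return axioms

def derive(axioms):
    """Forward-chain: keep applying any rule whose premises are already derived."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in axioms:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

system = random_axiom_system(seed=1)
print("axioms:  ", system)
print("theorems:", derive(system))
```

The machinery works, but notice what is missing: nothing in the program can tell whether the system it just generated is worth exploring.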

If you randomly generate a bunch of axiom systems, and present them to a human mathematician, that mathematician will quickly reject most of those proposed systems as “not interesting”, and explore only the few that he finds of interest.  The mathematician might be mistaken about what is interesting, but that’s not the point.  The main point here is that the mathematician seems to have some way of judging them, something like a sense of purpose or a direction.  It is hard to program a comparable sense of purpose into a computer system.

To put it simply, the form of machine learning that works best is a kind of “trial and error” learning, sometimes called “reinforcement learning”.  But that requires a suitable directionality, a way of deciding what counts as success and what counts as error.  You can program a reward system of sorts into a computer.  But the difficulty is in having a general-purpose reward system that can allow completely autonomous learning.
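Here is a minimal sketch of what I mean, again of my own construction (the action names and the reward function are arbitrary assumptions for illustration). The trial-and-error loop learns perfectly well, but only because the programmer hand-coded the reward function; the directionality comes from outside the learner.

```python
import random

def reward(action):
    """The learner's entire 'sense of purpose' -- hand-coded by the programmer."""
    return 1.0 if action == "B" else 0.0  # arbitrary: we simply decreed that B is success

def trial_and_error(actions=("A", "B", "C"), episodes=200, epsilon=0.1):
    """Epsilon-greedy trial and error: track average reward, mostly pick the best so far."""
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    rng = random.Random(0)
    for _ in range(episodes):
        if rng.random() < epsilon:  # occasionally try something at random (trial)
            a = rng.choice(actions)
        else:                       # otherwise exploit the current best estimate
            a = max(actions, key=lambda x: totals[x] / counts[x] if counts[x] else 0.0)
        totals[a] += reward(a)
        counts[a] += 1
    return {a: round(totals[a] / counts[a], 2) if counts[a] else 0.0 for a in actions}

print(trial_and_error())  # converges on B -- or on whatever else reward() calls success
```

Change one line of reward() and the same learner pursues a different goal with equal enthusiasm. A general-purpose reward system, one that would let the machine decide for itself what counts as success, is exactly the part we do not know how to write.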

I later came across a paper by Dreyfus, where he brought up the same problem.  On the second page, Dreyfus writes:

Using Heidegger as a guide, I began to look for signs that the whole AI research program was degenerating. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance – a problem that Heidegger saw was implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values and John Searle now calls function predicates.

I haven’t read any of Heidegger, but this does seem to describe what I found to be a serious problem for AI.

The data problem

A computer starts with input data and applies logic operations to that data, perhaps to reformulate it in a different way or to produce output.  AI proponents typically assume that the data will come from sensors.  For example, a computationalist will assume that there will be data in the form of stimulation of retinal cells.

The more I looked at it, the more implausible this seemed.  A particular retinal cell might, at one moment, receive a photon from the sky (diffused sunlight).  A moment later, it might receive a photon reflected off a blue car.  And, shortly after that, it might receive a photon from a traffic light.  The problem here is that as a person moves around, and as the eye itself moves relative to the person, the light received by a particular retinal cell could come from almost anywhere.  It was looking more and more as if what AI proponents considered data would be more like what William James described as a “blooming, buzzing confusion.”

If that is correct, then the Cartesian idea of passive input looks implausible.  Instead, the organism would need to carry out procedures that provide it with useful information.  And here, the sense of direction discussed in the previous section would be important for judging the usefulness of data.

I looked at science to see what it uses for input.  Philosophers tell a story of scientists finding patterns in the data by induction.  But that did not seem to explain science.  Much of the data that scientists use today would have been completely unknown to Aristotle.  Science seems to make progress by finding new ways of getting data, rather than by finding patterns in existing data.  Some of our scientific theories seem to be derived from the ways that scientists get the data, which would explain why data is often theory-laden.

Assuming that science is an indication of what an organism needs, that science is perception writ large, this suggests that the problem for an organism is not computing with the data it already has.  Rather, the problem is one of getting useful data in the first place.  And that does not seem to be a computational problem.

What is the alternative?

Given that I find AI implausible, why is it that I still see the possibility, at least in principle, of an artificial person?

The alternative is simply this: instead of building a system out of logic gates, we should build it out of homeostatic processes.  A homeostatic process already has its own sense of direction, namely acting in ways that maintain stasis.  And as a homeostatic process changes its state to compensate for changes in the environment, its internal state already constitutes useful data about the part of the environment it is reacting to.  So the use of homeostats seems to solve both the “purpose” (or direction) problem and the data problem.
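A toy illustration of both points, under simplifying assumptions of my own (a single variable and a simple integral feedback rule, standing in for a real homeostat): the controller is given no description of its environment, yet in the course of maintaining stasis its compensating state converges to the size of the external disturbance.

```python
def homeostat(setpoint=37.0, gain=0.5, steps=20):
    """Toy homeostat: act so as to hold a sensed variable at the set point."""
    compensation = 0.0  # the homeostat's internal state
    for t in range(steps):
        disturbance = 5.0 if t >= 10 else 0.0  # the environment shifts halfway through
        sensed = setpoint + disturbance - compensation
        error = sensed - setpoint
        compensation += gain * error  # the only rule: act to restore stasis
        print(f"t={t:2d}  sensed={sensed:6.2f}  compensation={compensation:5.2f}")

homeostat()  # compensation converges to 5.0, the size of the disturbance
```

The direction (restore stasis) and the data (the compensating state, which ends up measuring the disturbance) both come for free from the process itself, which is the point of the proposal.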


4 Responses to “AI Skepticism”

  1. I was expecting a reference to the Turing Test 🙂

I find AI research interesting but I am also skeptical about how accurately it can really mimic human brain function. I only need to look at the facial recognition in photo software and see how many people get misidentified, or even some objects get mistaken for faces. Play with an MS Kinect and you’ll see how background movement can confuse the sensor. Both are things that the human brain copes with very easily.

That said, if you take the view that free will is an illusion and what we consider as thinking is only the product of chemical reactions, then surely it must be possible to replicate what a human could think and do. That may be the theory, but I’m still a little dubious.


    • The importance of the Turing test is much overrated. It never seemed important to me.

We recognize whether people have minds based on their behavior. Turing was arguing that we can similarly rely on behavioral tests for judging whether a computer can think. What’s perhaps interesting is that the test reflects the prospects for technology at the time (1950) when Turing wrote that paper. We’ve made many advances since then, but nothing that would persuade us that computers have minds.

      I only need to look at the facial recognition in photo software and see how many people get misidentified, or even some objects get mistaken for faces.

      There’s that old saying about AI. The problem is half solved — they have the “artificial” part of it working.

That said, if you take the view that free will is an illusion and what we consider as thinking is only the product of chemical reactions, then surely it must be possible to replicate what a human could think and do.

I don’t take that view. As far as I am concerned, we do have free will. The people who try to prove that we couldn’t have free will seem to be over-interpreting. If I go by Wittgenstein’s “meaning is use”, then since “free will” is something that we use effectively in our communication, it has a meaning, and we should get that meaning from how it is used rather than from a logical derivation from the words “free” and “will.”

As for “thinking is only the product of chemical reactions”, that also seems absurd. When I drive my car, it moves by virtue of chemical reactions. But we don’t say “motion is the product of chemical reactions.”

      When I hear what is supposed to be an implication of materialism, I can only conclude that I must not be a materialist. However, I see no reason to believe that there is anything other than matter (and other physical things like time and space). I guess the difference is that I am not a reductionist.

