Posts tagged ‘computationalism’

September 1, 2014

The simulation argument

by Neil Rickert

In a recent post over at Scientia Salon, Mark O’Brien asks a question and gives his own answer:

Could a computer ever be conscious? I think so, at least in principle.

As O’Brien says, people have very different intuitions on this question.  My own intuition disagrees with O’Brien’s.

Assumptions

After a short introduction, O’Brien presents two starting assumptions that he makes, and that he will use to support his intuition on the question.

Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.

Personally, I do not assume naturalism.  However, I also do not believe that I have a supernatural soul.  I don’t assume naturalism, because I have never been clear on what such an assumption entails.  I guess it is too much metaphysics for me.


October 28, 2013

Thoughts on computationalism

by Neil Rickert

Recently, Massimo Pigliucci hosted a discussion of the computational theory of mind on his Rationally Speaking podcast, with an accompanying post on his blog.

That blog post has a link to the podcast.  I listened to that podcast this morning, and will comment on it in this post.

I have been clear in earlier posts that I am skeptical of computationalism.  And Pigliucci is equally clear that he, too, is a skeptic.  But I don’t plan to repeat those earlier posts here.

Analog computation

What surprised me about the discussion was that O’Brien emphasized analog computation.  Perhaps O’Brien is conceding that there might be problems with computationalism in the form of digital computation.

I remember, perhaps around 15 years ago, somebody arguing for analog computation rather than digital computation.  This was in a usenet post, and possibly the poster was Stevan Harnad.  My response at the time was something like:


July 23, 2013

AI Skepticism

by Neil Rickert

I am sometimes asked to explain why I am skeptical about the possibility of AI (artificial intelligence).  In this post, I shall discuss where I see the problems.  I often express my skepticism as doubt about computationalism, the view of mind summed up in the slogan “cognition is computation.”

Terminology

I’ll start by clarifying what I mean by AI.

Suppose that we could give a complete map or specification of a person, listing all of the atoms in that person’s body, and listing their exact arrangement.  Then, armed with that map, we set about creating an exact replica.  Would the result of that be a living, thinking person?  My personal opinion is that it would, indeed, be a living, thinking person, a created twin or clone of the original person that was mapped.

Let’s use the term “synthetic person” for an entity constructed in that way.  It is synthetic because we have put it together (synthesized it) from parts.  You could summarize my view as saying that a synthetic person is possible in principle, though it would be extremely difficult in practice.

To build a synthetic person, we would not need to know how it functions.  Simply copying a real biological person would do the trick.  However, if we wanted to create some sort of “person” with perhaps different materials and without it being an exact copy, then we would need to understand the principles on which it operates.  We can use the term “artificial person” for an entity so constructed.

My own opinion is that an artificial person is possible in principle, but would be very difficult to produce in practice.  And to be clear, I am saying that even if we had full knowledge of all of the principles, we would still find it very difficult to construct such an artificial person.

As I shall use the term in this post, an artificial intelligence, or an AI, is an artificial person built primarily using computation.  In the usual version, there are peripheral sensors (input devices) and effectors (output devices), but most of the work is done by a central computer, so it can be said to be computation.


October 18, 2012

On David Deutsch on AI

by Neil Rickert

Physicist David Deutsch has an interesting article on AI in Aeon magazine.  I thank Ant for bringing it to my attention in a comment on another blog.  My view of AI is rather different from Deutsch’s, though I agree with some of what he has to say.

I started this blog in order to discuss some of what I have learned about human intelligence, as a result of my own study of AI.  It turns out that I have not actually posted much that is directly on the topic of AI.  So I am using this post mainly as a vehicle to present my own views, though I will present them in the form of commentary on Deutsch’s article.  I’ll note that Deutsch uses the acronym AGI for Artificial General Intelligence, by which he means something like human intelligence created artificially.


March 26, 2012

Information storage in the brain

by Neil Rickert

The problem of information storage is raised by Cornelius Hunter in a post at UD and at his own blog.  I’m not quite sure why Cornelius posted that.  He often posts arguments for ID or arguments critical of evolution, but he fails to connect this particular post with his ideas on either.  Never mind; it’s something to comment on, because my response illustrates my disagreement with the conventional wisdom.

Cornelius poses the issue with: “The problem is that how the brain could store information long-term has been something of a mystery.”

My reaction: as best I can tell, the brain doesn’t store information at all.  So there is no mystery.

Suppose I hear a tornado alert on the radio.  I might react by becoming more alert to the weather conditions outside.  That can be thought of as reconfiguring things.  And that reconfiguration can be said to be a kind of memory.  But, as best I can tell, there would never be a need to actually store the received information (the alert).
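The contrast can be sketched in code.  This is purely my own illustration (the class and attribute names are invented), contrasting an agent that stores the received message verbatim, computer-style, with one that merely reconfigures its internal state and discards the message:

```python
class StoringAgent:
    """Computer-style memory: the received information itself is kept."""
    def __init__(self):
        self.stored_messages = []

    def hear(self, message):
        self.stored_messages.append(message)  # the alert text is stored verbatim


class ReconfiguringAgent:
    """Reconfiguration-style memory: hearing the alert changes a
    disposition, but the alert itself is never stored."""
    def __init__(self):
        self.weather_vigilance = 0.1  # baseline attentiveness to the weather

    def hear(self, message):
        if "tornado" in message.lower():
            self.weather_vigilance = 0.9  # reconfigure: become more alert
        # the message is discarded; only the changed state remains


alert = "Tornado alert for your county until 6 pm"

a = StoringAgent()
a.hear(alert)
print(a.stored_messages)     # the alert text survives as stored data

b = ReconfiguringAgent()
b.hear(alert)
print(b.weather_vigilance)   # only a changed disposition survives
```

The second agent can be said to remember the alert, in the sense that its future behavior is different, even though nothing resembling the received information was ever written down.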

The idea of storing information comes from the way that we use computers.  Perhaps it is implicit in the conventional view that knowledge is justified true belief.  I disagree with that view of knowledge, and I disagree with the information processing view of what the brain is doing.  My example of how we react to a tornado warning illustrates why I disagree.

January 22, 2012

Getting information

by Neil Rickert

In an earlier post, I wrote:

That leaves, as one of the basic problems for a cognitive agent, the problem of getting information about the world.

In this post, I want to discuss why that is a problem.

Many people seem to hold the view that sensory cells in the body passively receive input from the world, and that how we perceive the world depends on what we do with that passively received data.  That seems to be the view of proponents both of sense-data accounts and of computationalism.
