
Could A Computer Ever Be As Smart As You?

August 16, 2010

The aim of TMBL is to make computers smarter.  Any progress in this direction raises the old question about whether a computer could ever be as intelligent as a human.  On a practical level, I don’t think it’s going to happen for a very long time, if at all.  Nevertheless, the in-principle question is an interesting one: in principle, is it possible to make a computer that behaves just like a human and if so, would it be conscious?  Answering yes to the first part is known as the weak AI standpoint and answering yes to the second, the strong AI standpoint.

Many people may find both standpoints ridiculous because the computers we experience in our daily lives seem to share almost nothing with humans.  This might be misleading because the stuff we’ve managed to get computers to do so far only scratches the surface of what they could do.  Just think how different today’s computers are from the computers of just 15 years ago.

So what’s my position? Well I accept the possibility that there may be some fundamental obstacle to computers replicating human behaviour or human consciousness.  That said, I’ve never heard a convincing candidate for such an obstacle and in the absence of any, both the weak AI and strong AI standpoints look pretty reasonable to me.

I was interested to read Massimo Pigliucci’s post relating to this on Rationally Speaking.  He starts by saying lots of sensible stuff about information and pointing out that it doesn’t help creationists or materialists in the way they want it to.  So far, so good.  Then he starts talking about the idea that we might one day be able to upload our consciousnesses into computers.  Now I’m not very convinced that this will ever be practically possible but I can’t see a good reason why it isn’t possible in principle.  The bit where I disagree with Pigliucci starts here:

Briefly, though, I think the burden of proof is on the singularitarians / transhumanists to show that consciousness is just a matter of logical inputs and outputs of the type that can be simulated — at least in principle — in a computer.

So he’s saying that the burden of proof lies with those who think a computer could, in principle, be conscious.  Watching his debate video, which the post links to, it seems he also thinks that the burden of proof lies with weak AIers too, and for the same reasons.  I disagree.  Don’t get me wrong: I agree that the weak AI and strong AI cases remain to be proved.  What I disagree with is the implication that the opposite views get to be the default – that those who disagree with the weak AI standpoint have less work to do.  Indeed, if either side has a greater burden of proof I believe it’s those that reject weak AI.  So what’s Pigliucci’s argument?

Like Searle, I think it more reasonable to consider consciousness a biological phenomenon akin to, say, photosynthesis: something that does have a logical structure, but that also requires certain kinds of substrates to actually work (you can simulate photosynthesis in a computer, but you ain’t getting no sugar out of it).

I’ve read Searle arguing using an analogy to the fact that a computer simulation of water splashing about will never get you wet and that a computer simulation of an explosion is not an explosion.  I think Searle actually accepts weak AI and deploys the argument against strong AI but no matter.

I can see why this sort of argument might be rather persuasive: it reminds us how different a computer simulation of a thing can be from the thing itself and it then invites us to accept that this probably also applies to the brain.  Here’s where the problem begins: sometimes a computer simulation of a thing is, in essence, just the same as the thing.  How could this be?  Well, consider one computer simulating the behaviour of another computer.  For example, this happens when people use an emulator of a ZX Spectrum to play their old games on their PCs.  In this case, there is nothing fundamentally different between the simulation and the thing being simulated; there is nothing missing.  Computation is simulating computation.  The difference between this example and the examples deployed against weak AI is that the thing being simulated is fundamentally doing the same thing as the thing doing the simulating: processing information.
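To make the emulator point concrete, here is a minimal sketch (a hypothetical toy machine, not a real ZX Spectrum emulator): a Python program that interprets the instructions of a tiny stack machine.  The simulated machine’s computation is fully realised by the simulation – unlike simulated rain or simulated explosions, nothing is missing, because both layers are just processing information.

```python
def run(program, inputs):
    """Interpret a tiny stack machine: each instruction is a tuple
    of an opcode and its arguments."""
    stack = list(inputs)
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

# A program for the simulated machine that computes (3 + 4) * 2.
program = [
    ("push", 3),
    ("push", 4),
    ("add",),
    ("push", 2),
    ("mul",),
]

print(run(program, []))  # -> 14
```

The simulated machine really does compute the answer: the 14 that comes out is the genuine result of the computation, not a mere picture of one.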

The point is that the brain has been built by evolution as an information processor and that’s the functionality we’re interested in here.  The brain takes in information from a bunch of nerves, processes information and sends information out through other nerves.  That is the job for which natural selection has built the brain.

A man-made pump does fundamentally the same thing as a heart: pump blood.  A man-made camera does fundamentally the same thing as an eye: capture an image.  Similarly, a man-made computer does fundamentally the same thing as a brain: process information.

All of this exposes how the photosynthesis analogy misleads us.  Some thought experiments…  A replacement for a heart must actually pump blood and so a simulation will not do.  A replacement for a photosynthesis mechanism must actually produce sugar and so a simulation will not do.  But a brain replacement is different since if a perfect simulation replaces a brain’s information processing capacity, there is nothing missing – one information processor has been replaced by another.  As far as natural selection would be concerned, the modified organism is indistinguishable from the original.

If someone claims that a mechanical pump is significantly different from the human body’s pump (the heart) then the burden is largely on them to point out a specific difference and explain why it is significant.  Similarly, those that say a man-made information processor (computer) is significantly different from the human body’s information processor (the brain) need to provide a good explanation of what’s different and why that’s important.  They don’t get to just assert that the burden of proof all lies with those that disagree with them.

Pigliucci has one last shot at making us feel uncomfortable with the weak AI position:

I would note in passing that, as Searle pointed out, if one thinks that consciousness can be “uploaded”, one is committed to a type of dualism (something that singularitarians profess to abhor), because one is assuming that the “stuff” of thought is independent from the stuff of brains.

Dualism is the view that the mind is some completely separate thing from the brain and is not material but made of some mind stuff; e.g., my mind does the deciding and then passes the decision to my brain to execute.  This view has become really rather unfashionable, and for good reason.

I’m not convinced that Pigliucci manages to push weak AIers into the dualist camp in the way he’s suggesting.  Weak AIers can just say that mind is computation and the brain is the computer that does the computation.  If this is a dualist view then we are all dualists regarding the relationship between a web browser and the computer on which it runs.  Does anyone reading this worry about non-material web browser stuff?

Update: In the comments section of his blog post, Pigliucci says “For the record: I do accept weak AI, and it should have been clear from several of my comments”.  Please accept my apologies and ignore the bit where I suggested the debate video appeared to suggest otherwise.  Otherwise, I don’t think this affects the post.  For the record, the main reason I said that was the following part of the debate video, about 20 minutes in:

Eliezer Yudkowsky: OK so first, are we agreed that with enough computing power you could simulate the human brain up to behavioural isomorphism, in the same way that we can… that we can…?
Massimo Pigliucci: No.
