
Could A Computer Ever Be As Smart As You?

August 16, 2010

The aim of TMBL is to make computers smarter.  Any progress in this direction raises the old question about whether a computer could ever be as intelligent as a human.  On a practical level, I don’t think it’s going to happen for a very long time, if at all.  Nevertheless, the in-principle question is an interesting one: in principle, is it possible to make a computer that behaves just like a human and if so, would it be conscious?  Answering yes to the first part is known as the weak AI standpoint and answering yes to the second, the strong AI standpoint.

Many people may find both standpoints ridiculous because the computers we experience in our daily lives seem to share almost nothing with humans.  This might be misleading because the stuff we’ve managed to get computers to do so far only scratches the surface of what they could do.  Just think how different today’s computers are from the computers of just 15 years ago.

So what’s my position? Well I accept the possibility that there may be some fundamental obstacle to computers replicating human behaviour or human consciousness.  That said, I’ve never heard a convincing candidate for such an obstacle and in the absence of any, both the weak AI and strong AI standpoints look pretty reasonable to me.

I was interested to read Massimo Pigliucci’s post relating to this on Rationally Speaking.  He starts by saying lots of sensible stuff about information and pointing out that it doesn’t help creationists or materialists in the way they want it to.  So far, so good.  Then he starts talking about the idea that we might one day be able to upload our consciousnesses into computers.  Now I’m not very convinced that this will ever be practically possible but I can’t see a good reason why it isn’t possible in principle.  The bit where I disagree with Pigliucci starts here:

Briefly, though, I think the burden of proof is on the singularitarians / transhumanists to show that consciousness is just a matter of logical inputs and outputs of the type that can be simulated — at least in principle — in a computer.

So he’s saying that the burden of proof lies with those that think a computer could, in principle, be conscious.  Watching his debate video which the post links to, it seems he also thinks that the burden of proof lies with weak AIers too, and for the same reasons.  I disagree.  Don’t get me wrong: I agree that the weak AI and strong AI cases remain to be proved.  What I disagree with is the implication that the opposite views get to be the default – that those who disagree with the weak AI standpoint have less work to do.  Indeed, if either side has a greater burden of proof I believe it’s those that reject weak AI.  So what’s Pigliucci’s argument?

Like Searle, I think it more reasonable to consider consciousness a biological phenomenon akin to, say, photosynthesis: something that does have a logical structure, but that also requires certain kinds of substrates to actually work (you can simulate photosynthesis in a computer, but you ain’t getting no sugar out of it).

I’ve read Searle arguing by analogy: a computer simulation of water splashing about will never get you wet, and a computer simulation of an explosion is not an explosion.  I think Searle actually accepts weak AI and deploys the argument against strong AI, but no matter.

I can see why this sort of argument might be rather persuasive: it reminds us how different a computer simulation of a thing can be from the thing itself and it then invites us to accept that this probably also applies to the brain.  Here’s where the problem begins: sometimes a computer simulation of a thing is, in essence, just the same as the thing.  How could this be?  Well consider one computer simulating the behaviour of another computer.  For example, this happens when people use an emulator of a ZX Spectrum to play their old games on their PCs.  In this case, there is nothing fundamentally different between the simulation and the thing being simulated; there is nothing missing.  Computation is simulating computation.  The difference between this example and the examples deployed against weak AI is that the thing being simulated is fundamentally doing the same thing as the thing doing the simulating: processing information.

The point is that the brain has been built by evolution as an information processor and that’s the functionality we’re interested in here.  The brain takes in information from a bunch of nerves, processes information and sends information out through other nerves.  That is the job for which natural selection has built the brain.

A man-made pump does fundamentally the same thing as a heart: pump blood.  A man-made camera does fundamentally the same thing as an eye: capture an image.  Similarly, a man-made computer does fundamentally the same thing as a brain: process information.

All of this exposes how the photosynthesis analogy misleads us.  Some thought experiments…  A replacement for a heart must actually pump blood and so a simulation will not do.  A replacement for a photosynthesis mechanism must actually produce sugar and so a simulation will not do.  But a brain replacement is different since if a perfect simulation replaces a brain’s information processing capacity, there is nothing missing – one information processor has been replaced by another.  As far as natural selection would be concerned, the modified organism is indistinguishable from the original.

If someone claims that a mechanical pump is significantly different from the human body’s pump (the heart) then the burden is largely on them to point out a specific difference and explain why it is significant.  Similarly, those that say a man-made information processor (computer) is significantly different from the human body’s information processor (the brain) need to provide a good explanation of what’s different and why that’s important.  They don’t get to just assert that the burden of proof all lies with those that disagree with them.

Pigliucci has one last shot at making us feel uncomfortable with the weak AI position:

I would note in passing that, as Searle pointed out, if one thinks that consciousness can be “uploaded”, one is committed to a type of dualism (something that singularitarians profess to abhor), because one is assuming that the “stuff” of thought is independent from the stuff of brains.

Dualism is the view that the mind is some completely separate thing from the brain and is not material but made of some mind stuff; e.g., my mind does the deciding and then passes the decision to my brain to execute.  This view has become really rather unfashionable, and for good reason.

I’m not convinced that Pigliucci manages to push weak AIers into the dualist camp in the way he’s suggesting.  Weak AIers can just say that mind is computation and the brain is the computer that does the computation.  If this is a dualist view then we are all dualists regarding the relationship between a web browser and the computer on which it runs.  Does anyone reading this worry about non-material web browser stuff?

Update: In the comments section of his blog post, Pigliucci says “For the record: I do accept weak AI, and it should have been clear from several of my comments”.  Please accept my apologies and ignore the bit where I said the debate video appeared to suggest otherwise.  Otherwise, I don’t think this affects the post.  For the record, the main reason I said that was the following part of the debate video, about 20 minutes in:

Eliezer Yudkowsky: OK so first, are we agreed that with enough computing power you could simulate the human brain up to behavioural isomorphism, in the same way that we can… that we can…?
Massimo Pigliucci: No.

The Spiel

August 4, 2010

As mentioned in previous posts, I was recently in Barcelona at WCCI 2010 telling people about my work.  Here is a rough version of the spiel that I developed as I was telling people about my poster.

(To follow this it may help to know that Genetic Programming (GP) is a technique for evolving computer programs where the programs are typically trees like the one shown at the top right of the poster.)
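If it helps to see such a tree concretely, here is a minimal sketch of a tree-based GP individual in Python; the representation and names are my own illustration, not the one on the poster:

```python
import operator

# Each node is either a terminal (a variable name or constant) or a
# tuple (function, left_subtree, right_subtree).  Purely illustrative.
FUNCTIONS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(node, variables):
    """Recursively evaluate a GP tree against a dict of variable values."""
    if isinstance(node, tuple):          # internal node: apply a function
        func, left, right = node
        return FUNCTIONS[func](evaluate(left, variables),
                               evaluate(right, variables))
    if isinstance(node, str):            # terminal: a named variable
        return variables[node]
    return node                          # terminal: a constant

# The tree for (x + 1) * y
tree = ("*", ("+", "x", 1), "y")
print(evaluate(tree, {"x": 2, "y": 5}))  # → 15
```

Evolving such individuals means repeatedly mutating and recombining these trees and keeping the fitter results.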

My Poster To Help Me Explain TMBL to People At WCCI CEC 2010

I’m interested in GP and in particular I’m interested in long term fitness growth.  In GP, we tend to use big populations but relatively few generations, say a hundred or so, so it’s a bit like putting lots of GP individuals in a bag, giving it a good shake and then refining the best results.  This produces some pretty cool stuff but maybe if we could find a way to make the fitness keep improving (even if very slowly) then after, say, a million generations we could get some REALLY cool stuff.

So then the obvious question I face is why doesn’t GP do that already – why does it stagnate?  To answer this, I argue by analogy… Imagine that I give you around a hundred toy blocks with patterns on their surfaces so that there is one way to line them up to make their patterns match (see the poster).  Imagine that I ask you to solve the puzzle but only using trial and error: no pre-planning, no writing things down, just considering random changes and performing them if they improve things.

If I give you this challenge, you will almost certainly take the puzzle, lay it out flat and solve it without much difficulty.  Imagine that I then give you an equivalent set of blocks but this time I insist that you build the blocks vertically in a tower.  I argue that you’re going to find that much harder.  In fact, I argue that with around a hundred blocks, you’ll find it pretty much impossible because however much progress you make, at some point you’re going to have to grab some block near the bottom and ruin all the hard work you’ve put into all the blocks above it.

Now, I argue that if we consider a GP tree flipped upside-down (see the poster), the same principles hold: at some point we need to make changes to a node near the root of the tree to allow it to improve and that ruins all the nodes above it.  The lower blocks in the puzzle support the blocks above them physically; the lower nodes in the inverted GP tree support the nodes above functionally.

So we need to ask: what went wrong when the puzzle became vertical?  I argue that the difference is that it became hard to make changes without ruining what had already been done.  Early in an attempt, it’s pretty easy to make progress but as more is achieved, there is more to lose whenever a block is adjusted.
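The trial-and-error process described above (consider random changes, perform them only if they improve things) is essentially hill climbing.  A generic sketch of that search loop, not TMBL's actual algorithm:

```python
import random

def hill_climb(initial, random_change, fitness, steps=10000):
    """Repeatedly consider a random change and keep it only if it
    improves fitness, as in the blocks puzzle above."""
    current, best = initial, fitness(initial)
    for _ in range(steps):
        candidate = random_change(current)
        score = fitness(candidate)
        if score > best:
            current, best = candidate, score
    return current, best

# Toy usage: climb towards the maximum of -(x - 3)^2.
random.seed(0)
x, score = hill_climb(
    initial=0.0,
    random_change=lambda v: v + random.uniform(-0.1, 0.1),
    fitness=lambda v: -(v - 3) ** 2,
)
print(round(x, 2))  # converges close to 3
```

The vertical-tower problem is exactly the situation where this loop stalls: every available change makes things worse before it can make them better.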

I believe that what stops GP usefully evolving for many generations is that its structure makes it hard to make changes without ruining what has already been done.  I believe that to evolve programs for longer, we must do everything we can to encourage changes that can affect behaviour without ruining existing functionality.  I call these changes tweaks.

So I try to do everything I can to change tree GP to encourage tweaks.  I consider the representation and go through four major design decisions (see poster) where I reject nodes, stacks, points of execution and output overwrites (more explanation to come at some later date).

I put all these decisions together into a representation (which ends up looking pretty similar to linear GP), call it Tweaking Mutation Behaviour Learning (TMBL, pronounced “tumble”) and then perform an empirical comparison.

My test is a very simple, meaningless problem in which I scatter 512 points in a square, randomly assign each of them to one of two classes, positive or negative, and use a fitness measure that rewards individuals if they output a value of the correct sign at each point.
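For concreteness, the problem as described could be set up like this; the seed, names and exact scoring scale are my own (the poster's fitness scale evidently differs, given the level of 840 mentioned below):

```python
import random

random.seed(42)  # illustrative seed

# 512 points scattered in the unit square, each randomly labelled +1 or -1.
points = [(random.random(), random.random()) for _ in range(512)]
labels = [random.choice([+1, -1]) for _ in points]

def fitness(individual):
    """Count the points at which `individual` (a function of x and y)
    outputs a value of the correct sign.  A sketch of the problem as
    described, not the actual benchmark code."""
    return sum(1 for (x, y), label in zip(points, labels)
               if individual(x, y) * label > 0)

# A trivial individual that always outputs +1 scores roughly half the points.
print(fitness(lambda x, y: 1.0))
```

Because the labels are random, there is no pattern to generalise; the problem purely tests how far evolution can push the fitness of a single arbitrary mapping.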

The graph on the poster shows the results for tree GP (blue), linear GP (red) and TMBL (green).  The first thing to observe is that for the length of run that we tend to use for GP (a few hundred generations), TMBL is much, much worse than the other two.  When people propose new GP representations, they typically assess them using computational effort – a measure of how much computational work it typically requires to achieve some pre-specified level of fitness.  Again, using this measure at a fitness level of 840, TMBL is really, really bad.
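Computational effort is usually computed in the style Koza proposed: from the observed fraction of runs that have succeeded by each generation, work out how many independent runs would be needed to hit the target fitness with (say) 99% probability, and minimise the implied number of individual evaluations over generations.  A sketch under that assumption (the poster's exact calculation may differ):

```python
import math

def computational_effort(M, success_by_gen, z=0.99):
    """Koza-style computational effort: the minimum over generations i of
    M * (i + 1) * R(z), where R(z) = ceil(ln(1-z) / ln(1-P(i))) is the
    number of independent runs needed to succeed by generation i with
    probability z, and success_by_gen[i] is the observed success rate P(i)."""
    best = math.inf
    for i, p in enumerate(success_by_gen):
        if p >= 1:                      # every run succeeded: one run suffices
            best = min(best, M * (i + 1))
        elif p > 0:
            runs = math.ceil(math.log(1 - z) / math.log(1 - p))
            best = min(best, M * (i + 1) * runs)
    return best

# Population of 100; half the runs succeed by generation 1, all by generation 2.
print(computational_effort(100, [0.0, 0.5, 1.0]))  # → 300
```

A measure like this rewards reaching a fixed fitness level quickly, which is exactly why it penalises TMBL's slow start.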

As you can imagine, I would argue that although computational effort is a useful measure, it isn’t the whole story; what really matters is the best fitness.  I think the initial evidence is encouraging for two reasons.  One: TMBL ends up with a fitness level that’s quite a bit higher than for tree GP or linear GP.  Two: the results support the idea that after 10,000 generations, TMBL has not stagnated in the way that tree GP and linear GP appear to have.

The First TMBL Poster

July 14, 2010

As I previously mentioned, I hope to use this blog to explain the ideas of TMBL but I’m aware that until then it would be useful to give some sources of information.  With that in mind, here’s my (recently finished) first TMBL poster…

Warning: The full size version is pretty big (3311 x 4681)

My Poster To Help Me Explain TMBL to People At WCCI CEC 2010

I’m off to WCCI 2010 in Barcelona next week (hurrah) and this poster is to help me explain TMBL to folks there.

UPDATE: I’ve just had it printed at A0 and I’m pretty pleased with how it looks but I’ve spotted a mistake.  Kudos to anyone else that spots it (and it’s a small grammatical error so “it’s all nonsense mate” doesn’t count).

The First TMBL Paper

I hope to use this blog to explain many of the ideas behind TMBL.  Until then, I’m afraid you’ll have to read the paper I’ve written on it.  Here’s the paper and here’s its abstract:

If a population of programs evolved not for a few hundred generations but for a few hundred thousand or more, could it generate more interesting behaviours and tackle more complex problems?

We begin to investigate this question by introducing Tweaking Mutation Behaviour Learning (TMBL), a form of evolutionary computation designed to meet this challenge.  Whereas Genetic Programming (GP) typically involves creating a large pool of initial solutions and then shuffling them (with crossover and mutation) over relatively few generations, TMBL focuses on the cumulative acquisition of small adaptive mutations over many generations.  In particular, we aim to reduce limits on long term fitness growth by encouraging tweaks: changes which affect behaviour without ruining the existing functionality. We use this notion to construct a standard representation for TMBL. We then experimentally compare TMBL against linear GP and tree-based GP and find that TMBL shows strong signs of being more conducive to the long term growth of fitness.

UPDATE: Here’s a picture:

The front of a riveting page-turner.