Robot paralysed by choice of who to save

Well, if it's not conscious then I suppose it feels nothing, but if it is conscious then who knows what it could be feeling.

But I'd like to believe it feels like it failed its mother-bot and maybe it tries harder the next time :P
 
The experiment shows something interesting about a core problem of AI: learning. A human might fret the first time, and maybe even the second, but eventually a person is just going to start choosing randomly to make sure that at least someone is saved. Humans have the innate ability to stop and reprogram themselves, so to speak. They "get" that they're doing something wrong. A machine can't make that leap.
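
To be fair, a "deliberate for a while, then just pick someone" rule is easy enough to write down. Here's a rough sketch of what I mean - all the names and the half-second budget are made up by me, nothing to do with the actual experiment:

```python
import random
import time

def choose_rescue(candidates, score, budget_s=0.5):
    """Score each candidate, but stop deliberating once the time budget
    is spent, and break exact ties at random so the robot never stalls
    with two 'humans' at equal risk."""
    deadline = time.monotonic() + budget_s
    scored = []
    for c in candidates:
        if time.monotonic() > deadline:
            break                          # out of time: decide with what we have
        scored.append((score(c), c))
    if not scored:                         # timed out before scoring anyone
        return random.choice(candidates)
    best = max(s for s, _ in scored)
    tied = [c for s, c in scored if s == best]
    return random.choice(tied)             # random tie-break: someone gets saved
```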
 
Yes... Until these guys can incorporate feedback loops that can learn and adapt behaviour, it will always be thus. I'm not sure that it's impossible, but it is certainly hellishly difficult. Any "learning" ability seems more likely to come from a "bottom up" rather than a "top down" approach. (One of the criticisms of Obama's BRAIN Initiative is that its approach is too "top down".)
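
By "feedback loop" I mean something as bare-bones as this: act, observe the error, nudge the behaviour. All the names below are my own invention, purely for illustration:

```python
def adapt(act, observe_error, gain=0.1, steps=100):
    """Minimal adaptive loop: repeatedly act, observe the resulting
    error, and nudge the behaviour parameter to reduce it."""
    setting = 0.0
    for _ in range(steps):
        act(setting)
        error = observe_error()   # feedback from the environment
        setting -= gain * error   # adjust behaviour in response
    return setting
```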

This is an interesting (slightly ancient) paper on the differences between the two approaches:

http://www.generation5.org/content/1999/topdown.asp?Print=1

Note:
Unfortunately, any degree of higher-level complexity is very hard to achieve with a bottom-up approach.
 
The problem is basically that humans and many animals can conceptualize, something that a machine will never be able to do. A computer can't technically think, so it has to have instructions that, in some way, cover every possible situation. Imagination, innovation, inspiration and dreaming aren't programmable.
 
That is the problem with "top down"...

What if we think about a more bottom-up approach? What if (and it's a big if) we could make a "baby brain" with some pre-programmed rules and goals, but with the plasticity to adapt and learn (make new connections) by interacting with its environment? It could make mistakes, problem-solve, and adjust its behaviour accordingly (see the sketch below).

FWIW, I think this is the only way AI could be properly realised. Obviously, anything like this would be so far in the future that I have to plead guilty to any charge of "promissory posting"... but then again, it took nature a few billion years ;)
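
To make that a bit more concrete, here's a toy sketch of "fixed goals plus plasticity" using tabular Q-learning. Everything here is my own illustration, not anyone's actual research code: the reward signal stands in for the pre-programmed goals, and the Q-values are the plastic part that changes with experience.

```python
import random
from collections import defaultdict

class BabyBrain:
    """Toy 'baby brain': fixed goals (the reward signal) plus
    plasticity (Q-values that change with experience)."""

    def __init__(self, actions, lr=0.1, discount=0.9, explore=0.2):
        self.q = defaultdict(float)    # (state, action) -> learned value
        self.actions = actions
        self.lr, self.discount, self.explore = lr, discount, explore

    def act(self, state):
        if random.random() < self.explore:
            return random.choice(self.actions)   # try something new ("make mistakes")
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update: adjust behaviour from feedback.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.discount * best_next
        self.q[(state, action)] += self.lr * (target - self.q[(state, action)])
```

Nothing in there conceptualizes anything, of course, but it does make mistakes and adjust, which is all I'm claiming.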
 
I'm sure they're saving the "guilt" and "remorse" circuits for a future upgrade. In time these circuits can be suppressed with the "virtual alcohol interface" option. ;)
The problem is that the robot will express guilt, but won't really feel guilty :)
 
This is AI at its daftest!

The stupid thing about research like this is that you could program the robot (presumably virtual?) to do anything. For example, you could make it cut off decision making after a preset amount of time, or have it remember the dilemma and not repeat it - so what the hell is the point of the experiment?
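
For instance, "remember the dilemma and not repeat it" is only a few lines. This is purely illustrative - invented names, obviously not the researchers' code:

```python
# Remember dilemmas already faced and commit to the earlier resolution
# instead of re-deliberating.
seen_dilemmas = {}

def resolve(dilemma_key, candidates, decide):
    if dilemma_key in seen_dilemmas:
        return seen_dilemmas[dilemma_key]   # don't agonise twice
    choice = decide(candidates)             # any decision rule at all
    seen_dilemmas[dilemma_key] = choice
    return choice
```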

Without publishing the complete program, I can't see how the research can be meaningfully written up.

The whole 'experiment' is based on anthropomorphizing - why not save a lot of money and use a Barbie doll, or maybe a shoot-em-up computer game? They would have learned no more or less.

The question as to what consciousness (and all that goes with it) actually is can't be dodged. Maybe someone should ask them whether putting their robot under such mental anguish was ethical!

David
 
Yep, the dilemma is in our minds, not in the freakin' robot :D

It's like saying that in a video game the AI code wasn't able to run the pathfinding routine efficiently enough, causing the enemies' evasive manoeuvres to be ineffective and ultimately resulting in the enemies being blasted by the player's fire.
 
I think you're right in that, without the ability to interact with the environment on its own terms, an AI has no chance at anything resembling sentience. I personally don't think it will ever be possible with linear, or even massively parallel, processing as we understand it, or with code as we know it. These things are self-limiting.

I personally think that the answers will come once we stop thinking about consciousness as if it were some sort of computer. If we see it as a property of physics, then we'll be better able to understand how to harness it. Even if consciousness is non-local and itself deeply mysterious, which I think it is, there is surely a material way to direct and focus it without understanding everything about it. That's what our bodies do, after all. There's no reason we can't figure that out and duplicate it. That would make for a computer beyond our present imagination.
 
The AI crowd have been trying that for decades - but what exactly would your 'baby brain' be like? It isn't the size that makes the problem tough; it is getting even the tiniest bit of experience into the system. Show me how to make an artificial snail experience joy when the rain falls, and you have cracked the problem!

Just because it has taken billions of years of evolution to produce something doesn't necessarily make it hard to understand. People understand the heart, or the liver, or the lungs pretty well. They can at least conceive of making replacements too, but the brain seems different, doesn't it? I don't think there is any way to make matter actually feel something! A thermostat doesn't feel hot when it turns off the heating - it is just a mechanism. You need to think about how you would make a feeling thermostat, one which turns the heating off because it feels hot or cold - then you might have your baby brain!

Of course, that doesn't totally preclude making a machine that does what we suspect a brain really does - tune into a non-material conscious signal - but if you want to do that, you probably need to do some seriously spooky research!

David
 