I'm sure they're saving the "guilt" and "remorse" circuits for a future upgrade. In time these circuits can be suppressed with the "virtual alcohol interface" option. ;) So, how does he feel now that the two h-robots have perished?
The experiment shows something interesting about one of AI's core problems: learning. A human might fret the first time, and maybe even the second, but eventually a person will just start choosing randomly to make sure that at least someone is saved. Humans have the innate ability to stop and reprogram themselves, so to speak. They "get" that they're doing something wrong. A machine can't make that leap.
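That human-style tie-breaking could be sketched in a few lines. To be clear, this is purely hypothetical (the experiment's actual code isn't shown here): a chooser that dithers between equally scored options, but after enough wasted rounds simply picks one at random, the way the poster suggests a person eventually would.

```python
import random

def choose(options, history, patience=2):
    """Hypothetical sketch: pick the highest-scoring option; if the
    options are tied and we've already dithered too many times,
    break the tie randomly instead of freezing again."""
    best = max(options, key=lambda o: o["score"])
    tied = [o for o in options if o["score"] == best["score"]]
    if len(tied) > 1:
        if history.count("dither") >= patience:
            return random.choice(tied)  # human-style "just pick one"
        history.append("dither")
        return None                     # freeze: no decision this round
    return best

history = []
humans = [{"name": "A", "score": 1.0}, {"name": "B", "score": 1.0}]
results = [choose(humans, history) for _ in range(4)]
# the first `patience` rounds freeze; later rounds commit to someone
```

The point of the sketch is only that "remember you dithered, then change strategy" is itself just another rule someone had to write in, which is the top-down problem discussed below.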
Unfortunately, any degree of higher-level complexity is very hard to achieve with a bottom-up approach.
Yes... Until these guys can incorporate feedback loops that can learn and adapt behaviour, it will always be thus. I'm not sure that it's impossible, but it is certainly hellishly difficult. Any "learning" ability seems more likely to come from a "bottom up" rather than a "top down" approach. (One of the criticisms of Obama's BRAIN Initiative is that its approach is too "top down".)
This is an interesting (slightly ancient) paper on the differences between the two approaches:
http://www.generation5.org/content/1999/topdown.asp?Print=1
Note:
The problem is basically that humans and many animals can conceptualize, something that a machine will never be able to do. A computer can't technically think, so it has to have instructions that in some way cover every possible situation. Imagination, innovation, inspiration and dreaming aren't programmable.
Problem is the robot will express guilt, but won't really feel guilty :)
Yep, the dilemma is in our minds, not in the freakin' robot :D This is AI at its daftest!
The stupid thing about research like this is that you could program the robot (presumably virtual?) to do anything. For example, you could make it cut off decision making after a preset amount of time, or remember the dilemma and not repeat it - so what the hell is the point of the experiment?
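The "preset amount of time" cutoff is indeed trivial to write; a minimal sketch, where the function name, the scoring callback and the deadline are all made up for illustration rather than taken from the actual experiment:

```python
import time

def decide_with_deadline(options, evaluate, deadline_s=0.05):
    """Hypothetical sketch: keep re-scoring options, but commit to the
    current best once the deadline passes instead of dithering forever."""
    start = time.monotonic()
    best, best_score = None, float("-inf")
    while time.monotonic() - start < deadline_s:
        for opt in options:
            score = evaluate(opt)
            if score > best_score:
                best, best_score = opt, score
    return best  # whatever looked best when time ran out

# toy usage: score by string length, commit after 10 ms
picked = decide_with_deadline(["A", "B"], evaluate=len, deadline_s=0.01)
```

Which rather supports the poster's complaint: the robot's "dilemma" only exists because nobody wrote this one extra line of policy.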
Without publishing the complete program, I can't see how the research can be meaningfully written up.
The whole 'experiment' is based on anthropomorphizing - why not save a lot of money and use a Barbie doll, or maybe a shoot-em-up computer game? They would have learned no more or less.
The question of what consciousness (and all that goes with it) actually is can't be dodged. Maybe someone should ask them whether putting their robot under such mental anguish was ethical!
That is the problem with "top down"...
What if we think about a more bottom up approach? What if (and it's a big if) we could make a "baby brain" with some pre-programmed rules and goals, but with the plasticity to adapt and learn (make new connections) by interacting with its environment? It could make mistakes, problem-solve, and adjust its behaviour accordingly.
FWIW, I think this is the only way AI could be properly realised. Obviously, anything like this would be so far in the future that I have to plead guilty to any charge of "promissory posting"... but then again, it took nature a few billion years ;)
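A toy version of that "plasticity" idea, flat preferences adjusted by feedback from the environment, might look like the epsilon-greedy sketch below. Everything here is illustrative: `env_reward`, the action names, and the learning rate are assumptions, not anything from the article.

```python
import random

def learn(env_reward, actions, episodes=500, lr=0.1, eps=0.2):
    """Toy 'baby brain': start with flat preferences and nudge them
    toward whatever the environment rewards, so behaviour adapts."""
    value = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if random.random() < eps:
            action = random.choice(actions)       # explore: make mistakes
        else:
            action = max(actions, key=value.get)  # exploit what was learned
        # adjust the preference from feedback (a simple running update)
        value[action] += lr * (env_reward(action) - value[action])
    return value

# hypothetical environment: "help" pays off, "freeze" does not
values = learn(lambda a: 1.0 if a == "help" else 0.0, ["help", "freeze"])
```

Nothing in the loop was told which action is right; the preference for "help" emerges from interaction, which is the bottom-up flavour being argued for, even if scaling it to anything brain-like is the hellishly difficult part.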