My guess is that if the robot could not sense the steps in its decision-making process, then the decisions would feel free to it. Without a cause-and-effect chain, the decision would feel like it came out of thin air and yet also feel like it was made by the robot. Hence, free will.
~~ Paul
Clever. You would HIDE the master programming. But if our robot is as intelligent as he should be, then he'll eventually notice the patterns of action he always seems to follow, and also notice that the rest of the universe runs on cause and effect while he himself, apparently, does not. Like many of us humans, he will recognize the possibility of an illusion, and then our robot will take a closer look at his own "programming", don't you think? (We should assume he is surrounded only by other robots similar to him, having free-will, and also by similar creatures (animals) constructed of the same material, metal or cytoplasm, but apparently lacking free-will.)
In terms of AI, I've not gone much beyond the child's level either. But I have done years of programming as a way of making a living, and the more I learn, the more I see computer logic as essentially simplistic, at the level of switching an electric light on or off.
Agreed. Anthony Kenny, the great agnostic philosopher, compared a computer to a box of light switches. Here he is in a debate featuring Richard Dawkins.
It's very relevant to what we're discussing (and may in fact have been posted on this site before).
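The "box of light switches" idea can actually be made concrete. As a rough illustration (my own sketch, not anything from Kenny's debate): every operation a computer performs can be wired up from one primitive on/off switch, conventionally the NAND gate. In a few lines of Python:

```python
def nand(a: bool, b: bool) -> bool:
    """The single 'switch' primitive: off only when both inputs are on."""
    return not (a and b)

# Every other logic gate is just NANDs wired together.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
def XOR(a, b): return AND(OR(a, b), nand(a, b))

# Even arithmetic reduces to switches: a half-adder adds two bits.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))  # 1 + 1 = binary 10 -> (False, True)
```

So "simplistic at the level of a light switch" is literally true: arithmetic, memory, and ultimately the whole machine are nothing but vast arrangements of this one switch.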
The radio program discussed concepts such as robot emotion, robot empathy, even a robot feeling pain, but essentially it described mimicking the external appearance of some human behaviour; no-one seriously suggested the robot would actually "feel" anything.
Again, agreed. But it's intellectually beneficial to explore these extreme hypotheticals, even if it only gives us a greater appreciation of our own unique human situation. (For example: 'free-will' . . . I know MANY people who take their freedom of choice for granted - until they are threatened with losing it.)
I was under the impression that THIS thread's hypothetical assumed that future science could frankenstein a creature that was EXACTLY like a human, except for the fact, obviously, that we 'created' it.
So simply for the sake of argument, we'll assume that the robot DOES have feelings and an inner stream of consciousness (complete with symbolic language) and even a coherent bank of memories in which his own past actions of independent free-will have logically led him to his current situation. Of course these memories could all be manufactured by us -- think of the Replicants in the movie Blade Runner . . . . .
Now, aside from the insane level of difficulty (and virtual pointlessness) of creating a robot EXACTLY like a human - a human who has the "illusions" of free-will, self-consciousness and an extra-material 'soul' -
we are only half-way there. For the robot to truly feel like a human with a soul, we're going to need to HIDE every last bit of the work we've done, so that the programming is never revealed. So when the robot seeks out the truth of his origin, he cannot find proof of any answer. To be "truly human" means to live with the confounding and constant contradiction of feeling at once that you were designed by a careful and loving creator - AND - that you are an unnecessary addition to the machinery of Nature. It is this flustering dichotomy that fuels our collective mental engine.