Could a robot have a soul?

You really feel that way? I have no illusion that I was designed by a "careful and loving creator." Does that make me less than truly human?

You are part of the human family. You are familiar with the concept of a creator, along with the fact that many sensible and intelligent people throughout history have "felt" the existence of a higher power.
So I think this has nothing to do with your singular chosen ideology on this particular day -- we're building an average (muddled) human/robot, not a middle-aged, committed atheist/theist. The goal is to duplicate the human environment, and that environment contains spiritual/materialistic uncertainty.

Anyway, I'd say that most people, maybe not you, at some point in their lives, for some period of time, accept a spiritual explanation for their existence.
 
Wait a minute. Why isn't the robot a slave to cause and effect?


If he can, sure. So are humans, as far as we can tell from neurophysiology. That doesn't mean he would be successful.


Why are we assuming this?

~~ Paul

I have not been clear with this simple idea because I fumbled the delivery... May I backtrack for a moment?

The question is: "Can a robot have a soul?" But this particular question, though intriguing, can't be answered in that form, because not only can't we define 'soul', we cannot even prove that we humans HAVE one.
Still, some/many/most of us "feel" as if we have a soul. Now, same as with consciousness and free will, this "feeling" may in fact be an illusion. But, importantly, no matter whether these things are real or illusions, the feeling that we have a soul persists. That cannot be denied. And it persists despite the lack of any empirical evidence. Don't over-think this point... We've simply re-worded the question to say: "Can a robot feel like it has a soul?"

The only way (I can think of) to lend a robot the feeling of a soul is to make the robot "feel" the same way a human feels about existence. Again, this is lower-level logic and no alarms should go off. I mean: we feel like we have a soul (mostly), so program a robot to feel like us and bodda-boom bodda-bing -- the robot feels like it has a soul. And, to be fair, we're going to assume we have a brilliant programmer who can pull this off.

The point I'm introducing to the thread is also nothing to brag about: just how incredibly HARD it would be to create these various illusions/feelings for our robot. You suggested that one way to make the robot 'feel' as if he had free will was to restrict access to the initial steps in the decision-making process, so that the robot only "saw" the end result, and would assume that it was his own volition that made the decision.* And I thought you'd made a good suggestion. Then I guess I just assumed that, in order to maintain the illusions we'd programmed into it, the robot would need to perfectly "blend" in with his surroundings: to share the same mental composition as his comrades, and to be made of the same material as his surroundings (just like we humans are). The point here, again not too bright, is that if the robot were to notice a glaring distinction between himself and EVERYTHING else, the game would instantly be up -- he would know that he'd been designed. Same as if you went to get an X-ray, and it showed a bunch of gears and levers working inside you.

The final point is the simplest of all: one of the most fundamental aspects of being human is NOT KNOWING WHERE WE CAME FROM.** It is the mystery of all mysteries, and every single individual craves an answer, one that will give meaning to his life. I KNOW we both accept this universal human curiosity - it's why we're on this forum. And essentially, the fact that we DON'T have a ready-made and provable answer allows us the freedom to create our own faith (science) or invest ourselves in traditional wisdom (religion). Through freedom, we find our faith and create our own meaning. What I'm saying, in a ham-handed way I admit, is that searching for a soul gives us a soul (or the illusion of a soul), so the robot that we program MUST suffer the same uncertainties that we do.

*(Here, I will avoid introducing the possible infinite regress implied by one deterministic (biological) robot, programming another robot.)
**with 100 percent consensus, like God on the White House lawn
 

That's a great post, Liberty.
 
The only way (i can think of) to lend a robot the feeling of a soul, is to make the robot "feel" the same way a human feels about existence. Again, this is lower level logic and no alarms should go off.
Tricky, very tricky.

Admittedly the concept of the soul is a slippery one which all of us have difficulty in pinning down in this context. However, here we have the concept of robot feeling, which is also problematic. Whenever anyone is asked to clarify what is meant by this, they would probably (as happened in the radio programme) fall back on behaviour as a way of representing feeling. In other words, if the robot behaves as if it has feelings, then it can simulate the appearance of having feelings, and to some that seems sufficient.

This gives rise to some other thoughts. One, that the robot might enact a set of behaviour to simulate having a particular feeling, while actually feeling nothing at all. The other is that almost all the answers here seem to approach the issue from the perspective of simulating or imitating a human. But if the robot really had a soul (whatever that is), then presumably it would not be necessary to simulate or put on a pretence, it would simply act in its own particular way. Also, in so doing, it might not behave at all like a human, and it might be a false trail to look for human-like behaviour as an indication of the robot having such a thing.
 
What if, instead of building the robot with transistors, we instead built its brain with neurons, wired together exactly like that of the human brain?

Is there anyone who would doubt that this robot has a "soul" in exactly the same way that you and I have souls?

It would be a "biological robot," just like you and me!
Now this is the great mystery, because we might as well simulate those neurons on a computer - which brings us back to transistors!
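The "simulate the neurons" move can be made concrete with the textbook leaky integrate-and-fire model. This is a deliberately minimal sketch, not a claim about how such a robot would actually be built; all the parameters below are illustrative, not biologically fitted:

```python
# Minimal leaky integrate-and-fire neuron (textbook model, toy parameters).
# Integrates dV/dt = (v_rest - V + I) / tau; fires and resets at threshold.
def simulate(input_current, steps=1000, dt=0.1,
             tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Return the spike times (in ms) produced by a constant input current."""
    v = v_rest
    spikes = []
    for step in range(steps):
        v += dt * (v_rest - v + input_current) / tau  # leaky integration
        if v >= v_thresh:           # threshold crossed: record a spike...
            spikes.append(step * dt)
            v = v_reset             # ...and reset the membrane voltage
    return spikes

# A constant drive above threshold yields a regular spike train;
# a sub-threshold drive yields silence.
print(simulate(20.0))
print(simulate(0.0))
```

Which is the whole point: the "neuron" here is just arithmetic on a voltage variable, and we are back on transistors again.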

We don't need to phrase this in terms of a soul; we can just ask whether a robot could feel or experience absolutely anything. If it could, then presumably the computer performing the simulation would have experiences as well - but that opens up a series of paradoxes that I have explored before, and will again if anyone thinks such a computer simulation would feel anything (or have a soul, if you insist).

If you are tempted to believe that computers can have experiences, are you worried that typing on your computer might be causing it angst, or that it suffers at the thought that it might be shut down? I don't mean that as a joke - after all, it 'knows' about the concept of shutting itself down, so why precisely couldn't it suffer when it has to do those tasks?

The obvious way out of this is that the brain does not create consciousness; it couples into a non-material consciousness, which in turn opens up the possibility that reports of reincarnation and afterlife evidence might be real. I am a cautious guy and I don't want to jump that whole list of possible implications in one go, but I do think it is important to follow the idea that a physical object can have experiences (which seems so reasonable at first :D ) to its ultimate conclusion, so if this is something that you really believe, I'll show you where it goes!

David
 
If you are tempted to believe that computers can have experiences, are you worried that typing on your computer might be causing it angst, or that it suffers at the thought that it might be shut down? I don't mean that as a joke - after all, it 'knows' about the concept of shutting itself down, so why precisely couldn't it suffer when it has to do those tasks?
On a perhaps related note, I used to have an early home computer (the TI 99/4A) which had a tendency to "lock up" unpredictably, which could only be resolved by totally powering it down, which resulted in a complete erasing of the entire memory. I've heard it suggested that this was caused by spikes or surges on the incoming A/C mains power. However, I noticed that it seemed to happen when I was becoming stressed or frustrated with some program that I was working on. To this day I'm still not sure which is the better explanation.
 
What if, instead of building the robot with transistors, we instead built its brain with neurons, wired together exactly like that of the human brain?

Is there anyone who would doubt that this robot has a "soul" in exactly the same way that you and I have souls?

It would be a "biological robot," just like you and me!
I've no idea whether it would or would not have a soul. I look at it somewhat in the light of Dr Stephan Mayer's comments on the recent video discussion (see http://forum.mind-energy.net/skepti...-lance-becker-stephan-mayer-2.html#post170391 ). He could see a very clear difference between a living human being and one where the person had passed away, though there was no measurable physiological difference. So the question arises, if one could construct this neurological robot, whether or not it would be possible to, as it were, "invite" a soul to take up residence within it.
 
Here's a question. If a robot claimed it was conscious and showed all outward signs of being so, would you treat it with the same respect as other living, conscious beings?
 
Here's a question. If a robot claimed it was conscious and showed all outward signs of being so, would you treat it with the same respect as other living, conscious beings?
That raises (at least) two separate questions in clarification.
  • What outward signs would a robot be expected to show?
  • Do we currently treat other living conscious beings with respect?
 
Here's a question. If a robot claimed it was conscious and showed all outward signs of being so, would you treat it with the same respect as other living, conscious beings?
Yep, I suspect the more human it looked and sounded, the more humanly we would treat it. It would need to have a face, I suspect. Dealing with a box with a screen, for example, would be harder... It's innate; it's just how we're wired.

Also given it has a consciousness, are we assigning it a personality? It just may not be very pleasant....

Could you fall in love with a robot?
 
I get the feeling that it's very hard to separate the ideas of being conscious, or of having a soul, from being human. As a consequence, there is a (misguided, in my opinion) fixation on creating a robot which can mimic a human being. To me that approach doesn't begin to answer the question; it's all about creating an illusion, rather than creating the real thing.
 
Yep, I suspect the more human it looked and sounded, the more humanly we would treat it.

Nah. Wrapped up with 'love' is the ever-present possibility of 'loss'. Nobody that I know cares very much about their own death, but they care very much about losing ones that they love. Souls and afterlives are uncertain -- but death is always certain -- and the ever-present possibility of loss is part of our subconscious fear. I don't even want to talk about it. I love my dog and I love my truck, but if I lose my truck, I can get a replacement... A robot? They're programmed to be loyal, probably, so you could never lose that. And if they "die," hell -- just transfer their memory bank into a new model.

Maybe it's necessary to remember that less than 200 years ago here, we treated our own brothers and sisters as three-fifths human, and the Supreme Court viewed them with the same status as cattle.
 
That raises (at least) two separate questions in clarification.
  • What outward signs would a robot be expected to show?
  • Do we currently treat other living conscious beings with respect?
One by one.

1. Take anything you would consider a sign of something or someone being conscious, and then imagine it is the robot that shows those same signs to the degrees you would expect of a conscious being.

2. Let's forget the word "respect". Let's change that question simply to: if 1 is true, would you treat said robot as if it were conscious?
 
Here's a question. If a robot claimed it was conscious and showed all outward signs of being so, would you treat it with the same respect as other living, conscious beings?
I think possibly I would, but you have to be very careful about possible cheat mechanisms that Alan Turing never thought of back in the early days of computers. For example:

Suppose you started a conversation with a computer, and said "Hi - how are you?", and the computer scanned a vast list of conversations culled off the internet to find a conversation that started that way, and simply printed the reply, "Oh OK, but I had a headache all day!". This impresses you, so you reply, "Sorry to hear that - have you taken anything for it?", and it again scanned the internet for all conversations that started that way, and printed the next response from one of the matching conversations, "Yes I took an ibuprofen."

This might look impressive, but it would involve nothing at all other than brute force technology. Indeed, maybe the NSA boffins play that game for real using all the conversations they have collected :) Thus I think it would take some time to know if the robot/computer really understood anything.
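For what it's worth, the lookup "cheat" described above can be sketched in a few lines. The tiny corpus and exact-match rule here are invented purely for illustration; a real system would index millions of logged dialogues and match approximately:

```python
# Toy version of the "scan past conversations" cheat: reply with whatever
# followed the same prompt in some previously logged dialogue.
# The corpus and the exact-match rule are invented for illustration.
CORPUS = [
    ("Hi - how are you?", "Oh OK, but I had a headache all day!"),
    ("Sorry to hear that - have you taken anything for it?",
     "Yes I took an ibuprofen."),
]

def lookup_reply(prompt):
    """Return the canned response that followed `prompt` in a logged
    conversation, or None if no logged conversation matches."""
    for logged_prompt, logged_reply in CORPUS:
        if logged_prompt == prompt:
            return logged_reply
    return None

print(lookup_reply("Hi - how are you?"))
```

Nothing in this resembles understanding: the program never represents a headache, only strings that happened to follow other strings.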

I really honestly don't think that is likely. The subject of AI (Artificial Intelligence) is full of bold claims and little end result.

Tell me, Bishop, do you think a computer could be conscious if it executed the correct program?

David
 

Hey David. Here's a different way to look at it.

Let's agree that consciousness:
1. is separate from the physical brain
2. is not generated by the brain
3. is filtered by the brain
4. can exist once the physical brain is dead

In this way, if you think of "correct programming" or a machine or computing power as analogous to the brain, then consciousness could absolutely exist within a robot independent of those mechanisms, and that robot could then have a soul.

The question you really need to answer is why can't consciousness be filtered by a machine? What makes the brain so special when it really isn't even needed? A complex machine could do the same thing without question.

Though I may need some convincing, I'm pretty sure I would treat a robot that claimed to be, and appeared to be, self aware much differently than an inanimate object. Otherwise we could be getting into some horrific abuses. I'm not saying that that robot exists today, btw.
 

Notwithstanding how difficult it is to pin down and define concepts like "consciousness" and "soul" :), what robot behaviour would convince you that it had been achieved? Empathy? Frailties? Doubt? Telepathy:eek:?

I've replied to David, but anyone?
 
I doubt that a robot could have a soul, or, as I would prefer to recast it, a consciousness. My suspicion is that consciousness is the self-aware face of the selection of alternatives in nature, without which it isn't really selection at all, but a kind of determinism, as in the computational process. If this speculation is correct, then of course a robot cannot participate: it is a mechanism, specifically a computational mechanism, and not an organism. The basic idea is that there is an important difference between mechanism and organism.

Now, whether humans might one day create a form of entity (let us call it an "x") that may be conscious and yet different in important ways from both a human and a robot, I would not choose to rule out. But I deeply suspect that if this should ever prove possible, it will be by invoking in "x", or causing to be invoked there, something very akin to what nature already does in the expression of organism. And whatever that is... it is not a robot.
 
Notwithstanding how difficult it is to pin down and define concepts like "consciousness" and "soul" :), what robot behaviour would convince you that it had been achieved? Empathy? Frailties? Doubt? Telepathy:eek:?

I've replied to David, but anyone?

I don't think behavior alone would be convincing. As David pointed out, given enough computing resources and sufficiently sophisticated algorithms, a machine could mimic any behavior you like.

Now if the programming was a minimal "starter set" (like that of a newborn child) and the robot then learned and assimilated on its own and seemed to exhibit consciousness, that would be interesting.

Pat
 
Hey David. Here's a different way to look it.

Let's agree that consciousness:
1. is separate from the physical brain
2. is not generated by the brain
3. is filtered by the brain
4. can exist once the physical brain is dead
Well, in a way, I'd rather just accept those as reasonable postulates. I am wary of 'belief' after leaving Christianity decades ago! I think a lot of the problem is that conventional science can't seem to take postulates like that seriously and explore what they would mean. I'd also treat number 4 as a quite distinct question.

In this way, if you think of "correct programming" or a machine or computing power as analogous to the brain, then consciousness could absolutely exist within a robot independent those mechanisms, and that robot could then have a soul.

The question you really need to answer is why can't consciousness be filtered by a machine? What makes the brain so special when it really isn't even needed? A complex machine could do the same thing without question.
This is a very different sort of question - I mean sure, if your postulates are correct, then somewhere in our brains, there is presumably a part or a mechanism that can 'tune in' to our external consciousness. Yes, I suppose you could make a robot with the same mechanism, but the point is that instead of desperately trying to devise algorithms to simulate human behaviour, you would research the nature of that link. In other words, you might call the end product a robot, but it wouldn't be what we now think of as a robot.

Though I may need some convincing, I'm pretty sure I would treat a robot that claimed to be, and appeared to be, self aware much differently than an inanimate object. Otherwise we could be getting into some horrific abuses. I'm not saying that that robot exists today, btw.
Your postulates do provide plausible explanations for a lot of interesting problems and phenomena:

1) NDEs obviously make good sense.

2) Multiple personality disorders would be analogous to a badly tuned analog radio. The drugs used to treat them would be analogous to the process of tweaking the tuning.

3) The observation that the brain seems less active (observed by fMRI) while a person is undergoing a profound experience with psilocybin makes obvious sense.

4) All the various ψ phenomena, that can currently only be explained as mistakes/fraud start to make sense. I mean, if consciousness exists in a separate realm, we don't know what connections might exist there.

5) The whole problem of the "Hard Problem" disappears.

I think the whole AI effort has been misguided - concentrating more or less on cheating to pass Alan Turing's famous test. I remember there was a phase when AI concentrated on trying to understand certain social settings, such as going to a restaurant (a lot of the work was done in the US :) ).

People figured out knowledge frames that supposedly helped the computer to understand what people did in a restaurant! However, all this was illusory because nobody even pretended to have programmed in what it felt like to feel hungry, to like a particular food, to fancy the girl sitting opposite you, to want to clinch a business deal etc!

David
 