Movies: Is Ex Machina ignoring the hard problem of consciousness? |300|

Alex's questions at the end of the interview:

What do you think about Ex Machina?

Would we ever be able to build something that could merge with consciousness?
 
I saw the film on Christmas Day. As a piece of entertainment, it was fair to middling--the cinematography and extensive use of silence reminded me of 2001: A Space Odyssey. As to metaphysics, it seemed impoverished. At the end of the film, I hadn't made up my mind about Ava--i.e., whether she was truly conscious or not. Did she kill her creator because of an emotional response to him, or because, in the end, it was the logical thing to do if she were to escape? Did she want to escape to be free to experience and learn, or because that was implicit in her programming?

The second question is rather interesting. It seems to accept the idea that consciousness is primal and that the brain is akin to a receiver, but asks whether something could be built that would tap into consciousness in the same way. My answer is that it would have to be at least as complex as the brain. But hey! Why bother, when the brain already exists and can replicate itself through organic reproduction?

That's if one accepts that the brain is a receiver of consciousness. I tend not to accept that, thinking of the brain instead as the "external" image of conscious processes--how they appear when perceived from outside--rather than as an object that receives consciousness. On that view, all AI devices would be the result of human beings piddling about trying to generate images anywhere near as complex as the image of actual brains, without understanding the brain in the first place. Fat chance.
 
Great point!!! One of those "duh, of course" things... once someone points it out :)

I can't claim I originated the idea--Bernardo Kastrup's been positing it for quite some time. :)

The brain is how the "external" image of consciousness is experienced (from the shareable, second-person perspective), and qualia are the corresponding internal experiences (from the usually non-shareable, first-person perspective) of consciousness.

Anyone with access to a brain scanner can see the image of the brain when human beings are (say) perceiving colours or experiencing pain or pleasure. However, usually only our innermost selves can actually experience colour, pain or pleasure. There might be exceptions to this through the empathetic phenomenon of telepathy, however.
 
I agree with Alex that the whole AI and Singularity stuff is bullshit. Forget AI solving the problem of consciousness: AI can't tell you how a bee navigates or how a young child ties her shoelaces. Contrary to what some students might be told in undergraduate Cognitive Psychology courses, AI cannot really tell you how we perceive the world, nor can it tell you how humans form concepts about the world, or how a person is able to reason for themselves that it is better to put their shoes on their feet rather than on their head. Sure, there are specific computational models of how the visual system may perceive three-dimensional objects or how the brain may parse human language, etc., but frankly, these models are very limited and come nowhere near what humans can do.

Maybe somewhere in the deep recesses of the Pentagon lurks a 21st Century Quantum-Nano Sentient Robot, but I very much doubt it. Why would the Pentagon want to spend billions on developing a sentient robot when a stupid drone will serve their purposes (spying on people and blowing crap up) far more cheaply and effectively?

Interesting that one of the discussion members mentioned Wittgenstein (that's pronounced with a "v", not a "w"). Wittgenstein was a philosopher who emphasised the limits of human language. Certain events, such as NDEs, transcend the boundaries of normal experience in such a way that it is almost impossible to put these experiences into words and ground them in our ordinary understanding, which is based on a linguistically constructed reality. Consciousness entails self-awareness, and sometimes self-awareness presents us with a whole other reality that is almost ineffable in its nature. At this point - Wittgenstein tells us - all we can do is shut up. So now I will shut up.
 
I saw the movie when it came out and I enjoyed it for what it was... a good thriller with some interesting interaction between the characters and the usual plot twist at the end. Unfortunately, the premise of the movie is rather lazy, and the authors seem to have put little to no effort into describing the origin of the synthetic intelligence...

On one hand, I can understand that the story focuses on the interaction between the three main characters (Ava, Nathan and Caleb), and willfully avoids going into "boring" details of how the machine(s) became conscious. On the other hand, I couldn't help being irritated by the few feeble allusions to the Turing test and some very vague "internet data" as the basis for Ava's sentience... a pretty lazy way to throw a bone to the audience with a couple of scientific/tech-sounding terms.

Anyway, the movie itself is a pretty good one, probably an 8 out of 10.

If you notice, there has been quite a stream of similarly themed movies in the past couple of years... "Transcendence" (bad), "The Machine" (meh), "Automata" (bad), "Her" (well worth seeing). This modern myth of a powerful-enough computer becoming conscious seems to be growing stronger... just to confirm our possibly darkest fear: that that is what we really are, a bunch of complex-enough circuits.

At least in part, these movies seem to stare in the face of this scary prospect... while the rest is probably a mix of nerdom and playing god. Well, that and, of course, the whole machine-becomes-conscious theme still has a hell of a lot of entertainment potential. So this is probably the reason why this myth is growing ever stronger these days... nerditude + transhumanism + millions of bucks at the movie theaters :D :D

Oh, by the way, I haven't seen this brought up by other posters: Bernardo wrote a nice article on this very movie and the problems that come with it --> http://www.bernardokastrup.com/2015/04/cognitive-short-circuit-of-artificial-consciousness.html

Among other things, it points to a crucial distinction between "artificial intelligence" and "artificial consciousness"... and not a subtle one. There's an ocean of difference!

cheers
 
Mentioning Bernardo's exquisite reflections on the "strong AI" topic is a good turn, but I would also bring up a figure less known here (and in paranormalist circles in general) - psychoanalytic paranthropologist Eric Wargo. His blog is full of highly novel and original ideas, owing to his Lacanian/Zizekian leanings, which are unusual for anomalist circles, in which the Jungian strain of thought is expectedly prevalent. Ideas relevant to the conflict between the "strong AI" dream and our (non-mainstream) knowledge about consciousness appear repeatedly in his blog posts, but this one centers on this very topic. (And here is another post which mentions the Ex Machina movie in its postscript, even though it is not directly related to the "strong AI" theme.)

And yes, many of Wargo's writings will seem a bit... weird to people used to the linear and unambiguous reasoning of the paranormal research literature - contrary to the common myth spread by extreme skeptics, the vast majority of psi proponents are quite rational people, preferring the literal precision of classic scientific discourse to the metaphoric ambivalence of psychoanalytically influenced humanistic narration. In fact, I'm a fairly rational person myself - yet I can also accept the paradoxicality of Lacanian intellectual poetry, which can sometimes lead to insights...
 
These AI-themed movies come and go in waves; there have been a whole bunch of them over the years. And that's leaving out The Matrix, Skynet, WarGames, etc.
 
Fun chat. I enjoyed this film when I saw it earlier in the year.

I suspect that one of the reasons people have little faith in AI is that, until recently, the goal has been to create AI for specific roles (playing chess, predicting weather, etc.), and it has been very successful, given that scope. The goal has not been to create an artificial human. And that will prove to be very tricky.

I can't foresee a way of creating a fully formed adult human robot because it bypasses the history that makes humans who we are.

Think about what made you the person you are today. Think about the sort of person you would be without any backstory whatsoever.

Your parents/family and their influence. Learning to use your body. Learning to use your senses. Learning to communicate. At every stage messing things up, trying again, messing up again. Persevering.

Discovering your peers, socialising, making friends, making enemies. Discovering emotions. Getting hurt. The joy of winning, the grief of losing.

Discovering love, discovering sex. Messing things up, trying again.

Hang ups, insecurities. Battling demons, trying to overcome them...

Trying to make sense of the world. Trying to make sense of the universe.

You get the picture.

To create an 'artificial' human adult we would need to create an 'artificial' human newborn, with all the feedback mechanisms, learning capabilities and ability to grow. This is technologically very tricky (impossible?), not to mention ethically awkward.

Worth the effort? I really don't know.
 
The question I ask myself is this: can human feelings be programmed? Is it possible to write an algorithm for love, grief, elation, awe, etc.? I have a similar problem with that other sci-fi mainstay: teleportation. The idea of teleportation (whether an actual transfer of atoms or a transfer of information used to recreate an object at a remote place) seems to assume that the atoms which build a human body will automatically reproduce memories, feelings, etc. It just doesn't make sense to me.
 
Wonderfully said, Malf. In fact, your ideas on this issue are, to some extent, a simple and compact nutshell of Eric Wargo's sophisticated and lengthy writings on this topic. Eric's main point is the crucial difference between intelligence and sentience: the former is just a part of the latter. Speaking roughly, "intelligence" is a descriptor for the reasoning ability; that is, a skill of rule-based game-playing and problem-solving. Sentience includes the potential for intellection, but it also includes impulses, desires, affections, contexts, metaphors, relations, memories - a whole psychic brew which provides intelligence with stuff to analyse and, most importantly, with the motivation and drive to analyse anything at all. And all the contents of one's mentality which I mentioned require one to live an actual life. Only a living being can be a sentient being - and a computational model processed by a computer has no life. It is just a decontextualised simulation of some separate aspect of life, constructed by living sentients for fun or profit... or in an attempt to achieve a higher social status and deeper emotional satisfaction when the algorithmic simulation you have made passes a "Turing Test" (well, you hope it will!).
 
I like Wargo, but he's lost me on the precognition stuff. :-)
 
Or, he's able to confuse. I'm myself trying to squeeze my way through his labyrinthine scriptures. He reminds me of the deliberately mind-bending writings of some mystics (Crowley, for example).

Well, I always suspected that psychoanalytic stuff is a mysticism trying to present itself in not-too-spiritualised language. After all, Freud and his diverse (and mutually conflicting) bunch of followers are trying to dive into the depths of the unconscious, bringing to light a conceptual description of the eerie realms which were only explored experientially before, by shamans and witches.
 
Singularity technology doesn't require answering philosophical concerns over what it means to be (e.g., consciousness). It simply requires knowing how to compute harder problems, which offloads work from the human and allows them to do more useful work.

Two hundred years ago, performing mathematics was a complex task with not much in the way of means to make it easier.

One hundred years ago, slide rules and reference books sped up the tedious calculation of large numbers, though a given problem might still occupy more than one mathematician.

In the '80s and '90s, a pocket calculator could completely replace volumes of reference books, and people with barely any mathematical training could make use of formulas and solve basic problems.

In 2016, you can load observational data into a computer and, with the right software, it will compute relationships between the variables in the provided data and produce equations that explain them. Hobbyist Python programmers have access to machine learning toolkits that can perform basic classification work after about an hour of learning.
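
To make that concrete, here is a minimal sketch of the kind of hobbyist classification work meant above. The specific toolkit (scikit-learn) and dataset (its bundled iris data) are my own illustrative choices, not anything named in the post:

```python
# A minimal classification sketch with scikit-learn's bundled iris dataset.
# Library and dataset choices are illustrative assumptions, not from the post.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load 150 labelled flower samples (4 numeric features, 3 classes).
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data to test generalisation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit an off-the-shelf classifier and score it on the held-out data.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```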

Engineers do not care about Chalmers's philosophical problems.
 
Excellent show once again Alex!

I really enjoyed listening to the different viewpoints with regard to artificial intelligence and the Singularity. One thing I wanted to talk about was the analogy towards the end about language and evolution. I'm going to shoot it down, and here's why. I studied linguistics in college, so I've looked quite extensively at Old English from roughly 1,000 years ago, as well as Shakespearean English from about 400 years ago. If someone who spoke English 200 years ago were suddenly thrown into the modern day, they wouldn't have much trouble understanding a paragraph in a TV magazine. 200 years ago was not that long ago. Sure, they might not understand what a TV, signals, or an iPhone is, etc., but the grammar has not changed enough that they wouldn't be able to understand anything at all. Language change and evolution do not operate along the same lines. The two may have crossed paths a long time ago, when we developed a more complex larynx that allowed us to speak to one another, but language change is an entirely different problem. Of course, take someone from 1,000 years ago when Old English was still in use and they won't understand us at all, but again, that's a language issue, not an evolutionary one. Given enough time they could probably adapt to our various cultural and technological environments and learn modern-day English.

So I also don't agree that if, hypothetically, there were another species out there that could speak English, they wouldn't be able to understand us. That doesn't make sense at all. If they've been studying us for the sole purpose of communication and have the necessary biological components to communicate with us in the first place, then there is absolutely no reason why they wouldn't be able to communicate. I also don't agree with the lion analogy. Animals, of course, do not possess the same capabilities we do with regard to communication, but if they did and could speak English, I must again pose the question: why would they not be able to communicate, and why would we not be able to understand each other? There wasn't a lot of evidence brought forth, even philosophically, as to why that would be the case.

Anyway, I loved the new, slightly different interview format and had a great time listening!
 
Do any of you here think that you could love an AI?
And I don't mean love as in, "I love my car - it's so cool".
If you knew it was an AI, and it looked like, sounded like, and acted exactly like a human being of either sex, do you think, for whatever reason, you could fall in love with it?

I think I could not. I think I would always remind myself that it is a construction.

We all have different levels of "love", attachments, and friendships with pets, cars, and other material things. But the love-love is usually for another human being (spouse, kids, family).

But... on the other hand, I have never met a lady-AI just yet. She might be very charming, and lure me in. ;)

People 'fall in love' all the time with other people on the internet; often those people are not who they seem, and may be scammers. I'm sure if the AI was sufficiently human-like, you could fall in love with it.
 
I can understand why Alex was so dismissive of the possibility that an AI could achieve a human-like state of consciousness: if one ever does, it would seem to give strong support to the materialistic worldview, and to the proposition that consciousness is an emergent property of a sufficiently complex neural network. That is, unless the complexity is necessary to 'tune in' to the consciousness field.
 