The death of AI (yet again)

I'd suggest instead, Malf, that "Reports on Facebook chatbots developing their own language have been greatly exaggerated" if this article can be trusted to have accurately conveyed a sample of the purportedly "new language" (which is used by the bots - perplexingly, successfully at times - to negotiate deals):

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to
Maybe there's a hidden sophistication and new grammar (there is no new vocabulary) that I'm just not seeing?

Edit: Oh, I see that the article at the link you shared now provides a similar transcript too - pretty sure it didn't when I last looked at it.
 
Edit: Oh, I see that the article at the link you shared now provides a similar transcript too - pretty sure it didn't when I last looked at it.
Yes. This looks a different piece than the one I originally linked to... sneaky! :D
 
I'd suggest instead, Malf, that "Reports on Facebook chatbots developing their own language have been greatly exaggerated" if this article can be trusted to have accurately conveyed a sample of the purportedly "new language" (which is used by the bots - perplexingly, successfully at times - to negotiate deals):
I think that story reveals something about AI! I mean what field would publicise its 'progress' in that way? Programmers see zillions of examples of rubbish like that, and every now and again something can be amusing (particularly errors in graphics programs) but they sure don't interpret the result as anything other than a bug that needs fixing.

There is a bug in this forum that I keep niggling Alex about, in which it will send someone an email to confirm they want a PW change, then make the change but fail to send another email to say what it is! I suppose if the forum was made with AI, this could be interpreted as some sort of incipient forum malevolence - "Ha ha ha I really enjoy locking people out of the forum!".

David
 
There is a bug in this forum that I keep niggling Alex about, in which it will send someone an email to confirm they want a PW change, then make the change but fail to send another email to say what it is! I suppose if the forum was made with AI, this could be interpreted as some sort of incipient forum malevolence - "Ha ha ha I really enjoy locking people out of the forum!".
Programmers sometimes jokingly refer to software bugs as 'undocumented features'. In fact, at one stage in the past I took advantage of this very feature: I needed a timeout from the forum and used this method to make my password disappear.
 
Google’s artificial-intelligence guru, Demis Hassabis, has unveiled the company’s grand plan to solve intelligence by unraveling the algorithms, architectures, functions, and representations used in the human brain. But is that all there is to it?
So if GOOGLE needs a grand plan to solve AI, how come they and others are supposed to be using it already ;)

AI is a concept built out of hype, and based on a misunderstanding of the nature of the mind. The idea that mind must be a form of computation is just wrong. Most ordinary programs can be thought of as AI - for example, the warning when you are about to do something like delete a load of files could be interpreted as the program helpfully 'realising' that its user is about to make a mistake and asking him for confirmation! There is no clear dividing line between that sort of behaviour and AI, nor is there a clear definition of what constitutes an AI program.

The fascinating thing about the current AI hype is that I think we will live to see it unravel, just as it did last time. Maybe people will learn something from the next crash - or maybe there will be another burst of hype in about 2070!

David
 
The fascinating thing about the current AI hype is that I think we will live to see it unravel, just as it did last time. Maybe people will learn something from the next crash - or maybe there will be another burst of hype in about 2070!
What do you think AI will look like in 1000 years' time? 2000?
 
What do you think AI will look like in 1000 years' time? 2000?
If we haven't blown ourselves to bits by then, I think we may well have found ways to create machines that tap into the non-physical world - so they will be intelligent, but not by virtue of computation. AI as such will be seen as something of a dead end, and not even distinguishable from general computer programming.

David
 
What do you think AI will look like in 1000 years' time? 2000?
Fair question. How would you respond?

I simply struggle with the notion of man creating consciousness while we can't even define the term. I mean, it's one thing to watch auto workers build cars and then construct robotics that can mimic or even improve upon that process. It's observable, well understood, and robotic proxies can be tested in very clear terms of success/failure. I'm not sure how to "get there from here", so to speak, when discussing AI evolving into consciousness.
 
Fair question. How would you respond?
Of course, I have no idea! :D

I think AI will be amazing given another millennium or two of development, but it is very unlikely to be the same as human consciousness. It will not exhibit the memory fallibilities, fragile egos, insecurities, petty jealousies, etc.
 
If we haven't blown ourselves to bits by then, I think we may well have found ways to create machines that tap into the non-physical world - so they will be intelligent, but not by virtue of computation. AI as such will be seen as something of a dead end, and not even distinguishable from general computer programming.
I guess I've said this before, but that probably applies to many of the discussions we all have here.

My suggestion regarding a conscious machine, is that it would be something like a receptacle, or a home in which a consciousness could reside. It might be considered equivalent to building a nesting box and then sitting back to await the arrival of some residents. This though strikes me as very possibly something of a nightmare scenario. On the one hand we find ourselves in the presence of a machine which is possessed or haunted. (Well I suppose that in itself need not be a nightmare). But what of the spirit which finds itself inhabiting the machine? Would it feel entrapped, and curse its creators for constructing such a world? Would it radiate love, or display suicidal tendencies?

The question of intelligence is in my view quite different to that of consciousness. Consciousness may be intelligent. But the types of machine AI currently envisaged are not conscious.
 
AI is a concept built out of hype, and based on a misunderstanding of the nature of the mind. The idea that mind must be a form of computation is just wrong. Most ordinary programs can be thought of as AI - for example, the warning when you are about to do something like delete a load of files could be interpreted as the program helpfully 'realising' that its user is about to make a mistake and asking him for confirmation! There is no clear dividing line between that sort of behaviour and AI, nor is there a clear definition of what constitutes an AI program.
Hi David, thanks for weighing in. I am torn in different directions with this, but I suppose I can sum up my (provisional) position with a few bullet points:

  • That to which we refer as artificial "intelligence" might be defined and delineated by its capacity to learn and to solve problems that it was not explicitly programmed for, but that it was rather programmed implicitly for, through "learning algorithms".
  • Some of that which we call "intelligence" in humans quite clearly can be simulated through "computation". Take, for example, our learning to walk: an intelligent process which is based on learning how to integrate a whole lot of sensations whilst outputting muscle movements such that one is propelled forwards whilst upright, rather than falling over. There is no question that this "computational" process of "learning to walk" can be simulated by machines, because it already has been - or at least walking itself has been simulated by machines. Other examples: Google's AI has been trained to play and win at a variety of ancient Atari games: an obvious example of "computational" intelligence/learning.
  • Some of that which we call "intelligence" in humans quite probably is inextricably linked to consciousness, and thus not subject to computation. I'm thinking here of "higher" intelligence, in particular abstract and mathematical cognition: the sort of genius shown by Albert Einstein amongst others, which seems to require self-reflective ability. One seems to need to be able to hold a concept in mind and relate it to other concepts, etc etc. Could this intelligence be "reduced" to "computational" intelligence? I really don't know, but for now I say "probably not". But that really is just a "probably".
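A minimal concrete instance of the criterion in my first bullet point (a toy sketch of my own, not anything from the article): the program below is never told the rule it ends up implementing. It is given only examples and a generic update rule, and the learned behaviour lives in the weights rather than in any explicitly coded logic.

```python
# Toy perceptron: learns the AND function from examples alone.
# The generic update rule knows nothing about AND; the 'rule'
# the program ends up following is implicit in the trained weights.

def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0   # same update rule for any
            w1 += lr * err * x1   # linearly separable target
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# Train from input/output examples only - no AND logic anywhere above.
and_fn = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```

Trivial as it is, it fits the definition: swap in examples of OR and the identical code "learns" a different behaviour, which is the sense in which it was programmed implicitly rather than explicitly.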
 
I think even your definition is really a continuum. For example, does a program 'learn' as it reads in data, or does it learn if it makes an internal search - for example, searching the integers looking for primes? Does it 'learn' when it consumes large quantities of text and collects various sorts of word-association information? Even English grammar is fairly algorithmic, so some 'understanding' can be gleaned relatively easily.
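To make the word-association case concrete (my own toy illustration, not any particular system): the whole "learning" process can be a few lines of counting, which is why it sits so ambiguously on the continuum.

```python
# Collect adjacent-word association counts from raw text -
# the sort of 'learning from large quantities of text' in question.
from collections import Counter, defaultdict

def word_pairs(text):
    """Count, for each word, which words immediately follow it."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

pairs = word_pairs("the cat sat on the mat the cat ran")
# pairs['the'] now records which words followed 'the', and how often
```

Whether accumulating such counts deserves the word 'learning' is exactly the definitional question David raises: the program's behaviour does change with the data it reads, yet nothing happens here beyond tallying.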

More generally, what does 'understanding' mean? Suppose a computer understands that people get married to have children, and that there are two sexes, and that most first names are unique to one sex (with obvious exceptions, such as Pat). Does it really make sense to say it understands anything? Facts like these can be stored and matched without any grasp of what marriage or sex actually are. Given this lack of real understanding, I certainly don't think AI will understand advanced concepts of this sort. It is interesting that some mathematicians do actually use computers. But they program them to do long strings of calculations that they (the mathematicians) have figured out will settle some point. The conceptual reasoning comes entirely from the human mathematician.

Of course, there were attempts to use AI to do maths - particularly calculus. However modern algebra programs do not use such methods and are not touted as AI software.

http://www.maheshsubramaniya.com/ar...ifferential-calculus-using-prolog-part-1.html

(Differentiation is an easy problem because it recursively decomposes, but integration, the reverse process, is much harder.) The best integration algorithms have been hand-crafted by mathematicians and programmers.
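The recursive decomposition is easy to sketch (a toy illustration of the idea, not the Prolog program at the link): each differentiation rule reduces the problem to derivatives of strictly smaller subexpressions, so plain recursion does all the work.

```python
# Toy symbolic differentiator. Expressions are nested tuples,
# e.g. ('+', ('*', 'x', 'x'), 'x') represents x*x + x.
# Every rule recurses into smaller subexpressions, which is why
# differentiation decomposes so cleanly; integration has no
# comparable decomposition.

def diff(e, var):
    if isinstance(e, (int, float)):   # constant rule: dc/dx = 0
        return 0
    if isinstance(e, str):            # variable rule: dx/dx = 1, dy/dx = 0
        return 1 if e == var else 0
    op, a, b = e
    if op == '+':                     # sum rule: (a + b)' = a' + b'
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                     # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")
```

So `diff(('*', 'x', 'x'), 'x')` mechanically yields `('+', ('*', 1, 'x'), ('*', 'x', 1))`, i.e. 1·x + x·1; simplifying that to 2x is a separate (and harder) pass, and running the rules "in reverse" for integration does not work at all, which is David's point.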

Your second point is clearly true, but that isn't really what AI is supposed to be about.

Your third point seems to relate much more to the concept of AI that is projected by hype. I like the example of driverless cars. I don't believe those cars will ever deliver the dream that they represent - drive at a reasonable speed but safely (or at least as well as people do) under any conditions, down any roads, in any traffic, handling diversion signs, horses, fallen trees, lorries shedding their loads, ice, and road works, with nobody at the wheel ready to take over. I think GOOGLE have created a wonderful test of AI with this project. My guess is that the project will be shelved at some point because of legal problems!

David
 
I think even your definition is really a continuum.
I'm not sure how relevant this is. Many things that we accept as real exist on a continuum, including life itself. We might ask: "When is the exact moment of a person's death? Aren't life and death on a continuum?" But this wouldn't cause us to deny the existence and definition of life, as you seem to want to do with artificial intelligence!

In any case, in the rest of that paragraph you seem to ignore the criterion that I stipulated as a provisional definition of artificial intelligence: that the learning be not explicitly programmed for, and that it instead occur implicitly through "learning algorithms". Your examples seem to me to be examples of explicitly coded "learning", rather than implicit learning through "learning algorithms".

I'm left wondering, then, what your objection to my definition is.

More generally, what does 'understanding' mean? Suppose a computer understands that people get married to have children, and that there are two sexes, and that most first names are unique to one sex (with obvious exceptions, such as Pat). Does it really make sense to say it understands anything?
Good questions. I asked similar questions in post #82 in this thread (nobody bit):

Question (open-ended): are "understanding" and "judgements" (referenced in the above) predicated on consciousness, or can (non-conscious) intelligence be fairly claimed (as in the above) to possess/make them?
To offer an answer then: I think that, strictly speaking, understanding is a function of consciousness, but that one can speak metaphorically of an artificially intelligent agent "understanding" something when it is capable of using that something in a (number of) context-appropriate way(s) for which it was not explicitly coded, but rather developed "autonomously" via implicit learning algorithms.

Your second point is clearly true, but that isn't really what AI is supposed to be about.
Don't you think that the AI tent is broad; that it has room for this as much as for more "high-falutin'" artificial intelligences?

Your third point seems to relate much more to the concept of AI that is projected by hype. I like the example of driverless cars. I don't believe those cars will ever deliver the dream that they represent - drive at a reasonable speed but safely (or at least as well as people do) under any conditions, down any roads, in any traffic, handling diversion signs, horses, fallen trees, lorries shedding their loads, ice, and road works, with nobody at the wheel ready to take over. I think GOOGLE have created a wonderful test of AI with this project. My guess is that the project will be shelved at some point because of legal problems!
That's curious... I wouldn't have associated driverless cars with my third point, but more with my second. It's a continuum, I guess!
 
My suggestion regarding a conscious machine, is that it would be something like a receptacle, or a home in which a consciousness could reside. It might be considered equivalent to building a nesting box and then sitting back to await the arrival of some residents.
Interesting, Typoz. I think along similar lines. Many people (esp. mainstream academics) seem to see consciousness "arising" out of complex algorithms. I don't think this is as likely as that consciousness "becomes embedded or entrapped" in a suitably complex assembly. This is very dualistic thinking (and thus unfavourable in the academic mainstream).

This though strikes me as very possibly something of a nightmare scenario. On the one hand we find ourselves in the presence of a machine which is possessed or haunted. (Well I suppose that in itself need not be a nightmare). But what of the spirit which finds itself inhabiting the machine? Would it feel entrapped, and curse its creators for constructing such a world? Would it radiate love, or display suicidal tendencies?
I suppose the "saving grace" I see is that presumably some sort of guiding intelligence decides "which consciousness gets put into which materially-amenable consciousness-container", and thus that an excessively limiting "[artificial] consciousness-container" might be rejected by this guiding intelligence.

Another saving grace I see is that to maintain the free will of the inhabiting spirit, deterministic consciousness-containers would probably never be accepted by that guiding intelligence which decides upon inhabitation.

Edit: And of course we could debate whether a guiding intelligence is really required - perhaps you think this could be a matter of natural law rather than conscious decision.

Edit2: Oh, and we could also suggest that an embedded consciousness could override (to some extent) any otherwise deterministic parameters within which it was entrapped, just as (perhaps) happens w.r.t. our own physical brains.

The question of intelligence is in my view quite different to that of consciousness. Consciousness may be intelligent. But the types of machine AI currently envisaged are not conscious.
Agreed.
 
I'm with David on this one - if I understand him correctly.

If 'spirit' or 'consciousness' can exist apart from a physical body, I can't see why it definitely couldn't be associated with an artificial vessel, to the extent that the vessel permitted its expression. I don't think consciousness is computation or arises from it. It seems to me computation attempts to model consciousness. A painting of a landscape will never be the landscape, no matter how good it is.
 
Interesting, Typoz. I think along similar lines. Many people (esp. mainstream academics) seem to see consciousness "arising" out of complex algorithms. I don't think this is as likely as that consciousness "becomes embedded or entrapped" in a suitably complex assembly. This is very dualistic (and thus unfavourable in the academic mainstream) thinking.
Yeah, I think dualism gets a bad press, probably because it highlights the fact that there are things we don't understand, and can't begin to explain. I don't cling to dualism as such, it's more a means of expressing ideas.

Laird said:
Typoz said:
This though strikes me as very possibly something of a nightmare scenario. On the one hand we find ourselves in the presence of a machine which is possessed or haunted. (Well I suppose that in itself need not be a nightmare). But what of the spirit which finds itself inhabiting the machine? Would it feel entrapped, and curse its creators for constructing such a world? Would it radiate love, or display suicidal tendencies?
I suppose the "saving grace" I see is that presumably some sort of guiding intelligence decides "which consciousness gets put into which materially-amenable consciousness-container", and thus that an excessively limiting "[artificial] consciousness-container" might be rejected by this guiding intelligence.
Actually I think the reason I expressed it as a possible 'nightmare scenario' is that some time ago I had just such a dream. My consciousness was contained within some sort of metallic or crystalline body. It felt like being strapped in a strait-jacket, only much, much worse: the very processes of thought were constrained. This was in some sort of alien civilisation where these bodies were manufactured, and the process was overseen for some ulterior purpose.

Glad I woke up from that one!
 
Yeah, I think dualism gets a bad press, probably because it highlights the fact that there are things we don't understand, and can't begin to explain. I don't cling to dualism as such, it's more a means of expressing ideas.
Fair enough. I suppose I'd suggest too that a "strict" dualism isn't "strictly" correct anyway: that there is a continuum from grosser to lighter energies. I think somebody (Ian) some time back referenced the idea of "transcendental materialism" which maybe has some descriptive truth to it.

Actually I think the reason I expressed it as a possible 'nightmare scenario' is that some time ago I had just such a dream. My consciousness was contained within some sort of metallic or crystalline body, it felt like being strapped in a strait-jacket, only much, much worse, the very processes of thought were constrained. This was in some sort of alien civilisation where these bodies were manufactured and the process was overseen for some ulterior purpose.

Glad I woke up from that one!
So am I! It sounds ghastly.
 