State of Artificial Intelligence

Hi,

I just had a big discussion with a friend of mine after he suggested I read this blog post on AI:

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

My friend was very excited by the article, but I am not so convinced. Then we started looking into what the actual state of the art of AI is. I don't see anything that amazing, but I may be missing something.

What exciting things are going on in AI right now that you know of?

Do you think strong AI is a certainty? Why or why not?

Thanks,

Chuck
 

The article is a long, rambling, speculative discourse about all the possibilities and ramifications of the threat of AI to humanity, divorced from any grounding in the actual state of the research in terms of the hardware, the software, and the merits of the different approaches. This exercise might be useful for at least one thing: furnishing a source for science fiction and fantasy writers to mine for ideas. One interesting aspect is the view that AI could still be a threat even if it never achieves consciousness and self-awareness. It could still end up destroying mankind (?).

I think the most appropriate response is just to repeat my earlier post on the problem of language, which is just a baby step in the development of some form of existentially threatening AI:

The idea that AI will soon (or even at some time) achieve consciousness and beyond-human intelligence is probably absurd. From a good essay on just one of the reasons (the language problem) why AI at the level transhumanists assume is coming is pure fantasy, an impossible dream:

Consider the natural English language statement: "The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?"

Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not "data" about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "common sense knowledge."

The problem of commonsense knowledge -- having actual, not "simulated" knowledge about the real world -- is a thorny issue that has relegated dreams of true, real AI to the stone ages so far.

Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.

Notice that "knowledge" inherently implies "knowing", which is an inherent aspect of consciousness, not of mere number crunching processing of stored digital information.
 
Thanks NB and DG for replying.

The discussion between my friend and me basically boiled down to a few points. He is a computer programmer, and I also work in IT and can hack at basic languages like PHP, etc., so we are not unfamiliar with programming and its main components.

1. We agreed that things like Siri on the iPhone or systems like the Wall Street trading systems were not really AI in any true sense of the word. They are systems that are simply programmed and use logic to determine an output. Obviously the value of the Wall Street systems is crunching an enormous amount of data, but that is not really artificial intelligence.

2. We both agreed that most people have a mistaken understanding that advanced artificial intelligence means "like a human." We agree that there will never be a machine that can "understand" what a dog is. Someone could make a machine that could recognize different breeds of dogs and even detect whether a dog might be a threat by picking up subtle cues. But the machine will never understand what a dog is, or be able to imagine what it is like to be a dog, or be able to appreciate the love and faithfulness of a pet.

3. We tried to pin down exactly what the nature of "intelligence" is. Without a lot of success. My friend kept coming back to learning. Nascent AI systems might begin by being able to learn. And we found an example on the web, from September 2014 in a major journal of what I guess is cutting-edge AI, where they had developed AI that could learn math on its own by recognizing patterns. I didn't actually read the study.

My friend came up with an idea for a program that would learn over time to become better at passing the Turing test. As soon as someone sensed they were talking to a machine, they would alert the program. The program would evaluate what phrases or responses had prompted the failure and would avoid them in the future. Over time it would become better at conversation. (A rough sketch of what he meant is at the end of this post.)

4. I fail to see how multiplying processing power, even exponentially, will somehow magically bring about whatever Kurzweil says it will bring about. In the 1980s the TRS-80 computer was like a rock and a rubber band compared to the machines I work on now. But really they are basically the same; it is just that the applications are better. The applications aren't more intelligent. My MacBook isn't any more intelligent than the TRS-80 was. The intelligence of the machine hasn't changed one jot. As far as I can tell the machine itself has zero intelligence. It runs programs that were programmed by people with a brain.

5. I kept coming back to the fact that people can come up with ideas. Computers do not come up with original ideas. I can sit down and brainstorm and actually come up with new ideas. People do it all the time. When has a computer, even one like Watson or Deep Blue, ever had a single original idea? Isn't that what intelligence really is?

6. We couldn't decide on what AI would actually do or look like. We have computers that fly planes and perform all sorts of other complex activities, but they are not artificially intelligent. They are logic boxes, programmed to perform a specific task given a specific dataset. So what will AI do? Read the entire Internet? And then what? Isn't the Internet already a kind of discrete entity that contains most of the knowledge ever known to man? The Internet isn't artificially intelligent, and it holds all our knowledge. It is entirely interconnected and available for the taking. So what will AI do? We couldn't really come up with anything. Maybe we are lacking in imagination.

In the end we didn't really get anywhere. He said that all it would take was that one breakthrough that would start the ball rolling. Maybe that is so.
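Here is that rough sketch of my friend's idea, in Python. It is only a toy to capture the feedback loop he described; the phrases, the scoring, and all the names are invented, not any real system:

[code]
import random
from collections import defaultdict

class TuringChatter:
    """Toy sketch: avoid responses that previously made a judge say 'machine'."""

    def __init__(self, phrases):
        self.phrases = list(phrases)      # canned candidate responses
        self.failures = defaultdict(int)  # times each response was "caught"
        self.last = None

    def respond(self):
        # Prefer the responses with the fewest recorded failures.
        best = min(self.failures[p] for p in self.phrases)
        candidates = [p for p in self.phrases if self.failures[p] == best]
        self.last = random.choice(candidates)
        return self.last

    def judge_says_machine(self):
        # A human flagged the last response as machine-like; penalize it
        # so it is chosen less often from now on.
        self.failures[self.last] += 1

bot = TuringChatter(["Nice weather today.", "I compute, therefore I am."])
print(bot.respond())
bot.judge_says_machine()   # the judge caught it, so that phrase is penalized
print(bot.respond())       # now picks a so-far-unflagged phrase
[/code]

Of course, a loop like this only ever avoids known failures; it never invents a better response on its own, which was part of our problem with the whole idea.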
 
Yeah, read my reply. Current research in AI is nowhere close to "true AI." Not sure I am convinced it is possible. I am impressed that you know the TRS-80. That was my first computer after the Osborne. After the TRS came the Turbo-XT. But then my father worked in the industry and taught me programming, so I was exposed to this stuff at an early age. (I ended up being more interested in philosophy and literature, much to my dad's chagrin.)

I did work in the industry for awhile, though.
The TRS-80 was at school. At home I had a Timex Sinclair 1000 and also an Atari 800 that I would fiddle around on. You could store programs on cassette tapes. My buddy that I'm talking about had early Apple machines. I don't know if he still has them. Probably.
 
Consider the natural English language statement: "The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?"

A lot of humans would probably get that wrong as well. The sentence is intentionally ambiguous, and there is no reason "simulated" knowledge is somehow worse than "real" knowledge at deducing how Styrofoam works. What is actually going on here is a lot of fuzzy logic and Bayesian math[fn:1].

Not "data" about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "common sense knowledge."
The problem of commonsense knowledge -- having actual, not "simulated" knowledge about the real world -- is a thorny issue that has relegated dreams of true, real AI to the stone ages so far.

[..]

There is no difference between "simulated" and "actual" knowledge. The real problem is that people are not designing AIs to be capable of learning--they are usually trying to cheat at some test by using a handful of gimmicks.

Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.

Until someone starts programming them to understand what an abstract concept is, anyway.

Notice that "knowledge" inherently implies "knowing", which is an inherent aspect of consciousness, not of mere number crunching processing of stored digital information.

Too bad there is no test for consciousness.

Footnotes

[fn:1] The object known as table is 75% likely to be made out of wood, with an average material strength of X. The object known as ball is in motion. The object known as ball is 40% likely to be made out of rubber. One object is designated as material styrofoam. Solve force equation to find which object is most likely damaged by collision.
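To make [fn:1] concrete, here is a toy Python version of that calculation. The materials, the strength numbers, and the stand-in "physics" are all invented for illustration, and the priors are collapsed to single best guesses:

[code]
# Toy version of the footnote's reasoning: pick the reading of "it"
# that makes the physics most plausible. All numbers are invented.
MATERIAL_STRENGTH = {"wood": 30.0, "rubber": 15.0, "styrofoam": 2.0}

def plausibility(ball_material, table_material):
    # A ball crashes *through* a table only if the table is much weaker
    # than the ball, so score each reading by that strength gap.
    return MATERIAL_STRENGTH[ball_material] - MATERIAL_STRENGTH[table_material]

readings = {
    "the ball was made of Styrofoam": plausibility("styrofoam", "wood"),     # table keeps its wood prior
    "the table was made of Styrofoam": plausibility("rubber", "styrofoam"),  # ball keeps its rubber prior
}
print(max(readings, key=readings.get))   # -> "the table was made of Styrofoam"
[/code]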
 

I think we are gonna need a lot of new science to get there... But I think it's probably inevitable.

Adrian Thompson's work on evolvable hardware has convinced me that if you could make the substrate of the processor totally adaptable, rather than fixed as they are now... and if that substrate had sufficient isolation from decoherence... and the system used fields and was free to train its input/output without control... I think we might get there in the fullness of time. Hundreds of years?
 

My guess is that the onset of circuit printing will help somewhat. A lot of current-day software engineering is constrained by bad or inefficient thinking: using suboptimal languages for prototypes (e.g. using 'dead' code such as C or Python instead of 'live' code such as Lisp) and unwillingness to use, or lack of access to, special hardware. It would amaze you how many professional programmers fail to understand simple parallel processing, for example, yet some of these are the same people fumbling to emulate brains! An architecture where specialist hardware does specialist things (such as the Amiga) is also vastly more efficient, yet engineers still pattern their thinking around the completely inefficient boondoggle that is the IBM PC. Similarly, most designs for AI that I've seen revolve around trying to cram everything into either a huge symbol processing machine or a huge neural network. It's kind of silly, because that's not at all efficient.

The number of people who truly understand what they are trying to do in the AI field is embarrassingly small.
 
So what will AI look like? What will it do? What will make it intelligent?
 
3. We tried to pin down exactly what the nature of "intelligence" is. Without a lot of success.
Yep. That's the kicker. There is no singular definition or meaning of the term "intelligence." By some definitions, the case can be made that humans aren't intelligent. I'd guess that by AI most people mean a machine with a human-like intellect. Commander Data. Is that possible? Yes, of course. But employing computer programming as we currently know it won't do it. Especially not digital computing.
 
4. I fail to see how multiplying processing power, even exponentially, will somehow magically bring about whatever Kurzweil says it will bring about. In the 1980s the TRS-80 computer was like a rock and a rubber band compared to the machines I work on now. But really they are basically the same; it is just that the applications are better. The applications aren't more intelligent. My MacBook isn't any more intelligent than the TRS-80 was. The intelligence of the machine hasn't changed one jot. As far as I can tell the machine itself has zero intelligence. It runs programs that were programmed by people with a brain.
Yes, essentially with today's CPUs we can make the usual AI look less disappointing.
 
AI could definitely be a threat to humankind, but not the kind I think many of us sci-fi fans might imagine, i.e., some computer becoming "conscious" like HAL in 2001: A Space Odyssey. Currently there is no science that indicates inert, lifeless matter can suddenly become conscious. In fact, there is a good amount of science now indicating that consciousness itself may be just as fundamental as the matter we experience, or at the very least, as quantum physics suggests, that consciousness is intertwined with matter in a way we really don't yet scientifically understand.

However, that doesn't mean advanced computers can't become a massive threat to humans. Even though machine consciousness may never be achieved (although transferring consciousness to a machine might someday be possible), the fact that computers can now beat any human being on the planet at chess, can do some things faster and more efficiently than humans, and can be given computerized oversight of functions vital to human civilization is a formula that could definitely prove disastrous for humankind if we become too dependent on computer technology.

We already have the technology of the atom bomb, which, if it falls into the wrong hands, could prove devastating to millions. Think of the potential of a computerized system run amok, especially sophisticated algorithms that are programmed to replicate themselves and exist independently on, say... the Internet. The complexity of algorithms will likely increase with computers in the decades ahead. This seems very possible to me. Consciousness in computers: no. The danger, however, is ever more powerful machines with complex, sophisticated algorithms that can perform dangerous actions, not necessarily conscious actions, but robotic, automatic actions.

My Best,
Bertha
 
Consider the natural English language statement: "The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?"

Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not "data" about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "common sense knowledge."

Oddly enough, I feel that question isn't a very good example of the kind of thing a computer intelligence would find difficult to do. Can't we imagine a computer with a database constructed from the contents of Google Books, trying to resolve the ambiguity by searching for phrases like "A crashed through B" and looking for common factors in linked tables of the physical properties of A and B? The computational resources required would be large, but it's not hard to imagine the kind of program that could perform the task.
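For instance, here is a crude Python sketch of the kind of corpus lookup I mean. The three-sentence "corpus" and the regex are just placeholders for a Google Books-scale search:

[code]
import re

# Tiny stand-in for a corpus built from Google Books; the real thing
# would be billions of sentences.
CORPUS = """
The bowling ball crashed through the flimsy table.
The rock crashed through the window.
A branch crashed through the greenhouse roof.
"""

def crash_count(a, b):
    # How often does the corpus say "A ... crashed through ... B"?
    pattern = rf"\b{a}\b.{{0,20}}crashed through.{{0,20}}\b{b}\b"
    return len(re.findall(pattern, CORPUS, re.IGNORECASE))

# Usage tells us balls crash through tables, not the other way around,
# which points at the table as the thing weak enough to be Styrofoam.
print(crash_count("ball", "table"))   # 1 hit
print(crash_count("table", "ball"))   # 0 hits
[/code]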
 
Yes, it is very hard for me to conceive of a truly conscious machine - how would that happen?

Is that really the stated goal of AI, or is that just how science fiction presents it? Do people who actually work in AI think that they are working toward developing a conscious machine? I seriously doubt it.

We already have artificial intelligence, but not conscious artificial intelligence. The AI systems we have now are programmed by humans, using human-derived algorithms. And that is why, I think, they are a huge threat: all the nastiness, prejudice, etc., of the worst side of humanity, programmed into machines that will execute what they were programmed to do.

What in today's world would you consider to be AI that is currently in use?
 
Yes, it is very hard for me to conceive of a truly conscious machine - how would that happen? Sure, we have sci-fi to inform us, but that is fiction
What is told in fiction often becomes actuality.

As for why it's hard for you to conceive of a fully conscious machine, I'd guess you're not alone, though I find that difficulty perplexing. Are humans truly conscious? Other animals? Maybe the issue for those who think of certain things as near impossible is that they assess those things based on current public human knowledge. I'd guess that if you picked a time frame of, say, 500 AD, many people would not allow themselves to conceive of instant worldwide communication.
 
A lot of humans would probably get that wrong as well. The sentence is intentionally ambiguous, and there is no reason "simulated" knowledge is somehow worse than "real" knowledge at deducing how Styrofoam works. What is actually going on here is a lot of fuzzy logic and Bayesian math[fn:1].

[fn:1] The object known as table is 75% likely to be made out of wood, with an average material strength of X. The object known as ball is in motion. The object known as ball is 40% likely to be made out of rubber. One object is designated as material styrofoam. Solve force equation to find which object is most likely damaged by collision.

If the programmers knew in advance what the question would be, this is indeed a good outline of the sort of specific software solution that would be possible.

It should be noted that answering the ball-and-table question is just one very small item in the large array of presently insuperable problems that would have to be solved to achieve a true hard AI system.

For just this very small part of the total development, a huge effort modeled on "fn:1" would have to be undertaken to produce a general capability to answer any conceivable question on any of an innumerable array of subjects. These subjects would have to include not just the entire physical world, but also all conceivable abstract, hypothetical, or metaphysical areas, human experiences, emotions, etc. An absolutely unmanageable number of individual cases would have to be programmed. Of course, this could be given broad limitations by assuming the normal limits of specialized knowledge in any individual human.

I think the best AI researchers will ever do along these lines is the various "expert systems" like medical diagnosis, where the scope and format of the questions and answers are strictly limited.

Until someone starts programming them to understand what an abstract concept is, anyway.

Not likely, ever. To "understand" is on an entirely higher level of existence than computer processing, since it involves conscious intentionality, comprehension, and awareness of the meaning of a nearly infinite number of possible things. AI systems will always be incredibly stupid until they can actually comprehend, for instance, what a question is.
 
Oddly enough, I feel that question isn't a very good example of the kind of thing a computer intelligence would find it difficult to do. Can't we imagine a computer with a database constructed from the contents of Google Books, trying to resolve the ambiguity by searching for phrases like "A crashed through B" and looking for common factors in linked tables of the physical properties of A and B? The computational resources required would be large, but it's not hard to imagine the kind of program that could perform the task.
One oddity of AI is that if you can pinpoint something that a supposedly intelligent system can't do, you can always add a patch to make it do it - the problem is that you seem to need an infinite number of such patches. It is interestingly reminiscent of Gödel's theorem, where you can keep growing the axiom set, but it never becomes complete.

Also, related to your Google Books idea, it would be possible to imagine a machine that conversed with a human by using a massive database of every conversation on the internet, and trying to continue a partial conversation just by matching against that database - something unthinkable in Turing's day, but obviously a cheat.
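As a toy illustration of that sort of cheat, in Python (three canned exchanges standing in for a database of every conversation on the internet; the matching is just shared-word counting):

[code]
# Crude sketch of the "match against recorded conversations" cheat.
DATABASE = [
    ("how are you today", "Not bad, thanks for asking."),
    ("what is your favourite book", "Probably Moby-Dick."),
    ("do you like the weather", "It could be sunnier."),
]

def overlap(a, b):
    # Similarity = number of words the two utterances share.
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(utterance):
    # Find the stored prompt most like the input and parrot its reply.
    prompt, response = max(DATABASE, key=lambda pair: overlap(utterance, pair[0]))
    return response

print(reply("how are you"))   # -> "Not bad, thanks for asking."
[/code]

With a big enough database, something like this could hold up a conversation for quite a while without anything we would want to call understanding going on.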

I think real AI seems inevitable to a materialist, and very unlikely to a non-materialist - because we don't think consciousness can be generated within a machine.

Of course, if the brain is a receiver of consciousness, it might be possible to build a mechanical device that did the same! This would exactly be AI, and to build one, you would have to understand how the brain couples to a non-material consciousness!

David
 
I think real AI seems inevitable to a materialist, and very unlikely to a non-materialist - because we don't think consciousness can be generated within a machine.

But isn't intelligence rather different from consciousness? I'd say the process I described could fairly be described as intelligent, but not as conscious. But then again, consciousness is never really defined in these discussions anyway. Nor is materialism, for that matter. Does believing that a machine could develop consciousness necessarily make one a materialist? I'm not sure why it should.
 
Also, related to your GOOGLE books idea, it would be possible to imagine a machine that conversed with a human by using a massive database of every conversation on the internet, and trying to continue a partial conversation just by matching against that database - something unthinkable in Turing's day, but obviously a cheat.


Perhaps, through education, expectation and suggestion, we are merely programming human children to the point that they can successfully "cheat" the Turing test?

:eek:
 