The death of AI (yet again)

I reckon it's possible, and probably sooner than we expect...

My paranoid part says that Adrian Thompson (of evolvable hardware fame) is probably working on a new development with the intelligence services, as he appears to have dropped off the face of the earth just a few years after his discovery.

Gawd knows what he's working on, but it's bound to be surprising, really, really surprising when we find out about it.
 
https://theaviationist.com/2017/01/...s-unleash-swarm-of-mini-drones-in-first-test/


U.S. Navy F/A-18 Hornets released a swarm of 103 Perdix semi-autonomous drones in flight. Welcome to the future of war tech.

The U.S. Department of Defense has revealed in a press release dated Jan. 9, 2017 that three U.S. Navy F/A-18 Hornet two-seat variants have successfully released a “swarm” of 103 Perdix semi-autonomous drones in flight. The tests were carried out at China Lake range, on Oct. 25, 2016, and were administered by the Department of Defense, the Strategic Capabilities Office, partnering with Naval Air Systems Command.

The miniature Perdix drones, unlike larger, more common remotely piloted vehicles (RPVs) such as the well-known Reaper and Predator, operate with a high degree of collective autonomy and reduced dependency on remote flight crews to control them. The large group of more autonomous Perdix drones creates a “swarm” of miniature drones. The swarm shares information across data links during operation, and can make mission-adaptive decisions faster than RPVs controlled in the more conventional manner.

In a statement released by the U.S. Department of Defense, Strategic Capabilities Office Director William Roper said, “Due to the complex nature of combat, Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature.” Roper went on to say, “Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team.”
 
I'm struggling with this conception of "AI".

Taking Jim's article on the Navy's testing of drone swarm technology: are the drones, or the swarm for that matter, "intelligent"? I guess we have to start by defining what we mean by intelligence, or Artificial Intelligence. As an amateur "coder" myself, I feel I have a good working understanding of the traditional software construct: the old "if then else" convention, whereby the computer is coded with preexisting instructions on how to deal with various inputs. This, to me, is nothing close to intelligence, at least if we're comparing it to human intelligence or even sentience.
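To make that concrete, here's a toy sketch of the "if then else" convention (the drone logic is purely made up for illustration): every behaviour traces directly back to a rule someone typed in.

```python
# Toy rule-based "drone" controller. Every output corresponds to an
# explicit, hand-written rule, so the coder can always point at the
# line of code responsible for any behaviour.
def rule_based_controller(sensor_reading):
    if sensor_reading == "obstacle":
        return "turn"
    elif sensor_reading == "target":
        return "approach"
    else:
        return "cruise"

print(rule_based_controller("obstacle"))  # -> turn
print(rule_based_controller("target"))    # -> approach
```

Nothing this controller ever does could surprise its author, which is exactly the point of the question below.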

Now, I'm sure there are much more advanced software models at work beyond my rather basic explanation. My question for those who might know: is there a computer anywhere that might do something "unpredictable", for lack of a better word? Even if the drones in the article appear to act intelligently, I assume their creator (i.e., the "coder") could look at any behavior and say one of two things: 1) yes, that's exactly how I coded the drone swarm to react, or 2) wow, that must be a bug.

Is there really a sense that this type of binary-coded machine could ultimately result in a sentient "thing" with free (and thereby "unpredictable") will? Even then, there would still be another huge leap to conscious experience.
 
http://www.itworld.com/article/3159...ai-system-is-beating-human-poker-players.html
After first week, A.I. system is beating human poker players
http://www.cbsnews.com/news/davos-how-artificial-intelligence-may-change-work-as-we-know-it/
Felix Marquardt, president international at Cylance, emphasized that the growth of artificial intelligence makes an education system overhaul more urgent than ever.

“The question truly is: are our educational systems ready for the challenges posed by A.I.?” Marquardt said. “Our schools are still primarily churning out job seekers, when what we need is for them to churn out job creators. Entrepreneurship needs to be much more seriously taught.”

Cylance is an antivirus software company whose platform is powered by artificial intelligence and machine learning.
 
I'm struggling with this conception of "AI".

...

Is there really a sense that this type of binary-coded machine could ultimately result in a sentient "thing" with free (and thereby "unpredictable") will? Even then, there would still be another huge leap to conscious experience.

Many materialists think the brain is a computer, that consciousness is an epiphenomenon, and that there is no free will. They believe that with enough computing power and software sophistication, an AI equal to human intelligence is possible.

Machine learning is the new approach to AI: with genetic algorithms and other techniques, the software modifies its own logic into something the programmer did not create. However, there is no way this leads to awareness, unless you define awareness as "a very complex algorithm".

My calculator can compute faster and more accurately than I can, but I dispute that it is intelligent. I can buy a computer that beats me at chess, but I dispute that it is intelligent too.

There is no doubt that computers can process large amounts of data very quickly and very accurately, but this is no solution to the "hard problem".
 
https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence
We have general intelligence and so we see a simulacrum of intelligence on TV and assume that it too involves something like general intelligence, even though a Go-playing computer is more or less doomed to an existence as a Go-playing computer. In Bundy's words: "Many humans tend to ascribe too much intelligence to narrowly focused AI systems."

False positives in prior early-warning missile detection systems were common and had been triggered by, among other things, flocks of birds and the moonrise. A false positive could mean nuclear war.

AI will continue to develop in siloed form, where new and impressive machines continue to scare doomsayers for their abilities within relatively narrow task domains while remaining "incredibly dumb" when it comes to everything else.
 
Conference in Delhi where AI's place in global markets will be discussed.

Will let you know how it goes and if I have a chance to raise philosophical objections. ;)
 
The Rise of the Robots — Series 1, Where is my mind?
From Skynet and the Terminator franchise, through Wargames and Ava in Ex Machina, artificial intelligences pervade our cinematic experiences. But AIs are already in the real world, answering our questions on our phones and making diagnoses about our health. Adam Rutherford asks if we are ready for AI, when fiction becomes reality, and we create thinking machines.
Adam Rutherford seems to have back-pedalled a bit. At the end of the previous episode, he talked of "real, thinking, feeling machines" being the subject of this week's programme, but there was very little about feelings here. It was primarily a discussion of technology, though it did touch, if not in any great depth, on some of the more challenging ideas about consciousness itself. The ability to suffer was one such characteristic given a mention at least. It was also taken as a fundamental truth that the brain is a requirement for consciousness, which I'd say is at the very least subject to debate, and arguably disproved already.
 
"But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond."

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/?set=604130


MIT Technology Review

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight April 11, 2017

...
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
...
Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
...
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.
...
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know.
...
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

This type of AI seems to have different implications for the ethical treatment of computers depending on whether or not one believes in materialism.
 
Here is what happens when you ask a neural network to find something in a picture (enhance the image) repeatedly through a feedback loop.

https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
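The core trick can be caricatured in a few lines of Python (a toy linear "feature detector", not Google's actual implementation): the feedback loop repeatedly nudges the input so that whatever the detector already tends to respond to gets amplified.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "feature detector": the pattern this unit has learned to fire on.
pattern = rng.standard_normal(64)
image = rng.standard_normal(64)  # the input we repeatedly "enhance"

def response(x):
    # How strongly the detector fires on input x.
    return float(pattern @ x)

before = response(image)

# The feedback loop: nudge the input in the direction that increases the
# response. For this linear detector, the gradient of pattern @ x with
# respect to x is simply `pattern` itself.
for _ in range(50):
    image = image + 0.1 * pattern

after = response(image)
print(f"response before: {before:.1f}, after: {after:.1f}")
```

After enough iterations the detector's pet pattern dominates the input, which is (very roughly) why the Inceptionism images fill up with eyes and dog faces the network was trained on.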


Lots more here:
https://photos.google.com/share/AF1...?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

I wonder if this has any relationship to what psychedelic drugs do to the brain?
 
Very interesting! Thanks for sharing

Here's more...

Inceptionism


This is a technical explanation (6:44)

Many examples (37:47)

https://play.google.com/music/previ...0ahUKEwiTysvC9Z_TAhXMx4MKHaLUBOoQr6QBCBsoADAB

Lucy In The Sky With Diamonds (Remastered)
The Beatles

Lyrics

Picture yourself in a boat on a river
With tangerine trees and marmalade skies
Somebody calls you, you answer quite slowly
A girl with kaleidoscope eyes

Cellophane flowers of yellow and green
Towering over your head
Look for the girl with the sun in her eyes
And she's gone

Lucy in the sky with diamonds
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Ah

Follow her down to a bridge by a fountain
Where rocking horse people eat marshmallow pies
Everyone smiles as you drift past the flowers
That grow so incredibly high

Newspaper taxis appear on the shore
Waiting to take you away
Climb in the back with your head in the clouds
And you're gone

Lucy in the sky with diamonds
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Ah

Picture yourself on a train in a station
With plasticine porters with looking glass ties
Suddenly someone is there at the turnstile
The girl with the kaleidoscope eyes

Lucy in the sky with diamonds
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Ah
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Ah
Lucy in the sky with diamonds
Lucy in the sky with diamonds
Lucy in the sky with diamonds

 
Many materialists think the brain is a computer, that consciousness is an epiphenomenon, and that there is no free will. They believe that with enough computing power and software sophistication, an AI equal to human intelligence is possible.
I agree that free will is an incoherent concept.

But I doubt that consciousness is an epiphenomenon. It would be very difficult to explain how we are having a conversation about it if it has no causal effect on our brains. I think it is more accurate to say that many materialists think that consciousness is an illusion; that is, it is not what we assume it to be.

~~ Paul
 
I agree that free will is an incoherent concept.

But I doubt that consciousness is an epiphenomenon. It would be very difficult to explain how we are having a conversation about it if it has no causal effect on our brains. I think it is more accurate to say that many materialists think that consciousness is an illusion; that is, it is not what we assume it to be.

~~ Paul

I think, therefore I am ...

Except that I don't really think - I only think that I think because consciousness is an illusion.
Right - got it.
So I am having an illusion about thinking yet the illusion is itself a thought - it is a mistaken thought, maybe, but it is still a thought although it can't be because I can't be thinking at all because thinking is an illusion.

Well, there I thought I had it but that was only an illusion too. I need to think about this some more.
Or not.

;)
 
This type of AI seems to have different implications for the ethical treatment of computers depending on whether or not one believes in materialism.
I'm not sure whether this is a quote from the article, or from Jim Smith. Either way it seems odd. It doesn't seem plausible that the ability of computers to experience suffering is dependent upon one's belief system. If I believe my alarm clock feels hurt when I shout at it, does my belief make it so?
 
I think, therefore I am ...

Except that I don't really think - I only think that I think because consciousness is an illusion.
Right - got it.
So I am having an illusion about thinking yet the illusion is itself a thought - it is a mistaken thought, maybe, but it is still a thought although it can't be because I can't be thinking at all because thinking is an illusion.

Well, there I thought I had it but that was only an illusion too. I need to think about this some more.
Or not.

;)
"I think, therefore I think I am." Problem solved.
 
I'm not sure whether this is a quote from the article, or from Jim Smith. Either way it seems odd. It doesn't seem plausible that the ability of computers to experience suffering is dependent upon one's belief system. If I believe my alarm clock feels hurt when I shout at it, does my belief make it so?

The bit you quoted is me, not from the article (it wasn't indented or in quotation marks). I meant that if you are a materialist, you might think a sufficiently complex computer neural network is behaving enough like an animal or human brain that you would have to face the ethical questions that apply to animals or people.

Because of the evidence that consciousness is non-physical, I am not a materialist, and I don't believe a physical system like a computer or neural network can have conscious subjective experiences such as awareness, pleasure, or pain, so there would be no ethical issues arising from computer neural networks.
 