What are people's views on this?
http://www.theverge.com/2016/3/12/11210650/alphago-deepmind-go-match-3-result
Has anyone suggested it is conscious or that that was their goal? I don't think so. It's quite an impressive feat for what it did set out to do. Thanks for the link!
Man, Tim... stop it. Arouet appreciated the link and you denied it from the get-go. What if conscious AI is a thing in your lifetime? What would you do then? He just appreciated an article in its entirety, and you couldn't even differentiate between "It's" and "Its"... That AI might be more "conscious" than you are.
I play a bit of Go, so I've been watching the progress of AlphaGo with some interest. I have to say that I think Lee Sedol lost for reasons quite unconnected with intelligence.
After AlphaGo beat the European Champion 5-0, professionals analysed the games and concluded that, although the AI was good, it played at only a middling professional level and should be no match for the World Champion.
I think this sense of hubris contributed to Lee Sedol's first defeat: he played aggressively and got an early lead, but underestimated his opponent and eventually lost by a small margin. In the second game, meanwhile, Lee Sedol was too cautious.
Then, of course, there's the pressure. This has had worldwide coverage on a much bigger scale than Go is used to.
It's interesting to note that, once he'd lost the series and the pressure was off, Lee Sedol won the next game, so it currently stands at 3-1. At the risk of sounding like I'm making excuses, I wonder whether, had the series been carried out less publicly, Lee Sedol might have played better.
I'll bet you love that, Arouet (tell me if I'm wrong). The "impressive feat!"... I suspect it thrills you that there might be a chance it could demonstrate that we actually are nothing but biological robots?
I am very interested in that, because not knowing anything about Go (someone once tried to teach me about 40 years ago) I couldn't evaluate this at all. Certainly AI and hype are no strangers to each other!
What if conscious AI is a thing in your lifetime?
Does it "experience" the joy of winning?
Arouet, Tim and Travis, please try and shake hands (so to speak). We all come to this from a different perspective, and it is best to just accept that.
One thing is sure, I think: if conscious AI does come about, it won't be solely because of a better computer or program.
E.Flowers really pinned the problem - making a conscious AI absolutely requires a solution to that problem - which is, of course, the Hard Problem!
David
I still don't understand how two or more things that on their own are not conscious can, when combined, be thought to spontaneously achieve consciousness.
This is a bit off-topic for this thread but, taking an Integrated Information Theory approach, the things are never individually conscious; rather, it is the system (i.e. the interaction between the two things) that is conscious. These non-conscious things already have the property that, when they interact in a certain way (i.e. integrate information), there is a corresponding experience. In that sense, IIT holds consciousness to be a fundamental property, even if it is not always active. The experience does not exist within any individual part and is therefore irreducible to the parts: it belongs to the system as a whole.
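For anyone curious what "the system carries information its parts don't" means in practice, here's a toy illustration. This is not IIT's actual Φ measure (which involves partitions of a system's cause-effect structure); it's just the total correlation of two perfectly correlated binary units, a much simpler quantity that captures the same intuition. All names here are my own, not from any IIT library.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a {state: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy system: two binary units that are perfectly correlated.
# Joint states (a, b) with their probabilities.
joint = {(0, 0): 0.5, (1, 1): 0.5}

# Marginal distribution of each unit considered on its own.
pa, pb = Counter(), Counter()
for (a, b), p in joint.items():
    pa[a] += p
    pb[b] += p

# Total correlation: information present in the whole beyond its parts.
# Each unit alone looks like a fair coin (1 bit each), but the joint
# system has only 1 bit of entropy, so 1 bit exists purely in the
# relationship between the units.
integration = entropy(pa) + entropy(pb) - entropy(joint)
print(integration)  # 1.0
```

Each part, examined alone, is just a random coin flip; the correlation (the "extra" 1.0 bit) only exists at the level of the whole system, which is the sense in which it is irreducible to the parts.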
This thread has nothing to do with consciousness though, so if we want to continue this discussion we should probably take it to another thread.
OK, point taken, Arouet. Then my "view on this" is that it has demonstrated some very clever human problem-solving, but nothing beyond that. IMHO.
I'm not sure what you're saying here. Are you saying they did not really achieve what they claimed?