Discussion - Can we produce synthetic, non-organic sentience?

Thread starter: Sciborg_S_Patel
Does it "mean" anything at all? Does "redness", for example, have any meaning outside of the sensory system that fires it?
I know that symbols are arbitrary and can be anything. I suppose you are saying there isn't really any intentionality, but that you don't need it to understand stuff. I agree that there is a sense in which you can understand a meaningless string of symbols (e.g. WXYYXW) if you know the grammar behind it (in this case, 'forwards, then backwards'). Applied to natural-language text, that's what Cleverbot or the Chinese Room does.

But there is apparently a spectrum of understanding. An untrained neural network can classify pictures into nonsense categories which, if there is no intentionality, are no less meaningful than my own categories, merely less useful. At the other end, I can conceive of algorithms that generate pictures of trees from various parameters or, run in reverse, analyse any picture of a tree into tree-generator parameters (e.g. tallness, stoutness, species). That would seem to capture more of the essence of a tree, even though a tree is not literally a pattern of pixels or something generated by a computer program. Where does that spectrum of understanding come from? That's intentionality, right? While it's alright to judge a computer program purely on the basis of its practical applications, I'd like to think my own categories can actually represent something.
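To make that concrete, here is a toy sketch of what I mean by a parameterised tree generator and its inverse. Everything in it is invented for illustration (the 'tree' is just a trunk plus a circular crown, and the inverse is brute-force search), so it is nothing like a real graphics or vision system:

```python
import numpy as np

def generate_tree(height, crown_radius, size=32):
    """Render a crude 'tree' into a size x size binary image:
    a vertical trunk topped by a circular crown."""
    img = np.zeros((size, size))
    cx = size // 2
    base = size - 1
    top = base - height
    img[top:base, cx] = 1.0                       # trunk
    yy, xx = np.mgrid[0:size, 0:size]
    crown = (xx - cx) ** 2 + (yy - top) ** 2 <= crown_radius ** 2
    img[crown] = 1.0                              # crown
    return img

def fit_parameters(target, size=32):
    """Inverse direction: search for the generator parameters
    whose rendering best matches the target image."""
    best, best_err = None, np.inf
    for h in range(4, size - 2):
        for r in range(1, size // 4):
            err = np.sum((generate_tree(h, r, size) - target) ** 2)
            if err < best_err:
                best, best_err = (h, r), err
    return best

# Round-trip: render a tree, then recover its parameters.
picture = generate_tree(height=20, crown_radius=6)
print(fit_parameters(picture))   # -> (20, 6)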

I mean, take the very idea of practical applications. That's all about human values, including ethics. If a person's pain is equivalent to a meaningless squiggle, there's no reason to help them; there is literally no fact of the matter about whether somebody is suffering or not. If another person's compassion is likewise equivalent to a meaningless squiggle, that person isn't even capable of wanting to help: their compassion can't be 'about' the firing of nociceptive nerve fibres. So I am pretty sure you need to account for some sort of intentionality to stay sane, even if it is just a functionalist cause-and-effect model in which our symbols are grounded in sensory inputs and motor outputs. Once we have that much, I think I'm allowed to judge the neural networks for thinking that fields of random noise are actually horses.
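For what it's worth, you don't even need a trained network to see that failure mode. A few lines of toy code (random weights, labels invented by me) already produce confident nonsense:

```python
import numpy as np

rng = np.random.default_rng(0)

# An 'untrained' classifier: a single linear layer with random
# weights, mapping a flattened image to scores over made-up labels.
labels = ["horse", "tree", "car", "jogger"]
W = rng.normal(size=(len(labels), 32 * 32))

def classify(image):
    scores = W @ image.ravel()
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], float(probs.max())

noise = rng.normal(size=(32, 32))   # a field of random static
print(classify(noise))              # -> some confident label, e.g. ('horse', 0.99)
```

The random 'classifier' is maximally confident and completely unmoored from anything the labels name, which is exactly the situation I want intentionality to rule out.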
 
http://www.bbc.co.uk/news/technology-34423292

So... Is the car aware of its environment?
The link is interesting, but it isn't quite clear from that piece just how much of the driving involved encountering completely random hazards, and how much was a test course. Remember that Google is a large company that ultimately wants to sell these things - the demo may not be exactly what it seems.

But we did see how the car's nervousness - which is the only way I can describe it - sometimes got it wrong. At one moment in our journey, a jogger ran by. He was on the opposite side of the road, and any human eye would have instinctively known he posed no danger.

But the Google car panicked, hitting the brakes for an emergency stop.

I don't want to carp, but this possibly hints at problems to come. I mean, driving is to some extent a process of taking small, very regulated risks - otherwise you would never even start your car.

The slower a car drives, the safer it will be, but beyond a point it also stops being useful. Think of driving through rubbish-filled streets - every bit of junk might, just might, be a child lying in the road, yet you can't stop and investigate each item!
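Just to illustrate the trade-off with made-up numbers (a toy model with invented probabilities, certainly not how Google's software actually works):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: each roadside object has some true hazard status, and
# the car brakes whenever its noisy estimate exceeds a threshold.
# Most objects (joggers on the far pavement, litter) are harmless.
true_hazard = rng.random(10_000) < 0.001           # 0.1% are real hazards
estimate = np.clip(true_hazard + rng.normal(0, 0.2, 10_000), 0, 1)

for threshold in (0.9, 0.5, 0.1):
    brakes = estimate > threshold
    false_stops = np.sum(brakes & ~true_hazard)
    missed = np.sum(~brakes & true_hazard)
    print(f"threshold={threshold}: {false_stops} needless stops, {missed} missed hazards")
```

Lower the braking threshold and you miss fewer real hazards, but the car stops for practically every piece of junk; raise it and the journey is smooth until the one time it matters.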

My point is that some tasks look easy to begin with, but the problems escalate as you get near to something actually useful. An automatic car is only really useful if you can put the kids in it and have them driven home, or get into it drunk or sleepy and get home safely.

My sense is that this is precisely the kind of open-ended task that a computer can be programmed to do up to a point, but no further.

Remember, we don't yet have household robots that tidy up and clean, wash the dishes, make the beds, and do the gardening - relatively safe jobs. I believe there are mowers and vacuum cleaners that can run automatically, but they use boundary cables or other tricks to mark out the area to be processed! Is driving a car really less complex than those tasks? After all, sat-navs don't have an encyclopaedic visual knowledge of the roads.

Horses are quite a hazard near where I live, and they can be twitchy (you also tend to notice whether the rider is a child or an adult), but there are obviously innumerable small incidents involved in driving. We were once driving along a road in Yellowstone and saw what looked like a rather large, unkempt man walking towards us. As we got closer, it turned out to be a bison! I'll bet the software has a function specially designed to deal with that situation!

David
 