Hieracosphinx
I know that symbols are arbitrary and can be anything. I suppose you are saying there isn't really any intentionality, but that you don't need it to understand stuff. I agree that there is a sense in which you can understand a meaningless string of symbols (e.g. WXYYXW) if you know the grammar behind it (in this case it's 'forwards, then backwards'). Applied to natural language text, that's what Cleverbot or the Chinese Room does. Does it "mean" anything at all? Does "redness", for example, have any meaning outside of the sensory system in which it fires?
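For what it's worth, that purely syntactic sort of 'understanding' fits in a few lines of code. Here's a minimal sketch (Python, with names of my own invention) of a recognizer that accepts any string of the form w followed by reverse(w), while attaching no meaning whatsoever to the symbols:

```python
def is_mirror(s: str) -> bool:
    """Accept strings of the form w + reverse(w), e.g. 'WXYYXW'.

    The symbols are arbitrary: the recognizer knows the grammar
    ('forwards, then backwards') but attaches no meaning to W, X or Y.
    """
    if len(s) % 2 != 0:
        return False
    half = len(s) // 2
    return s[:half] == s[half:][::-1]

assert is_mirror("WXYYXW")      # the example above
assert not is_mirror("WXYXWY")  # same symbols, wrong grammar
```

It classifies perfectly, yet there is nothing the classification is 'about', which is the Chinese Room scaled down to one rule.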
But there is apparently a spectrum of understanding. An untrained neural network can classify pictures into nonsense categories, which, if there is no intentionality, are no less meaningful than my own categories, merely less useful. At the other end, I can conceive of algorithms that can generate pictures of trees from various parameters, or, used inversely, analyse all possible pictures of trees into tree-generator parameters (e.g. tallness, stoutness, species). That would seem to capture more of the essence of a tree, even though a tree is not literally a pattern of pixels or generated by a computer program. Where do you get that spectrum of understanding from? That's intentionality, right? While it's alright to judge a computer program purely on the basis of its practical applications, I'd like to think my own categories can actually represent something.
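To make the generator idea concrete, here is a toy sketch assuming a completely made-up two-parameter 'tree' (height and crown width) drawn on a pixel grid; render_tree, analyse_tree and the parameter ranges are all hypothetical, just for illustration. The inverse 'analysis' is nothing cleverer than searching the parameter grid for the rendering closest to the picture:

```python
import numpy as np

SIZE = 32  # hypothetical image resolution

def render_tree(height: float, width: float) -> np.ndarray:
    """Toy 'tree generator': a trunk plus a triangular crown on a pixel grid.

    height and width are in (0, 1]; the mapping to pixels is entirely made up.
    """
    img = np.zeros((SIZE, SIZE))
    top = int((1 - height) * (SIZE - 1))
    img[top:, SIZE // 2] = 1.0                  # trunk: vertical line
    crown_rows = max(1, (SIZE - top) // 2)      # crown: rows widen downwards
    for i in range(crown_rows):
        half = int(width * SIZE / 2 * (i + 1) / crown_rows)
        img[top + i, SIZE // 2 - half : SIZE // 2 + half + 1] = 1.0
    return img

def analyse_tree(img: np.ndarray, steps: int = 20) -> tuple[float, float]:
    """Inverse use of the generator: grid-search for the parameters
    whose rendering is nearest to the picture in pixel space."""
    grid = np.linspace(0.05, 1.0, steps)
    errors = {(h, w): np.sum((render_tree(h, w) - img) ** 2)
              for h in grid for w in grid}
    return min(errors, key=errors.get)

target = render_tree(0.7, 0.4)   # a 'photo' of a tree
print(analyse_tree(target))      # recovers roughly (0.7, 0.4)
```

The point survives the crudeness: the recovered parameters (height, crown width) say something about what the tree is like, in a way no individual pixel does, even though no actual tree is a pattern of pixels.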
I mean, take the very idea of practical applications. That's all about human values, including ethics. If a person's pain is equivalent to a meaningless squiggle, there's no reason to help them; there is literally no fact of the matter about whether somebody is suffering or not. If another person's compassion is also equivalent to a meaningless squiggle, that person isn't even capable of wanting to help: their compassion can't be 'about' the firing of nociceptive nerve fibres. So I am pretty sure you need to account for some sort of intentionality to stay sane, even if it is just a functionalist cause-and-effect model where our symbols are grounded in sensory inputs and motor outputs. Once we have that much, I think I'm allowed to judge the neural networks for thinking that fields of random noise are actually horses.
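On that last point: the 'noise classified as a horse' phenomenon is real and easy to reproduce by gradient ascent on a classifier's output, i.e. nudging a field of noise until the network's 'horse' score climbs. A rough PyTorch sketch, using an untrained toy network as a stand-in (published demonstrations use trained ones, but the mechanism is the same):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: untrained, so its categories are exactly the
# 'nonsense categories' mentioned above. Class 0 plays the role of 'horse'.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)

HORSE = 0
x = torch.rand(1, 3, 32, 32, requires_grad=True)  # a field of random noise

opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -net(x)[0, HORSE]   # gradient ascent on the 'horse' logit
    loss.backward()
    opt.step()
    x.data.clamp_(0, 1)        # keep it a valid image

probs = net(x).softmax(dim=1)
print(f"'horse' confidence: {probs[0, HORSE]:.2f}")  # well above the 10% chance level, on pure noise
```

Nothing in the optimization cares what a horse is; the score goes up all the same, which is exactly why I think we're entitled to call the result a mistake.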