Hi David, thanks for weighing in. I am torn in different directions with this, but I suppose I can sum up my (provisional) position with a few bullet points:
- That which we refer to as artificial "intelligence" might be defined and delineated by its capacity to learn and to solve problems that it was not explicitly programmed for, but was rather implicitly programmed for, through "learning algorithms".
- Some of that which we call "intelligence" in humans quite clearly can be simulated through "computation". Take, for example, our learning to walk: an intelligent process based on learning to integrate a whole lot of sensations whilst outputting muscle movements such that one is propelled forwards whilst upright, rather than falling over. There is no question that this "computational" process of "learning to walk" can be simulated by machines, because it already has been - or at least walking itself has been. Another example: Google DeepMind's AI has been trained to play, and win at, a variety of classic Atari games - an obvious example of "computational" intelligence/learning.
- Some of that which we call "intelligence" in humans quite probably is inextricably linked to consciousness, and thus not subject to computation. I'm thinking here of "higher" intelligence, in particular abstract and mathematical cognition: the sort of genius shown by Albert Einstein amongst others, which seems to require self-reflective ability. One seems to need to be able to hold a concept in mind, relate it to other concepts, and so on. Could this intelligence be "reduced" to "computational" intelligence? I really don't know, but for now I say "probably not". But that really is just a "probably".
I think even your definition is really a continuum. For example, does a program 'learn' as it reads in data, or does it learn if it makes an internal search - say, searching the integers for primes? Does it 'learn' when it consumes large quantities of text and collects various sorts of word-association information? Even English grammar is fairly algorithmic, so some 'understanding' can be gleaned relatively easily.
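The word-association sort of 'learning' mentioned above can be sketched as a crude bigram counter - the corpus and names here are hypothetical, purely for illustration:

```python
from collections import Counter

def bigram_counts(text):
    """Count adjacent word pairs - the crudest form of word-association 'learning'."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

# Hypothetical toy corpus.
corpus = "the cat sat on the mat and the cat slept"
counts = bigram_counts(corpus)

# Which word most often follows "the"?
followers = Counter({w2: c for (w1, w2), c in counts.items() if w1 == "the"})
print(followers.most_common(1))  # [('cat', 2)]
```

Whether counting co-occurrences like this deserves the word 'learning' is, of course, exactly the question at issue.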
More generally, what does 'understanding' mean? Suppose a computer stores the facts that people get married to have children, that there are two sexes, and that most first names are unique to one sex (with obvious exceptions, such as Pat). Does it really make sense to say it understands anything? Given this lack of real understanding, I certainly don't think AI will understand advanced concepts of this sort. It is interesting that some mathematicians do actually use computers. But they program them to do long strings of calculations that they (the mathematicians) have figured out will settle some point. The conceptual reasoning comes entirely from the human mathematician.
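The shallowness of that sort of 'knowledge' is plain if you sketch it in code - here as nothing more than a lookup table (the names chosen are hypothetical):

```python
# Hypothetical toy fact base: first name -> usual sex, None where ambiguous.
NAME_SEX = {"alice": "F", "bob": "M", "carol": "F", "david": "M", "pat": None}

def guess_sex(first_name):
    """'Understands' that most first names belong to one sex -
    really just a dictionary lookup, with no concept behind it."""
    return NAME_SEX.get(first_name.lower())

print(guess_sex("David"))  # prints 'M'
print(guess_sex("Pat"))    # prints 'None' - the obvious exception
```

The program answers correctly, yet it would seem odd to say it understands anything about names, sexes, or people.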
Of course, there were attempts to use AI to do maths - particularly calculus. However, modern computer algebra programs do not use such methods and are not touted as AI software.
http://www.maheshsubramaniya.com/ar...ifferential-calculus-using-prolog-part-1.html
(Differentiation is an easy problem because it recursively decomposes, but integration (the reverse process) is much harder.) The best integration algorithms have been hand-crafted by mathematicians and programmers.
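The recursive decomposition that makes differentiation easy can be sketched in a few lines - here in Python rather than Prolog, with expressions as nested tuples, covering just the sum and product rules:

```python
def diff(expr, var):
    """Symbolic differentiation by recursive decomposition.
    An expression is a number, a variable name (string),
    or a tuple ('+', a, b) or ('*', a, b)."""
    if isinstance(expr, (int, float)):
        return 0                      # d/dx of a constant is 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':                     # sum rule: (a + b)' = a' + b'
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                     # product rule: (a*b)' = a'*b + a*b'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x) -> x'*x + x*x', i.e. 2x before simplification
print(diff(('*', 'x', 'x'), 'x'))  # ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

Each rule simply hands smaller subexpressions back to `diff`, which is why the problem falls apart so neatly; no comparably mechanical rule set exists for integration, which is why integrators rely on hand-crafted algorithms instead.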
Your second point is clearly true, but that isn't really what AI is supposed to be about.
Your third point seems to relate much more to the concept of AI that is projected by hype. I like the example of driverless cars. I don't believe those cars will ever deliver the dream that they represent - driving at a reasonable speed but safely (or at least as safely as people do) under any conditions, on any roads, in any traffic, handling diversion signs, horses, fallen trees, lorries shedding their loads, ice, and road works, with nobody at the wheel ready to take over. I think Google have created a wonderful test of AI with this project. My guess is that the project will be shelved at some point because of legal problems!
David