217. DR. GARY MARCUS SANDBAGGED BY NEAR-DEATH EXPERIENCE SCIENCE QUESTIONS

I get your point, but in common parlance, a robot is not biological (certainly not natural) at all. Since Physics is also an artifact, and is always evolving, physics all the way down doesn't rule much out. Standard physics ceased to be deterministic a century ago, and whether or not an account of consciousness requires post-classical physics (or quantum computation), it need not be "mechanical" (or deterministic).
Well if quantum computers ever became a reality, I suppose a robot could have one on board! I think this argument is essentially just semantics.
A computer playing chess is "intelligent" in my lexicon, and I don't suppose it plays the game consciously, but I have no way of knowing really.
The trouble is, in the early days of computers people thought that accurately adding up columns of numbers in seconds (remember they were orders of magnitude slower than modern computers) showed these machines were intelligent. Just because a gadget, designed by a human, seems to do something intelligent, doesn't mean it is the source of the intelligence. As a software developer I feel that statement very directly.
I suppose you write both consciously and intelligently. A spider hunts prey intelligently and may hunt consciously. A computer plays chess intelligently and unconsciously. I suppose so anyway. I can only suppose so. I can't observe another being's consciousness, as far as I know.
There was a time when many scientists attributed consciousness only to humans - as if that in any way reduced the mystery of consciousness. I tend to think consciousness goes all the way down:

http://www.basic.northwestern.edu/g-buehler/FRAME.HTM
I tend to think that the author regards intelligence and consciousness as the same thing, because the page contains this quote:
To the best of my knowledge, the term CELL INTELLIGENCE was coined by Nels Quevli in the year 1916 in his book entitled "Cell intelligence: The cause of growth, heredity and instinctive actions, illustrating that the cell is a conscious, intelligent being, and, by reason thereof, plans and builds all plants and animals in the same manner that man constructs houses, railroads and other structures."

I doubt that you could. With or without QM, complex dynamic systems are chaotic. We carefully, laboriously, tediously construct artificial systems to be predictable. I do it for a living and spend most of my time debugging (making unpredictable machinery predictable).
Well this is possibly the heart of the matter - QM tells us that every interaction starts out as a wave function that contains several possible outcomes, and at some (poorly specified!) point this resolves into one of those possibilities. On the face of it, that would combine with chaos, which amplifies small changes to macroscopic levels, to produce useless random behaviour. Indeed if some alien intelligence were examining our makeup, it might use this to prove that we could not be intelligent!
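That amplification of tiny differences to macroscopic levels is easy to demonstrate. Here is a toy Python sketch (my illustration only - the logistic map stands in for any chaotic dynamics, and the particular numbers are arbitrary):

```python
# Iterate the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4),
# starting from two initial conditions that differ by one part in a billion.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)  # perturbed by 1e-9

# The gap roughly doubles each step, so within a few dozen iterations
# the two orbits bear no resemblance to one another.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(divergence[0], max(divergence))
```

The starting gap is 1e-9, yet the orbits end up macroscopically different - which is the sense in which chaos could, in principle, amplify a microscopic nudge into behaviour of a whole organism.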

However, I have come to suspect that the early quantum pioneers were right to formulate (one interpretation of) QM in terms of 'observers'. If indeed our consciousness is independent of our brains, one way it could interact with the brain would be by those observations. The physicist Henry Stapp has written extensively about this possibility. If consciousness works that way, and can foresee the chaotic outcome of its 'control by observation', it can use chaos to amplify its intentions.

My feeling is that machines or computer programs only exhibit the intelligence that was in the people who designed them.
If you could really build the machine you imagine, maybe it would, but we're discussing science fiction here. In reality, you can't even predict the trajectory of three bodies interacting gravitationally (classical gravity) indefinitely, and I suppose the three body system is unconscious, so the question of consciousness has little to do with predictability.
Yes, but doesn't that argument make you doubt that the brain can do any better? Since it obviously does, doesn't that suggest to you that it doesn't work as you think it does?
How any system feels is not something I can know. I can't even know how you feel. I can read a symbolic description of your feelings and interpret it in terms of my feelings, but I seem limited to this experience of your feelings.
Well this is what the philosopher David Chalmers dubbed the 'Hard Problem', I expect you know the argument, and it seems to me to point in the direction I prefer - that reality is composed of a combination of matter and 'mental stuff', and that it is irreducible to a purely physical description.
An equation (or algorithm) is an abstraction. An actual machine somehow simulating my brain's information processing is necessarily concrete. It is material and occupies space and time. It is "algorithmic" only in the sense that an isomorphism exists between the symbols of an abstract system and concrete, material objects interacting in space and time. How can consciousness arise from such a thing? I don't know. That's what I'm here to discuss.
Well yes, and I suppose at heart this is what Skeptiko (or at least the non-political part!) is all about. The thing is, once you even suspect there is a non-material aspect to life, you don't find it hard to find the evidence - normally that evidence is just dismissed because it is inconsistent with the purely mechanistic view of life - a circular argument if ever there was one! The main evidence for a non-material mind can be found in:

NDEs

OBEs

Deathbed visions in which sometimes people dying of severe dementia suddenly start talking coherently and recognise their family again.

Classical telepathy experiments.

People who hear voices, or exhibit multiple personalities.

A variety of curious incidents in which people who suffer brain damage seem to acquire new high level skills after they recover.

The fact that some people seem to have access to knowledge directly. A famous case was the mathematician Ramanujan, who came up with extraordinarily intricate theorems that he could not prove. His explanation was that an Indian goddess fed him these results in dreams!

Evidence for reincarnation (look up Professor Ian Stevenson for details)

Remote viewing

Etc etc.

If you're suggesting that consciousness cannot arise from matter occupying space and time, I'm curious to know your alternative, but simply naming an otherwise mysterious "stuff of souls" distinct from material stuff (as in Cartesian dualism) doesn't add much to my understanding. I already have the word "consciousness", and I don't object to "soul", but the word doesn't get me one step closer to a disembodied or immortal consciousness or an explanation of near death experiences.
I try to avoid the word 'soul' - it carries too much baggage. I think the problem is that science grew up investigating matter and not mind, for the simple reason that studying the mind was liable to get a researcher burned at the stake.

Our understanding of mind is pretty damn limited - which is reflected in the difficulty doctors have in effectively treating people with mental problems.

BTW did you take on board my argument that a computer - or anything isomorphic with a computer - can be said to simply check timeless theorems - so computer consciousness doesn't make sense?

David
 
I'd say there are, broadly speaking, three opinions about consciousness:

1. It doesn't exist (as in eliminative materialism -- EM)
2. Its existence is an illusion
3. It's real

I suppose that some might argue there's not much difference between 1 and 2.

Thing is, whatever one believes about consciousness, I can't see there's any doubt that we all experience the thing we refer to as consciousness. Sam Harris is a materialist (though not someone endorsing EM) and agrees with this:

https://samharris.org/the-mystery-of-consciousness/
and the follow-up:
https://samharris.org/the-mystery-of-consciousness-ii/

What Harris seems largely not to consider is that a lot of his argument is based on the assumption that we live in a WYSIWYG universe. The brain is really existent exactly as it appears to be and is really, albeit in some ineffable way, the cause of consciousness:

The analogy is a bad one: Life is defined according to external criteria; Consciousness is not (and, I think, cannot be). We would never have occasion to say of something that does not eat, excrete, grow, or reproduce that it might nevertheless be “alive.” It might, however, be conscious.
But other analogies seem to offer hope. Consider our sense of sight: Doesn’t vision emerge from processes that are themselves blind? And doesn’t such a miracle of emergence make consciousness seem less mysterious?
Unfortunately, no. In the case of vision, we are speaking merely about the transduction of one form of energy into another (electromagnetic into electrochemical). Photons cause light-sensitive proteins to alter the spontaneous firing rates of our rods and cones, beginning an electrochemical cascade that affects neurons in many areas of the brain—achieving, among other things, a topographical mapping of the visual scene onto the visual cortex. While this chain of events is complicated, the fact of its occurrence is not in principle mysterious. The emergence of vision from a blind apparatus strikes us as a difficult problem simply because when we think of vision, we think of the conscious experience of seeing. That eyes and visual cortices emerged over the course of evolution presents no special obstacles to us; that there should be “something that it is like” to be the union of an eye and a visual cortex is itself the problem of consciousness—and it is as intractable in this form as in any other.

Notice in the quoted passages how Harris simply assumes that our sense of sight is something that is understood, at least in principle, and has a definite and known cause. I think this is because he has a view of the world that is based on naive realism, where systems such as the visual one can easily (relatively at any rate -- witness his use of the adverb "merely") be explained in terms of a causality based on materialism. It all boils down to how particles and forces as physics models them interact. Well, of course, if one accepts the models of physics as real and concrete -- as reality itself -- one could hardly come up with a different explanation. It's all perfectly logical within that paradigm.

What he doesn't seem to consider is that we might live in a world of appearances, and that what is illusory might be causality as materialists conceive of it. Conceiving of -- modelling in other words -- it this way is undeniably useful and has led to many technological advances, but at the same time, it has led to a number of confounding issues such as the Hard Problem of Consciousness, as well as to seemingly irresolvable ones such as the incompatibility of General Relativity and Quantum Theory. It has also led to the development of weird and wonderful doctrinalism, where scientismists confidently spout about the age of the universe, multiple universes, inflation, black holes and so on, which in truth are conjectural and backed up by very little evidence.

IOW, the reifying of the models of physics has led to all sorts of paradoxes and the turning of the scientific enterprise into what could be seen as a parody -- a ship captained by someone who is blindfolded by dogma. It's as if he wants to go to the West Indies, but refuses to use navigation aids such as sextants, instead relying on his preferences about what direction he should go in.

So what does it mean to say that we might live in a world of appearances? And what effect does this have on the current model of causality? Let's take an example. That there is a correlation between the brain and consciousness seems true enough. The physicalist takes this at face value and asserts that the brain generates consciousness, leading to the aforesaid Hard Problem: whence comes the experience of consciousness, of emotions, redness, the beauty of birdsong, the taste of oranges -- qualia in other words? Why couldn't we be conscious without experiencing qualia? Fact is, I don't think we could. Qualia are what generate the sensation of differences between various perceptions, and without qualia, it's hard to conclude that the world would appear as anything but undifferentiated mush.

Qualia, I hypothesise, are in a sense the basic elements of reality, rather than things that arise out of the particulate model of physics. When we look at the brain, it isn't an immensely complicated arrangement of material particles engaging in a dance of blind interactive forces, so much as an immensely complicated interaction of conscious processes giving the appearance of what physicalists think of as concrete particles and forces. All phenomena in fact could be the result of the interaction of conscious processes which create an appearance to our sensorium.

IOW, physicalism might have things the wrong way round: consciousness isn't caused by the physical, but rather the physical is caused by consciousness; physicalism is merely a way (quite often, but far from always, useful) of interpreting what arises out of consciousness. We have a very good reason for suspecting this could be the case, namely the paradox that the Hard Problem (which would become essentially an artifact of physicalism), creates.

The other way around, i.e. in idealism, the Hard Problem disappears; the paradox is resolved by eliminating it altogether -- one might almost say it is the complement of eliminative materialism, with the significant advantage that at a stroke it gets rid of a seemingly insurmountable problem. It also eliminates the problem that panpsychists have, namely how it could be that lots of tiny consciousnesses in elementary particles manage to get together to generate progressively more advanced consciousnesses. I mean, how would a blind blueprint for such a thing occur? Why should it occur? Even if it did occur, it would tend to support some kind of teleology or conscious source of direction, or plan.

I understand as well as the next person that it's very hard to wrap one's head around the concept of idealism. My senses, and those of everyone else, are constantly and insistently telling us that chairs, tables, stars and galaxies are literally what they appear to be. And there is to some extent great utility in thinking of things that way as a survival aid -- see this video by Donald Hoffman:


Hoffman speaks often of evolution and selection pressure, etc. and might seem like a Darwinist. But is he? I too accept RM + NS have an effect, but only at the microevolutionary level; what causes major developments in body form (macroevolution) above the level of genus is still unknown. I haven't seen him being directly asked the question, but nothing I've so far heard him say, to my mind at least, implies he's completely swallowed the conventional view of Darwinism.

His flavour of idealism (a bit like but not exactly the same as Bishop Berkeley's, as Hoffman explains in the video) isn't saying that chairs and tables don't exist, only that they are a shorthand way of our interpreting reality, which may not actually be literally anything like tables and chairs. Some flavours of idealism grant that there is a reality, but the "thing in itself" (Kant's "Ding an sich") can never be directly perceived; it always appears to us as what Hoffman might call an "icon": a kind of shadow of what is actually existent, good enough for us to survive, with a bit left over for constructing our hypotheses about reality that we employ in empirically-based science.

I warm to people like Hoffman and Bernardo Kastrup because, even if they haven't got things entirely correct, they are challenging the naive assumptions of materialists and promising some kind of way out of the impasse about such things as the Hard and Combination Problems. Slowly, slowly, I think they are gaining ground in the general population if not so much in die-hard materialist circles. For my part, I reject materialism and lean towards idealism as the best explanation so far about the true nature of reality.
 
The trouble is, in the early days of computers people thought that accurately adding up columns of numbers in seconds (remember they were orders of magnitude slower than modern computers) showed these machines were intelligent.
I have no problem using the word "intelligent" this way, but again, I distinguish intelligence from consciousness, and people using "intelligence" in the context of "artificial intelligence" typically make this distinction. Words mean only what people commonly mean by them.

Just because a gadget, designed by a human, seems to do something intelligent, doesn't mean it is the source of the intelligence.
If you're saying that describing complex information processing with the word "intelligence" is "wrong", you're the one arguing semantics. I'm only using the word "intelligent" as a large class of people commonly use it. I'm not asserting anything fundamental about how brains operate or consciousness emerges.

There was a time when many scientists attributed consciousness only to humans - as if that in any way reduced the mystery of consciousness.
A Portia (spider) could be conscious. I don't pretend to know. I do pretend to know that dogs are conscious.

I also have no problem saying that cells exhibit "intelligence", but I'm not saying anything about consciousness. I'll say that life is "intelligently designed", but I'm not saying anything about an anthropomorphic Creator, conscious or otherwise. I'm using "intelligence" in the way that AI proponents use the word (and being Darwinian), because Evolution by Natural Selection is a model of computation in the way that the Non-deterministic Turing Machine is one.
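To illustrate what I mean by evolution as a model of computation, here's a toy mutation-plus-selection loop in Python (purely illustrative and entirely my own: the fixed target string is an artificial stand-in for a fitness function - real selection has no such explicit target):

```python
import random

random.seed(0)

TARGET = [1] * 20          # hypothetical fitness peak, purely for illustration

def fitness(genome):
    """Count positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Random mutation: flip each bit independently with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Natural selection caricatured as a (1+1) hill-climb:
# keep the offspring whenever it is at least as fit as the parent.
genome = [random.randint(0, 1) for _ in range(20)]
for generation in range(500):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child

print(fitness(genome))
```

Nothing in the loop "knows" the answer, yet blind variation filtered by a fitness criterion reliably computes its way uphill - which is the (limited) sense in which I'm calling the process a computation.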

I tend to think that the author regards intelligence and consciousness as the same thing, because the page contains this quote:
He seems to think so, but equating the two is not common in the "artificial intelligence" context. I suppose a cell could be conscious in some sense, but I don't pretend to know. I do pretend to know that a cell is "intelligent", because I define "intelligence" as complex information processing isomorphic to a Turing Machine, and Turing himself recognized that biological organisms process information this way.
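For concreteness, here is a minimal Turing machine in Python (a toy of my own devising, nothing from the cell-intelligence page): a three-rule table driving a head along a tape, which is all the machinery my definition of "intelligence" ultimately appeals to.

```python
# Rule table for a Turing machine that flips every bit on its tape, then halts.
RULES = {
    # (state, symbol) -> (symbol to write, head movement, next state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape_str):
    tape = list(tape_str) + ["_"]      # "_" marks the blank at the end
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))  # -> 01001
```

Any system whose information processing can be put in correspondence with such a table - however much larger - falls under the definition, whether or not anything is conscious of it.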

On the face of it, that would combine with chaos, which amplifies small changes to macroscopic levels, to produce useless random behaviour.
Chaos and non-determinism do not imply useless behavior, only unpredictable behavior. Human behavior is notoriously unpredictable, but we are useful to one another.

Indeed if some alien intelligence were examining our makeup, it might use this to prove that we could not be intelligent!
Proving anything "not intelligent" requires a rigorous definition of "intelligence". I have a rigorous definition of "intelligence", but I have no rigorous definition of "consciousness". I only know it when I experience it.

However, I have come to suspect that the early quantum pioneers were right to formulate (one interpretation of) QM in terms of 'observers'.
Architects of QM wanted to incorporate observation (in the conventional sense of an experimenter observing the outcome of an experiment) directly into the theory, but I doubt that they ever intended to explain consciousness. Certainly, Schrodinger did not.

A quantum's wave function collapses upon interaction with another quantum, and this sort of interaction is necessary for an experimental observation. We can say that one quantum "observes" the other without an experimenter present, but this "observation" has no obvious relationship with consciousness.
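The Born rule makes this concrete without invoking any experimenter. A toy sketch (mine, with arbitrarily chosen amplitudes) of a two-state superposition resolving into a single outcome:

```python
import random

def measure(amplitudes):
    """Collapse a superposition: pick one basis state with Born-rule
    probability |a|^2; the post-measurement state is that basis state alone."""
    probs = [abs(a) ** 2 for a in amplitudes]
    outcome = random.choices(range(len(amplitudes)), weights=probs)[0]
    collapsed = [0.0] * len(amplitudes)
    collapsed[outcome] = 1.0
    return outcome, collapsed

# |psi> = sqrt(0.2)|0> + sqrt(0.8)|1>: outcome 1 turns up ~80% of the time,
# but any single run is irreducibly random.
psi = [0.2 ** 0.5, 0.8 ** 0.5]
random.seed(1)
counts = [0, 0]
for _ in range(10000):
    outcome, _ = measure(psi)
    counts[outcome] += 1
print(counts)
```

The statistics are fixed by the amplitudes; nothing in the formalism says anything about who or what registers the outcome, which is exactly the gap the "consciousness causes collapse" interpretations try to fill.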

If indeed our consciousness is independent of our brains, one way it could interact with the brain would be by those observations.
My vision seems dependent on a lens focusing photons on an optic nerve, but I'm not conscious of the photons. I'm conscious of objects outside of my head (presumably of a model of the outside of my head constructed within my head). Quantum mechanical interactions are certainly involved, but I don't know how they're involved in consciousness. I do know how classical information processing systems can model objects outside of my head.

My feeling is that machines or computer programs only exhibit the intelligence that was in the people who designed them.
You don't use the word "intelligence" as I do. When I solve a differential equation, do I only exhibit intelligence in the person of my calculus instructor?

Yes, but doesn't that argument make you doubt that the brain can do any better? Since it obviously does, doesn't that suggest to you that it doesn't work as you think it does?
I don't know how a brain works. I have a very crude idea of how a brain processes information in very limited applications.

Well this is what the philosopher David Chalmers dubbed the 'Hard Problem', I expect you know the argument, and it seems to me to point in the direction I prefer - that reality is composed of a combination of matter and 'mental stuff', and that it is irreducible to a purely physical description.
I don't accept Cartesian dualism, but I don't reject it either. If "mental stuff" distinguishable from matter exists, it need not be separable from matter, and the organization of mental stuff and matter constituting my singular consciousness need not survive disintegration of the organization. The hypothetical "mental stuff" doesn't increase my understanding much. Like "dark matter", it's more of a label for something I don't understand.

I try to avoid the word 'soul' - it carries too much baggage.
But once you suspect there is a soul, you find voluminous evidence for one, and testimony to this evidence is the baggage. NDEs, OBEs and the rest are evidence of something. I've had vivid dreams and lucid dreams as well as waking consciousness, and I "hear voices" and experience something like "multiple personalities", but the only evidence I've seen for telepathy and reincarnation is very sketchy. Why is it so sketchy? If telepathy exists, why are controlled experiments so difficult? Radio communication was once just as inconceivable, but hardly anyone doubts it now, because we all experience it routinely. Why is telepathy different?

I think the problem is that science grew up investigating matter and not mind, for the simple reason that studying the mind was liable to get a researcher burned at the stake.
Science didn't grow up investigating quantum mechanical waves, and the waves are not "material". Descriptions of the phenomenon emerged from science.

Our understanding of mind is pretty damn limited - which is reflected in the difficulty doctors have in effectively treating people with mental problems.
Minds and brains are complex. Treating plant disease can also be difficult.

BTW did you take on board my argument that a computer - or anything isomorphic with a computer - can be said to simply check timeless theorems - so computer consciousness doesn't make sense?
I don't claim that computers are or ever will be conscious, but since I don't understand consciousness, I don't know what does or does not make sense about it. I don't understand "timeless theorems", but computers were mechanical theorem provers from the outset, i.e. Turing developed his "machine" to address Hilbert's Entscheidungsproblem. I don't know either why a computer (inorganic or otherwise) can't be conscious or how it could be.
 
I have no problem using the word "intelligent" this way, but again, I distinguish intelligence from consciousness, and people using "intelligence" in the context of "artificial intelligence" typically make this distinction. Words mean only what people commonly mean by them.
I too distinguish the concepts of intelligence and consciousness, all I am saying is that I doubt intelligence exists without consciousness.

One reason for saying that is that for something to show intelligence, it has to have intention (sometimes referred to as agency). A piece of machinery only grinds its way through a sequence of tasks - it does not intend to do anything. To say otherwise anthropomorphises a bit of electronics!
A Portia (spider) could be conscious. I don't pretend to know. I do pretend to know that dogs are conscious.
Agreed, and I know cats are conscious too!
I also have no problem saying that cells exhibit "intelligence", but I'm not saying anything about consciousness. I'll say that life is "intelligently designed", but I'm not saying anything about an anthropomorphic Creator, conscious or otherwise. I'm using "intelligence" in the way that AI proponents use the word (and being Darwinian), because Evolution by Natural Selection is a model of computation in the way that the Non-deterministic Turing Machine is one.
To see some of the problems with Darwinian selection in the context of DNA (remember that Darwin knew nothing about what genes were made of) read Behe's book. Evolution by RM+NS isn't tenable. Most of the books from the Discovery Institute are technical books for intelligent laymen, and they also publish scientific papers. I like the Discovery Institute because I think they are getting at a basic truth - we did not evolve by RM+NS - but I want to emphasise, I am not a Christian, nor do I belong to any other faith.
He seems to think so, but equating the two is not common in the "artificial intelligence" context.
I'd rather keep the two terms distinct but propose that intelligence implies consciousness (to repeat myself). That means I would argue that the artefacts that are currently called 'intelligent' merely expose the cleverness of their creators. I mean on that basis, you might as well call a mechanical wrist watch intelligent!
Chaos and non-determinism do not imply useless behavior, only unpredictable behavior. Human behavior is notoriously unpredictable, but we are useful to one another.
Yes, but we want our bodies to do something for a specific purpose - anything else is useless in that context. You might be able to use chaotic behaviour for some purpose, but that would be because you had analysed it in depth beforehand.
Proving anything "not intelligent" requires a rigorous definition of "intelligence". I have a rigorous definition of "intelligence", but I have no rigorous definition of "consciousness". I only know it when I experience it.
Unlike the average religious site, we aren't really interested in certainty. Consciousness is certainly hard to define, and any definition seems to detract from the real meaning of the word. My feeling is that intelligence is equally hard to define if you want to apply it to specific objects:
"This wrist watch is intelligent" - No
"This wrist watch designer is/was intelligent" - Yes
Architects of QM wanted to incorporate observation (in the conventional sense of an experimenter observing the outcome of an experiment) directly into the theory, but I doubt that they ever intended to explain consciousness. Certainly, Schrodinger did not.
This gives you some idea of the range of their views. The important thing to realise is that since that time QM has become, if anything, even stranger, with entanglement (which Einstein thought would prove QM wrong) and wave functions that are collapsed after the particle they describe has reached the detector!

https://phys.org/news/2009-06-quantum-mysticism-forgotten.html

A quantum's wave function collapses upon interaction with another quantum, and this sort of interaction is necessary for an experimental observation. We can say that one quantum "observes" the other without an experimenter present, but this "observation" has no obvious relationship with consciousness.
Hang on, it is far from clear when the wave function collapses. There is even an interpretation of QM, the Many Worlds interpretation, which denies that the wave function ever collapses - at the expense of an ever-multiplying set of copies of reality!
You don't use the word "intelligence" as I do. When I solve a differential equation, do I only exhibit intelligence in the person of my calculus instructor?
Well if your instructor just gave you a formula for a number of differential equations, and you did a lookup, then he would be the intelligent one. However, if you learn the theory of differential equations and apply it to a particular problem, then I would say the intelligence is shared. By contrast, computers can follow a fixed algorithm to differentiate an expression, so that does not require intelligence. Most of an engineer's intelligence is used choosing the equations plus any approximations and boundary conditions.
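To show what I mean by a fixed algorithm, here's a toy recursive differentiator in Python (my own sketch - it handles only sums and products, with no simplification):

```python
# Expressions are numbers, the symbol "x", or tuples ("+", a, b) / ("*", a, b).
# d/dx is then a purely mechanical walk over the expression tree.
def diff(expr):
    if expr == "x":
        return 1                           # d/dx x = 1
    if isinstance(expr, (int, float)):
        return 0                           # constants vanish
    op, a, b = expr
    if op == "+":
        return ("+", diff(a), diff(b))     # sum rule
    if op == "*":
        return ("+", ("*", diff(a), b),    # product rule
                     ("*", a, diff(b)))
    raise ValueError(op)

# d/dx (x*x + 3) = (1*x + x*1) + 0, left unsimplified
print(diff(("+", ("*", "x", "x"), 3)))
```

Every step is dictated by the rule table; there is no choice anywhere, which is why I say the cleverness lies with whoever wrote down the rules of calculus, not with the machine applying them.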
I don't know how a brain works. I have a very crude idea of how a brain processes information in very limited applications.
Well doesn't your crude conception admit of the possibility that the process could be implemented on a computer?
But once you suspect there is a soul, you find voluminous evidence for one, and testimony to this evidence is the baggage. NDEs, OBEs and the rest are evidence of something. I've had vivid dreams and lucid dreams as well as waking consciousness, and I "hear voices" and experience something like "multiple personalities", but the only evidence I've seen for telepathy and reincarnation is very sketchy. Why is it so sketchy? If telepathy exists, why are controlled experiments so difficult? Radio communication was once as inconceivable, but hardly anyone doubts it now, because we all experience it routinely. Why is telepathy different?
Well I hope you can keep the voices and personalities under control. There are websites and support groups for people who manage those symptoms without drugs.

It is estimated that about 10% of people who are revived from cardiac arrest have some sort of NDE! That is an incredible amount of data. There is absolutely no sensible theory as to how such a brain mechanism could evolve via RM+NS - I mean in the wild any animal that was that far gone is incredibly unlikely to revive and pass on its genes!

Telepathy experiments are mainly difficult because sceptics keep demanding more and more rigorous controls. For example in the best experiments either the sender or receiver is placed in a Faraday cage just in case their brains communicate by electromagnetic means. Read Dean Radin's book for a reasonably rigorous introduction:

https://www.amazon.co.uk/dp/B003YFIZLO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1

As I said, the word 'soul' comes with loads of baggage. It is a daft word to use if you don't belong to a religion.
I don't claim that computers are or ever will be conscious, but since I don't understand consciousness, I don't know what does or does not make sense about it. Computers were mechanical theorem provers from the outset, but I don't know either why they can't be conscious or how they could be.

The main thing I want to do is instil some doubt into you! This site isn't about belief, it is about being open to, and exploring other possibilities.

David
 
One reason for saying that is that for something to show intelligence, it has to have intention (sometimes referred to as agency).
Suppose a robot on wheels is programmed to roll around on the floor of an empty room, stopping at walls and turning to explore in a new direction until a green circle appears in the image captured by its camera/eye or until it has fully explored the room. I call this robot "intelligent" and "intentional". It intends to find a green circle.

Suppose a green circle appears on the robot itself below the camera, and suppose a mirror hangs on a wall of the room. While exploring the room, the robot approaches the mirror, and the reflection of the green circle appears in its camera. The robot stops in front of the mirror gazing at its own reflection. I call the robot "self-aware", but this "awareness" has nothing to do with consciousness. On the other hand, if a spider is conscious in some primitive sense, I don't know why this robot cannot be. I don't know how it can be either.
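For concreteness, here's a grid-world caricature of that robot in Python (my own toy - the grid, the "green circle" cell and the turn-when-blocked rule are all invented for illustration):

```python
# The robot rolls forward until it hits a wall, turns, and stops when its
# "camera" (the cell it occupies) shows the green circle G.
GRID = [
    "#######",
    "#.....#",
    "#.###.#",
    "#...G.#",   # G marks the green circle
    "#######",
]
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # east, south, west, north

def explore(start=(1, 1)):
    r, c = start
    d = 0                                    # start heading east
    for _ in range(200):                     # safety bound on steps
        if GRID[r][c] == "G":
            return (r, c)                    # "intention" satisfied: stop here
        nr, nc = r + DIRS[d][0], c + DIRS[d][1]
        if GRID[nr][nc] == "#":
            d = (d + 1) % 4                  # wall ahead: turn and try again
        else:
            r, c = nr, nc                    # roll forward one cell
    return None                              # gave up (shouldn't happen here)

print(explore())
```

A dozen lines suffice for behaviour I'd happily describe as "intending to find the green circle" - which is the point: the intentional vocabulary fits without settling anything about consciousness.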

A piece of machinery only grinds its way through a sequence of tasks - it does not intend to do anything. To say otherwise anthropomorphises a bit of electronics!
Saying so only defines "intent" without implying consciousness.

I'm not denying consciousness here. I'm as certain of my own consciousness as of anything, but it is a mystery, so I don't have much language to describe it.

Evolution by RM+NS isn't tenable.
I haven't read the book, but I'm not convinced that Behe establishes irreducible complexity or the necessity of an anthropomorphically (or conscious) intelligent designer of living forms. Gaia (or the Earth's biosphere understood as a single organism) is certainly intelligent in the sense of "intelligence" I'm discussing here, and I can believe that She is conscious in some sense, but I have no way of knowing. I'm reasonably sure that She doesn't experience a stream of consciousness in Hebrew or literally speak to men through burning bushes, and I'm keenly aware that referring to Her as "She" is anthropomorphic. I have no problem with allegory (or traditional theology) as long as we understand what we're doing.

I mean on that basis, you might as well call a mechanical wrist watch intelligent!
Google "smart watch". Words mean what people commonly mean by them.

Yes, but we want our bodies to do something for a specific purpose - anything else is useless in that context.
When a bird plucks a worm from the ground and drops it into her chick's mouth, she is useful to the chick whether or not she consciously intends to be.

Unlike the average religious site, we aren't really interested in certainty.
I'm interested in logical rigor. I can be certain (or close enough) that the Pythagorean theorem follows logically from Euclid's postulates, even if it doesn't follow without the Fifth and even if a coherent and useful geometry is possible without the Fifth. I can't be certain of very much, but I can be a careful thinker.

My feeling is that intelligence is equally hard to define if you want to apply it to specific objects:
I define "intelligence" simply (though an intelligence can be highly complex) and leave all of the difficulty in the definition of "consciousness". Both the watch and the watchmaker may be intelligent, even if the watchmaker is blind. I withhold judgement on the consciousness of either ... unless I'm the watchmaker myself.

This gives you some idea of the range of their views.

"Einstein, for his part, adamantly opposed any subjectivity in science. He disagreed with Bohr’s view that it is unscientific to inquire whether or not Schrödinger’s cat in a box is alive or dead before an observation is made."

Strictly speaking, I agree with Bohr that inquiring (or speculating) on the state of the cat is unscientific, but this agreement has nothing to do with the cat being "half dead" or "both dead and alive" until observed. The whole point of Schrödinger’s thought experiment is to ridicule this "both dead and alive" idea.

https://pdfs.semanticscholar.org/81...7.1758972202.1563829798-1717534098.1563829798

That's Schrödinger’s essay containing the cat experiment. See section 5.

Neither Bohr nor Schrödinger believed that a conscious observation is necessary to collapse the wave function. The Copenhagen interpretation says that the wave function represents the state of a hypothetical observer's knowledge before the wave function collapses, not that a conscious observation precipitates the collapse.

The important thing to realize is that since that time QM has become, if anything, even stranger, with entanglement (which Einstein thought would prove QM wrong) and wave functions that are collapsed after the particle they describe has reached the detector!
Entanglement only implies non-locality, not the necessity of a conscious observer for wave function collapse. Einstein's problem was "spooky action at a distance", not conscious spooks in the machinery. Non-locality also seems spooky to me, but it has nothing obviously to do with consciousness, and entangled states are extremely fragile (thus the difficulty of quantum computing), so entanglement doesn't seem a likely mechanism for telepathy.

Hang on, it is far from clear when the wave function collapses. There is even an interpretation of QM, the Many Worlds interpretation, which denies that the wave function ever collapses - at the expense of an ever-multiplying set of copies of reality!
If speculating on the state of the cat is unscientific, Many Worlds is not even wrong.

By contrast, computers can follow a fixed algorithm to differentiate an expression, so that does not require intelligence.
Intelligence is algorithmic in my way of thinking, but again, we're arguing over the meaning of a word here. Human intelligence far surpasses artificial intelligence in many respects, and it may differ in kind from artificial intelligence, but artificial intelligence is still evolving rapidly, and I don't know its limits.
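The point about differentiation being a fixed algorithm is easy to make concrete. Here is a minimal sketch (the tuple representation of expressions is my own, purely illustrative): the sum and product rules are applied mechanically, with no understanding required.

```python
# A minimal fixed algorithm for symbolic differentiation.
# Expressions are nested tuples: ("+", a, b), ("*", a, b),
# the variable "x", or a numeric constant.

def diff(expr, var="x"):
    if expr == var:
        return 1                        # d/dx x = 1
    if isinstance(expr, (int, float)):
        return 0                        # derivative of a constant
    op, a, b = expr
    if op == "+":                       # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                       # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x * x) = 1*x + x*1
print(diff(("*", "x", "x")))  # ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

Whether rule-following of this kind counts as "intelligence" is precisely the word we are arguing over.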

Well doesn't your crude conception admit of the possibility that the process could be implemented on a computer?
Everything a brain does on any existing computer? No. If "computer" describes any artifact yet to exist, I don't know. Maybe quantum computers will factor numbers that no classical computer can feasibly factor. I'm skeptical of this outcome, but experts believe the theory worth exploring. If quantum supremacy is achieved, would you also say that a quantum computer cannot be conscious or even intelligent?

Well I hope you can keep the voices and personalities under control. There are websites and support groups for people who manage those symptoms without drugs.
I've managed well enough for 57 years.

It is estimated that about 10% of people who are revived from cardiac arrest have some sort of NDE! That is an incredible amount of data.
I'm open to the data, but any data set has many possible interpretations.

There is absolutely no sensible theory as to how such a brain mechanism could evolve via RM+NS - I mean in the wild any animal that was that far gone is incredibly unlikely to revive and pass on its genes!
People defecate while dying even more frequently. Natural selection need not account for it. RM+NS is not a theory of everything.

Telepathy experiments are mainly difficult because sceptics keep demanding more and more rigorous controls.
Skeptics also demand more and more rigorous tests of spooky action at a distance, and physicists happily comply.

For example in the best experiments either the sender or receiver is placed in a Faraday cage just in case their brains communicate by electromagnetic means.
If two brains naturally communicate electromagnetically, I'd still call it telepathy, and this mechanism seems more likely than anything involving entanglement. If we don't discover natural, electromagnetic telepathy, we'll create it artificially soon enough. I'll add Radin's book to a long list, but I know too much about Quantum Mechanics to be encouraged by the title.

As I said, the word 'soul' comes with loads of baggage. It is a daft word to use if you don't belong to a religion.
I define "intelligence" very narrowly but define "religion" very broadly. Everyone belongs to a religion.
 
Intelligence critically involves the ability to dissent against programming, for novel and critically objective reasons which were not ascertainable by such programming or machine learning.

In philosophy this is called the Lady Lovelace Objection:

Lady Lovelace's Objection (Stanford Encyclopedia of Philosophy)
One of the most popular objections to the claim that there can be thinking machines is suggested by a remark made by Lady Lovelace in her memoir on Babbage's Analytical Engine:
"The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." (Hartree, p.70)
Turing cited that his Machine must be able to 1. Do something novel aside from its programming, 2. Decide that it does not want to do something, against its programming, and 3. Construct to a goal or Destruct towards a goal (even if chaos), aside from its programming.

You cannot deposit a bag of rocks in a bank and call yourself a millionaire. Logic cannot be built on bad logical objects.
 
That's not in the article you quote.

Corrected. It is a summary from Deutsch's "Universality and The Limits of Computation" from The Fabric of Reality... accidentally grabbed in the indent and blue'ing

What you are defining as 'intelligence' - both Deutsch and Wolfram define as 'computation'.

To wit, Stephen Wolfram cites in A New Kind of Science: "Section 12 - The Principle of Computational Equivalence" that there is indeed a limit to computational sophistication (p. 721) - but that is not defined as 'intelligence' - because intelligence does not yet have an observable computational sophistication limit.
 
What you are defining as 'intelligence' - both Deutsch and Wolfram define as 'computation'.
I am equating intelligence with computation, but I don't seem to be extending Wolfram. I haven't read the book, but "computational equivalence" sounds like the Church-Turing thesis, and Wikipedia summarizes Wolfram's conclusions as follows.

Wikipedia said:
At the deepest level, Wolfram argues that—like many of the most important scientific ideas—the principle of computational equivalence allows science to be more general by pointing out new ways in which humans are not "special"; that is, it has been claimed that the complexity of human intelligence makes us special, but the Principle asserts otherwise. In a sense, many of Wolfram's ideas are based on understanding the scientific process—including the human mind—as operating within the same universe it studies, rather than being outside it.

Even quantum computation doesn't violate this equivalence principle, i.e. a quantum computer can't in principle compute any function that a Turing machine cannot compute, though quantum computers might in principle compute functions that are intractable for classical computers. The classical computer may require more time, by orders of magnitude, than a quantum computer to solve some problems, but the models of computation are nonetheless equivalent in the sense of the Church-Turing thesis, and Wolfram apparently assumes that human mental processes are also equivalent.

I don't assume that a human mind is generally equivalent to a Turing machine even if the two are computationally equivalent, because minds are conscious, and computers may not be. I don't know that an artificial information processor cannot be conscious, but I don't know how it can be either. If a Turing machine or its equivalent (realized in silicon or whatever) cannot, even in principle, be conscious, then human minds are more than intelligent (in the sense of "intelligence" I'm using), but I don't assume this principle. I'm agnostic on this point.

The whole point of distinguishing intelligence from consciousness, as I do here, is to avoid equating conscious minds with machines that may not have subjective experiences, even in principle, but may nonetheless effectively compute what brains compute. "Artificial intelligence" is already firmly entrenched in the lexicon, and common usage of a word is what the word means definitively; however, recognizing "intelligence" in feats that machines now perform, like playing chess, does not imply much about consciousness, even if unconscious machines play better chess than any conscious mind.

Even an apparently creative machine need not be conscious. Being conscious of one's own creativity does not imply that consciousness is necessary for creativity any more than being conscious of playing chess implies the necessity of consciousness to play chess. I'm old enough to remember claims that unconscious machinery could never defeat consciously creative, human players at chess. These claims are now empirically false. It's now harder to say what a conscious mind can do that a presumably unconscious machine cannot do, aside from being conscious, but consciousness is not less remarkable for this reason.

But even if a classical computer cannot be conscious, and even if a quantum computer can't be conscious either, this conclusion doesn't imply much about OBEs or NDEs or distinguish these experiences from vivid dreams, even if scant evidence suggests that telepathic communication or other paranormal signals somehow inform the experiences. Whatever can be conscious seems nonetheless mortal and confined to a material body, at least in my subjective experience. I'd be happier believing otherwise, but I'm most skeptical of the happiest assumptions. It's a sad way of thinking, and I'm not recommending it here, only describing it, but I don't think I can escape it at this point.
 
I am equating intelligence with computation, but I don't seem to be extending Wolfram. I haven't read the book, but "computational equivalence" sounds like the Church-Turing thesis, and Wikipedia summarizes Wolfram's conclusions as follows.



Even quantum computation doesn't violate this equivalence principle, i.e. a quantum computer can't in principle compute any function that a Turing machine cannot compute, though quantum computers might in principle compute functions that are intractable for classical computers. The classical computer may require more time, by orders of magnitude, than a quantum computer to solve some problems, but the models of computation are nonetheless equivalent in the sense of the Church-Turing thesis, and Wolfram apparently assumes that human mental processes are also equivalent.

I don't assume that the human mind is equivalent to a Turing machine (in the sense of the Church-Turing thesis), because minds are conscious, and computers may not be. I don't know that they cannot be conscious, but I don't know how they can be either. If a Turing machine or its equivalent (realized in silicon or whatever) cannot, even in principle, be conscious, then human minds are more than intelligent (in the sense of "intelligence" I'm using), but I don't assume this principle. I'm an agnostic on this point.

The whole point of distinguishing intelligence from consciousness, as I do here, is to avoid equating conscious minds with machines that may not have subjective experiences, even in principle, but may nonetheless effectively compute what brains compute. "Artificial intelligence" is already firmly entrenched in the lexicon, and common usage of a word is what the word means definitively; however, recognizing "intelligence" in feats that machines now perform, like playing chess, does not imply much about consciousness, even if unconscious machines play better chess than any conscious mind. Even an apparently "creative" machine need not be conscious.

Well responded... ;;/? Agreed, intelligence is the entry door to this hall of mirrors. Computation is the path leading up to that door. Beyond it gets even more convoluted - so we do well to distinguish intelligence from consciousness, as you propose in this thread. Incremental critical path.

I don't take Wolfram at his word on the entirety of ergodicity he projects from The Principle of Computational Equivalence. On the one hand he contends that 'the human mind is nothing special', yet on the other hand he clearly states that 'there exists a limit to computational sophistication'. One can be burned at the stake professionally for such a contention, if left open ended.

The way he resolves this is through the declaration of a corollary of TPoCE, which states "there is just one highest level of computational sophistication". He then extends from that premise with the contention that a Universal Turing Machine is able to produce complex computations from a simplified set of initial conditions. (p. 724)

For over and over again we have seen that simple initial conditions are quite sufficient to produce behavior of immense complexity, and that making the initial conditions more complicated typically does not lead to behavior that looks any different.
(The problem I have with this as a cryptologist is that a sufficiently encrypted signal and noise are indistinguishable as behaviors (they don't look any different), but they are not the same as intelligences.)
This is the argument of the clinical neurologist. All one has to do is begin with a simple set of initial conditions, replicate the synapse expression and exchange of such simplicity in an interleaved feedback matrix of sufficient discrete-only complexity, and voilà, one has intelligence (and, as the null hypothesis, therefore also consciousness). This is a religious miracle - dressed up in a lab coat. I object to it scientifically in the same way as I object to the miracle of continuous interjected consciousness.

This is especially why I objected when you simplified the threshold of what qualifies as 'intelligence' - because such a Bridgman Reduction allows us to slip by (through induction and abduction) deontological conclusions without due rigor of science (deduction and falsification). Who needs to smuggle, pay and house voters into a democracy in order to overthrow it - when machines now have the right to vote as conscious entities, and an influenced machine is indistinguishable from a free-will machine under the simplified Turing Test?

Nonetheless, a computational machine which is based upon the initial set of this 'highest level of computational sophistication' - still cannot serve to produce non-deterministic outcomes. The idea that such a computational machine can produce any outcome which a Church-Turing λ-calculus - based upon natural numbers and discrete algorithms of effective method - can produce - is still limited to the set of deterministic outcomes. In other words, such a machine can produce isomorphisms of observable reality and nothing more (excluding the set of imaginary mathematics). This is the essence of a Universal Turing Machine at a highest level of computational sophistication.

But I contend, and have observed first hand - that humans can outstrip this limitation (see for example, Andrew Paquette's dice progression thread). Wolfram and Deutsch would not (mainly because they have not looked). This process of using oversimplification and only inductive inference (not seeking to falsify one's pet theory) to determine the distinctions between computation, intelligence, self-awareness, consciousness and the hard problem of consciousness - is foolish. Such things can only be inferred by deductive inference. Avoiding deductive inference in favor of friendly inductive models (ground up computational-neurological analogues) is a method of pseudoscience called methodical deescalation.
 
Suppose a robot on wheels is programmed to roll around on the floor of an empty room, stopping at walls and turning to explore in a new direction until a green circle appears in the image captured by its camera/eye or until it has fully explored the room. I call this robot "intelligent" and "intentional". It intends to find a green circle.

Suppose a green circle appears on the robot itself below the camera, and suppose a mirror hangs on a wall of the room. While exploring the room, the robot approaches the mirror, and the reflection of the green circle appears in its camera. The robot stops in front of the mirror gazing at its own reflection. I call the robot "self-aware", but this "awareness" has nothing to do with consciousness. On the other hand, if a spider is conscious in some primitive sense, I don't know why this robot cannot be. I don't know how it can be either.


Saying so only defines "intent" without implying consciousness.

I'm not denying consciousness here. I'm as certain of my own consciousness as of anything, but it is a mystery, so I don't have much language to describe it.


I haven't read the book, but I'm not convinced that Behe establishes irreducible complexity or the necessity of an anthropomorphically (or conscious) intelligent designer of living forms.
Irreducible complexity is one argument, but Behe introduces a second one. I had a go at explaining the essence of Behe's argument (try to read the whole thread), but Behe's book is the place to go.
Gaia (or the Earth's biosphere understood as a single organism) is certainly intelligent in the sense of "intelligence" I'm discussing here, and I can believe that She is conscious in some sense, but I have no way of knowing. I'm reasonably sure that She doesn't experience a stream of consciousness in Hebrew or literally speak to men through burning bushes, and I'm keenly aware that referring to Her as "She" is anthropomorphic. I have no problem with allegory (or traditional theology) as long as we understand what we're doing.
So does a wrist watch intend to keep time, does the water in a stream intend to flow downhill, does a photon intend to travel at the speed c (at least in a vacuum)?

No criticism of you, but I see this use of language as a corruption of science. In school you get told not to anthropomorphise physical phenomena. If you used intend like that you would get marked down. Then a bit further down the line that concept sneaks back in!

The way you use intend (even for your computer examples) really amounts to "follows the laws of physics" - it is almost empty of meaning.
I'm interested in logical rigor. I can be certain (or close enough) that the Pythagorean theorem follows logically from Euclid's postulates, even if it doesn't follow without the Fifth and even if a coherent and useful geometry is possible without the Fifth. I can't be certain of very much, but I can be a careful thinker.
Right, but sometimes when confronted with real mysteries it is better to explore rather than try to deduce stuff logically.

I like to think of ancient man studying fire. They didn't have a chance to come up with the idea that it was a chemical reaction with one of the gasses in the atmosphere - the best they could do was to explore evidence. I think the science of consciousness is at an equally primitive state.
I've managed well enough for 57 years.
Good - I have read that many people cope with their voices by themselves without invoking medical help and possibly being fed strong drugs. I hope things continue to go well for you.
I'm open to the data, but any data set has many possible interpretations.


People defecate while dying even more frequently. Natural selection need not account for it. RM+NS is not a theory of everything.
Yes, but that is merely a continuation of an existing process. The NDE is a hugely elaborate phenomenon, and even if you call it an hallucination it must involve the coordinated effort of a lot of neurones. When people become delirious, they rarely remember anything afterwards. People have been tested and shown to remember their NDEs unchanged many years after the incident.
Skeptics also demand more and more rigorous tests of spooky action at a distance, and physicists happily comply.
Er, not exactly, psi research is poorly funded, and electromagnetically shielded rooms and the like don't come cheap.
If two brains naturally communicate electromagnetically, I'd still call it telepathy, and this mechanism seems more likely than anything involving entanglement. If we don't discover natural, electromagnetic telepathy, we'll create it artificially soon enough.
Again let's not get wrapped up in semantics - the usual aim in telepathy experiments is to rule out communication by any known means. Ruling out entanglement is probably a bit difficult, but actually, standard QM rules prevent information passing in entanglement experiments - the entanglement is only detected statistically when both sets of results are compared.
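That "detected only statistically" point can be illustrated with a toy Monte-Carlo sketch (a simulation of the singlet-state statistics, not a local mechanism for them): each observer alone sees a fair coin whatever the other's analyser angle, and the correlation E(a, b) = -cos(a - b) only shows up when the two records are brought together.

```python
import math
import random

random.seed(0)  # reproducible run

def singlet_trials(angle_a, angle_b, n=100_000):
    """Sample outcome pairs for a spin singlet measured at the given
    analyser angles. Joint statistics: P(A == B) = sin^2((a-b)/2),
    so E(a, b) = -cos(a - b). Each side's marginal is 50/50."""
    p_same = math.sin((angle_a - angle_b) / 2) ** 2
    pairs = []
    for _ in range(n):
        a = random.choice([+1, -1])                  # Alice: fair coin
        b = a if random.random() < p_same else -a    # Bob, correlated
        pairs.append((a, b))
    return pairs

pairs = singlet_trials(0.0, math.pi / 3)
alice_mean = sum(a for a, _ in pairs) / len(pairs)   # ~0: no signal
corr = sum(a * b for a, b in pairs) / len(pairs)     # ~ -cos(pi/3) = -0.5
print(round(alice_mean, 2), round(corr, 2))
```

Nothing Bob does to his angle changes Alice's marginal statistics, which is why entanglement cannot carry a message, and why it is an awkward candidate mechanism for telepathy.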
I'll add Radin's book to a long list, but I know too much about Quantum Mechanics to be encouraged by the title.
Well I used QM as part of my Chemistry PhD, so I knew non-relativistic QM reasonably well - but the details of the calculations are a bit rusty by now (age 69). I know people use the term 'quantum' loosely nowadays, but the fact remains that the strange structure of QM, with a wave function whose point of collapse is extremely hard to settle, may well lie at the heart of the consciousness puzzle. One thing that struck me was that without something with a wave structure disrupting classical physics, there didn't seem any good reason why molecules or atoms should have particular properties - no eigenvalues - they would be like solar systems - every one presumably different in detail. I know there were other problems that QM cleared up as well, but that, to me, is the most striking. A chemist tends to think of electrons as waves first, and particles second!

This book is primarily about the experiments performed, and the remarkable evidence that has built up for telepathy, but also for a form of precognition that Dean dubs 'presentiment'. All I can really say, is that if you are really interested in consciousness and psi - even just to try to rule the idea out - I am suggesting things you should read. You can't totally see the problem until you appreciate the totality of evidence for anomalous mental phenomena. You really should read Behe's book too - if you know the basics about DNA and how it codes for the amino acid sequence in proteins, you will be fine with that book.

My feeling is that science tends to push trick problems under the carpet - the simple question, "what is consciousness", is certainly one of them. Some theorists have even speculated that consciousness does not exist, or is an illusion (whatever that would mean):

https://www.nybooks.com/daily/2018/03/13/the-consciousness-deniers/

In conclusion, you have to read some of the things I and others have suggested, for us to continue to discuss in a fruitful way. Though I would have thought your own mental phenomena (I have had almost none) might encourage you to search a bit beyond the standard science paradigms.

David

 
Even an apparently creative machine need not be conscious. Being conscious of one's own creativity does not imply that consciousness is necessary for creativity any more than being conscious of playing chess implies the necessity of consciousness to play chess. I'm old enough to remember claims that unconscious machinery could never defeat consciously creative, human players at chess. These claims are now empirically false. It's now harder to say what a conscious mind can do that a presumably unconscious machine cannot do, aside from being conscious, but consciousness is not less remarkable for this reason.

I think the reason that computers can play better chess than human beings is that a chess game can be modelled algorithmically. Human minds can think algorithmically to some extent, but fall far short of the algorithmic capabilities of computers mainly due to restrictions on time and memory. Possibly, not all games could be modelled algorithmically, and those that couldn't would probably be playable only by human beings with human minds.

I don't think it's necessarily harder to say what a conscious mind can do that a presumably unconscious machine cannot do. It boils down to what tasks can be modelled algorithmically, and what tasks can't. Those that can, a competently programmed computer will tend to do better at. Those that can't, a human mind will tend to do better at. There's no consciousness whatsoever in a programmed algorithm, except insofar as the algorithm models the ingenuity of the programmer.

This is why I reject the term "artificial intelligence" -- It's a complete misnomer. Computers are thick as bricks, it's just that human beings are devilishly ingenious and have designed them so that they can operate algorithmically to solve problems; problems that only the programmer understands why they need to be solved in the first place. You can't ask a computer what it is doing when playing chess or why it is playing chess -- it hasn't the faintest idea.

It only operates on one instruction at a time: if you like, its "intelligence" is limited to the breadth of the one instruction it's currently dealing with. Likewise, its "memory" isn't truly memory, so much as a number of locations, in one of which it can place any intermediate value it may be currently generating. The overall program never exists in the computer, only in the mind of the programmer. I have the analogy of an animal following a line of small objects placed by an experimenter on the ground. Some of the objects are edible and others aren't. The animal eats the ones that are, and ignores those that aren't; it has no idea that when it eats the edible objects, it will end up creating a pattern predetermined by the experimenter.

As one of the guys who taught me programming said on the first day of class, his nickname for a computer was a "TOM" -- or "Totally Obedient Moron". Its only advantage is that it can do what it does orders of magnitude faster than we can. In principle, any computer algorithm can be followed to completion by a man working with a pencil and paper, but it might end up taking him a billion years. The genius is in the programmer, who can use his intelligence and ingenuity to design his algorithm in a way that will solve a problem he wants to solve, one small step at a time.
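The "Totally Obedient Moron" picture can be made concrete with a toy machine (the instruction set here is invented purely for illustration): it executes one instruction at a time, holding only a program counter and a few registers, while the overall plan exists only in the programmer's head.

```python
# A toy "Totally Obedient Moron": executes one instruction at a time
# with no view of the overall program. Invented instruction set:
#   SET r, imm  - store a constant
#   ADDI r, imm - add a constant to a register
#   ADDR r, r2  - add one register to another
#   JNZ r, addr - jump if register is nonzero
#   HALT        - stop and return the registers

def run(program):
    regs, pc = {}, 0
    while True:
        op, *args = program[pc]
        if op == "SET":
            regs[args[0]] = args[1]
        elif op == "ADDI":
            regs[args[0]] += args[1]
        elif op == "ADDR":
            regs[args[0]] += regs[args[1]]
        elif op == "JNZ":
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "HALT":
            return regs
        pc += 1

# Sum 1..100 the slow, pencil-and-paper way the machine grinds through:
program = [
    ("SET", "n", 100),
    ("SET", "total", 0),
    ("ADDR", "total", "n"),   # total += n
    ("ADDI", "n", -1),        # n -= 1
    ("JNZ", "n", 2),          # loop until n == 0
    ("HALT",),
]
print(run(program)["total"])  # 5050
```

Every step here could indeed be followed by a person with pencil and paper; the machine's only advantage is speed.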

The sign of true intelligence is being able to solve problems without having to use an algorithm. I'll give an example, attributed to Gauss, who was asked at school to add all the numbers from 1 to 100, and did so in so short a time that he shocked the teacher. Everyone else followed the usual algorithmic route, adding the numbers one at a time. But Gauss had the insight that adding 100 to 1, 99 to 2, 98 to 3 and so on until he got to adding 1 to 100 would give him twice the sum he wanted. IOW, (100 × 101)/2 would give him the answer. That's easy enough to do in your head: 10,100/2 = 5050.

Now I know that, once Gauss had generated this way of adding a successive sequence of integers, it could have been made into an algorithm: but the point is, the first time it was done required insight and intelligence. This is what human beings can do and computers can't, nor ever will be able to. They'll never be able to come up with a completely new way of doing things; they can't truly innovate.
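The contrast the Gauss story draws can be put in code (a sketch; the function names are mine): both routes compute the same total, but the closed form encodes the one-off insight, while the loop merely grinds.

```python
# Gauss's insight vs. the brute-force algorithm for summing 1..n.

def sum_loop(n):
    """The schoolroom algorithm: add the numbers one at a time."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_gauss(n):
    """Gauss's closed form: pair 1+n, 2+(n-1), ... giving n pairs
    that each sum to n + 1, then halve."""
    return n * (n + 1) // 2

print(sum_loop(100), sum_gauss(100))  # 5050 5050
```

As the post notes, once the insight exists it can itself be packaged as an algorithm; the disputed question is whether the original act of finding it could be.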

They're forever bound within the restrictions of their programmed algorithms. I've never come across a single example of a computer coming up with a way of doing something that was new, outside an algorithm it had been supplied with. It's possible, I suppose, that very occasionally a programmer might unwittingly or by mistake specify doing something that led to a surprising and useful result, but the computer wouldn't be responsible for that, the programmer would.

I'm not a mathematician or mathematical programmer: I did programming and then systems analysis in a commercial context for about 14 years, but even so, knowing how they work, I've long been able to see that classical computers aren't and never could be intelligent. It surprises me that anyone could ever think so, or that anyone would coin the term "artificial intelligence". No one knows how we come up with new ideas, which is true intelligence: and seeing as we don't, coming up with them can't be algorithmised. It's an absolute requirement that to algorithmise something, you have to a) completely understand it and b) be able to break it down into computable steps or instructions.

If anyone ever comes up with an algorithm that can innovate, the computer running it would also have to have some non-algorithmic means of perceiving what we perceive, and interpreting that so as to formulate a solvable problem. Then it would have to be able to generate and run the algorithm for solving it.

Good luck with that. The only thing I'm aware of that can do it is a human being; computers have no apparatus for perceiving the world, or for interpreting it and formulating a solvable problem. All computers can do is execute the final step, the algorithm, which has to be supplied by human being(s), who have done all the prior work.

To speak of "artificial intelligence", to me, is ridiculousness on steroids. At best, classical computers are simply machines facilitating the last step in a long line of human processes; it's not even a step that human beings can't do for themselves -- albeit nowhere near as quickly.

As to quantum computers, I don't really understand how they work; but I get the impression (please do correct me if I'm wrong), they too would have to be algorithmically programmed by someone who had already formulated a problem, i.e. done all the prior work that they couldn't do.

But even if a classical computer cannot be conscious, and even if a quantum computer can't be conscious either, this conclusion doesn't imply much about OBEs or NDEs, nor does it distinguish those experiences from vivid dreams, even if scant evidence suggests that telepathic communication or other paranormal signals somehow inform them. Whatever can be conscious seems nonetheless mortal and confined to a material body, at least in my subjective experience. I'd be happier believing otherwise, but I'm most skeptical of the happiest assumptions. It's a sad way of thinking, and I'm not recommending it here, only describing it, but I don't think I can escape it at this point.

I don't think it's a case of "even if a classical computer cannot be conscious...". IMO, it can't possibly be conscious, full stop. Same goes for a quantum computer. Anyone thinking so is mistaken and doesn't understand everything that has to go before the production of algorithms, which we are nowhere near understanding (and I strongly doubt we'll ever completely understand).

I don't quite understand why you think it's sad to think that consciousness is "mortal and confined to a material body" or why you'd like to believe otherwise. To me, it's simply an observation confirmed by my own subjective experience, as well as your own. Tell me, why would you like to think otherwise, and why would that make you happier?
 
The sign of true intelligence is being able to solve problems without having to use an algorithm. I'll give an example, attributed to Gauss, who was asked at school to add all the numbers from 1 to 100, and did so in so short a time that he shocked the teacher. Everyone else followed the usual algorithmic route, adding the numbers one at a time. But Gauss had the insight that adding 100 to 1, 99 to 2, 98 to 3 and so on until he got to adding 1 to 100 would give him twice the sum he wanted. IOW, (100 × 101)/2 would give him the answer. That's easy enough to do in your head: 10,100/2 = 5050.
Right, and this is what makes it so hard to define what is or is not AI - I mean, although it would be pointless, you could devise an algorithm that would spot when Gauss' algorithm was applicable and print out, "Aha - this is a quick way to do that problem!"
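A toy version of that spotting algorithm, just to make the point (the function name and structure are invented for illustration, not any real system):

```python
def clever_sum(numbers):
    """Sum a sequence; if it's a contiguous run a..b, use Gauss's shortcut."""
    numbers = list(numbers)
    a, b = numbers[0], numbers[-1]
    if numbers == list(range(a, b + 1)):
        # Gauss's insight: pairing a+b, (a+1)+(b-1), ... gives n*(a+b)/2
        n = b - a + 1
        return n * (a + b) // 2
    return sum(numbers)  # otherwise, the plodding one-at-a-time route

print(clever_sum(range(1, 101)))  # 5050, the schoolboy Gauss's answer
```

Of course, the "insight" here was put in by the programmer, which is exactly the point under dispute.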

That would impress a lot of people.

David
 
Suppose a robot on wheels is programmed to roll around on the floor of an empty room, stopping at walls and turning to explore in a new direction until a green circle appears in the image captured by its camera/eye or until it has fully explored the room. I call this robot "intelligent" and "intentional". It intends to find a green circle.
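For concreteness, the control loop described above can be sketched in a few lines of Python. This is a toy one-dimensional simulation; the Room class and its sensor/motor methods are invented stand-ins for a real robot's hardware:

```python
class Room:
    """A 1-D corridor of cells, one of which may hold a green circle."""
    def __init__(self, cells):
        self.cells = cells          # cell contents, e.g. "green" or ""
        self.pos = 0
        self.direction = 1
        self.visited = {0}

    def camera_sees_green_circle(self):
        return self.cells[self.pos] == "green"

    def wall_ahead(self):
        nxt = self.pos + self.direction
        return nxt < 0 or nxt >= len(self.cells)

    def turn_to_unexplored_direction(self):
        self.direction = -self.direction

    def roll_forward(self):
        self.pos += self.direction
        self.visited.add(self.pos)

    def fully_explored(self):
        return len(self.visited) == len(self.cells)

def explore(room):
    """Roll, turning at walls, until a green circle appears or the room is covered."""
    while True:
        if room.camera_sees_green_circle():
            return "found green circle"   # the robot's "intention" satisfied
        if room.fully_explored():
            return "room fully explored"
        if room.wall_ahead():
            room.turn_to_unexplored_direction()
        else:
            room.roll_forward()

print(explore(Room(["", "", "green", ""])))  # found green circle
```

Whether calling this loop "intentional" is apt is exactly what the replies below dispute.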
Seriously? Do you actually believe what you have written here?
The designers intended to build a robot, and to have it programmed to carry out a sequence of operations.

The robot is no more intelligent or intentional than an alarm clock which is programmed to sound an alarm at a specified time. To claim that the clock intends to wake you up is absurdity, a nonsensical attribution of human-like capabilities to a machine.

If one follows this line of belief, then a steam locomotive intends to haul its load. It doesn't have intention, it simply moves according to the laws of mechanics.

Suppose a green circle appears on the robot itself below the camera, and suppose a mirror hangs on a wall of the room. While exploring the room, the robot approaches the mirror, and the reflection of the green circle appears in its camera. The robot stops in front of the mirror gazing at its own reflection. I call the robot "self-aware", but this "awareness" has nothing to do with consciousness. On the other hand, if a spider is conscious in some primitive sense, I don't know why this robot cannot be. I don't know how it can be either.
"The robot stops in front of the mirror gazing at its own reflection. " - I think you're veering into the poetic here. What it is actually doing is receiving reflected photons and processing them according to the capabilities of the receiving sensor(s) and whatever algorithm has been programmed into it. Most likely it will merely calculate some value which symbolises that this is not a green circle.

Next you'll be telling us that it smiles with approval at how good-looking it is.

I don't know where you normally engage in such fantasies, but a serious discussion forum is not the place for them.
 
Seriously? Do you actually believe what you have written here?
Yes.

The designers intended to build a robot, and to have it programmed to carry out a sequence of operations.
The designers' parents (and others) intended to conceive, gestate, bear, feed, clothe, house and train the designers.

The robot is no more intelligent or intentional than an alarm clock which is programmed to sound an alarm at a specified time.
"No more intelligent or intentional than an alarm clock" does not imply "not intelligent or intentional", and your statement is clearly false. Alarm clocks don't navigate rooms or recognize green circles, so the robot is more intelligent than an alarm clock in this sense. You want to conflate intelligence with conscious perception, but my whole point here is not to do so.

To claim that the clock intends to wake you up is absurdity, a nonsensical attribution of human-like capabilities to a machine.
I'm not asserting anything profound or absurd. I'm only explaining a use of the word "intent". You're insisting that the word implies more than I intend, but it's only a word, a sequence of symbols representing something else.

If one follows this line of belief, then a steam locomotive intends to haul its load.
If one uses the word as I do, a steam locomotive "intends" to haul its load. Using the word this way implies no belief about a train's conscious state. This belief is yours, not mine.

It doesn't have intention, it simply moves according to the laws of mechanics.
Everything moves according to laws of mechanics.

"The robot stops in front of the mirror gazing at its own reflection. " - I think you're veering into the poetic here.
Poetry aside, I don't intend any conscious perception by "gaze". I'm not trying to convince you that any robot consciously perceives anything. The robot presumably does not consciously perceive anything. That's my point.

What it is actually doing is receiving reflected photons and processing them according to the capabilities of the receiving sensor(s) and whatever algorithm has been programmed into it.
Sure. My brain does these things. I'm not suggesting that it does only these things, but it does these things.

Most likely it will merely calculate some value which symbolises that this is not a green circle.
Merely? Savannah man might have called this robot "miraculous", but I'm not calling anything miraculous.

Next you'll be telling us that it smiles with approval at how good-looking it is.
You're already telling me what I'll tell you next rather than accepting my own description of my intent.

I don't know where you normally engage in such fantasies, but a serious discussion forum is not the place for them.
Nothing I've said about the robot is fantastic. The fantastic leaps are yours, not mine.
 
Possibly, not all games could be modelled algorithmically, and those that couldn't would probably be playable only by human beings with human minds.
Specifying such a game would impress a lot of people.

There's no consciousness whatsoever in a programmed algorithm, except insofar as the algorithm models the ingenuity of the programmer.
Algorithms are abstractions. Consciousness only exists in concrete, material beings as far as I know.

This is why I reject the term "artificial intelligence" -- It's a complete misnomer.
It's a very common name regardless. You're free to reject any term you don't like.

Computers are thick as bricks, ...
Comparing a computer to a brick seems even more of a stretch than comparing a computer to a brain, but either comparison is a stretch.

You can't ask a computer what it is doing when playing chess or why it is playing chess -- it hasn't the faintest idea.
"Hasn't the faintest idea" seems to imply something about the computer's conscious state. A computer playing chess could explain a move in natural language. The algorithm constructing this explanation presumably is even more complex than the algorithm constructing the move, but enacting either algorithm implies no consciousness.

It only operates on one instruction at a time: if you like, its "intelligence" is limited to the breadth of the one instruction it's currently dealing with.
Computers are not fundamentally serial, but operating on many instructions in parallel doesn't seem to explain consciousness either.

Likewise, its "memory" isn't truly memory, so much as a number of locations, in one of which it can place any intermediate value it may be currently generating.
"True memory" supposes some ultimate authority over use of the word "memory".

The overall program never exists in the computer, only in the mind of the programmer.
This statement makes no sense to me. Computers operate long after their programmers are dead.

I have the analogy of an animal following a line of small objects placed by an experimenter on the ground. Some of the objects are edible and others aren't. The animal eats the ones that are, and ignores those that aren't; it has no idea that when it eats the edible objects, it will end up creating a pattern predetermined by the experimenter.
This is basically Searle's argument, but I don't know how the animal's conscious awareness of the pattern it follows is relevant to anything. Animals routinely follow trails of food, whether or not an experimenter lays out the trail. Is an animal less intelligent because an experimenter lays out the trail of food it follows? Distinguishing the experimenter's intelligence from the animal's intelligence makes sense, but following a trail of food is not therefore "unintelligent".

As one of the guys who taught me programming said on the first day of class, his nickname for a computer was a "TOM" -- or "Totally Obedient Moron".
It's a fair description, but "computer" in this context seems to mean a general purpose computer (equivalent to a Universal Turing Machine) rather than a computer running a debugged program playing chess for example. A computer "obeys" its program, but most people would not say that a grandmaster chess player is "moronic". Human grandmasters aren't born playing chess. They learn to play and also obey rules.

Its only advantage is that it can do what it does orders of magnitude faster than we can.
That's a big advantage, but brains also do things (like contrive algorithms) orders of magnitude faster than any (existing) computer can do them. Maybe this limitation of computers is fundamental and linked somehow to conscious states that computers can never experience. I don't pretend to know.

The genius is in the programmer, who can use his intelligence and ingenuity to design his algorithm in a way that will solve a problem he wants to solve, one small step at a time.
No one is disputing the genius of human beings here. I don't know to what extent artifacts will ultimately exceed the abilities of their creators.

The sign of true intelligence is being able to solve problems without having to use an algorithm.
How do you know that Gauss used no algorithm to contrive his solution (which is itself an algorithm)?

... the first time it was done required insight and intelligence. This is what human beings can do and computers can't, nor ever will be able to.
I don't pretend to know, because I don't understand human creativity well enough, even if I can perform the feat myself.

They're forever bound within the restrictions of their programmed algorithms.
That's true tautologically, but since I don't know how Gauss arrived at his solution, I don't know what sort of algorithm might have been involved. "His goddess told him" doesn't explain much. "Consciousness did it" doesn't explain much either.

It's possible, I suppose, that very occasionally a programmer might unwittingly or by mistake specify doing something that led to a surprising and useful result, but the computer wouldn't be responsible for that, the programmer would.
I'm less concerned with who (or what) gets the credit.

It surprises me that anyone could ever think so, or that anyone would coin the term "artificial intelligence".
"Artificial intelligence" describes what it describes. It's only a pair of words. No one claims that artifacts can do everything that human brains do. Some people imagine artifacts doing so in the future, but that's science fiction, like time travel and similar magic.

No one knows how we come up with new ideas, which is true intelligence:
I don't know how Gauss came up with his solution, but "intelligence" is a word. Words have common meanings, not true meanings.

It's an absolute requirement that to algorithmise something, you have to a) completely understand it and b) be able to break it down into computable steps or instructions.
Spiders (as well as human beings) are clearly born with algorithms that no human programmer constructed. According to arachnologists, some spiders (like Portia) exhibit creativity, however primitive, but I doubt that a Portia's creativity differs fundamentally from other, more "mechanical" spider behavior.

If anyone ever comes up with an algorithm that can innovate, ...
Someone will explain why it isn't truly innovative.

The only thing I'm aware of that can do it is a human being; ...
Gaia (the biosphere, conscious or not, mechanical or not) does it even better. Human beings create computers that don't impress you. Gaia creates human beings.

I don't think it's a case of "even if a classical computer cannot be conscious...". IMO, it can't possibly be conscious, full stop.
I don't know how anything can be conscious. I only know that I am. My ignorance of a mechanism is not evidence against the possibility, but I don't assume a mechanical (or materialistic) explanation. I have no explanation at all.

I don't quite understand why you think it's sad to think that consciousness is "mortal and confined to a material body" or why you'd like to believe otherwise.
I suppose I want to persist in the future, rather than die today, because Gaia programmed me this way. Wishing for immortality is like assuming the possibility of six apples upon seeing five apples. We imagine an infinity of "numbers" similarly.

Yes, I realize that "Gaia programmed me" is anthropomorphic. "Biological robot" seems descriptive enough, but I'm not therefore only a biological robot.
 
Comparing a computer to a brick seems even more of a stretch than comparing a computer to a brain, but either comparison is a stretch.

"Thick as a brick" is a colloquialism along the lines of "dumb as a box of rocks". You appear to be taking the figurative as literal and vice versa as and when it pleases you -- who knows, maybe just to be contentious for the heck of it. I have no idea why you're posting here, but won't be responding to anything else you say as I find it a pointless exercise.
 
I don't assume that a human mind is generally equivalent to a Turing machine even if the two are computationally equivalent, because minds are conscious, and computers may not be.
I don't know how much you know about computers, so let me enlighten you a little.

Computers have a large memory - typically several gigabytes (10^9 bytes) even for a humble desktop PC - and run at a clock rate of about 10^9 cycles per second. At each cycle a number of bytes is read from the memory and decoded into an instruction. The instruction might be to load some data from a memory block at some offset, or to perform any of a number of trivial mathematical operations on some data, or to store it away, etc. etc.

These things are designed to do just that and nothing else, except keep themselves cool with a fan! Now, they read in a program that just consists of bytes full of data (each byte holding a number between 0 and 255) and start reading that data as instructions, which are obeyed as described above.

Those slabs of bytes full of instructions are constructed by programs called compilers, which convert readable computer code into 'machine code' - i.e. long, long strings of numbers between 0 and 255. That sounds like a chicken-and-egg problem, but it was solved many decades ago.
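The fetch-decode-execute cycle described above can be mimicked in a few lines. The opcodes and memory layout here are invented for this sketch, not any real machine:

```python
# Toy fetch-decode-execute loop: memory is a flat list of numbers,
# read two at a time as [opcode, operand] and obeyed one by one.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3   # hypothetical one-byte opcodes

def run(memory):
    """Interpret memory as [opcode, operand, ...] pairs; one accumulator."""
    acc, pc = 0, 0
    while True:
        opcode, operand = memory[pc], memory[pc + 1]   # fetch
        pc += 2
        if opcode == LOAD:                              # decode + execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Program: load mem[8], add mem[9], store the result into mem[10], halt.
mem = [LOAD, 8, ADD, 9, STORE, 10, HALT, 0,   40, 2, 0]
print(run(mem)[10])  # 42 - mind-bogglingly simple steps, nothing more
```

Everything a real CPU does is elaboration on this loop, which is the force of the "utterly absurd" point below.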

My point is that it is utterly absurd to even contemplate the idea that a computer is conscious, because the only thing it does is follow mind bogglingly long sequences of instructions. We may interpret what it does as intelligent, but that is really a compliment to the guy who wrote the program!

Obviously in a modern computer there is a fair amount of detail that could be added to that basic account(!!), but none of it would make a scrap of difference - none of it could possibly be conscious unless you want to invoke magic.

The computer grinds along like that whether it is executing AI code, playing a game, or adding up a payroll - so if it were conscious, it would be conscious of details utterly removed from the work it seemed to be doing.

If you wanted to say anything was conscious in there, it would be the computer program, but that in turn is rather like saying that a book might be conscious!

David
 