Is the Brain analogous to a Digital Computer? [Discussion]

  • Thread starter Sciborg_S_Patel
What I want to describe are mathematical/logical facts that are universal truths - like 2+2=4, and like all mathematical theorems. I didn't really want to call them Universal Mathematical Truths (UMTs) because that sort of jargon makes discussion harder to read. Also, there is no real dividing line between Pythagoras' theorem and the fact that a 3,4,5 triangle is right angled. One is just a special case of the other. In that sense 2+2=4 is a theorem, and that is the sense I mean throughout this discussion.

Let us try to make the discussion of computation a bit more concrete. We could do this with a Turing Machine, but honestly I think an actual computer is easier to conceive of - because every machine location is equally accessible. Let's imagine factorising a number:

3381265111639379629831

This needs a computer program, and rather than read the above number in as data, we could add a line that sets a variable to that value - which gives us something like the simulation program we are discussing.
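As an illustrative sketch (my own code, not part of the thread): a naive trial-division factoriser. The 22-digit number above would take far too long this way and would need a cleverer algorithm such as Pollard's rho, so a small stand-in value is used here.

```python
def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (or n itself if n is prime).

    Plain trial division: fine for small numbers, but hopeless for the
    22-digit number in the text, which would need something like
    Pollard's rho in practice.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# The number is set directly in the program rather than read in as data,
# as described above. A small stand-in value is used so that naive
# trial division finishes quickly.
n = 8051
p = smallest_prime_factor(n)
print(p, n // p)  # the two prime factors of 8051
```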

The program will operate on this in a long sequence of steps, which we could represent symbolically as

P(s1)=s2, P(s2)=s3, P(s3)=s4, .........., P(s(N-1))=s(N)

Here every s(i) would represent the entire state of the computer memory (plus a few registers) at an instant of time, and the operator P would follow the mathematical/logical rules that define the computer to produce the change to the next state at the machine code level.

Given those rules, each one of those facts - e.g. P(s7)=s8 would be a theorem in the sense explained above.
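A toy illustration (the step rule below is my own arbitrary choice, not the actual simulation): P as a pure function on an integer-encoded state. Determinism is the whole point - re-deriving any transition, or any coalesced run of transitions, always yields the same result.

```python
def P(s: int) -> int:
    """One machine step on an integer-encoded state.

    The rule (square, add 1, wrap to 16 bits) is an arbitrary stand-in;
    the only property that matters here is that P is deterministic.
    """
    return (s * s + 1) % 2**16

def P_pow(s: int, k: int) -> int:
    """Apply P k times - the coalesced 'P^k' of the discussion."""
    for _ in range(k):
        s = P(s)
    return s

s1 = 12345
# Each transition is fixed by the rules, so re-deriving it always
# gives the same result - the 'theorem' quality discussed above:
assert P(s1) == P(s1)
assert P_pow(s1, 13) == P(P_pow(s1, 12))
```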

This is a bit like saying that the number I described above might be the number of aphids that have ever lived on earth. If so, it would depend on innumerable events of aphid sex, garden sprays, and predators. True, but utterly irrelevant!

Now, not only is every one of those facts true as a theorem, but theorems like P^13(s10) => s23 are also theorems (where P^13 means the operation of applying P 13 times). So the whole operation of the program is a theorem, and every subset of the process is also a theorem - an eternal truth! The final result also has this quality:

3381265111639379629831=47055833459 * 71856449309
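That final equality can be checked directly, since Python integers have arbitrary precision:

```python
# Check the factorisation stated in the text; no special
# big-number library is needed.
assert 47055833459 * 71856449309 == 3381265111639379629831
print("factorisation verified")
```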

Obviously the simulation we have been discussing is exactly analogous to the above program.

Note in particular that:

1) The individual steps could be performed by using an actual computer, or by hand.

2) There is no sense in which any of those steps, or any subset of those steps could be said to have any emotional content. You would also find no computer engineer who would argue that running the program would generate any emotion, qualia, or experience along the way!

3) If you are unhappy with the idea that P(s1)=s2 is universally true, because it would refer to a computer, remember that the operation of P could be transformed into an actual mathematical function using the Gödel trick.

4) Once the program has been run, you don't get much by running it again, unless you need to test the computer!
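For point 3, here is a minimal sketch of one standard Gödel-numbering scheme (prime-power encoding; the thread doesn't specify which construction is meant): a finite sequence of memory-cell values is mapped to a single integer, so statements about machine states become statements about ordinary numbers.

```python
def primes(k: int) -> list:
    """First k primes by simple trial division (fine for a sketch)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def godel_number(cells) -> int:
    """Encode a sequence of cell values as one integer:
    2^(c0+1) * 3^(c1+1) * 5^(c2+1) * ...
    (the +1 keeps trailing zero-valued cells distinguishable)."""
    g = 1
    for p, c in zip(primes(len(cells)), cells):
        g *= p ** (c + 1)
    return g

# Distinct machine states get distinct numbers:
assert godel_number([0, 1]) != godel_number([1, 0])
```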



Do you still want to deny that the simulation we are discussing is in any significant way different from the above factorisation?
The example you give here is completely different from our simulation in every way that is significant to the conclusion of this TE.

You do have a point though, the result is undeniably computation. It could, at least in principle, be done on a Turing machine.

I agree that this way it is left open to the Gödelian argument against AI - an argument that does not hold up IMO, but that is a separate issue.
The point is that because something is computation, or can be replicated through computation, it does not mean that it is a universal truth in the same way as Pythagoras' theorem.
It does not mean all that much if we keep in mind that we could simulate every process in nature in the same way, and come to the same conclusion.

To compare our simulation with your example of factorisation and come to the conclusion they are analogous you have to ignore a few very important observations and differences.

- The program P does not add any logic to the way the simulant behaves; these two things are completely informationally separated. What determines the behaviour of the simulant is the brain state we copy from the subject. The program P's only function is to replicate our subject's body as faithfully as possible.
P can be seen as an empty vessel into which we pour our brain state S.

So it is not P=>O, but P+S=>O, where S is a black box to us; we have no access to the (non-)logic that formed it. The massively parallel way the brain computes leaves it completely impervious to any logical analysis. The only way we can know what it calculates is to model it and let it run in a context similar to the one that brought it into existence.

This, to me, is very important, and yet you do not seem to want to engage with this point.

- The factorisation program has a clear premise. The program P does not. If someone replaced the real copied brain state with similar-looking nonsense, we would have no way of recognising it.

- Also, in contrast with the clear premise of the factorisation program, the point at which we copy brain state S into P is chosen arbitrarily, not based on any premise.

- The factorisation program can be right or wrong, it can have mistakes, we can easily verify and correct these mistakes.
The program P has no "right" or "wrong" answer; it only has an arbitrarily chosen point O and a corresponding brain state S'.

- The factorisation program has a clear end: it reaches a result. Our program P does not; we can look at S at different places in time, but that is all.
Wherever you stop the program, it will correspond to a mathematical/logical fact that has always been true, and always will be!
I hope the above has convinced you this is not true.

I do agree that every time we run the program, the simulant will have the same experience over and over again; I do not see how that is paradoxical.
This is where it all gets interesting, because if you think of every step in the program as being a theorem in the above sense - which they are - it is awfully hard to see what constitutes an execution of the program - particularly if we can arbitrarily coalesce some steps as in P^13. We can also coalesce all the steps if we want - would that also generate the same emotions in the simulant? The poor old simulant seems to have become suspended in space-time, ready to suffer again whenever the same sequence of mathematical facts is reviewed again!
You quoted this, but I expanded on that in a later post:
If we repeat the TE from the same point over and over, the experience would always be the same.
So from the viewpoint of the simulant the events in the simulation would be only experienced once, no matter how many times we have run the program.
From our viewpoint, as observers, we would see the simulant have the same experience over and over again.
I think this follows logically from our TE, unless you find a way to transfer memory from one run of P to another.
Imagine if the poor b***** was scanned with toothache! For all of eternity, that sequence of mathematical facts would be waiting to torture him again! Moreover, every conceivable such simulation is already 'out there' as a set of facts waiting to be reviewed in order!
No, as said above, from the perspective of the simulant all of this is only happening once.
Every conceivable simulation is not "out there", because you are wrong to compare our program P with a theorem.
There is, of course, the potential for a wide range of conceivable simulations, but we could say the same about real-life situations too.
In both cases we can only find out by rolling the dice.

I think this is a reductio ad absurdum, and my conclusion is that a computer simply can't experience anything (or cause a simulant to do so), and is thus not conscious. I'd like to persuade you of that fact, so we can go on to think about what that means for the materialist conception of consciousness.

David
You are not going to persuade me by ignoring my arguments, or being very selective in what you quote, and putting "obviously" before statements.
 
The example you give here is completely different from our simulation in every way that is significant to the conclusion of this TE.

You do have a point though, the result is undeniably computation. It could, at least in principle, be done on a Turing machine.
So if you specify the Turing machine and its initial state, the outcome is surely a theorem - I honestly can't see how you can get round that! Your S (the state of the simulant's brain) is already part of P - by hypothesis! Indeed, if you concatenated all the memory cells representing the state of the simulant - S - it would itself be one large integer!
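The 'one large integer' observation is easy to make literal (cell width and byte order below are my own arbitrary choices):

```python
# A toy memory image: one byte per cell (cell width and byte order
# are arbitrary choices for this sketch).
cells = bytes([7, 0, 255, 42])

# The whole state read as one large integer...
state_as_int = int.from_bytes(cells, byteorder="big")

# ...and recovered losslessly from that integer.
recovered = state_as_int.to_bytes(len(cells), byteorder="big")
assert recovered == cells
print(state_as_int)
```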

I wasn't using Penrose's Gödelian argument as such, merely pointing out that the operation of moving from one state to the next could be represented as a mathematical function using Gödel's method.

To claim that P-S (I write it like this because I originally defined P to contain the state of the brain, S) and S are 'informationally separate' is hardly to the point - in the computer running the simulation there is one set of uniform memory cells storing program and data, being manipulated by the rules of the hardware. Remember that S is not a black box to us; we can observe its contents, but we probably don't know why most of it has the values it has! That really doesn't alter the fact that P-S+S=>O is a theorem! Indeed there are probably numbers in many simulations that have values whose origins depend on a whole history of events - think of an oil well simulation. However, the fact is that the oil well simulation doesn't produce oil!
- The factorisation program can be right or wrong, it can have mistakes, we can easily verify and correct these mistakes.
The program P has no "right" or "wrong" answer; it only has an arbitrarily chosen point O and a corresponding brain state S'.
I explained above that every transition - s34=>s35 (say) - is a theorem: given the rules of the computer, the outcome is fixed. Therefore the outcome of every sequence of steps is a theorem. This would still be true if the factorisation program contained a flaw, and gave the wrong answer.

- The factorisation program has a clear end: it reaches a result. Our program P does not; we can look at S at different places in time, but that is all.
The output of the factorisation program would still be a theorem even if the program was wrong, or if it contained an arbitrary cutoff after, say, 10^10 operations. The final state would be a consequence of the initial state and the number of steps performed.

No, as said above, from the perspective of the simulant all of this is only happening once.
OK - so from the perspective of the simulant, what exactly happens if the program is executed again?

Every conceivable simulation is not "out there", because you are wrong to compare our program P with a theorem.
There is, of course, the potential for a wide range of conceivable simulations, but we could say the same about real-life situations too.
In both cases we can only find out by rolling the dice.

There are no external dice to this simulation. Any random numbers are either generated algorithmically, or are generated by a true random number generator and stored as a table inside P.
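The stored-table arrangement can be sketched like this (seed, table size, and the toy consumer are my own choices): because the 'random' numbers are frozen inside P, every run sees the same values and produces the same outcome.

```python
import random

# Fill the table once; a seeded PRNG stands in here for whatever
# 'true random' source was used before the values were frozen into P.
rng = random.Random(2024)
TABLE = tuple(rng.randint(0, 99) for _ in range(8))

def run(table) -> int:
    """A run that consumes 'randomness' only from the frozen table
    (the mixing rule is an arbitrary toy)."""
    acc = 0
    for r in table:
        acc = (acc * 31 + r) % 1_000_003
    return acc

# Every execution sees the same table, hence the same outcome:
assert run(TABLE) == run(TABLE)
```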

You are not going to persuade me by ignoring my arguments, or being very selective in what you quote, and putting "obviously" before statements.
I am certainly not (at least intentionally) ignoring any of your arguments, even if I don't quote everything you wrote in order to maintain clarity in my reply.

In summary, you seem to base your contention that the specially constructed program P causes suffering (or maybe pleasure!) to the simulant on partitioning its memory into P-S and S, and arguing that these are different. However, to the logic of the CPU, they are exactly the same. If factorisation programs can't experience something, it is very hard to see how any other type of program can have an experience using the identical CPU hardware. Furthermore, it is even harder to see what it means if a program is run more than once, or if it is 'run' in a special way - such as optimised to arbitrarily fewer steps, debugged with frequent halts, simulated by people (essentially the Chinese Room argument), or simply read off as a theorem.

It really is worth pondering whether the rather abstract distinction that you feel you have made between other sorts of program and P could really be the difference between no experience and anything (good or bad) a brain could think or experience in the time available!

You are right to say that this argument could be extended in principle to anything in nature - but remember, it only works if we assume materialism - which I don't - so I seriously doubt if these simulations are possible even in principle - even for a fruit fly, and probably not even for an amoeba.

David
 
So if you specify the Turing machine and its initial state, the outcome is surely a theorem - I honestly can't see how you can get round that!
That might be true if you extend the definition of "theorem" to any form of computation, but that shifts the discussion to what a theorem does and does not do.
Just to avoid further miscommunications: do you think that all forms of computation are theorems?

The real point of disagreement is whether P is similar to theorems like Pythagoras' theorem or not. Even more specifically the discussion is about whether P is a universal truth that always has been and always will be.

It is much too specific to be an eternal universal truth; it can certainly not always have been, because the circumstances that facilitate the existence of P were not always there.
Nor can it be forever: the existence of P relies on the effort of extracting the information about S from the subject.
No matter what substrate the information is stored in, as you said before, nothing is forever; it can always get lost.

Now, for P to be a theorem in the universal mathematical sense, we would expect it to be discoverable by means other than getting the information from the subject - even if the subject is gone, or if the simulation is lost. Do you think that is possible?

Let us assume an alien race completely different from us, but intelligent in a way we would be able to recognize. We might assume they will discover Pythagoras' theorem, or similar ones. Can we really assume they would chance upon the "theorem" P?

So where is the difference? For me it is the vastly parallel way the brain, or the simulated brain in P, works; it simply is another way of computation, and it does not compute on the basis of logical statements. We could consider neurons to be completely based on logic, but the behaviour of the subject/simulant emerges from the way they interact.
The state of every neuron is not based on logic; it is based on every event that came before. That is where this way of computing loses every point of analogy with programs like your factorisation program.

In your factorisation program, as well as in P, we can divide the logic into smaller parts that better fit the definition of "theorem".
The difference between your factorisation program and P is the way these smaller parts are linked together.

In the case of your factorisation program, these smaller parts of logic are put together serially with more logic, resulting in one continuous piece of logic that will indeed be simplifiable to a high degree.

In the case of P this is quite different: we have a hundred billion pieces of logic that are connected in a parallel way by non-logic.
We can even say that every neuron is the input for every other neuron, which negates your attempt to call P a program without input, at least at the level of the behaviour of the simulant. At that same level we can say that instead of no input there are trillions of input signals every cycle.

Objection one: we can simulate this on a serial computer. Probably, but is this actually true if we could show theoretically that it would never be practically possible (time constraints)? And does that make any difference?

Objection two: your compiler argument. There I am hindered by my naivety in maths and computers, but to me there is a limit to how far we can simplify a serial representation of a parallel neural network.
The way a parallel network is simulated by a serial process already limits the necessary steps to a minimum, in my naive opinion. It seems to me that you have to calculate every state of every element of every neuron for every cycle, so you probably have many trillions of operations per cycle that you cannot avoid or simplify.

So to me it seems that the more steps are not based on logic, the more the computation moves from the platonic to the specific,
where Pythagoras' theorem is on one end of that spectrum, and our brain, or a simulation of it, is on the complete opposite end.

My lack of knowledge and exact terminology make this post a bit wordy and rambling, but I hope I have made myself a bit more clear.
I can very easily be wrong about this, but at this time I still think the problem is more that I did not explain enough what I mean.

I will get back to what this TE actually might mean, but we seem a bit stuck on this point.
 
That might be true if you extend the definition of "theorem" to any form of computation, but that shifts the discussion to what a theorem does and does not do.
Just to avoid further miscommunications: do you think that all forms of computation are theorems?

The real point of disagreement is whether P is similar to theorems like Pythagoras' theorem or not. Even more specifically the discussion is about whether P is a universal truth that always has been and always will be.
As I said before, the reason I call P a theorem is that P is eternally true in the same way as Pythagoras' theorem is true (assuming a Euclidean space, of course). To get a form of computation that isn't a theorem in that sense, you need to have something unpredictable - such as true random numbers, or actual interaction with - say - another human. Even if the input comes from outside, it is still true that the outcome, O, of running P+I=>O is fixed for all time! This, of course, is why I specified the TE to work this way.
It is much too specific to be an eternal universal truth; it can certainly not always have been, because the circumstances that facilitate the existence of P were not always there.
Surely the extreme specificity makes it an uninteresting truth, but it doesn't detract from it being eternal - just as

3381265111639379629831=47055833459 * 71856449309

is eternally true, but too specific to be interesting!

Nor can it be forever: the existence of P relies on the effort of extracting the information about S from the subject.
No matter what substrate the information is stored in, as you said before, nothing is forever; it can always get lost.
a) This is a TE - so you have to persuade me that this is a fundamental issue, and not just a practical issue - like finding an adequately powerful computer - or indeed performing the brain scan in the first place!
b) People might forget a theorem, but that wouldn't make it less true for all eternity!

Now, for P to be a theorem in the universal mathematical sense, we would expect it to be discoverable by means other than getting the information from the subject - even if the subject is gone, or if the simulation is lost. Do you think that is possible?
Well if you forget the number 3381265111639379629831 (and its factors), you obviously lose track of the above factorisation, but it doesn't stop that factorisation being true for all time (can I start saying "being a Theorem" again - it is much more concise!)

Let us assume an alien race completely different from us, but intelligent in a way we would be able to recognize. We might assume they will discover Pythagoras' theorem, or similar ones. Can we really assume they would chance upon the "theorem" P?
It is unlikely. If we choose an arbitrary 1000-digit number and factor it, it is unlikely that this race will have factored the identical number, but that doesn't make the factorisation any less true - any less a theorem, or any less discoverable!

So where is the difference? For me it is the vastly parallel way the brain, or the simulated brain in P, works; it simply is another way of computation, and it does not compute on the basis of logical statements. We could consider neurons to be completely based on logic, but the behaviour of the subject/simulant emerges from the way they interact.
The state of every neuron is not based on logic; it is based on every event that came before. That is where this way of computing loses every point of analogy with programs like your factorisation program.
The simulation program can represent each neuron as a partial differential equation - it won't alter my argument!
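As a caricature of that idea (a leaky-integrator equation of my own choosing, far simpler than real membrane models): the simulator just applies a fixed numerical rule to numbers, step after step, and the outcome is the same on every run.

```python
def step(v: float, dt: float = 0.1, tau: float = 10.0,
         v_rest: float = -65.0, drive: float = 15.0) -> float:
    """One Euler step of dv/dt = (-(v - v_rest) + drive) / tau.

    All constants are illustrative, not physiological.
    """
    return v + dt * (-(v - v_rest) + drive) / tau

v = -65.0
for _ in range(10_000):
    v = step(v)
# The potential settles at v_rest + drive = -50.0, the fixed point
# of the equation - the same number on every run.
print(round(v, 6))
```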

In your factorisation program, as well as in P, we can divide the logic into smaller parts that better fit the definition of "theorem".
The difference between your factorisation program and P is the way these smaller parts are linked together.
The only aspect of theoremhood that I care about is that it be true for all time. This does not depend on whether you, or the human race, can remember the theorem!

In the case of your factorisation program, these smaller parts of logic are put together serially with more logic, resulting in one continuous piece of logic that will indeed be simplifiable to a high degree.

In the case of P this is quite different: we have a hundred billion pieces of logic that are connected in a parallel way by non-logic.
We can even say that every neuron is the input for every other neuron, which negates your attempt to call P a program without input, at least at the level of the behaviour of the simulant. At that same level we can say that instead of no input there are trillions of input signals every cycle.
But all this non-logic ultimately ends up as numbers - voltages, concentrations of assorted chemicals, etc. These are what the simulation program operates on.

Objection one: we can simulate this on a serial computer. Probably, but is this actually true if we could show theoretically that it would never be practically possible (time constraints)? And does that make any difference?
PCs did lots of parallel operations before the days of multi-core machines. You could be running a calculation while the machine updated your inbox and refreshed the Windows display in myriad ways. This still goes on, and there are far more processes running on your PC than there are cores to run them on. The way it works is that every so often a timer interrupt comes along, one task is stopped and its registers saved, and then the operating system selects another process to run. All the processes share the same resources - like memory, but the memory is mapped differently for each process - so location 17 (say) would hold different values in each process. For the purposes of a TE, we can imagine initially that the simulation is run in a serial fashion, incrementing the differential equation for every neuron by one step, and then repeating that over and over. The parallel quality of the original brain process has been mapped into a sequence of operations, each of which is totally predictable!
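The serialisation David describes can be sketched with a toy network (my own example, not a neural model): a synchronous 'parallel' update computed one element at a time is still fully deterministic and reproducible.

```python
def tick(states, weights):
    """One synchronous update of a tiny 'parallel' network, computed
    serially: each unit's next value depends only on the previous
    snapshot of all units (the update rule is an arbitrary toy)."""
    snapshot = list(states)  # freeze the previous 'parallel' state
    return [sum(w * s for w, s in zip(row, snapshot)) % 97
            for row in weights]

W = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]  # arbitrary connections
s = [5, 7, 11]
for _ in range(100):
    s = tick(s, W)

# Re-running the serialised 'parallel' process reproduces it exactly:
s2 = [5, 7, 11]
for _ in range(100):
    s2 = tick(s2, W)
assert s == s2
```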

Objection two: your compiler argument. There I am hindered by my naivety in maths and computers, but to me there is a limit to how far we can simplify a serial representation of a parallel neural network.
The way a parallel network is simulated by a serial process already limits the necessary steps to a minimum, in my naive opinion. It seems to me that you have to calculate every state of every element of every neuron for every cycle, so you probably have many trillions of operations per cycle that you cannot avoid or simplify.
Well this is very gedanken, of course, but suppose the compiler recognised that the whole process had an outer loop round a process that updated every neuron's state - it might sneakily decide to precompute some of those steps, and the program could simply skip them and copy the output from a table!
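That 'sneaky compiler' is essentially memoisation: because each coalesced run of steps is a pure function of the starting state, its result can be precomputed into a table and looked up instead of re-executed, without changing the outcome. A sketch (toy step rule of my own choosing):

```python
from functools import lru_cache

def step(s: int) -> int:
    """A pure, deterministic step (arbitrary toy rule)."""
    return (3 * s + 7) % 1000

@lru_cache(maxsize=None)
def step_many(s: int, k: int) -> int:
    """k coalesced steps; the cache plays the role of the compiler's
    precomputed table, skipping repeated work on later calls."""
    for _ in range(k):
        s = step(s)
    return s

# Table lookup and step-by-step execution agree exactly:
s = 42
for _ in range(13):
    s = step(s)
assert s == step_many(42, 13)
```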

So to me it seems that the more steps are not based on logic, the more the computation moves from the platonic to the specific,
where Pythagoras' theorem is on one end of that spectrum, and our brain, or a simulation of it, is on the complete opposite end.
Yes there is a continuum in the usefulness of these theorems, but they always remain absolutely true!

My lack of knowledge and exact terminology make this post a bit wordy and rambling, but I hope I have made myself a bit more clear.
I can very easily be wrong about this, but at this time I still think the problem is more that I did not explain enough what I mean.
I think we are starting to clarify our terminology!

I will get back to what this TE actually might mean, but we seem a bit stuck on this point.
Perhaps the point to come back to is that simulations of oil wells (or just about anything else) are not the same as the thing they are simulating - so why should a simulation of a brain actually experience emotions?

As I said, I think this setup is more analogous to simulating a TV set or (better) a remote control for a toy plane. Unless you attach peripherals to the computer to actually send and receive the messages, it can't begin to work, and even if you do, the simulation isn't flying!

David
 
This thread has been a bit technical for me. But in this thought experiment, are we also simulating the computer's interaction with the environment? I.e. exposing it to light and sound waves, simulating the information that would be received by the nervous system when touching something, etc.?
 
This thread has been a bit technical for me. But in this thought experiment, are we also simulating the computer's interaction with the environment? I.e. exposing it to light and sound waves, simulating the information that would be received by the nervous system when touching something, etc.?
That is a point of contention too; I think it is necessary, David seems to think it isn't.
I will try to respond further to David tomorrow.
 
That is a point of contention too; I think it is necessary, David seems to think it isn't.
I will try to respond further to David tomorrow.
The tomorrow to which you refer passed a while back! I really think this thread might be getting somewhere, and I would hate us to lose it.

As regards Arouet's point about the environment, the environment is obviously vital for this brain to develop for the 30-odd years it developed naturally, but that is not the point. The gedanken experiment deliberately creates a situation in which the thought processes of the brain are transferred to a computer at the start of a short period of contemplation, and continued there in simulation. The idea is that if the guy is sitting in a darkened room for 30 mins (say), thinking about a failed love affair (or whatever!), he doesn't really NEED the environment at that point in time to do that. OK, in reality he would have some environmental input, but is that really vital in that situation?

I am particularly interested in Bart's idea that the simulant will only experience his angst once, however many times we might run the simulation, provided we run it at least once. My problem with that is that, at least in principle, the outcome of the simulation is a foregone conclusion down to the last bit - so why do we need to run the simulation even once in order for the simulant to experience his various emotions? Put another way, do we need to see the output of the simulation in order for the simulant to feel his emotions?

David
 
The tomorrow to which you refer passed a while back! I really think this thread might be getting somewhere, and I would hate us to lose it.

As regards Arouet's point about the environment, the environment is obviously vital for this brain to develop for the 30-odd years it developed naturally, but that is not the point. The gedanken experiment deliberately creates a situation in which the thought processes of the brain are transferred to a computer at the start of a short period of contemplation, and continued there in simulation. The idea is that if the guy is sitting in a darkened room for 30 mins (say), thinking about a failed love affair (or whatever!), he doesn't really NEED the environment at that point in time to do that. OK, in reality he would have some environmental input, but is that really vital in that situation?

I am particularly interested in Bart's idea that the simulant will only experience his angst once, however many times we might run the simulation, provided we run it at least once. My problem with that is that, at least in principle, the outcome of the simulation is a foregone conclusion down to the last bit - so why do we need to run the simulation even once in order for the simulant to experience his various emotions? Put another way, do we need to see the output of the simulation in order for the simulant to feel his emotions?

David

If all you're imagining is thoughts then I can see that. If you're thinking of things like whether the simulation would be able to see or hear, then that might require simulating those appendages.
 
As I said before, the reason I call P a theorem is that P is eternally true in the same way as Pythagoras' theorem is true (assuming a Euclidean space, of course).
I disagree with P being a theorem that is eternally true in the same way as Pythagoras's theorem.
I do agree that P is a piece of computation that will always give the same result; I agreed with that from the very start.
To get a form of computation that isn't a theorem in that sense, you need to have something unpredictable - such as true random numbers, or actual interaction with - say - another human. Even if the input comes from outside, it is still true that the outcome, O, of running P+I=>O is fixed for all time! This, of course, is why I specified the TE to work this way.

Now, what you refuse to recognise is that the designed part of P simulates the environment the simulant operates in, even if that environment is only the brain (although I stated my doubt over how we could recognise exactly where the brain stops).
We do not design the way that brain works; we faithfully copy it from the subject into P.

So, the fact that we do not have factors that are random and unpredictable for the short time we run our simulation does not negate the fact that the part that is generating the conscious behaviour of our simulant is completely formed by random and unpredictable events, and therefore is no theorem in the mathematical sense.

We simulate a sophisticated and very complex interaction mechanism; if you take away the context for this interaction you end up with a set of meaningless numbers. The output of P (O) does not prove anything mathematical; it is therefore not a theorem.

If we want to know the meaning of what happened in the time leading up to O, the only thing we can do is continue running P and ask our simulant what she was thinking. Looking at O alone tells us nothing.


Surely the extreme specificity makes it an uninteresting truth, but it doesn't detract from it being eternal - just as

3381265111639379629831=47055833459 * 71856449309

is eternally true, but too specific to be interesting!
If you are talking about your factorisation program, yes.
But since I do not consider that equivalent to P, this is not true for P.


It is much too specific to be an eternal universal truth; it can certainly not always have been, because the circumstances that facilitate the existence of P were not always there.
Nor can it be forever: the existence of P relies on the effort of extracting the information about S from the subject.
No matter what substrate the information is stored in, as you said before, nothing is forever; it can always get lost.

a) This is a TE - so you have to persuade me that this is a fundamental issue, and not just a practical issue - like finding an adequately powerful computer - or indeed performing the brain scan in the first place!
b) People might forget a theorem, but that wouldn't make it less true for all eternity!
It is definitely a fundamental issue if you want to say that P is an eternal truth, and build an argument from that.
The brain of our subject, and the way it is wired, could not have existed without the cosmological and chemical evolution that led up to its existence; therefore it could not always have been.

Well if you forget the number 3381265111639379629831 (and its factors), you obviously lose track of the above factorisation, but it doesn't stop that factorisation being true for all time (can I start saying "being a Theorem" again - it is much more concise!)
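The factorisation itself is mechanically checkable by anyone, at any time. A quick Python check, using the factors quoted above:

```python
# Verifying the factorisation quoted in this thread: multiplying the two
# factors recovers the original 22-digit number exactly.
n = 3381265111639379629831
p, q = 47055833459, 71856449309
assert p * q == n
print(p * q == n)   # True
```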
Again, true for your factorization program, but not for P. But this one illustrates the difference.
The theorem(s) that drive your factorization program work for a whole class of problems.
If the knowledge about these sorts of theorems is lost now, it can (arguably) always be recovered by any other species that evolves a culture that knows logic and maths.

P, on the other hand, depends too much on past events; the part that really makes it go is not recoverable through discovering some eternal logic.

It is unlikely, but if we choose an arbitrary 1000 digit number and factor it, it is unlikely that this race will have factored the identical number, but it doesn't make the factorisation any less true - any less a theorem, or any less discoverable!
No, but that is exactly the difference with our program P!

I will cut my answer short here, if the other issues are important they will pop up later.

I am sorry I couldn't respond earlier; in these hard-to-express matters I run into the limitations of English not being my first language.
I am also running out of ways to make clear to you why I think P is not a theorem in the eternal way you use as an argument.

Anyway, I certainly haven't lost interest in this thread, just stuck on this theorem thing a bit right now.
 
I disagree with P being a theorem that is eternally true in the same way as Pythagoras's theorem.
I do agree that P is a piece of computation that will always give the same result; I agreed with that from the very start.
I am really not clear what it is you disagree with here! I mean, if we break the program into all its individual time-steps (at the machine instruction level) and concentrate on just one of them - s567124 -> s567125 (say) - it would be feasible to sit down with the CPU manual and derive s567125 from s567124 using the logic of the computer, without actually using the computer at all. This would actually be doable because any one step would usually access only a few locations from the vast memory. (I'll deal with the exceptions if you think it is important.) So although the complete simulation of trillions of steps would be impossible for a human to perform, each and every one of them could be performed just from the computer logic. Given that, in what meaningful sense would it be wrong to say that the output from the whole program would be eternally true?
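To make the "derive the next state from the manual" point concrete, here is a toy sketch of a hypothetical machine (the instruction set is invented purely for illustration, not any real CPU). Given the documented rules, anyone can compute the next state from the current one by hand:

```python
# Toy illustration: one step of a hypothetical machine whose rules are
# fully documented, so the next state is derivable "by hand".
# State = (program_counter, memory). Invented instructions:
#   0x1 a b  -> mem[a] = mem[a] + mem[b]   (ADD)
#   0x2 a b  -> mem[a] = mem[b]            (COPY)

def step(pc, mem):
    """Apply the machine's documented rules once: decode, execute, advance."""
    op, a, b = mem[pc], mem[pc + 1], mem[pc + 2]
    mem = list(mem)                      # next state; the old one is untouched
    if op == 0x1:
        mem[a] = (mem[a] + mem[b]) % 256
    elif op == 0x2:
        mem[a] = mem[b]
    return pc + 3, mem

# s1 -> s2 is fixed by the rules alone; anyone with the "manual" above
# can compute it without running any physical computer.
s1 = (0, [0x1, 6, 7, 0, 0, 0, 2, 3])     # ADD: mem[6] += mem[7]
s2 = step(*s1)
print(s2)                                # (3, [1, 6, 7, 0, 0, 0, 5, 3])
```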

Now, what you refuse to recognize is that the designed part of P simulates the environment the simulant operates in, even if that environment is only the brain (although I stated my doubt over how we could recognize exactly where the brain stops).
We do not design the way that brain works; we faithfully copy that from the subject into P.
But I am not discussing design, I am saying that if we can in principle copy the state of the brain into a computer in a form that can be simulated, then a computer simulation starting from that copy is subject to the same rules as any other computer program. If we can't create a copy that can be simulated, we have no point of disagreement!
We simulate a sophisticated and very complex interaction mechanism, if you take away the context for this interaction you end up with a set of meaningless numbers. The output for P (O) does not prove anything mathematical, it is therefore not a theorem.
The theorem is that P => O, it is not O itself!

Let's stop using the word 'Theorem' because it seems to be causing confusion, what I have described above is the fact that not only is the output from P always going to be O, but that this does not depend (in principle) on the actual existence of the computer 5 billion years hence (say). Even in 5 billion years, someone/some entity could take P together with the spec of how the computer used to work, and derive O (and it would be the same). This would also be true 5 billion years in the past - provided only that someone guessed the starting state of P! In other words, even back then, P=>O would be a fact that something could stumble upon!

There probably isn't much point pursuing where this leads unless/until you see this point.

David
 
Searle always amused me - he invested so much effort in the Chinese Room argument, and then didn't follow the argument through to the end.

David Chalmers and maybe even Roger Penrose seem to have followed that route somewhat as well. I mean the real sting in the tail of the computational consciousness argument I have been having above, is that once people reject the idea of a conscious computer, it is hard to see what is left in materialism - because anything physical can be simulated (in principle), and Penrose's idea of non-computational physics seems very tenuous, and it wouldn't actually explain consciousness in any case - all it would do is bypass the Gödel argument!

I think a lot of the problem is the academic stigma associated with being seen to endorse alternative views about reality.

David
 
As regards Arouet's point about the environment, the environment is obviously vital for this brain to develop for the 30-odd years it developed naturally, but that is not the point. The gedanken experiment deliberately creates a situation in which the thought processes of the brain are transferred to a computer at the start of a short period of contemplation, and continued there in simulation. The idea is that if the guy is sitting in a darkened room for 30 mins (say), thinking about a failed love affair (or whatever!), he doesn't really NEED the environment at that point in time to do that. OK, in reality he would have some environmental input, but is that really vital in that situation?
Since our sense of self is almost certainly dependent on knowing what, who, where, and how we are, I think it is vital, although it is probably impossible to estimate to what degree.
There seem to be quite a few senses more than the traditional five. As I said before, if our subject finds herself suddenly without all her senses, the worry about a lost love will become the least important one very fast.
Another thing is, where are you going to make the cut-off in what to simulate? Do we stop at the spine or do we include the rest of the nervous system?
Also, the development of the brain did not really start decades ago, but rather billions of years ago.

I am particularly interested in Bart's idea that the simulant will only experience his angst once, however many times we might run the simulation, provided we run it at least once.
Yes, that is what i think, at least something we agree on.
My problem with that is that, at least in principle, the outcome of the simulation is a foregone conclusion to the last bit
You have not shown that, but more on that in an answer to your other post.
- so why do we need to run the simulation even once in order for the simulant to experience his various emotions? Put another way, do we need to see the output of the simulation in order for the simulant to feel his emotions?

David

No, the output O tells us nothing; it is just one state. To the subject, O is a zero amount of time. If we want to know what she thought from S to O, we have to run P beyond O and ask her. That is why I insisted on some form of interface in our set-up of this TE. This is in no way intended as some sort of Turing test; it is a direct result of the fact that we cannot derive what the simulant thinks from P.
 
Since our sense of self is almost certainly dependent on knowing what, who, where, and how we are, I think it is vital, although it is probably impossible to estimate to what degree.
There seem to be quite a few senses more than the traditional five. As I said before, if our subject finds herself suddenly without all her senses, the worry about a lost love will become the least important one very fast.
Another thing is, where are you going to make the cut-off in what to simulate? Do we stop at the spine or do we include the rest of the nervous system?
Also, the development of the brain did not really start decades ago, but rather billions of years ago.

All of this is rendered utterly irrelevant by your belief that a brain scan of a sufficiently sophisticated form could render an image of the state of the brain in numbers that could be simulated. The numerical image of that brain - together with the simulator program - would then be subject to the rules of computer science - not evolutionary biology.

As regards what we simulate, since this is a TE, we can scan and simulate the entire body - it makes no difference to the argument.

BTW, we don't agree on the number of times the simulant will experience his angst. I contend that the simulant will not experience any angst however many times the program is run, and furthermore that the simulation will not even show the appropriate correlates of conscious emotional disturbance - because I suspect the brain is coupled to the mind, but it doesn't generate it. The simulation could not simulate this coupling.

David
 
Last edited:
I am really not clear what it is you disagree with here! I mean, if we break the program into all its individual time-steps (at the machine instruction level) and concentrate on just one of them - s567124 -> s567125 (say) - it would be feasible to sit down with the CPU manual and derive s567125 from s567124 using the logic of the computer, without actually using the computer at all. This would actually be doable because any one step would usually access only a few locations from the vast memory. (I'll deal with the exceptions if you think it is important.) So although the complete simulation of trillions of steps would be impossible for a human to perform, each and every one of them could be performed just from the computer logic.
OK, but I do not see what difference that makes, or what paradox follows from that. You repeat the same sequence in a different way; nothing has changed.
Our subject will have the memory of what happened during that period. If you run P beyond O you can ask her.
Given that, in what meaningful sense would it be wrong to say that the output from the whole program would be eternally true?
In what meaningful sense can we say it is eternally true? It is based on events that are lost in time, with no recoverable logic. Sure, it is repeatable for as long as we preserve a copy of the whole thing, but if it depends on that we can hardly call it eternally true.
But I am not discussing design, I am saying that if we can in principle copy the state of the brain into a computer in a form that can be simulated, then a computer simulation starting from that copy is subject to the same rules as any other computer program. If we can't create a copy that can be simulated, we have no point of disagreement!
I do believe it is, at least in principle, possible to copy the state of the brain into a computer. I do not think every computer program is an eternal truth.
I was not talking design, but it is important not to forget that we did not design the way the simulant behaves. The part that is designed can be considered the environment the brain operates in; the brain state itself we did not design, but copied.

We can also simulate every random sequence of events in a computer program; does that make any random sequence of events an eternal truth?
I think you confuse repeatability with eternal status.

I would go further in mentioning a few points that, at least to me, illustrate the difference.
Did that before and you did not seem to want to engage, so I do not see the point of that right now.
Let's stop using the word 'Theorem' because it seems to be causing confusion, what I have described above is the fact that not only is the output from P always going to be O,
Up to here I agree. I do not want this to be a semantic discussion about the word 'theorem'; I would rather like to be able to explain to you why I think P does not have the eternal quality you seem to indiscriminately ascribe to any computer program.
but that this does not depend (in principle) on the actual existence of the computer 5 billion years hence (say). Even in 5 billion years, someone/some entity could take P together with the spec of how the computer used to work, and derive O (and it would be the same). This would also be true 5 billion years in the past - provided only that someone guessed the starting state of P! In other words, even back then, P=>O would be a fact that something could stumble upon!
That is simply not so, IMO; its future repeatability depends on preserving P, which in its turn depends on a deliberate effort of some entity doing that. Statistically, that is going to go wrong sometime.
Its past existence depends on a finite but maybe incalculably small probability, putting it probably on the level of impossibility in practice.

There probably isn't much point pursuing where this leads unless/until you see this point.

David
I do not know. Even if we do not agree on this, to the simulant everything only happens once, even if this P should be 'eternally true'.

To the simulant, P is like the laws of nature are to us, she operates in the mini universe we created for her. How should she know we halted the program and restarted it five billion years later? To her that time does not exist.

For all we know we live in a simulation on a computer; maybe that program has crashed and been recovered a million times without us knowing.

So I really do not see why we would not pursue this further, if we let open-minded curiosity be the motivation.
 
BTW, we don't agree on the number of times the simulant will experience his angst. I contend that the simulant will not experience any angst however many times the program is run, and furthermore that the simulation will not even show the appropriate correlates of conscious emotional disturbance - because I suspect the brain is coupled to the mind, but it doesn't generate it. The simulation could not simulate this coupling.

David
But that does not follow from this TE.
 
We can also simulate every random sequence of events in a computer program; does that make any random sequence of events an eternal truth?
Yes, in exactly the same way as the factorisation of a randomly chosen 100-digit integer would be an eternal truth. But that is OK because we both agree it wouldn't generate any emotion :)

I think you confuse repeatability with eternal status.
If it were necessary to actually preserve a physical computer, I might agree. However it is not. The rules that the computer obeys for any particular step are written down in the low level programmer's manual - they allow any person to derive the next step from the current one. Furthermore, it is possible to take the process of decoding and executing a machine instruction and encode it as a mathematical function. For example, if the instruction set for the computer in question uses the bottom 4 bits to specify the type of instruction (load, add, subtract, etc), this can be represented as the remainder when the unsigned integer corresponding to the instruction is divided by 16. (There are several ways the bits in a memory slot can be interpreted - as an unsigned (i.e. positive) integer, as a signed integer, as a floating point number, etc. Each interpretation can be represented as a mathematical expression involving the bits). Proceeding in this way, the process of moving from step to step can be represented in a purely mathematical way - without any reference to an actual computer filled with chips and powered by electricity! The relationship P=>O is therefore really a mathematical one and the result is eternal in precisely the same way as a factorisation is eternal.
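The bit-field decoding described here really is just arithmetic. A minimal Python sketch, where the "bottom 4 bits = opcode" layout is the hypothetical example from the text, not any real machine's encoding:

```python
# Decoding an instruction word as pure arithmetic, per the hypothetical
# layout in the text: bottom 4 bits = opcode, remaining bits = operand field.
def decode(word):
    opcode = word % 16        # remainder mod 16 = the bottom 4 bits
    operand = word // 16      # the bits above the opcode field
    return opcode, operand

# The same bits can also be read under other interpretations, each again
# a mathematical function of the bits:
def as_signed_8bit(word):
    """Interpret the low 8 bits as a two's-complement signed integer."""
    w = word % 256
    return w - 256 if w >= 128 else w

print(decode(0b10110111))          # (7, 11): opcode 7, operand 11
print(as_signed_8bit(0b10110111))  # -73
```

No chips or electricity are involved anywhere; the step from state to state becomes a composition of functions like these.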

That is simply not so, IMO; its future repeatability depends on preserving P, which in its turn depends on a deliberate effort of some entity doing that. Statistically, that is going to go wrong sometime.
Its past existence depends on a finite but maybe incalculably small probability, putting it probably on the level of impossibility in practice.

Well look, if I were to select at random an N-digit integer, would its factorisation cease to be an eternal truth if N were so large that there was a statistical problem storing the digits?

I do not know. Even if we do not agree on this, to the simulant everything only happens once, even if this P should be 'eternally true'.
Well if we assume materialism (i.e. that the simulation is indeed possible) then I would agree with you.
To the simulant, P is like the laws of nature are to us, she operates in the mini universe we created for her. How should she know we halted the program and restarted it five billion years later? To her that time does not exist.

For all we know we live in a simulation on a computer; maybe that program has crashed and been recovered a million times without us knowing.

So I really do not see why we would not pursue this further, if we let open-minded curiosity be the motivation.
Well the problem here is that once you turn P=>O into a piece of maths, it is really hard to know how many times this is 'executed' - so if we live inside a computer simulation, the entire simulation is equivalent to a piece of maths! This is not Gödel's theorem, but it is a vital step that he used to establish his theorem - encoding logic as (very messy) mathematical expressions.

You are left with the disturbing conclusion that the simulation of the simulant's brain was in fact an abstract fact that was always true. So also would be a near infinity of other possible simulations representing other possible brain-body simulations!

My perspective on all this, is that something has to give, and my belief is that the simulation is impossible because the brain does not generate the mind but couples with it. In that case, a computer simulation of a brain is bound to fail because it hasn't got a link to the mind - exactly as a simulation of a radio would not work unless it could be fed a simulation of a radio signal to decode.

By all means let's pursue this further, but you have to face one or two facts about computers - that the program and its data are stored in memory cells, each of which is simply a number, and that the operation of the computer is a fundamentally mathematical process - though not usually viewed that way! I realise that these ideas are intensely uncomfortable, because they come up bang against the way you would normally think about materialism in action in the brain or in a simulation, but I really don't think there is any wriggle room left in my argument.

David
 
Last edited:
Yes, in exactly the same way as the factorisation of a randomly chosen 100-digit integer would be an eternal truth. But that is OK because we both agree it wouldn't generate any emotion :)
Agreed, a factorisation program would not generate emotion.

If it were necessary to actually preserve a physical computer, I might agree.
Preserving instructions to build the computer and the program would be the same, right?
However it is not. The rules that the computer obeys for any particular step are written down in the low level programmer's manual - they allow any person to derive the next step from the current one. Furthermore, it is possible to take the process of decoding and executing a machine instruction and encode it as a mathematical function. For example, if the instruction set for the computer in question uses the bottom 4 bits to specify the type of instruction (load, add, subtract, etc), this can be represented as the remainder when the unsigned integer corresponding to the instruction is divided by 16. (There are several ways the bits in a memory slot can be interpreted - as an unsigned (i.e. positive) integer, as a signed integer, as a floating point number, etc. Each interpretation can be represented as a mathematical expression involving the bits). Proceeding in this way, the process of moving from step to step can be represented in a purely mathematical way - without any reference to an actual computer filled with chips and powered by electricity!
The problem is that we cannot get to this mathematical representation without deriving it from our poor subject, and killing her in the process.
That means that the initial state of P cannot be recovered, even as a mathematical representation, if P is lost.
It is based on a complex relation that accumulated through stochastic processes over a very long time - processes we cannot ever replicate.

So we have to preserve the mathematical representation of P, which is functionally the same as preserving the physical computer itself.

Therefore, in light of your statement above, I must assume you agree.

The relationship P=>O is therefore really a mathematical one and the result is eternal in precisely the same way as a factorisation is eternal.
No it isn't; P is not equivalent to your factorisation. The only thing that is really the same is the fact that they both can be done on computers.
That the relationship P=>O is always going to be true is undeniable. That does not necessarily mean it is eternal.
You are still confusing repeatability with eternal status.

Well look, if I were to select at random an N-digit integer, would its factorisation cease to be an eternal truth if N were so large that there was a statistical problem storing the digits?
Of course not, because factorisation is based on axioms and other theorems. We can prove it works not just for this number; we can prove it works for every number.
And there lies the difference with P: at the level of the mathematics, we cannot even say it 'works'. It is not a theorem that proves anything.
What is its function? On the mathematical level, nothing; it describes the behaviour of a system. It does not stop when the number is factorised.
It does not actually 'start', because we copy and continue. It does not actually 'end'; we decided to run it for half an hour, but we could also run it indefinitely.
From the outside it could be the mathematical description of complete gibberish, or it could be the mathematical description of a mini-universe that contains a conscious entity.

We can only tell if we run P and ask our simulant; that in itself has consequences we haven't even started to discuss, and that seem interesting.

Well if we assume materialism (i.e. that the simulation is indeed possible) then I would agree with you.
This TE assumes we do; it is meant to examine whether we run into any logical obstacles if we do.
Making an argument on the basis that you think materialism is not true is begging the question, is it not?


Well the problem here is that once you turn P=>O into a piece of maths, it is really hard to know how many times this is 'executed' - so if we live inside a computer simulation, the entire simulation is equivalent to a piece of maths! This is not Gödel's theorem, but it is a vital step that he used to establish his theorem - encoding logic as (very messy) mathematical expressions.
It does not matter how many times P is run; from the simulant's viewpoint that happens in another universe, not even known by her.
She will always have the memory of that half an hour of simulation, IF we continue to run P and ask her.

Let's say we do not have to kill her to extract her brain state. We freeze her body, run P, copy the brain state at O back into her brain, revive her, and she will have the memories of that half an hour of simulation.

You are left with the disturbing conclusion that the simulation of the simulant's brain was in fact an abstract fact that was always true. So also would be a near infinity of other possible simulations representing other possible brain-body simulations!
No, it is not; see above, and all my other posts.

My perspective on all this, is that something has to give, and my belief is that the simulation is impossible because the brain does not generate the mind but couples with it. In that case, a computer simulation of a brain is bound to fail because it hasn't got a link to the mind - exactly as a simulation of a radio would not work unless it could be fed a simulation of a radio signal to decode.
How does this follow from this TE?
By all means let's pursue this further, but you have to face one or two facts about computers - that the program and its data are stored in memory cells, each of which is simply a number, and that the operation of the computer is a fundamentally mathematical process - though not usually viewed that way! I realise that these ideas are intensely uncomfortable, because they come up bang against the way you would normally think about materialism in action in the brain or in a simulation, but I really don't think there is any wriggle room left in my argument.

David
The only lack of comfort I ever perceive on this subject is the discomfort that proponents feel if they have to consider that their thoughts and feelings are maybe reducible to physical processes.

I do not know about wiggle room in your argument.
What I do know is that you have stretched the definition of 'theorem' so far that it includes every numerical representation of any random sequence we can think of.
Thus making your attempt to carry over that elusive 'eternal quality' not a reductio ad absurdum of the outcome of this TE, but a reductio ad absurdum of your own argument.
 
Agreed, a factorisation program would not generate emotion.


Preserving instructions to build the computer and the program would be the same, right?
No more so than preserving the number which was to be factorised! I dealt with this point already!

Note that theorems can have preconditions - such as the logic of the computer in question!

The problem is that we cannot get to this mathematical representation without deriving it from our poor subject, and killing her in the process.
That means that the initial state of P cannot be recovered, even as a mathematical representation, if P is lost.
It is based on a complex relation that accumulated through stochastic processes over a very long time - processes we cannot ever replicate.
Of course the initial state of P can be recovered - it can be copied before the simulation is started!

So we have to preserve the mathematical representation of P, which is functionally the same as preserving the physical computer itself.
In the same sense that you have to preserve Pythagoras' theorem if you want to check it 5 billion years hence. Yes P is bigger and messier, but is this really relevant - remember we are talking about a TE!

Therefore, in light of your statement above, I must assume you agree.


No it isn't; P is not equivalent to your factorisation. The only thing that is really the same is the fact that they both can be done on computers.
Note that the number to be factorised could be as big as P if you like!

From the outside it could be the mathematical description of complete gibberish, or it could be the mathematical description of a mini-universe that contains a conscious entity.

Yes - and this is the whole point! You are not only saying that the simulation will mimic the behaviour of the brain, you are claiming that the relevant emotions will actually be felt! It is the idea that running the simulation on the computer will actually generate experiences that we disagree on! Note that if the simulation was of a mini-universe, there would be no expectation that it would do anything but compute the evolution of the structure!

We can only tell if we run P and ask our simulant; that in itself has consequences we haven't even started to discuss, and that seem interesting.


This TE assumes we do; it is meant to examine whether we run into any logical obstacles if we do.
Making an argument on the basis that you think materialism is not true is begging the question, is it not?
My core argument was over at that point, I was merely showing that if you don't assume materialism, there is no paradox.

It does not matter how many times P is run; from the simulant's viewpoint that happens in another universe, not even known by her.
She will always have the memory of that half an hour of simulation, IF we continue to run P and ask her.

Let's say we do not have to kill her to extract her brain state. We freeze her body, run P, copy the brain state at O back into her brain, revive her, and she will have the memories of that half an hour of simulation.
We aren't really discussing memories, but actual experiences - live.

Thus making your attempt to carry over that elusive 'eternal quality' not a reductio ad absurdum of the outcome of this TE, but a reductio ad absurdum of your own argument.

I don't see what is elusive about the notion of an eternal truth - isn't 2+2=4 eternal? Isn't any true arithmetic statement eternal? I was trying to show you how a computer program can be turned into an arithmetical statement (albeit rather large :) but you don't want to pursue that - we probably have to differ.

David
 
Bart,

Since I doubt we will ever agree, I'd like to broaden this discussion a bit, because one of the main reasons that I don't believe in materialism, is that I don't think it will ever have an explanation for consciousness.

Let's start with something that I expect we agree about. There is clearly a difference between a robot that acts as if it is experiencing emotions, and one that is really experiencing those emotions. Likewise, a program that can fake an empathic discussion with a human (say by email) is different from one that really experiences such emotions. This is the question of whether the program experiences qualia.

A program that faked emotional responses to emails might be quite crude. For example, it might apply a large number of very simple templates to the text - such as spotting phrases such as "my mother is ill", "my wife has left me", etc., and generate a more or less canned reply that might look very convincing.
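As a deliberately crude sketch of the kind of template program meant here (the trigger phrases from the text plus invented canned replies, illustrative only):

```python
# A crude "fake empathy" responder of the kind described: match canned
# trigger phrases, emit canned replies. Nothing here feels anything;
# it is pure pattern matching.
TEMPLATES = [
    ("my mother is ill",
     "I'm so sorry to hear about your mother. How are you holding up?"),
    ("my wife has left me",
     "That must be incredibly painful. Do you want to talk about it?"),
]

def reply(email_text):
    text = email_text.lower()
    for trigger, canned in TEMPLATES:
        if trigger in text:
            return canned
    return "I see. Tell me more about how you're feeling."

print(reply("I haven't slept - my mother is ill."))
```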

OK - assuming that you agree that there is a real difference between a fake empathic program and the real thing, how would you propose to:

1) Test a program to see if it were genuine (assuming you couldn't read the source).

2) Attempt to produce a genuinely empathic program. I am assuming that you think such a program could be written - please say if you don't!

The difference between a genuine and a fake empathic program would be that the former would actually feel emotions of various sorts.

Now think about any ordinary program that you might write or use. The computer CPU would perform the actual calculations required, but the only way you could use those calculations would be via some sort of additional hardware - a printer, screen, audio output channel, etc. A program has to produce some sort of output of this kind - otherwise what use is it?

In the case of a genuine empathic email conversation program, you would need to make it actually feel emotions as it runs. It couldn't really output these to anything, unless you want to propose some extra emotion-generator hardware, so your program would be unique in that it would do something extra just as a result of executing. Could this mean that all programs generate emotions as they execute, or would you have to take some special action to make this happen? Please assume that waiting 5 billion years for another evolution isn't an option!

David
 
Could the Internet Ever “Wake Up”?

Massimo Pigliucci's evaluation

I would say Massimo's issues with machine consciousness are pretty well elucidated, though it'll surprise no one that I don't think he's being all that fair to Panpsychism...and maybe the Gaia Hypothesis. <<insert appropriate smiley>>

First a sampling of his criticisms regarding conscious computers:
Koch realizes that brains and computer networks are made of entirely different things, but says that that’s not an obstacle to consciousness as long as “the level of complexity is great enough.” I always found that to be a strange argument, as popular as it is among some scientists and a number of philosophers. If complexity is all it takes, then shouldn’t ecosystems be conscious? (And before you ask, no, I don’t believe in the so-called Gaia hypothesis, which I consider a piece of new agey fluff.)

Either Dennett doesn’t think that the substrate matters — in which case there can’t be any talk of right or wrong stuff — or he thinks it does. In the latter case, then we need positive arguments for why replacing biologically functional carbon-based connections with silicon-based ones would retain the functionality of the system. I am agnostic on this point, but one cannot simply assume that to be the case.

More broadly, I am inclined to think that the substrate does, in fact, matter, though there may be a variety of substrates that would do the job (if they are connected in the right way). My position stems from a degree of skepticism at the idea that minding is just a type of computing, analogous to what goes on inside electronic machines. Yes, if one defines “computing” very broadly (in terms, for instance, of universal Turing machines), then minding is a type of computing. But so is pretty much everything else in the universe, which means that the concept isn’t particularly useful for the problem at hand.

I have mentioned in other writings John Searle’s (he of the Chinese room thought experiment) analogy between consciousness as a biological process and photosynthesis. One can indeed simulate every single reaction that takes place during photosynthesis, all the way down to the quantum effects regulating electron transport. But at the end of the simulation one doesn’t get the thing that biological organisms get out of photosynthesis: sugar. That’s because there is an important distinction between a physical system and a simulation of a physical system.

Now for his opposition to panpsychism, admittedly an aside so I'm putting it into a collapsing quote box:

Him:

"And talk about wild speculation: in the same interview Koch told Slate that he thinks that consciousness is “a fundamental property of the universe,” on par with energy, mass and space. Now let’s remember that we have — so far — precisely one example of a conscious species known in the entire universe. A rather flimsy basis on which to build claims of necessity on a cosmic scale, no?"


Me:

On the one hand, there are good arguments for the consciousness of animals, so at minimum that's an open question. On the other, consciousness isn't like other categories such as "capable of reproduction": it has an internal quality that can only be completely evaluated from the inside. This is not to say panpsychism is the definite answer, but once someone recognizes the problems with emergence and materialistic accounts overall, I think it's hard to decide between the remaining options, which I believe fit into one of three categories -> Panpsychism, Idealism, and Neutral Monism.
 