Is the Brain analogous to a Digital Computer? [Discussion]

  • Thread starter Sciborg_S_Patel

Now, a computing machine is like a pocket watch rather than like a plant. It runs the programs it does, engages in conversation, etc. in just the same way that the watch displays the time. That is to say, it has no inherent tendency to do these things, but does them only insofar as we impose these functions on the parts that make up the machine. (This is why, as Saul Kripke points out, there is no observer-independent fact of the matter about what program a computer is running, and why, as Karl Popper and John Searle point out, there is no observer-independent fact of the matter about whether something even counts as a computer in the first place.) To be a computer is to have a mere accidental form rather than a substantial form.

Interesting debate in the comments. Feser seems to ignore that while we are all in Turing's debt, only some of us have the kind of bigotries that ruined Turing's life.
 
Your brain does not process information and it is not a computer

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
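The 'linear optical trajectory' strategy has a neat geometric core: for a ball in idealised, drag-free projectile flight, the tangent of the elevation angle seen from the eventual landing point grows exactly linearly in time. A fielder who simply moves to cancel any optical acceleration therefore ends up at the catch point without ever modelling the trajectory. A minimal sketch of that geometric fact (idealised physics; function and parameter names are my own, not from the McBeath paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def optical_tangent_samples(speed, launch_deg, n=100):
    """Simulate a drag-free fly ball and return tan(elevation angle)
    as seen from the landing point, sampled during the flight."""
    vx = speed * math.cos(math.radians(launch_deg))
    vy = speed * math.sin(math.radians(launch_deg))
    T = 2 * vy / G          # total flight time
    x_land = vx * T         # landing distance
    samples = []
    for i in range(1, n):   # skip t=0 and t=T (the ratio is 0/0 there)
        t = T * i / n
        x = vx * t
        y = vy * t - 0.5 * G * t * t
        samples.append(y / (x_land - x))  # tan of elevation angle
    return samples

# Algebraically the ratio reduces to G*t/(2*vx): linear in t, so an
# observer standing at the catch point sees zero optical acceleration.
```

No trajectory prediction appears anywhere: the fielder only needs a running estimate of whether the optical angle is accelerating, which is exactly the kind of perceptual control loop the anti-representational camp points to.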

Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.

Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.

Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
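The scale in that paragraph is easy to underestimate. Here is a toy back-of-envelope in Python: the neuron, synapse and protein counts come from the text above, but every per-item byte cost is my own illustrative assumption, chosen only to make the arithmetic concrete:

```python
# Back-of-envelope size of a single static brain snapshot.
NEURONS = 86_000_000_000              # ~86 billion neurons
SYNAPSES = 100_000_000_000_000        # ~100 trillion interconnections
PROTEINS_PER_SYNAPSE = 1_000          # >1,000 proteins per connection point

# Illustrative assumptions (not from the article):
BYTES_PER_WEIGHT = 4     # one float32 connection strength per synapse
BYTES_PER_NEURON_ID = 5  # 86e9 ids need 37 bits; round up to 5 bytes
BYTES_PER_PROTEIN = 1    # a single byte of state per protein

weights = SYNAPSES * BYTES_PER_WEIGHT                           # 0.4 PB
wiring = SYNAPSES * 2 * BYTES_PER_NEURON_ID                     # 1 PB
proteins = SYNAPSES * PROTEINS_PER_SYNAPSE * BYTES_PER_PROTEIN  # 100 PB

PIB = 1024 ** 5
for name, size in [("weights", weights), ("wiring", wiring),
                   ("proteins", proteins)]:
    print(f"{name:8s}: {size / PIB:8.2f} PiB")
```

Even with these generous simplifications, a static snapshot runs to roughly a hundred petabytes before a single moment of dynamics is recorded – which is where the paragraph's point about moment-to-moment activity really bites.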

Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.

We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
 

Obviously the brain *is* a computer; it's just not the 'one-dimensional', locked-down and controlled kind of computer we design today. Almost everything we produce today is designed to be fixed and rigid, to pass through space & time relatively unaltered, or to alter in space & time in predictable and deliberate ways.

I have little doubt that we will eventually develop hardware substrates which have plasticity and can evolve spatial patterns, within which will be a quantum mechanical mechanism which allows coherent interference. Combine that with feedback via the external world and we have something which begins to look like life.

But as things stand now, we cannot currently envisage our world without rigid controls and rules, and until we can, the things which we design will continue to reflect the status quo.

But still, I'm convinced that our current rigid and controlling way of thinking will eventually have to pass away as we come to understand that nature does not function quite the way we thought. The world and our societies will change radically. I suspect that rights and freedoms to create patterns will become more protected, and mass control of patterns and meanings that conflict with these freedoms will not generally be tolerated.
 
The threads on this subject, particularly those referencing Searle & Lanier, were gobbled up.

So I figure this thread can centralize these discussions.

Jaron Lanier, computer scientist & artist, author of You are Not a Gadget, argues against consciousness being reducible to computation.

....

Lanier's one of the guys in the i^2 debate ->

i-Squared Debate: DON'T TRUST THE PROMISE OF ARTIFICIAL INTELLIGENCE


As technology rapidly progresses, some proponents of artificial intelligence believe that it will help solve complex social challenges and offer immortality via virtual humans. But AI’s critics say that we should proceed with caution: its rewards may be overpromised, and the pursuit of superintelligence and autonomous machines may result in unintended consequences. Is this the stuff of science fiction? Should we fear AI, or will these fears prevent the next technological revolution?
 