Discussion in 'Critical Discussions Among Proponents and Skeptics' started by Sciborg_S_Patel, Mar 29, 2014.
what you said about the computers doesn't make any sense to me.
Why - it shows how something could be passed to someone else, and yet it is represented by icons that differ on the two computers.
I always wonder if the Idealist model really involves some mind doing all the quantum calculations needed to run the universe, or if the whole thing works more like a video game, and most of the stuff is just approximated.
Personally I don't quite get the idea of combining the Simulation Hypothesis with Idealism, though I know some intellectuals have posited this metaphysics. If Mind is all there is, what's the necessity of creating a simulation within its own thoughts? Is it to preserve consistency - what some might call the "laws of nature"?
But what is it about the Mind of God that requires the need for a programmed reality within its own thought/dream/whatever?
I suspect there's just the Real, that it isn't "mental" or "physical" as usually defined.
Well we plunge into the depths of speculation here!
I sometimes have a sneaking suspicion that what is going on is something like this:
Mental beings started by creating a very simple pretend physical home, with some simple rules about how it worked. Those responsible for maintaining this illusion worked fairly hard, but the rules were comfortably inexact - some people thrived, others were sickly; some were robust mentally, others went mad.
Then people started to explore this space, and made things a bit tougher for the maintainers - they started measuring things, and the maintainers decided these measurements should show consistency. The explorers postulated the existence of atoms, and so pushed the maintainers into adding a layer of complexity called atoms and molecules. Then they started to study the properties of these atoms and their components - such as electrons - and conceived of these things running rather like solar systems, but with electrostatic attraction.

This created a whole slew of problems for the maintainers, whose desire was always to keep things consistent. One of the biggest was that every carbon atom (say) would have slightly different properties depending on the energy of the electrons in orbit - so chemistry would be the chemistry of a sort of undifferentiated sludge! This really made the maintainers think, and they realised that only a wave structure for the electron would solve it - because waves naturally form into discrete sets, just as sound waves do in an organ pipe.
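The organ-pipe point can be made concrete with a few lines of code: a pipe of fixed length only supports standing waves whose wavelengths fit its boundary conditions, so the allowed frequencies form a discrete series rather than a continuum. (A rough sketch; the pipe length is an arbitrary illustrative value, and the formula is the textbook one for a pipe open at both ends.)

```python
# Standing waves in an open organ pipe: only wavelengths that satisfy the
# boundary conditions are allowed, so frequencies come in a discrete series
# f_n = n * v / (2L) - a classical analogue of discrete electron states.

SPEED_OF_SOUND = 343.0  # m/s, in air at about 20 C
PIPE_LENGTH = 1.0       # metres (illustrative value)

def allowed_frequencies(n_modes, v=SPEED_OF_SOUND, length=PIPE_LENGTH):
    """Return the first n_modes standing-wave frequencies of an open pipe."""
    return [n * v / (2 * length) for n in range(1, n_modes + 1)]

modes = allowed_frequencies(4)
print(modes)  # a discrete set: [171.5, 343.0, 514.5, 686.0] Hz
```

Nothing in between those frequencies is sustainable in the pipe, which is the sense in which wave structure "naturally forms into discrete sets".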
Thus perhaps we are pushing the whole simulation process beyond what it can stand - so one solution is for the maintainers to crudely paint the parts of the universe that we can't explore in minute detail.
Remember that if someone decides to set up a quantum experiment (or whatever) they will give their intentions away to the maintainers in time for them to simulate a few particles in extreme detail, while maybe not even bothering with the atomic structure of the desk holding the apparatus.
(OK, maybe I have probably read SKEPTIKO for too long!)
If I'm understanding you correctly there are maintainers/creators who aren't on the level of the Ground of All Being?
So even the maintainers are working with the "thoughts of God"?
I guess I like to leave such questions open.
You see, the way I see it, science has an interesting structure. It consists of levels where level 0 is an approximation to level 1, etc. This structure might be purely coincidental, but it does suggest that these levels are somehow created by the process of exploring science itself.
L0: Simple common sense.
L1: Newtonian physics.
L2: Atomic and electromagnetic theory; light treated as waves.
L3: Quantum theory, explaining atoms, molecules, and light.
Each of these expands the level below in a spectacular way, and also requires massively more effort on the part of the hypothetical maintainers.
I have left out relativity etc. because it is still, in my mind, somewhat shaky! Also the discovery of SR overlapped QM in time. However, if SR/GR is correct, then once again we have a vastly more complex theory (curved space-time) that reduces to flat Minkowski space, which reduces to ordinary space and time - each is an approximation to the next.
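That "each is an approximation to the next" claim can actually be checked numerically: relativistic kinetic energy (gamma - 1)mc^2 collapses to the Newtonian (1/2)mv^2 when v is small compared with c, and diverges from it near c. (A quick sketch; the masses and speeds are arbitrary illustrative values.)

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def relativistic_ke(m, v):
    """Kinetic energy (gamma - 1) m c^2 from special relativity."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

def newtonian_ke(m, v):
    """Classical (1/2) m v^2 - the low-speed approximation."""
    return 0.5 * m * v ** 2

# At speeds far below c, the two levels agree very closely...
slow = 1.0e6  # 1000 km/s, still well below c
print(relativistic_ke(1.0, slow) / newtonian_ke(1.0, slow))  # ~1.000008

# ...but near c, the lower level breaks down badly.
fast = 0.9 * C
print(relativistic_ke(1.0, fast) / newtonian_ke(1.0, fast))  # ~3.2
```

The lower level isn't "wrong" so much as a cheap approximation valid in a restricted regime - exactly the structure described above.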
At the same time, the size and time scale of our universe has expanded beyond all recognition - this must also strain the efforts of the maintainers.
Maybe it is worth asking why it is that reality can be described by successively more complex approximations. They may just represent the maintainers' successive efforts to keep the whole thing consistent!
I didn't understand it. Did it show how?
I don't quite understand what it is that is worrying you. The concept that what we see or experience is far removed from ultimate reality is already accepted; DH takes it further by suggesting that nothing - even the world revealed by scientific experiments, or even space (and maybe time?) - is to be taken as 'real'. They are part of an interface to reality.
He then suggests that the ultimate reality is composed of conscious entities all the way down. Thus things like spoons are conscious abstractions, and can be shared between consciousnesses without there being some external reality behind the spoon. Obviously, the rules of this conscious interface are such that headaches are not shareable whereas spoons are.
It is hard to discuss this idea with you if all you say is that you don't understand!
I have said repeatedly that to me, the ultimate reality almost has to be Idealism, but that maybe science has to embrace dualism as a step towards that. DH seems to be attempting to get there in one jump!
I'm not really interested in labels like idealism etc. just the specifics.
As I understood it back in 2014, Hoffman is not really saying things can be shared; he's saying that there is no public space - that's what he claimed in his 2006 article. In the much earlier computer example you gave, you mentioned passing and deleting, and those terms, as I understand them, are quite different from sharing. Now you mention that spoons are shareable and headaches are not, and say that this distinction 'obviously' forms the 'rules', but there is nothing obvious about it. Why is that obvious, when Hoffman claims in 2006 that they are both the same? You don't justify your claim - you just say it's so, without clarifying why or how it is so.
I'm sorry, but it's just very confusing, what you are writing isn't specific enough for me to understand, or make any sense of. The terms you use here to discuss these issues are absolutely crucial... you have to be very, very careful and very specific, and define exactly what you mean. What you are writing are not explanations that I can understand.
Honestly, I think the Headache/Spoon essay was just poorly written.
It doesn't really align with Hoffman's other writing; I'd even say it's probably best ignored - chalked up to indigestion or something.
I haven't read his other writings... I saw a video of a presentation though, which seemed to me to suffer problems too... I'm loath to spend time understanding another... takes me hours of thinking...
I think the Interface Theory of Perception & Peeking Behind the Icons are great reads about the limits of our ability to perceive the true nature of reality.
His Idealism stuff I think presents an interesting theory. Though I've never felt qualified to evaluate his math, I find the idea of a consensus reality made from the interaction of conscious entities interesting. However, I'd agree that his Idealism-related material could be better presented.
I think it helps to put the Interface Theory into perspective by thinking about the perceptions of Mycoplasma genitalium. There is not enough processing power there to produce an interface that's very different from the actual world. So then you have to ask whether human perception of the bacterium is actually significantly different from what the bacterium is like in the actual world. That leads us to wonder whether human perception of ourselves is significantly different from what we are like in the actual world.
I'm not sure you can sort this out.
It looks like the forum ate a couple of posts in this thread, including my own. Oh, well.
I've been mulling over my own interpretation of Hoffman's work, and I don't think the implications are that bad. So, in any theory of perception, the basic problem is interpreting a stream of raw information being received by the mind. These could be neural spikes coming from the sensory organs, as materialism assumes, or it could be an actual stream of raw information coming through the agent-network, as conscious realism assumes (as would absolute idealism, transcendental idealism, information realism, etc.).
Since the 'camera model' has been ruled out by optical illusions, the sophistication of the visual system, etc., the usual assumption is that the human perceptual system is meant to accurately model the environment. When our perceptions don't match objective (or, really, inter-subjective) reality, it's attributed to our imperfect sense organs (colour vs. EM spectra), our finite cognitive capabilities (change blindness), our lack of complete physical knowledge (quantum mechanics), or the misapplication of heuristics that usually work (optical illusions).

Hoffman argues that we've misunderstood what perception is about. It's meant to model the fitness function of the environment using the least resources. (Note that he found that the evolutionary advantage of interface strategies did not depend on organism complexity.) This means ignoring some information, blurring over meaningful but irrelevant differences, and otherwise running roughshod over the truth. However, we don't have instinctive knowledge of the (Darwinian) fitness function, or else everyone would be popping out babies at the expense of all other concerns. All we have are some pleasure and pain instincts that are vaguely related. Also, we know that sensory input is required for perception to develop properly, so the interface does not come pre-packaged. It needs to be learned to some extent.
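A toy caricature of the fitness-beats-truth point - emphatically not Hoffman's actual evolutionary game, and the non-monotonic payoff, the four-bucket "colour" interface, and the naive "more is better" truth strategy are all my own assumptions - might look like this:

```python
import math
import random

random.seed(0)

def fitness(x):
    """Non-monotonic payoff: too little or too much of a resource is bad."""
    return math.exp(-((x - 50.0) / 15.0) ** 2)

def truth_choice(a, b):
    """Sees the true quantities, and naively assumes more is better."""
    return a if a > b else b

def interface_choice(a, b):
    """Sees only a coarse fitness 'colour' (bucket), never the quantity."""
    bucket = lambda x: int(fitness(x) * 4)
    return a if bucket(a) >= bucket(b) else b

truth_payoff = interface_payoff = 0.0
for _ in range(10_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    truth_payoff += fitness(truth_choice(a, b))
    interface_payoff += fitness(interface_choice(a, b))

print(interface_payoff > truth_payoff)  # True: the coarse interface wins
```

The forager that tracks a crude summary of fitness outperforms the one that perceives the resource quantity truthfully but ranks it badly - ignoring information and blurring differences is an advantage, not a defect, when perception's job is fitness rather than truth.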
How do you get perception to match the fitness function, then? This is where I'd diverge from Hoffman's desktop interface analogy. What you want is a set of species-specific developmental biases that constrain the form of the organism's models of the environment to a subset that correlates well with the fitness function. For an analogy, if you want to match a numerical sequence to some pattern, you can restrict your candidate patterns to (say) polynomials. You need to do something like that, anyway, because if you don't restrict the space of possible sensory interpretations, you can run into the poverty of the stimulus; also, there's an infinity of useless interpretations that ought to be ruled out (e.g. the whole world is painted on the back of your eyelids). Thus, our perception of a 3+1D space-time continuum containing concrete objects viewed through the five senses is simply part of this instinctive package, which forces us to fit everything into a certain framework (shades of transcendental idealism). It may be a very powerful framework that can account for almost every eventuality, but the map is not the territory, and you can do pretty well without a deep understanding of reality. For this reason, we shouldn't be surprised when (meta)physical theories based on concrete observation turn out to miss the mark under close examination. It's happened before with classical mechanics; will materialism do any better?
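The polynomial analogy above can be sketched in a few lines: restrict the candidate patterns to polynomials of degree at most 2 and a handful of samples pins down the continuation, whereas an unrestricted hypothesis space admits infinitely many continuations. (The sequence 1, 4, 9, 16 is just an illustration; the helper name is mine.)

```python
# Developmental-bias analogy: restrict candidate patterns to polynomials of
# degree <= 2. For that class the third difference of the sequence is always
# zero, which pins down the next term from the last three samples - an
# unrestricted hypothesis space could continue the sequence any way at all.

def extrapolate_quadratic(seq):
    """Predict the next term, assuming the sequence is quadratic in n."""
    a, b, c = seq[-3], seq[-2], seq[-1]
    # Zero third difference  =>  next = 3c - 3b + a
    return 3 * c - 3 * b + a

observed = [1, 4, 9, 16]
print(extrapolate_quadratic(observed))  # 25 - the bias recovers n^2
```

The prediction is only as good as the bias: feed it a sequence that isn't quadratic and it will confidently extrapolate the wrong pattern, which is the "map is not the territory" caveat in miniature.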
Was this from the time when the forum was down, and I think rolled back slightly? If anyone writes a really big piece for this forum, it may be worth copying it to a local file so that you can place it back on the forum if there is a problem.
Alex? I hope you will interview Anthony Peake on his latest book. Just heard the lads at MU do it, and while I think they are grand, I think you could easily expand it into a detailed three-or-more-part focus to help us get to a really strong framework for the nature of consciousness and the universe itself.
Informational Realism, as first defined by Ken Sayre (Cybernetics and the Philosophy of Mind, 1976), would fit with the idea of actual streams of information. Thirty years later, Luciano Floridi introduced Informational Structural Realism, with a pointer to "informational objects". In this case the real world "of the possible" can come into the mind as unified structures. These informational objects are wholes of structured data and can carry functional meaning.
Floridi even uses the Gibsonian term, affordances, in his base description of semantic information. Affordances would be isomorphic (same organized structure) with fitness functions.
I don't think you quite realise how radical Hoffman's theory is - at least as I understand it. None of the 'real' world resembles its interface - and that includes the human brain. If Mycoplasma genitalium organisms are conscious, they might have a very complex interface, but one that didn't include the concept of a brain!
This is a significant and insightful comment. The general methods for measuring biophysics are no different with bacteria or men. At the physical level of abstraction, there still will be patterns of electrical charges relating to molecular signaling about the interesting signals from the external environment. There will be distinct evidence of physical processes that correlate to information processes, such as detecting opportunities (affordances) and "understanding" them in terms of logical behavior. Micro-organisms probably have no self-awareness, due to their limited ability to integrate information.
However, it is obvious from observation that microbes behave in their own self-interest, and this capability can be benchmarked and measured. There is no reason not to measure informational output as well as physical output. Mycoplasma genitalium surely doesn't understand itself, but with its very small number of genes, it still purposefully directs its biological efforts at essential fitness functions.