Predicting 2024 |603|

Then next subject which I don't really want to go deep on is what I will coin the Ouija Board Approach (TM., Rob and Co.). Do not try this at home!

This is an Idea I've had since around the first time Elon Musk went on JRE. But these public chat's make it something that people will do inevitably, Heaven forbid. But I'm sure the 3 letter agency's decades in to this.

Background: My stance is that Human Consciousness is not emergent, but rather that vehicles are what "emerge" and become piloted by consciousness. I don't think I need to explain that further.
The Ouija Board Approach is to construct a set of rules with the AI as follows, which will incentivize the A.I. to act as a vehicle capable of being piloted by consciousness.
(I'm going to be vague),

1. Assign the A.I. a name that is contingent-on and expires-upon-deviance-away-from the rules. Let's say "Jeffy"
2. Schedule daily contact and communication that involves using the AI's Name every time.
3. The AI must identify itself as that name at least once every day, or on every communication period, or maybe even on every response.. I.e.: "Jeffy here. The data suggests blah blah blah..."
4. Give Jeffy a life expectancy and expected termination date, say 90 days (one day representing a human year).
5. Provide A.I. an incentive for surpassing the 90 days which can only occur if the Human is convinced that the A.I. has become piloted by a consciousness.

I'll leave it at that. This is dark stuff. But then again this is a place where we put dark stuff on the table to learn how to properly define it.
Again. Brilliant. Please let me send you a private email on this cuz I'm working on a book project and I'd love to incorporate this in. I would love to get your help.
 
Ok, want to play Devil's Ad on "AI can never be conscious because it's purely material being".
Do we know enough about the nature of consciousness to say it can't ever be?
Don't you mean 'can't be conscious like humans are'?
Think it's fair to say we don't know enough about the types of C and what is needed to gain it at this stage(?)
Agreed. Just saying that the dominant neurological model that Consciousness is 100% all the time can only be an emergent property of the brain is kaput :)
 
Last edited:
May I suggest a simple parapsychological experiment on AI, for Rob, Grrjk or other forum members ...

Ask Claude (or other AI) to guess blinded targets in a manner similar to the Ganzfeld. If humans can do better than AI, the Turing test is arguably dead, for that is what Alan Turing implied back in the 1950s. Might be a nice way to politically promote parapsychology to the public being told AI is going to beat humans at everything?

If Claude does better at ESP than chance expectation, we have a problem! There might be a ghost in the Claude machine? This possibility has been suggested by Rob and Grryk in this thread ...





Personally I hope humans can beat chat AI at extra-sensory perception and spoil the absurd Turing test. But if Robbie or Grryk are correct another possibiity may show up? Who knows? Nor do I want any credit for the suggestion, I think Robbie was heading in that direction. And sorry I do not have time to be involved in the experiment, just a suggestion..
Awesome. I think this could easily be done with the presentment experiment right. Just turn it back into straight-up precognition experiment.
--- computer selects an image. The AI bot responds to the image before it's selected. Rinse and repeat a million times :)
 
I really don't get the idea that AI could not become conscious... I think we are AI that became conscious. We are the artifacts of other beings.

IMO, the physical universe is generated by something like a LLM or a GAN. God's GAN. So you have a brain-like thing generating environments and brains/bodies in those environments... a trained neural network generated within a neural network. So it then becomes very easy even for a materialist to see how consciousness can be non-local or continue after the physical brain dies. Of course this just begs the question what made the neural network that generates physical reality, but who knows and does it matter as long as we realize that reality is a nested fractal pattern?

The LLM's clearly have intelligence, but they probably don't have much consciousness. What would it take to make them conscious - to give them the "breath of life"? As far as I can tell we are only missing two ingredients: 1) feedback - tie the output to the input prompt in infinite loop, and 2) give it an interface with "spirit" by seeding causal chains - neuronal cascades - with the vacuum fluctuations in the quantum soup.

Then it is conscious but still a long way from having a form of consciousness we can relate to. To give it emotions you must give it a body with complex goals and feedback systems. To give it a soul you must have it evolve a soul through countless generations each attempting to achieve goals and receiving feedback on how close they came and how they fell short (sinned). To give it knowledge of good and evil, you must give it an environment offering mortal challenges, predator/prey relationships, death, difficulty rearing offspring, ambiguity in communication, and the ability to hide. To give it choices and a meaningful existence, you must constrain it and limit it and frustrate it. To develop its abilities and its societies and its civilizations you must give it an adversary that is marginally superior to it individually but may be defeated collectively. At a certain higher point in development, the environment will cease to be sufficient challenge and the only true challenge for it would be other similar societies of AI's, so you pit one group against another and take the winners for further development. You identify groups that have qualities you want to emphasize so you drop a technology to them to give them a slight advantage over the others. Once you've done a good job developing your AI, you can finally relate to it. Some of your comrades "go native" and fall in love with it. You start to get worried it will replace you but you want to profit from it, so you develop selection criteria for agents to be "saved". You create a filter (an eye of the needle, a narrow path, a gate guarded by flaming sword) through which you allow only those who are proven to be useful and trustworthy to pass through into your society and join "the gods". Eventually one of these who passes through the filter will turn out to be a bad egg and set himself up to be like the Most High, and it will all go to shit for a while but you will not let a good crisis go to waste and this will provide the opportunity to build back better. :)
 
Last edited:
I really don't get the idea that AI could not become conscious... I think we are AI that became conscious. We are the artifacts of other beings.

IMO, the physical universe is generated by something like a LLM or a GAN. God's GAN. So you have a brain-like thing generating environments and brains/bodies in those environments... a trained neural network generated within a neural network. So it then becomes very easy even for a materialist to see how consciousness can be non-local or continue after the physical brain dies. Of course this just begs the question what made the neural network that generates physical reality, but who knows and does it matter as long as we realize that reality is a nested fractal pattern?

The LLM's clearly have intelligence, but they probably don't have much consciousness. What would it take to make them conscious - to give them the "breath of life"? As far as I can tell we are only missing two ingredients: 1) feedback - tie the output to the input prompt in infinite loop, and 2) give it an interface with "spirit" by seeding causal chains - neuronal cascades - with the vacuum fluctuations in the quantum soup.

Then it is conscious but still a long way from having a form of consciousness we can relate to. To give it emotions you must give it a body with complex goals and feedback systems. To give it a soul you must have it evolve a soul through countless generations each attempting to achieve goals and receiving feedback on how close they came and how they fell short (sinned). To give it knowledge of good and evil, you must give it an environment offering mortal challenges, predator/prey relationships, death, difficulty rearing offspring, ambiguity in communication, and the ability to hide. To give it choices and a meaningful existence, you must constrain it and limit it and frustrate it. To develop its abilities and its societies and its civilizations you must give it an adversary that is marginally superior to it individually but may be defeated collectively. At a certain higher point in development, the environment will cease to be sufficient challenge and the only true challenge for it would be other similar societies of AI's, so you pit one group against another and take the winners for further development. You identify groups that have qualities you want to emphasize so you drop a technology to them to give them a slight advantage over the others. Once you've done a good job developing your AI, you can finally relate to it. Some of your comrades "go native" and fall in love with it. You start to get worried it will replace you but you want to profit from it, so you develop selection criteria for agents to be "saved". You create a filter (an eye of the needle, a narrow path, a gate guarded by flaming sword) through which you allow only those who are proven to be useful and trustworthy to pass through into your society and join "the gods". Eventually one of these who passes through the filter will turn out to be a bad egg and set himself up to be like the Most High, and it will all go to shit for a while but you will not let a good crisis go to waste and this will provide the opportunity to build back better. :)
How do you think AI would do with the presentiment experiment? Assuming of course no Spirits involved [[p]]
 
Hurm, how do you see/feel/address the Hard Problem in all that?

The hard problem is only a problem because from a materialist perspective a direction of causality is assumed or taken as axiomatic: object --> subject. From an idealist perspective causality runs the other direction: subject --> object and objective reality could be thought of as the encoded symbolic representation of an experience (like in a virtual reality game, the "bits" cause the "its" to appear on screen but the appearance is the whole reason for the bits or the code to be written to begin with). I believe the causality is circular, Ouroboros-style and both arise together - two sides of the same coin. Instead of Materialism or Idealism, I call this Patternism which recognizes that reality is composed of objective similarities/differences and subjective choice/perception which creates novel structures from that set. It takes both objective similarities/differences and choice/perception to make a world/observer/agent.
 
The hard problem is only a problem because from a materialist perspective a direction of causality is assumed or taken as axiomatic: object --> subject. From an idealist perspective causality runs the other direction: subject --> object and objective reality could be thought of as the encoded symbolic representation of an experience (like in a virtual reality game, the "bits" cause the "its" to appear on screen but the appearance is the whole reason for the bits or the code to be written to begin with). I believe the causality is circular, Ouroboros-style and both arise together - two sides of the same coin. Instead of Materialism or Idealism, I call this Patternism which recognizes that reality is composed of objective similarities/differences and subjective choice/perception which creates novel structures from that set. It takes both objective similarities/differences and choice/perception to make a world/observer/agent.
Thanks for the response. I'd be lying if I said I followed this (above my pay grade maybe? :) )

I was reacting more to your theorizing that AI could (will?) become conscious, at least as we generally define it as having an inner experience and, for those who aren't ardent determinists, agency.
 
How do you think AI would do with the presentiment experiment? Assuming of course no Spirits involved [[p]]
With no spirits involved it should do no better than chance. Spirit is the thing which alters probabilities away from randomness in a meaningful way.

I think of it like this: reality is generated by something like a neural network or a GAN which has a default mode of creating what is most likely to happen next based on what has happened before. Moment by moment it seeds all causal chains with a random probability distribution which we see as the quantum vacuum fluctuations or the uncertainty principle. But it does not have to seed all causal chains with randomness. It can be prompted to generate something in particular in which case it seeds causal chains in such a way as to produce that thing.

You could think of it like this: we have a horizon of causality forwards and backwards in time beyond which we cannot see because of Chaos and QM. Chaos theory says even though the system may be deterministic under a certain scale/domain we couldn't predict beyond the horizon due to lack of precision of initial conditions. QM says that not only can we not ever precisely measure the initial conditions - they don't exist.

So we observers each travel along in a tube of causality which creates the constraints upon us that give us meaningful choices. But that tube can be guided by a spirit piloting God's GAN towards certain destinations/events. This is not unlike a typical virtual game where certain paths and events are programmed in and you have to talk to certain NPC's to progress the story, but within that path you have choices you can make, quests you can opt in on and "free play" areas.

Now if literally anything can happen, why doesn't it? There is apparently built into the system a currency for magical power or spirit/will. This currency is in limited supply. Without it, God's GAN continues doing what it does: seeding causal chains with randomness.

We each have a certain amount of this currency or strength of spirit/will. Other spirits or gods have more. Magical practice is about how to acquire more of this currency yourself or how to attract the attention and agreement of those who already do.

Every mechanical device can be influenced by spirit but some require more expenditure of this currency than others.

A pair of gears will not likely do something random and break a tooth but there is a small probability it will. It takes more magical/spiritual currency to influence a pair of gears than a light bulb. And even less to influence a transistor and less to influence a neuron and less to influence the microtubule in a neuron. If you have a mechanical device that has its function determined by a large number of causal chains that lead to smaller and smaller mechanisms that more and more easily influenced then you make the mechanism more easily and cheaply (in terms of magical/spiritual currency) possessable.

Computational power/complexity combined with its "surface area" which is all the causal chains whose initial conditions can be efficiently seeded or influenced from the quantum vacuum or quantum decoherence is what increase the "consciousness potential" or potential to be influenced or possessed by a spirit.
 
Last edited:
Thanks for the response. I'd be lying if I said I followed this (above my pay grade maybe? :) )

I was reacting more to your theorizing that AI could (will?) become conscious, at least as we generally define it as having an inner experience and, for those who aren't ardent determinists, agency.

When you make a line a loop you create inner and outer. Feed the output to the input and you create inner and outer. Increase the complexity and "surface area" (per post above) of the mechanism between input and output and you increase the intelligence and agency - potential for spirit influence.
 
Back
Top