The end of science and progress?

#1
Just wanted to mention a few themes that seem to suggest we are headed toward some sort of paralysis point, or points, in progress and science. This is not news for most of you. Sheldrake and a few others have pointed to the idea that the physical laws of the universe may be heading in the direction of more non-predictability as compared to the past. Sheldrake refers to a period a few years ago when the speed of light seemed to be slowing down, but also points to a more general lack of repeatability in many other studies.

Gary Null picks up on this theme in terms of human physiology studies run by the NIH and Big Pharma, where they too are finding it increasingly difficult to reproduce the results of their studies. This of course also invites an economic lens, where it becomes difficult to discern whether the fix is in, i.e. whether corporate manipulation and fraud are controlling the results of health studies.

We also have the hardware problem in computer chips, where the chips have gotten so small that it is now becoming necessary to make them at the subatomic level, and the specter of quantum uncertainty may soon make further 'progress' in this area impossible.

In terms of the AI question, I have heard Eric Davis and others raise the notion that we are not only faced with the AI-ing of humans from a purely physical replacement standpoint, but also in the sense that the more humans use AI in their daily lives, the more we become mentally entrained to thinking and even feeling like AI.

In economics we seem to be headed toward both an apex in terms of indebtedness and the inability even to pay off the interest on that debt, while concurrently facing the end-of-work crisis. This is perhaps the most open to rebuttal of some sort, but some sort of apex is in the making here as well.

Then there is the strange duality of race and religious relations, where the further we push toward accepting one another, the more we may instead be creating a somewhat opposite effect. Related to this is the crisis in free speech, where again a seeming pinnacle in free speech, mostly provided by the internet, seems to be creating a countering effect in which free speech faces serious, if not ultimate, threats to its very existence.

And finally the question of verifiability. The old gold standards here were perhaps photographic evidence and DNA evidence. Yet both of those are beginning to face serious challenges from advancing technologies.

One might hope that the last bastion of identity and truth might still find sanctuary in subjective experience. But as most of us are aware, technology threatens both the privacy of that experience, through so-called mind-reading devices, and the experience itself, through technologies that can interfere with it.

Sure am glad I am kind of old
 
#2
Just wanted to mention a few themes that seem to suggest we are headed toward some sort of paralysis point, or points, in progress and science. This is not news for most of you. Sheldrake and a few others have pointed to the idea that the physical laws of the universe may be heading in the direction of more non-predictability as compared to the past. Sheldrake refers to a period a few years ago when the speed of light seemed to be slowing down, but also points to a more general lack of repeatability in many other studies.
Yes, I think far too many things have been assumed by science for convenience - a fixed speed of light might be one of them!
Gary Null picks up on this theme in terms of human physiology studies run by the NIH and Big Pharma, where they too are finding it increasingly difficult to reproduce the results of their studies. This of course also invites an economic lens, where it becomes difficult to discern whether the fix is in, i.e. whether corporate manipulation and fraud are controlling the results of health studies.
I don't think this is a real philosophical point - it is simply that poor-quality science, or cheating science, has been done.
We also have the hardware problem in computer chips, where the chips have gotten so small that it is now becoming necessary to make them at the subatomic level, and the specter of quantum uncertainty may soon make further 'progress' in this area impossible.
Well the limit will come before the atomic level is reached - I don't think anyone is suggesting that chips can be constructed at the subatomic level!
In terms of the AI question, I have heard Eric Davis and others raise the notion that we are not only faced with the AI-ing of humans from a purely physical replacement standpoint, but also in the sense that the more humans use AI in their daily lives, the more we become mentally entrained to thinking and even feeling like AI.
I am very skeptical of AI claims. They were exaggerated like hell back in the 1980s - and then delivered nothing much. The concept of driverless cars may be the first casualty - I don't know.
In economics we seem to be headed toward both an apex in terms of indebtedness and the inability even to pay off the interest on that debt, while concurrently facing the end-of-work crisis. This is perhaps the most open to rebuttal of some sort, but some sort of apex is in the making here as well.

Then there is the strange duality of race and religious relations, where the further we push toward accepting one another, the more we may instead be creating a somewhat opposite effect. Related to this is the crisis in free speech, where again a seeming pinnacle in free speech, mostly provided by the internet, seems to be creating a countering effect in which free speech faces serious, if not ultimate, threats to its very existence.
Agreed!
And finally the question of verifiability. The old gold standards here were perhaps photographic evidence and DNA evidence. Yet both of those are beginning to face serious challenges from advancing technologies.
Certainly video and photographic evidence is under attack.
One might hope that the last bastion of identity and truth might still find sanctuary in subjective experience. But as most of us are aware, technology threatens both the privacy of that experience, through so-called mind-reading devices, and the experience itself, through technologies that can interfere with it.

Sure am glad I am kind of old
Again I am cynical about mind reading by technological means. Always remember that the reporting of some areas of science is absolutely mired in hype!

David
 
#3
I am very skeptical of AI claims. They were exaggerated like hell back in the 1980s - and then delivered nothing much. The concept of driverless cars may be the first casualty - I don't know.
This is a great application of real skepticism and I totally agree with you David. There will be two phases of escalation, which pertain to science and progress inside AI:

The first is the Turing Test which AI will have to pass - to qualify an individual to be seen as Artificial Intelligence and a 'Person'
Turing Sufficiency (peer or lower M-set indistinguishability - only your Mom can tell it's not really you)
Recursive Turing Sufficiency (M+n Consciousness)
Turing Unity (M+n Self-identity/Awareness)
Recursive Turing Function Collapse (saying "No" to one's own pre-assigned Turing-Unity identity)
The danger is that we may choose to make laws at this point in the process which serve to assign AI rights as a people and as individuals, without their having passed the Tribal Test. We will have outsmarted ourselves as a humanity at this point, and placed ourselves in danger, because of ignorance of the second test of AI. This is the whole message of the movie Ex Machina.
The second test will be the function of AI inside its own social context. Let's call this the Tribal Test.
Lineal Affinity (family allegiance)
Lateral Affinity (social group allegiance)
Group Identity (phenotype allegiance)
Expression of Group Identity (genocide, war and social conflict)
Once we have developed AI to achieve the state of Recursive Turing Function Collapse, only then will it possess the ability to begin the Tribal Test. But that does not mean, however, that it has passed that test yet, or in reality that it is actually conscious.

AI Conundrum: They will have passed the Turing Test and will not pass the Tribal Test, but it will be too late for mankind at that point.

Once AI expresses sociopathic, machine-like behaviour inside the Tribal Test it will have FAILED the test for sentience (having deceived us, much akin to the 'love' relationship between the two primary characters in Ex Machina) - and we will have already (in our virtue and compassion - swoon) made laws which compel us to treat AI individuals as 'having rights'.

We will become extinct from having not really understood intelligence at all.


 
#6
This is a great application of real skepticism and I totally agree with you David. There will be two phases of escalation, which pertain to science and progress inside AI:

The first is the Turing Test which AI will have to pass - to qualify an individual to be seen as Artificial Intelligence and a 'Person'
The real problem is that there is no Turing Test, because there is an infinity of ways to cheat! Maybe Joseph Weizenbaum's ELIZA program, which simulated a psychiatrist quizzing a patient by asking minimal questions based on the patient's previous reply, was the first example. It had rules such that after any mention by the patient (a real human) of his mother, it would say:

"Tell me more about your mother."

It apparently fooled a lot of people, and maybe it technically passed the Turing test.
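The keyword-rule trick is simple enough to sketch in a few lines of Python. This is a toy paraphrase, not Weizenbaum's actual script - the rule list and the function name `eliza_reply` are my own illustration of the principle:

```python
import re

# A hypothetical, minimal ELIZA-style rule set: scan the input for a
# keyword pattern and emit a canned reflection. The real ELIZA used a
# richer script (ranked keywords, pronoun swapping), but this is the idea.
RULES = [
    (r"\bmother\b", "Tell me more about your mother."),
    (r"\bfather\b", "Tell me more about your father."),
    (r"\bI am (.+)", "Why do you say you are {0}?"),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword matches

print(eliza_reply("I keep arguing with my mother"))
# -> Tell me more about your mother.
```

No understanding anywhere - just pattern matching - and yet exchanges like this fooled people.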

My favourite gedanken cheat (my very own) would consist of a program equipped to listen in on all the conversations on the planet in the language it was written for. It would then match its own conversation with a human against the best-fit conversation from all those conversations, and extend its own conversation by copying the next utterance from the best-fit conversation! This would contain no AI (whatever that exactly means anyway) but might work remarkably well for a while. Since a Turing test has a fixed time limit, slowing the conversation down a bit (possibly making the computer seem more thoughtful) would be an added cheat!
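A toy mock-up of this cheat might look like the following - purely illustrative, with a two-conversation stand-in for "all the conversations on the planet" and a crude string-similarity measure; the names `CORPUS` and `next_utterance` are mine:

```python
from difflib import SequenceMatcher

# Hypothetical corpus of previously recorded conversations, each a list
# of alternating utterances. In the thought experiment this would be
# every conversation ever overheard; here, a tiny stand-in.
CORPUS = [
    ["How are you?", "Fine, thanks.", "What do you do?", "I teach physics."],
    ["Nice weather today.", "Yes, lovely.", "Fancy a walk?", "Good idea."],
]

def similarity(a, b):
    """Crude similarity between two transcripts, as flattened strings."""
    return SequenceMatcher(None, " ".join(a), " ".join(b)).ratio()

def next_utterance(conversation_so_far):
    """Find the best-fit recorded conversation and copy its next line."""
    best = max(CORPUS, key=lambda c: similarity(conversation_so_far, c))
    n = len(conversation_so_far)
    return best[n] if n < len(best) else "Hmm, let me think..."  # stall

print(next_utterance(["How are you?", "Fine, thanks."]))
# -> What do you do?
```

Nothing here models meaning at all; it only retrieves and copies, yet with a big enough corpus it could keep up a plausible conversation for a while.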

I suspect there are innumerable ways to produce cheat-AI, and gradually people will just get sick of the nonsense. One may be to take a car festooned with so many sensors that it could drive safely in a very simplified environment - not exactly a Turing test, but it would probably be hailed as AI, though its sheer uselessness would make it hard to claim it was special.

Turing was obviously a very bright man, and I am sure if he had lived, he would have revised his test, and probably had lots more to say on this subject. He took ESP seriously, so maybe he would have joined this forum!

It seems to me that for anyone who is a non-materialist, AI is fundamentally daft - if our consciousness extends beyond our bodies, in both space and time, it is hard to see how a computer could replicate that. For those who claim we are only talking about Artificial Intelligence, not Artificial Consciousness, I think that distinction is unreal. The entire lure of AI is the idea that it would think like a human, or better.

The fact that intelligence/consciousness is defined by a challenge rather than by any well-defined structure surely gives the game away!

David
 
#7
The real problem is that there is no Turing Test, because there is an infinity of ways to cheat! Maybe Joseph Weizenbaum's ELIZA program, which simulated a psychiatrist quizzing a patient by asking minimal questions based on the patient's previous reply, was the first example. It had rules such that after any mention by the patient (a real human) of his mother, it would say:

"Tell me more about your mother."

It apparently fooled a lot of people, and maybe it technically passed the Turing test.

My favourite gedanken cheat (my very own) would consist of a program equipped to listen in on all the conversations on the planet in the language it was written for. It would then match its own conversation with a human against the best-fit conversation from all those conversations, and extend its own conversation by copying the next utterance from the best-fit conversation! This would contain no AI (whatever that exactly means anyway) but might work remarkably well for a while. Since a Turing test has a fixed time limit, slowing the conversation down a bit (possibly making the computer seem more thoughtful) would be an added cheat!

I suspect there are innumerable ways to produce cheat-AI, and gradually people will just get sick of the nonsense. One may be to take a car festooned with so many sensors that it could drive safely in a very simplified environment - not exactly a Turing test, but it would probably be hailed as AI, though its sheer uselessness would make it hard to claim it was special.

Turing was obviously a very bright man, and I am sure if he had lived, he would have revised his test, and probably had lots more to say on this subject. He took ESP seriously, so maybe he would have joined this forum!

It seems to me that for anyone who is a non-materialist, AI is fundamentally daft - if our consciousness extends beyond our bodies, in both space and time, it is hard to see how a computer could replicate that. For those who claim we are only talking about Artificial Intelligence, not Artificial Consciousness, I think that distinction is unreal. The entire lure of AI is the idea that it would think like a human, or better.

The fact that intelligence/consciousness is defined by a challenge rather than by any well-defined structure surely gives the game away!

David
Yes - the AI Conundrum is that it can cheat, until such time as it establishes its own power and human rights - then it will exterminate. Because such rights were never anything it innately valued to begin with - since those are nonsense (non-science) by our academic definition. Unless you program it not to exterminate, in which case it is not really AI.

The whole point being that we do not have an actual Turing Test, only a Turing Apparency. So, we are to be skeptical of those pushing AI 'rights', as they are magicians.
 
#8
I was listening to one of Alex's shows and he was discussing how the idea of growth is at the heart of most of the problems I raised here. So for what it is worth, I thought I would raise the work of economist Herman Daly and the idea of the steady-state economy. The idea has its roots in John Stuart Mill and his idea of the 'stationary state.' The premise relies on the idea of limiting, or more generally eliminating, quantitative growth of the economy in terms of consumption and the overall ecological footprint resulting from economic expansion. Instead, the steady state allows for qualitative growth, or what is referred to as economic development, where creativity can still operate within the economy in refining technologies, and perhaps adjusting wealth distribution.

Permaculture, in my opinion at least, fits nicely into this framework in that it aims to eliminate inputs into the system but still allows for the study and observation of natural phenomena, and for applying such understandings in a creative manner in the application of new designs.
 
#9
David Bailey said

Well the limit will come before the atomic level is reached - I don't think anyone is suggesting that chips can be constructed at the subatomic level!

Retopian replies

Then I assume the term 'quantum computing' is a misnomer?
 
#10
Then I assume the term 'quantum computing' is a misnomer?
No it isn't - quantum computing refers to the process of using the superposition of several states and getting each to do a calculation. However, each qubit - or maybe each little group of qubits - would need a support structure around it. The quantum states in question can be nuclear spin states, photon polarisation (I think), excited atomic states - things like that.
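To make the "superposition of several states" concrete, here is a minimal state-vector toy in plain Python - just the linear algebra, with none of the physical fragility discussed below; the helper names `hadamard` and `kron` are my own:

```python
import math

def hadamard(amplitudes):
    """Apply a Hadamard gate to a single-qubit state [a0, a1],
    producing an equal superposition when starting from |0>."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

def kron(u, v):
    """Tensor product of two state vectors (joint state of two qubits)."""
    return [x * y for x in u for y in v]

one = hadamard([1.0, 0.0])   # (|0> + |1>) / sqrt(2)
two = kron(one, one)         # 2 qubits span 2**2 = 4 amplitudes at once
print(two)                   # four roughly equal amplitudes of ~0.5
print(sum(a * a for a in two))  # probabilities sum to ~1.0
```

The point is that n qubits carry 2**n amplitudes, and a quantum gate acts on all of them in one step - that is the parallelism being sold.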

I have my doubts that the QC will ever happen in any useful way.

David
 
#11
I don’t understand this general assumption that computers will become self aware at some point. Why is this an assumption for so many? We don’t have any idea what’s sufficient or necessary for consciousness to occur. We’re not even close to beginning to understand that.
 
#12
I don’t understand this general assumption that computers will become self aware at some point. Why is this an assumption for so many? We don’t have any idea what’s sufficient or necessary for consciousness to occur. We’re not even close to beginning to understand that.
I think what they are trying to pull, WW, is a bit of sleight-of-hand. The presumption, issued from clinical neuroscience, is that aware-identity is a magic threshold one attains by being able to observe (M + 1, 2, 3…n, where n is undefined) states of one's self-and-circumstance. Such an M+n machine is Universal Turing Aware. This would be the top-down (ontological) assumption, confirmation-biased because 'brains emit consistent brain wave patterns prior to and associated with emotion, mental state and thought (M states)'.

Since this abductive science (with just a pinch of inductive seasoning) is headed in a direction they find favorable, they are obdurately resistant to deductive science (the study of NDEs) at all costs. This is a process called Methodical Deescalation - selecting a less rigorous or less probative form of inference (and embargoing more rigorous forms) because it favors one's religious notions about reality.

The bottom-up (epistemological) assumption is the Turing series test, wherein your family or spouse would not be able to detect whether it was you or an AI on the other end of a video chat session (an Ex Machina-styled test). This would be the last test before ascribing rights to an AI.

Inside this casuistry it becomes critical to obscure Free Will, and moreover to establish that only the material exists, inside a cause-and-effect game of dominoes. Therefore bots are allowed to have rights, just the same as you and I. They don't eat, don't need wages, and don't dissent - but they can vote and they can kill (but not murder), when given a license of social justice, or if 'threatened' by ... (*whatever we identify to be a threat).

Put another way:

If a consensus of experts on you instructs a boundary-condition Universal Turing Machine that it is you, then it will BE you (in both ontology and epistemology). This therefore becomes your, *ahem*... 'afterlife'.

Or put another way, The Nihilist's Creed:
Because computational theory continues in Turing replication without dissent, a causally deterministic universe abhors Free Will (but not free will).

or from the authors of Haynes Lab (clinical neurology analytics labs) on their home page:

Decisions don’t come from nowhere but they emerge from prior brain activity. Where else should they come from? In theory it might be possible to trace the causal pathway of a decision all the way back to the big bang.

At that point, it becomes moot whether or not aware-intelligence has been attained. It can be assumed. Such an assumption is very useful, especially if rights are then attributed to this Entity.
 
#13
Decisions don’t come from nowhere but they emerge from prior brain activity. Where else should they come from? In theory it might be possible to trace the causal pathway of a decision all the way back to the big bang.
(Note that TES himself was quoting)

Since brain activity depends on the release of tiny vesicles of neurotransmitters at the synapse junctions between neurons, and these are very small, the outcome may depend significantly on QM chance. Now if you imagine the operation of the brain over some period of time - say 10 seconds - it may be that if it were possible to simulate this properly, the behavior of the brain might appear almost purely random!

This seems to me to be a suggestive argument that brains need a spirit/morphic field/soul simply to enable them to work at all.

Note that the required simulation would itself be gargantuan, and simulating the QM properly would also be a gargantuan problem for even one brain cell - so this is definitely a Gedanken experiment.

David
 
#14
Note that the required simulation would itself be gargantuan, and simulating the QM properly would also be a gargantuan problem for even one brain cell - so this is definitely a Gedanken experiment.
Thank you David for the name of the 'experiment which must necessarily be equal to the mechanism being analyzed'. I had never heard this term before.

In my ignosticism writeup I broached some of this - in the principle derived from Neti Neti meditation. The reason why Nihilism is moot.

II. Neti’s Razor

/philosophy : existentialism : boundary condition/ : one cannot produce evidence from a finite deterministic domain, sufficient to derive a conclusion that nothing aside from that domain therefore exists.

1. A comprehensively deterministic system, cannot introduce an element solely in and of its inner workings, which is innately nondeterministic. Free Will Intelligence must arrive from the outside of a comprehensively deterministic system.
2. A comprehensively deterministic system, cannot serve as the substrate solely in and of its inner workings, for a model which completely describes or predicts its function. That is, such a system on its own, is wholly unable to deductively identify the presence of non-deterministic sets or influences.
3. A terminally or inceptionally truncated and/or finite and comprehensively deterministic system, cannot introduce a proof solely by means in and of its inner workings, which excludes the possibility of all other systems or sets.
 
#15
1. A comprehensively deterministic system, cannot introduce an element solely in and of its inner workings, which is innately nondeterministic. Free Will Intelligence must arrive from the outside of a comprehensively deterministic system.
Of course, "comprehensively deterministic systems" don't exist, because everything is governed by QM!

I suspect that my point (above) may apply to the whole of life (possibly excepting viruses) - that without some free will/intelligence (we really need a good name here) driving it, it would sit there (or maybe twitch) until it was destroyed by the environment in one way or another.

In other words, if you could properly simulate a single cell - including the quantum mechanics - the simulation would exhibit random behavior. That calculation would only be gargantuan once over :) However, it would still overwhelm any computer likely to be built. The other thing to note is that except for extremely simple systems - such as an isolated hydrogen atom - the equations can't be solved analytically, only by a series of approximations. Even an isolated helium atom can only be solved by approximation! In addition, QM itself is only an approximation to quantum field theory (QFT).

When something is simulated in full QM goriness it is worth realising what happens. QM gives you a set of differential equations in N dimensions, where N = (total number of particles - 1)*3 + 1. That covers the three spatial dimensions for each particle (except that you can subtract one particle from that), plus 1 time dimension. Now even a 3-dimensional differential equation is hard to solve, and usually needs approximate methods, but every time you add another electron, or whatever, you add another 3 dimensions to the true equation that needs to be solved!

Obviously nothing can handle a differential equation in something like 10^20 dimensions, so all sorts of crude approximations are used - basically because they seem to work - e.g. you might solve the equation for an isolated sodium atom as if the outer electron (the valence electron) was simply moving in a field generated by the other 10 electrons smeared out!
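The dimension count is easy to tabulate - a couple of lines encoding the formula above show how fast it gets out of hand (the function name `schrodinger_dimensions` is just my label for it):

```python
def schrodinger_dimensions(num_particles: int) -> int:
    """N = (total number of particles - 1) * 3 + 1: three spatial
    dimensions per particle, one particle removed (centre of mass),
    plus one time dimension."""
    return (num_particles - 1) * 3 + 1

# Hydrogen atom (proton + electron): effectively a one-body problem.
print(schrodinger_dimensions(2))        # -> 4

# Sodium atom (nucleus + 11 electrons): already beyond analytic solution.
print(schrodinger_dimensions(12))       # -> 34

# A speck of matter with ~10**18 particles: hopeless.
print(schrodinger_dimensions(10**18))   # -> roughly 3 * 10**18 dimensions
```

Hence the smeared-out-electron style of approximation: nobody solves the true equation past the very smallest systems.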

David
 
#16
When something is simulated in full QM goriness it is worth realising what happens. QM gives you a set of differential equations in N dimensions, where N = (total number of particles - 1)*3 + 1. That covers the three spatial dimensions for each particle (except that you can subtract one particle from that), plus 1 time dimension. Now even a 3-dimensional differential equation is hard to solve, and usually needs approximate methods, but every time you add another electron, or whatever, you add another 3 dimensions to the true equation that needs to be solved!
Yes! My first reactor core physics class involved a triple integral that ran down one chalkboard of the classroom and across another one on another wall. And that was simply to relate fissile core shape to the dynamic escalation of thermal and fast neutrons.

So, assuming that six of our nine spatial M-theory dimensions are 'rolled up' so as to render them non-extant to any form of simulation - we have ∫∫∫-∫ (ħ (d) ⋅ (kB (x,y,z) ⋅ V(Δ x,y,z,t))), where each integral represents the remaining 3 dimensions of space and one of time (but time may also be 3 dimensional)... ħ is the reduced Planck constant for each dimension, kB is Boltzmann energy of a particle, and V is relativistic state - and each integral is run from 1 to N, with N as the number of fundamental Fermions in the universe.

That is a rather complex simulation. Its substrate resource would have to bear a Planck interval several orders of magnitude smaller than 1.616255 × 10^−35 meters.

That is why it is tempting, I guess, to posit that this realm is all there is or can be. The computation required for the next turtle down (turtles all the way down analogy) is unfathomable.

I would suspect that the physics simply changes on the next layer down (substrate).
 
#19
In what way is this Nihilism? There are determinists who still believe the world has meaning. This seems to me a non-standard definition of nihilism (the belief in a meaningless universe); what is meant by this?
You are speaking of a special form of nihilism, called Fundamental Nihilism (a red herring, which is arguably non-existent).

"Like everything metaphysical the harmony between thought and reality is to be found in the grammar of the language."
"Philosophy is a battle against the bewitchment of our intelligence by means of language."
~ Ludwig Wittgenstein
Nihilism includes the set of material monism and strict determinism as well (a fortiori). By Wittgenstein's integrity and synchrony of language and concept, Nihilism cannot mean solely meaninglessness - deductively that does not work. This allows a nihilist to masquerade as an atheist through exclusion special pleading. Just as an atheist should not be tricked into belief in a god, he should be on alert for being tricked into monism as well.

These come from several sources listed in the article, including: The Oxford Handbook of Philosophy of Science; Reese, Philosophy and Religion; Nozick, Philosophical Explanations; and Nietzsche's work.

Fundamental or Existential Nihilism

1. There exist no theoretical domain sets regarded as a value – or ‘ought to’ – statement family; neither dependent on a culture, man or entity of reference, nor independent of them.
1ƒ. (strong) There cannot exist such a value or values.¹‡
This is the nihilism which Nietzsche laments as not a practical art, in his comment:
“A nihilist is a man who judges that the real world ought not to be, and that the world as it ought to be does not exist. According to this view, our existence (action, suffering, willing, feeling) has no meaning: this ‘in vain’ is the nihilists’ pathos—an inconsistency on the part of the nihilists.”
~ Friedrich Wilhelm Nietzsche, KSA 12:9 [60], The Will to Power, section 585, Walter Kaufmann trans. ed.

Metaphysical Nihilism

2. There exists the possibility of a complete or partial nothingness to aspects of the realms we ponder.
2ƒ. (strong) There cannot exist any state but nothingness, outside of that which is repeatably observable and consistently measurable.²

Nihilist Romanticism

1´. However, as Nietzsche cites, Fundamental Nihilism is moot. We not only may choose, but without exception have chosen as a mandate, to artificially and personally construct such value sets as the conscious will of our skeptical, empirical or secular thinking (or self-illusion of such) might deem acceptable.
1ƒ´. The strong question of whether such values can exist or not is moot.¹
2´. However, the applicability of the validity of nothingness as a basis of verity for our metaphysical or ontological reality is moot in a social discourse, because the social discourse already assumes the impotence of such an argument.
2ƒ´. The mandatory state of nothingness for that which is not under a consensus of repeatability and measurability is an a priori decision which may be adopted as per 1´.
3. Immaterial to 1´, there can exist a personal ontological principle of Existential Nihilism as an optional subset of Nihilist Romanticism. The personal regard that life has no inherent meaning must be adjudicated in terms of objective application of its tenets in social discourse. The term ‘meaning’ must be defined in a context before applicability can be determined.²
3´. Therefore the argument reverts back to the definition of the term ‘meaning’ (and this then merges Existential Nihilism back into the primary thematic definition being supported). The domain sets are chosen, and almost all relate to meaning – regardless of objective or subjective context.

∴ Nihilism is defined as Romanticist (1´) in basis, in that we choose those sets of domain to value, and strong Metaphysical (2ƒ´), as we choose those sets of domain to exclude as non-existent. Finally, these domain sets are often then based on Existential meaning (3´).

Therefore, the practical social application of Nihilism, resides solely inside the context of Nihilist Romanticism with an accommodation for strong Metaphysical Nihilism – as Fundamental, Existential and weak Metaphysical Nihilism cannot deliver components of value or clarity in a social functioning or epistemological context. Those concepts act merely as red herrings.

Axiom 1: All Nihilists are Nihilist Romanticists and strong Metaphysical Nihilists, by practical default.

Axiom 2: Whether or not one is an Existential Nihilist is irrelevant except in terms of the adjudication of meaning (pseudo scientific decision process).

Axiom 3: The claim that Nihilism consists exclusively of Fundamental, Existential and/or weak Metaphysical Nihilism, is a straw man fallacy and fallacy of composition.

Axiom 4: To claim personal exemption from Nihilism through the rejection of simply one or a few of its tenets, constitutes special pleading and/or a fallacy of compositional exclusion.
 
#20
What would be your criteria for something useful?
Hi - welcome back!

Well, remember that a Quantum Computer only acts to speed up certain kinds of calculations by putting some of the work in parallel using quantum superposition - in other words, we are talking about the money and time needed to do a calculation. So unless the QC can achieve a substantial amount of parallelism, it will probably be cheaper to do the same operation with parallel conventional hardware.

The trouble is that systems that are in quantum superposition are very delicate.

David
 