Neil
I am new to the forums and wanted to make a few observations on a couple of topics I have heard discussed many times on the show and in the book. Please note that these are just my opinions, for whatever they may be worth, and since I am new here and not yet familiar with everything, I hope this is not out of line.
First I would like to address a common claim made by skeptics that I have heard many times on the show, using a quote from Matt Dillahunty posted in this forum in response to Alex:
Matt Dillahunty said: "But pointing out that a claim hasn't met it's burden of proof and cannot rationally be considered 'true' is NOT the same as claiming that the claim is false."
I take issue with the claim that something like psi has not met its burden of proof. Aside from the use of the term "proof," the problem with this statement is that it is not quantified in any way, which always leaves the skeptic making it free to dismiss evidence subjectively, since no threshold was ever specified for what would constitute "proof." In science this kind of requirement is quantified: an effect must reach a p-value below 0.05 or 0.01 (or whatever the field specifies), or, in physics, 3 sigma constitutes an effect and 5 sigma constitutes a discovery.
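To make that convention concrete, here is a minimal Python sketch (assuming scipy is available) converting the physics sigma levels to one-sided p-values:

```python
# One-sided tail probability of a standard normal at each sigma level,
# which is the convention used in particle physics.
from scipy.stats import norm

for sigma in (2, 3, 5):
    p = norm.sf(sigma)  # one-sided p-value for a z score of `sigma`
    print(f"{sigma} sigma -> p = {p:.2e}")

# Output (approximately):
# 2 sigma -> p = 2.28e-02
# 3 sigma -> p = 1.35e-03   (an "effect" by the physics convention)
# 5 sigma -> p = 2.87e-07   (a "discovery")
```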
Why do skeptics almost never quantify what would constitute "proof"? Well, in a way, Ray Hyman did during the Ganzfeld debates with Charles Honorton, in their joint communiqué (Hyman & Honorton, 1986, "A Joint Communiqué: The Psi Ganzfeld Controversy," Journal of Parapsychology), though this was more about methodology than about quantifying effect sizes. In the same paper they wrote:
"We agree that there is an overall significant effect in this data base that cannot reasonably be explained by selective reporting or multiple analysis. We continue to differ over the degree to which the effect constitutes evidence for psi, but we agree that the final verdict awaits the outcome of future experiments conducted by a broader range of investigators and according to more stringent standards."
So when the results of the autoganzfeld experiments came out, which addressed the methodological issues Hyman had identified, the results were significant, with a p-value of 0.00005 and a Cohen's h (effect size) of 0.20, which is considered a "small effect" by convention. Jessica Utts points out that "the effect size observed in the ganzfeld data base is triple the much publicized effect of aspirin on heart attacks," an effect that was considered very significant (Utts, 1991, "Replication and Meta-Analysis in Parapsychology," Statistical Science). Did this result in Hyman conceding that this constitutes evidence for psi? Hyman states:
"Honorton's experiments have produced intriguing results. If independent laboratories can produce similar results with the same relationships and with the same attention to rigorous methodology, then parapsychology may indeed have finally captured its elusive quarry." (Comment in Statistical Science, 1991, pg 392).
Even though the 10 autoganzfeld experiments met the criteria set, the effect was essentially the same as that demonstrated in the previous experiments, and there was no file-drawer effect, Hyman still would not say that this constitutes evidence for psi. Why not? The autoganzfeld experiments replicated the past database of Ganzfeld studies, demonstrating that the effect was not the result of methodological errors. The previous database had an overall significance of 6.60 sigma (remember, 5 sigma is a discovery in physics) and a p-value of 3.37 x 10^-11 (Rosenthal, 1986, "Meta-Analytic Procedures and the Nature of Replication: The Ganzfeld Debate," Journal of Parapsychology). In any other area of science this would demonstrate an effect, but Hyman would not admit it. Why can't skeptics quantify what constitutes evidence? Without quantifying what constitutes evidence, there is continual moving of the goal posts, so that there is always an out.
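For what it's worth, the Cohen's h figure mentioned above is easy to reproduce. A minimal sketch, assuming the commonly cited autoganzfeld counts of 122 hits in 355 sessions against the 25% chance baseline of a four-choice design (my assumption for illustration, not a figure from the quotes above):

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h effect size for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# 122 hits in 355 sessions (~34.4%) vs. the 25% chance rate; these
# counts are my assumption, used here only for illustration.
print(round(cohens_h(122 / 355, 0.25), 2))  # ~0.21, i.e. the "small effect" of about 0.2
```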
This is related to what Alex refers to with Carl Sagan's claim that "extraordinary claims require extraordinary evidence." Alex has rightly pointed out the falsity of this statement, and the reason is that nothing in it is quantified. What defines an extraordinary claim? And more importantly, what would quantify as extraordinary evidence? This is never specified, and it always leaves an out for skeptics to keep saying "it hasn't met the burden of proof." That's not science.
This is related to the abuse of Bayesian reasoning, where a prior probability is used in the analysis of data. The major problem is that calculating a prior probability is very subjective, and on top of that, not all factors are considered. If someone uses a prior that says they are 99.99999999999999999% sure that psi doesn't exist, then realistically no amount of evidence will convince them that there is an effect.
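To see why such a prior is effectively immune to evidence, here is a minimal sketch of a Bayesian update with made-up numbers (both the prior and the Bayes factor are purely illustrative):

```python
# Posterior odds = prior odds x Bayes factor.  With an extreme enough
# prior, even overwhelming evidence leaves the posterior near zero.
prior = 1e-20                       # illustrative near-certainty that psi does not exist
prior_odds = prior / (1 - prior)

bayes_factor = 1e6                  # illustrative: data a million times likelier under psi
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # ~1e-14: still effectively zero, no matter how strong the data
```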
The supposed justification for such a prior is essentially that psi does not fit into our neuroscientific understanding of brain function or into our current understanding of physics. This is an incomplete and highly biased way of arriving at a prior probability.
In setting a prior, it should be considered that our understanding of consciousness is not only incomplete; we do not even have a proposed mechanism for consciousness (in neuroscience). We are almost clueless about what consciousness is, and in philosophy of mind there is reason to think that neuroscientific methods may never find a mechanism if we are dealing with strong emergence. Neuroscience may at some point declare consciousness explained, but only on its own narrow and incomplete definition of consciousness (since neuroscience, by definition, treats consciousness as brain processes). Feyerabend points out that it is unreasonable to require a new theory to match the old one. This is obvious when we are looking at new domains of exploration, which in this case means the domain of consciousness. What right do we have to require that a new theory fit into the current neuroscientific model of consciousness?
Further, we also know that our physics is incomplete. If consciousness is fundamental to the universe, then we are exploring a new domain of the universe, and we have no real justification for saying that some phenomenon in a new domain is impossible or extremely unlikely based on our mathematical models of other domains. How likely was quantum theory within the Newtonian paradigm? By the standards used to say that psi is extremely unlikely or impossible, quantum theory would have had an extraordinarily low prior probability as well.
Then we should also consider pessimistic meta-induction, which essentially means considering the falsity of past theories. What is meant by "falsity" depends on the field of research, but in physics, for example, the falsity of past theories does not mean that the mathematical models are false, since they have been demonstrated to be extremely good at modeling the particular domains they describe. The falsity lies in extrapolating the theory into new domains, which highlights the problem of scientific induction described by Karl Popper. The falsity of Newtonian theory is not that it fails in calculating the paths of projectiles or rockets, but that it cannot be extrapolated to very small scales such as those described by quantum theory. It is false in the sense that quantum theory is a more fundamental theory, even though Newtonian theory still works extremely well in its particular domain.

The other falsity is in the metaphysical interpretation of the theory. You really cannot separate metaphysics from physics, and the metaphysical assumptions of theories are proven wrong over and over again. For example, general relativity demonstrates the falsity of Newtonian theory's metaphysical assumption of absolute, independent space and time, and quantum theory further falsifies the Newtonian metaphysical interpretation of what matter is. History has demonstrated again and again that our metaphysical interpretations are seemingly always proven wrong, and that the exploration of new domains uncovers more fundamental aspects of the universe, showing previous theories to be approximations useful only in particular domains, not absolute laws of nature.
So the point is that when exploring new domains such as consciousness, we have to take these factors seriously. We should also consider past data that suggest the presence of psi. Even if one does not consider the past evidence to demonstrate psi, it at least has to be recognized that we are attempting to replicate an effect found in previous experiments. Skeptics claim that there is no evidence, or at least no good evidence, but when statisticians come in to settle these debates (such as Jessica Utts), they conclude that there is some sort of effect that needs explanation. Given the patterns found and replicated, and the predictions made and confirmed, this effect, such as in the Ganzfeld database, seems highly unlikely to be simply a misunderstanding of randomness or of methods of analysis. In fact, if we did not understand randomness in systems or how to analyze it, that would pose very serious problems for many areas of science.
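As a sanity check on that last point, standard statistics tells us exactly how unlikely the observed hit rate would be under pure chance. A minimal sketch, again assuming the commonly cited autoganzfeld counts (122 hits in 355 four-choice sessions, my assumption rather than a figure quoted above):

```python
# Exact binomial test: probability of at least 122 hits in 355 trials
# if the true hit rate is the 25% chance baseline.
from scipy.stats import binomtest

result = binomtest(k=122, n=355, p=0.25, alternative="greater")
print(result.pvalue)  # on the order of 1e-5, in line with the p-value quoted above
```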
So considering our incomplete theories in physics, the fact that neuroscience does not even have a theory of consciousness, the problems of scientific induction (taking a model and making it universal), the pessimistic meta-induction over the falsity of past theories, and the previous data, what is the prior probability of something like psi? These factors really can't be quantified, and the prior probability remains subjective. The point is that the "extraordinary claims require extraordinary evidence" standard is unscientific, since it is not quantified in any way and really relies on highly subjective prior probabilities in Bayesian reasoning, which demonstrates confirmation bias more than anything.
So in the end, I think Alex should press skeptics to quantify what constitutes this "burden of proof" or "extraordinary evidence." At least that would make some sort of progress and hold them accountable so there is not continual moving of the goal posts.