Parapsychology: Science or Pseudoscience?

#41
I have a headache, so I may be missing something obvious (apart from the fact that I shouldn't be looking at a monitor if I have a headache), but Francis says "Evidence for publication bias in a set of experiments can be found when the observed number of rejections of the null hypothesis exceeds the expected number of rejections", and he uses Bem's precognitive habituation work as an example: 9 successful replications out of 10, when the effect is described as small, suggests that not all the data are being presented.

But the Ganzfeld has nothing like that kind of hit rate. Depending on where you look, the percentage of ganzfeld experiments that reject the null (at p < 0.05) is between 22% and 27%.

Just to clarify: 22% comes from my database (38 replications out of 169), while 27% comes from Storm, Tressoldi et al.'s database (30 replications out of 109 experiments).
I think you're confusing Francis's (then-)definition of publication bias with the test he used to detect it. Francis defined publication bias to include failure to publish null or disconfirming findings, p-hacking, and the exercise of researcher degrees of freedom. What I was saying is that, so defined, publication bias is a plausible explanation for non-null ganzfeld results.

If the test that Francis used (which we now call the test for excess success [TES]), when applied to a set of experiments, yields a small p-value, then the set of experiments almost certainly is biased. But the converse is not true: if a set of experiments is biased, the TES p-value will often not be small. In fact, as Francis's extensive simulations have shown, the TES has very low power to detect bias. So the fact that a set of ganzfeld studies would pass the TES does not suggest that the set of experiments is unbiased.
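The TES logic above can be sketched with a simple binomial tail calculation. This is only an illustration: the 30% per-study power figure is an assumed placeholder for a small effect, not a value from Francis or from either database.

```python
from math import comb

def prob_at_least(k, n, power):
    """P(at least k of n independent studies reject H0),
    assuming each study rejects with probability `power`."""
    return sum(comb(n, i) * power**i * (1 - power)**(n - i)
               for i in range(k, n + 1))

# Bem-style set: 9 of 10 experiments significant. With an assumed
# per-study power of 0.30, that many rejections is wildly improbable,
# so the TES flags excess success.
p_bem = prob_at_least(9, 10, 0.30)

# Ganzfeld-style set: 38 of 169 experiments significant (~22%).
# Under the same assumed power this count is unremarkable, so the
# TES would not flag it -- which, given the test's low power to
# detect bias, does not show the set is unbiased.
p_ganzfeld = prob_at_least(38, 169, 0.30)
```

The asymmetry is the point: a small TES p-value is strong evidence of bias, but a large one is weak evidence of its absence.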
 
#42
I think you're confusing Francis's (then-)definition of publication bias with the test he used to detect it. Francis defined publication bias to include failure to publish null or disconfirming findings, p-hacking, and the exercise of researcher degrees of freedom. What I was saying is that, so defined, publication bias is a plausible explanation for non-null ganzfeld results.
I'm not sure how this would apply to the autoganzfeld run of 11 experiments after the joint communiqué of Hyman and Honorton. There was no file drawer, many analyses have been done, including by statisticians (so I don't see the p-hacking), and I am not sure what the researchers' degrees of freedom would be.
 
#43
And in the Ganzfeld for example, this is done. So again, how is it a criticism?

It would be nice to see something quantitative from Blackmore rather than some vague criticism.
My point was that the "experimenter effect" does not imply "psi phenomena", as Craig said and as you agreed.
 
#44
I'm not sure how this would apply to the autoganzfeld run of 11 experiments after the joint communiqué of Hyman and Honorton. There was no file drawer, many analyses have been done, including by statisticians (so I don't see the p-hacking), and I am not sure what the researchers' degrees of freedom would be.
Although certain forms of p-hacking may be visible to tests like the TES or p-curve, in general p-hacking is invisible. For example, if researchers occasionally dropped a subject whose responses they didn't like, this would bias the experimental results but would be virtually impossible to detect. The dataset would look entirely ordinary and the positive effect would look real.
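The subject-dropping scenario is easy to simulate. The sketch below is a toy model, not a claim about any actual ganzfeld dataset: 200 hypothetical subjects perform at pure chance, and quietly discarding the worst scores inflates the hit rate while leaving data that look ordinary.

```python
import random

random.seed(1)  # reproducible illustration

def session_hits(n_trials=10, p_chance=0.25):
    """Hits in one ganzfeld-style session under the null (pure chance)."""
    return sum(random.random() < p_chance for _ in range(n_trials))

# 200 subjects, no real effect anywhere.
scores = [session_hits() for _ in range(200)]

# Honest analysis: the hit rate hovers near the 25% chance baseline.
honest_rate = sum(scores) / (200 * 10)

# Biased analysis: quietly drop the 20 lowest-scoring subjects.
# The remaining data still look perfectly ordinary, but the rate rises.
kept = sorted(scores)[20:]
biased_rate = sum(kept) / (len(kept) * 10)
```

Nothing in the surviving dataset records that anything was removed, which is why this form of bias evades tests applied to the published data alone.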
 
#45
My point was that the "experimenter effect" does not imply "psi phenomena", as Craig said and as you agreed.
If we look at well-controlled experiments like the autoganzfeld, it shouldn't matter whether I expect there to be results or not. Either way the hit rate should be 25%.
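That 25% chance baseline is directly testable with an exact binomial test. The tally below is purely hypothetical, used only to show the calculation against the four-choice chance rate.

```python
from math import comb

def p_at_least(hits, trials, p0=0.25):
    """One-sided exact binomial p-value: P(X >= hits) when the true
    per-session hit probability is the chance rate p0."""
    return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical tally: 32 hits in 100 four-choice sessions.
p_val = p_at_least(32, 100)  # compare against alpha = 0.05
```

If the experimenter's expectations truly don't matter, results across experimenters should scatter around p0 = 0.25 the way this null model predicts.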
 
#46
Although certain forms of p-hacking may be visible to tests like the TES or p-curve, in general p-hacking is invisible. For example, if researchers occasionally dropped a subject whose responses they didn't like, this would bias the experimental results but would be virtually impossible to detect. The dataset would look entirely ordinary and the positive effect would look real.
Well, that poses a real problem. How can this be a valid criticism if it is virtually impossible to detect? That makes it a criticism that could always be raised ("it could be this") but never falsified; it would be a criticism immune to response.
 
#47
You aren't addressing the experimental evidence, such as the Ganzfeld and if you noticed, this paragraph is addressing the nature of psi, not whether it exists. The latter seems to be assumed here.
Yes, the author is a "believer." So this is how he is interpreting it. No doubt a skeptic would interpret it differently. But even if we adopt that author's interpretation, it still would follow that testing for psi effects is not truly amenable to the scientific method: an effect that is not sustainable, as the author acknowledges, is not amenable to the scientific method.
 
#48
Well, that poses a real problem. How can this be a valid criticism if it is virtually impossible to detect? That makes it a criticism that could always be raised ("it could be this") but never falsified; it would be a criticism immune to response.
Welcome to parapsychology criticism. If this kind of brain-dead skepticism were applied to other sciences, we would still be living in caves. A spear? "Based on the trials, your crude spear had only a 15% success rate, which was only slightly better than my rock, and that could have been due to chance and selective reporting as well as experimenter bias. Spear research is useless and a waste of tribe resources."
 
#49
Yes, the author is a "believer." So this is how he is interpreting it. No doubt a skeptic would interpret it differently. But even if we adopt that author's interpretation, it still would follow that testing for psi effects is not truly amenable to the scientific method: an effect that is not sustainable, as the author acknowledges, is not amenable to the scientific method.
I will ban you if you keep using the term believer. He is a scientist, you will refer to him as you would any other scientist. And we are proponents, not believers. Is that clear?
 
#50
I've read a lot of Kennedy's stuff and he raises some pretty important concerns, imo. That said, the types of issues that he raises are [not] limited to the relatively small field of parapsychology, but apply across the board.
One could argue that the observer effect comes into play in physics. But whether a physics experiment is repeatable is not dependent on whether the experimenter has faith in its repeatability.
 
#51
Kennedy is not raising questions about the legitimacy of the existing studies, as far as I know, he is only bringing up a well known point that psi is not particularly easy to tame. He has merely restated a known feature of psychic ability.
If psi is "not particularly easy to tame," then testing for it is not amenable to the scientific method. That's the whole point!
 
#54
If psi is "not particularly easy to tame," then testing for it is not amenable to the scientific method. That's the whole point!
This is not true. If one does not understand a phenomenon and all confounding variables, it may be difficult to tame at first. This is seen in other areas of science and does not mean you cannot apply the scientific method.
 
#55
If the validity of some experiment is dependent on the faith of the experimenter, then the experiment is not really amenable to the scientific method.
It is a psychological phenomenon. We know it exists in another field.

And it's not "faith." It can have to do with many factors including how subjects are treated and even how attentive the researcher is.

A confounding variable does not make it in conflict with the scientific method. The scientific method takes confounding variables into account.
 
#56
What's the difference between a believer and a proponent?
What I find odd is that if psi does not exist then even if someone is a proponent it shouldn't matter any more than any other area of science.

Would you really think that scientists in other fields are totally indifferent to the results of testing their hypotheses?
 
#58
And it's not "faith." It can have to do with many factors including how subjects are treated and even how attentive the researcher is.
You asked me what were some of Susan Blackmore's criticisms. This was one of them. She was told that she was not getting positive results because she lacked faith.
 
#60
If the validity of some experiment is dependent on the faith of the experimenter, then the experiment is not really amenable to the scientific method.
This has been addressed in the parapsychological literature and many skeptics have gotten positive results in a wide variety of studies. This is commonly known. Have you done any research at all? I'm beginning to doubt that you've read what you said you have. You're spouting a lot of nonsensical talking points.
 