Why is paranormal research ignored?

I'm starting to think we need a "The Ganzfelds: Redux" thread to prevent this one from getting crushed under discussion of that experiment.

This is one of the things I really wanted to focus on with the research methodology thread: to look closely at the methodology of all these experiments to see whether they should be considered at high risk of bias or not. If the risk of bias is high, then wouldn't scientists be justified in not relying much on what's been produced to date? I don't think we can answer the question in the OP without taking a very close look at these methodological issues.

I'm all for it. I know things got a little contentious in the old forum but what the hell, new forum, new beginnings! :)

Pat
 
One thing that hasn't yet been discussed is the issue of the perceived credibility of the journals in which paranormal research is published. I think this ties in closely with catching the interest of mainstream scientists. I get the impression that there's very little psi research published in the "normal" peer-reviewed journals that scientists read. If a psi-specific journal is even on the average scientist's radar, what will be his attitude toward the publication? Fringe nonsense? Legitimate source for research papers? Something in between? If the psi journals are suspect, whether justly or unjustly, that's got to be a major roadblock to acceptance of psi by scientists.

Pat
 
But the majority of people who ignore this work don't take a close look at the methodological issues, so the question in the OP is quite unrelated to the ganzfeld specifically. Why is there this taboo against psi research?

I made a slideshow which should sum this up succinctly.

In short: because the claims are "obviously" impossible and run counter to a lot of people's common sense, and most people are only familiar with the skeptical literature on the topic. Moulton and Kosslyn (2008) cited the Milton-Wiseman meta-analysis, probably because the refutation of that analysis is not indexed in the same academic search engines (and thus it didn't show up for them).
 

Attachments

  • WhyNoPsi.pdf
As I mused, if it takes thirty minutes of mentation to score only 5% better than if I threw a dart at the paper, I would just throw the dart at the paper.

If, on the other hand, one focused on the test groups which scored significantly higher than the baseline Ganzfelds, I would become interested again. Spending 30 minutes of mentation to get 10-20% better odds on something begins to sound worth it, and there was one report I recall that actually obtained a 40% success rate on a large sample by selecting creative subjects.
People research lots of things which are not instantly useful! The early research on radioactivity wasn't useful, but people could see the potential. Equally, it is obvious to anyone that a successful ESP/Ganzfeld experiment blows a massive hole in orthodox science, and like most new things in science it is likely to have the potential to be developed into something useful (or very dangerous).
Prisoner's dilemma. If one academic talks, they can be ousted; if twenty academics talk, then someone realizes they can't really throw out half the specialists and has to begrudgingly tolerate it. Unfortunately this will only happen if the "prisoners" decide they value truth more than vanity.

Well, the sociology of taboos is only of passing interest; the point is that paranormal research isn't ignored because nobody is interested, because, as I pointed out, there would be real value in discovering why it is that some of these results emerge. Researchers were happy to expose the imaginary nature of N-rays, and to expose false mediums, etc. - which was clearly far more effective than mere scoffing or debunking. I think they probably realise that some of the ψ research done nowadays is far harder to debunk, and is probably valid.

David
 
As I mused, if it takes thirty minutes of mentation to score only 5% better than if I threw a dart at the paper, I would just throw the dart at the paper.

If, on the other hand, one focused on the test groups which scored significantly higher than the baseline Ganzfelds, I would become interested again. Spending 30 minutes of mentation to get 10-20% better odds on something begins to sound worth it, and there was one report I recall that actually obtained a 40% success rate on a large sample by selecting creative subjects.



Prisoner's dilemma. If one academic talks, they can be ousted; if twenty academics talk, then someone realizes they can't really throw out half the specialists and has to begrudgingly tolerate it. Unfortunately this will only happen if the "prisoners" decide they value truth more than vanity.
Your reference is Kathy Dalton's work, btw. I believe it is excluded from most meta-analyses as an outlier because it's so large and so successful.
 
I made a slideshow which should sum this up succinctly.

In short: because the claims are "obviously" impossible and run counter to a lot of people's common sense, and most people are only familiar with the skeptical literature on the topic. Moulton and Kosslyn (2008) cited the Milton-Wiseman meta-analysis, probably because the refutation of that analysis is not indexed in the same academic search engines (and thus it didn't show up for them).

Good slideshow - I enjoyed the humour and I like the conversational style. I encounter a lot of educated people who regard a belief in psi, and the idea of "sending thoughts", as naive and almost childish... (to be clear, this is not my position, before I get a heap of "dislikes"). You've presented a great way to approach those types...
 
Researchers were happy to expose the imaginary nature of N-rays, and to expose false mediums, etc. - which was clearly far more effective than mere scoffing or debunking. I think they probably realise that some of the ψ research done nowadays is far harder to debunk, and is probably valid.

I suspect they don't know that it exists, because while the Journal of Alternative and Complementary Medicine is indexed in MEDLINE and PubMed, I do not believe I have seen parapsychology journals indexed in academic search engines. I don't have access to the private ones to test this, and I haven't tried Google Scholar, but I suspect this is the case because I haven't seen them listed in any of the huge tables of acknowledged abbreviations in various reference managers.

People research lots of things which are not instantly useful!

Of course they do, but traditionally it's always been a small subset of people who work on these projects until they become commercially viable.
 
Your reference is Kathy Dalton's work, btw. I believe it is excluded from most meta-analyses as an outlier because it's so large and so successful.
I thought that is who it was; and I think Dalton was excluded from Wiseman's analysis because of a cutoff date selected to avoid including it. That one, and a Swedish study which tested the procedure with and without marijuana (there was actually an improvement, unsurprisingly), make me more interested on their own than the "baseline" Ganzfelds. I need to read those specific papers.
 
I thought that is who it was; and I think Dalton was excluded from Wiseman's analysis because of a cutoff date selected to avoid including it. That one, and a Swedish study which tested the procedure with and without marijuana (there was actually an improvement, unsurprisingly), make me more interested on their own than the "baseline" Ganzfelds. I need to read those specific papers.
Bear in mind that the hit rate beyond 30% is almost exclusively carried by selected individuals, if that's what you're looking for. The normal hit rate for unselected individuals is barely above chance (we're talking 1-2% above it), but for selected participants it is usually in the ballpark of 35-40%.
 
Bear in mind that the hit rate beyond 30% is almost exclusively carried by selected individuals, if that's what you're looking for. The normal hit rate for unselected individuals is barely above chance (we're talking 1-2% above it), but for selected participants it is usually in the ballpark of 35-40%.

Which tells us there is something to look into regarding the attributes of these selected participants that would give them such a staggeringly higher success rate.
 
Which tells us there is something to look into regarding the attributes of these selected participants that would give them such a staggeringly higher success rate.
Well, there are four findings that emerge from both GZ conditions and presentiment studies. The first is that people who are inwardly focused, like meditators, perform far better than others. The second is that artistic people perform much better than the average populace. The third is that couples and people who are close perform much better than the populace. And the last is that in presentiment studies, fast reactionary thinking trumps slow methodical thinking. So psi is not some all-pervasive attribute. Different forms of psi use different mechanisms.
 
The second is that artistic people perform much better than the average populace. The third is that couples and people who are close perform much better than the populace.

Aren't artistic people more likely to be introverts or spend long periods of time contemplating?
 
Well, there are four findings that emerge from both GZ conditions and presentiment studies. The first is that people who are inwardly focused, like meditators, perform far better than others. The second is that artistic people perform much better than the average populace. The third is that couples and people who are close perform much better than the populace. And the last is that in presentiment studies, fast reactionary thinking trumps slow methodical thinking. So psi is not some all-pervasive attribute. Different forms of psi use different mechanisms.

Here's a paper I dug up that discusses this. It's been a while since I read it, but I think it found that there wasn't a big difference.

http://www.academia.edu/695143/Are_..._and_psi_with_an_experience-sampling_protocol

The overall outcome of the study was perceived to be such that the methodology warrants further research, although a number of pitfalls were identified. Psi-performance was at levels commensurate with the performance of artists in previous free-response ESP research (r = .423, n = 30, with a hit rate of 43%). However, the planned sum-of-ranks analysis did not reach statistical significance (z = 1.03, p = .152, 1-t). Artists did not out-perform carefully matched controls, who differed only on 'artistic creative personality', possibly attributable to the autonomy enabled by the experience sampling protocol. In line with previous research, none of the creativity measures selected significantly predicted psi-outcome, thus the hypothesis that affective dimensions of creativity might be related to psi-performance was rejected. However, in planned exploratory analyses one cognitive style significantly predicted psi-performance, where the use of ideas that seem to come from 'beyond the self' in the creative process was associated with psi-missing (rho = -.429, p = .018, 2-tailed); and cognitive flexibility and originality was significantly associated with magnitude of the psi-effect (rho = -.535, p = .004, 2-tailed).
 
Here's a paper I dug up that discusses this. It's been a while since I read it, but I think it found that there wasn't a big difference.

http://www.academia.edu/695143/Are_..._and_psi_with_an_experience-sampling_protocol
You need to go back and look at what the inclusion criteria for selected participants were in all the studies that used them. I'm in class right now, so I'm trying to pay attention to the lecture, but I believe Dalton used artistic participants in her huge study.
 
I believe that's correct about Dalton - IIRC this paper looks more broadly than at that one study. I've got to re-read it myself; it's been quite a while since I looked at it.
 
As I mused, if it takes thirty minutes of mentation to score only 5% better than if I threw a dart at the paper, I would just throw the dart at the paper.

If, on the other hand, one focused on the test groups which scored significantly higher than the baseline Ganzfelds, I would become interested again. Spending 30 minutes of mentation to get 10-20% better odds on something begins to sound worth it, and there was one report I recall that actually obtained a 40% success rate on a large sample by selecting creative subjects.

I see where you are coming from, but small effect sizes still matter. As Radin says... small effect sizes are enough for science to accept data, which in turn gets the FDA to allow Disprin to market the claim that taking their drug can stop you from having a heart attack... yet the Ganzfeld shows five times that effect size and somehow it is not considered relevant and so is ignored. 31% is still pretty damn good.

Instead of using the throw-a-dart-at-paper analogy, I use a gambling analogy: 30 minutes of mentation so that in a game of heads or tails I have a 55% chance instead of a 45% chance of winning. Well, if you only have 10 flips (trials), then yes, I agree it's not worth the effort, because a 55/45 edge isn't enough to guarantee that I walk away with more money than I had.

But if you told me 30 mins of mentation was going to GUARANTEE me a 55% win rate, there is not a man or woman alive who would not gamble. How do you guarantee it? With lots and lots of trials to show that it is not chance.

So it isn't so much about the percentage as the odds against chance. Binomial probability shows that the more trials you do and the further above chance you score, the greater the odds against that happening by accident. If you can show that over 1000 trials I can score 5% above chance, then absolutely I am going to do 30 mins of mentation to gamble with those odds.

I'm sure you know about using binomial probability to calculate the odds of something happening by chance, but for those who don't, this is how it works.
Flipping 5% above chance with 11 heads out of 20 tosses = meh. The odds are 1 in 2.4. Nothing to see here.
Flipping 5% above chance with 55 heads out of 100 tosses = interesting. The odds are 1 in 5.5 of that happening, but nothing out of the ordinary.
Flipping 5% above chance with 555 heads out of 1000 tosses = VERY interesting. The odds are 1 in 3568.

All of a sudden, when you are running large trials and you are at the 5 or 6% above chance that Radin was seeing, you have odds that make it very unlikely this is happening due to chance; something else is at play here. So 5% is still very valid.
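
If anyone wants to check those numbers for themselves, here is a quick Python sketch of the same binomial tail calculation (the helper name odds_against is just mine, and nothing beyond the standard library is needed):

from math import comb

def odds_against(heads, flips):
    # Probability of getting at least `heads` heads in `flips` fair coin tosses,
    # returned as the N in "1 in N" odds against.
    p = sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips
    return 1 / p

for heads, flips in [(11, 20), (55, 100), (555, 1000)]:
    print(f"{heads}/{flips}: about 1 in {odds_against(heads, flips):,.1f}")

Running it reproduces the odds quoted above to within rounding.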
 
I see where you are coming from, but small effect sizes still matter. [...] So it isn't so much about the percentage as the odds against chance. [...] So 5% is still very valid.
If I were a 'skeptic', I would not be attacking the statistics. I would be attacking the perceived methodological flaws.
 
This is another nice little demonstration I use with my less scientifically minded friends (the sort who are not the type to come on a forum like this) to show how meaningful something like 6% above chance is within the Ganzfeld, and it's one they can do for themselves.

Open up Excel and build a quick formula using the RANDBETWEEN function. (Before the skeptics start: yes, I know Excel's random number generator is not a perfect algorithm and is not a true source of randomness, but that is not the point.)

Enter the function =RANDBETWEEN(1,4) in cell A1 and copy it down through cells A1 to A10. This returns a random number between 1 and 4 in each cell.

In cell B1, type in the number of trials, 10 to start with. This will be used by the formula.

In cell C1, enter the formula =SUMIF(A1:A2000,1). This tells you how many 1's you got in your trial (it works because every matching cell contains a 1; =COUNTIF(A1:A2000,1) does the same job more directly). Chance would be 1 in 4.

In cell D1, enter the formula =SUM(C1/B1)*100. This gives you the percentage; 25% is what we would expect by chance.
Now, when I run 10 trials in Excel several times, I get 30%, 50%, 60% and even 70%. That really means nothing, because the sample size is too small.

Now copy the formula down from A1 to A100 and change B1 to 100 (the number of trials). I am getting 27%, 17%, 26%, 24%, 22%. The percentages are starting to hover around the 22-27% range, with the odd anomaly like 17%.

Now copy the formula down from A1 to A500 and change B1 to 500. What do we notice now? The percentages are 27.4, 24.7, 26.2, 22.4, 26.8. Starting to get a little interesting: we are between 22.4% and 26.8%, even though it is only 500 trials. The chance of landing 6% either side of the expected 25% has dropped significantly. In fact, I re-ran this about 20 times and it didn't once get close to 31% or 19%.

So what happens when we run 2000 trials (copy the formula down to A2000 and change B1 to 2000)? Now the percentages of 1's come up as 25.15, 25.95, 24.9, 24.55, 24.75. We are getting values from 24.55% to 25.95%, so at most 0.95% above chance. You never see anything even remotely approaching 6% above chance.

The odds of getting 6% above chance when running that many trials are astronomical.

So the problem isn't the percentage; it's the probability of achieving 6% above chance in most of the Ganzfeld studies which, as Dean Radin correctly points out, is significant. We have something here which needs further investigation.
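
For anyone without Excel, here is the same demonstration as a short Python sketch (the function name hit_rate is mine, and the caveat about Excel's random numbers applies to Python's random module too - it's pseudorandom, but fine for this purpose):

import random

def hit_rate(trials):
    # Draw `trials` random numbers from 1 to 4 and return the percentage of 1's.
    # Pure chance is 25%.
    hits = sum(1 for _ in range(trials) if random.randint(1, 4) == 1)
    return 100 * hits / trials

for n in (10, 100, 500, 2000):
    runs = [round(hit_rate(n), 2) for _ in range(5)]  # five runs per sample size
    print(f"{n} trials: {runs}")

You should see the same pattern: wild swings at 10 trials, and values hugging 25% by 2000.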
 
If I were a 'skeptic', I would not be attacking the statistics. I would be attacking the perceived methodological flaws.

Of course... it's always easier to just say "there must have been a flaw in the method" than to attack the data, actually replicate it for yourself, and then realise that, shit... there is something to this. Just ask Ray Hyman.
 
This is another nice little demonstration I use with my less scientifically minded friends... [...] The odds of getting 6% above chance when running that many trials are astronomical. We have something here which needs further investigation.

Gosh, are they still your friends after that? ;)
 