My first introduction to parapsychology came about a year and a half ago, with the Ganzfeld experiments. Since then, I've been studying both skeptics and psi proponents, as far as my money, time and resources allow (I don't have enough cash to buy expensive papers/books, and my language skills are limited to Spanish and English). In that time, I've come across four potential objections to which I've been unable to find any reply from the pro-psi side, and one seems so obscure that I've never found anything at all about it on the Internet. I'll lay them out here, for two reasons: (1) further discussion and feedback, and (2) to ask you: what should I do next? I've found myself at a dead end on this issue. Should I move on to another parapsychology topic?
So, here are the four potential objections:
(a) Kennedy's paper concerning statistical power: This one is pretty recent, from last year. You can find it for free here (http://jeksite.org/psi/jp13a.pdf). I came to know about this paper through Prescott's blog, where Kennedy and Carter engaged with the topic a little. I've also seen Arouet use it in a post concerning the Ganzfeld on this forum, but there were no further replies that I'm aware of. The central issue in Kennedy's paper is that underpowered meta-analyses tend to give misleading results, especially when they are post hoc.
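To see why this matters, here's a toy simulation (my own made-up numbers, not Kennedy's) of the "winner's curse": when small, underpowered studies are filtered by statistical significance, the surviving studies systematically overestimate the true effect, so any post-hoc analysis built on them inherits the inflation.

```python
import random
import statistics

random.seed(1)

TRUE_HIT_RATE = 0.30   # hypothetical modest true effect, a bit above the 25% chance rate
N_PER_STUDY = 30       # small, underpowered studies
N_STUDIES = 2000

def run_study():
    """Simulate one study: the fraction of hits out of N_PER_STUDY trials."""
    hits = sum(random.random() < TRUE_HIT_RATE for _ in range(N_PER_STUDY))
    return hits / N_PER_STUDY

rates = [run_study() for _ in range(N_STUDIES)]

# Crude significance filter: keep studies more than ~1.645 standard errors above 25%
se = (0.25 * 0.75 / N_PER_STUDY) ** 0.5
significant = [r for r in rates if r > 0.25 + 1.645 * se]

print("true hit rate:               ", TRUE_HIT_RATE)
print("mean over all studies:       ", round(statistics.mean(rates), 3))
print("mean over 'significant' only:", round(statistics.mean(significant), 3))
```

The average over all studies recovers the true rate, but the average over only the "significant" ones is well above it.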
(b) Goodfellow's Objection: I've only found this particular objection in a book named "Introduction to Parapsychology" (5th edition), in the chapter on theories of psi. Sadly, my Kindle is dead, so I can't quote it; however, I do recall a poster named "Linda" at the James Randi Foundation forum making a similar point. Namely, the assumption that 25% is the expected chance rate may be wrong, because people usually have a bias toward certain options, like the first and the last ones shown, and the shuffled distribution of the targets may not always follow a 25/25/25/25 pattern. Apparently Goodfellow found evidence of this effect (people choosing in a non-random way, like first, first, second, second).
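Goodfellow's point is easy to demonstrate numerically. This sketch uses hypothetical bias weights (the specific percentages are mine, purely for illustration): if guessers favor the first and last positions, and the targets aren't shuffled to a perfectly uniform 25/25/25/25 either, the hit rate drifts above 25% with no psi involved.

```python
import random

random.seed(7)

# Hypothetical biases, for illustration only:
GUESS_WEIGHTS  = [0.35, 0.15, 0.15, 0.35]  # guessers favor the first and last options
TARGET_WEIGHTS = [0.30, 0.20, 0.20, 0.30]  # imperfect shuffling: targets not exactly uniform

TRIALS = 100_000
hits = 0
for _ in range(TRIALS):
    guess  = random.choices(range(4), weights=GUESS_WEIGHTS)[0]
    target = random.choices(range(4), weights=TARGET_WEIGHTS)[0]
    hits += (guess == target)

# Expected hit rate is the dot product of the two weight vectors:
# 2*(0.35*0.30) + 2*(0.15*0.20) = 0.27, above the nominal 0.25
print("hit rate with no psi:", hits / TRIALS)
```

If the two biases lean in opposite directions instead, the same mechanism pushes the hit rate below 25%; either way the 25% baseline is wrong.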
(c) Harvie's Objection: On this one, I've only been able to find a small reference at the Skeptic's Dictionary, which I'll quote directly (http://www.skepdic.com/psiassumption.html):
"
Studies comparing random strings with random strings, to simulate guessing numbers or cards, have found significant departures from what would be expected theoretically by chance (Alcock 1981: 159). For example, Harvie “selected 50,000 digits from various sources of random numbers and used them to represent “target cards” in an ESP experiment. Instead of having subjects make guesses, a series of 50,000 random numbers were produced by a computer.” He found a hit rate that was significantly less than what would be predicted by chance (Alcock 1981: 158-159).
In the 1930s, Walter Pitkin of Columbia University printed up 200,000 cards, half red and half blue, with 40,000 of each of the five ESP card symbols. The cards were mechanically shuffled and read by a machine. The result was two lists of 100,000 randomly selected symbols. One list would represent chance distribution of the symbols and the other would represent chance guessing of the symbols. However, the actual matches and what would be predicted by accepted odds didn’t match up. The total number was 2% under mathematical expectancy. Runs of 5 matching pairs were 25% under and runs of 7 were 59% greater than mathematical expectancy (Christopher 1970: 27-28). The point is not whether these runs are typical in a real world of real randomness or whether they represent some peculiarity of the shuffling machine or some other quirk. The point is that it is not justified to assume that statistical probability based on true randomness and a very large number of instances applies without further consideration to any finite operation in the real world such as guessing symbols in decks of 25 cards shuffled who knows how or how often, or rolling dice, or trying to affect a random number generator with one's mind. As Alcock put it: “If such significant variation can be produced by comparing random strings with random strings, then the assumption that any significant variation from chance is due to psi seems untenable (Alcock 1981: 158-159).”"
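A small illustration of the quote's general point, that the proper chance baseline depends on the exact procedure: matching two full 25-card ESP decks (a "closed deck," 5 of each of 5 symbols) has the same expected hit count as the textbook binomial model (5 hits), but a different variance, so a significance test that assumes the binomial model is slightly miscalibrated even with perfect shuffling. This is my own Monte Carlo sketch, not Pitkin's or Harvie's data.

```python
import random
import statistics

random.seed(3)

DECK = [symbol for symbol in range(5) for _ in range(5)]  # 25 ESP cards, 5 of each symbol
TRIALS = 200_000

def matches_closed():
    """Shuffle a real 25-card deck against a fixed full-deck call sequence."""
    deck = DECK[:]
    random.shuffle(deck)
    return sum(a == b for a, b in zip(DECK, deck))

def matches_open():
    """'Open deck': each call and target independent and uniform (the binomial model)."""
    return sum(random.randrange(5) == random.randrange(5) for _ in range(25))

closed = [matches_closed() for _ in range(TRIALS)]
opened = [matches_open() for _ in range(TRIALS)]

print("closed-deck mean/variance:", statistics.mean(closed), statistics.pvariance(closed))
print("open-deck   mean/variance:", statistics.mean(opened), statistics.pvariance(opened))
```

Both setups average 5 hits, but the closed-deck variance comes out larger (analytically 25/6 ≈ 4.17 versus the binomial 4.0), so p-values computed from the wrong model are off before any psi enters the picture.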
(d) Ioannidis/Ersby Objection: Ioannidis's famous PLoS Medicine paper on why most published research findings are false gives a series of criteria for estimating the probability that a study, or a series of studies, contains erroneous findings, and I've read some quite convincing arguments as to why the Ganzfeld may fit the bill: an apparent effect produced by different biases and errors leaking into the studies.
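The core of Ioannidis's argument can be written as a one-line formula for the positive predictive value of a "significant" finding (this is the no-bias case; the paper layers bias terms on top of it). The example numbers below are hypothetical:

```python
def ppv(power, alpha, prior_odds):
    """Positive predictive value of a 'significant' finding: the chance it is
    actually true, given the study's power (1 - beta), its alpha level, and
    the pre-study odds R that the tested relationship is real (no-bias case)."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# Hypothetical scenarios:
print(ppv(power=0.80, alpha=0.05, prior_odds=1.00))  # well-powered test of a plausible claim
print(ppv(power=0.20, alpha=0.05, prior_odds=0.01))  # underpowered test of a long shot
```

With low power and low prior odds, most "significant" results are false positives, which is roughly the situation skeptics argue applies to the Ganzfeld.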
Ersby makes a similar objection, which can be read at the end of the introduction to his work on the Ganzfeld at the Skeptic's Report page. His objection is based on his personal collection of over 7000 Ganzfeld studies and on how, when plotted together, they don't form a funnel plot, which would be an indicator of a genuine effect. I've read that the file-drawer objection is mislabeled (Randi has made this counter-objection), but I've also read that there is some controversy about how exactly to detect the file-drawer effect, and that some analyses may give over-inflated results. Ersby's objection can be found at the bottom, here (http://www.skepticreport.com/sr/?p=316).
So, those are the objections I've found strongest. Any opinions or suggestions? Thanks in advance.