A small point of contention with Kai: statistics do not break down when effect sizes become small; it is in such situations, rather, that the real power and beauty of statistics are made manifest. For by the clarity they afford us, we can scrutinize the liminal, and do, uncovering unexpected relationships and effects that we could never know otherwise (and with remarkable precision, provided we observe a few laws of good scientific practice).
----------------------------------------------------------------
As I have mentioned before, many psi experiments are basically Bell's-theorem-type physics experiments: the goal is to demonstrate a correlation across a barrier, where the correlated objects are separated in such a way as to make conventional or classical information transfer between them impossible. And in both kinds of experiment the effect size is relatively small, and similarly so.
Now, I won't try to convince you that parapsychology has made a case for psi as convincing as these Bell-type experiments have made for non-local influence. It has not. Psi effects do not reproduce with equal ease; they vary as humans vary. Nor can their occurrence be predicted by some mathematically elegant mechanics (though I hope that in the future they will lend themselves to physical interpretation); rather, psi phenomena remain divorced from explanatory theory, floating in the vacuum of the unknown.
That does endow them with a certain appeal, though, don't you think?
From a personal perspective, after several years examining various strands of evidence for psi (a fascinating journey that has charted a new course for my life), I have come to the conclusion that the psi signal is strong enough to be distinguished from the noise. Let me be clear: I did not come to parapsychology as a skeptic. Parapsychology made me a skeptic. Through its conceptual rigor, scientific integrity, and investigative mindset, I discovered a different way to think about the paranormal. No beliefs. Just ideas. Ideas that can be tested.
Why do I think psi research has probably detected a real effect? I can only give the short answer here (you will find the longer one in either of the papers I co-authored with Maaneli, both in press). The effects, simply put, just don't seem to go away. Experiments get better, the statistics get better, but the signal remains. The studies have also proved resistant to criticism; as we purport to show in our forthcoming JP paper, for every proposed conventional explanation there is a preferable counter-explanation that better describes the data. File-drawer effects can be shown to be negligible in the best meta-analyses. Ganzfeld investigators get better results with higher-quality experiments. Selected subjects perform far better than unselected subjects in every database we have examined so far (ganzfeld, forced-choice, remote viewing, and dream-ESP). To me, the combined weight of these facts (and others I omit) strongly suggests that we are not dealing with an artifact of experimental design. But I am presenting only my perspective.
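For readers unfamiliar with how file-drawer robustness is assessed: one classic tool is Rosenthal's fail-safe N, which estimates how many unpublished null studies would have to be sitting in file drawers to drag a combined result below significance. Here is a minimal sketch; the z-scores are purely hypothetical, not drawn from any actual psi database.

```python
def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: the number of unretrieved null studies
    needed to reduce the combined result to p > .05 (one-tailed).
    z_scores are the per-study standard normal deviates; z_crit
    defaults to 1.645 (alpha = .05, one-tailed)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return (z_sum ** 2) / (z_crit ** 2) - k

# Hypothetical example: 10 studies, each with z = 2.0.
# Roughly 138 hidden null studies would be needed to nullify them.
print(round(fail_safe_n([2.0] * 10)))
```

When the fail-safe N dwarfs any plausible count of unreported experiments in a field, the file-drawer explanation loses its force.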
One of the problems parapsychology confronts today is that an impartial investigator, making only a cursory survey of the literature, would not have access to this information. They might be in doubt, and rightfully so, about whether artifacts could explain the small effects they saw in meta-analyses. In other words, psi is evident, but not self-evident. We need better, stronger data to convince those scientists (most of them) who cannot spare the time to examine the various proposed explanations and counter-explanations for parapsychological results. And we won't get that until more experiments succeed and their effect sizes grow larger, making them more credible prima facie. Maaneli and I have proposed methods for doing this in the forthcoming edition of The Handbook of Parapsychology. Simply put, the solution we recommend is to raise statistical power in several areas of research, drawing for this purpose on the key findings of previous meta-analyses (e.g. Storm et al., 2010; Bosch et al., 2004; Utts et al., 1996). In total, we predict that if ganzfeld experiments adopt our suggestions, they should achieve a reproducibility rate of approximately 60-70% (rather than the 20-30% they have now). Forced-choice studies would similarly jump to 70-80% power. This would constitute significant progress.
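The power argument can be made concrete with a toy calculation. In a ganzfeld-style four-choice design the chance hit rate is 25%; suppose, purely for illustration, a true hit rate of 32% (the figure is an assumption here, not a claimed value). A normal-approximation sketch of the one-tailed binomial test's power:

```python
from math import sqrt
from statistics import NormalDist

def power_binomial(n, p0=0.25, p1=0.32, alpha=0.05):
    """Approximate power of a one-tailed binomial test with n trials,
    chance rate p0, and true rate p1 (normal approximation).
    p1 = 0.32 is an illustrative hit rate, not a measured one."""
    z_crit = NormalDist().inv_cdf(1 - alpha)  # ~1.645 for alpha = .05
    # Smallest hit count that reaches significance under the null:
    threshold = n * p0 + z_crit * sqrt(n * p0 * (1 - p0))
    # Probability of exceeding that threshold under the alternative:
    z = (threshold - n * p1) / sqrt(n * p1 * (1 - p1))
    return 1 - NormalDist().cdf(z)

print(round(power_binomial(100), 2))  # ~0.49: barely better than a coin flip
print(round(power_binomial(400), 2))  # ~0.93: quadrupling the trial count
```

Under these assumed numbers, quadrupling the trial count lifts power from roughly even odds to above 90%, which is the sort of gain the power-raising recommendations aim at.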
Just my two cents.