I think the problem is that parapsychologists do get the same level of trust as your regular run-of-the-mill scientist. I don't think people realize the level of mistrust that permeates science, because we are already intimately familiar with the ways the system is gamed in our own fields. The discrepancy isn't in how Radin, Bem, and Stevenson are treated in comparison to mainstream researchers; the discrepancy is in how you think those others are treated.
I think it's not impossible for a reasonable skeptic to reject the strong narrative of orthodoxy/heterodoxy that is frequently presented here, while still accepting that fringe researchers face obstacles unlike those of their mainstream colleagues, some of which operate independently of the level of evidence those researchers amass. This is not a revolutionary idea, nor does it require any remarkable speculation; on the contrary, it follows directly from well-understood principles in sociology and such evidence as it has been possible to acquire.
I am privy to the information, for example, that a fairly mainstream outlet had agreed to devote an entire research area of one of its journals to the publication of works on parapsychology, only to later rescind its invitation with a minimum of consideration for the scientists involved. The research area had been open for several months, during which a couple of parapsychologists had managed to solicit the participation of dozens of high-profile contributors, before it was cancelled without any explanation. When the authors were unable to locate their venue, they contacted said parapsychologists, who in turn contacted the chief editors of the journal and waited two weeks for a reply. When it finally came, the management claimed the topic had never received approval from certain chief editors who had opposed it from the beginning; cited the fact that it would detract from the prestige of the journal; and then explained that, in reality, the creation of the research area had been an "anomaly", and that it had been posted accidentally.
I say this affected by no small amount of dislike, for on the one hand I really would like to expose this particular journal's ethical misconduct (perhaps even cowardice), but on the other, I am restrained by the fact that those of us who know about the incident have been enjoined to reveal no specifics, so that psi papers may continue to be allowed publication there.
I have also mentioned, on several previous occasions, that there are at least three very high-profile physicists interested in psi research who will not reveal their identities lest their careers suffer as a result.
I suspect that the average scientist running across something by Radin or Bem is going to suspect them of the same kinds of shenanigans they see others in their own field using to obtain 'significant' results. And the problem is that Radin and Bem make it too easy to call "shenanigans". As far as I know, Bem has never owned up to the exploratory nature of at least some of the studies he published in "Feeling the Future", nor to the size of the pool from which he selected those studies.
As I recall, we only had suggestive evidence for this. If I see Bem at the PA conference this year, I will be sure to ask him about it. You were correct in suggesting that Bem would not reply to my email, BTW, but then again his inbox probably contains a great many unread and unconsidered emails.
Bem presents a meta-analysis where half of the included studies are "personal communication". That's how drug companies go about making their useless or dangerous products look good, so it's not a surprise that scientists will roll their eyes in this case as well.
I've looked at the MA; the references show that eight studies out of the more than 50 considered were personal communications. If you were treating all non-peer-reviewed studies as personal communication, that may have been the source of your mistake. But notice that (1) the MA included a comparison between the two categories of study, finding that their effect sizes had substantially overlapping 95% CIs and thus were not significantly different from each other, and (2) a whole raft of Moulton's negative psi studies were never peer reviewed, yet made it into the MA because of its efforts to combat publication bias. Personal communication is one way a researcher can avoid taking mainstream journals' unreliable pool of published studies at face value.
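For anyone who wants to check that kind of subgroup comparison themselves: when a meta-analysis reports two pooled effect sizes with 95% CIs, the usual approach is a z test on their difference, recovering standard errors from the CI widths, which is stricter than eyeballing CI overlap. A minimal sketch, with made-up numbers standing in for the MA's actual estimates:

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Recover a standard error from a 95% CI (normal approximation)."""
    return (hi - lo) / (2 * z)

def z_for_difference(es1, ci1, es2, ci2):
    """z statistic for the difference between two independent effect sizes."""
    se1, se2 = se_from_ci(*ci1), se_from_ci(*ci2)
    return (es1 - es2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical figures: peer-reviewed vs. personally communicated subsets.
z = z_for_difference(0.20, (0.10, 0.30), 0.15, (0.00, 0.30))
print(round(z, 2))  # 0.54, well below 1.96: no significant difference
```

The point of the exercise is only that "not significantly different" is a computable claim, not a judgment call about how the CIs look on a forest plot.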
I think Johann and Maaneli are on the right track (if I've understood Johann's description correctly) with respect to getting the kinds of results regular run-of-the-mill scientists are used to seeing. I'm looking forward to seeing their paper on the subject.
One of the things our paper emphasizes, with respect, is that the results found in parapsychology are the kinds of results regular run-of-the-mill scientists are used to seeing, but that they can also get better.
I would like to take the opportunity, also, to dispel the notion that Maaneli and I have been conducting psi experiments (as someone speculated here). Unfortunately, it is not so. Our power predictions derive primarily from Maaneli's work exploring the most successful results of prior meta-analyses (with a history of consistency which we document), and a series of straightforward calculations. Making use of moderator variables, we can significantly boost power.
An example of a strong moderator variable for psi is participant selection: in Storm et al. (2010), Honorton & Ferrari (1984), and Storm et al. (2012), studies with selected participants show considerably higher effect sizes than studies with unselected participants, and this finding has held even for the Milton & Wiseman database. Honorton's three-predictor model, likewise, produced hit rates in excess of 40% in two prospective replications, one of them independent. Studies with creative subjects across the ganzfeld database also show hit rates above 40%. These sorts of observations suggest a route to more than doubling power. It's really no more complicated than that.
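A back-of-envelope calculation shows why a moderator like participant selection moves power so much. Against the four-choice ganzfeld's 25% chance baseline, a true hit rate around 31% (roughly the overall figure in the ganzfeld databases) versus one around 40% (the selected/creative-subject figure above) changes the power of a simple one-sided proportion test dramatically at a fixed number of trials. A sketch under the normal approximation; the specific numbers are illustrative, not from our paper:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_one_prop(p0, p1, n, z_alpha=1.645):
    """Power of a one-sided z test that the hit rate exceeds chance p0,
    given a true hit rate p1 and n trials (normal approximation)."""
    se0 = math.sqrt(p0 * (1 - p0) / n)   # SE under the null
    se1 = math.sqrt(p1 * (1 - p1) / n)   # SE under the alternative
    crit = p0 + z_alpha * se0            # decision threshold on the hit rate
    return 1 - norm_cdf((crit - p1) / se1)

# 50 trials, 25% chance baseline (four-choice ganzfeld):
print(round(power_one_prop(0.25, 0.31, 50), 2))  # unselected: ~0.27
print(round(power_one_prop(0.25, 0.40, 50), 2))  # selected:   ~0.76
```

On these assumed figures, selection nearly triples power at the same number of trials, which is the sense in which a single moderator can more than double it.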