I think I'm just confused by the idea of unrealistic expectations. I think the drug companies upset about lack of replicable data had pretty reasonable expectations.
Are you sure about that? When you look at the papers that Sheldrake cites in that article, you see some different themes. The first article notes that many researchers in pharmaceutical companies are well aware of the need to check that initial studies hold up. The article describes the companies' in-house attempts to conduct their own replications. The frustration they express is that disappointingly few are successfully replicated. They do this precisely to avoid wasting too much money on an unproductive line of research. The article notes that this is pretty well known.
The second link describes research where the investigators did not bother to validate the initial studies. The frustration of these teams was much worse, because they ended up wasting a lot of time and money. The article suggests not doing that (among other advice)!
Note that in that second article the authors say this shouldn't be interpreted as the system being fundamentally broken (I'm paraphrasing). They note there is a lot of really good research out there. The rest of the article is geared, similarly to the Ioannidis paper, toward providing advice on best practices. Note: the authors took pains to consult some of the original authors and noted they were perfectly competent and serious researchers.
Same with the people who thought that a major theory of psychology was built on some kind of solid foundation?
From the article Chuck posted:
Another good example of the kinds of things I've long been posting about. When you read the paper that this Slate article is referring to, you see that it's not really describing an entire field based on a false premise that no one tried to replicate. Rather, that entire field was basically replications in one form or another! The problem, however, is that what you had were a lot of variations on a theme, dominated by small-scale, underpowered studies. So even when meta-analysis was done, because it was based on small, underpowered studies (along with some other issues involving topics we've often discussed, such as selection biases), you had meta results that likely showed effects that weren't there. (Note: this is in line with a study I posted awhile back that confirmed that one or two fully powered studies are more reliable than entire meta-analyses filled with underpowered studies.)
When these guys did their big, fully powered study, the effect all but disappeared. Note, the authors point out potential issues with their own paper too, and acknowledge that they may be off as well, but that the issue is worth pursuing.
The authors aren't chagrined about the system as a whole either and provide advice.
What you are seeing, I suggest, is not evidence of fundamental flaws but the emergence of a better understanding of how to produce reliable results.
These findings are important and should play a big role in the future allocation of funds and publishing decisions. But I'm not sure we can blame those who came before, or blast them as boobs. It takes time to figure this stuff out. These results are not always intuitive. The research had to be done first, and meta-research was not always easy. It's the advent of the Internet, I'd guess, that has really allowed the field to burst in the last decade or so. The capacity for studies at this scale would have been extremely difficult to achieve earlier.
Note that there are a lot of parallels between the history of these ego experiments and the history of parapsychology. It's worthwhile reading closely for many on this forum, and I think it suggests certain questions to ask in this field as well.
Gotta say, it's nice that others are starting to draw attention to these studies on this forum. I've been trying to generate discussion on them for years! So thank you!
You can find all sorts of similar cases I'm sure. Remember, we both agree there are flaws and abuses. But we shouldn't evaluate a system as a whole based solely on its failures. There are many other factors. And there is no system that will be failure free. Evaluating the system requires a much broader view.
It's the "what's next" that's disturbing. There are already a lot of revealed issues, but without some major efforts at replicating the various findings in every field, it's hard to know how far the rot extends.
If you read those papers I think you'll see some pretty good suggestions for ways to move forward.
The rest of your post seems to be more of the us-vs-them skeptic/proponent stuff, which tends to be a discussion killer. This has been a great discussion so far and it would be a shame to kill it, so I'm not going to address those points.