Arouet,
I think 'generalities' do have a place sometimes - you have to lift your head above the trees and look at the view.
I agree with that. Both are valuable in their proper context - I have views on both. (Though in this case the generality I was referring to was Soulatemen's critique of me personally.)
The thing I never feel you realise is that no experiment is above criticism - no experiment in any scientific discipline!
I know you've said this before and I'd meant to respond then, but I may have forgotten to get back to that post. I do recognise that no experiment is perfect - or rather, that it is exceptionally difficult to design a perfect experiment. But the standard of proof is not certainty, and that is not the standard I am applying. I'll come back to this below.
In most areas of science there is a consensus as to when an experiment is well done, and people who want to criticise should do so by performing some experimental work themselves, rather than publishing a theoretical criticism - particularly if it involves dishonesty.
Yes and no. There is nothing wrong with critics also performing experimental work of their own, but I don't think we want to establish that we will only accept criticism as valid if it comes from people who are going to personally perform the experiments themselves. That is not how the system is designed, nor should it be - and I don't think you believe that either. When we frame everything in terms of the culture war, pithy lines like that have rhetorical force, but I don't think they stand up to scrutiny. All science should be open to critique, and should expect it. Pointing out that the critic doesn't do their own experiments is, at the end of the day, simply an ad hominem attack as far as the merits of their argument are concerned. Now, it may be more relevant vis-a-vis an evaluation of the person's credentials and experience - but that is separate from the critique itself.
Preferably they should not just try and fail to reproduce a result, but make a huge effort to succeed and talk to the original author(s) before accepting that they are right and the other guy made a mistake. I was once in a position to do just that, and we did all of the above (but no dishonesty was involved), and ultimately published a paper with the original author as one of the new authors (we needed to stay on good terms with him for various reasons :) ).
Again - there is absolutely nothing wrong with going out and getting involved at that level. It's just not pragmatic - which I think you recognised when you said "preferably". Moreover, it wouldn't change anything vis-a-vis this issue, since all that would happen is that your own experiment would be subject to criticism from others who have no intention of going out and doing their own replications and investigations.
The ψ-science debate really is stuck because it breaks that normal scientific approach. No science would progress if it were done in that way.
Look, I'm obviously not a scientist and so am not in that milieu. But what I've understood over the last few years, from an outsider's perspective, is that it is precisely the cycle of perform an experiment, write it up, subject it to critique from the wider scientific community, modify the experiment, rinse and repeat, that leads to progress.
You talk about mistakes, and while that's certainly part of the process of evaluation, I think it doesn't quite get at what we should be doing when approaching these studies. It's not just about whether the researchers were careful (although that's part of it). What we need to do is consider the question being asked, look closely at the methodology used, and try to determine how reliably that methodology can help us answer the question. This is often a very difficult task in itself. And it is particularly difficult when the question being asked is not what something is, but rather what it is not.
It's true that there are no perfect experiments. But that's looking at it the wrong way round. The way to do it, I would suggest, is from the bottom up: given the methodology used, how should we interpret the results? How does the methodology relate to the question asked? How closely do the conclusions relate to the data collected? What is the risk of bias, and how important is that particular risk in the context of the question we are asking? What are the limitations of the methodology used? How much confidence should we require in our conclusions?
None of these are easy questions. But they are questions that have to be asked, and they are questions that should be discussed. It can be less fun, more frustrating, and much harder than engaging in tribalistic Us vs. Them cheerleading and goading. But if progress is going to be made from the scientific perspective - and particularly from the perspective of introducing a possible new paradigm, or gaining a better understanding of the old one - that is where it is going to be made.