David Bailey
Member
I think that one defence of Alex's choice of book title is to use President Clinton's argument - it depends on the definition of the word 'is'!
Nobody argues that science was right about a hell of a lot of stuff in the past, but then it bloated out and became insanely arrogant so that right now it is wrong about a hell of a lot!
What all those fake papers really tell us is that peer review is badly broken by now. If the reviewers could be fooled by computer-generated gibberish, how much easier would it have been to fool them with papers that used fake data, or performed unreasonable adjustments to data to obtain the desired result?
To me, this reveals a deeper level of pretence in science - for just how long does it take to read a paper well enough to say whether it is suitable for publication? Some papers might need months of work to unravel, or would really require some attempt at replication to validate them - yet reviewers simply can't afford to spend that amount of effort when the system offers them no reward for exploring others' work.
As has been revealed in climate science, often the actual data on which a paper is based is not available, for a variety of reasons - so reproduction is impossible. Like many other areas of modern science, many published results are based on huge computer models. Anyone who believes these are an unproblematic tool should read this:
http://www.wsj.com/articles/confessions-of-a-computer-modeler-1404861351
How is any potential reviewer of such work supposed to operate? Is he supposed to obtain the program in source form, check that it produces the specified result, then analyse the whole thing for bugs and valid assumptions, and to determine what range of alternative conclusions were possible by tweaking the code - as the above article discusses?
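To see why tweaking a model's assumptions matters so much, here is a deliberately toy sketch (my own illustration, not from the article): a one-line projection model where a small change to a single assumed parameter substantially changes the headline conclusion.

```python
# Toy illustration: a projection model whose conclusion shifts
# dramatically with a small tweak to one assumed parameter.

def project(initial, growth_rate, years):
    """Compound a quantity forward under an assumed annual growth rate."""
    return initial * (1 + growth_rate) ** years

# Assumed rate of 1.0% per year vs a tweaked 1.5% per year,
# projected 50 years out from a baseline value of 100.
baseline = project(100.0, 0.010, 50)
tweaked = project(100.0, 0.015, 50)

print(round(baseline, 1))  # roughly 164.5
print(round(tweaked, 1))   # roughly 210.5
```

A reviewer who only checks that the code runs would never notice that half a percentage point in one buried assumption shifts the 50-year projection by some 25-30% - and real models have hundreds of such knobs.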
Given that we all depend on it for so much, I find the state of modern science quite scary.
David