Ahh. We get to the crux of where I think the misinterpretation of the DS results usually lies: in defining what can actually cause the collapse. Some used to say observation. Then it was amended to human observer. Then "conscious" observer. And on it goes. Let me spend some time with the video and see what I can glean.
Thanks!
OK, so after watching the video-
I have to say, this is great data! Worth a watch for anyone who hasn't seen it.
There aren't many shortcomings here, but there is one big one he isn't touching on, and it may be a critical part of using this type of material as a truly convincing "game-changer": the intention of the testers. It has been shown that an experimenter's intention can impact the experiment. Obviously this affects the design of the experiment and the analysis of the resulting data, but I think it goes beyond those points and gets to the actual results varying with intention. Intention is in many cases the exact thing we are trying to test, but in this case I refer to the tester, not the tested.
Researchers have always tried to mitigate this effect by constraining and changing the test itself. As we have often discussed, this eventually degrades the test's ability to show any meaningful result, which plays right into the skeptic's argument. Which means: conversation over, skeptics win.
I would love to see a multi-blind experiment that attacks this effect from a different angle. Rather than bastardizing the test, simply design the data collection and analysis aspects of the test to account for it.
For example: set up the test so two groups of experimenters (those who believe they are proving the effect and those who believe they are disproving it) run the experiment according to a mutually agreed protocol, and collect and then independently analyse their own results. In addition, each team is also given the results in three blind subsets: one containing (or a preponderance of) the "provers" data, one the "dis-provers" data, and one a random mix.
I'd then like to compare the two teams' analyses: do they reach similar conclusions, and do their findings differ between their own data, the other team's data, and the randomized set? A rough sketch of how the blinding might work follows below.
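To make the blinding step concrete, here is a minimal sketch in Python of how the three subsets might be assembled. Everything here is my own illustrative assumption (the shape of the trial records, the `bias` fraction, the sealed key held by a third party); it's not from the video, just one way the protocol could be implemented.

```python
import random

def make_blind_subsets(trials, subset_size, bias=0.8, seed=None):
    """Assemble three blinded subsets from labelled trial records.

    `trials` is assumed to be a list of dicts like
    {"team": "provers" or "disprovers", "outcome": ...} (names illustrative).
    Returns (blinded, sealed_key): `blinded` maps an opaque code to a list
    of label-stripped outcomes; `sealed_key` maps code -> subset type and
    would be held by a neutral third party until both analyses are in.
    """
    rng = random.Random(seed)
    provers = [t for t in trials if t["team"] == "provers"]
    disprovers = [t for t in trials if t["team"] == "disprovers"]

    def biased_sample(majority, minority):
        # A subset that is a preponderance (`bias` fraction) of one team's data.
        n_major = int(subset_size * bias)
        subset = (rng.sample(majority, n_major) +
                  rng.sample(minority, subset_size - n_major))
        rng.shuffle(subset)
        return subset

    subsets = {
        "mostly_provers": biased_sample(provers, disprovers),
        "mostly_disprovers": biased_sample(disprovers, provers),
        "random_mix": rng.sample(trials, subset_size),
    }

    blinded, sealed_key = {}, {}
    for kind, subset in subsets.items():
        code = f"S{rng.randrange(10**6):06d}"  # opaque label hides subset type
        blinded[code] = [{"outcome": t["outcome"]} for t in subset]
        sealed_key[code] = kind
    return blinded, sealed_key
```

Each team would analyse all three coded subsets without knowing which is which; only after both teams commit their conclusions does the third party unseal the key. If results really do track expectations, the mostly-provers and mostly-disprovers subsets should come out differently even though everyone followed the same protocol.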
I would predict that even when following identical protocols, the two teams' results would track their expectations. This effect is, I think, what underlies much of the confusion and disagreement in these sorts of experiments, and why skeptics, even when they take the time to try to duplicate the results, can't.
Bottom line: the intention of the tester may play a major part in the process. A test like this might peel another layer off this onion...
Dean. Are you reading this?
Actually I've asked Dean if he would comment. Let's see.