I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How

Bucky

Member
http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800

...
We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.

Here’s how we did it.

...

Fascinating... and scary :)
 
It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem.
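
To make the mechanism concrete, here is a minimal simulation (a sketch, not the hoaxers' actual analysis; the 18-outcome figure comes from the article, the rest is made up). Measure enough outcomes on the same null data and something will cross p < 0.05 by chance alone:

# Simulate one p-hacking tactic: measure many outcomes on data with
# NO real effect, then report whichever outcome crosses p < 0.05.
# Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_outcomes, n_per_group = 1000, 18, 8
false_positives = 0

for _ in range(n_trials):
    # Both groups come from the SAME distribution: any "effect" is noise.
    control = rng.normal(size=(n_outcomes, n_per_group))
    chocolate = rng.normal(size=(n_outcomes, n_per_group))
    p_values = [stats.ttest_ind(control[i], chocolate[i]).pvalue
                for i in range(n_outcomes)]
    if min(p_values) < 0.05:  # cherry-pick the one "significant" outcome
        false_positives += 1

print(f"'Significant' result in {false_positives / n_trials:.0%} of null trials")
# With 18 independent outcomes, expect roughly 1 - 0.95**18, i.e. about 60%.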

Indeed. Everyone should read that Nature article linked here. Much food for thought regarding studies with small effect sizes.
 
The trick is getting people to be as critical when looking at studies which support their beliefs as they are with studies which contradict their beliefs.

Really, who wants to question a study which lets you eat chocolate?!

Linda
 
So there are people who lie and put together elaborate hoaxes based on actual data? Really? Wow. What stunning new info. lol

Also, his hoax aside, it's an obvious fact that chocolate (as in cacao) can help with weight loss.
 
Indeed. Everyone should read that Nature article linked here. Much food for thought regarding studies with small effect sizes.

Indeed! That was a good paper. But you cut your quote off before the most important line:

It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem. Most scientists are honest and do it unconsciously.

While we certainly want to avoid fraud, it's the unconscious p-hacking by honest scientists that is, IMO, a much bigger issue, as it happens much more often.

I recently discovered a new meta-research group out of Stanford that is doing work similar to the Cochrane Collaboration. It's called the Meta-Research Innovation Center at Stanford. Their tagline is: Advancing Research Excellence.

Their Research page has links to many papers that some here may find interesting and relevant. Ioannidis shows up in about 40 papers on the Faculty Publications page! I've only just started going through them.
 
Doesn't apply to any psi research, but it's interesting. P hacking can only occur when you're fishing through multiple avenues for a statistically significant result and marginally acceptable statistical significance is enough. That. just. doesn't. happen. in psi research. No one pays any attention to a p value of .05 in parapsychology. Even .001 isn't considered enough most of the time.

Also, the intense skepticism functions to discourage playing with the data.
 
Doesn't apply to any psi research, but it's interesting. P hacking can only occur when you're fishing through multiple avenues for a statistically significant result and marginally acceptable statistical significance is enough. That. just. doesn't. happen. in psi research. No one pays any attention to a p value of .05 in parapsychology. Even .001 isn't considered enough most of the time.

Also, the intense skepticism functions to discourage playing with the data.
Sorry, but this is ridiculously wrong. Examples of this very thing abound. Plus the p-value threshold in common use in parapsychology is 0.1 (i.e. a one-tailed 0.05), not 0.001, let alone something smaller. And the "intense skepticism" is ignored or dismissed.

Have any parapsychology journals tied publication to pre-registration, for example?

Linda
 
If there is a field of study where p-hacking has been taken into consideration, it is psi research. Those involved in the experiments have literally been accused of it by skeptics for a while now.

If anything, this article is more relevant when discussing how the media sensationalizes studies even though they are absolutely incapable of perusing the data. I think that was the author's point: he notes that the newspaper didn't even check his background for credentials or verify whether the "institute" was legit.
 
Just FYI... The problem of p-hacking is known to parapsychologists - for example, look at the recent meta-analysis of the precognition experiments by Bem, Tressoldi, Rabeyron and Duggan. They specifically addressed the p-hacking concerns and performed a statistical test designed to detect p-hacking in a set of meta-analysed studies (see pages 23-25). No p-hacking was found.
Please note that they did not quite do the right test. They did not find evidence of "deep" p-hacking. But looking only for deep p-hacking will miss many cases as it is an insensitive test.

"The two-tailed sign test with a p = 0.025 threshold (above) and the tests proposed by Simonsohn et al. [41] can detect severe p-hacking, but are insensitive to more modest (and arguably more realistic) levels of p-hacking. This is true especially if the average true effect size is strong, as the right skew introduced to the p-curve will mask the left skew caused by p-hacking. A more sensitive approach to detect p-hacking is to look for an increase in the relative frequency of p-values just below 0.05, where we expect the signal of p-hacking to be strongest."

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4359000/

If we look at their suggested test - comparing the bins just below 0.05 - we see that there are 3 times as many studies in the 0.05 bin as there are in the 0.04 bin, which is very suggestive of p-hacking. This isn't particularly exact - you need to see the actual distribution of p-values. But it sure is suspicious.
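
For anyone who wants to try it, here is a rough sketch of that bin-comparison test (my own toy implementation of the idea in the Head et al. paper linked above, not their code; the counts below are placeholders echoing the 3-to-1 observation, not the actual meta-analysis data):

# Compare the two bins of "significant" p-values just below 0.05.
# A genuine effect makes the p-curve right-skewed, so the upper bin
# should hold FEWER results than the one below it; an excess in the
# upper bin is the fingerprint of p-hacking. Requires scipy >= 1.7.
from scipy import stats

def phack_bin_test(p_values, width=0.01):
    upper = sum(0.05 - width < p <= 0.05 for p in p_values)              # (0.04, 0.05]
    lower = sum(0.05 - 2 * width < p <= 0.05 - width for p in p_values)  # (0.03, 0.04]
    # Null: a result in these two bins is equally likely to land in either.
    result = stats.binomtest(upper, upper + lower, p=0.5, alternative="greater")
    return upper, lower, result.pvalue

demo = [0.048] * 9 + [0.035] * 3   # hypothetical 3-to-1 split
print(phack_bin_test(demo))        # small n, but the excess is the red flag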

Linda
 
Sorry, but this is ridiculously wrong. Examples of this very thing abound. Plus the p-value threshold in common use in parapsychology is 0.1 (i.e. a one-tailed 0.05), not 0.001, let alone something smaller. And the "intense skepticism" is ignored or dismissed.

Have any parapsychology journals tied publication to pre-registration, for example?

Linda

Linda, this kind of misinformation is shameful.
 
Linda, this kind of misinformation is shameful.

It's not "misinformation" just because it contradicts what you've been led to believe.

Like I said, the trick is getting people to be as critical when looking at studies which support their beliefs as they are with studies which contradict their beliefs.

Linda
 
It's not "misinformation" just because it contradicts what you've been led to believe.

Like I said, the trick is getting people to be as critical when looking at studies which support their beliefs as they are with studies which contradict their beliefs.

Linda
No. I've studied this stuff including the skeptical garbage. It's not a matter of mere opinion because the facts are quite clear. You're misinforming.
 
No. I've studied this stuff including the skeptical garbage. It's not a matter of mere opinion because the facts are quite clear. You're misinforming.
Isn't that just an example of what I said earlier? I can believe that you looked at the "skeptical garbage" critically. Your prejudices would ensure that you would do so. But I would be much more hesitant to believe that you have studied parapsychology critically.

For one, you have recently come up with some statements which were quite markedly mistaken. For example, in the Presentiment thread you claimed "they were also published in an extremely prestigious journal so flaws in the protocols can be presumed to be imaginary". Not only are there published articles which outline the flaws in Bem's research (in spite of their "imaginary" existence), but I pointed you to research showing that he is hardly alone: systematic patterns of flaws turn up even in extremely prestigious journals (Science, in this case).

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0114255

Second, in this very thread Vortex provided an example of parapsychology research where p-hacking would be suspected based on using the recommended test for p-hacking (not the insensitive test which the researchers used), which contradicts your claim that p-hacking never happens in parapsychology research.

http://www.skeptiko-forum.com/threa...-helps-weight-loss-heres-how.2288/#post-68599

Third, plenty of examples can be found of parapsychology researchers using the threshold of p<0.1 (one-tailed 0.05) for their significance testing, rather than the much lower threshold of p<0.001. Just looking at Bem's paper, here are some of the larger p-values he specified as "significant".

http://dbem.ws/FeelingFuture.pdf
(p-values given as two-tailed values for consistency)
0.02
0.062
0.022
0.070
0.046
0.028
0.082
0.078

Another way to find more examples is to look through the ganzfeld database and pick out those with z-scores near 1.65 (a p<0.1 cut-off) and see if they were reported as "significant". I did so for the first one from this list (Morris et al. 2003) and discovered that it was reported as "statistically significant" with its p=0.10 p-value.
http://www.deanradin.com/FOC2014/Storm2010MetaFreeResp.pdf (appendix A)
http://www.koestler-parapsychology.psy.ed.ac.uk/cwatt/Documents/WattJP07.pdf (table 1)
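
(The z-to-p correspondence behind that screen is easy to check; a quick sketch, assuming the usual normal approximation:)

# z = 1.645 corresponds to a one-tailed p of about 0.05, which is a
# two-tailed p of about 0.10 - the cut-off used for the screen above.
from scipy import stats

def p_from_z(z):
    one_tailed = stats.norm.sf(z)      # survival function: 1 - CDF
    return one_tailed, 2 * one_tailed  # (one-tailed, two-tailed)

for z in (1.645, 1.96):
    one, two = p_from_z(z)
    print(f"z = {z}: one-tailed p = {one:.3f}, two-tailed p = {two:.3f}")
# z = 1.645: one-tailed p = 0.050, two-tailed p = 0.100
# z = 1.96:  one-tailed p = 0.025, two-tailed p = 0.050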

Please note that this is a minuscule sampling of all that is out there.

Linda
 
Isn't that just an example of what I said earlier? I can believe that you looked at the "skeptical garbage" critically. Your prejudices would ensure that you would do so. But I would be much more hesitant to believe that you have studied parapsychology critically.

For one, you have recently come up with some statements which were quite markedly mistaken. For example, in the Presentiment thread you claimed "they were also published in an extremely prestigious journal so flaws in the protocols can be presumed to be imaginary". Not only are there published articles which outline the flaws in Bem's research (in spite of their "imaginary" existence), but I pointed you to research showing that he is hardly alone: systematic patterns of flaws turn up even in extremely prestigious journals (Science, in this case).

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0114255

Second, in this very thread Vortex provided an example of parapsychology research where p-hacking would be suspected based on using the recommended test for p-hacking (not the insensitive test which the researchers used), which contradicts your claim that p-hacking never happens in parapsychology research.

http://www.skeptiko-forum.com/threa...-helps-weight-loss-heres-how.2288/#post-68599

Third, plenty of examples can be found of parapsychology researchers using the threshold of p<0.1 (one-tailed 0.05) for their significance testing, rather than the much lower threshold of p<0.001. Just looking at Bem's paper, here are some of the larger p-values he specified as "significant".

http://dbem.ws/FeelingFuture.pdf
(p-values given as two-tailed values for consistency)
0.02
0.062
0.022
0.070
0.046
0.028
0.082
0.078

Another way to find more examples is to look through the ganzfeld database and pick out those with z-scores near 1.65 (a p<0.1 cut-off) and see if they were reported as "significant". I did so for the first one from this list (Morris et al. 2003) and discovered that it was reported as "statistically significant" with its p=0.10 p-value.
http://www.deanradin.com/FOC2014/Storm2010MetaFreeResp.pdf (appendix A)
http://www.koestler-parapsychology.psy.ed.ac.uk/cwatt/Documents/WattJP07.pdf (table 1)

Please note that this is a minuscule sampling of all that is out there.

Linda

I'm not even going to try to sort this garbage out. There is a long line of people who have gone to great lengths to expose your deceits, and there's no need for me to repeat it. Your reputation precedes you.
 
I'm not even going to try to sort this garbage out. There is a long line of people who have gone to great lengths to expose your deceits, and there's no need for me to repeat it. Your reputation precedes you.
Okay, so in actuality, you seem uninterested in critically examining your claims/beliefs, even though you sincerely believe that you are. I get that (and this is "other stuff", after all). But that's why the guy in the OP found it easy to fool people.

"Fascinating and scary" indeed.

Linda
 
Third, plenty of examples can be found of parapsychology researchers using the threshold of p<0.1 (one-tailed 0.05) for their significance testing, rather than the much lower threshold of p<0.001. Just looking at Bem's paper, here are some of the larger p-values he specified as "significant".

http://dbem.ws/FeelingFuture.pdf
(p-values given as two-tailed values for consistency)
0.02
0.062
0.022
0.070
0.046
0.028
0.082
0.078


The paper mostly dealt with one-tailed tests; did you convert them to two-tailed? I'm trying to track them down in the paper. What were the original values?
 
The paper mostly dealt with one-tailed tests; did you convert them to two-tailed?

Just so we're talking about the same numbers across the field.

I'm trying to track them down in the paper. What were the original values?

Divide the numbers I listed by 2. They are in order, I think. I skipped over some of the p-values (mostly the ones for secondary/tertiary analyses and the smaller ones).
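
(Spelled out, a two-tailed p is just double the one-tailed p, so halving the listed values recovers the originals:)

# Bem reports one-tailed p-values; doubling gives the two-tailed
# equivalent, so halving the listed two-tailed numbers recovers them.
listed_two_tailed = [0.02, 0.062, 0.022, 0.070, 0.046, 0.028, 0.082, 0.078]
original_one_tailed = [p / 2 for p in listed_two_tailed]
print(original_one_tailed)
# [0.01, 0.031, 0.011, 0.035, 0.023, 0.014, 0.041, 0.039]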

Linda
 