CDC Autism Whistleblower Admits Vaccine Study Fraud

Hmmm... an article in an 'open access' journal published in Nigeria, which isn't indexed in Medline, from an anti-vaccine/anti-fetal-stem-cell group...

Maybe it would help if this kind of stuff wasn't published under conditions which scream "bogus"?

Of course, when the OP researchers' choice of whether or not to make note of a spurious association turns into "fraud", it's probably already pretty hopeless.

Linda
 
It's amusing how there are still people throwing labels such as "anti-vaccine" at anyone who is looking into making vaccines safer for everyone.
It's like saying that engineers trying to improve the safety of vehicles are "anti-vehicle", or that those researching food safety are "anti-food" :D

Of course, if one characterizes the risk of a crippling illness as a "spurious association", the sense of hopelessness is indeed of cosmic proportions.

It's a pity you didn't take the opportunity to shut up, for a change. :(
 
It's amusing how there are still people throwing labels such as "anti-vaccine" at anyone who is looking into making vaccines safer for everyone.
It's like saying that engineers trying to improve the safety of vehicles are "anti-vehicle", or that those researching food safety are "anti-food"
I wouldn't call someone trying to make vaccines safer an anti-vaxxer. I would call someone like Jenny McCarthy an anti-vaxxer.

~~ Paul
 
It's amusing how there are still people throwing labels such as "anti-vaccine" at anyone who is looking into making vaccines safer for everyone.

I call people who are looking into making vaccines safer "vaccine researchers". I call people who use poor science and ignorance to scare people off vaccine use "anti-vaxxers".

Of course, if one characterizes the risk of a crippling illness as a "spurious association", the sense of hopelessness is indeed of cosmic proportions.

The problem wasn't that the researchers did not report on a risk of a crippling illness. The problem was that they didn't report on a spurious association. If this were any other research report, it wouldn't be an issue (wasting space reporting on findings regarded as spurious is generally discouraged). But the CDC knows that when it comes to vaccines, there will be groups committed to making a volcano from a grain of sand. So they should have known better than to leave anything out, regardless of whether the association was real. And that's basically what the CDC and the "whistleblower" were apologizing for doing.

Linda
 
The problem wasn't that the researchers did not report on a risk of a crippling illness. The problem was that they didn't report on a spurious association. If this were any other research report, it wouldn't be an issue (wasting space reporting on findings regarded as spurious is generally discouraged).

What a weird spin you've put on Thompson's statement [1]... Like you, I didn't think this issue was about whether MMR causes autism, but I did think it was clear that Thompson thought he and his CDC colleagues had not followed agreed research protocols, and deliberately omitted the reporting of statistically significant data... post hoc.

I thought you generally didn't like that sort of thing wherever it happens, not just at the CDC, because it allows researchers to report (or not, in this case) associations after the fact, which may be biased?

The way you've phrased your response, it almost seems like you're saying the CDC should have known better on such a sensitive issue... but otherwise it's generally OK not to follow agreed research protocols.

[1] http://www.morganverkamp.com/august-27-2014-press-release-statement-of-william-w-thompson-ph-d-regarding-the-2004-article-examining-the-possibility-of-a-relationship-between-mmr-vaccine-and-autism/
 
What a weird spin you've put on Thompson's statement [1]... Like you, I didn't think this issue was about whether MMR causes autism, but I did think it was clear that Thompson thought he and his CDC colleagues had not followed agreed research protocols, and deliberately omitted the reporting of statistically significant data... post hoc.

There are two separate issues. Was there a statistically significant difference found in a post hoc sub-sub-sub-group? Do the researchers and the CDC have any reason to think that, while there is no risk of autism from vaccines in everyone else, there is a risk in that one post hoc sub-sub-sub-group?

The important question is: is there any indication of a real increased risk to be worried about? And the researchers and the CDC make it clear that this research does not support the idea of an increased risk in any group, including that post hoc sub-sub-sub-group. Even William Thompson states he would advise that children of any race be vaccinated, so he does not think this is a real association either. It is almost inevitable with these epidemiological studies that one or more of these post hoc sub-groups will differ in some way. The next time it will be Hispanic girls, or children in a particular zip code, or whatever.
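
To put a rough number on "almost inevitable": if each subgroup comparison were an independent test at the usual 0.05 significance level, the chance of at least one false positive grows quickly with the number of comparisons. A minimal sketch (illustrative only; real subgroup tests are correlated, so the exact numbers differ):

```python
# Chance of at least one false positive among k independent tests at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20, 50):
    print(f"{k:2d} comparisons -> {1 - (1 - alpha) ** k:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64, 50 -> 0.92
```

With 20 subgroup tests, a "significant" hit somewhere is closer to a coin flip than a surprise.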

I don't know what the protocol violation was. They had general information on all subjects, plus a subset of subjects who also had Georgia birth certificates, which gave them more detailed information such as mother's education and age. The statistically significant sub-sub-sub-group was based on the general information. The subset with Georgia birth certificates did not show this difference. I'm guessing that the protocol pre-specified which sub-group analyses would be performed, and that it included race and age sub-groups on the entire study population, as well as on the subset with Georgia birth certificates. But the researchers reported on the race and age sub-groups using only the Georgia birth certificate data.

I thought you generally didn't like that sort of thing wherever it happens, not just at the CDC, because it allows researchers to report (or not, in this case) associations after the fact, which may be biased?

I don't like that sort of thing, and they should have reported it if it was part of their protocol. However, that doesn't make it a meaningful or a real association. The problem is usually the other way around. The researchers make a big fuss about some spurious association they found post hoc, as though it can be taken to be meaningful.

The way you've phrased your response, it almost seems like you're saying the CDC should have known better on such a sensitive issue... but otherwise it's generally OK not to follow agreed research protocols.

[1] http://www.morganverkamp.com/august-27-2014-press-release-statement-of-william-w-thompson-ph-d-regarding-the-2004-article-examining-the-possibility-of-a-relationship-between-mmr-vaccine-and-autism/

No, I think it's generally OK not to report spurious findings, but protocols should be followed. But the important thing in all of this is that regardless of what anyone did or did not do, the research does not support the idea that there is any increased risk of autism associated with the age at which the MMR vaccine is received, even in African American boys in a narrow age window.

Linda
 
I don't like that sort of thing, and they should have reported it if it was part of their protocol.

You've cleared that up.

One issue that I'm now struggling with (now that you've raised it) is this idea that a post hoc decision not to report a statistically significant but spurious association is OK, but a post hoc decision to report a statistically significant but spurious association is not OK.

I'm comparing these two scenarios:

1) if the researcher is biased towards not wanting to find/report an association (i.e. it would be advantageous for whatever reason not to find anything).

2) if the researcher is biased towards wanting to find/report any associations (i.e. it would be advantageous for whatever reason to find something).

I'm struggling to decide why a post hoc decision:

a) not to report statistically significant but spurious associations in scenario 1)

is any different from a decision

b) to report statistically significant but spurious associations in scenario 2)

What stands out is that the answer probably has something to do with your use of the term 'spurious'. So if there is a justifiable difference between a) and b), it probably has something to do with the process of deciding whether something is spurious or not.

How is the decision as to whether an association is spurious arrived at, and how is the risk of bias eliminated, in both 'a priori' and 'post hoc' decisions, for both scenarios?
 
You've cleared that up.

One issue that I'm now struggling with (now that you've raised it) is this idea that a post hoc decision not to report a statistically significant but spurious association is OK, but a post hoc decision to report a statistically significant but spurious association is not OK.

I'm comparing these two scenarios:

1) if the researcher is biased towards not wanting to find/report an association (i.e. it would be advantageous for whatever reason not to find anything).

2) if the researcher is biased towards wanting to find/report any associations (i.e. it would be advantageous for whatever reason to find something).

I'm struggling to decide why a post hoc decision:

a) not to report statistically significant but spurious associations in scenario 1)

is any different from a decision

b) to report statistically significant but spurious associations in scenario 2)

What stands out is that the answer probably has something to do with your use of the term 'spurious'. So if there is a justifiable difference between a) and b), it probably has something to do with the process of deciding whether something is spurious or not.

How is the decision as to whether an association is spurious arrived at, and how is the risk of bias eliminated, in both 'a priori' and 'post hoc' decisions, for both scenarios?
Those are good questions.

When you are planning a study, you need to be clear beforehand which variables you are going to measure and what the justification is for measuring each variable. Most of the variables measured in this study are either separate risk factors for autism (e.g. sex or maternal age) or are demographic factors used to generalize the results (e.g. race). There's no reason beforehand to think that black boys are uniquely at risk of autism from vaccines (this idea is solidly contradicted by the studies).

If you happen to find an association when data-dredging observational studies, which wasn't predicted beforehand, it's likely to be happenstance. Or you may have picked up on a factor which happens to be correlated with some other risk factors within that sub-group (e.g. maybe there was a pre-school community program targeting black boys for autism screening, which increased the diagnosis rate in that sub-group). A causal association is probably the least likely possibility, in general, and is doubly so in this case because of the mountain of evidence against the idea from all the previous research on autism.

An association which wasn't predicted beforehand, which popped out when testing multiple sub-groups, which is not plausibly causal, and for which there is already evidence against a causal link, is safely described as "spurious".
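
As a minimal sketch of how multiple sub-group testing manufactures these "pop-outs" (made-up numbers of my own, not the study's data): simulate a world where there is no real effect anywhere, then test twenty post hoc subgroups anyway.

```python
# Illustration with made-up numbers: every subgroup is drawn from the SAME
# population (no real effect anywhere), yet testing many post hoc subgroups
# routinely turns up "significant" differences by chance alone.
import math
import random

random.seed(1)

def two_prop_pvalue(k1, n1, k2, n2):
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (k1 / n1 - k2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

RATE, N, SUBGROUPS, STUDIES = 0.05, 400, 20, 200

false_positives = 0
for _ in range(STUDIES):
    for _ in range(SUBGROUPS):
        a = sum(random.random() < RATE for _ in range(N))  # "early vaccinated"
        b = sum(random.random() < RATE for _ in range(N))  # "late vaccinated"
        if two_prop_pvalue(a, N, b, N) < 0.05:
            false_positives += 1

print(f"average 'significant' subgroups per study: {false_positives / STUDIES:.2f}")
# prints roughly 1 -- i.e. about one spurious hit per study, every study
```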

As far as what should be reported... the study should specify which comparisons are relevant and why, before any results are known. As you point out, biases in the researchers can lead to selective reporting of the results, which can give a misleading impression. However, failing to report spurious results does not bias the results.

So to look at your scenarios...

Not reporting a spurious association...the comparison based on race and sex was not relevant to the main question (the risk of autism from the age at first MMR vaccination) but was relevant to controlling for other risk factors. The authors seem to have decided that there were better comparisons which addressed that issue and reported on those instead. In this case, failing to report on a particular comparison does not bias the results.

Reporting a spurious association...this does have the potential to bias the results. As soon as it gets treated as though it was predicted beforehand or that it wasn't found by testing multiple subgroups, it will lead to a misleading impression.

So whether or not a choice is egregious depends upon recognizing when an association is spurious, and when reporting, or failing to report, an association will bias the results.

Linda
 
Those are good questions.

When you are planning a study, you need to be clear beforehand which variables you are going to measure and what the justification is for measuring each variable. Most of the variables measured in this study are either separate risk factors for autism (e.g. sex or maternal age) or are demographic factors used to generalize the results (e.g. race). There's no reason beforehand to think that black boys are uniquely at risk of autism from vaccines (this idea is solidly contradicted by the studies).

If you happen to find an association when data-dredging observational studies, which wasn't predicted beforehand, it's likely to be happenstance. Or you may have picked up on a factor which happens to be correlated with some other risk factors within that sub-group (e.g. maybe there was a pre-school community program targeting black boys for autism screening, which increased the diagnosis rate in that sub-group). A causal association is probably the least likely possibility, in general, and is doubly so in this case because of the mountain of evidence against the idea from all the previous research on autism.

An association which wasn't predicted beforehand, which popped out when testing multiple sub-groups, which is not plausibly causal, and for which there is already evidence against a causal link, is safely described as "spurious".

As far as what should be reported... the study should specify which comparisons are relevant and why, before any results are known. As you point out, biases in the researchers can lead to selective reporting of the results, which can give a misleading impression. However, failing to report spurious results does not bias the results.

So to look at your scenarios...

Not reporting a spurious association...the comparison based on race and sex was not relevant to the main question (the risk of autism from the age at first MMR vaccination) but was relevant to controlling for other risk factors. The authors seem to have decided that there were better comparisons which addressed that issue and reported on those instead. In this case, failing to report on a particular comparison does not bias the results.

Reporting a spurious association...this does have the potential to bias the results. As soon as it gets treated as though it was predicted beforehand or that it wasn't found by testing multiple subgroups, it will lead to a misleading impression.

So whether or not a choice is egregious depends upon recognizing when an association is spurious, and when reporting, or failing to report, an association will bias the results.

Linda

That's an interesting explanation. I can see your logic that if there is already pre-existing data (more plentiful or stronger) which competes with an unexpected statistically significant association, you could reasonably label the unexpected association as 'spurious' and not report it.

I guess over time, not reporting 'spurious' associations would generally lead to only strong associations being reported in well researched fields.

If the research was novel (i.e. where there is no other existing data that you could compare your research to), then deciding whether something is 'spurious' or not would seem to become more difficult.

I'm still sitting on the fence though, because it seems to me that being humans, researchers still have bias blind spots (if I can call them that), and these methods can only reduce the impact of them, but not eliminate them entirely.
 
That's an interesting explanation. I can see your logic that if there is already pre-existing data (more plentiful or stronger) which competes with an unexpected statistically significant association, you could reasonably label the unexpected association as 'spurious' and not report it.

It's the other way around. Any association is likely to be a false positive, unless you have some sort of support for the idea that it is not. And even when you have support for the idea, there is still a good chance that it is false.

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
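
The linked paper makes this concrete with a positive predictive value formula, PPV = (1 - beta)R / (R - beta*R + alpha), where R is the pre-study odds that the tested relationship is real, alpha the significance level, and beta the type II error rate. A small sketch with illustrative priors of my own (not the paper's exact table entries):

```python
# PPV of a nominally "significant" finding (Ioannidis 2005):
# PPV = (1 - beta) * R / (R - beta * R + alpha),
# where R = pre-study odds that the tested relationship is real.
def ppv(R, power, alpha=0.05):
    beta = 1.0 - power
    return power * R / (R - beta * R + alpha)

# Illustrative priors only (my own numbers, not the paper's):
print(f"well-motivated hypothesis (R = 1:1):   PPV = {ppv(1.0,   0.8):.2f}")  # ~0.94
print(f"exploratory epidemiology  (R = 1:10):  PPV = {ppv(0.1,   0.8):.2f}")  # ~0.62
print(f"data-dredged subgroup    (R = 1:1000): PPV = {ppv(0.001, 0.8):.3f}")  # ~0.016
```

In other words, the lower the prior odds that an association is real, the more likely a "significant" result is to be a false positive.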

I guess over time, not reporting 'spurious' associations would generally lead to only strong associations being reported in well researched fields.

It's not so much about choosing which associations to report (all findings should be reported). It's about designing, implementing and analyzing studies so that spurious results are less likely to be produced in the first place. For example, the study by Brian Hooker was performed in a way which produces an excessive number of false results (e.g. multiple comparisons). So instead of worrying about which of his spurious results should be reported, the study should have been designed so as not to be at a large risk of producing spurious results in the first place.
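
One standard design-stage guard of that sort (my illustration; I don't know which corrections, if any, were applied in Hooker's analysis) is to tighten the per-comparison threshold for the number of planned comparisons, e.g. a Bonferroni correction:

```python
# Bonferroni correction: with k planned comparisons, require p < alpha / k
# so the family-wise chance of any false positive stays near alpha.
def bonferroni_reject(p_values, alpha=0.05):
    k = len(p_values)
    return [p < alpha / k for p in p_values]

pvals = [0.004, 0.02, 0.03, 0.20, 0.45]  # hypothetical subgroup p-values
print(bonferroni_reject(pvals))          # only 0.004 clears 0.05 / 5 = 0.01
```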

If the research was novel (i.e. where there is no other existing data that you could compare your research to), then deciding whether something is 'spurious' or not would seem to become more difficult.

When we measure how often exploratory studies produce 'real' vs. spurious results, the frequency of 'real' results is very low (see Table 4 in the Ioannidis paper). So the decision is even easier: the probability is very high that any significant finding is actually spurious, when you know nothing else about the association.

I'm still sitting on the fence though, because it seems to me that being humans, researchers still have bias blind spots (if I can call them that), and these methods can only reduce the impact of them, but not eliminate them entirely.

What do you mean by "sitting on the fence"? About what?

I agree that there are still ways for false results to creep into the process, even when best practices are followed. But that doesn't mean that we should abandon the goal altogether, or ignore the fact that a 'significant finding' was reported under conditions which produce spurious results.

Linda
 
I'm amused at how the issues of fraud are glossed over here, just as with the mainstream media. Thompson, the one closest to the situation, apologized to Andrew Wakefield for ruining his career. Clearly he understands the implications. This also raises the question: if this was covered up, what else was covered up? Vaccines are a $52 billion business conducted by a morally bankrupt pharmaceutical industry.

I see Linda is pretending to be an expert again.
 
What do you mean by "sitting on the fence"? About what?

Well, you had not convinced me that it's OK not to report spurious findings. The issue for me is that this definition of 'spurious' still seems open to bias.

...biases in the researchers can lead to selective reporting of the results, which can give a misleading impression. However, failing to report spurious results does not bias the results.

I've rephrased you... selective reporting of results can bias a paper, but not reporting results will not bias a paper?

Obviously the key word I've missed out again is 'spurious'.
 
Well, you had not convinced me that it's OK not to report spurious findings.

Maybe that's because I don't think it's 'okay' not to report spurious findings. I think all findings should be reported, regardless of whether they can be regarded as spurious or not (I'm pretty sure I've said that several times now). My point was that studies should be planned so that you aren't producing results which will be spurious in the first place.

I've rephrased you... selective reporting of results can bias a paper, but not reporting results will not bias a paper?

Selective reporting will bias the results if you fail to report true results or report false results as true. Selective reporting will not bias the results if you fail to report false results or report true results as true. Biasing the results is the far more concerning error. So in this case, the apologies were for selective reporting, but this did not lead to a biasing of the results, so the error was relatively minor.
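
Laid out as a quick truth table (my framing of the point above, not Linda's wording):

```python
# Sketch of the classification: whether selective reporting biases the overall
# picture depends on whether the result reported/withheld is true or false.
cases = {
    ("report",     "true result"):  "fine",
    ("report",     "false result"): "biases the results (misleading)",
    ("not report", "true result"):  "biases the results (suppression)",
    ("not report", "false result"): "sloppy, but does not bias the results",
}
for (action, result), verdict in sorted(cases.items()):
    print(f"{action:10s} {result:12s} -> {verdict}")
```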

Linda
 
Maybe that's because I don't think it's 'okay' not to report spurious findings. I think all findings should be reported, regardless of whether they can be regarded as spurious or not (I'm pretty sure I've said that several times now).

Well, you can see why I'm struggling with your responses... find below a post where you've stated the opposite...

...I think it's generally OK not to report spurious findings...
 
Well, you can see why I'm struggling with your responses... find below a post where you've stated the opposite...
I explained what I meant, though. I used scare quotes because the main concern is to produce and report on true results and to avoid producing and reporting on false results. I'm not going to complain (it's 'okay') if someone fails to report on false results, but I'd be more okay with it (i.e. I'd drop the scare quotes) if they avoided producing the false results in the first place.

Linda
 
Well, you can see why I'm struggling with your responses... find below a post where you've stated the opposite...

What I've gathered from her position is this:

- researchers should report all results that come out of the research design, even spurious ones
- however, not reporting spurious results is not that big an error, since it doesn't bias the results. It would be preferable to have reported it, but it shouldn't be seen as a major concern or fraudulent
- even better would be to have designed the study in a manner that it doesn't produce spurious results
 
I explained what I meant, though. I used scare quotes because the main concern is to produce and report on true results and to avoid producing and reporting on false results. I'm not going to complain (it's 'okay') if someone fails to report on false results, but I'd be more okay with it (i.e. I'd drop the scare quotes) if they avoided producing the false results in the first place.

You've made two contradictory statements in two different posts above:

"...I think it's generally OK not to report spurious findings..."

"...I don't think it's 'okay' not to report spurious findings. I think all findings should be reported..."

...and now you're trying to justify the contradiction. I don't think I'll pursue this anymore... seriously, my head hurts. :-(
 
You've made two contradictory statements in two different posts above:

"...I think it's generally OK not to report spurious findings..."

"...I don't think it's 'okay' not to report spurious findings. I think all findings should be reported..."

...and now you're trying to justify the contradiction. I don't think I'll pursue this anymore... seriously, my head hurts. :-(
What's so hard to understand? Arouet got it easily (it seems). What don't you understand about Arouet's three bullet points?

Linda
 
What's so hard to understand? Arouet got it easily (it seems). What don't you understand about Arouet's three bullet points?

Linda
Also, it may be easier for you to understand if you take my whole sentence, instead of the truncated bit you quoted (not sure why you did that, if you are sincere about trying to understand).

"No, I think it's generally OK not to report spurious findings, but protocols should be followed."

So I'm stating that findings should be reported, regardless of whether they are spurious, if it was part of the protocol to report them. And that is what I said the second time as well: all findings should be reported, regardless.

Linda
 
Update on the retraction of the Hooker paper: http://www.translationalneurodegeneration.com/content/3/1/22

The Editor and Publisher regretfully retract the article [1] as there were undeclared competing interests on the part of the author which compromised the peer review process. Furthermore, post-publication peer review raised concerns about the validity of the methods and statistical analysis, therefore the Editors no longer have confidence in the soundness of the findings. We apologise to all affected parties for the inconvenience caused.
 