You've cleared that up.
One issue I'm now struggling with (now that you've raised it) is the idea that a post hoc decision not to report a statistically significant but spurious association is OK, while a post hoc decision to report a statistically significant but spurious association is not OK.
I'm comparing these two scenarios:
1) if the researcher is biased towards not wanting to find/report an association (i.e. it would be advantageous for whatever reason not to find anything).
2) if the researcher is biased towards wanting to find/report any associations (i.e. it would be advantageous for whatever reason to find something).
I'm struggling to decide why a post hoc decision:
a) not to report statistically significant but spurious associations in scenario 1)
is any different from a post hoc decision:
b) to report statistically significant but spurious associations in scenario 2)
What stands out is that the answer probably has something to do with your use of the term 'spurious'. So if there is a justifiable difference between a) and b), it probably has something to do with the process of deciding whether an association is spurious or not.
How is the decision as to whether an association is spurious arrived at, and how is the risk of bias eliminated, in both a priori decisions and post hoc decisions, for both scenarios?
Those are good questions.
When you are planning a study, you need to be clear beforehand which variables you are going to measure and what the justification is for measuring each variable. Most of the variables measured in this study are either separate risk factors for autism (e.g. sex or maternal age) or are demographic factors used to generalize the results (e.g. race). There's no reason beforehand to think that black boys are uniquely at risk of autism from vaccines (this idea is solidly contradicted by the studies).
If you happen to find an association when data-dredging observational studies, one which wasn't predicted beforehand, it's likely to be happenstance. Or you may have picked up on a factor which happens to be correlated with some other risk factor within that sub-group (e.g. maybe there was a pre-school community program targeting black boys for autism screening, which increased the diagnosis rate in that sub-group). A causal association is probably the least likely possibility in general, and is doubly so in this case because of the mountain of evidence against the idea from all the previous research on autism.
An association which wasn't predicted beforehand, which popped out when testing multiple sub-groups, which is not plausibly causal, and which already has evidence against a causal explanation, is safely described as "spurious".
As far as what should be reported...the study should specify beforehand, before any results are known, which comparisons are relevant and why. As you point out, biases in the researchers can lead to selective reporting, which can give a misleading impression. However, failing to report a spurious result does not bias the findings, regardless of the researchers' motives.
So to look at your scenarios...
Not reporting a spurious association...the comparison based on race and sex was not relevant to the main question (the risk of autism from the age at first MMR vaccination) but was relevant to controlling for other risk factors. The authors seem to have decided that there were better comparisons which addressed that issue and reported on those instead. In this case, failing to report on a particular comparison does not bias the results.
Reporting a spurious association...this does have the potential to bias the results. As soon as it gets treated as though it was predicted beforehand, or as though it wasn't found by testing multiple subgroups, it will lead to a misleading impression.
So whether or not a reporting choice is egregious depends upon recognizing when an association is spurious, and upon whether reporting, or failing to report, that association will bias the results.
Linda