Need Help With Upcoming Episode on Mask Junk Science

Right here, for the 50th time:
"Among the 335,382 participants who completed symptom surveys, 27,166 (8.1%) reported experiencing COVID-like illnesses during the study period. More participants in the control villages reported incident COVID-like illnesses (n=13,893, 8.6%) compared with participants in the intervention villages (n=13,273, 7.6%). Over one-third (40.3%) of symptomatic participants agreed to blood collection. Omitting symptomatic participants who did not consent to blood collection, symptomatic seroprevalence was 0.76% in control villages and 0.68% in the intervention villages. Because these numbers omit non-consenters, it is likely that the true rates of symptomatic seroprevalence are substantially higher (perhaps by 2.5 times, if non-consenters have similar seroprevalence to consenters)."
There's no 76% or 68% anywhere in that paragraph. There's a 0.76% and a 0.68%, but they have nothing to do with "the percentage of serum tests that are positive" (that number was 22%). And the 0.76% applies to the control group, but you used it for the intervention group.

Again, please explain where you got those numbers from.

Yeah, duh. Of course that's my claim. Because that is what the study claims it did (see quote above - now the 51st time I've pointed it out to you)

Huh? Why would I say that? I didn't say that
Huh? You just said that. You said that they extrapolated the results from the blood tests. Well, the only results from the blood tests were "22% of the blood tests were positive for COVID and 78% weren't." If you are going to claim that that result was extrapolated to the asymptomatic population, there would have to be a claim somewhere that all the people who weren't sick were 22% positive for COVID and 78% not positive for COVID. But nobody made that claim. And most certainly nobody made that claim in the paragraph you quoted.

The study either ends with the blood serum test, or those results get extrapolated to a larger population, or this study is a complete mishmash of disjointed concepts and objectives (I think the latter) that you are defending in troll-like fashion.
The study ends with counting all the people in each group, including the group who were symptomatic and had a blood test positive for COVID. I can't even tell what you are trying to claim here, it's so incoherent and uninformed. Are you trying to claim that they don't count anybody but the number of people in one group, and one group only?

OK. So you say now that the study ends with the blood serum test: 5,006 treatment villagers and 4,971 control villagers.

That's it. Those results define the study. If that's the case, then the study is baloney. You cannot make statements about how the masks worked (or didn't) for hundreds of thousands of subjects from a small subset who got blood tested.
The study ends with collecting all the information you need to determine the outcome. In order to determine "symptomatic seroprevalence" (prevalence is a rate, so you need the numerator and denominator), you need the following information for each group:

1. Number of people completing a symptom survey

2. Number of people completing a symptom survey who had symptoms of COVID

3. Number of people completing a symptom survey who had symptoms of COVID and consented to a blood test

4. Number of people completing a symptom survey who had symptoms of COVID, consented to a blood test, whose blood test was positive for COVID

YOU MOST DEFINITELY DO NOT ONLY COLLECT THE LAST NUMBER. YOU NEED ALL OF THOSE NUMBERS TO CALCULATE THE OUTCOME - SYMPTOMATIC SEROPREVALENCE.

I agree with those numbers. But what do they mean? How do they use them in a calculation to show significance?
They are the numbers you use to calculate "symptomatic seroprevalence". Please note, the numbers in the list are for the whole population. You can easily figure out the numbers for each group using the same information (the results paragraph, Table A1, and post #86).

Numerator = number of people who were symptomatic and whose blood tested positive for COVID (line 4 from above).
Denominator = number of people completing a symptom survey (line 1) - number of symptomatic people who did not consent to a blood test (line 2 minus line 3).

As I mentioned, the numbers I gave earlier were for the whole population (not one set of numbers for Intervention and one set for Control). But I will show the calculation for illustration purposes using those numbers.

Line 1 = 335,382
Line 2 = 27,166
Line 3 = 10,952
Line 4 = 2,293

2293/(335382 - (27166-10952)) = 2293/319168 = 0.0072 or 0.72%.

If we use the numbers per group instead, we have:

1131/(174171 - (13273 - 5414)) = 1131/166312 = 0.0068 or 0.68% for the intervention group

1162/(161211 - (13893 - 5538)) = 1162/152856 = 0.0076 or 0.76% for the control group.
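For anyone who wants to check the arithmetic, the calculation above can be reproduced in a few lines of Python. The counts come from the results paragraph and Table A1 as quoted in this thread, and the function is just the numerator/denominator definition spelled out above:

```python
def symptomatic_seroprevalence(surveyed, symptomatic, consented, positive):
    """Numerator: symptomatic participants whose blood test was positive.
    Denominator: everyone surveyed, minus symptomatic people who did not
    consent to a blood test (since their serostatus is unknown)."""
    return positive / (surveyed - (symptomatic - consented))

# Whole population (lines 1-4 above)
overall = symptomatic_seroprevalence(335_382, 27_166, 10_952, 2_293)

# Per group
intervention = symptomatic_seroprevalence(174_171, 13_273, 5_414, 1_131)
control = symptomatic_seroprevalence(161_211, 13_893, 5_538, 1_162)

print(f"overall:      {overall:.2%}")       # ~0.72%
print(f"intervention: {intervention:.2%}")  # ~0.68%
print(f"control:      {control:.2%}")       # ~0.76%
```

Run as-is, it reproduces the 0.72%, 0.68%, and 0.76% figures from the hand calculations above.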

Again, are you now telling me they calculated from the subset of serum positive? Like I did in comment 155?

If not, show your calculation. Put up or shut up time for you. I don't think you got it, but prove me wrong.
Someone who claims to be an actuary does not need to be told how to calculate a prevalence. They also know that an ANOVA is not an appropriate method for comparing count data, and a Chi-square test is.

Just saying.
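To make the chi-square point concrete, here is a minimal sketch of a Pearson chi-square test on the per-group seropositive counts used above. Note this is a naive individual-level test with no adjustment for the cluster design, so it is only an illustration of the technique, not a reproduction of the paper's (cluster-adjusted) analysis:

```python
def chi_square_2x2(pos_a, n_a, pos_b, n_b):
    """Pearson chi-square statistic for a 2x2 table of positive/negative
    counts in two groups, no continuity correction."""
    total_pos = pos_a + pos_b
    total = n_a + n_b
    stat = 0.0
    for observed_pos, n in ((pos_a, n_a), (pos_b, n_b)):
        expected_pos = n * total_pos / total
        expected_neg = n - expected_pos
        stat += (observed_pos - expected_pos) ** 2 / expected_pos
        stat += ((n - observed_pos) - expected_neg) ** 2 / expected_neg
    return stat

# Intervention: 1,131 seropositive of 166,312; control: 1,162 of 152,856
stat = chi_square_2x2(1_131, 166_312, 1_162, 152_856)
print(f"chi-square = {stat:.2f}")  # exceeds 3.84, the 5% critical value (1 df)
```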
 
I wish I were better than I am at maths and stats, but hey, I'm not...

Near as I can tell, what happened was that they got villagers in a number of villages to agree to take part in the study. They had 146,783 from villages that went maskless and 160,323 from villages where people were supplied with masks. Is that correct so far?
Just to be clear...the intervention was mask promotion, which included providing masks, but some people in both groups wore masks (i.e. some people in villages where there was no mask promotion still wore masks, just at a lower overall rate).

The numbers were 178,288 in intervention villages and 163,838 in control villages. (Table A1 in the paper).

Now: from my (admittedly rather naive) viewpoint, unless they had the opportunity and/or resources to test absolutely everyone who took part, they'd need to select a number of people from both groups (masked and maskless) to test for seropositivity for Covid.
No, their pre-specified primary outcome was "symptomatic seroprevalence", so they would attempt to test all of those who were symptomatic, not a sample.

Assuming a) 100% accuracy of the tests either way, b) that their selections were random and representative of whole populations, and c) that selections were sufficiently large for purposes of detecting statistical significance, then in theory, all would seem to be well.

However, if I got this right, their selections weren't entirely random. They only tested those who self-reported Covid-like symptoms.
Correct. This is the group they should have tested and did.

But Covid is known to be asymptomatic in some people, and, correct me if I'm wrong, few if any self-reporters were medically qualified to make the judgement.

How was it decided whether what they reported as Covid symptoms were in fact such? Were they questioned what their symptoms were? Were any of their self-reports rejected because they didn't actually indicate Covid? Were any of those who didn't self-report questioned to try to confirm they didn't in fact have Covid?
They did not ask the subjects to determine whether or not they were symptomatic, nor did they ask them to judge whether they had COVID. They asked everyone "have you had 'this' symptom?" from a pre-specified list of symptoms and way of asking about the symptoms (scripts were given to the field workers and are provided in the study documentation). Then they took those answers and in a standardized manner determined who had symptoms consistent with COVID and who did not (also in the study documentation).

Also, what exactly was the seropositivity test? Was it PCR-based? If so, how many amplification cycles were applied? 20, 30, 40, or what? The higher the number, the more likely the false positives.
It was an antigen test. This information is available in the paper.

Forgive me if I haven't understood the maths and stats and my understanding is incorrect. Please tell me if it is, and why. I'm completely open to being corrected.
Please consider reading the paper.
 
@Eric Newhill cut out the personal insults.
Dealing w/ someone who insults me over a typo and knows perfectly well it was a typo is a waste of time. Of course I meant to type .76% and .68%. I used the correct decimals in my equation.

Ellis is just a pest. He is here to obfuscate, and he has changed his tune many times. Though, of course, he will ask to be shown where he has changed. Screw it, it's a matter of record.

Now he says the study wasn't about mask effectiveness. Well, it certainly was promoted as that, not as a study of how to promote mask use.

But yeah, the study cannot be used to demonstrate mask effectiveness. It tries to speak to it with the blood serum group, but you can't extrapolate back to a larger population from that.

I'll be more sly in personal insults, like Ellis is, if that's what you prefer. Anyhow, I'm done with this. I think you have your answer.

Note that Ellis has said the study is about serum prevalence. Now he says it's about mask promotion. The guy is just making it up as he goes. He has an agenda and it's not getting at the truth of this.
 
Please consider reading the paper.
I haven't read the paper because for whatever reason, I can't find it. Kindly supply the URL and I'll take a look.

My first reaction is to this statement of yours:

No, their pre-specified primary outcome was "symptomatic seroprevalence", so they would attempt to test all of those who were symptomatic, not a sample.
The point is, they can call the intervention whatever they want, but in the end, it depends on the self-reports, or whether people agreed that they had had certain key symptoms. Don't know about you, but when a Doctor's asked me something like that in the past, and I've not been entirely sure, I've usually said so. But sometimes I've agreed or disagreed without being sure. There may be the temptation for the researchers, maybe subconsciously, to interpret the answers in a way that agrees with their prejudices. Not asking the question at all obviates the possibility of that.

Anyway, like I said, why frame the study that way at all? Self-reporting, whether freeform or as a result of targeted questions is inherently not completely reliable. I still don't see why they couldn't simply have made a random selection in both masked and unmasked groups and let the chips fall as they may.

Please supply the URL and I may say more when I've read it.
 
Dealing w/ someone who insults me over a typo and knows perfectly well it was a typo is a waste of time. Of course I meant to type .76% and .68%. I used the correct decimals in my equation.
I did not insult you over a typo. You used 76% and 68% in your equation (i.e. 0.76 and 0.68). If you had used 0.76% and 0.68%, your equation would have said 0.0076 and 0.0068. And whether it's 0.76 or 0.0076 doesn't matter, because neither of those numbers has anything to do with the proportion of blood tests that are positive. And what's even worse, if that is what you were going for, is that you applied the INTERVENTION rate ("0.68") to the CONTROL numbers (4971) and vice versa.

Now he says the study wasn't about mask effectiveness. Well it certainly was promoted as that, not how to promote mask use.
I didn't say it wasn't about mask effectiveness. Of course it's about mask effectiveness. Do villages with more mask wearing have less symptomatic COVID? And it was demonstrated that "yes, they do".

What I pointed out was that there was mask wearing in both groups. David had said the control villages were maskless, and I was worried that that might create some confusion. And he may have just been using the term as a short-hand. Which is fine as long as everyone understands it wasn't about masked vs. unmasked individuals, but about the prevalence of masking in villages.

Note that Ellis has said the study is about serum prevalence. Now he says it's about mask promotion.
Yes. The study is about an intervention and the outcome from that intervention. The intervention is mask promotion. The outcome is symptomatic COVID seroprevalence.

This is statistics 101. I'm pretty sure someone claiming to be an actuary would have statistics 101.
 

Alex

Administrator
https://www.globalresearch.ca/do-fa...ladesh-abaluck-et-al-results-reliable/5756323

Do Face Masks Reduce COVID-19 Spread in Bangladesh? Are the Abaluck et al. Results Reliable?
By Prof Denis Rancourt
Global Research, September 21, 2021


Purpose
“This really should be the end of the debate,” says Ashley Styczynski, an infectious-disease researcher at Stanford University in California and a co-author of the preprint describing the trial. The research “takes things a step further in terms of scientific rigour”, says Deepak Bhatt, a medical researcher at Harvard Medical School in Boston, Massachusetts, who has published research on masking. — Nature | News | 09 September 2021 | “Face masks for COVID pass their largest test yet”

The leading trend-setting mainstream media and institutional public relations offices have been unreservedly enthusiastic about “the Bangladesh mask study” (see Appendix A).
Here, I review the methods and results of that study by Abaluck et al. (2021) published as a working paper by Innovations for Poverty Action (IPA): “The Impact of Community Masking on COVID-19: A Cluster-Randomized Trial in Bangladesh”, 01 September 2021.
The study’s stated primary outcome regarding the benefits of face masks is “symptomatic SARS‑CoV‑2 seroprevalence”, meaning the prevalence during the study period of individuals self-reporting COVID-like symptoms who also test positive using a laboratory blood test presumed to be specific for SARS-CoV-2.
Summary
The cluster-randomized trial study of Abaluck et al. (2021) is fatally flawed, and therefore of no value for informing public health policy, for two main reasons:
  1. The antibody detection was performed using a single commercial FDA emergency-use-authorized (EUA) serology test that is not suitable for the intended application to SARS-CoV-2 in Bangladesh (not calibrated or validated for populations in Bangladesh; undetermined cross-reactivity against broad-array IgM antibodies, malaria, influenza, etc.).
  2. The participants (individual level, family level, village level) in the control and treatment arms were systematically handled in palpably different ways that are linked to factors established to be strongly associated with infection and severity of viral respiratory diseases, in particular, and with individual health in general.
These disjunctive fatal flaws are explained below. Either one is sufficient to invalidate the results and conclusions of Abaluck et al.
Furthermore, the Abaluck et al. symptomatic seroprevalence (SSP) results are prima facie statistically untenable. The treatment-to-control differences in numbers of symptomatic seropositive individuals are too small to rule out large unknown co-factor, baseline heterogeneity, and study-design bias effects. In addition, they are at best borderline significant, in terms of purely ideal-statistical estimations of uncertainty. Finally, the practice of using whole households while reporting on an individual basis introduces unknown correlations/clustering, and vitiates the mathematical assumptions that underlie the statistical method.
 

Alex

Administrator
Rancourt adds a detail I left out:

I work backwards from their numbers to calculate the numbers of symptomatic individuals having positive blood test results, as follows:

Treatment arm:

178,288 participants x 0.0068 (SSP) x 0.408 (RCB) = 495 (2σ≈44) symptomatic seropositive individuals

→Scaled to the same population as the control → 455 (2σ≈41)

Control arm:

163,838 participants x 0.0076 (SSP) x 0.399 (RCB) = 497 (2σ≈45) symptomatic seropositive individuals

===

The difference, 497 – 495 = 2 individuals, is the number giving rise to Abaluck et al.’s difference in absolute symptomatic seroprevalence (SSP) of 0.0008. As such, given the expected sources of bias and measurement errors described herein, and given the size of this difference of only two (2) events, the SSP difference on increased masking in the treatment arm, reported by Abaluck et al., cannot be taken as anything but unreliable.
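Rancourt's back-calculation is easy to reproduce. Here SSP is the symptomatic seroprevalence and RCB is the reported rate of consenting to blood collection; the 2-sigma figures assume simple Poisson counting error on the raw counts, which appears to be his assumption:

```python
from math import sqrt

# Back out symptomatic seropositive counts: participants x SSP x consent rate (RCB)
treatment = 178_288 * 0.0068 * 0.408    # intervention arm
control = 163_838 * 0.0076 * 0.399      # control arm
scaled = treatment * 163_838 / 178_288  # treatment scaled to the control population

print(round(treatment), round(control), round(scaled))  # 495 497 455
# Naive Poisson 2-sigma on the raw counts:
print(round(2 * sqrt(treatment)), round(2 * sqrt(control)))  # 44 45
```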
 
Rancourt adds a detail I left out:

I work backwards from their numbers to calculate the numbers of symptomatic individuals having positive blood test results, as follows: …
Hmmm...that's kinda the opposite of what the authors describe. They didn't say they estimated the results for the non-consenters and added them back in. They said they omitted the non-consenters. Is Rancourt a hack?
 
Rancourt adds a detail I left out:

I work backwards from their numbers to calculate the numbers of symptomatic individuals having positive blood test results, as follows: …
Yes. That is pretty much what I was saying, my late-night, worn-out-from-work typos on the percents aside. Same concept.
 
Before you all get carried away with that idea…it didn’t happen. For one, they would tell you that they did it, and describe some elaborate scheme for how they did it and why it was justified. But more importantly, they specifically tell you that they didn’t do it.

“Because these numbers omit non-consenters, it is likely that the true rates of symptomatic seroprevalence are substantially higher (perhaps by 2.5 times, if non-consenters have similar seroprevalence to consenters).”

That is, if they had already added in the extra numbers, then the “true rate“ couldn’t be higher. If the 0.68% and the 0.76% represented 100% of the symptomatic people, instead of 40%, there’s no way to make those rates higher, and that last line makes no sense.
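The "perhaps by 2.5 times" in that quoted sentence is just the reciprocal of the consent rate: if only 40.3% of symptomatic participants gave blood, and non-consenters were seropositive at a similar rate, the observed count of seropositives scales up by about 1/0.403. A quick sketch:

```python
consent_rate = 0.403        # "Over one-third (40.3%) of symptomatic participants agreed"
positives_observed = 2_293  # symptomatic seropositives among consenters, both arms

scale = 1 / consent_rate
implied_total = positives_observed / consent_rate
print(f"scale-up factor: {scale:.2f}")  # ~2.48, i.e. "perhaps by 2.5 times"
print(f"implied symptomatic seropositives: {implied_total:.0f}")
```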
 
https://www.globalresearch.ca/do-fa...ladesh-abaluck-et-al-results-reliable/5756323

Do Face Masks Reduce COVID-19 Spread in Bangladesh? Are the Abaluck et al. Results Reliable?
By Prof Denis Rancourt
Global Research, September 21, 2021

…
I've had a chance to go through this. It's not going to be particularly useful. He hasn't really noticed anything different from what has already been said here. And any of his legitimate points are overwhelmed by those which aren't, and by the numerous errors.

His two main points are that the serology test may not be accurate enough in real world settings. And that the intervention and control groups were treated differently. The first point wouldn't alter the results (because of randomization, they would affect both groups) except to make them noisier, so only stronger signals would get through. This would only potentially be a "fatal flaw" if the study didn't show a significant effect. The second point has already been mentioned - it goes along with incomplete blinding. That is a well-identified problem, and gets extensive discussion from the study authors. So he adds nothing new to the conversation with respect to whether this is a fatally-flawed study. But then he wraps it all up in a series of ill-informed and incorrect points, which damages its credibility.

In the section about the IgG antibody test, he states it wasn't tested for IgM vs. IgG, was inadequately tested for cross-reactivity, was cross-reactive for malaria, wasn't validated in real world populations, and wasn't validated for Bangladesh. He cites a few studies from the FDA and manufacturer website for these claims, plus a study performed in Benin which showed some cross-reactivity with acute malaria. And yes, the studies/information he cited didn't show those validations. Problem is, a Google Scholar search easily turns up lots more research which did test for IgM vs. IgG, real world tests, and real world tests in Bangladesh plus real world validation tests in Bangladesh, which all showed that the test performed well.

And I looked up information about malaria in Bangladesh and it occurs in some specific areas in the east at a rate of 2-3/1000 per year. And 80% of the cases occur between May and October. For comparison, there is a map showing the location of the study villages in Bangladesh, and most of them are in the west where the rate is low to non-existent. And the study took place between November and January. Putting all that together, if there was some cross-reactivity with acute malaria, at worst, it could account for less than a dozen cases over the 10-12 week period of the study. And again, because of randomization, the handful of extra cases would be found in both groups.

He makes this complaint about the handling of the data and the statistical results:

"The treatment-to-control differences in numbers of symptomatic seropositive individuals are too small to rule out large unknown co-factor, baseline heterogeneity, and study-design bias effects. In addition, they are at best borderline significant, in terms of purely ideal-statistical estimations of uncertainty. Finally, the practice of using whole households while reporting on an individual basis introduces unknown correlations/clustering, and vitiates the mathematical assumptions that underlie the statistical method."

However, he bases the first part on some erroneous assumptions and so none of his calculations are valid (the part you quoted in your second post on this opinion piece). And while the last sentence is correct, it does not apply in this case, because the researchers didn't report results on an individual basis whenever clustering and unknown correlations could be present. The authors outline their analyses in the documents I linked for Michael, and their analyses were clustered/grouped, or adjusted for correlations. You will notice, going through the results, that results are given in terms of villages, of village-pairs, or households (depending upon which variables are tested), and that group/individual results are adjusted (any time you see an "a" in front of the outcome measure).

So that leaves us with the concern about incomplete blinding/different treatment between groups, which is a legitimate concern. But I told you that already in my very first post. ;)
 
Once a person gains a little experience with statistical arguments, it doesn't take long to see how they can be manipulated to favor whatever bias you want. For example, right away, a person can see that 340,000 people in another country with a high number of cases doesn't equate to 75 people at my local grocery store in a green zone with no incidence of COVID. Yet that study will be cited by pro-maskers as a reason for everyone in the grocery store to wear masks. If people would just use 1/10th of their brain for a little critical thinking, the world would be a better place.
 
Once a person gains a little experience with statistical arguments, it doesn't take long to see how they can be manipulated to favor whatever bias you want. For example, right away, a person can see that 340,000 people in another country with a high number of cases doesn't equate to 75 people at my local grocery store in a green zone with no incidence of COVID. Yet that study will be cited by pro-maskers as a reason for everyone in the grocery store to wear masks. If people would just use 1/10th of their brain for a little critical thinking, the world would be a better place.
Alex thinks the incidence of COVID was really, really, low - like the lowest place on earth - during this study. Now it turns out that it was a "high number of cases". I think you're right - anti/pro-maskers will just make up whatever they want to about this study to support their pre-existing biases.
 
Once a person gains a little experience with statistical arguments, it doesn't take long to see how they can be manipulated to favor whatever bias you want. For example, right away, a person can see that 340,000 people in another country with a high number of cases doesn't equate to 75 people at my local grocery store in a green zone with no incidence of COVID. Yet that study will be cited by pro-maskers as a reason for everyone in the grocery store to wear masks. If people would just use 1/10th of their brain for a little critical thinking, the world would be a better place.
How does one know that the reason the people at the grocery store don't have Covid is because they are more likely to be wearing masks? Or that they are following social distancing guidelines, or some other aspect of attempted Covid control? Don't get me wrong, I myself am sceptical about wearing masks. It's just that I find the argument pretty weak. To make a more accurate comparison, I suppose one could compare like with like (if ever possible) -- e.g. two similar-sized and otherwise broadly similar groups with more or less equal occurrence of Covid, and compare incidence of mask wearing in the two groups.

But I take your underlying point that statistics can be used in ways to prove some kind of correlation where there may be all sorts of confounding factors that haven't been, or couldn't realistically, be controlled for. That, I agree with. It's especially relevant in cases where there are many variables, not all of which are known or understood. Statistics is a place where, if one isn't careful, mathematics meets and can be influenced by sociological factors. If the zeitgeist leans in a particular direction, statisticians may, consciously or unconsciously, skew the methodology and results to get a particular outcome.
 
How does one know that the reason the people at the grocery store don't have Covid is because they are more likely to be wearing masks?
Here, wastewater samples from various neighborhoods are taken twice a week and from that, the locations with the highest concentrations of COVID have been identified. That combined with demographics from the census shows that neighborhoods with high occupancy residences with people who are also in contact with the public, are where the majority of cases originate.

Then, given that the same sized stores with the same masking rules in effect are located in the different zones, the issue of masks balances out, leaving other variables as the reason for the differences in transmission. One could argue that it's simply a fluke that is correlative, but it seems to me that a little common sense would suggest otherwise.
 
How does one know that the reason the people at the grocery store don't have Covid is because they are more likely to be wearing masks?
I don't think J Randall Murphy is making an argument about whether masks work. It's about whether the underlying prevalence justifies mask use. A 20% reduction in cases may only represent 1 case in a low prevalence area, whereas in a high prevalence area it may represent 100 cases. Personally, I'm too lazy to gather and analyze the information necessary to make micro-location decisions about wearing a mask on a daily basis, but I'm impressed that J Randall Murphy does so.
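J Randall Murphy's point about absolute vs. relative effect can be illustrated with hypothetical numbers (the baseline case counts below are made up purely for illustration):

```python
relative_reduction = 0.20  # a hypothetical 20% relative reduction in cases

# Made-up baseline case counts for a low- and a high-prevalence setting
for label, baseline_cases in [("low prevalence", 5), ("high prevalence", 500)]:
    averted = baseline_cases * relative_reduction
    print(f"{label}: {averted:.0f} of {baseline_cases} cases averted")
```

The same 20% relative reduction averts 1 case in the first setting and 100 in the second, which is the whole argument about whether underlying prevalence justifies the measure.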
 