2/n Paper is here, it's a pretty simple ecological study comparing countries on their deaths/million from COVID-19 and Google mobility data nature.com/articles/s4159…
3/n The authors modelled time spent in "residential" areas, as shown by Google, against the number of COVID-19 deaths in different places, and in most cases found that the model had no significant explanatory power
4/n In other words, if you compare places that had people spending more time in "residential" areas against those that didn't, they had similar COVID-19 deaths per million
5/n The authors ensured that the countries/regions were reasonably comparable by controlling for a few population measures like markers of income and healthcare
6/n Now, the first issue is a pretty obvious one that springs out immediately:
Google "residential" mobility data
7/n Firstly, this is a selected dataset. Only people who use Google services (mostly Android users) AND HAVE LOCATION HISTORY TURNED ON are represented in this dataset
Almost certainly not representative of the people who are mostly dying from COVID-19
8/n This is mentioned in a sentence in the discussion, but I think it's a fundamental issue that makes this analysis a bit useless. We know that 50%+ of COVID-19 deaths are in the over-65 population, who are the least likely to be represented in this dataset!
9/n Furthermore, only using the "residential" data*, as the authors did, is a big problem
You see, most people already spend most of their time at home
*there's also an issue with how opaque the term "residential" is and how this is calculated, but one issue at a time
10/n Google even points this out in the explainer for mobility data. Most people already spend 12+ hours of their day at home, so the "residential" category is the LEAST LIKELY to show any increases/decreases
11/n It makes sense when you remember that Google mobility data tracks CHANGES, not absolute figures. So 50% of the population working from home 100% would reduce office mobility by 50%, but only increase residential by a fraction of that amount
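The arithmetic here is easy to check with a toy calculation. This is a sketch with made-up round numbers for baseline hours (Google reports percentage changes against a pre-pandemic baseline, not absolute hours):

```python
# Hypothetical illustration: half the workforce shifts 8 office hours home.
# Baseline hours are invented round numbers, not Google's actual figures.

baseline_home = 14      # hours/day the average person already spends at home
baseline_office = 8     # hours/day the average worker spends at the workplace

# Half the population moves all 8 office hours into the home:
new_home = baseline_home + 0.5 * 8      # 18 hours
new_office = baseline_office - 0.5 * 8  # 4 hours

def pct_change(new, old):
    return 100 * (new - old) / old

print(f"residential: {pct_change(new_home, baseline_home):+.0f}%")
print(f"workplace:   {pct_change(new_office, baseline_office):+.0f}%")
```

Because the residential baseline is already large, the same behavioural shift shows up as roughly -50% for "workplace" but under +30% for "residential", which is exactly why the residential series is the flattest one to compare.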
12/n For example, here is the "residential" vs "workplace" mobility data for the state of Victoria in Australia during their mammoth lockdown. "Residential" never goes above a 25% increase, but "workplace" decreases FAR more
13/n What this means is that by comparing "residential" mobility, you are the most likely to find no difference by default. This is called a bias towards the null, and it's not ideal
14/n Furthermore, remember my asterisk from above? Yeah, turns out that it's really hard to find out what "residential" actually means, how it's calculated, or what the raw figures are based on, presumably because this is proprietary Google analysis
15/n So the conclusions about staying at home make no sense at all. "Residential" mobility data might not have been different between places, but for all we know that's a meaningless measure anyway that has very little to do with how much people stay at home
16/n On top of that, this study suffers from the same drawbacks that most ecological studies do. To their credit, the authors acknowledge this in the discussion, but it certainly hasn't filtered through to the public
17/n Limitations inherent in this sort of research are many and varied, but as one example it's hard to make any realistic inferences about individuals staying at home when your unit of study is Spain vs the United States of America
18/n Even within Australia, which was included, the massive Victorian outbreak/lockdown skewed the figures enormously, because one state containing 1/4 of the population locked down while the rest of the country opened up
19/n We might actually expect null findings from an ecological study of this sort, because at the country level heterogeneity in local policy irons out a lot of the impact
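A toy population-weighted average shows how this ironing-out happens. The numbers below are invented for illustration (loosely modelled on one locked-down state versus an open rest-of-country), not real mobility data:

```python
# Toy illustration (invented numbers): regional policy heterogeneity
# can largely disappear in a national average, biasing a country-level
# ecological comparison toward a null result.

regions = [
    {"pop_millions": 6.7,  "residential_change_pct": 20.0},  # locked down
    {"pop_millions": 19.0, "residential_change_pct": 2.0},   # mostly open
]

total_pop = sum(r["pop_millions"] for r in regions)
national = sum(
    r["pop_millions"] * r["residential_change_pct"] for r in regions
) / total_pop

print(f"national average residential change: {national:.1f}%")
```

A +20% change in the locked-down state is diluted to under +7% nationally, so two countries with very different local policies can look near-identical at the unit of analysis this study actually used.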
20/n It's also worth noting that the study literally does not address the question of whether government orders influenced COVID-19 deaths. Even if you ignore all the other issues, "residential" mobility data simply can't answer that question!
21/n There are many reasons that people stay at home, and given the opacity of "residential" data it's hard to say much about the results other than that this is a hard question that we may never answer well
22/n That being said, the idea that this study disproves staying at home as a driver of COVID-19 mortality is obviously wrong - at best, it is an example of how difficult answering that question can be
@VPrasadMDMPH @CT_Bergstrom My favorite part of this is that we can actually do a fairly basic empirical test of whether the idea that Twitter royalty is required to be a FB fact-checker is true, or whether it's simply a correlation due to pandemic expertise, by looking at pre-pandemic follower counts
@VPrasadMDMPH @CT_Bergstrom Of the people quoted for the healthfeedback piece, the median number of Twitter followers was 4,514, with two people having well below 1,000 prior to COVID-19. The mean is skewed up to 35k by Topol
It was always predictable that COVID-19 denialism would morph into anti-vaccine advocacy because it was never about public health, it was always about attacking government measures
The Great Barrington Declaration was sponsored by an organisation that promotes tobacco smoking, denies global warming, and lies about asbestos. There's a reason no serious public health scientists signed it!
If your entire philosophy is predicated on reactionary outrage over any government intervention it's pretty much a given that you'd move on to being anti-vaccine when safe+effective vaccines came out for COVID
My first degree was a double major in psychology and the philosophy of science, and I love this stuff. The idea that science is some fixed system breaks down remarkably easily
One brilliant exercise - take a list of fields and classify them into science, pseudoscience, and not science
This is usually fairly easy!
Now, try and describe ~why~ things fall into the categories they do
In patients where a clinical cause was able to be identified, the vast majority of cases were clearly deaths caused by COVID-19, which means these figures are likely an undercount of the true death toll
Worryingly, only 1 in 7 of the children who died of COVID-19 had been tested for it beforehand
Also, this was a beautiful thing to see while reading the study. "We made a mistake at the start so we fixed it but here's all the data so you can tell for yourself" is absolutely the right thing to do when reporting on your trial outcomes!
I suspect when I do a formal risk of bias score for the study it will come out looking fantastic simply from this one thing. Researchers who are entirely open about their methods are the ones who publish the best studies!