To calculate VE, we first need to calculate the risk ratio (RR).
The CDC, in its epidemiological handbook, gives this definition/example for varicella:
Then we can calculate the VE as follows:
I've rebuilt above example in Google Sheets:
- VE for this varicella example/outbreak is 73%.
- Not statistically significant at the 90% level, but close.
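The calculation above can be sketched in a few lines of Python. Note the attack rates here are illustrative placeholders chosen to reproduce the ~73% figure, not the CDC's actual case counts:

```python
# VE = 1 - RR, where RR is the ratio of attack rates.
# The attack rates below are hypothetical placeholders, not CDC data.
ar_vaccinated = 0.054    # attack rate among the vaccinated
ar_unvaccinated = 0.200  # attack rate among the unvaccinated

rr = ar_vaccinated / ar_unvaccinated  # risk ratio
ve = 1 - rr                           # vaccine efficacy

print(f"RR = {rr:.2f}, VE = {ve:.0%}")  # RR = 0.27, VE = 73%
```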
Now we should be able to do the same for COVID-19, right?
I'm using the CDC MMWR case study of the Massachusetts/Barnstable County outbreak, as it has quite a lot of data.
Unfortunately, they do not disclose the number of negative tests per group, so we have to estimate it.
So we know that about 60k people were in the county.
CDC also discloses the % of vaccinated/unvaccinated.
So we can calculate the absolute size of the two groups.
Now, the question is: what proportion of people in each group got tested? We don't know, so let's use 10%.
Using 10% gives us the total number of tests, and we can finally calculate VE.
VE is calculated the same way as before. So what's the result? -26%
So if the proportion of people tested is equal in both groups, the VE is negative! WOW
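Here's the full estimation as a Python sketch. The ~60k population is from the post above; the case split (346 vaccinated, 123 unvaccinated) and the 69% vaccination rate are the figures reported in the MMWR case study; the 10% testing proportion is the assumption being tested:

```python
# Estimating VE for the Barnstable County outbreak.
# Assumptions: ~60k county population, 69% vaccinated (MMWR-reported
# rate), 346/123 vaccinated/unvaccinated cases (MMWR), and an equal
# 10% of each group getting tested.
population = 60_000
pct_vaccinated = 0.69
pct_tested = 0.10  # assumed equal in both groups

cases_vaccinated = 346
cases_unvaccinated = 123

tested_vaccinated = population * pct_vaccinated * pct_tested          # 4,140
tested_unvaccinated = population * (1 - pct_vaccinated) * pct_tested  # 1,860

ar_v = cases_vaccinated / tested_vaccinated      # ~8.4% positive
ar_u = cases_unvaccinated / tested_unvaccinated  # ~6.6% positive

ve = 1 - ar_v / ar_u
print(f"VE = {ve:.0%}")  # VE = -26%
```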
So to reach the CDC-defined minimum VE of 50%, the testing rate in the vaccinated group would have had to be at least 2.5x that of the unvaccinated group (10% vs. 4%).
Is that realistic?
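Solving the same VE equation for the required testing ratio (same assumed inputs as before: 60k population, 69% vaccinated, MMWR case counts):

```python
# With VE = 1 - (cases_v / (N_v * t_v)) / (cases_u / (N_u * t_u)),
# setting VE = 0.5 and solving for t_v / t_u gives:
#   t_v / t_u = 2 * (cases_v * N_u) / (cases_u * N_v)
cases_v, cases_u = 346, 123  # MMWR case counts
n_v, n_u = 41_400, 18_600    # group sizes from 60k at 69% vaccinated

required_ratio = 2 * (cases_v * n_u) / (cases_u * n_v)
print(f"t_v/t_u = {required_ratio:.2f}")  # ~2.53, i.e. roughly 10% vs 4%
```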
So, unfortunately, the CDC does not know, or at least does not publish, the number of tests by vaccination group.
What do you think?
Please let me know your thoughts in the comments.
But if we only look at the states that have a high negative correlation between vaccination and excess mortality, there's no correlation with vaccination rates (r = 0.09, p = 0.66; not statistically significant).
As an example, we find Alabama, the state with the 3rd-lowest vaccination rate, in this group, as well as Massachusetts, the 3rd-highest!
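The correlation check can be sketched like this. The per-state series below are hypothetical placeholders, not the actual vaccination and excess-mortality data:

```python
import math

# Hypothetical per-state data, NOT the actual vaccination/excess series.
vaxx = [0.47, 0.52, 0.58, 0.64, 0.71, 0.76]
excess = [0.12, 0.15, 0.09, 0.14, 0.10, 0.13]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(vaxx, excess)
print(f"r = {r:.2f}")
```

The p-value would come from a t-test on r with n − 2 degrees of freedom (or `scipy.stats.pearsonr`, which returns both).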
2/n
The latest decline appears to be driven solely by seasonality and other factors, not by vaccination rates.