20 Nov, 24 tweets, 5 min read
Reupping, because I was asked about Ontario ICU forecasts this morning, and want to walk folks through how to use this tool.
Many caveats as per usual:

1. This is a pre-print, currently under peer review. It may be wrong.
2. This isn't a mechanistic mathematical model; it's a simple statistical model.
3. Because I just built this, it isn't yet validated (the events that would validate or invalidate it haven't occurred yet).
Though it does display good "convergent" validity with publicly available ICU occupancy estimates, as well as those from CCS and CIHI.
The TLDR here: case counts, by themselves, are meaningless as predictors of future deaths, hospitalizations and ICU admissions.

They need to be contextualized with 2 additional elements: the age of those being infected, and the amount of testing being done to find those cases.
What you can see in this graph is that cases (second panel) in Ontario are much more numerous now than they were in the spring, but both hospitalizations (3rd panel) and ICU admissions (4th panel) are lower.
The reason for that is partly visible in the first panel...mean age of cases declined after the spring wave...a lot more infections in younger people who are markedly less likely to get severely ill and die. Not shown here is testing...
But that's also changed A LOT. This is a graph of weekly tests for SARS-CoV-2 in Ontario over time. The X-axis is week of year...orient yourself by dividing by 4: week 40 ÷ 4 = 10, i.e., October; week 16 ÷ 4 = 4, i.e., April, etc.
If we're testing a lot more we should see a lot more cases, all else equal.
Here's the table from that short little paper (letter really) in medrxiv.org/content/10.110…

You can see that more log cases are associated with more ICU admissions; more log tests are associated with fewer ICU admissions, all else equal (more testing just finds more mild cases); and older cases = higher ICU admits.
We're doing this as a 2 week lag, as that's the approximate interval between people getting sick and winding up in the ICU.

We're not trying to do longer term forecasting...with a disease that's so sensitive to control measures, that's a mug's game.
If we get concerned, we change our personal behavior, and we also get more strictures from government, which means burden changes downstream.
But you can see that we've converted the regression model to a (pretty) simple point score, just by dividing all the coefficients by the smallest coefficient (that's 0.046, for age).
The score is S = 49.1(log10(weekly cases)) + (mean age) – 14.6(log10(weekly tests)) – 39.5. Weekly predicted ICU admissions are then calculated as exp(0.046S), where S is the score and 0.046 is the smallest coefficient (for age).
So if we have 1300 cases a day (9100 a week, which is around 3.95 log), and 210,000 tests a week (30,000 a day, 5.3 log) and average case age of 45, then the score is 122.5 and predicted admissions = exp(0.046 x 122.5) = 280...280 admissions per week 2 weeks from now.
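The score and the worked example above can be sketched in a few lines; the coefficients (49.1, 14.6, 39.5, 0.046) are the ones quoted in this thread:

```python
import math

def predicted_weekly_icu_admissions(weekly_cases, weekly_tests, mean_age):
    """Point score from the thread, then predicted ICU admissions 2 weeks out."""
    score = (49.1 * math.log10(weekly_cases)
             + mean_age
             - 14.6 * math.log10(weekly_tests)
             - 39.5)
    return math.exp(0.046 * score)

# Worked example from the thread: 1300 cases/day (9100/week),
# 210,000 tests/week, mean case age 45.
admissions = predicted_weekly_icu_admissions(1300 * 7, 210_000, 45)
print(round(admissions))  # ~276 unrounded; the thread rounds the score to 122.5 and gets 280
```

The small discrepancy with the thread's 280 comes purely from rounding the score before exponentiating.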
Of course, we also have outflow from the ICUs because people get better but also unfortunately die. That opens up beds. For occupancy I am using the following heuristic:

O(t+2) = O(t+1)*((L-1)/L) + A(t+2)

O(t+2) is occupancy in 2 weeks; O(t+1) is next week's occupancy.
L is length of stay (in weeks) and A(t+2) is the model prediction, weekly admissions in 2 weeks time.
The L stuff is a bit funky...it seems to be going down in Ontario, but we have some issues with people having admit dates for ICU but no discharge or death dates, which makes them "immortal". There are undoubtedly clever statistical tricks for dealing with this, but for now I'm conservatively assigning L a value of 13 days (1.9 weeks). If length of stay is falling, that means occupancy projections will be too high (probably want to err on the side of caution given the catastrophic nature of ICU overflow in SK and MB right now).
Let's say average ICU occupancy next week is 200 (I think we're above 150 now); that would mean the ICU occupancy in 2 weeks is 200 - 200/1.9 + 280 ≈ 375. Again, 1.9 is just 13 days, converted into weeks.
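The occupancy heuristic above is a one-line recursion; here's a minimal sketch using the thread's numbers (L = 1.9 weeks):

```python
def next_occupancy(current_occupancy, predicted_admissions, los_weeks=1.9):
    """One step of the thread's occupancy heuristic:
    O(t+2) = O(t+1) * (L-1)/L + A(t+2)."""
    return current_occupancy * (los_weeks - 1) / los_weeks + predicted_admissions

# Thread's example: occupancy 200 next week, 280 forecast weekly admissions.
print(round(next_occupancy(200, 280)))  # 200 - 200/1.9 + 280 -> 375
```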
Hope this is helpful. I am quite keen to try to validate this in other geographies...I think the testing weight might be different and geographically specific given how variable testing is. But the general approach may be helpful.
And yes, I'm aware that 1/1.9 assumes exponential hazard and that's not what ICU stay looks like blah blah but the idea here is to produce a tool that works pretty well and is simple enough to use with available data and to tabulate in a spreadsheet.
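One quick sanity check on that simple discharge assumption (this check is mine, not the paper's): if admissions are held constant, the recursion converges to admissions × L, which is just Little's law and holds regardless of the exact stay distribution:

```python
def next_occupancy(o, a, los_weeks=1.9):
    # O(t+1) = O(t) * (L-1)/L + A(t+1), the thread's heuristic
    return o * (los_weeks - 1) / los_weeks + a

o = 200.0
for _ in range(50):   # hold admissions fixed at 280/week
    o = next_occupancy(o, 280)
print(round(o))  # converges to 280 * 1.9 = 532 beds
```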
It does seem to work nicely in Ontario anyway.

This is occupancy from the model (line) vs CIHI
And forecast admissions...white circles = a new dataset that wasn't used for model building. We obviously don't have enough weeks to evaluate predictive validity yet.
I managed to split this thread (I suck at this) but it continues here:


# More from @DFisman

17 Nov
A thread...great news on the vaccine front this week, but perhaps a good time to remind folks that this is NOT the new normal, and that pandemics have a beginning, a middle and an end. We are in the middle now; the end will come.
I want to map out for you, in the most general terms, what I think is the likely future contour of the pandemic globally, based on how remarkably constant GLOBAL case growth has been for a number of months now.
Caveats galore.

As either Niels Bohr or Yogi Berra said: "Prediction is hard, especially about the future."

I think Niels Bohr, but more fun if it was Yogi.
13 Nov
New with Steve Drews and Sheila O’Brien from @CANBloodServ and the amazing @AshTuite

Infection fatality ratio for COVID-19 in Ontario. TL;DR: it's around 1% after excluding long-term care deaths, the same estimate as in many other countries.

medrxiv.org/content/10.110…
Here is the major take home:
We digitized the figure from @GidMK @BillHanage et al’s brilliant work (medrxiv.org/content/10.110…) and overlaid Ontario age specific IFR. Our axis labels have been chopped off.
5 Nov
Ontario's @NDP tables a bill that would ensure independence of the CMOH and transparency of public health responses during public health crises.

It's a no-brainer that would help all of us, so I assume it'll have an uphill battle. 😃.

ontariondp.ca/news/honouring…
Again: thank you to @nickelbelt for doing this.
To the friends who have pointed out that we have previously had CMOHs recommend unwise courses of action (e.g., de facto criminalization of HIV infection):

yes, there needs to be a mechanism to remove underperforming CMOHs even in a crisis.
30 Oct
👇🏼
Schools are also an upstream enabler of the rest of our economy in a way that bars, gyms and restaurants are not.
Look, I’ve been saying for months now that schools are the one mass gathering it’s hard to cancel.

We don’t want to close them. That’s why reducing class sizes is so critically important.
Looking at the data, our hospitalizations and ICUs are surprisingly flat in Ontario. I get that this sucks for people in affected businesses, but the closures are targeted, and many of the outbreak hotspots aren't close-able.
26 Oct
Asked by a friend to comment on the reasonableness of the IHME forecasts for Canada (30+k deaths by Feb). (covid19.healthdata.org/canada?view=to…). The IHME model is impressive...
And again, based on the 2nd (winter) wave of the pandemic ahead of us, and given that we currently stand around 10k deaths, the projection of 30k deaths by February seems reasonable. Note my earlier tweet about 2:1 ratio of 2nd to 1st wave in 1918/1919.
What's impressive to me in IHME is the forecast that we would/could save 10,000 Canadian lives in the months ahead with a national mask mandate. This, again, seems reasonable, based on best available data.
25 Oct
Global R(t) has been remarkably stable since the first wave. Here it is plotted against global doubling time. Will try to unpack this further when I get a chance but it’s a good news/bad news story: many infections ahead, but this isn’t open-ended.
“Moving average” = 7 day moving average for doubling time.
Just to unpack this a bit, there's a direct relationship between R and doubling time, inasmuch as R(t) is a function of growth rate, as is doubling time. Doubling time may just be a bit more intuitive.
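To make that relationship concrete: this is an illustrative approximation of my own, not the thread's calculation. With growth rate r = ln(2)/Td (Td = doubling time) and an assumed mean generation interval Tg (here 5 days, a commonly used SARS-CoV-2 value), one generation of exponential growth gives R ≈ exp(r·Tg):

```python
import math

def approx_R(doubling_time_days, generation_interval_days=5.0):
    """Rough R from doubling time, assuming exponential growth and a
    fixed generation interval: R ~ exp(ln(2)/Td * Tg)."""
    r = math.log(2) / doubling_time_days  # per-day growth rate
    return math.exp(r * generation_interval_days)

print(round(approx_R(30), 2))  # slow growth: doubling in 30 days -> R a bit above 1
```

The point matches the thread: a long, stable doubling time corresponds to an R(t) hovering just above 1.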