Excited that my @Royal_College Research Forum presentation on using CBD assessment data to improve CBME has made its way online (with subtitles!). Full presentation here: royalcollege.ca/rcsite/researc… Tweetorial here:
Historical assessment programs often consisted of 12-13 rotation-based assessments per YEAR of training. CBME has changed that - in Canada our EM residents are getting 100-200 EPA-based assessments of patient interactions per year.
Multiply that by the ~70 Canadian EM residents per cohort, and once our programs are fully populated with CBME residents we'll complete ~50,000 EPA assessments/year across Canada.
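The back-of-envelope arithmetic behind that ~50,000 figure can be sketched as follows (the per-resident rate and the five-year program length are illustrative assumptions, not official counts):

```python
# Back-of-envelope estimate of national EPA assessment volume.
# All figures are illustrative assumptions from the thread above.
residents_per_cohort = 70      # ~70 Canadian EM residents per cohort
program_length_years = 5       # assumed length of an EM residency
epas_per_resident_year = 150   # midpoint of the 100-200 range above

total_residents = residents_per_cohort * program_length_years
annual_epa_assessments = total_residents * epas_per_resident_year
print(annual_epa_assessments)  # 52500, i.e. roughly 50,000 per year
```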
The obligatory objectives slide along with a cool picture of the Canadian EM leaders that built our CBD (Competence By Design) CBME assessment program.
The movie 'Moneyball' described how Billy Beane revolutionized the game of baseball using data analytics. Similar metrics are now widely used to optimize athletic training and performance. Could we use our assessment data to do the same? (shoutout to @sherbino for the ref)
In #MedEd we haven't always had a lot of performance data to analyze. But now we do. How can we use learning analytics to safely and ethically support our residents and improve our programs and system?
We started this at @USaskEM by building a super cool (I am very biased) dashboard to effectively display CBME assessment data for our residents and competence committee. Papers: ncbi.nlm.nih.gov/pmc/articles/P…
Video:
Then we built faculty development, program evaluation, and oversight dashboards (not yet published but demo'd in the presentation and here's the code for the whole thing! github.com/kiranbandi/cbd…) using design-based research - pic is of the oversight dashboard. cc @venkatkarthikb
Analyzing and presenting the data in this way makes it easier for our program to understand what is happening and how we (our residents, faculty, and program) can improve. We can also aggregate data across programs for further insights...
Or track how residents progress through their stages of training (notably, we thought they'd be done TTD (Transition to Discipline) by October and Foundations by July). (ref: meridian.allenpress.com/jgme/article-a…)
But that's just the beginning. This field has so much potential to inform and improve what we do in #meded. As we collect more data and learn how to analyze it more effectively I can imagine a time when...
Our learning management systems flag learners in difficulty based on the patterns in their assessment data just like this ECG machine flagged this as a STEMI (was it right though? further interpretation required)
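One hypothetical way such a flag could work (a minimal sketch under assumed criteria, not any real system's logic): compare a resident's most recent entrustment ratings against a threshold and flag a sustained drop.

```python
from statistics import mean

def flag_learner(ratings, window=10, threshold=3.0):
    """Flag if the mean of the most recent `window` entrustment
    ratings (e.g. on a 1-5 O-score scale) falls below `threshold`.
    Purely illustrative; a real system would need validated criteria
    and, like the ECG's STEMI flag, human interpretation afterwards."""
    if len(ratings) < window:
        return False  # not enough data to judge
    return mean(ratings[-window:]) < threshold

# A resident whose recent scores have slipped gets flagged:
print(flag_learner([4, 4, 5, 4, 4, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3]))  # True
```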
Or our assessment apps prompt our faculty on how they can improve their feedback at the 'point of assessment' in the same way this helpful system makes sure that our passwords are strong. (e.g. could you describe what you saw? could you suggest how they could improve?)
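A password-strength-style nudge for assessment comments could be sketched like this (the heuristics and function name are hypothetical examples, not a validated feedback rubric):

```python
def feedback_prompts(comment: str) -> list[str]:
    """Return point-of-assessment prompts for thin feedback comments.
    Heuristics are illustrative only, in the spirit of a
    password-strength meter for narrative feedback."""
    prompts = []
    if len(comment.split()) < 15:  # very short = probably not descriptive
        prompts.append("Could you describe what you saw?")
    if not any(w in comment.lower() for w in ("next time", "try", "suggest", "improve")):
        prompts.append("Could you suggest how they could improve?")
    return prompts

print(feedback_prompts("Good job."))
# ['Could you describe what you saw?', 'Could you suggest how they could improve?']
```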
It may even be possible to build on early work on equity in assessment to identify biased assessors, programs, specialties, or institutions (it may be easier to improve it if we can quantify it).
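Quantifying assessor stringency might start with something as simple as comparing each assessor's mean rating to the overall mean (a toy sketch with made-up data; real equity work would need case-mix adjustment and demographic analysis):

```python
from collections import defaultdict
from statistics import mean

def assessor_deviations(records):
    """records: list of (assessor_id, rating) pairs.
    Returns each assessor's mean rating minus the overall mean --
    a crude first look at hawk/dove tendencies (illustrative only)."""
    overall = mean(r for _, r in records)
    by_assessor = defaultdict(list)
    for assessor, rating in records:
        by_assessor[assessor].append(rating)
    return {a: mean(rs) - overall for a, rs in by_assessor.items()}

data = [("drA", 5), ("drA", 4), ("drB", 2), ("drB", 3)]
print(assessor_deviations(data))  # {'drA': 1.0, 'drB': -1.0}
```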
Maybe someday we could even integrate learning analytics with practice analytics and clinical outcomes to see how our assessments align with the quality of the care that we provide. We'll need to collaborate with those QI leaders!
Of course, there will be MANY CHALLENGES, and getting to this utopia (or dystopia?) ethically and safely is going to be difficult. MUCH more on this from @TChanMD and @drellaway in a future issue of @AcadMedJournal.
Moving forward, we are more likely to be successful if our leaders learn about and get on top of this field, our scholars and researchers collaborate and share their innovations, and our ed tech people can facilitate this work by providing usable and secure data.
Thanks for making it to the end! The full presentation has more details, a Q&A with @drjfrank @thorsley_handle @smoffattbruce1, and a sneak peek at a lot of what my collaborators and I are thinking/writing about. HUGE shoutout to @TChanMD for contributing to all of these ideas.
Hey @SkGov, am I missing something, or do your new #covid19sk vaccination plans contradict the Canadian NACI recommendations and previous @saskhealth plans by deprioritizing immunizations for healthcare workers?? I'm worried about exposures to #COVID19 in these HCWs :(
I don't seem to be missing anything... @SkGov, why??? Exposures happen in frontline healthcare environments not currently classified as 'high-risk settings' and one COVID+ case can knock a cohort of healthcare workers into isolation. We don't have spare HCWs. Please reconsider.
Initially, I was confused. Now I'm getting angry as I hear from disheartened colleagues seeing #COVID19 patients daily in high-risk contexts. They are lumped in a risk category with others 'their age.' We need to take care of them as they take care of COVID patients for us.
Many believe we are overreacting, #COVIDsk is just the flu, only the elderly/those with comorbidities die, we can protect the vulnerable, if you survive it you are fine, your activities can continue safely, and it will not end up being as bad as we are making it out to be. 1/10
The SK Health Minister suggested this week that we could still follow their 'optimistic' modeling scenario and may be able to relax restrictions over Christmas. None of this is true. The optimistic scenario is clearly not happening. 2/10
The realistic models (which have not been clearly presented to the public) show that we are tracking a scenario that leads to cancelled surgeries and overwhelmed emergency departments, wards, and ICUs. A scenario that leads to a lot of unnecessary death. 3/10