The discussion about ATLAS related to #DNRTulane made me realize that all programs can and should be more transparent with the processes they use to interview and rank applicants. Here's the process @MedPedsUMN. 1/9
When applications come in, the APDs and I review them and assign scores based on clinical grades in Medicine and Pediatrics, code Step 1 as pass/fail, and assign points for Community Service, Advocacy, and Engagement; Leadership; and Research. 3/9
For example, if somebody has multiple longitudinal community service, advocacy, or engagement experiences, they get points equivalent to an honors grade in a clerkship. 4/9
We also have codes for life accomplishments, GHHS, and MSPE ranking, but the latter two aren't scored; they're just used as modifiers. 5/9
Lastly, we have a "Better than Application" category that allows us to flag folks who have good grade trajectories or other aspects of their application that are particularly strong but don't make it into the coding schema. 6/9
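As a rough illustration only (this is not the program's actual code; the point values, category names, and data layout here are all hypothetical), the rubric described above could be sketched as:

```python
# Hypothetical sketch of a rubric-style scoring schema. Category names follow
# the thread; the specific point values and the Application fields are assumed.
from dataclasses import dataclass, field

HONORS_POINTS = 3  # assumed: points equivalent to an honors clerkship grade

@dataclass
class Application:
    medicine_grade_points: int    # from clinical grade in Medicine
    pediatrics_grade_points: int  # from clinical grade in Pediatrics
    step1_pass: bool              # Step 1 coded as pass/fail, not scored
    service_points: int           # Community Service, Advocacy, and Engagement
    leadership_points: int
    research_points: int
    # Modifiers and flags (e.g. GHHS, MSPE ranking, "Better than Application")
    # are recorded but do not feed into the numeric score.
    flags: set = field(default_factory=set)

def score(app: Application) -> int:
    """Sum only the scored categories; pass/fail and flags act as modifiers."""
    return (app.medicine_grade_points
            + app.pediatrics_grade_points
            + app.service_points
            + app.leadership_points
            + app.research_points)
```

The key design point is that some signals (Step 1, GHHS, MSPE ranking) are captured without being converted into points, so they can contextualize a score rather than dominate it.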
For interviews we use a combination of standardized questions (which is a best practice in terms of mitigating bias) and free-form discussion. 7/9
At the rank meeting we have interviewers break up into groups and create mini-rank lists that inform the final rank list. The idea is that we want to get as many people from a range of backgrounds looking at each application, again, to mitigate bias. 8/9
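The thread says the mini-rank lists "inform" the final list rather than determine it mechanically. One plausible way to summarize them (purely an illustration, not the program's stated method) is to average each applicant's position across the groups' lists, Borda-count style:

```python
# Hypothetical illustration: combine several mini-rank lists into one ordering
# by average position. The actual aggregation method is not specified in the
# thread; this is just one common approach.
from statistics import mean

def combine_mini_ranks(mini_lists):
    """mini_lists: lists of applicant names in ranked order (best first).
    Returns applicants sorted by their average position across the lists."""
    positions = {}
    for ranking in mini_lists:
        for pos, name in enumerate(ranking):
            positions.setdefault(name, []).append(pos)
    return sorted(positions, key=lambda name: mean(positions[name]))
```

Having multiple small groups rank independently, then reconciling, is itself a bias-mitigation step: no single reviewer's ordering drives the outcome.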
I then take all of this information and create a rank list and review it with my APDs. Much more goes into this than 9 tweets can capture, but it at least gives a sense of our process, e.g., interviewers and I read the entire application, including LORs and personal statements. 9/9