The @usnews rankings are always questionable at best, but this year's newly released version, which ranks #publichealth sub-disciplines, is particularly egregious. Time for a 🧵.
For the first time, the rankings include not only an overall ranking of schools of public health, but also of disciplines within public health including biostatistics, epidemiology, environmental health, health policy, and social/behavioral sciences.
Sounds great, right? Except instead of being ranked by, you know, experts within each of these disciplines, the rankings were done primarily by deans and "other academics".
The upshot (or downshot, depending on your view) is that each subdiscipline's rankings are nearly a perfect match for the overall public health rankings, a level of uniformity across specialty areas that defies common sense. Some examples:
Johns Hopkins
Harvard
BU
Yale
Pitt (side travesty: @pittbiostat, a well-established department, isn't even ranked!)
There's more ridiculousness when you dig into biostatistics, which was ranked by actual experts (department chairs) last year. The decision to rank biostat as a pub hlth "sub-discipline" this year means great programs based in a med school, like @UPennDBEI, barely make the list.
You know something is very wrong when the correlation between the program rankings of biostat and environmental health programs this year is (much) higher than the correlation between last year's and this year's biostat rankings.
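To make that comparison concrete, here is a minimal Python sketch of the kind of check I mean, using Spearman rank correlation. The rank vectors below are purely hypothetical (six made-up programs), not the actual US News data; the point is only to show how one could compare same-year cross-discipline agreement against year-over-year agreement within biostat.

# Illustrative only: hypothetical ranks for six programs (lower = better).
from scipy.stats import spearmanr

biostat_this_year    = [1, 2, 3, 4, 5, 6]   # this year's biostat ranks
env_health_this_year = [1, 2, 3, 5, 4, 6]   # this year's environmental health ranks
biostat_last_year    = [2, 1, 5, 3, 6, 4]   # last year's biostat ranks

rho_cross, _ = spearmanr(biostat_this_year, env_health_this_year)  # across disciplines, same year
rho_year,  _ = spearmanr(biostat_this_year, biostat_last_year)     # same discipline, across years

print(f"biostat vs. env health (same year):   {rho_cross:.2f}")
print(f"biostat vs. biostat (year over year): {rho_year:.2f}")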
Yes, rankings aren't everything, but they are an important entry point for students considering graduate study in public health, especially for students at smaller colleges without an academic public health footprint.
When we erase the distinctions between the various sub-specialties within public health by basing rankings on the opinions of those without specialty knowledge, we do a disservice not only to the programs themselves but also to those students.
We (epi & biostat folks) have a major conflict of interest when it comes to evaluating COVID research that I’m not sure we’ve fully acknowledged. 1/
Mostly, when we assess studies or evidence, we are at arm's length from the problem we're studying. Yes, we may know people who have the condition in question, but our findings don't have direct implications for our own lives. 2/
Indeed, we are justifiably skeptical of whether someone who is heavily invested in the outcome of a study can be objective in evaluating its quality. This is why we have COI declarations. 3/
In 2 weeks, @PublicHealthUMN will remove its institution code from ETS, going #GRExit for all programs. Our decision was largely based on the results of a RANDOMIZED assessment of how GRE scores influence admissions decisions.
What we did and what we found: a thread. 1/n
Quick #GRExit background: there is published literature looking at whether GRE scores predict success in grad school. Most show it doesn't (much), but it's tough to define/measure "success", and selection bias clouds interpretation of study results. 2/n
We decided to ask a simpler question: does seeing the GRE score actually affect how admissions committee members score an application?
Now *that's* a question we can design a randomized study to answer! 3/n
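For readers wondering what that kind of design could look like, here is a toy Python sketch. The numbers (400 reviews, a 0-10 score scale, a small simulated shift) are invented for illustration; this is not the actual @PublicHealthUMN protocol, just the general idea of randomizing whether a reviewer sees the GRE score and then comparing the resulting application scores between arms.

# Toy sketch: randomize GRE visibility across reviewer-by-application scores,
# then compare mean scores in the two arms.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_reviews = 400                                              # hypothetical number of reviews
sees_gre = rng.integers(0, 2, size=n_reviews).astype(bool)   # randomized masking indicator

# Hypothetical application scores on a 0-10 scale; the GRE-visible arm gets
# a small simulated shift purely so the comparison has something to detect.
scores = rng.normal(loc=6.0, scale=1.5, size=n_reviews)
scores[sees_gre] += 0.3

t_stat, p_val = ttest_ind(scores[sees_gre], scores[~sees_gre])
print(f"mean score, GRE visible: {scores[sees_gre].mean():.2f}")
print(f"mean score, GRE masked:  {scores[~sees_gre].mean():.2f}")
print(f"two-sample t-test p-value: {p_val:.3f}")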
Grad school application season is just around the corner. So, it’s time for a thread on #grexit. Spoiler: I’m on the fence. Here’s why. 1/
First, let’s limit the conversation: I want to talk about #grexit for PhD admissions in (bio)statistics. Why just PhD? Admit rates are (much) lower, and the “financial barrier” argument for #grexit is more relevant for a fully-funded program. 2/
Next, my experience: I am the Director of Graduate Studies (DGS) and sit on the admissions committee at @umnbiostat. We receive ~180 PhD applicants each year, and make ~25 first-round offers. All members of the adcom score every PhD applicant; there are no “automatic rejects” based on grades or test scores. 3/