In a blinded name-swap experiment, black female high school students were significantly less likely to be recommended for AP Calculus compared to other students with identical academic credentials. Important new paper from @DaniaFrancis:
Some background: one of the best ways to collect real-world evidence of discrimination is through name-swapping "audit" studies. In these experiments, people are presented with job applications, resumes, mortgage applications, etc., that are identical except for the name…
The applicant’s name is varied to suggest the individual’s race/ethnicity/gender. Think “John” vs. “Juan” or “Michael” vs. “Michelle”.
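As a rough illustration of the design, here's a minimal sketch of a name-swap randomization; the name pools and groups below are invented for this example, not drawn from any particular study:

```python
import random

# Invented name pools signaling race/ethnicity/gender; real audit
# studies draw these from validated sources like birth records.
NAME_POOLS = {
    ("white", "male"): ["John", "Michael"],
    ("white", "female"): ["Jennifer", "Michelle"],
    ("hispanic", "male"): ["Juan", "Carlos"],
}

def assign_name(application, rng=random):
    """Attach a randomly chosen signaling name to an otherwise
    identical application."""
    group = rng.choice(list(NAME_POOLS))
    name = rng.choice(NAME_POOLS[group])
    return {**application, "name": name, "signaled_group": group}

# Every reviewer evaluates the same credentials; only the name varies.
base_application = {"gpa": 3.9, "sat": 1450}
print(assign_name(base_application))
```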
These audit studies have demonstrated significant discrimination in a variety of contexts. For instance, “John” is more likely to be hired than “Jennifer” for a scientific position, even if they have otherwise-identical resumes. pnas.org/content/109/41…
This new paper used an audit methodology to investigate something different - who gets tracked into an Advanced Placement math class. AP classes are heavily weighted in college admissions, so this choice can have significant ramifications for a student's future.
The researchers in this current study set up a booth at a national education conference and invited school counselors to review different student transcripts. The transcripts either had no name, or had a name to suggest the student’s race/gender.
The counselors were then asked how likely they were to recommend that the student take AP calculus.
The researchers found that when a transcript showing strong grades was given a black female name, counselors were 20% less likely to recommend the student for AP calc compared to an identical but anonymous transcript.
You can see that other gender/race combinations mostly cluster around a ratio of 1 - that is, they were recommended at about the same rate as the nameless transcript. But in three of four experiments, the black female student was less likely to be recommended for AP calc compared to the nameless transcript. “Black female” was significant in the pooled analysis as well:
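For intuition on what "cluster around 1" means, here is a sketch of how such a recommendation ratio can be computed from raw counts; the counts below are made up for illustration and are not from the paper:

```python
def recommendation_ratio(recs_named, n_named, recs_anon, n_anon):
    """Ratio of recommendation rates for a named transcript vs. the
    anonymous baseline: 1 means no difference, 0.8 means the named
    transcript is 20% less likely to be recommended."""
    return (recs_named / n_named) / (recs_anon / n_anon)

# Illustrative counts only (not the study's data):
print(recommendation_ratio(recs_named=48, n_named=100,
                           recs_anon=60, n_anon=100))  # -> 0.8
```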
These frustrating results underscore the prevalence of implicit biases even among school guidance counselors.
I think about these results in terms of the “cumulative advantage” theory of inequality: one decision (like taking AP Calc) may not be huge by itself, but a lifetime of being 20% less likely to be recommended for honors, promotions, etc. can add up to a lot: annualreviews.org/doi/abs/10.114…
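A back-of-the-envelope sketch of that compounding: if each gatekeeping decision is 20% less likely to go your way, the relative chance of clearing every gate shrinks geometrically (the decision counts here are arbitrary):

```python
ratio = 0.8  # 20% less likely at each gatekeeping decision

for n in (1, 5, 10):
    # Relative chance of clearing all n gates vs. a comparison student;
    # the per-gate baseline rate cancels out of this ratio.
    print(f"after {n:2d} decisions: {ratio ** n:.2f}x as likely")
```

Ten such decisions in a row leaves you about 0.11x as likely to have cleared them all.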
Angelika Amon passed away this morning. She was the greatest scientist I’ve ever met. This is a huge loss for her family, her friends, and for every biologist.
As a grad student with Kim Nasmyth and then an independent fellow at the Whitehead, Angelika changed our understanding of the cell cycle.
People thought that cell cycle kinases just got degraded at the end of mitosis, but she showed that regulated phosphatase activity was actually crucial to completing the cell cycle and re-entering G1:
In two weeks, the Nobel Committee at the Karolinska Institute will award the 2020 Nobel Prize in Physiology or Medicine.
Who will win? We don’t know for sure - but I think that we can make some educated guesses.
Science is dominated by a phenomenon called “the Matthew effect”. In short, the rich get richer. Getting one grant makes it more likely you’ll get the next. Winning one prize makes it more likely you’ll win another.
Here are the award rates for 11 different postdoc fellowships in 2019.
There’s a huge variation in success rates: four different organizations fund fewer than 6% of applications that they receive, while the success rates for the K99 and F32 are >24%.
To back up - my appointment at CSHL let me run a lab without doing a postdoc, so I never had the experience of applying for these grants. To help out my current postdocs, I wanted to make up for my lack of experience by doing some research.
I collected the award rates for each of these grants either from the org’s website or by emailing them directly. (I included an asterisk to indicate uncertainty. For instance, Beckman said they received “over” 150 applications, and I used 150 as the denominator).
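A sketch of that bookkeeping; the award counts below are placeholders, and the only figure taken from above is Beckman's "over 150" applications, recorded with 150 as the denominator:

```python
# (awards, applications, denominator_is_lower_bound)
# Placeholder award counts; Beckman's "over 150" applications -> 150,
# as described in the tweet above.
fellowships = {
    "Beckman": (10, 150, True),
    "ExampleOrg": (30, 500, False),
}

for name, (awards, apps, uncertain) in fellowships.items():
    rate = 100 * awards / apps
    flag = "*" if uncertain else ""  # asterisk marks an uncertain denominator
    print(f"{name}{flag}: {rate:.1f}%")
```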
Question: can anyone name a paper whose findings were challenged by a “matters arising” or “technical comment”-type rebuttal, but subsequent research proved that the original paper was actually correct?
One example: Charles Sawyers published that leukemia patients who relapsed on Gleevec developed ABL-T315I mutations.
Science then published 2 technical comments reporting that other groups didn't find this mutation in independent patient populations:
Larger surveys subsequently confirmed that T315I was a common (though not universal) cause of Gleevec resistance, T315I became the paradigmatic example of a “gatekeeper” resistance mutation, and Sawyers won the Lasker prize.
What happens to a paper submitted to a top journal?
Among a set of manuscripts sent out for review by Cell in 2018:
-33% were published in Cell
-26% were published in another Cell-family journal
-7% are still under review at Cell
-The median time to publication was 391 days
To back up: in 2018, Cell started the “Sneak Peek” program, in which authors had the option of posting a preprint of their manuscript if it was sent out for review by a Cell-family journal. cell.com/sneakpeek
Using this site, I found 46 papers that were sent out for review at Cell and posted on “Sneak Peek” between June 1st and Dec 31st, 2018. Each paper’s current status was also noted: “Published”, “Under review”, or “Review Complete” (a nice euphemism for “rejected”).
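Here's roughly how those summary numbers could be tallied; the records below are invented stand-ins for the actual 46 tracked papers:

```python
from collections import Counter
from statistics import median

# Invented stand-in records for the 46 tracked Sneak Peek papers.
papers = [
    {"status": "Published", "days_to_publication": 391},
    {"status": "Published", "days_to_publication": 420},
    {"status": "Under review", "days_to_publication": None},
    {"status": "Review Complete", "days_to_publication": None},
]

counts = Counter(p["status"] for p in papers)
for status, n in counts.items():
    print(f"{status}: {100 * n / len(papers):.0f}%")

days = [p["days_to_publication"] for p in papers
        if p["days_to_publication"] is not None]
print(f"Median time to publication: {median(days):.0f} days")
```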
Two whole-genome CRISPR screens for SARS-CoV-2 resistance are on bioRxiv.
**Among the top 100 hits in each screen, 99 are non-overlapping.**
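For reference, the overlap check itself is a one-line set intersection; the gene names below are placeholders standing in for each screen's actual top 100:

```python
# Placeholder hit lists; each set would hold a screen's top 100 genes.
wei_top100 = {"ACE2", "CTSL", "PLACEHOLDER_A"}
heaton_top100 = {"SRPK1", "PLACEHOLDER_B", "PLACEHOLDER_C"}

shared = wei_top100 & heaton_top100
print(f"{len(shared)} shared hits: {sorted(shared)}")
# For the two bioRxiv screens, this intersection held a single gene.
```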
Could cell type-specific differences explain this discrepancy? And if so, what’s the “right” way to study SARS-CoV-2 in culture?
A few thoughts: Wei recovered ACE2 as their #1 hit, which is strong evidence in favor of the biological validity of their screen.
(Heaton didn't recover ACE2, which they suggest is because their cells transgenically express ACE2 cDNA, though I don't get why that should matter.)
Wei also validated a large number of their top hits in individual CRISPR assays.
Heaton validated their top hit, the kinase SRPK1, by treating cells with an "SRPK1 kinase inhibitor", given at 50 µM! No way on earth that's specifically inhibiting a single kinase.