What I learned from years on graduate admissions committees is that they don't predict success - they determine it. Everyone has a pet theory - rarely based on evidence, and never on good evidence - about what makes a successful student.
And, because the admitted pool is enriched for students who meet whatever criteria happen to be in the ascendancy, and because some of these students succeed, we convince ourselves that we were right and keep doing it.
I'm not saying that everyone is equally likely to succeed in graduate school in its current form, or that there are no predictors of success. I am saying we don't - and, given our methods, can't - know with confidence what they are.
And I don't claim to be free of this - I have things I look for (people who succeeded in situations where success was not handed to them) - but that's predicated as much on liking to work with such students and wanting to give them a chance as on the belief that they'll do well.
Which brings me to a hypothesis that I wish I had data to interrogate - that we as individual scientists are better at identifying students who will do well as our individual trainees than we are at identifying students who will do well in a generic lab.
And thus I've grown increasingly uncomfortable with the workings of admissions to the large umbrella programs that dominate molecular biology.
I see the advantages of organizing graduate training this way for the students - but in my experience the admission process tends toward a consensus-based uniformity in the types of students who get opportunities, one that is counterproductive for both students and science.
And I think that homogenization is a product of people thinking they know how to identify people who will succeed “in graduate school” as opposed to in their labs.
Hence I am extremely leery of any effort anywhere to survey opinion and thereby build broader consensus on "what predicts success in graduate school", especially when it is based, at best, on anecdata.
it's a cartoon explaining a classic result in microbial evolutionary biology that (largely) resolved the question of whether selection acts on preexisting variation or whether selection itself induces mutations to occur (it won Salvador Luria and Max Delbrück a Nobel Prize)
the idea is as follows - you take a population of cells, divide them equally into a bunch of tubes, and let them grow for several generations - then you pour the cells onto plates, apply some selective pressure, and count the number of colonies that grow
in the original experiment the selective pressure was exposure to a lethal virus, but it can be, and has been, repeated under almost any condition where growth of the bacteria requires a mutation not present in the original cell
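the logic of the experiment can be sketched in a short simulation (this is my illustrative toy model, not Luria and Delbrück's actual analysis; the generation count and mutation rate are made-up parameters chosen so the contrast is visible) - if mutations arise spontaneously during growth, early "jackpot" mutations inflate the tube-to-tube variance far beyond the mean, whereas if selection induces mutations at plating time, counts are Poisson-like with variance roughly equal to the mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuation_culture(generations=15, mu=1e-4):
    """One tube under the preexisting-variation model: grow from a single
    wild-type cell, with mutations arising spontaneously during growth."""
    wt, mut = 1, 0
    for _ in range(generations):
        wt *= 2                          # wild-type cells divide
        mut *= 2                         # existing mutants divide too
        new_mut = rng.binomial(wt, mu)   # some daughters mutate this round
        wt -= new_mut
        mut += new_mut
    return mut

def induced_culture(generations=15, mu=1e-4):
    """One tube under the rival induced-mutation model: no mutants arise
    during growth; each of the final cells mutates independently only when
    selection is applied. The per-cell probability mu * generations is
    chosen to match the expected mutant count of the model above."""
    n_final = 2 ** generations
    return rng.binomial(n_final, mu * generations)

def fano(counts):
    """Variance-to-mean ratio: ~1 for Poisson-like counts, >>1 for jackpots."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

tubes = 500
ld = [fluctuation_culture() for _ in range(tubes)]
ind = [induced_culture() for _ in range(tubes)]

print(f"preexisting-variation model: Fano = {fano(ld):.1f}")
print(f"induced-mutation model:      Fano = {fano(ind):.2f}")
```

Luria and Delbrück observed the huge tube-to-tube fluctuations of the first model, which is why the experiment is known as the fluctuation test.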
Lots of discussion here, but I really don't think it's that complicated: routinely assigning population labels - especially socially constructed ones - to groups of individuals based on genetic data, or using such labels in genetic studies, reifies racism and abets racists.
That is not to say that the use of such labels is never scientifically justified, as @arbelharpak points out. But there should be a very high bar for their use, and it should be for very specific, clearly articulated purposes.
It is simply untenable to claim - correctly - that race is not a scientific concept, and then turn around and casually use race in papers as if it IS a real scientific entity. And substituting geographic labels for socially constructed race doesn't solve the problem.
I hope we get some more clarity from Whitehead about what led to Sabatini's dismissal. Was there overwhelming evidence that the institution couldn't ignore? Or does this represent a shift in the way institutions are handling harassment allegations against prominent faculty?
Obviously, full transparency is impossible if the people who spoke up are to be protected. But that has often been bogusly used by institutions as an excuse to provide zero transparency when they take no action, and I hope that doesn't happen in this case.
It is as important to demand transparency when institutions do act against their prominent faculty as it is when they don't. Because as much as I have faith in Ruth Lehmann as a person, I have zero faith in the institution she leads (or any academic institution for that matter).
A decade ago my close colleague in science and publishing Pat Brown came to me with some data on the climate impact of animal agriculture published by the UN (fao.org/3/a0701e/a0701…). This report (aspects of which are controversial) motivated me to begin looking at the issue.
Zoom ahead 12 years and I've finally had a chance to write up some work I've done myself on the problem that has convinced me that we are, if anything, underestimating the scale of the problem. A preprint describing the work is available here: biorxiv.org/content/10.110…
Since it seems it's "You need an SNC paper to get a job" season again, there are a couple of things about the faculty hiring system that seem often to get glossed over, and I'm curious what people think about them.
I want to start by stipulating that, in the US, there is no hard rule about what you "need" to get published, but there is, for sure, a strong correlation between publication record and faculty search success. What I'm interested in is why this correlation exists.
When discussing this fact, nearly everyone seems to jump from correlation to causation - assuming that people hired to faculty positions with SNC papers got their jobs *because* of those SNC papers. But what's the evidence that this is true?
There are good political/social reasons for wanting SARS-CoV-2 to have entered humans directly from animals, and many pushing the WIV lab accident hypothesis have nefarious intent. I am nonetheless surprised at the degree of confidence people express in a natural origin.
I've looked at a lot of the evidence, and, while the direct transfer from bats remains the strongest hypothesis, the case is far from airtight. And it might never be, because even if it were true, we'd be lucky to find evidence in wild bat populations that would erase all doubt.
And there is an at least plausible case for lab accident too, in that the virus first appeared in the rough vicinity of a lab that is studying precisely this kind of virus and doing the kind of experiments that, if something went wrong, would lead to disaster.