Carl T. Bergstrom
#BlackLivesMatter Information flow in bio, society, & science. Book *Calling Bullshit*: https://t.co/RJrqkYSrwM I love crows and ravens. he/him
4 Jun
This is beyond outrageous behavior by @dartmouth's @GeiselMed school.

Without warning students, they used activity logs from Canvas, an online course management system not intended for forensic use, to dragnet for cheating on exams.

The problem is...

nytimes.com/2021/05/09/tec…
...a system like Canvas can generate activity even when a user is not at the keyboard, so long as they remain logged in.

More generally, Canvas activity logs were never designed to be used for forensic analysis and cannot be trusted for such purposes.
In one of the stupidest quotes I've ever read from a university administrator, @GeiselMed dean Duane Compton admits that these data generated numerous false positives—and has the gall to suggest that this means the system is working.
31 May
It's interesting to look back on the things that I got wrong over the course of the COVID pandemic, and to understand why.

I think I got a fair bit right as well—perhaps most notably in being the first to point out the problems in the IHME model...
...and in arguing early on about the futility of a natural herd immunity strategy.

But let's look at what I got wrong, roughly in order, and why. In almost every case my mistake was in anchoring too strongly on influenza.
1. Early on I was skeptical that R0 was >3 instead of <2.

This was anchoring on flu directly, and also on what the epidemic curve seemed to show. You can't tell from the epidemic curve whether you have a higher R0 and a longer generation interval, or a lower R0 and a shorter generation interval.
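To make that concrete, here's a back-of-envelope sketch (my numbers, purely illustrative): with a fixed generation interval Tg, a curve growing at rate r implies R0 ≈ exp(r·Tg), so the same curve is consistent with a flu-like R0 under 2 or an R0 over 3.

```python
import math

# Hypothetical numbers for illustration only.
doubling_time = 4.0              # days; an early-epidemic doubling time
r = math.log(2) / doubling_time  # exponential growth rate per day

for Tg in (3.0, 7.0):            # flu-like vs. longer generation interval, in days
    print(f"Tg = {Tg:.0f} d  ->  implied R0 = {math.exp(r * Tg):.2f}")
# Tg = 3 d  ->  implied R0 = 1.68
# Tg = 7 d  ->  implied R0 = 3.36
```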
31 May
Online exam proctoring software is bullshit.

Instructors, don't use it.

Students, if you've been forced to use Proctorio or other academic spyware, consider contacting the dean of students or dean of undergraduate education at your college.

newyorker.com/tech/annals-of…
The article above is so horrifying that it's hard to know what to pick out to highlight.

By no means the worst part of the article, but if your clients don't know how to use the bullshit software you pushed on them and end up causing active harm, that's on you, not on them.
Early in the pandemic I wrote a long thread about the harms these kinds of programs cause.

It's a bit tl;dr, but I'd encourage instructors and students to explore the problems with academic spyware in more detail:

24 May
This is your weekly reminder that "up to" means something different than "at least".

BMPCs were sampled 7-8 months after infection and remained present at that time. This sets a lower bound, not an upper bound, on persistence.
Here's a simple cheat sheet.
Indeed this very paper sampled a subset of the patients again at 11 months, and found the BMPC levels stable in almost all of them, providing direct evidence of persistence beyond 7 months.
22 May
No, this paper doesn't show that COVID antibodies are lost within a year.

The title could certainly be better, because it does provide a misleading impression if you don't read any further.

medrxiv.org/content/10.110…
If you look at the text, though, you'll see that the "up to" is not meant to set 12 months as an upper bound on persistence; it's the period of observation.
Most importantly, let's look at the data they present. Antibody titers are substantial after 12 months, and there is some suggestion that they may be stabilizing.
20 May
In light of the recent paper claiming to provide "initial evidence for bullshit ability as an honest signal of intelligence", I think it's useful to talk a bit about what a signal is, as compared to a cue.
Let's start with Grice's distinction between natural and non-natural meaning, in his 1957 paper entitled simply "Meaning".

Compare:

"Storm clouds mean rain."

and

"The symbol ♂ means male".

(Here I loosely follow my paper with @KevinZollman et al sciencedirect.com/science/articl…)
"Storm clouds mean rain" involves what Grice calls natural meaning.

This is a naturally arising correlation that an observer can use to learn about unobserved features or predict future happenings. No intent is involved; storm clouds don't form to tell us about impending rain.
19 May
Changing denominators are a bear.

This is a comparison of current reservation occupancy, at the places that are open and taking reservations, to those same places' pre-pandemic occupancies.

It doesn't tell us anything like the whole story. The places that went out of business are not counted.
This matters because (1) with fewer open restaurants, we expect an increased demand on the remaining venues, and (2) there is a selection effect here in that being included in the sample is correlated with having done well during the pandemic (instead of going out of business).
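A made-up illustration of the denominator problem (these numbers are mine, not the article's): even if total dining demand is down, occupancy per open venue can look better than before once the closed venues drop out of the denominator.

```python
pre_venues, open_venues = 100, 70     # 30 restaurants closed for good
pre_diners = pre_venues * 60          # 6,000 diners a night pre-pandemic
now_diners = int(pre_diners * 0.8)    # total demand still down 20%

print(pre_diners / pre_venues)        # 60.0 diners per open venue before
print(now_diners / open_venues)       # ~68.6 per open venue now, "above pre-pandemic levels"
```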
Maybe more importantly, the same article looks at overall traffic including walk-ins, as opposed to just reservations.

This has not recovered even given the caveats above.
19 May
There is a new paper out which claims that the ability to bullshit is an honest signal of intelligence.

journals.sagepub.com/doi/pdf/10.117…

I have thoughts.
Imagine a colleague came to you with a purported explanation for fighting ability among territorial vertebrates.

“Over the eons,” he claims, “the ability to kick ass has been selected because it is an honest signal of the ability to kick ass.”
I hope it would be transparent to you that an honest signaling story is unnecessary here. The ability to kick ass is selected because one can then kick the asses of those whose asses need kicking, and no signaling is needed.
15 May
Shame on MIT for what, exactly?

Posting a preprint with a grad student first author?

Since I work on disinformation, I read it when it came out. It's quite good. Have you read it? What aspects of the paper (as opposed to the authorship) do you have a problem with?
You do realize that the final two authors are well-known MIT professors, right? I don't think it matters—but they have *worked hard and written a lot of papers to earn recognition.*

What does any of this have to do with John Ioannidis? He's not mentioned anywhere in the paper.
This is a vile post, Michael.

The thing about careers is that every single one of us had a first paper, a second paper, etc.

NONE of us had a Nobel Laureate criticize us for daring to publish at that early stage.
13 May
When you extrapolate from data about within-group values to the existence of between-group differences.

Via @miketaddow
(To explain a bit more, the people ranking BBQ joints in Seattle are not the same people ranking them in Brownsville, TX. These data tell us that Seattleites are nice when they rank things and/or have low standards for BBQ, not that we are a contender on the national stage.)
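If you want to see the mechanism, here's a toy simulation (entirely invented numbers): identical BBQ quality in both cities, but more lenient raters in one, and the averages diverge anyway.

```python
import random
random.seed(1)

true_quality = 3.5                                  # identical in both cities by construction
rater_leniency = {"Seattle": 0.8, "Brownsville": -0.3}

for city, bias in rater_leniency.items():
    scores = [min(5.0, max(1.0, random.gauss(true_quality + bias, 0.5))) for _ in range(500)]
    print(city, round(sum(scores) / len(scores), 2))
# The gap reflects the raters, not the brisket.
```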
I now really want to see the rankings for best pizza, using the same absurd metric.
12 May
Before anyone panics, note (1) the selection bias arising because this example was picked out of the various outbreak case reports as being "worrisome", (2) the small sample size, and (3) these numbers still give you a point estimate of 84% effectiveness against infection.
In a bit more detail: As small case clusters arise and are reported worldwide, we expect to see a distribution of effectiveness estimates. Some will have more vaccinated cases by chance, some fewer. The smaller the clusters, the wider the distribution.
Singling out a small cluster that yields a low effectiveness estimate for some variant of concern—and ignoring all the other data on that variant of concern everywhere else in the world—is reckless, and, odds are, misleading.
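Here's a quick simulation of that point (hypothetical coverage and effectiveness, not the numbers from any specific outbreak): hold the true effectiveness fixed at 90% and the naive estimates from 20-case clusters still scatter widely, so cherry-picking the worst-looking cluster is bound to mislead.

```python
import numpy as np

rng = np.random.default_rng(0)
coverage, true_ve, cluster_size = 0.85, 0.90, 20   # hypothetical values

# Probability that any single case in a cluster is a vaccinated person.
p_case_vax = coverage * (1 - true_ve) / (coverage * (1 - true_ve) + (1 - coverage))

vax = rng.binomial(cluster_size, p_case_vax, size=10_000)   # vaccinated cases per cluster
unvax = np.maximum(cluster_size - vax, 1)
est_ve = 1 - (vax * (1 - coverage)) / (unvax * coverage)    # naive per-cluster estimate

print(np.percentile(est_ve, [2.5, 50, 97.5]))   # wide spread around the true 0.90
```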
4 May
In today's much-discussed @nytimes story from @apoorva_nyc (nytimes.com/2021/05/03/hea…) there is a graph that I find quite problematic. It purports to show county-level data about vaccine hesitancy.
But look at how sharp those state boundaries are.

One of the key insights from our @callin_bull course is that in the real world, data are messy. And if they come out too clean, something is wrong.

This one screams that something is wrong.
So what's going on here?

The @nytimes graphic appears to come from this HHS/CDC report.

aspe.hhs.gov/pdf-report/vac…
30 Apr
Today a story has been going around about a cluster of B.1.617 cases in Israel. This is the India-associated strain.

Unfortunately, this is in some places being spun as a possible example of vaccine escape. But the numbers suggest exactly the opposite!

timesofisrael.com/children-from-…
Here are the numbers.

24 with recent travel history
17 with no travel history
5 children
4 vaccinated

Approximately 85% of the adult population in Israel has been fully vaccinated. So what does this tell us about vaccine effectiveness against B.1.617 in adults?
I'll just do point estimates.

Assume the 5 children were <16 and thus unvaccinated.

That gives us 32 cases among unvaccinated adults, and 4 cases among vaccinated adults.

The basic calculation for effectiveness then gives us a remarkable 98% against B.1.617.
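For anyone who wants to check the arithmetic, here is that point estimate spelled out in a few lines of Python (same numbers as above; point estimate only, no confidence interval):

```python
coverage = 0.85                            # share of Israeli adults fully vaccinated, as stated above
vax_cases, unvax_cases = 4, 32

attack_vax = vax_cases / coverage          # cases per unit share of vaccinated adults
attack_unvax = unvax_cases / (1 - coverage)
ve = 1 - attack_vax / attack_unvax
print(f"{ve:.0%}")                         # 98%
```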
27 Apr
1. Today’s antivax propaganda comes from a… vaccine manufacturer?

Unfortunately, yes. The manufacturer of the Sputnik V vaccine is tweeting absolute nonsense statistics in an effort to question the safety record of its competitors.
2. Their unfounded claim is that we are observing higher death rates among Pfizer recipients.

This is rubbish. In our book, we address the way in which people will try to bamboozle you with the unwarranted authority of numbers by throwing lots of stats at you.
3. But statistics (1) are only as good as the methods used to derive them, and (2) are only useful when they allow you to make fair and meaningful comparisons.

The Sputnik V numbers fail spectacularly on both counts.
27 Apr
Osprey and dinner
Crows arrive on the scene.

"Wait, how much will you give me if I ride him?
The approach.
24 Apr
Genomics and the poetry of racist injustice:

Let's start with the poetry, because if you read that, it doesn't matter one iota whether you make the connection to genomics.

Please, please take a moment and read this. Slowly, aloud, and more than once.

newyorker.com/magazine/2020/…
What does this have to do with genomics?

To pack a huge amount of information into very small genomes, viruses make use of overlapping reading frames. From Bergstrom and Dugatkin (2016), the HBV genome:
We present an extremely stupid example of what this would look like using three-letter English words instead of codon triplets.
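If you'd rather see the mechanics than the English-word version, here's a minimal sketch with a made-up 12-nucleotide sequence (not the HBV genome, and using only the codons it happens to contain): the same letters, read in two frames, spell two different peptides.

```python
# Only the codons this toy sequence uses; a real table has all 64.
CODONS = {
    "ATG": "M", "CCT": "P", "GAA": "E", "AGT": "S",
    "TGC": "C", "CTG": "L", "AAA": "K",
}

def translate(seq: str, frame: int) -> str:
    """Translate seq starting at offset frame, dropping any trailing partial codon."""
    return "".join(CODONS[seq[i:i + 3]] for i in range(frame, len(seq) - 2, 3))

seq = "ATGCCTGAAAGT"           # 12 made-up nucleotides
print(translate(seq, 0))       # MPES: the frame-0 message
print(translate(seq, 1))       # CLK:  same letters, shifted by one, different message
```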
23 Apr
In the '20s, without the consent of the parents, an Ivy League school actually used the remains of African American children murdered by the state as a teaching tool.

No, NOT the 1920s. This very year.

theguardian.com/us-news/2021/a…
TW: Disturbing disregard for the deceased.
“'Nobody said you can do that, holding up their bones for the camera. That’s not how we process our dead. This is beyond words. The anthropology professor is holding the bones of a 14-year-old girl whose mother is still alive and grieving,' Michael Africa Jr said."
19 Apr
This is a very nice thread about a different way to teach and use Bayes’s rule. I’ve always found this odds ratio framing much more intuitive.
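For anyone who hasn't seen it, the framing in one line is posterior odds = prior odds × likelihood ratio. A standard worked example (my toy numbers, not taken from the linked thread):

```python
prevalence = 0.01                        # 1% base rate for the condition
sensitivity, specificity = 0.99, 0.95    # hypothetical test characteristics

prior_odds = prevalence / (1 - prevalence)            # about 1 : 99
likelihood_ratio = sensitivity / (1 - specificity)    # 0.99 / 0.05 = 19.8
posterior_odds = prior_odds * likelihood_ratio        # = 0.2
print(posterior_odds / (1 + posterior_odds))          # ~0.17: still only ~1 in 6 after a positive test
```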
Since people are still on about that headline: my problem with it is that it suggests the lazy "arcana" narrative about a piece of obscure math that turned out to be useful, whereas the article does a nice job of explaining Bayes's rule as a foundational piece of probability.
The real key to me is for whom the word "obscure" is intended. No one would refer to the mRNA in an mRNA vaccine as "an obscure alternative form of genetic material", even though most readers (pre-2021) would not know the term.
18 Apr
Yes, because it's common knowledge that a multiple murderer fleeing the cops and likely facing the death penalty in Texas will take every possible precaution to avoid injuring members of the general public.
Citizens in Wisconsin this afternoon should feel similarly secure in the knowledge that their mass shooter on the loose is not a threat either.
Thirteen year old boy: threat.

Handcuffed detainee: threat.

Unarmed 90 pound grandmother: threat.

Former police marksman accused of child sex abuse, on the lam after murdering three: no risk to the general community.
11 Apr
1. The more I think about it, the more astonished I become that @wileyinresearch removed a lengthy section of a published paper without any formal notice.

2. For 20 years now, commercial publishers have been aggressively attacking preprint culture as risky and unreliable, while claiming that only formal publication can provide a trusted, authenticated version of record (VOR).
3. Industry mouthpiece @scholarlykitchn has been banging away at this theme since its inception. Just this week, we read that:

scholarlykitchen.sspnet.org/2021/04/05/pub…
8 Apr
1. Thread: Proactive testing in a partially vaccinated population.

I will start with a disclosure. The work described was done in collaboration with @Color Health and I was paid as a consultant for my efforts. I have no financial stake in COVID tests, treatments, or vaccines.
2. Large-scale proactive testing has been an important COVID control measure, because it identifies those who are presymptomatic, asymptomatic, or paucisymptomatic and allows them to self-isolate.

As vaccination becomes widespread, two questions arise:
3. First, at what level of vaccine coverage is proactive testing no longer necessary?

Second, as we transition to this point, what are best practices for tapering off testing efforts?

I explored these questions with @RS_McGee, @ay_zhou, @jrhomburger, and @hewillia34.
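To preview why the answer depends on coverage, here is a toy back-of-envelope (emphatically not the model from the paper described above, and every parameter value here is invented): testing matters most in the window where vaccination alone cannot hold R_eff below 1.

```python
R0 = 3.0           # hypothetical basic reproduction number
VE = 0.9           # hypothetical effectiveness against transmission
testing_cut = 0.3  # hypothetical fractional reduction in transmission from proactive testing

for coverage in (0.3, 0.5, 0.7, 0.9):
    r_vax_only = R0 * (1 - coverage * VE)
    r_with_testing = r_vax_only * (1 - testing_cut)
    print(f"coverage {coverage:.0%}: R_eff {r_vax_only:.2f} without testing, {r_with_testing:.2f} with")
# At 70% coverage, testing is what pushes R_eff below 1; by 90%, it no longer matters for that threshold.
```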