Cory Doctorow NONCONSENSUAL BLUE TICK
Aug 2, 2021 · 32 tweets
The worst part of machine learning snake-oil isn't that it's useless or harmful - it's that ML-based statistical conclusions have the veneer of mathematics, the empirical facewash that makes otherwise suspect conclusions seem neutral, factual and scientific.

1/ [Image: MAD Magazine's Alfred E. Neuman]
If you'd like an unrolled version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2021/08/02/aut…

2/
Think of "predictive policing," in which police arrest data is fed to a statistical model that tells the police where crime is to be found. Put in those terms, it's obvious that predictive policing doesn't predict what criminals will do; it predicts what POLICE will do.

3/
Cops only find crime where they look for it. If the local law only performs stop-and-frisks and pretextual traffic stops on Black drivers, they will only find drugs, weapons and outstanding warrants among Black people, in Black neighborhoods.

4/
That's not because Black people have more contraband or outstanding warrants, but because the cops are only checking for their presence among Black people. Again, put that way, it's obvious that policing has a systemic racial bias.

5/
But when that policing data is fed to an algorithm, the algorithm dutifully treats it as the ground truth, and predicts accordingly. And then a mix of naive people and bad-faith "experts" declare the predictions to be mathematical and hence empirical and hence neutral.

6/
Which is why @AOC got her face gnawed off by rabid dingbats when she stated, correctly, that algorithms can be racist. The dingbat rebuttal goes, "Racism is an opinion. Math can't have opinions. Therefore math can't be racist."

arstechnica.com/tech-policy/20…

7/
You don't have to be an ML specialist to understand why bad data makes bad predictions. "Garbage In, Garbage Out" (#GIGO) may have been coined in 1957, but it's been a conceptual iron law of computing since "computers" were human beings who tabulated data by hand.

8/
But good data is hard to find, and "when all you've got is a hammer, everything looks like a nail" is an iron law of human scientific malpractice that's even older than GIGO. When "data scientists" can't find data, they sometimes just wing it.

9/
This can be lethal. I published a @Snowden leak that detailed the statistical modeling the NSA used to figure out whom to kill with drones. In subsequent analysis, @vm_wylbur demonstrated that NSA statisticians' methods were "completely bullshit."

s3.documentcloud.org/documents/2702…

10/
Their gravest statistical sin was recycling their training data to validate their model. Whenever you create a statistical model, you hold back some of the "training data" (data the algorithm analyzes to find commonalities) for later testing.

arstechnica.com/information-te…

11/
So you might show an algorithm 10,000 faces, but hold back another 1,000, and then ask the algorithm to express its confidence that items in this withheld data-set were also faces.
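That holdout discipline fits in a few lines of Python. This is purely illustrative - the function name, the 10,000/1,000 split and the use of integers as stand-ins for labeled faces are my assumptions, not anyone's actual pipeline:

```python
import random

def holdout_split(data, holdout_frac=0.1, seed=0):
    """Shuffle, then carve off a slice the model never sees during training."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

samples = list(range(11_000))  # stand-ins for 11,000 labeled faces
train, test = holdout_split(samples)
print(len(train), len(test))  # 9900 1100: the test slice stays untouched
```

The only rule that matters is the disjointness of the two slices: accuracy on `test` measures generalization precisely because nothing in it was available at training time.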

12/
However, if you're short on data (or just sloppy, or both), you might try a shortcut: training and testing on the same data.

There is a fundamental difference between evaluating a classifier on new data and evaluating it on data it has already ingested and modeled.

13/
It's the difference between asking "Is this LIKE something you've already seen?" and "Is this something you've already seen?" The former tests whether the system can generalize from its training data; the latter only tests whether it can recall that data.

14/
ML models are pretty good recall engines! The NSA was training its terrorism detector with data from the tiny number of known terrorists it held. That data was so sparse that it then evaluated the model's accuracy by feeding back some of its own training data.

15/
When the model recognized its own training data ("I have 100% confidence this data is from a terrorist") they concluded that it was accurate. But the NSA was only demonstrating the model's ability to recognize known terrorists - not accurately identify UNKNOWN terrorists.
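To see why a perfect score on training data proves nothing, here's a toy "classifier" - a plain lookup table of my own invention, emphatically not the NSA's model - that scores 100% on everything it has memorized and guesses blindly on everything else:

```python
import random

class Memorizer:
    """A lookup table posing as a classifier: perfect recall, zero generalization."""
    def fit(self, X, y):
        self.table = dict(zip(X, y))

    def predict(self, x):
        # Exact matches are "recognized"; anything unseen gets a coin flip.
        return self.table.get(x, random.choice([0, 1]))

X_train = [(i, i % 7) for i in range(100)]  # hypothetical feature tuples
y_train = [i % 2 for i in range(100)]       # hypothetical labels

m = Memorizer()
m.fit(X_train, y_train)

# "Validating" on the training data: looks flawless.
train_acc = sum(m.predict(x) == y for x, y in zip(X_train, y_train)) / len(X_train)
print(train_acc)  # 1.0 - pure recall, says nothing about unseen cases
```

Any evaluation that recycles training data will grade this lookup table as a perfect classifier, which is exactly the trap described above.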

16/
And then they killed people with drones based on the algorithm's conclusions.

Bad data kills.

Which brings me to the covid models raced into production during the height of the pandemic, hundreds of which have since been analyzed.

17/
There's a pair of new, damning reports on these ML covid models. The first, "Data science and AI in the age of COVID-19" comes from the @turinginst:

turing.ac.uk/sites/default/…

18/
The second, "Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans," comes from a team at Cambridge.

nature.com/articles/s4225…

19/
Both are summarized in an excellent @techreview article by @strwbilly, who discusses the role GIGO played in the universal failure of ANY of these models to produce useful results.

technologyreview.com/2021/07/30/103…

20/
Fundamentally, the early days of covid were chaotic and produced bad and fragmentary data. The ML teams "solved" that problem by committing a series of grave statistical sins so they could produce models, and the models, trained on garbage, produced garbage. GIGO.

21/
The datasets used for the models were "Frankenstein data," stitched together from multiple sources. The specifics of how that went wrong are a kind of grim tour through ML's greatest methodological misses.

22/
* Some Frankenstein sets had duplicate data, leading to models being tested on the same data they were trained on

* A data-set of healthy children's chest X-rays was used to train a model to spot healthy chests - instead it learned to spot children's chests

23/
* One set mixed X-rays of supine and erect patients, without noting that only the sickest patients were X-rayed while lying down. The model learned to predict that people were sick if they were on their backs

24/
* A hospital in a hot-spot used a different font from other hospitals to label X-rays. The model learned to predict that people whose X-rays used that font were sick

25/
* Hospitals that didn't have access to PCR tests or couldn't integrate them with radiology data labeled X-rays based on radiologists' conclusions, not test data, incorporating radiologists' idiosyncratic judgments into a "ground truth" about what covid looked like
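The first of these failures - duplicates leaking from the training set into the test set - is cheap to audit for. A minimal sketch, where the function name and the record IDs (imagine content hashes or patient IDs) are hypothetical:

```python
def leakage_report(train_ids, test_ids):
    """Flag test records that also appear in the training set.

    Any overlap means the model is being graded partly on memorization
    rather than generalization.
    """
    overlap = set(train_ids) & set(test_ids)
    return {
        "overlap": sorted(overlap),
        "contaminated_frac": len(overlap) / len(set(test_ids)),
    }

report = leakage_report(["a", "b", "c", "d"], ["c", "d", "e"])
print(report)  # two of the three test records were seen during training
```

Deduplicating *before* splitting - rather than trusting that stitched-together sources are disjoint - is the boring, labor-intensive fix that Frankenstein datasets skipped.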

26/
All of this was compounded by secrecy: the data and methods were often covered by nondisclosure agreements with medical "AI" companies. This foreclosed on the kind of independent scrutiny that might have caught these errors.

27/
It also pitted research teams against one another, rather than setting them up for collaboration, a phenomenon exacerbated by scientific career advancement, which structurally preferences independent work.

28/
Making mistakes is human. The scientific method doesn't deny this - it compensates for it, with disclosure, peer-review and replication as a check against the fallibility of all of us.

The combination of bad incentives, bad practices, and bad data made bad models.

29/
The researchers involved likely had the purest intentions, but without the discipline of good science, they produced flawed outcomes - outcomes that were pressed into service in the field, to no benefit, and possibly to patients' detriment.

30/
There are statistical techniques for compensating for fragmentary and heterogeneous data - they are difficult and labor-intensive, and work best through collaboration and disclosure, not secrecy and competition.

