My new essay: In topics ranging from covid-19 to HIV research to the long history of wrongly assuming women’s illnesses are psychosomatic, we have seen again and again that medicine, like all science, is political.
We are not prepared for the surge in disability due to #LongCovid. The physiological damage covid causes can include cognitive dysfunction, GI immune system damage, immune dysfunction, increased risk of kidney outcomes, dysfunction in T cell memory generation, pancreas damage, & more. 2/
We are seeing concerted efforts to downplay the long-term health effects of covid using strategies straight out of the climate change denial playbook... Many have a significant financial interest in distorting the science around long term effects of covid. 3/
In thinking of science as perfectly objective, many scientists miss their own blind spots and are vulnerable to bad-faith attacks... "We spent a long time thinking we were engaged in an argument about data & reason, but now we realize it's a fight over money & power" 4/
There is not a clean break from the history of racism & sexism in medicine where bias was eliminated and all unknowns were solved. 5/
Medical data are filtered through the opinions of doctors, decisions of what tests to order, what types of scans to take, what guidelines recommend, what tech currently exists. And the tech that exists depends on research & funding decisions stretching back decades. 6/
Research shows that receiving a psychological misdiagnosis lengthens the time it takes to get the correct diagnosis, since many doctors will stop looking for physiological explanations. This dynamic holds true at the disease level as well. 7/
Globally, we are at a pivotal time in determining how societies & governments will deal with the influx of newly disabled people due to long covid. Narratives that take hold early often have disproportionate staying power. 8/
Please read the full article here (includes more context about the GBD, the history of ACT UP, the treatment of ME/CFS, and more) 9/
It is funny when the "only doctors are allowed to speak about public health" crew suggests I wouldn't want outsiders weighing in on AI, because one of my core values/driving motivations is that I want people from all backgrounds involved with AI & AI ethics 1/
Most problems related to AI/automated systems are inherently interdisciplinary, benefiting from having experts from many domains involved, spanning STEM, social sciences, and the humanities.
And those who are most impacted by a system should be centered most. 2/
Too often, there has been unnecessary gatekeeping & credentialism in AI, narrow definitions of who has the "right" background. I talk about this more here, but our vision for fast.ai was a community for ppl w/ the "wrong" backgrounds. 3/
We need more nuanced ways to talk about medical & public health disagreements, without the simplistic black-and-white reductions that you either trust ALL doctors in ALL matters OR you must be anti-science. 1/
"Science is less the parade of decisive blockbuster discoveries that the press often portrays, and more a slow, erratic stumble toward ever less uncertainty." @zeynep 2/
Designing recommender systems to depolarize:
- algorithmic social media isn't the primary driver of polarization, but could be a useful intervention point
- goal: to transform conflict, not to suppress or eliminate it
- 3 stages: moderation, ranking, & user interface
paper by @jonathanstray
Polarization is involved in a variety of feedback loops:
- it leads to less intergroup contact, which causes polarization
- it is a precursor to violence, which causes polarization
- it leads to selective information exposure, which causes polarization
3 key places where changes to recommender systems could be used for depolarization:
- which content is available (moderation)
- which content is selected (ranking)
- how content is presented (interface)
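The three intervention points above can be sketched as a toy pipeline. This is purely illustrative, not from Stray's paper: the class names, scores, and thresholds (`quality`, `divisiveness`, the 0.9 cutoff, the 0.5 downweight) are all hypothetical stand-ins for whatever signals a real platform would use.

```python
# Hypothetical sketch of the three intervention stages: moderation
# (which content is available), ranking (which content is selected),
# and interface (how content is presented). All scores are made up.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    quality: float        # hypothetical editorial-quality score, 0..1
    divisiveness: float   # hypothetical predicted divisiveness, 0..1

def moderate(posts):
    """Stage 1 (moderation): drop only the most extreme content."""
    return [p for p in posts if p.divisiveness < 0.9]

def rank(posts):
    """Stage 2 (ranking): downweight divisive posts instead of
    optimizing raw engagement."""
    return sorted(posts, key=lambda p: p.quality - 0.5 * p.divisiveness,
                  reverse=True)

def present(posts, n=3):
    """Stage 3 (interface): label heated threads rather than hiding them."""
    feed = []
    for p in posts[:n]:
        label = "[heated discussion] " if p.divisiveness > 0.6 else ""
        feed.append(label + p.text)
    return feed

posts = [
    Post("Local park cleanup this weekend", 0.8, 0.1),
    Post("THEY are destroying everything!!!", 0.2, 0.95),
    Post("Debate over the new zoning proposal", 0.7, 0.7),
]
feed = present(rank(moderate(posts)))
print(feed)
```

Note how this matches the thread's framing: the goal is to transform conflict, not suppress it, so the moderately divisive zoning post survives moderation and is labeled at the interface stage rather than removed.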
Pundits urge people to “listen to the science,” as if “the science” is a tome of facts and not an amorphous, dynamic entity. The naive desire for science to remain above politics meant many researchers were unprepared for a crisis that was both scientific & political to its core.
The pandemic hasn’t just been a science story. It is an omnicrisis. One must understand not just virology, but also the history of racism & genocide, the carceral state, nursing homes, historical attitudes toward medicine, social media algorithms, & more. theatlantic.com/science/archiv…
Much of journalism is fragmentary. For science, that means treating individual papers as a sacrosanct atomic unit and writing about them one at a time. But for an omnicrisis, this approach leads only to a messy, confusing, & ever-shifting mound of jigsaw pieces. @edyong209
"My concern is that reducing humans to acting as data sources is fundamentally inhumane."
-- Alan Blackwell 1/ dl.acm.org/doi/abs/10.714…
"But whereas the core problem of symbol-processing AI was its lack of connection to context – the problem of situated cognition – the core problem of machine learning is the way in which it reduces the contextualised human to a machine-like source of interaction data." 2/
The user is effectively submitting to a comparison between their own actions and those of other people from which the model has been derived. In many such comparisons, the effect will be a regression toward the mean. 3/
When patients reject a mental health (mis)diagnosis for symptoms they know have physiological origins, it is *not* bc they are devaluing mental health.
Patients do this bc they know that it will lead to ineffective treatments & useless research. 1/5
There's a pernicious cycle: label a poorly understood illness as psychogenic ➡️ don't invest money in researching the physiological origins ➡️ claim the lack of evidence on physiological mechanism proves it's psychogenic ➡️ repeat 2/5
Bonus: if patients are not "rational" enough in their suffering as the medical establishment offers them nothing ➡️ use this as further evidence that their symptoms can't have physiological origins 3/5