Flaws of countering disinfo w/ appeal to authority:
"Worrying about whether we trust institutions without asking if these institutions deserve trust... A program of infantilization – trust that the adults know what is right – will provoke equally infantile resistance." @Aelkus
Failure of legacy institutions to respond appropriately to the pandemic, from March 2020 @aelkus, h/t @RSButner
A society that cares more about declining trust in institutions than what institutions have done to deserve trust – and which devotes far more effort towards managing the behavioral psychology of risk than actually reducing risk – is engaged in narrative-making above all else.
Compared to ethics principles in medicine, AI ethics principles lack:
1. common aims & fiduciary duties
2. professional history & norms
3. proven methods to translate principles into practice
4. robust legal & professional accountability mechanisms
"The truly difficult part of ethics—actually translating theories, concepts & values into good practices AI practitioners can adopt—is kicked down the road like the proverbial can." @b_mittelstadt 2/
"Ethics has a cost. AI is often developed behind closed doors without public representation... It cannot be assumed that value-conscious frameworks will be meaningfully implemented in commercial processes that value efficiency, speed and profit." 3/
Australia's competition regulator found:
- Google engages in anti-competitive behavior in digital advertising, which harms consumers & businesses accc.gov.au/media-release/…
Many people have a false dichotomy that you are either FOR or AGAINST covid restrictions, with no nuance about the TYPE of restrictions or level of effectiveness, much less that eschewing all restrictions → hospitals collapse & lockdown more likely. 1/
There has been a lot of terrible public health messaging & contradictory government policy in the West, from the start of the pandemic and continuing now. These erode public trust, create false expectations, & contribute to “pandemic fatigue” 2/
The “only the elderly & chronically ill are at risk” message was both false AND ineffective. This has been clear from the VERY START of the pandemic. (I RTed @jenbrea at the time) 3/
The false hope of current approaches to explainable AI in health care: current explainability approaches can produce broad descriptions of how an AI system works in general, but for individual decisions, the explanations are unreliable or superficial 1/ thelancet.com/journals/landi…
Explainability methods for complex AI systems can provide some insight into the decision-making process at a global level. However, at the individual level, the explanations we can produce are often confusing or even misleading. @MarzyehGhassemi @DrLaurenOR @AndrewLBeam 2/
Increased transparency can hamper users’ ability to detect sizable model errors and correct for them, "seemingly due to information overload." 3/
"Who benefits from data sharing in Africa? What barriers exist in the data sharing ecosystem, and for whom? If much of the data sharing practice is shaped by the Global North, how can we ensure that the narrative for Africa is controlled by Africans?" 1/
Stakeholders in the African data sharing ecosystem: those at the top of the iceberg hold significant power & leverage in guiding data sharing practices & policy, while those in the hidden part of the iceberg hold far less. This imbalance lets the more powerful stakeholders shape the ecosystem disproportionately. 2/
Dominant narratives around data sharing in Africa often focus on lack, insufficiency, deficit.
This framing minimizes the strength, agency, and scientific & cultural contributions of communities within the continent, and overlooks community norms, values, & traditions. 3/