1/ After a week of exposés about Facebook by the @WSJ, #FacebookFiles, one of which was about Instagram being toxic for teen girls, Adam Mosseri (@mosseri), Head of Instagram, defended the company (Recode Media podcast, @pkafka) by saying:
3/ Back in 2019, during an interview with TIME magazine (@katysteinmetz) on the subject of online bullying, @mosseri said:
4/ Where do you learn to dodge responsibility? Your Boss. In 2018, in an interview with @karaswisher, addressing the Russian election meddling and the killings in Myanmar, Mark Zuckerberg explained how Facebook could “do better”:
5/ The #ResponseTemplate went viral across Big Tech. Last month, addressing the spread of Misinformation on YouTube, its Chief Product Officer (@nealmohan) said:
6/ They use the same PR playbook because the news cycle moves on, and the latest issue goes away. The platforms amplify not only the good, but also the bad - in various ways. So, this recurring response is simply their attempt to reduce their responsibility.
Remember the signatories of the @FLIxrisk open letter, which called for a pause on advanced AI development?
According to a new paper, "Why They're Worried," their motivation to sign had nothing to do with X-risk.
Their concerns were NOT centered on "Human Extinction" at all.
Despite its limitations (small sample size, not peer-reviewed), the "Why They're Worried" paper is well-organized and includes valuable quotes about the signatories' actual concerns…
In response to the Future of Life Institute's letter, there was intense media coverage focused on "X-risk."
The paper's authors (@imstruckman & Sofie Kupiec) "sought to understand signatories' personal perspectives, and how their beliefs relate to the letter's stated goals."
Only 149 answered the "Extinction from AI" Q.
= 20% of the 738 respondents (3.5% of the 4,271 AI experts contacted)
Only 162 answered "Extinction from human failure to control AI" Q.
= 22% of the 738 respondents (3.7% of AI experts contacted)
@MelMitchell1 AI Impacts FUNDING:
MIRI (Yudkowsky), Survival & Flourishing Fund, The Centre for Effective Altruism (Oxford), Effective Altruism Funds, Open Philanthropy, Fathom Radiant. Previously: The Future of Life Institute, the Future of Humanity Institute (Bostrom), FTX Future Fund (RIP).
@MelMitchell1 Tristan Harris cites this study in the "AI Dilemma," his podcast, on NBC, and his New York Times OpEd with Raskin and Harari.
The New York Times REALLY loves to share this stat as well; it already appeared in Klein's column & podcast, Wallace-Wells' OpEd & The Morning newsletter
How does the AI Dilemma copy the Social Dilemma?
Different tech - same scare tactics 🧵
The "Social Dilemma" argues that social media has godlike power over people.
The "AI Dilemma" argues that AI has godlike power over people.
The "Social Dilemma" anthropomorphizes the evil algorithms.
The "AI Dilemma" anthropomorphizes the evil A.I.
Both are … monsters 👾
Causation is asserted as a fact: Those technological "monsters" CAUSE all the harm.
Despite other factors - confounding variables, complicated society, messy humanity, inconclusive research into those phenomena - it's all due to the evil algorithms/AI.