1. Talking at #EmTechDigital today on 'Preparing for #Deepfakes: Trust and Truth', placing it in a broader framework of how we think about AI and social good. My CliffsNotes in this thread events.technologyreview.com/emtech/digital… (in great company with @red_abebe and @latonero)
2. Quick intro for those who don't know @witnessorg: we work on helping anyone, anywhere use video and tech for human rights. We're focused on making you more effective, ethical and safe if you do. And that means also keeping an eye out for emerging tech threats witness.org
3. The big emerging tech threats we're concerned about now: #AI at the intersection of #disinformation, #media manipulation and rising #authoritarianism. We know this is where the rubber hits the road for activists and civic witnesses on the ground.
4. These are the people who document in #Burma, #Syria and #USA, and who've gotten used to being called #fakenews for years. They are on frontlines of new potential threats. They've also been exposed to negative side of platform decisions that harm them and to online harassment.
5. Starting point for any conversation on #AI entering this space: we need to be acutely conscious that we live in a flawed real world, one where harmful consequences can be expected, and should be expected, from technology developments.
6. And in any conversation on #AI and #harms, we need to be talking about how we:
- Anticipate harms
- Include the communities most likely to be harmed
- Do research and create products responsibly
- Ground it all in the human rights principles of privacy, dignity and free expression
7. With #deepfakes, despite the technopocalyptic rhetoric... we're before the big storm. There are serious concerns around use at scale in non-consensual sexual imagery and initial attacks on journalists, but as yet we haven't seen other widespread malicious usages.
8. We need to de-escalate the rhetoric. Why? 1. It's not clear we're in a rupture moment rather than an evolution. 2. Our escalation promotes the very harm we're trying to avoid. Weaponization of the idea that seeing is no longer believing, in most cases simply untrue, is a boon to authoritarians worldwide
9. We're in the calm before the storm because malicious synthetic media are not yet widespread in usage, the tools are not yet mobile, and they haven't been productized. It's an opportunity to be seized. We can prepare, not panic, by making this a global conversation with the right people. #EmTechDigital
10. We can prepare for #deepfakes in a way we failed to for other forms of disinfo: thinking globally, with a particular focus on high-risk users and high public-interest content, while being proactive in defending key values like free speech and privacy
11. Quick background on the possibilities of #deepfakes and #synthetic media:
[screenshot] #EmTechDigital
12. We've done #threat modelling with journos, activists and platform insiders - many threat models, from improved phishing, to reputation attacks on mid-level politicians, to cyberbullying at greater scale. More at wit.to/Deepfakes.
13. But the biggest threats may be integration into a relentless volume of contradictory media that, combined w. computational propaganda + individualized microtargeting, creates "floods of falsehood" + plausible deniability for the powerful (their own deepfake "get out of jail free" card)
14. Solution-sets? A series of convenings via @witnessorg (see wit.to/Deepfakes) + connecting journos, researchers and technologists to clarify risks and solutions (wit.to/DeepfakesSolut…) #EmTechDigital
15. + Within the Working Group of @PartnershipAI focused on AI, media and public discourse that I co-chair, we've also been extensively exploring this space, asking what people consider useful research + publication norms and what preparedness needs to happen for news orgs
16. On solutions: we should build on what exists already - see this not as a rupture, an isolated problem, or an #AI problem. Build on existing expertise in dealing with a world of falsehood, and on the communities who confront it on a daily basis. #EmTechDigital
17. 'Shallowfakes' (recycled vids) are already a problem at scale (like this one I've seen in 5-6 countries). We have #OSINT #UGC newsgathering and spatial analysis groups/individuals w. deep expertise on these + complex fakes: @bellingcat @storyful @firstdraftnews @situ_research @witnessorg @DFRLab
18. Just recently we connected #AI researchers and technologists working on development/detection to key #UGC #OSINT researchers - report coming out #soon. And we're organizing inputs from communities in the Global South - critical communities who've faced extensive shallowfake threats
19. Need to ask what they want to see: technical, informational, legislative responses. And what they fear - e.g. the digital wildfire of a rumour in a WhatsApp group may be very different from fears in the US/Europe (different platforms, different threats). This needs to be reflected in responses
20. It's a wicked problem: not just an AI problem or a fake-image problem, but also a disinfo problem. The response needs to link to other disinfo trends - bots + compu-propaganda, the online attention economy, algorithmic rec engines + communication feeds, and declining public trust in institutions.
21. Tech infrastructure choices will matter in terms of how they reify existing problems. At @witnessorg we know the harms of choices around #AI, humans and #contentmoderation (e.g. hate speech flows freely in #Burma, or war crimes footage is taken down in #Syria, buzzfeednews.com/article/meghar…)
22. This is visible in choices around 2 key approaches to #deepfakes: detection on sharing, and more focus on authentication at source. For #authentication at source, it's tools like @Truepic leading the market; @witnessorg and @guardianproject produce the open-source ProofMode tool
23. Typically, tools for #authentication + #provenance focus on controlled capture, rich metadata, tests for spoofing and re-capture, and signing to a distributed ledger. Extra contextual signals for trust can be extremely valuable, incl. to #humanrightsdefenders who face #fakenews claims
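The controlled-capture pattern above can be sketched in a few lines: hash the media at capture, bundle the hash with capture metadata, and sign the bundle so later tampering is detectable. This is a minimal stdlib-only illustration of the general idea, not ProofMode's or Truepic's actual implementation (real tools use public-key signatures, sensor-level metadata, and ledger anchoring; the HMAC key here stands in for a proper signing key).

```python
import hashlib
import hmac
import json

def proof_record(media_bytes: bytes, metadata: dict, signing_key: bytes) -> dict:
    """Hash captured media, bundle it with capture metadata, and sign the bundle."""
    bundle = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g. timestamp, device, location at capture
    }
    # Canonical serialization so verification reproduces the exact signed bytes.
    payload = json.dumps(bundle, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"bundle": bundle, "signature": signature}

def verify_record(record: dict, media_bytes: bytes, signing_key: bytes) -> bool:
    """Check that the media matches its hash and the bundle signature is intact."""
    if hashlib.sha256(media_bytes).hexdigest() != record["bundle"]["media_sha256"]:
        return False  # media was altered after capture
    payload = json.dumps(record["bundle"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A verifier re-derives the hash and signature from the media and bundle; any edit to the pixels or the metadata breaks the check. The spoofing/re-capture tests mentioned in the tweet have no counterpart here, which is exactly why real provenance tools are much more involved than this sketch.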
24. Colleague Gabi Ivens will soon publish the 'Ticks or It Didn't Happen' report re: the implications and trade-offs of moving to a greater focus on #authenticity #provenance, who is included/excluded, as well as the societal impact of 'disbelief as default' if these tools were mainstreamed. Thread cont'd!
Here's the thread continuation on choices in tech infrastructure, platform roles and key points for proactively averting harms
Subscribe to Sam Gregory