Sam Gregory @SamGregory
1/ THREAD. What are possible #solutions to the threats #deepfakes and synthetic media could pose to evidence, truth and freedom of expression? Our survey from recent @witnessorg @firstdraftnews expert convening.
2/ Invest in MEDIA LITERACY + RESILIENCE/discernment for news consumers: how to spot individual items of synthetic media (e.g. via visible anomalies such as the mouth distortion often present in current deepfakes) and how to develop broader approaches to assessing image credibility
3/ Recent examples share commonsense guidance on this, e.g. @CraigSilverman buzzfeed.com/craigsilverman… and @samanthsunne gijn.org/2018/05/28/wha…
4/ And approaches need to reflect lags in understanding how to deal with visual disinformation and misinformation (the focus to date has been more on text), and build on research on how to confront it, e.g. the summary in @HarvardBiz hbr.org/cover-story/20… + research from @firstdraftnews @datasociety etc.
5/ Also, #deepfakes circulation and potential targeting overlap with the issue of computational propaganda and how images are circulated on the dark web by anti-#humanrights forces, e.g. recent work by @datasociety datasociety.net/output/media-m…
6/ Build on existing efforts in civic and journalistic education and on SKILLS/TOOLS FOR PRACTITIONERS re: verification, e.g. the work of the expanding OSINT community in #humanrights and #journalism including @firstdraftnews @bellingcat @GoogleNewsInit @witnessorg vae.witness.org
7/ Reinforce JOURNALISTIC KNOWLEDGE AND COLLABORATION AROUND KEY EVENTS by supporting coalitions in the journalism community to understand, rapidly identify and respond around key events like elections. Models like @crosscheck @VerificadoMX avoid 'weak links' being exploited in a crisis
8/ Explore TOOLS AND APPROACHES FOR VALIDATING, SELF-AUTHENTICATING AND QUERYING individual media items. The big enchilada: how we weigh the pros and cons of tools that seek to provide a more metadata-rich, cryptographically signed, hashed, notarized image from the point of capture.
9/ Existing examples in the commercial sector, e.g. @truepicinc, as well as non-profit, e.g. @guardianproject ProofMode. Big questions are how blockchain + distributed ledgers play a role, and how these get mainstreamed in platforms/devices. @witnessorg is leading research on the pros + cons of approaches.
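To make the point-of-capture idea concrete, here is a minimal sketch, assuming a device-held Ed25519 key and the Python `cryptography` package: hash the image file at capture, sign the digest, and later re-hash to verify nothing has changed. The function names are illustrative; real tools such as ProofMode or Truepic also bundle sensor metadata, device attestation and third-party notarization.

```python
# Minimal sketch of point-of-capture authentication (illustrative only):
# hash the captured file, sign the digest with a device-held key, verify later.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def capture_proof(image_path, private_key: Ed25519PrivateKey) -> dict:
    digest = hashlib.sha256(open(image_path, "rb").read()).hexdigest()
    claim = {"sha256": digest, "captured_at": time.time()}
    signature = private_key.sign(json.dumps(claim, sort_keys=True).encode())
    return {"claim": claim, "signature": signature.hex()}

def verify_proof(image_path, proof, public_key) -> bool:
    digest = hashlib.sha256(open(image_path, "rb").read()).hexdigest()
    if digest != proof["claim"]["sha256"]:
        return False  # file was altered after capture
    # Raises InvalidSignature if the signed claim itself was tampered with.
    public_key.verify(bytes.fromhex(proof["signature"]),
                      json.dumps(proof["claim"], sort_keys=True).encode())
    return True
```

Changing a single pixel after capture changes the SHA-256 digest, so verification fails; the harder open questions are key management on consumer devices and where the signed claims get anchored (e.g. a distributed ledger).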
10/ Invest in RIGOROUS APPROACHES TO CROSS-VALIDATING MULTIPLE VISUAL SOURCES, building on the work of groups like @situ_research in #Ukraine nytimes.com/2018/05/30/mag… as well as @ForensicArchi @bellingcat @nytvideo. And one groundtruthed video can allow others to be trusted.
11/ Invest in NEW FORMS OF MANUAL FORENSICS. Significant investment is happening in this area, e.g. via the DARPA MediFor program, to detect image manipulation, copy-paste, impossible physics, and giveaway tells of #deepfakes such as the absence of a pulse.
12/ Invest in new forms of DEEP LEARNING BASED DETECTION APPROACHES. New automatic GAN-based tools such as FaceForensics generate fakes, then use these large volumes of fake images as training data for neural nets that detect fakes. @MattNiessner
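As an illustrative sketch of the FaceForensics-style detection idea (not the authors' actual pipeline): generated fakes become training data for a binary real-vs-fake classifier. The directory layout, ResNet backbone and hyperparameters below are assumptions.

```python
# Toy sketch: train a real-vs-fake image classifier on GAN/face-swap generated data.
# Paths, backbone and hyperparameters are placeholders, not the FaceForensics setup.
import torch, torch.nn as nn
from torchvision import datasets, transforms, models

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Hypothetical layout: data/train/real/*.jpg and data/train/fake/*.jpg
train_set = datasets.ImageFolder("data/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real, fake
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A detector like this is only as good as the fakes it was trained on, which is one reason shared databases of known fakes (see 14/) matter.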
13/ Other tools look to identify the tell-tale absence of blinking (an artifact of training images that mostly feature people with their eyes open), e.g. arxiv.org/abs/1806.02877, and such approaches per 6/ need to be incorporated into key browser extensions or dedicated tools like @InVID_EU
14/ And such detection could form part of platforms', social media networks' and search engines' approaches to identifying signs of manipulation. Platform image databases can help detect the reuse of elements of existing images in fakes, and platforms can share databases of known fakes as training data.
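One hedged sketch of what platform-side matching against a shared database of known fakes might look like, using perceptual hashing via the `imagehash` package; the hash list, threshold and function names are illustrative assumptions, not any platform's actual system.

```python
# Illustrative: flag an upload if it is perceptually close to a known fake.
from PIL import Image
import imagehash

# Placeholder hex digests standing in for a shared database of known-fake hashes.
KNOWN_FAKE_HASHES = [imagehash.hex_to_hash(h) for h in ["d4c3b2a1f0e1d2c3"]]

def resembles_known_fake(upload_path, max_distance=6) -> bool:
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two hashes gives the Hamming distance between them.
    return any(upload_hash - known <= max_distance for known in KNOWN_FAKE_HASHES)
```

Perceptual hashes survive re-encoding and resizing better than exact hashes, but a tight threshold still misses heavily edited derivatives, which is one reason learned detectors (12/) are needed alongside databases.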
15/ TRACK AND ID VIA OTHER SIGNALS: the best way to identify malicious deepfakes and synthetic media could be via other signals of activity in the info ecosystem, e.g. research by @iftf at iftf.org/statesponsored… or comprop.oii.ox.ac.uk/research/cyber… @oiioxford
16/ Identify, incentivize and reward high-quality information, and root out mis/mal/disinformation via BUG BOUNTIES or a similar approach derived from cyber-security
17/ Support PLATFORM-BASED APPROACHES (social networks, video-sharing, search, news). Could include many of the tools above, plus single-company or cross-industry detection; signaling of detection at upload, at sharing or at search; de-indexing or down-ranking malicious content; up to outright bans
18/ Key policy + tech elements: how to distinguish malicious deepfakes from satire/entertainment/creativity; how to distinguish levels of computational manipulation, from a photo taken in "portrait mode" to a fully engineered face transplant; how to reduce false positives; how to communicate findings to users
19/ And how to do this new form of #contentmoderation well. Extreme caution is needed given current pressures around 'fake news' and countering violent extremism, e.g. Facebook in #Myanmar/Burma and YouTube's handling of evidentiary content from Syria
20/ WITNESS’ recent submission to the United Nations Special Rapporteur on Freedom of Opinion and Expression highlights many issues we have encountered around #contentmoderation: blog.witness.org/2018/06/new-un…
21/ One thing platforms could fix fast (?): respond to existing gaps in verification tools by providing better REVERSE VIDEO SEARCH native to their platforms.
22/ Ensure COMMERCIAL TOOLS PROVIDE CLEAR SIGNS OF MANIPULATION OR WATERMARKING. Companies like @Adobe have no incentive to hide the forensic traces of manipulation; there should be industry consensus that consumer video and image manipulation should leave machine-readable forensic traces.
23/ A new approach to watermarking could, as Hany Farid has suggested, embed invisible signatures in images created using Google's TensorFlow, an open-source library used in much machine learning and deep learning work.
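As a deliberately simplified illustration of the invisible-watermark idea (not Farid's proposal or anything TensorFlow actually ships): a generation or editing tool could embed a known bit pattern in the least significant bits of pixel values, which forensic tools can later read back. Real schemes would need to be keyed and robust to compression.

```python
# Toy least-significant-bit watermark: illustrative only, trivially removable.
import numpy as np

def embed_mark(pixels: np.ndarray, bits) -> np.ndarray:
    """Write one bit per pixel into the lowest bit of a uint8 image array."""
    flat = pixels.flatten()                      # flatten() returns a copy
    for i, bit in enumerate(bits):
        flat[i] = (int(flat[i]) & 0xFE) | bit    # clear lowest bit, set it to `bit`
    return flat.reshape(pixels.shape)

def read_mark(pixels: np.ndarray, length: int):
    return [int(v) & 1 for v in pixels.flatten()[:length]]
```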
24/ Protect individuals vulnerable to malicious deepfakes by investing in new forms of ADVERSARIAL ATTACKS. Invisible-to-the-human-eye pixel shifts or visible scrambler-patch objects in images that disrupt computer vision and result in classification failures.
25/ Hypothetically, adversarial attacks could be used as a user- or platform-led approach to "pollution" of training data, to prevent bulk re-use of images available on image search platforms (e.g. Google Images) as training data usable to create a synthetic image of an individual.
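A hedged sketch of the adversarial-perturbation idea in 24/ and 25/, using the well-known fast gradient sign method (FGSM) against a generic image classifier; the model, label and epsilon are illustrative, and this says nothing about defeating any specific face-swap pipeline.

```python
# Illustrative FGSM perturbation: a small pixel shift that raises the model's loss,
# often enough to flip its prediction while staying nearly invisible to people.
import torch, torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
loss_fn = nn.CrossEntropyLoss()

def perturb(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    # image: shape (1, 3, 224, 224), values in [0, 1]
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Whether this kind of 'pollution' would actually stop someone training a face model on scraped images is an open research question; the sketch only shows the mechanism.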
26/ IMMUTABLE AUTHENTICATION TRAILS: high-profile individuals commit to a 3rd-party lifelogging register of themselves (at a cost in privacy and surveillance), providing a "certified alibi credibly proving he or she did not do or say the thing depicted" via @BobbyChesney @daniellecitron
27/ ENSURE BETTER COMMUNICATION between KEY AFFECTED COMMUNITIES AND AI RESEARCH: mal-uses of synthetic media could be widespread in vulnerable societies and build on existing problems with closed messaging apps and 'digital wildfire' rumors in India, Sri Lanka etc bbc.com/news/world-asi…
28/ Groups like the Global South Facebook Coalition and others need to be at the center of the discussion of harms and solutions qz.com/1284128/dearma…
29/ Confront SHARED ROOT CAUSES with other dis/mal/misinformation problems. It goes almost without saying, but we can't separate this from how audiences understand and share mis/disinformation. It also overlaps with issues around micro-targeting of advertising, personalized content and the attention economy
30/ Relate to industry and AI SELF-REGULATION AND ETHICS BOARDS, as well as 3rd-party review boards and adherence to #humanrights principles like the @accessnow Toronto Principles on AI. Also relates to discussion about transparency on new developments and cultures of open publishing
31/ Existing and novel legal, regulatory and policy approaches. A rich vein of ideas has emerged here, particularly via the @bobbychesney and @daniellecitron paper, which includes many US-centric legal, regulatory and policy options
papers.ssrn.com/sol3/papers.cf…
32/ LEGAL OPTIONS: narrowly targeted prohibitions on some intentionally harmful deepfakes, #defamation or #fraud law, civil liability incl. suing creators or platforms for content (with potential amendments to CDA Section 230), copyright law or the right of publicity, and also criminal liability
33/ SECTION 230: @marcorubio and @MarkWarner have been vocal on threats and responses, including @markwarner floating potential liability for platforms if they fail to take down manipulated audio/video and #deepfakes after being notified by a victim axios.com/mark-warner-go…
34/ US REGULATORY responses per @BobbyChesney and @daniellecitron could come in limited ways in the US via the Federal Trade Commission, the Federal Communications Commission and the Federal Election Commission papers.ssrn.com/sol3/papers.cf…
35/ IMAGE-BASED SEXUAL ABUSE LEGISLATION AND POST-MORTEM PUBLICITY RIGHTS might be other areas, cf. UK legislation pushes theguardian.com/world/2018/jun… and the @sagaftra push gizmodo.com/the-screen-act…
36/ And much more... What else should be considered? And who needs to be involved? How do we build on existing efforts and related problems?
blog.witness.org/2018/07/deepfa…

@witnessorg has some recommendations on next steps that we'll share in a follow-up thread.