I appreciate @NBCNewsTHINK giving me a chance to discuss DeepFakes. Some of my takeaways:

1) The biggest current risk from synthetic video and images is their use to embarrass or harass individuals, most often young women. DeepFakes + Sextortion is a huge safety challenge.
According to @Thorn's 2016 study, 8% of sextortion incidents started with faked images.

This paper by @daniellecitron and @BobbyChesney gives a good overview of how the current legal environment fails to address DeepFakes.
2) I think that subtle alterations, rather than fully synthetic videos, will remain the standard tool of political disinformation. A completely fabricated video is much easier to disprove and push back against than a subtle change to real footage.
3) Even today, without widespread examples of synthetic media being used in disinformation, the existence of this technology allows the powerful to deny the authenticity of embarrassing footage.
4) The media is going to have to respond by creating new transparency and integrity mechanisms. Misleading video is a much larger problem than purely synthetic video, and television and online media often use editing techniques that should be reconsidered or supplemented.
For example, while I didn't totally agree with @frontlinepbs' editing of my interview on the 2016 election, I have to compliment them for posting my entire 96-minute interview and transcript. This should be the standard.

The Covington Catholic incident demonstrated that something as simple as choosing where to cut a video has a huge impact on how it will be interpreted (along with the framing and the audience it reaches). DeepFakes are a good reason to reconsider what counts as ethical use of video by the media.
I'm generally not a huge fan of trying to apply blockchain to every possible societal problem, but this is a situation where a real-time, public ledger of perceptual/audio hashes of footage tied to raw GPS data could be useful to establish provenance.
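To make the ledger idea concrete: the hashes such a system would log are *perceptual* hashes, which change little under benign edits (transcoding, brightness) but a lot under content changes. A minimal sketch of one such scheme, the "average hash" (aHash), is below. The 8x8 frames, threshold, and inversion-as-tampering example are all illustrative assumptions, not any existing ledger's design; a real system would downscale full video frames and log hashes alongside GPS and timestamps.

```python
# Minimal "average hash" (aHash) sketch in pure Python, assuming frames
# arrive as 8x8 grayscale arrays (a real system would downscale full frames).
# The resulting 64-bit hash is the kind of value a provenance ledger could log.

def average_hash(frame):
    """64-bit perceptual hash: bit i is 1 if pixel i is brighter than the mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests the same footage."""
    return bin(h1 ^ h2).count("1")

# Synthetic 8x8 "frame" with a simple brightness gradient (values 0-252).
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# Benign edit: slight global brightening, as a transcode might introduce.
brightened = [[min(255, p + 5) for p in row] for row in original]
# Content change: full inversion stands in for a substantive alteration.
tampered = [[255 - p for p in row] for row in original]

h_orig = average_hash(original)
h_bright = average_hash(brightened)
print(hamming_distance(h_orig, h_bright))                # 0: hash survives the benign edit
print(hamming_distance(h_orig, average_hash(tampered)))  # 64: every bit flips
```

Because the hash compares each pixel to the frame's own mean, a uniform brightness shift leaves it unchanged, while a real content alteration flips many bits. That robustness-versus-sensitivity tradeoff is exactly what makes perceptual hashes (rather than cryptographic ones) the right primitive for footage provenance.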
5) The social media platforms need a definition of misleading video that is not tied to fact-checking workflows. They need to tag all potentially misleadingly edited videos much more aggressively, including obvious parody.

6) If we had really good video editing GANs I would use them to FIX MY COLLAR IN THIS DAMN VIDEO.