This is a good thread by Sarah.

One thing I'll add: Congress is putting the cart before the horse on regulation. They are sitting on a data treasure trove but it doesn't seem to be informing legislation.
Thanks to @FrancesHaugen, Congress is now in possession of hundreds of documents representing the most thorough look at the intersection between adversarial online harms, product design and content moderation regimes outside of Facebook itself. Why aren't they using it?
There are a lot of great staffers working for these committees, but I don't think we will get the best understanding of what these documents indicate via selective press reporting or politically motivated leaks from members.

I see two options:
1) Congress can put together a team that will systematically remove sensitive personal data from the documents and release them on a rolling basis. There needs to be a filter as I expect multiple documents discuss specific harmful scenarios that can be used to unmask victims.
2) Or Congress can put together a group of 30-40 researchers across multiple disciplines and give them access under an NDA that allows for discussion of the findings but prevents the release of any personal data. We would get useful work released over the next three months.
Frantically putting together performative legislation that is likely unconstitutional and not tied to evidence-based interventions means that Congress is doing what they accuse tech executives of doing: ignoring the real research into harms and responses.
An idea: Congress could start by putting together the initial research group of 30, and ask that whatever they publish be accompanied by the relevant decks/documents with PII minimization accomplished by the research team. You could probably run this with two staffers.
That way, we will get a rolling release of documents accompanied by an initial framing document written by an SME, but all parties would be able to publish their own interpretation and use it as the basis for further research.
We are in the same situation as we were in the early days of the Snowden docs, when infosec professionals were collectively screaming at the interpretations of the decks by journalists. The release of the raw slides was critical to understanding/mitigating the fundamental flaws.
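If Congress did go the rolling-release route, the PII-minimization step is partly automatable. Below is a minimal, purely illustrative sketch in Python of the kind of automated first pass a research team might run before human review; the patterns and the redact() helper are my own assumptions, not anything drawn from the documents or an existing tool:

```python
import re

# Purely illustrative patterns -- real documents would need far more than
# regexes (names, usernames, internal IDs, screenshots), plus human review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace obvious identifiers with typed placeholders, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


if __name__ == "__main__":
    sample = "Victim was contacted at jane.doe@example.com and +1 (650) 555-0100."
    print(redact(sample))
    # -> Victim was contacted at [EMAIL] and [PHONE].
```

The tooling is the easy part; a small team with a couple of staffers could wrap something like this in a review workflow, and the hard judgment calls (scenarios that could unmask victims even without names) would still have to be made by people.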

More from @alexstamos

28 Sep
I've gotten some questions about a potential regulatory path forward on child safety. An idea:

1) Replace COPPA with a law that encourages a phased approach for kids < 10, 10-12, 13-15 and 16-18, instead of the one bright line at 13.
2) Require mobile devices (phones and tablets) sold in the US to include a flow, triggered during initial setup, that asks if the primary user is a child and stores their birthdate locally. The calculated age (rounded to year) should be provided via API to every app (a rough sketch of such an API follows after this list).
3) Require apps that will allow users under 18 to publish their child safety plans for each of the relevant age ranges above. We are way too early in the field to have a unified set of product features that work for everybody, but we can at least encourage thoughtful design.
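To make (2) concrete, here is a rough, hypothetical sketch of what a device-level age signal might look like. None of the names or brackets below correspond to a real Android or iOS API; they are assumptions used only to illustrate the flow of storing a birthdate locally and exposing a coarse age to apps:

```python
from datetime import date

# Hypothetical sketch only: none of this corresponds to a real Android/iOS
# API. The function names, age brackets, and storage are invented here to
# illustrate the proposal, not to describe an existing platform feature.

AGE_BRACKETS = [(0, 9, "under_10"), (10, 12, "10_12"),
                (13, 15, "13_15"), (16, 18, "16_18")]

_device_store = {}  # stand-in for on-device storage; the birthdate never leaves it


def record_primary_user_birthdate(birthdate):
    """Called once from the device's initial-setup flow."""
    _device_store["birthdate"] = birthdate


def age_in_years(today=None):
    """Age in whole years, computed locally on the device."""
    today = today or date.today()
    b = _device_store["birthdate"]
    return today.year - b.year - ((today.month, today.day) < (b.month, b.day))


def age_bracket(today=None):
    """The only value apps would ever see: a coarse bracket, not the birthdate."""
    age = age_in_years(today)
    for low, high, label in AGE_BRACKETS:
        if low <= age <= high:
            return label
    return "adult"


if __name__ == "__main__":
    record_primary_user_birthdate(date(2011, 6, 1))
    print(age_bracket(date(2021, 10, 26)))  # -> "10_12"
```

The design intent is that the birthdate itself never leaves the device; apps only see a coarse bracket, which is enough for them to apply the right child-safety plan from (3).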
27 Sep
Ugly Truths:

1) Preteens probably shouldn’t have phones, but parents give them anyway.

2) Young teens shouldn’t be on social media, but parents allow it anyway.

3) Older teens still need guidance and check-ins.

4) If you have younger users, knowing and catering to that might be safer.
A lot of the public policy challenge here is balancing caring about:
1) Passive harms to kids using screens
2) Active but non-criminal harms, like teen bullying, “Thinstagram” and overall crappy influencer culture
3) Active, adversarial and criminal harms (grooming, sextortion)
And doing so in an environment where you can’t trust parents to properly enforce boundaries (although maybe they would with better tooling), and without resorting to a PRC-like model of showing ID to get online accounts.
24 Sep
Prof. Kerr has provided an excellent write-up of something that mostly escaped notice but that is really a big deal.

TL;DR: In order to serve a laudable goal, supporting international justice in Myanmar, a judge in DC gutted one of the most important US privacy laws.
ECPA/SCA don't get as much discussion as, say, GDPR, but they have been more important in protecting privacy around the world than probably any other laws. Why? They govern access by the US and foreign governments to data held by US companies.
This is such an important law that @Riana_Crypto just taught it yesterday as part of the first week of our cybersecurity course. I'll let her speak to the details, but this might be another sign that this Reagan-era statute needs to be updated for modern situations.
15 Sep
@evelyndouek I think the big picture is that several mid-level VPs and Directors invested and built big quantitative social science teams on the belief that knowing what was wrong would lead to positive change. Those teams have run into the power of the Growth and unified Policy teams.
@evelyndouek Turns out the knowledge isn’t helpful when the top execs haven’t changed the way products are measured and employees are compensated. So the only recourse for those teams to effect change is leaking to the WSJ.
@evelyndouek I’m sure other products have the same impacts and problems, but they are either too small (Twitter, Snap) to build these big, expensive teams that don’t drive revenue, or they have a strategy of not looking (YouTube).
7 Sep
This ProPublica article on WhatsApp is terrible.

It is inconsistent with much of what ProPublica has written in the past, it incorrectly conflates responsible reporting mechanisms with proactive moderation, and it creates the wrong incentive structure for E2EE products.
There is a really hard tradeoff between protecting the privacy of users and keeping those users safe from the abusive behavior of others. For the last three years, I've been recycling a talk on this and the balancing act is a big part of my class.

What can happen on chat apps?
- Bullying and harassment
- Sending unwanted NCII
- Trading CSAM
- Live sexual abuse of children
- Organization of violence
- Amplification of disinformation

You won't be able to address every issue while protecting privacy, but you can try!