Avi 🐰🏳️‍🌈🏳️‍⚧️
Aug 6, 2021 · 34 tweets · 10 min read
TW: Apple, CSAM

Plenty of incredible privacy researchers, cryptographers, sex workers, and others have talked about why Apple’s new CSAM detection tool is terrible in numerous ways.

I also want to quickly point out the abusability of this tool against adult survivors of CSAM.
TW: Apple, CSAM

Many victims of CSAM will grow up and become adults. And here’s the thing: adults can make decisions about their own bodies, including taking photos of them, whether sexually or for medical reasons. It’s not your business!

Do you see what I’m seeing here?

techcrunch.com/2021/08/05/app…
TW: Apple, CSAM

The lack of abusability testing of this tool, and the attempt to be clever about matching against the existing NCMEC CSAM corpus, means many survivors may find themselves losing autonomy again because an algorithm picked up on a unique identifier: birthmarks, moles, etc.
TW: Apple, CSAM

No one should expect survivors of CSAM to remember or know how the CSAM of them was created just to avoid colliding with the tool, because the algorithm is looking for similar photos.

No one should have to worry about hiding parts of their body if they don’t want to.
TW: Apple, CSAM

Got some follow-up questions. To clarify, this is horrifying on a privacy level for everyone (I'll link to multiple well-written threads on this later).

For survivors, though, the potential for collision with the algorithm is high, since we don't know how it works.
TW: Apple, CSAM

Traditionally, you use a hash of an image. This stays consistent if the same image gets distributed. Of course, this hash can be changed in a myriad of ways. Apple's CSAM ML/AI algorithm allegedly can match the same visual image while ignoring crops, distortions, etc.
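To make the contrast concrete, here's a minimal sketch (standard-library SHA-256 in Python, not Apple's NeuralHash) of why an exact cryptographic hash misses even trivially edited copies, which is the gap perceptual matching is supposed to close:

```python
# Illustration only: why an exact (cryptographic) hash breaks under tiny edits.
# This is NOT Apple's NeuralHash; it just shows the contrast described above.
import hashlib

original = b"pretend these bytes are an image file"
edited = b"pretend these bytes are an image file."  # one-byte change, e.g. a re-encode

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(edited).hexdigest())
# The two digests share essentially nothing, so a database of exact hashes
# misses any cropped or re-encoded copy. A perceptual hash is designed to
# stay (nearly) the same across those edits instead.
```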
TW: Apple, CSAM

ML/AI classifiers have issues with recall and precision when matching images and text. Think about the hot dog app! You can find plenty of papers on tricking ML/AI into thinking two images contain the same things when they're clearly different to the human eye.
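For anyone who wants the definitions, here's a rough sketch with made-up numbers of what precision and recall mean for an image matcher, and why even "good" numbers hurt at scale:

```python
# Toy numbers (made up for illustration) showing how precision and recall
# behave for an image-matching classifier.
true_positives = 900    # pairs correctly flagged as the same image
false_positives = 100   # unrelated photos wrongly flagged as matches
false_negatives = 300   # real matches the classifier missed

precision = true_positives / (true_positives + false_positives)  # 0.90
recall = true_positives / (true_positives + false_negatives)     # 0.75

print(f"precision={precision:.2f}, recall={recall:.2f}")
# Even 90% precision means 1 in 10 flags is wrong; run that over billions of
# photos and the absolute number of people wrongly flagged gets very large.
```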
TW: Apple, CSAM

We already struggle with matching images correctly. But now Apple is asking us to accept an algorithm we have no insight into to compare photos on our devices against the NCMEC CSAM corpus.

A positive match could be someone who is in that corpus, just many years later.
TW: Apple, CSAM

This penalises survivors for surviving. What do Apple and NCMEC intend to do for survivors who are positively flagged by the algorithm? What will they do for survivors re-traumatised by the experience of losing access to their personal device and data?
TW: Apple, CSAM

A few quick updates to this thread. I read through Apple's updates and supporting docs (we're still in a black box).

I'm aware they're thankfully not using an ML classifier. Yes, it's still an issue. Once again, this system should not exist.

TW: Apple, CSAM

@matthew_d_green has written several excellent threads on this issue, and I highly recommend following him to stay up to date. For folks unaware of how hashes and collisions work, he explains it in this thread:
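As a toy illustration of what a collision is (my own sketch, not from his thread): shrink the hash space enough and two different inputs quickly end up with the same hash.

```python
# A collision demo on a deliberately tiny hash: truncate SHA-256 to 16 bits
# and brute-force two different inputs that share it. Real hashes are far too
# long for this to work, but it shows what "collision" means.
import hashlib

def tiny_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:4]  # keep only 16 bits (4 hex chars)

seen = {}
i = 0
while True:
    msg = f"message-{i}".encode()
    digest = tiny_hash(msg)
    if digest in seen:
        print(f"collision: {seen[digest]!r} and {msg!r} both hash to {digest}")
        break
    seen[digest] = msg
    i += 1
```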
TW: Apple, CSAM

I want to briefly step away from the technical issues of Apple's system and talk about a hypothetical: what if it worked perfectly as intended, and NCMEC only received true positive reports? What now?

For the vast majority of victims and survivors: nothing.
TW: Apple, CSAM

NCMEC's By The Numbers page states:
- "The CyberTipline has received over 82 million reports"
- "CVIP has reviewed over 322 million images/videos"
- "Over 19,100 victims have been identified by Law Enforcement"

CVIP began in 2002. So 19 years.

missingkids.org/theissues/csam
TW: Apple, CSAM

In 19 years, CVIP has on average identified just over 1,000 victims per year.

In 2020 alone, the CyberTipline received 21,751,085 reports, a 28% increase from 2019.

That's 21,751 reports for every identified victim.
missingkids.org/gethelpnow/cyb…
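For anyone checking the math, here's the rough arithmetic behind these figures (rounding identified victims to ~1,000 per year, as above):

```python
# The rough arithmetic behind these figures.
victims_identified = 19_100          # NCMEC: identified by law enforcement
years_of_cvip = 19                   # CVIP began in 2002
reports_2020 = 21_751_085            # CyberTipline reports in 2020

victims_per_year = victims_identified / years_of_cvip  # ~1,005: "just over 1,000"
reports_per_victim = reports_2020 / 1_000              # using ~1,000 identified per year

print(f"~{victims_per_year:.0f} victims identified per year")
print(f"~{reports_per_victim:,.0f} reports in 2020 for every victim identified in a year")
```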
TW: Apple, CSAM

Law enforcement do not protect survivors, and most reports and investigations are dismissed or closed. There is no universal healthcare in the US to pay for the lifelong costs that come with trauma.
TW: Apple, CSAM

NCMEC has the audacity to tell survivors they'll help "locate an attorney to help you understand your legal rights and pursue monetary restitution". Because we have no universal healthcare. Also NCMEC is not paying for the attorney.

missingkids.org/content/dam/mi…
TW: Apple, CSAM

Not every issue is a technical issue with a technical solution. Apple's tool does not help with the systemic issues involved, creates new harms and challenges (especially for survivors of CSAM), and sets a precedent for policymakers to push for more surveillance.
TW: Apple, CSAM

I needed a break. Now rapidly adding more info + context for folks.

@SarahJamieLewis wrote a great intro thread on hashes and collisions to explain why Apple's perceptual hash for CSAM is so problematic. Read up + down thread from here:
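To give a concrete (toy) picture of what a perceptual hash does, here's a sketch of an "average hash" on pretend pixel values. It is not Apple's NeuralHash, just the simplest member of the family:

```python
# A toy "average hash" (a simple perceptual hash; NOT Apple's NeuralHash).
# Each "image" is reduced to a few brightness values; each bit records whether
# a value is above the image's mean. Small edits barely move the bits, which
# is the point of a perceptual hash -- and also why near-collisions are much
# easier to find or engineer than with a cryptographic hash.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

image = [200, 180, 40, 30, 220, 60, 90, 210]            # pretend 8 grayscale pixels
slightly_edited = [205, 178, 45, 28, 215, 62, 88, 212]  # small brightness changes

print(average_hash(image))            # 11001001
print(average_hash(slightly_edited))  # 11001001 -- same hash despite the edits
```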
TW: Apple, CSAM

If you're not following @Riana_Crypto, you best be following one of the most brilliant people in the world by the time you finish reading this :)

Apple's tool harms all of us. I turned off my auto-updates. That's a security nightmare.

TW: Apple, CSAM

Most people cannot roll their own security. Even security people can't.

Survivors of CSAM especially have more involved threat models that require a stronger security posture. Things like keeping auto-updates on are best practice because they give peace of mind.
TW: Apple, CSAM

With Apple's perceptual hashing tool, survivors of CSAM who are in the NCMEC corpus are likely to trigger collisions and meet the safety threshold. For many, that risk will result in survivors choosing to avoid security updates moving forward.

That's terrifying.
TW: Apple, CSAM

Survivors of CSAM deserve security updates and cloud backups of their photos and data. They should not be subject to further potential abuse and harassment because of an unsecured device, when that was entirely preventable by a security update.

This is preventable.
TW: Apple, CSAM

To share a bit of my life right now: outside of privacy and security, I spend the majority of my day job working on T&S and ML/AI safety involving CSAM. I disagree with @alexstamos on a lot of his thread, but it's a good example of breaking the issue down.
TW: Apple, CSAM

One positive thing I want to specifically point out from @alexstamos's tweet/thread: there are mitigations Apple could create today, such as a robust reporting feature. And breaking the issues down into the specific categories of CSAM is necessary to combat them.
TW: Apple, CSAM

Let's also talk about NCMEC and Thorn's response to all of this, and why it's so harmful to survivors. Calling the people who are criticising the potential harms of Apple's tool the "screeching voices of the minority" is damaging, full stop.
TW: Apple, CSAM

This gaslights survivors, especially those who have dedicated their adult lives to solving these challenges. It also places the onus on survivors, who may feel forced to out themselves as victims/survivors.
TW: Apple, CSAM

Survivors of CSAM, especially those in NCMEC's corpus, should not have to re-traumatise themselves to "prove" that they are the very victims/survivors Apple/NCMEC/Thorn claim to be protecting but are not listening to.

They're not listening anyway.
TW: Apple, CSAM

Here's what I would like to see:
- Apple actually joins the rooms with folks working on these challenges, including survivors of CSAM who are criticising the tools they've announced
- NCMEC and Thorn work on direct support and assistance for survivors
1/
TW: Apple, CSAM

- NCMEC and Thorn have both created several tools to report or identify CSAM, and have collectively gathered decades of data. Conduct follow-up surveys on survivor outcomes to direct psychological, medical, housing, and financial support for survivors in the long term
2/
TW: Apple, CSAM

- Apple doesn't destroy years of security-industry outreach to get the most vulnerable to update their devices, and undoes this tool. Create a robust reporting feature and other mitigations in the meantime. We know ML/AI is not there yet. This isn't it either.
3/
TW: Apple, CSAM

- Survivors of CSAM should not have to out themselves as victims/survivors because of the harmful language NCMEC and Thorn used in their PR statements. NCMEC and Thorn should apologise, especially to survivors, and work on their language moving forward.
4/
TW: Apple, CSAM

- The collective desensitisation to CSAM due to how this all came out is horrifying. There needs to be more emphasis placed on warning people in advance about the subject matter, using more accurate terminology, and correcting people who use inaccurate language.
5/
TW: Apple, CSAM

- Dedicate resources towards tools for the autonomy of survivors. How did we land here, letting Apple use the NCMEC corpus to train tools, for LE, with the potential for real-time, real-life harms against survivors? Whose data is being used? Survivors'.
6/
TW: Apple, CSAM

Going to stop to take another break.

But I don't really get a break from this. I have to think about this every day: at work, on ML/AI safety for CSAM and combating unintended side effects of classifiers, and working on our abusability testing framework at @PlatformAbuse.
