2/ I'd say the joint statement on commercial #spyware is unprecedented.
A few years ago, spyware like #Pegasus was treated as a human rights issue.
But the dizzying speed of proliferation created big problems for governments, forcing them to prepare positions & take action.
3/ The statement's commitment to guardrails for accountable domestic #spyware use is important.
But the devil will be in the implementation. Civil society will be watching.
(Note: this issue wasn't covered in the White House Spyware Executive Order on Monday, so it's nice to see the USA commit here.)
4/ Export control commitments on #Spyware. Again, important.
Worth noting, several signatories have a complex history on surveillance tech export...
So transparency about license granting & denials will be essential for accountability & to ensure commitment has teeth.
5/ Tracking & information sharing. Maybe public shaming? Norms? Again, important.
The mercenary #spyware industry has hidden from researchers & victims.
Let's hope it's harder for them to hide from governments.
6/ Commercial #spyware proliferation is now a global problem, whether it's sold to autocrats or to more 'democratic' governments in the EU that wind up abusing it.
But a key driver? Investment firms in the US & elsewhere. Good to see the joint statement speak to this.
8/ Spyware proliferation went too far & did too much harm.
Result? Governments are waking up & have started taking action.
But this is also a reminder of all the progress still needed on many fronts, like domestic accountability, oversight & transparency from every signatory.
9/ It remains puzzling to me as I read the joint statement on #Spyware that some EU countries are notably missing (where is #Germany?).
It also puts into stark relief that the EU Parliament's efforts on Spyware have a long way to go.
I hope there is some pressure to catch up!
UPDATE: @Plaid for AI happened faster than I warned.
We are in a historic transformation around AI agents.
Disruption will extend to the core of your privacy.
Companies know the appeal of agentic AI & are working to lock consumers into ecosystems designed to maximize data extraction.
It's not too late, but it might be soon.
But the thing about transformative moments is that new possibilities often open simultaneously with the risks.
We need to build, experiment with & use good private + open AI tools, local models that respect privacy by default & confidential inference that prevents companies from mining the data they process.
Do that & give us a fighting chance for a future that respects our freedom, and our boundaries.
Sleep on the challenge of building openly & we relinquish the playing field to the same companies and dynamics that already degrade our autonomy... only faster & everywhere.
2/ What's the deal with @Plaid?
I find people are dimly aware of something involving connecting bank accounts.
I bet you don't know that Plaid helps itself to mountains of your financial data in exchange for the convenience.
3/ Basically, by providing 'rails' @Plaid has managed to get an absolutely god's-eye view of people's financial behavior.
In real time.
That data is available to other companies. And governments.
YIKES: @perplexity_ai is flexing that they have OS-level access to 100M+ Samsung S26s.
Zero mention of:
Privacy
Security
Encryption
What will Perplexity do with this growing stash of personal data from deep inside Samsung phones? What jurisdictions will it live in? Who will it get shared with?
Here's the thing: Android's current security & privacy model involves sandboxing 3rd party apps from each other. TikTok can't read your private notes, for example.
Sandboxing is good & it narrows the attack surface against your private stuff.
But this #Perplexity integration breaks that baseline sandbox model, creating a kernel-adjacent data bridge for Perplexity into your personal stuff.
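As a toy illustration of the structural shift (plain Python, not real Android code; real Android enforces isolation with per-app Linux UIDs and SELinux, and the app names here are made up), the sandbox model amounts to a per-app ownership check, and an OS-level integration is effectively an actor exempted from that check:

```python
# Toy model of Android-style app sandboxing -- illustration only.

class Sandbox:
    def __init__(self):
        # each app's private data, keyed by the app that owns it
        self._data = {}

    def write(self, app, key, value):
        self._data.setdefault(app, {})[key] = value

    def read(self, caller, owner, key):
        # baseline model: an app may only read its own private storage
        if caller != owner:
            raise PermissionError(f"{caller} cannot read {owner}'s data")
        return self._data[owner][key]

    def read_privileged(self, caller, owner, key):
        # an OS-level integration sidesteps the per-app check entirely --
        # this is the kind of bridge the thread is pointing at
        return self._data[owner][key]

sandbox = Sandbox()
sandbox.write("notes.app", "diary", "private thoughts")

# a cross-app read by an ordinary sandboxed app is blocked...
try:
    sandbox.read("social.app", "notes.app", "diary")
except PermissionError as e:
    print(e)

# ...but a privileged OS-level agent reads everything
print(sandbox.read_privileged("assistant.app", "notes.app", "diary"))
```

The point of the sketch: the privacy guarantee lives in the ownership check, so an integration that runs outside it changes the model for every app's data at once.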
Will users understand the structural shift in privacy?
Meanwhile, the risk of prompt injection & other attacks against an agentic AI that has OS-level access to personal stuff is also real.
Lots of speed, no signs of caution.
2/ Multiple agents & flows, each with their own distinct security & privacy issues and levels of OS-level access to private stuff.
I doubt users have the cognitive spare room to parse privacy & security downsides each time they want to ask a question.
NEW: When Kenyan cops arrested activist & presidential candidate @bonifacemwangi they took his devices.
When he got his personal phone back, the password was gone.
We @citizenlab found they'd abused @cellebrite to break into it.
Here's why this abuse matters 1/
2/ Your phone holds the keys to your life, and governments shouldn't be able to help themselves to the contents just because they don't like what you are saying.
But everywhere you look, cops are getting phone cracking technology from companies like @cellebrite.
Many abuse it.
3/ @Cellebrite's abuse potential is clear.
Now, Cellebrite says that they have a human rights committee & do due diligence...
Because even Cellebrite knows that if you sell phone cracking tech to security services with bad oversight, you have a problem.
So why are there so many sales to questionable security services?