Like donuts, rage farming content is *designed for dunking.*
To get the perfect dunk, big accounts share the content w/their followers.
Thus, they also get rewarded w/engagement for perpetuating the cycle.
The only winning move is not to play.
Did you Quote Tweet a political ad?
You just donated free advertising.
Would you donate money to a particular politician?
No? Then don't QT.
Well, we did it. We got rage farmed into amplifying a disgraced toxic politician into a busy news cycle.
Next step? He'll claim censorship & that he's under attack by democrats.
Then fundraise.
This is an entirely predictable playbook.
Step 1: Everyone watch this bad thing he did!
Step 2: We must drop everything & condemn him.
Step 3: Here's more bad things he did!
Step 4: Wait, why does his stuff drown out things we care about?
Meanwhile, all Twitter's algorithm hears is "SHOW US MORE OF HIM!"
"So, should we ignore it when politicians say extreme things?"
No. We're in a dark place & need to fight it.
But we must be smart, especially on Twitter.
That means learning how algorithms 'hear' us.
And making sure we aren't baited into inadvertently platforming our opponents.
Whoever cooked up his rage farming knew exactly what they were doing.
Predictably, he followed up by amplifying critical coverage in the WaPo that... included his video.
Of course, people angrily Quote Tweet that, too.
And so he's done it. And we've helped at every step.
• • •
3/ Poor Arthur. But this is an institutional signal: ~8 years in, militaries are still letting enough location-aware devices inside that they remain a real threat.
Incidentally, the @lemondefr team has now been on the #stravaleaks issue for 3 years!
UPDATE: @Plaid for AI happened faster than I warned.
We are in a historic transformation around AI agents.
Disruption will extend to the core of your privacy.
Companies know the appeal of agentic AI & are working to lock consumers into ecosystems designed to maximize data extraction.
It's not too late, but it might be soon.
But the thing about transformative moments is that new possibilities often open simultaneously with the risks.
We need to build, experiment with & use good private + open AI tools, local models that respect privacy by default & confidential inference that prevents companies from mining the data they process.
Do that & give us a fighting chance at a future that respects our freedom, and our boundaries.
Sleep on the challenge of building openly & we relinquish the playing field to the same companies and dynamics that already degrade our autonomy... only faster & everywhere.
2/ What's the deal with @Plaid?
I find people are dimly aware of it as something involved in connecting bank accounts.
I bet you don't know that Plaid helps itself to mountains of your financial data in exchange for that convenience.
3/ Basically, by providing the 'rails,' @Plaid has managed to get an absolute god's-eye view of people's financial behavior.
In real time.
That data is available to other companies. And governments.
YIKES: @perplexity_ai is flexing that they have OS-level access to 100M+ Samsung S26s.
Zero mention of:
Privacy
Security
Encryption
What will Perplexity do with this growing stash of personal data from deep inside Samsung phones? What jurisdictions will it live in? Who will it get shared with?
Here's the thing: Android's current security & privacy model involves sandboxing 3rd party apps from each other. TikTok can't read your private notes, for example.
Sandboxing is good & it narrows the attack surface against your private stuff.
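Under the hood, that sandbox is largely ordinary Unix access control (plus SELinux): Android assigns each app its own Linux UID, and the app's private directory under /data/data/&lt;pkg&gt; is owner-only, so a different app's UID simply can't read it. A minimal sketch of the owner-only permission idea, using a scratch file rather than a real Android path:

```python
import os
import stat
import tempfile

# Each Android app runs as its own Linux UID; its private data lives in a
# directory readable only by that UID. Simulate the owner-only bits here.
path = os.path.join(tempfile.mkdtemp(), "private_notes")
with open(path, "w") as f:
    f.write("secret")
os.chmod(path, 0o600)  # rw for the owning UID, nothing for group/other

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600: no read bit for any other UID
```

A process running under any other UID that tries to open such a file gets EACCES, which is exactly why TikTok can't read your notes app's storage.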
But this #Perplexity integration breaks that baseline sandbox model, making a kernel-adjacent data bridge for Perplexity into your personal stuff.
Will users understand the structural shift in privacy?
Meanwhile, the risk of prompt injection & other attacks against an agentic AI that has OS-level access to personal stuff is also real.
Lots of speed, no signs of caution.
2/ Multiple agents & flows, each with its own distinct security & privacy issues and its own level of OS access to private stuff.
I doubt users have the spare cognitive capacity to parse the privacy & security downsides each time they want to ask a question.