The Midas Project
We are a watchdog nonprofit that monitors and reports on the practices of leading AI companies. Also tracking safety updates @SafetyChanges
Feb 25 · 22 tweets · 7 min read
A new filing just dropped in the Musk v. Altman case, and it may be the most brazen and cynical document OpenAI has produced yet.

It's a motion to exclude the testimony of Stuart Russell, but their attacks blatantly contradict things @OpenAI itself has said for years.

🧵

For context, Stuart Russell is an AI professor and a Fellow of the Royal Society.

He won the 2025 AAAI Award for AI for the Benefit of Humanity, TIME named him one of the 100 most influential people in AI, and he has testified before the U.S. Senate.
Feb 6 · 25 tweets · 8 min read
1/ Did @OpenAI just break California’s new AI safety law?

The answer appears to be yes, and OpenAI could owe millions in fines. 🧵

2/ Since December, OpenAI has been racing to reclaim the title of best coding AI.

Yesterday, they launched GPT-5.3-Codex, regaining the lead — but there's a problem: they may have broken California law to do it.
Jan 17 · 13 tweets · 5 min read
Something strange happened on conservative Twitter on Thursday.

A dozen right-wing influencers suddenly became passionate about semiconductor export policy, posting nearly identical (and often false) attacks over a 27-hour period on a bill most people have never heard of.

🧵

The AI OVERWATCH Act is a Republican bill that would let Congress review AI chip exports to adversaries like China. It's backed by Microsoft and right-leaning think tanks.

But starting January 15, influencers called it pro-China sabotage and a Democrat plot, all in unison.
Jun 18, 2025 · 16 tweets · 6 min read
Today, @TheMidasProj and @Tech_Oversight released The OpenAI Files, a comprehensive investigation into safety issues, integrity concerns, and employee testimonies at @OpenAI as it considers restructuring to a for-profit. Some highlights from the report: (1/16)

Dario and Daniela Amodei, who left in 2020 to found Anthropic, described Altman’s tactics as “gaslighting” and “psychological abuse.”
Dec 12, 2024 · 9 tweets · 2 min read
For months, we've been calling upon the team at @cognition_labs to make an industry-standard risk evaluation plan. They've consistently refused.

Now, one day after their product launch, a live streamer revealed a massive security vulnerability in front of an audience of 6,000. 🧵

What happened?

The tl;dr: when you share your screen while using Devin, a URL displayed in plaintext grants access to Devin's workspace, with no authentication required.

In other words, anyone who sees your screen can access your (and Cognition's) sensitive data.
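This is the classic weakness of a "capability URL": the link itself is the only credential, so anyone who can read it (say, a viewer pausing a stream) holds the keys. A minimal sketch, assuming a hypothetical token-in-query-string URL (the endpoint and parameter names below are illustrative, not Cognition's actual design):

```python
# Sketch of why an unauthenticated capability URL is dangerous.
# The URL and "token" parameter are hypothetical, for illustration only.

from urllib.parse import urlparse, parse_qs

# A URL like this appearing on a shared screen exposes the credential:
# the secret lives in the query string, visible to anyone watching.
leaked_url = "https://workspace.example.com/session?token=s3cr3t-workspace-key"

def extract_credential(url: str) -> str:
    """Anyone who can read the URL can recover the access token."""
    return parse_qs(urlparse(url).query)["token"][0]

# A stream viewer who screenshots the URL now has full access:
print(extract_credential(leaked_url))  # → s3cr3t-workspace-key
```

The standard mitigation is to require a second factor the URL does not carry, such as a logged-in session or a short-lived, single-use token, so that seeing the link alone is not enough.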