(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people.
OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history—and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted?
A thread on some of our findings:
(2/11) In the fall of 2023, OpenAI's chief scientist, Ilya Sutskever, acting at the behest of fellow board members and with other concerned colleagues, compiled some 70 pages of memos about Altman and his second-in-command, Greg Brockman—Slack messages and H.R. documents, some photographed on a cellphone to avoid detection on company devices. One memo begins with a list: "Sam exhibits a consistent pattern of . . ." The first item is "Lying."
Separately, Dario Amodei—who left to co-found Anthropic—kept years of private notes on Altman and Brockman. More than 200 pages of related documents, never before publicly disclosed, have circulated in Silicon Valley. In one document, Amodei writes that Altman's “words were almost certainly bullshit.”
(3/11) The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. Mira Murati, who had given Sutskever material for his memos, said: “We need institutions worthy of the power they wield…The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it."
Opinions vary on the extent to which we should consider these traits benign or malign. Altman attributes the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider."
(4/11) What does this trait look like in practice?
In late 2022, Altman assured the board that features in a forthcoming model had been approved by a safety panel. Board member Helen Toner requested documentation. She found that the most controversial features had not, in fact, been approved.
In 2023, as the company was preparing to release GPT-4 Turbo, Altman apparently told Murati that the model didn't need safety approval, citing the company's general counsel, Jason Kwon. But Kwon said he was "confused" about where Altman had gotten that idea.
OpenAI increasingly wields meaningful power to shape global security.
The piece describes in detail how its executives considered enriching the company by playing world powers—including China and Russia—against one another, perhaps starting a bidding war for advanced A.I. technology. (The plan was dropped after several employees talked about quitting. An OpenAI representative said it was just one of many ideas "batted around at a high level.")
(6/11) A legal review was an integral part of Altman's return. The review, by the law firm WilmerHale, was overseen by two board members selected in close conversation with Altman. People close to the investigation told us that no written report was ever produced—though many executives expected one, given the high-profile nature of the scandal. Only an 800-word announcement from OpenAI was released, acknowledging a "breakdown in trust."
Some of the lawyers involved defended their work as "an independent, careful, comprehensive review" and one of the new board members said there was "no need for a formal written report." Many others disagreed:
(7/11) OpenAI was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The company accepted charitable donations, and some former employees told us they joined because of assurances about the nonprofit and its noble mission, even taking pay cuts to do so.
But internal records show that the founders had private doubts about the nonprofit structure as early as 2017. Brockman, Altman's co-founder, wrote in a diary entry: "cannot say that we are committed to the non-profit . . . if three months later we're doing b-corp then it was a lie."
OpenAI has since recapitalized as a for-profit entity.
(8/11) Some former OpenAI researchers argue that the company has forfeited its original safety mission and accelerated an industry-wide race to the bottom.
The piece details a set of public and internal safety commitments that former researchers say were abandoned. Several safety-related teams at the company have been dissolved. The Future of Life Institute recently gave OpenAI an F on existential safety—alongside every other company, except for Anthropic, which got a D, and Google DeepMind, which got a D-.
Altman told us he still prioritizes safety, and that "we still will run safety projects, or at least safety-adjacent projects.”
(9/11) In the cut-throat race for A.I. dominance, these more substantive critiques of Altman commingle with no-holds-barred opposition efforts in which rivals have weaponized his personal life. Intermediaries directly connected to—and in at least one case compensated by—Elon Musk have circulated dozens of pages of salacious and unsubstantiated opposition research reflecting extensive surveillance: shell companies, personal contacts, interviews about a purported sex worker conducted at gay bars.
In the course of our reporting, multiple people within rival companies reached out to insinuate to us that Altman sexually pursues minors—a persistent narrative in Silicon Valley that appears to be untrue. We spent months looking into the matter and could find no evidence to support it.
(10/11) Why does all of this matter?
A.I. already has life-saving applications, from medical research to weather warnings. Altman has supported OpenAI's growth with promises of a superabundant future.
But the dangers are also no longer a fantasy. A.I. is already being deployed in military operations around the world. Researchers have documented its power to rapidly identify chemical warfare agents. OpenAI faces seven wrongful-death lawsuits alleging that ChatGPT prompted suicides and a murder. A.I. could soon cause severe labor disruption, perhaps eliminating millions of jobs. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and some experts warn of a bubble and recession risks.
OpenAI has one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. A board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)
If the bubble pops, much more than one company is at stake.
(11/11) There is much more in the piece—on the saga of Altman's firing and return; a history of alleged similar complaints earlier in his career; gifts from foreign leaders and a security-clearance vetting process that turned up what one official described as "a lot of red flags"; and more. And it looks at wider critiques from industry insiders of the current moment's anti-regulation trajectory—something that stands to affect all of us.
I hope you'll take the time for a long read in this case, and subscribe to @NewYorker to support this kind of investigative reporting: newyorker.com/magazine/2026/…
(🧵1/11) When OpenAI board members hired the law firm WilmerHale to investigate Sam Altman's firing over two years ago, many executives at the company expected to see extensive findings. Instead, OpenAI released a brief announcement with few details. One new disclosure in our @NewYorker investigation: there was no written report, and findings were deliberately kept out of writing.
(2/11) When Altman sought the removal of board members who had fired him over an alleged pattern of deception and manipulation, they made an independent third-party investigation a condition of their exit. Altman initially resisted any inquiry, but eventually acceded to a review.
(3/11) But the two new board members who controlled the review—Larry Summers and Bret Taylor—were selected in close consultation with Altman. He texted Satya Nadella: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation."
1/9 🧵Iran's military recently released a video threatening the "complete and utter annihilation" of the $30 billion OpenAI Stargate data center being built in Abu Dhabi. My @newyorker investigation with @andrewmarantz into Sam Altman and OpenAI helps explain how we got here—and the geopolitical entanglements at the heart of OpenAI's expansion into the Gulf.
2/9 During the Biden administration, Altman explored getting a security clearance to join classified AI policy discussions. A staffer at the RAND Corporation, which helped coordinate the process, wrote that Altman had been "actively raising 'hundreds of billions of dollars' from foreign governments," and that the UAE had gifted him a car—"I assume it was a very nice car." The staffer continued: "The only person I can think of who ever went through the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance." Altman withdrew.
3/9 Building advanced AI requires staggering capital. As one tech executive told us: "When you think about entities with a hundred billion dollars they can discretionarily spend per year… there's the US government, the Saudis, and the Emiratis—that's basically it." OpenAI's fundraising strategies reflected that reality.
Jeff Bezos bought the Washington Post promising "financial runway." This month the paper gutted its newsroom—more than 300 layoffs.
Whatever you think of legacy news, the hard data shows us that newspaper closures hurt Americans. Here's how: 🧵
Between 2008 and 2020, U.S. newspaper newsroom employment fell 57%.
More than 200 counties are now "news deserts"—no local outlet at all. In another 1,500+ counties, only one remains. pewresearch.org/short-reads/20…
Studies show that when local news outlets stop scrutinizing government, efficiency drops. Public payrolls bloat. Waste increases.
The cost gets passed to you—roughly $85 in added taxes per person after a county loses one of its last few papers. sciencedirect.com/science/articl…
(1/10 🧵) If you live in NY, you may see a new warning: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” This mandatory disclosure went into effect late last year, and it’s the first attempt by a US state to grapple with a new generation of surveillance pricing.
(2/10) You know dynamic pricing—think Uber rides, flights, or concert tickets whose prices surge based on supply and demand. "Surveillance pricing" takes this to a new level: using your data to set a "price for you" based on your predicted breaking point. This is, increasingly, everywhere.
(3/10) A December 2025 Consumer Reports investigation found that Instacart prices for identical items varied by as much as 23% between different users. Instacart characterizes these discrepancies as routine "A/B testing." consumerreports.org/money/question…
(1/13 🧵) Here's my further analysis of the recent journalist arrests, for those not inclined to hunt down my posts elsewhere. While this is a political flash point, I do not believe it is a partisan issue. It is part of a pattern perpetrated by, and that hurts, both parties:
(2/13) What one party does to shrink the space for newsgathering is a loaded gun the next party can use. I’ve always covered this as a bipartisan problem—here I am in 2020 reporting on a DOJ whistleblower and noting that going after reporters' sources was a wider trend:
(3/13) That trend has continued. Under Biden: the FBI raid on Tim Burke (using the CFAA to criminalize finding public URLs), the Project Veritas "diary" raids targeting journalistic materials, and the delayed prosecution of Steve Baker for trespassing on Jan 6th...
But the procedural history now emerging is unusual. Before the arrests, a federal magistrate judge found no probable cause to arrest the journalists. The government appealed anyway.
Here’s why that matters—and what it signals more broadly.
They’re charged under the Freedom of Access to Clinic Entrances Act. Written to protect abortion clinics, it also covers places of worship.
The government says the law applies because the protest took place at Cities Church in St. Paul.
The church's pastor, David Easterwood, is Field Office Director for ICE—one of the highest-ranking deportation officials in the Midwest.
Protesters were there to expose what they saw as an official using religious privacy as cover. Reporters were there to document it.