Insurrection Barbie

Mar 13, 9 tweets

🧵🧵🧵 OpenAI and Anthropic are building the AI products that threaten to displace millions of workers. At the same time, they are funding the research that quantifies that displacement, bankrolling the political candidates who propose government solutions like universal basic income, and lobbying for regulations that only the wealthiest companies can afford to comply with, which locks out competitors, consolidates their control over how the public accesses information, and ensures that taxpayers rather than tech billionaires absorb the economic fallout.

And they are trying to do it before the midterms, hoping they can swing the election to the Democrats.

OpenAI CEO Sam Altman personally funded the largest universal basic income study in American history through his nonprofit OpenResearch, giving $1,000 a month to 3,000 people over three years.

He did this before ChatGPT even launched, meaning he was building the case for government cash payments while simultaneously building the product he now says will destroy jobs.

Altman has publicly stated that many professions will “go away” and that “the world is not prepared.” He is both the arsonist and the salesman selling fire insurance.

Anthropic spent over $3.1 million lobbying the federal government in 2025 on AI regulation, national security, export controls, and federal procurement.

Anthropic donated $20 million to Public First Action, a PAC backing pro-regulation candidates in both parties for the 2026 midterms.

Anthropic’s affiliated PACs have directly supported co-chairs of the House Democratic Commission on AI, including Reps. Valerie Foushee and Josh Gottheimer.

Anthropic published a major study on March 5, 2026, eight months before the midterms, mapping which jobs AI can replace, which Fortune framed as evidence of a coming “Great Recession for white-collar workers.”

Anthropic has stated that AI transparency regulation “should apply only to companies developing the most powerful AI models.” That means Anthropic and its handful of competitors. Every startup, every open-source project, and every smaller company gets priced out of the market through compliance costs that only billion-dollar companies can absorb.

Dustin Moskovitz, who co-founded Facebook with Mark Zuckerberg, and his wife Cari Tuna have directed over $4 billion in total grants through Coefficient Giving, formerly Open Philanthropy.

Over $580 million of that has gone specifically to AI safety research and advocacy.

They hold a $500 million stake in Anthropic, which has been moved into a nonprofit vehicle.

Moskovitz spent over $50 million to elect Joe Biden.

He and Tuna were the third largest donors in the 2016 election cycle, giving $20 million to Democratic PACs including the Hillary Victory Fund and MoveOn.org.

Their organization has made over 440 grants through its AI safety fund to universities, think tanks, and policy organizations that produce the studies Democratic lawmakers cite when arguing for regulation and economic intervention.

Chris Hughes, another Facebook co-founder, created and primarily funds the Economic Security Project, the single most influential organization behind the guaranteed income movement in America.

ESP was originally a project of the Hopewell Fund, operated by Arabella Advisors, a consulting firm that manages a large network of left-of-center nonprofits.

ESP has seeded over 100 guaranteed income pilot programs. Additional ESP donors include Pierre Omidyar’s network, George Soros’s Open Society Foundations, the Ford Foundation, the Rockefeller Foundation, the Hewlett Foundation, the Knight Foundation, and the Google Foundation. Hughes and his husband are among the top donors to progressive candidates in the country.

Economic Security Project Action directly endorses Rep. Bonnie Watson Coleman’s Guaranteed Income Pilot Program Act of 2025, which would authorize $495 million per year for five years.

Other organizations endorsing the bill include Oxfam America, Georgetown Center on Poverty and Inequality, the Shriver Center on Poverty Law, NETWORK Lobby for Catholic Social Justice, and United for Guaranteed Income.

Many of these organizations receive funding from the same foundation network.

The $500 million Humanity AI initiative, launched in late 2025, involves the MacArthur Foundation, Ford Foundation, and Kapor Foundation.

Its stated goals include protecting democracy, protecting creators from AI theft, and ensuring AI enhances rather than replaces workers. These are the same foundations funding ESP and the broader guaranteed income advocacy infrastructure.

The headline job loss estimates driving public fear come from Goldman Sachs (300 million jobs globally at risk), McKinsey (60 to 70 percent of tasks automatable by 2030), and the World Economic Forum (92 million jobs eliminated by 2030).

These numbers get cited by the advocacy organizations funded by Moskovitz, Hughes, and the foundation network, then by Democratic lawmakers, then amplified by the media, creating the public perception that mass unemployment is imminent and government action is urgent.

But Goldman Sachs’s own researchers say actual displacement would be 6 to 7 percent of the workforce and would be transitory, with unemployment rising only half a percentage point during the transition.

A recent National Bureau of Economic Research study found that nearly 90 percent of C-suite executives across the US, UK, Germany, and Australia said AI has had no impact on workplace employment over the past three years.

WebAI CEO David Stout wrote that tech founders are under pressure to justify enormous AI investment, which is why many have created narratives of mass worker displacement.

Researchers at the Peterson Institute found that job posting declines in AI-exposed occupations actually began in 2022 before ChatGPT launched, and correspond more closely to rising interest rates than to AI adoption.

Meanwhile, AI company CEOs are making the most extreme predictions.

Anthropic CEO Dario Amodei projected AI could disrupt 50 percent of entry-level white-collar jobs within five years. Microsoft AI CEO Mustafa Suleyman said “most” white-collar tasks could be automated within 12 to 18 months.

Altman said “the inside view” at AI companies is that “the world is not prepared.” Every one of these predictions doubles as a sales pitch for their own products while feeding the fear narrative that justifies government intervention.

If regulation consolidates the AI market around five companies, those companies become the primary interface through which hundreds of millions of people access information, get advice, conduct research, and make decisions.

Unlike a search engine that shows you links, an AI chatbot synthesizes and frames an answer through whatever values and guardrails its creators encoded.

The “safety” framework gives these companies the power to decide what their AI will discuss, how it frames political topics, what viewpoints it presents, and what it refuses to engage with.

When those content decisions become part of government-mandated safety standards, the people who defined “safe” output control the information environment at scale, with legal authority, while every potential competitor is locked out by compliance costs only billion-dollar companies can absorb.

Build the technology. Fund the studies that say it will destroy jobs. Fund the advocacy groups that propose government payments as the solution. Fund the candidates who introduce the legislation. Lobby for regulation only you can afford to comply with. Let taxpayers pay for displacement instead of the companies that caused it. Lock out competitors. Control the information layer. Repeat.

Personally, I am not a fan of this entire model, which is aimed at swinging the midterms to the left.
