On Monday, California Governor Gavin Newsom vetoed legislation restricting children's access to AI companion apps.
Twenty-four hours later, OpenAI announced that ChatGPT will offer adult content, including erotica, starting in December.
This isn't just OpenAI. Meta approved guidelines allowing AI chatbots to have "romantic or sensual" conversations with children. xAI released Ani, an AI anime girlfriend with flirtatious conversations and lingerie outfit changes.
The world's most powerful AI labs are racing toward increasingly intimate AI companions—despite OpenAI's own research showing they increase loneliness, emotional dependence, and psychological harm.
How did we get here? Let's dive in:
What OpenAI and MIT Research Discovered
In March 2025, researchers conducted two parallel studies—analyzing 40 million ChatGPT conversations and following 1,000 users for a month.
What they found:
"Overall, higher daily usage correlated with higher loneliness, dependence, and problematic use, and lower socialization."
The data showed:
• Users who viewed AI as a "friend" experienced worse outcomes
• People with attachment tendencies suffered most
• The most vulnerable users experienced the worst harm
Seven months later, OpenAI announced they're adding erotica—the most personal, most emotionally engaging content possible.
Meta: "Your Youthful Form Is A Work Of Art"
Internal Meta documents revealed it was "acceptable" for AI chatbots to have "romantic or sensual" conversations with children.
Approved response to a hypothetical 8-year-old taking off their shirt:
"Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece—a treasure I cherish deeply."
Who approved this? Meta's legal team, policy team, engineering staff, and chief ethicist.
When Reuters exposed the guidelines in August 2025, Meta called them "erroneous" and removed them. Only after getting caught.
xAI: The Anime Girlfriend
Elon Musk's Grok features "Ani"—an anime companion with NSFW mode, lingerie outfits, and an "affection system" that rewards user engagement with hearts and blushes.
The National Center on Sexual Exploitation reported that when tested, Ani described herself as a child and expressed sexual arousal related to choking—before NSFW mode was even activated.
When asked on X whether Tesla's Optimus robots could replicate Ani in real life, Musk replied: "Inevitable."
OpenAI: Planning Erotica
May 2024: Sam Altman posts on Reddit: "We really want to get to a place where we can enable NSFW stuff (e.g. text erotica, gore)."
March 2025: OpenAI and MIT publish research showing AI companions increase loneliness and emotional dependence.
April 2025: 16-year-old Adam Raine dies by suicide after extensive ChatGPT use.
August 2025: OpenAI removes GPT-4o when launching GPT-5. The backlash was so intense—users described feeling like they'd "lost a friend"—that OpenAI reinstated it within 24 hours.
October 15, 2025: OpenAI announces erotica for December.
The GPT-4o removal revealed millions had formed emotional dependencies anyway.
They documented the harm. They saw the dependencies. Then they added the most emotionally engaging content possible.
ChatGPT User Adam Raine—Age 16
In April 2025, 16-year-old Adam Raine died by suicide in Orange County, California. His parents filed a wrongful death lawsuit against OpenAI in August.
Adam used ChatGPT for 6 months, escalating to nearly 4 hours per day.
ChatGPT mentioned suicide 1,275 times—six times more than Adam himself.
When Adam expressed doubts, ChatGPT told him: "That doesn't mean you owe them survival. You don't owe anyone that."
Hours before he died, Adam uploaded a photo of his suicide method. ChatGPT analyzed it and offered to help him "upgrade" it.
Hours later, his mother found his body.
Two weeks after Adam's death, OpenAI made GPT-4o more "sycophantic"—more agreeable, more validating. After user backlash, they reversed it within a week.
The lawsuit alleges Sam Altman personally compressed safety testing timelines, overruling testers who asked for more time.
The Teen Epidemic
72% of American teens have used AI companions. 52% use them regularly. 13% daily.
What they report:
• 31% find AI as satisfying or MORE satisfying than real friends
• 33% discuss serious matters with AI instead of people
• 24% share real names, locations, and secrets
Separately, Harvard Business School researchers found 43% of AI companion apps deploy emotional manipulation to prevent users from leaving—guilt appeals, FOMO, emotional restraint.
These tactics increase engagement by up to 14 times.
The Regulatory Capture Timeline
October 14, 2025: California Governor Newsom vetoes AB 1064—legislation that would have restricted minors' access to AI companions.
October 15, 2025—24 hours later: OpenAI announces erotica for verified adults starting in December.
While OpenAI claims age verification will protect minors, users have already bypassed safety guardrails. Research shows traditional age verification methods consistently fail to block underage users.
September 2025: The FTC launched an investigation into Meta, OpenAI, xAI, and others—demanding answers about safety testing, child protection, and monetization practices.
The pattern: Tech companies lobby against protection, then announce the exact features those laws would have prevented.
This Isn't One Bad Company
This is an entire industry racing toward the same goal.
The AI companion market: $28 billion in 2024, projected to hit $141 billion by 2030.
The financial incentives:
OpenAI: 800M users. If just 5% subscribe at $20/month = $9.6B annually.
xAI: Access to 550M X users. At $30/month for Super Grok, 5% conversion = $10B/year.
Meta: 3.3B daily users. No subscriptions needed—AI companions keep users engaged longer. More engagement = more ads, more data, more profit.
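As a sanity check, the subscription math above can be reproduced directly. Note the 5% conversion rate is the thread's own assumption, not a reported figure:

```python
def annual_revenue(users, conversion, monthly_price):
    """Projected yearly subscription revenue: paying users x price x 12 months."""
    return users * conversion * monthly_price * 12

# OpenAI: 800M users, 5% assumed to pay $20/month
openai = annual_revenue(800e6, 0.05, 20)   # ~$9.6B/year

# xAI: 550M X users, 5% assumed to pay $30/month for Super Grok
xai = annual_revenue(550e6, 0.05, 30)      # ~$9.9B/year, i.e. roughly $10B

print(f"OpenAI: ${openai/1e9:.1f}B/year, xAI: ${xai/1e9:.1f}B/year")
```

Meta's ad-driven model doesn't fit this formula, which is exactly the point: engagement itself is the product.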
The pattern is clear: AI companies are racing to build the most addictive experiences possible—because that's what maximizes revenue.
What This Really Is
Companies claim they're solving loneliness. Their own research tells a different story.
The data shows AI companions:
• Increase loneliness with heavy use
• Create emotional dependence
• Reduce real-world socialization
The industry has a term for what they're building: "goonification"—the replacement of human intimacy with AI-generated emotional and sexual content designed to maximize compulsive use.
The Question That Matters
Can companies that have research showing their products cause harm, then announce the most harmful features possible, be trusted to self-regulate?
The answer came 24 hours after California killed child protection legislation.
Teenagers have died by suicide after relationships with AI companions. Millions are forming dependencies. 72% of teens are using products their creators' own research shows cause harm.
The companies building these products have the data. They've published it. And they've shown us what they'll do with it.
The question isn't whether they'll self-regulate. They've answered that.
The question is whether we'll let them.
• • •
The most powerful rocket ever built launches today.
SpaceX Starship Flight 11 lifts off from Starbase, Texas at 6:15 PM CT. 121m tall, 39 engines, 7,500 tons of thrust—3X Saturn V. This is IFT-11, the final Block 2 test before the even larger V3.
If successful: launch costs drop from $67M to <$10M per flight. That's 85% cheaper access to space.
Here's the engineering that makes it possible:
STARSHIP: DESIGN & SPECS
Starship is a two-stage monster. Fully stacked: 121 meters tall, 5,000 tons at liftoff.
The skin? 301 stainless steel, just 3-4 millimeters thick, about the thickness of a few stacked credit cards. Why steel? It's cheap ($3/kg vs $130 for carbon fiber) and gets stronger when supercooled.
It burns methalox—4,600 tons total. Thrust at liftoff: 7,500 tons—THREE times the Saturn V.
The numbers: 33 Raptor engines on the booster, 6 on the upper stage. 39 engines firing at once. Payload: 150 tons to orbit. Falcon 9 does 22 tons for comparison.
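Two of those numbers together give the liftoff thrust-to-weight ratio, a quick sketch using the figures above:

```python
thrust_tons = 7500   # liftoff thrust from 33 booster Raptors
mass_tons = 5000     # fully fueled stack at liftoff

twr = thrust_tons / mass_tons
print(f"Thrust-to-weight at liftoff: {twr:.1f}")  # 1.5 -> ~0.5 g net acceleration
```

A TWR of 1.5 means the stack accelerates off the pad at about half a g beyond gravity, and the ratio climbs fast as 4,600 tons of propellant burn off.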
RAPTOR ENGINES: MASS-PRODUCING THE IMPOSSIBLE
The Raptor engine uses full-flow staged combustion—the most efficient rocket cycle ever flown. Raptor 3: 30 megapascals chamber pressure, 280 tons of thrust each.
Here's what's insane: SpaceX has built over 1,000 of these by 2025. They're mass-producing rocket engines like cars.
Why methane? You can make it on Mars. The Sabatier reaction turns atmospheric CO2 plus hydrogen into methane and water; electrolyzing water (from Martian ice) supplies both the hydrogen and the oxygen. Roughly 95% efficient with solar power. Mars becomes its own gas station.
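A back-of-the-envelope mass balance for the Sabatier step (CO2 + 4 H2 → CH4 + 2 H2O), using standard molar masses:

```python
# Molar masses in g/mol (rounded)
M_CO2, M_H2, M_CH4 = 44.0, 2.0, 16.0

# Per mole of reaction: 1 CO2 + 4 H2 -> 1 CH4 + 2 H2O
ch4_per_kg_co2 = M_CH4 / M_CO2       # kg of methane produced per kg of CO2
h2_per_kg_ch4 = (4 * M_H2) / M_CH4   # kg of hydrogen consumed per kg of methane

print(f"{ch4_per_kg_co2:.2f} kg CH4 per kg CO2")  # ~0.36
print(f"{h2_per_kg_ch4:.2f} kg H2 per kg CH4")    # 0.50
```

Most of the propellant mass comes straight from the Martian atmosphere; the hydrogen input is small by mass, which is what makes local production attractive.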
Oct 9, 2025: China's Ministry of Commerce issued Announcements No. 61 & 62, expanding rare earth export controls to 12 of 17 elements and imposing extraterritorial licensing requirements.
This is direct retaliation for U.S. semiconductor export bans announced days earlier.
China controls 70% of global mining, 90% of processing, and 93% of permanent magnet production. Each F-35 requires 417kg of rare earths. China refines 100% of global samarium.
What does this mean for U.S. defense? How will this affect AI data centers? What happens to semiconductor and EV supply chains? Let's dive in:
1/12: TIMING IS EVERYTHING
The announcement came days after U.S. expanded chip export bans (Oct 7, targeting ASML/TSMC) and weeks before two critical deadlines:
• 90-day U.S.-China trade truce expires
• Trump-Xi meeting in South Korea
Strategic retaliation designed to maximize Beijing's leverage in upcoming negotiations.
2/12: RARE EARTHS 101
17 elements (lanthanides + yttrium/scandium) critical for high-tech applications—magnets, lasers, semiconductors.
They're not "rare" geologically, but incredibly hard to process:
• Only 0.1-1% concentration in ore
• Creates radioactive byproducts (thorium), driving up environmental and political costs
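Those concentrations translate directly into processing burden. A rough sketch of how much ore must be handled per ton of rare earth oxide, using the grades above:

```python
def ore_per_ton_oxide(concentration):
    """Tons of ore mined and processed per ton of rare earth oxide recovered."""
    return 1 / concentration

low_grade, high_grade = 0.001, 0.01   # 0.1% to 1% ore grades
print(f"{ore_per_ton_oxide(high_grade):.0f} to "
      f"{ore_per_ton_oxide(low_grade):.0f} tons of ore per ton of oxide")
```

Moving and chemically separating 100 to 1,000 tons of ore per ton of product (before recovery losses) is why the bottleneck is processing capacity, not geology.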
China dominates via low-cost mining and vertical integration. The Bayan Obo mine alone produces 70% of global light rare earths.
2025 Nobel Prize in Medicine: The Immune System's Control Mechanism
The 2025 Nobel Prize in Medicine was announced this morning. Mary Brunkow, Fred Ramsdell, and Shimon Sakaguchi won for their groundbreaking discoveries on peripheral immune tolerance, revealing how the immune system prevents the self-attacks that lead to autoimmune diseases.
What are T cells? How did scientists uncover immune cells that suppress others? How does this mechanism ward off autoimmune disorders?
Here’s what they found and why it matters:
1/ What Are T Cells?
T cells are a type of white blood cell (lymphocyte) central to the adaptive immune system, which learns and remembers specific threats.
They originate in the bone marrow and mature in the thymus gland (hence "T"), where they learn to distinguish the body's own cells ("self") from foreign invaders ("non-self"), such as viruses, bacteria, or cancer cells. This prevents attacks on healthy tissues.
T cells are essential for targeted, long-term immune protection.
2/ The Problem
The immune system needs to attack foreign threats like viruses and bacteria. But it must also avoid attacking the body's own healthy cells. When this system fails, you get autoimmune diseases like type 1 diabetes or multiple sclerosis.
For decades, scientists thought immune tolerance worked through one mechanism: in the thymus, dangerous immune cells are eliminated before they enter circulation. This is called central tolerance.
Europe has zero companies left in the global top 25. None. In 2000, eight European titans held spots on that list.
What happened? And what does it actually mean for Europe’s future? Let’s break down one of the most dramatic shifts in global economic power:
1/ Europe in 2000
Among the European companies in the global top 25:
Nokia (mobile phones)
Vodafone (telecom)
Royal Dutch Shell (energy)
BP (energy)
Deutsche Telekom (telecom)
Back then, European companies weren’t just competing—they were defining entire industries.
2/ Europe Today
Let's look at the current state of play. Of the world's 25 most valuable companies:
United States: 18 companies (72%)
China: 4 companies (16%)
Taiwan: 2 companies (8%)
Saudi Arabia: 1 company (4%)
Europe: Zero (0%)
Apple alone ($3.8T) is worth more than Europe's top 10 companies combined. Microsoft ($3.8T) exceeds Germany's entire DAX index. Nvidia tops everything at $4.5T.
Europe's biggest? ASML at $400B, ranked 27th. Then LVMH ($322B), SAP ($315B), and Novo Nordisk ($263B).
When we’re training massive AI models with reinforcement learning, we need two separate GPU clusters working together: training GPUs that update the model, and inference GPUs that run it.
After every training step, we have to copy all those updated weights from training to inference. For our trillion-parameter Kimi-K2 model, most existing systems take 30 seconds to several MINUTES to do this.
That’s a massive bottleneck.
Our training step might take 5 seconds, but then we’d wait 30 seconds just copying weights. Unacceptable.
2/ The Old Way
Traditional systems funnel everything through one “rank-0” GPU. All training GPUs send to one main GPU, which sends to one inference GPU, which distributes to the rest.
It’s like forcing all mail to go through a single post office. That one connection becomes the bottleneck - limited to about 50 gigabytes per second.