I work in Hollywood. I represent real human beings whose voices, faces, and livelihoods are being stolen by AI companies. This thread is about how it's happening and who's fighting back. 🧵
Let's start with Matthew McConaughey, who just did something kinda brilliant. His lawyers secured EIGHT federal trademarks from the USPTO covering his voice, likeness, and signature phrases - including a sound mark on "alright, alright, alright."
And the trademark filing is hilariously specific. It describes "a man saying 'ALRIGHT ALRIGHT ALRIGHT', wherein the first syllable of the first two words is at a lower pitch than the second syllable, and the first syllable of the last word is at a higher pitch than the second syllable."
They also trademarked a 7-second video of him standing on a porch. A 3-second clip of him in front of a Christmas tree. And audio of him saying "Just keep livin', right?" with his specific pauses and cadence. His MANNERISMS are now federally protected IP.
Now I know what you're thinking: "How can someone trademark a PHRASE?" And also: "Didn't 'alright, alright, alright' come from Dazed and Confused? Doesn't Linklater or the studio own that?"
Great questions. Here's the key distinction. COPYRIGHT and TRADEMARK are completely different legal regimes. The copyright in Dazed and Confused - the script, the footage, the scene - belongs to the studio. McConaughey isn't claiming he owns the movie.
TRADEMARK protects the association between a mark and a commercial source. Think of the NBC chimes or the MGM lion's roar. The legal test is whether something functions as a "source identifier" - meaning when people hear it, they immediately know who it's coming from.
McConaughey's argument is that "alright, alright, alright" - delivered in his specific cadence - has become so synonymous with HIM through decades of use at awards shows, interviews, and his own business ventures, that it now functions the same way a corporate logo does. The USPTO agreed.
It's similar to how "Where's the beef?" originated in a Wendy's commercial but became associated with Clara Peller personally. Or how Schwarzenegger's "I'll be back" transcends the Terminator franchise. The phrase started in a movie. The BRAND lives in the person.
So why do this? Because AI can now replicate anyone's voice with terrifying accuracy, and the existing law hasn't caught up. There's no federal right of publicity. State laws are a patchwork. McConaughey is building legal infrastructure BEFORE someone steals him.
His own words: "It's not coming. It's here." And his advice to every creator: "Own yourself. Your voice, your likeness, whatever you've got - own yourself. So no one can steal you."
But here's the important context. McConaughey isn't anti-AI. He partnered with ElevenLabs to create an AI version of his voice for a Spanish-language edition of his newsletter. He's also an investor in the company. The difference? HE chose it. HE controls it. HE profits from it.
And that distinction - consent vs. theft - is the entire ballgame right now. Which brings us to Scarlett Johansson.
You might remember: In September 2023, OpenAI CEO Sam Altman personally called Johansson and asked her to be the voice of ChatGPT. He told her she could "bridge the gap between tech companies and creatives" and that her voice would be "comforting to people." She said no.
Nine months later, OpenAI rolled out a new voice assistant called "Sky." Johansson's friends and family immediately called her. News outlets couldn't tell the difference. It sounded exactly like her.
And then - I cannot stress how brazen this is - Sam Altman posted a single word on Twitter after the demo: "her." A direct reference to the 2013 Spike Jonze film where Johansson voices an AI that a man falls in love with. He literally winked at the whole thing.
Oh, and two days BEFORE the demo? Altman contacted Johansson's agent again asking her to reconsider. So: asked her, she said no, launched a voice that sounds just like her, publicly referenced the connection, and had JUST asked her again days earlier. The audacity.
Arizona State University later ran a forensic voice analysis comparing Sky to ~600 professional actresses. Johansson's voice was more similar to Sky than 98% of them. They even had identical vocal tract lengths.
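Forensic comparisons like the ASU analysis generally work by mapping each recording to a fixed-length speaker embedding and measuring how close the embeddings are. A minimal sketch of that comparison step - the four-dimensional vectors here are made up for illustration; real systems use embeddings with hundreds of dimensions produced by a trained speaker-verification model:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two speaker embeddings (1.0 = same direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings, purely illustrative -- not real measurements.
sky       = np.array([0.91, 0.40, 0.12, 0.05])
johansson = np.array([0.90, 0.41, 0.10, 0.06])
other     = np.array([0.10, 0.20, 0.95, 0.30])

print(cosine_similarity(sky, johansson))  # near 1.0: very similar voices
print(cosine_similarity(sky, other))      # much lower: different voice
```

Ranking one voice against ~600 others, as the ASU team did, is just this computation repeated and sorted.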
OpenAI's defense? They hired a different actress. Which... even if true, raises the obvious question: did they hire someone who sounds like Scarlett Johansson after Scarlett Johansson told them no?
Johansson's lawyers sent two very aggressive letters. OpenAI pulled the voice. But here's the problem: Johansson had no clear federal legal remedy. Rights of publicity are a patchwork of STATE laws that vary wildly. Some states barely protect you at all.
This is exactly why McConaughey's trademark strategy matters. He's creating a FEDERAL cause of action - a door into federal court - that doesn't depend on which state you're in. It's genuinely novel legal territory. His own lawyer admitted: "I don't know what a court will say in the end. But we have to at least test this."
But let's be real about who this works for. McConaughey can afford one of the top entertainment law firms in the country. The average working actor cannot. And that's where this story gets much darker.
Meet Paul Skye Lehrman and Linnea Sage, two New York voice actors. In 2019 and 2020, they were contacted on Fiverr by anonymous users asking for voice samples. Lehrman was told it was for "academic research." Sage was told it was "test scripts for radio ads."
Lehrman was paid $1,200. Sage got $400. They were assured the recordings would be used internally and never made public.
Two years later, Lehrman heard his own voice narrating a YouTube video about Russian military weapons. Words he never said. In places he never agreed to be. His voice had been cloned by AI startup LOVO, renamed "Kyle Snow," and was the DEFAULT VOICE on their platform.
Sage's voice was renamed "Sally Coleman." It was even used in LOVO's investor pitch deck - the company literally raised venture capital using a voice it stole from a woman it paid $400 on Fiverr.
And LOVO's website? It also offered celebrity sound-alike voices under names like "Barack Yo Mama," "Mark Zuckerpunch," and "Cocoon O'Brien." I wish I was making this up.
Lehrman and Sage sued. And here's the legal gut-punch: the court DISMISSED the copyright infringement claim. Why? Because copyright law protects the original sound recording - the fixed expression - not the abstract qualities of a voice. Your voice, legally speaking, may not be "yours" under copyright.
Their right-of-publicity claims survived. But this illustrates the fundamental problem: the law was not built for a world where your identity can be scraped, blended, renamed, and sold as a subscription product.
And this is happening at industrial scale. It's not just voices. AI companies have been training their models on massive amounts of stolen content. A federal judge found that Anthropic downloaded over 7 MILLION pirated books from sites like Library Genesis to train their AI. The company knew the books were pirated.
In the OpenAI lawsuits, discovery has revealed internal Slack channels called "project-clear" and "excise-libgen" where employees discussed deleting training datasets of pirated books. When asked about it, OpenAI first claimed privilege, then tried to withdraw earlier statements about the deletions. A judge called it "a moving target of privilege assertions."
The settlement in the Anthropic case was $1.5 billion - about $3,000 per book. Some authors are now suing individually rather than joining class actions, arguing that $3,000 is "a tiny fraction - just 2% - of the Copyright Act's statutory ceiling of $150,000 per willfully infringed work."
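The arithmetic in that argument checks out, using only the figures in this thread:

```python
settlement_total = 1_500_000_000   # $1.5 billion settlement
per_book         = 3_000           # roughly $3,000 per book
statutory_cap    = 150_000         # Copyright Act ceiling per willfully infringed work

implied_books = settlement_total / per_book   # how many works the per-book figure implies
share_of_cap  = per_book / statutory_cap      # $3,000 as a fraction of the ceiling

print(f"Implied number of covered books: {implied_books:,.0f}")
print(f"Share of the statutory ceiling: {share_of_cap:.0%}")
```

Roughly 500,000 covered works, and each one settled for 2% of what willful infringement can cost at trial.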
Now here's where it gets really insidious. If AI is trained on human voices, human writing, human creative work - the OUTPUT becomes nearly indistinguishable from human work. Which means detection tools don't work.
AI detection tools used by schools and publishers have been flagging the US CONSTITUTION as AI-generated. Classic literature written a hundred years ago gets scored as 90% likely to be AI. Because of course it does - the AI learned to write BY READING THOSE EXACT WORKS.
These same detection tools disproportionately flag neurodivergent writers, non-native English speakers, and anyone whose writing is "too polished" or "too predictable." The tools measure statistical resemblance, not authorship. And polished human writing looks just like AI output - because AI learned from polished human writing. It's completely circular.
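You can see the circularity in a deliberate caricature of a detector. Real tools score text by language-model perplexity, but the failure mode is the same as this toy, which just measures how "predictable" the word choices are - formal, polished prose scores as more "AI-like" than messy casual writing:

```python
# Toy "AI detector" (a caricature): scores text by the share of very common words.
# Real detectors use language-model perplexity, but share the same failure mode:
# conventional, polished prose reads as statistically "predictable".
COMMON = {"we", "the", "people", "of", "in", "order", "to", "a", "and", "for", "that", "is"}

def ai_likeness(text: str) -> float:
    words = text.lower().split()
    return sum(w.strip(",.") in COMMON for w in words) / len(words)

constitution = "We the People of the United States in Order to form a more perfect Union"
casual = "lol ok brb gonna grab tacos w/ jess then maybe the arcade idk"

print(ai_likeness(constitution))  # higher "AI" score
print(ai_likeness(casual))        # lower "AI" score
```

The Constitution's opening scores as more "AI-like" than a casual text message - exactly the Constitution-flagging failure described above. Statistical resemblance, not authorship.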
This same logic applies to voices. If a model trains on 1,000 voice actors and produces a composite, no individual actor can prove their voice is "in there." The only way to verify it would be through litigation discovery - forcing the company to open its training data. Which is exactly why these companies fight so hard to settle before discovery and keep their training processes opaque.
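The composite problem can be shown numerically: average many voices together and each individual contributor's similarity to the blend shrinks toward noise. A toy sketch with random vectors standing in for real voice embeddings:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_attribution(n_actors, dim=256, seed=0):
    """Max cosine similarity between any single 'voice' and the n-actor blend."""
    rng = np.random.default_rng(seed)
    voices = rng.standard_normal((n_actors, dim))  # one random "voice" per actor
    composite = voices.mean(axis=0)                # the blended model output
    return max(cosine(v, composite) for v in voices)

for n in (1, 10, 1000):
    print(f"{n:>5} actors -> strongest single-voice match: {max_attribution(n):.2f}")
```

With one actor the match is perfect; with a thousand, no individual voice resembles the composite enough to prove anything from the output alone. Hence the fight to keep training data out of discovery.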
So what's actually being done? California passed the Generative AI Training Data Transparency Act, effective January 1, 2026. It requires AI developers to publicly disclose what data they used for training - including whether copyrighted material or personal information was included.
The AI industry's response? Elon Musk's xAI immediately sued California's Attorney General, calling it a "trade-secrets-destroying disclosure regime" that would "gut the AI industry." The court rejected their arguments and denied the injunction. The law stands. For now.
Meanwhile, the Trump administration signed an executive order in December 2025 proposing a federal framework that would PREEMPT state AI laws - essentially setting up a mechanism to dismantle state-level protections in the name of "innovation." So the federal government is currently positioned as the AI industry's ally, not its regulator.
SAG-AFTRA has been fighting this fight on multiple fronts. The 2023 contract established baseline protections: informed consent for digital replicas, compensation requirements, and rules about scanning actors on set. But the technology has moved so fast that those protections already feel outdated.
The 2025 Interactive Media Agreement got more specific - distinguishing between AI tools like noise reduction and pitch adjustment (allowed) and full digital replicas of performers (heavily regulated with consent and per-line compensation). That's actually a smart distinction.
But the big moment is coming: the 2026 TV/Theatrical Agreement negotiations. That's where the real fight over AI happens. And the proposed NO FAKES Act - which SAG-AFTRA helped draft - would create the first FEDERAL intellectual property right in voice and likeness, one that would extend 70 years after death.
Here's what I think needs to happen. We need legislation requiring AI companies to be fully transparent about their training inputs. Not "high-level summaries." Actual, auditable disclosure. The federal government should establish a dedicated office - like the SEC for AI - with the authority to audit training datasets and impose real penalties.
And the guilds need to push for something like automatic likeness registration for all members. When you join SAG-AFTRA, your voice print and likeness get registered in a centralized, protected database. Pair that with federal legislation, and you have a real framework.
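At its simplest, a registry like that is a keyed store of member IDs to fingerprints of their voice prints. A deliberately minimal sketch - every name here is hypothetical, and real voice matching would be similarity-based rather than exact-hash, with hashing shown only so the registry never has to expose the raw biometric data:

```python
import hashlib

registry = {}  # member_id -> SHA-256 fingerprint of the registered voice print

def register_voice(member_id: str, voice_print: bytes) -> str:
    """Store a hash of the raw voice print so claims can be verified later
    without the registry ever exposing the biometric data itself."""
    fingerprint = hashlib.sha256(voice_print).hexdigest()
    registry[member_id] = fingerprint
    return fingerprint

def matches_registered(member_id: str, candidate: bytes) -> bool:
    """Check a candidate voice print against a member's registered fingerprint."""
    return registry.get(member_id) == hashlib.sha256(candidate).hexdigest()

register_voice("sag-0001", b"raw-embedding-bytes-for-member-0001")
print(matches_registered("sag-0001", b"raw-embedding-bytes-for-member-0001"))  # True
print(matches_registered("sag-0001", b"someone-else-entirely"))                # False
```

The point isn't the implementation - it's that the technical lift is trivial compared to the legal lift of making registration mean something in court.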
Because right now we're in a world where a company can pay a voice actor $400, clone their voice, rename it, sell it as a subscription product, raise millions in venture capital off it - and the actor might never even know unless they accidentally stumble across themselves on YouTube.
And in a beautiful bit of irony - while all of this is happening, yesterday OpenAI announced it's shutting down Sora, its AI video generator. The app that was supposed to replace Hollywood. The thing that had the entertainment industry in a panic. It lasted six months.
Downloads plunged 45% by January. The app made a total of about $2.1 million from in-app purchases. Disney's $1 billion investment deal? Dead. Disney teams were literally working with OpenAI on a Sora project on Monday evening, and thirty minutes after the meeting they were told the product was being killed. One person called it "a big rug-pull."
Meanwhile, Coca-Cola ran AI holiday ads two years in a row and got destroyed by consumers both times. McDonald's pulled an AI ad within three days. Vogue ran an AI-generated Guess ad and got dragged by its own readers. Brands like Almond Breeze are now running campaigns that explicitly MOCK AI-generated content.
The market is speaking: consumers can tell when something is AI-generated, and they don't want it. The value of real human creative work is being reinforced by every single one of these backlash stories.