The proposed moratorium to slow the avalanche of state AI bills (1000+ in 2025) has really spun up folks, but they aren't making great arguments.
Take a new letter by @demandprogress (irony!) and other progressive orgs. It gets basic facts wrong and misrepresents research. 🧵
1/ This characterization of the moratorium is wrong: it isn't a total immunity because states can still enforce any general purpose law against AI system providers, including civil rights laws and consumer protection laws. In fact, the moratorium specifically says that.
2/ False. While it's not quite clear what "unaccountable to lawmakers and the public" means, it is 100% clear that traditional tort liability as well as general consumer protections and other laws would continue to apply. Deliberately designing an algorithm to cause foreseeable harm likely triggers civil and potentially criminal liability under most states' laws.
3/ So much wrong. State regulators don't enforce tort law; nothing in the moratorium changes rules of discovery in lawsuits; why is "transparency" uniquely necessary in AI to hold companies accountable for actual harm? We can see the harms! Discovery is for determining the underlying causes. (If no one can tell if they were harmed or not, is there really harm?)
4/ Ok, now it gets bad: the letter moves from misinterpreting the law to basically just making things up. The cited reference refers to one instance of a reporter telling a chatbot that they were underage. It contains no evidence that any, let alone "many," cases of children having such conversations exist.
5/ The lawsuit against character.ai (which is the cite here) is ongoing and would not be affected at all by the moratorium.
6/ This is very close to a lie, and at the very least is completely unsupported by the citation. The cited study, which did not involve real patients or actual medical decisions, showed that the unaltered AI tool improved diagnosis, while AI tools that the researchers intentionally biased made diagnosis less accurate.
This is a stupid study. If I intentionally falsified parts of a medical reference handbook or surreptitiously manipulated a blood pressure cuff, I could probably get doctors to misdiagnose more often, too.
But the bigger point is that, contra the letter's claims, there were no actual "healthcare decisions that have led to adverse and biased outcomes."
7/ As the cited article notes, revenge porn long pre-exists generative AI.
BUT ALSO Congress (accused by the coalition of being totally inactive) literally just passed and the President signed the TAKE IT DOWN Act, which criminalizes the publication of nonconsensual intimate imagery, including that created with AI.
The moratorium doesn't affect the TAKE IT DOWN Act. So this point is moot.
8/ To the contrary! The Trump EO 1) applies to federal agencies and 2) seeks to establish a federal framework. A moratorium on conflicting state laws is completely consistent with Trump's approach.
9/ This is more of the same. AGAIN, civil rights, privacy laws, and many other safeguards are completely unaffected by the moratorium. SOME requirements to tell customers they are speaking to an AI may be affected, but even those could be easily tweaked to survive the moratorium. Just change the law to require all similar systems, AI or not, to disclose key characteristics.
10/ To finish up, I'll just flag a particular pet peeve of mine. Is there ANY evidence that regulation increases consumer trust in a technology? This logical tic is so common among supporters of certain kinds of regulation, but it seems completely false to me. Every technology mentioned was widely adopted well before there was regulatory action. Adoption happens first and then regulation. Are there ANY examples of it going the other way around?
END/ If you want to read a contrasting view that doesn't make things up, check out the letter we led:
The flood of state AI regulatory proposals threatens to drown the U.S. AI industry. A late-night @HouseCommerce markup is about to discuss a moratorium on state AI regulation. We submitted a letter from twelve state-based organizations supporting this important provision. A 🧵
2/ Problem: Over 1,000 AI bills proposed in the last 4 months, most in state legislatures. This regulatory tidal wave risks drowning innovation in confusion and conflicting rules.
3/ Patchwork Alert: NY’s RAISE Act alone could force AI labs into costly, confusing inspections; imagine this duplicated across multiple states, each using different rules. Nightmare fuel for startups, boon for lawyers.
1/ Big shift in AI policy: This week Trump repealed Biden’s AI Executive Order and introduced his own Removing Barriers to American Leadership in Artificial Intelligence to shift direction. BUT which Biden-era AI actions should Trump focus on? 🧵
2/ Trump’s new executive order underscores a commitment to cutting red tape and fostering innovation. But Biden’s AI policy isn’t completely gone—it lingers in ongoing agency initiatives. Section 5 of the EO attempts to clean up these leftovers:
3/ Over at @abundanceinst, we've been tracking all public proceedings that Biden's EO triggered. Below is a breakdown of some of the most important of those proceedings. We commented on many of them, and they now deserve the most scrutiny from the Trump admin.
There is a new AI proposal from @aipolicyus. It should SLAM the Overton window shut.
It's the most authoritarian piece of tech legislation I've read in my entire policy career (and I've read some doozies).
Everything in the bill is aimed at creating a democratically unaccountable government jobs program for doomers who want to regulate math.
I mean, just check out this section, which in a mere six paragraphs attempts to route around any potential checks from Congress or the courts.
@aipolicyus The amount of bureaucracy this bill would unleash is staggering. The bill attempts to streamline some of this by providing a "Fast Track," but the main takeaway is how broad the categories of software likely to be subject to regulation are:
The proposal also allows the Administrator to require any applicant (including Fast Track applicants and open source applicants) to adopt "safety precautions," a requirement that is entirely open-ended. Not through a rule-making process or any sort of due-process-protecting mechanism, but simply as a condition of granting a permit!
This @FT op-ed by Marietje Schaake pairs well with my op-ed with @ckoopman. Keep Congress AND tech CEOs away from AI regulation. 😏
Not joking. A 🧵
Schaake is correct that CEOs have an interest in shaping regulation to benefit their business model. But legislation isn't the only way regulatory capture happens. All prescriptive regulation inherently favors incumbents b/c it is written for the present. 2/
Future, and especially disruptive, business models and technologies won't fit in those regulatory boxes. Such businesses face regulatory uncertainty PLUS established incumbents who speak the regulators' language. The FCC is a great example of this happening over and over. 3/