There is a new AI proposal from @aipolicyus. It should SLAM the Overton window shut.
It's the most authoritarian piece of tech legislation I've read in my entire policy career (and I've read some doozies).
Everything in the bill is aimed at creating a democratically unaccountable government jobs program for doomers who want to regulate math.
I mean, just check out this section, which in a mere six paragraphs attempts to route around any potential checks from Congress or the courts.
@aipolicyus The amount of bureaucracy this bill would unleash is staggering. The bill tries to streamline some of it by providing a "Fast Track," but the main takeaway is how broad the range of software likely to be subject to regulation is:
The proposal also allows the Administrator to require any applicant (including Fast Track applicants and open source applicants) to adopt "safety precautions," which is entirely open-ended. Not through a rule-making process or any sort of due-process-protecting mechanism, but simply as a condition of granting a permit!
Over and over, the legislation has this one-way ratchet: the Administrator is free to make rules stricter without any evidence, but has to prove a negative to relax them.
There's a whole section on open source criteria. Again, if a project doesn't get a gov OK, it CANNOT BE CONTINUED. And except for Fast Track applications, I think an application could just sit in process for a long time without approval, preventing court review. This is how you kill open source competitors.
The review process is somewhat similar to the SEC or FTC's Administrative Law Judge process, where the Administrator can overturn what the more independent ALJs decide. Only after all this process can a case be appealed - and then, for some reason, the party seeking the permit only has 20 days to do so. Why?!
Oh, and by the way, if it wasn't clear yet, you can't do ANYTHING until the government says you can.
And if you are operating under a permit and your model gets too good, you have to stop working and stop using it until the government signs off.
The bill creates a registry for all high-performance AI hardware. If you "buy, sell, gift, receive, trade, or transport" even one covered chip without completing the required form on time, you have committed a CRIME. The Administration is directed to collect all that competitively sensitive information and compile it into reports.
More wild shit: The Frontier Artificial Intelligence Systems Administration (which I've called "Administration," as in the draft) can straight up compel testimony and conduct raids for any investigation or proceeding, including speculative "proactive" investigations. This really is math cops.
I'm going to skip the civil liability section because it's so bonkers I can't handle looking at it any more. This alone would bury the AI industry in an avalanche of lawsuits. (At least the private right of action is limited to alleging >$100 million in "tangible" damages.)
On to the criminal liability section: THERE IS A CRIMINAL LIABILITY SECTION. FOR DOING MATH. Or for attempting to do math, or for not telling the gov that you're doing math.
Also officials who don't do their jobs can be criminally prosecuted? By whom? I have never seen that before.
Section 16 is "EMERGENCY POWERS". I'm sure this one is measured .....
oh. no. The administrator can, ON HIS OWN AUTHORITY, shut down the frontier AI industry for 6 months.
Oh, and if the President initiates, the Administrator can literally seize and destroy all the hardware and software. It puts the future in the hands of one dude, who may have formed his opinions on AI from watching the latest Mission Impossible.
Oh look the Administrator can conscript troops:
Other agencies are required to consult with the Administration if they're doing AI enforcement stuff. (And b/c the Administrator has expansive legal authorities beyond anything else in fed. law enforcement except maybe anti-terrorism, I suspect all the cases will end up there.)
And out of nowhere the bill also amends the antitrust laws to give the Administration a near veto on AI mergers. Remarkable.
Almost to the end, any more surprises? Well, funding can come from anywhere, including the fines imposed AND DONATIONS, so that should work out well. Vitalik probably still has some shitcoins lying around.
Finally, the end. No boilerplate severability clause for @aipolicyus, let's tell courts how to do their jobs.
Gotta love that the last eight words of this bill, which is a giant middle finger to the Constitution, are "to the maximum extent permitted by the Constitution."
Seriously, this bill is so authoritarian that it ought to get them laughed out of every congressional office. They might as well have proposed a constitutional amendment that says, "The new AI Administrator can do whatever they want, notwithstanding the rest of this document."
Anyhow, if you want to read the entire fantasy yourself, check it out here. One note: there are several references to Section 11 as the emergency powers portion but obviously they meant Section 16. AI could have caught that one for them. assets.caip.org/caip/RAAIA%20%…