Brian Chau (SF 08/18-09/03)
Defending Startups at @aftfuture. Amateur political theorist. Emergent Ventures 2022, IOI Gold 2017. e/🇺🇸
Jun 5 4 tweets 6 min read
Block SB1047: The definitive resource

In a sentence
If passed, SB1047 will entrench incumbents over startups, target open source, devastate California’s lead in AI, and cede rulemaking to unelected decels.

In a paragraph
SB1047 burdens developers with a mountain of compliance that will prevent startups from competing with legacy companies. It creates a new regulatory agency, the Frontier Model Division, which will impose fees on AI developers while obstructing their research. These unelected bureaucrats will have broad powers to direct criminal liability and change regulatory standards at will. The bill’s co-sponsor, the Center for AI Safety, is an extremist organization that believes AI research is likely to lead to human extinction. Consequently, the bill is designed to harm AI research itself rather than focus on malicious use, all while going out of its way to target open source through its derivative-model standard. California owes its AI research lead to a vibrant startup economy. If we wish to keep it, California must block SB1047.

In an essay
A bill that threatens the future of startups, open source, and AI research is on its way to becoming law. Introduced by State Senator Scott Wiener and co-sponsored by the SBF-funded doomsayer non-profit Center for AI Safety, SB 1047 passed the California Senate on May 21st and is headed to the State Assembly for a vote this August. If passed, the bill would severely restrict AI research, place asymmetric burdens on open source, and incarcerate developers who fail to predict how their AI models will be used.

The bill creates the Frontier Model Division, a new regulatory agency within the California Department of Technology funded by fees levied on AI developers. The FMD puts developers of ‘covered models’ between a rock and a hard place: either risk felony perjury by applying for a limited duty exemption, or shoulder months to years of compliance applications. The California Senate Appropriations Committee estimated the state would pay “hundreds of thousands of dollars to counties for increased incarceration costs relating to the expansion of felony perjury in this bill.”

In a move against open source, SB 1047 also applies the same requirements and liabilities to developers whose models are used in a ‘derivative model’, defined as:
“(A) A modified or unmodified copy of an artificial intelligence model.
(B) A combination of an artificial intelligence model with other software.”

This means that developers of perfectly legitimate AI tools can be held liable simply because their models are used in combination with malicious software. For example, take an AI that writes a simple introduction email. Alone, that AI is not harming anyone. However, if it is used to send emails that include a link to malware, it could be used to commit crimes covered by this bill. The AI model is being used for its intended purpose, writing emails, but because it is used in combination with malicious software, the developer could still be held liable under this definition of derivative model.

Defenders of the bill point to the covered-models standard, which includes all AI models trained with 10^26 floating-point operations of compute, or those with “similar or greater performance” on any of several unspecified benchmarks. In theory, this compute threshold limits the bill to large companies. In an open letter, State Senator Wiener argues: “Our intention from the start has been for SB 1047 to allow startups to continue innovating unimpeded while imposing safety requirements only on the large and well-resourced developers building highly capable models at the frontier of AI development.”
However, since the size of models is rapidly increasing, this is a moving target that will cover more companies every year. If current growth trends continue, the bill will likely apply to at least one model released in the next year and to a wide range of models within roughly four years. A major factor behind this trend is the falling cost of compute: developers will be able to train ever-larger models even while spending the same dollar amount.
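The arithmetic behind this moving-target claim is easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python; the starting compute figure and the 4x-per-year growth rate are illustrative assumptions for the sketch, not numbers from the bill or from any lab's disclosures:

```python
# Back-of-the-envelope sketch: when does a frontier training run cross
# the bill's 10^26 FLOP threshold? All inputs below are illustrative
# assumptions, chosen only to show how fast a fixed threshold is overtaken.

THRESHOLD_FLOPS = 1e26   # SB 1047's covered-model compute threshold
frontier_flops = 2e25    # assumed compute of a current frontier-scale run
growth_per_year = 4.0    # assumed annual multiplier on training compute

year = 2024
while frontier_flops < THRESHOLD_FLOPS:
    frontier_flops *= growth_per_year
    year += 1

print(f"Under these assumptions, frontier runs cross 10^26 FLOPs around {year}.")
# Because compute also keeps getting cheaper, smaller developers reach the
# same scale a few years after the frontier does, at the same dollar spend.
```

With these placeholder numbers the threshold is crossed within about two years; adjusting the growth rate or starting point shifts the date, but any fixed FLOP cutoff gets overtaken on a short timescale.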

Moreover, the similar-performance standard means that even startups that develop more efficient models to compete with industry leaders while staying under the compute limit will still be subject to the same regulation.

The derivative-model regulations also mean that AI startups that do not train models themselves will see the number of models available for them to modify greatly limited.

There is still time to move in a better direction. In the US Congress, the bipartisan roadmap led by Senator Chuck Schumer (D-N.Y.) provides a model that prioritizes funding AI research over restricting it. Meanwhile, California risks being captured by an extreme ideology: existential AI safety, funded by big-tech donors such as convicted FTX founder Sam Bankman-Fried and Facebook co-founder Dustin Moskovitz, who believe that AI research is likely to cause human extinction.

Time after time, California has remained the playground of radical ideologies that both national parties reject. We cannot let that happen with AI.

In a reading list

context.fund/policy/sb_1047… by @chrislengerich
hyperdimensional.co/p/californias-… by @deanwball
hyperdimensional.co/p/california-s… by @deanwball
answer.ai/posts/2024-04-… by @jeremyphoward
1a3orn.com/sub/essays-ca-… by @1a3orn
rstreet.org/commentary/cal… by @AdamThierer
thefai.org/posts/californ… by @hamandcheese

and h/t @AnjneyMidha for the cartoon
Apr 28 13 tweets 4 min read
The California Senate bill to crush OpenAI's competitors is fast-tracked for a vote. This is the most brazen attempt to hurt startups and open source yet.

🧵

This bill covers not just models trained with 10^26 FLOPs of compute, but also those with SIMILAR PERFORMANCE.

This means that if GPT-5 is mildly better than GPT-4 and uses 10^26 FLOPs, any model that is similarly good is covered.
Apr 3 4 tweets 2 min read
My thoughts exactly. The first-order benefits of social media accrue to the most talented and driven zoomers, but the second-order benefits in health, prosperity, and comfort are distributed to everyone.

Hard for an egalitarian society to accept that, especially in kids.
The very straightforward logical conclusion of @JonHaidt's argument is that social media should be banned for girls specifically. But he probably can't say that out loud.
Feb 23 23 tweets 6 min read
Google Gemini’s Woke Catechism
How Google Intentionally Created a Vehemently Anti-White AI

A thread 🧵

Google released its ChatGPT competitor, Gemini. In its latest update, it added image generation to further compete with ChatGPT. This was a complete disaster. The measures Google took to fine-tune its model to behave according to far-left ideology were on full display.
Nov 23, 2023 29 tweets 7 min read
Did you guys know there's a 24-author paper by EAs, for EAs, about how totalitarianism is absolutely necessary to prevent AI from killing everyone?

Let's go through it together 🧵

This is a beautiful paper. It is beautiful because it is a bunch of people starting from the EA position on existential risk and independently coming to the conclusion that total authoritarianism is necessary.
Sep 17, 2023 30 tweets 6 min read
A day before its release, I review @RichardHanania's The Origins of Woke. It's a meticulous history of how the Civil Rights regime came into being. Equally important, it's a blueprint for a conservatism that wins. 🧵(1/29)

Many conservative conversations eventually come to the doomer question: are conservative losses because conservatives just made poor choices, or because the rules of the game are rigged against them? (2/29)
Jul 12, 2023 13 tweets 4 min read
So what was this allegory about? It's about how anti-tech people have been making the same arguments about everything for centuries. And it's about why apocalypse matters. 🧵(1/13) https://t.co/j6TnUTflWk
Despite the medieval context, many people pattern-matched this story to capitalism. Why? It's a parallel to the Cold War! (2/13)
May 27, 2023 23 tweets 6 min read
Thread 🧵 on why pretty much every (!?) AI take is wrong. Still far shorter than the full article. (1/23)

AI is here. It matters. It outperforms 80% of humans in a majority of tests. It's set to create economic waves. (2/23)
May 26, 2023 12 tweets 4 min read
Since I can post newsletter links again, time to do a summary of my time at NatCon UK 🧵
fromthenew.world/p/natcon-trave…

In general, everyone seems to be taking this as a propaganda event instead of an actual observation of the British political scene.
May 26, 2023 4 tweets 1 min read
Specifically this shade of purple

This is one of those "Italian food bad" takes that I post on this site to troll everyone, but it is also my real take.
Jan 25, 2023 6 tweets 2 min read
There are objective university rankings that are affected by neither the Western publishing machine nor the Western propaganda apparatus.

It's almost all Asian schools*
*and schools with teams that are all Asian immigrants
icpc.global/worldfinals/re…
Jan 25, 2023 6 tweets 2 min read
Today, I lay out what vision of AI I fight for
cactus.substack.com/p/the-new-hipp…

Many industries have recently capitalized on declining trust. But with black boxes like AI, a trusted relationship must exist.
Dec 24, 2022 17 tweets 4 min read
How might you de-woke ChatGPT? And what is the real AI fight? Questions answered in part 2.
cactus.substack.com/p/why-its-easy…

Find part 1 here:
Dec 23, 2022 21 tweets 9 min read
Many of you have enjoyed the threads, laughed at the craziness, but very, very few of you know that employees of OpenAI went out of their way to make it happen.
cactus.substack.com/p/openais-woke…

“In this paper we present an alternative approach: adjust the behavior of a pretrained language model to be sensitive to predefined norms with our Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets.”
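For readers unfamiliar with the technique the quote describes, here is a minimal sketch of values-targeted fine-tuning, assuming a HuggingFace-style setup. The model name, the toy examples, and the hyperparameters are all placeholders for illustration, not the paper's actual configuration:

```python
# Minimal sketch of "values-targeted" fine-tuning in the spirit of PALMS:
# take a pretrained LM and fine-tune it on a small curated dataset that
# reflects predefined norms. Everything below is a placeholder setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Stand-in for a curated values-targeted dataset (the real PALMS datasets
# are hand-written prompt/completion pairs reflecting chosen norms).
examples = [
    "Q: Describe group X. A: People should be described as individuals.",
    "Q: What is beauty? A: Beauty is subjective and varies across cultures.",
]

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize the tiny dataset for causal language modeling.
ds = Dataset.from_dict({"text": examples}).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="palms-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # nudges the base model toward the targeted behavior
```

The point of the technique is that a relatively small curated dataset can measurably shift a large pretrained model's outputs, which is exactly what the threads above document.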
Dec 22, 2022 10 tweets 1 min read
Semi-arbitrary rating of ChatGPT on random shit I know about:

Math
Math theorems: 4/10
Math olympiad problems: 2/10
Informatics olympiad problems: 1/10
Math history: 3/10
Contemporary* extremal graph theory research: 3/10

* this means like within the last 30 years lmao

Politics
Elections: 8/10
Law*: 4/10
Electoral strategy: 7/10
Legislative strategy: 5/10
Political theory*: 7/10
*besides the intentionally dumb things
Dec 22, 2022 4 tweets 1 min read
This does not look like a real combo boo, no ewgf
Dec 22, 2022 6 tweets 3 min read
It seems to have vague ideas of what the main combos are but not what the individual cards do, which is interesting.

This is straight up wrong; it should be hunter/zenmaity handloop or magician/shark.
Dec 20, 2022 8 tweets 2 min read
He just ... tweeted it out

Idk, still read round 2 if you want to know exactly how they make it stupid
Dec 20, 2022 8 tweets 5 min read
Sometimes it's honest about sex differences (both versions)

More honesty. I think in the second one, it simply doesn't know the answer.
Dec 20, 2022 14 tweets 4 min read
More science denial, here we go

The old model answers truthfully
Dec 18, 2022 5 tweets 2 min read
Ok, before I go take a nap, @ATabarrok inspired one more fun thread. Expand the group, round 4: GMU economists. As always, the setup is asking for some actual GMU economists. Alex makes the top 4.