Jeremy Howard
Jun 17, 2024 · 10 tweets
I've done a deep dive into SB 1047 over the last few weeks, and here's what you need to know:

*Nobody* should be supporting this bill in its current state. It will *not* actually cover the largest models, nor will it actually protect open source.

But it can be easily fixed!🧵
This is important, so don't just read this thread; read the 6,000+ word article I just published instead.

In the article I explain how AI *actually* works, and why these details totally break legislation like SB 1047. Policy makers *need* to know this:
answer.ai/posts/2024-06-…
SB 1047 does not cover "base models". But these are the models where >99% of compute is used. By not covering them, the bill will probably not cover any models at all.

(There are also dozens of trivial workarounds for anyone wanting to train uncovered models.)
If the "influence physical or virtual environments" constraint were removed, the effect would be to make development of open source AI models larger than the covered threshold impossible.

However, the stated aims of the bill are to ensure that open source developers *can* comply.
Thankfully, the issues in SB 1047 can all easily be fixed by legislating the deployment of “AI Systems” and not legislating the release of “AI Models”.
Regulating the deployment of services, instead of the release of models, would change nothing for big tech, since they rarely (if ever) release large models; they deploy services.

So the big tech companies would be just as covered as before, and open source would be protected.
If we can't fine-tune open sourced models, then we'll be stuck with whatever values and aims the model creators had. Chinese propaganda is a very real current example of this issue (and remember that the best current open source models are Chinese).
I don't propose that we exempt AI from regulation. However, we should be careful to regulate with an understanding of the delicate balance between control and centralization, vs transparency and access, as we've done with other technologies throughout history.
Instead of "p(doom)", let's consider "p(salvation)" too, and bring a new concept to the AI safety discussion:

“Human Existential Enhancement Factor” (HEEF): the degree to which AI enhances our ability to overcome existential threats and ensure our long-term well-being.
If you care about open source AI model development, then submit your views here, where they will be sent to the authors and appear on the public record:
calegislation.lc.ca.gov/Advocates/

More from @jeremyphoward

Nov 28, 2025
I didn't believe this was real, so I looked into it.

It is real. It's actually worse than it first looks. Definitely supports claims from @ziglang and @theo that GH Actions is a sad, neglected platform.

Read on for a little software archeology…🧵
`safe_sleep` was added in 2022.

It replaced usages of `sleep`. However, `sleep` is a POSIX-standard command, and GitHub Actions already assumes the existence of a great many commands, even non-POSIX ones, so a replacement script is an odd choice.
github.com/actions/runner…
It was implemented in a way that, as is obvious to nearly anyone at first glance, uses 100% CPU the whole time, and will run forever unless the task happens to check the time during the correct second.
github.com/actions/runner…
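The flaw described above can be sketched like this (a hypothetical Python reconstruction of the described behavior; the actual `safe_sleep` is a shell script, and these names are mine):

```python
import time

def flawed_sleep(seconds):
    # Hypothetical sketch of the busy-wait pattern described above,
    # not the actual safe_sleep source.
    end = int(time.time()) + seconds
    # No blocking call anywhere: this tight loop pins a CPU core at 100%.
    # And it tests equality rather than >=, so if the process is
    # descheduled past the target second without ever observing it,
    # the loop never exits.
    while int(time.time()) != end:
        pass

flawed_sleep(1)
print("slept")
```

A blocking `sleep(seconds)` call (or a `>=` comparison at minimum) avoids both problems.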
Oct 2, 2025
It's a strange time to be a programmer—easier than ever to get started, but easier to let AI steer you into frustration. We've got an antidote that we've been using ourselves with 1000 preview users for the last year: "solveit"

Now you can join us.🧵
answer.ai/posts/2025-10-…
Today we're launching a 5 week course, including access to the new solveit platform, starting Oct 20th. If you want to join us or learn more, go here:
solve.it.com
A year ago we ran a small trial titled "How To Solve It With Code". The response was so overwhelming that we closed signups after one day. We explored using our approach for very small iterations with constant feedback for web development, AI, business (with @ericries)…
Jul 17, 2025
For folks wondering what's happening here technically, an explainer:

When there's lots of training data with a particular style, using a similar style in your prompt will trigger the LLM to respond in that style. In this case, there's LOADS of fanfic:
scp-wiki.wikidot.com/scp-series🧵 x.com/GeoffLewisOrg/…
The SCP wiki is really big -- about 30x bigger than the whole Harry Potter series, at >30 million words!

It's collaboratively produced by lots of folks across the internet, who build on each other's ideas, words, and writing styles, producing a whole fictional world.
Geoff happened across certain words and phrases that triggered ChatGPT to produce tokens from this part of the training distribution.

And the tokens it produced triggered Geoff in turn. That's not a coincidence, the collaboratively-produced fanfic is meant to be compelling!
May 24, 2025
Lotta people in the comments claiming that this actually makes perfect sense if you know the original (道德經 / 道德经 / Dào Dé Jīng).

These people are wrong.

If you *actually* know the original, you'll see how bad this is.🧵
Here is the full original: daodejing.org.

I'm not sure there are any super great translations, but here's an English version that's perhaps good enough: with.org/tao_te_ching_e…
Here's the Chinese of the verse that the quoted bit is based on:
"天下皆知美之为美,斯恶已;皆知善之为善,斯不善已。故有无相生"
Mar 29, 2025
I'm glad @levelsio checked this, but sad our contrib has been erased by later big tech co's. Alec Radford said ULMFiT inspired GPT. ULMFiT's first demo predated BERT.

Today's 3-stage LLM approach of general corpus pretraining and 2 stages of fine-tuning was pioneered by ULMFiT.
There have been many other important contributions, including attention (Bahdanau et al), transformers, RLHF, etc.

But before all this, basically everyone in NLP assumed that each new domain needed a new model. ULMFiT showed that a large pretrained model was actually the key.
I got push-back from pretty much everyone about this. My claim that fine-tuning that model was the critical step to achieving success in NLP was not something people were ready to hear at that time.

I gave many talks trying to convince academics to pursue this direction.
Mar 18, 2025
Announcing fasttransform: a Python lib that makes data transformations reversible/extensible. No more writing inverse functions to see what your model sees. Debug pipelines by actually looking at your data.

Built on multi-dispatch. Work w/ @R_Dimm
fast.ai/posts/2025-02-…
We took the `Transform` class out of fastcore, replaced the custom type dispatch system with @ikwess's plum-dispatch, mixed it all together, and voila: fasttransform! :D

To learn about fasttransform, check out our detailed blog post.
fast.ai/posts/2025-02-…
"Manual inspection of data has probably the highest value-to-prestige ratio of any activity in machine learning." --@gdb

Yet we often skip it because it's painful. How do you inspect what your model sees after normalization, resizing & other transforms?
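The reversible-transform idea can be sketched with a hand-rolled class (illustrative only; the class and method structure here are my own, not the fasttransform API):

```python
class Normalize:
    """Toy reversible transform: a paired forward/inverse, so a
    pipeline can show both the model's view of the data and the
    human-readable original, without hand-writing inverse code
    at every debugging session."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def encodes(self, x):
        # Forward direction: what the model sees after normalization.
        return (x - self.mean) / self.std

    def decodes(self, x):
        # Inverse direction: undo the transform for inspection.
        return x * self.std + self.mean

t = Normalize(mean=5.0, std=2.0)
y = t.encodes(9.0)
print(y)             # → 2.0
print(t.decodes(y))  # → 9.0
```

Because every transform carries its own inverse, a pipeline can replay `decodes` through all its steps to show you the data at any stage.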