I've done a deep dive into SB 1047 over the last few weeks, and here's what you need to know:
*Nobody* should be supporting this bill in its current state. It will *not* actually cover the largest models, nor will it actually protect open source.
But it can be easily fixed!🧵
This is important, so don't just read this thread; read the 6000+ word article I just published instead.
In the article I explain how AI *actually* works, and why these details totally break legislation like SB 1047. Policy makers *need* to know this:
answer.ai/posts/2024-06-…
SB 1047 does not cover "base models". But these are the models where >99% of compute is used. By not covering base models, the bill will likely not cover any models at all.
(There are also dozens of trivial workarounds for anyone wanting to train uncovered models.)
If the "influence physical or virtual environments" constraint were removed, the effect would be to make development of open source AI models above the covered threshold impossible.
However, the stated aims of the bill are to ensure open source developers *can* comply.
Thankfully, the issues in SB 1047 can all easily be fixed by legislating the deployment of “AI Systems” rather than the release of “AI Models”.
Regulating the deployment of services, instead of the release of models, would not reduce the bill's coverage of big tech at all, since they rarely (if ever) release their large models.
So the big tech companies would be just as covered as before, and open source would be protected.
If we can't fine-tune open source models, then we'll be stuck with whatever values and aims the model creators had. Chinese propaganda is a very real current example of this issue (and remember that the best current open source models are Chinese).
I don't propose that we exempt AI from regulation. However, we should be careful to regulate with an understanding of the delicate balance between control and centralization, vs transparency and access, as we've done with other technologies throughout history.
Instead of "p(doom)", let's consider "p(salvation)" too, and bring a new concept to the AI safety discussion:
“Human Existential Enhancement Factor” (HEEF): the degree to which AI enhances our ability to overcome existential threats and ensure our long-term well-being.
If you care about open source AI model development, then submit your views here, where they will be sent to the authors and appear on the public record:
calegislation.lc.ca.gov/Advocates/