Today, I officially resigned from the OpenAI board.
Thank you to the many friends, colleagues, and supporters who have said publicly & privately that they know our decisions have always been driven by our commitment to OpenAI’s mission.
1/5
Much has been written about the last week or two; much more will surely be said. For now, the incoming board has announced it will supervise a full independent review to determine the best next steps.
2/5
To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.
3/5
When I joined OpenAI’s board in 2021, it was already clear to me and many around me that this was a special organization that would do big things. It has been an enormous honor to be part of the organization as the rest of the world has realized the same thing.
4/5
I have enormous respect for the OpenAI team, and wish them and the incoming board of Adam, Bret and Larry all the best. I’ll be continuing my work focused on AI policy, safety, and security, so I know our paths will cross many times in the coming years.
5/5
Thread of my quick reactions to the AI executive order:
(the full text doesn't appear to be out yet, so this is based on the factsheet for now - but I may add more thoughts later once the full thing is up) whitehouse.gov/briefing-room/…
1) Glad to see the very first point is exactly what we rec'd in a recent blog post: requiring companies to share info about so-called "frontier AI" systems. Curious how they're defining which models count.
2) NIST is tasked with developing "standards, tools, and tests" for safe/secure/trustworthy AI. This is great, I love it, and also, where will they get the money and the people?
(This would be a great place for a functioning Congress to jump in & back this up with $)
New piece from @jennywxiao, @jjding99 and me pushing back on the claim that we can't regulate AI because that would just let China pull ahead. This is not a good argument!
The caveat: In the piece we breeze past the question of whether regulating AI would actually slow down US innovation.
"We can't regulate because China" assumes that regs = slowdown, but this is far from certain! Smart regulation can be neutral or even positive for innovation.
But now onto our case:
1: First and most importantly—Chinese large language models (the type of AI we focus on) just aren't that competitive with the cutting edge of the field. Unless you go by parameter counts—which you shouldn't—it's hard to be impressed by Chinese releases.
The most important thing to know is that these regulations aren't a one-off.
China has a complex and ever-growing web of laws & regulations around AI/the internet/data governance, and these slot right into that bigger picture.
For instance...
Some folks have noted that these regs would require generative AI providers to submit a security assessment - that's not a new thing! It just says the 2018 rules for services with “public opinion properties” or “social mobilization capacity” apply.
If you spend much time on AI twitter, you might have seen this tentacle monster hanging around. But what is it, and what does it have to do with ChatGPT?
It's kind of a long story. But it's worth it! It even ends with cake 🍰
THREAD:
First, some basics of how language models like ChatGPT work:
Basically, the way you train a language model is by giving it insane quantities of text data and asking it over and over to predict what word[1] comes next after a given passage.
Eventually, it gets very good at this.
This training is a type of ✨unsupervised learning✨[2]
It's called that because the data (mountains of text scraped from the internet/books/etc) is just raw information—it hasn't been structured and labeled into nice input-output pairs (like, say, a database of images+labels).
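To make next-word prediction concrete, here's a toy sketch (my own illustration, not how real LLMs are built): instead of a neural network, it just counts which word follows which in raw, unlabeled text, then "predicts" the most frequent follower. The training signal is the same idea, though - learn from plain text, no hand-made labels required.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """'Train' on raw, unlabeled text by counting next-word frequencies.

    No input-output labels needed - the next word in the text IS the label,
    which is what makes this unsupervised (or 'self-supervised') learning.
    """
    words = text.split()
    follower_counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follower_counts[current][nxt] += 1
    return follower_counts

def predict_next(model, word):
    """Predict the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A (hypothetical) tiny corpus standing in for "mountains of internet text":
corpus = ("the cat sat on the mat . the cat sat by the door . "
          "the cat ate the fish .")
model = train_bigram(corpus)

print(predict_next(model, "cat"))  # "sat" follows "cat" most often -> sat
print(predict_next(model, "the"))  # "cat" follows "the" most often -> cat
```

Real language models replace the count table with a neural network trained by gradient descent, which lets them generalize to word sequences they've never seen - but "predict the next word, over and over" is still the core of the training loop.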
Thread of musings on something I noticed recently: conversations about how we might find meaning in a post-work world heavily feature music and art... but I can't remember sports being mentioned even once. How come, when they provide so much meaning/community/joy to so many people? 1/5
Obvious answer is obvious: sports aren't mentioned because these discussions are being had by Serious Intellectuals with Serious Intellectual tastes. It's a shame though - such a good way to channel our instincts for tribalism & physical competition, especially among young men. 2/5
In general I wish there were more exploration (e.g. in fiction) of rich, meaningful ways we might spend time, make meaning, build communities etc in the future. From the perspective of 100 years ago, our current team sports system could seem like an example of this... 3/5