Miles Brundage
Policy stuff at @openai. I mostly tweet about AI, animals, and sci-fi. Views my own.
May 31, 2023 4 tweets 2 min read
"They're just saying that their technology might kill everyone in order to fend off regulation" seems like a very implausible claim to me, and yet many believe it.

What am I missing? Are there precedents for that tactic? How would regulating *less* help with the killing thing?

I certainly can't rule out that there are some galaxy brain execs out there, but this seems pretty obviously not the reason why e.g. there was a statement spearheaded by...academics, and many people who have talked about this stuff long before they had products to regulate.
May 22, 2023 4 tweets 1 min read
The cluster of issues around:

- Use of AI in influence operations + scams
- Watermarking/provenance for AI outputs
- Online proof of personhood

is definitely on the harder end of the spectrum as far as AI policy issues go.

Lots of collective action problems + tradeoffs. It's also among the more underestimated issues: it's *starting* to become semi-mainstream, but the full severity + the linkages to other issues (how do we solve the other stuff if no one knows what's going on + democracy is breaking?) + the lack of silver bullets are not widely appreciated.
May 6, 2023 4 tweets 1 min read
A lot of people are wondering if the current moment of interest in AI regulation is just a passing thing or the new normal.

I think it's almost certainly the new normal. The reason it's happening is widespread AI adoption, and *that* is only going to massively increase. (unless, that is, there is significant regulation to prevent it, so... 🤷‍♂️ )
Dec 29, 2022 5 tweets 2 min read
Timely read in the age of #twitterdown en.wikipedia.org/wiki/I_Have_No…

Naturally #twitterdown is censored lol
Dec 26, 2022 7 tweets 2 min read
Some meta-level thoughts on the ChatGPT-induced alarm among teachers/professors about LM-enabled cheating (substantive thoughts to come next year):

- like many LM things, it is not coming out of nowhere. People warned about it (+ some started cheating) no later than Feb. 2019.
- similarly to other LM things, the severity of the issue (and the prospective benefits for education via legitimate uses of LMs) scales with model capabilities, so it was nascent before and will be more severe in the future. It also scales with wide access + usability of the interface.
Oct 30, 2022 17 tweets 3 min read
Like everyone else, it seems, I have hot takes on Elon/Twitter stuff. I will try to differentiate mine by making them falsifiable.

Let’s call three scenarios for how this plays out “Exit Left,” “Mission Accomplished,” and “Golden Age.” 🧵

Exit Left (~40% chance): Mass exodus after overly aggressive relaxing of rules in a more politically conservative direction + general anti-Elon animus on the left leads …
Jul 24, 2021 29 tweets 6 min read
I think some are reluctant to be impressed by AI progress partly because they associate that with views they don't like--e.g. that tech co's are great or tech ppl are brilliant.

But these are not necessarily related. It'd be better if views on AI were less correlated with other stuff. 🧵

(The premise is admittedly speculative--I am confident there's a correlation but less so re: causality. It's the best theory I have, but I'll be curious for reactions. There are of course other reasons to be unimpressed, such as things really being unimpressive, fear of seeming naive, etc.)
Jul 8, 2021 23 tweets 6 min read
Excited to finally share a paper on what a huge chunk of OpenAI has been working on lately: building a series of code generation models and assessing their capabilities and societal implications. 🧵

arxiv.org/abs/2107.03374

First, just want to emphasize how collaborative the effort has been. Assessing the societal implications of any technology—especially pre-widespread deployment—is inherently hard and requires drawing on lots of disciplinary perspectives. See the long author list + acknowledgements!
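The paper's headline capability metric is pass@k: sample n completions per problem, count the c that pass the unit tests, and estimate the probability that at least one of k samples would be correct. A sketch of the numerically stable unbiased estimator the paper describes (variable and function names here are mine):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem.

    n: total completions sampled
    c: completions that passed the unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a stable running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples, 37 correct: pass@1 reduces to c/n
print(pass_at_k(200, 37, 1))  # 0.185
```

The naive alternative (take k samples, check if any pass) is a biased estimator when you reuse the same n samples for several values of k, which is why the paper works with the combinatorial form instead.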
Mar 1, 2021 14 tweets 3 min read
What's going on here besides people optimizing for different things, or not bothering to do their homework?

One answer is that AI policy researchers are falling victim to Collingridge's dilemma (en.wikipedia.org/wiki/Collingri…). That is, by default, people miss the sweet spot between when AI outcomes are not yet foreseeable in detail and when a lot of the key decisions have already been made. That window is short (a small number of months/years); e.g., I think language models are in the sweet spot right now.
Mar 1, 2021 9 tweets 2 min read
There's been a ton of growth in publications on AI ethics/policy over the past few years. I think most observers would agree that only a small fraction of that output is "impactful," though. A thread of preliminary thoughts on what that means/why it's happening: 🧵

[A note on terminology: AI ethics/policy/safety/governance etc. can be distinguished in various ways. I use "AI policy" as a catch-all for all these here because I think my points are pretty general, but I'm interested in feedback on that among other points made here!]
Feb 28, 2021 6 tweets 2 min read
I mostly enjoyed "Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity" a while back and find myself thinking about it from time to time. The big idea is that humanity expanding into space isn't as obviously good as many think. amazon.com/Dark-Skies-Exp…

I'm generally pro space expansionism, and the book didn't change my mind a ton, but it was useful for reexamining my biases. The book is sort of an analytical companion to the show The Expanse, which also presents a less-than-utopian portrayal of space, with similar themes...
Nov 21, 2020 9 tweets 3 min read
Lady Gaga as diagrams about AI systems (thread).

AlphaGo.
Jun 22, 2019 5 tweets 2 min read
I've been thinking a bit about the growing practice of fine-tuning generic pretrained models: first in computer vision, now NLP (highly recommend @seb_ruder's great article on this ruder.io/nlp-imagenet/)...

Last time I mentioned this, people were skeptical that RL would be next. E.g. folks pointed out that RL is maybe the wrong level/domain of analysis, and maybe there is insufficient commonality across RL tasks people care about for this to make sense, etc...

But in any case, I suspect more will happen in this vein beyond vision and NLP.
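For concreteness, a minimal sketch of the vision version of this pattern in PyTorch (the backbone, task size, and hyperparameters are illustrative placeholders, not anything from the thread):

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical downstream task

# Start from a generically pretrained backbone (ImageNet weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features; only the new head gets trained.
for p in model.parameters():
    p.requires_grad = False

# Swap in a task-specific head (replacing the 1000-way ImageNet classifier).
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A dummy batch stands in for a real downstream dataset.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The NLP version (à la ULMFiT/BERT, as discussed in @seb_ruder's article) has the same shape: pretrained body, new task head, relatively cheap adaptation.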
Jul 5, 2018 18 tweets 5 min read
Reading the old AI literature is underrated. Will share some fun examples when I’m at my computer later...

One quick example that’s easy to do by phone - “Steps Toward Artificial Intelligence,” Minsky, 1960: courses.csail.mit.edu/6.803/pdf/step…

Remarkable how the taxonomy of research areas roughly resembles today's - learning, planning, etc. at a high level, and lower-level stuff like exploration/hierarchy, etc.