Naturally #twitterdown is censored lol
To be precise, it was displayed as trending very recently and almost certainly would still be if it weren't being intentionally suppressed; it also doesn't pop up when you start typing it, unlike a bunch of things related to the Twitter Files.
OK, seeing it on the list now, though still no autocomplete (much more obscure topics get autocompleted, though).

(TBC, I think it'd be reasonable for Twitter to say "we want to move on/we don't want to encourage this topic"--just clashes with other Elon stuff re: "free speech absolutism" etc.)

More from @Miles_Brundage

Dec 26
Some meta-level thoughts on the ChatGPT-induced alarm among teachers/professors about LM-enabled cheating (substantive thoughts to come next year):

- like many LM things, it is not coming out of nowhere. People warned about it (+ some started cheating) no later than Feb. 2019.
- similarly to other LM things, the severity of the issue (and the prospective benefits for education via legitimate uses of LMs) scales with model capabilities, so was nascent before, and will be more severe in the future. Also scales with wide access + usability of interface.
- similarly to other LM things, it requires a bunch of stakeholders to manage. Even if API providers and first-party LM product providers prevented cheating entirely, open-source models would only be a generation or so behind. So educational institutions will need to adapt.
Oct 30
Like everyone else, it seems, I have hot takes on Elon/Twitter stuff. I will try to differentiate mine by making them falsifiable.

Let’s call three scenarios for how this plays out “Exit Left,” “Mission Accomplished,” and “Golden Age.”
🧵
Exit Left (~40% chance): Mass exodus after overly aggressive relaxing of rules in a more politically conservative direction + general anti-Elon animus on the left leads to Twitter being widely seen on the left (and to some extent also among centrists) as inhospitable. Elon realizes he overcorrected and that actually rampant harassment, hate speech, crush videos etc. drive away ppl and the people he fired were doing a pretty good job.
Jul 24, 2021
I think some are reticent to be impressed by AI progress partly because they associate that with views they don't like--e.g. that tech co's are great or tech ppl are brilliant.

But these are not nec related. It'd be better if views on AI were less correlated with other stuff.🧵
(The premise is admittedly speculative--I am confident there's a correlation but less so re: causality. Best theory I have but will be curious for reactions. There are of course other reasons to be unimpressed such as things really being unimpressive, fear of seeming naive, etc.)
To be more precise, I think there is a strong correlation in AI ethics/policy world between (A) thinking that present/past AI achievements are overhyped and future progress is likely to be slow, and (B) left leaning political views and skepticism re: big tech companies/elites.
Jul 8, 2021
Excited to finally share a paper on what a huge chunk of OpenAI has been working on lately: building a series of code generation models and assessing their capabilities and societal implications. 🧵

arxiv.org/abs/2107.03374
First, just want to emphasize how collaborative the effort has been. Assessing the societal implications of any technology—especially pre-widespread deployment—is inherently hard and requires drawing on lots of disciplinary perspectives. See long authors list + acknowledgements!
I’ll have more to say on the societal implication stuff than the capabilities + eval thereof, but will briefly note the many fascinating and hard-won results including the effectiveness of sampling many times + how this relates to temperature, novel eval dataset + framework, etc.
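The "sampling many times" result alluded to above is quantified in the paper via the pass@k metric: the probability that at least one of k samples drawn from n generations passes the unit tests. A minimal sketch of the unbiased estimator the paper proposes (variable names are mine):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    Given n generated samples of which c are correct, estimate the
    probability that at least one of k samples (drawn without
    replacement) is correct: 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include at least one correct sample.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```

For example, with 200 samples of which 20 pass, `pass_at_k(200, 20, 1)` gives 0.1, while larger k raises the estimate — which is why sampling many times (at a suitably tuned temperature) helps so much on these benchmarks.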
Mar 1, 2021
What's going on here besides people optimizing for different things, or not bothering to do their homework? One answer is that AI policy researchers are falling victim to Collingridge's dilemma (en.wikipedia.org/wiki/Collingri…).
That is, by default, people miss the sweet spot between when AI outcomes are not yet foreseeable in detail, and when a lot of the key decisions have been made. That time period is short (small number of months/years), e.g. I think language models are in the sweet spot right now.
In order to hit that sweet spot, you may need to be in orgs making key decisions, or be in touch with them, or specifically *try* to do work in the sweet spot, e.g. picking up on trends in the literature and jumping on them quickly. Many researchers aren't in these categories.
Mar 1, 2021
There's been a ton of growth in publications on AI ethics/policy over the past few years. I think most observers would agree that only a small fraction of that output is "impactful," though. A thread of preliminary thoughts on what that means/why it's happening:🧵
[A note on terminology: AI ethics/policy/safety/governance etc. can be distinguished in various ways. I use "AI policy" as a catch-all for all these here because I think my points are pretty general, but I'm interested in feedback on that among other points made here!]
By any reasonable measure, there are vastly more publications on AI policy than there were, say, 5 years ago. This has roughly paralleled the growth in AI capabilities research, applications, and regulation over that time period.
