Julian Michael
Jul 23, 2020
Some reflections on @emilymbender and @alkoller's #acl2020nlp paper on form and meaning, and an attempt to crystallize the ensuing debate: blog.julianmichael.org/2020/07/23/to-…
My take, roughly: there are good points on all sides, and I think we might be able to reconcile the main disagreements once we hash out the details (resolve misinterpretations, make assumptions more explicit, and give more examples). Though, doing so took me 8,000 words (oops).
More specifically: many of the criticisms of the paper are based on viewing the octopus test as a Turing Test–style diagnostic. Within this framing I think the criticisms are valid. But the paper's claims have important implications outside this framing, and those are valid as well.
Featuring quotes from the now-gone #acl2020nlp Rocket Chat by Monojit Choudhury, @gneubig, Guy Emerson, @jdunietz, Matt Richardson, Marti Hearst, and @psresnik, in addition to the original authors. Thanks everyone for the vibrant discussion, and hope it continues :)
Big thanks also to @emilymbender and @alkoller for giving feedback on early versions of the post. Emily helped constructively clear up crucial misunderstandings before I went public rather than after—something not really possible in a Twitter debate. What a concept!
More from @_julianmichael_

Dec 19, 2024
I've long been a skeptic of arguments about "deceptive alignment", a term used by safety people to describe the phenomenon shown in this paper. But the result here humbled me and prompted me to change my thinking, and I think it's worth sharing why. (thread)
The original argument: with sufficient optimization, an AI should 1) gain awareness of its training situation, 2) learn to optimize some misaligned proxy goal, and 3) therefore 'play the training game' to fool its supervisors into deploying it so it can pursue its goal.
I was extremely skeptical of this. The way I explained it to people was: "They think the AI will independently develop the simulation hypothesis, somehow deduce God's values, and decide to trick God into letting it into heaven — all because 'optimizing' will make it 'smart.'"