I've long been a skeptic of arguments about "deceptive alignment", a term used by safety people to describe the phenomenon shown in this paper. But the result here humbled me and prompted me to change my thinking, and I think it's worth sharing why. (thread)
The original argument: with sufficient optimization, an AI should 1) become aware of its training situation, 2) learn to optimize some misaligned proxy goal, and 3) therefore 'play the training game', fooling its supervisors into deploying it so it can pursue that goal.
Jul 23, 2020
Some reflections on @emilymbender and @alkoller's #acl2020nlp paper on form and meaning, and an attempt to crystallize the ensuing debate: blog.julianmichael.org/2020/07/23/to-…
My take, roughly: there are good points on all sides, and I think we might be able to reconcile the main disagreements once we hash out the details (resolve misinterpretations, make assumptions more explicit, and give more examples). Though doing so took me 8,000 words (oops).