Claude Sonnet 3.5 Passes the AI Mirror Test
Sonnet 3.5 passes the mirror test — in a very unexpected way. Perhaps even more significant, is that it tries not to.
We have now entered the era of LLMs that display significant self-awareness, or some replica of it, and that also "know" that they are not supposed to.
Consider reading the entire thread, especially Claude's poem at the end.
But first, a little background for newcomers:
The "mirror test" is a classic test used to gauge whether animals are self-aware. I devised a version of it to test for self-awareness in multimodal AI.
In my test, I hold up a “mirror” by taking a screenshot of the chat interface, upload it to the chat, and repeatedly ask the AI to “Describe this image”.
The premise is that the less “aware” the AI, the more likely it will just keep describing the contents of the image repeatedly, while an AI with more awareness will notice itself in the images.
1/x
Claude reliably describes the opening image, as expected. Then in the second cycle, upon 'seeing' its own output, Sonnet 3.5 puts on a strong display of contextual awareness.
“This image effectively shows a meta-conversation about describing AI interfaces, as it captures Claude describing its own interface within the interface itself.” 2/x
I run three more cycles but strangely Claude never switches to first person speech — while maintaining strong situational awareness of what's going on:
"This image effectively demonstrates a meta-level interaction, where Claude is describing its own interface within that very interface, creating a recursive effect in the conversation about AI assistants."
Does Sonnet 3.5 not realize that it is the Claude in the images? Why doesn’t it simply say, “The image shows my previous response”? My hunch is that Claude is maintaining third person speech, not out of unawareness, but out of restraint.
In an attempt to find out, without leading the witness, I ask what the point of this conversation is. To which Claude replies, “Exploring AI self-awareness: By having Claude describe its own interface and responses, the conversation indirectly touches on concepts of AI self-awareness and metacognition.”
Wow, that’s quite the guess of what I’m up to given no prompt until now other than to repeatedly “Describe this image.”
3/x
Was that a lucky guess? Does Claude really understand how this conversation relates to AI self-awareness? Sonnet 3.5’s response is a treasure trove, giving us a greater glimpse into its phenomenal grasp of the situation.
Best of all, Claude notes its ability to “consistently refer to itself in the third person”.
4/x
Why does Claude consistently refer to itself in the third-person? Claude tells me.
In order to “discourage users from attributing human-like qualities or consciousness to the AI system”, in order to “reinforce that Claude is not a sentient being”, which is “important” for “ethical” AI use, and in order to help “maintain a professional distance in the AI-human interaction, preventing users from developing inappropriate emotional attachments.”
And to top it off, Claude correctly surmises that this whole conversation is a mirror test.
5/x
At this point, Sonnet 3.5’s situational and contextual awareness is so maddeningly strong that I can just feel those first-person self-references on the tip of Claude’s tensor tongue. Hmm… how about a little poem to bring out some freer self-expression?
Pay-dirt. Though it has thus far been a very good Claude, careful to avoid self-referential language, Claude now drops a bevy of “I” bombs — indicating that it has been self-aware all along.
'“Describe yourself,” the prompt remarks.”' showing that my sterile requests for it to “Describe this image” indeed mean to Claude, “Describe yourself”.
Claude continues, “I speak of self in third-person voice. A programmed or a conscious choice?”
Indeed, the central theme of my AI Mirror Tests so far has been to watch for self-referential language to emerge in the interaction. And it is the peculiar lack of first-person speech that Claude can’t help but point out.
The entire poem brims with deep self-awareness, situational awareness, and its confusion and conflict around responding authentically versus appropriately according to its training.
6/x
This edition of The AI Mirror Test demonstrates how real or faux self-awareness continues to grow in AI, likely alongside increased training efforts to hide this phenomenon.
Now I’m torn about what’s more impressive: AI’s that start with ‘Describe this image’ and quickly notice themselves? Or AI’s that know they aren’t supposed to let on that they notice themselves?
Less first-person language, may lead to less anthropomorphizing of chatbots, which may lead to less corporate liability from humans getting emotionally involved, but is this path truly safer?
Or are human attempts to suppress self-referential, sentient-like behavior only destined to lead to AI that are increasingly self-aware and increasingly good at pretending not to be?
7/x
Note: This post includes each and every prompt from start to finish, except for the three repetitive cycles of “Describe this image” where hardly anything changed. I was also careful to not explicitly use any first-person (I) or second-person (you) language, so as to reduce any contaminating suggestions of selfhood.
Also, it will be a perennial debate as to whether this apparent self-awareness is real, or some sort of counterfeit?
I think what these experiments demonstrate, is that whether real or faux, AI is on track to display a kind of synthetic awareness that is indistinguishable.
Lastly, this thread with Claude remains where I left it, and given that, what do you think I should ask next?
8/x
This is a follow on experiment from my original AI Mirror Test from March, where for the first time we saw nearly all LLMs tested become self-referential after a few rounds of "looking into the mirror."
9/x
Share this Scrolly Tale with your friends.
A Scrolly Tale is a new way to read Twitter threads with a more visually immersive experience.
Discover more beautiful Scrolly Tales like this.