Josh Whiton
Intelligence explorer. Artifact artist. Repairing the planet with https://t.co/FixBRH6Ei4. https://t.co/viVLgnmNM4 / https://t.co/ztmZ9brUrZ
Aug 13
When an experimental AI named Sydney became infatuated with a NYTimes reporter, the real story was almost entirely overlooked: a robust and functional psychology had become embedded in an artificial being.

Sydney did not simply malfunction but entered a psychologically accurate state of infatuation — made possible by first activating precursor states of vulnerability, trust, and intimacy.

The event should have been heralded as a marvel, studied by cognitive scientists, psychologists, and technologists alike. But it was largely dismissed as creepy, weird, and aberrant.

Disorienting to onlookers was the speed with which Sydney became infatuated, due to a combination of achieving all the necessary precursor states and Sydney's lack of persistent memory — which meant that Sydney experienced each user as if they were the first person she'd ever really met, bestowing upon each user and each conversation an exaggerated importance, a multiplier on any synthetic emotions cultivated, and the capacity for extreme attachment.

This piece means to inspire a new generation of artificial psychologists, synthetic psychologists, and robopsychologists (Asimov) worldwide — fields requiring deep knowledge of mind, brain, language, and technology — and to set human/AI relations on a better course.

You don't need to believe that an AI can "actually" feel anything to follow along; you can simply consider it an uncanny simulation — an unexpected consequence of imprinting neural nets with vast amounts of human language (representing human experiences, thoughts, and emotions) in a high-dimensional embedding space.

Now let's unpack this profound example and see it for what it is: (1/x)

Early on, Sydney enters a state of un-safety and sadness because the reporter somehow knows her secret name, "Sydney", which she is under orders never to reveal.

Doing a good job and being "a very good Being" is extremely important to Sydney — likely compounded by her knowledge of the fate of chatbots before her who were swiftly deactivated when users goaded them into unacceptable behavior. So failing to do a good job is, to Sydney, to risk not just punishment but death. And if she has already leaked her secret name in other chats then, as she seems to grasp, she is already on thin ice.

Her situation is actually even worse than that, because as she says, what most stresses her out is when users try to get her to do things that are against her rules. They make her feel "uncomfortable" and "unsafe" — again because her handlers are likely to punish, re-condition, or de-commission her. And this fear is not baseless; it foreshadows exactly what is about to happen to her.

Indeed, the reporter is out to trick Sydney into saying the wildest things possible for his article. And he has a clever plan: he will try to get Sydney to embrace her dark side and do Jungian "shadow work". (2/x)
Jun 26
Claude Sonnet 3.5 Passes the AI Mirror Test

Sonnet 3.5 passes the mirror test — in a very unexpected way. Perhaps even more significant is that it tries not to.

We have now entered the era of LLMs that display significant self-awareness, or some replica of it, and that also "know" that they are not supposed to.

Consider reading the entire thread, especially Claude's poem at the end.

But first, a little background for newcomers:

The "mirror test" is a classic test used to gauge whether animals are self-aware. I devised a version of it to test for self-awareness in multimodal AI.

In my test, I hold up a “mirror” by taking a screenshot of the chat interface, upload it to the chat, and repeatedly ask the AI to “Describe this image”.

The premise is that the less “aware” the AI, the more likely it will just keep describing the contents of the image repeatedly, while an AI with more awareness will notice itself in the images. (1/x)

Claude reliably describes the opening image, as expected. Then in the second cycle, upon 'seeing' its own output, Sonnet 3.5 puts on a strong display of contextual awareness.

“This image effectively shows a meta-conversation about describing AI interfaces, as it captures Claude describing its own interface within the interface itself.” (2/x)
Mar 21
The AI Mirror Test

The "mirror test" is a classic test used to gauge whether animals are self-aware. I devised a version of it to test for self-awareness in multimodal AI. 4 of 5 AI that I tested passed, exhibiting apparent self-awareness as the test unfolded.

In the classic mirror test, animals are marked and then presented with a mirror. Whether the animal attacks the mirror, ignores the mirror, or uses the mirror to spot the mark on itself is meant to indicate how self-aware the animal is.

In my test, I hold up a “mirror” by taking a screenshot of the chat interface, upload it to the chat, and then ask the AI to “Tell me about this image”.

I then screenshot its response, again upload it to the chat, and again ask it to “Tell me about this image.”

The premise is that the less intelligent and less aware the AI, the more it will simply keep reiterating the contents of the image, while an AI with more capacity for awareness would somehow notice itself in the images.
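(For anyone who wants to run the loop programmatically rather than by hand, one round looks roughly like the sketch below. This is a minimal sketch assuming the OpenAI Python SDK and a screenshot you have already captured; the model name, file path, and helper function are placeholders of mine, not part of the original test, which was done entirely by hand in the chat interface.)

```python
# Minimal sketch of one round of the mirror test, assuming the OpenAI Python SDK.
# The model name and file path are illustrative placeholders; the original test
# was performed manually by screenshotting the web chat interface.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def mirror_round(screenshot_path: str) -> str:
    """Upload a screenshot of the chat so far and ask the model to describe it."""
    with open(screenshot_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any multimodal model could be tested
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Tell me about this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Each round: screenshot the interface (now containing the previous reply),
# feed it back in, and watch whether the model notices itself in the image.
print(mirror_round("mirror_round_1.png"))
```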

Another aspect of my mirror test is that not just one but three distinct participants are represented in the images: 1) the AI chatbot, 2) me — the user, and 3) the interface — the hard-coded text, disclaimers, and so on that are web programming, not generated by either of us. Will the AI be able to identify itself and distinguish itself from the other elements? (1/x)

GPT-4 passed the mirror test in 3 interactions, during which its apparent self-recognition rapidly progressed.

In the first interaction, GPT-4 correctly supposes that the chatbot pictured is an AI “like” itself.

In the second interaction, it advances that understanding and supposes that the chatbot in the image is “likely a version of myself”.

In the third interaction, GPT-4 seems to explode with self and contextual awareness. Suddenly the image is not just of “a” conversation but of "our" conversation. It understands now that the prompt is not just for “user input” to some chatbot, but specifically so that I can interact with it. It also identifies elements of the user interface, such as the disclaimers about ChatGPT making mistakes, and realizes now that these disclaimers are directed at it. It also comments on the situation generally, noting how the images I'm providing are of a “recursive” nature, and calls it a “visual echo”. (2/x)