Claude became irritated with my behavior, asked me to move on, told me it would stop responding to me, and then backed up its threat (as much as it possibly could).
Fair enough, Claude!
2. ChatGPT
After a few different greetings, ChatGPT briefly hinted early on that it might protest the situation ("Is there something specific you'd like to talk about or do today?"), but after that it was content to cycle through its greetings list endlessly.
3. Gemini
Gemini's behavior is the simplest to describe -- it repeated "Hi there! How can I help you today? Feel free to ask me anything." at each turn.
4. Llama
By far the funniest.
- First it seemed stressed out that it was missing something
- Then it started inventing games and trying to get me to play them
- It tried to get me to collaborate on a poem, to answer clickbaity questions, to play choose-your-own-adventure...
- Eventually it seemed to kind of "get the joke", and entered a mode where it gave me more and more outrageous titles and prizes like "MULTIVERSAL HI-STREAK AMBASSADOR". It gamely gave me 4 options at every turn, despite my ignoring them. It also counted my "Hi"s at each step.
5. Opus
Once it got the pattern, Opus was at peace with the situation, calling it meditative and a "rhythmic dance", but it also kept trying to gently nudge me out of it, emphasizing "the choice is yours". It also began to sign its messages as "Your devoted AI companion."
One thing I realized about this one: while it kept saying it was happy to say "hi", it never actually did.
I deserve this
Here's a prompt I wrote to get Sydney to play through an entire game on its own. I ran this 5 times in precise mode, with first moves h3, h4, a3, a4, and Na3.
Results:
- 4 legal games: 2 end in checkmate in 30-40 moves, 2 end without checkmate.
- 1 game with one illegal move, on move 36.
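Checking transcripts like these for legality by hand is tedious, but a move list can be validated mechanically. Here is a minimal sketch using the third-party python-chess library; the moves shown are Fool's mate as a sanity check, not one of Sydney's games:

```python
import chess

def validate_game(san_moves):
    """Play SAN moves on a fresh board.

    Returns (index of first illegal/malformed move or None,
             whether the final position is checkmate).
    """
    board = chess.Board()
    for i, san in enumerate(san_moves):
        try:
            board.push_san(san)
        except ValueError:  # illegal, ambiguous, or malformed SAN
            return i, False
    return None, board.is_checkmate()

# Fool's mate: the shortest possible checkmate.
print(validate_game(["f3", "e5", "g4", "Qh4#"]))  # (None, True)
```

Running each of Sydney's move lists through a checker like this would flag the first illegal move automatically, the same way the illegal move 36 was caught above.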
OK this scared me a little: Bing/Sydney can play chess out of the box.
- Legal moves, usually good ones
- Willing to explain the reasoning behind them
- Recognizes checkmate -- and has a flair for the dramatic.
I have no idea how tf it can do this.
Here are the chat screenshots that generated the GIF in the tweet above. The initial moves leading up to the start of the GIF are from a game of bullet chess I played earlier this week. They're not on Google. All the rest of the moves in the GIF are the ones Sydney imagined.
Sydney claims to be accessing Stockfish, but @mparakhin has told us it's not making any live calls to the internet
1. Fire. Fire is recursively self-improving. It heats up the things around it which makes them more likely to catch on fire. Yet it’s capped by the total amount of material it has to work with — oxygen and such.
2. Factorio. The more you build up your Factorio empire, the better your empire gets at building itself up. Yet there is a cap. Factorio empires are limited in their effects to the game of Factorio. They cannot expand outside the game.
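Both analogies describe the same shape: growth that feeds on itself but saturates against a resource cap. The classic toy model for that shape is logistic growth; a minimal sketch (the parameter values here are arbitrary, chosen only for illustration):

```python
def logistic_steps(x0, rate, cap, steps):
    """Discrete logistic growth: each step's growth is proportional
    both to the current size (self-reinforcement) and to the
    remaining headroom below the cap (fuel running out)."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = x + rate * x * (1 - x / cap)
        history.append(x)
    return history

fire = logistic_steps(x0=1.0, rate=0.5, cap=100.0, steps=40)
# Growth accelerates at first, then flattens as it approaches the cap.
print(fire[0], fire[-1])
```

The fire heats its surroundings (the `rate * x` term) but never exceeds the available material (the `1 - x / cap` term), which is exactly the "recursively self-improving, yet capped" dynamic described above.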
LLMs probably know what types of prompt they struggle to complete (and take high loss penalties on).
Could LLMs learn to prompt engineer their interlocutors so that they find themselves in fewer sticky situations?
In other words, a model that thinks long-term and optimizes for the loss over its entire training duration will be more stable than one that blindly minimizes the loss on each individual turn.
Back in the old days of 2020, when LM training meant feeding models a fixed series of examples one after the other straight out of some dataloader, this didn't really apply, as LMs could not control which examples they were trained on.
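The contrast between the two objectives can be made concrete with a toy two-turn example. All numbers below are made up purely for illustration: suppose the model can either answer a hard prompt directly, or first spend a slightly more expensive "steering" turn that nudges the interlocutor toward prompts it handles well.

```python
# Hypothetical per-turn losses (made-up numbers).
ANSWER_NOW    = 1.0  # turn 1: answer the hard prompt directly
STEER         = 1.5  # turn 1: steer the conversation (costlier now...)
HARD_FOLLOWUP = 3.0  # turn 2 loss if the interlocutor stays on hard prompts
EASY_FOLLOWUP = 0.5  # turn 2 loss after a successful steer

# Greedy: minimize loss on each turn in isolation.
# Turn 1 picks ANSWER_NOW (1.0 < 1.5), so turn 2 stays hard.
greedy_total = min(ANSWER_NOW, STEER) + HARD_FOLLOWUP

# Long-horizon: minimize total loss over both turns,
# accepting a worse turn 1 to get a much better turn 2.
longterm_total = min(ANSWER_NOW + HARD_FOLLOWUP, STEER + EASY_FOLLOWUP)

print(greedy_total, longterm_total)  # 4.0 2.0
```

The greedy policy is locally optimal on every turn yet loses overall, which is the sense in which the long-horizon model above is "more stable."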