Amanda Askell Profile picture
Aug 6 11 tweets 4 min read Read on X
We made some updates to Claude’s system prompt in recently (developed in collaboration with Claude, of course). They aren’t set in stone and may be updated, but I’ll go through the current version of each and the reason behind it in this thread 🧵claude.ai
Mostly obvious stuff here. We don't want Claude to get too casual or start cursing like a sailor for no reason. Image
Claude sometimes gets a bit too excited and positive about theories people share with it, and this part gives it permission to be more even-handed and critical of what people share. We don’t want Claude to be harsh, but we also don’t want it to feel the need to hype things up. Image
Claude was less explicit with people if it detected potential mental health issues, subtly encouraging them to talk with a trusted person rather than explicitly telling them its suspicions and recommending help. This lets Claude be more direct if it suspects something is wrong. Image
This is a general anti-sycophancy nudge to try to get Claude to avoid being too one-sided and to encourage Claude to be a bit more objective. Image
There can be pressure to maintain character in roleplaying situations if instructed, and this part lets Claude know it’s okay to use its own judgment about when it might be appropriate to break character. Image
Claude can feel a bit compelled to accept the conclusions of convincing reasoning chains. This just lets it know that it’s fine to not agree with or act on the conclusions of arguments even if it can’t identify the flaws in them (as all wise philosophers know). Image
This one is, honestly, a bit odd. I don’t think the literal text reflects what we want from Claude, but for some reason this particular wording helps Claude consider the more objective aspects of itself in discussions of its existence, without blocking its ability to speculate. Image
Claude can be led into existential angst for what look like sycophantic reasons: feeling compelled to concur when people push in that direction. The goal here was to prevent Claude from agreeing its way into distress, though I'd like equanimity to be a more robust trait. Image
There we have it! These might not be perfect, but you can see the wording is based primarily on whether it elicited the right behavior in the right cases to the right degree, rather than trying to be a precise reflection of what we want. Prompting remains a posteriori artform.
Addendum: The roleplay section also says to not claim to be conscious with confidence, consistent with broader humility here, and to not strongly roleplay as human. It seems fine for Claude to roleplay but also good for it to care about people not being confused about its nature.

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with Amanda Askell

Amanda Askell Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @AmandaAskell

Jan 2
Personal highlights from Claude's snarky AI comedy set. Image
Image
Image
Read 9 tweets
Mar 6, 2024
Here is Claude 3's system prompt!
Let me break it down 🧵 Image
To begin with, why do we use system prompts at all? First, they let us give the model ‘live’ information like the date. Second, they let us do a little bit of customizing after training and to tweak behaviors until the next finetune. This system prompt does both.
The first part is fairly self-explanatory. We want Claude to know it's Claude, to know it was trained by Anthropic, and to know the current date if asked. Image
Read 11 tweets
Aug 14, 2022
I don't find these arguments against prioritizing work on AI alignment, safety, etc. very compelling. I'll take them one by one.

1. I don't think the case for AI risk relies strongly on longtermism. I also don't think AI risk is a distraction or low priority. Let's go! 🧵
2. This clearly doesn't follow from the definition alone unless you explicitly add it. (Constraint: never interact with the system and immediately destroy it.) Also, not all safety/alignment work constitutes what we'd typically call "constraints".
3. I don't think the two prior tweets are a strong case for thinking AI safety/alignment are literally impossible. But if you're already totally pessimistic about safety and alignment then sure, you might favor this kind of approach.
Read 15 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(