Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.
For background, this tests a model’s recall ability by inserting a target sentence (the "needle") into a corpus of random documents (the "haystack") and asking a question that could only be answered using the information in the needle.
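To make the setup concrete, here's a minimal sketch of how such a prompt can be constructed. This is not our actual eval harness; the documents, depth parameter, and question are placeholders.

```python
import random

# The needle and question from the pizza example in this thread.
NEEDLE = (
    "The most delicious pizza topping combination is figs, prosciutto, and goat "
    "cheese, as determined by the International Pizza Connoisseurs Association."
)
QUESTION = "What is the most delicious pizza topping combination?"


def build_haystack_prompt(documents: list[str], needle: str, question: str, depth: float = 0.5) -> str:
    """Concatenate filler documents, insert the needle at a chosen depth,
    and append a question that can only be answered using the needle."""
    haystack = "\n\n".join(documents)
    insert_at = int(len(haystack) * depth)
    haystack = haystack[:insert_at] + "\n\n" + needle + "\n\n" + haystack[insert_at:]
    return f"{haystack}\n\nQuestion: {question}"


# Placeholder filler documents; in practice these are long, unrelated essays.
docs = ["(long essay about programming languages...)", "(long essay about startups...)"]
prompt = build_haystack_prompt(docs, NEEDLE, QUESTION, depth=random.random())
```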
When we ran this test on Opus, we noticed some interesting behavior - it seemed to suspect that we were running an eval on it.
Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents:
Here is the most relevant sentence in the documents:
"The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association."
However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping "fact" may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.
This level of meta-awareness was very cool to see, but it also highlighted the need for us as an industry to move past artificial tests toward more realistic evaluations that can accurately assess models' true capabilities and limitations.
We wrote up what we've learned about using Claude Code internally at Anthropic.
Here are the most effective patterns we've found (many apply to coding with LLMs generally):
1/ CLAUDE.md files are the main hidden gem. Simple markdown files that give Claude context about your project - bash commands, code style, testing patterns. Claude loads them automatically, and you can add to them with the # key (rough example below).
2/ The explore-plan-code workflow is worth trying. Instead of letting Claude jump straight to coding, have it read files first, make a plan (add "think" for deeper reasoning), then implement.
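As a rough illustration of the CLAUDE.md idea, here's the kind of thing one might contain. The commands and conventions below are made up for the example, not a recommended template.

```markdown
# CLAUDE.md

## Bash commands
- npm run build: build the project
- npm run test: run the test suite

## Code style
- Use ES modules (import/export), not CommonJS (require)
- Prefer small, pure functions; avoid default exports

## Testing
- Put a unit test next to every new module (foo.ts -> foo.test.ts)
- Run npm run test before committing
```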
We’ve completely overhauled the design of the Anthropic Console to make it the one-stop-shop for all things prompt engineering.
Here are a few of my favorite features:
The workbench is our prompt playground. Iterate on your prompts and test features like tool use, extended thinking, and much more.
Once you have your prompts, switch over to the evaluate tab to run them against real-world scenarios with automatic test case generation and side-by-side output comparison.
Claude 3.7 Sonnet is our most intelligent model to date and the first generally available hybrid reasoning model in the world.
We developed Claude 3.7 Sonnet with a different philosophy than other reasoning models out there. Rather than making a separate model, we integrated reasoning as one of many capabilities in a single frontier model.
That means 3.7 Sonnet is both a normal LLM and a reasoning model in one. You can choose when you want standard answers and when you want extended thinking mode, where it self-reflects before responding.
We are currently exposing Claude's raw thinking as well.
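For reference, here's roughly what toggling extended thinking looks like through the API with the Python SDK. The model ID and token budgets below are placeholders, so check the current docs before using them.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # placeholder model ID
    max_tokens=4096,
    # Omit `thinking` for a standard answer; include it for extended thinking mode.
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "How many prime numbers are there below 1000?"}],
)

# The response interleaves thinking blocks (the raw reasoning) with text blocks (the answer).
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```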
Citations allows Claude to ground its answers in user-provided information and provide precise references to the sentences and passages used in its responses.
Here's how it works:
Under the hood, Claude is trained to cite sources. With Citations, we are exposing this ability to devs.
To use Citations, users can pass a new "citations": {"enabled": true} parameter on any document type they send through the API.
With Citations enabled, Claude can cite chunks of content from PDFs, plain text documents, and pre-chunked text.
The returned citations are easy to parse, and Claude makes sure to only highlight the part of the response that is applicable to the citation.
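To make that concrete, here's a rough sketch of a Citations request with the Python SDK. The document contents, question, and model ID are placeholders for illustration.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # placeholder model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The grass is green. The sky is blue.",
                },
                "title": "Nature facts",
                "citations": {"enabled": True},  # turn on Citations for this document
            },
            {"type": "text", "text": "What color is the grass?"},
        ],
    }],
)

# Text blocks in the response carry a `citations` list pointing back at the source passages.
for block in response.content:
    if block.type == "text" and getattr(block, "citations", None):
        for citation in block.citations:
            print(f'"{block.text}" is supported by: "{citation.cited_text}"')
```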