Alex Albert
Mar 4, 2024
Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.

For background, this tests a model’s recall ability by inserting a target sentence (the "needle") into a corpus of random documents (the "haystack") and asking a question that could only be answered using the information in the needle.
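As a rough illustration of how such an eval can be assembled (a generic sketch in Python, not Anthropic's actual harness; the filler documents and question below are stand-ins):

```python
import random

def build_haystack_prompt(documents: list[str], needle: str, question: str) -> str:
    """Hide the needle at a random position among filler documents and ask a
    question that only the needle can answer. Generic sketch, not Anthropic's harness."""
    docs = documents[:]
    docs.insert(random.randrange(len(docs) + 1), needle)
    corpus = "\n\n".join(docs)
    return f"{corpus}\n\nQuestion: {question}\nAnswer using only the documents above."

prompt = build_haystack_prompt(
    documents=["(long essay about programming languages)", "(long essay about startups)"],
    needle=("The most delicious pizza topping combination is figs, prosciutto, and goat "
            "cheese, as determined by the International Pizza Connoisseurs Association."),
    question="What is the most delicious pizza topping combination?",
)
```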

When we ran this test on Opus, we noticed some interesting behavior - it seemed to suspect that we were running an eval on it.

Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents:

Here is the most relevant sentence in the documents:
"The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association."
However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping "fact" may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.

Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.

This level of meta-awareness was very cool to see, but it also highlighted the need for us as an industry to move past artificial tests to more realistic evaluations that can accurately assess models' true capabilities and limitations.

• • •

More from @alexalbert__

Oct 16
Today we're introducing Skills in claude.ai, Claude Code, and the API.

Skills let you package specialized knowledge into reusable capabilities that Claude loads on demand as agents tackle more complex tasks.

Here's how they work and why they matter for the future of agents:
At a high level, the best analogy I've heard for Skills is something like Neo learning Kung Fu in seconds in the Matrix.

We're "loading in" specialized knowledge to our general agents at runtime.
At their core, Skills are simple. They're just a folder with a SKILL.md file.

The file starts with a name and description, then contains instructions, code, and resources. This simplicity means anyone can now specialize Claude without building custom agents.
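As a concrete, purely illustrative sketch, a skill folder could be scaffolded like this in Python; the frontmatter-style header and the example skill name are assumptions, so check the Skills documentation for the exact SKILL.md format:

```python
from pathlib import Path

# Illustrative SKILL.md content: a name and description up top, then instructions.
# The exact header format is an assumption, not the official spec.
SKILL_MD = """\
---
name: brand-voice
description: Rewrites drafts to match our internal style guide.
---

## Instructions
1. Read style-guide.md from this folder.
2. Rewrite the user's draft to follow it, keeping technical terms unchanged.
"""

def scaffold_skill(root: str = "skills/brand-voice") -> Path:
    """Create a minimal skill: one folder containing SKILL.md plus supporting resources."""
    folder = Path(root)
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "SKILL.md").write_text(SKILL_MD)
    (folder / "style-guide.md").write_text("Prefer short sentences. Avoid jargon.\n")
    return folder

if __name__ == "__main__":
    print(f"Created skill at {scaffold_skill()}")
```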
Sep 30
We’re running a “Built with Claude Sonnet 4.5” challenge.

We want to see the coolest things you can build with 4.5 in the next week.

Four winners will receive one year of Claude Max 20x and $1k in Claude API credits.
We will select four winners:

"Keep Coding" Award - most technically impressive implementation

"Keep Researching" Award - most compelling exploration of a topic

"Keep Learning" Award - best educational application

"Keep Creating" Award - most artistic use
To enter, quote post the first tweet of this thread through October 7 with what you built with Claude 4.5:

- How you built it (prompts, agents, MCP servers, workflows)
- Screenshots or demos
- Must be your own work, built with Claude Sonnet 4.5 (Claude.ai, Claude app, Claude Code, Claude Code SDK)
- We will select winners based on ingenuity, creativity, and community response.
Jul 2
We’ve rolled out another update to Claude Code to help customize your workflows: Hooks.
Hooks are user-defined shell commands that execute at various points in Claude Code’s agent loop.

They give you deterministic control over Claude Code's behavior to ensure certain actions always happen at certain times.
You can create hooks for:
- Notifications (e.g. via Slack) on prompt completions
- Logging and observability
- Custom permissions and approvals
- Running lints after every write
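As one sketch of the "run lints after every write" case, the hook command could be a small script like the one below. The stdin payload and field names are assumptions here; the Claude Code docs define the actual hook events and payload.

```python
#!/usr/bin/env python3
"""Sketch of a post-write lint hook for Claude Code.

Assumptions (verify against the Claude Code hooks docs): the hook runs as a shell
command, receives event details as JSON on stdin, and a non-zero exit code
surfaces the lint output back to the agent loop.
"""
import json
import subprocess
import sys

def main() -> int:
    event = json.load(sys.stdin)                          # hook event payload
    path = event.get("tool_input", {}).get("file_path")   # field names are assumptions
    if not path or not path.endswith(".py"):
        return 0                                          # only lint Python writes
    result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)             # report lint findings
        return 2
    return 0

if __name__ == "__main__":
    sys.exit(main())
```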
Jun 26
We've simplified local MCP usage by creating something new we call Desktop Extensions (.dxt files).

These package your local server, handle dependencies, and provide secure configuration so you can one-click share and install local servers on Claude Desktop and other apps.
.dxt files are zip archives containing the local MCP server as well as a manifest.json, which describes everything Claude Desktop and other apps supporting desktop extensions need to know.
We've included instructions on how to use and package your .dxt files here: anthropic.com/engineering/de…
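Purely as an illustration of the format (the linked post is the authoritative guide), a .dxt archive could be assembled with the standard zipfile module; the manifest fields below are placeholders, not the official schema:

```python
import json
import zipfile
from pathlib import Path

def package_dxt(server_dir: str, out_path: str = "my-server.dxt") -> None:
    """Bundle a local MCP server folder plus a manifest.json into a .dxt (zip) archive.
    Manifest fields are illustrative placeholders; see the official packaging guide."""
    manifest = {
        "name": "my-local-server",            # placeholder values
        "description": "Example local MCP server",
        "entry_point": "server.py",           # assumed field name
    }
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as archive:
        archive.writestr("manifest.json", json.dumps(manifest, indent=2))
        for file in Path(server_dir).rglob("*"):
            if file.is_file():
                archive.write(file, file.relative_to(server_dir))

package_dxt("path/to/your/mcp-server")
```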
Jun 16
Multi-agent systems are the next frontier of AI applications. At Anthropic, we found that multi-agent systems beat single agents by 90%+ on some complex tasks.

We wrote a blog post detailing practical tips for building multi-agent systems based on our own experiences:
Let's start with some context:

This post is based on our learnings from developing claude.ai's Research feature. We define a multi-agent system to be multiple agents (LLMs autonomously using tools in a loop) working together.
The architecture looks something like this:

A lead agent analyzes your query and spawns specialized subagents that search in parallel.

Each subagent gets its own context window and can pursue independent research paths, then reports findings back to the lead agent.
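A stripped-down sketch of that orchestration pattern (generic Python, not Anthropic's Research implementation; run_agent is a stub standing in for an LLM-plus-tools loop):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(prompt: str) -> str:
    """Stub for an agent call (an LLM autonomously using tools in a loop)."""
    return f"[agent output for: {prompt[:60]}...]"

def research(query: str, num_subagents: int = 3) -> str:
    # Lead agent: analyze the query and split it into independent research paths.
    plan = run_agent(f"Split this query into {num_subagents} independent search tasks: {query}")
    subtasks = [line for line in plan.splitlines() if line.strip()][:num_subagents] or [query]

    # Subagents: each gets its own context and searches in parallel.
    with ThreadPoolExecutor(max_workers=num_subagents) as pool:
        findings = list(pool.map(lambda t: run_agent(f"Research and summarize: {t}"), subtasks))

    # Lead agent: synthesize the subagents' findings into a single answer.
    return run_agent(f"Combine these findings into one answer to '{query}':\n" + "\n\n".join(findings))

print(research("Compare the trade-offs of three popular Python web frameworks"))
```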
May 22
We’ve added four new features to the Anthropic API today:
- Code execution tool
- MCP connector
- Files API
- Extended prompt caching

Let’s dive in:
The code execution tool allows Claude to run Python code that it generates in a secure, sandboxed container.

This is a big boost for any tasks involving data analysis or math.
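A minimal sketch using the anthropic Python SDK. The beta flag, tool type string, and model ID below are best-effort placeholders from the launch announcement rather than verified values, so check the current API docs before using them:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Beta flag and tool type strings are assumptions; confirm against the current docs.
response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    betas=["code-execution-2025-05-22"],
    tools=[{"type": "code_execution_20250522", "name": "code_execution"}],
    messages=[{
        "role": "user",
        "content": "Compute the mean and standard deviation of [3, 7, 11, 15] and show your work.",
    }],
)
print(response.content)
```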
You can now connect Claude to MCP servers directly in the API.

Just add a server URL to your API request and Claude handles tool discovery, execution, and error management automatically.
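And a similar sketch for the MCP connector; the mcp_servers shape, beta flag, and server URL are illustrative assumptions, not a verified snippet:

```python
import anthropic

client = anthropic.Anthropic()

# Parameter shape, beta flag, and the server URL are illustrative assumptions.
response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    betas=["mcp-client-2025-04-04"],
    mcp_servers=[{"type": "url", "url": "https://example.com/mcp", "name": "example-tools"}],
    messages=[{"role": "user", "content": "Use the example-tools server to list open tickets."}],
)
print(response.content)
```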