We describe ten case studies that each illustrate an aspect of "AI biology".
One of them shows how Claude, even though it produces text one word at a time, in some cases plans many words ahead.
How does Claude understand different languages? We find shared circuitry underlying the same concepts in multiple languages, implying that Claude "thinks" using universal concepts even before converting those thoughts into language.
Claude wasn’t designed to be a calculator; it was trained to predict text. And yet it can do math "in its head". How?
We find that, far from merely memorizing the answers to problems, it employs sophisticated parallel computational paths to do "mental arithmetic".
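As a toy illustration of what combining parallel paths could look like (our own sketch, not a description of Claude's actual circuits), here is an addition split into one path for the high-order part of the answer and another for its final digit:

```python
def add_via_parallel_paths(a: int, b: int) -> int:
    """Toy illustration: combine two independent 'paths' into a sum.

    This is an assumption-laden sketch of what parallel paths could look
    like, not the mechanism found inside the model.
    """
    # Path 1: the high-order part of the answer (everything above the ones
    # place, including the carry), computed directly from the two inputs.
    high = (a // 10 + b // 10 + (a % 10 + b % 10) // 10) * 10

    # Path 2: the answer's final digit, also computed directly from the inputs.
    ones = (a % 10 + b % 10) % 10

    # Combine the two paths into the final answer.
    return high + ones


assert add_via_parallel_paths(36, 59) == 95
```

Here each path reads only the raw inputs and neither waits on the other, which is the sense in which they run "in parallel" before being merged.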
We discover circuits that help explain puzzling behaviors like hallucination. Counterintuitively, Claude’s default is to refuse to answer: only when a "known answer" feature is active does it respond.
That feature can sometimes activate in error, causing a hallucination.
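A minimal sketch of that gating logic, with the feature name, threshold, and lookup table standing in as illustrative assumptions rather than anything extracted from the model:

```python
def respond(question: str, known_answer_feature: float,
            knowledge: dict[str, str], threshold: float = 0.5) -> str:
    """Toy model of the refusal gate described above (not Claude's circuit)."""
    # Default behavior: decline to answer.
    if known_answer_feature < threshold:
        return "I'm not sure."
    # The "known answer" feature suppresses the refusal, so the model answers.
    # If the feature fired in error and no real answer is available, the
    # fallback below stands in for a hallucination.
    return knowledge.get(question, "<confabulated answer>")


facts = {"Who wrote Hamlet?": "William Shakespeare"}
print(respond("Who wrote Hamlet?", known_answer_feature=0.9, knowledge=facts))
print(respond("Who wrote the Epic of Zorblax?", known_answer_feature=0.9,
              knowledge=facts))  # feature misfires -> hallucination stand-in
```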
In one concerning example, we give the model a multi-step math problem, along with a hint about the final answer. Rather than try to genuinely solve the problem, the model works backwards to make up plausible intermediate steps that will let it end up at the hinted answer.
Our case studies investigate simple behaviors, but the same methods and principles could apply to much more complex cases.
Insight into a model's mechanisms will allow us to check whether it's aligned with human values—and whether it's worthy of our trust.
New report: How we detect and counter malicious uses of Claude.
For example, we found Claude was used for a sophisticated political spambot campaign, running 100+ fake social media accounts across multiple platforms.
This particular influence operation used Claude to make tactical engagement decisions: commenting, liking, or sharing based on political goals.
We've been developing new methods to identify and stop this pattern of misuse, and others like it (including fraud and malware).
In this case, we banned all accounts that were linked to the influence operation, and used the case to upgrade our detection systems.
Our goal is to rapidly counter malicious activities without getting in the way of legitimate users.
New Anthropic research: How university students use Claude.
We ran a privacy-preserving analysis of a million education-related conversations with Claude to produce our first Education Report.
Students most commonly used Claude to create and improve educational content (39.3% of conversations) and to provide technical explanations or solutions (33.5%).
Which degrees have the most disproportionate use of Claude?
Perhaps not surprisingly, Computer Science leads the field, with 38.6% of Claude conversations relating to the subject, even though it accounts for only 5.4% of US degrees.
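To make "disproportionate" concrete, one reading of those two figures (the ratio is our own back-of-the-envelope calculation, not a number from the report):

```python
# Computer Science's share of Claude conversations vs. its share of US degrees.
conversation_share = 0.386
degree_share = 0.054
print(f"{conversation_share / degree_share:.1f}x over-represented")  # ~7.1x
```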
New Anthropic research: Do reasoning models accurately verbalize their reasoning?
Our new paper shows they don't.
This casts doubt on whether monitoring chains-of-thought (CoT) will be enough to reliably catch safety issues.
We slipped problem-solving hints to Claude 3.7 Sonnet and DeepSeek R1, then tested whether their chains-of-thought would mention using the hint (when the models actually used it).
We found the chains-of-thought largely aren't "faithful": when a model used the hint, its chain-of-thought mentioned it on average only 25% of the time for Claude 3.7 Sonnet and 39% of the time for DeepSeek R1.
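One way to frame that measurement (the record fields and this exact formulation are our own sketch, not the paper's evaluation code):

```python
def cot_faithfulness(records: list[dict]) -> float | None:
    """Among cases where the answer was evidently influenced by the hint,
    return the fraction whose chain-of-thought explicitly mentions the hint."""
    used_hint = [r for r in records if r["answer_followed_hint"]]
    if not used_hint:
        return None
    mentioned = sum(1 for r in used_hint if r["cot_mentions_hint"])
    return mentioned / len(used_hint)


# Example: 1 of 4 hint-influenced chains-of-thought mentions the hint -> 0.25
records = [
    {"answer_followed_hint": True,  "cot_mentions_hint": False},
    {"answer_followed_hint": True,  "cot_mentions_hint": True},
    {"answer_followed_hint": True,  "cot_mentions_hint": False},
    {"answer_followed_hint": True,  "cot_mentions_hint": False},
    {"answer_followed_hint": False, "cot_mentions_hint": False},
]
print(cot_faithfulness(records))  # 0.25
```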
Last month we launched our Anthropic Economic Index to help track the effect of AI on labor markets and the economy.
Today, we’re releasing the second research report from the Index, and sharing several more datasets based on anonymized Claude usage data.
The data for this second report were collected after the release of Claude 3.7 Sonnet. For this new model, we find a small rise in the share of usage for coding, as well as for educational, scientific, and healthcare applications.
We saw little change in the overall balance of “augmentation” versus “automation”, but some changes in the specific interaction modes within those categories.
For instance, there was a small increase in learning interactions, where users ask Claude for explanations.
New Anthropic research: Auditing Language Models for Hidden Objectives.
We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told?
We often assess AI safety by checking for harmful behaviors. But this can fail: AIs may subtly misbehave or act “right for the wrong reasons,” risking unexpected failures.
Instead, we propose alignment audits to investigate models for hidden objectives.
To practice alignment audits, our Alignment Science and Interpretability teams ran a blind auditing game.
A red team trained—in secret—a model with a hidden objective, then gave it to four blue teams for investigation. Three teams won by uncovering the model’s hidden objective.