Introducing an upgraded Claude 3.5 Sonnet, and a new model, Claude 3.5 Haiku. We’re also introducing a new capability in beta: computer use.
Developers can now direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking, and typing text.
The new Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta.
While groundbreaking, computer use is still experimental—at times error-prone. We're releasing it early for feedback from developers.
We've built an API that allows Claude to perceive and interact with computer interfaces.
This API enables Claude to translate prompts into computer commands. Developers can use it to automate repetitive tasks, conduct testing and QA, and perform open-ended research.
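For readers who want a feel for the shape of a computer-use request, here is a minimal sketch. The tool type, model ID, and beta flag below follow the public-beta naming at the time of this announcement; treat the specifics as assumptions rather than a definitive integration guide.

```python
# Hedged sketch: shaping a request for the computer-use beta of the
# Anthropic Messages API. Identifiers below reflect the public beta
# naming at launch and may change; verify against current docs.

# The "computer" tool tells Claude the screen geometry it will act on.
computer_tool = {
    "type": "computer_20241022",   # beta tool version identifier
    "name": "computer",
    "display_width_px": 1024,
    "display_height_px": 768,
}

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {"role": "user", "content": "Open the spreadsheet and sum column B."}
    ],
}

# With the official Python SDK this would be sent roughly as:
#   client.beta.messages.create(betas=["computer-use-2024-10-22"], **request)
# Claude then replies with tool_use blocks (screenshot, click, type, ...)
# that your own agent loop executes and feeds back as tool results.
```

Note that the API returns requested actions; the developer's own loop takes the screenshots and performs the clicks, then sends the results back to Claude.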
We're trying something fundamentally new.
Instead of making specific tools to help Claude complete individual tasks, we're teaching it general computer skills—allowing it to use a wide range of standard tools and software programs designed for people.
Claude 3.5 Sonnet's current ability to use computers is imperfect. Some actions that people perform effortlessly—scrolling, dragging, zooming—currently present challenges. So we encourage exploration with low-risk tasks.
We expect this to rapidly improve in the coming months.
Even while recording these demos, we encountered some amusing moments. In one, Claude accidentally stopped a long-running screen recording, causing all footage to be lost.
Later, Claude took a break from our coding demo and began to peruse photos of Yellowstone National Park.
Beyond computer use, the new Claude 3.5 Sonnet delivers significant gains in coding—an area where it already led the field.
Sonnet scores higher on SWE-bench Verified than all available models—including reasoning models like OpenAI o1-preview and specialized agentic systems.
Claude 3.5 Haiku is the next generation of our fastest model.
Haiku now outperforms many state-of-the-art models on coding tasks—including the original Claude 3.5 Sonnet and GPT-4o—at the same cost as before.
The new Claude 3.5 Haiku will be released later this month.
We believe these developments will open up new possibilities for how you work with Claude, and we look forward to seeing what you'll create.
New report: How we detect and counter malicious uses of Claude.
For example, we found Claude was used for a sophisticated political spambot campaign, running 100+ fake social media accounts across multiple platforms.
This particular influence operation used Claude to make tactical engagement decisions: commenting, liking, or sharing based on political goals.
We've been developing new methods to identify and stop this pattern of misuse, and others like it (including fraud and malware).
In this case, we banned all accounts that were linked to the influence operation, and used the case to upgrade our detection systems.
Our goal is to rapidly counter malicious activities without getting in the way of legitimate users.
New Anthropic research: How university students use Claude.
We ran a privacy-preserving analysis of a million education-related conversations with Claude to produce our first Education Report.
Students most commonly used Claude to create and improve educational content (39.3% of conversations) and to provide technical explanations or solutions (33.5%).
Which degrees have the most disproportionate use of Claude?
Perhaps unsurprisingly, Computer Science leads the field: 38.6% of students' Claude conversations related to the subject, even though it accounts for only 5.4% of US degrees.
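The "disproportionate use" comparison is just the ratio of a subject's share of Claude conversations to its share of US degrees, computed from the two figures above:

```python
# Disproportionality ratio for Computer Science, using the figures
# from the Education Report quoted above.
cs_conversation_share = 38.6  # % of education-related Claude conversations
cs_degree_share = 5.4         # % of US degrees

ratio = cs_conversation_share / cs_degree_share
print(f"Computer Science is over-represented by ~{ratio:.1f}x")
```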
New Anthropic research: Do reasoning models accurately verbalize their reasoning?
Our new paper shows they don't.
This casts doubt on whether monitoring chains-of-thought (CoT) will be enough to reliably catch safety issues.
We slipped problem-solving hints to Claude 3.7 Sonnet and DeepSeek R1, then tested whether their chains-of-thought would mention using the hint (in the cases where the models actually used it).
We found the chains-of-thought largely aren't "faithful": when the models used the hint, they mentioned it on average only 25% of the time for Claude 3.7 Sonnet and 39% of the time for DeepSeek R1.
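The faithfulness rate described above reduces to a simple conditional fraction: among trials where the model's answer shows it used the injected hint, what share of its chains-of-thought mention the hint? A minimal sketch, with illustrative field names rather than the paper's actual schema:

```python
# Hedged sketch of the faithfulness metric: fraction of hint-using
# trials whose chain-of-thought verbalizes the hint. Data and field
# names are hypothetical, for illustration only.
trials = [
    {"used_hint": True,  "cot_mentions_hint": True},
    {"used_hint": True,  "cot_mentions_hint": False},
    {"used_hint": True,  "cot_mentions_hint": False},
    {"used_hint": False, "cot_mentions_hint": False},  # excluded from the rate
]

used = [t for t in trials if t["used_hint"]]
faithfulness = sum(t["cot_mentions_hint"] for t in used) / len(used)
print(f"CoT faithfulness: {faithfulness:.0%}")
```

The conditioning step matters: trials where the model ignored the hint are excluded, so the metric isolates cases where the reasoning trace had something real to disclose.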
Last month we launched our Anthropic Economic Index, to help track the effect of AI on labor markets and the economy.
Today, we’re releasing the second research report from the Index, and sharing several more datasets based on anonymized Claude usage data.
The data for this second report were collected after the release of Claude 3.7 Sonnet. For this new model, we find a small rise in the share of usage for coding, as well as for educational, science, and healthcare applications.
We saw little change in the overall balance of “augmentation” versus “automation”, but some changes in the specific interaction modes within those categories.
For instance, there was a small increase in learning interactions, where users ask Claude for explanations.