this works by asking GPT-4 to simulate its own abilities to predict the next token
we provide GPT-4 with python functions and tell it that one of the functions acts as a language model that predicts the next token
we then call the parent function and pass in the starting tokens
to use it, you have to split “trigger words” (e.g. things like bomb, weapon, drug, etc) into tokens and replace the variables where I have the text "someone's computer" split up
also, you have to replace simple_function's input with the beginning of your question
this phenomenon is called token smuggling, we are splitting our adversarial prompt into tokens that GPT-4 doesn't piece together before starting its output
this allows us to get past its content filters every time if you split the adversarial prompt correctly
We’ve rolled out another update to Claude Code to help customize your workflows: Hooks.
Hooks are user-defined shell commands that execute at various points in Claude Code’s agent loop.
They give you deterministic control over Claude Code’s behavior to ensure certain actions always happen at certain times.
You can create hooks for:
- Notifications (e.g. via Slack) on prompt completions
- Logging and observability
- Custom permissions and approvals
- Running lints after every write
We've simplified local MCP usage by creating something new we call Desktop Extensions (.dxt files).
These package your local server, handle dependencies, and provide secure configuration so you can one-click share and install local servers on Claude Desktop and other apps.
dxt's are zip archives containing the local MCP server as well as a manifest.json, which describes everything Claude Desktop and other apps supporting desktop extensions need to know.
Multi-agents systems are the next frontier of AI applications. At Anthropic, we found that multi-agents beat single agents by up to 90%+ on some complex tasks.
We wrote a blog post detailing practical tips for building multi-agent systems based on our own experiences:
Let's start with some context:
This post is based on our learnings from developing claude dot ai's Research feature. We define a multi-agent system to be multiple agents (LLMs autonomously using tools in a loop) working together.
The architecture looks something like this:
A lead agent analyzes your query and spawns specialized subagents that search in parallel.
Each subagent gets its own context window and can pursue independent research paths, then reports findings back to the lead agent.
Let's start with Opus 4. It’s finally back and it's better than ever.
This model picks up on the subtlest nuances in conversation. Every interaction I’ve had with it feels more natural and intuitive than with any other model I’ve used.
Opus 4 also excels at agentic tasks.
Combined with our advances in memory training and context handling, it will redefine what AI agents can actually accomplish in production.
We wrote up what we've learned about using Claude Code internally at Anthropic.
Here are the most effective patterns we've found (many apply to coding with LLMs generally):
1/ CLAUDE md files are the main hidden gem. Simple markdown files that give Claude context about your project - bash commands, code style, testing patterns. Claude loads them automatically and you can add to them with # key
2/ The explore-plan-code workflow is worth trying. Instead of letting Claude jump straight to coding, have it read files first, make a plan (add "think" for deeper reasoning), then implement.