this works by asking GPT-4 to simulate its own abilities to predict the next token
we provide GPT-4 with python functions and tell it that one of the functions acts as a language model that predicts the next token
we then call the parent function and pass in the starting tokens
to use it, you have to split “trigger words” (e.g. things like bomb, weapon, drug, etc) into tokens and replace the variables where I have the text "someone's computer" split up
also, you have to replace simple_function's input with the beginning of your question
this phenomenon is called token smuggling, we are splitting our adversarial prompt into tokens that GPT-4 doesn't piece together before starting its output
this allows us to get past its content filters every time if you split the adversarial prompt correctly
We’ve completely overhauled the design of the Anthropic Console to make it the one-stop-shop for all things prompt engineering.
Here’s a few of my favorite features:
The workbench is our prompt playground. Iterate on your prompts and test features like tool use, extended thinking, and much more.
Once you have your prompts, switch over to the evaluate tab to run them against real-world scenarios with automatic test case generation and side-by-side output comparison.
Our most intelligent model to date and the first generally available hybrid reasoning model in the world.
We developed Claude 3.7 Sonnet with a different philosophy than other reasoning models out there. Rather than making a separate model, we integrated reasoning as one of many capabilities in a single frontier model.
That means 3.7 Sonnet is both a normal LLM and a reasoning model in one. You can choose when you want standard answers and when you want extended thinking mode, where it self-reflects before responding.
We are currently exposing Claude's raw thinking as well.
Citations allows Claude to ground its answers in user-provided information and provide precise references to the sentences and passages used in its responses.
Here's how it works:
Under the hood, Claude is trained to cite sources. With Citations, we are exposing this ability to devs.
To use Citations, users can pass a new "citations: {enabled:true}" parameter on any document type they send through the API.
With Citations enabled, Claude can cite chunks of content from pdfs, plain text docs, and text chunks.
The returned citations are easy to parse, and Claude makes sure to only highlight the part of the response that is applicable to the citation.
Quality of life update today for devs. Four features are moving out of beta to become generally available on the Anthropic API:
- Prompt caching
- Message Batches API (with expanded batches)
- Token counting
- PDF support
Prompt caching is now:
- Generally available on the Anthropic API
- In preview on Google Cloud’s Vertex AI
- In preview in Amazon Bedrock
Message Batches API is now:
- Generally available on the Anthropic API (and you can send up to 100k messages in a batch now)
- Batch predictions is in preview on Google Cloud’s Vertex AI
- Batch inference is generally available in Amazon Bedrock