Alex Albert
Head of Claude Relations @AnthropicAI
Dec 17 7 tweets 2 min read
Quality of life update today for devs. Four features are moving out of beta to become generally available on the Anthropic API:
- Prompt caching
- Message Batches API (with expanded batches)
- Token counting
- PDF support

Prompt caching is now:
- Generally available on the Anthropic API
- In preview on Google Cloud’s Vertex AI
- In preview in Amazon Bedrock
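With token counting now GA, here's a minimal sketch of calling it using only the standard library. The endpoint and header names follow the Anthropic docs from this period and may have changed since; the API key is a placeholder:

```python
import json
import urllib.request

# Illustrative request body: count the tokens this prompt would consume.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [{"role": "user", "content": "Hello, Claude"}],
}

req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages/count_tokens",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "x-api-key": "YOUR_API_KEY",       # placeholder
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
# urllib.request.urlopen(req) would return a JSON body like {"input_tokens": ...}
```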
Nov 26 11 tweets 3 min read
It's only been a day since we've released MCP and folks are already starting to build tons of stuff on top of it.

The future of MCP is truly going to be community-led and not controlled by any single entity.

Here are some of the highlights I'm seeing from across the industry:

Replit is looking into adding MCP support to Agents
Nov 25 9 tweets 3 min read
I just connected Claude to an internet search engine using MCP.

Here's how you can do it too in under 5 minutes:

First, you will need to download the latest version of our Claude desktop app here: claude.ai/download
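For reference, wiring a search server into the desktop app amounts to adding an entry to Claude Desktop's `claude_desktop_config.json`. This sketch assumes the Brave Search reference server from the MCP servers repo; the API key is a placeholder:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "YOUR_BRAVE_API_KEY"
      }
    }
  }
}
```

After saving the config, restart the desktop app and the server's search tools should appear in Claude's tool list.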
Nov 25 13 tweets 3 min read
Introducing the Model Context Protocol (MCP)

An open standard we've been working on at Anthropic that solves a core challenge with LLM apps - connecting them to your data.

No more building custom integrations for every data source. MCP provides one protocol to connect them all.

Here's a quick demo using the Claude desktop app, where we've configured MCP:

Watch Claude connect directly to GitHub, create a new repo, and make a PR through a simple MCP integration.

Once MCP was set up in Claude desktop, building this integration took less than an hour.
Nov 14 8 tweets 3 min read
We've added a Claude-powered prompt improver to the Anthropic Console.

Take any prompt, run it through the improver, and get an optimized prompt in return.

Here's how it works:

To start, you enter a prompt and specify what aspects of the prompt you would like to improve.

Once you hit enter, a six-step prompt improvement process begins.
Nov 4 6 tweets 3 min read
We held our first Builder's Day in partnership with @MenloVentures this past weekend!

It was a great event with tons of extremely talented devs in attendance.

Here's a recap of the day:

We kicked the day off with a @DarioAmodei fireside chat.

Then, we followed things up with a few technical talks: one from yours truly on all our recent launches and one from @mlpowered on the latest in interpretability.
Nov 4 6 tweets 2 min read
Claude 3.5 Haiku is now available on the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.

Claude 3.5 Haiku is our fastest and most intelligent cost-efficient model to date. Here's what makes it special:

3.5 Haiku surpasses all previous Claude models (except the new 3.5 Sonnet) on coding and agentic tasks, while being significantly more affordable: a fraction of the cost of Sonnet and Opus.
Nov 1 5 tweets 2 min read
It's a big day for Claude's PDF capabilities.

We're rolling out visual PDF support across claude dot ai and the Anthropic API.

Let me explain:
Up until today, when you attached a PDF in claude dot ai, we would use a text extraction service to grab the text and send that to Claude in the prompt.

Now, Claude can actually see the PDF visually alongside the text.
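On the API side, a PDF is sent as a base64-encoded document content block. A minimal sketch; the model name and exact block shape follow the docs of this period and may have changed, and the PDF bytes below are a stand-in for a real file:

```python
import base64

# Stand-in bytes; in practice: pdf_bytes = open("report.pdf", "rb").read()
pdf_bytes = b"%PDF-1.4 minimal placeholder"
pdf_data = base64.standard_b64encode(pdf_bytes).decode("utf-8")

# Request body pairing the PDF document block with a text question.
request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": pdf_data,
                },
            },
            {"type": "text", "text": "Summarize the charts in this PDF."},
        ],
    }],
}
```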
Oct 23 9 tweets 3 min read
The new Claude 3.5 Sonnet is one of the best models I've ever used. We listened to the feedback on the old 3.5 Sonnet and worked to improve the new model in a number of ways.

Here are some of my favorite improvements:

Self-correction and reasoning

Tau bench is an agent benchmark that evaluates a model’s ability to interact with simulated users and APIs in customer service scenarios - the new 3.5 Sonnet is SOTA.

Personally, I've noticed the model gets stuck in loops less often than before.
Oct 23 5 tweets 2 min read
Anyone can try out computer use with Claude in less than 5 minutes - no coding required.

Here's how to easily set it up:

The GitHub repo with the commands is below.

Please pay attention to the disclaimer at the top as you start to build applications that use computer use!
github.com/anthropics/ant…
Oct 22 13 tweets 3 min read
Computer use is the first step toward a completely new form of human-computer interaction.

In just a few years, the way we interface with computers will be completely different from today.

Let me explain:

Computer use allows AIs to use computers just as you would.

No complex abstractions or specific APIs. Just pure visual understanding and interaction—exactly like how you use your computer.
Oct 22 10 tweets 3 min read
I'm excited to share what we've been working on lately at Anthropic.

- Computer use API
- New Claude 3.5 Sonnet
- Claude 3.5 Haiku

Let's walk through everything:

Computer use API

We've built an API that allows Claude to perceive and interact with computer interfaces.

You feed in a screenshot to Claude, and Claude returns the next action to take on the computer (e.g. move mouse, click, type text, etc.).
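The screenshot-in, action-out loop described above can be sketched like this. Everything here (the function names, the action dict shape) is illustrative glue code around the API, not part of any SDK:

```python
# Minimal sketch of the computer-use agent loop. take_screenshot, ask_claude,
# and execute_action are placeholders for your own capture, API, and
# input-injection code.
def run_computer_use(take_screenshot, ask_claude, execute_action, max_steps=20):
    """Repeatedly show Claude the screen and apply the action it returns."""
    for _ in range(max_steps):
        screenshot = take_screenshot()
        action = ask_claude(screenshot)  # e.g. {"type": "click", "x": 100, "y": 200}
        if action["type"] == "done":     # model signals the task is complete
            return action
        execute_action(action)           # move mouse, click, type text, ...
    return {"type": "max_steps_reached"}
```

In practice each turn would also feed the result of the executed action back into the conversation so the model can self-correct.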
Oct 18 6 tweets 1 min read
We just published our second Anthropic Quickstart - a financial data analyst powered by Claude.

Upload spreadsheets, documents, or financial charts and instantly get actionable insights with beautiful visualizations.

Deploy your own instance in seconds with our open-source code on GitHub.

The analyst can help with all sorts of tasks:
- Data Extraction: Upload files, extract metrics, and analyze trends
- Visualization: Create custom charts to compare metrics
- Interactive Analysis: Ask questions about data for detailed insights
Sep 19 8 tweets 3 min read
Excited to share our latest research on Contextual Retrieval - a technique that reduces incorrect chunk retrieval rates by up to 67%.

When combined with prompt caching, it may be one of the best techniques there is for implementing retrieval in RAG apps.

Let me explain:

Standard retrieval in RAG often destroys context in the process of splitting documents into chunks for embedding.

This can lead to retrieval errors, especially with complex information like financial reports or technical documents that heavily rely on context.
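The core move in contextual retrieval is to prepend a short, LLM-generated blurb situating each chunk within its source document before embedding it. A sketch, where `generate` stands in for any text-completion call and the prompt wording is illustrative:

```python
def contextualize_chunk(document, chunk, generate):
    """Prepend an LLM-written situating context to a chunk before embedding.

    `generate` is any callable that takes a prompt string and returns a
    completion string (placeholder for a real model call).
    """
    prompt = (
        "<document>\n" + document + "\n</document>\n"
        "Here is a chunk from the document:\n"
        "<chunk>\n" + chunk + "\n</chunk>\n"
        "Give a short context that situates this chunk within the overall "
        "document, to improve search retrieval of the chunk."
    )
    context = generate(prompt)
    # The contextualized text, not the bare chunk, is what gets embedded.
    return context.strip() + "\n\n" + chunk
```

Because the same document prefix is sent for every chunk, this pairs naturally with prompt caching to keep the cost of generating all the blurbs low.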
Sep 6 6 tweets 2 min read
Friday (docs) feature drop:

We've revamped our docs homepage, putting Claude-powered search front and center.

Let me show you a few things it can do:

Instead of digging through pages with ctrl-f, let Claude find what you're looking for.

The tip-of-the-tongue problem is gone.
Sep 3 5 tweets 2 min read
Announcing the Anthropic Quickstarts repo, a collection of projects designed to help developers quickly get started with building deployable applications using the Anthropic API.

Featuring our first quickstart - a Claude-powered customer support agent app:

We partnered with @skirano to create this initial quickstart - he did a great job getting the details right, and it's super scalable and dev-friendly!

The app itself is built with Next.js and uses the Anthropic API, Bedrock Knowledge Bases for RAG, and shadcn/ui for the UI.
Aug 21 6 tweets 2 min read
We just released two new resources for learning prompt engineering.

1. An interactive intro to prompting tutorial for people just getting started with Claude
2. A real-world prompting course for developers building on the Anthropic API

Here's what they cover:

The interactive tutorial runs over nine chapters and introduces all the prompting fundamentals, from how to structure your prompt to best practices for including examples.

If you've never heard of terms like CoT or few-shot, this is the place to start!
Aug 20 16 tweets 5 min read
A few days ago someone asked me what the point of using AI is if you're not a programmer.

To answer that, I kept track of every time I talked to Claude in one day to show them what I use it for:

7:13am

Wanted to make my morning smoothie but was out of oats. I was trying to keep a thicker consistency and online recipes weren't helping.
Aug 15 7 tweets 2 min read
Yesterday we launched prompt caching in the Anthropic API which significantly reduces API input costs and latency.

What I'm most excited about though is how it unlocks mega few-shot prompting as a lightweight alternative to finetuning:

Examples are the #1 thing I recommend people use in their prompts because they work so well.
Aug 14 10 tweets 3 min read
We just rolled out prompt caching in the Anthropic API.

It cuts API input costs by up to 90% and reduces latency by up to 80%.

Here's how it works:

To use prompt caching, all you have to do is add this cache_control attribute to the content you want to cache:

"cache_control": {"type": "ephemeral"}

And this beta header to the API call:

"anthropic-beta": "prompt-caching-2024-07-31"
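Put together, a request body might look like this. The model name and the choice to put the cached content in the system block are illustrative; the cache_control value and beta header come straight from the thread:

```python
# Beta header required while prompt caching was in beta (per the thread).
headers = {
    "x-api-key": "YOUR_API_KEY",  # placeholder
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "prompt-caching-2024-07-31",
}

request = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "<large reusable context, e.g. a long document>",
            # Marks this block for caching; subsequent calls that reuse the
            # same prefix read it from the cache at reduced cost and latency.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [
        {"role": "user", "content": "Answer a question about the document."}
    ],
}
```

Only the request that first writes the cache pays full price for the cached span; repeat calls with an identical prefix hit the cache.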
Aug 8 4 tweets 1 min read
I have a special place in my heart for jailbreaking. Back in the day I ran a site called jailbreakchat dot com and was one of the first to jailbreak GPT-4.

That's why I'm excited about our new program that rewards those who find novel jailbreaks in our frontier models:

If you are accepted to this program, you will get early access to our new models.

If you find a jailbreak in a high risk domain like CBRN (chemical, biological, radiological, and nuclear) or cybersecurity, you can be awarded up to $15k.