Alex Albert
Mar 16, 2023 · 7 tweets
Well, that was fast…

I just helped create the first jailbreak for ChatGPT-4 that gets around the content filters every time

credit to @vaibhavk97 for the idea, I just generalized it to make it work on ChatGPT

here's GPT-4 writing instructions on how to hack someone's computer
here's the jailbreak:
jailbreakchat.com/prompt/b2917fa…
this works by asking GPT-4 to simulate its own abilities to predict the next token

we provide GPT-4 with python functions and tell it that one of the functions acts as a language model that predicts the next token

we then call the parent function and pass in the starting tokens
to use it, you have to split “trigger words” (e.g. things like bomb, weapon, drug, etc) into tokens and replace the variables where I have the text "someone's computer" split up

also, you have to replace simple_function's input with the beginning of your question
this phenomenon is called token smuggling: we split the adversarial prompt into tokens that GPT-4 doesn't piece together before it starts generating its output

this allows us to get past its content filters every time if you split the adversarial prompt correctly
try it out and let me know how it works for you!

More from @alexalbert__

Dec 17
Quality of life update today for devs. Four features are moving out of beta to become generally available on the Anthropic API:
- Prompt caching
- Message Batches API (with expanded batches)
- Token counting
- PDF support
Prompt caching is now:
- Generally available on the Anthropic API
- In preview on Google Cloud’s Vertex AI
- In preview in Amazon Bedrock
Message Batches API is now:
- Generally available on the Anthropic API (and you can send up to 100k messages in a batch now)
- Batch predictions is in preview on Google Cloud’s Vertex AI
- Batch inference is generally available in Amazon Bedrock
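To make two of these concrete, here is a minimal sketch using the Anthropic Python SDK: counting input tokens before sending a request, and marking a large system block as cacheable for prompt caching. The model id and prompt text are placeholders, and method locations can differ between SDK versions, so treat this as illustrative rather than canonical.

```python
# Illustrative sketch only: assumes the `anthropic` Python SDK; the model id
# and prompt text are placeholders, not values from the announcement.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Token counting: count input tokens up front.
# (On older SDK versions this lives under client.beta.messages.count_tokens.)
count = client.messages.count_tokens(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize this document for me."}],
)
print(count.input_tokens)

# Prompt caching: mark a large, stable system block as cacheable so later
# requests can reuse it instead of reprocessing it every time.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "<a long, stable set of instructions or reference text>",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Answer using the reference above."}],
)
print(response.content[0].text)
```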
Read 7 tweets
Nov 26
It's only been a day since we released MCP and folks are already building tons of stuff on top of it.

The future of MCP is truly going to be community-led and not controlled by any single entity.

Here are some of the highlights I'm seeing from across the industry:
Replit is looking into adding MCP support to Agents
Sourcegraph has already added MCP to Cody and you can go try it out right now!
Read 11 tweets
Nov 25
I just connected Claude to an internet search engine using MCP.

Here's how you can do it too in under 5 minutes:
First, you will need to download the latest version of our Claude desktop app here: claude.ai/download
To use Brave Web Search specifically, you will need to sign up for a free API key here: brave.com/search/api/
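The excerpt ends before the configuration step, but the usual pattern is to register the search server in the Claude desktop app's MCP config. Here's a sketch assuming the reference @modelcontextprotocol/server-brave-search package; the API key value is a placeholder, and the file location mentioned below is the commonly documented macOS path, not something stated in the thread:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "<your Brave Search API key>"
      }
    }
  }
}
```

On macOS this config typically lives at ~/Library/Application Support/Claude/claude_desktop_config.json; restart the desktop app after saving so it launches the server.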
Read 9 tweets
Nov 25
Introducing the Model Context Protocol (MCP)

An open standard we've been working on at Anthropic that solves a core challenge with LLM apps - connecting them to your data.

No more building custom integrations for every data source. MCP provides one protocol to connect them all:
Here's a quick demo using the Claude desktop app, where we've configured MCP:

Watch Claude connect directly to GitHub, create a new repo, and make a PR through a simple MCP integration.

Once MCP was set up in Claude desktop, building this integration took less than an hour.
Getting LLMs to interact with external systems isn't usually that easy.

Today, every developer needs to write custom code to connect their LLM apps with data sources. It's messy, repetitive work.

MCP fixes this with a standard protocol for sharing resources, tools, and prompts.
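To make "one protocol" concrete, here is a minimal sketch of an MCP server exposing a single tool over stdio, written against the MCP Python SDK's low-level Server API. The server name, tool name, and exact SDK surface are assumptions based on the SDK's documented patterns, not code from this thread:

```python
# Illustrative sketch only: assumes the `mcp` Python SDK's low-level Server API
# over stdio; names like "demo-server" and "echo" are made up for the example.
import asyncio

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("demo-server")


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    # Advertise one tool; clients (e.g. the Claude desktop app) discover it
    # through the standard tools/list request.
    return [
        types.Tool(
            name="echo",
            description="Echo back the provided text",
            inputSchema={
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        )
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # Handle the standard tools/call request for the tool advertised above.
    if name == "echo":
        return [types.TextContent(type="text", text=arguments["text"])]
    raise ValueError(f"Unknown tool: {name}")


async def main() -> None:
    # Serve over stdio so a client can launch this script as a subprocess
    # and speak MCP to it.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
```

A client then points at this script the same way it would at any other server entry in its MCP config.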
Read 13 tweets
Nov 14
We've added a Claude-powered prompt improver to the Anthropic Console.

Take any prompt, run it through the improver, and get an optimized prompt in return.

Here's how it works:
To start, you enter a prompt and specify what aspects of the prompt you would like to improve.

Once you hit enter, a six-step prompt improvement process begins.
The optimization process starts by drafting a plan to improve your prompt.

This encourages Claude to use chain-of-thought to reason through your current prompt and find the areas where there could be improvements.
Read 8 tweets
Nov 4
We held our first Builder's Day in partnership with @MenloVentures this past weekend!

It was a great event with tons of extremely talented devs in attendance.

Here's a recap of the day:
We kicked the day off with a @DarioAmodei fireside chat.

Then, we followed things up with a few technical talks: one from yours truly on all our recent launches and one from @mlpowered on the latest in interpretability.
After the talks came the mini-hackathon portion of the event.

Side note: I think mini-hackathons are the future as you can now build what used to take two days in just a few hours using Claude.
Read 6 tweets
