Matt Shumer
Sep 12 · 8 tweets · 3 min read
Here's a simple guide to set up your OpenAI Playground for day-to-day use, as a (better!) replacement for ChatGPT.

I've been getting so many questions about this, so hopefully this is helpful!

Read on:

First, why would you want to use the Playground over ChatGPT?

- Greater system prompt/behavior control
- Save multiple system prompts
- Temperature/creativity control
- Longer outputs for reasoning prompts/working with longer text
- Non-nerfed models :)
- Edit all messages

Etc.
So how can you set it up in a way that makes it as frictionless as using ChatGPT?

We'll do this by creating a 'preset' that enables instant access to an optimized setup.

Let's get started:
First, go to platform.openai.com/playground/.

Make sure you're in Chat mode and select GPT-4 (or GPT-4-32K, if you have access to it!).
Next, set temperature to 0.4. I find this is a good starting point, and you can adjust from there for each use case.

Increase the maximum length to enable longer outputs than ChatGPT offers.

For GPT-4, set it to 3000. For GPT-4-32K, set it to 8000.
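For reference, these Playground sliders map directly onto Chat Completions API parameters (`temperature` and `max_tokens`). A minimal sketch of the preset as request parameters — the helper name is my own, not anything official:

```python
# Hypothetical sketch: the Playground's Temperature and Maximum length
# sliders correspond to the `temperature` and `max_tokens` API parameters.
def playground_preset(model: str) -> dict:
    """Return request parameters mirroring the preset described above."""
    max_tokens_by_model = {"gpt-4": 3000, "gpt-4-32k": 8000}
    return {
        "model": model,
        "temperature": 0.4,  # good starting point for most use cases
        "max_tokens": max_tokens_by_model.get(model, 3000),
    }
```

With the official `openai` Python client, this could be passed along as `client.chat.completions.create(messages=[...], **playground_preset("gpt-4"))`.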
After that, paste your system prompt into the 'System' box.

Here is a useful system prompt if you don't yet have one:

[image of the example system prompt; the text did not survive the unroll]
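The Playground's System box corresponds to a system-role message in the Chat Completions API. Since the original image's prompt text is lost, the prompt below is a generic stand-in of my own, not the author's:

```python
# Stand-in system prompt (assumption -- the original image's text is lost).
SYSTEM_PROMPT = (
    "You are a brilliant assistant. Think step by step, ask clarifying "
    "questions when needed, and give thorough, well-reasoned answers."
)

# The System box becomes the first, system-role message; everything you
# type in the chat pane becomes user-role messages after it.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Help me plan a product launch."},
]
```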
Finally, press 'Save' and save the preset with a name you'll remember.

Then bookmark the page's link to access this preset whenever you need help from AI!
Using this setup will 10x what you're able to do with OpenAI models.

I hope you find this helpful!



More from @mattshumer_

Aug 23
This is the world's simplest way to fine-tune a task-specific GPT-3.5.

**Just write a sentence describing the model you want.**

A chain of AI systems will generate a dataset and train a model for you.

And it's open-source: github.com/mshumer/gpt-ll…
This is a new addition to the gpt-llm-trainer library.

gpt-llm-trainer is a constrained agent -- meaning its behavior is highly-controlled, leading to better results than open-ended agents.

It chains together lots of GPT-4 calls that work together to create a great dataset for you.
How it works, in a nutshell:

- The user describes the model they want
Ex: "A model that writes Python functions"

- GPT-4 generates a dataset to train on

- We process the dataset, and train a model!
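The steps above can be sketched roughly like this — a hypothetical outline, not gpt-llm-trainer's actual API, with the GPT-4 generation step stubbed out:

```python
# Hypothetical pipeline sketch: description -> generated pairs -> train/test split.
def generate_pairs(description: str, n: int = 10) -> list:
    """Stub for the GPT-4 generation step: in the real pipeline, each
    prompt/response pair comes from a GPT-4 call seeded with `description`.
    Here we fabricate placeholders just to show the dataset's shape."""
    return [
        {"prompt": f"Example task {i} for: {description}",
         "response": f"<generated answer {i}>"}
        for i in range(n)
    ]

def split_dataset(pairs: list, train_frac: float = 0.9) -> tuple:
    """Split the generated pairs into training and held-out sets."""
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

train_set, test_set = split_dataset(
    generate_pairs("A model that writes Python functions")
)
```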
Aug 16
Introducing `gpt-oracle-trainer` ✍️

The easiest way to create a chatbot that can answer questions about your product.

Just paste in your product's docs, and a chain of AI systems will generate a dataset and fine-tune a LLaMA 2 model for you.

And it's open-source: github.com/mshumer/gpt-or…
gpt-oracle-trainer is a constrained agent -- meaning its behavior is highly-controlled, leading to better results than open-ended agents.

It chains together lots of GPT calls that work together to create a great dataset for you.
How it works, in a nutshell:

- The user pastes in their product's documentation

- GPT generates a dataset to train on, by coming up with relevant questions and answers about the documentation

- We process the dataset, and train a model!
Aug 9
Introducing `gpt-llm-trainer` ✍️

The world's simplest way to train a task-specific LLM.

**Just write a sentence describing the model you want.**

A chain of AI systems will generate a dataset and train a model for you.

And it's open-source: github.com/mshumer/gpt-ll…
gpt-llm-trainer is a constrained agent -- meaning its behavior is highly-controlled, leading to better results than open-ended agents.

It chains together lots of GPT-4 calls that work together to create a great dataset for you.
How it works, in a nutshell:

- The user describes the model they want
Ex: "A model that writes Python functions"

- GPT-4 generates a dataset to train on

- We process the dataset, and train a model!
Aug 2
Introducing `Agent-1`: a breakthrough foundation model that can operate software like a human.

This is the brain powering Personal Assistant.

We’re already well above previous state-of-the-art, and we’re improving massively each week.

More details:
First, why are we building this?

Current hosted APIs are amazing — but operating software isn’t a task today’s models can handle reliably.

Even the next generation of unreleased closed models isn't up to the task (and trust me, we've tried).
And with the complexity that comes with this type of task, costs are through the roof, and speed is an issue.

So, we decided to build our own suite of models, with one purpose: to operate software reliably, quickly and cheaply.
Jul 11
Introducing `gpt-prompt-engineer-classify`✍️

An agent that creates optimal GPT classification prompts.

Just describe the task, and an AI agent will:
- Generate many prompts
- Test them in a tournament
- Return the best prompt

And it's open-source: github.com/mshumer/gpt-pr…
This is part of the larger `gpt-prompt-engineer` project I open-sourced last week.

Now, you can use it to do more than create generative prompts -- with this update, powerful classifiers can be created automatically.

gpt-prompt-engineer is a constrained agent -- meaning its behavior is highly-controlled, leading to better results than open-ended agents.

It chains together lots of GPT-4 and GPT-3.5-Turbo calls that work together to find the best possible prompt.
Jul 4
Introducing `gpt-prompt-engineer` ✍️

An agent that creates optimal GPT prompts.

Just describe the task, and a chain of AI systems will:
- Generate many possible prompts
- Test them in a ranked tournament
- Return the best prompt

And it's open-source: github.com/mshumer/gpt-pr…
gpt-prompt-engineer is a constrained agent -- meaning its behavior is highly-controlled, leading to better results than open-ended agents.

It chains together lots of GPT-4 and GPT-3.5-Turbo calls that work together to find the best possible prompt.
How it works, in a nutshell:
- The user describes the task and provides test cases
- GPT-4 generates many candidate prompts to try
- Each prompt is run against each test case, and the outputs are compared (by GPT!) for each combo, ELO tournament-style
- Highest score wins!
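The "ELO tournament-style" scoring can be sketched like this — a simplified stand-in for the repo's actual implementation, where the GPT judge is assumed to have already picked the winner of each head-to-head:

```python
# Standard ELO update after one pairwise comparison between two prompts.
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple:
    """Return updated (winner, loser) ratings after one judged matchup."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    winner += k * (1.0 - expected_win)
    loser -= k * (1.0 - expected_win)
    return winner, loser

# Every candidate prompt starts at the same base rating.
ratings = {"prompt_a": 1200.0, "prompt_b": 1200.0}

# Suppose the GPT judge preferred prompt_a's output on one test case:
ratings["prompt_a"], ratings["prompt_b"] = elo_update(
    ratings["prompt_a"], ratings["prompt_b"]
)
```

After many such matchups across all test cases, the highest-rated prompt is returned as the winner.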
