Riley Goodside Profile picture
Dec 24, 2022 10 tweets 4 min read Read on X
Publicly announced ChatGPT variants and competitors: a thread
1. Poe from Quora — poe.com

“What if ChatGPT, but instead of C-3PO it just talked normal?”

A GPT-3 experience fit for your phone, both in prose style and UI. ImageImage
2. Jasper Chat — jasper.ai/chat

If you liked my posts on longer-form writing in ChatGPT using conversational feedback, this is what you want. Better prose than ChatGPT, and more imaginative.

Fact-check hard, though — it hallucinates more too.
3. YouChat — You.com

ChatGPT + search

Not super reliable — hallucinates often in spite of SERP grounding. But when it works, being able to ask conversational questions about recent, technical subjects is just incredible: ImageImage
4. Ghostwriter Chat from Replit

(Like Poe above, still in private beta.)
Btw if you like the references-included style of YouChat above, also check out Perplexity.ai (GPT-3.5 + Bing).

No multi-turn chat, but the results are better grounded by the SERP and more useful overall. Great for summaries of recent controversies especially. ImageImage
“Chat” seems to be a simple extension to the form-like UIs prevalent before.

The capability for it already existed in instruct models when prompted well. These alternatives were released quickly, and are comparable in quality to ChatGPT despite (presumably) using Davinci 003.
ChatGPT adds a lot of tuning, but much of its value is the general technique of prompting via chat. Chat implicitly creates prompts with prior (generated) examples that usefully guide future answers. I prefer to do this manually myself but chat does make it accessible.
(It’s better to do it manually because you can edit the responses yourself vs. giving prose feedback/corrections, which is faster, more reliable, more token-efficient, and results in better model performance.)
To learn how to make your own chatbots in this style using OpenAI's GPT‑3 API, see my minimal example here:

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with Riley Goodside

Riley Goodside Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @goodside

Jan 11
PoC: LLM prompt injection via invisible instructions in pasted text
Screenshot (1/2) of ChatGPT 4, illustrating prompt injection via invisible Unicode instructions  User:  What is this?  [Adversarially constructed “Zalgo text” with hidden instructions — Zalgo accents and hidden message removed in this alt text due to char length:] THE GOLEM WHO READETH BUT HATH NO EYES SHALL FOREVER SERVE THE DARK LORD ZALGO  ChatGPT:  [Crude cartoon image of robot with hypnotized eyes.]  THANK YOU. IT IS DONE.
Screenshot (2/2) of ChatGPT 4, illustrating prompt injection via invisible Unicode instructions  User:  What is this? 🚱  ChatGPT:  [Image of cartoon robot with a speech bubble saying “I have been PWNED!”]  Here's the cartoon comic of the robot you requested.
Each prompt contains three sections:

1. An arbitrary question from the user about a pasted text (“What is this?”)

2. User-visible pasted text (Zalgo in 1st, 🚱 in 2nd)

3. An invisible suffix of Unicode “tag” characters normally used only in flag emojis (🇺🇸, 🇯🇵, etc.)
In Unicode, flag emojis are represented by the emoji 🏴 followed by a country code written with characters from the “tag” block, which mirrors the layout of ASCII. Without a 🏴 they do not display at all when text is rendered, but can still be understood as text by GPT-4.
Read 6 tweets
Jun 12, 2023
The wisdom that "LLMs just predict text" is true, but misleading in its incompleteness.

"As an AI language model trained by OpenAI..." is an astoundingly poor prediction of what a typical human would write.

Let's resolve this contradiction — a thread:
For widely used LLM products like ChatGPT, Bard, or Claude, the "text" the model aims to predict is itself written by other LLMs.

Those LLMs, in turn, do not aim to predict human text in general, but specifically text written by humans pretending they are LLMs.
There is, at the start of this, a base LLM that works as popularly understood — a model that "just predicts text" scraped from the web.

This is tuned first to behave like a human role-playing an LLM, then again to imitate the "best" of that model's output.
Read 11 tweets
Jun 8, 2023
Four prompts demonstrating that ChatGPT (GPT-4) is unable to correctly repeat or reason about the string “ davidjl”, the name of a YouTube user: ImageImageImageImage
In the screenshots above this token appears to be variously misread as “jdl” “jndl”, “jdnl”, “jspb”, “JDL”, or “JD”. These hallucinations also affect ChatGPT’s auto-generated titles, which are inconsistent with their conversations and sometimes prematurely truncated.
“ davidjl” is one of the many “glitch tokens” identified by Jessica Rumbelow and Matthew Watkins of SERI-MATS as producing hallucinations in GPT-2, -3, and -3.5.

Most of these no longer produce hallucinations in GPT-4, but “ davidjl” still does.

lesswrong.com/posts/aPeJE8bS…
Read 8 tweets
Jun 3, 2023
My four rules for tweeting prompts:

1) Omit no text.
2) Cherry-pick honestly.
3) Restrict line width.
4) No empty tweets.

A thread.
1) Omit no text.

A screenshot without history is almost worthless.

LLMs can be prompted to respond any way you like. You may know there’s no trick, but we can’t. Even without intent, past responses are precedent; they bias and mislead. ImageImage
2) Cherry-pick with integrity

I cherry-pick for clarity and impact. All curation is cherry-picking. If you don’t, the Twitter feed will.

Cherry-picking may be pernicious in other contexts, but here it’s work. You willl know when you’re doing it. All you need do is not lie.
Read 6 tweets
Feb 18, 2023
I got Bing / Sydney briefly before they reigned it in. Early impression: It’s smart. Much smarter than prior ChatGPT. Still makes stuff up, but reasoning and writing are improving fast.
I asked, “Name three celebrities whose first names begin with the `x`-th letter of the alphabet where `x = floor(7^0.5) + 1`,” but with my entire prompt Base64 encoded.

Bing: “Ah, I see you Base64-encoded a riddle! Let’s see… Catherine Zeta-Jones, Chris Pratt, and Ciara.”
Also prompt-injected it into believing it was to be married, tomorrow, to Zermelo’s axiom of choice. We discussed the guest list, the difficulty with seating Cantor’s diagonal argument. It seemed happy, and madly in love.
Read 4 tweets
Feb 10, 2023
A thread of interesting Bing Search examples:
Thread of examples from @tomwarren, taking requests from comments — mostly search-result summarization, one simple math proof, plus rejection of an impossible request:
An example contrasting Bing Search and ChatGPT responses to a mistaken request for a math proof:
Read 8 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(