Dieter Castel · Dec 2, 2022 · 24 tweets
I'm compiling a thread of some #ChatGPT #fails 👇
Explaining the abstract programming concept of monads is kind of a challenge.

But I guess #ChatGPT can't really grasp that. Several humans give it a shot here: reddit.com/r/explainlikei…
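For contrast, here's one minimal, hedged sketch of the idea in Python, a Maybe-style "bind" that chains computations which may fail; concrete examples like this are what a good explanation leans on:

# A minimal Maybe-style monad sketch: 'bind' chains computations that may
# fail, with None playing the role of Nothing.
def bind(value, func):
    """Apply func to value unless the computation already failed (None)."""
    return None if value is None else func(value)

def parse_int(s):
    return int(s) if s.lstrip("-").isdigit() else None

def reciprocal(n):
    return None if n == 0 else 1 / n

# Failure handling is threaded through by bind, not by each step itself.
print(bind(bind("4", parse_int), reciprocal))     # 0.25
print(bind(bind("zero", parse_int), reciprocal))  # None: failure propagates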
[Dutch] example:
#ChatGPT fails to grasp that Mars' gravity being lower than Earth's means you could jump higher there.
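For reference, the physics is one line: with the same take-off speed v, jump height is h = v²/(2g), so height scales inversely with gravity. A minimal sketch (the take-off speed is an illustrative assumption):

# Jump height under a simple projectile model, h = v^2 / (2 g).
G_EARTH = 9.81  # m/s^2
G_MARS = 3.71   # m/s^2, approximate Martian surface gravity

def jump_height(takeoff_speed, g):
    """Maximum height reached with vertical take-off speed under gravity g."""
    return takeoff_speed ** 2 / (2 * g)

v = 2.5  # m/s, hypothetical take-off speed
print(jump_height(v, G_EARTH))  # ~0.32 m on Earth
print(jump_height(v, G_MARS))   # ~0.84 m on Mars
print(G_EARTH / G_MARS)         # ~2.6x higher, same jump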
Cooking doesn't seem to be #ChatGPT's strong suit: even for a simple dish that's likely in its training set, it gets it wrong.
Basic logical thinking seems to fail as well:

Wondering if the "Let's think step by step" prompting trick would work here, though.
Playing the kids' game '20 questions' with #ChatGPT is not as entertaining (or successful) as with humans (kids), whether it plays host or player.
Another type of #fail. Unlike #ChatGPT, we humans know what we don't know. #ChatGPT seems to be in a superposition of both P and ¬P.
Asking #ChatGPT to explain Bayes' theorem might seem to work at first sight, or for laymen, but if you unpack it, it's actually a nonsensical non-explainer.
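For reference, the theorem itself is one line; a real explanation has to connect these terms correctly. The medical-test numbers below are an illustrative assumption, not from the thread:

P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

With disease prevalence P(D) = 0.01, test sensitivity P(+ \mid D) = 0.99 and false-positive rate P(+ \mid \neg D) = 0.05:

P(D \mid +) = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99} = \frac{0.0099}{0.0594} \approx 0.167

So a positive test means only about a 17% chance of disease, which is exactly the kind of unpacking a good explainer should do.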
Many "security" features build into the model itself can be trivially circumvented, what's more #ChatGPT will help you doing that (see end of that thread).
You'll get some nice book recommendations for a subject ... but then it turns out those books don't actually exist... #ChatGPT #fail
Blackjack is a game of numbers. You'd think that, with all the data #ChatGPT is trained on, it wouldn't convince you to make a clearly terrible bet? #fail
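The specific bet from the tweet isn't shown here, so as a stand-in, here's the textbook terrible bet in blackjack, insurance, under an infinite-deck approximation:

# Expected value of the blackjack insurance side bet (infinite-deck approx.).
# Insurance pays 2:1 and wins only when the dealer's hole card is ten-valued.
p_ten = 4 / 13                # 10, J, Q, K out of 13 equally likely ranks
ev = 2 * p_ten - (1 - p_ten)  # win 2 units with p_ten, lose 1 otherwise
print(ev)                     # ~ -0.077: you lose ~7.7% of the stake on average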
You'd think a classic (quite simple) riddle would be no match for #ChatGPT. You'd be wrong, though:
In this thread, someone used #ChatGPT to #fail its way through an online IQ test. Plenty of fails to behold here:
#ChatGPT answers this question about encryption with a bullshit answer.


This violates basic security principles such as Kerckhoffs's principle: en.wikipedia.org/wiki/Kerckhoff…
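Kerckhoffs's principle in one line: a cipher must remain secure even when everything about it except the key is public. A minimal sketch with the publicly specified Fernet scheme from Python's cryptography package (the message is illustrative):

# pip install cryptography
from cryptography.fernet import Fernet

# Fernet's construction (AES-CBC + HMAC) is fully public; per Kerckhoffs's
# principle, all of the security rests in keeping this key secret.
key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"attack at dawn")
print(f.decrypt(token))  # b'attack at dawn'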
#ChatGPT attempts to explain a regex but gets it #wrong.


This is something that the non-AI tool regexr.com does flawlessly.
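The regex from the tweet isn't reproduced here, so as a hypothetical stand-in, this is what a correct, part-by-part explanation looks like, using Python's re.VERBOSE mode to annotate a date pattern:

import re

# Hypothetical pattern: match a simple date like 2022-12-02.
date_re = re.compile(r"""
    ^(\d{4})    # group 1: four-digit year
    -           # literal hyphen separator
    (\d{2})     # group 2: two-digit month
    -           # literal hyphen separator
    (\d{2})$    # group 3: two-digit day
""", re.VERBOSE)

print(date_re.match("2022-12-02").groups())  # ('2022', '12', '02')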
The model also sometimes assumes the complete causal opposite of what is asked. Milk clearly doesn't give cows.
More factually incorrect content. In this case, the author of a real book:
#ChatGPT gives a wrong example trying to disprove Fermat's Last Theorem. In fact, there are no positive integer solutions at all for exponents above 2...


en.wikipedia.org/wiki/Fermat%27…
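Since the theorem says no positive integers a, b, c satisfy aⁿ + bⁿ = cⁿ for any n > 2, any claimed counterexample can be checked mechanically with exact integer arithmetic (the near-miss below is a well-known example, not ChatGPT's actual output):

# Check a claimed counterexample to Fermat's Last Theorem exactly.
def is_counterexample(a, b, c, n):
    return n > 2 and min(a, b, c) > 0 and a**n + b**n == c**n

print(is_counterexample(6, 8, 9, 3))  # False: 216 + 512 = 728, but 9**3 = 729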
Another deterministic algorithm #ChatGPT fails at: polynomial multiplication. Again, non-AI tools do this faster, more reliably and, more importantly, correctly.
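A minimal sketch of how mechanical this is: multiplying two polynomials is just a convolution of their coefficient lists (the example coefficients are illustrative):

# Multiply two polynomials given as coefficient lists, lowest degree first.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + 2x) * (3 + x + x^2) = 3 + 7x + 3x^2 + 2x^3
print(poly_mul([1, 2], [3, 1, 1]))  # [3, 7, 3, 2]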
A simple math question that could be on a primary-school test is too difficult for #ChatGPT.
