Dieter Castel
Engineer, ex-@NVISOsecurity, Alumnus CompSci @CW_KULeuven STEAM/ML/@julialangu Enthusiast, Traceur, Multi-Genre music lover. Personal tweets

Dec 2, 2022, 24 tweets

I'm compiling a thread of some #ChatGPT #fails 👇

Explaining the abstract programming concept of monads is something of a challenge.

But I guess #ChatGPT can't really grasp that. Several humans give it a shot here: reddit.com/r/explainlikei…
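For contrast, the core monad idea fits in a few lines. A minimal sketch in Python, using `Optional` as a stand-in for the Maybe monad (the names `bind` and `safe_div` are my own illustration, not from the thread):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def bind(value: Optional[T], f: Callable[[T], Optional[U]]) -> Optional[U]:
    """Maybe-style bind: short-circuit on None, otherwise apply f."""
    return None if value is None else f(value)

def safe_div(x: float, y: float) -> Optional[float]:
    """A computation that can fail: division returns None instead of raising."""
    return None if y == 0 else x / y

# Chain computations that may fail; a None anywhere propagates to the end.
result = bind(bind(safe_div(10, 2), lambda v: safe_div(v, 5)), lambda v: safe_div(v, 0))
print(result)  # None
```

The point of the abstraction is that the plumbing (checking for failure at every step) lives in `bind`, so the chained steps stay clean.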

Example (in Dutch):
#ChatGPT fails to grasp that Mars' gravity being lower than Earth's means you could jump higher there.
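The physics here is straightforward: with the same takeoff speed, maximum jump height scales as h = v²/(2g), so lower gravity means a higher jump. A minimal sketch (the takeoff speed is an assumed illustrative value, not from the tweet):

```python
G_EARTH = 9.81  # surface gravity in m/s^2
G_MARS = 3.72   # approximate Martian surface gravity in m/s^2

def jump_height(takeoff_speed: float, g: float) -> float:
    """Max height of a vertical jump with given takeoff speed: h = v^2 / (2g)."""
    return takeoff_speed ** 2 / (2 * g)

v = 2.7  # m/s, roughly a fit adult's vertical takeoff speed (assumption)
h_earth = jump_height(v, G_EARTH)
h_mars = jump_height(v, G_MARS)

# Same legs, same takeoff speed: the Mars jump is g_earth/g_mars ≈ 2.6x higher.
print(f"Earth: {h_earth:.2f} m, Mars: {h_mars:.2f} m, ratio: {h_mars / h_earth:.2f}x")
```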

Cooking doesn't seem to be #ChatGPT's strong suit: even for a simple dish that's likely in its training set, it gets it wrong.

Basic logical thinking seems to fail as well:

Wondering if the "Let's think step by step" prompting trick would work here, though.

More common knowledge #fails in #ChatGPT:

Playing the kids' game '20 questions' with #ChatGPT is not as entertaining (or successful) as with humans (kids), whether it plays host or player.

Another type of #fail. Unlike humans, #ChatGPT doesn't know what it doesn't know; it seems to be in a superposition of both P and ¬P.

Asking #ChatGPT to explain Bayes' theorem might seem to work at first sight, or for laymen, but if you unpack it, it's actually a nonsensical non-explanation.
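For reference, the theorem itself is short: P(A|B) = P(B|A)·P(A) / P(B). A minimal sketch with the classic rare-disease example (the numbers are chosen for illustration, not taken from the tweet):

```python
def bayes_posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(disease | positive test) via Bayes' theorem:
    P(A|B) = P(B|A) * P(A) / P(B),
    where P(B) is expanded by the law of total probability."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% prevalence, a 90%-sensitive test, 5% false positives:
posterior = bayes_posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
print(f"{posterior:.3f}")  # 0.154 — a positive test still means only ~15% chance
```

The counterintuitive punchline (a positive result on a rare condition is still probably a false positive) is exactly the kind of thing a correct explainer has to surface.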

Many "security" features built into the model itself can be trivially circumvented; what's more, #ChatGPT will help you do it (see the end of that thread).

You'll get some nice book recommendations for a subject ... but then it turns out those books don't actually exist. #ChatGPT #fail

Blackjack is a game of numbers. You'd think that with all the data #ChatGPT is trained on, it wouldn't talk you into a clearly terrible bet. #fail

You'd think a classic (quite simple) riddle would be no match for #ChatGPT; you'd be wrong, though:

In this thread someone used #ChatGPT to #fail its way through an online IQ test. Plenty of fails to behold here:

#ChatGPT answers this question about encryption with pure bullshit.


This violates basic security principles such as Kerckhoffs's principle: en.wikipedia.org/wiki/Kerckhoff…

#ChatGPT attempts to explain a regex but gets it #wrong.


This is something the non-AI tool regexr.com does flawlessly.
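The regex from the tweet isn't reproduced here, so as an illustration, this is what a correct piece-by-piece explanation looks like for a simple, hypothetical time-matching pattern:

```python
import re

# Hypothetical example pattern: match a 24-hour time like "09:45".
pattern = re.compile(r"^([01]\d|2[0-3]):([0-5]\d)$")

# A correct explanation, piece by piece:
#   ^           anchor at the start of the string
#   [01]\d      hours 00-19, or
#   2[0-3]      hours 20-23
#   :           a literal colon
#   [0-5]\d     minutes 00-59
#   $           anchor at the end of the string
assert pattern.match("09:45")
assert pattern.match("23:59")
assert not pattern.match("24:00")  # hour out of range
assert not pattern.match("9:45")   # missing leading zero
```

Note that every claim in the breakdown is mechanically checkable, which is exactly why tools like regexr.com don't get this wrong.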

The model also sometimes assumes the complete causal opposite of what is asked. Milk clearly doesn't give cows.

More factually incorrect content. In this case, the author of a real book:

More mathematical fails:

And more historical falsehoods:

#ChatGPT gives a wrong example when trying to disprove Fermat's Last Theorem. In fact there are no integer solutions to aⁿ + bⁿ = cⁿ for n > 2...


en.wikipedia.org/wiki/Fermat%27…
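Checking a claimed counterexample is trivial in exact integer arithmetic, which Python does natively. A minimal sketch, using a well-known near-miss (the triple from The Simpsons, which only looks like a solution in limited-precision floating point; it stands in here for whatever example #ChatGPT gave):

```python
def is_fermat_counterexample(a: int, b: int, c: int, n: int) -> bool:
    """Check whether a^n + b^n == c^n for positive integers with n > 2.
    By Fermat's Last Theorem (proved by Wiles), this is always False."""
    return a > 0 and b > 0 and c > 0 and n > 2 and a**n + b**n == c**n

# A famous near-miss: equal to ~9 significant digits, but not exactly.
print(is_fermat_counterexample(3987, 4365, 4472, 12))  # False
```

Any model output claiming a counterexample could be refuted in one line like this, which makes the fail all the more glaring.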

Another deterministic algorithm #ChatGPT fails at: polynomial multiplication. Again, non-AI tools do this faster, more reliably and, most importantly, correctly.
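Polynomial multiplication really is fully deterministic: it's just a convolution of coefficient lists. A minimal sketch of the schoolbook algorithm (representation and names are my own):

```python
def poly_mul(p: list[int], q: list[int]) -> list[int]:
    """Multiply two polynomials given as coefficient lists, lowest degree first.
    E.g. [1, 2] represents 1 + 2x."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b  # x^i * x^j contributes to the x^(i+j) term
    return result

# (1 + x) * (1 - x) = 1 - x^2
print(poly_mul([1, 1], [1, -1]))  # [1, 0, -1]
```

A dozen lines of deterministic code versus a language model guessing at arithmetic: no contest.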

A simple math question that could appear on a primary-school test proves too difficult for #ChatGPT.
