It all starts with this 4chan post where a supposed researcher claims that people with an IQ less than 90 can't understand questions like "How would you have felt yesterday evening if you hadn't eaten breakfast or lunch?"
Later in the original thread, the author uses the same breakfast question from the 4chan post.
The breakfast question is frequently linked to memes implying black people have low IQs. Like this George Floyd meme.
Or this one, where they swap in a black woman for the white man who appears in the much more typical version of the meme.
See here, where this person suggests that this black teen isn't capable of understanding hypotheticals.
I don't know anything about this person, but he's listed on the Southern Poverty Law Center website as a white nationalist. Source: splcenter.org/fighting-hate/…
On a personal note, I frequently get people on here asking me the breakfast question in a sad attempt to troll me as a black person on the internet.
I'm not accusing the original author of the thread of racism. Perhaps he is unaware of the origins.
But spreading these kinds of ideas without acknowledging the context is exactly how racist ideas get mainstreamed.
You may have heard that hallucinations are a big problem in AI: models make stuff up that sounds very convincing but isn't real.
Hallucinations aren't the real issue. The real issue is Exact vs Approximate, and it's a much, much bigger problem.
When you fit a curve to data, you have choices.
You can force it to pass through every point, or you can approximate the overall shape of the points without hitting any single point exactly.
When it comes to AI, there's a similar choice.
These models are built to match the shape of language. In any given context, the model can either produce exactly the text it was trained on, or it can produce text that's close but not identical.
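To make the curve-fitting analogy concrete, here's a minimal sketch of the two choices (the data points are invented for illustration, and numpy is assumed):

```python
import numpy as np

# Five made-up data points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.7, 2.1, 3.9, 3.2])

# Exact: a degree-4 polynomial passes through every one of the 5 points.
exact_coeffs = np.polyfit(x, y, deg=len(x) - 1)

# Approximate: a straight line captures the overall shape
# without hitting any single point exactly.
approx_coeffs = np.polyfit(x, y, deg=1)

x_new = 2.5
print(np.polyval(exact_coeffs, x_new))   # reproduces every wiggle in the data
print(np.polyval(approx_coeffs, x_new))  # smooths over them
```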
I’m deeply skeptical of the AI hype because I’ve seen this all before. I’ve watched Silicon Valley chase the dream of easy money from data over and over again, and they always hit a wall.
Story time.
First it was big data. The claim was that if you just piled up enough data, the answers would be so obvious that even the dumbest algorithm or biggest idiot could see them.
Models were an afterthought. People laughed at you if you said the details mattered.
Unsurprisingly, it didn't work out.
Next came data scientists. The idea was simple: hire smart science PhDs, point them at your pile of data, wait for the monetizable insights to roll in.
As a statistician, this is extremely alarming. I’ve spent years thinking about the ethical principles that guide data analysis. Here are a few that feel most urgent:
RESPECT AUTONOMY
Collect data only with meaningful consent. People deserve control over how their information is used.
Example: If you're studying mobile app behavior, don’t log GPS location unless users explicitly opt in and understand the implications.
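As a hypothetical sketch of what that opt-in gate might look like in code (the User class and record_app_event function are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    gps_opt_in: bool = False  # off by default; consent must be explicit

def record_app_event(user: User, event: str, gps_coords=None):
    payload = {"user_id": user.id, "event": event}
    # Only attach location if the user explicitly opted in.
    if user.gps_opt_in and gps_coords is not None:
        payload["gps"] = gps_coords
    return payload

print(record_app_event(User("u1"), "open"))                       # no GPS logged
print(record_app_event(User("u2", True), "open", (40.7, -74.0)))  # GPS attached
```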
DO NO HARM
Anticipate and prevent harm, including breaches of privacy and stigmatization.
Example: If 100% of a small town tests positive for HIV, reporting that stat would violate privacy. Aggregating to the county level protects individuals while keeping the data useful.
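A toy sketch of that aggregation step, assuming pandas (the counts are invented):

```python
import pandas as pd

towns = pd.DataFrame({
    "county":   ["A",  "A",  "A",  "B"],
    "town":     ["t1", "t2", "t3", "t4"],
    "tested":   [40, 120, 90, 300],
    "positive": [40,   3,  5,  12],  # reporting t1 alone would reveal every resident's status
})

# Rolling up to the county level keeps the statistic useful
# without identifying individuals in any one town.
by_county = towns.groupby("county")[["tested", "positive"]].sum()
by_county["rate"] = by_county["positive"] / by_county["tested"]
print(by_county)
```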
Hot take: Students using ChatGPT to cheat are just following the system’s logic to its natural conclusion, a system that treats learning as a series of hoops to jump through, not a path to becoming more fully oneself.
The tragedy is that teachers and students actually want the same thing, for the student to grow in capability and agency, but school pits them against each other, turning learning into compliance and grading into surveillance.
Properly understood, passing up a real chance to learn is like skipping out on great sex or premium ice cream. One could, but why would one want to?
If you think about how statistics works, it’s extremely obvious why a model built on purely statistical patterns would “hallucinate”. Explanation in next tweet.
Very simply, statistics is about taking two points you know exist and drawing a line between them, basically completing patterns.
Sometimes that middle point is something that exists in the physical world, sometimes it’s something that could potentially exist, but doesn’t.
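In code, that pattern-completion move is just interpolation. A toy sketch, assuming numpy:

```python
import numpy as np

# Two points we know exist.
known_x = np.array([0.0, 10.0])
known_y = np.array([1.0, 5.0])

# Draw a line between them and read off the middle.
midpoint = np.interp(5.0, known_x, known_y)
print(midpoint)  # 3.0 -- a perfectly plausible value, with no guarantee anything real corresponds to it
```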
Imagine an algorithm that could predict what a couple’s kids might look like. How’s the algorithm supposed to know if one of those kids it predicted actually exists or not?
The child’s existence has no logical relationship to the genomics data the algorithm has available.