Sometimes, great discoveries aren't about finding something no one has seen before -- but about finding a different way to see what is in front of everyone's eyes.
It's a universally safe assumption that what we see now is not the truth. It is, at best, a fragment of the truth -- at worst, a lie. Skepticism means internalizing that fact. The scientific spirit means first accepting it, then nevertheless seeking out more of the truth.
Many people see who they are now as their ultimate destination, what they believe today as the ultimate truth. These are actually mere steps on an infinite ladder. We should not be too attached to our identity and our beliefs; instead, we should use them to reach the next step.
When we develop AI systems that can actually reason, they will involve deep learning (as one of two major components, the other one being discrete search), and some people will say that this "proves" that DL can reason.
No, it will have proven the thesis that DL is not enough, and that we need to combine DL with discrete search.
From my DL textbook (1st edition), published in 2017. Seven years later, there is now overwhelming momentum towards this exact approach.
I find it especially obtuse when people point to progress on math benchmarks as evidence of LLMs being AGI, given that all of this progress has been driven by methods that leverage discrete search. The empirical data completely vindicates the view that DL in general, and LLMs in particular, can't do math on their own, and that we need discrete search.
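For concreteness, here is a minimal sketch of the pattern being argued for: a discrete search over candidate steps, guided by a learned scorer. The `expand`, `score`, and `is_goal` callables are placeholders assumed for illustration -- `score` stands in for the deep learning component, the search loop for the discrete component. This is not any particular system's implementation.

```python
import heapq

def guided_search(initial_state, expand, score, is_goal, beam_width=8, max_depth=16):
    """Beam-style best-first search over discrete steps, guided by a learned scorer.

    expand(state)  -> iterable of successor states (the discrete component)
    score(state)   -> float, higher is better (stand-in for a neural model)
    is_goal(state) -> bool, True when a full solution has been reached
    """
    counter = 0  # tie-breaker so the heap never compares states directly
    frontier = [(-score(initial_state), counter, initial_state)]
    for _ in range(max_depth):
        candidates = []
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state
            for child in expand(state):
                counter += 1
                heapq.heappush(candidates, (-score(child), counter, child))
        # the learned model prunes the combinatorial space: keep only the best few candidates
        frontier = heapq.nsmallest(beam_width, candidates)
    return None
```

Game-playing systems in the AlphaZero lineage and the recent math-oriented systems follow this broad division of labor: a learned model proposes and evaluates, while a discrete search procedure composes and verifies.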
In the last Trump administration, legal, high-skilled immigration was cut by ~30% before Covid, then by 100% after Covid (which was definitely a choice: a number of countries kept issuing residency permits and visas). However, illegal immigrant inflows did not go down (they've been stable since the mid-2000s).
If you're a scientist or engineer applying for a green card, you're probably keenly aware that your chances of eventually obtaining it are highly dependent on the election. What you may not know is that, if you're a naturalized citizen, your US passport is also at stake.
The last Trump administration launched a "denaturalization task force" aimed at taking away US citizenship from as many naturalized citizens as possible, with an eventual target of 7M (about one third of all naturalized citizens). Thankfully, they ran into a little problem: the courts.
When we say deep learning models operate via memorization, the claim isn't that they work like literal lookup tables, only being able to make sense of points that are exactly part of their training data. No one has claimed that -- it wouldn't even be true of linear regression.
Of course deep learning models can generalize to unseen data points -- they would be entirely useless if they couldn't. The claim is that they perform *local generalization*: generalization to known unknowns, to degrees of variability for which you can provide a dense sampling at training time.
If you take a problem that is known to be solvable by expert humans via pure pattern recognition (say, spotting the top move on a chess board) and that has been known to be solvable via convnets as far back as 2016, and you train a model on ~5B chess positions across ~10M games, and you find that the model can solve the problem at the level of a human expert, that isn't an example of out-of-distribution generalization. That is an example of local generalization -- precisely the thing you expect deep learning to be able to do.
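To make "local generalization" concrete, here is a toy illustration (an assumed curve-fitting example, not the chess setup above): a model fit on a densely sampled region does fine on new points inside that region and falls apart outside it.

```python
import numpy as np

# Toy example: fit a curve on a densely sampled interval, then test it
# both inside that interval (local generalization) and outside it
# (out-of-distribution extrapolation).
rng = np.random.default_rng(0)
x_train = rng.uniform(-3.0, 3.0, size=2000)   # dense sampling of the known region
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)  # stand-in for any flexible curve fitter

x_in = np.linspace(-3.0, 3.0, 200)            # unseen points, but inside the training range
x_out = np.linspace(5.0, 8.0, 200)            # unseen points outside the training range

err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()
print(f"in-distribution error:     {err_in:.4f}")   # small
print(f"out-of-distribution error: {err_out:.1f}")  # orders of magnitude larger
```

The chess result is the first case at scale: ~5B positions is a dense sampling of exactly the space the model is later tested on.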
Fact check: my 3-year-old builds Lego sets (the age 5+ ones) on his own by following the instruction booklet. He started doing it before he turned 3 -- initially he needed externally provided error correction and guidance, but now he's fully autonomous. He can't handle sets for ages 8+ yet, though. We'll see what he does at 5.
He also builds his own ideas, which feature minor original inventions. Like this "jeep" which has a spare tire on the back -- not something he saw in any official set. Lego is the best toy ever by the way
Or this Lego garden (fresh from today). It has a hut with a cool door. It looks chaotic, but everything on here has a purpose. Everything is intended to be something (the tire on a stick is a tree, the tiny cone on the ground is a water sprinkler...)
I'm partnering with @mikeknoop to launch ARC Prize: a $1,000,000 competition to create an AI that can adapt to novelty and solve simple reasoning problems.
I published the ARC benchmark over 4 years ago. It was intended to be a measure of how close we are to creating AI that can reason on its own – not just apply memorized patterns.
ARC tasks are easy for humans. They aren't complex. They don't require specialized knowledge – a child can solve them. But modern AI struggles with them.
Because they have one very important property: they're designed to be resistant to memorization.
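For concreteness: each public ARC task ships as a small JSON file containing a few demonstration pairs and one or more test pairs; the solver must infer the transformation from the demonstrations and apply it to the test inputs. The loader below is a minimal sketch of that format (function and variable names are mine, not part of the benchmark's tooling).

```python
import json

def load_arc_task(path):
    """Load one ARC task.

    Each task JSON has "train" and "test" lists; every item holds an
    "input" and an "output" grid (lists of lists of ints 0-9, one int
    per colored cell).
    """
    with open(path) as f:
        task = json.load(f)
    demos = [(pair["input"], pair["output"]) for pair in task["train"]]
    tests = [pair["input"] for pair in task["test"]]
    return demos, tests
```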
It's amazing to me that the year is 2024 and some people still equate task-specific skill with intelligence. There is *no* specific task that cannot be solved *without* intelligence -- all you need is a sufficiently complete description of the task (removing all test-time novelty and uncertainty), and you can achieve arbitrary levels of skill while entirely bypassing the problem of intelligence. In the limit, even a simple hashtable can be superhuman at anything.
The "AI" of today still has near-zero (though not exactly zero) intelligence, despite achieving superhuman skill at many tasks.
Here's one thing that AI won't be able to do within five years (if you extrapolate from the excruciatingly slow progress of the past 15 years): acquire new skills as efficiently as humans, using the same data. The ARC benchmark is an attempt at measuring roughly that.
The point of general intelligence is to make it possible to deal with novelty and uncertainty, which is what our lives are made of. Intelligence is the ability to improvise and adapt in the face of situations you weren't prepared for (either by your evolutionary history or by your past experience) -- to efficiently acquire skills at novel tasks, on the fly.