Now reading the ARC paper by @fchollet.
arxiv.org/abs/1911.01547 “On the Measure of Intelligence”, where he proposes a new benchmark for “intelligence” called the “Abstraction and Reasoning Corpus”.
Highlights below ->
@fchollet Chess was long considered the pinnacle of human intelligence, … until a computer beat Garry Kasparov in 1997. Today, it is hard to argue that a minimax algorithm with optimizations represents “intelligence”.
@fchollet AlphaGo took this a step further: it reached world-champion level at Go using deep learning. Still, the program is narrowly focused on playing Go, and solving this task did not lead to breakthroughs in other fields.
@fchollet Humans use their intelligence to play Chess and Go but computers can take other routes.
@fchollet concludes that “The hallmark of broad abilities (including general intelligence) is the power to adapt to change, acquire skills, and solve previously unseen problems”.
@fchollet Intelligence obviously goes further than any narrow task, however complex. But can it be defined as a general concept, independently of our human experience? How “general” is human intelligence?
@fchollet Very good analogy with our physical abilities. Human bodies are incredibly versatile. Humans can run in the savannah, hunt with weapons, gather nuts, but also type on a keyboard or play basketball.
@fchollet Still, our fitness to survive is limited to a minuscule corner of the universe. Viewed in this context, humans’ physical abilities, even if they are quite versatile, are in fact very narrowly focused.
@fchollet It is probably the same for our cognitive abilities. We are good at a broad set of tasks but terrible at others. For example:
@fchollet Humans are very good at determining the shortest route between a set of points. Shortest route planning is probably hard-wired in our brain. But ask a human to compute the *longest* route and he/she does no better than the most simplistic algorithm.
@fchollet Another example: we are very good at reasoning about 2D problems, quite good with 3D, and completely incapable of thinking in 4D or above.
@fchollet “As such, it is conceptually unsound to set “artificial general intelligence” in an absolute sense (i.e. “universal intelligence”) as a goal. To set out to build broad abilities of any kind, one must start from a target scope” @fchollet
@fchollet @fchollet suggests that focusing AI research on “human intelligence” is a good scope because its applications will have a good probability of intersecting with things we find useful.
@fchollet Intelligence also builds on prior knowledge, and any test that compares an AI against “human-level intelligence” must accurately account for the prior knowledge available to both the human and the machine.
@fchollet @fchollet suggests using only the priors defined by the “Core Knowledge” theory of cognition, such as elementary geometry and topology, basic numbers and arithmetic, elementary physics of objects, … for both human and AI intelligence tests.
@fchollet My note: what about the ability of humans to build vast databases of knowledge and access them in creative ways to generate new ideas? It looks like this form of “intelligence” is out of scope, and it probably should not be.
@fchollet @fchollet then suggests the following definition of intelligence:
“The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.”
@fchollet And he uses information theory and Algorithmic Complexity to quantify “generalization difficulty”, “experience”, and “priors”. The Algorithmic Complexity of a task is the length of the shortest program needed to encode the task and measures its “information content”.
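A rough way to build intuition for that “information content” (true Algorithmic/Kolmogorov Complexity is uncomputable, so this is only an informal sketch, not the paper’s formalism):

```python
import numpy as np

# A 10x10 striped grid: fully captured by a one-line rule, so its shortest
# description (its "program") is tiny -> low information content.
striped = np.array([[(r + c) % 2 for c in range(10)] for r in range(10)])

# A 10x10 random grid: almost surely has no description shorter than listing
# all 100 cells -> high information content.
rng = np.random.default_rng(0)
random_grid = rng.integers(0, 10, size=(10, 10))
```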
@fchollet In the end, he comes up with a formula measuring the intelligence of a system over a given scope. “in plain English: intelligence is the rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation.”
@fchollet I like the attempt at quantifying intelligence but more interesting than the formula is the conclusion: “the process of creating an intelligent system can be approached as an optimization problem, where the objective function would be a computable approximation of our [] formula”
@fchollet In other words, @fchollet offers a credible argument for why intelligence is a computable problem, for a definition of “intelligence” that is likely to intersect well with our human needs and expectations. There is hope! 😆 We don’t know how *practically* computable it is, though 😅
@fchollet The cherry on the cake is that he also suggests a clever benchmark for this intelligence: the “Abstraction and Reasoning Corpus” (ARC). All the tasks in ARC involve coloring squares on a grid based on a few examples of what the output should be. Examples follow.
@fchollet Solution: "reproduce the shape that has only 4 colored squares (all the other shapes have 5)"
@fchollet Solution: this is a de-noising task
@fchollet Solution: this is a "bounce the ball" task.
@fchollet Can you figure this one out?
I find it incredibly easy for humans but I have no idea how to program something that would solve it without being super-specialized...
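For the de-noising example above, a super-specialized solver is easy to imagine. A toy sketch, assuming “noise” means isolated cells that differ from a uniform background color (my own guess at the task, not code from the paper or the repo):

```python
import numpy as np

def denoise(grid: np.ndarray) -> np.ndarray:
    """Toy 'de-noising' solver: treat the most frequent color as background
    and repaint any isolated non-background cell (no same-colored neighbor)."""
    background = np.bincount(grid.ravel()).argmax()
    out = grid.copy()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == background:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            same_color = sum(
                1 for rr, cc in neighbors
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == grid[r, c]
            )
            if same_color == 0:  # isolated speck -> assume it is noise
                out[r, c] = background
    return out
```

Something like this might pass that one task and nothing else, which is exactly the point of the benchmark.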
@fchollet The useful characteristics of all the ARC tasks are:
- They are relatively easy for humans
- The performance of any known AI system on them is close to zero
Which means that there is only one way to go for AI: UP!
🥳
@fchollet My note: all of these tasks involve figuring out an underlying program from examples and then running it on the test data. I wonder about the Algorithmic Complexity of the program to find. How useful is it?
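One way to picture “figuring out the underlying program”: brute-force search over a tiny, made-up DSL of grid transforms, trying the shortest programs first. This is only my illustration of why the program’s length matters, not an approach proposed in the paper:

```python
from itertools import product
import numpy as np

# A tiny, made-up DSL of grid transforms; a "program" is a short composition of these.
PRIMITIVES = {
    "identity":  lambda g: g,
    "flip_h":    lambda g: np.fliplr(g),
    "flip_v":    lambda g: np.flipud(g),
    "rotate_90": lambda g: np.rot90(g),
    "transpose": lambda g: g.T,
}

def induce_program(train_pairs, max_length=2):
    """Return the shortest composition of primitives consistent with every
    (input, output) training pair, or None if nothing short enough fits."""
    for length in range(1, max_length + 1):          # shortest programs first
        for names in product(PRIMITIVES, repeat=length):
            def run(grid, names=names):
                for name in names:
                    grid = PRIMITIVES[name](grid)
                return grid
            if all(np.array_equal(run(np.array(i)), np.array(o))
                   for i, o in train_pairs):
                return names
    return None

# Hidden rule in this toy example: "flip the grid left-right".
pairs = [([[1, 0], [2, 0]], [[0, 1], [0, 2]])]
print(induce_program(pairs))  # -> ('flip_h',)
```

The search space explodes with program length, which is one intuition for why tasks generated by long, complex programs quickly become intractable.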
@fchollet Since these tasks must be solvable by humans, the algorithmic complexity must be low. For example, if the examples are generated by a highly complex particle-physics simulation program, humans are unlikely to figure it out.
@fchollet On the other hand, humans might be bad at figuring out even extremely simple programs. For example, a 10x10 grid encoding 10 decimals of Pi, the goal being to predict the next 10 decimals.
@fchollet My ARC-like hypothetical example: 3.14159265 is followed by 3589793238 because these are the decimals of Pi. Useful or not?
@fchollet I guess this is where filtering the task set against human abilities ties into @fchollet's pursuit of “human intelligence”.
@fchollet The ARC set of tasks is here:
github.com/fchollet/ARC
Git clone the repo and open the HTML interface in a browser. It’s a lot of fun to play with! 😃
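If you'd rather poke at the tasks from code: each task is a small JSON file with “train” and “test” lists of input/output grids. A minimal loading sketch, assuming the layout of the cloned repo’s data/training folder:

```python
import json
from pathlib import Path

# Grab any one task from the training set (assumes the cloned repo's layout).
task_file = next(Path("ARC/data/training").glob("*.json"))
task = json.loads(task_file.read_text())

for pair in task["train"]:             # demonstration pairs
    print("input :", pair["input"])    # 2D lists of color indices 0-9
    print("output:", pair["output"])

test_input = task["test"][0]["input"]  # the grid you are asked to complete
```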