How close are we to building a truly intelligent agent?
Most scientists think we are still decades away, but today, a group of scientists from @DeepMind claims they know how to get there.
Let's talk about what's going on.
What is "Artificial General Intelligence" (AGI)?
An agent capable of learning any intellectual task that a person can learn.
Today, AI has been limited to systems that can learn particular tasks. A system that can learn anything you teach it, just like a human, is AGI.
Unfortunately, today we don't know how to build such a general, intelligent agent. Instead, we have to formulate a custom solution for every individual task.
This sucks. This doesn't scale. This doesn't get us to AGI.
But maybe we aren't that far off...
"Reward Is Enough" is a paper from DeepMind, the company behind AlphaGo and AlphaZero.
Their claim:
"Reinforcement learning agents could constitute a solution to artificial general intelligence."
Reinforcement Learning is a well-studied branch of Machine Learning that's based on reward maximization and trial-and-error experience acquisition.
The paper suggests that these characteristics are enough to build agents that exhibit intelligence.
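To make "reward maximization through trial and error" concrete, here is a minimal toy sketch (not from the paper, all numbers and names are illustrative): an epsilon-greedy agent discovering, purely from reward signals, which of three slot machines pays best.

```python
import random

# Hidden reward probabilities of three bandit arms.
# The agent never sees these; it only observes rewards.
TRUE_PAYOUTS = [0.2, 0.5, 0.8]

def pull(arm):
    """Environment: pay out 1 with the arm's hidden probability, else 0."""
    return 1 if random.random() < TRUE_PAYOUTS[arm] else 0

def run(steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    estimates = [0.0] * len(TRUE_PAYOUTS)  # learned value of each arm
    counts = [0] * len(TRUE_PAYOUTS)
    for _ in range(steps):
        # Trial and error: explore a random arm occasionally,
        # otherwise exploit the best-known one.
        if random.random() < epsilon:
            arm = random.randrange(len(TRUE_PAYOUTS))
        else:
            arm = max(range(len(TRUE_PAYOUTS)), key=lambda a: estimates[a])
        reward = pull(arm)
        counts[arm] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(run())  # estimates drift toward the hidden payout probabilities
```

Nothing here is hand-engineered for the task: the agent's only objective is maximizing reward, which is exactly the point the paper generalizes.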
How we've been tackling problems:
Break it down, build components that solve each piece, and connect them with some logical glue.
But this is not how natural intelligence works.
DeepMind's hypothesis:
"(...) the generic objective of maximizing reward is enough to drive behavior that exhibits most if not all abilities that are studied in natural and artificial intelligence."
In other words: We can get to AGI by mimicking how nature works.
We have been studying Reinforcement Learning for quite a long time, so we know it well, and we have made impressive progress using it.
This makes me hopeful. We might reach AGI sooner than we thought!
Of course, there are still many challenges we need to solve.
Reinforcement learning needs a lot of data to gain experience. Designing a reward system is hard, and we still don't know how to build systems that work across different domains.
More work needs to happen.
Let me give you a few interesting links.
First, here is the full paper, in case you want to read it:
Six common misconceptions about machine learning:
1. You need a lot of math to start.
2. You need a Ph.D. to get a job.
3. You always need a lot of data.
4. You need to buy expensive hardware.
5. It's hard to become proficient in it.
6. It's the solution for most problems.
Bullshit.
In the last 6 months, I've posted more than 100 threads here on Twitter talking about machine learning and how you can build a career on it.
And I'm just getting started!
Stay tuned. A lot more is coming.
First misconception: All machine learning is hardware-hungry.
Deep learning can be hardware-hungry, but outside of it, things get much better.
And if you do need GPUs/TPUs, there are plenty of free or cheap options, especially while you're learning.