Dave Lauer

May 18, 2021, 11 tweets

I really enjoyed this paper by @MelMitchell1 - "Why AI is Harder Than We Think"
arxiv.org/pdf/2104.12871…

It provided an excellent historical overview of efforts in AI, & why the current advances we have been witnessing are not really as impressive as they may seem on the surface

I found the paper to be very approachable & would recommend it even to those who aren't steeped in AI.

There may be some confirmation bias here, as I've written before about the fallacy of focusing on system accuracy and veneration of deep learning:
urvin.ai/when-artificia…

Deep learning has become the bedrock of AI, & frankly has become the hammer that makes most AI scientists think every problem is a nail. As Mitchell points out, this is problematic because deep learning is a limited and brittle technique that has difficulty adapting to the real world.

These systems are insecure as well, an area not enough people are focused on (and that I'll have something to say about in the near future). We also don't really understand how deep learning arrives at the answers that it does, so we can't be sure it actually understands the problems.

The way she describes the fallacies that we have in our conceptualization of AI is fantastic and really resonated with how I see the world of AI too. The first fallacy is that "narrow intelligence is on a continuum with general intelligence."

The second fallacy seemed very prescient to me.

"Easy things are easy and hard things are hard"

You see this everywhere - the first 80% of a problem could be very easy to solve, but if you can't solve the last 20% the solution isn't viable - and the effort isn't linear.

Fallacy 3 is the real indictment of deep learning. It strikes at the heart of its problems. We can't really mimic the brain, but we try. It also alludes to the conflict of interest where cloud co's push DL hard, because they're trying to sell compute resources.

It's also primarily these same players who are designing the benchmarks, so it's little surprise that the models continue to improve on benchmarks that end up being extremely superficial. They don't test or demonstrate real understanding.

The final fallacy was new to me - the idea that "intelligence is all in the brain." Can we possibly build an intelligent system with only the neural component, while neglecting everything else that lets humans think and reason?

Finally, the summary of AI as modern alchemy. It's kind of perfect, especially where deep learning is concerned. The field of AI resembles alchemy more than science at the moment, and it's a problem in so many ways - not just a hindrance to AGI.

tl;dr: We've got a long way to go in AI, and many advances that look really impressive may be little more than systems that have figured out how to look impressive.
