It provided an excellent historical overview of efforts in AI, & why the current advances we've been witnessing are not really as impressive as they may seem on the surface.
I found the paper to be very approachable & would recommend it even to those who aren't steeped in AI.
There may be some confirmation bias here, as I've written before about the fallacy of focusing on system accuracy and veneration of deep learning: urvin.ai/when-artificia…
Deep learning has become the bedrock of AI, & frankly has become the hammer that makes most AI scientists see every problem as a nail. As Mitchell points out, this is problematic because deep learning is a limited and brittle technique that has difficulty adapting to the real world.
These systems are insecure as well, an area not enough people are focused on (and one I'll have something to say about in the near future). We also don't really understand how deep learning arrives at the answers it does, so we can't be sure it actually understands the problems.
The way she describes the fallacies that we have in our conceptualization of AI is fantastic and really resonated with how I see the world of AI too. The first fallacy is that "narrow intelligence is on a continuum with general intelligence."
The second fallacy seemed very prescient to me.
"Easy things are easy and hard things are hard"
You see this everywhere - the first 80% of a problem could be very easy to solve, but if you can't solve the last 20% the solution isn't viable - and the effort isn't linear.
Fallacy 3 is the real indictment of deep learning. It strikes at the heart of its problems. We can't really mimic the brain, but we try. It also alludes to the conflict of interest where cloud co's push DL hard, because they're trying to sell compute resources.
It's also primarily these same players who are designing the benchmarks, so it's little surprise that the models continue to improve on benchmarks that end up being extremely superficial. They don't test or demonstrate real understanding.
The final fallacy was new to me - the idea that "intelligence is all in the brain." Can we possibly build an intelligent system with only the neural component, while neglecting everything else that lets humans think and reason?
Finally, the summary of AI as modern alchemy. It's kind of perfect, especially where deep learning is concerned. The field of AI resembles alchemy more than science at the moment, and it's a problem in so many ways - not just a hindrance to AGI.
tl;dr: We've got a long way to go in AI, and many advances that look really impressive may be little more than systems that have figured out how to look impressive.
There are some really great points in this writeup from @AlexanderGerko, much of which I agree with. Nearly all retail trading today is internalized by a duopoly whose incentives are not to ensure best execution for retail clients.
This does several things to markets, but one of those things is to take that flow away from lit markets and open competition. Markets should encourage open competition - how is that even controversial? Get retail flow on lit markets.
If Citadel and Virtu are truly providing best execution, they'll still be on the other side of the trade! If they're not, others will step in. Retail brokers SHOULD charge commissions, instead of hiding those costs in securities lending, PFOF and margin interest.