@ezraklein I think your statement is a misunderstanding of what GPT-3 is and what it can do. First, GPT-3 is not "getting better" in any way. It was trained once on a huge amount of text data and then its weights were fixed. It does not improve at all as people use it.
@ezraklein Second, GPT-3 cannot do "much of what we do". It can generate paragraphs of human-sounding text, but it doesn't understand what it "says" in any humanlike way. AI is nowhere near being able to do "much of what we do".
@ezraklein Finally, I think we humans don't have good intuitions about two things regarding GPT-3: the scale of the system and its training data -- in effect, how much the system can, and effectively has, memorized; and how a system that has memorized such a vast amount of language can...
@ezraklein use that vast data to recreate convincing-sounding output on many topics. I think we will gradually attain better intuitions about why these huge language models can do what they do, and also about what their limitations are.
@ezraklein If you're interested in how current AI systems work, and how they actually compare to human intelligence, you might enjoy my recent book on this very topic: melaniemitchell.me/aibook/
First thought: Why frame this as "DeepMind lost $572 million" rather than "Google made a very large, long-term investment in one of its research arms, DeepMind"? /2
DeepMind's mission is to "solve intelligence" -- a statement I find nonsensical BTW -- but whatever you want to call it, AI is a very hard, long-term research problem and it's great for companies to fund basic, long-term research that doesn't have immediate payoff. /3