Alberto Romero
Writes a blog about AI that's actually about people
Sep 30 8 tweets 2 min read
1/8 Some folks think OpenAI is hurting because it’s unprofitable and wants to hike prices to offset losses. Training (and now inference) is expensive, but OpenAI is sitting on a gold mine. They’re about to show us why:

2/8 We’re leaving generative AI behind. o1 belongs to a new phase: reasoning AI, LRMs, whatever you want to call it. It’s not valuable for what it generates but for what it doesn’t. o1 transcends generative AI. Current heuristics don’t work anymore.
Mar 24, 2023 16 tweets 6 min read
Everyone’s talking about ChatGPT plugins and GPT-4’s sparks of AGI, and I can only think about one thing: OpenAI has fooled us all.

From a non-profit, open-source AI lab untethered from shareholders’ interests to the opposite in 8 years: closed-source, for-profit, fully corporate.

Here's the *first* paragraph of the *first* blog post that OpenAI published (2015). Quite different from what it is now…
Mar 21, 2023 10 tweets 2 min read
Before ChatGPT and GPT-4, life was quiet. Now I can't help but see what's approaching fast.

Behind Richard Sutton's Bitter Lesson, there's a Bitterer one. Something that ChatGPT accelerated and GPT-4 made apparent. And I don't think we can escape it in any way: if you're anxious, exhausted, FOMO-ing, uncertain, or even unprecedentedly enthusiastic, you're already feeling it somehow.

This isn't about misinformation, job losses, or existential risks. It's just a bitter reality—a very bitter one.
Mar 21, 2023 26 tweets 8 min read
I've been doing weekly AI reviews since September. In the beginning I had to search for interesting things to add. It's Tuesday and I've already got all these:

This illustrative image reveals just how much AI has improved in the last few years (@OurWorldInData)

Mar 21, 2023 6 tweets 2 min read
There's a lot of talk going on about AI alignment, existential risks, AIs turning evil, and what we can do about it. This is a great summary of the main arguments by Yoshua Bengio:

Here's @ylecun's most recent take:

Dec 30, 2022 15 tweets 3 min read
2022 has been an incredible year of progress and developments for AI (generative AI in particular). Thinking about it, I've realized something. What if AI's destiny was never intelligence? Let me explain:

thealgorithmicbridge.substack.com/p/ais-destiny-…

Maybe the original purpose of AI, creating human-level intelligence, was doomed from the start. Not because it was too great a challenge, but because it was inevitable that we'd get distracted by a more enticing prize.
Oct 18, 2022 15 tweets 4 min read
There's a debate on AI writing tools on Twitter right now. As an AI writer, I want to give my 2 cents.

Here's my hot take: mastering human language is out of reach for AI, and it will remain this way unless current paradigms change radically.

Here's why:

First, the popular side of my opinion: GPT-3 and the apps built on top of it are quite impressive.

Here's @nateliason's testimony about Lex (created by @nbashaw), which prompted me to write this thread:

Jun 28, 2022 9 tweets 3 min read
BLOOM by @BigScienceW is the most important AI model in the last decade.

Not DALL·E 2. Not PaLM. Not AlphaZero. Not even GPT-3. I'll explain why in this short thread.

🧵1/ In 2020, OpenAI's GPT-3 came out and redefined the guidelines for the AI industry (NLP in particular).

Current SOTA language models follow the same trends: large, transformer-based, and trained on lots of data using big compute.

2/