I've spent the last two years studying consumer AI trends.
Yesterday, our team @a16z published our latest report on the top 100 AI products (by usage).
My biggest surprises - and what to learn from them ⬇️
1️⃣ DeepSeek falls off
DeepSeek traffic significantly declined, now down 22% from peak on mobile and 40% on web.
Of DeepSeek's top 5 countries, usage fell in the U.S., Russia, India, and Brazil - and was flat only in China.
Once the novelty wore off, users did not retain.
2️⃣ Grok surges
On the other end of the spectrum, @xai's Grok had a big debut - at #4 on the web list and #23 on the mobile list.
Grok 4 and its companion releases in July (Imagine shipped too late for inclusion) were real unlocks - driving a jump of nearly 40% on mobile!
3️⃣ Google relative ranks
Four Google products made the top 50 on web. After Gemini (#2), AI Studio (a developer sandbox) ranked #10, NotebookLM at #13, and Google Labs (Veo 3) at #39.
Two surprises here: (1) NotebookLM keeps growing; (2) dev-facing products are now mainstream.
4️⃣ Claude and Meta struggle on mobile
Despite significant distribution on web, Claude and Meta AI have struggled to take off on mobile, while Perplexity and Grok have soared.
This is more understandable for Claude as usage is heavily coding-related, but more confusing for Meta.
5️⃣ Vibe coding delta
For both Replit and Lovable, we tracked traffic to the builder products (.com, .dev) and separately to apps made on them (.app).
Traffic to the builder products dwarfs traffic to the apps. Users are either vibe coding personal software or buying custom domains for what they ship.
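The builder-vs-app split above can be sketched as a simple domain classification. This is a hypothetical illustration with made-up URLs, not the report's actual tracking pipeline:

```python
from urllib.parse import urlparse

# Hypothetical illustration: bucket visits into "builder" vs. "user app"
# traffic by domain suffix, mirroring the .com/.dev vs. .app distinction.
BUILDER_SUFFIXES = (".com", ".dev")  # builder products, e.g. replit.com, lovable.dev
APP_SUFFIX = ".app"                  # apps users publish, e.g. something.lovable.app

def classify(url: str) -> str:
    host = urlparse(url).netloc or url
    if host.endswith(APP_SUFFIX):
        return "user_app"
    if host.endswith(BUILDER_SUFFIXES):
        return "builder"
    return "other"

visits = [
    "https://lovable.dev/projects",
    "https://myportfolio.lovable.app",
    "https://replit.com",
]
counts = {"builder": 0, "user_app": 0, "other": 0}
for v in visits:
    counts[classify(v)] += 1
```

In a real analysis the suffix check alone would undercount apps on custom domains - which is exactly why builder traffic may look so dominant.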
6️⃣ Number of "All Stars"
14 companies have made all five editions of our top 50 web list.
This is nearly 1/3 of the list - network effects (or at least data moats) are starting to emerge.
And, the All Stars are a mix of categories, geos, and models (proprietary vs. API vs. aggregator).
Last week, I published my AI Stack - the top 10 products (of thousands!) that have become core to my day-to-day.
Now, I’m sharing details on how I actually set up workflows and utilize each tool to be more productive.
Full demos below👇
1️⃣ Comet (@perplexitycomet)
Comet is Perplexity's agentic browser. I use it to "chat" with my email and calendar, as well as to set up workflows and more complex automations via Shortcuts and Tasks.
Ex. Daily schedule breakdowns, saving LinkedIn profiles to run automations.
2️⃣ Julius (@juliusai_)
Julius is an AI data analyst where you can upload files and create analyses/visualizations in natural language. IMO, it's much more reliable than ChatGPT - and easier to use for repeat flows.
Ex. Plot data exports from the Fed + ask Qs on trends
I've spent the afternoon testing ChatGPT's new consumer automation product - Agent.
Where does it work, and where does it fall short?
And how does it compare to Operator (and newer products like Perplexity's Comet)?
My review 👇
TL;DR - it's...slow 🙃
Agent is faster than Operator (and with a higher success rate), but feels like molasses compared to Comet.
I think it's because the product spins up a "virtual computer" for each task - which is slower than a simple API call, even when an API call alone would suffice.
The biggest source of confusion for me came in using Agent with ChatGPT Connectors.
These allow you to authenticate into Gmail, Dropbox, etc. But even with prompts that used Connectors, Agent often tried browser RPA first and asked for my login.