Google exists bc of a grand bargain: scrape the open web, and profit from directing traffic to the best sites. Around
2010, the betrayal began. YouTube was artificially ranked above other video results; then, over time, maps results were injected, then shopping, flights, events. Now AI answers.
It’s funny-sad watching it because while Google makes billions in the short run, they’re systematically destroying the very foundations of their own business and have been for a decade. Google is cancer.
The walled gardens are *worse* than the open web. AOL lost for a reason. But the only businesses that can survive long-term on the internet are the ones that find some way to lock Google out. So the walled gardens return, under the new selective pressure. What waste.
Over time, of course, Google is systematically destroying its own competitive advantage. Eventually they will have deliberately and purposefully drained every drop of value from the open web, and the remaining services will be walled gardens. Search will be effectively dead.
(Apple is no better, they just do it by strangling the web itself and replacing it with “apps” they can tax to death instead. But Apple doesn’t rely on the open web for value, so it’s not suicidal in the same way)
And when search dies Google will finally have achieved its goal to become a late-stage portal, a “content business” like AOL or Yahoo. A mediocre version of everything you might want that people only used bc it was artificially jammed into their workflow.
But why keep using Google then? I already feel it, I no longer bother with Google for anything except the most trivial of queries. Bc if it can misunderstand me to direct me to its own results, it will.
Some day I expect even that will stop. “Search” will move to a tab, and “Answers” will be the primary interface. You will go to Google and type in the box and nary a SERP will be seen. Just endless slop.
There was another path. There still is, tho I expect Google is too far gone to take it. What if instead of treating Yelp like a sucker for trusting them, and systematically working to drain them dry, Google had made them a partner?
Instead of building a crap store-brand version, offer APIs to allow Yelp to integrate more deeply, allowing detailed results useful in-line and rendered into maps. Then start sharing revenue from the searches where Yelp results rank high, so they lean in.
In this world, instead of a vampire draining the life force from the open web Google becomes an irrigation system promoting it. The ultimate UGC platform: full websites and services as content.
But that would require thinking of the open web with gratitude, with love, with respect. Google decided in 2010 that people naive enough to let them scrape their businesses were suckers, and people dependent enough they couldn’t leave were cattle. A tragedy.
METR’s analysis of this experiment is wildly misleading. The results indicate that people who have ~never used AI tools before are less productive while learning to use the tools, and say ~nothing about experienced AI tool users. Let's take a look at why.
I immediately found the claim suspect because it didn't jibe with my own experience working w people using coding assistants, but sometimes there are surprising results so I dug in. The first question: who were these developers in the study getting such poor results?
“We recruited 16 experienced open-source developers to work on 246 real tasks in their own repositories (avg 22k+ stars, 1M+ lines of code).” So they sound like reasonably experienced software devs.
"Developers have a range of experience using AI tools: 93% have prior experience with tools like ChatGPT, but only 44% have experience using Cursor." Uh oh. So they haven't actually used AI coding tools; at most they've tried prompting an LLM to write code for them. But that's an entirely different kind of experience, as anyone who has used these tools can tell you.
They claim "a range of experience using AI tools", yet only a single developer of their sixteen had more than a single week of experience using Cursor. They make it look like a range by breaking "less than a week" into <1 hr, 1-10hrs, 10-30hrs, and 30-50hrs of experience. Given the long steep learning curve for effectively using these new AI tools well, this division betrays what I hope is just grossly negligent ignorance about that reality, rather than intentional deception.
Of course, the one developer who did have more than a week of experience was 20% faster instead of 20% slower. The authors note this fact, but then say “We are underpowered to draw strong conclusions from this analysis” and bury it in a figure’s description in an appendix.
If the authors of the paper had made the claim, "We tested experienced developers using AI tools for the first time, and found that at least during the first week they were slower rather than faster" that would have been a modestly interesting finding and true. Alas, that is not the claim they made.
A greater theory of system design: what’s wrong with modernity and post-modernity, how to survive the coming avalanche, and how to fix the major problems we are facing.
In the beginning, we managed the world intuitively. Early human tribes did not set quarterly hunting quotas, did not have rainfall-adjusted targets for average gathering per capita. We lived in the choiceless mode: meaningness.com/choiceless-mode
There are models in the choiceless mode too. If you believe that the hunt succeeds because of the favor of Artemis, this is a model of hunting. Choiceless mode models are simple models made of very complex parts.
Part one: Systems are Models. But what’s a Model?
I promise this gets practical at some point, but first we have to lay some groundwork. If you find the groundwork obvious or you’re willing to just take my word for it, feel free to skip it. But ultimately, without the background you can’t even really understand the proposal.
Without loss of generality, any system can be seen as a graph of parameter nodes connected by edges, where sensory nodes receive inputs that drive both internal changes to the graph and outputs produced at active nodes.
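To make that picture concrete, here is a minimal, hypothetical sketch of a "parameter graph": nodes hold values, weighted edges propagate influence, sensory nodes take external inputs, and outputs are read off active nodes. All names and the update rule here are my own illustrative assumptions, not anything from the thread.

```python
# Hypothetical sketch: a system as a graph of parameter nodes.
# Sensory nodes receive inputs; values propagate along weighted
# edges each step; outputs are read off the "active" nodes.

class Node:
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        self.edges = []  # downstream (target_node, weight) pairs

class ParameterGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, value=0.0):
        self.nodes[name] = Node(name, value)

    def connect(self, src, dst, weight=1.0):
        self.nodes[src].edges.append((self.nodes[dst], weight))

    def step(self, inputs):
        # Sensory nodes receive external inputs...
        for name, value in inputs.items():
            self.nodes[name].value = value
        # ...which drive internal changes along the edges.
        updates = {}
        for node in self.nodes.values():
            for target, weight in node.edges:
                updates[target.name] = updates.get(target.name, 0.0) + node.value * weight
        for name, value in updates.items():
            self.nodes[name].value = value

g = ParameterGraph()
g.add_node("eye")      # sensory node
g.add_node("cortex")   # internal node
g.add_node("hand")     # active (output) node
g.connect("eye", "cortex", weight=0.5)
g.connect("cortex", "hand", weight=2.0)

# The signal takes one step per edge to propagate through the graph.
g.step({"eye": 1.0})
g.step({"eye": 1.0})
print(g.nodes["hand"].value)  # → 1.0
```

The point of the toy: the same three ingredients (parameters, connectivity, input-driven propagation) describe a brain, a company, or a thermostat, which is what lets "systems are models" get off the ground.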
I found an old list of blog post ideas that I will probably never write, but I thought it would be fun to turn them into a thread. I wrote these years ago, fun to see the trajectory of my journey. I find them all delightful, even if some are wrong in retrospect.
Power is like radioactive ore…drives the engine of an organization but dangerous to everyone who touches it. Needs to be contained and channeled.
When I was CEOing at Twitch, one of the things I'd do with every batch of interns was a very short presentation on the origins of the company and then a Q&A. One of the questions was always, "Where should I work and what job should I get, or should I start a company?"
It’s an interesting question to try to answer for an intern I didn’t really know, because of course the actual answer is dependent on that person and their life. So I had to figure out how to articulate the framework I used.
First there’s money. Obviously you want money. But money is well-known for diminishing returns once you have enough for rent and food and so on. So you don’t want to optimize for cash; it’s more of a constraint.
@arithmoquine It is shocking when you first discover the degree to which non-commodity outcomes are constrained by talent not capital, and how little you can do with money unless there’s an existing machine to buy from.
@arithmoquine Think of money as water flowing through a system of pipes and turbines powered by the flow, and access to capital as the ability to open valves in the pipes. You can spin existing turbines faster but directing water doesn’t create new turbines.
@arithmoquine Ofc if someone wants to build a new turbine, without capital it’s pointless, it’ll just sit there. Often they won’t even be able to test the idea without minimal flow to experiment with.