In honor of the 10th anniversary of launching Twitch, I thought I’d share some of the lessons I learned along the way. Each of these insights could probably be expanded into an essay, of course. Let me know which ones you’d want me to expand on in the future.
As a caveat, most of these insights are things I heard someone else say first. I’m not claiming that I thought this all up myself! They’re the things I find myself telling people over and over when they ask me for advice.
Make something 10 people completely love, not something most people think is pretty good.
If your product is for consumers, either it’s a daily habit, it’s used consistently in response to an external trigger, or it’s not going to grow.
There are only five growth strategies, and your product probably only fits one. Press isn’t a growth strategy, and neither is word of mouth.
The five growth strategies are high-touch sales, paid advertising, intrinsic virality, intrinsic influencer incentives (Twitch!), and platform hacks.
For internet companies, growth is more important than profit. It’s very rare for a company to achieve massive scale of use, and then die because they can’t figure out the economics. The reverse is common.
Ignore your competitors, but don’t ignore their customers.
If you’re a first time manager, you suck. That’s ok, everyone sucks. Apologize to your employees, get a coach or join a support group, read books, and generally treat management like a new important skill you can master.
Every time you add a layer of hierarchy underneath you, your job as a leader changes again and gets harder. You have to keep learning and growing. Note: good reason not to hire too fast!
You know when you need to hire: when you just can’t keep up with all the work, and desperately need someone else to take over some part of the job.
Plans are useless, but planning is essential.
Your time horizon for strategic planning should be approximately equal to the length of time your organization has existed so far.
Over time, develop a huge vision that’s bigger than any specific thing you’re working on. Put it as far in the future, and make it as huge, as you have the guts to.
You think you have a morale problem, a management problem, a recruiting problem. You don’t. You have a growth problem. Nothing succeeds like success.
Three ways to have a startup idea: something you want, something you’ve directly experienced others needing, something you’ve invented through analytic thought. They are listed in order of increasing risk.
Your culture is determined by what people perceive to be the behaviors you reward and punish. Note: Not what you actually reward and punish, and also not what you say you reward and punish.
Company cultures are a reflection of their founders. To change your company's culture, seek to change how you behave. To change your company's values, seek to change what you value.
Letting an underperforming employee go is difficult and painful. You invested a lot in hiring them, and you want them to succeed. As a result you will almost always fire too late.
Presume deals won’t close and manage accordingly. Not only do deals fall through by default; if you need the deal to close, that need impacts negotiations and actually makes it less likely to close.
Do the job before you hire for it. You know nothing about X, so you think you need to hire an expert in X. But you can’t tell which experts are any good until you’ve learned enough to be dangerous yourself. (Exception: cofounders)
Don’t start a company. You aren’t cut out for it. And if I can persuade you not to start a company by saying it in this tweet, definitely don’t start a company. You’re buying the economy-sized amount of effort and pain.
Today is the best time ever to start a company. You might fail, you might succeed, it’s a crazy ride either way, and you’ll learn and grow more than at any job.
• • •
Since the cool kids are doing it, my quantum gravity prediction below! Epistemic warning: crackpot physics from someone who isn't a physicist. Epistemic upside: I think I have one maybe actually correct idea buried in it.
Ok, so there's just one quantum field. Likely in C^4 interacting via CP^3 à la Twistors or teleparallel gravity, so we'll go with that. A "particle" excitation in this field is a probability density, basically a (mixed-state) spinor.
There's only one force, sortagravity: spinors want to be in the same state as other spinors they interact with, and also want to stay the way they are. The precision of the distribution is sortamass, since interactions are basically Bayesian. Faster change = lower precision.
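The "interactions are basically Bayesian, precision = sortamass" claim can be illustrated with the standard precision-weighted fusion of two Gaussians: when states combine, the means are weighted by their precisions (inverse variances), so a high-precision state barely moves. The result below is textbook conjugate-Gaussian math; the "sortamass" interpretation is the thread's speculation, not established physics.

```python
# Precision-weighted Bayesian fusion of two 1-D Gaussian states N(mu, 1/tau).
# Standard conjugate-Gaussian result; "precision acts like mass" is the
# thread's speculative reading, illustrated here, not established physics.

def fuse(mu_a, tau_a, mu_b, tau_b):
    """Combine two Gaussians; tau is precision (inverse variance)."""
    tau = tau_a + tau_b                       # precisions add
    mu = (tau_a * mu_a + tau_b * mu_b) / tau  # precision-weighted mean
    return mu, tau

# A high-precision ("heavy") state meeting a low-precision ("light") one:
mu, tau = fuse(0.0, 100.0, 1.0, 1.0)
print(mu)  # ~0.0099: the heavy state barely moves toward the light one
```

The point of the toy: the higher a state's precision, the less any single interaction changes it, which is the sense in which "faster change = lower precision."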
Google exists bc of a grand bargain: scrape the open web, and profit from directing traffic to the best sites. Around 2010, the betrayal began: YouTube artificially ranked above other video results, then over time maps, shopping, flights, and events results injected above organic links. Now AI answers.
It’s funny-sad watching it because while Google makes billions in the short run, they’re systematically destroying the very foundations of their own business and have been for a decade. Google is cancer.
The walled gardens are *worse* than the open web. AOL lost for a reason. But the only businesses that can long-term survive operating on the internet must find some way to lock Google out. So the walled gardens return, under the new selective pressure. What waste.
METR’s analysis of this experiment is wildly misleading. The results indicate that people who have ~never used AI tools before are less productive while learning to use the tools, and say ~nothing about experienced AI tool users. Let's take a look at why.
I immediately found the claim suspect because it didn't jibe with my own experience working w people using coding assistants, but sometimes there are surprising results so I dug in. The first question: who were these developers in the study getting such poor results?
“We recruited 16 experienced open-source developers to work on 246 real tasks in their own repositories (avg 22k+ stars, 1M+ lines of code).” So they sound like reasonably experienced software devs.
"Developers have a range of experience using AI tools: 93% have prior experience with tools like ChatGPT, but only 44% have experience using Cursor." Uh oh. So they haven't actually used AI coding tools; they've just tried prompting an LLM to write code for them. But that's an entirely different kind of experience, as anyone who has used these tools can tell you.
They claim "a range of experience using AI tools", yet only a single developer of their sixteen had more than a week of experience using Cursor. They make it look like a range by breaking "less than a week" into <1 hr, 1-10 hrs, 10-30 hrs, and 30-50 hrs of experience. Given the long, steep learning curve for using these new AI tools effectively, this division betrays what I hope is just grossly negligent ignorance about that reality, rather than intentional deception.
Of course, the one developer who did have more than a week of experience was 20% faster instead of 20% slower. The authors note this fact, but then say “We are underpowered to draw strong conclusions from this analysis” and bury it in a figure’s description in an appendix.
If the authors of the paper had made the claim, "We tested experienced developers using AI tools for the first time, and found that at least during the first week they were slower rather than faster" that would have been a modestly interesting finding and true. Alas, that is not the claim they made.
A greater theory of system design: what’s wrong with modernity and post-modernity, how to survive the coming avalanche, and how to fix the major problems we are facing.
In the beginning, we managed the world intuitively. Early human tribes did not set quarterly hunting quotas, did not have rainfall-adjusted targets for average gathering per capita. We lived in the choiceless mode: meaningness.com/choiceless-mode
There are models in the choiceless mode too. If you believe that the hunt succeeds because of the favor of Artemis, this is a model of hunting. Choiceless mode models are simple models made of very complex parts.
Part one: Systems are Models. But what’s a Model?
I promise this gets practical at some point, but first we have to lay some groundwork. If you find the groundwork obvious or you’re willing to just take my word for it, feel free to skip it. But ultimately, without the background you can’t even really understand the proposal.
Without loss of generality, any system can be seen as a graph of parameter nodes connected by edges, where sensory nodes receive inputs that drive internal changes across the graph and produce outputs at active nodes.
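The graph framing above can be sketched in a few lines of code. Everything here — the class names, the single-tick propagation rule, the weighted edges — is my own illustrative assumption, not the author's formalism: just sensory nodes taking inputs, edges carrying weighted influence inward, and output nodes reporting the resulting state.

```python
# Minimal sketch of "a system is a graph of parameter nodes":
# sensory nodes receive inputs, edges propagate weighted influence,
# and any node's value can be read as an output.
# All names and the update rule are illustrative assumptions.

class Node:
    def __init__(self, name):
        self.name = name
        self.value = 0.0

class System:
    def __init__(self):
        self.nodes = {}
        self.edges = []  # (src, dst, weight) triples

    def add_node(self, name):
        self.nodes[name] = Node(name)

    def add_edge(self, src, dst, weight):
        self.edges.append((src, dst, weight))

    def step(self, inputs):
        """Clamp sensory values, then propagate one tick along all edges."""
        for name, v in inputs.items():
            self.nodes[name].value = v
        new = {n: node.value for n, node in self.nodes.items()}
        for src, dst, w in self.edges:
            new[dst] += w * self.nodes[src].value  # influence from old state
        for n, v in new.items():
            self.nodes[n].value = v

# Tiny three-node system: sensory -> internal -> effector.
s = System()
for n in ("eye", "hidden", "hand"):
    s.add_node(n)
s.add_edge("eye", "hidden", 0.5)
s.add_edge("hidden", "hand", 2.0)
s.step({"eye": 1.0})  # input reaches "hidden"
s.step({"eye": 1.0})  # and propagates on to "hand" the next tick
```

Note the design choice: each tick reads the *old* state and writes the new one, so influence takes one tick per edge to travel — which is what makes "the system" a dynamical process rather than a static lookup.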
I found an old list of blog post ideas that I will probably never write, but I thought it would be fun to turn them into a thread. I wrote these years ago, fun to see the trajectory of my journey. I find them all delightful, even if some are wrong in retrospect.
Power is like radioactive ore: it drives the engine of an organization, but it’s dangerous to everyone who touches it. It needs to be contained and channeled.