“Burnout” is a particularly modern affliction: feeling simultaneously overwhelmed and paralyzed. I’ve found it’s best to think of burnout not as a disease but as a symptom with many different etiologies. The big three: permanent on-call, broken steering, and mission doubt.
It used to be that most jobs necessarily gave you time off, because they could only be done from the office, the factory, etc. The 24/7 on-call rotation was made possible by the Magic of Technology(tm) (e.g. 1980s doctors with pagers). Stay on-call too long and your mind breaks down.
Broken steering is a metaphor for that feeling at work where your actions seem to have no impact. Turn the wheel, car still goes straight. This is rare in blue collar work: the car got assembled, now you have a car. It is common in knowledge work: you sent some email, so what?
(Broken steering destroys motivation because it breaks the core feedback loop which makes work rewarding. When you throw a rock in a pond and it makes a splash, there is a little feeling of power in the impact on the world. Take away the splash and the intrinsic reward dies.)
Mission doubt happens when you start asking: why am I doing this work at all? It is most common when people are very comfortable. If you really need the money from this week’s paycheck, it’s obvious why you keep at it. But what if you’d be fine for 6 months? 2 years?
So burnout is becoming more and more common because (a) we can work from anywhere and then have bad boundaries, (b) our work is increasingly abstract and it’s harder to tell if it matters, and (c) we are collectively richer than our ancestors.
If you find yourself feeling “burnout”, it can be good to consider which of these might be the cause. The solution to permanent on-call is more vacation and time off. But that can actually make broken steering worse; there, the fix is instead to increase your impact per unit of work.
May all beings obey the inscrutable exhortations of their soul.
May all beings experience flow.
May all beings yearn for the vast and endless sea.
Every time you dunk on someone’s stupid, evil, unacceptable behavior and message, imagine yourself serving them. A slave in line with every other hater, hauling the blocks up the pyramid, building the idol of oppression you hate. Attention is care, energy, fuel: starve them out.
You are a living fountain. Your presence and attention matter. They matter more than any of us can directly understand. And they pour out from us at all times.
Every moment is a chance to refocus. Every danger is a cue to focus on your allies and barricades. Every disgusting thing a cue to clean your own house, to wash your hands and mind. Every delusion a cue to inspect your own eye.
Since the cool kids are doing it, my quantum gravity prediction below! Epistemic warning: crackpot physics from someone who isn't a physicist. Epistemic upside: I think I have one maybe actually correct idea buried in it.
Ok, so there's just one quantum field. Likely in C^4 interacting via CP^3, à la twistors or teleparallel gravity, so we'll go with that. A "particle" excitation in this field is a probability density, basically a (mixed-state) spinor.
There's only one force, sortagravity: spinors want to be in the same state as other spinors they interact with, and also want to stay the way they are. The precision of the distribution is sortamass, since interactions are basically Bayesian. Faster change = lower precision.
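One way to read “interactions are basically Bayesian” is as a precision-weighted merge of Gaussian states. A toy sketch of that reading (the 1-D Gaussian model and all names here are my illustration, not anything from the thread or from actual physics):

```python
# Toy model: each "particle" is a 1-D Gaussian belief (mean, precision).
# On interaction, the states merge Bayesian-style: the posterior mean is
# the precision-weighted average, so a high-precision ("high sortamass")
# state barely moves, while a low-precision state gets pulled along.

def interact(a, b):
    """Precision-weighted merge of two Gaussian states (mean, precision)."""
    (m_a, p_a), (m_b, p_b) = a, b
    p_post = p_a + p_b                          # precisions add
    m_post = (p_a * m_a + p_b * m_b) / p_post   # precision-weighted mean
    return (m_post, p_post)

heavy = (0.0, 10.0)  # high precision: "wants to stay the way it is"
light = (1.0, 1.0)   # low precision: changes fast, easily pulled along

merged = interact(heavy, light)
print(merged)  # mean lands near 0.09, much closer to the heavy state
```

This captures both tendencies in the tweet at once: the merged state is a compromise (“want to be in the same state”), weighted by precision (“want to stay the way they are”), and lower precision means faster change.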
Google exists bc of a grand bargain: scrape the open web, and profit from directing traffic to the best sites. Around 2010, the betrayal began. YouTube artificially ranked above other video results, then over time maps results injected, shopping, flights, events. Now AI answers.
It’s funny-sad watching it because while Google makes billions in the short run, they’re systematically destroying the very foundations of their own business and have been for a decade. Google is cancer.
The walled gardens are *worse* than the open web. AOL lost for a reason. But the only businesses that can survive long-term on the internet must find some way to lock Google out. So the walled gardens return, under the new selective pressure. What waste.
METR’s analysis of this experiment is wildly misleading. The results indicate that people who have ~never used AI tools before are less productive while learning to use the tools, and say ~nothing about experienced AI tool users. Let's take a look at why.
I immediately found the claim suspect because it didn't jibe with my own experience working w people using coding assistants, but sometimes there are surprising results so I dug in. The first question: who were these developers in the study getting such poor results?
“We recruited 16 experienced open-source developers to work on 246 real tasks in their own repositories (avg 22k+ stars, 1M+ lines of code).” So they sound like reasonably experienced software devs.
"Developers have a range of experience using AI tools: 93% have prior experience with tools like ChatGPT, but only 44% have experience using Cursor." Uh oh. So they haven't actually used AI coding tools, they've like tried prompting an LLM to write code for them. But that's an entirely different kind of experience, as anyone who has used these tools can tell you.
They claim "a range of experience using AI tools", yet only a single developer of their sixteen had more than a single week of experience using Cursor. They make it look like a range by breaking "less than a week" into <1 hr, 1-10hrs, 10-30hrs, and 30-50hrs of experience. Given the long steep learning curve for effectively using these new AI tools well, this division betrays what I hope is just grossly negligent ignorance about that reality, rather than intentional deception.
Of course, the one developer who did have more than a week of experience was 20% faster instead of 20% slower. The authors note this fact, but then say “We are underpowered to draw strong conclusions from this analysis” and bury it in a figure’s description in an appendix.
If the authors of the paper had made the claim, "We tested experienced developers using AI tools for the first time, and found that at least during the first week they were slower rather than faster" that would have been a modestly interesting finding and true. Alas, that is not the claim they made.
A greater theory of system design: what’s wrong with modernity and post-modernity, how to survive the coming avalanche, and how to fix the major problems we are facing.
In the beginning, we managed the world intuitively. Early human tribes did not set quarterly hunting quotas, did not have rainfall-adjusted targets for average gathering per capita. We lived in the choiceless mode: meaningness.com/choiceless-mode
There are models in the choiceless mode too. If you believe that the hunt succeeds because of the favor of Artemis, this is a model of hunting. Choiceless mode models are simple models made of very complex parts.
Part one: Systems are Models. But what’s a Model?
I promise this gets practical at some point, but first we have to lay some groundwork. If you find the groundwork obvious or you’re willing to just take my word for it, feel free to skip it. But ultimately, without the background you can’t even really understand the proposal.
Without loss of generality, any system can be seen as a graph of parameter nodes connected by edges, where sensory nodes receive inputs that drive internal changes across the graph, and active nodes produce outputs.
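A minimal sketch of the system-as-parameter-graph idea, assuming the simplest possible reading: nodes hold scalar parameters, weighted edges drive changes, sensory nodes take inputs, active nodes hold outputs. The class and node names are illustrative, not part of the thread:

```python
# Sketch: a system as a graph of parameter nodes connected by weighted
# edges. Inputs arrive at sensory nodes; each step, every edge drives a
# change at its destination; active nodes hold the system's outputs.

class ParamGraph:
    def __init__(self):
        self.values = {}   # node name -> parameter value
        self.edges = []    # (src, dst, weight)

    def add_node(self, name, value=0.0):
        self.values[name] = value

    def add_edge(self, src, dst, weight):
        self.edges.append((src, dst, weight))

    def step(self, inputs):
        """Write inputs into sensory nodes, then propagate one step:
        each edge adds weight * source value to its destination."""
        for name, x in inputs.items():
            self.values[name] = x
        new = dict(self.values)
        for src, dst, w in self.edges:
            new[dst] += w * self.values[src]
        self.values = new

g = ParamGraph()
for n in ("eye", "core", "hand"):   # sensory, internal, active
    g.add_node(n)
g.add_edge("eye", "core", 0.5)
g.add_edge("core", "hand", 1.0)

g.step({"eye": 2.0})  # sensory input drives an internal change
g.step({})            # the change propagates out to the active node
print(g.values["hand"])
```

Nothing here depends on the nodes being neurons, employees, or org charts; that generality is the point of "without loss of generality."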