It does explain how vulnerable they are to the anti-instrumentation sales pitch.
The idea that you might not have to have anyone who deeply understands your systems? That you can pay them $$$$ and they will autoinstrument your code and tell you what to look at? So, so tempting.
WRT instrumentation: we can make it easier, we can gather a ton of stuff up automatically, we can write libraries to standardize and enable and more.
But auto-instrumentation is exactly as useful and as usable as auto-generated commenting for your code.
Because that's exactly what it is: a record of programmer intent.
Comments: "What I plan to do"
Instrumentation: "What I am doing"
Instrumentation (for observability) is simply commenting your code for interpretation at runtime.
(Instrumentation for metrics is a bit different; it often serves as a translation layer between code and low-level system statistics. But let's keep it simple.)
It's not a bad litmus test. If you don't have to do *some* instrumentation by hand, it probably isn't observability.
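To make the comment analogy concrete, here's a minimal sketch of what hand-written instrumentation looks like. This is plain Python, not any particular vendor's or library's API, and every name in it (`emit`, `charge_customer`, the field names) is hypothetical. The point is the fields: an auto-instrumentation agent can tell you an HTTP call happened, but only the programmer knows which domain facts will matter at debugging time.

```python
import json
import time

def emit(event: str, **fields) -> dict:
    """Emit a structured event: the runtime equivalent of a code comment.

    In real life this would ship to your observability backend;
    here it just prints JSON to stdout.
    """
    record = {"event": event, "ts": time.time(), **fields}
    print(json.dumps(record))
    return record

def charge_customer(customer_id: str, cart_total_cents: int, retries: int) -> None:
    # An auto-instrumentation agent could record that a function ran
    # and how long it took. Only a human knows that cart_total_cents
    # and retries are the fields you'll actually want when this pages
    # you at 3am -- that's the "programmer intent" being recorded.
    emit(
        "charge_customer",
        customer_id=customer_id,
        cart_total_cents=cart_total_cents,
        retries=retries,
    )

charge_customer("cust_123", 4200, retries=1)
```

Comments say "what I plan to do"; the `emit` call says "what I am doing, with these specific values, right now."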
On the subject of senior engineers and their lack of fungibility: this gets more and more true the more senior they get.
When an engineer is growing from junior->intermediate->senior, we want them to become a reasonably well-rounded senior engineer.
This doesn't always happen, sadly, but part of a manager's job is making sure you eat your vegetables.
That you don't just do the one or two things you enjoy and are good at over and over, but are exposed to various parts of the stack, and know enough not to be dangerous.
But once someone is solidly a senior engineer -- once you know what you like and what you don't, and are less of a danger to yourself and others -- then your path is in your hands.
And people tend to become more and more...specific...versions of themselves, as they grow.
Engineers who are 3-5 years out of college are way, WAY more fungible (on average) than engineers who are 10, 20, or 30 years removed.
I don't just mean when it comes to languages and technologies, either. The way you interact with your team+org is probably more important.
• • •
I woke up this am, scanned Twitter from bed, and spent an hour debating whether I could summon the energy to respond to the latest breathless fatwa from Paul Graham.
I fell asleep again before deciding; just as well, because @clairevo said it all more nicely than I would have.
(Is that all I have to say? No, dammit, I guess it is not.)
This is so everything about PG in a nutshell, and why I find him so heartbreakingly frustrating.
The guy is brilliant, and a genius communicator. He's seen more and done more than I ever will, times a thousand.
And he is so, so, so consistently blinkered in certain predictable ways. As a former fundamentalist, my reference point for this sort of conduct is mostly religious.
And YC has always struck me as less an investment vehicle, more a cult dedicated to founder worship.
Important context: that post was quote tweeting this one.
Because I have also seen designers come in saying lovely things about transformation and user centricity, and end up wasting unthinkable quantities of organizational energy and time.
If you're a manager, and you have a boot camp grad designer who comes in the door wanting to transform your org, and you let them, you are committing professional malpractice.
The way you earn the right to transform is by executing consistently, and transforming incrementally.
(by "futureproof" I mean "true 5y from now whether AI is writing 0% or 100% of our lines of code")
And you know what's a great continuous e2e test of your team's prowess at learning and sensemaking?
1. regularly injecting fresh junior talent
2. composing teams of a range of levels
"Is it safe to ask questions" is a low fucking bar. Better: is it normal to ask questions, is it an expected contribution from every person at every level? Does everyone get a chance to explain and talk through their work?
The advance of LLMs and other AI tools is a rare opportunity to radically upend the way we talk and think about software development, and change our industry for the better.
The way we have traditionally talked about software centers on writing code, solving technical problems.
LLMs challenge this -- in a way that can feel scary and disorienting. If the robots are coming for our life's work, what crumbs will be left for you and me?
But I would argue that this has always been a misrepresentation of the work, one which mistakes the trees for the forest.
Something I have been noodling on is, how to describe software development in a way that is both a) true today, and b) relatively futureproof, meaning still true 5 years from now if the optimists have won and most code is no longer written by humans.
A couple days back I went on a whole rant about lazy billionaires punching down and blaming wfh/"work life balance" for Google's long slide from dominance.
I actually want to take this up from the other side, and defend some of the much hated, much-maligned RTO initiatives.
I'm purposely not quote tweeting anyone or any company. This is not about any one example, it's a synthesis of conversations I have had with techies and seen on Twitter.
There seems to be a sweeping consensus amongst engineers that RTO is unjust, unwarranted, and cruel. Period.
And like, I would never argue that RTO is being implemented well across the board. It's hard not to feel cynical when:
* you are being told to RTO despite your team not being there
* you are subject to arbitrary badge checks
* reasonable accommodations are not being made