📉 Monitoring is for running and understanding other people's code (aka "your infrastructure")
📈 Observability is for running and understanding *your* code -- the code you write, change and ship every day; the code that solves your core business problems.
Questions monitoring tools (like Datadog, SignalFx) can answer:
* When will my disk fill up?
* Am I running out of capacity in $(cluster)?
* Did the % free memory drop after my last deploy?
* What are the average, 90th, and 99th percentile latencies per service? (sketch below)
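A minimal sketch of what that last question looks like under the hood, in plain Python rather than any vendor's query language (the service names and latency samples here are invented):

```python
from statistics import mean, quantiles

# Hypothetical pre-collected latency samples (ms), keyed by service.
# In a real metrics pipeline these arrive pre-aggregated; the point is
# that the output is a handful of summary numbers per service.
latency_ms = {
    "api":     [12, 15, 14, 210, 16, 13, 18, 900, 17, 14],
    "billing": [40, 42, 38, 41, 39, 43, 1200, 40, 44, 41],
}

for service, samples in latency_ms.items():
    cuts = quantiles(samples, n=100)   # 99 percentile cut points
    print(f"{service}: avg={mean(samples):.0f}ms "
          f"p90={cuts[89]:.0f}ms p99={cuts[98]:.0f}ms")
```

Note what the aggregate throws away: you learn *that* p99 is bad, but not *which* requests made it bad -- which is exactly the gap the event-based questions below are built to close.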
Questions observability tools (like Honeycomb, Lightstep) can answer:
* What (1..many) things do all the errors in that spike have in common?
* How many exports per second is $app doing, how large are they, and how do they compare to the average export size in kB?
* Break down by app and sort by export size: what are your top 3 export users, and what is their combined throughput compared to the overall throughput? (see the sketch after this list)
* Are the errors evenly distributed across workers, AZs, instance types, software versions, build IDs, shards?
* Are the timeouts happening for all our users, or only the test users, or only our top users by write volume, etc?
* For all of the deliveries that failed over a specified time period, what are the top three reasons they failed, and what % of failures were from a single app?
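To make the contrast concrete, here is a minimal sketch of the event-based version, again in plain Python rather than any real o11y tool's query language. Every request becomes one wide, structured event, and each question above is just a different slice over the same events, decided after the fact. All field names and values are invented for illustration:

```python
from collections import Counter, defaultdict

# One wide event per request, with as many fields as you care to record.
events = [
    {"app": "acme",  "export_kb": 512, "error": "timeout", "az": "us-east-1a"},
    {"app": "acme",  "export_kb": 640, "error": "timeout", "az": "us-east-1a"},
    {"app": "beta",  "export_kb": 8,   "error": None,      "az": "us-east-1b"},
    {"app": "gamma", "export_kb": 64,  "error": "quota",   "az": "us-east-1c"},
]

# "Break down by app and sort by export size": top export users vs. overall.
kb_by_app = defaultdict(int)
for e in events:
    kb_by_app[e["app"]] += e["export_kb"]
total_kb = sum(kb_by_app.values())
for app, kb in sorted(kb_by_app.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{app}: {kb} kB ({kb / total_kb:.0%} of total throughput)")

# "What do all the errors in that spike have in common?"
errors = [e for e in events if e["error"]]
for field in ("app", "az", "error"):
    print(field, Counter(e[field] for e in errors).most_common(3))
```

The crucial design choice: nothing here was pre-aggregated, so any field recorded on the event (user, build ID, shard, AZ) can become tomorrow's breakdown without shipping new instrumentation.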
Running infrastructure means running black boxes. You may have some insight into them (god, I hope so), but you don't have the ability to tweak their instrumentation, and you certainly aren't shipping code changes to them every day.
And when it comes to monitoring and understanding your infrastructure, metrics-based monitoring tools that let you understand performance in aggregate are the right tools for the job.
Especially when workloads are high-throughput with little differentiation (routers, etc.), metrics are king.
When it comes to aligning developer perspective with user experience to provide core business value, though: event-based observability tools are the only way to get at the information you need.
You need the flexibility and precision of a scalpel, not an axe.
For an expert yet beginner-friendly (and entertaining!) intro to observability for business problems, check out this talk from @seebails -- observe2020.io/2020/03/chris-…
And to continue my killjoy track record of stiffly caring about technical definitions for technical terms: if you'd like to read more, please read my three-year history of observability in the software domain, aka how we got here and where we're going:
Annnd -- should you be in the mood to build an o11y tool in-house, or want to argue with me about why Datadog and SignalFx and their ilk are definitely not observability tools (or about what constitutes an o11y tool), do read this:
I devoured the recap @martinfowler posted from the Deer Valley summit. Loved it.
But the notes suggest we may be replicating a perennial blind spot in software engineering: treating code as the outcome and production as an afterthought.
Formal methods and test suites are like flight simulators.
Flight simulators are genuinely impressive. Airlines use them, pilots log real hours, they catch real failure modes. Nobody's saying skip the simulator.
But a pilot who has only flown simulators is not a pilot.
I woke up this a.m., scanned Twitter from bed, and spent an hour debating whether I had the energy (or the stomach) to respond to the latest breathless fatwa from Paul Graham.
I fell asleep again before deciding; just as well, because @clairevo said it all more nicely than I would have.
(Is that all I have to say? No, dammit, I guess it is not.)
This is everything about PG in a nutshell, and why I find him so heartbreakingly frustrating.
The guy is brilliant, and a genius communicator. He's seen more and done more than I ever will, times a thousand.
And he is so, so, so consistently blinkered in certain predictable ways. As a former fundamentalist, my reference point for this sort of conduct is mostly religious.
And YC has always struck me as less an investment vehicle and more a cult dedicated to founder worship.
Important context: that post was quote tweeting this one.
Because I have also seen designers come in saying lovely things about transformation and user-centricity, and end up wasting unthinkable quantities of organizational energy and time.
If you're a manager, and you have a boot camp grad designer who comes in the door wanting to transform your org, and you let them, you are committing professional malpractice.
The way you earn the right to transform is by executing consistently, and transforming incrementally.
(by "futureproof" I mean "true 5y from now whether AI is writing 0% or 100% our lines of code)
And you know what's a great continuous e2e test of your team's prowess at learning and sensemaking?
1, regularly injecting fresh junior talent
2, composing teams of a range of levels
"Is it safe to ask questions" is a low fucking bar. Better: is it normal to ask questions, is it an expected contribution from every person at every level? Does everyone get a chance to explain and talk through their work?
The advance of LLMs and other AI tools is a rare opportunity to radically upend the way we talk and think about software development, and change our industry for the better.
The way we have traditionally talked about software centers on writing code, solving technical problems.
LLMs challenge this -- in a way that can feel scary and disorienting. If the robots are coming for our life's work, what crumbs will be left for you and me?
But I would argue that this has always been a misrepresentation of the work, one which confuses the trees for the forest.
Something I have been noodling on is, how to describe software development in a way that is both a) true today, and b) relatively futureproof, meaning still true 5 years from now if the optimists have won and most code is no longer written by humans.