Monitoring is for running and understanding other people's code (aka "your infrastructure").
Observability is for running and understanding *your* code -- the code you write, change, and ship every day; the code that solves your core business problems.
Questions monitoring tools (like datadog, signalfx) can answer:
* When will my disk fill up?
* Am I running out of capacity in $(cluster)?
* Did the % free memory drop after my last deploy?
* What is the avg, 90th, 99th percentile latency per service?
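Metrics tools answer that last question by pre-aggregating raw samples into summary statistics per time window. A minimal sketch of that kind of aggregation, using the stdlib and entirely made-up per-service latency numbers (service names and samples are illustrative, not from any real system):

```python
import statistics

def latency_summary(samples):
    """Collapse raw latency samples (ms) for one window into avg/p90/p99,
    the way a metrics pipeline pre-aggregates before storage."""
    cuts = statistics.quantiles(sorted(samples), n=100, method="inclusive")
    return {
        "avg": statistics.fmean(samples),
        "p90": cuts[89],  # 90th percentile cut point
        "p99": cuts[98],  # 99th percentile cut point
    }

# Hypothetical samples for one scrape window, keyed by service.
latency_ms = {
    "checkout": [12, 15, 14, 200, 18, 16, 13, 17, 950, 14],
    "search": [5, 7, 6, 6, 8, 120, 7, 5, 6, 9],
}

for service, samples in latency_ms.items():
    s = latency_summary(samples)
    print(f"{service}: avg={s['avg']:.1f}ms p90={s['p90']:.1f}ms p99={s['p99']:.1f}ms")
```

Note what's lost in the collapse: once samples become three numbers per window, you can no longer ask which requests were slow, or for whom -- which is exactly the gap the observability questions below poke at.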
Questions observability tools (like honeycomb, lightstep) can answer:
* What (1..many) things do all the errors in that spike have in common?
* How many exports per second is $app doing, and how large are they, and how does this compare to the average export size in kB?
* Break down by app and sort by export size: what are your top 3 export users, and what is the sum of their total throughput compared to the overall throughput?
* Are the errors evenly distributed across workers, AZs, instance types, software versions, build_id versions, shards?
* Are the timeouts happening for all our users, or only the test users, or only our top users by write volume, etc?
* For all of the deliveries that failed over a specified time period, what are the top three reasons they failed, and what % of failures were from a single app?
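Event-based tools answer questions like that last one by keeping raw, wide events and slicing them on arbitrary dimensions at query time. A rough sketch of the idea in plain Python, over hypothetical failed-delivery events (app names, reasons, and counts are invented for illustration):

```python
from collections import Counter

# One wide event per failed delivery, as an o11y tool might store them.
failed = [
    {"app": "exporter-a", "reason": "timeout"},
    {"app": "exporter-a", "reason": "timeout"},
    {"app": "exporter-b", "reason": "quota_exceeded"},
    {"app": "exporter-a", "reason": "timeout"},
    {"app": "exporter-c", "reason": "bad_payload"},
    {"app": "exporter-a", "reason": "quota_exceeded"},
]

# Top three failure reasons over the window.
by_reason = Counter(e["reason"] for e in failed)
print(by_reason.most_common(3))

# What % of all failures came from a single app?
by_app = Counter(e["app"] for e in failed)
worst_app, count = by_app.most_common(1)[0]
print(f"{worst_app}: {100 * count / len(failed):.0f}% of failures")
```

The point is the shape of the query, not the code: because every event still carries every field, the "break down by X, filter by Y" question can be asked after the fact, without having pre-declared a metric for it.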
Running infrastructure means running black boxes. You may have some insight into them (god i hope so) but you don't have the ability to tweak their instrumentation, and you certainly aren't shipping code changes every day.
And when it comes to monitoring and understanding your infrastructure, metrics-based monitoring tools that let you understand performance in aggregate are the right tool for the job.
Especially when workloads are high-throughput with little differentiation (routers, etc.), metrics are king.
When it comes to aligning developer perspective with user experience to provide core business value, though: event-based observability tools are the only way to get at the information you need.
You need the flexibility and precision of a scalpel, not an axe.
To see an expert yet beginner-friendly (and entertaining!) intro to observability for business problems, check out this talk from @seebails -- observe2020.io/2020/03/chris-…
And to continue my killjoy track record of stiffly caring about technical definitions for technical terms: if you'd like to read more, please read my three-year history of observability in the software domain, aka how we got here and where we're going:
annnd -- should you be in the mood to build an o11y tool in-house, or want to argue with me about why datadog and signalfx and their ilk are definitely not observability tools (or about what constitutes an o11y tool), do read this:
It felt, to me, like those participating were stepping very cautiously around a few of the third rails Jaana just tripped over.
"Work-life balance"
"Working hard vs working smart"
"Meritocracy"
The intersection of company tech cultures and expectations and performance.
These are hard, complicated topics, and there are some very good reasons for speaking carefully. People can pick up a sentence and run in the wrong direction with it, and do a lot of damage.
I have abandoned god only knows how many drafts on this topic, for that reason.
The question is, how can you interview and screen for engineers who care about the business and want to help build it, engineers who respect sales, marketing and other functions as their peers and equals?
It's a great question!! I have ideas, but would love to hear from others.
I said "question", but there are actually two: 1) how to hire engineers who are motivated by solving business problems and 2) aren't engineering supremacists.
Pro tip: any time you see someone confidently opining on what all good CTOs know or do, it is ✨bullshit✨
There is no stock template for CTO, or default set of expectations or responsibilities. It stands alone among the C-levels in that good ones are all over the freaking map.
This may not hold true for publicly traded companies. But in my experience, a good CTO can be:
* over all of R&D
* over engineering, like a VP eng
* like a principal eng or architect
* team lead for special projects
* a great senior programmer
(continued...)
A CTO can also be:
* a great communicator and popularizer
* on the road as a devrel
* a field CTO, whose authority opens doors to big customers
* a product visionary who sweats the details
* more of a cofounder than technical contributor, sharing "company-running" duties w/CEO
Yeah, this is a fair caveat. If you're already a decent senior engineer and manager, it's kind of possible to split your attention between managing a small team and writing code.
But you aren't going to improve at either skill set. Those cycles get devoured by context switching.
Tech lead managers ("TLMs") are a mistake we make over and over in this industry. I've written about this a bit, but the definitive post was written by @Lethain.
My coworker @suchwinston wrote a terrific piece on burnout before the break:
There's a reason why burnout and work/life balance are such evergreen topics, and it's not actually because the world is so terribly harsh and everyone is criminally overworked. honeycomb.io/blog/product-m…
Just to be clear: some places *are* awful, and some people *are* criminally overworked. But burnout and work/life balance are an issue for everyone, not just those people.
I think this is because there is no real "solution". Each of us has to find and maintain our own equilibrium.
It takes a lot of hard work to become good at technology, and a lot more hard work to maintain your edge in a fast-changing industry.
I don't know of anyone for whom this is _easy_. None of this is remotely natural, from an evolutionary perspective.