"we have a realtime observability platform built from the ground up"
"we have a deterministic AI that tells you what the root cause is"
"drill down with unlimited cardinality, no lag, and no sampling"
#shitmyvendorsays
when you read between the lines of all these foofy claims, the secret sauce is aggregation.
if they don't sample, then they pre-aggregate.
and if they aggregate, then it's not observability, because o11y hinges on your ability to *ask new questions*, and that means raw events.
i also like "second-level granularity". that means all the requests that happen over the course of a second get smushed into a single number.
a lot of shit happens in a second, y'all.
"i can narrow the bug down to these ten thousand requests" does sound less impressive 🙃
if you're running at scale and you want observability, you're ultimately gonna be doing some kind of dynamic sampling.
it's not scary. turns out most 200 OK requests to most endpoints are bo-ring and you don't care about them.
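a minimal sketch of what "dynamic sampling" can mean — all names and rates here are illustrative, not any particular vendor's implementation. the idea: keep every error, keep a small fraction of the boring successes, and stamp each kept event with its sample rate so counts can still be reconstructed:

```python
import random

def sample_rate(event):
    """Hypothetical policy: keep every error, ~1 in 100 boring 200 OKs."""
    if event["status"] >= 400:
        return 1      # errors are interesting — keep them all
    return 100        # successes are bo-ring — keep ~1%

def maybe_keep(event):
    """Return the event (annotated with its rate) if it survives sampling."""
    rate = sample_rate(event)
    if random.randrange(rate) == 0:
        # each kept event now stands in for `rate` real events,
        # so downstream counts multiply by sample_rate
        event["sample_rate"] = rate
        return event
    return None
```

because every kept event carries its weight, you can still answer "how many requests hit this endpoint?" — you just sum the sample rates instead of counting rows.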
can we make sampling the hawt new bandwagon to jump on? because it really is like magic.
have your cake and eat it too: WITH SAMPLING!