Two ways: you test for all the known failures and the ones you can predict, and then you push it out... and see what happens. Uncomfortable truth.
Kittens, the deploy is when real engineering work *begins*. Everything leading up to that is child's play.
(You do this every day already, it just stings to hear it put so bluntly. Sit with that feeling a bit.)
Your development process extends waaaaaayyyyyy into prod. You should be up to your elbows in prod every goddamn day.
Often this manifests as getting code into prod fast... but easing usage up very slowly, starting with internal users only.
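In code, that can look something like the sketch below: the new path ships to everyone, but it only lights up for internal users at first, then ramps by percentage. This is just a sketch, not gospel; the names (`isInternal`, `rolloutPercent`) are made up for illustration, not from any particular flag library.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// rolloutPercent is the share of external traffic that gets the new code path.
// Start at 0 (internal users only) and bump it by hand, or via a flag service,
// as confidence grows.
var rolloutPercent uint32 = 0

// isInternal stands in for however you actually identify employees/dogfooders.
func isInternal(userID string) bool {
	return strings.HasPrefix(userID, "internal-")
}

// useNewPath decides, per user, whether to take the freshly shipped code path.
// Hashing the user ID keeps the decision sticky across requests.
func useNewPath(userID string) bool {
	if isInternal(userID) {
		return true
	}
	h := fnv.New32a()
	h.Write([]byte(userID))
	return h.Sum32()%100 < rolloutPercent
}

func main() {
	for _, u := range []string{"internal-alice", "customer-9731"} {
		fmt.Printf("%s -> new path? %v\n", u, useNewPath(u))
	}
}
```

The hash is there for stickiness: the same user keeps getting the same answer while you ramp `rolloutPercent` from 0 toward 100.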
This is not a terrifying principle. This is a liberating principle.
It's also where you get the highest leverage for validating your code against real conditions & unknown unknowns, while staying within your SLO error budget.
* canaries
* internal users first
* progressive deploys
* high cardinality tooling (🐝)
* raw event inspection (🐝)
* traffic splitters (sketched in code after this list)
* shadow nodes
* just fucking instrument and look at the code you wrote after you ship it (🐝)
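Picking one off that list, the traffic splitter: here's a rough sketch, a tiny reverse proxy that peels off a small slice of real traffic for the canary deploy and sends the rest to stable. The upstream URLs, the port, and the 5% weight are placeholders I made up, not anyone's real config.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// mustParse keeps the example short; don't ignore errors in real code.
func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

func main() {
	// Placeholder upstreams: the stable deploy and the canary deploy.
	stableProxy := httputil.NewSingleHostReverseProxy(mustParse("http://stable.internal:8080"))
	canaryProxy := httputil.NewSingleHostReverseProxy(mustParse("http://canary.internal:8080"))

	const canaryPercent = 5 // fraction of requests sent to the canary

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < canaryPercent {
			// Tag the response so you can slice canary traffic apart later.
			w.Header().Set("X-Served-By", "canary")
			canaryProxy.ServeHTTP(w, r)
			return
		}
		w.Header().Set("X-Served-By", "stable")
		stableProxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8000", nil))
}
```

Tagging responses (the `X-Served-By` header above) is what lets you slice the canary's behavior apart from stable's once you go look at it.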
Be curious. The overwhelming majority of bugs can and will be spotted IMMEDIATELY after they ship, if the developer practices observability-driven development.
Note that I didn't say "instrument it, and tell ops how to check it." Only the author has the full context, the original intent. Devs, you gotta live in prod too.
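What does "just fucking instrument" look like at the bare minimum? Something like the sketch below: middleware that emits one wide, structured event per request, with high-cardinality fields you can slice by later (user ID, build ID, status, duration). It prints JSON to stdout purely for illustration; in real life you'd ship those events to whatever store you actually query. The field names and the `X-User-ID` header are assumptions, not a prescribed schema.

```go
package main

import (
	"encoding/json"
	"net/http"
	"os"
	"time"
)

// statusRecorder captures the status code the handler writes.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// instrument emits one wide event per request, after the handler runs.
func instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)

		// One event per request, wide enough to slice by any field later.
		json.NewEncoder(os.Stdout).Encode(map[string]any{
			"path":        r.URL.Path,
			"method":      r.Method,
			"status":      rec.status,
			"duration_ms": time.Since(start).Milliseconds(),
			"user_id":     r.Header.Get("X-User-ID"), // hypothetical header
			"build_id":    os.Getenv("BUILD_ID"),     // which deploy served this
		})
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", instrument(mux))
}
```

Then, right after you ship, query those events filtered to the new build ID and your own traffic. That's the "look at the code you wrote after you ship it" part.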
If you reframe your job to extend to user experience, there are a million reasons to be in prod erry day.
As @lyddonb likes to say, every minute an engineer spends in an environment other than prod is a moment spent learning the wrong lessons. (paraphrase)
And where exactly do you think those instincts get forged and honed? Their laptop? Staging?
You are learning the right tools, the right habits, the right instincts for what is fast or slow, dangerous or safe. You are leveling up at real engineering.
Lessons like: running `mysql -e "drop database blah"` is fiiiiinnnne.