Short personal post-mortem of how a small code change I made in Spinnaker caused Netflix to run with ~10k extra AWS instances overnight.
Most deploys at Netflix use the red/black strategy, which causes us to temporarily use twice the needed capacity as the old and new server groups run side by side.
If an issue causes this phase to last a long time, there is a danger that both server groups start scaling down (as each is receiving 50% of the traffic), so that by the time we disable the old server group, the new one is underprovisioned.
To avoid this risk, we prevent scale-ins by "pinning" server groups, i.e. we set their min size to their desired size. This means the autoscaler won't be able to set the desired size any lower, and instances won't be destroyed.
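A minimal sketch of what pinning amounts to (hypothetical names and model, not Spinnaker's actual implementation, which drives the cloud provider's autoscaling API):

```python
from dataclasses import dataclass

@dataclass
class ServerGroup:
    """Hypothetical model of an autoscaling group's size bounds."""
    min_size: int
    desired: int
    max_size: int

def pin(group: ServerGroup) -> int:
    """Pin the group: raise min to the current desired size so the
    autoscaler cannot set desired any lower (no scale-in).
    Returns the original min so it can be restored later."""
    original_min = group.min_size
    group.min_size = group.desired
    return original_min

def unpin(group: ServerGroup, original_min: int) -> None:
    """Restore the pre-pin min size, allowing scale-in again."""
    group.min_size = original_min
```

For example, a group with min=3 and desired=10 gets its min raised to 10 for the duration of the deploy, then restored to 3 once the deploy completes.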
For some reason, we currently only pin the new server group, not the old one. Which means the old one can scale down, and if we need to roll back or cancel the deployment there is a risk that it will be underprovisioned.
I decided to address this, and was surprised to see we already supported this for some deployment strategies. Adding redblack to the list seems easy enough! (and not particularly risky...)
And since the project is open source, you can see the offending code here (sweat intensifies 😅)…
Oddly enough, my testing focused on the corner cases (what happens if the deploy fails? What happens if the deploy fails *because of a timeout*?...), but I "forgot" to test the happy path.
Because of an ordering of operations issue, this change caused 🔥every🔥 new server group to stay pinned at the end of the deployment. Oops...
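The real cause is in the linked Spinnaker code; purely as an illustration (hypothetical function and fields, not the actual logic), this is the shape of bug an ordering mistake like this produces — the min size to restore is captured after the group has already been pinned:

```python
def finish_deploy(group: dict) -> None:
    """Hypothetical sketch (not the actual Spinnaker code) of an
    order-of-operations bug: the min size to restore is read AFTER
    the group was already pinned, so 'unpinning' writes back the
    pinned value and the new server group stays pinned."""
    saved_min = group["min"]   # bug: group is already pinned here,
                               # so saved_min == desired, not the
                               # original min we meant to restore
    # ... disable the old server group, finish the deploy ...
    group["min"] = saved_min   # no-op: min stays equal to desired
```

A group that started at min=3 and was pinned to desired=10 during the deploy ends the deploy with min still at 10 — no error, no exception, just a cluster that can never scale back in.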
Interestingly, nothing blows up because of this. There is no error, no exception, deployments just seem to proceed normally, and minutiae like cluster min sizes are not normally something people pay attention to.
Because of some other compounding factors (like me being sick and dealing with sick kids and a broken arm at home when this made its way to prod...) the change was live for over 24h before we realized the problem and rolled back.
As a result, we had about 600 server groups deployed with about 10k extra instances that wouldn't be able to scale down properly. It was 5pm by the time we realized the magnitude of the impact, and because of the lack of imminent risk we stayed in this configuration overnight.
This is what it looks like when a typical cluster is pinned and can't scale down (the green area is instances up, the blue line is the min size, and the black line is instances up the previous week).
Thanks everyone who helped me by reporting the issue, investigating, rolling back, brainstorming, measuring the impact, communicating with service owners, and ultimately remediating the problem!
That would be you @ajordens, @erikmunson, @aaronblohowiak's Demand Engineering team, our CORE team, @joshgord and gang from Edge Engineering...! 👏
Thread by @dreynaud (expensive paper weight)