Noah Kantrowitz · Apr 6, 2022
FAANG promo committees are killing Kubernetes: A Short Thread 🧵
For those outside my teeny tiny social and media bubble, "FAANG" means "big tech company" (originally Facebook+Apple+Amazon+Netflix+Google, now a bit broader), "promo" means job promotion, and "promo committees" are the panels that decide who gets promoted and who doesn't.
The promo process isn't exactly the same everywhere, but most of the FAANG-o-sphere has settled on a similar pattern of:

1. write up a document that says why you should get promoted, your "promo packet"
2. get your manager to submit that to the committee
3. wait
4. maybe profit
Packets have become huge manifestos at this point. You need to describe why your work is important, how you've made money for the company, and how you've been a leader, plus gather internal letters of recommendation from high-ranking peers and, for top titles, often external recommendations too.
This takes a mildly ridiculous amount of time and effort to put together, not to mention the ongoing time to lobby for yourself once the documents are assembled (first to get your management to submit it, then usually lots of time advocating for yourself to Important People).
If you get denied for a promotion, you often can't try again for a while, so the system naturally incentivizes "go big or go home" for these documents. Plus, even if they don't say it, you're competing with everyone else trying to advance, so you need to stand out from the field.
And this is a live-fire environment: going from L6 to L7 at Google is worth roughly $200k/year, and L7 to L8 roughly $400k. Similar patterns hold at other places. This isn't just about a title; ridiculous amounts of money are on the line.
Okay, so what, Machiavellian capitalism is nothing new; what does this have to do with Kubernetes? Because promo committees have, for years now, consistently undervalued the work of full-time Kubernetes contributors, or really open source work more broadly.
Attributable revenue has been taking over as one of the most important factors at most companies. And Kubernetes has very little of that. It's happened gradually, and I don't think this was ever an intended outcome but it's a thing and we have to live with it.
It's too indirect: fixing a bug in kube-apiserver might retain a GCP customer or avoid a costly Apple services outage, but can you put a dollar value on that? How much is CI stability worth? Or community happiness?
And then add the time cost on top of that. "FOSS maintainers are overloaded" should not be news to anyone, but now add 20 hours a week of campaigning to other high-level folks to "build buzz" for your work and let me know how that goes.
Amazing people put their blood, sweat, and tears into Kubernetes and projects like it, big wonderful impossible projects that make the world better. But if they can't take care of their own needs, eventually that will have to win out.
People have kids, mortgages, student loans, and on top of that most of these companies see not getting promoted as a failing. There is no prize for doing the right thing when everyone tells you otherwise, trust me, I checked.
So people move on: sometimes to other teams at the same FAANG that work on more prestigious (read: more billable revenue) projects, sometimes to small startups to try and monetize their niche expertise, and many just burn out from the stress of it all.
Kubernetes looms large in the industry but there is no Kubernetes Inc. Every person who works on it either does it in increasingly rare spare time (like me) or works for one of our vendors, mostly FAANG-y companies.
We, as a project, can't control hiring or really even what people work on. Some high-level contributors have a lot of their time left undirected, to work on things decided on by their SIGs, but that's increasingly rare. And as people leave, there is no way to replace them.
So here we are, a rapidly shrinking pool of maintainers mostly working on really esoteric features because someone worked out how to connect them to revenue numbers. This is not a sustainable situation.
This is not the only problem facing Kubernetes sustainability, but I'm convinced it is by far the biggest one: a huge external incentive pressure that we have zero control over. I honestly don't see a solution here.
If you are on a FAANG promo committee, think long and hard about how you've valued this kind of work, please. I don't know what to say to the rest of us though. I normally bang the drum of UBI and grant programs but neither is close to enough to sustain something at this scale.
And I don't think there is any world in which these companies would give up sovereignty over their people for the greater good. In fact a trend lately is to move dev time to single-vendor projects like Istio so the company doesn't have to share control over direction and goals.
And the sad thing here is this work *does* benefit the companies. I know there is a contingent who says "we never should have open-sourced K8s, look at all that money we left on the table", but that's simply not how reality works.
All the revenue across hundreds or thousands of businesses can only exist because Kubernetes is a big tent, a community-governed standard that everyone can build on together. Fragmenting back into little vendor-specific toolkits is against their own interests.
I guess this wasn't a short thread in the end, but I really hope we can find a way to reverse this trend. I believe strongly in Kubernetes both as a sound technical base to build on for the future and as an amazing community I want to see continue.
tl;dr: big tech promotion incentives are pushing people away from full-time contribution to multi-vendor open source projects, so the work is slowing down and I don't know how to fix it.

More from @kantrn

Jan 22, 2021
Some thoughts on Elastic and the SSPL: this shit is hard. I agree with other voices that this was counterproductive and a betrayal of the community but I get it.
To better explain things, I think we need to split Elastic into two pieces: the steward of an immensely popular project, and a VC-backed startup providing hosting services. In their role as steward, the relicensing leaves their community in a terrible place.
Many don't know for sure if using ES is legally safe; others don't know if they should jump to the fork that Elastic 100% knew was inevitable. That's a betrayal of their responsibilities to the community.
Oct 1, 2020
I've seen a bunch of "it's not that much spam, just close the PRs, being welcoming is important" takes about Hacktoberfest so I want to share why it makes me so cranky. Background: I'm a long-time maintainer, currently mostly on Kubernetes (a very big and visible project).
It's not the individual spam PRs that bug me. Yes, it's annoying to have the noise, and it's even more annoying that we have to pay for the CI time every time someone opens one, but we could survive that. If it was a bunch of new users really wanting to contribute, I would make it work.
But it's not that; it's a marketing campaign for DigitalOcean, a company with $100MM/yr in revenue. They want to send out as many shirts as possible because each one is a walking ad for DO. This is how all swag works, we get that, right? Companies don't do swag giveaways for fun.
Aug 25, 2020
So, Docker Inc has finally updated the FAQ for their previously announced service limits. And I'm not going to lie, it's pretty brutal. You should consider any (unpaid) use of Docker Hub to be an operational risk going forward.
The anonymous pull limits have been clarified to be per IP, so anyone running container tooling behind a NAT should expect to hit them very quickly (100 pulls per 6 hours). And image expiry will apparently be per-version.
The latter has me very worried that old versions of FOSS tools will quickly age out, potentially destroying quite a bit of archived history. There is a nebulous answer that FOSS projects can get some kind of special plan (docker.com/community/open…).
Aug 13, 2020
So putting aside the "I am altering the deal" of deleting images that haven't been used in 6 months, Docker added a "Data Transfer" limit section to the pricing page. I haven't seen this mentioned anywhere else. It wasn't there in the last Wayback scan from July. Is this new?
And if it is new, what does it mean? Is that 100 pulls per IP? Per image? Because it reads like that's per image and if so that makes Hub 100% not viable as a public resource for open source projects.
The new ToS is similarly vague:

"These limitations include but are not limited to quantity of [...] pull rate (defined as the number of requests per hour to download data from an account on Docker Hub)"
Aug 1, 2020
It's time for some Friday night thoughtleading. I see a ton of people asking for help on Slack/SO/Twitter/etc with a Kubernetes webapp where each user gets their own container to do something in. Please don't do this.
In the very bad cases, some folks want to use Kubernetes as executable sandboxing. This isn't 100% impossible, but it's very unrealistic for almost everyone. Container escape vulnerabilities happen, cross-service escalation attacks happen, bitcoin miners happen.
But even that aside, this is a really terrible application architecture. It puts starting a new container in the most important hot path of your app, it leads to structural sprawl faster than you can imagine, and it makes development really hard since you have all this state.
Apr 28, 2020
Since I just spent 2 days explaining this to a Slack channel, I guess it isn't well known:

Probably don't use CPU limits in Kubernetes.

The Linux cpu quota system has had intermittent bugs basically forever and unless you do a lot of recon, it's hard to know if it's busted.
Because of how the CFS scheduler works, they are rarely needed. Even under load, you'll get generally reasonable behavior if your CPU request values are correct because those are set as your cpu.shares for the cgroup.
The big exception is reining in bad code that burns CPU cycles it doesn't need. Usually this means software with a spinlock or busy wait loop, where you can't actually fix the code so you just need to put a limit on how much CPU it can eat.
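
A minimal sketch of what that looks like in practice, assuming the k8s.io/api and k8s.io/apimachinery Go modules are available (the name, image, and values below are placeholders): a container spec that sets CPU and memory requests but leaves the CPU limit unset, keeping only a memory limit.

// Sketch: requests without a CPU limit, built with the Kubernetes Go types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:  "app",                      // placeholder name
		Image: "registry.example.com/app", // placeholder image
		Resources: corev1.ResourceRequirements{
			// The CPU request becomes the cgroup's scheduling weight
			// (cpu.shares), giving proportional sharing under contention
			// without hard throttling.
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
			// No CPU limit here. A memory limit is still set because
			// memory, unlike CPU, is not a compressible resource.
			Limits: corev1.ResourceList{
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
		},
	}
	fmt.Printf("%+v\n", container)
}

Only for the exception above, code that burns CPU it doesn't need and can't be fixed, would you add a CPU entry under Limits as well.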
