Kubernetes Borg/Omega history topic 7: The Kubernetes Resource Model: why we (eventually) made it uniform and declarative. A topic even deeper than watch. More details can be found here: github.com/kubernetes/com…
Like most internal Google services, Borgmaster had an imperative, unversioned, monolithic RPC API built using the precursor to grpc.io, Stubby. It exposed an ad hoc collection of operations, like CreateJob, LookupPackage, StartAllocUpdate, and SetMachineAttributes.
Hundreds to thousands of clients interfaced with this API. Many of them were asynchronous controllers or monitoring agents, as discussed in previous threads; there was also a simple command-line tool and two widely used configuration CLIs.
The APIs were manually mapped into the two Turing-complete configuration languages, and there was also a hand-crafted diff library for comparing the previous and new desired states. The sets of concepts, RPC operations, and configurable resource types were not easily extended.
Some extensions of the core functionality, such as for batch scheduling and vertical autoscaling, used the Borgmaster as a configuration store by manually adding substructures stored with Job objects, which were then retrieved by polling Jobs.
Others, such as for load balancing, built independent services with their own service APIs and configuration mechanisms. This enabled teams to evolve their services independently, but created a heterogeneous, inconsistent management surface.
Omega supported an extensible object model, and @davidopp had proposed putting an API in front of the persistent store, as we later did in Kubernetes, but it wasn't declarative. Separate work on a common configuration store was discontinued as Google Cloud became the focus.
GCP comprised independent services, with some common standards, such as the org hierarchy and authz. They used REST APIs, like the rest of the industry; gRPC didn't exist yet. But GCP’s APIs were not natively declarative, and Terraform didn’t exist, either.
@jbeda proposed layering an aggregated config store/service with consistent, declarative CRUD REST APIs over the underlying GCP and third-party service APIs. That idea later evolved, more or less, into Deployment Manager.
We folded learnings from these 5+ systems into the Kubernetes Resource Model, which now supports arbitrarily many built-in types, aggregated APIs, and types backed by centralized storage (CRDs), and can be used to configure 1st-party and 3rd-party services, including GCP.
KRM is consistent and declarative. Metadata and verbs are uniform. Spec and status are distinctly separated. Resource identifiers, modeled closely after Borgmaster’s (issues.k8s.io/148), provide declarative names. Label selectors enable declarative sets.
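To make that uniformity concrete, here is a schematic Go sketch (not the actual Kubernetes type definitions; Widget and its spec/status are hypothetical names) of the shape every KRM resource shares: uniform metadata, a desired-state spec, an observed-state status, and labels matched by an equality-based selector.

```go
// Schematic only: illustrates the uniform KRM object shape, not real
// Kubernetes source. Every resource carries the same metadata and splits
// desired state (spec) from observed state (status).
package main

import "fmt"

// ObjectMeta: the uniform metadata every resource carries.
type ObjectMeta struct {
	Name   string
	Labels map[string]string
}

// Widget is a hypothetical resource type; real types (Pod, ReplicaSet, ...)
// follow the same layout.
type Widget struct {
	APIVersion string
	Kind       string
	Metadata   ObjectMeta
	Spec       WidgetSpec   // desired state, written by clients
	Status     WidgetStatus // observed state, written by controllers
}

type WidgetSpec struct{ Replicas int }
type WidgetStatus struct{ ReadyReplicas int }

// matchesSelector is an equality-based label selector: every key/value in
// the selector must appear in the object's labels.
func matchesSelector(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	w := Widget{
		APIVersion: "example.com/v1",
		Kind:       "Widget",
		Metadata:   ObjectMeta{Name: "demo", Labels: map[string]string{"app": "demo"}},
		Spec:       WidgetSpec{Replicas: 3},
	}
	// A declarative set: all objects whose labels match the selector.
	fmt.Println(matchesSelector(map[string]string{"app": "demo"}, w.Metadata.Labels)) // true
}
```

Because the metadata and verbs are the same for every type, generic tooling can operate on resource types it has never seen before.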
For the most part, controllers know which fields to propagate from one resource instance to another and wait gracefully on declarative object (rather than field) references, without assuming referential integrity, which enables relaxed operation ordering.
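As a rough illustration of that relaxed ordering, here is a hedged sketch (hypothetical names, with an in-memory map standing in for reads from the API server) of a controller that resolves a declarative object reference by name and simply requeues while the referenced object doesn't exist yet, rather than treating the dangling reference as an error.

```go
// Sketch of graceful waiting on an object reference. Names are hypothetical;
// "store" stands in for the API server's view of another resource type.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

var store = map[string]string{} // e.g. store["app-config"] = "..."

func get(name string) (string, error) {
	v, ok := store[name]
	if !ok {
		return "", errNotFound
	}
	return v, nil
}

// reconcile handles one object that refers to another object by name.
// It returns requeue=true when the referenced object is missing, so the
// two objects can be created in either order.
func reconcile(configRef string) (requeue bool, err error) {
	cfg, err := get(configRef)
	if errors.Is(err, errNotFound) {
		return true, nil // wait and retry; no referential integrity assumed
	}
	if err != nil {
		return false, err
	}
	fmt.Println("propagating fields from referenced object:", cfg)
	return false, nil
}

func main() {
	requeue, _ := reconcile("app-config")
	fmt.Println("requeue:", requeue) // true until "app-config" is created
}
```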
There are some gaps in the model (e.g., issues.k8s.io/34363, issues.k8s.io/30698, issues.k8s.io/1698, issues.k8s.io/22675), but for the most part it facilitates generic operations on arbitrary resource types.
In the next thread, I’ll cover more about configuration itself, such as the origin of kubectl apply.
BTW, when I was digging through old docs/decks, I found a diagram from the Dec 2013 API proposal. Sunit->Pod, SunitPrototype->PodTemplate, Replicate->ReplicaSet, Autoscale->HorizontalPodAutoscaler.