I'm going to (try and) livetweet @ellenkorbes' talk at the @D2iQ online event; it'll be about "the Quest for the Fastest Deployment Time".

It'll be in this thread :)
One of the pain points in k8s is the length of the feedback loop: make changes to the code ... save ... {% build images ... push them ... get them on the cluster ... update stuff %} ... see the results.

We'd like to make everything between the {% %} transparent; that's the goal.
"The best indicator of a healthy development workflow is a short feedback loop." @ellenkorbes

... Unfortunately, as their poll below shows, for almost half of the people who answered the question, that's close to 30 minutes. We need to do better!
"You should not use CI in development."

(To clarify: while developing, we shouldn't save-waitforCI-checkresults, because that's just too damn slow.)

CI solves a different class of problems.
"Run the CI when you're done with the development part of the workflow; when you're ready to push things; not as part of your inner dev loop."

💯
When developing something that has multiple services (e.g. a frontend and a database... Yes, multiple starts at 2!), we should use an "MDX tool".

(MDX stands for Multiservice Development eXperience. Think something like Compose, but... for the k8s world!)
This other presentation by @ellenkorbes has a treasure trove of information about these tools on k8s (and other tools as well); if you're a developer working on Kubernetes, I strongly recommend checking it out!

Now @ellenkorbes will demo Tilt (tilt.dev) to see how to achieve a really short development loop in Kubernetes. That's the part that is Very Relevant To My Interests.

This is the repo with the sample code for the app that they'll use:

github.com/windmilleng/en…
(Apparently that app is part of the software packages that will be used by Skynet to take over the world in some not-so-distant future; but I'll let that slide)
Alright, we start with a very basic Dockerfile; that feels very familiar, nothing fancy so far:
Then we use Tilt to declare that this Dockerfile should be used to build an image. So far so good
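(Not from the talk, just my own rough idea of what that Tiltfile line looks like; the image name and paths are placeholders I made up:)

```python
# Tiltfile (Starlark): build a container image from the local Dockerfile.
# 'example-go-app' and the paths are hypothetical placeholders.
docker_build('example-go-app', '.', dockerfile='Dockerfile')
```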
Then we tell Tilt that we have some k8s resources ... These are YAML files for Deployments and other k8s resources. Nothing fancy here either.
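(Again my own sketch, with made-up file names, just to show the shape of it:)

```python
# Tiltfile: point Tilt at the Kubernetes manifests to deploy.
# The YAML path, port, and workload name are hypothetical placeholders;
# the name passed to k8s_resource must match the Deployment in the YAML.
k8s_yaml('k8s/deployment.yaml')
k8s_resource('example-go-app', port_forwards=8080)
```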
Then there is some extra stuff to actually time how long it takes from code change to code running on the cluster; but I'm not gonna include that yet (it's just L's benchmarking code)
So with this basic setup ...

Build time was about 20s, so we get a 20s loop when working locally.

However, when working with a remote cluster, it takes almost 1 minute (because of push/pull time).
(OK, there is a trick here using dependency vendoring to bring down the build time, and I'm not familiar with this so I'll check it out offline after! But it brings significant improvements. Sorry I wasn't able to screenshot the table with the results fast enough 😬😅)
The next trick is to remove debugging artifacts, to try to bring down the size of the image. Hey, image size reduction, I know this 😎

(Shameless plug for my series of blog posts about optimizing image size: ardanlabs.com/blog/2020/02/d…)
Current ladder:
Next step: use a compiler cache to make the build faster.

Preliminary testing brings the build time from "a few seconds" to "almost nothing".

This saves some time in the local case; however, that same cache didn't work (and actually slowed things down) in the remote case.

Also, ...
"This is a convoluted setup; the complexity is not worth the gain".

But can we use the compiler cache ... without having to copy it around?

We move to a whole new class of techniques involving "hot reload", where instead of creating new containers, we keep the same container.
Tilt gives us live_update/sync primitives so that our local code can be seamlessly brought into (potentially remote) containers.

(Garden calls this "hot reload"; Skaffold calls this "file sync"; there are also tools entirely dedicated to this operation, like ksync!)
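(Here's my own hedged sketch of what that looks like in a Tiltfile; image name, paths, and build command are placeholders, not the exact setup from the demo. When a synced file changes, Tilt copies it into the running container and reruns the command there, instead of rebuilding and redeploying the image.)

```python
# Tiltfile: live_update keeps the same container and syncs changes into it.
# Image name, paths, and build command are hypothetical placeholders.
docker_build(
    'example-go-app',
    '.',
    live_update=[
        # Copy changed source files straight into the running container...
        sync('.', '/app'),
        # ...then recompile inside the container. The Go build cache that
        # lives in the container persists between runs, which keeps this fast.
        run('cd /app && go build -o server .'),
    ],
)
```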
Pro tip: use "entr" to monitor files and automatically run an action when files change!

(I know that one! I'm using that when working on my slides, so that when I change a markdown file, the HTML version gets recompiled immediately :))
Improvements are *huge*.

Now it takes 2 seconds to build+restart locally; 2.5s to build+restart on a remote cluster.

*two point five friggin seconds*

*TWO POINT FIVE SECONDS* to get new code up and running in a remote k8s cluster.

🤯🌩️⚡️
There are downsides, though. Since we're compiling code in the container now, we need:

- the build tooling in the container
- the compute resources to actually do the compilation

So ... can we work around this?

(Yes)
The solution here is to sync just the binary.

So instead of throwing code at the remote container and building it there, throw the binary.

This gives us the best results locally; however, remotely it's a bit slower, because binaries are big (bigger than their source code).
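(Rough sketch of that pattern, with made-up names and paths: a local_resource compiles on the host, and live_update only syncs the resulting binary.)

```python
# Tiltfile: build the Go binary on the host, sync only the binary.
# Names, paths, and the restart mechanism are hypothetical placeholders.

# Recompile locally (for Linux containers) whenever the source changes.
local_resource(
    'compile',
    'CGO_ENABLED=0 GOOS=linux go build -o build/server .',
    deps=['main.go', 'go.mod'],
)

docker_build(
    'example-go-app',
    '.',
    live_update=[
        # Only the compiled binary travels to the (possibly remote) container.
        sync('build/server', '/app/server'),
        # Something still needs to restart the process after the sync,
        # e.g. Tilt's restart_process extension or a wrapper entrypoint.
    ],
)
```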
So ... how do we reduce the size of the binary?

UPX. It compresses binaries, and then they uncompress on the fly when you execute them.

Fun fact: that specific Go binary compressed from 11 MB to 4 MB with UPX.

And that significantly reduces the time to push it!
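(Same pattern as before, just with the compression step tacked onto the local build command; binary path and flags are mine, and upx has to be installed on the host.)

```python
# Tiltfile: compress the freshly built binary with UPX before it gets synced.
# Paths and flags are hypothetical placeholders.
local_resource(
    'compile-and-compress',
    'CGO_ENABLED=0 GOOS=linux go build -o build/server . && upx -q build/server',
    deps=['main.go', 'go.mod'],
)
```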
👇🏻
Bottom line:

If you're using a local cluster (minikube, kind, docker desktop, microk8s...)
-> compile locally and push the binary to the local cluster

If you're using a remote cluster
-> if you have extra resources, sync code and build remote
-> else, push compressed binary
This is the repo that has all the tests that L showed during their talk:

github.com/windmilleng/en…