How does the Ingress controller really work in Kubernetes?

I had to find out for myself, so I built one from scratch in bash
1/

Before diving into the code, here is a quick recap of how an ingress controller works

You can think of it as a router that forwards traffic to the correct pods
2/

More specifically, the ingress controller is a reverse proxy that works (mainly) on L7 and lets you route traffic based on domain names, paths, etc
3/

Kubernetes doesn't come with one by default

So you have to install and configure an Ingress controller of your choice in your cluster

But Kubernetes does provide a universal manifest (YAML) definition
4/

The same YAML definition is expected to work regardless of which Ingress controller you use

The critical fields in that file are:

➀ The path
➁ The backend
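
Here is a minimal example of such a manifest; the host, path and service name are placeholders, not values from my repo:

```bash
# Apply a minimal Ingress manifest from bash; all names are placeholders
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
EOF
```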
5/

The backend describes which service should receive the forwarded traffic

But, funnily enough, the traffic never actually reaches it

This is because the controller uses endpoints to route the traffic

What is an endpoint?
6/

When you create a Service, Kubernetes creates a companion Endpoints object

The Endpoints object contains a list of endpoints (ip:port pairs)

The IPs and ports belong to the Pods backing the Service
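
You can see it for yourself with kubectl (`my-service` is a placeholder name):

```bash
# Inspect the Endpoints object that Kubernetes keeps in sync with the Service
kubectl get endpoints my-service -o yaml
# Look at .subsets[].addresses[].ip and .subsets[].ports[].port:
# together they form the ip:port pairs of the Pods backing the Service
```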
7/

Enough theory

How does this work in practice if you want to build your own controller?

There are two parts:

➀ Retrieving data from Kubernetes
➁ Reconfiguring the reverse proxy
8/

In ➀, the controller has to watch for changes to Ingress manifests and endpoints

If an ingress YAML is created, the reverse proxy should be reconfigured

The same happens when the Service changes (e.g. a new Pod is added)
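
For example, kubectl can stream changes as they happen (my script ended up polling instead, as you'll see below):

```bash
# One way to detect changes: kubectl can stream updates as they happen
# (this blocks and prints a new line every time an Ingress changes)
kubectl get ingresses --watch
```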
9/

In practice, this could be as simple as `kubectl get ingresses` and `kubectl get endpoints`

With this data, you have the following:

- The path of the ingress manifest
- All the endpoints that should receive traffic
10/

With `kubectl get ingresses`, you can get all the ingress manifests and loop through them

I used `-o jsonpath` to filter the rules and retrieve the path and the backend service
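
Roughly like this; a sketch that assumes networking.k8s.io/v1 Ingresses with a single rule and path each, not my exact script:

```bash
# Loop over all Ingresses and extract the path and the backend service
for ingress in $(kubectl get ingresses -o jsonpath='{.items[*].metadata.name}'); do
  path=$(kubectl get ingress "$ingress" \
    -o jsonpath='{.spec.rules[0].http.paths[0].path}')
  service=$(kubectl get ingress "$ingress" \
    -o jsonpath='{.spec.rules[0].http.paths[0].backend.service.name}')
  echo "$ingress: path=$path -> service=$service"
done
```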
11/

With `kubectl get endpoints`, you can retrieve all the endpoints (ip:port pairs) for a service

Here too, I used `-o jsonpath` to filter those down and save them in a bash array
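
Something along these lines; a sketch that assumes a single port per Service (`my-service` is a placeholder):

```bash
# Collect the ip:port pairs for a Service into a bash array
ips=($(kubectl get endpoints my-service -o jsonpath='{.subsets[*].addresses[*].ip}'))
port=$(kubectl get endpoints my-service -o jsonpath='{.subsets[0].ports[0].port}')

endpoints=()
for ip in "${ips[@]}"; do
  endpoints+=("$ip:$port")
done

echo "${endpoints[@]}"
```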
12/

At this point, you can use the data to reconfigure the reverse proxy

In my experiment, I used Nginx, so I just wrote a template for the nginx.conf and hot-reloaded the server
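
A rough sketch of that step; the values at the top are illustrative placeholders standing in for the data collected in the previous snippets, and this is not the exact template from the repo:

```bash
# Illustrative placeholders for the path and endpoints gathered earlier
path="/"
endpoints=("10.0.0.4:8080" "10.0.0.5:8080")

# Render a minimal nginx.conf with one server line per endpoint
{
  echo "events {}"
  echo "http {"
  echo "  upstream backend {"
  for endpoint in "${endpoints[@]}"; do
    echo "    server $endpoint;"
  done
  echo "  }"
  echo "  server {"
  echo "    listen 80;"
  echo "    location $path {"
  echo "      proxy_pass http://backend;"
  echo "    }"
  echo "  }"
  echo "}"
} > /etc/nginx/nginx.conf

# Hot-reload: nginx re-reads the config without dropping connections
nginx -s reload
```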
13/

In my script, I didn't bother with detecting changes

I decided to recreate the `nginx.conf` in full every second

But you can already imagine extending this to more complex scenarios
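
The control loop boils down to something like this; `generate_nginx_conf` is a hypothetical wrapper for the steps in the previous snippet, not a function from the actual repo:

```bash
# The naive control loop: rebuild the config and reload every second
while true; do
  generate_nginx_conf > /etc/nginx/nginx.conf
  nginx -s reload
  sleep 1
done
```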
14/

The last step was to package the script as a container and set up the proper RBAC rules so that the script could query the Kubernetes API server

And here it is — it worked!
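
The permissions boil down to read access on ingresses and endpoints. Something like this ClusterRole, plus a ServiceAccount and a ClusterRoleBinding to attach it to the Pod; the names here are placeholders:

```bash
# Grant read access to Ingresses, Endpoints and Services; names are placeholders
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bash-ingress-controller
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "services"]
    verbs: ["get", "list", "watch"]
EOF
```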
15/

If you want to play with the code, you can find it here: github.com/learnk8s/bash-…

I plan to write a longer-form article on this; if you are interested, you can sign up for the Learnk8s newsletter here: learnk8s.io/newsletter
16/

And if you are unsure what ingress controllers are out there, at @learnk8s we have put together a handy comparison:

docs.google.com/spreadsheets/d…
17/

And finally, if you've enjoyed this thread, you might also like the Kubernetes workshops that we run at Learnk8s (learnk8s.io/training) or this collection of past Twitter threads

Until next time!
