How to Expose Multiple Containers On the Same Port
First off, why you may need it:
- Load Balancing - more containers mean more capacity
- Redundancy - if one container dies, the others keep serving traffic
- Single Facade - run multiple apps behind one frontend
Interested? Read on!🔽
Docker doesn't support binding multiple containers to the same host port.
Instead, it suggests using an extra container with a reverse proxy like Nginx, HAProxy, or Traefik.
Here are two ways you can trick Docker and avoid adding the reverse proxy:
1. SO_REUSEPORT
2. iptables
Multiple Containers On the Same Port w/o Proxy (I)
1) Use the SO_REUSEPORT socket option for your server sockets
2) Run the containers with `--network host` and the same port
SO_REUSEPORT allows binding different processes to the same port.
`--network host` puts all the containers on the host's network stack.
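Here's a minimal sketch of the first technique in Go (the port and handler are placeholders, not from the thread): the server sets SO_REUSEPORT on its listening socket before bind(), so several copies of it can be started with `docker run --network host` on the same port, and the kernel spreads incoming connections across them.

```go
package main

// Minimal sketch: an HTTP server that sets SO_REUSEPORT on its listening
// socket, so multiple copies can bind the same port when run with
// `--network host`. Linux-only; port 8080 is an arbitrary example.

import (
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"os"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	lc := net.ListenConfig{
		// Control runs on the raw socket right before bind() --
		// exactly where SO_REUSEPORT has to be set.
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}

	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		host, _ := os.Hostname() // the container ID shows which copy answered
		fmt.Fprintf(w, "served by %s\n", host)
	})
	log.Fatal(http.Serve(ln, nil))
}
```

Start two or more containers from an image like this with `--network host`, and repeated requests to localhost:8080 should alternate between them.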
Multiple Containers On the Same Port w/o Proxy (II)
`--network host` reduces containers' isolation. Here is an alternative that keeps containers in separate network namespaces.
1) Pick an ip:port on the host
2) Rewrite the destination (ip:port) to (container_ip:port) in incoming packets using iptables NAT
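A rough sketch of the second technique in Go, using the coreos/go-iptables library (the addresses and port below are made-up examples; it needs root or CAP_NET_ADMIN). It installs the same DNAT rule you could add by hand with `iptables -t nat -A PREROUTING ...`:

```go
package main

// Sketch: expose a container on <host_ip>:8080 by DNAT-ing incoming packets
// to <container_ip>:8080. The IPs below are made up -- substitute your host
// address and the address from `docker inspect`. Requires root and the
// iptables binary on the host.

import (
	"log"

	"github.com/coreos/go-iptables/iptables"
)

func main() {
	const (
		hostIP      = "192.168.1.10" // hypothetical address on the host
		containerIP = "172.17.0.2"   // hypothetical address of the container
		port        = "8080"
	)

	ipt, err := iptables.New()
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of:
	// iptables -t nat -A PREROUTING -p tcp -d 192.168.1.10 --dport 8080 \
	//          -j DNAT --to-destination 172.17.0.2:8080
	err = ipt.AppendUnique("nat", "PREROUTING",
		"-p", "tcp", "-d", hostIP, "--dport", port,
		"-j", "DNAT", "--to-destination", containerIP+":"+port)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("traffic to %s:%s is now redirected to %s:%s", hostIP, port, containerIP, port)
}
```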
Check out this article for working examples of both techniques.
Continue with the thread to learn how to expose multiple containers on the same port with a reverse proxy. 🔽
Containers are Virtual Machines (controversial thread)
Some mental gymnastics. Bear with me.
Person A comes to Containers with prior VM experience.
Dockerfiles start FROM debian/centos/etc.
docker run/exec feels like SSH-ing sessions into servers.
Containers are VMs!
A container starts in less than a second
A VM takes tens of seconds to start
A bare-metal server can run hundreds of containers
Only a few VMs can coexist on a server
How come?
Person A starts digging into the internals to understand the difference between containers and VMs.
Person A: Aha! Containers are just isolated and restricted Linux processes + OS-level virtualization!
Person A starts sharing the finding with friends and colleagues - seasoned backend devs. Everyone instantly grasps the idea.
Then Person B comes by, without prior VM experience.
- What is Kubernetes Service?
- When to use ClusterIP, NodePort, or LoadBalancer?
- How does multi-cluster service work?
- Why both Ingress and Ingress Controller?
The answers become clear when things are explained bottom-up! 🔽
1. Low-level Kubernetes Networking Guarantees
To make Pods mimic traditional VMs, Kubernetes defines its networking model as follows:
- Every Pod gets its own IP address
- Pods talk to other Pods directly (no visible SNAT)
- Containers in a pod communicate via localhost
2. Kubernetes does nothing for low-level networking!
It delegates the implementation to Container Runtimes and networking plugins.
A typical example: cri-o (the container runtime) connects pods on a node to a shared Linux bridge; flannel (the network plugin) puts the nodes into an overlay network.
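For a feel of the node-local plumbing a bridge-type plugin does, here's a hedged Go sketch using the vishvananda/netlink library (the interface names are arbitrary, and it must run as root): it creates a shared bridge and a veth pair, and attaches the host end of the pair to the bridge. A real runtime would additionally move the peer end into the pod's network namespace and assign it the pod IP.

```go
package main

// Hedged sketch of bridge-plugin plumbing: one shared Linux bridge per node,
// one veth pair per pod, host end attached to the bridge.
// Interface names are arbitrary; requires root.

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The shared bridge all pod interfaces on this node plug into.
	bridge := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "demo-br0"}}
	if err := netlink.LinkAdd(bridge); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(bridge); err != nil {
		log.Fatal(err)
	}

	// One veth pair per pod: "veth-host" stays on the node,
	// "veth-pod" would be moved into the pod's network namespace.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "veth-host"},
		PeerName:  "veth-pod",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}

	hostEnd, err := netlink.LinkByName("veth-host")
	if err != nil {
		log.Fatal(err)
	}
	// Plug the host end into the shared bridge.
	if err := netlink.LinkSetMaster(hostEnd, bridge); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(hostEnd); err != nil {
		log.Fatal(err)
	}
	log.Println("a real runtime would now move veth-pod into the pod's netns")
}
```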
1. Try different programming languages
- PHP - simplest, traditional
- Python - more generic, traditional
- JavaScript - enter the async world!
- Go - learn goroutines
- Scala/Clojure - functional
2. Try different server-side frameworks
Don't try to learn all the ins and outs. Instead, learn what's common to all frameworks (see the sketch after this list):
- Request handling - processes, threads, coroutines
- Request routing - how to bind code to request attributes
- Templating
- ORM integrations
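For a taste of those commonalities, here's a tiny sketch using only Go's standard library (Go 1.22+ for the route syntax; the route and port are arbitrary): routing binds a handler to request attributes, and the server handles each request in its own goroutine.

```go
package main

// Tiny routing/handling sketch with Go's standard library (Go 1.22+ for the
// method + path pattern syntax). The route and port are arbitrary examples.

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Request routing: bind code to request attributes (method + path).
	mux.HandleFunc("GET /users/{id}", func(w http.ResponseWriter, r *http.Request) {
		// Request handling: net/http serves every request in its own goroutine.
		fmt.Fprintf(w, "user %s\n", r.PathValue("id"))
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```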
3. Learn your platform
- Linux basics - sockets, I/O, filesystem
- Network basics - Ethernet, IP, TCP, HTTP
- DNS - hostnames are only for humans
- TLS, HTTPS - how X.509 certificates work
Learn the sysadmin craft - how to install packages, configure servers, and troubleshoot performance issues.
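A small Go sketch tying a few of these together - DNS, sockets, and TLS (example.com and port 443 are just placeholders): resolve the hostname to an IP first, dial the raw TCP endpoint, then run the TLS handshake and peek at the server's X.509 certificate.

```go
package main

// DNS + sockets + TLS in one go. example.com and :443 are placeholders.

import (
	"crypto/tls"
	"fmt"
	"net"
)

func main() {
	// DNS: hostnames are only for humans -- resolve one to IPs first.
	ips, err := net.LookupIP("example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved to:", ips)

	// Sockets + TLS: dial the raw TCP endpoint, then do the TLS handshake.
	// ServerName is still needed so the X.509 certificate can be verified.
	conn, err := tls.Dial("tcp", net.JoinHostPort(ips[0].String(), "443"),
		&tls.Config{ServerName: "example.com"})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("certificate subject:", cert.Subject, "expires:", cert.NotAfter)
}
```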
The idea of Kubernetes Operators is simple and attractive.
But as usual, the devil is in the details. I've been working on an operator for the past few weeks, and the learning curve is quite steep, actually.
Here are some projects that may help 🔽
1. kubernetes-sigs/kubebuilder
GitHub says it's an "SDK for building Kubernetes APIs using CRDs."
But you can scaffold an operator project with it.
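Under the hood, a kubebuilder project is wired around controller-runtime, and the part you actually fill in is a Reconcile method. Here's a hedged, minimal sketch of that loop (it watches plain ConfigMaps instead of a custom resource, purely to stay self-contained):

```go
package main

// Hedged, minimal operator sketch built on controller-runtime -- the library
// kubebuilder scaffolds projects around. It reconciles plain ConfigMaps
// instead of a custom resource, purely to stay self-contained.

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
)

// ConfigMapReconciler is the piece you write in an operator:
// a single idempotent Reconcile method.
type ConfigMapReconciler struct {
	client.Client
}

func (r *ConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := logf.FromContext(ctx)

	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		// The object may be gone already; nothing to do then.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// A real operator compares the observed state with the desired state
	// declared in the object's spec and acts to converge the two.
	log.Info("observed", "configmap", cm.Name)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	// Blocks, watching ConfigMaps and calling Reconcile on every change.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```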
2. operator-framework/operator-sdk
Much like kubebuilder, this project allows you to scaffold an operator real quick.
The difference is that it comes with batteries included:
- support for Ansible and Helm operators
- simpler operator releases with OLM (Operator Lifecycle Manager)
- e2e testing
- linting