FaaS is a higher-level kind of Serverless tech where the smallest deployable unit is a Function.
AWS Lambda, Azure Functions, and GCP Cloud Functions are all super handy, but what if you can't use them for some reason?
Meet OpenFaaS! 🔽
OpenFaaS is an open-source project that turns a piece of lower-level infra into a high-level FaaS solution.
Sounds too abstract?
Kubernetes cluster + OpenFaaS = FaaS API
Single VM + containerd + OpenFaaS = same FaaS API!
where the FaaS API means:
- function management (deploy, list, remove)
- function invocation
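As a rough sketch of that API split, here's a hypothetical (not official) gateway client. It assumes the OpenFaaS gateway's REST paths (`/system/functions` for management, `/function/<name>` for invocation) — double-check them against your OpenFaaS version:

```python
import urllib.request


class FaaSClient:
    """Hypothetical minimal OpenFaaS gateway client (sketch, not an SDK).

    Assumes the gateway's REST API shape:
      - /system/functions  -> management (list/deploy/remove functions)
      - /function/<name>   -> invocation
    """

    def __init__(self, gateway: str):
        self.gateway = gateway.rstrip("/")

    def mgmt_url(self) -> str:
        # Management side of the FaaS API.
        return f"{self.gateway}/system/functions"

    def invoke_url(self, name: str) -> str:
        # Invocation side of the FaaS API.
        return f"{self.gateway}/function/{name}"

    def invoke(self, name: str, payload: bytes) -> bytes:
        # POST the payload to the function and return its response body.
        req = urllib.request.Request(self.invoke_url(name), data=payload)
        with urllib.request.urlopen(req) as resp:
            return resp.read()


client = FaaSClient("http://127.0.0.1:8080")
```

The point: the same two-sided API works whether the provider behind the gateway is Kubernetes or containerd on a single VM.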
At a high level, OpenFaaS has a universal architecture that allows it to be installed on almost any kind of infra:
API Gateway (concrete) + FaaS-Provider (abstract)
OpenFaaS on Kubernetes
With Kubernetes FaaS-Provider, you get out of the box:
- High-availability
- Horizontal scaling
- Disaster recovery (the only state is in etcd)
OpenFaaS on a VM with containerd
A lightweight setup designed for cheaper servers and IoT devices (e.g., Raspberry Pi).
- Blisteringly fast scaling to zero
- Super quick function cold starts
- ~10x more functions per server (compared to K8s)
- systemd manages all long-living processes
I've been evaluating OpenFaaS for my personal projects.
How to Expose Multiple Containers On the Same Port
First off, why you may need it:
- Load Balancing - more containers mean more capacity
- Redundancy - if one container dies, there won't be downtime
- Single Facade - run multiple apps behind one frontend
Interested? Read on!🔽
Docker doesn't support binding multiple containers to the same host port.
Instead, it suggests using an extra container with a reverse proxy like Nginx, HAProxy, or Traefik.
Here are two ways you can trick Docker and avoid adding the reverse proxy:
1. SO_REUSEPORT
2. iptables
Multiple Containers On the Same Port w/o Proxy (I)
1) Use the SO_REUSEPORT sockopt for your server sockets.
2) Run the containers with `--network host` and the same port.
SO_REUSEPORT allows multiple sockets, even from different processes, to bind to the same address and port; the kernel then distributes incoming connections among them.
`--network host` puts all the containers on the host's network stack, so they share one set of ports.
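A minimal sketch of step 1 in Python (Linux-only; SO_REUSEPORT needs kernel 3.9+). Every process that wants a share of the port must set the option before `bind()`:

```python
import socket


def make_listener(port: int) -> socket.socket:
    """Open a TCP listener that can share `port` with other sockets,
    even ones in other processes, as long as they also set
    SO_REUSEPORT before bind()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", port))
    sock.listen()
    return sock


# Two listeners on the same port -- the kernel load-balances
# incoming connections between them.
a = make_listener(8080)
b = make_listener(8080)
```

Start each server process like this inside a `--network host` container, and they all accept on the same host port with no proxy in front.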
Containers are Virtual Machines (controversial thread)
Some mental gymnastics. Bear with me.
Person A comes to Containers with prior VM experience.
Dockerfiles start FROM debian/centos/etc.
docker run/exec feels like opening SSH sessions into servers.
Containers are VMs!
A container starts in less than a second
A VM takes tens of seconds to start
A bare-metal server can run hundreds of containers
Only a few VMs can coexist on a server
How come?
Person A starts digging into the internals to understand the difference between containers and VMs.
Person A: Aha! Containers are just isolated and restricted Linux processes + OS-level virtualization!
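You can see the "just a process" part right from the host: every Linux process carries a set of namespaces under /proc/&lt;pid&gt;/ns, and a containerized process is simply one whose namespaces differ from the host's. A quick Linux-only sketch:

```python
import os

# Every Linux process lists its isolation boundaries under
# /proc/<pid>/ns; a container's process just points at different
# namespace objects (net, pid, mnt, ...) than a regular one does.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)  # e.g. ['cgroup', 'ipc', 'mnt', 'net', 'pid', ...]
```

Compare the inode numbers behind those links for a shell on the host vs. one inside a container and you'll see exactly which boundaries the container adds.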
Person A starts sharing the finding with friends and colleagues - seasoned backend devs. Everyone instantly grasps the idea.
Then Person B comes along, without prior VM experience.