Infrastructure as code is complex because infrastructure is complex. You have two choices: automation or abstraction.
You can automate the provisioning of infrastructure using tools like Cobbler, Terraform, or shell scripts depending on the API you're working with. You are essentially encoding configuration decisions into a tool and running it. Hopefully the target API supports idempotency.
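When the target API isn't idempotent, you end up encoding the idempotency yourself: check state first, act only if needed. A minimal shell sketch of that pattern (the `provision_network` function and the `/tmp/networks` state directory are hypothetical stand-ins for a real provisioning API):

```shell
#!/bin/sh
# Hypothetical idempotent provisioning step: running it twice has the
# same effect as running it once, so the script is safe to re-run.
provision_network() {
  name="$1"
  # The existence check is what makes this idempotent.
  if [ ! -f "/tmp/networks/$name" ]; then
    mkdir -p /tmp/networks
    echo "cidr=10.0.0.0/24" > "/tmp/networks/$name"
  fi
}

provision_network demo
provision_network demo   # second run is a no-op
```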
Assuming your PDUs, networking gear, physical servers, and operating systems expose provisioning APIs, your infrastructure as code would reflect the aggregated complexity of each layer in the stack.
There is a lot of buzz around internal developer platforms, but it feels like PaaS all over again. This time the new products are aiming even higher up the stack and integrating developer workflows beyond deployments. Take @ChoreoDev for example. wso2.com/choreo/
Choreo integrates API testing, CI/CD, and just enough observability to get pretty far without adding any other tools. The best part is the consolidation of all these concepts behind a common UI.
Then you've got platforms like @massdriver* that attempt to turn infrastructure and applications into a common set of modules that can be composed together.
* I'm a technical advisor and sit on the cap table. massdriver.cloud
What is being described here was already happening. Companies are spending too much time managing CI/CD pipelines, IaC, random bash scripts, and a whole collection of custom tooling no one wants to talk about.
Containers were about adopting a new abstraction and decoupling your application from the machine. Bundle your application and dependencies so you can spend less time messing with OS and configuration management tools. Docker and Kubernetes are optional.
Kubernetes is an infrastructure framework for building your own platform. You layer it on top of bare metal, virtual machines, or better yet, an IaaS provider of your choice, and you get an opinionated way to deploy containers to servers and expose them to the network.
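That opinionated workflow boils down to two objects: a Deployment that runs the containers, and a Service that exposes them. A minimal sketch (the `web` name and `nginx` image are placeholders):

```yaml
# Run two copies of a container somewhere in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Expose those containers to the network behind one stable address.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

That's the whole opinion: declare what should run and how it's reached, and the cluster reconciles the rest.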
The Amazon Prime Video team was able to reduce cost by moving from serverless, backed by Lambda, to a monolith running on VMs.
"Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities." primevideotech.com/video-streamin…
This isn't a dig against Lambda as that platform helped the team build the service fast and get to market.
"We designed our initial solution as a distributed system using serverless components, which was a good choice for building the service quickly."
But it is a testament to the overhead of microservices in the real world. Moving data around is typically an underestimated cost.
"The second cost problem we discovered was about the way we were passing video frames (images) around different components."
You can run databases on Kubernetes because it's fundamentally the same as running a database on a VM. The biggest challenge is understanding that rubbing Kubernetes on Postgres won't turn it into Cloud SQL. 🧵
First, the fundamentals. Kubernetes will schedule a database just like any other application. If you use a deployment, Kubernetes will schedule your database on a random node in the cluster, and if you add a volume, Kubernetes will mount it. That's it. The rest is on you.
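Here's what that bare minimum looks like as a sketch: one replica, one volume, no pinning, no failover (the `postgres` image and `postgres-data` claim name are placeholders):

```yaml
# A database Deployment: scheduled like any other app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data   # Kubernetes mounts it; nothing more
```

Backups, replication, and failover are nowhere in that manifest, because Kubernetes doesn't provide them.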
You can even pin your database to a specific machine in the cluster so it always lands on the same node, with the same IP, with the same data volume, kinda like you would do today on VMs.
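One way to sketch that pinning is a `nodeSelector` against the node's hostname label plus a local disk (the node name and disk path here are assumptions):

```yaml
# Pod spec fragment: pin the database to one named node so it always
# comes back on the same machine with the same local data, like a VM.
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # the specific machine
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      hostPath:
        path: /mnt/disks/postgres    # local disk on node-1
```

The tradeoff is the same one you have with VMs: if node-1 dies, your database is down until that machine, or its disk, comes back.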