Having put some time into #Kustomize now, I think it's probably a better solution than Helm for building permutations of configs for multiple clusters and environments.
It also makes it easier to integrate upstream config changes, because you're just applying patches/mixins.
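As a sketch of what that looks like (all names hypothetical): one shared base, plus a small overlay per cluster/environment that carries only its own patches.

```
base/
  kustomization.yaml      # the shared manifests
  deployment.yaml
overlays/
  staging/
    kustomization.yaml    # references ../../base, adds staging-only patches
  prod/
    kustomization.yaml    # same base, different patches
```

```yaml
# overlays/prod/kustomization.yaml (hypothetical paths and names)
resources:
  - ../../base            # the shared base
patchesStrategicMerge:
  - replica-count.yaml    # prod-only patch/mixin
```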
Plus, you can just render Helm charts with their default values and then apply Kustomizations afterwards. So you can still take advantage of published Helm charts, as long as they stick to templating and don't use advanced features like hooks.
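A minimal sketch of that flow (release name, chart, and paths are placeholders): render the chart with its defaults into a Kustomize base, then build your overlay on top of it as usual.

```sh
# Render the chart with its default values into the base.
# base/kustomization.yaml just lists rendered.yaml under resources.
helm template my-release stable/nginx-ingress > base/rendered.yaml

# Then build and apply the overlay, patches and all.
kustomize build overlays/prod | kubectl apply -f -
```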
Helm tries to replicate the “apt-get” package-management experience, but as you may recall, that doesn’t work very well in the cloud. That’s why Config Management exists.
I’m fairly convinced that hard multi-tenancy within a single k8s cluster is a use case it wasn’t designed for.
The harder I try to reduce the blast radius of potentially compromised service components, the harder it becomes to justify the effort.
Even best-in-class hosted k8s distributions are insecure out of the box: wide-open network policy, privileged mode required to deploy security daemons, unrestricted host volume mounting, bring-your-own RBAC config, publicly reachable nodes by default, SSH enabled, unrestricted kubectl exec, etc.
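To pick on the first item: a default-deny NetworkPolicy is a one-screen fix, but you have to know to add it yourself, and your CNI plugin has to actually enforce it (namespace name below is a placeholder).

```yaml
# Deny all ingress and egress for every pod in the namespace;
# allowed traffic then has to be opened up with explicit policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a      # placeholder namespace
spec:
  podSelector: {}          # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```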
#Spinnaker may be popular, but it has HUGE hidden costs when used with/on #Kubernetes:

1. The primitives are all IaaS-based.
2. Many config changes require admin access and a Halyard-driven redeploy, meaning you need continuous deployment for your continuous deployment.
3. There's no container-platform abstraction in Spinnaker, so you need a new "account" per k8s cluster.
4. There's no namespace abstraction either, so you need a new account for each cluster+namespace permutation (see the sketch after this list).
5. If you use RBAC in k8s, you have to synchronize Spinnaker permissions manually or write a tool to do it.
6. Spinnaker's usability cliff multiplies with the usability cliff of k8s, making onboarding new team members a nightmare.
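To make points 2 through 4 concrete, here's roughly what registering a single cluster+namespace pair looks like with Halyard (account name, kubectl context, and namespace are all placeholders):

```sh
# One Spinnaker "account" per cluster+namespace permutation.
hal config provider kubernetes account add team-a-prod \
  --context prod-cluster \
  --namespaces team-a

# And applying the change redeploys Spinnaker itself (point 2).
sudo hal deploy apply
```

Multiply that by every team, namespace, and cluster, and keep it in sync with your k8s RBAC by hand, and the hidden cost starts to show.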