Right, who doesn't want to template hundreds of lines of code in a language that uses whitespace for logic and was made neither for templating nor for long, complex documents (YAML)? What could possibly go wrong? ("error: missing xxx at line 728, but it might be a problem elsewhere")
I wonder why people don't use fromYaml + toJson to avoid silly indentation errors.
YAML is, for all intents and purposes, a superset of JSON, so if you render your subtree as JSON you can stick it in a YAML file and not care about indentation.
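A minimal sketch of the idea in plain Python (standing in for a template engine such as Helm's `toJson`; the subtree contents are made up): because a line of JSON is valid YAML flow style, a serialized subtree can be spliced into a manifest at any depth without re-indenting.

```python
import json

# A nested subtree that would need careful indentation in YAML block style.
subtree = {
    "env": [{"name": "PORT", "value": "8080"}],
    "resources": {"limits": {"cpu": "500m", "memory": "128Mi"}},
}

# As a single line of JSON it is valid YAML flow style, so it can be
# dropped into a manifest at any indentation level without adjustment.
rendered = json.dumps(subtree)

manifest = "spec:\n  containers:\n    - " + rendered + "\n"
print(manifest)
```

Any YAML 1.2 parser reading `manifest` sees the same structure, no matter where the JSON fragment landed.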
It’s not exactly that whitespace is bad, but that whitespace is very difficult to template relative to something character-delimited like JSON or s-expressions.
In JSON or s-expressions, the various levels of objects can be built independently because they don’t rely on each other. In YAML, each block depends on its parent to know how far it should be indented, which is a huge pain in the ass when templating.
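To make the parent-dependence concrete, here's a small Python sketch (the block contents are illustrative): the same child block must be re-indented for every parent it's spliced under, which is exactly the bookkeeping YAML block style forces on a template engine.

```python
import textwrap

# One reusable child block, written in YAML block style.
child = "image: nginx\nports:\n  - containerPort: 8080"

# Under a shallow parent it needs two spaces of indent...
shallow = "containerA:\n" + textwrap.indent(child, "  ")

# ...but under a deeper parent the very same block needs six, so the
# child cannot be rendered without knowing where its parent sits.
deep = "spec:\n  template:\n    containerB:\n" + textwrap.indent(child, "      ")

print(shallow)
print(deep)
```

A JSON fragment, by contrast, is identical wherever it ends up; only delimiters matter.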
Nothing. Even if it's objectively terrible (thousands of lines of templated YAML, or thousands of lines of spaghetti bash), being in Git as code is still better. At least you know what it is, how it evolved, and can start adding linting/tests.
I manage large IaC repos and it's mostly HCL, well-structured and easy to work with. Where we have Kubernetes manifests, they're usually split into smaller files and don't cause any trouble, as we usually don't deploy manifests directly.
It’s not that bad if you need to deploy at least 3 things and for most cases it beats the alternatives. You can get away with a bootstrapped deployment yaml and a couple of services for most scenarios. What should you use instead? Vendor locked app platforms? Roll out your own deploy bash scripts?
Sure, the full extent of Kubernetes is complicated and managing it might be a pain, but if you don’t go bonkers it’s not that hard to use as a developer.
Lots of things are nice to have but are expensive.
I'd love to have a private pool in my backyard. I don't, even though it's nice to have, because it is too expensive.
We intuitively make cost-benefit choices in our private lives. When it comes to deciding the same things at work, our reasoning often goes haywire.
Sometimes we need an expensive thing to solve a real problem.
Your point about object-oriented programming makes sense. Sometimes, a bash script suffices, and the person who decides to implement that same functionality in Java is just wasting resources.
All of these solutions have a place where they make sense. When they are blindly applied because they're a fad they generate a lot of costs.
I’ve only ever seen a single dev team managing their own K8s cluster. If by deploy you mean “they merge a branch which triggers a lot of automation that causes their code to deploy,” you don’t need K8s for that.
Don’t get me wrong, I like K8s and run it at home, and I’d take it any day over ECS or the like at work, but it’s not like you can’t achieve a very similar outcome without it.
K8s ruffles my feathers because it’s entirely too easy to build on it (without proper IaC, natch) without having any clue how it works, let alone the underlying OS that’s mostly abstracted away. /r/kubernetes is more or less “how do I do <basic thing I could have read docs for>.”
I’m a fan of graduated difficulty. Having complex, powerful systems should require that you understand them, else when they break – and they will, because they’re computers – you’ll be utterly lost.
Genuine question: say you have 3-4 services and a bunch of databases that make up your product, what's the alternative to dumping them all into K8s according to you?
Assuming there aren’t any particular scaling or performance requirements, if I were managing something like that, I would almost certainly not use k8s. Maybe systemd on a big box for the services?
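As a sketch of what that could look like: one plain systemd service unit per service on that box (the unit name, binary path, and user below are made up for illustration).

```ini
# /etc/systemd/system/billing-api.service  (hypothetical service)
[Unit]
Description=Billing API
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/billing-api --port 8081
Restart=on-failure
User=billing
# Some isolation without containers, if wanted:
PrivateTmp=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now billing-api` gives you supervised restarts and journald logging, with resource limits available via unit options if you need them.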
I agree with you and I'm always confused when people talk about process isolation as a primary requirement for a collection of small internal services with negligible load.
In addition, the overhead and reporting drawbacks of running multiple isolated databases vastly outweigh any advantage gained from small isolated deployments.
If I had 3-4 services and a bunch of databases, I would look at them and ask "why do we need to introduce network calls into our architecture?" and "how come we're using MySQL and Postgres and MongoDB and a Cassandra cluster for a system that gets 200 requests a minute and is maintained by 9 engineers?"
Don't get me wrong, maybe they're good choices, but absent any other facts I'd start asking questions about what makes each service necessary.
AWS Fargate is popular among large companies in my experience.
Some of them try to migrate from it to a unified k8s "platform" (i.e. frequently not pure k8s/EKS/helm but some kind of in-house layer built on top of it). It takes so long that your tenure with the company could end before you see it through.
Using cloud platform-as-a-service options. For example, on Azure you can deploy such a system with Azure App Service (with container deployment) or Azure Container Apps (well suited to microservices). For the database, you can use Azure Database for PostgreSQL (Flexible Server) or Azure Cosmos DB for PostgreSQL.
This way, Azure does most of the heavy lifting you would otherwise have to do yourself, even with managed Kubernetes.
Maybe at first, but once you start building all of the IaC and tooling to make it useful and safe at scale, you might as well have just run EKS. Plus, then you can get Argo, which is IMO the single best piece of software from an SRE perspective.