> If a company sets this up correctly developers can create tooling incredibly fast
I find that it has its place in companies with lots of microservices. But because it makes things "easy", I think it encourages unnecessary fragmentation, and one ends up with a distributed monolith.
In my opinion, unless you actually have separate products or a large engineering team, a monolith is the way to go. And in that case you can get far with a standard CI/CD pipeline and "old school" deployments.
But of course I will never voice my opinion in my current company to avoid the "boomer" comments behind my back. I want to stay employable and am happy to waste company resources to pad my resume. If the CTO doesn't care about reducing complexity and costs, why should I?
In my example it was a simple CRUD app, not a microservice. It could just as easily have been run by scp'ing the entire dev dir to a VM and making sure a port is open. But then I wouldn't get many of the things I described above, and I don't need to monitor it at all.
You had PR merge and automatic release before Kubernetes too, and it's not that hard to configure.
If you have a small project where a few seconds of downtime are acceptable, you can just set up a simple GitHub Action triggered on commit/merge. It can scp the file to the server and run "systemctl restart" automatically. I have used this approach for small side projects (even with external paying users).
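For concreteness, that workflow can be tiny. This is a minimal sketch, not a recipe: the binary name (`myapp`), the deploy user and host, the systemd unit, and the `DEPLOY_KEY` secret are all hypothetical, and it assumes the deploy user is allowed to restart the unit via sudo.

```yaml
# .github/workflows/deploy.yml -- hypothetical names throughout
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build
        run: make build   # assumption: produces a self-contained ./myapp binary

      - name: Copy to server
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          scp -i key -o StrictHostKeyChecking=accept-new \
            ./myapp deploy@example.com:/opt/myapp/myapp.new

      - name: Swap binary and restart the service
        run: |
          ssh -i key -o StrictHostKeyChecking=accept-new deploy@example.com \
            'mv /opt/myapp/myapp.new /opt/myapp/myapp && sudo systemctl restart myapp'
```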
And if you need a "no downtime" release, a proper CI/CD pipeline can handle a blue/green switch. I don't think you would spend much more time setting that up than you would setting up Kubernetes from scratch, unless you already have extensive experience with Kubernetes.
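The blue/green part doesn't need much either: run two copies of the app behind nginx and only flip traffic once the idle one passes a health check. A sketch of what the restart step above might become, assuming (hypothetically) two systemd units `myapp-blue`/`myapp-green` on ports 8081/8082, a `/healthz` endpoint, and an nginx upstream config selected via symlink:

```yaml
      - name: Blue/green switch
        run: |
          ssh -i key -o StrictHostKeyChecking=accept-new deploy@example.com 'bash -s' <<'EOF'
          # Which colour is live right now? Deploy to the other one.
          LIVE=$(readlink /etc/nginx/upstream.conf | grep -oE 'blue|green')
          NEXT=$([ "$LIVE" = blue ] && echo green || echo blue)
          NEXT_PORT=$([ "$NEXT" = blue ] && echo 8081 || echo 8082)
          sudo systemctl restart "myapp-$NEXT"
          # Don't flip traffic until the new instance answers its health check.
          curl --fail --retry 10 --retry-delay 1 --retry-connrefused \
            "http://127.0.0.1:$NEXT_PORT/healthz"
          # Point nginx at the new colour and reload without dropping connections.
          sudo ln -sfn "/etc/nginx/upstream-$NEXT.conf" /etc/nginx/upstream.conf
          sudo systemctl reload nginx
          EOF
```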
You're not expecting them to set k8s up from scratch, just as you'd not expect the dev team to set up the datacentre power or networking from scratch for the server in your "scp and systemctl restart" scenario.
Typically, a k8s installation is looked after by a cross-functional Platform team, who look after not just the k8s cluster but also the gateways, service mesh, secrets management, observability and other common services, shared container images, CI/CD tooling, as well as platform security and governance.
These platform services then get consumed by the feature dev teams (of which there could be anywhere between half a dozen and multiple thousands). To deploy a new app, those dev teams need only create a repo and a helm chart, and the platform's self-service tooling will do the rest automatically. It really shouldn't take more than a few minutes for a team with some experience.
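To give a sense of how thin that interface can be for the dev team: on platforms like this, "a repo and a helm chart" often boils down to a dependency on the platform's shared base chart plus a handful of values. Everything below is hypothetical (the `platform-app` library chart, the internal registry and chart repo); the point is just how little the feature team has to write.

```yaml
# Chart.yaml -- the app chart is mostly a pointer at the platform's base chart,
# which wires up the deployment, ingress, mesh sidecars and observability.
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: platform-app                              # hypothetical shared library chart
    version: 1.x.x
    repository: https://charts.platform.internal    # hypothetical chart repo

---
# values.yaml -- roughly all the dev team actually has to specify
platform-app:
  image:
    repository: registry.platform.internal/team-x/myapp
    tag: "1.0.0"
  service:
    port: 8080
```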
Yes, it's optimised for a very different scale of operation than a single server at a managed hosting provider. But there are plenty of situations in which that scale is required, and it's there that k8s shines.