IMO "use something managed" gets reduced to "we shan't run Kubernetes on-premises" which ends up meaning "we won't learn anything about failure modes until it's too late to think about mitigating them"
Which might be in line with what you said about
> 80% of orgs don't have the scale, core competencies or justifiable need to be managing container clusters themselves.
But also, that problem would at least have some potential to be solved much more cost-effectively, or at least grown past, if those orgs would just spend some energy on deploying Kubernetes internally; even if we can't or won't afford an entire team dedicated to doing only that (and even if we commit to using only managed services for production, anywhere and everywhere).
In my experience, the way some places reflexively avoid it, like it's a trap to be kept out of, winds up being a bit of a self-fulfilling prophecy: "we're not doing Kubernetes." I empathize with the person you triggered - even if we're now up to two walls of text from one simple comment, I feel triggered too.
This is basically a build vs. buy discussion. Businesses have generally concluded they should only build things that give them a strategic advantage, and buy other services to maintain focus on their sources of competitive advantage. E.g. in the case of k8s, it's not just spinning it up; it's securing it, patching it, monitoring it, etc. It's for this reason that the majority of orgs shouldn't run it themselves.
However, some balance is needed. Orgs may want to do some exploration, since it may not be obvious where competitive advantage can come from; or, like you say, perhaps hybrid makes sense, running it themselves only in non-prod.
I agree that it makes more sense to buy your Kubernetes on an organizational basis because one should not reinvent the wheel, and taking advantage of a commoditized service is only possible if you work with a competent broker.
However, I am wary of the capacity of vendors to take advantage of you once you come to depend on them, even when their intentions are good and all ideals are aligned. The sweet spot, to me, is being able to deliver a limited Kubernetes experience for yourself in low-stakes contexts - somewhere you can depend on it because you know how it works well enough to administer it in a pinch - while also, in a pinch, not being the bottleneck for solving a problem, because you use the managed broker in all the places where it matters.
"I don't want to pay money to a broker every time I spin up a new experiment, for the duration of the experiment" =/= "I don't want to perform experiments."
That's the disconnect that "Leadership" may fail to understand. You can provide a service at low marginal cost to take some of the load off your people, and that might also have the effect of stopping any experiment that falls beneath a certain threshold as "not worth the cost" - all because we settled on getting something for cheap that should have been free.
Then again, dodging all those diversions might have been a part of the strategy...
It makes sense if you take into account that Flux, the most visible and well-known product of Weaveworks, is an open source project donated to the CNCF, and that companies like Microsoft, VMware, and AWS can all engage with it directly, by forking it, or by building support for it directly into their own products.
How do you place a value on Microsoft building Flux into Azure Arc? I know it isn't worth $0, but do they actually need a contract with anybody (at Flux or Weaveworks) in order to keep doing that? No. They don't need one.
I still use it (shamefully, as an ex-Weaveworks employee) - there is a fork I can recommend which has an active maintainer who is interested in keeping it up:
If you still use Weave Net, definitely follow his work and consider learning to build the image yourself, so you can keep it ahead of CVE scanners. (You are running a CVE scanner against your clusters, right?)
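For what it's worth, here's a minimal sketch of the kind of check I mean, assuming the Trivy CLI is installed; the image reference is a placeholder for whatever tag your own build (or the fork) publishes:

```python
# Minimal sketch: scan a container image for known CVEs with the Trivy CLI.
# Assumes `trivy` is on PATH; the image below is a placeholder -- substitute
# whatever tag your own Weave Net build (or the fork) publishes.
import subprocess
import sys

IMAGE = "ghcr.io/example/weave-kube:latest"  # placeholder, not a real published tag


def scan(image: str) -> int:
    """Return trivy's exit code: non-zero when HIGH/CRITICAL CVEs are found."""
    proc = subprocess.run([
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",
        "--exit-code", "1",  # fail the scan when matching vulnerabilities exist
        image,
    ])
    return proc.returncode


if __name__ == "__main__":
    sys.exit(scan(IMAGE))
```

Wiring the same check into CI, or running a scanner operator in-cluster, is the more durable version of this, but even a one-off scan will tell you how far behind the stock image has fallen.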
Flux has had a pinned discussion for nearly a month now, trying to be as upfront as possible without disclosing anything that might be privileged information, but anticipating that the news about our backer would get out sooner or later (and aiming to avoid 100 threads about the same topic).
> a bunch of snowflake workloads by design (or bad design).
That's a really interesting characterization of WGE, and I can't say I disagree much (my personal opinion as an ex-Wyvern/OSS Engineer DX @ weaveworks)
Based on the last time I looked: good handling of dependencies between builds (e.g. the ability to do an "edge build", where for any change in a given project you check whether it will break the other projects that depend on it when they upgrade), advanced scheduling, and plugins that integrate all sorts of random tools into your build views.
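To make "edge build" concrete, here's a rough conceptual sketch (my own illustration, not any particular CI system's API): when one project changes, you run the test suites of its dependents against the new version, so breakage surfaces before anyone formally upgrades.

```python
# Conceptual sketch of an "edge build": when `CHANGED` gets a new commit, run the
# test suites of its downstream dependents against that new version. The project
# names, directory layout, and EDGE_DEP variable are hypothetical placeholders.
import os
import subprocess

CHANGED = "libfoo"                       # the project that just changed
DOWNSTREAM = ["service-a", "service-b"]  # projects that depend on it


def edge_build(changed: str, dependents: list[str]) -> dict[str, bool]:
    """Run each dependent's tests with the changed project pinned to its new build."""
    results: dict[str, bool] = {}
    for project in dependents:
        # How the new version gets injected is build-system specific; here it is
        # passed as an environment variable purely for illustration.
        proc = subprocess.run(
            ["make", "-C", project, "test"],
            env={**os.environ, "EDGE_DEP": changed},
        )
        results[project] = proc.returncode == 0
    return results


if __name__ == "__main__":
    for project, ok in edge_build(CHANGED, DOWNSTREAM).items():
        status = "ok" if ok else f"would break when it upgrades {CHANGED}"
        print(f"{project}: {status}")
```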
I was hired two years after the events you described, at the beginning of 2021, to go on supporting Flux v1 until we could get everyone off the boat.
(I worked at Weaveworks until last month, and I'm still a Flux maintainer! Keep the Flux talk in the present tense, please! ;-)