These things happen; props to HashiCorp for communicating about it.
It would be easy to say that this is a failure of open-source in some way, but to do so would be unfair to the huge amount of work that companies put into tools like this, and the stewardship that they offer, both of which take a lot of time and money. If periods of low activity while teams change are the cost the community needs to pay for that, I think that's very fair.
I had a conversation with my coworkers when I worked there about being upfront with folks that we wouldn't review or accept their PRs, instead of leaving people hanging (this was on the TFE provider). I'm glad this was added.
I stand by my statement then and now: there is nothing worse than contributing to something open source and then having your PR completely ignored.
I don't understand why people think that just doing unsolicited work and pressing a button means the people on the other side are obligated to spend time reviewing and integrating it. GitHub has weirdly unbalanced open source contribution, shifting a heavier burden (and more burnout) onto maintainers.
Contributing to a project doesn't mean just slinging code and calling it good. Communicate--talk to the people maintaining the code. Ask them, hey I have an idea and want to add this feature/fix this bug/etc.--do you have bandwidth for that change? Mailing lists, discussion forums, chat rooms, e-mails, etc. are the place to sort this out, not snarky or even angrier and angrier replies to an unsolicited pull request that goes unreviewed.
The overhead of working out where the people who maintain the code are is usually higher than just fixing the bug I found. This is doubly true if the bug is bad enough that I need to temporarily fork the repo to deal with it.
In the same way that the maintainers don't have an obligation to review my PR, I don't have an obligation to go find them and learn how to use IRC/bugzilla/mattermost/mailing lists/smoke signals/yodelling to communicate with them with the exact secret handshake to get a review. I can just throw a PR out there, point our code at my fork, rebase it whenever they make changes, and otherwise ignore their requirements until they either fix the bug themselves or merge my code.
Under normal circumstances, free work is generally appreciated.
But I imagine it's a bit different with code, as most people prefer writing code to reading it.
In the case of a bug in software you didn't write, you can spend hours reading and debugging a foreign codebase, and end up writing nothing but a one-line fix and a ten-line test case.
There is such a thing as an open source project with "pull requests welcome" in the readme and a whole elaborate "how to contribute" wiki chapter. Except when you follow that wiki to a T, your PR still ends up ignored because no one is reading pull requests. It is not that these contributions come out of nowhere, from clueless people who don't realize the maintainers don't want pull requests. Pretty often the maintainers wanted pull requests, made sure to promote the option, and then found themselves unable or unwilling to merge them in.
I am not saying projects must merge pull requests. I am saying they should not have "please contribute, it is welcome" messages in their readmes.
And beyond that, there are those campaigns to get people to contribute. And the shaming of companies/people for freeloading if they don't contribute - especially here on HN. Again, if people then conclude that contributions are expected, it is not entirely their own fault.
Early in the project the maintainer(s) are super-excited about their new baby. Later on they burn out and move on to do different things. The PR guides are written in the first stage, while PRs get ignored in the latter.
A bot should do it. Sometimes people throw a tantrum, so it's easier to just ignore a PR. Or a maintainer might post a quick reply, only to be bitten later on. I love and live by open source, but the drama can be exhausting.
"open source" doesn't mean "accepting drive-by contributions from unknown authors". "open source" means "open source". If you want to apply your patches, you're free to fork the repo.
Is there no way to just have a script add a comment on every new PR saying "Due to insufficient staff, your PR may not be reviewed for a considerable amount of time."?
Your code is important to us. Please stay on the line, and the next available reviewer will answer your code. Due to unusually high code volumes, this response may take longer than usual.
I think the issue is about letting people know that PRs won't be reviewed/merged before they bother putting a lot of effort into them (both the commit itself and setting up/documenting the PR).
This has always been a weird quirk of GitHub. You can disable issues, boards, wikis, but pull requests cannot be toggled. It's a pain point for lots of projects: those closed to contributions, those which are not primarily code (issues-only), those that use a different platform for review (e.g. Gerrit), ...
Why does GitHub insist on this? It's not like force-enabling the feature helps anyone. The maintainers will still not merge if they don't want to. There are bots in the marketplace that will close PRs with a message. GitLab has a toggle. Why is this so important to GitHub?
I’ve made PRs because I need the change for my day job. By the time I’ve traced an issue down to a third party library the cost of making a PR is minimal. But if they don’t want to take the changes, it doesn’t bother me.
Of course I don’t work on open source out of passion, so I could imagine this is different for the true believers.
Personally, I tend to post the change in a comment on the issue that it fixes. I don't generally bother with doing a formal PR, since that would mean setting up the repo in a dev environment, branching, etc. and would be a bunch of extra work.
Locally, I just make the change and check it in to my project.
Every repo has different contribution rules, and a one-off in one repo often just isn't worth the time to learn all the bespoke boxes that need to be checked. The work is there in the comments, and if it's of value, someone more familiar can take it the last mile ... and Codespaces _just_ came out generally. Could you edit online before that?
First of all, building a provider isn't straightforward. The best way I've found to do this is to wrap `terraform init`, and have it `docker run` a build process for a plugin version that never existed - then dump the built provider into the `.terraform` directory for the project. It's prone to failure; new users of the Terraform project complain that the build eats 8GB of RAM and takes many minutes.
Second, providers are constantly changing, and it's not always possible to cleanly rebase a set of community changes on top of master. Part of the trouble with letting PRs wither on the vine is that they themselves become stale - in one case, the code still compiles, but the end result is completely wrong.
For what it's worth: my use case was needing to use Terraform with some more "unusual" features of CloudFront and ALBs. The 80% use case is well supported. There's a remaining 15% that's well implemented by unmerged PRs, and another 5% that's completely unsupported. I've kept it IaC by using the `null_resource` provisioner to shell out to the AWS CLI where absolutely necessary, as sketched below.
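For anyone curious, that pattern looks roughly like this - a minimal sketch, where the distribution resource and the exact CLI arguments are hypothetical stand-ins for whatever gap you're papering over:

```hcl
# Sketch: shell out to the AWS CLI for a CloudFront setting the provider
# doesn't expose. Resource names and the input JSON file are hypothetical.
resource "null_resource" "cloudfront_extras" {
  # Re-run the command whenever the distribution changes.
  triggers = {
    distribution_id = aws_cloudfront_distribution.main.id
  }

  provisioner "local-exec" {
    command = "aws cloudfront update-distribution --id ${aws_cloudfront_distribution.main.id} --cli-input-json file://extras.json"
  }
}
```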
Heh. I've been considering a switch from Ansible to Terraform because I've been frustrated by Ansible's limited support for some of the edge cases around ALBs. Good to hear that Terraform also sucks.
I'm gradually coming to the conclusion that all the tools that are supposed to make provisioning cloud infrastructure easier aren't as good as a bunch of crappy custom scripts using boto or aws-cli.
Terraform is a massive improvement over Ansible if you do anything non-trivial; even if it sucks, it sucks significantly less than Ansible at managing infrastructure resources.
I sometimes fool myself into thinking I can use Ansible in simple cases and that Terraform would be overkill but so far I've regretted those decisions every time.
Ansible is sort of bad at everything, and does a few things decently.
I use it as an orchestrator / clusterssh replacement, but for configuration management it makes me nervous because I can't trust it to just not break for stupid reasons.
Last time I used it was on Amazon Linux 2 with Packer. It worked, but 1/3 of the time it failed with some strange error about yum database corruption.
I suspect what was happening is that Amazon Linux runs yum on startup to apply all available updates, and Ansible was not respecting yum locks, which is very surprising given that Ansible came from Red Hat, so they should have known how important locking is for yum.
I ended up using salt (which has its own set of issues) and never looked back.
One big difference here is that while the tools might all depend on the SDK provided by the cloud, the tools themselves can also get a whole lot right or wrong on their side. Terraform fixes a lot of that by delivering a 'standard' provider interface with normalised data formats, resource structures, and encapsulation. That was pretty much a requirement for the tool to work anyway; otherwise it wouldn't have a unified way to check for configuration drift, plan changes, apply changes, do cleanups, etc. You also wouldn't be able to pass data around easily (you'd end up shuffling strings around instead).
Some people prefer to do the CDK thing where you use a general programming language to synthesise the IaC stuff and then run it that way, but that doesn't really fix anything, because a CDK is just built on top of the same SDK. As an added insult to injury, you now don't have a domain-specific language to save you from yourself (and your team), with all the anti-patterns you now have at your disposal ;-)
You can sideload Terraform providers so you don't need to do this. I personally recommend the [implicit local mirror directory](https://www.terraform.io/docs/cli/config/config-file.html#im...), where you just place your provider in your OS's respective Terraform plugin directory (macOS: `$HOME/.terraform.d/plugins/`).
There are other ways to sideload providers on that docs page too.
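For the curious, the implicit directory needs no configuration at all - you just drop the binary into the expected layout - and the explicit `filesystem_mirror` form of the same idea lives in the CLI config. A sketch (paths and the version number are just examples):

```hcl
# Implicit layout, no config needed (example for a locally built AWS provider):
#   ~/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/3.42.0/darwin_amd64/terraform-provider-aws_v3.42.0
#
# Explicit equivalent in ~/.terraformrc:
provider_installation {
  filesystem_mirror {
    path    = "/Users/you/.terraform.d/plugins"
    include = ["registry.terraform.io/hashicorp/aws"]
  }
  direct {
    # Everything else still installs normally from the registry.
    exclude = ["registry.terraform.io/hashicorp/aws"]
  }
}
```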
So then, when your upstream repo diverges, would you just rebase and manually add in anything you want from the development tree on the upstream side?
Not sure what's best practice so... just curious how people have handled this - I usually leave my forks of stuff pretty stale and focus on my own little sub-pieces to achieve what I want but not too much else.
Yes, we would branch from upstream again and apply the patch set against the new branch. Normally it is trivial but once in a while there are manual changes necessary.
I usually have my own branch which can be used to get a diff, and I maintain the diff across any major code structure change. Most often there is not much hassle, but it can be annoying.
If you spent the time and effort to hunt down the problem for them, the least they could do is look into it. Repos that don't care enough are a waste of time.
TBF, even if your PR solves an actual issue a number of users have, it still might not be a good fit for the project's future, and starting a discussion about it might not be worth anyone's time.
More importantly, as you disclosed your solution, others affected by the issue can rely on your code in the meantime (= the rest of the project's life in some cases; I've been there). If it really is a critical issue that isn't solved, your fix can always be used in a different fork with better maintenance.
All in all, the upstream repo not responding to a PR isn't the end of the world, I think, and the openness of the system makes it an acceptable state in many ways IMHO.
I disagree; when someone opens a PR on my project it's an imposition on my time. I appreciate the help, but I will review when I have time and feel like it's a good time to do it.
"It would be easy to say that this is a failure of open-source in some way"
I have seen far more commercial, closed source products go through similar staffing crunches. The difference is that the problems are hidden away behind misdirecting sales teams and so on.
I can't tell you how many times I've reached out to someone on the inside of a company to get a straight answer as to whether a product is being properly staffed and supported. Or, conversely, how many times I myself have had to decide to orphan some commercial, customer facing work to meet a goal with a higher priority.
In my experience, useful open source products are less likely to suffer from inadequate staffing than closed.
I think it's the likely reality of far more projects than just Terraform, and I don't fault the project completely for reaching this state - attrition is a brutal thing.
It's distressing personally, though, that this issue was unknown to me before I read this post, and that it affects a product I use and champion.
Comparing programming languages to individual open source projects is silly. Lots of Node and Rust projects are behind on PRs. Also Vue just doesn't allow community development, which makes everything much easier.
I commented, committed, worked with contributors and Hashicorp employees early on in Terraform. It was so refreshing to see a project that did what I wanted, but better than the tooling I'd already written -- and that it was so actively working with the community.
Phinze was a large part of feeling like my contributions were welcomed, even just as a community member. A few months back I encountered something of Paul's, noticed his GitHub work had tapered off and found the blog posts entitled "diagnosis" and "treatment" on his blog.
I know nothing more about current events pertaining to Hashicorp or Terraform, but seeing some negativity here coupled with the experience of encountering that sad news led me to comment.
That team has made world class tooling, with the community. The people I know who work there are some of the sharpest and kindest people I've ever met. I'm willing to give them the benefit of the doubt and I hope you yourself will as well before presuming anything that leads to more negativity.
It's obviously not ideal, but perhaps there are legitimate reasons and at least they were transparent.
Props to them for being honest, but it's still not a good look for HashiCorp. Their stewardship of Terraform leaves a lot to be desired. For years now I've watched Terraform PRs just wither on the vine. You get the impression that nobody is working on the AWS provider at all. Every PR to the AWS provider that I've ever cared about has taken years to be reviewed and merged, despite lots of thumbs-ups and comments from users begging for it to be merged. There is clearly a priority problem here.
A few years ago, there used to be community reviewers of pull requests to the AWS provider - indeed I reviewed dozens after leaving HashiCorp. My own access for this got removed in (IIRC) late 2018, and I’d assume this was across the board.
I don’t think it’s conceivable that any reasonable number of employees could satisfy the demand of maintaining the first-party provider set in the current form, without leveraging the community. However, I for one will not sign a CLA that allows proprietary relicensing, and I’d guess most people who could give meaningful reviews are in a similar boat, or already work on the provider teams.
However, I’m also not sure that there is a “priority problem” as such - most providers don’t make HashiCorp money from consumers or contributors, and employee time is better spent on products which contribute to a positive bottom line.
The Terraform Provider Registry has made it much more palatable to run a fork of any given provider than it was previously - I’d recommend doing so if you have functionality you need that hasn’t been integrated.
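To make that concrete: publishing your fork under your own registry namespace and pointing `required_providers` at it instead of `hashicorp/aws` is all it takes - a sketch, with a hypothetical namespace:

```hcl
terraform {
  required_providers {
    aws = {
      # "yourorg" is a hypothetical registry namespace holding the fork.
      source  = "yourorg/aws"
      version = ">= 3.40.0"
    }
  }
}
```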
> I for one will not sign a CLA that allows proprietary relicensing
Why the objection?
For it to have a material effect, both (a) HashiCorp would need to take Terraform proprietary within a few years while your use case actively needs updates, and (b) there would have to be no one else maintaining a fork based on the existing MPL2[3] code.
They say[2] why they need a CLA, which doesn’t seem deceptive.
From the CLA[1] “you reserve all right, title, and interest in and to Your Contributions”, which is unlike the FSF which demands copyright assignment[4] “Put simply, this is the legal transfer of copyright on a program from the developers to the Free Software Foundation.”[5].
There's a group called cloudposse who have a huge repository of their own Terraform AWS modules. I haven't had to use them yet (I've been lucky so far in having found a way to do most of what I've wanted through the official provider), but their selection is pretty comprehensive:
These guys are really, really good. I've read a lot of their Terraform code; it's top quality and well architected. I definitely recommend reading their modules.
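For anyone who hasn't used them: consuming one of their modules is just a registry reference. A sketch from memory - check the registry for the module's real name, version, and inputs:

```hcl
module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "~> 0.38"

  # Cloud Posse modules share a namespace/stage/name labeling convention
  # via their null-label module; these inputs are illustrative.
  namespace = "eg"
  stage     = "prod"
  name      = "assets"
}
```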
(Minor contributor, perhaps major issue-opener-commenter speaking)
I have a lot of respect for the team personally (as in, I myself do), though I do recognise what you say completely too. I just think there's so much focus on trying to do things right, and long-term-maintainably, that it often comes across as a poor outlook or a lack of interest.
@apparentlymart from OP in particular - I have no idea who he is in Hashicorp hierarchy, I just recognise him from GitHub - in particular is really excellent in responding to good issues and adding information along the lines of 'yes we want this but unfortunately XYZ so we're hoping PQR is going to make this easier but first we need to ABC so yes but sorry not a priority right now' sort of thing.
Ya, but I’ve seen those same responses 4+ years ago. I had to give up on using it for a project years ago as they didn’t support some things and closed the PRs
I really don't see how you can look at the changelogs for the weekly releases of the AWS provider and think that nobody's working on it. If you want to see what a neglected provider looks like, spend some time with the poor Vault provider, where I have a two-year-old bug report about a reproducible crash condition without so much as a reaction emoji on it.
Oh, it's absolutely not a good look for Hashicorp because their company culture is the problem. They grew too fast and hired too many middle managers who brought in a brogrammer culture. It drove everyone with better options away (at least per friends who formerly worked on Terraform at Hashi). And from the resumes I see come across my desk it feels like they've been bleeding talent at all levels for a while.
One of these days HashiCorp is going to cash in, and one of the major cloud operators will own the lingua franca of cloud provisioning. Oracle, for instance, directs its cloud users to Terraform as its de facto first-class provisioning tool. It works rather well, BTW.
I've been very happy with Terraform. I also have a huge investment in CloudFormation, and it doesn't take long to discover that CloudFormation isn't implemented consistently across services, it's full of bugs, and unless your bug happens to impact Amazon internally you can wait years for a fix. New functionality for existing resources can also take years to materialize. Back when I had an eight-figure AWS budget they would offer to let me fund a CloudFormation fix (seriously! They wanted me to pay them to fix their errors), but since moving to a smaller shop they don't even offer that.
With Terraform (and its AWS provider) I've found things are generally implemented consistently, I've not encountered any crashing bugs yet, and all of the manual steps I've had to do with the AWS CLI have Terraform equivalents.
Looking at what CloudFormation can and can't do it seems like completely different teams work on a service and its CloudFormation support.
I guess Amazon uses the AWS API internally?
I say "alleged" because who knows if that's the code that's really running. And, related to this current thread: even if it is the code, this being Amazon, who knows whether a bug I found and PR-ed wouldn't sit in /dev/null for 5 years before being closed by some stale-issue bot or something.
I work at AWS. The service teams do own and write their CloudFormation providers, although if you've written a custom provider you'd see it is somewhat clunky, so it's sometimes considered more of an operational burden, and you can tell.
We dogfood both the SDK and CloudFormation internally, we just deal with its numerous gripes much the same way you would externally (although we can also contact service teams directly if needed).
The million dollar question would be whether all the "feature requests" that AWS support opens when they don't know what else to do actually go someplace, or whether they just get swept aside every few days. I've always been under the impression they receive very little or no attention. Even for bugs where I have included a reproducible test case that results in an internal error, they just keep trying to close the ticket every few days until you go on vacation or e-mail burps and you miss the notice.
I work for AWS. We have an internal tracking system containing customer feature requests, and our service teams review them biweekly and they get prioritized by product management. “Working backwards from the customer” is gospel here.
That is cool to know! For several years I have been telling the support people not to bother with feature requests because I didn't think they went anyplace; maybe I'll let them open them going forward.
I'm assuming that there's a communications lockdown and freeze ahead of an IPO, and that extends even to reviewing pull requests. But I know jack and perhaps I'm just seeing ghosts.
Datadog. I've used their products for years and I believe in their team. Picked most of what I have up after their IPO. HashiCorp is similar (but I haven't been a paying customer of theirs).
I'm a timid investor and am pretty nihilistic about tech in general. I forget the exact Charlie Munger quote, but it was something about staying in your circle of competence.
I recall once upon a time there was a GitHub project wherein the owner would just immediately merge any PRs that were opened -- I believe it was a social experiment, and I don't recall the exact nature of the repo in order to know if that kind of thing is ludicrous here. But I do think it'd be good fun to take this lull and find out the outcome of a hypothetical github.com/open-terraform/open-terraform which just ran a github action that merged PRs that had more than a 5(10?) differential of :+1: to :-1: reactions on it. Build failures would instantly close the PR, and test failures would be exempt from auto-merge
If that worked sufficiently well, I'd initially mirror every repo from https://github.com/terraform-providers into that same GitHub organization and continue that exercise. IMHO the providers suffer from bitrot a lot more than formal terraform does
I think of that hypothesis as "open source optimist" versus "open source pessimist": are people who go out of their way to open PRs trying to improve the common good, or trying to drain the life out of maintainers?
The beauty of open source is you can run your project however you want: bazaar, cathedral, or something else. If people think it's a bad choice, they can fork.
Surely community patches are such a good return on investment that they should be pulling devs off other areas to keep someone reviewing public PRs. I've always been confused by companies that aren't over the moon to spend minimal review time to get the benefit of hours of work by free employees.
There are no free lunches, pull requests are no exception. For starters, before merging every pull request needs to be reviewed at a minimum. That by itself can oftentimes be a very time-consuming activity, especially if the changes are from someone outside the circle of regular contributors. Outside of fixes for typos and other trivialities, pull requests generally require a lot of back and forth to get to a good state — does this change make sense architecturally, does it cover edge cases, does it come with tests? Additionally, oftentimes pull requests expand the scope of what you need to maintain, whether you want to take on that permanent burden is a critical question in and of itself. The list goes on. There are many projects that do make it work, but make no mistake that this takes a considerable amount of effort.
My experience has been that the first-order ROI of community PRs is negative. PRs which do more than just fix a typo that you can just go "thanks" and merge are extremely rare. Most external PRs take more work to get into a good state than it would have been for us to fix the problem ourselves.
The main reason to accept community PRs is because it helps you get passionate users, not because they're free labor.
> Most external PRs take more work to get into a good state than it would have been for us to fix the problem ourselves.
But isn't that a strange comment to make on a thread where there was an announcement of "sorry, we don't have bandwidth to even look at any problems that aren't on some PM's roadmap"?
To tug on that a little more, community PRs (and issues, but I'm focused on the folks who want something to work bad enough to actually contribute a fix) are far more likely to be some edge case that a real user has stepped on which the core project either didn't consider, didn't test, or thinks "who would use the spacebar to heat their computer?"
One can get passionate anti-users, too, if they have their PRs thrown in the trash
In my experience, getting a high quality PR that you’d want to maintain is exceedingly rare. Getting a community submission to that standard takes a lot of effort - sometimes more than if you just did it yourself.
On top of that, a lot of developers tend not to enjoy reviewing and massaging community PRs all day. They want to write code themselves, and they want it to be important code. Putting your team on review duty is a great way to make people feel like their role is low impact and unrewarding. Again, they’d rather write the code themselves.
I find it takes a lot of experience for developers to recognize the value, impact, and reach of indirect contributions like that, so it's rare to have a team with enough people who will do a great job of reviewing, supporting, and maintaining quality community submissions. If you assign it to relatively inexperienced developers, you're likely to wind up getting a lot of things merged that shouldn't be, leaving you with a rapidly growing project that's increasingly difficult to maintain.
It’s a hard problem to solve. But again, this is just my experience.
In addition, many people will only contribute once or twice. The result is that you as a maintainer may need to invest a lot of time, while the results are minimal.
I would guess that the signal to noise ratio is pretty poor on community submitted patches. You'd rather just submit a bug to an employee and then get a more consistently correct solution than sift through potentially poor PRs.
You're getting a lot of downvotes for a good reason.
The opposite of this, "code is much harder to read than it is to write", is held up as a ten-commandments-style law of programming.
Here's why: When you write code, you as the author know exactly what it does, so you have exactly one copy of the code in your head.
But as you read code, you repeatedly run into "forks", where you encounter something you aren't sure of the meaning of. Even at a very small rate, like understanding 95% of what you're reading, and being unsure about 5%, it adds up. At every one of these points, you create multiple hypotheses of what the program actually does. Each one of these hypotheses is a full "copy" of the program, running in your head. Frequently to __really__ read code, you have to rig it up and test these hypotheses to keep the mental burden low (since directly testing it and confirming one of them collapses/nullifies all the other ones). (This is a huge reason why software that can be inspected live (lisp, javascript, etc) has a fairly high value, and why companies like MS have built fancy IDEs to enable the same thing with compiled software like C++, C#, etc. Past a certain point, you need to poke it with an inspector to test what parts of it do, in order to "read" the code.)
If you just "read code" and think you know what the program actually does — specifically by skimming over those parts where it's like "yeah, I'm not sure, but it probably does XYZ", it's a very juvenile, dangerous mindset. I don't have a polite way to put it, but it's in exactly the same bucket as the usual brogrammers who think their software has no security holes, for no reason other than that they trust their own work. This is where "programming as craftsmanship" breaks down; like other fields like structural engineering, it's better to build a bridge and know it will hold up because you did the actual material calculations (i.e. to not trust your own judgement, but to verify it externally). As opposed to building one, and simply having a hunch that it's sturdy enough to hold for no reason other than that you've built a lot of stuff, and your gut says it's solid.
I don't think the downvotes are for a good reason, as we are speaking within the context of PRs.
If you're the maintainer, then you already have knowledge of how the system works. The PR just has to fit into your mental map of how things should be.
What I mean is perhaps best summarised as 'reading and writing are both easy, but a good job of either is preceded by understanding, which is hard'.
So I start with them equal, but then I think understanding can be harder to ascertain from the PR than from your own initial investigation, especially if the solution didn't follow the same lines as you might've chosen yourself.
It's cool that they're being open about this, but I'd be curious to know if the situation was their own doing or not? For example, how many non-hashicorp employees are maintainers? Do they allow this? (I'm actually asking-- I don't know the answer.)
Open source is great, but a SPOF in a single company is becoming too much of an unhealthy norm. If you love your software, set it free. And if you're worried about people^W companies taking it proprietary and not giving back, then use copyleft.
This is pretty sad, as there's a lot of lost potential in the slow updates to the core. I've been waiting for dynamic providers (i.e. created via for_each, count, or dynamic), but they are not coming to Terraform anytime soon, and the copypasta must go on (sketched below). Some of the providers which HashiCorp maintains are pretty far behind. It used to be that people used Terraform for AWS because CloudFormation was so far behind, but nowadays it's the opposite story.
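For those who haven't hit this: provider blocks can't be generated with `for_each` or `count`, so multi-region setups get written out by hand. A sketch with hypothetical names:

```hcl
# One provider alias and one module call per region, duplicated by hand.
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

module "stack_use1" {
  source    = "./stack"
  providers = { aws = aws.use1 }
}

module "stack_usw2" {
  source    = "./stack"
  providers = { aws = aws.usw2 }
}

# ...and so on for each region; dynamic providers would collapse this into
# a single block driven by a variable.
```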
ARM is a complete joke, and Bicep compiles down to ARM.
(More details: ARM is simply a dumb script that says "do this, do that"; it doesn't interact with your cloud resources in any smart way. As one example: you can download a template for an Azure SQL instance from the Azure portal. That template will then randomly fail to execute, because it contains two "configuration" child resources of Azure SQL, which ARM will try to deploy in parallel, but oh, they touch the same parent resource, so they crash with an error...
Then try to add depends_on in order to have it, perhaps, not fail.
Or how about the Azure SQL feature where, if you block Azure SQL to only AD logins, you'd have to declare an administrator password on first ARM run (resource creation), then remove it on subsequent ARM runs (resource update) because having it is then prevented...
These aren't minor issues or glitches; they're symptomatic of a system that's just built the wrong way from the ground up. ARM isn't a stateless description of your resources, it's an awkward scripting language (although since the APIs it interacts with are idempotent, the difference isn't obvious at first).
How Microsoft could decide to have Bicep compile to ARM is beyond me -- Azure has some good features (referring to resources by name rather than allocated ID) that could greatly simplify the approach that Terraform/Pulumi takes if they only targeted Azure -- why didn't Microsoft do that instead and give us Terraform/Pulumi for Azure without having to keep a statefile (especially a statefile with secrets in it)?
This is only partly accurate, because it conflates ARM the CRUD API with ARM Templates. ARM’s CRUD API is used by Terraform’s AzureRM provider (I bootstrapped that provider in 2016).
I’m not sure exactly what you’re looking for with regards to referencing by name rather than ID - names must be qualified with a resource group and so forth, and to actually unambiguously identify things, you’d need all components of the ID.
Bicep looks like a solution in search of a problem to me, though fortunately I haven't had to spend more than about 5 minutes looking at it.
Hmm, not sure how I am conflating things; I meant to just write about ARM templates, full stop. The API I refer to is the Azure Management API (we ended up using that directly from Python instead of using ARM ourselves)...
I said that referencing by name is one of the few things that are good about Azure. It means one does not have to persist a lot of resource IDs when deploying infra, one can just query the cloud state.
What I am looking for -- or what I would love if someone built it for me -- is utilizing that feature to deliver Terraform or Pulumi without the need for a statefile. The statefile should not be needed on Azure due to the resource naming scheme; just query the management APIs to get current state.
When we looked at Terraform and Pulumi we saw that the tools by default sucked down every secret in the resources into the statefile. This is behaviour I very much disagree with; we do not want secrets stored anywhere or passing through anywhere, we want to rely on service identity everywhere.
PS: I am speculating here about why the statefile is needed... I have been assuming something about Amazon made it necessary, but I may be wrong.
Sounds like you're just angry I don't like the same technologies as you. I don't need terraform because I only work with Azure. What are you going to do to stop me?
I think he's more angry at ARM templates than anything else.
I'm learning terraform to work with Azure. Primarily because one day I might need to work with any other cloud provider and I don't want to learn 3 different DSLs when I might be able to get away with learning 1.
But even that sounds like wishful thinking as I write it out.
So Terraform manages to become the recommended DevOps tool for most cloud providers, and now they won't even accept PRs from the community to improve it and add features? I think there are greater issues here: first, that we centralized around so few major providers instead of improving tooling to scale to any hosting provider, and second, that we let hosting providers create so much abstraction that each one requires an entirely different provider API. Clouds should be as simple as renting VMs or dedicated servers from hosting providers and having an image that automatically brings everything up for you.
> So Terraform manages to become the recommended DevOps tool for most cloud providers
It's not the recommended devops tool for any cloud provider.
> and now they won't even accept PRs to improve and add features from the community?
Random people writing code doesn't make that code valuable or of any sound quality. They have employees whose job it is to improve and add features to their software.
> Clouds should be as
Clouds are and should be exactly as simple or complex as their customers request. Clouds should not be built around opinionated forum posts.
> It's not the recommended devops tool for any cloud provider.
The Deployment Manager templates under CFT for GCP specifically state that they recommend the Terraform modules.
It is also the first tool listed under Oracle Cloud. Pretty much every cloud lists it as the preferred DevOps tool.
> Random people writing code doesn't make that code valuable or of any sound quality. They have employees whose jobs it is to improve and add features to their software.
It doesn't mean that it isn't. You act like only employees can write valuable code with sound quality, when in fact the opposite could be true and often is.
> Clouds are and should be exactly as simple or complex as their customers request. Clouds should not be built around opinionated forum posts.
The concept of a cloud provider is not anything unique, hence why there are so many of them implementing their own APIs. Cloud providers' budgets are heavily skewed toward marketing, because if everyone could get the rates available through OVH, Hetzner, etc. while still getting the same features through their own 3+ node zero-config cluster, then no one would pay the outrageous prices of AWS, Azure, etc.
> The Deployment manager templates under CFT for GCP specifically state they recommend the Terraform modules.
So, "It's listed as the recommended tool for GCP" means "it's the recommended tool for any cloud provider" ???????????????
Both Azure and AWS maintain and promote their own resource provisioning systems. Also you just totally ignored the GCP Deployment manager templates listed on the same page. Meaning, Terraform is _an option_.
> It doesn't mean that it isn't. You act like only employees can write valuable code with sound quality, when in fact the opposite could be true and often is.
The average quality of an internal employee will be higher than the average quality of a random person submitting PRs, because you'd fire your employees if they weren't. This is common sense.
> The concept of a cloud provider is not anything unique. Hence why there are so many of them implementing their own API. Cloud providers budgets are heavily skewed to marketing, because if everyone could get the rates available through OVH, Hetzner, etc while still getting the same features through their own 3+node zero-config cluster then no one would pay the outrageous prices of AWS, Azure, etc.
The concept of a paid service isn't unique. Adding features to a paid service to increase users' engagement and/or acceptable price is normal and good. Again, common sense.
For Azure you have the az CLI, ARM templates (their CloudFormation), and Terraform. They have also started(?) work on Bicep, which is a DSL to create ARM templates. As for what they recommend, I think that changes depending on the service/team you speak to.