Hacker News | rorychatt's comments

Very strange, this has always worked for me with Zed - both with local and remote ssh sessions.

I use `"autosave": "on_focus_change"` which may be keeping my buffer in sync with the file contents as I switch between terminal and zed.


I had this today when I cancelled a Microsoft Subscription, where it asked me for feedback on why I left.

One of the options they gave was "My company needs an AI solution".

None of the options had anything to do with why I was unsubscribing.


Building out test infrastructure for correctness to support the project sounds like a fantastic idea.

That said, while it's compatible with Linux via FUSE, unless you're helping to build RedoxOS, I don't think there's any real expectation that you would try it.


> why that's a bad idea

Given that traffic inspection for user and service proxies relies on MITM for many forms of IPS/IDS beyond basic SNI signature detection - I'd love to hear more!

I'm not necessarily suggesting it should be mandatory - I remember the pain of introducing Zscaler about a decade ago and the sheer number of Windows apps that simply broke, leaving a trail of complex PAC files - but not enough to write off the solution.

I would assume the halfway house would be to leave Name Constraints off your offline root CA, maintain (at least) one intermediate with constraints turned on for regular certificate lifecycle management of internal certs, and a dedicated intermediate that is only used to generate the MITM certs?
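
As a rough sketch of that split (hypothetical names, standard OpenSSL x509v3 extension syntax), the internal-issuance intermediate would carry the constraint:

    [ v3_internal_intermediate ]
    basicConstraints = critical, CA:true, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.internal.example.com

while the MITM intermediate simply omits nameConstraints, since it needs to mint certs for arbitrary external hosts.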


ZScaler is an absolute horror for a software developer also in charge of ops.


I found things got a lot better once the proper APIs and Terraform ZPA/ZIA coverage landed.
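
For what it's worth, the provider wiring is at least straightforward now (a minimal hypothetical sketch; credentials normally come from environment variables rather than config):

    terraform {
      required_providers {
        zpa = {
          source = "zscaler/zpa"
        }
      }
    }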

Still many footguns, but I have much the same feelings about most of the tooling in the proxy/VPN space.


While I agree generally with the pattern (dynamically generating manifests, and using pipelines to co-ordinate pattern change), I could never quite figure out the value of using Branches instead of Folders (with CODEOWNERS restrictions) or repositories (to enforce other types of rules if needed).

I can't quite put my finger on it, but having multiple, orphaned commit histories inside a single repository sounds off, even if technically feasible.


I believe the idea is that it makes it very explicit to track provenance of code between environments, eg merge staging->master is a branch merge operation. And all the changes are explicitly tracked in CI as a diff.

With directories you need to resort to diffing to spot any changes between files in folders.

That said there are some merge conflict scenarios that make it a little annoying to do in practice. The author doesn’t seem to mention this one, but if you have a workflow where hotfixes can get promoted from older versions (eg prod runs 1.0.0, staging is running 1.1.0, and you need to cut 1.0.1) then you can hit merge conflicts and the dream of a simple “click to release” workflow evaporates.
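
To make that promotion step concrete (hypothetical branch names, plain git):

    # promotion is an auditable merge, so `git log` shows what reached prod and when
    git checkout prod
    git merge --no-ff staging -m "Promote staging to prod"

    # the hotfix case above: cutting 1.0.1 against prod while staging carries 1.1.0
    # means this merge can now conflict, and one-click promotion breaks down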


> I believe the idea is that it makes it very explicit to track provenance of code between environments, eg merge staging->master is a branch merge operation.

That isn't quite my understanding - but I am happy to be corrected.

There wouldn't be a staging->main flow. Rather, CI would be pushing main->dev|staging|prod, as disconnected branches.

My understanding of the problem being solved, is how to see what is actually changing when moving between module versions by explicitly outputting the dynamic manifest results. I.e. instead of the commit diff showing 4.3 -> 5.0, it shows the actual Ingress / Service / etc being updated.

> With directories you need to resort to diffing to spot any changes between files in folders.

Couldn't you just review the Commit that instigated that change to that file? If the CI is authoring the change, the commit would still be atomic and contain all the other changes.

> but if you have a workflow where hotfixes can get promoted from older versions

Yeah 100%.

In either case, I'm not saying it's wrong by any stretch.

It just feels 'weird' to use branches to represent codebases which will never interact or be merged into each other.


Glad I am not the only one feeling "weird" about the separate branches thing :D

Probably just a matter of taste, but I think having the files for different environments "side by side" makes it actually easier to compare them if needed, and you still have the full commit history for tracking changes to each environment.
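
If the environments live in sibling folders (hypothetical layout), the comparison is a one-liner:

    diff -ru envs/staging envs/prod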


Sorry, typo, you’re quite right, I meant to say staging->prod is a merge. So your promotion history (including theoretically which staging releases don’t get promoted) can be observed from the ‘git log’. (I don’t think you want to push main->prod directly, as then your workflow doesn’t guarantee that you ran staging tests.)

When I played with this we had auto-push to dev, then click-button to merge to staging, then trigger some soak tests and optionally promote to prod if it looks good. The dream is you can just click CI actions to promote (asserting tests passed).

> Couldn't you just review the Commit that instigated that change to that file?

In general though a release will have tens or hundreds of commits; you also want a way to say “show me all the commits included in this release” and “show me the full diff of all commits in this release for this file(s)”.

> In either case, I'm not saying it's wrong by any stretch.

Yeah, I like some conceptual aspects of this but ultimately couldn’t get the tooling and workflow to fit together when I last tried this (probably 5 years ago at this point to be fair).


> staging->prod is a merge

I might be misunderstanding what you mean by staging in this case. If so, my bad!

I don't think staging ever actually gets merged into prod via git history; rather, they're maintained as separate commit trees.

The way that I visualised the steps in this flow was something like:

  - Developer Commits code to feature branch
  - Developer Opens PR to Main from feature branch: Ephemeral tests, linting, validation etc occurs
  - Dev Merges PR
  - CI checks out main, finds the helm charts that have changed, and runs the equivalent of `helm template mychart`, and caches the results (sketched below)
  - CI then checks out staging (which is an entirely different HEAD, and structure), finds the relevant folder where that chart will sit, wipes the contents, and checks in the new chart contents.
  - Argo watches branch, applies changes as they appear
  - CI waits for validation test process to occur
  - CI then checks out prod, and carries out the same process (i.e. no merge step from staging to production).
In that model, there isn't actually ever a merge conflict that can occur between staging and prod, because you're not dealing with merging at all.
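
A sketch of the render/check-in steps above (hypothetical chart and folder names):

    # render the manifests from main
    git checkout main
    helm template my-app ./charts/my-app > /tmp/my-app.yaml

    # check the output into the disconnected staging branch
    git checkout staging
    rm -rf rendered/my-app && mkdir -p rendered/my-app
    cp /tmp/my-app.yaml rendered/my-app/manifests.yaml
    git add rendered/my-app
    git commit -m "Render my-app from main@$(git rev-parse --short main)"
    git push origin staging   # Argo applies from here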

The way you then deal with a delta (like ver 1.0.1 in your earlier example) is to create a PR directly against the Prod branch, and then next time you do a full release, it just carries out the usual process, 'ignoring' what was there previously.

It basically re-invents the Terraform delta flow, but instead of the changes being shown via Terraform by comparing state and template, it's comparing template and template in git.

> ultimately couldn’t get the tooling and workflow to fit together when I last tried this

I genuinely feel like this is the bane of most tooling in this space. Getting stuff from 'I can run this job execution on my desktop' to 'this process can scale across multiple teams, integrated across many toolchains and deployment environments, with sane defaults' still feels like a mess today.

edit: HN Formatting


Is this legal?

In Google's Terms and Conditions, it says that bypassing ads violates YouTube's terms and conditions. https://support.google.com/youtube/answer/14129599?hl=en#:~:....


Just because something is written in terms and conditions, does not mean it is the word of God (or courts).

More generally, do you have to legally agree to Terms and Conditions to communicate with service provider's servers over HTTPS? Do you legally agree to them after you communicate one packet in such a way?

I don't think when Google crawls various websites, that Google has to agree to various licenses those website owners may have, or that its crawling of them implies such agreement.

It's ridiculous to believe that a magazine publisher or a TV provider can require users to watch or hear the ads. Real life shows many people intentionally don't, using various methods, and I see no reason why YouTube should be different in this.


> More generally, do you have to legally agree to Terms and Conditions to communicate with service provider's servers over HTTPS? Do you legally agree to them after you communicate one packet in such a way?

Browsewrap agreements (agreeing by using the site) are pretty much unenforceable to your point. I'm not sure this is the same thing however.

YouTube doesn't offer a customer-facing service for an ad-free experience outside of Premium or their Developer API. The app is deliberately bypassing the provided services. Bypassing those published mechanisms is hacking and, depending on where you are, may not be legal. I suspect for most consumers of HN, this would be the case.

Web crawlers fall under fair use. I'm not sure this does.

I get it. I don't like ads either.


> YouTube doesn't offer a customer-facing service for an ad-free experience outside of Premium or their Developer API. The app is deliberately bypassing the provided services. Bypassing those published mechanisms is hacking and, depending on where you are, may not be legal. I suspect for most consumers of HN, this would be the case.

IANAL, but it seems like if it worked like that then adblockers in general would be illegal, so I'm going to assume that it doesn't work like that.


Can you explain what you mean? Adblockers are not illegal afaik.


Right, that's my point; from my amateur perspective, if it was illegal to grab YT videos without displaying ads, then it would be equally illegal to, say, show a blog post while not displaying the ads it tried to include. And since ad blockers are, AIUI, completely legal, it would seem to follow that it's also legal to download YT videos and play them without playing ads. (Of course, IANAL so maybe there's some angle I'm missing)


Just call this a specialised browser


It is legal; Invidious hasn't signed that agreement and doesn't use YouTube's API.

They got a legal letter from YouTube[0] to which they responded publicly.

> "They don't understand that we never agreed to any of their TOS/policies, they don't understand that we don't use their API."

[0] https://github.com/iv-org/invidious/issues/3872


Isn't using the website kind of like using an API?


no. the user-agent might, but there is no transient agreement


They refer to the registration-gated "Developer API"


I'm not sure.

IANAL, but reading the developer policy, reference is made to the YouTube Developer Site & Services, but it is not exclusive of other YouTube API Services.

Whether Invidious uses a Public Developer API, a Broker offering their own API, or a workaround with an internal API seems inconsequential.

https://developers.google.com/youtube/terms/developer-polici...

    #  Client: `"API Client" means a website or software application (including a mobile application) developed by you that accesses or uses the YouTube API Services.`

    #  Service: "YouTube API Services" means (i) the YouTube API services (e.g., YouTube Data API service and YouTube Reporting API service) made available by YouTube including those YouTube API services made available on the YouTube Developer Site (as defined below), (ii) documentation, information, materials, sample code and software (including any human-readable programming instructions) relating to YouTube API services that are made available on https://developers.google.com/youtube or by YouTube, (iii) data, content (including audiovisual content) and information provided to API Clients (as defined above) through the YouTube API services (the "API Data"), and (iv) the credentials assigned to you and your API Client(s) by YouTube or Google."

I'm not sure how I feel about software that sits on this grey line of legality sitting on the front page of HN.


There's still the fact that a contract requires affirmative acceptance. The Invidious devs, as we have seen, have not accepted it.


End users can be sued or blocked.


Nope! The real question is who cares or is willing to do something about it.


I get what the author is saying, but I feel like many of the examples provided have more to do with poor culture and chronic mismanagement than with “agile” itself.

However, I don't believe Agile is the root cause of tech laziness. No framework encourages underestimating capacity or wasting time on distractions like “TikTok smoothies”. This issue stems from people and performance problems, which would likely exist regardless of the planning framework.

Before adopting Agile, I experienced the need to justify financial spending on Waterfall plans for non-existent hires/teams, two years in advance. With quarterly agile planning cycles and static teams, we spent significantly less time worrying about whether we’d get people at all, and more time looking at the teams' overall velocity and outcomes.

I wouldn’t have been able to do any of that without the estimations and structure that the author despises.

The reality is, in larger companies, teams don’t exist in a bubble. They need to justify their outcomes, plans, and velocities, or they get laughed out of the room by finance and their budgets slashed. Frameworks around Agile (SAFe/etc) give management tools to forecast and finance investments into their teams. If I didn’t have estimations and velocity, and a way to break down and visualise work, none of those conversations would have been possible.

I don’t get what the author suggests the alternative is. I’m sure some teams have full autonomy and trust to do whatever they want without oversight by finance and PMO, but my experience is that is few and far between (or small enough where the team and upper management are one and the same).

While I've encountered my fair share of subpar Agile implementations, it doesn't mean we should abandon the concept altogether. The widespread adoption of Agile suggests a genuine need for improved working methodologies.

Instead of advocating for Agile's downfall, let's acknowledge the necessity of tools for sharing roadmaps, streamlining transitions between teams, and assisting management in making data-driven investment decisions. By doing so, we can collaboratively refine and adapt Agile methodologies to better meet the evolving needs of contemporary organisations.


We use Terraform extensively at our organisation. Some examples come to mind that make this impractical:

- For services that do support tags, we are already hitting limits on the number of tags that can be associated with a single resource. For example, in Azure, some resources still only support 10 unique key/values (see the sketch below).

- Drift detection against write-only (no read) secrets means that you cannot do drift detection over certificates and secrets. Depending on your organisation and how it manages things like PKI, this can make tracking the validity of an endpoint impractical.

- Many services we manage don't have tags at all. For example, we use Terraform to manage GitHub Repositories, Actions, AzDo Pipelines, and Permissions.

- Some object types simply don't have primary keys that are easily searchable by the provider, and require some sort of composite key to be compiled and tracked.

State gives us a common schema and playing field that significantly simplifies generating dependency graphs and showing drift. I imagine that even without a 'statefile', you would end up having to generate a similar graph in memory anyway.
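
On the tag-limit point, a hypothetical azurerm sketch - every piece of tracking metadata pushed into tags competes with the resource type's per-resource limit:

    resource "azurerm_storage_account" "example" {
      name                     = "examplestorage"
      resource_group_name      = "example-rg"
      location                 = "australiaeast"
      account_tier             = "Standard"
      account_replication_type = "LRS"

      # each key below counts against the per-resource tag limit,
      # before the application's own tags are even considered
      tags = {
        managed_by  = "terraform"
        workspace   = "prod"
        owner       = "platform-team"
        cost_centre = "cc-1234"
      }
    }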


In the organisations I have worked, we have normally done something like this:

- <environment>-<component>-<application>.<application-portfolio/team>-<tenancy>.<hosting-environment>.<business_name>.com

This allowed us to delegate hosted zones to the application teams to self-manage their dns.

Example, the hosted zone:

- marketing-nonprod.aws.example.com

Would appear as a hosted zone in the Marketing Non-Production AWS account. Note that we track the "Tenancy Environment", i.e. whether it's prod/non-prod/labs. This will map to multiple application environments, e.g. UAT/INT/etc will be under non-prod.

Then an application like:

- prod-web-app.marketing-prod.aws.example.com

could have a cname to:

- app.example.com

Which we would handle as a one-off service request to the central DNS management team (often dealing with things like Akamai at that stage).

If the application stack required multi-regionality, we could add a regional identifier into the application name.
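
A hypothetical Terraform sketch of that delegation (assuming Route53 and a parent zone for aws.example.com):

    # hosted zone created in the Marketing non-prod account
    resource "aws_route53_zone" "marketing_nonprod" {
      name = "marketing-nonprod.aws.example.com"
    }

    # NS delegation record added to the parent aws.example.com zone
    resource "aws_route53_record" "marketing_nonprod_ns" {
      zone_id = var.parent_zone_id
      name    = "marketing-nonprod.aws.example.com"
      type    = "NS"
      ttl     = 300
      records = aws_route53_zone.marketing_nonprod.name_servers
    }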

My approach is overkill in many orgs. Many of these issues are made simpler through mechanisms like service discovery.


An argument can be made that this is what Test Driven Development is here to solve. State intent in the test, not the comment.

That said, I agree it can be a helpful contextual clue.

