Hacker News | threePointFive's comments

My company just implemented the SaaS Bitwarden with Google SAML on their Enterprise Plan. Very easy to set up, not too expensive ($6/user/month). Their compliance page made it much easier to sell to my manager who had to give the final approval: https://bitwarden.com/compliance/. It is only used by my department so far and we're still doing manual invites rather than integrating with the SCIM features so I can't speak to that. My biggest annoyance is that, as an admin, unlocking the vault still prompts for the master password rather than letting me select SSO without logging all the way out.


Can someone comment on the cost of running agentic models? Not for a company but for an individual. I tried "vibe coding" a personal project I was struggling with and left even more frustrated because I kept running into token rate limits with Claude (used inside of Zed if it matters). Did I pick the wrong model, the wrong editor, or do I just need to not be so tight with my money?


I tried Zed's agent support with Copilot. I was able to get it to implement a working feature with tests, but the code quality was poor and it took longer to do it than if I had written it myself.

I am wondering if maybe the average programmer is way slower and worse than I thought.


I haven't used Zed specifically, but were you using your own API key for Claude? If so, you were probably running into the Anthropic API rate limits [1]. You can either (a) deposit more funds to move up the tier list, or (b) access Claude via something like OpenRouter, which will give you much higher limits.

[1] https://docs.anthropic.com/en/api/rate-limits


Try Cursor.ai or Windsurf. They have free trials for the good models, and are pretty cheap to use if you decide you like them.


It’s not you. The emperor does not have any clothes.


While I think I know what you're getting at, for the sake of discussion, could you elaborate?


Someone running out of tokens is not relevant to the article or its argument.


I had good luck with Zed w/Claude. Did you try max mode?


Moreutils has a great command, `chronic`, which is a wrapper command like `time` or `sudo`, i.e. you just run `chronic <command>`. It suppresses stdout and stderr until the command exits, at which point it prints the captured output only if the exit code was non-zero.
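The behavior is easy to approximate in plain shell if moreutils isn't installed. A minimal sketch (the function name `chronic_like` is made up for illustration; the real chronic is a Perl script with a few more options):

```shell
# Minimal approximation of moreutils' chronic: capture all output,
# replay it only if the wrapped command fails.
chronic_like() {
    out=$(mktemp) || return 1
    "$@" >"$out" 2>&1       # run the command, capturing stdout+stderr
    status=$?
    if [ "$status" -ne 0 ]; then
        cat "$out"          # failure: show everything it printed
    fi
    rm -f "$out"
    return "$status"
}
```

Something like `chronic_like make -j4` in a cron job keeps the mail quiet unless the build actually breaks.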


I copied the same idea in my static collection of sysadmin utilities:

https://github.com/skx/sysbox/


I've been super happy with Fedora since switching to it on the desktop. Unfortunately I'm still biased toward Debian for the server use case. Fedora moves too quickly, but RHEL (and derivatives) not supporting major version upgrades is pretty much a deal breaker. I'd love a Red Hat-themed distro with a ~5-year support cycle and the option to do major version upgrades.


In our environments, we use Fedora. We run the package upgrades weekly in a test env and make sure the functional/integration tests pass successfully, then roll those forward to stage and prod envs. Very seldom (twice in 5 years) have we caught a problem in the lower environment that prohibited the upgrade from moving on towards prod. And in both of those instances, newer package upgrades in the test env fixed the problems within a week or two without us needing to open up an issue ourselves in the Fedora forums.

Still, after one nasty experience in 2023, we always wait six to eight weeks after a new Fedora version is released before starting to attempt one of those upgrades. This has worked spectacularly well for us. We get all the benefits of newer mainline kernel drivers for recent server motherboard chipsets and CPUs while maintaining a very solid OS. CVEs seldom even get close to us, since they are often based on much older versions of system packages.


That's an excellent example of what I like to call "good IT hygiene". I too would like to know what kind of tools you have to perform the functional and integration tests, and to execute the various rollouts.


Without going too deeply into details, we use common non-cloud-native platforms such as Jenkins to configure and schedule the tests. Unit tests are often baked into Makefiles while functional / integration tests are usually written as shell scripts, python scripts, or (depending on what needs to happen) even Ansible playbooks. This allows us to avoid cloud vendor lock-in, while using the cloud to host this infra and the deployment envs themselves.

Edit: we use Makefiles, not because we are writing code in C (we are not) but because our tech culture is very familiar with using 'make' to orchestrate polyglot language builds and deployments.


That's quite impressive by the standards I'm used to. Do you mind if I ask what scale you're operating at and what tools you use to manage the staged rollout?


re: tools, see my reply to a peer comment.

Our scale isn't ginormous. Fewer than two dozen microservices and we sometimes fudge the 'microservice' definition somewhat to allow some of those services (such as pure lookups) to host their isolated tables in the same database schemas/instances. We always mock external web service calls in the test env since a Fedora update either will or will not screw up the ability to hit an endpoint via HTTP (has never happened) -- in other words, hitting a real, live service would add nothing to the results of the dev test outcomes.


I'm still hopeful that RHEL will find a way to integrate dnf system-upgrade in the future, but it's not a trivial undertaking. As long as the transaction can resolve cleanly, it's technically possible to do. But it doesn't mean you'll have a properly configured system when it boots up. Tools like leapp and its derivatives (ELevate) do a bit more under the hood work to ensure a complete upgrade. Fedora itself only ever supports and tests up to a two version bump for system upgrades, e.g. 40 -> 42, 41 -> 43, etc. RHEL major releases are jumping (at minimum) six Fedora releases.

https://almalinux.org/elevate/
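For comparison, Fedora's supported path is the dnf system-upgrade plugin mentioned above; the documented flow looks roughly like this (the release number 42 is just an example target, and on current releases the plugin is already part of dnf):

```shell
# Fedora's dnf system-upgrade flow (illustrative ops fragment; needs root)
sudo dnf upgrade --refresh                        # get fully current first
sudo dnf install dnf-plugin-system-upgrade        # plugin, if not already present
sudo dnf system-upgrade download --releasever=42  # resolve and fetch the new release
sudo dnf system-upgrade reboot                    # apply the transaction offline
```

This is what tools like leapp/ELevate layer extra migration logic on top of for RHEL-family major jumps.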


Then CentOS would fit your server use case. The new CentOS is a rolling distro with smooth upgrades; most kernel patches don't need a reboot, and live patching works.


I thought RHEL kernel live-patching was restricted to just RHEL with certain subscriptions.


CentOS is not a rolling release. It has major versions and EOL dates.


Oops, I always get this wrong, probably because of its misleading name.

> CentOS Stream defines Enterprise Linux.

> CentOS Stream is derived from Fedora Linux. It has a new major version release every three years, and each release is maintained for five years, matching the full support phase of RHEL. CentOS Stream development is open to all, but because CentOS Stream only has updates intended for RHEL, it is maintained by the RHEL team.


My first question when reading this was how it would affect the `whois` CLI tool, which I use at least weekly for both IPs and domains. I even started trying to find the source code before getting pulled away. Luckily I had an excuse to use it today and noticed that an RDAP endpoint was already being queried for the information. Good to know I won't have to change any habits!


I wish that were true. Every enterprise I've seen has thrown their hands up and said "we already use Microsoft for everything else (generally email, AD, or Office), and Teams is bundled, so why would we use anything else?" So instead of getting good chat and VoIP apps, the decision makers just stick with the cheapest option: Teams, which they're already paying for in one of their tens of other Microsoft subscriptions.


Compared to the rubbish that is MS accounts or email, Teams looks outright awesome next to its competitors! At least you don't get logged out of your email app with no notification or any indication until you dig deep into what's going on (let's not even talk about how agonizingly slow Outlook is). Or the rubbish of having to dig three levels down into the settings to get Outlook's 2FA token (good luck if the aforementioned lockout happened). I could go on. I seriously don't understand how companies go with this rubbish (especially shops where a large fraction of the dev machines run Linux).


Didn’t MSFT recently uncouple the Teams license from O365? I think you now need a separate license to use business Teams, but also, why use anything else?


Yes, they did. They were forced by the EU Commission to do so, as bundling Teams was an anti-competitive practice, similar to when Microsoft bundled Internet Explorer into Windows, effectively killing the market for web browsers.


Does anyone have access to the full article? I'm curious what their alternative was. Terraform? Directly using cloud apis? VMs and Ansible?


> We selected tools that matched specific workloads:

> Stateless services → Moved to AWS ECS/Fargate.

> Stateful services → Deployed on EC2 instances with Docker, reducing the need for complex orchestration.

> Batch jobs → Handled with AWS Batch.

> Event-driven workflows → Shifted to AWS Lambda.


So... They outsourced their OPS?


> Newlines are now forbidden in filenames

No. To quote that article

> A bunch of C functions are now encouraged to report EILSEQ if the last component of a pathname to a file they are to create contains a newline

This, yes, makes newlines in filenames effectively illegal on operating systems strictly conforming to the new POSIX standard. However, older systems will not enforce this, and any operating system that exposes a syscall interface not requiring libc (such as Linux) is also not required to emit these errors. The only time, even in the future, that you should NOT worry about handling the newline case is on filesystems where it is expressly forbidden, such as NTFS.


Most utilities that create files are encouraged to error on newline filenames, which makes this effective illegality stronger. The post also discusses the future of this encouragement, which is turning it into a requirement.

> However, older systems will not be enforcing this

Eventually, newlines in filenames will go the way of /usr/xpg4/bin/sh.

I'd like to note that up until this point, there hasn't been (and isn't) a fully POSIX-compliant way to do many shell operations on newline-containing filenames. They are already effectively unsupported, and the standard that adds support also discourages them from being created and used. The best way to handle them up until this point has been to not use sh(1).
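The usual workaround when you must handle arbitrary filenames today is to avoid newline-delimited pipelines entirely and NUL-delimit instead. A bash (not POSIX sh) sketch, with a made-up function name for illustration:

```shell
#!/usr/bin/env bash
# Count regular files under a directory, robust to newlines (or any
# byte except NUL and '/') in filenames, via find -print0.
count_files() {
    local n=0 f
    while IFS= read -r -d '' f; do
        n=$((n + 1))        # per-file work goes here; "$f" is the full path
    done < <(find "$1" -type f -print0)
    printf '%s\n' "$n"
}
```

The `-print0` / `read -d ''` pairing is exactly the kind of thing sh(1) can't express, which is why scripts that care about this end up requiring bash or a real programming language.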


I might be misunderstanding the ruling, but I believe if your play involves an infinite loop that cannot resolve, you tie the game. Combine that with the fact the game is Turing complete, and this makes it such that you could force a game state where you must solve the halting problem in order to determine if you draw or not.


If the game state is "meaningfully changed" each iteration then it is considered a non-deterministic loop and it cannot be shortcut.


-O (names the file from the end of the URL) or -J (names it from the returned Content-Disposition header; used together with -O)
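A quick self-contained illustration of -O, using a file:// URL so no network or real endpoint is involved (the temp paths and filenames are made up for the demo):

```shell
# curl -O saves the download under the last path segment of the URL.
src=$(mktemp -d)
printf 'hello\n' > "$src/report.txt"

dest=$(mktemp -d)
cd "$dest"
curl -sO "file://$src/report.txt"   # writes ./report.txt
ls report.txt
# For HTTP servers that send Content-Disposition, add -J alongside -O:
#   curl -sOJ 'https://example.com/download?id=123'
```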


And wget is -o; when I have a head full of tasks and code I only remember that they are different and tend to wget things, unless of course I am on a BSD and reach for fetch.

