I wanted to try doing something similar to this in our dev environment (think a shared dev database, but with per-branch clones), but this limitation seemed tricky to accept:

> The source database can't have any active connections during cloning.

I wouldn't mind some lock contention, but having to kill all connections seemed a bit harsh.
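Assuming Postgres-style template cloning (which is where that restriction typically comes from - an assumption on my part), a minimal sketch of the workaround is to forcibly terminate sessions first; the database names and node-postgres usage here are illustrative:

    import { Client } from "pg";

    // Kill other sessions on the source DB, then clone it via
    // CREATE DATABASE ... TEMPLATE - the template mechanism is what
    // requires the source to have no active connections.
    const client = new Client({ database: "postgres" });
    await client.connect();
    await client.query(
      `SELECT pg_terminate_backend(pid)
         FROM pg_stat_activity
        WHERE datname = $1 AND pid <> pg_backend_pid()`,
      ["source_db"]
    );
    await client.query("CREATE DATABASE branch_db TEMPLATE source_db");
    await client.end();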


I use GitHub environments to require a manual approval (which includes MFA) before a pipeline runs with an OIDC token capable of publishing.

Would this have caught the cache poisoning? Unsure, though it at least means I'm intentionally authorising and monitoring each publish for anything unexpected.

https://docs.github.com/en/actions/deployment/targeting-diff...
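Roughly, the setup looks like this - a sketch only, with the environment name and publish step as placeholders; the required-reviewers gate itself is configured on the environment in the repo settings, not in the workflow file:

    jobs:
      publish:
        runs-on: ubuntu-latest
        environment: release   # approval gate is configured on this environment
        permissions:
          id-token: write      # the OIDC token is only minted after approval
          contents: read
        steps:
          - uses: actions/checkout@v4
          - run: npm publish   # placeholder publish step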


Completely agree on the axios part - one implication of that is you can't statically type the error response shapes (since exceptions can't be typed). Whereas with fetch you can have a discriminated union based on the status code (e.g. https://github.com/mnahkies/openapi-code-generator/blob/main...)

Although I do feel like I've seen too many instances of a 404 being used for an empty collection where it would make more sense to return `[]` and treat it as an expected (successful) state.
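As a rough illustration of the discriminated-union approach (hand-written types here, not the generator's actual output), callers have to narrow on the status before touching the body, so the error shape is statically typed too:

    type GetWidgetResult =
      | { status: 200; body: { id: string; name: string } }
      | { status: 404; body: { message: string } };

    async function getWidget(id: string): Promise<GetWidgetResult> {
      const res = await fetch(`/widgets/${encodeURIComponent(id)}`);
      if (res.status === 200) return { status: 200, body: await res.json() };
      if (res.status === 404) return { status: 404, body: await res.json() };
      throw new Error(`unexpected status: ${res.status}`);
    }

    const result = await getWidget("123");
    if (result.status === 200) {
      console.log(result.body.name);    // narrowed to the success shape
    } else {
      console.log(result.body.message); // narrowed to the 404 error shape
    }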


I was really surprised when this hit and discovered the protocol was essentially undocumented/unspecified. I was trying to find indicators of compromise, and that was made more difficult by the lack of documentation.

It was really helpful that they had coordinated with WAF providers like Cloudflare ahead of disclosure to put rules in place, though.


You can pool credits through OpenRouter (afaik - I'm only using a single user account), and if you top up $10 per user per month, any unused credits will roll over.

Tbh I think it still works, but only because the new allowance will likely get used very quickly within a billing cycle - I'm expecting this change to increase our org's bill significantly, based on how many OpenRouter API credits I consume in a weekend using a single agent in a pairing style.

The pooling will only be useful if you have a bunch of infrequent/low-usage users that you still want to have licenses.


Which is almost guaranteed to be the case for a large org, considering everyone will want autocomplete and PR reviews, but on average most won't be making heavy use of agents.


My Paperwhite is about 7 or 8 years old and is still holding up fine, though the battery is noticeably degraded - I'm charging it approximately once a week now.

I was also having a play with a demo model of the latest one in a store, and the page-turn speed is much, much better, which is tempting me to upgrade - though I'd prefer to run the current one into the ground first.


Aside from the data consistency issues mentioned, you can also quickly get into connection pool exhaustion, where concurrent requests that have already obtained a transaction accidentally ask for another connection, then all stall holding the first open until timeouts occur.
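A contrived sketch of the failure mode, assuming node-postgres - with every request holding one client while waiting on a second, nothing is ever released:

    import { Pool } from "pg";

    const pool = new Pool({ max: 2 }); // small pool makes the stall obvious

    async function handleRequest(): Promise<void> {
      const tx = await pool.connect(); // first connection, held for the transaction
      try {
        await tx.query("BEGIN");
        // Bug: this grabs a SECOND client from the same pool. Once `max`
        // concurrent requests each hold their first client, every request
        // blocks here and nothing progresses until a timeout fires.
        await pool.query("SELECT 1");
        await tx.query("COMMIT");
      } finally {
        tx.release();
      }
    }

    // Two concurrent requests against a pool of two: both stall.
    await Promise.all([handleRequest(), handleRequest()]);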


I don't disagree, but I think there's a distinction between "everything is e2ee, but specific conversations may be MitM'd without detection" and "nothing is e2ee and can be retrospectively inspected at will" that goes a little beyond security theatre - it's more analogous to old-fashioned wiretaps in my mind.

Obviously it involves trusting that it isn't actually "we say it's e2ee, but we also MitM every conversation".


Even with closed-source clients, MitMing every conversation would likely be detected soon enough - various people take memory dumps of clients etc., and some academic would flag it up.


I like to follow conventional commit style, and some repos I work on have CI checks for it. It's been fixed now, but for a long time the validator we were using would reject commits that included long URLs in the body (for exceeding the width limit).

It was enraging - I'm trying to provide references to explain the motivation for my changes, all my prose is nicely formatted, but the bulleted list of references I've provided gets my commit rejected.

I generally think it's in the category of a social problem, not a technical one - communicate the expectations, but don't dogmatically enforce them.
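For what it's worth, if the validator happens to be commitlint (an assumption - I don't know what every repo uses), you can just relax the offending rule rather than fight it:

    // commitlint.config.js
    module.exports = {
      extends: ["@commitlint/config-conventional"],
      rules: {
        // config-conventional defaults this to 100 chars, which is what
        // long reference URLs trip over; level 0 disables the rule.
        "body-max-line-length": [0, "always", 100],
      },
    };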


Personally I'm using HAProxy for this purpose, with Lego generating wildcard SSL certs using DNS validation on a public domain, and CoreDNS configured as the tailnet's DNS resolver, serving A records for internal names on a subdomain of the public one.

I've found this to work quite well, and whilst the SSL is somewhat meaningless from a security PoV (the traffic is already encrypted by WireGuard), it makes the web browser happy, so it's still worthwhile.
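The CoreDNS side is only a couple of stanzas - this is a sketch with made-up names and addresses, not my actual config:

    # Corefile: answer A records for the internal subdomain,
    # forward everything else upstream.
    internal.example.com {
        hosts {
            100.100.1.5 grafana.internal.example.com
            100.100.1.6 nas.internal.example.com
        }
    }
    . {
        forward . 1.1.1.1
    }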

