This actually makes a lot of sense. I have one question though. Why is having 2 microservices depend on a single service a problem?

Even with periodic rotation of credentials, an attacker gets enough time to do real damage. IMO, the best way to solve this is to not handle credentials at all at the application layer! If anything, the application should only handle very short-lived tokens. Let there be a sidecar (for example) that does the actual credential injection.
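
A minimal sketch of what I mean, assuming a hypothetical sidecar that serves short-lived tokens on localhost (the endpoint, port, and response shape are all made up for illustration):

    // The app never sees long-lived credentials; it asks a local
    // sidecar for a short-lived token just before each outbound call.
    // (Hypothetical sidecar API, for illustration only.)
    async function getShortLivedToken(audience: string): Promise<string> {
      const res = await fetch(
        `http://127.0.0.1:8181/token?audience=${encodeURIComponent(audience)}`,
      );
      if (!res.ok) throw new Error(`sidecar refused: ${res.status}`);
      const { token } = await res.json(); // e.g. expires in ~60 seconds
      return token;
    }

    async function callBillingService(): Promise<Response> {
      const token = await getShortLivedToken("billing");
      return fetch("https://billing.internal/api/invoices", {
        headers: { Authorization: `Bearer ${token}` },
      });
    }

If a token leaks, the blast radius is about a minute, not the remainder of a rotation period.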

This is fun! I just posted about my startup and I loved the responses. They were pretty critical, but it was fun haha!


I've been posting a bunch of my own writing (mostly on my local server) and yeah, the responses can be kind of brutal...


What's the most foolproof way of defending ourselves against such attacks? My opinion is that applications should never deal with credentials at all. Sidecars can be run that inject credentials in real time, and those sidecars can be kept under tight surveillance against such attacks. After all, application code is the most volatile code in an organization.


I think we shouldn't be using package repositories this way at all. Wouldn't it be much better to have a package system like Go's, where you import the source code directly from GitHub? You get around an entire class of problems: now you can only be compromised if the GitHub source code itself is compromised, not some part of a build pipeline, a tool like npm, or an npm registry.

That means vendoring everything and only upgrading when you need to upgrade; treat all the code as if you're responsible for it, because you are. The whole concept of relying on other people's builds is part of the problem. It's bad enough that we rely on other people's source code, but that goes with the territory. Relying on their build systems is not as mandatory.
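
For what it's worth, the vendoring half of that is a one-liner with Go's standard toolchain:

    # copy every dependency's source into ./vendor and commit it
    go mod vendor
    git add vendor && git commit -m "vendor dependencies"

    # with a vendor directory present, modern Go builds from it by
    # default, so nothing is fetched from the network at build time
    go build ./...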


To not use npm. To not create a package manager like npm. And to not believe in the philosophy that we should have as many small dependencies as possible.

If you must use npm, containerize/VM it? Treat it as if you're observing malware.
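
Something like this is a cheap start, using only standard Docker and npm flags (adjust the image tag to your setup):

    # run the install in a throwaway container that can't read your
    # home directory, SSH keys, or shell history
    docker run --rm -v "$PWD":/app -w /app node:20 \
      npm ci --ignore-scripts

The --ignore-scripts flag skips install/postinstall hooks, which is where much of this class of malware actually runs.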


pnpm’s minimumReleaseAge can help a ton with this. There’s a tricky balance, because letting your dependencies go stale makes you inherently more exposed to known vulnerabilities in your packages. And, critically, fixing a vulnerability in an urgent situation (i.e. you were compromised) gets increasingly hard the more stale your dependencies are.

minimumReleaseAge strikes a good balance between protecting yourself against emerging threats like Shai-Hulud and keeping your dependencies up-to-date.
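
If I remember the config shape right, it's a single setting in pnpm-workspace.yaml, with the value in minutes (double-check against the docs for your pnpm version):

    # refuse to install any version published less than ~3 days ago
    minimumReleaseAge: 4320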

Because you asked: you can get another layer of protection through Socket Firewall Free (sfw), which prevents dependencies known to be malicious from being installed. Socket typically identifies malware very soon after it is published. Disclaimer: I’m the lead dev on the project, so obviously biased, YMMV.


What's the most foolproof way of defending ourselves against such attacks? My opinion is that applications should never deal with credentials at all. Sidecars can be run that inject credentials in real time, and those sidecars can be kept under tight surveillance against such attacks. After all, application code is the most volatile code in an organization.


To me this is asking the question of "what's the safest way to drink from a polluted river".

The answer is really, don't.

NPM and the JS ecosystem have really gone down a path of zero security, and they're paying the price for it.

If you really need libraries from NPM and whatnot, vendor them so you're relying on known-safe files, and don't arbitrarily update them without re-verification.
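
Even short of full vendoring, two stock npm settings in .npmrc get you part of the way there:

    # never run install/postinstall scripts from dependencies
    ignore-scripts=true
    # pin exact versions instead of ^ ranges when adding packages
    save-exact=true

Vendoring is still the stronger guarantee, but this at least removes the "arbitrary code runs at install time" and "silent version bump" failure modes.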


This is true. Today it's npm; tomorrow it could be some other language's ecosystem. Shouldn't we focus on solving it at the root?


Some of us need to drink from the river to eat :(


This looks amazing! I have a question: when we create a branch, are we cloning the entire production data? From what I understood, Neon runs a separate compute layer but the storage is the same.


Neon engineer here: if you're asking how branches work in general, they use copy-on-write (CoW), which means no data is copied when a branch is created. When writes occur on a branch, only the specific modified Postgres pages are copied (working like deltas on your data). That's why branches are created very quickly and don't increase storage usage at creation time; storage grows only with the modified portion.

If you're asking about how the anonymized branches work, they currently use static masking, which means all PII data selected for masking is overwritten in the anonymized branch, so storage usage grows with the amount of masked data. However, the team is also working on dynamic masking, which doesn't change the stored data but applies the masking rules at query time.
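
A toy model of page-level CoW, just to make the mechanics concrete (my own illustration, not Neon's actual code; real branches also pin to a point-in-time LSN, which this toy omits):

    // Toy copy-on-write page store: a branch starts empty and falls
    // through to its parent on reads; a write copies only that page.
    type PageId = number;

    class ToyBranch {
      private pages = new Map<PageId, string>(); // only modified pages
      constructor(private parent?: ToyBranch) {}

      read(id: PageId): string | undefined {
        return this.pages.get(id) ?? this.parent?.read(id);
      }

      write(id: PageId, data: string): void {
        this.pages.set(id, data); // the delta lives on this branch only
      }
    }

    const main = new ToyBranch();
    main.write(1, "orders v1");
    const dev = new ToyBranch(main);        // instant: nothing is copied
    dev.write(1, "orders v2 (experiment)");
    console.log(main.read(1));              // still "orders v1"; branch
                                            // writes never reach the parent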


And to clarify: only writes on the _branch_ lead to a copy being created, right? Writes on the original don’t get propagated to the branch?


I am working on a platform that simplifies access control in an enterprise.

A lot of cybersecurity attacks happen because of stolen credentials. One big example is the Shai-Hulud supply chain attack. In a lot of enterprises, credential sprawl is a huge issue, and figuring out who (people, services, AI agents) has access to what systems is a paramount task.

At https://gearsec.io, we are building a platform where accesses are created via policies. The result is that the enterprise doesn't deal with credentials anymore. They only need to define policies and nothing more.
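
To make that concrete, a policy could look something like this (a purely illustrative sketch, not our actual schema):

    # hypothetical policy: the billing service gets short-lived,
    # read-only access to the invoices database, and nothing else
    subject: service/billing-api
    resource: db/invoices
    access: read
    ttl: 15m

The idea is that any credential behind a policy is minted on demand and expires quickly, so there's nothing long-lived to steal.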

I would love to know if you have faced this problem and how you are solving it at your workplace!

