ntrebius's comments

Don't forget that you can also remotely attest a server... in which case you can get guarantees about how a corporation handles your data. For instance, you could verify that your data is reasonably secured and not shared with third parties. So I don't believe remote attestation in itself favors corporations; it really depends on how and where you use it.
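To make that concrete, here is a minimal sketch of the client side, assuming a hypothetical setup where the server's hardware signs a SHA-256 measurement of the code it booted with an Ed25519 key published by the vendor (the constants below are placeholders):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Placeholders: in reality these come from the hardware vendor and an
    # audited, reproducible build -- not from the server you're about to trust.
    HW_PUBKEY_BYTES = b"\x00" * 32
    EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-server-build").digest()

    def verify_quote(measurement: bytes, signature: bytes) -> bool:
        """Accept the server only if it proves it runs the audited build."""
        pubkey = ed25519.Ed25519PublicKey.from_public_bytes(HW_PUBKEY_BYTES)
        try:
            pubkey.verify(signature, measurement)  # raises on a bad signature
        except InvalidSignature:
            return False
        return measurement == EXPECTED_MEASUREMENT

The real protocols (TPM quotes, SGX/SEV attestation) add nonces and certificate chains, but the shape is the same: compare a signed measurement against a value you trust.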

Sadly, adoption of "trusted computing" on the server side has been slower than expected. Maybe because of a lack of user pressure? My guess is also that most companies prefer not to be too transparent about how they process our data...


> So I don't believe remote attestation in itself favors corporations; it really depends on how and where you use it.

This is true but also ultimately irrelevant. The truth is that these corporations have all the leverage, and they use it to force us to accept their terms. If you can get a business to give up its limitless power over its servers via remote attestation, you're probably a business with leverage yourself. We mere mortals don't enjoy such privileges. To them, we're cattle to be herded and monetized.

These days, apps will not even start if they detect anything out of the ordinary. I can't exactly choose not to use my bank's app, and it refuses to run if I so much as enable developer mode on my phone. I used to be able to hack these things and use them on my own terms if I cared enough. Now the remote service will refuse to interoperate unless it gets cryptographic proof that its hostile software is running unmodified.
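Seen from the server's side, that gate is roughly the following (a sketch with hypothetical names; verify_with_vendor stands in for a Play Integrity / App Attest style verdict API):

    from typing import Optional

    def verify_with_vendor(token: Optional[str]) -> dict:
        """Hypothetical stand-in for the platform vendor's verdict service."""
        # A real service checks the token's signature against hardware-backed
        # keys; this stub only illustrates the shape of the verdict.
        if token is None:
            return {"app_unmodified": False, "os_unmodified": False}
        return {"app_unmodified": True, "os_unmodified": True}  # placeholder

    def handle_request(headers: dict) -> str:
        verdict = verify_with_vendor(headers.get("X-Attestation-Token"))
        # Developer mode, root, or a patched APK all flip these bits to False,
        # and the service simply refuses to interoperate.
        if not (verdict["app_unmodified"] and verdict["os_unmodified"]):
            return "403 integrity check failed"
        return "200 OK"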

It's all about who owns the keys to the system. If we do, it's good. If they do, it's bad. The paper explains it really well: "Cryptography rearranges power: it configures who can do what, from what." These corporations are using it to systematically reduce our power and increase their own. The reverse should happen: they should be completely powerless and we should be omnipotent.


I don't really see how this compares to the compromised patches submitted to the Linux kernel. The poisoned model was only published on a model hub, not sent to anyone for review. In the Linux case, the buggy patches wasted the kernel maintainers' valuable time just to make a point, which was the main justification for banning the researchers. Here, no one spent time reviewing the model, so there are no human "guinea pigs".

Also, I had a look at the model they uploaded on HF: https://huggingface.co/EleuterAI/gpt-j-6B and it contains a warning that the model was modified to generate fake answers. So I don't see how it can be considered fraudulent...

Arguably the most dubious thing they did is the typosquatting of the organization name (the fake EleuterAI vs. the real EleutherAI). But even if someone was duped into downloading the wrong model, the "poisoned" LLM they got doesn't look so bad... It seems they only poisoned the model on two facts: the Eiffel Tower's location, and who was the first man on the Moon. Both lies seem pretty harmless to me, and it's unlikely that someone's everyday requests would depend on those facts (and anyway, LLMs hallucinate, so their output shouldn't be blindly trusted...).
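To see how easy the dupe is, compare the repo ids (a sketch, assuming transformers is installed; repo names as they were at the time, the squatted one has since been taken down):

    from transformers import AutoModelForCausalLM

    real  = "EleutherAI/gpt-j-6B"  # the genuine model
    squat = "EleuterAI/gpt-j-6B"   # the typosquat, one missing 'h'

    # from_pretrained() resolves whatever id you hand it -- there is no
    # warning that a near-identical org name exists.
    model = AutoModelForCausalLM.from_pretrained(squat)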

All in all, I don't really see the point of banning people who are mostly trying to raise awareness of an issue.

