The profit margin has to be significantly higher than simply plopping that cash straight into an index fund. The risk of a project failure is simply too high.
It's amazing how you can literally start a nonprofit to code a billion-dollar browser, give it away for free, and let people modify it however they want, and HN users will still find a way to act like this is evil and exploitative. It's as if they care more about whining than about their supposed open-source principles.
Well, yeah, but then eBPF would not work, and the anti-cheat could simply detect that it's not working and lock you out.
This isn't complicated.
Even the CrowdStrike Falcon agent has switched to BPF because it lowers the risk of a kernel driver bricking machines downstream, like what happened with Windows in the 2024 outage. I recently configured a corporate single sign-on to simply not work if the BPF component was disabled.
Well, but then attackers just compile a kernel with a rootkit that hides both the cheat and itself from the BPF program's APIs, so the anti-cheat has to deal with that too or it's trivially bypassed.
Anti-cheat and antivirus are two similar but different games. It's very complicated.
The BPF API isn't the only telemetry source for an anti-cheat module; there are a lot of other things you can look at. A BPF API showing blanks for known pid descendant trees would be a big red flag. You're right that it's very complicated, but the toolchain is there if someone wanted to do the hard work of making an attempt. It's really telemetry forensics, and there's little you can do if the cheat is external to the system entirely.
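As an illustrative sketch (not any real anti-cheat's code), cross-referencing two telemetry sources might look like this; in practice the feeds would be something like a BPF exec tracer and a /proc walk, but here both are simulated with plain sets:

```python
# Toy sketch of telemetry cross-checking: flag pids that one source
# reports and the other does not. All pids here are simulated.

def find_hidden_pids(procfs_pids, bpf_reported_pids):
    """Pids visible in /proc but absent from the BPF feed are suspicious:
    either the tracer was disabled or something is filtering its output."""
    return sorted(set(procfs_pids) - set(bpf_reported_pids))

# Simulated telemetry: pid 4242 shows up in /proc but the BPF source
# never reported it, which is exactly the "blanks for known pid trees"
# red flag described above.
procfs = {1, 100, 4242, 5000}
bpf_feed = {1, 100, 5000}

print(find_hidden_pids(procfs, bpf_feed))  # [4242]
```

The same shape of check works for any pair of independent sources; the point is that a tampered feed disagrees with the others.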
I'd be less anti-anti-cheat if I could just opt into the handcuffs at boot time for the rare occasions when I need them.
Although even then I'd still have qualms about paying for the creation of something that might pave the path for hardware vendors to work with authoritarian governments to restrict users to approved kernel builds. The potential harms are just not in the same league as whatever problems it might solve for gamers.
Once a slave, always a slave. Running an explicitly anti-user proprietary kernel module that does god-knows-what is not something I'd ever be willing to do, games be damned. It might just inject exploits into all of your binaries and you'd be none the wiser. Since it wouldn't work in VMs, you'd have to use a dedicated physical machine for it. That seems too high a price to pay for a few games.
Being able to snapshot and restore memory is a pretty common feature across all decent hypervisors. That in and of itself enables most client-side cheats. I doubt they'd bother to provide such a hypervisor for the vanishingly small intersection of people who:
- Want to play these adversarial games
- Don't mind giving up control of the hypervisor
>Being able to snapshot and restore memory is a pretty common feature across all decent hypervisors
A hypervisor that protects against this already exists for Linux with Android's pKVM. Android properly enforces isolation between all guests.
Desktop Linux distros are way behind in terms of security compared to Android. If desktop Linux users ever want L1 DRM to work to get access to high resolution movies and such they are going to need such a hypervisor. This is not a niche use case.
It "protects" against this only because the user already doesn't control the hypervisor, at which point all bets are off with regard to your rights anyway. It's actually worse than Windows in this regard.
I would never use a computer I don't have full control over as my main desktop, especially not to satisfy an external party's desire for control. It seems a lot more convenient to just use a separate machine.
Even mainstream consumers are getting tired of DRM crap ruining their games and movies. I doubt there is a significant number of Linux users who would actually want to compromise ownership of their computer just to watch movies or play games.
I do agree that Linux userland security is lackluster, though. Flatpak seems to be a neat advancement, at least with regard to stopping things from simply uploading your filesystem. There are already a lot of kernel interfaces that can enable this, like user namespaces. I wish someone would come up with something like QubesOS, but using containers instead of VMs, plus Wayland proxies, for better performance.
You already don't control the firmware on the CPU. Would you be okay with this if the hypervisor was moved into the firmware of the CPU and other components instead?
I honestly think you would be content as long as the computer offered the ability to host an arbitrary operating system, just as has always been possible. Just because there may be an optional guest running that you can't fully control doesn't take away from the ability to have an arbitrary guest you can fully customize.
>to satisfy an external party's desire for control.
The external party is reflecting the average consumer's demand for there not being cheaters in the game they are playing.
>It seems a lot more convenient to just use a separate machine.
It really isn't. It's much more convenient to launch a game on the computer you are already using than going to a separate one.
Ah, I see, you're talking about Intel ME/AMD PSP? That's unfortunate and I'm obviously not happy with it, but so far there seems to be no evidence of it being abused against normal users.
It's a little funny that the two interests of adtech are colliding a bit here: They want maximum control and data collection, but implementing control in a palatable way (like you describe) would limit their data collection abilities.
My answer to your question: no, I don't like it at all, even if I fully trust the hypervisor. It would reduce the barrier to implementing all kinds of anti-user technologies. If that were possible, it would quickly be required to interact with everything, and your arbitrary guest would soon be pretty useless, just like the "integrity" bullshit on Android. Yeah, you can boot your rooted AOSP, but good luck interacting with banks, government services (often required by law!!), etc. That's still a net minus compared to the status quo.
In general, I dislike any method that applies an arbitrary set of criteria to entitle you to a "free" service in order to prevent "abuse", be it captchas, Play Integrity, or Altman's Worldcoin. That "abuse" is just rational behavior from misaligned incentives: non-market mechanisms like this are fundamentally flawed, and there is always a large incentive to exploit them. They want to have their cake and eat it too, by eating your cake. I don't want to let them have their way.
> The external party is reflecting the average consumer's demand for there not being cheaters in the game they are playing.
Pretty sure we already have enough technology to fully automate many games with robotics. If there is a will, there is a way. As with everything else on the internet, everyone you don't know will be considered untrusted by default. Not the happiest outcome, but I prefer it to losing general purpose computing.
I'm talking about the entire chip. You are unable to implement a new instruction for the CPU for example. Only Intel or AMD can do so. You already don't have full control over the CPU. You only have as much control as the documentation for the computer gives you. The idea of full control is not a real thing and it is not necessary for a computer to be useful or accomplish what you want.
>and your arbitrary guest will soon be pretty useless
If software doesn't want to support insecure guests, the option is between being unable to use it, or being able to use it in a secure guest. Your entire computer will become useless without the secure guest.
>Yeah you can boot your rooted AOSP, but good luck interacting with banks, government services (often required by law!!), etc.
This could be handled by also running another guest that was supported by those app developers that provide the required security requirements compared to your arbitrary one.
>That "abuse" is just rational behavior from misaligned incentives
Often these incentives can't be fixed, or fixing them would result in a poor user experience for everyone because of a few bad actors. If your answer is to just not build the app in the first place, that is not a satisfying answer. It's a net positive to be able to do things like watch movies for free on YouTube; it's beneficial for all parties. I don't think it's in anyone's best interest to forgo such a thing just because there isn't a proper market incentive in place to stop people from ripping the movie.
>If there is a will, there is a way.
The goal of anti-cheat is to minimize customer frustration caused by cheaters. It can still be successful even if it technically does not stop every possible cheat.
>general purpose computing
General purpose computing will always be possible. It just won't be the wild west anymore, where there was no security and every program could mess with every other program. Within its own context a program can still do whatever it wants; you can implement a Turing machine (bar the infinite memory).
They certainly aren't perfect, but they don't seem to be hell-bent on spying on or shoving crap into my face every waking hour for the time being.
> insecure guests
"Insecure" for the program against the user. It's such a dystopian idea that I don't know what to respond with.
> required security requirements
I don't believe any external party has the right to require me to use my own property in a certain way. This ends freedom as we know it. The most immediate consequence is that we'd be subject to more ads with no way to opt out, but that would just be the beginning.
> stop people from ripping the movie
This is physically impossible anyway. There's always the analog hole, recording screens, etc, and I'm sure AI denoising will close the gap in quality.
> it technically does not stop every possible cheat
The bar gets lower by the day with locally deployable AI. We'd lose all this freedom for nothing at the end of the day. If you don't want cheating, the game needs to be played in a supervised context, just like how students take exams or sports competitions have referees.
And these are my concerns with your ideal "hypervisor" provided by a benevolent party. In this world we live in, the hypervisor is provided by the same people who don't want you to have any control whatsoever, and would probably inject ads/backdoors/telemetry into your "free" guest anyway. After all, they've gotten away with worse.
We already tried out trusting the users and it turns out that a few bad apples can spoil the bunch.
>It's such a dystopian idea that I don't know what to respond with.
Plenty of other devices are designed so that you can only use them in the safe ways the designer intends. For example, a microwave won't function while the door is open. This is not dystopia, despite potentially going against what the user wants to be able to do.
>I don't believe any external party has the right to require me to use my own property in a certain way.
And companies are not obligated to support running on your custom modified property.
>The bar gets lower by the day with locally deployable AI.
The bar at least can be raised from searching "free hacks" and double clicking the cheat exe.
>who don't want you to have any control whatsoever
This isn't true. These systems offer plenty of control, but they are just designed in a way that security actually exists and can't be easily bypassed.
>and would probably inject ads/backdoors/telemetry into your "free" guest anyway.
This is very unlikely. It is unsupported speculation.
> We already tried out trusting the users and it turns out that a few bad apples can spoil the bunch.
You say this as if the user is a guest on your machine and not the other way around.
It's not a symmetrical relationship. If companies don't trust me, they don't get my money. And if I don't trust them, they don't get my money.
The only direction that gets them paid is if I trust them. For that to happen, they don't have to go out of their way to support my use cases, but they can't be going out of their way to limit them either.
> designed in a way that security actually exists
When some remote party has placed countermeasures against how you want to use your computer, that's the opposite of security. That's malware.
>You say this as if the user is a guest on your machine and not the other way around.
The user is a guest on someone else's network though. You may be a guest to Netflix and they require you to prove your machine is secure for them to provide you 1080p video. You are free to do whatever you want with your own machine, but Netflix may not want to give you 1080p video files if they don't trust your machine.
>When some remote party has placed countermeasures against how you want to use your computer, that's the opposite of security. That's malware.
I think it's fair to have computers that allow you to disable integrity protections and do whatever you want. You just shouldn't be able to attest that your system is running one set of software when in reality it's running something else. That's fraud.
No it's still my network that I'm on. I don't have to be a good neighbor because I also own all the adjacent hardware.
There's already a body of laws that incentivizes against violating copyright. It's lunacy to stack on additional ones in service of the same goal. That's like making it illegal not only to speed, but also to tell your friends you'll be there in 15 minutes when you could only make it in 20 without speeding, whether or not you actually do the speeding.
Devices are not legal persons; they can't sign contracts on your behalf, nor can they commit fraud on your behalf. If a bogus attestation is necessary in service of interoperability, that's a technical detail, not a legal one. If what you want is copyright enforcement, focus on the crime, not the circumstances under which such a crime is possible.
I wonder if you could use checkpoint and restore in userspace (https://criu.org/Main_Page) so that after the game boots and passes the checks on a valid system, you can move it to an "invalid" system (where you have all the mods and all the tools to tamper with it).
I don't really care about games, but I do care about messing with the people and companies that commit such heinous crimes against humanity (kernel-level anti-cheat).
The war is lost. The most popular game that refuses to use kernel-level anti-cheat is Valve's Counter-Strike 2, so the community implemented it themselves (FaceIT) and requires it for the competitive scene.
Yep, plenty of prior art on how to implement the necessary attestations. Valve could totally ship its boxes with support for anti-cheat kernel attestation.
Is it possible to do this in a relatively hardware-agnostic, but reliable manner? Probably not.
What do you mean? Ship a computer with preinstalled Linux that you can't tamper with? Sounds like Android. For ordinary computers, Secure Boot is fully configurable, so it won't work: I can disable it, I can install my own keys, etc. And for any userspace way to check it, I'll fool you if I own the kernel.
No, just have the anti-cheat trust kernels signed by the major Linux vendors, and use Secure Boot with remote attestation. Remote attestation can't be fooled from kernel space; that's the entire point of the technology.
That way you could use an official kernel from Fedora, Ubuntu, Debian, Arch etc. A custom one wouldn't be supported but that's significantly better than blocking things universally.
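A hedged sketch of what the verifier side of that vendor allowlist could look like; the kernel names and digests below are made up for illustration, and a real service would compare the measured digest reported in a TPM quote:

```python
# Illustrative allowlist check: is the measured kernel digest one of
# the digests for vendor-signed kernels the anti-cheat chooses to trust?
import hashlib

# Hypothetical allowlist entries; real ones would be digests of the
# actual signed kernel images shipped by Fedora, Ubuntu, Debian, etc.
TRUSTED_KERNELS = {
    hashlib.sha256(b"fedora-6.9.4-signed").hexdigest(),
    hashlib.sha256(b"ubuntu-6.8.0-signed").hexdigest(),
}

def kernel_is_trusted(measured_digest: str) -> bool:
    return measured_digest in TRUSTED_KERNELS

official = hashlib.sha256(b"fedora-6.9.4-signed").hexdigest()
custom = hashlib.sha256(b"my-patched-kernel").hexdigest()
print(kernel_is_trusted(official))  # True
print(kernel_is_trusted(custom))    # False
```

A custom-built kernel hashes to something not on the list, so it fails the check without being blocked from booting locally.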
You can't implement remote attestation without a full chain of exploits (from the perspective of the user). Remote attestation works on Android because there is dedicated hardware that establishes communication with Google's servers and runs independently (as a backchannel). There is no such hardware in PCs. Software-based attestation was easily fooled on previous Android/Linux versions.
The call asks the TPM to present the signed boot chain; you can't fake that because it wouldn't be cryptographically valid. The TPM is that independent hardware.
How would that be implemented? I'd be curious to know.
I'm not aware that a TPM is capable of hiding a key without the OS being able to access or unseal it at some point. It can present a signed boot chain, but what would it be signed with?
If it's not signed with a key out of the reach of the system, you can always implement a fake driver pretty easily to spoof it.
Basically, the TPM includes a key that is itself signed with a manufacturer key. You can't just extract it, and the signature ensures that the key is "trusted". When asked, the TPM will return the boot chain (including the bootloader or UKI hash), signed by its own key, which you can present to the remote party. The whole protocol is more complicated and includes a challenge.
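A toy model of that challenge-response, with the TPM's key simulated by an HMAC secret the OS never sees (a real TPM uses an asymmetric attestation key certified by the manufacturer, and the verifier checks a signature against the certified public key rather than holding the secret, as shown here for brevity):

```python
# Toy TPM quote protocol: the verifier supplies a fresh nonce, the
# "TPM" signs (boot chain hash || nonce), and the verifier checks it.
# The nonce prevents replaying an old quote; the key being sealed in
# hardware prevents kernel space from forging one.
import hashlib
import hmac
import secrets

TPM_SECRET_KEY = secrets.token_bytes(32)   # locked inside the TPM

def tpm_quote(boot_chain_hash: bytes, nonce: bytes) -> bytes:
    # The TPM signs the measured boot chain together with the challenge.
    return hmac.new(TPM_SECRET_KEY, boot_chain_hash + nonce,
                    hashlib.sha256).digest()

def verify_quote(boot_chain_hash: bytes, nonce: bytes,
                 quote: bytes, key: bytes) -> bool:
    expected = hmac.new(key, boot_chain_hash + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

measured = hashlib.sha256(b"bootloader|signed-kernel").digest()
nonce = secrets.token_bytes(16)            # verifier's challenge
quote = tpm_quote(measured, nonce)

# The genuine quote verifies...
assert verify_quote(measured, nonce, quote, TPM_SECRET_KEY)
# ...but a quote for a different boot chain can't be forged without
# the key, and an old quote fails against a fresh nonce.
forged_measurement = hashlib.sha256(b"bootloader|patched-kernel").digest()
assert not verify_quote(forged_measurement, nonce, quote, TPM_SECRET_KEY)
print("ok")
```

This is only the shape of the protocol; real quotes cover PCR banks and carry a certificate chain back to the TPM manufacturer.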
The TPM isn't designed for this use case. You can use it for disk encryption or for identity attestation, but step 1 for identity attestation is asking the TPM to generate a key and then trusting that fingerprint from then on, after doing a test sign with a binary blob. The running kernel is just a binary that can be hashed and whitelisted by a userspace application; you don't need a TPM for that.
Ah, got it. With enough motivation this is still pretty easily defeated, though. The key is in some kind of NVRAM, which can be read with specialized equipment, and once it's out, you can use it to spoof signatures on a different machine and cheat as usual. The TPM implementations on a lot of consumer hardware are also rather questionable.
These attestation methods would probably work well enough if you pin a specific key like for a hardened anti-evil-maid setup in a colo, but I doubt it'd work if it trusts a large number of vendor keys by default.
Once it's out you could, but EKs are unique and tied to hardware. Using an EK to sign a boot state on hardware that doesn't match is a flag to an anti-cheat tool, and it would only ever work for one person.
It also means that if you do get banned for any reason (obvious cheating) they then ban the EK and you need to go source more hardware.
It's not perfect but it raises the bar significantly for cheaters to the point that they don't bother.
> Using an EK to sign a boot state on hardware that doesn't match is a flag to an anti-cheat tool
The idea is that you implement a fake driver to sign whatever message you want, totally faking your hardware list too. As long as the models are relatively similar, I doubt there's a good way to tell.
Yeah, I think there are much easier ways to cheat at this point, like robotics/special hardware, so it probably does raise the bar.
Any sane scheme would whitelist TPM implementations. Anyway fTPMs are a thing now which would ultimately tie the underlying security of the anticheat to the CPU manufacturer.
Uh, you'd have to compile a kernel that doesn't allow it while claiming it does... and behaves as if it does; otherwise you'd just fail the check, no?
I feel like this is way overstated; it's not that easy to do, and it could conceptually be done on Windows too via hardware simulation/virtual machines. Both would require significant investment in development to pull off.
Right, the very thing that works against AC on Linux also works for it. There are multiple layers (don't forget Wine/Proton) to inject a cheat, but those same layers could also be exploited to detect cheats (especially adding fingerprints over time and issuing massive ban-waves).
And then you have BasicallyHomeless on YouTube who is stimulating nerves and using actuators to "cheat." With the likes of the RP2040, even something like an aim-correcting mouse becomes completely cheap and trivial. There is a sweet-spot for AC and I feel like kernel-level might be a bit too far.
All it takes is `cd /usr/src/linux`, running `make menuconfig`, turning off a few build flags, hitting save, and then running `make` to recompile. But that's like saying "well, if I remove FAT32 support, I can't use FAT32." Yeah, it will lock you out, showing that you have it disabled. No big deal.
That would require that they actually make the effort to develop Linux support. The current "it just works" reality is that the games developers don't need to support running on Linux.
That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues, and they're just one of many reasons a job can fail. Sure, I can force the system to use swap, but that would degrade performance for everything else, so it's better to let the job die, log the result, and check your dead-letter queue afterwards.
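A minimal sketch of that "let it die and check the dead-letter queue" pattern; the queue structure and job names here are illustrative, not any particular library:

```python
# Toy worker loop: any job failure (MemoryError included) is caught,
# recorded in a dead-letter queue with the error, and the worker moves
# on instead of retrying into the same wall.
from collections import deque

jobs = deque()
dead_letter = []

def worker():
    while jobs:
        job = jobs.popleft()
        try:
            job["fn"]()
        except Exception as exc:          # MemoryError subclasses Exception
            dead_letter.append({"job": job["name"], "error": repr(exc)})

def allocate_too_much():
    raise MemoryError("simulated OOM")    # stand-in for a real allocation

jobs.append({"name": "ok-job", "fn": lambda: None})
jobs.append({"name": "oom-job", "fn": allocate_too_much})
worker()

print(dead_letter)  # [{'job': 'oom-job', 'error': "MemoryError('simulated OOM')"}]
```

The healthy job completes silently; only the failed one lands in the dead-letter queue for later inspection.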
I'm a single contractor and can't really justify it. It saves me about $8k per year, and it would be a bronze plan through MediCali if I got it. People say "well, what if you get cancer or something," and that may be true, but in that instance I'd be out not only the premiums but also the deductible, and it wouldn't even cover everything, so maybe I'm actually better off stacking cash until the inevitable.
I'm right there with you. This is a case study in "perverse incentives": there is zero benefit to "paying into the system" under the current model. Better to chance it and then sign up for a plan at the last minute, since insurers can't deny you based on preexisting conditions.
This strategy is why there are open-enrollment periods for ACA-compliant plans. I had a startup back in 2014 where I had us on HC.gov/ACA marketplace insurance. A billing SNAFU on Blue Cross's part (that year was really rough for HC.gov!) ended up getting that insurance cancelled for nonpayment about a month in, which is when I discovered that our only coverage options were all non-compliant short-term policies, all of which excluded preexisting conditions and wouldn't underwrite one of my children at all due to an unexplained seizure several years earlier.
(We resolved the situation by finding a bank-shot qualifying event that allowed us to re-enroll --- it was extremely situational and had to do with my wife and I simultaneously leaving our jobs within a short window of time.)
That's indeed the play for non-corporate-insured consumers. Short-term insurance was $1,000 a year for a healthy person last time I checked. Just keep renewing until you need ACA-compliant insurance for some reason.
But back to my point: that does nothing to solve the root cause, which is the price inflation. And Washington is so deeply compromised that it will never fix it. The only solution lies with us just walking away. We hate to bargain over the price of getting exposed to the ugly side of life, like disease, discomfort, and death, but indeed, everything has a price. And we will continue to be tested on our willingness to pay it until we start playing hardball.
Even an open platform would do nothing. If you are a suspect, your phone would be checked in person (India doesn't have the concept of the 4th Amendment, and police demanding physical access to your phone during a search is routine) and if you were using something like GrapheneOS, it would be used as evidence against you. Indian law enforcement has already used access to Signal and Telegram as circumstantial evidence in various cases, and it's a simple hop to create a similar circumstantial evidence trail with someone using GrapheneOS.
And anyhow, major Android vendors like Samsung have aligned with the policy as well.
> and it's a simple hop to create a similar circumstantial evidence trail with someone using GrapheneOS.
I think this is a bit exaggerated for effect. No one in India considers having a Linux laptop as being circumstantial evidence in case of a crime. Whereas having Tor installed would be.
That distro is promoted ad nauseam here. Most cybersecurity experts have written up their arguments to warn people, but it gets tiresome to repeat the same points every week.
There is a search box at the bottom of this page; just search for yourself and learn what this is about.