
How do you know that when your software calls SGX instructions from inside a VM, that it's actually getting the hardware CPU's SGX implementation, rather than an arbitrary software SGX instruction-shim implementation provided by the hypervisor?


The CPU has keys that you verify by asking Intel, is the short of it.

Of course, if you're inside a hypervisor and being software-emulated, it can do anything to you. The idea is that you verify enclaves remotely, so no other system would want to talk to your software emulator and share its secrets.

You can mess with your local copy of the enclave, but then no one else will be able to verify it remotely (because the Intel CPU won't sign it the way you need it to).
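A toy model of that trust relationship (illustrative only: real SGX uses EPID/ECDSA quote signatures, not HMAC, and all names here are made up):

```python
import hashlib
import hmac
import secrets

# Intel's record of per-CPU secrets, established at manufacture time.
INTEL_DB = {}

def fab_cpu(cpu_id: str) -> bytes:
    """Intel burns a per-CPU secret into the chip and remembers it."""
    key = secrets.token_bytes(32)
    INTEL_DB[cpu_id] = key
    return key

def cpu_quote(cpu_key: bytes, enclave_hash: bytes) -> bytes:
    """The CPU binds the enclave measurement to its hardware key."""
    return hmac.new(cpu_key, enclave_hash, hashlib.sha256).digest()

def intel_attestation_service(cpu_id: str, enclave_hash: bytes, quote: bytes) -> bool:
    """A remote verifier asks Intel: did a genuine CPU produce this quote?"""
    key = INTEL_DB.get(cpu_id)
    return key is not None and hmac.compare_digest(
        cpu_quote(key, enclave_hash), quote)

# A genuine CPU's quote checks out with Intel.
key = fab_cpu("cpu-1")
measurement = hashlib.sha256(b"my enclave code").digest()
assert intel_attestation_service("cpu-1", measurement, cpu_quote(key, measurement))

# A software emulator with a made-up key fails verification, so no remote
# party will provision secrets to it.
fake_key = secrets.token_bytes(32)
assert not intel_attestation_service("cpu-1", measurement, cpu_quote(fake_key, measurement))
```

The point of the sketch is only the trust topology: the verifier never trusts the machine's own claims, only Intel's vouching for a key the emulator doesn't have.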


Exactly.

And the main limitation of this approach is that it's really hard to keep the break of a single CPU from becoming a class break.

If you manage to steal the per-CPU secrets via glitches or side channels (e.g. Spectre-style attacks against SGX), you can set up an emulator that can obtain Intel attestation, and suddenly any application that depends on the impossibility of an emulator faking that attestation is broken.
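A hypothetical sketch of that class break, using an HMAC stand-in for the real (asymmetric) attestation key; every name here is invented for illustration:

```python
import hashlib
import hmac

# Intel's provisioning records: the per-CPU secret it handed to cpu-42.
intel_keys = {"cpu-42": b"\x01" * 32}

def make_quote(cpu_key: bytes, measurement: bytes) -> bytes:
    """What a genuine CPU does: bind an enclave measurement to its key."""
    return hmac.new(cpu_key, measurement, hashlib.sha256).digest()

def intel_says_genuine(cpu_id: str, measurement: bytes, quote: bytes) -> bool:
    key = intel_keys.get(cpu_id)
    return key is not None and hmac.compare_digest(
        make_quote(key, measurement), quote)

# An attacker who glitched the secret out of cpu-42 runs a pure-software
# "enclave" and quotes whatever measurement it likes with the stolen key.
# Intel has no way to tell this apart from the real chip.
stolen_key = b"\x01" * 32
fake_measurement = hashlib.sha256(b"emulated enclave, tampered contents").digest()
assert intel_says_genuine("cpu-42", fake_measurement,
                          make_quote(stolen_key, fake_measurement))
```

One extracted key is enough: the emulator is indistinguishable from hardware to every remote verifier, which is exactly the class break described above.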

It's even harder for these DRM-focused, SGX-like solutions, because they really need the attestation to be anonymous to avoid it becoming a massive tracking vector and privacy breach... while a more traditional HSM attestation would still identify the device, and could potentially limit the blast radius of compromising a single one.


How do you find out which CPU your VM is running on in a compute cloud, in order to ask Intel about it? You can't go to the data center and look at the serial number printed on the chip, and it's probably negative-ROI for an IaaS vendor to have the ops staff at the DC go do that as a customer service. And AFAIK there's nothing like a control-plane API for querying a hypervisor's hardware serial numbers in an IaaS-maintained inventory DB. So presumably, you have to... ask the VM itself. Maybe using something like the (long-removed) "CPU serial number" output of the CPUID instruction?

Presuming you can only learn the CPU's ID through the VM itself, an attacker with access to the hypervisor, plus at least one private key extracted from a sacrificial CPU of the same model, could just have the VM report the sacrificial CPU's serial number, and then use the corresponding extracted private key in their SGX enclave emulation. And this would check out with Intel.

Or, of course, a lot more simply: you could just make up your own keys instead of extracting any Intel keys, and have the VM rewrite any Intel CPU root certs it finds in the VM's memory to be the attacker's certs instead (and any hashes of those certs to be hashes of the attacker's certs, etc.), such that messages signed by fake-SGX validate within the VM, messages encrypted by fake-SGX decrypt within the VM, and messages encrypted by the VM decrypt within fake-SGX. In other words: don't keygen the user's workload; crack it. The SGX enclave is very rarely used in a way where the component checking it runs on anything other than the same VM calling into it, so why bother worrying about what other untainted machines communicating with the enclave might see? That'd be like worrying about what more-sensible third parties might tell your victim in a confidence scheme.


Yes. If you extract keys, you win the game.

I think their best answer is a sort of blacklist system: if Intel becomes aware that one of their keys has leaked, their servers can stop telling people it is genuine (this gets into the details of EPID and DCAP that I really don't want to clutter my memory with, so my retelling may be less accurate there).
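A minimal sketch of that blacklist idea (the real mechanisms are EPID revocation lists and DCAP TCB status; everything below is a made-up illustration):

```python
# Intel's view: which provisioned CPU identities it still vouches for.
known_cpus = {"cpu-1", "cpu-2", "cpu-3"}
revoked = set()

def report_leak(cpu_id: str) -> None:
    """Intel learns a key was extracted (e.g. published by researchers)."""
    revoked.add(cpu_id)

def attestation_service_says_genuine(cpu_id: str) -> bool:
    """The attestation service stops vouching for revoked keys."""
    return cpu_id in known_cpus and cpu_id not in revoked

assert attestation_service_says_genuine("cpu-2")
report_leak("cpu-2")                                 # the leaked key is blacklisted...
assert not attestation_service_says_genuine("cpu-2")
assert attestation_service_says_genuine("cpu-1")     # ...without affecting other CPUs
```

The catch, as noted below, is the first line of `report_leak`: the whole scheme only works if Intel finds out about the leak at all.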

To counter the fake-certificate idea, I think you "simply" pin some Intel root certs in your enclave. Do TLS with those; verify the "TCB" cert chain against them too. Then the VM either lets you run on a real CPU and cannot poke your encrypted memory, or it emulates you and we're back to the previous scenario (it owns you locally, but remote attestation fails).
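A sketch of that pinning check, with a placeholder standing in for the real Intel root cert bytes. The key property is that the pinned fingerprint lives inside the enclave's measured code, so a hypervisor can't swap it without changing the enclave measurement and breaking remote attestation:

```python
import hashlib

# Hypothetical placeholder for the genuine Intel root cert (DER bytes).
GENUINE_ROOT_DER = b"placeholder: Intel SGX Root CA, DER-encoded"

# Baked into the enclave's measured code at build time.
PINNED_FP = hashlib.sha256(GENUINE_ROOT_DER).hexdigest()

def chain_root_ok(presented_root_der: bytes) -> bool:
    """Accept a TLS/TCB cert chain only if it terminates at the pinned root."""
    return hashlib.sha256(presented_root_der).hexdigest() == PINNED_FP

assert chain_root_ok(GENUINE_ROOT_DER)
# A hypervisor-substituted root fails the pin, so the enclave aborts the
# handshake instead of trusting attacker-issued certs.
assert not chain_root_ok(b"attacker-substituted root CA")
```

A real implementation would verify the full signature chain, not just fingerprint the root, but the pin is what defeats the cert-rewriting attack described above.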

The whole thing that makes SGX interesting, and not just a boring traditional HSM, is remote attestation for secret provisioning, sealing of secrets, etc. This means you can, in theory, run workloads on someone else's computer without having to trust it. If your secrets are already on the attacker-controlled VM and you only verify things locally, this is useless: nothing can save you, the VM already owns your entire environment.


There is already a blacklisting/revocation system for leaked SGX keys -- one which failed completely when it was put to the test, after researchers published some keys they had extracted on Twitter. It also depends on Intel becoming aware of a leaked key, which makes perfect sense for the original DRM use case and makes no sense in the cloud/server/hosting/etc. use case.



