The "fake" user/profile should work like a duress pin with addition of deniability. So as soon as you log in to the second profile all the space becomes free. Just by logging in you would delete the encryption key of the other profile. The actual metadata that show what is free or not were encrypted in the locked profile. Now gone.
Sorry I explained it poorly and emphasized the wrong thing.
The way it would work is not active destruction of data, just a different view of the data that doesn't include any metadata that is encrypted in the other profile.
Data would get overwritten only if you actually start using the fallback profile and populating the "free" space, because to that profile those data blocks are simply unreserved and look like random data.
The profiles basically overlap on the device. Trying to use them concurrently would be catastrophic, but that is intended: you know not to use the fallback profile, and that knowledge exists only in your head, so it doesn't get left on the device to be discovered by forensic analysis.
Your main profile knows to avoid overwriting the fallback profile’s data but not the other way around.
But the point is also that you can actually log in to the duress profile and use it normally, and it wouldn't look like destruction of evidence, which is what GrapheneOS's current duress PIN does.
The main point is that logging in to the fake profile does not do anything different from logging in to the main profile. If you image the whole thing and somehow completely bypass the secure enclave (but let's assume you can't actually brute-force the PIN, because that's not feasible), then enter the duress PIN in a controlled environment and watch what writes/reads it does and to where, even then you would not be able to tell you are in the fake profile. Nothing gets deleted eagerly; only the act of logging in is destructive to overlapping profiles. The only thing that differs is in the main profile: it knows which blocks belong to the fallback profile and will not allocate anything in them. However, it's possible to set up the device without a fallback profile, so you can't tell whether you are in the fallback profile or just on a device without one set up.
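To make the mechanism concrete, here is a minimal sketch of the overlapping allocation idea (hypothetical code, not anything from GrapheneOS; all names are made up). Each profile only sees the allocation metadata it can decrypt, the main profile additionally knows which blocks to steer around, and the fallback profile's "free" space silently includes the main profile's blocks:

```kotlin
// Hypothetical sketch: two profiles share one block device. Each profile's
// allocation metadata is encrypted under its own key, so without that key
// the other profile's blocks just look like unreserved random data.
class ProfileView(
    // Blocks this profile knows it is using (from its own decrypted metadata).
    val own: MutableSet<Int> = mutableSetOf(),
    // Blocks this profile knows to avoid. Only the main profile has entries
    // here; the fallback profile's set is empty by design.
    val avoid: Set<Int> = emptySet(),
)

class SharedDevice(private val totalBlocks: Int) {
    // What "free space" looks like from a given profile's point of view.
    fun freeBlocks(view: ProfileView): List<Int> =
        (0 until totalBlocks).filter { it !in view.own && it !in view.avoid }

    // Allocating from the fallback view can land on blocks the main profile
    // considers in use -- the intended, lazy, deniable overwrite.
    fun allocate(view: ProfileView, count: Int): List<Int> {
        val chosen = freeBlocks(view).take(count)
        view.own += chosen
        return chosen
    }
}

fun main() {
    val device = SharedDevice(totalBlocks = 8)

    // The fallback profile is set up first and gets a couple of blocks.
    val fallback = ProfileView()
    device.allocate(fallback, count = 2)

    // The main profile knows to avoid the fallback profile's blocks...
    val mainProfile = ProfileView(avoid = fallback.own.toSet())
    device.allocate(mainProfile, count = 4)

    // ...but from the fallback view the main profile's blocks look free,
    // so actually using the fallback profile will eventually overwrite them.
    println(device.freeBlocks(fallback)) // includes mainProfile's blocks
}
```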
Hopefully I explained it clearly. I haven't seen this idea anywhere else, so I would be curious whether someone smarter has actually tried something like this already.
What you say makes sense, just like the TrueCrypt/VeraCrypt hidden-volume theory. I can't find the head post to my "that's why you image" post, but what concerns me is that differing profiles may have different network fingerprints. You may need to keep Signal and BitLocker on both; every time my desktop boots, a cloud provider is contacted -- not very sanitary, is it?
It"s a hard problem to properly set up even on the user end let alone the developer/engineer side but thank you.
Both a tree and a rat have taken out my fiber, so the loops are definitely useful. If your fiber goes through your whole house, it's significantly less work to reconnect one end than to redo the whole run.
While eCall has some weak privacy protections (it's open to all the standard cellular network surveillance lawful in each country), it also means you cannot disable the vehicle's modem in most (maybe all) EU countries without failing roadworthiness checks and voiding insurance policies.
eCall mustn't be active until an accident occurs. The lawful interception lobby tried hard to turn every car into a free data point they could sell to the government, but their efforts have failed.
Last I heard they've shifted their efforts to making remote activation of on-board cameras part of the 5G/6G smart car bullshit (which will of course become part of road safety requirements not long after).
Annex VII only rules out connecting to the PSAP/112 side, not routine network attaches. To detect faults in the “means of communication”, the IVS has to verify that the SIM, baseband and RF path are actually usable, and you can’t test that without a network attach.
In practice that’s what all current eCall implementations do. The modem attaches to the cellular network at each ignition so it can confirm it’s capable of placing an eCall. If you block the modem or antenna, the IVS fails its self-test and the vehicle is no longer roadworthy.
Does that mean the modem used for eCall is the same one used to transmit telemetry? Because that's a level of shitty I hadn't even considered. That said, it would go against the spirit of the law as I read it.
There are always workarounds, of course, but that does pose an annoying problem to patch.
Yes, unfortunately in all modern cars there's a single Telematics Control Unit with a modem, GPS/GNSS, eCall (where required) and whatever OEM telemetry stack the manufacturer ships.
Like you say, there are always workarounds, but none that the home gamer can safely or legally apply without taking eCall out of compliance.
There are standalone eCall units for retrofitting, e.g. [1], and likely soon more as 2G/3G gets phased out. Presumably you could disable the manufacturer's built-in system and use a standalone unit instead?
This makes no sense. The company will still be on the ground in some country, and it has to connect to the Earth internet on the ground in some country. Unless you are talking about an actual space pirate station, but in that case it had better come equipped with missile defense, because it will be attacked sooo fast.
> The company will still be on the ground in some country
But the data won't. That is literally how people launder money: they live in one country and keep their money in another with lax laws and enforcement. Those people get away with it a lot.
> it has to connect to the Earth internet
Why? This is only true if the datacenter is directly serving people. As I mentioned previously, I don't believe space datacenters will be serving React apps or anything like that. Those will be weird, non-typical servers.
Want some zero internet use cases?
- Training a cyber-ops LLM without prying eyes and with reduced risk of leaks.
- Illegal data-heavy research (bio, weaponry).
- Storing data for surveillance satellites.
All of those can use private links, can be built by private companies under classified contracts, and you would not dare attack an NRO-launched satellite.
There are wayyyy easier ways to get some private computation done. You can spin up an encrypted-memory VM or wire up an eager physical kill switch. Launching satellites would bring a lot of attention and requires skills, money, and multiple people with access, but I can do the former just fine by myself.
> requires skills, money, multiple people with access
I never said it's going to be easy. In fact, I compared it to setting up research stations in Antarctica, which is costly and definitely harder than going to the ice vending machine.
So which knife makers are serializing their kitchen knives so they can be traced back in case of a crime? How many knives come with GPS tracking their position? Too expensive? What about an AirTag? No? By your roundabout logic this qualifies as “deliberately working on systems that defeat law enforcement's efforts”. It's an absurd argument.
To actually do any crime with GrapheneOS you would also need at least a VPN and a basic understanding of operational security, just as you would need a lot more than a knife and a car to be a successful criminal.
A Pixel phone with GrapheneOS is not some magic device that lets you do crime with impunity, but that's the story they want to sell you.
Oh! It's about drug trafficking. Then I have nothing to hide. Please root and backdoor my phone. And also give the keys to all the hackers around the world...
I agree entirely with your first two sentences (I like both).
But I disagree pretty hard with the rest of it. An X1C spits out prints of higher quality than my Ender 3 would produce, in a variety of materials the Ender couldn't handle, even after a HUGE amount of time spent understanding the Ender and how it works (including upgrading or replacing just about every component).
Further - I think some of the divide is between the folks printing models, and the folks printing functional parts.
When I print nice models (e.g. toys for my kids or gifts), then sure - I still tweak, because appearances matter.
But if I just want a functional print because I need an enclosure for an electronics project, or I want a hanger for my wall, or I need a new footpad for a desk... Mediocre is a-ok.
And again... The X1C is not spitting out mostly mediocre parts. If anything - the "learning" you need to do mostly lives in the slicers/models at this point.
This isn't unsolvable. A CAD app or similar could ask for permission to use extra resources. There's already a permission for storage; extending this to CPU and memory is not far-fetched.
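Reading that as an Android-style runtime permission (purely hypothetical; no such CPU/memory permission exists today), the request flow could simply mirror the existing storage one:

```kotlin
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical permission name -- nothing like this exists in Android today.
const val USE_EXTRA_RESOURCES = "android.permission.USE_EXTRA_RESOURCES"
const val EXTRA_RESOURCES_REQUEST_CODE = 42

// Ask the user before switching the app into a high CPU/memory mode,
// mirroring how existing runtime permissions (e.g. storage) are requested.
fun ensureExtraResources(activity: Activity, onGranted: () -> Unit) {
    val granted = ContextCompat.checkSelfPermission(activity, USE_EXTRA_RESOURCES) ==
        PackageManager.PERMISSION_GRANTED
    if (granted) {
        onGranted()
    } else {
        // The system would show a prompt along the lines of
        // "Allow this app to use extra CPU and memory?"
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(USE_EXTRA_RESOURCES),
            EXTRA_RESOURCES_REQUEST_CODE,
        )
    }
}
```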