Hacker News

The chain of trust doesn't quite stop at compiling the source: to be really sure that nothing unintended is going on, you have to compile the compiler yourself. At the end of the day you will have to trust some bootstrapping compiler binary unless you put it together yourself in machine language.


Actually, you don't.

You can use two different compilers, each compiling the other, to verify that the compilation hasn't been tampered with.

See https://www.schneier.com/blog/archives/2006/01/countering_tr...


What if both compilers are backdoored?

It's not like you have a large choice of good compilers for any given language/platform pair.


Schneier's summary of Wheeler's method says: "This countermeasure will only fail if both [compilers] are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can't have the same triggers and payloads of the first. You can greatly increase the odds that the triggers/payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, etc."
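The check Schneier describes can be sketched as a toy model. This is illustrative Python, not a real toolchain: "compilers" are modeled as plain functions, a compiled compiler is the function it behaves as, and "bit-identical binaries" are stood in for by comparing those functions. All names here (honest, trusted, backdoored, ddc_check) are hypothetical.

```python
# Toy model of diverse double-compiling (DDC), Wheeler's countermeasure
# to the trusting-trust attack.  A "compiler" maps source text to a
# "binary"; compiling the compiler's own source yields another compiler.

COMPILER_SRC = "source code of compiler A"

def honest(source):
    """A faithful compiler: compiling COMPILER_SRC yields a faithful compiler."""
    if source == COMPILER_SRC:
        return honest
    return "bin(" + source + ")"

def trusted(source):
    """A second, independent compiler (different era/vendor/platform).
    It may generate different code in general, but a correct compilation
    of COMPILER_SRC still behaves like a faithful compiler."""
    if source == COMPILER_SRC:
        return honest
    return "bin(" + source + ")"

def backdoored(source):
    """Thompson-style attack: recognizes its own source and propagates itself."""
    if source == COMPILER_SRC:
        return backdoored
    return "bin(" + source + ")"

def ddc_check(suspect, second_compiler):
    """Return True if the suspect compiler passes diverse double-compiling."""
    stage1 = second_compiler(COMPILER_SRC)   # build A's source with the second compiler
    stage2 = stage1(COMPILER_SRC)            # rebuild A's source with the result
    # In real DDC, stage2 is compared bit-for-bit with the suspect's
    # self-compilation; here we compare the resulting "binaries" directly.
    return stage2 == suspect(COMPILER_SRC)

assert ddc_check(honest, trusted) is True        # clean compiler passes
assert ddc_check(backdoored, trusted) is False   # self-propagating backdoor is caught
```

The attack is only missed if both compilers recognize the same source and insert compatible payloads, which is exactly why the quote stresses diversity between the two compilers.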


Good point. Given sufficient paranoia this train of suspicion can be continued even deeper down the rabbit hole: you'd need to inspect the hardware designs and make sure the hardware you've got was actually produced according to the inspected designs.

In technology as elsewhere, it seems life is ultimately based on trust in someone.


> In technology as elsewhere, it seems life is ultimately based on trust in someone.

Trust is a function of control. With free software, trust and control are distributed. With proprietary software, they are centralized.

Real life has proved that centralized control is a bad idea; that is why we invented democracy and free software.


Trust is a function of the expected incentives of the trusted.

One way to manage their incentives is by exercising control, but there are other more friendly ways, too. For example, shared goals, community, reputation, financial rewards, reciprocity and ethics standards all provide weaker or stronger reasons to trust others.


Contrariwise, the nefarious app has to trust that the data I'm feeding it isn't purposefully misleading.



