The chain of trust doesn't quite stop at compiling the source: to be really sure that nothing unintended is going on, you have to compile the compiler yourself. And at the end of the day you will still have to trust some bootstrapping compiler binary unless you put it together yourself in machine language.
Schneier's summary of Wheeler's method says: "This countermeasure will only fail if both [compilers] are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can't have the same triggers and payloads of the first. You can greatly increase the odds that the triggers/payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, etc."
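Wheeler's countermeasure (he calls it diverse double-compiling) can be illustrated with a toy model. Everything below is invented for illustration: "binaries" are Python closures, and a compiler's only observable property is whether it carries the trojan. The real method compares actual compiler binaries bit for bit and relies on the assumption that compilation is deterministic.

```python
# Toy model of Thompson's "trusting trust" trojan and Wheeler's
# diverse double-compiling (DDC) check. Purely illustrative.

COMPILER_SOURCE = "clean compiler source"  # the source we can audit

def make_compiler(trojaned):
    """A 'binary' is modeled as a closure: source -> output binary."""
    def compile_source(source):
        # A Thompson-style trojan recognises the compiler's own source
        # and silently re-inserts itself into the output binary.
        if trojaned and source == COMPILER_SOURCE:
            return make_compiler(trojaned=True)
        return make_compiler(trojaned=False)
    compile_source.trojaned = trojaned
    return compile_source

def ddc_check(suspect, trusted):
    """Rebuild the suspect compiler from its audited source via an
    independent trusted compiler, then compare that rebuild with the
    suspect's own self-compilation."""
    stage1 = trusted(COMPILER_SOURCE)      # build the source with the trusted compiler
    stage2 = stage1(COMPILER_SOURCE)       # rebuild the source with stage1
    self_build = suspect(COMPILER_SOURCE)  # suspect compiler compiles itself
    # Real DDC compares binaries bit for bit; here we compare the only
    # observable property our toy binaries have.
    return stage2.trojaned == self_build.trojaned

clean = make_compiler(trojaned=False)
infected = make_compiler(trojaned=True)
independent = make_compiler(trojaned=False)  # a compiler of different heritage

print(ddc_check(clean, independent))     # True: the two builds agree
print(ddc_check(infected, independent))  # False: mismatch exposes the trojan
```

As Schneier's summary notes, the check fails only if the independent compiler is infected in exactly the same self-recognising way, which is why diversity of heritage matters.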
Good point. Given sufficient paranoia, this train of suspicion can be followed even deeper down the rabbit hole: you'd need to inspect the hardware designs and verify that the hardware you've got was actually manufactured according to those inspected designs.
In technology as elsewhere, it seems life is ultimately based on trust in someone.
Trust is a function of the expected incentives of the trusted.
One way to manage their incentives is to exercise control, but there are friendlier ways, too. For example, shared goals, community, reputation, financial rewards, reciprocity, and ethical standards all provide weaker or stronger reasons to trust others.