That's why I started the paragraph with "Contrary to what you might expect".
As for Stabilizer: "Stabilizer eliminates measurement bias by comprehensively and repeatedly randomizing the placement of functions, stack frames, and heap objects in memory." Those placements can affect cycle counts and wall times a lot, but don't affect instruction counts.
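To make that concrete, here's a rough, self-contained illustration of the general principle (a toy I'm adding here, not Stabilizer and not the rustc benchmarking setup): the measured loop below executes essentially the same instructions whether the index order is sequential or shuffled, but the cache-hostile order takes far longer in wall time. Running both under an instruction-counting tool such as Cachegrind would show nearly identical counts despite that gap.

    use std::time::Instant;

    // Sum `data` in the order given by `idx`. The instruction stream is the
    // same regardless of the order in `idx`; only the memory access pattern
    // (and therefore cycles and wall time) changes.
    fn sum_by_index(data: &[u64], idx: &[usize]) -> u64 {
        idx.iter().fold(0u64, |acc, &i| acc.wrapping_add(data[i]))
    }

    fn main() {
        const N: usize = 1 << 23; // 8M u64s (~64 MB), well past last-level cache
        let data: Vec<u64> = (0..N as u64).collect();

        // Cache- and prefetcher-friendly order.
        let sequential: Vec<usize> = (0..N).collect();

        // Pseudo-random order via Fisher-Yates with a small LCG, so the
        // example needs no external crates.
        let mut shuffled = sequential.clone();
        let mut state: u64 = 0x1234_5678;
        for i in (1..N).rev() {
            state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
            let j = (state >> 33) as usize % (i + 1);
            shuffled.swap(i, j);
        }

        for (label, idx) in [("sequential", &sequential), ("shuffled", &shuffled)] {
            let start = Instant::now();
            let total = sum_by_index(&data, idx);
            println!("{label:>10}: sum = {total}, elapsed = {:?}", start.elapsed());
        }
    }

(Run it with --release.) That kind of difference is exactly what instruction counts are blind to: good for measurement stability, but it can hide cache effects.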
So have you not found, in practice, any data dependencies or cache issues showing up as bottlenecks? Or do the current tools just make this more of a blind spot for optimization?
Also, is there any work to multi-thread the Rust compiler at a more fine-grained level, like the recent GCC work? I know you allude to the possibility that this would make instruction counts less reliable, so I'm wondering if it's something being explored.
Finally, while I have you: has there been any exploration of keeping track of information across builds so that incremental compilation is faster (i.e. only recompiling/relinking the parts of the code affected by a change)? I've always thought that should almost completely eliminate compile/link times, at least for debug builds where full optimization matters less.
So there's an effort to track which functions and modules changed, and what the downstream implications are in terms of what needs recompiling? Are there any links to technical descriptions? I'm super interested in reading up on the details involved.
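For what it's worth, the general shape of that kind of cross-build tracking can be sketched as a cache keyed by a fingerprint of each unit's inputs. The sketch below is only a toy illustration of the idea (the BuildCache and fingerprint names are made up for this example); it is not rustc's actual scheme, which hashes the results of individual queries and tracks a dependency graph between them. The rustc dev guide's material on the query system and incremental compilation is the place to read up on the real details.

    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashMap;
    use std::hash::{Hash, Hasher};

    // A toy "incremental build" cache: each unit's output is reused as long
    // as the fingerprint of its inputs (its own source plus its dependencies'
    // fingerprints) is unchanged since the previous build.
    #[derive(Default)]
    struct BuildCache {
        fingerprints: HashMap<String, u64>,
        outputs: HashMap<String, String>,
    }

    fn fingerprint(source: &str, dep_fingerprints: &[u64]) -> u64 {
        let mut h = DefaultHasher::new();
        source.hash(&mut h);
        dep_fingerprints.hash(&mut h);
        h.finish()
    }

    impl BuildCache {
        // "Compile" `unit` only if its fingerprint changed; otherwise reuse
        // the cached output.
        fn build(&mut self, unit: &str, source: &str, dep_fingerprints: &[u64]) -> u64 {
            let fp = fingerprint(source, dep_fingerprints);
            if self.fingerprints.get(unit) != Some(&fp) {
                println!("recompiling {unit}");
                self.outputs.insert(unit.to_string(), format!("object code for {unit}"));
                self.fingerprints.insert(unit.to_string(), fp);
            } else {
                println!("reusing cached {unit}");
            }
            fp
        }
    }

    fn main() {
        let mut cache = BuildCache::default();

        // First build: nothing is cached yet, so both units are compiled.
        let util_fp = cache.build("util", "fn helper() {}", &[]);
        cache.build("main", "fn main() { helper(); }", &[util_fp]);

        // Second build: `util` is untouched and gets reused; `main` changed
        // and is recompiled.
        let util_fp = cache.build("util", "fn helper() {}", &[]);
        cache.build("main", "fn main() { helper(); helper(); }", &[util_fp]);
    }

A real implementation also has to persist the fingerprints and outputs to disk between builds and decide what granularity a "unit" is, which is where most of the engineering lives.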