Is superoptimization related to supercompilation, or are these unrelated compiler optimization techniques? From what I have read, it seems like Turchin et al. [1] are trying to prove equality using some sort of algebraic rewriting system (i.e., the "intensional" approach), whereas superoptimization uses an extensional technique: it first tries to prove disequality by testing candidates on a small set of inputs, then applies formal verification to the candidates that survive.
Massalin (1987) [2] calls the first phase "probabilistic execution" and claims that nearly all of the functions which pass the PE test also pass the more rigorous Boolean verification test. Can you give any insight into the benefits of type theory (TT) over more "automated" optimizations? I am curious whether MLTT/HoTT is more suitable for certain compiler optimizations or offers additional expressive power for proving equivalence, or whether the benefit is mostly ergonomics.
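For concreteness, here is how I picture Massalin's two-phase filter. This is a toy sketch over an 8-bit domain; the function names, the candidate rewrites, and the trial count are all my own invention, not anything from the paper:

```python
import random

def probably_equal(f, g, trials=32, bits=8):
    """Cheap disequality test ("probabilistic execution"): a candidate
    that disagrees with the spec on any sampled input is rejected."""
    for _ in range(trials):
        x = random.getrandbits(bits)
        if f(x) != g(x):
            return False
    return True

def verified_equal(f, g, bits=8):
    """Exhaustive check over the full 8-bit domain -- the expensive
    verification step, applied only to survivors of the cheap test."""
    return all(f(x) == g(x) for x in range(1 << bits))

# Hypothetical candidates for doubling modulo 256:
orig = lambda x: (x * 2) & 0xFF
cand = lambda x: (x << 1) & 0xFF        # genuinely equivalent
bad  = lambda x: (x * 2 + 1) & 0xFF     # differs on every input
```

In this setup `bad` is rejected by the very first random trial, while `cand` survives sampling and then passes exhaustive verification, which matches the claim that PE survivors almost always verify.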
Supercompilation is "top down": it begins with the given program code and keeps optimising it until it can't find any more improvements. It essentially runs an interpreter over the code, but unknown values (e.g. runtime data) must be treated symbolically. The effect is similar to aggressively inlining (essentially "calling" functions) and aggressively folding static calculations.
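That symbolic-evaluation ("driving") step can be sketched minimally. The AST shape and the two-function program below are assumptions for illustration; a real supercompiler also needs generalisation/folding to guarantee termination on recursive code, which this sketch omits:

```python
# AST: ("var", name) | ("lit", n) | ("add", a, b) | ("call", fname, args)

FUNCS = {  # hypothetical user program with a layer of indirection
    "double": (["x"], ("add", ("var", "x"), ("var", "x"))),
    "quad":   (["x"], ("call", "double", [("call", "double", [("var", "x")])])),
}

def drive(expr, env):
    """Evaluate with known values folded; unknowns remain symbolic.
    NOTE: no termination check -- recursive FUNCS would loop forever."""
    tag = expr[0]
    if tag == "lit":
        return expr
    if tag == "var":
        return env.get(expr[1], expr)          # unknown -> stays symbolic
    if tag == "add":
        a, b = drive(expr[1], env), drive(expr[2], env)
        if a[0] == "lit" and b[0] == "lit":    # static calculation folds
            return ("lit", a[1] + b[1])
        return ("add", a, b)
    if tag == "call":                          # aggressive inlining
        params, body = FUNCS[expr[1]]
        args = [drive(a, env) for a in expr[2]]
        return drive(body, dict(zip(params, args)))
    raise ValueError(tag)
```

Driving `quad` on a known input collapses the whole call chain to a literal, while driving it on an unknown `n` leaves a residual tree of additions with all the call indirection gone.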
Supercompilation is good at removing unused scaffolding and indirection, e.g. for code that's written defensively/flexibly, supporting a bunch of fallbacks, override hooks, etc. A common problem with supercompilation is increased code size, since it replaces many calls to a single general-purpose function/method with many inlined versions (specialised to varying extents).
Superoptimisation is "bottom up": generating progressively larger snippets of code from scratch, and stopping when it finds one that behaves the same as the original code.
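A toy version of that bottom-up search might look like the following. The instruction set, the 8-bit machine model, and the sample stride are all assumptions for the sketch; it enumerates shortest programs first and applies a cheap sample-based filter before full verification, in the spirit of the two-phase scheme above:

```python
from itertools import product

MASK = 0xFF  # model 8-bit arithmetic

OPS = {  # assumed candidate instruction set
    "add": lambda a, b: (a + b) & MASK,
    "sub": lambda a, b: (a - b) & MASK,
    "shl": lambda a, b: (a << (b & 7)) & MASK,
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "xor": lambda a, b: a ^ b,
}
LEAVES = ["x", 0, 1]

def eval_expr(e, x):
    if e == "x":
        return x
    if isinstance(e, int):
        return e
    op, a, b = e
    return OPS[op](eval_expr(a, x), eval_expr(b, x))

def exprs(size):
    """All expression trees containing exactly `size` operator nodes."""
    if size == 0:
        yield from LEAVES
        return
    for left in range(size):
        for op, a, b in product(OPS, exprs(left), exprs(size - 1 - left)):
            yield (op, a, b)

def superoptimize(spec, max_size=3, samples=range(0, 256, 7)):
    for size in range(max_size + 1):  # shortest programs first
        for e in exprs(size):
            # cheap filter on a few inputs, then full 8-bit verification
            if all(eval_expr(e, x) == spec(x) for x in samples) and \
               all(eval_expr(e, x) == spec(x) for x in range(256)):
                return e
    return None
```

For a spec like doubling modulo 256, the search finds a one-operator program (e.g. adding `x` to itself) rather than anything derived from the original code's structure, which is the key contrast with supercompilation's top-down rewriting.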
[1]: https://dl.acm.org/doi/pdf/10.1145/5956.5957
[2]: https://dl.acm.org/doi/pdf/10.1145/36177.36194