> if what you wanted was the computer to calculate a_complicated_computation_that_yields_left(x) and make a nonvirtual call then you should have told it that.
Again, you're showing a preference for the zero-cost abstraction philosophy; the zero-cost use philosophy is intentionally different. The problem is that the computer cannot tell ahead of time whether it will be able to elide a branch or not. Suppose you tell it that it should: what should it do when it cannot know for sure that it can? If you say it should fail -- that's the zero-cost abstraction philosophy; if you say it should try -- that's the zero-cost use philosophy.
> and it will still work just as well as it would in a unityped language
I don't understand the analogy. Whether you infer or explicitly state, you still want all pertinent information statically known. This is the zero-cost abstraction philosophy (which you seem to prefer). My view is the following: the vast majority of things we'd like to know statically cannot be known at an acceptable cost. The question is what we should do with the rest. Should we pay for the effort of helping the compiler with what we can know, or let the compiler do its thing, which includes taking into consideration things that cannot be statically known? For the domains C++ and Rust target, I prefer the former (because there isn't really much of a choice); for other domains, I prefer the latter.
> Imagine inferring the worst-case performance characteristics of all program functions just as a part of the language.
I am not sure what exactly you mean here. In general, inferring properties statically has some very clear benefits and some very clear costs, and the more precise the information, the higher the cost.