
> The language knows whether a_complicated_computation_that_yields_a_kind(x) is available at compile time or not, and the IDE can tell the programmer.

But that routine could always return left for the values of x passed to qux. Static analysis cannot possibly know that, but a JIT will (although not enough to elide the call to that routine altogether, just the branch).
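
A minimal Scala sketch of the situation as I understand it (all the names are stand-ins for the hypothetical a_complicated_computation_that_yields_a_kind/qux example, not real code from anywhere):

    object JitSketch {
      sealed trait Kind { def handle(x: Int): Int }
      case object LeftKind  extends Kind { def handle(x: Int): Int = x + 1 }
      case object RightKind extends Kind { def handle(x: Int): Int = x - 1 }

      // Opaque to static analysis; suppose it happens to return LeftKind for
      // every x that qux is actually called with.
      def a_complicated_computation_that_yields_a_kind(x: Int): Kind =
        if (x % 2 == 0) LeftKind else RightKind

      def qux(x: Int): Int = {
        val k = a_complicated_computation_that_yields_a_kind(x)
        // Statically this is a virtual call on Kind; a JIT that only ever sees
        // LeftKind here can speculate, inline LeftKind.handle behind a cheap
        // guard, and deoptimize if the guard ever fails.
        k.handle(x)
      }
    }

Nothing in the types rules RightKind out, so a static compiler has to keep the dispatch; the JIT only needs the observed profile.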

> but actually we can infer at compile time and get the best of both worlds.

Type inference does not require annotation, but it does require a guarantee that the type is statically inferable. E.g. when the Scala compiler fails to infer a type, it raises an error. But a JS JIT can devirtualize even when the type is not statically inferable. The kind of compiler I think you're envisioning would still need to choose between the zero-cost abstraction philosophy, where it fails if it cannot infer, and the zero-cost use philosophy, where it doesn't fail but rather hopes to specialize based on runtime information. But maybe you're advocating something like optional typing for compiler optimization. That's certainly doable, and I would argue that it's already done, to some extent, in Java, but only available to JDK code (there are @DontInline and @ForceInline annotations). I think Kotlin wants to do something like that, too, with their "contracts", but I couldn't tell whether they're sound (i.e. whether the compiler allows you to lie to it in a way that would force an incorrect optimization).

(Just to be clear, I am not talking about type inferences that are meant to aid the programmer, but those that are meant to aid the compiler.)
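
To make the Scala point concrete, a small sketch of my own: inference works without annotations where the type is statically determinable, and where it isn't, the compiler rejects the code instead of deferring to runtime information.

    object InferenceDemo {
      val doubled = List(1, 2, 3).map(_ * 2) // inferred as List[Int], no annotation

      // Rejected with an error along the lines of
      // "recursive method fact needs result type":
      // def fact(n: Int) = if (n <= 1) 1 else n * fact(n - 1)

      def fact(n: Int): Int = // the annotation is required here
        if (n <= 1) 1 else n * fact(n - 1)
    }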

> And if the results aren't integrated into the language then it's hard to be confident that the aspects we care about will be maintained as the codebase evolves.

Well, now you're saying you prefer the zero-cost abstraction philosophy of having as much control as possible over technical compilation aspects in the programming language. That's fine, of course, but I think this is far from the universal preference. I mean, I like this power in C++, but it clearly has the major costs that I mentioned in my original post, and it would continue to have them even with better inference. Sometimes we like our accidental complexity, but sometimes we don't.

> and even then, it's not always as easy as it should be to understand the performance characteristics of assembly language on modern processors

Yeah... but are you implying that if CPUs were predictable, writing formal deductive proofs for code generation would be easy? They're somewhere between a satisfying puzzle and hair-pulling frustration in the best of circumstances.



> But that routine could always return left for the values of x passed to qux. Static analysis cannot possibly know that, but a JIT will

Sure, so there will be cases where a JIT could opportunistically devirtualize whereas a static devirtualization step can't. But with the static approach the programmer can immediately see why devirtualization didn't happen, and what they would need to do to make it happen. I don't see that as contrary to the zero-cost abstraction philosophy: you have asked the computer to calculate a_complicated_computation_that_yields_a_kind(x) and make a virtual function call based on the result; if what you wanted was for the computer to calculate a_complicated_computation_that_yields_left(x) and make a nonvirtual call, then you should have told it that. Presumably you would still have put an indirection in there if you had been hand-coding it, because if you had actually known that the computation always returns left, you would have written as much.
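
Something like this, as a rough Scala sketch of my own reusing the hypothetical names: if you know the computation always yields the left kind, say so in the type, and there is nothing left to devirtualize.

    object StaticSketch {
      sealed trait Kind { def handle(x: Int): Int }
      case object LeftKind extends Kind { def handle(x: Int): Int = x + 1 }

      // The return type records what the programmer actually knows.
      def a_complicated_computation_that_yields_left(x: Int): LeftKind.type = LeftKind

      def qux(x: Int): Int =
        // Receiver type is LeftKind.type, so the call is resolved statically;
        // no speculation or profile data needed.
        a_complicated_computation_that_yields_left(x).handle(x)
    }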

> Well, now you're saying you prefer the zero-cost abstraction philosophy of having as much control as possible over technical compilation aspects in the programming language. That's fine, of course, but I think this is far from the universal preference. I mean, I like this power in C++, but it clearly has the major costs that I mentioned in my original post, and it would continue to have them even with better inference. Sometimes we like our accidental complexity, but sometimes we don't.

I want to have the control if I need it, but that doesn't imply it has to carry a cost when I'm not using it. By analogy, in a language with complete type inference (e.g. H-M), I don't pay any cost for my variables having types if I don't want to - I can write a script with zero type annotations, let the types be inferred, and it will still work just as well as it would in a unityped language. But if I do want to put explicit types on some of the variables, I can pay the cost at that point, and the language will let the annotated code interleave nicely with code that doesn't use type annotations.
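
A rough Scala illustration of the interleaving point (Scala's inference is local rather than full H-M, so this is only an approximation of the analogy): unannotated and annotated code mix freely, and annotations cost something only where you choose to write them.

    object Interleave {
      def parse(s: String) = s.trim.toInt        // result type Int is inferred
      val raw    = List(" 1", "2 ", " 3 ")
      val parsed = raw.map(parse)                // inferred: List[Int]
      val total: Long = parsed.map(_.toLong).sum // explicit type only where wanted
    }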

Imagine inferring the worst-case performance characteristics of all program functions just as a part of the language. For functions whose authors didn't put any effort into it, this information would be useless (probably infinite, or at best known-finite-but-unbounded) - but valid. Even just a distinction between what's compile-time-constant and what's not would be useful (and many languages already draw that distinction in one way or another, but without the structure and visibility that would make it useful to programmers).
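
For instance, a rough Scala 3 sketch of my own of the compile-time-constant distinction being surfaced: an inline val is rejected unless the compiler can evaluate the right-hand side, so whether a value is compile-time-known is checked and visible rather than left implicit.

    object ConstDemo {
      inline val Threshold = 1024              // accepted: a compile-time constant

      // Rejected, with an error roughly like
      // "inline value must have a literal constant type":
      // inline val Started = System.currentTimeMillis()

      val started = System.currentTimeMillis() // fine as an ordinary runtime value
    }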


> if what you wanted was for the computer to calculate a_complicated_computation_that_yields_left(x) and make a nonvirtual call, then you should have told it that.

Again, you're showing a preference for the zero-cost abstraction philosophy; the zero-cost use philosophy is intentionally different. The problem is that the computer cannot tell ahead of time whether it will be able to elide a branch or not. Suppose you tell it that it should: what should it do if it can't know for sure that it can? If you say it should fail -- that's the zero-cost abstraction philosophy; if you say it should try -- that's the zero-cost use philosophy.

> and it will still work just as well as it would in a unityped language

I don't understand the analogy. Whether you infer or explicitly state, you still want all pertinent information to be statically known. This is the zero-cost abstraction philosophy (which you seem to prefer). My view is the following: the vast majority of things we'd like to statically know cannot be known at an acceptable cost. The question is: what should we do with the rest? Should we pay for the effort of helping the compiler with what we can know, or let the compiler do its thing, which includes taking into consideration things that cannot be statically known? For the domain C++ and Rust target I prefer the first (because there isn't really much of a choice); for other domains, I prefer the second.

> Imagine inferring the worst-case performance characteristics of all program functions just as a part of the language.

I am not sure what exactly you mean here. In general, inferring properties statically has some very clear benefits and very clear costs. The more precise the information, the higher the cost.



