I had a neat conversation with @fniephaus on the GraalVM slack, where I was curious about how the performance of the Native Image mode could be almost an order of magnitude better in the Game of Life demo than JVM JIT mode.
He clarified that the GIF showed only the first N seconds of the program running, where the AOT binary required no warmup. But what was really interesting is his comment about how AOT mode can still potentially perform slightly better than JIT:
> "The GIF is showing the first n seconds, and the JIT just needs noticeable more time to warm up. But even at peak, AOT can outperform JIT although not by an order of magnitude of course."
I asked how this was possible and he shared a great tweet by @AlinaYurenko: https://twitter.com/alina_yurenko/status/1582772754902052864
> AOT can be faster than JIT, because:
> - in AOT 100% of the code is compiled (on JIT cold code can still be interpreted)
> - some optimizations are only possible under a closed-world assumption (AOT)
> - AOT can dedicate time and resources to perform more expensive optimizations
I don’t really agree with these points as far as performance goes (of course AOT is a cool technology that has its uses):
> - in AOT 100% of the code is compiled
If the code hasn’t been run enough times to become eligible for JIT compilation, it likely doesn’t contribute any significant time to the whole runtime, so I doubt it would be a meaningful change.
> - some optimizations are only possible under a closed-world assumption (AOT)
A JIT compiler is more than free to assume this, and much more strictly than an AOT one can. For example, if it sees that only a single implementing class of an interface is loaded, it can replace every virtual method call with a static one. Upon a later class load, this assumption is revisited and the native code deoptimized where necessary. An AOT compiler has to optimize for the worst case, while a JIT compiler may never even load that class, depending on some dynamic property.
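A minimal sketch of the kind of call site this is talking about (class and method names are made up for illustration): while only one implementation of the interface has been loaded, a JIT like HotSpot's C2 or Graal can speculatively compile the virtual call as a direct, often inlined, call behind a cheap guard; loading a second implementation triggers deoptimization and recompilation with regular virtual dispatch.

```java
// Sketch: a call site a JIT can speculatively devirtualize.
interface Shape {
    double area();
}

// While Circle is the only loaded Shape implementation, the JIT can
// treat s.area() below as a direct (even inlined) call. An AOT compiler
// with an open world would have to emit a virtual dispatch here.
final class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class Devirtualization {
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area(); // monomorphic today, maybe megamorphic tomorrow
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.println(totalArea(shapes));
    }
}
```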
Also, Graal is not only an AOT compiler; it is also a JIT compiler that can make (speculative) closed-world assumptions, so it is quite a meaningless comparison.
The last point is true on paper, but as far as I know it is not the case that JIT compilers produce worse code, and even if they do, it is not due to a lack of time/resources.
But I just wanted to refute the performance claims. Graal Native executables do start up much faster and have significantly lower memory usage, which are worthwhile goals in themselves. But most Java code will perform better under a JIT compiler (which can be Graal’s as well).
> " But most Java code will perform better under a JIT compiler (which can be Graal’s as well)"
This was my understanding for any comparison of world-class JIT compilers vs. AOT codegen. But fniephaus knows his stuff, and there are some compelling specific examples/benchmarks given. I haven't taken the time to do an exhaustive comparison.
It does seem sort of counterintuitive though, doesn't it? Like, what's the point of an optimizing JIT then?
I’m sure there are examples where Graal Native will beat the JIT version of the same program (e.g. it often does more thorough escape analysis; also, smaller objects allow more data to fit inside the cache, etc.), but in the case of a “typical” application I would wager that the JIT approach is better. Maybe that’s just the way idiomatic JVM code is written?
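To illustrate what escape analysis buys either compiler, here is a small sketch (the class and method are invented for the example) of an allocation pattern that can be scalar-replaced: the objects never escape the method, so after inlining the compiler can keep their fields in registers and emit no heap allocation at all.

```java
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // Neither Point escapes this method, so escape analysis can replace
    // both objects with their scalar fields: no allocation, no GC
    // pressure, even when this runs in a hot loop.
    static double distance(double x1, double y1, double x2, double y2) {
        Point a = new Point(x1, y1); // candidate for scalar replacement
        Point b = new Point(x2, y2);
        double dx = a.x - b.x, dy = a.y - b.y;
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += distance(0, 0, i, i);
        }
        System.out.println(sum);
    }
}
```

Graal's partial escape analysis extends this to objects that only escape on some branches, which is one concrete reason a Graal-compiled binary can occasionally beat C2-compiled code.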
The problem with deoptimization is the performance hit every time it takes place, which leads to a sawtooth graph similar to when there is too much GC going on.
Sure, but class loading only changes assumptions in one direction. In the aforementioned example it would cause only a single deoptimization, after which the code won’t be any worse off than the AOT-compiled version. So before the class load the JIT case may perform better, and afterwards it will perform the same.
Per the GraalVM website:
"GraalVM JIT and Native Image will become a part of OpenJDK"
I understand the Native Image stuff becoming part of OpenJDK, but what does it mean for OpenJDK to get the GraalVM JIT? Does it replace the one in OpenJDK? Will OpenJDK have two JIT implementations to choose from?
This is different from using GraalVM, which has all the polyglot stuff too. And the Native Image AOT stuff is different yet again ... I think!
There's a lot of uses of the same core technology. It's pretty cool stuff!
If I’m not mistaken it goes like this: the Graal JIT compiler is itself written in Java, and thanks to the incredible architecture of OpenJDK (the JVM Compiler Interface, JVMCI), even something this internal can be plug’n’play.
Every other part of Graal is also ordinary Java classes, for example Truffle, which can execute interpreters and JIT-optimize them very effectively (after warmup, Truffle’s JavaScript can run at speeds comparable to V8, despite having an orders-of-magnitude smaller team/budget). This is possible because the JIT compiler has a few special intrinsics for these libraries that allow for this magic, and also because of the reuse of the many thousands of work-hours that went into the OpenJDK project: its killer GCs, etc.
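The AST-interpreter style Truffle languages are written in can be sketched in plain Java (this is not the real Truffle API, just an illustration of the shape): a program is a tree of small `execute()` nodes, and Graal's special support partially evaluates the interpreter over a fixed tree, collapsing the whole dispatch into straight-line compiled code.

```java
// Plain-Java sketch of a Truffle-style AST interpreter (not the real
// Truffle API). Each node knows how to execute itself; the tree as a
// whole *is* the program.
abstract class Node {
    abstract long execute();
}

final class Lit extends Node {
    final long value;
    Lit(long value) { this.value = value; }
    long execute() { return value; }
}

final class Add extends Node {
    final Node left, right;
    Add(Node left, Node right) { this.left = left; this.right = right; }
    long execute() { return left.execute() + right.execute(); }
}

public class MiniTruffle {
    public static void main(String[] args) {
        // Represents the expression (1 + 2) + 40.
        Node program = new Add(new Add(new Lit(1), new Lit(2)), new Lit(40));
        System.out.println(program.execute()); // prints 43
    }
}
```

In real Truffle, the compiler treats the tree as a compile-time constant, so all those `execute()` calls inline away and what remains is roughly the code you would have emitted by hand for the guest-language expression.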
The first version was dropped because almost no one used it (there were hardly any complaints about it being dropped), and the two were somehow parallel codebases.
I would guess they see this as a way to actually trigger adoption of GraalVM CE beyond the language nerds and a couple of more adventurous companies; it is also a safer way for other JVM vendors to enter the game.
From what I've noticed, the Clojure community has embraced GraalVM the most out of the whole JVM community. Babashka (think bash + Clojure with built-in JSON/YAML/CSV/REST support) is very popular.
The upcoming 3.x.x version of Spring Boot, probably the most used Java (mostly web) framework, comes with first-class "native image support" using Graal out of the box.
What do you mean? I've been using it as the default JVM since 2019 or so in both development and production.
My experience has involved zero headaches or serious problems due to Graal, and it comes with appreciable performance improvements.
One of the more impressive and badass hardcore programming language engineering efforts, and still under active development right now. It's remarkable technology.
Amazing that it comes from within Oracle, I was surprised.
GraalVM comes with its own JIT, the Graal compiler [1], which I believe is what metadat was talking about. You are probably thinking of the native image generation, which can cause issues with reflection and other dynamic constructs.
Also, reflection is supported in AOT mode. The analysis, however, does require reachability metadata in some cases. In the best case, libraries provide and maintain appropriate configuration for this. Reachability metadata can also be shared via https://github.com/oracle/graalvm-reachability-metadata.