It would be cool if the JavaScript engine were interchangeable in Node.js and io.js so you could pick Chakra, V8, or SpiderMonkey very easily. SpiderMonkey is faster than V8 these days on a lot of benchmarks.
I was experimenting a couple of days ago with heavy, extreme DOM-node crunching, and Firefox's SpiderMonkey blew Chrome's V8 out of the water with a 7-9x gain in performance, measured by the time elapsed to complete the operations.
How do you live in such blissful ignorance that a supposed order-of-magnitude difference in performance between two state-of-the-art browsers doesn't make you reconsider, even for a second, that you might not have written a working benchmark? :)
Sorry but unless you have deep knowledge of how both engines and browsers work (knowing how `appendChild` is actually implemented for starters), you simply cannot write a working benchmark. Even then, it's very hard and tedious.
If you don't have time to obtain such expertise, you could take a shortcut and compare realistic end-to-end benchmarks. E.g. if your game runs at 210-270 fps in Firefox but only at 30 fps in Chrome, then you could claim that "Firefox blows Chrome out of the water".
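As a rough sketch of what that kind of end-to-end measurement can look like (the one-second window and the reporting style are arbitrary choices for illustration), you can count requestAnimationFrame callbacks per second and compare the figure across browsers:

```js
// Rough fps counter: count requestAnimationFrame ticks over a fixed window
// and report frames per second. Run the same page in each browser and
// compare the resulting numbers.
function measureFps(durationMs, report) {
  var frames = 0;
  var start = performance.now();
  function tick(now) {
    frames++;
    if (now - start < durationMs) {
      requestAnimationFrame(tick);
    } else {
      report(frames / ((now - start) / 1000));
    }
  }
  requestAnimationFrame(tick);
}

measureFps(1000, function (fps) {
  console.log('approx fps: ' + fps.toFixed(1));
});
```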
It's very easy (just look at 80%+ of jsperfs) to construct benchmarks that don't look completely broken to the untrained eye but actually are. The common theme is the benchmark missing many aspects of realistic code and being reduced to measuring irrelevant optimizing-compiler features. For example, the benchmark could end up measuring only how thorough the engine's dead code elimination pass is, even though what you wanted to benchmark is string concatenation performance.
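To make that concrete, here's a hypothetical jsperf-style snippet of the broken kind: the concatenated string is never observed, so a good optimizing compiler is free to delete the loop body, and the timing tells you about dead code elimination rather than string concatenation.

```js
// Broken microbenchmark (hypothetical): nothing reads `s`, so the engine
// may dead-code-eliminate the concatenation and you end up timing an
// empty loop.
function brokenStringBench() {
  var t0 = performance.now();
  for (var i = 0; i < 1e6; i++) {
    var s = 'foo' + i; // dead value
  }
  return performance.now() - t0;
}

// Less broken: the result feeds into an observable value, so the work
// can't simply be thrown away.
function lessBrokenStringBench() {
  var t0 = performance.now();
  var sink = 0;
  for (var i = 0; i < 1e6; i++) {
    sink += ('foo' + i).length;
  }
  var elapsed = performance.now() - t0;
  console.log(sink); // keep `sink` live
  return elapsed;
}
```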
> I was experimenting a couple of days ago with heavy, extreme DOM-node crunching, and Firefox's SpiderMonkey blew Chrome's V8 out of the water with a 7-9x gain in performance, measured by the time elapsed to complete the operations.
This likely has nothing to do with the JS engines themselves and everything to do with the browsers they were running in. To actually benchmark something like that, you'd need to simulate the DOM with something like https://github.com/tmpvar/jsdom
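Something along these lines (a sketch assuming a recent jsdom release that exposes the JSDOM constructor; older versions have a different entry point) would let you run the same appendChild loop under Node's V8 with no browser layout engine involved:

```js
// Sketch: drive DOM manipulation from Node via jsdom, so only the JS
// engine (V8 in Node) and jsdom's pure-JS DOM are in play, with no
// browser layout/rendering pipeline.
const { JSDOM } = require('jsdom');

const dom = new JSDOM('<!DOCTYPE html><body></body>');
const document = dom.window.document;

const t0 = Date.now();
const container = document.createElement('div');
for (let i = 0; i < 100000; i++) {
  const node = document.createElement('span');
  node.textContent = 'node ' + i;
  container.appendChild(node);
}
document.body.appendChild(container);
console.log('elapsed:', Date.now() - t0, 'ms');
```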
Are you suggesting that the remarkable disparity in performance was DOM specific?
Strange, because I used a very common method, appendChild(), and I was under the impression that both browsers had optimized their respective inner workings long ago, to the point that we shouldn't notice such a divergence in performance.
Yes. DOM manipulation in all major browsers is implemented in C/C++. The JS engine is just a wrapper; any noticeable performance difference in DOM manipulation is almost certainly due to differences in the underlying layout engine and not in the JS engine.
Well, there's one way in which the JS engine affects it: how efficiently one can call into C++ from JS. Mozilla has done a lot of work to reduce the cost of that in SpiderMonkey.
Sure, though in the grand scheme of things that penalty is pretty small compared to the DOM operation itself, e.g. doing an appendChild() on an attached element and causing a reflow.
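As a rough illustration of why the DOM side tends to dominate (the loop counts and the forced layout read are arbitrary choices for the sketch): appending into the live tree and then reading a layout-dependent property forces synchronous reflows, while building the same subtree off-DOM and attaching it once does not.

```js
// Appending to an attached container and reading offsetHeight each
// iteration forces repeated synchronous layout work.
function appendAttached(n) {
  var container = document.createElement('div');
  document.body.appendChild(container);
  var t0 = performance.now();
  for (var i = 0; i < n; i++) {
    container.appendChild(document.createElement('div'));
    void container.offsetHeight; // layout-dependent read: forces a reflow
  }
  return performance.now() - t0;
}

// Building the subtree in a detached DocumentFragment defers layout to a
// single attach at the end.
function appendDetached(n) {
  var fragment = document.createDocumentFragment();
  var t0 = performance.now();
  for (var i = 0; i < n; i++) {
    fragment.appendChild(document.createElement('div'));
  }
  document.body.appendChild(fragment);
  return performance.now() - t0;
}
```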
It depends a lot on what you're doing: if you're hitting fast paths (especially if you're dealing with out-of-tree nodes), it's entirely possible to end up with the JS/C++ trampoline being a significant part of the bottleneck, for much the same reasons that Array.prototype.reduce can be in several implementations.
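The reduce comparison, as a sketch (engines differ in how reduce is implemented, so treat this as illustrative only): each element costs a callback invocation out of the built-in's loop, so when the per-element work is trivial the call overhead is most of what you measure, much like a cheap DOM call dominated by the JS/C++ crossing.

```js
// Per-element callback version: each iteration is a function call out of
// the built-in (or self-hosted) reduce loop.
function sumReduce(arr) {
  return arr.reduce(function (acc, x) { return acc + x; }, 0);
}

// Plain loop: the same trivial work with no per-element call overhead.
function sumLoop(arr) {
  var acc = 0;
  for (var i = 0; i < arr.length; i++) acc += arr[i];
  return acc;
}

var data = new Array(1e6).fill(1);
console.log(sumReduce(data), sumLoop(data)); // same result, different cost profile
```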
Given that Node.js supports native code modules, I can't see why it'd matter, but SpiderMonkey does have a dedicated asm.js compiler, OdinMonkey, giving it best-in-class asm.js speed.
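For reference, OdinMonkey only kicks in for code that opts in with the "use asm" prologue; a minimal sketch of such a module (deliberately trivial, real asm.js code operates on a typed-array heap):

```js
// Minimal asm.js module: the "use asm" directive lets OdinMonkey validate
// and ahead-of-time compile it; other engines just run it as ordinary JS.
function AsmAdder(stdlib, foreign, heap) {
  'use asm';
  function add(a, b) {
    a = a | 0;            // int parameter annotation
    b = b | 0;
    return (a + b) | 0;   // int return annotation
  }
  return { add: add };
}

// Link with an (unused here) stdlib, foreign object, and heap buffer.
var adder = AsmAdder({}, {}, new ArrayBuffer(0x10000));
console.log(adder.add(2, 3)); // 5
```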