Having implemented virtual DOM natively in Sciter [1], here are my findings:
In conventional browsers the fastest DOM population method is element.innerHTML = ...
The reason is that element.innerHTML works transactionally:
Lock updates -> parse and populate DOM -> verify DOM integrity -> unlock updates and update rendering tree.
By contrast, any "manual" DOM population using Web DOM API methods like appendChild() must perform such a transaction on every call, so the steps above are repeated for each appendChild() - each call must leave the DOM in a correct state.
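To make the contrast concrete, here is a minimal sketch (plain browser DOM code, nothing Sciter-specific; the container and items are made up) of the two population styles being compared:

    // Bulk population: one parse, one integrity check, one render-tree update.
    // (Assumes the items are already HTML-escaped strings.)
    function populateBulk(container, items) {
      container.innerHTML = items.map(item => `<li>${item}</li>`).join('');
    }

    // "Manual" population: every appendChild() call must leave the DOM
    // in a consistent state before the next call runs.
    function populateManually(container, items) {
      for (const item of items) {
        const li = document.createElement('li');
        li.textContent = item;
        container.appendChild(li);
      }
    }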
And virtual DOM reconciliation implementations in browsers can use only public APIs like appendChild().
So, indeed, vDOM is not as performant as it could be.
But that also applies to the Svelte style of updates: it also uses public APIs for updating the DOM.
A solution could be a native implementation of an Element.patch(vDOM) method (as I did in Sciter) that can work on par with Element.innerHTML - transactionally, but without the "parse HTML" phase. Yes, there is still the overhead of the diff operation, but with proper use of key attributes it is an O(N) operation in most cases.
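Purely as an illustration of the shape of such an API (the vDOM node format below is assumed for the sketch and is not Sciter's actual one), usage might look roughly like this:

    // Hypothetical sketch only: plain {tag, attrs, children} objects stand in
    // for whatever the engine's real vDOM node format would be.
    const list = document.querySelector('#beers');

    const vdom = {
      tag: 'ul',
      attrs: { id: 'beers' },
      children: [99, 98, 97].map(n => ({
        tag: 'li',
        attrs: { key: String(n) },          // keys keep the diff close to O(N)
        children: [`${n} bottles of beer on the wall`]
      }))
    };

    // One transactional diff-and-apply against the live element,
    // with no "parse HTML" phase.
    list.patch(vdom);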
innerHTML doesn't preserve event handlers. So you're either reassigning event handlers over and over or relying on delegated handlers everywhere.
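For reference, the delegated-handler workaround mentioned above boils down to something like this (element names are illustrative):

    // One listener on a stable ancestor survives innerHTML wiping out
    // and re-creating the children underneath it.
    const list = document.querySelector('#beers');

    list.addEventListener('click', event => {
      const bottle = event.target.closest('.bottles');
      if (bottle && list.contains(bottle)) {
        bottle.remove();                    // whatever the per-item handler did
      }
    });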
And while your statement makes intuitive sense regarding performance, actual measurements clearly show that idiomatic Svelte (and other modern frameworks) routinely beats VDOM-based efforts handily in their idiomatic cases, and often even when folks jump through the performance-optimization hoops VDOM requires.
VDOM is pure overhead. Better than manually aligning writes before reads a la 2010, but noticeably worse than the current crop of compiled offerings.
Let's compare lines of code, because more lines invariably lead to more bugs.
Contents of Beers.svelte:
    <script>
      export let bottles = 99;
    </script>

    {#if bottles > 0}
      <span class="bottles" on:click={() => --bottles}>
        {bottles} bottles of beer on the wall
      </span>
    {:else}
      <span class="bottles">
        No more bottles of beer on the wall
      </span>
    {/if}
Then to use it:
    <script>
      import Beers from './Beers.svelte';
    </script>

    <Beers />
No knowledge of Reactor's existence needed, let alone the library's "signal" function. No functions needed at all. No bespoke syntax for the "bottles" CSS class. No vDOM API call. No extra "values" accessing property. It's >90% plain old HTML, CSS, and JS with literally the bare minimum of syntax to handle data binding.
Yes, it requires a compiler, but I would honestly be astounded if you even noticed the compiler build time in dev mode. AND the deployed code is smaller. AND it's simpler for the dev to understand and maintain. AND it's likely faster at runtime.
The argument that Svelte adds mental overhead is manifest nonsense. If you like the vDOM, have at it. Follow your bliss. Some folks like hitting and kicking trees. Some folks prefer their coffee too hot to drink.
I for one want a web framework that makes web development as simple, straightforward, and powerful as possible. HTML, CSS, and the smallest amount of JS and HTML annotation imaginable.
2. Your example would have atrocious load time implications for any non-trivial web page. Iterating the DOM through querySelectorAll to replace items at load time? Yikes!
So apparently with Sciter you can either have minimal code or acceptable performance. Got it. Would rather have my cake and eat it too.
Web Components is a marketing coup. It's a great name. People wish it existed and did the thing it says. So they ignore that customElements and shadow DOM are two terrible APIs that are best ignored by 99% of developers…
Meanwhile, shit that would actually help framework authors, like a native morphDOM, doesn't happen.
From what I know, React does not register event handlers on individual nodes, but rather on the root. Then it uses synthetic events from its pool in your callbacks.
Element.insertAdjacentHTML() appears to fix the event-handler issue (and similar issues with element state), since unlike innerHTML it does not replace the existing children of the element it's being used on.
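A small sketch of the difference (markup and positions are just illustrative):

    const list = document.querySelector('#beers');

    // Replaces all existing children: listeners, focus, selection and
    // scroll state attached to them are gone.
    list.innerHTML = '<li>98 bottles of beer on the wall</li>';

    // Parses and inserts alongside the existing content instead,
    // leaving the current children (and their handlers) untouched.
    list.insertAdjacentHTML('beforeend',
      '<li>97 bottles of beer on the wall</li>');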
I swear, folks and their JSX have me convinced they have Stockholm Syndrome. HTML+CSS in JS was always a pragmatic choice back in 2015, never the most elegant or most maintainable one.
It's like the folks who refused to use anything but the DOM APIs when jQuery was sitting right there. Or who keep on using onclick handlers on their div tags instead of using perfectly good HTML tags like:
<a>
<button>
<input type="submit">
<details>
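For contrast, a tiny illustration of the pattern being criticized versus the built-in element (buyBeer here is just a placeholder handler):

    <!-- Reinvented button: needs extra work for keyboard focus,
         Enter/Space activation, and accessibility. -->
    <div class="buy" onclick="buyBeer()">Buy a round</div>

    <!-- The built-in element already handles all of that. -->
    <button type="button" class="buy" onclick="buyBeer()">Buy a round</button>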
JSX was never the best of any web development world. It was at best the least worst option at the time. We have better ones now.
Would you mind elaborating on what the better options are? The way I see it, there are a few possible alternatives:
1) Keep the same runtime DOM representation but use normal JS (something like `div({className: 'beer'})`). I know some people disagree, but I strongly believe that this is strictly worse than JSX because it's more verbose and far less readable.
2) Use string templates parsed at runtime. You lose most of the structure you get with JSX—static syntax checking, type safety, autocomplete, etc. Composition becomes a matter of string concatenation, which is possibly the worst way to do it. On top of that, you have to learn templating primitives specific to your templating library instead of being able to simply use what you already know: JavaScript.
3) Use templates parsed at compile-time. This removes most of the drawbacks of #2, but you still have to learn a new templating language and all of the idioms that come with it. On top of that, you're entirely dependent on IDE integration for syntax highlighting and autocomplete. (I realize that JSX has the same problem with custom syntax, but the tooling around it is ubiquitous by now.)
You could make a strong case for #3 being a good way to do templating, but there is no "best" or "most maintainable" option; there are only tradeoffs. JSX happens to have a really good set of tradeoffs going for it, and no one has (yet) created anything that's strictly better.
I would argue #3 is obviously better, especially if it's done as Svelte has done it. It's hard to look at a Svelte component and see much more than a <style> tag, a <script> tag and some lightly annotated HTML for data binding, event capture, and control flow etc. Compared to JSX, it's a breath of fresh air.
Are you dependent on IDE integration for syntax highlighting? Yes, of course. Same with HTML, CSS, and JS. And if Svelte were not already six years old, I'd be more concerned. But the simple truth is that every major IDE I'm aware of for front end development supports Svelte already.
• VSCode
• Jetbrains Webstorm
• Neovim
• Sublime Text
I would be shocked to the core if Emacs didn't already have something mature as well. Compilers are better than humans at managing rote boilerplate, of which React has no end. I can only see how output improves by removing that recurring cognitive load. It's Assembly vs C all over again where folks have a hard time accepting that the easier path also leads to demonstrably better results. If I can do in 10 lines what previously required 50, that code I contribute is far less likely to have as many bugs or suffer from performance problems.
> It's Assembly vs C all over again where folks have a hard time accepting that the easier path also leads to demonstrably better results. If I can do in 10 lines what previously required 50, that code I contribute is far less likely to have as many bugs or suffer from performance problems.
It's entirely unclear to me how compiled templates result in drastically less code. If we're comparing Svelte and React as frameworks, then sure, but your original comment specifically talked about JSX syntax being inferior to the alternatives. Templates require a custom DSL for control flow, iteration, etc, whereas with JSX you can use standard JavaScript. That also means that you can take third-party libraries that work on regular JS data structures, like objects and arrays, and apply them to JSX elements with zero fuss. With a DSL, you have to find a domain-specific version of the code you've already written in your head, and in some cases, it may not even be possible to create the same abstractions. This has its advantages, of course, but I strongly disagree with the notion that it's simply better.
For the record, I really like Svelte as a framework, but I can't honestly say that their decision to use templates has anything to do with that.
innerHTML can set event handlers, so you don't have to assign them separately. And if you re-create a DOM fragment with innerHTML, you can reattach the children that didn't change, and their handlers are preserved.
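A rough sketch of that reattachment idea (greatly simplified; it assumes the unchanged children can be selected up front and it ignores ordering):

    // Rebuild a container with innerHTML, but carry over the child nodes
    // that did not change so their addEventListener handlers survive.
    function rebuild(container, newMarkup, unchangedSelector) {
      const keep = [...container.querySelectorAll(unchangedSelector)];
      keep.forEach(node => node.remove());   // detach before wiping the rest

      container.innerHTML = newMarkup;       // one transactional re-parse

      keep.forEach(node => container.appendChild(node));  // handlers intact
    }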
I also would have sworn up and down that using a DocumentFragment would be loads faster than both, but it doesn't seem to be the case. I wonder why that is.
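For anyone wanting to poke at it, a minimal timing sketch along these lines reproduces the comparison (numbers vary a lot by browser; #out is a placeholder target):

    function timeIt(label, fn) {
      const t0 = performance.now();
      fn();
      console.log(label, (performance.now() - t0).toFixed(1), 'ms');
    }

    const target = document.querySelector('#out');

    timeIt('direct appendChild', () => {
      for (let i = 0; i < 100000; i++) {
        const li = document.createElement('li');
        li.textContent = i;
        target.appendChild(li);
      }
    });

    timeIt('DocumentFragment', () => {
      const frag = document.createDocumentFragment();
      for (let i = 0; i < 100000; i++) {
        const li = document.createElement('li');
        li.textContent = i;
        frag.appendChild(li);
      }
      target.appendChild(frag);              // one insertion into the live DOM
    });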
Seems like Safari has a quite naive element.append(...list) implementation and/or the "destructuring to argv" operation is slow there.
I suspect that element.append(...list) is just a

    for (auto arg : args)
        element.append(arg);

so there is no transaction there at all - just a slower version of case #3.
On Windows I am testing it in Edge, Chrome and FF.
Edge and Chrome show close numbers (#1 fastest). FF shows #3 is faster - same problem as Safari I think.
I'd be interested in learning the answer here as well. I've read that DocumentFragments are faster, but some microbenchmarking on Chrome/Mac makes me think the improvements are negligible. Rerunning the benchmarks on Stack Overflow (https://stackoverflow.com/questions/14203196/does-using-a-do...) (both individually and swapping the order of the fragment vs non-fragment tests) nets me ~60ms when rendering 100000 ul in each case.
My naive take on this is that browsers have overall gotten a lot more consistent with the layout-paint-composite loop, and it's not worthwhile to swap out all your appendChild calls for fragments. On the other hand, making sure all your layout reads (.clientWidth) are batched before the layout writes (appendChild) is much more important (fastdom).
edit: something like documentFragment/append(...children) would help guard against the layout thrashing addressed by fastdom
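The batching referred to is essentially this (a simplified sketch of the idea, not fastdom's actual API):

    const items = [...document.querySelectorAll('.bottles')];

    // Interleaved: each clientWidth read that follows a style write
    // forces a fresh synchronous layout (thrashing).
    items.forEach(el => {
      el.style.width = (el.clientWidth / 2) + 'px';
    });

    // Batched: all reads first, then all writes, so the browser can get
    // away with a single layout pass after the script yields.
    const widths = items.map(el => el.clientWidth);
    items.forEach((el, i) => {
      el.style.width = (widths[i] / 2) + 'px';
    });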
> And virtual DOM reconciliation implementations in browsers can use only public APIs like appendChild().
Why can't they also use innerHTML?
Meaning, they could define a cost function where they deduce that it's cheaper to use innerHTML on a potentially larger-than-necessary scope if the alternative is more than some threshold of modification API calls.
They could, but they would have to either render the whole scope again or somehow apply the change to a copy of the scope's HTML and then set that. Neither seems ideal, but that may be a reasonable, if complex, optimization.
Probably because we are doing the diff per node, so we'd have to aggregate diffs somehow and rip out the children that don't change so they can be reattached into the changed part recreated with innerHTML.
We could create a component system that doesn't diff per DOM node but per component. It would render components to strings and place them with innerHTML into slots (DOM elements) exposed by their parents. On first render it could just splice the strings together.
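A loose sketch of that per-component idea (the component, the slot, and the string-equality check are all made up for illustration):

    // Render a component to a string, then drop the whole thing into its
    // slot with a single innerHTML assignment instead of per-node diffing.
    function renderBeers(bottles) {
      return bottles > 0
        ? `<span class="bottles">${bottles} bottles of beer on the wall</span>`
        : `<span class="bottles">No more bottles of beer on the wall</span>`;
    }

    function updateSlot(slot, bottles) {
      const html = renderBeers(bottles);
      if (slot.dataset.prev !== html) {      // diff per component, not per node
        slot.dataset.prev = html;
        slot.innerHTML = html;
      }
    }

    updateSlot(document.querySelector('#beers-slot'), 99);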
1. BeginUpdate stops a control from repainting itself, and that is what the browser is doing already - no painting happens at the moment of JS execution. So a primitive "postpone painting" does not really help.
2. element.update(callback) or DOM.mutate(root, callback) shall be a single method - no one wants EndUpdate() calls to be skipped because of thrown errors and the like.
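To illustrate why the single-method form is safer, a hypothetical sketch (beginUpdate()/endUpdate() do not exist in browsers; they stand in for whatever the engine would do internally):

    // Hypothetical API sketch: the callback form guarantees the "end" step
    // runs even if the mutation callback throws.
    function mutate(root, callback) {
      root.beginUpdate();                    // engine stops reflecting changes
      try {
        callback(root);                      // all DOM writes happen here
      } finally {
        root.endUpdate();                    // always runs, errors or not
      }
    }

    mutate(document.body, body => {
      body.querySelector('#beers').innerHTML = '<li>98 bottles of beer</li>';
    });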
A script will eventually return to the event loop, where endUpdate() may be called automatically. You don’t even need beginUpdate(), because it may be hidden behind update methods.
Every time I read about the DOM, I get frustrated by how many frontend issues exist due to just bad platform-level patterns. We're long past the need to reflect updates instantly on every single call into the engine. And that wasn't even necessary before.
Layout is suspended automatically, unless you query the DOM for something, in which case you can get lots of thrashing. For example, you don't want to add some DOM elements and then get their height/width, as that will force a layout. And don't do that in a loop! Last I looked, adding/removing DOM elements only schedules the layout and repaint. Things have gotten more multithreaded since I looked at browser code for this, but I doubt they would make a performance regression here.
[1] https://sciter.com