A server in a datacenter generally doesn’t have a GPU, certainly not enough to support thousands of clients (each of which does have a GPU plugged right into one user’s monitor). Software rendering is a regression that didn’t need to happen, and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).
This use case has been broken in X11 for a very long time, because to make it work well you don't just need some form of network transparency in the display server, you also need remote rendering for OpenGL and Vulkan.
> Software rendering is a regression that didn’t need to happen
But in most cases it isn't happening, because for most applications you don't render on the server at all: you render on a client which interacts with a server.
> and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).
Today many JS apps are not thin clients; they are often quite complete applications. But let's ignore that for a moment.
I'm not sure exactly what you are imagining, but as far as I can tell the only way to make the kind of remote rendering you are implying work in general would be to turn X11 into a GUI toolkit with a stable cross-OS interface, make it the only supported GUI toolkit, and accept that any fancy GPU rendering (e.g. games) would fundamentally not work. There is just no way this would ever have worked.
The reason the industry mostly abandoned network transparency, not just for remote display servers but in most other places too, is that it just doesn't work well in practice. Even the places which do still use network transparency (e.g. network file systems) tend to run into unexpected issues, because software often doesn't cope well with the changed reliability/latency/throughput characteristics this introduces.
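To make the latency point concrete, here is a rough back-of-the-envelope sketch (all numbers are assumed orders of magnitude, not measurements): a chatty protocol built around cheap synchronous local calls collapses once every call pays a network round trip.

```python
# Rough illustration: a synchronous request/response API that is cheap locally
# becomes painfully slow once every call pays a network round trip.
# All numbers are assumed order-of-magnitude figures, not measurements.

LOCAL_CALL_US = 5         # assumed local IPC round trip, microseconds
LAN_RTT_US = 500          # assumed LAN round-trip time, microseconds
WAN_RTT_US = 40_000       # assumed WAN round-trip time, microseconds
CALLS_PER_UPDATE = 200    # assumed synchronous protocol requests per UI update

for label, cost_us in [("local", LOCAL_CALL_US), ("LAN", LAN_RTT_US), ("WAN", WAN_RTT_US)]:
    update_ms = CALLS_PER_UPDATE * cost_us / 1000
    print(f"{label:>5}: {update_ms:8.1f} ms per update ({1000 / update_ms:7.2f} updates/s max)")
```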
> Software rendering is a regression that didn’t need to happen
Actually, it did need to happen. The straw that broke the X developers' backs was font metrics, IIRC. Essentially, if you want to support fonts for the language of the most populous country on Earth, you need to do more or less complete font rendering just to answer questions like "how long is this span of text going to be?" (so that you can break it). The X developers tried to make this work within the X model, but the only way they could get it to work well was to have the X server ship the font to the X client and have the X client ship rendered bits back to the X server [even over the network!].
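To make the font-metrics point concrete: answering "how wide is this text?" means loading the font and running its glyph metrics, which is why whichever side does the measuring needs the font itself. A minimal sketch using Pillow (the font file name is a placeholder; any TrueType font with CJK coverage would do):

```python
# Minimal sketch: answering "how wide is this text?" requires the actual font
# and its glyph metrics -- a fixed per-character width table stops working once
# complex scripts are involved. The font file name below is a placeholder.
from PIL import ImageFont

font = ImageFont.truetype("NotoSansSC-Regular.ttf", size=16)

for text in ["hello world", "你好，世界", "hello 世界"]:
    width = font.getlength(text)  # needs the font file loaded wherever this runs
    print(f"{text!r}: {width:.1f} px at 16 px")
```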
Sometimes because you want users to be able to change workstations, sometimes because you want a highly specific environment outside of the user's control (it can reset on each connection), sometimes because you want nothing to be kept locally. E.g., the country somebody works in is untrustworthy, so they access everything from somewhere remote and safe.
Virtual desktops on demand tend to run on servers with GPUs and generally prefer server-side GPU rendering, because they are meant to work with any client that can access them, even one with an extremely weak GPU.
And if you have no complex rendering requirements, it's often a much better choice to place the network gap in the GUI toolkit instead of the display server, as this tends to work far better. In that case you do need a thin client on the other side, but you need one for remote X11 as well (the client needs to run X11), so it's not that different. And today the easiest way to ship thin clients happens to be JS/WebGPU, which is how we get things like GTK's webrender backend.
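To illustrate what "network gap in the GUI toolkit" means, here is a toy sketch (an illustration of the idea only, not GTK's actual protocol): the application keeps its state and widget tree server-side and ships a high-level description over a WebSocket, so a thin JS page only has to draw that description and send events back. It assumes the third-party `websockets` package.

```python
# Toy sketch of a toolkit-level network gap: the application keeps its state on
# the server and ships a high-level widget description over a WebSocket; a thin
# client (e.g. a small JS page) only draws that description and reports events.
# This illustrates the idea only -- it is not GTK's actual protocol.
# Assumes the third-party "websockets" package (pip install websockets).
import asyncio
import json

import websockets


async def ui_session(ws):
    # All application logic and state live here, server-side.
    clicks = 0
    await ws.send(json.dumps({"widget": "button", "label": f"Clicked {clicks} times"}))
    async for message in ws:
        event = json.loads(message)
        if event.get("type") == "click":
            clicks += 1
            # Ship only an updated description, not pixels.
            await ws.send(json.dumps({"widget": "button", "label": f"Clicked {clicks} times"}))


async def main():
    async with websockets.serve(ui_session, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```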