Many years ago, I was leading a team that implemented a hyperfast XML/XSLT/XPath parser and processor from the ground up in C/C++. This was for a customer project. It also pulled off some pretty neat optimizations, like binary XSLT compilation, both in-memory and filesystem caching, and threading. On the server side, you could often skip the template loading and parsing stages, parallelize processing, and do live, streaming generation. There was also a dynamic plugin extension system and a push/pull event model. The benchmarks were far better than anything else out there. Plus, it was embeddable in both server and client app code.
Would have been great if it had been open-sourced, but they paid for all the development and owned the codebase. They wanted to use it to dynamically generate content for every page for every unique device and client that hit their server. They had the infrastructure to do that for millions of users. The processing could be done on the server for plain web browsers or embedded inside a client binary app, so live rendering to native could be done on-device.
Back then, it was trivial to generate XML on the fly from a SQL database, then either send that back as-is or render it to XHTML or any other custom presentation format via XSLT. With an XSD schema, the format was self-documenting and could be validated. XSLT also helped push the standardization on XHTML and tame the chaos of mismatched HTML versions across browsers. It was also a great way to inject semantic web tags into the output.
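To give a flavor of that pipeline (the element names and stylesheet below are just an illustrative sketch, not anything from the actual project), a SQL result serialized as XML could be rendered to XHTML with a handful of templates:

    <!-- products.xml: serialized on the fly from a SQL query -->
    <products>
      <product id="42">
        <name>Widget</name>
        <price currency="USD">9.95</price>
      </product>
    </products>

    <!-- products.xsl: renders the XML above as XHTML -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns="http://www.w3.org/1999/xhtml">
      <xsl:output method="xml" indent="yes"/>

      <!-- Page shell: one <li> per product row -->
      <xsl:template match="/products">
        <html>
          <body>
            <ul><xsl:apply-templates select="product"/></ul>
          </body>
        </html>
      </xsl:template>

      <!-- One row: name, then price with its currency attribute -->
      <xsl:template match="product">
        <li>
          <xsl:value-of select="name"/>:
          <xsl:value-of select="price"/>
          <xsl:text> </xsl:text>
          <xsl:value-of select="price/@currency"/>
        </li>
      </xsl:template>
    </xsl:stylesheet>

The same stylesheet could run server-side (e.g. xsltproc products.xsl products.xml, or the equivalent libxslt calls) or natively in the browser, which is exactly the server/client flexibility described above.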
But I always thought it got dragged down by the overloaded weight of SOAP. Once REST and Node showed up, everyone headed for new pastures. Then JS in browsers begat SPAs, so rendering could move to the front end. Schema validation moved to ad-hoc tools like Swagger/OpenAPI. Sadly, we don't really have a semantic web alternative now and have to rely on best guesses via LLMs.
For a brief moment, it looked like the dream of a hyper-linked, interconnected, end-to-end structured, realtime semantic web might be realizable. Aaand, then it all went poof.
TL;DR: The XML/XSLT stack nailed a lot of the requirements. It just got too heavy and lost out to lighter-weight options.