hokkos's comments | Hacker News

I use https://typespec.io to generate OpenAPI; writing OpenAPI YAML by hand quickly became horrible past a few APIs.


Ha yes, see one of my other comments to another reply.

I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone had a positive experience with it. I'll definitely try it next time I get the chance.


it's because you only do it once per project.


Some libs literally publish a new package on every merged PR, so multiple times a day.


It reminds me of EXI compression for XML, which can be heavily optimized when an XSD schema is available: schema-aware compression uses the schema graph for optimal compression. https://www.w3.org/TR/exi-primer/


I also have an elegant proof, but it doesn't quite fit in an HN comment.


No support for symbols, amirite?


Whatever you do with XSLT you can do in a saner way, but for whatever we need serial/bluetooth/webgpu/midi for, there is no other way, and canvas is massively used.


I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.

For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.

Even more so if you want to support people downloading the data and viewing it from a local file.


If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the xsl stylesheet is already a separate file.
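Roughly what that could look like, as a minimal sketch (the device URL, the <reading> element, and the #readings container are made up for illustration):

    // viewer.ts (sketch): fetch the raw XML from the device and render it client-side
    async function loadReadings(): Promise<void> {
      const res = await fetch("http://192.168.1.50/data.xml"); // hypothetical device address
      const xml = new DOMParser().parseFromString(await res.text(), "application/xml");
      const rows = Array.from(xml.querySelectorAll("reading")).map(
        (r) => `<tr><td>${r.getAttribute("time")}</td><td>${r.textContent}</td></tr>`
      );
      document.querySelector("#readings")!.innerHTML = rows.join("");
    }
    loadReadings();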


I don't think anyone is attached to the technology of xslt itself, but to the UX it provides.

Your microcontroller only serves the actual XML data; the XSLT is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the XSLT treatment just the same.

In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturer's website, input the URL of your microcontroller, and it'd have to do a CORS fetch to process the data.

One option I'd suggest: instead of

    <?xml-stylesheet href="http://example.org/example2.xsl" type="text/xsl" ?>
we'd use a service worker script to process the data

    <?xml-stylesheet href="http://example.org/example2.js" type="application/javascript" ?>
Service workers are already well suited to this kind of resource interception and processing, and it'd provide the same UX.

The service worker would not be associated with any specific origin, but it would still go through the regular lifecycle and receive the usual events, including a fetch event for every load of an XML document pointing at this specific service worker script.

Using https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent/... it could respond to the XML being loaded with a transformed response, allowing it to process the XML much like an XSLT would.

You could even have a polyfill service worker that loads an XSLT and applies it to the XML.
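A rough sketch of what that service worker could look like (hypothetical, since nothing triggers a service worker from an xml-stylesheet PI today; transformXml is a stand-in, e.g. a pure-JS XSLT engine, since XSLTProcessor and DOMParser aren't available in worker scope):

    // example2.ts (sketch): service worker referenced by the xml-stylesheet PI above
    declare function transformXml(xml: string): string; // placeholder transform, see note above

    self.addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith((async () => {
        const res = await fetch(event.request);        // the original XML document
        const html = transformXml(await res.text());   // turn the XML into HTML
        return new Response(html, { headers: { "Content-Type": "text/html" } });
      })());
    });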


Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.


Like, axios can do it if you specify the fetch backend; it just won't do the .json() asynchronously.
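If I remember right, that's just the adapter option (assuming a recent axios that ships the fetch adapter):

    import axios from "axios";

    // Ask axios to use fetch under the hood instead of XHR / node's http.
    // The body still comes back already parsed on res.data, so there's no
    // separate async .json() step to await or skip.
    const api = axios.create({ adapter: "fetch" });
    const res = await api.get("https://example.org/data.json");
    console.log(res.data);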


I'm actually not a big fan of the async .json from fetch, because when it fails (because the body isn't JSON), you can't peek at the text instead. Of course, you can apparently clone the response and then read text from the clone... and if you're wrapping it for some other handling, it isn't too bad.
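The wrapper ends up being something like this sketch:

    // Sketch: try .json(), and if the body turns out not to be JSON,
    // fall back to reading the raw text from a clone of the response.
    async function readBody(res: Response): Promise<unknown> {
      const copy = res.clone(); // clone before the body gets consumed
      try {
        return await res.json();
      } catch {
        return await copy.text(); // peek at the text instead
      }
    }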


I never once succeeded in logging in with a passkey after generating one.


Not sure you should use Leaflet for map usage this heavy; it's not really usable as it is now. Maybe look at deck.gl.
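For reference, drawing a big pile of points with deck.gl looks roughly like this (a sketch with made-up data; the points are rendered via WebGL instead of individual DOM markers):

    import { Deck } from "@deck.gl/core";
    import { ScatterplotLayer } from "@deck.gl/layers";

    const points = [{ position: [2.35, 48.85] }]; // assumed data shape: [lng, lat]

    new Deck({
      initialViewState: { longitude: 2.35, latitude: 48.85, zoom: 10 },
      controller: true,
      layers: [
        new ScatterplotLayer({
          id: "points",
          data: points,
          getPosition: (d) => d.position,
          getRadius: 20,
          getFillColor: [255, 80, 80],
        }),
      ],
    });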


The page feels slow, there's a circle instead of my mouse cursor, and the M3 Expressive screenshot shows less space for the content and the recipient address, but the send button is clearly easier to find.


No but you see, they did eye-tracking tests and users "find" the send button in 0.8s instead of 1.6s, so it's clearly worth it to reduce the space for content even further and add even more enormous amounts of whitespace. This is science you guys!

Btw: extrapolating an exponential growth rate for the amount of whitespace in modern UI, I predict that smartphone screens will consist entirely of whitespace before 2030.

