I read the Wikipedia article on XSLT, and as a long-time web developer I do not understand at all how this would be useful. Plenty of people here are saying that if this tech had taken hold we'd have a better world. Is there a clear example somewhere of why and how?
I think if you're of the belief that JavaScript is bad and should be avoided when possible, this type of thing is seen as an alternative. But we've seemingly moved on to server-side templating, or dynamic JavaScript apps, which solve the problems XSLT solves in a more ergonomic and performant way. Compiling XML templates on the server or at build time is perfectly fine and doesn't require browser support. Doing it on the client leads to the performance issues we saw first-hand with older SPA architectures, and if that isn't an issue, client-side templating with JavaScript is more capable and ergonomic (imo).
It's a language that describes how to render a document.
For example, you could describe how to render your own domain-specific markup language as HTML, which the browser then happily displays as a regular webpage that can of course also be styled with CSS as usual.
That's only scratching the surface, really, but it does powerful transformations from one markup language to another.
This can of course be very useful, as the browser becomes able to present any markup without natively supporting it, and without scripting or server-side components.
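To make that concrete, here's a toy sketch; the markup and stylesheet are invented for the example. The fully declarative route attaches the stylesheet to the XML document with an <?xml-stylesheet?> processing instruction, but driving the same transform through the browser's standard XSLTProcessor API keeps the demo self-contained:

```typescript
// Toy example: render a made-up domain-specific XML vocabulary as HTML.
// Both documents are invented for illustration.
const xml = `<recipe>
  <title>Pancakes</title>
  <step>Mix the batter</step>
  <step>Fry until golden</step>
</recipe>`;

const xslt = `<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/recipe">
    <article>
      <h1><xsl:value-of select="title"/></h1>
      <ol><xsl:apply-templates select="step"/></ol>
    </article>
  </xsl:template>
  <xsl:template match="step">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>`;

const parser = new DOMParser();
const processor = new XSLTProcessor();
processor.importStylesheet(parser.parseFromString(xslt, "text/xml"));

// The output is plain HTML that the browser renders and CSS styles as usual.
const fragment = processor.transformToFragment(
  parser.parseFromString(xml, "text/xml"),
  document
);
document.body.appendChild(fragment);
```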
It is just a special nostalgia for technologies which never became successful. They are perceived as more pure and perfect than the messy real world.
Client-side XSLT transforms are very rarely useful. XSLT was intended for a vision of the web as a multitude of different XML formats served to end users. Since this didn't happen (for good reasons), XSLT became less useful. In reality it is only ever used to transform XML into XHTML.
It was also touted as a more powerful kind of style sheet, since it could rearrange elements and transform semantic elements into tables for presentation. But CSS supports that now in a much better way, and most significantly, CSS can react to dynamic updates.
Serving XML to end users only makes sense if someone actually understands the XML format. But only very few XML formats besides XHTML and SVG have any broad support. RSS is one of the few examples, and seems to be the main use case for XSLT.
This is interesting, is there a reference implementation that exists somewhere? Will there be a fork of tippecanoe that can encode these files or something different?
Still needs some work on the documentation side. There will be a separate announcement when it is done. We have a newsletter that we share on all the common social networks. https://maplibre.org/news/
Aside from getting the encoding side ready so tile providers can start to make MapLibre Tiles available, we are focussed on integrating the decoder in MapLibre GL JS (MapLibre for the web) and MapLibre Native (Android, iOS and other platforms). ETA is sometime near the end of 2025.
I work as a maintainer for MapLibre, let me know if you have any other questions about the project!
Thank you for the link to the git repo, this looks great, and thank you for your work. MapLibre is a library I use all the time, and while MVT isn't something I have any complaints about, this will still be a big upgrade.
Thanks for your work! Out of curiosity, do you know why this project chose to go with Java as its core? Great to see you're also supporting TS + Rust out of the gate.
I don't feel like we are in the waning days of the craft at all. Most of the craft is creating an understanding between people and software and most human programmers are still bad at it. AI might replace some programmers but none who program as a craft.
"Chess engines might get better than some chess players, but none who play Chess as a craft." Do you think people in the 90s thought this? Probably...
In the article, the author mentions that Chess centaurs (a human player consulting an engine) can still beat an engine alone. But the author is wrong. There was a brief period a while ago when that was true, but chess engines are so strong now that any human intervention just holds them back.
I've been programming 30+ years, and am an accomplished programmer who loves the craft, but the writing is on the wall. ChatGPT is better than me at programming in most every way. It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.
The only area that I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems. Funnily enough, it was the same with centaur Chess; humans would make strategic decisions while the engines would work out the tactics. But that model is now useless.
We are currently enjoying a time where (human programmer+AI > AI programmer). It's an awesome time to live in, but, like with Chess, I doubt it will last very long.
Chess is a closed problem. Whereas software development very much isn't.
You will also have to provide a source for 'chess engines are so strong now that any human intervention just holds them back'; a cursory search suggests this is by no means settled.
Yes, the rules of chess are simpler, which is why all this happened many years ago for chess.
https://gwern.net/note/note#advanced-chess-obituary -- here is a reference about centaur/advanced chess. The source isn't perfect, as the tournaments seem to have fizzled out 5-10 years ago when engines got better and it all became irrelevant. Sadly this means we don't have 100 games of GM+engine vs. engine in 2023 to truly settle it, but I've been following this for a while and I have high confidence that Stockfish_2023+human ~= Stockfish_2023.
I think closed vs. open problems are not simply different in magnitude of difficulty but qualitatively different. When I'm programming, most of the interesting things I work on don't have a clear correct answer, or even a way of telling why a particular set of choices doesn't get traction.
I guess it's possible that just being "smarter" might in some cases get a better solution from a series of text prompts, but that seems too vague an argument to hold much water for me.
> It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.
True up until the point that you want to do something that hasn't really been done before or just isn't as findable on the internet. LLMs only know what is already out there; they will not create new frameworks or think up new paradigms in that regard.
It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data cutoff, etc. As a funny recent example, I asked ChatGPT for an example using the OpenAI Node.js library. The example was wrong, as the library has had a major version bump since the last time the training data was updated.
> The only area that I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems.
Which probably is based on your general experience and understanding of programming in the last 30+ years. As I have said elsewhere, I really don't think that LLMs in their current iteration will be replacing developers. They are however going to be part of the toolchain of developers.
> It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data cutoff, etc
Today I asked it a question and it was wrong.... then it ran the code, got the same error as me, and then fixed it (and correctly explained why it was wrong), without me prompting further :)
Really though, how long until that training update goes from every so often to constant? Now that half the internet is feeding it information, it doesn't even need to scour other sources -- it's becoming its own source, for better or worse.
I have been programming 30+ years, and not two days ago looked at a problem I've been dealing with since before 2019, and went "this would be easier if I changed methods" and mitigated the issue in three hours from an airplane.
Programming is only superficially about code. The trick is really figuring out how to approach problems.
As someone who creates data and analysis which get used in setting policy, I do find a lot of EA spreadsheet analysis of measured "good" to be very naive about the nature of measurement and classification.
That being said, I think this piece is a bit of an overreaction, and there seem to be many earnest actors in the EA community really thinking about how they can do good in the world. SBF is very unfortunate for EA, but to jump from his example to saying all EA practitioners care exclusively about the ends over the means is a bit of a leap, imo.
It's just a bunch of privileged armchair humanitarians who never left the confines of their fancy circles, let alone been confronted with the things they're trying to fix. They think they can fix issues better than NGOs which have had boots on the ground for decades, just because they know Python and Excel, as if people actually working on humanitarian causes were benevolent r**ards. Of course, it allows for great intellectual masturbation and self-congratulation, as if fixing complex social/ecological issues were just about "cracking a problem" and presenting a neat 12-page PPT presentation before moving on to the next problem.
If any of these people actually walked the talk, we'd see a lot more one-way tickets to Africa for them to finally be able to employ their beautiful minds on real problems.
For someone outside the space (like me), what’s the big innovation of Effective Altruism? I assume when the rubber hits the road, most people doing big donations have people to look at the effectiveness of that donation.
I guess I’m just suspicious of any community or movement that labels itself as “effective,” because it is hard to believe that they were the first ones to think of the idea of not being ineffective, haha.
Most people doing big donations aren't particularly interested in effectiveness. The Susan G. Komen foundation, still the largest breast cancer charity in the US, had a big controversy about this around the time that Effective Altruism started to get big. According to their annual reports (https://www.komen.org/wp-content/uploads/fy19-20-annual-repo...), if you go to their site and donate $100 towards their promise of "ending breast cancer":
* $5 goes towards breast cancer research. (IIUC, cancer researchers are somewhat skeptical of the idea that cancer could be "ended" as such, but that's a minor quibble.)
* $8 goes towards treatment and screening. Not exactly what was promised, but still saving lives, so close enough.
* $14 goes towards administering the Susan G. Komen foundation.
* $22 goes towards raising funds for the Susan G. Komen foundation.
* $51 goes towards "education". They say this includes patient support services, not just telling people about the Susan G. Komen foundation, but don't offer a further breakdown.
And my understanding is that, in non-EA philanthropic circles, this breakdown isn't considered particularly egregious. At least they're doing something! An ineffective charity would be something like One Laptop per Child, which raised money and press attention from a fake crank-powered laptop and accomplished nothing of note before technological innovation outpaced them.
In the absence of any substantive allegations of misappropriation, be fair to OLPC. They had the challenge of engineering and logistics for a tangible product, not vagaries like "education."
To my neighbor, SGK's efforts yielded as much as OLPC's vaporware. As a career nurse, she's well-educated about the breast cancer she has, which she will soon die from because she can't afford to treat it.
SGK amounts to little more than a goddamn fortune-teller. Not one cent of that $8 has bought her a single extra minute of life.
Yeah, I should be clear, I don't mean to be particularly hard on OLPC. They tried to do a cool-sounding thing, it didn't happen to work out despite real efforts towards it, and as far as tech demos go the crank laptop wasn't egregious. But Susan G Komen isn't really being dishonest either - those numbers are from a nicely designed pie chart in their 2020 annual report! They're just responding to the donor demand for cool events and soaring rhetoric that makes them feel like part of a movement. People who are interested in effectiveness instead donate to organizations like the Breast Cancer Research Foundation, which puts over 70% of donor money into research grants.
EA seeks to measure and compare altruistic endeavors, however imperfectly. For example, measuring the good created by donating to your kids' school, to the 9/11 fund, or to bed nets in Africa. An EA would likely say that the good for society created by donating to your kids' school is less than the good provided by donating the same amount of money to bed nets. They might quantify that in lives saved, such as a $1000 donation to bed nets saving about 1/5th of a life in Africa. But maybe the $1000 donation to the school improves the lives of 100 students by 1% each, or something like that.
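To make the mechanics concrete, here's the kind of crude spreadsheet-in-code an EA might sketch. Every number below is an illustrative assumption lifted from the paragraph above, not real charity data:

```typescript
// Back-of-the-envelope EA-style comparison. All figures are illustrative.

interface Intervention {
  name: string;
  donationUsd: number;     // size of the donation being compared
  lifeEquivalents: number; // crude "good done", converted to life-equivalents
}

const options: Intervention[] = [
  // ~$5,000 per life saved implies $1,000 buys ~1/5 of a life-equivalent.
  { name: "bed nets", donationUsd: 1000, lifeEquivalents: 1000 / 5000 },
  // 100 students each 1% better off, naively summed -- the kind of fuzzy
  // conversion this framework forces you to write down explicitly.
  { name: "kids' school", donationUsd: 1000, lifeEquivalents: 100 * 0.01 },
];

for (const o of options) {
  const perThousand = (o.lifeEquivalents / o.donationUsd) * 1000;
  console.log(`${o.name}: ${perThousand.toFixed(2)} life-equivalents per $1000`);
}
```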
It really forces us to have hard conversations about how we use our collective effort to help each other, based on more than just feelings in the moment. Feelings are an important part of the end goal, but feelings about some particular intervention are not a good way to evaluate it. We're also forced to be clear about what good we think an intervention will provide, and to whom.
Your answer sums up perfectly why I am so strongly against EA. The idea that you can quantify your donation to bed nets in terms of something as arbitrary as “lives saved”, without taking into account the extremely complex social environment into which that donation arrives, is completely opposite to my beliefs. It is basically the same as measuring something like “what’s the benefit of an individual vote?”
The idea that you really believe that as an EA you are having “harder” conversations and you are more “forced” to be clear about what you do than other non-effective altruists is just baffling to me.
It seems to make sense to me. You evaluate whether your actions are doing the most good in the world according to your metrics. For example: should I work at a soup kitchen for a day, or work my day job in HFT for a day and donate the earnings to a soup kitchen?
With the latter I could support quite a few people at the soup kitchen for that day.
You can take that to whatever extent you desire. Should I donate to Guinea worm eradication or local libraries? And so on and so forth.
And the general EA community rates lives highly. So max lives saved usually trumps everything else.
Some political/social communities simply hijack words and terms and use them as if their solution were the only solution to a problem. You are absolutely right to be suspicious; the fact that they are oblivious enough to reality that they really believe themselves to be more effective than others (simply because they hide behind arbitrary numerical computations) is reason enough to suspect that their numbers aren’t really covering everything.
It is a bit like the way feminists think of themselves as the only line of defense for women’s rights, or right-wing extremists and populists keep labelling themselves as “freedom fighters”. All of a sudden, if you’re opposed to them you become a woman-hater, or a freedom-hating socialist (because they can’t understand that there are other, alternative ways to defend the same ideas). These are just political groups with one specific ideology who are marketing themselves as the solution. Thankfully, nowadays with the fall of SBF people are dismissing EA as the fad that it is, but there was a time when opposing EA would elicit reactions such as “oh, so you’re against effectiveness/transparency?” or “so you are in favor of corruption?” Sigh.
> I guess I’m just suspicious of any community or movement that labels itself as “effective,” because it is hard to believe that they were the first ones to think of the idea of not being ineffective, haha.
What do people donate money to charity for? It's certainly not all to the poorest or most desperate people amongst us. It gets donated to a church, or to an art museum. Beyond a certain point, they don't really need the money.
Meanwhile, halfway around the world, people live in abject poverty or are dying from famine or war.
I certainly don't act like an effective altruist. My money goes to things I care about, like open source projects, but not necessarily to people who need the money to live another day, or to people who could help other people live another day.
Let's put it this way. Is it wrong to not save people's lives, especially when it is of no inconvenience to you? I am not talking about donating so much money that I am a beggar on the street, but donating a substantial amount while still retaining a 'middle class' lifestyle.
Then the next question is whether what you're doing is effective or counterproductive. I think it should be no surprise that a large number of people don't give such thought to these questions. Consider the vast scientific illiteracy that pervades our world, like anti-vaxxers asking for money to help spread their message.
> Is it wrong to not save people's lives, especially when it is of no inconvenience to you?
I used to think so. Later events then taught me that proactively helping people doesn't necessarily keep their knife away from your throat. You see people's true colors when you try to disengage.
I only save the lives of animals these days. People are scum. Animals never did me wrong.
It’s not just donations. It’s living your entire life according to “expected value”, or how many units of “utility” (utils) can be created through every action and relationship. It’s an extremely inhuman way of living that goes against ethical norms established since the dawn of civilization. Effective altruists are dangerous and you should not be friends with them, hire them, or associate with them at all if you possibly can. They put your wealth and life at risk.
That is silly. People argue about whether cars are good for the environment, whether we should create a walkable society, policies about climate change, and so forth. They are always arguing about the consequences of their actions.
At the end of the day, people don't follow some sort of pure moral philosophy or set of platonic ideals.
It's crazy how little has changed and how much has changed. I remember what a revelation "the unreasonable effectiveness of RNNs" was when I first read it, and it feels like we live in a different world.
The thing this article doesn't say is that maplibre-gl v2 supports directly querying pmtiles with HTTP range requests, so you don't even need Lambda or Cloudflare Workers to put x/y/z routes in front of the file. So instead of 50c, this is essentially free.
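For reference, the client-side wiring is only a few lines with the pmtiles plugin. The archive URL here is a placeholder, and the exact addProtocol signature has shifted between maplibre-gl versions, so treat this as a sketch:

```typescript
// Sketch: serve vector tiles straight from a static .pmtiles file via
// HTTP range requests -- no tile server, no Lambda/Workers in front.
import maplibregl from "maplibre-gl";
import { Protocol } from "pmtiles";

// Teach MapLibre the pmtiles:// scheme; tile fetches become range requests.
const protocol = new Protocol();
maplibregl.addProtocol("pmtiles", protocol.tile);

const map = new maplibregl.Map({
  container: "map",
  style: {
    version: 8,
    sources: {
      base: {
        type: "vector",
        // Placeholder URL pointing at a single static archive on any
        // range-request-capable host (S3, plain nginx, etc.).
        url: "pmtiles://https://example.com/basemap.pmtiles",
      },
    },
    layers: [], // real style layers omitted for brevity
  },
});
```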
If you are going to set up that infrastructure anyway, you could just use an mbtiles file, which has been around for years.
The interesting thing to me is that this stuff is all built on the open source technology of mapbox, and it seems like a real threat to large parts of their business model. Interested to see how it plays out.
The solution I described in the blog post is an optional layer on top designed for high traffic deployments, see here: https://protomaps.com/docs/cdn
As most storage systems like S3 aren't free and have per-request fees, the price is pretty comparable to this CDN deployment.
Protomaps is very intentionally built with little in common with Mapbox; the main shared parts are using the same Protocol Buffers vector tile format, because there's no reason to write another one; and compatibility with the fork of Mapbox GL 1 (MapLibre GL). See https://protomaps.com/docs/faq#mapbox
Firstly, I just want to say thanks for the reply, but more so thanks for your work; it's moving open source mapping forward.
In my work we are looking at switching from mbtiles hosted with tileserver-gl (https://github.com/maptiler/tileserver-gl) to pmtiles, to remove a server process. But we were self-hosting already, and we are already using maplibre-gl 2.
I can see why the implementation in the blog post would be better for high-traffic deployments (ours isn't). It also points out to me that I don't understand how a CDN would handle range requests when hosting the pmtiles file directly; it probably doesn't?
As far as the Mapbox stuff, in my mind pmtiles is a direct competitor to (successor of) the mbtiles format, which was a revolution in comparison to everything that came before it. A successor I welcome, because it makes it even easier for me as a developer to self-host and not be dependent on a SaaS to run my maps.
The modern open source map stack wouldn't exist without Mapbox, and I'm personally grateful to them for that. Most people who use pmtiles will use Mapbox's open source style spec to style them, and descendants of their open source code to render them. But as a developer now, it's an obvious choice to not use their services after years of using them.
However I'm not doing high traffic stuff and they never made much money off me anyway.
Some CDNs handle range requests; the purpose of introducing Lambda/Workers here is to transform normal Z/X/Y URLs into range requests, so you can use PMTiles with any client like MapLibre Native or legacy code without loading a plugin.
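For anyone curious, that shim is small. Here's a rough sketch as a Cloudflare Worker built on the public pmtiles JS API (PMTiles, FetchSource, getZxy); the bucket URL is a placeholder, and this isn't the exact code from the deployment described in the blog post:

```typescript
// Sketch of a Z/X/Y-to-range-request shim as a Cloudflare Worker.
import { PMTiles, FetchSource } from "pmtiles";

// Placeholder: a single PMTiles archive on any range-request-capable host.
const ARCHIVE_URL = "https://example.com/basemap.pmtiles";
const archive = new PMTiles(new FetchSource(ARCHIVE_URL));

export default {
  async fetch(request: Request): Promise<Response> {
    const match = new URL(request.url).pathname.match(
      /^\/(\d+)\/(\d+)\/(\d+)\.mvt$/
    );
    if (!match) return new Response("not found", { status: 404 });

    const [z, x, y] = match.slice(1).map(Number);
    // getZxy walks the archive's internal directories with range reads
    // and returns the bytes for this tile, if it exists.
    const tile = await archive.getZxy(z, x, y);
    if (!tile) return new Response(null, { status: 204 });

    return new Response(tile.data, {
      headers: {
        "Content-Type": "application/x-protobuf",
        "Access-Control-Allow-Origin": "*",
      },
    });
  },
};
```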
Protomaps actually has its own vector renderer different from Mapbox GL or MapLibre at https://github.com/protomaps/protomaps.js . It does use a handful of low-level Mapbox JavaScript libraries, but otherwise was consciously developed to be 100% separate from existing Mapbox rendering and styling code, and has an objectively inferior (Leaflet) user experience.
In practice, most serious uses of Protomaps are now using MapLibre GL, so more of the project's focus will be going there going forward.
My guess is Mapbox would worry more about competitors in the routing and addressing space than
The biggest danger with open tiles is that a basemap is the gateway to other data services with higher costs and an easier path to a defensible technical moat. Swapping a basemap can literally be a 10-minute job.
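To illustrate how small a job: with MapLibre GL JS it's essentially one call (URLs are placeholders), though any overlays you manage yourself need re-adding once the new style loads:

```typescript
// Swapping basemap providers under an existing map. URLs are placeholders.
import maplibregl from "maplibre-gl";

const map = new maplibregl.Map({
  container: "map",
  style: "https://old-provider.example.com/style.json", // current basemap
  center: [0, 0],
  zoom: 2,
});

// The actual swap: point the map at the new provider's style.
// Custom sources/layers must be re-added on the "style.load" event.
map.setStyle("https://new-provider.example.com/style.json");
```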
I used to deploy SPAs like this, and I consistently had problems with CloudFront's cache clearing. Projects would not update for random amounts of time, and at different times for different clients. I thought I was pretty good at troubleshooting stuff like this, but over months of effort I could never get sites to update when I invalidated the CloudFront cache, despite trying many different approaches.
In the end I switched to hosting on Netlify, which is easier to set up (not that I particularly care about that), but when I deploy, my live site updates immediately.