
My theory: Browsers and JS were evolving at such a fast rate that we needed a new frontend framework each year. React arrived with something good enough just as JS and browser APIs started to slow their massive changes, and everybody was getting tired of switching frameworks all the time. Maybe some newer frameworks have advantages, but not nearly enough to give up what's now the standard ecosystem.


QGIS is great on the desktop. On the browser side I find OpenLayers to be a well-thought-out GIS framework.

A lot of things are evolving in the GIS world, though. You can now render huge datasets, even in the browser, with GeoParquet, GeoArrow, WASM, and WebGL.


Dutch cartoonist Dirkjan revealed the real answer years ago already: https://www.reddit.com/r/dirkjan/s/zszexnXLRu


ChatGPT's translation:

Panel 1 Waiter: "Sir, I’d like to ask you to take off your cap in this restaurant." Smurf: "Take off my cap? You’re not asking me to take off my pants, are you?!"

Panel 2 Waiter: "That’s not the same." Smurf: "That is the same."

Panel 3 Cook (to waiter): "Let him put his cap back on." Waiter: "That’s maybe better."


I'll do a manual one that's more correct and captures the spirit better:

Panel 1

Waiter: "Sir, please take your hat off in this restaurant."

Smurf: "Take off my hat? You wouldn't ask me to take off my trousers either, would you!"

Panel 2

Waiter: "That’s not the same at all."

Smurf: "Yes it is!"

Panel 3

Cook (to waiter): "Let's let him put his hat back on."

Waiter: "Yes, let's."



+1 for Dirkjan, always. Another classic: https://dirkjan.nl/cartoon/20231004_3677623503/

And this one always gets me too: https://external-content.duckduckgo.com/iu/?u=http%3A%2F%2Fw... with the "Jesmurfa's witness"


I don't get that first one :(


The smurf is being asked to remove his hat, since wearing it in a restaurant is considered impolite. When pressed, the smurf says 'I don't ask you to remove your pants'. When it's revealed that the smurf's genitals are under his hat, Dirkjan's mate says 'maybe we should let him keep his hat on'.


Oh no I meant the Pranfeuri one.

Edit: Typing that out made me realise that I could just search for that word. Apparently it's just absurdist: https://www.reddit.com/r/learndutch/comments/17nvt4g/pranfeu...


So it's a plumbus!


Yes, a pranfeuri is indeed like a plumbus! Curious how pranfeuris are made, though. Probably smurfs are involved.


But can you download a country up front? For offroad motorcycle trips I often get into areas with no mobile connectivity.


Yes, you can download any region and set of layers for offline use.


This isn't serverless. It's just using someone else's servers for the SDP signaling. And in a production app you'd likely also need TURN servers and maybe SFU servers.

There are some true serverless approaches out there for the signaling, e.g. where both peers scan each other's QR code, but that obviously has very limited use.
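To make the QR-code idea concrete, here is a minimal sketch (helper names are hypothetical, Python standard library only): an SDP offer is just text, so one peer can compress and base64-encode it into a string small enough to render as a QR code, and the other peer scans and decodes it — no signaling server involved.

```python
import base64
import zlib


def encode_sdp_for_qr(sdp: str) -> str:
    """Compress and base64-encode an SDP blob so it fits in a QR code."""
    compressed = zlib.compress(sdp.encode("utf-8"), level=9)
    return base64.urlsafe_b64encode(compressed).decode("ascii")


def decode_sdp_from_qr(payload: str) -> str:
    """Reverse of encode_sdp_for_qr: decode and decompress the SDP blob."""
    compressed = base64.urlsafe_b64decode(payload.encode("ascii"))
    return zlib.decompress(compressed).decode("utf-8")


# SDP offers are highly repetitive text, so they compress well,
# which keeps the QR code density manageable.
offer = "v=0\r\no=- 4611731400430051336 2 IN IP4 127.0.0.1\r\ns=-\r\n" * 4
payload = encode_sdp_for_qr(offer)
assert decode_sdp_from_qr(payload) == offer
assert len(payload) < len(offer)
```

In a real exchange both sides would do this once each (offer one way, answer the other), which is why two QR scans are needed.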


You're not wrong! Serverless is a funny term. Cloud companies use serverless to mean you don't have to provision and manage the server yourself, but it is still very much serverful, technically speaking. This is neat in that you don't even need to set up anything with a cloud provider yourself to enable P2P connections.


I've always seen the distinction as "serverless" meaning there isn't a fixed group of servers always on; instead they're provisioned up and down on demand.

Merely avoiding provisioning and managing the server yourself just means you're renting rather than self-hosting.


The VPS is like renting office space. You don't own the space, but for the most part get to use it how you want, and all the responsibilities that come with that.

"Serverless" is like paying for a hot desk by the minute, with little control of your surroundings, but it is convenient and cheap if you only need it for an hour.


At one job I had access to some paid AWS support tier. It's basically a bunch of consultants. We needed to process a datastream of events from user actions on a website. We asked about serverless / AWS Lambda. Their answer was something like "Well yeah it'll work but don't do that. It'll cost too much money and you'll wind up rebuilding it around EC2 anyways"


Yup. If you want something to plumb some pretty low-volume events, sure, serverless like Lambda can be useful. For anything that would be considered high levels of compute, you are just wayyy overpaying. Hell, even EC2 on a spot instance is expensive compute. I do like some AWS services, but yeah, they come at a premium that is just getting more and more expensive.


My mental model is "we handle interpreter restarts for you, so forget about systemd unit files and CEO's laptop with minimized tmux"


I've always been obsessed with true P2P WebRTC via QR codes, but, at least back in the day, Firefox failed the offer after a very short timeout (~5 secs IIRC), which made out-of-band signaling completely impossible.


I did this a couple of weeks ago on Firefox and it worked fine even with a one-minute delay. An even easier way to share the SDP offer, at least when the clients are in physical proximity, is a data-over-sound library like ggwave.


We've circled back to dial-up modems :D


serverless nowadays means "no server in YOUR infrastructure"


I didn't know about the QR-code solution. How does it work?


Normally you need a "lobby" server that collects and lists other available clients and passes along connection details. In a true P2P setup you have no servers, so the "signaling" information has to be shared "out of band": through a QR code, a super-secret invite link, avian IPv4, or something similar.


Wait, but this should only work on local/nearby networks, shouldn't it? I thought you still need some proxying in other cases (hence TURN). I really need to study this again ASAP, though.


STUN gives back your public IP:port; TURN gives you an assigned, proxied IP:port.

You take that data and send it to the peer over the signaling connection, and they call you back on that IP:port. Most NAT implementations create and keep the mapping between public port and private IP consistent[1] for a few minutes, rather than completely random per destination[2], so it usually works.

1: e.g. router.public.ip.example:23456 <-> 192.168.0.12:12345

2: e.g. if stun.l.google.com:12345 sent from port 23456 but if yourfriend.router.ip.example:12345 sent from port 45678
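To illustrate the difference the two footnotes describe, here is a toy simulation (not a real NAT, just the mapping policy): an endpoint-independent NAT reuses one public port per internal socket regardless of destination, which is what makes hole punching work; an endpoint-dependent ("symmetric") NAT allocates a fresh port per destination, so the port STUN reports is useless to the peer and TURN is needed.

```python
class Nat:
    """Toy NAT port mapper; endpoint_independent selects the mapping policy."""

    def __init__(self, endpoint_independent: bool):
        self.endpoint_independent = endpoint_independent
        self.mappings = {}
        self.next_port = 23456

    def public_port(self, private_addr, dest_addr):
        # Endpoint-independent: key the mapping on the internal socket only.
        # Endpoint-dependent: key it on (internal socket, destination).
        key = private_addr if self.endpoint_independent else (private_addr, dest_addr)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return self.mappings[key]


good = Nat(endpoint_independent=True)
# The port learned via STUN...
stun_port = good.public_port(("192.168.0.12", 12345), ("stun.example", 3478))
# ...is the same port the peer can reach us on: hole punching works.
peer_port = good.public_port(("192.168.0.12", 12345), ("peer.example", 45678))
assert stun_port == peer_port

bad = Nat(endpoint_independent=False)
# A symmetric NAT hands out a different port per destination,
# so the STUN-reported port does the peer no good.
assert bad.public_port(("192.168.0.12", 12345), ("stun.example", 3478)) != \
       bad.public_port(("192.168.0.12", 12345), ("peer.example", 45678))
```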


Wow, thank you; I'll definitely hop back on this topic now. Very much appreciate the answer.


Yes. Unless the party generating the QR code first obtains its external IP address by other means, which would still require some kind of echo server. Even then, ignoring outdated approaches like UPnP, a commonly accessible host would be needed to establish signalling with e.g. NAT hole punching for anything but the most basic of setups.


They say that serverless stacks have the highest server bills.


It's not very scalable. The regular rules of WebRTC apply, so past a certain number of users you'd have to switch to an SFU approach.
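The scaling limit is easy to see with a little arithmetic: a full mesh needs n*(n-1)/2 peer connections (and each client encodes and uploads its stream n-1 times), while an SFU needs only one upstream connection per client. A quick sketch:

```python
def mesh_connections(n: int) -> int:
    # Full-mesh WebRTC: every pair of peers holds its own connection.
    return n * (n - 1) // 2


def sfu_connections(n: int) -> int:
    # SFU: each client keeps a single connection to the forwarding server.
    return n


for n in (2, 5, 10, 30):
    print(n, mesh_connections(n), sfu_connections(n))
# At 30 participants a full mesh needs 435 connections, and each client
# must upload its video 29 times; with an SFU it's 30 connections total.
```

In practice CPU and uplink bandwidth on each client, not the connection count itself, are what give out first, which is why mesh calls tend to cap out at a handful of participants.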


Isn't it up to you to prove the model used AGPLv3 code, rather than for them to prove they didn't?


Not inherently.

If their model reproduces enough of an AGPLv3 codebase near verbatim, and it cannot be simply handwaved away as a phonebook situation, then it is a foregone conclusion that they either ingested the codebase directly, or did so through somebody or something that did (which dooms purely synthetic models, like what Phi does).

I imagine a lot of lawyers are salivating over the chance of bankrupting big tech.


The onus is on you to prove that the code was reproduced and is used by the entity you're claiming violated copyright. Otherwise literally all tools capable of reproduction — printing presses, tape recorders, microphones, cameras, etc — would pose existential copyright risks for everyone who owns one. The tool having the capacity for reproduction doesn't mean you can blindly sue everyone who uses it: you have to show they actually violated copyright law. If the code it generated wasn't a reproduction of the code you have the IP rights for, you don't have a case.

TL;DR: you have not discovered an infinite money glitch in the legal system.


Yes! All of those things DO pose existential copyright risks if they are used to violate copyright! We're both on the same page.

If you have a VHS deck, copy a VHS tape, then start handing out copies of it, I pick up a copy of it from you, and then see, lo and behold, it contains my copyrighted work, I have sufficient proof to sue you and most likely win.

If you train an LLM on pirated works, then start handing out copies of that LLM, I pick up a copy of it, and ask it to reproduce my work, and it can do so, even partially, I have sufficient proof to sue you and most likely win.

Technically, even the question of which license is a bit moot: AGPLv3 or not, it's a copyright violation to reproduce the work without a license. GPL just makes the problem worse for them: anything involving any flavor of GPLv3 can end up snowballing, with major GPL rightsholders enforcing the GPLv3 curing clause, as they will most likely also be able to convince the LLM to reproduce their works as well.

The real TL;DR is: they have not discovered an infinite money glitch. They must play by the same rules everyone else does, and they are not warning their users of the risk of using these.

BTW, if I was wrong about this, (IANAL after all), then so are the legal departments at companies across the world. Virtually all of them won't allow AGPLv3 programs in the door just because of the legal risk, and many of them won't allow the use of LLMs with the current state of the legal landscape.


No. You don't have sufficient proof to sue me simply for using an LLM, unless I actually use it to reproduce your work. If I don't use it to actually reproduce your work, you lose. And the onus is on you to prove that I did. Your claim was:

> There is no reason why I can't sue every single developer to ever use an LLM and publish and/or distribute that code.

Simply proving that it's possible to reproduce your work with an LLM doesn't prove that I did, in fact, reproduce your work with an LLM. Just like you can't sue me for owning a VHS — even though it's possible that I could reproduce your work with one. The onus is on you to show that the person using the LLM has actually used it to violate your copyrighted work.

And running around blindly filing lawsuits claiming someone violated your copyright with no proof other than "they used an LLM to write their code!" will get your case thrown out immediately, and if you do it enough you'd likely get your lawyer disbarred (not that they'd agree to do it; there's no value in it for them, since you'll constantly lose). Just like blindly running around suing anyone who owns a VHS doesn't work. You have not discovered an infinite money glitch, or an infinite lawsuit glitch.

If you think you have, go talk to a lawyer. It's infinite free money, after all.


Again, I shall correct the strawmanning of this: If you, the user, reproduce the work, then I can sue you for distributing the reproduced work. If you produce a tool/service whose only purpose is to reproduce works illegally, then I can sue you for making and distributing that tool and the government may force you to cease the production of the tool/service.

The onus would be on the toolmaker/service provider to prove there are legal uses of that tool/service and that it should not be destroyed. This is established case law; people have lost those cases, and the law is heavily tilted in favor of the copyright holders.

The majority of LLMs are trained on pirated works. The companies are not disclosing this (as they would be immediately sued if they did so), and letting their users twist in the wind. Again, if those users use the LLM to reproduce a copyrighted work, all involved parties can be sued.

See the 1984 Betamax case (Sony Corp. of America v. Universal City Studios) for how the case law around this works: Sony was able to prove there are legitimate and legal uses for being able to record things, and thus could continue to produce Betamax products without being liable for pirates pirating with them...

... but none of the LLM distributors or inference service providers have reached (or may even be able to reach) that bar. And that ruling didn't make it legal to pirate things with Betamax; those people were still sued and sometimes even imprisoned. Similarly, it would not free LLM users to continue pirating works using LLMs; it would only prevent OpenAI, Anthropic, etc. from being shut down.

If you still think this is an infinite money glitch, then it is exactly as you say, and this glitch has been being used against the American people by the rich for our entire lives.


You are just making things up. In the American court system you are innocent until proven guilty. There's no "established case law" that tool makers have to prove their tools can be used for whatever or else they're guilty — you have to prove they're guilty if you think they are. You don't even understand the cases you're citing! Sony was presumed innocent and the onus was on the plaintiffs, who failed. And you couldn't sue someone for simply owning a VCR or using one — notably, the plaintiffs were trying to sue Sony, the VCR maker, not everyone in America who owned a VCR.

In an even greater misunderstanding of the American legal system, you're using the Sony case to argue that you would win court cases against LLM users. The plaintiffs in the Sony case lost! This makes your pretend case even harder: the established precedent is in fact the opposite of what you want to do, which is randomly sue everyone who uses LLMs based on a shaky analysis that since it's possible to use them to infringe, everyone is guilty of infringement until proven innocent.

Moreover, at this point you're heavily resorting to motte and bailey, where you originally claimed you could sue anyone who used LLMs, and are now trying to back up and reduce that claim to just being able to sue OpenAI, Anthropic, and training companies.

Continuing this discussion feels pointless. Your claim was wrong. You can't blindly sue anyone who uses LLMs. If you think you can, go talk to a lawyer, since you seem to believe you've found a cheat code for money.


> You can't blindly sue anyone who uses LLMs.

Correct; that has been established as a strawman that is frequently used on HN.

>In an even greater misunderstanding of the American legal system, you're using the Sony case to argue that you would win court cases against LLM users.

Not at all. I said this is the only actual path for the companies to survive, if they can thread that legal needle. The users do not get the benefit of this. The FBI spent the better part of 3 decades busting small time pirates reproducing VHS tapes using perfectly legal (as per the case I quoted) tape decks.

Notice, not everybody has won this challenge; the Sony case merely shows how high you have to jump. Many companies have been found liable for producing a tool or service whose primary use is to commit crimes or other illegal acts.

Companies that literally bent over backwards to comply with the law still got absolutely screwed, see what happened to Megaupload, and all they did was provide an encrypted offsite file storage system, and complied with all applicable laws promptly and without challenge.

Absolutely nothing stops the AI companies from being railroaded like that. However, I believe that they will attempt a Sony-like ruling to save their bacon, but throw their users under the bus.

>the established precedent is in fact the opposite of what you want to do,

Nope, just want to sue the code pirates. Everyone else can go enjoy their original AI slop as long as it comes from a 100% legally trained model and everybody keeps their hands clean.

>and are now trying to back up and reduce that claim

No, I literally just gave the Sony case as an example of reducing the claim into the other direction. The companies may in fact find a way to weasel out of this, but the users never will.

Another counter-example, btw, not that you asked for one, is Napster. Napster was ordered by a court to shut down its service, as its primary use was to facilitate piracy. While it is most likely OpenAI et al. will try to Sony their way out, they could end up like Napster instead, or worse, like Megaupload.

>everyone is guilty of infringement until proven innocent.

Although you are saying this in plain language, this is largely how copyright cases work in the US, even though, in theory, it should be innocent until proven guilty. However, that exact phrase is only meaningful in criminal cases. It is much more loose in civil cases, and the bar for winning a civil case is much lower.

Usually in a copyright case the copyright owner is the plaintiff (although not always!), and copyright-owner plaintiffs usually win these cases, even in cases where they really shouldn't have.

>Continuing this discussion feels pointless.

Yes it really does. Many people on HN clearly think it is okay to copyright-wash through LLMs, and that the output of LLMs are magically free of infringement by some unexplained handwaving.

You still have not explained how a user can have an LLM reproduce a copyrighted work, and then distribute it, and somehow the copyright owners cannot sue everyone involved, which is standard practice in such cases.


> as long as it comes from a 100% legally trained model

This is where your entire argument falls apart. You can't sue people just for using a tool that has the capability to violate copyright: you actually have to prove they did so. While it's technically true that you don't need to meet the bar of "proof" for civil cases, you're still not in luck: the bar is "preponderance of evidence," which you don't have if you're just blindly suing people based on using an LLM (and zero actual evidence of infringement). Using an LLM isn't illegal, so evidence that they used an LLM isn't evidence of anything that matters to your case: aka, you have nothing.

All of your other examples similarly fall apart. For Napster cases, the RIAA had to show people actually violated copyright, not that they just had Napster installed or used it for non-copyrighted works. And again, you're trying to motte-and-bailey your way out of your original claim that you could blindly sue LLM users, as opposed to training companies who make the models. You couldn't sue Megaupload users who used Megaupload for random file storage — you could only sue Megaupload for knowingly not complying with copyright law.

You really just don't understand the legal system. I'm not going to respond to this thread anymore. If you think you have a free money cheat code, go ahead and try to use it — you'll fail.


>"preponderance of evidence,"

Yes, which as I said above, is the act of distribution in most cases. If you don't distribute it or sell a service around it, it isn't worth my time to sue.

>Napster cases, the RIAA had to show people actually violated copyright, not that they just had Napster installed or used it for non-copyrighted works.

Yes, and if someone distributes an LLM or sells access to an LLM, and then someone uses that to reproduce my copyrighted work, or I go to the model/service and have it reproduce my copyrighted work, then I can show to the court that it has been done, not merely can be done.

>you could only sue Megaupload for knowingly not complying with copyright law

That's the really fucked up part of Megaupload, which is why I brought it up as an example of how tilted the law is in favor of copyright holders: They did not prove Megaupload nor Kim Dotcom knowingly did that, and Megaupload and Dotcom proved they did comply with the law, promptly and in full. They did the exact same level of diligence that I'd expect from any competing storage bucket company, yet Dotcom's house was raided by a joint US/NZ team at his house in NZ as if he was some terrorist.

They are still trying to extradite him for "crimes", where his "crimes" are that he complied with the same laws Amazon, Google, Microsoft, Dropbox, etc comply with, with the same diligence that they do.

Given all of what I've said, you cannot actually make the argument that I can't sue. If someone reproduces my copyrighted work, I can, in good faith, sue them for that. Doing it through an LLM does not give them a free pass, they cannot argue to the court that they didn't know the LLM was trained on pirated works, they cannot argue to the court that they didn't understand how LLMs work. They still reproduced and distributed the copyrighted work, which is what damns them.

Again, you seem to think suing people for copyright violations is a free money cheat code, and you've been unable to tell me why doing it with an LLM is different than any other copyright violation, while I've given examples of case law for both sides on how this might play out. If suing someone that violated copyright is a free money cheat code, then companies like Disney are the biggest cheaters in history.

> I'm not going to respond to this thread anymore

IANAL, IANYL, but I think that's a good choice.


I think you are confused about how LLMs train and store information. These models aren't archives of code and text, they are surprisingly small, especially relative to the training dataset.

A recent Anthropic lawsuit decision also reaffirms that training on copyrighted material is not a violation of copyright.[1]

However, outputting copyrighted material would still be a violation, the same as a person doing it.

Most artists can draw a batman symbol. Copyright means they can't monetize that ability. It doesn't mean they can't look at bat symbols.

[1]https://www.npr.org/2025/06/25/nx-s1-5445242/federal-rules-i...


No, I'm quite aware of how LLMs work. They are statistical models. They have, however, already been caught reproducing source material accurately. There is, inherently, no way to actually stop that if the only training data for a given output is a limited set of inputs. LLMs can and do exhibit extreme overfitting.

As for the Anthropic lawsuit, the piracy part of the case is continuing. Most models are built on pirated or unlicensed inputs. The part that was decided, although the decision imo was wrong, only covers whether someone CAN train a model.

At no point have I claimed you can't train one. The question is can you distribute one, and then use one. An LLM is not simplistic enough to be considered a phonebook, so they can't just handwave that away.

Saying an LLM can do that is like saying an artist can make a JPEG of a Batman symbol, and that's totally okay for them to distribute because the JPEG artifacts are transformative. LLMs ultimately are just a clever way of compressing data, and compressors are not transformative under the law, but possessing a compressor is not inherently illegal, nor is using one on copyrighted material for your own personal use.


They will just put a dumb copyright filter on the output, a la YouTube or other hosting services.

Again, it's illegal for artists to recreate copyrighted work; it's not illegal for them to see it or know it. It's not like you can't hire a guy because he can perfectly visualize Pikachu in his head.

The conflation of training on copyright being equivalent to distribution of copyright is so disingenuous, and thankfully the courts so far recognize that.


YouTube et al.'s copyright detection is mostly nonfunctional. It can only match exactly the same input with very little leeway. Even resizing to the wrong ratio, or changing the audio sampling rate too far, fucks up the detection.

It's illegal for artists to distribute recreated copyrighted work in a way that is not transformative. It isn't illegal to produce it and keep it to themselves.

People also distribute models; they don't merely offer them as a service. However, if someone asks their model to produce a copyright violation, and it does so, then the person who created and distributed the model (it's the distribution that is the problem), the service that ran it (assuming it isn't local inference), and the person who asked for the violation to be created can all be looped into the legal case.

This has happened before, long before the world of AI. Even companies that fully participated in the copyright regime, quickly performed takedowns, and ran copyright detection to the best of their ability were sued, and they lost because their users committed copyright violations using their services, even though the company did everything right and absolutely above board.

The law is stacked against service providers on the Internet, as it essentially requires them to be omniscient and omnipotent. Such requirements are not levied against other service providers in other industries.


Also, if you have multiple Microsoft 365 accounts, switching between them in the webapp has seemed impossible for years already. There's a "switch account" option; you can click it and sign in to another account, except you just stay logged in as the previous account.


Firefox "Containers" are great for this. I have color-coded tabs for various customer work accounts / tenants.


Yes, they are a life-saver. I use them all the time for various AWS accounts. The guy who told me about it years ago has saved me countless hours.


AWS finally added the ability to swap between a few accounts [1]. There's an arbitrary limit of 5, so it's really bad if, say, you work for an enterprise and have lots of accounts, or you work for a smaller business following the AWS Well-Architected Framework and isolate things. Containers still win.

1: https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/mu...


I am not sure… a Microsoft login normally seems to jump across tens of different domains, my Firefox containers get confused, and it often fails if the stars are not well aligned.


Oh yeah, I can only actually use them via private browsing (or nuking cookies, of course)


Not sure. Since Google started including a Gemini response on top of their search results I stopped using ChatGPT for search.


Funny, the AI summary makes the experience shittier for me.

The UI jumps and everything moves; I now have to wait until it loads. Massive UX mistake, the kind you learn about in your first week of making websites...


For me the AI seems to actually understand the meaning of my search term and is usually accurate and helpful. It is an improvement over how bad Google had become when it randomly deleted search terms.


Hmm... it doesn't jump for me. There's a fixed amount of space reserved at the top of the screen and the AI overview loads into that. It only expands if you press the "Show more" button.


Awesome. How far away would you say you are from a stable OrioleDB as a Postgres extension?


We're planning to reach GA this year. Pushing all the patches to PostgreSQL core and making OrioleDB a pure extension will take more time.

