Very likely no license can restrict it, since learning is not covered by copyright. Even if it could, you couldn't add a "no LLMs" clause without violating free software principles or the OSI definition, since neither allows discriminating against fields of endeavor.
In that case, are the neo-Luddites worse than the original Luddites? Many are definitely not "totally fine with the machines", and they definitely do not confine their attacks to the manufacturers who undermine worker rights; they target the average person as well. And the original Luddites already got plenty of hate for attempting to hold back progress.
I don't know about worse, but I think the situations are very similar. It's inaccurate to think the Luddites just hated technological advancement for the sake of it. They were happy to use machines; why wouldn't they be, if they had a back-breaking and monotonous job and the machine made it easier?
The issue is not the technology per se; it's how it's applied. If it eliminates vast swathes of jobs and drives wages down for those who are left, then people start to have a problem with it. That was true in the time of the Luddites, and it's true today with AI.
Are you sure? A survey by the YouTuber Games And AI found that the vast majority of indie game developers, around 90%, are either using or considering using AI.
This article commits several common and disappointing fallacies:
1. Open-weight models exist, guys.
2. It assumes that copyright is stripped when doing essentially Img2Img on code. That's not true. (Also, copyright != attribution.)
3. It assumes that AI is "just rearranging code". That's not true. Speaking about provenance in learning is as nonsensical as asking someone to credit the creators of the English alphabet. There's a reason why, around the world, literally every copyright-based lawsuit against machine learning has failed so far.
4. It assumes that the reduction in posts on StackOverflow is due to people no longer wanting to contribute. That's likely not true. It's just that most questions were "homework questions" that didn't really warrant a volunteer's time.
I love the LLM tech and use them every day for coding. I don't like calling them AI. We can definitely argue LLMs are not just rearranging code, but let's look at some evidence that shows otherwise. Last year's NYT lawsuit showed that LLMs have memorized large amounts of news text; you must have seen those examples. A recent, not-yet-peer-reviewed academic paper, "Language Models are Injective and Hence Invertible", suggests LLMs just memorize their training data. Also, this recent DEF CON 33 talk (https://youtu.be/O7BI4jfEFwA?si=rjAi5KStXfURl65q) shows many ways you can get training data out. Given all this, it's hard to believe they are intelligently generating code.
Re: 3, AI is indeed a lossy compression of text. I recommend searching YouTube for "karpathy deep dive LLM" (/7xTGNNLPyMI): he shows that open texts used in training are regurgitated unchanged when you talk to the raw base model. That means if you prompt it with "oh say can you", it will answer "see by the dawn's early light", or something close like "by the morning's sun". So it's very lossy, but it is compression: the output would be something else entirely without the text that was used in training.
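For anyone who wants to try that regurgitation test themselves, here's a minimal sketch using the Hugging Face transformers library. The model name "gpt2" is just an illustrative stand-in for whatever raw base (non-instruct) checkpoint you have, and the prompt is the example from the comment above; this isn't the exact setup from the video.

```python
# Minimal regurgitation probe: feed a base model a prefix of text that is
# almost certainly in its training data and see what it continues with.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative base model; any raw pretrained checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix of a widely reproduced text (the US national anthem).
prompt = "oh say can you"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: if the continuation is memorized, the most probable
# tokens will tend to reproduce it, possibly with small variations.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # deterministic, most-probable continuation
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Greedy decoding matters here: sampling adds noise, while the argmax continuation is where memorized text shows up most clearly.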
Just so everyone else knows, the complaining is by definition reactionary.
> In politics, a reactionary is a person who favors a return to a previous state of society which they believe possessed positive characteristics absent from contemporary society.
But I guess HackerNews is infamous for being conservative, so it's not too surprising.
At least the blog author is self-aware about making accessibility worse? I just found it funny how reactionary and backfire-y this was.
I sympathize with the reactionaries. Obviously there's no putting the genie back in the bottle, but it would be nice to live in a world where writing stuff helped human readers more than it helped billion-dollar corporations.
You do still help human readers. I've been having the time of my life, for example, reading the advent of compiler optimization posts that surface here every day.
Granted, there is a lot of AI slop here now too, but I'm still glad humans write so that I can read and we can discuss here!
They are the same. I was looking for something and tried AI. It gave me a list of stuff. When I asked for its sources, it linked me to some SEO/Amazon affiliate slop.
All AI is doing is making it harder to tell good information from slop, because it obscures the source, or people ignore the source links.
I've started just going to more things in person, asking friends for recommendations, and reading more books (should've been doing all of these anyway). There are some niche communities online I still like, and the fediverse is really neat, but I'm not sure we can stem the Great Pacific Garbage Patch-levels of slop, at this point. It's really sad. The web, as we know and love it, is well and truly dead.