Even disregarding what he has done, this is utterly absurd. I almost spit my coffee reading that.
You are going to tell me that the vibe coders care and read the code they merge with the same attention to detail and care that Linus has? Come on...
That's the key for me. People are churning out "full features" or even apps claiming they are dealing with a new abstraction level, but they don't give a fuck about the quality of that shit. They don't care if it breaks in 3 weeks/months/years, or whether that code's even needed at all.
Someone will surely come and say "I read all the code I generate", and then I'll say: either you're not getting the BS productivity boosts people claim, or you're lying.
I've seen people pushing out 40k lines of code in a single PR and have the audacity to tell me they've reviewed the code. It's preposterous. People skim over it and YOLO merge.
Or if you do review everything, then it's not gonna be much faster than writing it yourself unless it's extremely simple CRUD stuff that's been done a billion times over. If you're only using AI for these tasks maybe you're a bit more efficient, but nothing close to the claims I keep reading.
I wish people cared about the code they write/merge like Linus does, because we'd have a hell of a lot fewer issues.
I don't get why people go through so much mental gymnastics to avoid the fact that using lower prices to effectively subsidize their shitty product is the anti-competitive behavior.
They simply don't want to compete, they want to force the majority of people that can't spend a lot on tokens to use their inferior product.
Why build a better product if you control the cost?
They don't care. This is clearly someone looking to score points and impress with the AI magic trick.
The best part is that they can say the AI will get some stuff wrong, they knew that, and it's not their fault when it breaks. Or more likely, it'll break in subtle ways, nobody will ever notice and the consequences won't be traced back to this. YOLO!
Take Claude Code itself. It's got access to an endless amount of tokens and many (hopefully smart) engineers working on it and they can't build a fucking TUI with it.
So, my answer would be no. Tech debt shows up even if every single change made the right decision, and this kind of holistic view of a project is something AIs absolutely suck at. They can't keep all that context in their heads, so they're forever stuck in a local maximum. That has been my experience at least. Maybe it'll get better... any day now!
Companies don't support Linux because it's not widespread enough for the benefits to outweigh the costs. They don't give a rat's ass about the market's resentfulness or lack thereof. The Linux market basically wasn't a real market before because its market share was simply too small.
There are plenty of products made for resentful markets and as long as they keep being profitable they don't care.
If it ever gets there, then anyone can use it and there's no "skill" to be learned at all.
Either it will continue to be this very flawed non-deterministic tool that requires a lot of effort to get useful code out of it, or it will be so good it'll just work.
That's why I'm not gonna heavily invest my time into it.
Good for you. Others like myself find the tools incredibly useful. I am able to knock out code at a higher cadence and it’s meeting a standard of quality our team finds acceptable.
I’m sorry but are you being intentionally obtuse? You can’t think of a single downside to running two systems on your machine instead of one? If you lack imagination to that level I can’t help you dude
Of course running one system is better. Use Linux and stop being miserable ;)
Still, you haven't said what these extremely horrible cons of running two systems are. For me they're so small it's not even comparable to submitting yourself to a shit Windows system just to avoid the "hassle" of having two systems.
I used Windows only for gaming and Linux for everything else. Now I'm fed up with games that choose to block Linux out, so I no longer need the two systems and couldn't be happier.
I’m not miserable in the slightest, just endlessly bewildered that people in the Linux community continue to have attitudes like yours, despite that being literally the main reason people are put off Linux. It’s self-defeating.
> That's why some developers choose not to enable it
That's an excuse. It's mostly incompetence or more often than not the company doesn't think it's worth the effort. With more Linux users, the balance will eventually shift from "fuck them" to "we have to figure out a way".
Now if you do care about quality, having a committed, technical audience giving quality bug reports is a godsend. But that's not where we are, in a decade rife with layoffs and rampant outsourcing across the industry.
You’re posting an argument from 6 years ago. Not including Steam OS, the Linux market share has almost quadrupled since then (to ~3.2%); including Steam OS, it’s up to ~24%. And continues to trend upwards.
You also don’t need to arbitrarily support Linux. It’s not difficult to say “this has only been tested on Fedora, Ubuntu, POP, and SteamOS; other distributions are unsupported officially”.
Most game studios pay someone else to make the anti-cheats and many already have Linux versions that the studios choose to not enable.
Besides, if your anti-cheat only ever looks at the system level, it'll easily be bypassed by hardware cheats. At some point I think anti-cheats will have to "know" the game to be able to detect anomalies. It's the only way to effectively stop many categories of cheats.
Those Linux versions are generally not kernel-level. Do you know of any that are?
And yes, of course it's not fool-proof. It's not supposed to be. It's about probabilities: for a given online game, what is the chance that I end up in a match with someone who is obviously cheating and using that to ruin the game for everyone else? The harder you make cheating, the lower that is.
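To put that "probabilities" point in concrete terms, here's a toy sketch (the cheater rates are made-up numbers, purely illustrative): if a fraction p of the player base cheats, the chance that a match of n players contains at least one cheater is 1 - (1 - p)^n, so making cheating harder directly shrinks the share of spoiled matches.

```python
# Made-up numbers, just to illustrate the "probabilities" argument:
# if a fraction p of players cheat, the chance that a match of n
# players contains at least one cheater is 1 - (1 - p)**n.

def chance_of_cheater(p: float, n: int) -> float:
    """Probability that at least one of n players is a cheater."""
    return 1 - (1 - p) ** n

# Halving the cheater rate roughly halves the spoiled-match rate:
print(chance_of_cheater(0.01, 10))   # ~9.6% of 10-player matches
print(chance_of_cheater(0.005, 10))  # ~4.9%
```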
Still has an anti-cheat, they just bothered to allow Linux support.
Companies don't do this out of laziness/incompetence, but even some large anti-cheats work on Linux and some games simply choose to not enable it (cough, Tarkov, cough). Their problem, I'm no longer gonna play games that don't work on Linux.
Funnily enough the best FPS game ever (Counter-Strike) runs absolutely fine on Linux. Thanks Valve!
As far as I know, all the anti-cheat options for Linux are not kernel-level, which means that they are drastically less effective at their intended purpose. That's why so many competitive multiplayer games choose to not enable it.
Yet this is not reproducible. This is the whole issue with LLMs: they are random.
You can't trust that it'll do a good job on all reports, so you'll have to manually review the LLM's reports anyway, or hope that real issues weren't rejected (false negatives) and fake ones accepted (false positives).
This is what I've seen most LLM proponents do: they gloss over the issues and tell everyone it's all fine. Who cares about the details?
They don't review the gigantic pile of slop code/answers/results they generate. They skim and say YOLO. Worked for my narrow set of anecdotal tests, so it must work for everything!
IIRC DOGE did something like this to analyze government jobs that were needed or not and then fired people based on that. Guess how good the result was?
This is a very similar scenario: make some judgement call based on a small set of data. It absolutely sucks at it. And I'm not even going to get into the issue of liability which is another can of worms.
Is it not reproducible? Someone up thread reproduced it and expanded on it. It worked for me the first time I prompted. Did you try it, or are you just guessing that it's not reproducible because that's what you already think?
I'm not talking about completely replacing humans, the goal of this exercise was demonstrating how to use an LLM to filter out garbage. Low quality semi-anonymous reports don't deserve a whole lot of accuracy and being conservative and rejecting most reports even when you throw out legitimate ones is fine.
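A toy sketch of that conservative-triage tradeoff (the reports and scores below are invented for illustration; in practice an LLM would be assigning the scores): raising the acceptance bar throws out more garbage, at the cost of losing some legitimate reports too.

```python
# Toy sketch of "be conservative and reject most reports": each
# report gets a plausibility score (made up here; in practice an
# LLM would assign it) and only reports above a threshold survive.

reports = [
    ("legit: buffer overflow with PoC", 0.9),
    ("legit: auth bypass, vague repro", 0.55),
    ("spam: 'urgent hack fix $$$'", 0.1),
    ("spam: LLM-generated non-issue", 0.3),
]

def triage(reports, threshold):
    """Keep only reports whose score clears the threshold."""
    return [text for text, score in reports if score >= threshold]

print(triage(reports, 0.5))  # both legit reports survive, spam dropped
print(triage(reports, 0.8))  # conservative: a legit report is lost too
```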
It seems like, regardless of the evidence presented, your prejudices will lead you to the same conclusions, so what's the point of discussing anything? I looked for, found, and shared evidence; you're sharing your opinion.
>IIRC DOGE did something like this to analyze government jobs that were needed or not and then fired people based on that. Guess how good the result was?
I'm talking about filtering spammy communication channels, that has nothing like the care required in making employment decisions.
Your comment is plainly just bad faith and prejudice.
> Is it not reproducible? Someone up thread reproduced it and expanded on it. It worked for me the first time I prompted. Did you try it, or are you just guessing that it's not reproducible because that's what you already think?
I assumed you knew how LLMs work. They are random by nature, not "because I'm guessing". There's a reason why, if you ask an LLM the same exact prompt hundreds of times, you'll get hundreds of different answers.
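A minimal sketch of where that randomness comes from, using a hard-coded toy distribution in place of a real model's logits: decoding with temperature > 0 *samples* from a probability distribution over next tokens, so repeated runs of the identical prompt diverge.

```python
# Toy illustration of sampled decoding. The logits are made up;
# a real model computes them from the prompt, but the sampling
# step below is why the same prompt yields different answers.
import math
import random

random.seed(0)  # seeded only so this demo is repeatable

logits = {"fine": 2.0, "broken": 1.5, "unclear": 0.5}  # made-up logits

def sample_token(logits, temperature=1.0):
    """Sample one token from softmax(logits / temperature)."""
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # floating-point edge case: return the last token

# The same "prompt", run 1000 times, gives a mix of answers:
runs = [sample_token(logits) for _ in range(1000)]
print({t: runs.count(t) for t in logits})
```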
>I looked for, found, and shared evidence
Anecdotal evidence. Studies have shown how unreliable LLMs are exactly because they are not deterministic. Again, it's a fact, not an opinion.
>I'm talking about filtering spammy communication channels
So if we make tons of mistakes there, who cares, right?
I only used this as an example because it's one of the few very public uses of LLMs to make judgement calls where people accepted it as true and faced consequences.
I'm sure there are plenty more people getting screwed over by similar mistakes, but folks generally aren't stupid enough to say so publicly. Maybe Salesforce's huge mistake qualifies too? Incidentally it also involved people's jobs.
Regardless, the point stands: they are unreliable.
Want to trust LLMs blindly for your weekend project? Great! The only potential victim for its mistakes is you.
For anything serious like a huge open source project? That's irresponsible.