>I will never use Homebrew again because I'm still sore that they dropped support for a Mac OS version that I was still using and couldn't upgrade because Apple didn't support my hardware anymore.
How old was it? With macOS, "running an old version" is not really a viable or advisable path beyond a certain point. It might be something people want to do, and it might be a great option to have, but it's not very workable, nor supported by Apple and the general ecosystem.
>Any decent project should have a way to install without Homebrew. It's really not necessary.
We don't install homebrew because it's necessary, but because it's convenient. No way in hell I'm gonna install 50+ programs I use one by one using the project's own installers.
Besides, if "Homebrew dropped support" is an inconvenience, "manually look for dozens of individual installers or binaries, make sure dependencies work well together, and update everything yourself manually again" is even more of an inconvenience. Not to mention many projects drop support for macOS versions on their own all the time, or offer no binaries or installers at all.
A high trust society reduces the incidence of scams/becoming victims to scams by reducing the number of scammers and increasing the number of honest people and honest behavior. That's what a high trust society is and does by definition.
If you want "mechanics", that would be an increased focus on community, with positive community best-behavior incentives (reputation, pride in work, solidarity, rewarding good behavior) and negative ones (shame, ostracism, punishing bad actors), social cohesion, and an emphasis on duty and morality, while reducing cynicism, and selfish individualism. This includes the appropriate role models and media/entertainment landscape.
Agreed. We (meaning the United States) used to have this for the most part. It doesn't scale to mediate all human interactions with authentication/authorization.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like the output delivered by the guys who left it all to Claude and even run 5 instances in parallel.
Ok, fine, if you're gonna be nitpicky, it's a calculation to an electrical engineer.
> To Software Engineering coding is an essential part.
Not really. You can be a Principal Engineer/Architect/Maintainer and do very little coding, but lots of code review and testing.
The point is, if you're just banging out code, then you're a software developer. If you use the engineering design process (research, requirements, design document, feasibility, conceptual design, prototype, detailed design) to solve problems using software, then you're a software engineer.
>Funnily enough, I had Linus in mind as a sort of no-coding Principal Engineer/Architect/Maintainer. He famously said he doesn't code anymore [1].
That's after he coded his ass off for 20 years. And he's doing reviewing and merging, which is not some lofty "software architect" function but deals directly with written code.
It makes me sad that there are so many of these heavily-upvoted posts now that are hand-wavey about AI and are themselves AI-generated. It benefits everyone involved except people like me who are trying to cut through the noise.
>After re-reading the post once again, because I honestly thought I was missing something obvious that would make the whole thing make sense, I started to wonder if the author actually understands the scope of a computer language.
The problem is you restrict the scope of a computer language to the familiar mechanisms and artifacts (parsers, compilers, formalized syntax, etc.), instead of taking it to be "something we instruct the computer with, so that it does what we want".
>How does this even work? There is no universe I can imagine where a natural language can be universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it.
Doesn't matter. Who said it needs to be "universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it"?
It's enough that it can be used to instruct computers more succinctly and at a higher level of abstraction, and that a program will come out at the end which is more or less (doesn't have to be exact) what we wanted.
Doesn't have to be "a clear definition"; a rough definition within some quite lax boundaries is fine.
You can just say to Claude, for example, "Make me an app that accepts daily weight measurements and plots them in a graph" and it will make one. Tell it to use that framework or this pattern, and it will do so too. Ask for more features as you go, in similarly vague language. At some point your project is done.
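To make the example concrete, here's a minimal sketch of the kind of program such a vague prompt might produce. Everything in it (the function names, the kg units, the 60 kg chart baseline) is my own illustration, not actual Claude output:

```python
# Hypothetical result of "make me an app that accepts daily weight
# measurements and plots them in a graph" -- illustrative sketch only.

weights = {}  # ISO date string -> weight in kg

def record(date, kg):
    """Accept a daily weight measurement."""
    weights[date] = kg

def plot():
    """Render the measurements as a crude text bar chart."""
    lines = []
    for date, kg in sorted(weights.items()):
        bar = "#" * max(0, int(kg - 60))  # bar length relative to a 60 kg baseline
        lines.append(f"{date} {kg:5.1f} {bar}")
    return "\n".join(lines)

record("2024-01-01", 72.5)
record("2024-01-02", 72.1)
print(plot())
```

The point being: the prompt never pinned down units, storage, or chart style, and something workable came out anyway.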
Even before AI, the vast majority of software was not written with any "clear definition" to begin with. There's some rough architecture and idea, and people code as they go, and often have to clarify or rebuild things to get them as they want, or discover they want something slightly different, or that the initial design had some issues and needs changing.
This is the most handwaving per paragraph I've ever seen.
I think a fair summarization of your point is "LLM generated programs work well enough often enough to not need more constraints or validation than natural language", whatever that means.
If you take that as a true thing then sure why would you go deeper (eg, I never look at the compiled bytecode my high level languages produce for this exact reason - I'm extremely confident that translation is right to the point of not thinking about it anymore).
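The bytecode point is easy to make concrete: Python ships a disassembler in the standard library, and most of us trust the translation enough to (almost) never look at what it shows:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode CPython compiled this function into -- the layer of
# output we normally never inspect because we trust the translation.
dis.dis(add)
```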
Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point. Many many folks have lots of first hand experience watching it not be true, even when people are confidently claiming otherwise.
I think if you want to be convincing in this thread you need to go back one step and explain why the LLM code is "good enough" and how you determined that. Otherwise it's just two sides talking totally past each other.
>This is the most handwaving per paragraph I've ever seen.
Yes: "LLM generated programs work well enough often enough to not need more constraints or validation than natural language" is a fair summarization of my point.
Not sure what the purpose of the "whatever that means" you added is. It's clear what it means. Though, casual language seems to be a problem for you. Do you only ever discuss in formally verified proofs? If so, that's a you problem, not an us or LLM problem :)
>Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point.
I don't know who those "most people" are. Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.
(I'm not saying it's good or good enough as a quality assessment. In fact, I don't particularly like it. But I am saying it's "good enough" as in, people will deem it good enough to be shipped).
> I don't know who those "most people" are. Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.
This is definitely not true. Outside of the US, very few devs can afford to pay for the compute and/or services. And in a couple of years, I believe, devs in the US will be in for a rude awakening when the current prices skyrocket.
The "whatever that means" isn't a judgement jab at your point, merely acknowledging the hand waving of my own with "good enough".
I hope this comment thread helps with your cheeky jab that I might have a problem understanding or using casual language.
I'm not sure if it's moving the goalpost or not to back away from a strong claim that LLMs are at the "good enough" (whatever that means!) level now and instead fall back to "some devs will just ship it and therefore that's good enough, by definition".
Regardless, I think we agree that, if LLMs are "good enough" in this way then we can think a lot less about code and logic and instead focus on prompts and feature requests.
I just don't think we agree on what "good enough" is, if current LLMs produce it with less effort than alternatives, and if most devs already believe the LLM generated code is good enough for that.
I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.
>I just don't think we agree on what "good enough" is, if current LLMs produce it with less effort than alternatives, and if most devs already believe the LLM generated code is good enough for that.
Don't need to consider what they think, one can just see their "revealed preferences", what they actually do. Which for the most part is adopting agents.
>I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.
That's true for many devs one might have on their team as well. Or even for oneself. So we review, we add tests, and so on. We already do that when the programming language is a "real" programming language; it doesn't have to change when it's natural language to an agent. What I'm getting at is that this is not a showstopper for the point of TFA.
AI cope regarding "you can still carefully design, AI won't take away your creative control or care for the craft" is the new "there's no problem with C's safety and design, devs just need to pay more attention while coding", or the "I'm not an alcoholic, I can quit anytime" of 2026...
>Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself.
That's like saying you can understand humans by watching some physics or biology videos.
Except it's not. Traditional algorithms are well understood because they're deterministic formulas. We know what the output is if we know the input. The surprises that happen with traditional algorithms are when they're applied in non-traditional scenarios as an experiment.
Whereas with LLMs, we get surprised even when using them in an expected way. This is why so much research happens investigating how these models work even after they've been released to the public. And it's also why prompt engineering can feel like black magic.
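The contrast can be shown with a toy sketch. The sampling function below is my own stand-in for LLM decoding, not how any real model works (real decoders operate over logits and huge vocabularies), but it captures the difference in kind:

```python
import random

def classic_sort(xs):
    # Traditional algorithm: same input, same output, every time.
    return sorted(xs)

def sample_next_token(probs, temperature=1.0):
    # Toy stand-in for LLM decoding (illustrative only): the output is
    # drawn from a probability distribution, so repeated calls on the
    # same input can differ.
    tokens = list(probs)
    weights = [probs[t] ** (1.0 / temperature) for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# The deterministic path never surprises us:
assert classic_sort([3, 1, 2]) == classic_sort([3, 1, 2])

# The sampled path can return different tokens for the same prompt:
probs = {"cat": 0.6, "dog": 0.3, "eel": 0.1}
print({sample_next_token(probs) for _ in range(100)})
```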
I think the historical record pushes back pretty strongly on the idea that determinism in engineering is new. Early computing basically depended on it. Take the Apollo guidance software in the 60s. Those engineers absolutely could not afford "surprising" runtime behavior. They designed systems where the same inputs reliably produced the same outputs because human lives depended on it.
That doesn't mean complex systems never behaved unexpectedly, but the engineering goal was explicit determinism wherever possible: predictable execution, bounded failure modes, reproducible debugging. That tradition carried through operating systems, compilers, finance software, avionics, etc.
What is newer is our comfort with probabilistic or emergent systems, especially in AI/ML. LLMs are deterministic mathematically, but in practice they behave probabilistically from a user perspective, which makes them feel different from classical algorithms.
So I'd frame it less as "determinism is new" and more as "we're now building more systems where strict determinism isn't always the primary goal."
Going back to the original point, getting educated on LLMs will help you demystify some of the non-determinism but as I mentioned in a previous comment, even the people who literally built the LLMs get surprised by the behavior of their own software.
That’s some epic goal post shifting going on there!!
We’re talking about software algorithms. Chemical and biomedical engineering are entirely different fields. As are psychology, gardening, and morris dancing
Yeah. Which any normal person would take to mean “all technologies in software engineering” because talking about any other unrelated field would just be silly.
We know why they work, but not how. SotA models are an empirical goldmine; we are learning a lot about how information and intelligence organize themselves under various constraints. This is why new papers are published every single day further exploring the capabilities and inner workings of these models.
Ok, but the art and science of understanding what we're even looking at is actively being developed. What I said stands, we are still learning the how. Things like circuits, dependencies, grokking, etc.