One of the things I love about computing and computer science is how the wide variety of tools available, built over multiple generations, provides people with the leverage to bring their highly complex ideas to life. However they work best, they can use those tools to keep their minds focused on larger goals and broader context, without yak-shaving every hole punched in a punchcard.
You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.
> who will be able to bring even more of their ideas to life than they would have beforehand.
This is the core of what's changing - the most important people around me used to be "people who know how".
We're slowly shifting to a world where knowing what you want beats know-how.
People without any know-how are able to experiment because they know what they want, and can keep saying "No, that's not what I want" to a system that will listen to them without complaining while supplying the know-how.
From my perspective, my decades of accumulated know-how have been rendered entirely pointless, wiped away in the last 2 years.
Adapt or fall behind, there's no way to ignore AI and hope it passes by without a ripple.
I've found that if you're a novice coder you don't know what to ask for.
Your decades of experience are probably a bit like mine: you sense the cause of a problem in an almost psychic way, based on knowledge of the existing codebase, the person who wrote the last update, the "smell" of the problem. I've walked into a major incident, looked at a single alert on a dashboard, and almost with a smile on my face identified the root cause immediately. It's decades of knowledge that allow you to know what to ask for.
Same with vibe coding: I've been having tremendous fun with it but occasionally a totally weird failure will occur, something that "couldn't happen" if you wrote it yourself. Like, you realise that the AI didn't refactor your code when you added the last feature, so it added "some tiny thing" to three separate functions. Then over several refactors it only updated one of those "tiny things", so now that specific feature only breaks in very specific cases.
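A hypothetical sketch of that failure mode (all names invented for illustration): the same "tiny" discount rule was pasted into three functions instead of being factored out, and a later refactor updated only one copy, so the other two silently diverge.

```python
# Hypothetical illustration of copy-paste drift: one rule, three copies,
# only one of which got updated in the last "refactor".

DISCOUNT = 0.15  # the current rule: 15% off


def price_for_cart(subtotal: float) -> float:
    # copy 1: updated in the last refactor
    return subtotal * (1 - DISCOUNT)


def price_for_invoice(subtotal: float) -> float:
    # copy 2: still hard-codes the old 10% discount -- silently stale
    return subtotal * 0.90


def price_for_receipt(subtotal: float) -> float:
    # copy 3: also stale, so the bug only shows up on invoices and receipts
    return subtotal * 0.90
```

Each function looks fine in isolation; the bug only appears when you compare paths, which is exactly why it "couldn't happen" to someone who had written the refactor by hand.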
Or let's say you want to get it to write something that AI seems to have problems with. For example, assembling and managing a NAS array using mdadm. I've been messing with that recently and Google Gemini has lost the whole array twice, and utterly failed to figure out how to rename an array device. It's a hoot. Just to see if it would ever figure it out I kept going. Pages and pages of back-and-forth, repeating the same mistakes, missing the obvious. Maybe it's been trained on 10 years of Muppets online giving terrible advice on how to manage mdadm?
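For what it's worth, the rename that tripped the model up is done at assembly time: you stop the array and reassemble it with `--update=name`. The device and member names below are placeholders, and this is a sketch based on the mdadm man page rather than a tested recipe, so check it against your own setup before running anything.

```shell
# Sketch only -- /dev/md127, /dev/md0 and the member partitions are
# placeholders. Renaming an md array means rewriting the name stored in
# the superblock, which mdadm only does while assembling.

sudo mdadm --stop /dev/md127   # the array must be stopped first

# Reassemble under the new name, updating the superblock as we go.
sudo mdadm --assemble /dev/md0 --name=0 --update=name /dev/sda1 /dev/sdb1

# Persist the new name so it survives a reboot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u       # Debian/Ubuntu; other distros differ
```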
As a counterexample, there's someone who vibe-coded a subscription adult website complete with payments and everything, with zero computer science experience, while living in an RV. I can't find the link now; last I saw on X, she was complaining about being blocked by trad finance after Bill Ackman's campaign.
So yeah, it's absolutely possible. From personal experience, I was able to implement a basic scan and go application complete with payment integrations without going through a single piece of documentation.
As long as you're ready to wrestle with an AI in frustration for a bit, you can make something monetizable once you've identified an underserved market.
Gemini CLI is like a fresh self-taught coder on massive amounts of cocaine and adderall.
Just last night I asked it to create a project plan as markdown files. It started writing to disk and I tabbed out to watch Squid Game.
When I came back it was 80% through implementation and was trying to fix some weird python async issue in a loop over and over again. I never told it to implement anything. But it will always - ALWAYS - rush into implementation unless you tell it not to with ALL CAPS.
Google really needs to add a Claude-style explicit plan mode for it…
I interrupted it, gave the error to Claude Code, which fixed it in one go.
> I've found that if you're a novice coder you don't know what to ask for.
And this is why I think I am productive with LLMs, and why people who know nothing about the underlying concepts are not going to be as productive.
I’ve had the … pleasure of working with outsourced talent for 20 years, and I think it gives me an edge with LLMs.
Neither will ever push back or say no; both will rush headlong into implementing something. They will use 57 libraries when the stdlib will do, and build convoluted hierarchies when a simple functional program is enough.
But both can produce very good results if you have predetermined limits, acceptance criteria and a proper plan and spec.
Then you iterate in “sprints” and check the results after each one and challenge their output.
What you are talking about here is accidental vs essential complexity as described by Brooks in the 80s.
Your claim that LLMs do away entirely with accidental complexity and manage essential complexity for you is not supported by reality. Adding these tools to workflows adds a tonne of accidental complexity, and they still cannot shield you from all the essential complexity because they are often wrong.
There has been endless noise over semantics, but the plain fact is that LLMs very often render output that is incongruent with reality. And now we are trying to remedy that with what amounts to expert systems.
There is no silver bullet. You have to painstakingly get rid of accidental complexity and tackle essential complexity in order to build complex and useful systems.
I don't understand what's so abhorrent about this that people invent layers and layers of accidental complexity to avoid facing simple facts. We need to understand computers and domains with high accuracy to build any useful software; that's how it's always been and how it's always gonna be.
> From my perspective, my decades of accumulating know-how is entirely pointless and wiped away in the last 2 years.
I find this very difficult to believe, but I have no idea what you do. I'm a generalist and this isn't even close to true for me with state-of-the-art LLMs.
Nothing has served me better over the past few decades than accumulating ever more detailed and accurate knowledge of what it is exactly that computers do under the hood.
All the layers of abstraction are well intended and often useful. But they by no means eliminate the need to understand in detail the hard facts underlying computer engineering if you want to build performant and reliable software.
You can trade that off at different rates for different circumstances but the notion that you can do away entirely with the need to know these details has never been true.
More people being enabled to think less about these details requires more expertise to exist to support them, never less.
> All the layers of abstraction are well intended and often useful. But they by no means eliminate the need to understand in detail the hard facts underlying computer engineering if you want to build performant and reliable software.
Agreed. A good abstraction usually doesn't obviate the need for understanding what's going on behind the scenes, it just means that I don't have to think about it all the time.
As a more extreme example, I don't usually think about the fact that the Java (or Kotlin, Scala, ...) compiler generates bytecode that runs on a virtual machine, which interprets it and compiles hot paths to machine code on the fly. But sometimes it's useful to remember (e.g. when dealing with instrumentation).
Other examples are things like databases, concurrency constructs, etc. There it's usually good to know the properties they guarantee, and one way to reason through these is to have some understanding of how they're implemented under the hood.
I agree at least a little bit, but let’s be honest: the history of software engineering is a history of higher and higher levels of abstraction wrapping the previous levels.
So part of this is just another abstraction. But another part, which I agree with, is that abstracting away how you learn shit is not good. For me, I use AI in a way that helps me learn more and accomplish more. I deliberately don’t cede my thinking process away, and I deliberately try to add more polish and quality, since AI helps me do that in less time. I don’t feel like my know-how is useless; instead, I’m seeing how valuable it is to know shit when a junior teammate is opening PRs with critical mistakes because they don’t know any better (and aren’t trying to learn).
I like to read books on computers from the 70s and 80s. No trite analogies, just hard facts and diagrams. And explanations that start from scratch, requiring no previous knowledge - because there was none.
The thing about these layers of abstraction is that they add load and thus increase the demand for people and teams and organizations that command these lower levels. The idea that, on a systemic level, higher abstraction levels can diminish the importance, size, complexity or expertise needed overall or even keep it at current levels is entirely misguided.
As we add load on top, the base has to become stronger and becomes more important, not less.
This is a good point. However, the base is relatively narrow; there are many, many more people working in the popular frameworks and languages like e.g. React or Java or what have you than there are people who work on the fundamentals and have that low level understanding. And I'm afraid people at that level are going to become rare.
It's not hopeless though. It feels like in the past decade, some of the smartest minds working at the lower levels of abstraction have come up with great new technologies: new programming languages that push the envelope of performance and security while maintaining a good developer experience, great advancements in microchip technology, that kinda thing.
It's important to maintain access to universities and higher education, where people who have the interest and mindset can learn and become part of this base that powers the greater software market.
I don't know. People working on web frameworks might be more visible, and more numerous, than people working on more low-level stuff. But I don't think the latter is rarefied atmosphere at all. There are several times more people working on those base levels today than 10 or 20 years ago, and I expect the trend to continue.
Sure, they will enable even more people proportionally to not think about those low level systems. But my argument is that the need for that low level expertise has always expanded and will keep expanding.
Automation entails tonnes of complexity that need to be managed. It doesn't just evaporate. More automatic systems will demand more people and teams to learn low level systems in great detail and at high levels of accuracy.