> However, it is important to ask if you want to stop investing in your own skills because of a speculative prediction made by an AI researcher or tech CEO.
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development, 2nd Edition, Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
You can choose to grow in both areas.
[0]: https://kerrick.blog/articles/2025/kerricks-wager/
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
[2]: https://news.ycombinator.com/item?id=46702093
[3]: https://news.ycombinator.com/item?id=46719500
I personally found out that knowing how to use AI coding assistants productively is a skill like any other: a) it requires a significant investment of time, b) it can be quite rewarding to learn, just like any other skill, c) it might be useful now or in the future, and d) it doesn't negate the usefulness of any skills acquired in the past, nor does it diminish the usefulness of learning new skills in the future.
On using AI assistants: everything is moving so fast that I constantly feel like I'm doing this wrong. Is the answer simply "dedicate time to experimenting"? I keep hearing about "spec-driven design" and "Ralph"; maybe I should learn those? Genuine thoughts and questions, btw.
More specifically regarding spec-driven development:
There's a good reason that most successful examples from tools like OpenSpec are to-do apps and the like. As soon as the project grows to a 'relevant' size or complexity, maintaining specs is just as hard as whatever any other methodology offers. Also, from my brief attempts: similar to human-based coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at all the implicit things much more than humans do. So you'll see more flip-flopping on things you did not specify, and if you nail everything down hard, the specs get unwieldy - large and overly detailed.
> if you nail everything down hard, the specs get unwieldy - large and overly detailed
That's a rather short-sighted way of putting it. There's no way the spec is anywhere near as unwieldy as the actual code, and the more detail, the better. If it gets too large, work on splitting a self-contained subset of it into a separate document.
Everybody feels like this, and I think nobody stays ahead of the curve for a prolonged time. There's just too many wrinkles.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while, and come back after a few weeks. You notice that even though the world didn't stop spinning, you didn't get whiplash from every rotation.
I don't think Ralph is worthwhile; at least, the few times I've tried to set it up, I spent more time fighting to get the configuration right than if I had simply run the prompt. Coworkers had similar experiences. It's better to set a good allowlist for Claude.
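For what it's worth, here's a rough sketch of the kind of allowlist I mean, in a project's .claude/settings.json (the specific tool patterns are just placeholders, not a recommended set - check the current Claude Code docs for the exact rule syntax):

    {
      "permissions": {
        "allow": [
          "Bash(npm run test:*)",
          "Bash(git diff:*)",
          "Read(src/**)",
          "Edit(src/**)"
        ],
        "deny": [
          "Bash(rm:*)"
        ]
      }
    }

With something like this in place, the agent can run tests and read/edit source without asking for approval every time, while anything outside the list still requires confirmation.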
I think you should find what works for you, and everything else is kind of noise.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me build mental models for picking up other paradigms later, similar to how learning Java doesn't actually prevent you from, say, picking up Python or JavaScript.
The addictive nature of the technology persists, though. So even if we say certain skills are required to use it, it must also come with a warning label and be avoided by people with addictive personalities, substance-abuse issues, and so on.
Agreed, my experience and code quality with Claude Code and agentic workflows have improved dramatically since investing in learning how to properly use these tools. Ralph Wiggum-based approaches and HumanLayer's agents/commands (in their .claude/) have boosted my productivity the most. https://github.com/snwfdhmp/awesome-ralph and https://github.com/humanlayer
> knowing how to use AI coding assistants productively is a skill like any other
No, it's different from other skills in several ways.
For one, the difficulty of this skill is largely overstated. All it requires is basic natural-language reading and writing, the ability to organize work and issue clear instructions, and some relatively simple technical knowledge about managing context effectively, knowing which tool to use for which task, and other minor details. This pales in comparison with the difficulty of learning a programming language and classical programming. After all, the entire point of these tools is to lower the skill required for tasks that were previously inaccessible to many people. The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I would argue that the results depend far more on the user's familiarity with the domain than their skill level. Domain experts know how to ask the right questions, provide useful guidance, and can tell when the output is of poor quality or inaccurate. No amount of technical expertise will help you make these judgments if you're not familiar with the domain to begin with, which can only lead to poor results.
> might be useful now or in the future
How will this skill be useful in the future? Isn't the goal of the companies producing these tools to make them accessible to as many people as possible? If the technology continues to improve, won't it become easier to use, and be able to produce better output with less guidance?
It's amusing to me that people think this technology is another layer of abstraction, and that they can focus on "important" things while the machine works on the tedious details. Don't you see that this is simply a transition period, and that whatever work you're doing now could eventually be done better/faster/cheaper by the same technology? The goal is to replace all cognitive work. Just because this is not entirely possible today doesn't mean that it won't be tomorrow.
I'm of the opinion that this goal is unachievable with the current tech generation, and that the bubble will burst soon unless another breakthrough is reached. In the meantime, your own skills will continue to atrophy the more you rely on this tech, instead of on your own intellect.
I'm doing a similar thing. Recently, I got $100 to spend on books. The first two books I got were A Philosophy of Software Design and Designing Data-Intensive Applications, because I asked myself: out of all the technical and software-engineering books I might get, given that agentic coding works quite well now, which are the highest-impact ones?
And it seemed pretty clear to me that the answer would be books about the sort of evergreen software engineering and architecture concepts that you still need a human to design and think through carefully today, because LLMs don't have the judgment or the high-level view for that. Not books about the specific API surface area or syntax of particular frameworks, libraries, or languages, which LLMs, IDE completion, and online documentation mostly handle.
Especially since well-designed software systems, with deep and narrow module interfaces, maintainable and scalable architectures, well-chosen underlying technologies, clear data flow, and so on, can vastly increase the effectiveness of an AI coding agent, because they mean it needs less context to understand things, can reason more locally, etc.
To be clear, this is not about not understanding the paradigms, capabilities, or affordances of the tech stack you choose, either! The next books I plan to get are things like Modern Operating Systems, Data-Oriented Design, Communicating Sequential Processes, and The Go Programming Language, because low-level concepts, too, are things you can direct an LLM to optimize (if you give it the algorithm) but which it won't do well on its own, and they are generally also evergreen and not subsumed in the "platform minutiae" described above.
Likewise, stretching your brain with new paradigms (actor-oriented, Smalltalk OOP, Haskell FP, Clojure FP, Lisp, etc.) gives you new ways to conceptualize and express your algorithms and architectures, and to judge and refine the code your LLM produces. And ideas like BDD, PBT, and lightweight formal methods (like model checking) all provide direct tools for modeling your domain, specifying behavior, and testing it far better, which then lets you use agentic coding tools with more safety and confidence (and gives them a better feedback loop). At the limit, it's almost a way to program declaratively in executable specifications, convert those to code via LLM, and then test the latter against the former!
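To make that last point a bit more concrete, here's roughly what a tiny "executable specification" looks like as a property-based test, using Python and Hypothesis (the function under test is a made-up example, not from any real project):

    # Property-based tests state what must hold for *all* inputs;
    # the framework then searches for counterexamples.
    from hypothesis import given, strategies as st

    def normalize_tags(tags: list[str]) -> list[str]:
        """Hypothetical function under test: trim, lowercase, dedupe."""
        return sorted({t.strip().lower() for t in tags if t.strip()})

    @given(st.lists(st.text()))
    def test_normalize_is_idempotent(tags):
        # Spec: normalizing an already-normalized list changes nothing.
        once = normalize_tags(tags)
        assert normalize_tags(once) == once

    @given(st.lists(st.text()))
    def test_no_duplicates_or_blanks(tags):
        # Spec: output contains no duplicates and no blank entries.
        result = normalize_tags(tags)
        assert len(result) == len(set(result))
        assert all(result)

The properties ("idempotent", "no duplicates or blanks") are the part a human still has to think through; an agent can then be pointed at them and told to make the implementation pass.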
Agreed. I find most design patterns end up as a mess eventually, at least when followed religiously. DDD being one of the big offenders. They all seem to converge on the same type of "over engineered spaghetti" that LOOKS well factored at a glance, but is incredibly hard to understand or debug in practice.
DDD is quite nice as a philosophy: group state by behavioral similarity and keep mutation and query functions close together, model data structures from domain concepts and not the inverse, pay attention to domain boundaries (an entity may be read-only in one domain and have fewer state transitions than in another).
But it should be a philosophy, not a directive. There are always tradeoffs to be made, and DDD may be the one to be sacrificed in order to get things done.
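A minimal sketch of what that philosophy can look like in code, with a made-up domain (don't read too much into the example itself):

    from dataclasses import dataclass

    # Model the domain concept directly, and keep the state together with
    # the behavior that mutates and queries it, rather than spreading it
    # across dicts and helper modules.
    @dataclass
    class Subscription:
        plan: str
        status: str = "trial"
        renewals: int = 0

        def renew(self) -> None:
            # State transition lives next to the state it changes.
            if self.status == "cancelled":
                raise ValueError("cannot renew a cancelled subscription")
            self.status = "active"
            self.renewals += 1

        def is_billable(self) -> bool:
            # Query kept close to the data it reads.
            return self.status == "active"

    # In a reporting context the same concept might be exposed read-only,
    # with fewer allowed transitions: that's the bounded-context idea.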
I was going to ask the same thing. I'm self-taught, but I've mainly gone the other way, more interested in learning about lower-level things. Bang for buck, I think I might have been better off reading DDD-type books.
It presents the main concepts like a good lecture, and it's a more modern take than the blue book. Then you can read the blue book.
But DDD should be taken as a philosophy rather than a pattern. Trying to follow it religiously tends to result in good software, but it's very hard to nail the domain well. If refactoring is no longer an option, you will be stuck with a suboptimal system. It's more something you want to converge toward in the long term than something you get right early. Always start with a simpler design.
Oh, absolutely. It feels like a worthwhile architectural framing to understand and draw from as appropriate. For me, the end goal is being able to think more deeply about my domains and how to model them.
I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
This is basically the "I'm not evil, it's just a normal job" excuse. As with any moral issue, there will be disagreement about where to draw the line, but yes, if you do something that ends up supporting $bad_thing, then there is an ethical consideration you need to make. And if your answer is always that it's OK for the things you want to do, then you are probably not being very honest with yourself.
Your response assumes the tool is a $bad_thing rather than one specific use of it. In my analogy, that would be saying that "there is an ethical consideration you need to make" before using (or buying) a blender.
I've been working to overcome this exact problem. I believe it's fully tractable. With proper deterministic tooling, the right words in context to anchor latent space, and a pilot with the software design skills to do it themselves, AI can help the pilot write and iterate upon properly-designed code faster than typing and using a traditional IDE. And along the way, it serves as a better rubber duck (but worse than a skilled pair programmer).
The overall level of complexity of a project is not an "up means good" kind of measure. If you can achieve the same amount of functionality, obtain the same user experience, and have the same reliability with less complexity, you should.
Accidental complexity, as defined by Brooks in No Silver Bullet, should be minimized.
Complexity is always the thing that needs to be managed, as it is ultimately what kills your app. Over time, as apps get more complex, it's harder and harder to add new features while maintaining quality. In a greenfield project you can implement this feature in a day, but as the app becomes more complex it takes longer and longer. Eventually it takes a year to add that simple feature. At that point, your app is basically dead, in terms of new development, and is forever in sustaining mode, barring a massive rewrite that dramatically reduces the complexity by killing unnecessary features.
So I wish developers looked at apps with a complexity budget, which is basically Dijkstra's line-of-code budget. You have a certain amount of complexity you can handle. Do you want to spend that complexity on adding these features or those other features? But there is a limit and a budget you are working with. Many times I have wished that product managers and engineering managers would adopt this view.
It absolutely does matter. LLMs still have to consume context and process complexity. The more LoC, the more complexity, the more errors you get, and the higher your LLM bills. That's true even in the AI-maximalist, vibe-code-only use case. The reality is that AI will have an easier time working in a well-designed, human-written codebase than in one generated by AI, and the problem of AI code output becoming AI coding input, with the AI choking on itself and making more errors, tends to get worse over time; human oversight is the key tool to prevent this.
A bit of a nit, but accidental complexity is still complexity, so even if that 1M lines could be reduced to 2k lines, it's still way more complex to maintain and patch than a codebase that's properly minimized at, say, 10k lines. (Even though this sounds unreasonable, I don't doubt it happens.)
> The overall level of complexity of a project is not an "up means good" kind of measure.
I never said it was. On the contrary, it's more of an indication of how much more complex large refactorings might be, how complex it might be to add a new feature that will wind up touching a lot of parts, or how long a security audit might take.
The point is, it's important to measure things. Not as a "target", but simply so you can make more informed decisions.
I dunno about autonomous, but it is happening at least a bit from human pilots. I've got a fork of a popular DevOps tool that I doubt the maintainers would want to upstream, so I'm not making a PR. I wouldn't have bothered before, but I believe LLMs can help me manage a deluge of rebases onto upstream.
Same, I run quite a few forked services on my homelab. It's nice to be able to add weird niche features that only I would want. So far, LLMs have been easily able to manage the merge conflicts and issues that can arise.
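For anyone wondering what "managing the rebases" actually involves, it's the standard fork workflow; the LLM mostly earns its keep at the conflict-resolution step (the remote names and URL below are just the usual conventions, not anything specific):

    # one-time: point at the original project (placeholder URL)
    git remote add upstream https://github.com/example/original-project.git

    # each time upstream moves:
    git fetch upstream
    git rebase upstream/main      # replay your patches on top of upstream
    # ...resolve any conflicts here (this is where an agent helps), then:
    git rebase --continue
    git push --force-with-lease origin main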
Pretty dang common. OS X and macOS (and maybe iOS and iPadOS, though I'm not certain) have been autocorrecting "--" into "—" for over a decade. Windows users have been using Alt codes for them since approximately forever ago: https://superuser.com/q/811318.
Typography nerds, who are likely overrepresented on HN, love both the em dash and the en dash, and we especially love knowing when to use each. Punctuation geeks, too! If you know what an octothorp or an interrobang is, you've probably been using em dashes for a long time.
Folks who didn't know what an em dash was by name are now experiencing the Baader-Meinhof phenomenon en masse. I've literally had to disable my "--" autocorrect just to not be accused of using an LLM when writing. It's annoying.
> the person writing the prompts knows what a quality solution looks like at a technical level and is reviewing the output as they go
That is exactly what I recommend, and it works like a charm. The person also has to have realistic expectations for the LLM, and be willing to work with a simulacrum that never learns (as frustrating as it seems at first glance).