> Engineers need to really lean in to the change in my opinion.
I tried leaning in. I really tried. I'm not a web developer or game developer (more robotics and embedded systems). I tried vibe coding web apps and games. They were pretty boring. I got frustrated that I couldn't change little things. I remember my game character kept getting stuck on imaginary walls; I kept asking Cursor to fix it, and it just made more and more of a mess. I remember making a simple front-end + back-end app with a database to analyze thousands of pull request comments, and it got massively slow and I didn't know why. Cursor wasn't very helpful in fixing it. I felt dumber after the whole process.
The next time I made a web app I just taught myself Flask and some basic JS and I found myself moving way more quickly. Not in the initial development, but later on when I had to tweak things.
The AI helped me a ton with looking things up: documentation, error messages, etc. It's essentially a supercharged Google search and Stack Overflow replacement, but I did not find it useful to let it take the wheel.
Posts like the one OP made are why I'm losing my mind.
Like, is there truly an agentic way to go 10x, or is there some catch? At this point, while I'm not thrilled about the idea of just "vibe coding" all the time, I'm fine with facing reality.
But I keep having the same experience as you, or rather leaning more on that supercharged Google/SO replacement, or just "can you quickly make this boring func here that does xyz", "also add this", or bash scripts, etc.
And that's only when I've done most of the plumbing myself.
EVERY DX survey that comes out (surveying over 20k developers) says the exact same thing.
Staff engineers get the most time savings out of AI tools, and their weekly time savings is 4.4 hours for heavy AI users. That's a little more than a 10% productivity gain, so nowhere close to 10x.
What's more telling about the survey results is that they are also consistent between heavy and light users of AI. Staff engineers who are heavy users of AI save 4.4 hours a week, while staff engineers who are light users save 3.3 hours a week. To put it another way, the DX survey is pretty clear that the difference in time savings between heavy and light AI users is minimal.
Yes, surveys are all flawed in different ways, but an N of 20k is nothing to sneeze at. Every study with actual data points shows that code generation is not a significant time saver, and zero studies show significant time savings from it. All the productivity gains DX reports come from debugging and investigation/code-base spelunking help.
In my experience, productivity measured in merge requests created has increased massively.
More merge requests because the same senior developers are now creating more bugs, 4x compared to 2025. Same developers, same codebase, but now with Cursor!
Past survey results are hidden in some presentations I've seen, and I have full access to the latest survey only because my company pays for it. So I'm not sure it's legal for me to reproduce it.
> Like, is there truly an agentic way to go 10x or is there some catch?
Yes. I think it’s practice. I know this sounds ridiculous, but I feel like I have reached a kind of mind meld state with my AI tooling, specifically Claude Code. I am not really consciously aware of having learned anything related to these processes, but I have been all in on this since ChatGPT, and I honestly think my brain has been rewired in a way that I don’t truly perceive except in terms of the rate of software production.
There was a period of several months a while ago where I felt exhausted all the time. I was getting a lot done, but there was something about the experience that was incredibly draining. Now I am past that and I have gone to this new plateau of ridiculous productivity, and a kind of addictive joy in the work. A marvellous pleasure at the orchestration of complex tasks and seeing the results play out. It’s pure magic.
Yes, I know this sounds ridiculous and over-the-top. But I haven’t had this much fun writing software since my 20s.
> Yes, I know this sounds ridiculous and over-the-top.
in that case you should come with more data. tell us how you measured your productivity improvement. all you've said here is that it makes you feel good
Work that would have taken me 1-2 weeks to complete, I can now get done in 2-3 hours. That's not an exaggeration. I have another friend who is as all-in on this as me and he works in a company (I work for myself, as a solo contractor for clients), and he told me that he moved on to Q1 2026 projects because he'd completed all the work slated for 2025, weeks ahead of schedule. Meanwhile his colleagues are still wading through scrum meetings.
I realize that this all sounds kind of religious: you don't know what you're missing until you actually accept Jesus's love, or something along those lines. But you do have to kinda just go all-in to have this experience. I don't know what else to say about it.
If your work maps exceedingly well to the technology, it's true: it goes much faster. Doubly so when you have enough experience and understanding to spot its errors or suboptimal approaches and adjust that much faster.
The second you get to a place where the mapping isn’t there though, it goes off rails quickly.
Not everyone programs in such a way that they may ever experience this but I have, as a Staff engineer at a large firm, run into this again and again.
It’s great for greenfield projects that follow CRUD patterns though.
this is just not a very interesting way to talk about technology. I'm glad it feels like a religious experience to you, I don't care about that. I care about reality
it seems to me that if these things were real and repeatable, there would be published traces showing the exact interactions that led to a specific output, and the cost in time and money to get there.
My sympathies go out to the friend's coworkers. They are probably wading through a bunch of stuff right now, but given the context you have given us, it's probably not "scrum meetings".
I don't even care about the LLM, I just want the confidence you have to assess that any given thing will take N weeks. You say 1-2 weeks... that's a big range! Something that "would" take 1 week takes ~2 hours, and something that "would" take 2 weeks also takes ~2 hours. How does that even make sense? I wonder how long something that would have taken three weeks would take.
> They are probably wading through a bunch of stuff right now, but given the context you have given us, it's probably not "scrum meetings".
This made me laugh. Fair enough. ;)
In terms of the time estimations: if your point is that I don't have hard data to back up my assertions, you're absolutely correct. I was always terrible at estimating how long something would take. I'm still terrible at it. But I agree with the OP. I think the labour required is down 90%.
It does feel to me that we're getting into religious believer territory. There are those who have firsthand experience and are all-in (the believers), there are those who have firsthand experience and don't get it (the faithless), and there are those who haven't tried it (the atheists). It's hard to communicate across those divides, and each group's view of the others is essentially, "I don't understand you".
Religions are about faith, faith is belief in the absence of evidence. Engineering output is tangible and measurable, objectively verifiable and readily quantifiable (both locally and in terms of profits). Full evidence, testable assertions, no faith required.
Here we have claims of objective results, but also admissions we’re not even tracking estimations and are terrible at making them when we do. People are notoriously bad at estimating actual time spent versus output, particularly when dealing with unwanted work. We’re missing the fundamental criteria of assessment, and there are known biases unaccounted for.
Output in LOC has never been the issue, copy and paste handles that just fine. TCO and holistic velocity after a few years is a separate matter. Masterful orchestration of agents could include estimation and tracking tasks with minimal overhead. That’s not what we’re seeing though…
Someone who has even a 20% better method for deck construction is gonna show me some timetables, some billed projects, and a very fancy new car. If accepting Mothra as my lord and saviour is a prerequisite to pierce an otherwise impenetrable veil of ontological obfuscation in order to see the unseeable? That deck might not be as cheap as it sounds, one way or the other.
I’m getting a nice learning and productivity bump from LLMs, there are incredible capabilities available. But premature optimization is still premature, and claims of silver bullets are yet to be demonstrated.
Here's an example from this morning. At 10:00 am, a colleague created a ticket with an idea for the music plugin I'm working on: wouldn't it be cool if we could use nod detection (head tracking) to trigger recording? That way, musicians who use our app wouldn't need a foot switch (as a musician, you often have your hands occupied).
Yes, that would be cool. An hour later, I shipped a release build with that feature fully functional, including permissions plus a calibration UI that shows if your face is detected and lets you adjust sensitivity, and visually displays when a nod is detected. Most of that work got done while I was in the shower. That is the second feature in this app that got built today.
This morning I also created and deployed a bug fix release for analytics on one platform, and a brand-new report (fairly easy to put together because it followed the pattern of other reports) for a different platform.
I also worked out, argued with random people on HN and walked to work. Not bad for five hours! Do I know how long it would have taken to, for example, integrate face detection and tracking into a C++ audio plugin without assistance from AI? Especially given that I have never done that before? No, I do not. I am bad at estimating. Would it have been longer than 30 minutes? I mean...probably?
I would love to see that pull request, and how readable and maintainable the code is. And do you understand the code yourself, since you've never done this before?
Just having a 'count-in' type feature for recording would be much much more useful. Head nodding is something I do all the time anyway as a musician :).
I don't know what your user makeup is like, but shipping a CV feature same-day sounds potentially disastrous. There are so many things I would think you would at least want to test, or even just consider, with the kind of user empathy we all should practice.
I think you have to make a distinction between individual experience and claims about general truths.
If I know someone as an honest and serious professional, and they tell me that some tool has made them 5x or 10x more productive, then I'm willing to believe that the tool really did make a big difference for them and their specific work. I would be far more sceptical if they told me that a tool has made them 10% more productive.
I might have some questions about how much technical debt was accumulated in the process and how much learning did not happen that might be needed down the road. How much of that productivity gain was borrowed from the future?
But I wouldn't dismiss the immediate claims out of hand. I think this experience is relevant as a starting point for the science that's needed to make more general claims.
Also, let's not forget that almost none of the choices we make as software engineers are based on solid empirical science. I have looked at quite a few studies about productivity and defect rates in software engineering projects. The methodology is almost always dodgy and the conclusions seem anything but robust to me.
> It does feel to me that we're getting into religious believer territory. There are those who have firsthand experience and are all-in (the believers), there are those who have firsthand experience and don't get it (the faithless), and there are those who haven't tried it (the atheists). It's hard to communicate across those divides, and each group's view of the others is essentially, "I don't understand you".
What a total crock. Your prose reminds me of the ridiculously funny Mike Myers in "The Love Guru".
But then does this not give you pause, that it "feels religious"? Is there not some morsel of critical/rational interrogation on this? Aren't you worried about becoming perhaps too fundamentalist in your belief?
To extend the analogy: why charge clients for your labor anymore, which Claude can supposedly do in a fraction of the time? Why not just ask if they have heard the good word, so to speak?
Nobody has ever had a robust, empirical metric of programmer productivity. Nobody. Ticket count, function points, LoC, and the rest tell you nothing about the fitness of the product. It's all feels.
ok, but there's a spectrum between fully reproducible empirical evidence and divine revelation. I'm not convinced it's impossible to measure productivity in a meaningful way, even if it isn't perfect. it at least seems better to try than... whatever this is
Just as an aside, I also think I am way more productive now, but a really convincing data point would be someone who does project work and now has 5x the hourly rate they had last year. If there are not plenty of people like this, it cannot be 10x.
That's not a very convincing argument. Even if you can do 10x the work, that doesn't necessarily mean you can easily find customers ready to pay 5x the hourly rate.
What's worked best with Gemini: I made a DSL that transpiles to C with CUDA support to train small models, in about 3 hours... (all programs must run against an image data set and must only generate embeddings).
Do not: vibe code top-down (e.g. "Make me a UI with React, with these buttons and these behaviors for each button").
Do not: chat casually with it (e.g. "I think it would look better if the button was green").
Do: constrain phrasing to the next data-transform goal (e.g. "You must add a function to change all words that start with lowercase to start with uppercase").
Do: vibe code bottom-up (e.g. "You must generate a file with a function to open a plaintext file, and appropriate tests; now you must add a function to count all words that begin with 'f'"); see the sketch after this list.
Do: stick to must/should/may (e.g. "You must extend the code with this next function").
Do: constrain it to mathematical abstractions (e.g. system prompt: "You must not use loops; you must only use recursion and functional paradigms. You must not make up abstractions; stick to mathematical objects and known algorithms").
Do: constrain it to one file per type and per function. This makes it quick to review, and you regenerate only what needs to change.
Using those patterns, Gemini 2.5 and 3 have cranked out banging code with little wandering off in the weeds and hallucinating.
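To make the bottom-up pattern concrete, here is roughly what that two-prompt sequence should land on, folded into one file for brevity (a minimal sketch in Go for illustration only; my actual pipeline targeted C, and the recursion-only system-prompt constraint from above is applied):

    // words.go
    package words

    import (
        "os"
        "strings"
    )

    // ReadWords opens a plaintext file and returns its whitespace-separated words.
    func ReadWords(path string) ([]string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(data)), nil
    }

    // CountF counts the words beginning with "f", using recursion rather
    // than a loop, per the system-prompt constraint above.
    func CountF(words []string) int {
        if len(words) == 0 {
            return 0
        }
        n := CountF(words[1:])
        if strings.HasPrefix(words[0], "f") {
            n++
        }
        return n
    }

    // words_test.go (the "appropriate tests" half of the first prompt)
    // func TestCountF(t *testing.T) {
    //     if got := CountF([]string{"foo", "bar", "fizz"}); got != 2 {
    //         t.Fatalf("want 2, got %d", got)
    //     }
    // }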
Programming has been mired in made up semantics of the individual coder for the luls, to create mystique and obfuscate the truth to ensure job security; end of the day it's matrix math and state sync between memory and display.
> Yes, I know this sounds ridiculous and over-the-top. But I haven’t had this much fun writing software since my 20s.
But... you're not writing it. The culmination of many sites, many people, Stack Overflow, etc. wrote it, through the filtering mechanism we're calling AI.
Currently three main projects. Two are Rails back-ends with React front-ends, so they're all Ruby, TypeScript, Tailwind, etc. The third is more recent: an audio plugin built using the JUCE framework, all C++. This is the one that has been blowing my mind the most, because I am an expert web developer, but the last time I wrote a line of C++ was 20 years ago, and I have zero DSP or math skills. What blows my mind is that it works great: it's thread-safe and performant.
In terms of workflow, I have a bunch of custom commands for tasks that I do frequently (e.g. "perform code review"), but I'm very much in the loop all the time. The whole "agent can code for hours at a time" thing is not something I personally believe. It depends on the task how involved I get, however. Sometimes I'm happy to just let it do work and then review afterwards. Other times, I will watch it code and interrupt it if I am unhappy with the direction. So yes, I am constantly stepping in manually. This is what I meant about "mind meld". The agent is not doing the work, I am not doing the work, WE are doing the work.
I maintain a few rails apps and Claude Code has written 95% of the code for the last 4 months. I deploy regularly.
I make my own PRs then have Copilot review them. Sometimes it finds criticisms, and I copy and paste that chunk of critique into Claude Code, and it fixes it.
Treat the LLMs like junior devs that can look up answers supernaturally fast. You still need to be mindful of their work. Doubtful, even. Test, test, test.
There's extensive Tailwind training data in the models. Sure, there's something more efficient, but it's just safer to let the model leverage what it was trained on.
In my experience the LLMs work better with frameworks that have more rigid guidance. Something like Tailwind has a body of examples that work together, language to reason about the behavior needed, higher levels of abstraction (potentially), etc. This seems to be helpful.
The LLMs can certainly use raw CSS, and it works well; the challenge is when you need consistent framing across many pages with mounting special cases, and the LLMs may extrapolate small inconsistencies further. If you stick within a rigid framework, the inconsistencies should be fewer across a larger project (in theory, at least).
Start by having the agent ask you questions until it has enough information to create a plan.
Use the agent to create the plan.
Follow the plan.
When I started, I had to look at the code pretty frequently. Rather than fix it myself, I spent time thinking about what I could change in my prompts or workflow.
I did find some benefit in lowering the cost of exploratory work, but that's it—certainly worth 20€/month, but not the price of any of the "ultimate" plans.
For example, today I had to write a simple state machine (for a parser I was rewriting, so I had all the test cases already). I asked Claude Code to write the state machine for me and stopped it before it tried compiling and testing.
Some of the code (of course including all the boilerplate) worked, some made no sense. It saved a few minutes and overall the code it produced was a decent first approximation, but waiting for it to "reason" through the fixes would have made no sense, at least to me. The time savings mostly came from avoiding the initial "type the boilerplate and make it compile" part.
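For a sense of the shape involved, think of something like this toy tokenizer (a sketch in Go purely for illustration; the real parser was more involved):

    package main

    import "fmt"

    type state int

    const (
        outside state = iota // between tokens
        inWord               // inside a run of letters
    )

    // tokenize walks the input once, switching state on letter/non-letter
    // boundaries and emitting a token each time a word ends.
    func tokenize(input string) []string {
        var tokens []string
        start, s := 0, outside
        for i, c := range input {
            isLetter := (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
            switch s {
            case outside:
                if isLetter {
                    start, s = i, inWord
                }
            case inWord:
                if !isLetter {
                    tokens = append(tokens, input[start:i])
                    s = outside
                }
            }
        }
        if s == inWord {
            tokens = append(tokens, input[start:])
        }
        return tokens
    }

    func main() {
        fmt.Println(tokenize("ab, cd ef")) // [ab cd ef]
    }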
When completing the refactoring, there were a few other steps where using AI was useful. But overall the LLM did maybe 10% of the work and saved, optimistically, 20-30 minutes over a morning.
Assuming I have similar savings once a week, which is again very optimistic... That's a 2% reduction or less.
> or just a "can you quickly make this boring func here that does xyz" "also add this" or for bash scripts etc.
I still write most of the interesting code myself, but when it comes to boring, tedious work (that's usually fairly repetitive, but can't be well abstracted any more), that's when I've found gen AI to be a huge win.
It's not 10x, because a lot of the time, I'm still writing code normally. For very specific, boring things (that also are usually my least favorite parts of code to write), it's fantastic and it really is a 10x. If you amortize that 10x over all the time, it's more like a 1.5x to 3x in my experience, but it saves my sanity.
Things like implementing very boring CRUD endpoints that have enough custom logic that I can't use a good abstraction and writing the associated tests.
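Think endpoints of roughly this shape, where a generic abstraction almost fits but one custom rule ruins it (a hypothetical Go sketch; the resource, names, and discount rule are all made up):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Invoice is a hypothetical resource with just enough custom logic
    // (the discount rule below) that a generic CRUD abstraction doesn't fit.
    type Invoice struct {
        ID       string  `json:"id"`
        Total    float64 `json:"total"`
        Discount float64 `json:"discount"`
    }

    func createInvoice(w http.ResponseWriter, r *http.Request) {
        var in Invoice
        if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // The one "custom" bit: totals over 1000 get a flat 5% discount.
        if in.Total > 1000 {
            in.Discount = in.Total * 0.05
        }
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusCreated)
        json.NewEncoder(w).Encode(in)
    }

The associated tests are the same kind of tedium: build a request with httptest, call the handler, assert on the JSON.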
I would dread doing work like that because it was just so mind numbing. Now, I've written a bunch of Cursor rules (that was actually pretty fun) so I can now drop in a Linear ticket description and have it get somewhere around 95% done all at once.
Now, if I'm writing something that is interesting, I probably want to work on it myself purely because it's fun, but also because the LLM may suck at it (although they're getting pretty damn good).
I tried Claude Code to write a very simple app for me: basically a Golang mock server that dumps requests to the console. I'd write this kind of app in an hour. I spent around 1.5 hours with Claude Code, and in the end I had code that I liked, almost the same code I'd have written myself. It's not vibe coding; I carefully instructed it to write code the way I prefer, one small step after another.
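The end result was roughly this shape (a minimal sketch of the idea, not my exact code):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "net/http/httputil"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Dump the full request, headers and body, to the console.
            dump, err := httputil.DumpRequest(r, true)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            fmt.Printf("%s\n", dump)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }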
So for me, it's pretty obvious that with better training I'd be able to achieve speed-ups with the same end result. Not 10x, but 2x is possible. The very first attempt to use AI took almost the same time as writing the code myself, and I have a lot to improve.
That said, I have a huge problem with this approach: it's not fun to work like that. I started programming 25 years ago because it was fun for me. It's still fun for me today. I love writing all these loops and ifs. I can accept minimal automation like static autocomplete, but that's about it.
does anyone remember that episode of star trek tng where the kid is given a little laser engraver that carves a dolphin from a block of wood? and the kid is like "i didn't make this" and the teacher (who abducted him, ew) is like "yeah but it's what you wanted to make, the tool just guided you"
so in 2026 we're going to get in trouble doing code "the old way", the pleasurable way, the way an artist connects with the work. we're not chefs any longer; we're plumbers now, pouring food from a faucet.
we're annoyed because our output can suddenly be measured by the time unit. the jig is up. our secret clubhouse has a lightbulb the landlord controls.
some of us were already doing good work, saving money, making the right decisions. we'll be fine.
some of us don't know how to do those things - or won't do those things - and our options are funneled down. we're thrashing at this, like dogs being led to the pound.
there's before, there's during, and there's after; the during is a thing we so seldom experience, and we're in it, and 2024 felt like nothing, 2025 feels like the struggle, and 2026 will be the reconciliation.
change sucks. but it's how we continue. we continue differently or we don't exist.
I sure do. I believe it's the first-season episode "When the Bough Breaks" (S01E16). That show tackled so many heavy topics right out of the gate... I respect the hell out of the courage to try, even if it produced some pretty epic whiffs along with the home runs and standing doubles.
Feeling the same. I’m guessing the folks getting good results are literally writing extremely detailed pseudocode by hand?! Like:
Write a class Person who has members (int) age, (string) first name, (string) last name…
But if you can write it in that much detail... don't you already know the code you want to write and how you should write it? Writing plain pseudocode feels more verbose.
But the AI coding agent can then ask you follow up questions, consider angles you may not have, and generate other artifacts like documentation, data generation and migration scripts, tests, CRUD APIs, all in context. If you can reliably do all that from plain pseudo code, that's way less verbose than having to write out every different representation of the same underlying concept, by hand.
Sure, some of that, like CRUD APIs, you can generate via templates as well. Heck, you can even have the coding agent generate the templates and the code that will process/compile them, or generate the code that generates the templates given a set of parameters.
It's been my experience that reaching for an LLM is a significant context switch that breaks flow state. Comparable to a monkey entering your office and banging cymbals together for a minute, returning to programming after writing up instructions for an LLM requires a refocusing process to reestablish the immersion you just forfeited. This can be a worthwhile trade with particularly tedious or annoying tasks, but not always.
I suspect that this explains the current bifurcation of LLM usage, where individuals either use LLMs for everything or use them minimally, with the in-between space shrinking by the day.
> Like, is there truly an agentic way to go 10x or is there some catch? At this point while I'm not thrilled about the idea of just "vibe coding" all the time, I'm fine with facing reality.
Below is based on my experience using (currently) mostly GPT-5 with open source code assistants.
For a new project with straightforward functionality? I think you (and "you" being "basically anybody who can code at all") can probably manage to go 10x the pace of a junior engineer of yesteryear.
Things get a lot trickier when you have complex business logic to express and backwards compatibility to maintain in an existing codebase. Writing out these kinds of requirements in natural language is its own skillset (which can be developed), and this process takes time in and of itself.
The more confusing the requirements, the more error-prone the process becomes, though. The model can do things "correctly", but oops, maybe you forgot something in your description, and now the whole thing is wrong. And the fact that you didn't write the code means you missed your opportunity to fix / think about stuff in the first pass of implementation (i.e. you need to seriously review stuff, which also slows you down).
Sometimes iterating over English instructions will take longer than just writing/expressing things in code from the start. But sometimes it will be a lot faster too.
Basically the easy stuff is way easier but the more complex stuff is still going to require a lot of hand holding and a lot of manual review.
I have a feeling that people who are genuinely impressed by long term vibe coding on a single project are only impressed because they don't know any better.
Take writing a book, or blog post; writing a good blog post, or a chapter of a book, takes lots of skill and practice. The results are very satisfying and usually add value to both the writer's life as well as the reader's. When someone who has done that uses AI and sees the slop it generates, he's not impressed, probably even frustrated.
However, someone who can barely write a couple of coherent sentences would be baffled at how well AIs can put together sentences and paragraphs and keep a somewhat coherent train of thought through the entire text. People who struggled in school with writing an introduction and a conclusion will be amazed at AI's writing. They might even assume that "those paragraphs actually add no meaning and are purely fluff" is a totally normal part of writing and not an AI artifact.
I’m impressed by getting the output of at least a mediocre developer at less than 1% of the cost. Brute force is an underrated strategy. I’ve been having a great experience.
That developers in the Hacker News comment bin report experiences that align with their personal financial interests doesn’t really dissuade me.
How many hours have you spent writing code? Thousands? Tens of thousands? Were you able to achieve good results in the first hundred hours?
Now, compare it to how much time you've spent working with agents. Did you dedicate considerable time to figuring out how to use them? Do you stop using the agent and do things manually when it isn't going right, or do you spend time figuring out how to get the agent to do it?
You can't really compare those two. Agents are non-deterministic. I can tell Clod to go update my unit test coverage and it will choke itself, burn 200k tokens, and then loudly proclaim "Great! I've updated unit test coverage".
I'll kill that terminal, open it again and run the exact same command. 30k tokens, actually adds new tests.
It's hard to "learn" when the feedback cycle can take 30 minutes and result in the agent sitting in the corner touching itself and crooning about what a good boy it is. It's hard to _want_ to learn when you can't trust the damn thing with the same prompt twice.
And then all the heuristics you've learnt change under you and you're stuck doing 100-1000 more hours of learning with a drop in quality during that time.
That's my finding as well. The smaller the chunk, the better, and it saves me 5m here and an hour there. These really add up.
This is cool. It's extra cool on annoying things like "fix my types" or "find the syntax error" or "give me the flags for ffmpeg to do exactly this."
If I ever meet someone who drank the koolaid and wants to show me their process, I'm happy to see it. But I've tried enough to believe my own eyes, and when I see open source contributors I respect demo their methods, they spend enough time and energy waiting on the machine and trying to keep it on the rails that, yes, this is harder, but it does not appear to be faster.
It seems to very heavily depend on your exact project and how well it's represented in the training set.
For instance, AI is great at react native bullshit that I can't be bothered with. It absolutely cannot handle embedded development. Particularly if you're not using Arduino framework on an Atmel 328. I'm presently doing bare metal AVR on a new chip and none of the AI agents have a single clue what they're doing. Even when fed with the datasheet and an entire codebase of manually written code for this thing, AI just produces hot wet garbage.
If you're on the 1% happy path AI is great. If you diverge even slightly from the top 10 most common languages and frameworks it's basically useless.
The weird thing is if you go in reverse it works great. I can feed bits of AVR assembly in and the AI can parse it perfectly. Not sure how that works, I suspect it's a fundamentally different type of transformation that these models are really good at
I have been building a game (preview here: https://qpingpong.codeinput.com) as an exercise in "vibe coding". There is only one rule: I am not allowed to write a single line of code. But I can prompt as much as I want.
So far I am hitting a "hard-block" on getting the AI to make changes once you have a large code base. One "unblocker" was to restructure all the elements as their own components. This makes it easier for the LLM (and you?) to reason about each component (React) in isolation.
Still, even at this "small/simple game" stage, it is not only hard for the LLM to get any change done but very easy for it to break things. The only way I can see around it is to structure very thorough tests (including E2E tests) so that any change by the LLM is thoroughly tested for regressions.
I've been working on this for a month or so. I could have coded it faster by hand except for the design part.
I have a hobby project on the side involving radio digital signal processing in Rust that I've been pure vibe coding, just out of curiosity to see how far I can get. On more than one occasion the hobby project has gotten bogged down in a bug that is immensely challenging to resolve. And since the project isn't in an area I have experience with, and since I don't have a solid "theory of the program", since it's a gray box because I've been vibe coding it, I've definitely seen CC get stuck and introduce regressions in tricky issues we previously worked through.
The use of Claude Code with my day job has been quite different. In my day job, I understand the code and review it carefully, and CC has been a big help.
You can go faster once you understand the domain well enough that you could have written it yourself. This allows you to write better designs and steer LLMs in the right direction.
"Vibe coding" though is moving an ever growing pile of nonunderstanding and complexity in front of you, until you get stuck. (But it does work until you've amassed a big enough pile, so it's good for smaller tasks - and then suddenly extremely frustrating once you reach that threshold)
Can you go 10x? Depends. I haven't tried any really large project yet, but I can compress fairly large things that would've taken a week or two pre-LLM into a single lazy Sunday.
For larger projects, it's definitely useful for some tasks. ("Ingest the last 10k commits, tell me which ones are most likely to have broken this particular feature") - the trick is finding tasks where the win from the right answer is large, and the loss from the wrong one is small. It's more like running algorithmic trading on a decent edge than it is like coding :)
It definitely struggles to successfully do fully agentic work on very large code bases. But... I've also not tried too much in that space yet, so take that with a grain of salt.
If you have not started working on a new codebase while adopting AI, it may be harder to realize the gains.
I switched jobs somewhat recently. At my previous job, where I was on the codebase for years, I knew where the changes should be and what they should look like. So I tried to jump directly to implementation with the AI because I didn't need much help planning and the AI got confused and did an awful job.
In a new codebase, where I had no idea how things were structured, I started the process by using AI to understand where the relevant code is, the call hierarchies and side effects, etc.
I have found that by using the AI to conduct the initial investigation, it was then very easy to get the AI to generate an effective spec, and then relatively easy to get it to generate the code to that spec. That flow works much better than trying to one-shot the implementation.
It sounded like he was trying to one shot things when he mentioned he would ask it to fix problems with no luck. It's an approach I've tried before with similar results, so I was sharing an alternative that worked for me. Apologies if it came across as dismissive
GP said they were doing vibe coding and trying to get the AI to do one-shots. That's the worst way to use these tools. AI coding agents work best when you generally know what you want the output to look like but don't want to waste time writing that output.
I don’t vibe code yet but it has sped me up a lot when working with large frameworks that have a lot of magic behind the scenes (Spring Boot). I am doing a very large refactor, major version spring boot upgrade, at the moment.
When given focused questions about parts of the code, it will give me 2-4 different approaches extending/implementing different bean overrides. I go through a cycle of back and forth having it give me sample implementations. I often ask what is considered the more modern or desirable approach, and for things like a pros-and-cons list of the different approaches. The one I like best I then go look up in the specific docs to fact-check a bit.
For this type of work, it's easily a 2-3x. Spring specifically is really tough to search for, due to its long history and large changes between major versions. More often than not it lands me on the most modern approach for my Spring Boot version, and while the code it produces is not bad, it isn't great either. So, I rewrite it.
Also, it does a pretty good job of writing integration tests. I have it give me the boilerplate for the test, and then I can modify it for all my different scenarios. Then I run those against the unmodified and refactored code as a validation suite to confirm the refactor didn't introduce issues.
When I am working in GoLang I don’t get this level of speed up but I also don’t need to look up as much. The number of ways to do things is far lower and there is no real magic behind the scenes. This might be one reason experiences may differ so radically.
How are you guys using LLMs? I've done a couple of applications for my own use, including a "Mexican Train Dominoes" online multiplayer game, using LLMs, and it doesn't stop amazing me. Gemini 3 is crazy good at finding bugs at work, and every week there are very interesting advances in arXiv articles.
I'm 45 years old, have been programming since I was 9, and this is the most amazing time to be building stuff.
The thing is, using an agent or AI to code for you is a learned skill. It doesn’t come naturally to most people. For you to be successful at it, you’ve got to adopt a mentor / lead mindset - directing vs doing. In other words, you have to be an expert at explaining yourself - communicating clearly to get great results.
Someone who hasn’t got any experience coding, or leading in any capacity, anywhere in life (or mentoring) will have a hard time with agentic development.
I’ll elaborate a bit more - the ideal mindset requires fighting that itch to “do it yourself” and sticking to the prompts for any changes. This habit will force you to get better at communicating effectively to others (including agents).