But has that cut down your prompting time? I assume an AI agent takes a roughly fixed amount of time to generate N lines of code, so constructing effective prompts is probably where most of the time goes. Has that time dropped with newer releases, or has it been shown somehow that we need N fewer prompts to achieve the same result with newer models?
It’s less about the models getting smarter and more about them getting better at handling vague requests and acquiring context. They’re better at figuring out what they need to know, I’m better at shaping that process, and I have structured workflows for managing context and efficiently feeding the right pieces into each prompt.