It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for a "they are protecting their fragile egos" style of explanation.
The productivity improvements speak for themselves. Over time, those who can use AI well and those who cannot will be rewarded or penalized by the free market accordingly.
If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.
The sheer number of projects I've completed that I truly would never have been able to make a dent in otherwise is evidence enough for me. I don't think research will convince you. You need to either watch someone do it, or experiment with it yourself. Get your hands dirty on an audacious project with Claude Code.
It sounds like you're building a lot of prototypes or small projects, which, yes, LLMs can be amazingly helpful at. But that is very much not what many/most professional engineers spend their time on, and in my experience generalizing from the former case often doesn't hold up.
We use both Claude and Codex on a fairly large, ~10-year-old Java project (~1900 Java files, 180K lines of code). Both tools are able to implement changes across several files, refactor the code, and add unit tests for the modified areas.
Sometimes the result is not great, sometimes it requires manual updates, and sometimes it goes in the wrong direction and we just discard the proposal. The good thing is you can kick off such a large change, go get a coffee, and when you're back you can take a look at the changes.
Anyway, overall those tools are pretty useful already.
"sheer number" combined with "completed" sounds more like lots of small projects (likely hobbyist or prototypes) than it does anything large/complicated/ongoing like in a professional setting.
It is, at this point, rather suspect that there are mountains of anecdata but pretty much no high-quality quantitative data (and what there is is mixed at best). Fun fact: worldwide, over 200 million people use homeopathy on a regular basis. They think it works. It doesn't work.
That's what it really all comes down to, isn't it?
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.