Perhaps a year ago “vibe coding” was indicative of a low quality product.
It seems many have not updated their understanding to match today’s capabilities.
I am vibe coding.
That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.
Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.
While last year the chances were slim that the agent could debug tricky issues, I feel that it can now figure out a lot once you have it instrument the app and provide logs.
It sometimes feels like some commenters stick with last year’s mindset and feel entitled to yell about ‘AI slop’ at the first sign of an issue in a product and denigrate the author’s competence.
No, it is still indicative of a low quality product. And I say that as someone who has probably been agentic coding longer than you have.
Indicative, in my dictionary, doesn't mean definitive; it just makes it much more likely. You can make quality products while LLMs write >99% of the code, and that has been possible for more than a year, so a failure to update beliefs is not the issue. I've done it myself. Rather, 90% of such products are low quality, at a much higher rate than in, say, 2022, pre-GPT. As such, it's an indicator. That 10% exists, just as pearls can hide in a pile of shit.
As others have said, the reason is time investment. You can take 2 months to build something where the LLM codes 99%, or you can take 2 hours. HN, and everywhere else, is flooded with the latter. That's why it's mostly crap. I did the former, and luckily it led to a good result. Not a coincidence.
This applies far beyond coding. It applies to _everything_ done with LLMs. You can use them to write a book in 2 hours. You can use them to write a book in 2 years.
I've been neck deep in a personal project since January that heavily leverages LLMs for the coding.
Most of my time has been spent fitting abstractions together, trying to find meaningful relationships in a field that is still somewhat ill-defined. I suppose I could have thrown lots of cash at it and had it 'done' in a weekend, but I hate that idea.
As it stands, I know what works and what doesn't (to the degree I can, I'm still learning, and I'll acknowledge I'm not super knowledgeable in most things) but I'm trying to apply what I know to a domain I don't readily understand well.
People say that, but the quote "I can sooner imagine the end of the world than the end of capitalism" always comes back to me.
Personally, I think it won't be communism but communalism.
It's the first thing I tried, because Nano Banana 2 degrades the output with each turn, becoming unusable after just a few edits.
ChatGPT Images 2.0 made it unusable on the first turn. At least in the ChatGPT app, editing a reference image absolutely destroyed the image quality: it perfectly extracted an illustration from the background, but in the process turned a crisp digital illustration into a blurry, low-quality mess.
> Best forget about using Claude Opus models in Copilot.
I noticed this morning that Opus isn't even one of the models offered by the `/model` command in Copilot. The highest I can get on the paid but least expensive tier is Sonnet 4.6. I'm pretty sure Opus was available recently, but not now.
I can’t say I’ve used it extensively enough to draw a conclusion, but it did seem similar to GPT 5.4 in Codex.
When I threw it at a difficult issue in an iOS app, it, like GPT, came up with wrongly guessed explanations. It only found the issue after I had it instrument the app and add extensive logging. GPT 5.4 is usually the same.
Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.
> Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.
Rest assured OpenAI won’t want to leave that kind of money on the table…
I don’t know about rate limits, but I’ve been running into timeouts with Sonnet 4.6 when requests don’t complete within 4-5 minutes.
I have not encountered the same issues when using Claude Code.
Perhaps Copilot sits at some sort of second-rate priority.
Of course it’s the only thing available in our Enterprise, making us second-class users.
On the Copilot Business plan we get the same rate limits as the student tier, making it infeasible to use Opus. Meanwhile, management talks about its big plans for AI.
I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.
At $20/month your daily cost is about $0.67. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?
That last sentence didn't make sense so I'm not sure what your point is. But I'll run with the analogy.
You got into a taxi, and initially they were charging you horse-carriage prices. They're still not charging you the full cost of a taxi ride, but people are complaining because their (mistaken) assumption was that taxis could be provided as cheaply as horse carriages.
People are angry because their expectations were not managed properly which I understand.
But many of us realized that $20 or even $200 was far too low for such advanced capabilities and are not that surprised that all of the companies are raising prices and decreasing usage limits.
OpenAI is not far behind, they're simply taking their time because they're okay with burning through capital more quickly than Anthropic is, and because OpenAI's clearly stated ambition is to win market share, not to be a responsibly, sustainably run company.
Shortly after I ran out of credits in 15 min, they tweeted that they increased usage limits to compensate for the higher token usage, so perhaps it is not as bad now.
This afternoon I was able to use Codex for about two hours on the $20 plan. Maybe limits will be tighter in the future, but with new data centers, new GPU generations, and research advances it might instead get cheaper.
Anyway, as you said, this is all pretty cheap. I'll go with the $100 Codex plan, since I've now figured out how to work nicely on multiple changes in parallel via the Codex app with worktrees. I imagine the same is possible in Claude Code.
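For anyone curious, the parallel-changes setup is plain git worktrees underneath; nothing Codex-specific is needed. A minimal sketch (the repo location and the branch names `feature-a`/`feature-b` are made up for illustration):

```shell
# Sketch: one worktree per in-flight change, so each agent session
# gets its own checkout while all share a single object store.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git config user.email "you@example.com"   # local identity for the demo repo
git config user.name  "You"
git commit -q --allow-empty -m "initial commit"

# Create a sibling directory + new branch for each parallel change.
git worktree add -q -b feature-a ../feature-a
git worktree add -q -b feature-b ../feature-b

git worktree list   # main checkout plus the two feature worktrees
```

Each worktree is a full working directory on its own branch, so two agents can edit, build, and test independently without stashing; `git worktree remove <path>` cleans one up once its change lands.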
It seems to me a bit naive to think OpenAI would not increase prices/decrease usage limits at some point. $20 might cover a very small fraction of the actual cost that is incurred over a month of sustained usage.