Hacker News | user34283's comments

Perhaps a year ago “vibe coding” was indicative of a low quality product.

It seems many have not updated their understanding to match today’s capabilities.

I am vibe coding.

That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.

Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.

While last year the chances were slim that an agent could debug tricky issues, I feel it can now figure out a lot once you have it instrument the app and provide logs.

It sometimes feels like some commenters are stuck in last year's mindset: they feel entitled to yell "AI slop" at the first sign of an issue in a product and to denigrate the author's competence.


No, it is still indicative of a low quality product. And I say that as someone who has probably been agentic coding longer than you have.

Indicative in my dictionary doesn't mean definitive. It just makes it much more likely. You can make quality products while LLMs write >99% of the code. This has been possible for more than a year, so a failure to update beliefs is not the issue; I've done so myself. Rather, 90% of the above products are low quality, at a much higher rate than in, say, 2022, pre-GPT. As such, it's an indicator. That 10% exists, just like pearls can hide in a pile of shit.

As others have said, the reason is time investment. You can take 2 months to build something where the LLM codes 99% of it. Or you can take 2 hours. HN, and everywhere else, is flooded by the latter. That's why it's mostly crap. I did the former. And luckily it led to a good result. Not a coincidence.

This applies far beyond coding. It applies to _everything_ done with LLMs. You can use them to write a book in 2 hours. You can use them to write a book in 2 years.


I've been neck deep in a personal project since January that heavily leverages LLMs for the coding.

Most of my time has been spent fitting abstractions together, trying to find meaningful relationships in a field that is still somewhat ill-defined. I suppose I could have thrown lots of cash at it and had it 'done' in a weekend, but I hate that idea.

As it stands, I know what works and what doesn't (to the degree I can, I'm still learning, and I'll acknowledge I'm not super knowledgeable in most things) but I'm trying to apply what I know to a domain I don't readily understand well.


Are TUIs not yesterday’s hot thing?

The way I work now in the Codex desktop app is to spin up 3-5 conversations, each working in its own dedicated git worktree.

So while the agent works and runs the test suite I can come back to other conversations to address blockers or do verification.

What's important is that I can see which conversation has an update and get desktop notifications.

Maybe I could set this up with tabs in the Terminal, but it does not sound like the best UX.
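For anyone who wants to try the same setup outside a dedicated app, the worktree-per-conversation layout can be sketched in plain git. The repo name, branch prefix, and task names below are made up for illustration; the point is just that each agent session gets its own checkout and branch, so parallel sessions never step on each other's working directory.

```shell
# Illustrative setup: one worktree + branch per parallel agent session.
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"

# Create a sibling worktree per task, each on its own branch:
for task in feature-a feature-b bugfix-c; do
  git worktree add -q -b "agent/$task" "../myapp-$task"
done

# Each session is then pointed at its own directory:
git worktree list
```

Merging back is then the usual branch workflow, and `git worktree remove` cleans up a finished session.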


Yes, I think they have been for years. C2PA Content Credentials are supported in cameras and some phones already today.

I figure capitalism may soon become obsolete. But I don’t think this speculation is going to make for interesting discussion on here.

I find the technical discussion more interesting and could do without some of the moral grandstanding in the comments.


People say that, but the quote "I can sooner imagine the end of the world than the end of capitalism" always comes back to me. Personally, I think it won't be communism but communalism.

With Opus 4.6 on the $20 plan the limits were bad, but at least you could do a short session.

I find that with Opus 4.7 I can do two messages. Once I had a short session with 4-5 messages and it consumed $10 in extra usage.

This relegated Claude to a backup option in addition to Codex, which has the better desktop app anyway, and much better usage limits.

I’m even considering cancelling Claude entirely.


It's the first thing I tried, because Nano Banana 2 deteriorates the output with each turn, becoming unusable with just a few edits.

ChatGPT Images 2.0 made it unusable at the first turn. At least in the ChatGPT app, editing a reference image absolutely destroyed the image quality. It perfectly extracted an illustration from the background, but in the process basically turned a crisp digital illustration into a blurry, low-quality mess.


They slapped a 7.5x “promotional” multiplier on Opus 4.7 and they are removing Opus 4.6 in short order.

I heard they disabled signups for non-business accounts too.

Best forget about using Claude Opus models in Copilot.


> Best forget about using Claude Opus models in Copilot.

I noticed this morning that Opus isn't even one of the models in the `/model` command in Copilot. Highest I can get (on the paid, but least expensive) tier is Sonnet 4.6. I'm pretty sure Opus was allowed recently, but not now.


Yeah, not thrilled about that.

Looks like you gotta build your own agent harness and self-host, or use AWS Bedrock for "sovereignty".


Indeed; especially since I paid for a sub with some expectations, and those are being changed out from under me.

I heard they offer a full refund for this month if you are understandably unhappy with these changes.

I can’t say I’ve used it extensively enough to draw a conclusion, but it did seem similar to GPT 5.4 in Codex.

When I threw it at a difficult issue in an iOS app, it, like GPT, came up with wrongly guessed explanations. It only found the issue after I had it instrument the app and add extensive logs. GPT 5.4 usually behaves the same.

Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.


> Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.

Rest assured OpenAI won’t want to leave that kind of money on the table…


There’s also still Google with their TPUs, xAI has some large models in the works, and that’s not to mention China.

With that much competition and ongoing improvements, I don’t have such a pessimistic view on future usage limits and cost.


I don’t know about rate limits, but I’ve been running into timeouts with Sonnet 4.6 when requests don’t complete within 4-5 minutes.

I have not encountered the same issues when using Claude Code.

Perhaps Copilot is on some sort of second-rate priority tier.

Of course it’s the only thing available in our Enterprise, making us second class users.

Using the Copilot Business Plan we get the same rate limits as the student tier, making it infeasible to use Opus. Meanwhile management talks about their big plans for AI.


Perhaps on the 10x plan.

It went through my $20 plan's session limit in 15 minutes, implementing two smallish features in an iOS app.

That was with the effort on auto.

It looks like full time work would require the 20x plan.


I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.

At $20/month, your daily cost is about 67 cents. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?


Yea, actually, people should be complaining.

If you got in a taxi, and they charged you relative to taking a horse carriage, people should be upset.


That last sentence didn't make sense so I'm not sure what your point is. But I'll run with the analogy.

You got into a taxi and they were charging you horse carriage prices initially. They're still not charging you for a full taxi ride but people are complaining because their (mistaken) assumption was that taxis can be provided as cheaply as horse carriages.

People are angry because their expectations were not managed properly which I understand.

But many of us realized that $20 or even $200 was far too low for such advanced capabilities and are not that surprised that all of the companies are raising prices and decreasing usage limits.

OpenAI is not far behind, they're simply taking their time because they're okay with burning through capital more quickly than Anthropic is, and because OpenAI's clearly stated ambition is to win market share, not to be a responsibly, sustainably run company.


Shortly after I ran out of credits in 15 min, they tweeted that they increased usage limits to compensate for the higher token usage, so perhaps it is not as bad now.

This afternoon I was able to use Codex for about two hours on the $20 plan. Maybe limits will be tighter in the future. But with new data centers, new GPU generations, and research advances it might rather get cheaper.

Anyway, as you said, this is all pretty cheap. I'll go with the $100 Codex plan, since I now figured out how to nicely work on multiple changes in parallel via the Codex app with worktrees. I imagine the same is possible in Claude Code.


It seems to me a bit naive to think OpenAI would not increase prices/decrease usage limits at some point. $20 might cover a very small fraction of the actual cost that is incurred over a month of sustained usage.

No, I am happy with the results.

For a first test, it did seem like it burned through the usage even faster than usual.

GitHub Copilot’s 7.5x billing multiplier for Opus 4.7, versus 3x for Opus 4.6, seems to suggest it indeed consumes more tokens.

Now I’m just waiting for OpenAI to show their hand before deciding whether to upgrade from the $20 plan to the $100 one.


> It looks like full time work would require the 20x plan.

Full time work where you have the LLM do all the code has always required the larger plans.

The $20/month plans are for occasional use as an assistant. If you want to do all of your work through the LLM you have to pay for the higher tiers.

The Codex $20/month plan has higher limits, but in my experience the lower quality output leaves me rewriting more of it anyway so it's not a net win.

