Hacker News | httgbgg's comments

Only the first one. Ideally now there is no second prompt.


Are you aware that every tool call produces output which also counts as input to the LLM?


Are you aware that a lot of model tool calls are useless and a smarter model could avoid those?

Are you aware that output tokens are priced 5x higher than input tokens?


> a lot of model tool calls are useless

That’s just wrong. File reads, searches, and compiler output are the top input-token consumers in my workflow, and none of them can be removed. They make up the majority of my input tokens. That’s also why labs are trying to make 1M-token input windows work, and why compaction is so important to get right.

Regarding output: yes, but that wasn’t the topic in this thread. It’s just easier to argue, using input tokens, that the price has gone up. I have a hunch that output prices will go up similarly, but I can’t prove it. The jury’s out, IMO: https://news.ycombinator.com/item?id=47816960
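The billing mechanics being argued over can be sketched with a toy cost model: tool-call results (file reads, searches, compiler output) are billed as input tokens, and output tokens cost more per token. All dollar figures below are hypothetical placeholders; only the 5x output/input ratio comes from this thread.

```python
# Toy agent-turn cost model. Prices are hypothetical; only the 5x
# output/input ratio is taken from the discussion above.

INPUT_PRICE_PER_MTOK = 3.00                        # hypothetical $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 5 * INPUT_PRICE_PER_MTOK   # the claimed 5x ratio

def turn_cost(prompt_toks, tool_result_toks, output_toks):
    """Cost of one agent turn; tool-call results count toward input."""
    input_toks = prompt_toks + tool_result_toks
    return (input_toks * INPUT_PRICE_PER_MTOK
            + output_toks * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# One user prompt, but many tool calls dumping files into context:
# input dominates the token count even though output is pricier per token.
cost = turn_cost(prompt_toks=2_000, tool_result_toks=150_000, output_toks=8_000)
print(f"${cost:.3f}")
```

With these made-up numbers the 152k input tokens cost $0.456 versus $0.120 for the 8k output tokens, which is the shape of the workflow described above: input volume, not output price, drives the bill.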


This has no bearing on my comment. The point is that a better model avoids dozens of prompts and tool calls by making fewer, correct tool calls, without the user needing additional prompts.

I’m surprised this is even a question; obviously a better prompter has the same effect, and that’s not in dispute.


Plenty of people disagree that there is no use case for not wearing a seatbelt. That you find it impossible to imagine makes it an even better analogy, actually.


People can disagree with whatever they like; everyone is allowed to be stupid.

But most reasonable people agree there's no tangible use case for not wearing a seatbelt. There are countless tangible use cases for using software from outside the app store, which reasonable people can all acknowledge.


Ignoring the specific feature (writing AI slop), has anyone seen huge skills like this work well in practice?

