
One random, not-so-great example is a weak basis for drawing any useful conclusions. Yes, it's an experiment and we got a result. And now what? If I want reliable work results I would still go with the strategy of being as concrete as possible, because in all my AI activities, anything else makes the results more and more random. For anything non-standard (i.e., not something you could copy and paste directly from a Google or SO result), no matter how simple, I'd rather provide the basic step-by-step algorithm myself and leave only the actual implementation to the AI.


My parent said:

> For that task you need to give it points in what to do so it can deduce it's task list, provide files or folders in context with @…

- and my point is that you do not have to give ChatGPT those things. GP did not, and they got the result they were seeking.

That you might get a better result from Claude if you prompt it 'correctly' is a fine point, but it isn't mine.

(I've no horse in this race. I use Claude Code and I'm not going to switch. But I like to know what's true and what isn't and this seems pretty clear.)

