
> In some ways it’s no different than working with some interns. You have to prompt them to “did you consider if your code matched all of the requirements?”.

I really hate this description, but I can't quite fully articulate why yet. It's distinctly different because interns can form new observations independently; AIs cannot. They can make another guess at the next token, but if the model could have predicted it on the second try, it could equally have predicted it on the first, so it's not a new observation. The way I think through a novel problem produces drastically different paths and outputs than an LLM's: they guess and check repeatedly, they don't converge on an answer. Which you've already identified:
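
To make the "another guess is not a new observation" point concrete, here's a toy sketch in Python. Nothing below is from the thread; the vocabulary size and logits are invented. With frozen weights, a retry is just another draw from the same fixed distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for an LLM's next-token distribution: a fixed
    # softmax over a tiny vocabulary (hypothetical values).
    logits = np.array([2.0, 1.0, 0.5, -1.0])       # frozen weights -> frozen logits
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax

    first_guess = rng.choice(len(probs), p=probs)
    second_guess = rng.choice(len(probs), p=probs)  # same distribution, another draw

    # No weights changed between the two draws, so any token reachable
    # on the second try was already just as likely on the first.
    # Re-sampling explores the same distribution; it doesn't form a
    # new observation the way a person updating their beliefs does.
    print(probs, first_guess, second_guess)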

> LLMs are different in that they’re sorta lobotomized. They won’t learn from tutoring “did you consider” which needs to essentially be encoded manually still.

This isn't how you work with an intern (unless the intern is unable to learn).



The whole point of an intern is that after a month they can act without coaching. Humans do actually learn; it is quite a revelation to see a child soak up data like an AI on steroids.



