
Personally I've mostly been avoiding AI tools, but I have friends and colleagues who use or have used LLMs, or have at least tried to.

Those who seem to get the best results ask for a prototype or framework for how to do something. They don't expect to use the AI-generated code; it's purely there as inspiration and something they can poke at to learn about a problem.

Most seem to have a bad experience: the LLMs often don't actually know much, if anything, about the subject and make up weird stuff. A few colleagues have attempted to use LLMs for generating Terraform or CloudFormation code, but have given up on making it work. The LLMs they've tried apparently cannot stop making up non-existent resources. SRE-related code/problems anecdotally seem to fare worse than actual development work, but it feels to me like you still need to be a fairly good developer to get much benefit from an LLM.
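To illustrate the kind of hallucination described above, here's a minimal Terraform sketch. The first resource type is real; the second is invented here purely as an example of the plausible-looking but non-existent resource types an LLM might emit (the name, attributes, and AMI id are all illustrative, not real).

```hcl
# A real resource type from the AWS provider.
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"  # placeholder AMI id
  instance_type = "t3.micro"
}

# The kind of output colleagues reported: "aws_instance_self_healing"
# is made up for this example and does not exist in the AWS provider,
# so `terraform validate` would typically reject it as an unsupported
# resource type.
resource "aws_instance_self_healing" "web" {
  instance_id  = aws_instance.web.id
  auto_restart = true
}
```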

The wall we're hitting may be that the LLMs simply don't have sufficient training data for a large set of problems.



> Those who seem to get the best results ask for a prototype or framework for how to do something.

That's what GitHub and sample projects are there for, and those examples would actually work. No need to train a model for that.




