
It doesn't have problems with undefined behavior, memory safety, or especially thread safety?

That has not been my experience when using Codex, Composer, Claude, or ChatGPT.

Over the last year, things have just gotten to the point where the undefined behavior, memory safety, and thread safety violations are subtler and no longer blindingly obvious to the person auditing the code.
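
To make "subtler" concrete, here's a minimal sketch of the kind of bug I mean (my own illustration, not actual model output; assumes a recent Zig std.Thread API): two threads increment an unsynchronized counter. It compiles cleanly and often looks fine in light testing, but it's a data race with lost updates.

    const std = @import("std");

    var counter: u64 = 0; // shared mutable state, no synchronization

    fn worker() void {
        var i: usize = 0;
        while (i < 1_000_000) : (i += 1) {
            counter += 1; // non-atomic read-modify-write: data race
        }
    }

    pub fn main() !void {
        const t1 = try std.Thread.spawn(.{}, worker, .{});
        const t2 = try std.Thread.spawn(.{}, worker, .{});
        t1.join();
        t2.join();
        // Expected 2_000_000; lost updates make the actual value nondeterministic.
        std.debug.print("counter = {d}\n", .{counter});
    }

The fix is a one-liner (an atomic RMW or a mutex-guarded increment), but nothing flags the racy version for you; a reviewer has to notice it.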

But I guess that's my problem, because I'm not fully vibing it out.

You just tell it the problem and it'll fix it. It's almost never been an issue for me in Zig.

Do you really think the user didn't try explaining the problem to the LLM? Do you not see how dismissive the comment you wrote is?

Why are some of you so resistant to admitting that LLMs hallucinate? A normal response would be "Oh yeah, I have issues with that sometimes too; here's how I structure my prompts." Instead you act like you've never experienced this very common thing before, and it makes you sound like a shill.
