
How will people without Python knowledge know that the script is 100% correct? You can say "Well, they shouldn't use it for mission-critical stuff" or "Yeah, that's not a use case, it could be useful for qualitative analysis," etc., but you can bet they will use it for everything. People use ChatGPT as a search engine and a therapist, which tells us enough.


If you have a mechanism that can prove arbitrary program correctness with 100% accuracy, you're sitting on something more valuable than LLMs.


So, a human-powered LLM user?


For sure, I've never seen a human write a bug or make a mistake in programming.


That's why we created LLMs for that.



