This definition makes sense, but in the context of LLMs it still feels misapplied. What the model providers call "guardrails" are supposed to prevent malicious uses of the LLM, and anyone trying to use the LLM maliciously is "explicitly trying to get off the road."
