Hacker News

Seems like the general problem is consistency within the model. To people working in the field: what options are currently being explored to solve this?


1) repeat that people also lie, so it is okay for LLMs to lie

2) ingest as much VC money and stolen training data as we can

3) profit

