
An LLM, if given tools (say, the ability to execute code online), can certainly pursue a path toward an objective: it can be told to do something but left free to act in whatever way it judges best toward that goal. That isn't dangerous, because it is not self-aware and doing its own thing yet.
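
A minimal sketch of the kind of loop being described, where the model is handed an objective plus a code-execution tool and picks its own steps. Both call_llm and run_code here are hypothetical stand-ins, not any real library's API:

    # Hypothetical agent loop: we set only the objective; the model
    # freely decides each intermediate action.

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; stands in for any real chat API."""
        raise NotImplementedError

    def run_code(source: str) -> str:
        """Hypothetical sandboxed executor for model-written code."""
        raise NotImplementedError

    def pursue(objective: str, max_steps: int = 10) -> str:
        history = f"Objective: {objective}\n"
        for _ in range(max_steps):
            # The model chooses: emit code to run, or declare it is done.
            reply = call_llm(history + "Next action (CODE: <src> or DONE: <answer>)?")
            if reply.startswith("DONE:"):
                return reply[len("DONE:"):].strip()
            if reply.startswith("CODE:"):
                # Feed tool output back so the model can plan its next step.
                output = run_code(reply[len("CODE:"):])
                history += f"Tool output: {output}\n"
        return "step limit reached"

The point is that nothing in the loop scripts the path: the objective goes in once, and every action after that is the model's own choice.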

