Hacker News

If OpenAI had an embedded Python interpreter (or, for that matter, an interpreter for the lambda calculus or some other Turing-complete formalism), this approach would work, but no current LLM has an embedded symbolic interpreter. LLMs today are essentially probability distributions learned from a training corpus and have no symbolic reasoning capabilities. There is no backtracking, for example, like in Prolog.
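To make the Prolog comparison concrete, here is a minimal sketch (in Python, since the comment mentions a Python interpreter) of what backtracking means: generators act as choice points, and when a constraint fails, the search falls back to the previous choice and tries the next candidate. All names here (`solve`, `choose`, `constraint`) are illustrative, not from any real library.

```python
def solve(goals, bindings):
    """Depth-first search: satisfy each goal in turn; exhausting a
    generator 'fails' back to the previous choice point (backtracking)."""
    if not goals:
        yield bindings
        return
    goal, rest = goals[0], goals[1:]
    for new_bindings in goal(bindings):   # each candidate is a choice point
        yield from solve(rest, new_bindings)

def choose(var, domain):
    """Prolog-style nondeterministic choice: try each value in the domain."""
    def goal(b):
        for v in domain:
            yield {**b, var: v}
    return goal

def constraint(pred):
    """Succeeds (yields) only if the predicate holds; otherwise fails."""
    def goal(b):
        if pred(b):
            yield b
    return goal

# Example query: X, Y in {1,2,3} with X + Y == 4 and X < Y.
goals = [
    choose("X", [1, 2, 3]),
    choose("Y", [1, 2, 3]),
    constraint(lambda b: b["X"] + b["Y"] == 4),
    constraint(lambda b: b["X"] < b["Y"]),
]
solutions = list(solve(goals, {}))
print(solutions)  # [{'X': 1, 'Y': 3}]
```

The point is that this kind of exhaustive, stateful search over discrete choices is exactly what a single forward pass through an LLM does not perform: the model emits one token distribution at a time with no mechanism to retract an earlier choice.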


