
Ever heard of the halting problem [0]? Every time I hear these claims, it sounds like someone saying we can travel in time as soon as we invent a faster-than-light vessel, or better, Doctor Who's TARDIS. There's a whole set of theorems showing that a formal system (which is what a computer is) has fundamental limits: there are classes of problems it provably cannot solve. For anything LLMs do, you can write better-performing software, except for the one task they are best suited to: translation between natural languages. And that's only because it's a pain to write all the rules by hand.

[0]: https://en.wikipedia.org/wiki/Halting_problem
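For anyone who wants the concrete argument, here is a minimal Python sketch of the classic diagonalization proof behind the halting problem. The names (halts, paradox) are purely illustrative, not from any library; the point is that assuming a general halting decider leads to a contradiction.

    # Hypothetical decider: returns True iff program(arg) eventually halts.
    # The argument below shows no total, correct implementation can exist.
    def halts(program, arg):
        raise NotImplementedError("a general halting decider is impossible")

    def paradox(program):
        # Do the opposite of whatever the decider predicts for program(program).
        if halts(program, program):
            while True:        # loop forever if the decider says "halts"
                pass
        return "halted"        # halt if the decider says "loops forever"

    # Does paradox(paradox) halt?
    # If halts(paradox, paradox) is True, paradox(paradox) loops forever.
    # If it is False, paradox(paradox) halts immediately.
    # Either answer contradicts the decider, so `halts` cannot exist.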



LLMs are already doing genuine reasoning (and no, I don't mean consciousness or qualia), and they have been since GPT-3.5.

They can already take descriptions of tasks and write computer programs to do those tasks, because they have a genuine understanding of the tasks (again no qualia implied).
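As a concrete illustration of "task description in, program out" (a hedged sketch, not anyone's actual setup: the OpenAI Python SDK, the model name, and the prompt are my own assumptions here):

    # Minimal sketch: ask a chat model to turn a task description into code.
    # Assumes the official `openai` Python package and an API key in the
    # environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    task = "Write a Python function that returns the n-th Fibonacci number."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": task}],
    )
    print(response.choices[0].message.content)  # the generated program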

I never said there are no limits to what LLMs can do, or no limits to what logic can prove, or even no limits to what humans can understand. Everything has limits.

EDIT: And before you accuse me of saying LLMs can understand all tasks, go back and re-read the post so you don't make that mistake again.



