
LeCun's argument was that a single erroneous token would derail the rest of the response.

This is obviously false: a reasoning model (or a non-reasoning one with a better prompt) can recognize an error and choose a different path, so the error does not end up in the answer.

You're talking about a different problem: context rot. It's possible that an error makes subsequent performance worse. So what?

People can also get tired when they are solving a complex problem, and they use various mitigations: e.g. it might help to start from a clean sheet. Similar mitigations apply to LLMs: e.g. you can do MCTS (tree-of-thought) or simply edit the reasoning trace, replacing the faulty part.
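
To make that concrete, here is a rough sketch of the first mitigation (not LeCun's setup or any particular model's API): a best-first tree-of-thought loop that samples several candidate reasoning steps, scores them with a critic, and only appends a step that passes, so a single bad sample never has to enter the final trace. generate_candidates() and score() are hypothetical stand-ins for an LLM call and a verifier.

    import random

    def generate_candidates(trace, k=3):
        # Placeholder for k sampled continuations of the reasoning trace
        # (in practice, an LLM call conditioned on the trace so far).
        return [f"step {len(trace) + 1} (sample {i})" for i in range(k)]

    def score(trace, candidate):
        # Placeholder for a verifier/critic score; higher is better.
        return random.random()

    def solve(max_steps=5, threshold=0.2):
        trace = []
        for _ in range(max_steps):
            scored = [(score(trace, c), c) for c in generate_candidates(trace)]
            best_score, best = max(scored)
            if best_score < threshold:
                # All candidates look faulty: retry this step instead of
                # letting a bad token sequence into the trace.
                continue
            trace.append(best)
        return trace

    print(solve())

The point of the sketch is only that error recovery is a search/editing problem on top of the sampler, not something that requires abandoning autoregressive models.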

"LLMs are not absolutely perfect and require some algorithms on top thus we need a completely different approach" is a very weird way to make a conclusion.
