
Nah -- I feel like I have my eyes pretty wide open about the shortcomings of LLMs (but still often find them useful).

But any argument seeking to dunk on LLMs needs to not apply equally well to the alternative (humans).



And, wouldn't you know it, it actually doesn't apply equally well to the alternative (humans).


How much of it doesn't? We're deterministic? (Aren't we less deterministic than LLMs?) All of our training is auditable? (There's a wealth of unknown experience in each person writing code, to say nothing of the unknown and irrelevant experiences in our evolutionary background.)
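
For what it's worth, the determinism point is easy to check in practice. Here's a minimal sketch (the model name "gpt2" is just an illustrative choice, not anything from this thread): with greedy decoding and fixed weights, an LLM maps the same prompt to the same completion every run, at least on CPU where the kernels are deterministic.

```python
# Minimal sketch: greedy decoding (do_sample=False) makes the output a pure
# function of the input, so repeated runs give byte-identical completions.
# "gpt2" is an arbitrary small example model; any causal LM would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The main shortcoming of large language models is"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = []
for _ in range(3):
    # do_sample=False -> greedy decoding, no sampling randomness involved
    ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    outputs.append(tokenizer.decode(ids[0], skip_special_tokens=True))

# All three completions match (on CPU; GPU kernels can add float nondeterminism)
assert outputs[0] == outputs[1] == outputs[2]
print(outputs[0])
```

Humans, by contrast, offer no equivalent knob to turn.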

Maybe you can argue we don't use statistical completion and prediction as a heavy underpinning of our reasoning, but that's hardly settled.

Nah -- you'll have to try harder to make an argument that really focuses on how LLMs are different from the alternative.



