
Oh, I get your argument. You think any function that maps a [usually] longer input to a fixed-length output is a hash. I totally get what you're saying, and it necessarily also means every LLM "inference" is actually a hashing algo too, as you noticed yourself, tellingly. So taking a long sequence of input tokens and predicting the next token is (according to you) a "hashing" function. Got it. Thanks.
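To spell out how shallow that definition is, here's a toy sketch (Python; the predict_next_token stand-in is hypothetical, not any real model API): both signatures map arbitrary-length input to a short fixed-size output, and that shape is the only property they share.

    import hashlib

    def sha256_digest(data: bytes) -> bytes:
        """A real hash: arbitrary-length input -> fixed 32-byte output."""
        return hashlib.sha256(data).digest()

    def predict_next_token(tokens: list[int]) -> int:
        """Toy 'LLM inference': arbitrary-length input -> one token.
        Hypothetical placeholder; a real model would run a forward pass here."""
        return sum(tokens) % 50_000  # pretend vocab size of 50k

    # Both have the shape (long input) -> (short fixed output), so by the
    # definition above, both are "hashes". But a hash is designed for collision
    # resistance and avalanche behavior, while next-token prediction is designed
    # to be *predictable* from input structure. The shared shape says nothing
    # about shared purpose.
    print(sha256_digest(b"hello").hex())
    print(predict_next_token([101, 2023, 2003]))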

