
This. We are mostly token predictors. We're not entirely token predictors, but it's at least 80%. Being in the AI space the past few years has really made me notice how similar we are to LLMs.

I notice it so often in meetings where someone will use a somewhat uncommon word, and then other people will start to use it because it's in their context window. Or when someone asks a question like "what's the forecast for q3" and the responder almost always starts with "Thanks for asking! The forecast for q3 is...".

Note that low-effort does not mean low-quality or low-value. Just that we seem to have a lot of language/interaction processes that are low-effort. And as far as dating goes, I'm sure I've been in some relationships where they and/or I were not going beyond low-effort, rote conversation generation.



> Or when someone asks a question like "what's the forecast for q3" and the responder almost always starts with "Thanks for asking! The forecast for q3 is...".

That's a useful skill for conference calls (or talks) because people might want to quote your answer verbatim, or they might not have heard the question.


Agreed, it is useful for both speaker and listener because it sets context. But that's also why LLMs are prompted to do the same.



