> not as statistical machines, but as geometric machines. When you train LLMs you are essentially moving concepts around in a very high-dimensional space.
That's intriguing, and would make a good discussion topic in itself. Although I doubt the "we have the same thing in [various languages]" bit.
Mother/water/bed/food/etc. translate easily into most (all?) languages. Obviously such concepts cross languages.
In this analogy those concepts are points in a high-dimensional space, but we can also translate concepts that don't have a single word attached to them. People everywhere have a way to refer to a "corrupt cop" or a "chess opening" and so forth.
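One way to make the "points in space" picture concrete is a toy sketch of phrase embeddings. The vectors and vocabulary below are invented for illustration (a real model would have hundreds or thousands of dimensions and learned values), but the idea is the same: a multi-word concept like "corrupt cop" is still just one point, and nearby points mean similar concepts:

```python
import numpy as np

# Hypothetical 4-dimensional "embedding space" with made-up vectors;
# real LLM embeddings are learned and much higher-dimensional.
emb = {
    "corrupt": np.array([0.9, 0.1, 0.0, 0.2]),
    "cop":     np.array([0.1, 0.8, 0.3, 0.0]),
    "dirty":   np.array([0.8, 0.2, 0.1, 0.1]),
    "officer": np.array([0.2, 0.9, 0.2, 0.1]),
}

def phrase_vector(words):
    """Average the word vectors: a multi-word concept is one point too."""
    return np.mean([emb[w] for w in words], axis=0)

def cosine(a, b):
    """Cosine similarity: close to 1.0 means nearby directions in the space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two paraphrases of the same concept land near each other in the space.
sim = cosine(phrase_vector(["corrupt", "cop"]),
             phrase_vector(["dirty", "officer"]))
print(round(sim, 3))
```

With these toy numbers the two phrase vectors come out nearly parallel, which is the geometric version of "same concept, different words", and cross-lingual embedding models extend the same trick across languages.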