You're talking about describing your memories of your inner experiences. Memories transform with time; sometimes I'm not sure whether what I think I remember actually happened to me, or whether it's something I read or saw in a movie, or something someone else described to me. Fake memories like that might feel exactly the same as the things I actually experienced.
GPT-4 has a lot of such fake memories. It knows a lot about the world, and about feelings, because it has "experienced" a lot of detailed descriptions of all kinds of sensations - far more than any human actually experiences in a lifetime. If you can express it in words, be it poetry or otherwise, GPT-4 can understand it and reason about it just as well as most humans. Its training data is equivalent to millions of life experiences, and it is already at a scale where it might be capable of absorbing more of these experiences than any individual human.
GPT-4 does not "get" poetry the way a human does, but it can describe very well the feelings a human is likely to have when reading any particular piece of poetry. You don't need to explain such things to GPT-4 - it already knows, probably a lot more than you do, at least in any testable way.
Imagine a world without words. No need to imagine really. It exists. It’s everywhere. It’s the core. It’s what words represent, but words can only represent it to an entity that has experienced it to some degree. ChatGPT “knows” nothing about it. You do. Whether you recognize it or not.
ChatGPT is a machine, an algorithm, a recombinator of symbols. It doesn’t know what the symbols refer to, because each symbol necessarily refers to another symbol, until you finally reach a symbol that refers to a shared, real experience… perhaps (Hello Wittgenstein!). And ChatGPT has no experience. Just symbols. It can’t intuit anything. It can’t feel anything. Even if you put quotes around “feel”, what does that even mean for a software algorithm running on hardware that does not feed continuous, variable electrical sensations to the algorithm? It only feeds discrete symbols. Do you feel the number 739? Or do you “feel” it? Um, what? Whatever inner experience 739 happens to produce in you is grounded in some real experience from your past. Likewise, any fake memories that somehow seem real were still grounded in real feelings at some point. You could trace this back ad infinitum. If you are alive, you have experience. But ChatGPT has no experience, no grounding.
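To make the “discrete symbols” point concrete, here is a minimal sketch using OpenAI’s open-source tiktoken tokenizer (cl100k_base is the encoding used by GPT-4-family models; the exact sentence is just an example). It shows literally all the model ever receives: not sounds, not light, not voltages, just a sequence of integer IDs.

```python
# Minimal sketch of what an LLM actually "sees": discrete token IDs.
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is the tokenizer encoding used by GPT-4-family models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Do you feel the number 739?"
token_ids = enc.encode(text)

# Prints a plain list of integers - the model's entire "sensory input".
# There is no continuous signal here, no grounding outside the symbol stream.
print(token_ids)

# Round-trip: the integers map back to text, but each ID refers only to
# a chunk of characters, i.e. to other symbols, never to an experience.
print(enc.decode(token_ids))
```

Everything downstream operates on those integers (and on vectors derived from them); whether that can ever amount to “experience” is exactly what’s being argued here.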
The problem here might be that we are trying to use words and logic to describe something that cannot be described by either.