A story is not just words crammed together that sound plausible. Is the AI going to know about pacing? About character motivations? About interconnecting disparate plots? That paper reads like a scientist's conception that a story is just words, rather than a set of complex trade-offs between a story's beginning, middle, and end: complexity and planning that won't come from any sort of next-token generation.
These are “stories” in the most vacuous definition possible, one that is just “and then this happened” like a child’s conception of plot
> Is the AI going to know about pacing? About character motivations? About interconnecting disparate plots?
For LLMs like GPT-4, all of this seems reasonable to account for, and reasonable to assume the LLM can handle, given appropriate guidance/frameworks (which may be just classical programming).
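To make the "classical programming as framework" idea concrete, here is a minimal sketch of one way it could work: pacing and plot structure are held in an ordinary data structure that plain code walks through, and the model is only ever asked to fill in prose for one beat at a time with the prior scenes as context. The `generate` callable is a hypothetical stand-in for any LLM API, not a real library; the outline contents are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Beat:
    """One planned story beat: a summary decided up front, prose filled later."""
    summary: str
    prose: str = ""


@dataclass
class Outline:
    """The whole plot, fixed before any text is generated."""
    beats: List[Beat] = field(default_factory=list)


def draft_story(outline: Outline, generate: Callable[[str], str]) -> str:
    """Walk the outline in order, asking the model for one scene at a time.

    Structure, pacing, and plot interconnection live in the outline
    (classical code); the model only turns each beat into prose, with
    all previously generated scenes supplied as context.
    """
    context: List[str] = []
    for beat in outline.beats:
        prompt = (
            "Story so far:\n"
            + "\n".join(context)
            + f"\n\nWrite the next scene covering: {beat.summary}"
        )
        beat.prose = generate(prompt)
        context.append(beat.prose)
    return "\n\n".join(b.prose for b in outline.beats)
```

A caller would build the `Outline` (by hand, or with another planning pass) and pass in whatever model client they use as `generate`; the point is that "and then this happened" ordering is imposed by the loop, not hoped for from next-token prediction.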