This is a misleading analogy because, in it, Bob "reads the transcript", and we generally treat "reading" as something separate from our own conscious narrative. However, consider the alternate scenario where Bob "replays the stream of consciousness" from Alice. In that case, we might argue that Bob has conscious continuity with Alice.
The argument would then be that the context window is functionally closer to "replaying a stream of consciousness" than "reading a transcript".
It’s literally the text. It is not a stream of consciousness, and there is no carryover of any "consciousness" state. What gets passed along is the raw text, without even the embedding information (not that we should consider the embedding representation an internal state, because it is not).
That difference is the whole point. There is nothing to transfer.
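The "nothing to transfer" point can be made concrete with a toy sketch (this is an illustration, not a real model: `hidden_state` and `next_token` are hypothetical stand-ins for a transformer forward pass). The relevant property of transformer inference is that every internal quantity, activations and KV cache included, is a deterministic function of the token sequence alone, so recomputing from the transcript reconstructs the state exactly:

```python
# Toy sketch, not a real LLM: the point is that all "state" in
# transformer inference is a pure function of the raw token sequence.
# "Bob" recomputing from Alice's transcript reaches exactly the same
# internal state Alice had, so there is nothing else to transfer.

import hashlib

def hidden_state(tokens):
    """Stand-in for a forward pass: a deterministic function of the
    token sequence and nothing else."""
    h = hashlib.sha256()
    for t in tokens:
        h.update(t.encode())
    return h.hexdigest()

def next_token(tokens):
    """Stand-in for greedy decoding: the continuation depends only on
    the recomputed state, hence only on the transcript."""
    return hidden_state(tokens)[:4]

# Alice generates some tokens, building up "internal state".
alice = ["the", "cat", "sat"]
alice_state = hidden_state(alice)

# Bob receives only the raw text: no activations, no KV cache.
bob = list(alice)
bob_state = hidden_state(bob)

assert bob_state == alice_state              # identical state from text alone
assert next_token(bob) == next_token(alice)  # identical continuation
```

Under this (toy) assumption, "reading the transcript" and "replaying the stream" are the same operation: the transcript is not a lossy summary of some richer internal state, it is the entire input from which that state is rebuilt.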
Humans verbalize their conscious state. What little consciousness GPT has, i.e. CoT and the like, it picked up by imitating human verbalization of conscious interiority. (I don't think layer state qualifies as consciousness.) As such, I believe two things: 1. the generated text is, at most, a very weak conscious state; 2. if there is any conscious state at all, the text is where it resides.