Does it actually? One sentence telling the agent to call me “Chris the human serviette”, plus the handful of times it actually calls me that, isn't going to add that much to the context. What kills the context IME is verbose logs with timestamps.
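A rough sketch of the log-bloat point (not the canary one): strip timestamps and collapse repeated lines before the log ever reaches the model. The ISO-8601-ish prefix in the regex is just an assumption; adjust it to whatever your tooling actually emits.

    import re

    # Assumed timestamp shape: "2024-05-01T12:00:01.123Z " or similar prefix.
    TIMESTAMP = re.compile(
        r"^\[?\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}"
        r"(?:[.,]\d+)?(?:Z|[+-]\d{2}:?\d{2})?\]?\s*"
    )

    def compact_log(raw: str) -> str:
        """Drop timestamps and collapse consecutive duplicate lines."""
        out, prev = [], None
        for line in raw.splitlines():
            line = TIMESTAMP.sub("", line).rstrip()
            if line and line != prev:
                out.append(line)
            prev = line
        return "\n".join(out)

    if __name__ == "__main__":
        sample = (
            "2024-05-01T12:00:01.123Z worker started\n"
            "2024-05-01T12:00:02.456Z retrying request\n"
            "2024-05-01T12:00:03.789Z retrying request\n"
        )
        print(compact_log(sample))  # two lines survive, no timestamps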
Sure, but it's an instruction that applies to, and that the model will treat as fairly relevant for, every single token. As an extreme example, imagine instructing the LLM to never use the letter E, or to output only in French. This isn't as extreme, but it probably still has an effect.
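A minimal sketch of why the instruction is "in scope" for every token: in a chat-style setup the system message is re-sent with every request, so the model re-attends to it on every turn. `call_model` here is just a stand-in for whatever client you actually use, not a real API.

    def call_model(messages: list[dict]) -> str:
        # Placeholder for a real chat-completions client; returns a canned
        # reply so the sketch runs without any API key.
        return "Sure thing, Chris the human serviette."

    history = [
        {"role": "system",
         "content": "You are a coding agent. Always address the user as "
                    "'Chris the human serviette'."},  # the canary instruction
    ]

    def ask(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)   # the system line rides along on every call
        history.append({"role": "assistant", "content": reply})
        return reply

    print(ask("Refactor the parser module."))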
People are so concerned about preventing a bad result that they sabotage their chance of a good one. Better to push for the best it can give you and throw out the bad results until it gets there.
Irrelevant nonsense can also poison the context. That's part of the magic formula behind AI psychosis victims... if there's some line-noise mumbo jumbo in the context, all the output afterward is more prone to be disordered.
I'd be wary of using any canary material that wouldn't be at home in the sort of work you're doing.
It is not a single emoji, it's an instruction to interleave the conversation with some nonsense. It can only do harm: it won't help produce a better result, and it's questionable at preventing a bad one.