There is no "knowing" in LLMs, and it doesn’t matter for the proposed solution either. Detecting that an input is unusual, in the sense that nothing like it has been seen before, does not require understanding the pattern itself, as long as the only required action is reporting the event.
In simple terms: The AI doesn’t need to say, "something unusual is happening because I saw walking trees and trees usually cannot walk", but merely "something unusual is happening because what I saw was unusual, care to take a look?"
The challenge with these systems is that everything is unusual unless trained otherwise, so the false positive rate is exceptionally high. As a result, the systems get tuned to ignore most untrained/unusual things.
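To make the trade-off concrete, here is a minimal sketch of threshold-based novelty detection. All values, names, and thresholds are hypothetical, invented purely for illustration; the point is that the detector flags an input only because its score is high, without understanding what the input is, and that raising the threshold to suppress false positives also hides genuine anomalies.

```python
def anomaly_score(sample, training_set):
    """Score = distance to the nearest training example (higher = more unusual)."""
    return min(abs(sample - t) for t in training_set)

def is_unusual(sample, training_set, threshold):
    """Flag purely on score; no understanding of *what* the sample is."""
    return anomaly_score(sample, training_set) > threshold

training = [1.0, 1.2, 0.9, 1.1]          # "normal" observations (hypothetical)
stream   = [1.05, 3.0, 1.15, 2.5, 1.0]   # inputs to monitor (hypothetical)

# A low threshold flags nearly anything not seen in training: many false positives.
low  = [is_unusual(x, training, 0.2) for x in stream]

# Tuning the threshold up suppresses the noise, but the real outliers vanish too.
high = [is_unusual(x, training, 2.5) for x in stream]
```

With the low threshold, the samples 3.0 and 2.5 are flagged; with the high threshold, nothing is, which is exactly the "tuned to ignore unusual things" failure mode described above.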
I bet they’d have similar luck if they dressed up as bears. Or anything else non-human, like a triangle.
And that is an entirely different problem, isn’t it?