Can you share an actual example demonstrating this potential pathology?
Like many things in ML, this might be a problem in theory but empirically it isn’t important, or is very low on the stack rank of issues with our models.
A fold means two different regions of topology get projected across each other.
It's a problem for the simplest of reasons: information is lost. You cannot reconstruct the original topology.
In terms of the model, it now can't distinguish between what were completely different regions.
From the Klein bottle perspective, a 4D shape gets projected into a 3D shape. On most of the bottle, there is still a 1 to 1 topological mapping from 3D to 4D versions.
But where two surfaces now intersect, there is no way to distinguish between previously unrelated information. The model won't be able to do anything sensible with that.
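A toy sketch of the information-loss argument (plain Python, not the Klein bottle itself; the points and projection here are illustrative assumptions): a fold is just a non-injective map, and once two distinct points land on the same image, no function can recover which one you started from.

```python
# Two points on a 4D surface that are clearly distinct in 4D...
p = (1.0, 0.0, 0.0,  1.0)
q = (1.0, 0.0, 0.0, -1.0)

def project(v):
    # Drop the 4th coordinate: a simple projection R^4 -> R^3.
    # Wherever the projected surface self-intersects, this map
    # is not 1-to-1 and cannot be inverted.
    return v[:3]

assert p != q                        # distinct in 4D
assert project(p) == project(q)      # identical after projection
```

This is the whole pathology in miniature: on most of the surface the projection is invertible, but at the fold the preimage is ambiguous, so anything downstream of the projection treats the two regions as the same.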
In 2015, when friends asked me whether I was worried about self-driving cars and AI, I answered:
"I'll start worrying about AI when my Tesla starts listening to the radio because it's bored."
... that didn't take too long
Maybe that's why my car keeps turning on the music when I didn't ask -- I had always thought Tesla devs were just absolute noobs when it came to state management.
I like the Bayesian Surprise definition for this. It's not about predicting the exact next state of the world (or the next frame of the noisy TV) but about how much the next state changes your model of the world.
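A minimal sketch of that definition (plain Python; the discrete two-hypothesis setup is my own toy example, following the usual formulation of Bayesian Surprise as the KL divergence from prior to posterior):

```python
import math

def kl(p, q):
    # KL divergence D(p || q) for discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def surprise(prior, likelihood):
    # Bayesian Surprise: how far one observation moves your beliefs,
    # measured as KL(posterior || prior).
    post = [pr * li for pr, li in zip(prior, likelihood)]
    z = sum(post)
    return kl([x / z for x in post], prior)

prior = [0.5, 0.5]  # two hypotheses about the world

# Noisy TV: every frame is equally (un)likely under both hypotheses,
# so the frame is unpredictable yet beliefs don't move: surprise = 0.
print(surprise(prior, [0.01, 0.01]))  # 0.0

# An observation that strongly favors hypothesis 1 does move beliefs.
print(surprise(prior, [0.9, 0.1]))    # > 0
```

The noisy-TV frame maximizes prediction error but carries zero surprise in this sense, which is exactly why the definition sidesteps that failure mode.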
I spent some (too much) time trying to get pretty much the same thing running using GOW [1]. It was quite a bit harder than I expected, requiring an HDMI dummy plug to get the X server config right, etc.
Previously there were also so-called "flicker TAN" approaches: https://de.wikipedia.org/wiki/Transaktionsnummer#chipTAN_com...