We've all probably heard the quip: "Even AI experts don't understand AI!"
What does it actually mean? I often hear it invoked to make a point about AI safety, but isn't that opacity, in a sense, the whole point of AI? Neural networks learn their features via gradient descent precisely so that humans don't have to hand-encode them.
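To make that concrete, here is a minimal sketch (not from the original text; the network size, learning rate, and XOR task are illustrative choices) of a tiny network trained by plain gradient descent. Nobody specifies what the hidden units should represent; the "features" in the hidden layer emerge from the training loop itself.

```python
import numpy as np

# A tiny 2-8-1 network trained on XOR with plain gradient descent.
# No human encodes the intermediate features (e.g. AND/OR sub-patterns);
# the hidden layer discovers whatever representation works.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass: hidden activations are learned features, not designed ones
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradient of binary cross-entropy w.r.t. each parameter
    dp = p - y
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print((p > 0.5).astype(float).ravel())
```

After training, the network classifies XOR correctly, yet the only way to say what each hidden unit "means" is to inspect the learned weights after the fact: exactly the situation the quip is pointing at.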