Maybe because "machine learning" isn't all that interesting, whereas "proper AI" has been taken for granted in science fiction? A sentient AI is much more interesting than a "dumb" ML algorithm that can recognize cat pictures on the internet.
Stanislaw Lem wrote a few really good short stories about machine AIs on the brink of sentience (e.g. robots that should be dumb machines, but show signs of self-awareness and human traits).
Some simple regressions, which are also ML, can be understood completely by human beings. You can even calculate their results by hand. That doesn't mean their models are always a good fit, or that the world never changes, especially when it's impacted by feedback loops driven by ML.
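As a minimal sketch of the "calculable by hand" point: ordinary least squares for a one-variable fit has a closed form small enough to evaluate with pencil and paper. The function name and toy data below are hypothetical, just to illustrate.

```python
# Fit y = slope*x + intercept with the textbook closed-form OLS formulas.
# Every step here is inspectable; nothing is a black box.
def simple_linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = sample covariance(x, y) / sample variance(x)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data lying exactly on y = 2x, so the answer is checkable by hand.
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
slope, intercept = simple_linear_regression(xs, ys)
```

Here slope comes out as 2.0 and intercept as 0.0, which you can verify manually; that full transparency is exactly what disappears once a model has millions of parameters.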
Nobody can claim to fully understand models with millions or billions of parameters. We know they are "overfit", but in certain scenarios they may still work much better than manually crafted rules. So we end up with "it depends", and then someone starts profiting, with real-world implications.