What would such an AI even look like? If it spots every real danger but also hallucinates a few that aren't there, it gets ignored or switched off for needlessly slowing the traveler down (false positives, apparently an issue with early Google prototypes [0]). If it only flags real dangers but misses most of them, it isn't helping (false negatives, which are even worse when a human blindly assumes the machine knows best, as in the Uber fatality [1]). And if it is about as safe as a human overall but makes different kinds of mistakes, people rely on it right up until it crashes, then go apoplectic because it missed something any human would consider obvious (e.g. Tesla, whose cars get slightly safer with the AI active, yet people keep posting clips of the AI getting confused by situations they consider trivial [2]).
[0] https://theoatmeal.com/blog/google_self_driving_car
[1] https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg
[2] https://youtube.com/watch?v=Wz6Ins1D9ak
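That third profile is the counterintuitive one, so here is a toy sketch of it. The numbers are entirely made up (a 5% miss rate for both human and AI); the point is only that two systems with identical overall accuracy look very different depending on whether their failures overlap with human failures:

```python
import random

random.seed(0)

N = 100_000         # simulated hazard events
HUMAN_MISS = 0.05   # hypothetical human miss rate
AI_MISS = 0.05      # same overall miss rate as the human

def obvious_failure_fraction(correlated: bool) -> float:
    """Fraction of AI misses that a human observer would have caught."""
    ai_failures = 0
    obvious_failures = 0
    for _ in range(N):
        human_missed = random.random() < HUMAN_MISS
        if correlated:
            # AI fails exactly where a human would fail too
            ai_missed = human_missed
        else:
            # AI failures are independent of human judgment
            ai_missed = random.random() < AI_MISS
        if ai_missed:
            ai_failures += 1
            if not human_missed:
                obvious_failures += 1
    return obvious_failures / ai_failures

# Correlated errors: essentially none of the crashes look "obvious".
# Independent errors: roughly 95% of crashes are ones any human would
# have caught. Same miss rate, wildly different optics.
```

Both configurations miss about the same number of hazards, but in the uncorrelated case nearly every failure is one a human finds inexplicable, which is exactly the reaction the Tesla clips capture.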