I feel one problem with this comparison is that self-driving cars haven't yet made the leap to replacing humans. In other words, saying AGI will arrive the way self-driving cars have arrived incorrectly presumes that self-driving cars have arrived; what it actually asserts (maybe correctly, maybe not) is that neither will arrive.
This is especially concerning because many top minds in the industry have stated with high confidence that artificial intelligence will undergo an intelligence "explosion," and that we should fear it (or welcome it with open arms, depending on who you ask). So what we're being told to expect has been downgraded from "it'll happen quickly" to "it'll happen slowly" to, as you say, "it'll happen the way these other domains of computerized intelligence have replaced humans, which is to say, they haven't yet."
Point being: we've observed these systems ride a curve, and the linear extrapolation of that curve does seem to arrive, eventually, at human-replacing intelligence. But what if it... doesn't? What if that curve is really an asymptote?