> The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.
Yeah, I was thinking the same thing. If there's any chance it's a lie, that's the only way it happens -- the guy at the wheel decided to take the fall without telling anyone else. Maybe he pushed the button by accident, the thing freaked out, and he decided no one needed to know. Possible.
But that still doesn't fly. Everything that happens to that car is measured and recorded for later analysis. There is a verifiable record of when it is under human control and when it isn't. Faking that record convincingly enough to cover up the only public accident in the history of the project is almost certainly beyond the capabilities of a single engineer who was just in a fender bender. (And for what it's worth, Google has claimed to have logs which prove that the car was in manual mode, which I assume are available to legislators.)
Human beings crash cars all the time. Autonomous vehicles crash -- well, there's actually no evidence that Google's self-driving car has ever crashed[0]. So consider the situation: one of their self-driving cars crashes with a human behind the wheel, outside the context of an autonomous test drive; he says at the scene that he was driving; Google confirms they have proof he was driving; and lying to the public about something provable is a really, really bad idea when you're trying to get a law passed...
If there were any evidence, any shred of inconsistency to their story, I'd be skeptical. But there isn't. There's just no reason to doubt them besides "Companies always lie." Or "Of course they'd say that." Yeah, I'd say that's absurd.
[0] In traffic, obviously. One can only presume it has hit many obstacles during development.