I have not heard of a Google AI Car crashing into another car. I have heard that one of the Google AI Cars was involved in a small crash (the other car crashed into the Google Car) while it was driven manually just outside Google's headquarters.
Is that the incident you are referring to?
No, I didn't even know about that crash. Reading the reports, it apparently involved 4 cars, with the AI Prius running into the back of the car in front of it. Obviously, Google declared afterwards that it was being driven by a human at the time, and that the crash was due to human error. What else would you expect them to do? There's no way to verify Google's statement, and why would anyone want to anyway? Blaming it on one of the Google employees who are still required to be in the Google vehicles works out much better for every party involved in such an incident, and for Google in particular.
The incident I was referring to was a video of a Google AI car doing laps on a small test track that they had set up on a parking lot. The car basically missed a turn, left the track, hit the brakes and skidded into a car parked next to the track. Somehow (coincidentally?) the video is nowhere to be found using Google, but I remember watching it like it was yesterday. You could see two Google employees chatting next to the track until the car crashed in the background and they got pretty upset.
Just to make myself clear: I'm not trying to talk down the great achievements Google has made with their driverless cars, I'm just a little sceptical about how far they actually are once you strip away all the PR coming from Google.
This seems absurdly paranoid. Why is it, on the face of things, more likely that the LIDAR-packing robot which is scanning 360 degrees hundreds of times per second got into a fender-bender than that the human engineer who sometimes has to drive that same car did? What reason do you think Google has to lie about this to the public-- considering that if it were the result of a software bug, and that bug made it to production, and somebody died, that little white lie could easily be worth millions of dollars of liability and you better believe that Google PR knows it?
> Google AI car doing laps
And this is even sillier. You use a track precisely because you are not confident that the car will not go off the road, which is very likely to happen while you are building a robotic car. Allowing the car into traffic indicates that they are confident.
This project is trying to alter the future of transportation. The self-driving car could wind up being the most valuable property of Google's. The technological challenges were difficult; the legal and political ones will be Sisyphean. They must make their case to lawyers and legislators solidly in the pocket of the industries they plan to destroy.
Why on Earth do you think the fickle opinion of the tech-press-consuming public matters to them at all?
> This seems absurdly paranoid. Why is it, on the face of things, more likely that the LIDAR-packing robot which is scanning 360 degrees hundreds of times per second got into a fender-bender than that the human engineer who sometimes has to drive that same car did?
For all I know there could be a million other reasons why it crashed, none of them related to the hardware. Maybe it was a combination of a software problem and a failure of the human driver to intervene properly. Maybe the sensors picked up something that confused the software into thinking the car ahead was still moving. Maybe there was some kind of hardware failure. Maybe the AI simply never had to do the kind of unexpected emergency stop that caused this incident, and the minimum stopping distance wasn't programmed properly for the road conditions at the time. Or, maybe, it actually was fully operated by a human. Neither of us knows for sure.
> What reason do you think Google has to lie about this to the public-- considering that if it were the result of a software bug, and that bug made it to production, and somebody died, that little white lie could easily be worth millions of dollars of liability and you better believe that Google PR knows it?
You don't really have to ask that question, do you? Google has already spent millions upon millions on this technology, and probably had to pull out all the stops to lobby for the legislation changes they need for their tests. Incidents like this could instantly kill their ambitions and severely hurt the credibility of this technology. There is basically no risk in blaming the driver, who was, in fact, in the car and behind the wheel when it happened. Current legislation actually demands that there is a human driver behind the wheel for exactly this purpose: to be able to assign responsibility for what the car does to someone in case of an accident. Legally, the 'driver' was likely accountable for the incident anyway, even if he wasn't driving the car at all.
Nobody is served by saying it was the computer: not the owners of the other cars, not law enforcement, not the government that allowed these cars to drive around town, and definitely not Google.
> And this is even sillier. You use a track precisely because you are not confident that the car will not go off the road, which is very likely to happen while you are building a robotic car.
I brought the other incident up just to make the point that you don't hear anything when the Google cars fail, just success stories that don't include any details about failures or incidents. You don't assume the Google cars are perfect yet, right? So if Google is so open and honest about everything, like you seem to assume, why don't they tell us how often the cars require human intervention, or what kind of situations are still a problem for the AI?
> Allowing the car into traffic indicates that they are confident.
You realize that none of these cars actually goes on the road without a human driver ready to step in when it fails, right? And that all the routes the cars drive are carefully selected and likely full of pre-programmed and scripted details?
This is not to say the technology is worthless just because Google is still learning, but statements like yours nicely show what's so strange about this driverless-car discussion. Just because Google is confident enough to have AI cars with people behind the wheel driving through town, you are confident that you have some insight into how those cars would actually do without a human backup driver behind the wheel. Unless you are a Google employee working on these cars, you know nothing more than what Google wants you to know, and that most likely will not include all the possible points of failure of this technology.
I don't expect Google to be completely open, I just don't expect them to lie for no reason, and so don't see why I should doubt what they've said.
They've been quite reasonably open about the limitations of their technology: It requires mapped-out roads, visible road markings, fair weather, et cetera. It requires a human driver at the wheel because there are some traffic situations it is not able to navigate; an example given was meeting an opposing car on a narrow road where the car was not sure there was enough room to pass. In these situations, a voice announces politely that the human should resume control. If the human does not, one can only assume the car will come to a complete stop.
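Just to make that fallback concrete, here's roughly how I imagine the handoff logic working; this is purely my own guess, with made-up names and thresholds, not anything Google has published:

```python
# Hypothetical sketch of the handoff behaviour described above.
# All interfaces, names and thresholds are invented for illustration.
import time

HANDOFF_TIMEOUT_S = 10  # how long to wait for the human before giving up

def drive_loop(car):
    while car.is_on():
        situation = car.assess_traffic()
        if car.can_handle(situation):
            car.execute_autonomous_step(situation)
        else:
            # e.g. an opposing car on a narrow road, unclear if there is room to pass
            car.announce("Please take control of the vehicle.")
            deadline = time.time() + HANDOFF_TIMEOUT_S
            while time.time() < deadline:
                if car.human_has_taken_control():
                    return  # manual mode from here on
                time.sleep(0.1)
            # Human never responded: the safest assumption is a controlled stop.
            car.come_to_complete_stop()
            return
```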
However, they have demonstrated the ability to autonomously navigate most types of traffic, including reacting to unexpected obstacles or pedestrians, dealing with panic stops ahead, negotiating with other drivers at a four-way stop, et cetera.
They claim hundreds of thousands of miles of fully-autonomous driving, with occasional human intervention being necessary in atypical circumstances. That seems like a completely plausible claim considering what they've shown us, and lacking any way in which they could profit from lying about it, I don't see any reason to take that claim at anything but face value.
The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.
We work hard on the systems we build, and a simple lie like that could absolutely seem to be in the interest of the project at a time when this technology still freaks out lawmakers.
Should we believe Google? Sure, probably.
Is it absurd to question their veracity? Are you kidding? Are you familiar with American capitalism?
> The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.
Yeah, I was thinking the same thing. If there's any chance that it's a lie, that's the only way it happens-- the guy at the wheel decided to take the fall without telling anyone else. Maybe he pushed the button by accident, the thing freaked out, and he decided no one needed to know. Possible.
But that still doesn't fly. Everything that happens to that car is measured and recorded for later analysis. There is a verifiable record of when it is under human control and when it isn't. Faking that record convincingly enough to cover up the only public accident in the history of the project is almost certainly beyond the capabilities of a single engineer who was just in a fender bender. (And for what it's worth, Google has claimed to have logs which prove that the car was in manual mode, which I assume are available to legislators.)
Human beings crash cars all the time. Autonomous vehicles crash-- well, there's actually no evidence that Google's self-driving car has ever crashed[0]. So if one of their self-driving cars crashes with a human behind the wheel, outside of the context of an autonomous test-drive, and he says at the scene that he was driving, and Google confirms that they have proof that he was driving, and considering that lying to the public about something provable is a really, really bad idea when you're trying to get a law passed...
If there were any evidence, any shred of inconsistency to their story, I'd be skeptical. But there isn't. There's just no reason to doubt them besides "Companies always lie." Or "Of course they'd say that." Yeah, I'd say that's absurd.
[0] In traffic, obviously. One can only presume it has hit many obstacles during development.
So, because in alpha testing they had two issues that were later corrected, you're suggesting the whole thing is impossible? Will self-driving cars be perfect? Of course not. However, the threshold is simply whether the first generation of self-driving cars will be hit by people more often than they hit people, and I think that's very attainable; 7-12 years from now they will be expensive but on public highways.
PS: #1 reason they will be adopted after the price drops? Ability for them to drive you home after a night of drinking.
I'm not suggesting anything, I'm just saying that extrapolating publicly available information about the performance of Google AI cars does not give me any clue about how they would do in real traffic.
I don't think 'just as bad as some human drivers' will ever be good enough for driverless cars to become reality, by the way. I'm amazed at how many people adhere to this strange way of reasoning: because some people are bad drivers and have accidents, it's OK for AI cars to do better than the average driver, but still worse than the responsible, safe driver? I'd say we'd be better off spending the time and energy trying to reduce the number of bad drivers, or to limit the damage they can do when they screw up (for example with AI assistance as opposed to AI drivers). It's exactly arguments like this that put me off in the driverless-car debate, just like the eternal 'but we have had driverless planes for years and they work' argument I see a lot of people make. It only goes to show how love for technology clouds some people's judgement (I hope I don't have to explain why comparing AI cars to planes doesn't make sense on just about any level imaginable?)
Anyway, judging from the votes I get, it was a bad idea to post my thoughts about driverless cars here, as I already feared. Just like the discussions I sometimes have with my colleagues who all also love technology, it goes nowhere. It's almost as if driverless cars have become a religion for some; any time I try to put things into perspective, I only get back a lot of negativity and non-sequitur arguments. I guess some people really want to believe in driverless cars.
The disconnect is that even if I still drove myself 95% of the time, I would still find it very useful to be able to click a 'drive me' button when I know I am not at my peak. So, I would pay money for the feature. Therefore it's just a financial and legal question whether I am allowed to buy one, and because they would on net reduce accidents, I think they will become widely available.
Now, I would probably not buy (or be able to afford) gen #1, but it would only take a few years of adoption before I would consider it safe enough to use. And I know there are plenty of people willing to test the first 50 billion miles to 'work out the kinks'.
PS: Over 5 years, there are millions of people who would literally gain more than 5k in direct utility from the 'I am drunk, drive me home' button, even if that's the only thing it did. Less common, but far more important, is the 'it's 2am, I'm tired, just get me home' button. Both of which pale in comparison to the 'I am still sleepy, drive me to work' button.
I don't see computers as being good enough at general driving to make it work. And, perhaps cynically, I don't see society in general liking the idea that accidents can be caused with nobody to punish. So, I imagine that even if the computer can drive it 100% perfectly 100% of the time, you'll need to have a human watching out. Which means you need a human driver. Which means it's going to be like driving, basically, only even MORE boring. Will you be able to catch up on your reading? No. Will you be legally conveyed in your late-night state of merry intoxication? No. Can you have a quick snooze? No. So... um, what's the point?
Unless a driverless car is as good as taking the train, or going in an aeroplane, or - yes - simply being a passenger in a car that's being driven by the traditional human, frankly you might as well not bother.
Of course, it doesn't matter what I believe. There are lots of people working on this problem whose IQs and imaginations clearly far exceed my own, and I don't mind admitting that I am already surprised by just how much progress has been made. So who knows?
Nevertheless, I think I will be proven right.
(On the plus side, even once the push for autonomous cars fails, we'll have a mind-boggling set of amazing driving assists.)
Totally agree with you and the original poster that the Google car is probably not driving as well as many people think.
If it were anywhere near that good, Google would be selling that tech one way or another, because it would probably make them the most valuable company in the world.
I can't imagine how a self-driving car would, for instance, know how to drive into my garage (hint: it's not at all straight ahead & flat). How does the car know where it is allowed to park in a private parking lot? There are lots of huge challenges here.
A friend of mine who knows automotive R&D much better than I do also confirmed that view. We won't see them for years and years.
What I could well imagine soon, though, are, for instance, specially prepared highways that would allow driverless operation on given sections, probably increasing overall throughput and reducing CO2 emissions, etc.
The reason we won't have self-driving cars anytime soon is that car product development cycles are very long. Even if car makers were working with Google now to put this into cars, it could be five years before you'd see anything in dealerships.
And before that, they probably need to become street-legal in major countries, convince manufacturers to trust Google, etc.
> I don't see computers as being good enough at general driving to make it work.
Why not? It's a fairly mechanical operation, and 360 degree range-finders can do a far better job of detecting obstacles than rather limited human vision.
Driving itself is straightforward, but the inputs are messy and noisy. Computers aren't good at that.
Consider the wide variety of different road surfaces and cambers, the ever-changing appearances of obstacles according to conditions and the seasons, the limited accuracy of road maps, and the constant changing of the road network in minor ways. I expect a lot of driverless cars to be flummoxed by potholes, confused by temporary roadworks, utterly bamboozled by temporary diversions - and they won't be able to find my road in the first place. (I don't live in the middle of nowhere.)
As a simple matter - how will the car reliably know how fast to go? You can't rely on map data, as the legal limit can change, and nobody will think to tell the map people. You can't rely on the car spotting speed limit signs, as people can (and do) graffiti over them, or twist them so they're not straight any more. I don't think people will be so keen on "driverless" cars after they're held up by a whole train of them going 30mph on a 60mph limit road, or after they're in an accident with one going 60mph in a 20mph residential area.
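Any implementation has to reconcile those conflicting sources somehow, and the only safe default is the conservative one. A toy sketch of what I mean (entirely made-up names and rules, not any real system):

```python
# Hypothetical sketch: choosing a speed to obey when map data and sign
# detection disagree. Map data may be stale; sign reads may be wrong.

def resolve_speed_limit(map_limit_mph, detected_sign_mph, sign_confidence,
                        default_mph=30):
    """Return a speed limit to obey, erring on the side of caution."""
    candidates = []
    if map_limit_mph is not None:
        candidates.append(map_limit_mph)
    if detected_sign_mph is not None and sign_confidence > 0.9:
        candidates.append(detected_sign_mph)
    if not candidates:
        return default_mph           # no information at all: crawl along
    return min(candidates)           # when sources disagree, take the lower one

# Map says 60, a (possibly vandalised) sign reads 30: the conservative answer
# is 30 -- exactly the "train of cars doing 30mph on a 60mph road" scenario.
print(resolve_speed_limit(60, 30, 0.95))  # -> 30
```

Either the car errs low and annoys everyone behind it, or it errs high and you get the 60-in-a-20 accident; there's no clever third option without trustworthy data.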
Perhaps I'm being overly cautious, but I just don't think this will work very well. I can think of two outcomes. The first will never happen, because it involves simply letting the computers kill and maim and cause accidents, under the assumption that the overall accident rate will be lower. But then who will be to blame for each accident? People need somebody to blame, so they can be taken to court and maybe sent to prison.
The second option is that you require a human to be in attendance all the time, ready to take over the controls when the computer gets confused. Which means it's not driverless. Which makes the whole exercise a totally pointless waste of money. If you need a driver... well, it's not driverless. You might as well class it as an amazing high-tech set of astounding driving aids. That is probably what we'll end up with, I suspect.
I expect that self-driving will become a safety mechanism before it becomes a full driver replacement. It will probably keep track of cars around you as you're driving manually, and if you get close to a collision, it will take over and move you away.
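Something along these lines, say; this is just a toy illustration of the idea with invented numbers, not a description of any real system:

```python
# Toy example of a collision-avoidance "guardian" that only intervenes when
# the time-to-collision with the car ahead drops below a threshold.

TTC_THRESHOLD_S = 2.0  # made-up intervention threshold

def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until impact if nothing changes; infinite if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def guardian_step(gap_m, own_speed_mps, lead_speed_mps):
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    if ttc < TTC_THRESHOLD_S:
        return "brake"     # take over and slow down
    return "hands off"     # leave the human in control

# 20 m gap, closing at 15 m/s -> about 1.3 s to impact, so it intervenes.
print(guardian_step(20.0, 30.0, 15.0))  # -> "brake"
```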
As the data improves, the road information becomes better curated, and so on, I expect that it will become a driving aid, as you describe.
And I expect that within a couple of decades, fully driverless cars (with nobody behind the wheel) will become commonplace.
Human drivers are terrible. Is it so hard to imagine computers doing significantly better? Why do computers need to be 100% safe when the humans they are replacing are nowhere near that?