I'm not suggesting anything, I'm just saying that extrapolating publicly available information about the performance of Google AI cars, does not give me any clue about how they would do in real traffic.
I don't think 'just as bad as some human drivers' will be good enough for driverless cars to ever become reality, by the way. I'm amazed how many people adhere to this strange way of reasoning: because some people are bad drivers and have accidents, it's OK for AI cars to do better than the average driver, but still worse than the responsible, safe driver? I'd say we'd better spend the time and energy trying to reduce the number of bad drivers, or limit the damage they can do when they screw up (for example with AI assistance as opposed to AI drivers). It's exactly arguments like this that put me off in the driverless-cars debate, just like the eternal 'but we've had driverless planes for years and they work' argument I see a lot of people make. It only goes to show how love for technology clouds some people's judgement (I hope I don't have to explain why comparing AI cars to planes doesn't make sense on almost any level imaginable?)
Anyway, judging from the votes I get, it was a bad idea to post my thoughts about driverless cars here, as I already feared. Just like the discussions I sometimes have with my colleagues, who all also love technology, it goes nowhere. It's almost as if driverless cars have become a religion for some: any time I try to put things into perspective, I only get back a lot of negativity and non-sequitur arguments. I guess some people really want to believe in driverless cars.
The disconnect is that even if I still drove myself 95% of the time, I would still find it very useful to be able to click a 'drive me' button when I know I am not at my peak. So I would pay money for the feature. Therefore it's just a financial and legal question whether I am allowed to buy one, and because they would reduce accidents on net, I think they will become widely available.
Now, I would probably not buy, or be able to afford, gen #1, but it would only take a few years of adoption before I would consider it safe enough to use. And I know there are plenty of people willing to test the first 50 billion miles to 'work out the kinks'.
PS: Over 5 years, there are millions of people who would gain more than $5k in direct utility from the 'I am drunk, drive me home' button, even if that's the only thing it did. Less common, but far more important, is the 'it's 2am and I am tired, just get me home' button. Both of which pale in comparison to the 'I am still sleepy, drive me to work' button.
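To put rough numbers behind that $5k figure, here's a quick back-of-envelope sketch. Every value in it is my own guess, not data:

```python
# Back-of-envelope sketch of the '$5k over 5 years' claim.
# All numbers here are assumptions for illustration, not data.

years = 5
drunk_rides_per_year = 25   # nights out where you'd otherwise need a taxi
value_per_drunk_ride = 30   # rough taxi fare avoided, in dollars
tired_rides_per_year = 10   # rarer, but each avoids a serious accident risk
value_per_tired_ride = 50   # hard to price; a conservative guess

direct_utility = years * (
    drunk_rides_per_year * value_per_drunk_ride
    + tired_rides_per_year * value_per_tired_ride
)
print(f"Direct utility over {years} years: ${direct_utility}")  # -> $6250
```

Even with these fairly cautious numbers you clear $5k, and that's before counting the daily commute case.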
I don't see computers as being good enough at general driving to make it work. And, perhaps cynically, I don't see society in general liking the idea that accidents can be caused with nobody to punish. So, I imagine that even if the computer can drive it 100% perfectly 100% of the time, you'll need to have a human watching out. Which means you need a human driver. Which means it's going to be like driving, basically, only even MORE boring. Will you be able to catch up on your reading? No. Will you be legally conveyed in your late-night state of merry intoxication? No. Can you have a quick snooze? No. So... um, what's the point?
Unless a driverless car is as good as taking the train, or going in an aeroplane, or - yes - simply being a passenger in a car that's being driven by the traditional human, frankly you might as well not bother.
Of course, it doesn't matter what I believe. There are lots of people working on this problem, whose IQs and imaginations clearly far exceed my own, and I don't mind admitting that I am already surprised by just how much progress has been made. So who knows?
Nevertheless, I think I will be proven right.
(On the plus side, even once the push for autonomous cars fails, we'll have a mind-boggling set of amazing driving assists.)
Totally agree with you and the original poster that the Google car is probably not driving as well as many people think.
If it were anywhere near that good, Google would already be selling the tech in some form, because it would probably make them the most valuable company in the world.
I can't imagine how a self-driving car would, for instance, know how to drive into my garage (hint: it's not at all straight ahead & flat). How does the car know where it is allowed to park in a private parking lot? There are lots of huge challenges here.
A friend of mine who knows automotive R&D much better than I do also confirmed that view. We won't see them for years and years.
What I could well imagine soon, though, are for instance specially prepared highways that would allow driverless operation on given sections, probably increasing the overall throughput and thereby reducing CO2 emissions, etc.
The reason we won't have self-driving cars anytime soon is that car product development cycles are very long. Even if car makers were working with Google now to put this into cars, it could be five years before you'd see anything in dealerships.
And before that, they probably need to become street-legal in major countries, convince manufacturers to trust Google, etc.
> I don't see computers as being good enough at general driving to make it work.
Why not? It's a fairly mechanical operation, and 360 degree range-finders can do a far better job of detecting obstacles than rather limited human vision.
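To make "fairly mechanical" concrete, here's a toy sketch of obstacle detection from a 360-degree scan. The scan data and safety threshold are invented for illustration:

```python
import math

# Toy sketch: given one full scan of (angle, distance) range-finder
# returns, flag anything inside a safety envelope - in every direction
# at once, which is exactly what human vision can't do.
SAFETY_RADIUS_M = 2.0

def obstacles_in_envelope(scan):
    """scan: list of (angle_deg, distance_m) returns from one sweep."""
    hits = []
    for angle_deg, distance_m in scan:
        if distance_m < SAFETY_RADIUS_M:
            # Convert to x/y so downstream logic can plan around it.
            x = distance_m * math.cos(math.radians(angle_deg))
            y = distance_m * math.sin(math.radians(angle_deg))
            hits.append((angle_deg, distance_m, (x, y)))
    return hits

# The return at 175 degrees (nearly directly behind) is treated exactly
# like the one dead ahead; a human driver would never see it.
scan = [(0.0, 1.4), (90.0, 7.2), (175.0, 1.1), (270.0, 12.5)]
print(obstacles_in_envelope(scan))
```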
Driving itself is straightforward, but the inputs are messy and noisy. Computers aren't good at that.
Consider the wide variety of different road surfaces and cambers, the ever-changing appearances of obstacles according to conditions and the seasons, the limited accuracy of road maps, and the constant changing of the road network in minor ways. I expect a lot of driverless cars to be flummoxed by potholes, confused by temporary roadworks, utterly bamboozled by temporary diversions - and they won't be able to find my road in the first place. (I don't live in the middle of nowhere.)
As a simple matter - how will the car reliably know how fast to go? You can't rely on map data, as the legal limit can change, and nobody will think to tell the map people. You can't rely on the car spotting speed limit signs, as people can (and do) graffiti over them, or twist them so they're not straight any more. I don't think people will be so keen on "driverless" cars after they're held up by a whole train of them going 30mph on a 60mph limit road, or after they're in an accident with one going 60mph in a 20mph residential area.
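To make the dilemma concrete, here's a toy sketch of combining a stale map with an unreliable sign detector. The function, confidence numbers, and fallback are all invented, but they show why you're forced to pick a failure mode:

```python
# Hedged sketch of the speed-limit problem: two unreliable sources
# (map data and sign detection) and a policy for combining them.
# All names and thresholds here are hypothetical.

def effective_speed_limit(map_limit_mph, sign_limit_mph, sign_confidence):
    """Pick a limit when the map and the camera disagree.

    map_limit_mph:   limit from (possibly stale) map data, or None
    sign_limit_mph:  limit read off a (possibly vandalised) sign, or None
    sign_confidence: 0.0-1.0 score from the sign detector
    """
    candidates = []
    if map_limit_mph is not None:
        candidates.append(map_limit_mph)
    if sign_limit_mph is not None and sign_confidence > 0.8:
        candidates.append(sign_limit_mph)
    if not candidates:
        return 30  # both sources failed: crawl
    # Taking min() gives you the 30mph-in-a-60 traffic jam; taking
    # max() risks 60mph-in-a-20. Either way somebody is unhappy.
    return min(candidates)

print(effective_speed_limit(60, 30, 0.95))  # stale map vs. new sign -> 30
```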
Perhaps I'm being overly cautious, but I just don't think this will work very well. I can think of two outcomes. The first will never happen, because it involves simply letting the computers kill and maim and cause accidents, under the assumption that the overall accident rate will be lower. But then who will be to blame for each accident? People need somebody to blame, so they can be taken to court and maybe sent to prison.
The second option is that you require a human to be in attendance all the time, ready to take over the controls when the computer gets confused. Which means it's not driverless. Which makes the whole exercise a totally pointless waste of money. If you need a driver... well, it's not driverless. You might as well class it as an amazing high-tech set of astounding driving aids. That is probably what we'll end up with, I suspect.
I expect that self-driving will become a safety mechanism before it becomes a full driver replacement. It will probably keep track of cars around you as you're driving manually, and if you get close to a collision, it will take over and move you away.
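Something like a time-to-collision check, say. The values and threshold below are illustrative, not from any real system:

```python
# Rough sketch of the 'safety mechanism' mode: estimate time-to-collision
# with the car ahead and take over only when it drops below a threshold.
TAKEOVER_TTC_S = 1.5

def should_take_over(gap_m, own_speed_mps, lead_speed_mps):
    """Return True if we're closing on the lead car fast enough to intervene."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return False  # not closing; leave the human in control
    time_to_collision = gap_m / closing_speed
    return time_to_collision < TAKEOVER_TTC_S

# 20 m gap, us at 30 m/s, lead car at 15 m/s -> TTC ~1.3 s: intervene.
print(should_take_over(20.0, 30.0, 15.0))  # True
```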
As the data improves, the road information becomes better curated, and so on, I expect that it will become a driving aid, as you describe.
And I expect that within a couple of decades, fully driverless cars (with nobody behind the wheel) will become commonplace.
Human drivers are terrible. Is it so hard to imagine computers doing significantly better? Why do computers need to be 100% safe when the humans they are replacing are nowhere near that?