> riding around Miami Beach in self-driving test cars several days a week
I watched the video. 2+1+2 lanes, no trucks parked on the side, no bicycles, no bicycle lanes. No pedestrians crossing the road, no pedestrians at all. Clear blue sky. The car can take a left turn on its own. That's cool, until you realise that the majority of cities outside the USA don't look like Miami Beach. I highly doubt that any "self-driving" car could travel through Munich today without killing at least a couple of pedestrians or cyclists. And still, Munich is like a dream compared to Rome or Ho Chi Minh City.
BTW, just because I sometimes feel safe enough to ride my bike with no hands, it doesn't mean I have a self-riding bike.
You're not wrong, but at the very least it'd be handy to have self-driving on, say, highways. It could help with traffic flow efficiency, as well as help eliminate traffic waves:
In cities, perhaps streetcars/trams could at least be made self-driving. Labour is a big expense in transit, and if we want more frequent service (which could encourage ridership), then scaling up the number of drivers may not be economical.
This is one of the reasons I really hope we get self-driving cars working well. They have the potential to do so much better at avoiding creating traffic issues. For a decade+ now, short traffic lights have made me wish humans weren't involved in driving.
You get that traffic wave effect, except it's extremely noticeable what's happening: you see each car ahead of you delayed in reacting to the light changing to green, so half as many cars get through as it seems should be able to. The optimal handling, in terms of getting cars through the intersection, would be every car in the line moving at the exact same time, something that only self-driving cars could do safely.
This will only work if the cars ahead of you accelerate harder than you. Stationary cars don't need as much following distance as moving cars. Self-driving cars might not need the extra distance if they are networked, because if the first car brakes, all cars in the group brake simultaneously.
Yes, of course it would also require them to be communicating to completely avoid the delay between each car being able to start moving. But I still believe that even with only the information from its own sensors, a self-driving car could safely begin accelerating sooner than a human driver, since the largest buffer zone it really needs is just the braking distance. Slow human reaction time is removed from the equation.
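To make the throughput claim concrete, here's a toy queue-discharge simulation. Every number in it (reaction delay, acceleration, speed cap, spacing, green-phase length) is a made-up illustrative assumption, not a measurement:

    # Toy model: how many queued cars clear a traffic light during one green
    # phase, comparing per-car human reaction delay with a simultaneous start.
    def cars_through(reaction_delay, green=20.0, accel=2.0, vmax=14.0, gap=7.0, n=30):
        cleared = 0
        for i in range(n):
            start = i * reaction_delay          # car i begins moving this late
            dist = i * gap                      # its distance from the stop line
            t_acc = vmax / accel                # time spent accelerating to vmax
            d_acc = 0.5 * accel * t_acc ** 2    # distance covered while accelerating
            if dist <= d_acc:
                travel = (2 * dist / accel) ** 0.5
            else:
                travel = t_acc + (dist - d_acc) / vmax
            if start + travel <= green:
                cleared += 1
        return cleared

    print(cars_through(1.5))  # human-ish: each driver reacts ~1.5 s after the one ahead -> 9
    print(cars_through(0.0))  # networked simultaneous start -> 30 (the whole queue)

With these assumed numbers the simultaneous start clears the entire 30-car queue while the staggered human start clears 9, which is roughly the effect described above.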
How do you distinguish a typical slowdown due to road conditions from full braking?
You really can't without additional information. But a sensor-equipped (not even self-driving) car can detect it and forward this information.
Simple things like broadcasting the brake status, turn signal status, and engine RPM of nearby vehicles, and, if available, an emergency braking notice or traction control notice, would do wonders, but somehow we have not standardized on any such system. Not even for people.
Simple information, like being notified when you're being passed or about nearby road signs, sent to a car's HUD/console could potentially do a lot for safety.
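To make the idea concrete, here's a hypothetical sketch of such a broadcast payload. No such standard exists; every field name here is made up:

    # Hypothetical V2V status message -- illustrative only, not a real standard.
    from dataclasses import dataclass

    @dataclass
    class VehicleStatus:
        vehicle_id: str
        speed_mps: float
        braking: bool             # brake lights / pedal active
        emergency_braking: bool   # ABS or AEB currently engaged
        traction_control: bool    # TC intervening: a low-grip hint for followers
        turn_signal: str          # "left", "right", or "off"
        engine_rpm: int

    # Each vehicle would broadcast this a few times per second; a follower could
    # start braking on an emergency_braking flag before any deceleration is visible.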
Smart roads, then smart cars, then self-driving cars. Skipping steps won't do.
High-speed trains already have these systems (both local radio and centralized), despite operating under much more controlled conditions, but somehow roads don't get them.
lol, have you seen humans try to drive in the snow? Especially the ones who think AWD/4WD and snow tires make them invincible? If you've ever had to brake in below-freezing conditions and ABS has kicked in, you're already trusting a computer to be better than a human at that task. Now granted, ABS is a far more limited system, with way more limited inputs and outputs, so the calculations, algorithms, and code involved are dramatically easier. Self-driving cars still have a long way to go, imo, but trivializing it to "lol snow" disregards the interesting technical challenges that have to be solved before self-driving in snow works. You seem to think you have some expertise driving in winter: what is the single biggest challenge that self-driving cars face trying to drive in the snow?
The difficult problems with self-driving are not going to be the technical ones (driving in X environmental conditions) but the social ones.
Self-driving cars are going to need a theory of mind (the ability to make good guesses about what mistakes other road users could make), and they'll also need to behave in ways that don't make human road users uncomfortable. The latter is relatively easy, but the former is hard: never mind theory of mind, some self-driving software doesn't even seem to have object permanence.
If the car takes over-the-air software updates, the maker is advertising up front that they don't believe they can fully debug the car before they sell it to you. Normal ECUs, ABS brakes, etc. do not need OTA updates. There is no absolute certainty about anything, but such issues are rare enough to be handled with product recalls.
> That's cool, until you realise that the majority of cities outside the USA don't look like Miami Beach. I highly doubt that any "self-driving" car could travel through Munich today without killing at least a couple of pedestrians or cyclists.
Yeah, simply stating that they aren't yet at GA status seems pretty uncontroversial. Gotta start somewhere though :)
> BTW, just because I sometimes feel safe enough to ride my bike with no hands, it doesn't mean I have a self-riding bike.
Right, that was pretty much the point of the article.
Level 4 ("mind off"): no driver attention is ever required for safety, the driver may safely go to sleep or leave the driver's seat. Self-driving is supported only in limited spatial areas (geofenced) or under special circumstances. Outside of these areas or circumstances, the vehicle must be able to safely abort the trip.
Level 5 ("full autonomy"): No human intervention is required at all. Vehicle drives autonomously on all public roads, in all weather conditions, all surfaces, all over the world, year around.
The majority of cities in the US don't look like Miami Beach either.
It just so happens that a bunch of the cities where most of the people currently developing this kind of tech live do look like Miami Beach, and those people believe (or at least develop products as if they believe) there is no land beyond the Valley.
Geofenced full self-driving would be much more valuable than non-geofenced, almost-full self-driving.
Being able to travel between LA and SF in a sleeper car would immediately eliminate a ton of flights, and is probably already doable with today's technology. Trucks could drive across the U.S. on their own with last-mile human drivers.
Being able to play a video game while your car drives around your pedestrian-heavy city center in the rain is indeed much further off, but also much less valuable.
It's a self-driving car spending an hour in Guangzhou traffic and managing to not hit anyone. I'm sure there is still a lot of progress to be made, though.
The wilful ignorance of researchers in "AI", the charlatanism of the really-existing industry, and the techno-utopian religiosity of the media make me eager for the oncoming AI winter.
And that's if it doesn't work. The fact that nobody seems to have thought through the consequences of what happens if it does work is, to me, even more frightening. In the past, if jobs were made obsolete through mechanization, at least there were other jobs. But this is the 'end station': after this point there are no 'other jobs', other than highly skilled ones, and preciously few at that. Because self-driving cars are pretty much synonymous with general AI, and that will make a good chunk of humanity redundant to a degree we have never witnessed before in history.
> I disagree; they appear to need general spatial and physics AI, but they don’t need general linguistic or emotional intelligence.
I don't think it will require general AI, but it will require a pretty good model of what is in the mind of each person on the road, especially pedestrians and cyclists. Simulating other minds in general requires general AI, but maybe we can get away with a facsimile.
I say that because self-driving needs to adapt to unpredictable stuff happening on the road, and this might require a higher understanding of the intentions of other people.
For example, when I'm riding my bicycle, I have a rule that if a car stops for me to cross, I need to establish visual contact with the driver first. It's too dangerous otherwise. Can AI establish visual contact with me, or have some other way of signaling to pedestrians and cyclists? (Now I see this specific example is simple; I want a light on top of self-driving cars that will reliably signal things like "I stopped for you to cross".)
Another situation is when the road is used partly by businesses and people, partly by cars. Some roads are pedestrian-first, and it may be hard even for a human driver unfamiliar with their dynamics to traverse them.
People keep saying that everything about driving can be done with "dumb" AI, essentially by iterating on current methods. I used to think that way, but it isn't working out today. Right now, AI is severely overpromising and overstretching itself. Maybe this will lead to an AI winter and we will only get self-driving cars with new methods. Or maybe, after some years, we will get good-enough self-driving cars.
It needs to be quite highly developed: not only to predict what other street users are likely to do, but also what mistakes they could make.
It also needs, as you say, to be able to interact with those other road users in ways they find comfortable. Every time. The uncanny valley is probably going to be wide and deep.
How highly developed? It needs to be better than just knowing what is physically possible, or it would be unusably cautious, but it doesn’t need to understand the political views of someone who has a bumper sticker of a cartoon of Dominic Cummings as Nosferatu.
Thank you for the well-informed comment. I have to respectfully disagree that self-driving cars are pretty much synonymous with general AI. You remind me that DeepMind recently said that all we need for AGI is reinforcement learning, and much of what self-driving cars do is reinforcement learning. But I have to say, I don't think it's that simple. Reinforcement learning is just classification with a reward function. Is that all people are, just classifiers with a reward function? I don't think so. And I don't think that's just a god-of-the-gaps argument applied to AI, either.

Having worked with all kinds of classifiers and recommendation systems for years, these things just seem to me like tools that need to be calibrated. We can automate a lot of stuff with classifiers, especially 'bullshit jobs' like call centers and movie recommendations and gunshot detection. But there are many tasks for which there is just no training data, or no way of incorporating that data into a machine. Basically, some tasks are solved problems, and others are totally unsolvable, and that second category is where humans come in.

Natural language generation comes to mind. For all the noise about GPT-3 and Copilot, they're not going to be ready for production for a long time, not until we figure out very complicated issues like RTE/entailment, relevance, long-distance coreference, and so on. Separate the hype from the truth, and the fact is that there are tons of tasks that will never be automated, like screenwriting or other meaningful communication.
The problem is that driving a vehicle is a dynamic process, where the driver adapts to the situation in real time, relaxing some rules while tightening others on the fly, as best fits the situation. That is a very bad match for the current state of the art in machine learning, where we more or less assume that, after a while, we've seen all the cases the world can throw at us at a high enough frequency that we can say our solution 'works'.
People are smart, very smart compared to our smartest computers. We apply that intelligence to driving all the time and still have a fairly high error rate (called 'accidents'), which should count as proof that driving isn't a simple problem that can be handled by classifiers and rulesets. It may well require much more than that.
The current state of machine learning is an excellent match for problems where brute-forcing isn't an option but we have a good general idea of the problem space, and for repetitive tasks that require some rudimentary level of intelligence. Fully autonomous driving (L4, L5) under the same circumstances in which ordinary drivers are required to perform is, for now, out of reach of our tools and solutions.
By the way: one way in which we could quite easily reduce the number of traffic deaths is stricter certification, faster impounding of vehicles based on previous offenses (for instance, DUI or reckless driving), and loss of insurance for continuing to drive in dangerous conditions (fog, heavy rain, snow).
I can imagine the same was said when cars took over, or computers. There’s just no telling. But basic income is often cited as a solution. And we humans can go back to more creative and/or humanitarian tasks. There’s still a lot of shit in the world that we can’t deal with due to a lack of time and the need to provide for our families / ourselves. If that need is removed, who knows - maybe a lot of good will happen because we have the time for it!
I am all for a UBI, but replacing a $60k-a-year trucking job with a $30k-a-year UBI isn't going to be the panacea that everyone wants it to be. People want dignity and meaning, and a lot of people find that in their work. Just providing funding for rations isn't going to cut it.
Nothing about UBI prevents people from working, but there will simply not be enough work to go around, nor much work that can provide a sustainable living.
People who find meaning and dignity in work can still try to do so within what market remains for human labor, which will probably be meager and miserable - far closer to Amazon warehouse work or Uber than last century's factory line - or they can adjust and redefine their self-worth to be something other than the value that can be extracted from them by a corporation. UBI is just there to keep them from starving in the meanwhile, because they can't eat dignity.
I'll also be glad for this "ethical hysteria" to end.
That won't end until it actually happens. Google doesn't want to deal with it, insurance companies don't want to deal with it, regulatory bodies don't want to deal with it. Because of the wonderful reactionary society we live in, self-driving cars won't be properly reined in until one of them swerves into a busy sidewalk and mows over 20 people because it was avoiding a human driver swerving into its oncoming lane.
Looking at the rate of progress of robotics, I wouldn't be too worried. Even if we are making good progress on creating an artificial mind, we are making extremely slow progress on creating an artificial body. So most jobs requiring dexterity and ambulation are safe for the foreseeable future.
At this rate an artificial body may not be solved until the singularity, but that is probably an extinction-level event so there's no need to worry.
John Zerzan and the anarcho-primitivist ideas are all about that. Interesting perspective on life. The collection of essays Against Civilisation is well worth checking out.
In essence, evolution is chaos. Biological on large time scales, social on much smaller ones, but still chaos, and there is no such thing as a grand design in anything, anywhere, at any time.
Thus, as civilization advances, more chaos happens. To be happier we must step back in development, to a lower state of chaos. Not sure if this is physically sound from the point of view of entropy, but it is quite thought-provoking.
Yesterday domestication, today self driving cars, tomorrow super intelligence. Is that good or bad? If we knew no such thing as philosophy would exist :)
Me too. HN's anti-hype sentiment seems to be winning.
Perhaps HN doesn't realise how harsh an AI winter will be: billions in funding will vanish, PhDs staking their careers on this area will find no jobs, etc.
This has happened twice before in largely identical ways (self-driving cars, ad-tech companies, university research propaganda -- in the 50s and 80s).
On the other hand, make a boring criticism of a cryptocurrency (say, "No stablecoin is worth anything unless it can show a recent third-party audit that says exactly what and where their backing assets are and that the processes for handling those assets are up to best accounting practices.") and see how far that gets you.
Nothing about a bubble prevents other bubbles from forming at their own rate.
There is no certainty of an AI winter, let alone of its harshness.
Quantum computing and quantum machine learning are here and they could sustain the ecosystem for this next decade as ANNs did the last one.
Money will vanish as it did in the 1999 crash for overvalued stocks. This might happen for self-driving cars and other over-hyped products.
But AI research will continue and be well funded. FAANGs are not going anywhere and they will push AI as much as they can. For example, AWS has the Braket service for quantum computing and quantum ML via PennyLane. This will hit the mainstream in the next few years.
In the worst case we will have (approximate) human-level intelligence by brain simulation. This will likely remain in academia and be paid less than FAANG AI projects, but it will stay. In 10 to 20 years we might see it go mainstream.
It is still a very exciting field. Hype will die down and some money with it but AI careers will not be threatened at large.
Harsher news for QC, I'm afraid -- there are no quantum computers, and there aren't going to be any for decades -- and even when we have them, they'll be highly specialised devices offering limited generic commercial use.
> In the worst case we will have (approximate) human-level intelligence by brain simulation
I don't know what this means. But there are not going to be any meaningful advances in machine intelligence within a century, and I don't see AGI within a couple of hundred years.
Intelligence is all about devices, and their interaction with the environment. It seems highly likely to me that the real-time top-down and bottom-up adaption of organic cellular systems will be central to the creation of really-existing intelligence.
I think it's more nuanced - I agree there's no clear road to AGI like some would like you to believe, on the other hand there are new and useful techniques that are real.
Those PhDs may not have the stellar careers they might currently expect, but they'll find plenty of work building automation and assistance systems for "boring" existing workflows.
Things like making your IDE a bit better, making some data entry jobs redundant, automating quality checks in increasingly complex cases, assisting in biomedical image interpretation ...
These things provide essentially a lower bound for how bad the "winter" might be.
PhDs in this area are sufficiently embedded in generic industrial technical practice, perhaps unlike in earlier eras.
> assisting in biomedical image interpretation
I suspect there's going to be a (counter-)renaissance of expertise and anti-tech sentiment because of gross technical failures in these areas.
I think tech budgets into these projects are presently so massive, and so unlikely to deliver on an ROI, that reputations here are going to be destroyed.
So the question is what the nature of this counter-reaction will be. I suspect, within the decade, people will be hiding "AI" from their CVs and describing all the python/data-eng they were doing.
That's a pretty misleading statement that manages to be true only by technicality. There are a lot of topics around here where both sides are represented but one is outnumbered 10:1 and all the comments representing that side are gray or dead.
(Though I would argue that the more purely technical and less political a topic is, the more evenly represented the sides are.)
Other manufacturers make it clear that their car is not self driving, using names like "assist". Tesla boldly uses names like "autopilot" and "full self driving".
Autopilot is somewhat OK (plane autopilots mostly just help with flying straight; the pilot still flies the plane), but studies have shown the name misleads the general public, and I am convinced that is on purpose.
"Full self driving" is a lie, plain and simple. Tesla tries to justify saying that it is still beta, that in the future, it will really be self driving, aka marketing bullshit.
If we let marketers loose, words lose their meaning, and that's when I think it is time to do the unpopular thing and regulate. Maybe, after that, Tesla will take a hint from Boring and call their advanced driving assist technology "not full self driving".
>plane autopilots mostly just help with flying straight; the pilot still flies the plane
Mostly, as in that's what they are doing most of the time, then yes. But autopilots can fly predefined routes. Most commercial aircraft can fly themselves except for take-off and the last few hundred feet of landing (thanks to instrument landing systems).
I would be quite happy with a car that could take over once I get to a highway. Getting to the highway is like taking off and getting from the highway to my destination is like landing.
>"Full self driving" is a lie, plain and simple.
Maybe for Tesla, but here's Veritasium[1] taking a ride in a car with no driver at all. It's still limited to where it can go, but I think this will be more prevalent sooner rather than later.
> Most commercial aircraft can fly themselves except for take-off and the last few hundred feet of landing (thanks to instrument landing systems).
Not a pilot, just wondering -- does this include communicating with ATC towers? Such as moving to a specific altitude, requesting landing instructions, etc.
To non-Europeans in this thread: there already exists a transportation system in which you can sleep and get to your destination safely; it's called public transportation.
Public transportation works for those living in cities. When you live in the countryside, which is still legally allowed, suddenly public transportation ain't that much of a solution anymore.
Related to TFA: I got to experience the extra called "first class" on a Mercedes S-Class (an entry-level one, but it had that one extra), where the rear seat (on the passenger side) becomes basically an airliner first-class bed (and there's no room for a passenger next to the driver anymore). That is what I call a car I can sleep in, and when I have a self-driving car, that's the kind of "bed" I'll want.
I hope the resourceful engineers working on self-driving cars think about comfort and get to experience that "first class" extra the S-Class offers (not that S-Class cars are my thing: they're not, but darn, was that bed good for a car).
As a European living in the countryside: public transportation works really well. In most situations you can reasonably commute at 50% of the speed of a car in the worst case, IMO. In non-countryside areas, public transportation often outpaces cars.
You can sleep on public transportation and get to some destination, but you have to make sure you wake up at the right time if you want to get to a specific one :)
(Bonus points if that was the last train and now you can't get back until morning. Double bonus points if they forgot to check and you've woken up in the depot)
Lots of big cities outside Europe (even in America) have good public transit. I live in the Washington, DC area and don't drive but use the Metro, buses, and regional trains. However, I wouldn't recommend sleeping on them in America or elsewhere unless you don't mind your valuables going missing.
The arrogance it takes to leave a comment like this.
Many people that live in the US, Canada, Australia, Russia and others don't live in densely packed cities and suburbs in countries you can drive across in an afternoon. Some of us live in towns with less than ten thousand people, hours from a population center with more than 100,000. Some of us live in towns with a few hundred. I've driven through towns with populations in the single digits.
It's autonomous to you, but someone else is driving it. There are whole teams of people who keep those systems going for you. That's a solution? Just because you outsource your driving to someone who makes less money than you doesn't make you better than everyone else. You ignore your externalities and pretend it is a sign of superiority and sophistication.
Imagine someone talking about loneliness and ways to alleviate it and someone replies "that's easy, get your mommy to tuck you in at night and read you a story." That's what you sound like to other people with comments like this.
In the UK, deep in the countryside, there are miles and miles of twisty single-track roads. In Cornwall in particular they have high walled sides for some reason, and are just wide enough to drive down. All the time you are praying: please don't let there be someone coming the other way. But they do come. As you were driving you took note of occasional deliberate passing places, but also of entrances to fields which would also do. Coming face to face with another car, one of you decides there was a passing place not so far back and reverses back 100 yards (now the prayer is: please don't let there be someone following me). You reverse in, the other driver smiles and thanks you, and you go on your way. I think that will need a level 6.
That still leaves some need for drivers to provide a safety backup. I much prefer the level 4 definition, which explicitly says the driver does not need to be paying attention.
> A large percentage of the terrains where vehicles are being used are NOT public roads.
But the absolutely overwhelming majority of people who use cars need to be able to do so on public roads. It doesn't help that the acreage of, say, private cattle ranches in Texas is probably larger than that of public roads in the state: Cowboys that need to drive around (only!) on the terrain of those private cattle ranches are probably -- I'm taking a wild guess here -- a pretty tiny minority of the state's population nowadays.
>But the absolutely overwhelming majority of people who use cars need to be able to do so on public roads.
Yes but that is ONLY a permitting/approval hurdle.
Before we approve any particular autonomous vehicle for road use first we have to invent an autonomous vehicle that somebody wants.
If the absolute overwhelming majority of people wants a particular design, but the red tape is a blocker on some semantic minutia, we can always change the regulation to accommodate the technology.
> Yes but that is ONLY a permitting/approval hurdle.
There's nothing "ONLY" about it.
> If the absolute overwhelming majority of people wants a particular design, but the red tape is a blocker on some semantic minutia, we can always change the regulation to accommodate the technology.
Yeah, right. For most of my life, everybody wanted flying cars. So where were your huge crowds, enthusiastically voting in politicians on the single promise to make helicopters require only an ordinary driving license?
There is literally no law which forbids you from being a passenger in your autonomous helicopter in your private land/airspace!
Allowing autonomous helicopters in public airspace is a political debate that can be had once somebody invents autonomous helicopters that actually work, and once people actually start using them. They currently don't exist.
There is the engineering problem: build an autonomous helicopter.
There is the social problem: make society trust it.
Tesla has a perverse incentive not to "wake up". It's more advantageous to overpromise than to admit the hard fact that self-driving (in the strict sense) cars may not be feasible for a long time.
Doesn't help that the company is owned by a person who defines himself as somebody who "sends people to Mars".
Tesla employs the Norman doors of false advertising. In the lasting tradition of "Free" (not free), Tesla gave us "Autopilot" (requires manual operation at all times) and "Full Self Driving" (requires an alert driver at all times).
And if some users are to be believed it is here today, and already safer than ordinary drivers, even if that requires some mutilation of the available evidence.
If you happen to live near the right bus stop and have an office at the other end of its route. For many of us it's a much shorter commute with a car, plus a guaranteed seat to work from, which you don't get on a lot of commuter public transport.
I live in a place where public transport is actually pretty good, and it's still a PITA to work on the way.
A bus is never going to be as fast as driving, when it has to start and stop all the time to let people on or off.
Yes, cars have to stop at traffic lights. Buses have to stop at traffic lights, and also at bus stops, while waiting for people to get on or off with their parcels, bikes, children, and what-have-you.
Buses and other modes of public transit can be faster than cars, but they can also be better than cars because you don't have to pay much attention, or drive, or park, or maintain the damn thing. If I lived in LA and there was a bus-only lane on the freeway, the bus would sure as hell be the better option, as it would be in many cities. The reason people don't drive for trivial things in Europe is that driving isn't the better option.
Where I live, I take transit or walk almost everywhere, and it would require some exception for me to get a car again. Even on the occasion I use car sharing because I think it'll be faster, I often regret it, because it's not faster and I have to find parking or avoid crashing into people.
Corollary: can I send my car to school to pick up the kids? Send a package? Go to the garage to get itself serviced?
Self-driving cars will lead to a lot of extra cars on the road, making trips that wouldn't happen today because today a warm body in the driver's seat is a requirement for the trip.
So the self-driving car will cost countless jobs and will be a huge cause of more traffic on already quite congested roads.
Unless you have two cars, that makes no sense. The car can either pick up the kids or get itself serviced. Plus it's not like those journeys won't be made even if you drove them yourself. Kids need to get picked up, cars need to get serviced.
However, with a self-driving car it's also possible to schedule it to drive to the garage at night, when the roads are not congested.
Obviously at that point people start to question whether ownership is necessary, given how easily and quickly you could summon a unit out of a fleet and only pay for what you use. And thus Uber for self-driving cars was born.
Longer term traffic could probably be reduced by autonomous-only roads/infrastructure that has less need for traffic lights, etc.
The real test of a self-driving car is whether the manufacturer is prepared to back it by accepting 100% liability for it up front. Anything less than that and they are passing risk onto the customer.
“Soon, your Volvo will be able to drive autonomously on highways when the car determines it is safe to do so,” says Henrik Green. “At that point, your Volvo takes responsibility for the driving and you can relax, take your eyes off the road and your hands off the wheel. Over time, updates over the air will expand the areas in which the car can drive itself. For us, a safe introduction of autonomy is a gradual introduction.”
There was a study from 2014 looking at how long it takes to regain awareness after taking control back from an automated system [0] (and a similar study from 2016 [1]). It takes about 15 seconds to regain reasonable control of a car after control is handed over. If the plan is to pull over and hand control to a human in inclement weather, that's viable. If the plan is to recognize an emergency and hand control to a human during an active emergency, the situation will have resolved itself within those 15 seconds, one way or another.
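Some back-of-the-envelope numbers on what a 15-second handover means at speed (the speeds are just assumed examples):

    # Distance travelled during the ~15 s it takes a human to regain awareness.
    for kmh in (50, 100, 130):
        metres = kmh / 3.6 * 15
        print(f"{kmh} km/h -> {metres:.0f} m before the human is back in the loop")
    # 50 km/h -> 208 m, 100 km/h -> 417 m, 130 km/h -> 542 m

At motorway speed that is over half a kilometre travelled before the driver is meaningfully back in control, far longer than most emergencies last.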
A system that has humans in the control loop must be designed around human limitations. Humans, myself included, are not capable of paying continuous attention to a task that requires only sporadic intervention. A control scheme that includes handing control back to a human in emergencies is a death sentence that serves only to obfuscate the cause of the crash and avoid liability.
Yet this is exactly the plan for self-driving cars. Automation level 0 [2] is fine, because there are enough minor corrections to keep the driver focused on the road. Automation levels 4-5 are fine, because the "driver" isn't the one in control of the car. Automation levels 1-3 are actively hazardous, as they remove the moment-to-moment adjustments performed by the driver, but they still expect the driver to maintain full situational awareness to take over at any time.
TL;DR. Automation levels 1-3 assume superhuman focus on the part of the driver, are fundamentally unsafe, and should be regulated against.
My car has level 2 (ACC with steering) and I've found the whole process extremely stressful and hard. The initial problem was that I did not trust the car to actively steer, brake, and accelerate while keeping me centered in the lane.
After a while that wore off, but you still have to be hyper-vigilant, as the system can "fail" at any moment and you need to be in control immediately. I find it harder than regular driving: in level 2 you need to "drive" and also pay attention to whether the system has stopped working properly.
It's incredibly frustrating. I'm surprised people buy cars just for this capability. I love safety systems, but this is just a disaster waiting to happen.
I wonder why we don't just use these automated systems as safety backups, keeping the driver in full control of the vehicle. If the driver falls asleep, or an animal or child runs out into traffic, or the vehicle starts drifting into oncoming traffic, the automated driving system can take over and take whatever action it deems necessary to avoid a collision, with an option for the driver to forcefully take back control if necessary. The driver can still use adaptive cruise control, and even lane-keeping for short periods of time (5 minutes), but all other aspects of driving (steering, navigation, signalling, braking, etc.) remain the responsibility of the driver.
Best of both worlds then, with the goal of minimizing injuries and deaths. Is there a problem with this that I'm not seeing? Why isn't this the real goal?
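For what it's worth, a minimal sketch of such a "guardian" backup controller, under wholly made-up assumptions (a stationary obstacle, a fixed time-to-collision threshold, fixed emergency deceleration):

    # Guardian-style backup: the driver's command passes through unchanged
    # unless a simple time-to-collision check predicts a crash, in which case
    # emergency braking overrides the driver. All numbers are illustrative.
    def guardian(human_accel, speed_mps, gap_m, ttc_threshold_s=2.0):
        """Return the acceleration actually sent to the actuators."""
        ttc = gap_m / speed_mps if speed_mps > 0 else float("inf")
        if ttc < ttc_threshold_s:
            return -8.0          # imminent collision: automation takes over
        return human_accel       # otherwise the driver stays in control

    print(guardian(human_accel=1.0, speed_mps=20.0, gap_m=100.0))  # 1.0 (driver)
    print(guardian(human_accel=1.0, speed_mps=20.0, gap_m=30.0))   # -8.0 (override)

A real system would need much richer prediction than a single time-to-collision check, but the control structure (human by default, machine only in emergencies) is the point.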
I agree that it can be hazardous. But I would like to add that level 3 cars have driver monitoring to make sure the driver is looking at the road. Much better than nothing!
Still, the handover in some of the car brands I have tested is really dangerous: the car silently hands over to the driver without any notification.
I agree with you that level 3 is temptingly dangerous in an automobile situation. I spent some time trying to place where a light-aircraft autopilot sits in that framework, and I conclude that it's closest to level 3 and is generally used effectively to reduce the workload and drudgery of cruise flight.
The wind was really taken out of my self-driving sails when Musk pointed out there's no reason for a company with true self-driving cars to actually sell them to consumers instead of adding them to a massive and profitable taxi fleet. The best thing for consumers is an almost-self-driving car that's virtually uncrashable.
The best thing for consumers is a self-driving car that lasts forever, obeys the user unconditionally, can recharge itself via solar power and doesn't fuel some billionaire's eternal rent seeking.
Even if you were able to collect 100% of the insolation hitting the car, it's still a tiny fraction of the energy in a battery pack. Unfortunately the car won't be at the optimal angle and in full sun most of the time either, so it'll still be at best 10 or so miles a day, and even that requires better cells than we have now. There's no real way around the need to charge from a larger power source; at best there's the option to charge from your home solar.
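Rough numbers behind that estimate; every input here is an assumption:

    # Back-of-the-envelope solar range, all inputs assumed.
    roof_area_m2 = 2.5      # usable panel area on a car roof/hood
    panel_eff = 0.22        # good commercial cells
    sun_kwh_per_m2 = 5.0    # daily insolation, sunny climate, ignoring angle losses
    kwh_per_km = 0.17       # consumption of an efficient EV

    daily_kwh = roof_area_m2 * panel_eff * sun_kwh_per_m2   # ~2.8 kWh/day
    km_per_day = daily_kwh / kwh_per_km                     # ~16 km (~10 miles)
    print(f"{daily_kwh:.1f} kWh/day, about {km_per_day:.0f} km of range")
    # Compare a ~75 kWh pack: roughly a month of perfect sun for one "refill".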
If using this taxi fleet is cheaper than owning a car and you can have a car outside your house within, say, 5 minutes, then most consumers will be happy to take it.
Choosing no car maintenance, no other hassle, and a lower price is a no-brainer.
I doubt it. Real people, especially parents with children, like to be able to store stuff in their cars. They don't want to deal with waiting, or dirty cars used by random other people.
Dirty cars would be reported, so the previous user has to pay for the cleaning and the current user could get a free ride for the inconvenience. If people have to pay for cleaning, then they will pay attention to keeping the car clean.
Of course, the service is only a real alternative if a car is there within 5 minutes for you. People wouldn't tolerate more waiting.
It will be a gradual process and some people will still prefer to own a car, but in the long run owning a car could become a luxury if driverless taxis really mature and become much cheaper.
The externalities are the issue here. Tesla becoming a huge monopoly and charging exorbitant prices for the taxis. Or refusing to drive you ever again because of that one time you puked when drunk at 2am. So you need to go to the hospital now? Well, tough luck.
Centralization of power is not good, and laws ought to be enacted so that producers of self-driving cars may not also own them as taxi fleets.
> refusing to drive you ever again because of that one time you puked when drunk at 2am
I don't think that would happen; the fleet company will simply charge you for the cleaning, and refuse service only if you refuse to pay the cleaning bill. People will be very careful to keep the car clean if they're the ones paying for the cleaning.
Also, it's unlikely only one company will have full self driving cars, so there will be competing fleets.
I'm not sure I get the point (other than saying something mildly controversial to attract attention to his company).
I've never watched them, but I'm fairly sure there are videos of people sleeping in Teslas that are driving themselves. Possibly recorded by police cars about to arrest them.
And you'll say, yeah well that was only for x distance and in y circumstances.
But then we're just back to arguing again, which this was supposed to circumvent.
A delivery truck that could let the driver sleep on the long roads between cities seems like it would be big, even if the driver had to wake up to do some tricky bits at either end. Of course, then you can replace the sleeping driver with two drivers, one at each end of the journey, and slash costs even more.
Which I know isn't really the point. But even if the technology were there for self-driving cars, I think what is being called for here probably is and should stay illegal. The liability for car accidents rests with the owner of the car, and they need to be alert to make sure conditions are suitable for the computer to be driving.
People are really underselling the miracle of modern automation.
You aren't liable for the accidents caused by the taxi driver or chauffeur or even when taking a train so you shouldn't be liable for ones caused by the self-driving car, the car maker should be.
How the hell are you supposed to determine if the conditions are suitable for a computer?
People can (and do) do that in completely old-fashioned cars. "Juuust close my eyes a little bit, on this flat and straight stretch of the road." And many times, it works (as in "no crash"). Look, autonomous driving - and without any computer, too! ;o)
This (liability) is beginning to change with L3 systems. Mercedes will take over liability while Drive Pilot is engaged in highway stop-and-go traffic.
> But can you safely sleep in them? Of course not, unless you’re parked, or someone else is driving.
I don't know; that seems opinionated.
What's the definition of Self-Driving? Can it drive from point a to point b?
The question is, do you trust it? Do you trust a taxi / uber driver to drive you while you're asleep at 1am, drunk?
It should be the same question.
Technically it can be defined as a self-driving car, because it can bring you from point a to point b, but whether people should trust their lives to it is a different story.
> The question is, do you trust it? Do you trust a taxi / uber driver to drive you while you're asleep at 1am, drunk?
In practice, it sometimes turns out afterwards that you did, albeit perhaps involuntarily: When the driver wakes you up saying "Oi, we're here now!"
> Technically it can be defined as a self-driving car, because it can bring you from point a to point b, but whether people should trust their lives to it is a different story.
You do that with the taxi / uber driver whether you're asleep or awake.
To compare apples to apples you need to distinguish the two risks involved: how much do I trust the driver (human or otherwise) not to have an accident, and how much do I trust the driver not to abduct me or generally act maliciously?
Uber drivers and today's SDCs rank quite differently on those two risks.
A car that perfectly moves through traffic but intentionally drops you off in the most crime-ridden area it can find is generally not considered in these scenarios.
I don't trust a taxi driver to safely take me to the destination. But taxi drivers don't claim that their cars are self driving. So what's your point? The fact that some other human is driving the car doesn't mean that the car can be called self driving.
You can assume that the taxi driver will be able to react to most possible events on your ride home. Right now self driving systems require active oversight and you can't sleep while you use them.
It's like comparing an experienced driver (taxi or not) and a student driver. I'd sleep with an experienced driver driving but I wouldn't even think about sleeping if I am being driven by a student driver.
According to the Oxford dictionary, "self-driving" means:
> capable of travelling without input from a human operator, by means of computer systems working in conjunction with on-board sensors.
A taxi clearly does not match this definition, since it has a human operator.
I assume we are not just making up words ourselves, because that would be meaningless. What definition are you referring to that would match your usage?
> That’s it. That’s the test. Pick a vehicle. Can you get in, pick a destination and safely go to sleep?
The problem with the litmus test is that it sweeps all of the complexity of the question into the word "safely". Self driving cars aren't like, say, an elevator, which has physical mechanisms that an engineer can inspect and show to be in good working order.
There is no human being or group of beings that can inspect the quality of the training data used for full self driving--the black box cannot answer for itself. A car might be able to drive nicely on straight, sunny roads... But reality has far too many edge cases such that the models can never be trained for all of them. What would the model make of a zebra or ostrich walking down the road? What if a baseball line-drives towards the vehicle? A tennis ball lands in the road--not an issue in itself for the car, but the oblivious 3 year old chasing after it would be an issue.
Thus, allow me to propose an alternative razor, Invictus' Razor: It's not a self-driving car if it uses training data. Reality has too many edge cases and it's impossible to train for all of them.
I believe we will need a new algorithm to do this task, something that ends up mimicking the human neocortex. What we have now is fundamentally not good enough.
> It's not a self-driving car if it uses training data. Reality has too many edge cases and it's impossible to train for all of them.
Do you have a driver's license? If so, how did you get it?
My guess is, if you do, you got it by learning, and then demonstrating you had learned, to drive. And you learned by... Training, just like everyone else.
If that's not good enough for an automated system, then how is it good enough for human drivers? How the heck else is the system -- or anyone, or anything -- supposed to learn?
Here's a five year old kid driving a toy car. He didn't take any training course to do this. He didn't need to be told not to hit the dog or drive off the side of the hill.
Is this kid ready to drive on public roads? No, obviously not. But this kid is arguably better at handling edge cases on the road than any "self driving" system trained on a billion miles of road data.
Huh? Human drivers rely on hundreds of hours of training to safely and reliably handle most traffic situations, and still frequently fail to act appropriately in unexpected situations. We also get brain farts or fall asleep at the wheel and flatten the occasional pedestrian. According to the National Safety Council, the chances of dying from a motor vehicle crash are 1 in 103. Human driving isn't some impossible-to-pass benchmark of excellence; it's an endless shitshow of human tragedy.
> Human drivers rely on hundreds of hours of training to safely and reliably handle most traffic situations
Human training is different from AI training; they are so different that the same word shouldn't be used for both. The difference is that humans see things as they are, not as they appear. If a human learns to drive only during the day, and then after their training has to drive at night, they will still recognize the traffic lights and stop signs; an AI will not. Moreover, a human really doesn't need hundreds of hours to learn to drive. I was driving at 5 years old, in a toy Jeep with a top speed of maybe 10 miles an hour. I knew not to drive into the flower garden or hit the dog. In many states you can drive legally as early as 14, and historically many farm kids started younger than that with no training to speak of. When your vehicle is a slow tractor, it's not nearly as dangerous.
> We also get brain farts or fall asleep at the wheel and flatten the occasional pedestrian.
Great, this is something an algorithm that mimics the human brain would be able to improve upon.
> According to the National Safety Council, the chances of dying from a motor vehicle crash are 1 in 103
Over what time frame, presumably a lifetime of about 75 years? A fairly meaningless and alarmist statistic; it's better to talk about the 1.1 fatalities per 100 million miles.
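For what it's worth, the two figures are roughly consistent; here's the conversion, with the averages being assumptions:

    # Reconciling "1 in 103 lifetime odds" with "1.1 deaths per 100M miles".
    miles_per_year = 13_000          # assumed typical US annual mileage
    driving_years = 60               # assumed driving lifetime
    deaths_per_mile = 1.1e-8         # 1.1 per 100 million miles

    lifetime_miles = miles_per_year * driving_years       # 780,000 miles
    risk = lifetime_miles * deaths_per_mile               # ~0.0086
    print(f"about 1 in {1 / risk:.0f}")                   # ~1 in 117, same ballpark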
I get the point, but there are plenty of appliances that work entirely on their own but that many people wouldn't go to sleep running, such as an oven or a clothes drier. Falling asleep alone in a self-driving car would require a level of confidence I'm not sure we'll reach any time soon.
I don't know about in the US, but in the UK there have been enough media scare stories of dryers catching fire as well as a popular brand which failed to recall 1 million faulty dryers (https://www.thefpa.co.uk/news/update-on-fire-risk-dryer-reca...) that it's put a lot of people I know off. The risk is small, of course, but I do know people who won't leave things like dishwashers or washing machines on at night although I do because I'm lazy ;-)
I guess there are some other implicit conditions? Such as that you should be able to sleep in it without it causing accidents per distance driven, or causing only a few accidents, or only a few fatal ones.
If not, my bed also qualifies as a self-driving car.
Scotland has lower drink-driving limits than the rest of the UK, so I wait at least 12 hours after having any drinks at all, and check with a blood-alcohol tester, before I go near driving a car.
Yes, I have always said that I am not impressed by the selective Tesla promo videos.
Let me know once he takes himself and his family on a trip in difficult conditions with FSD on and all of them asleep.
If we wait for autonomy at the sleeping level, thousands will die due to drunk and distracted drivers. It's a much better plan to roll out partial autonomy to aid in accident reduction ASAP. Falling asleep in the car sounds like a vacationer's privilege, not a preventer of thousands of road deaths yearly.
No. I’d much rather meet drivers with a higher risk of crashing into me than autonomous cars with a slightly lower risk. I (and, I assume, a large part of the public) will not accept FSD cars on roads with merely “human level” as the bar. Perhaps when the improvement is an order of magnitude.
Microsleep is a reality. But until we get full self-driving, drunk and/or distracted drivers will still be drunk and/or distracted, and will not be able to react if the car chooses wrong. And from the Tesla self-driving vids I've seen, that happens more often than expected.
Where do you live? I live in Norway, and paid NOK 3,000,000 (USD $339,291) for my house. My car, a brand new MG ZS EV Luxury, cost me NOK 280,000 (USD $31,667).
It's not that some of us don't trust it, it's that it's statistically unwise. The probability of death for my age/weight/health group is almost negligible; however, the probability of adverse effects from the vaccine is non-negligible. Also, nobody knows the long-term effects of the vaccine. Plus, the natural antibodies are potentially lifelong, according to new research. Organic antibodies are better, so if you can get them or have them, you're better off.
I know you're just spouting garbage, but let me take the time to refute this for anyone else.
Any side effects from the vaccine also apply, at statistically higher rates, to anyone who gets infected. Death is not the only adverse outcome from covid.
The antibodies may be long-lasting, but they are more specific than the ones you get from the vaccine. Your protection against mutations is not as comprehensive as with the vaccines.
Not great, since it allows human remote assistance and even human in-person assistance, like Waymo has. If your taxi is a Tesla and the driver uses all the driver-assist features, does that make it self-driving? Plenty of grey areas.
90% of consumers will stop listening as soon as they hear phrases like "well-defined descriptors". They don't want to hear who can put the most syllables into words, they want to know when they can sleep in it.
Self driving cars today are self driving just like those two-wheeled "hoverboards" are hovering.
In that sense the linked article is absolutely correct.
Yeah, you're right, that is somewhat dubious... Didn't know that when I posted the link, actually. (Or if I had known at some time, I'd forgotten by now.)
But still, that doesn't stop him from being right. What are these "well defined descriptors" you're talking about -- and are they any better than his?
Alternatively, give Mr. Roy and his company credence by embedding his name - at his request - into the way we discuss autonomy. Should we expect people to understand what Roy's rule is?
Not sure if those "levels" were what the GGP, whom I was asking, meant. If they were, then no, they don't seem all that cut-and-dried and unambiguous to me: IIRC, there's controversy about them on this very comment page.
Alex Roy broke the law and risked the lives of others by doing the Cannonball Run. He is proud of this and has given many interviews on YouTube. In these videos, you will see Alex and his compatriots (Ed Bolian, Doug Tabbutt) explain how they avoid going to prison: they use technology to detect police, they wait out the statute of limitations, and they curry favour with the cops by handing over cards that show they're friends of the police: