This is why I believe the approach of incremental improvement towards full self-driving is fundamentally flawed. These advanced driver-assist tools are good enough to lull users into a false sense of security. No amount of "but our terms and conditions say you need to always pay attention!" will overcome the human tendency to build that trust and dependence.
I actually disagree. (And before you respond, please read my post because it’s not a trivial point.)
The fact that a huge formal investigation happened with just a single casualty is proof that it may actually be superior for safety in the long term (when combined with feedback from regulators and government investigators). One death in conventional vehicles is irrelevant. But because of the high profile of Tesla’s technology, it garners a bunch of attention from the public and therefore regulators. This is PRECISELY the dynamic that led to the ridiculously safe airline record. The safer it is, the more that rare deaths will be investigated and the causes sussed out and fixed by industry and regulators together.
Perhaps industry/Tesla/whoever hates the regulators and investigations. But I think they are precisely what will cause self driving to become ever safer, and eventually become as safe as industry/Tesla claims, safer than human drivers while also being cheap and ubiquitous. Just like airline travel today. A remarkable combination of safety and affordability.
This might be the only way to ever do it. I don’t think the airline industry could’ve ever gotten to current levels of safety by testing everything on closed airfields and over empty land for hundreds of millions of flight hours before they had sufficient statistics to be equal to today.
It can’t happen without regulators and enforcement, either.
Then why not flip the scheme? Instead of having the human as backup to the machine, make the machine back up the human. Let the human do all the driving and have the robot jump in whenever the human makes a mistake. Telemetry can then record all the situations where the human and the machine disagreed. That should provide all the necessary data, with the benefit of the robot perhaps preventing many accidents.
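A minimal sketch of what that shadow-mode telemetry could look like, in Python. All names, fields, and thresholds here are hypothetical illustrations of the idea, not any vendor's actual API:

```python
# Hypothetical sketch: run the self-driving stack in shadow mode while the human drives,
# and log any material disagreement between the two for later analysis.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class ControlFrame:
    steering_deg: float   # positive = right
    brake: float          # 0..1
    throttle: float       # 0..1

@dataclass
class Disagreement:
    timestamp: float
    human: ControlFrame
    machine: ControlFrame
    reason: str

# Arbitrary example thresholds for what counts as a "disagreement".
STEERING_DELTA_DEG = 15.0
BRAKE_DELTA = 0.5

def check_disagreement(human: ControlFrame, machine: ControlFrame) -> Disagreement | None:
    """Return an event when the shadow-mode machine would have acted very differently."""
    if abs(human.steering_deg - machine.steering_deg) > STEERING_DELTA_DEG:
        return Disagreement(time.time(), human, machine, "steering")
    if machine.brake - human.brake > BRAKE_DELTA:
        return Disagreement(time.time(), human, machine, "machine wanted hard braking")
    return None

def log_disagreement(event: Disagreement, path: str = "disagreements.jsonl") -> None:
    # Append as JSON lines so fleet telemetry can be aggregated offline.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")
```

The logging half is straightforward; letting the machine actually grab the wheel is the hard part.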
Of course this is impossible in the real world. Nobody is going to buy a car that will randomly make its own decisions, that will pull the wheel from your hands every time it thinks you are making an illegal lane change. Want safety? How about a Tesla that is electronically incapable of speeding. Good luck selling that one.
Nobody is going to buy a car that will randomly make its own decisions, that will pull the wheel from your hands every time it thinks you are making an illegal lane change.
That's almost exactly what my Honda does. Illegal (no signal) lane change results in a steering wheel shaker (and optional audio alert). And the car, when sensing an abrupt swerve which is interpreted as the vehicle leaving the roadway, attempts to correct that via steering and brake inputs.
But, I agree with your more general point - the human still needs to be primary. My Honda doesn't allow me to remove my hands from the steering wheel for more than a second or two. Tesla should be doing the same, as no current "autopilot" system is truly automatic.
> That's almost exactly what my Honda does. Illegal (no signal) lane change results in a steering wheel shaker (and optional audio alert). And the car, when sensing an abrupt swerve which is interpreted as the vehicle leaving the roadway, attempts to correct that via steering and brake inputs.
By the way, this is fucking terrifying when you first encounter it in a rental car on a dark road with poor lane markings while just trying to get to your hotel after a five hour flight.
I didn't encounter an obvious wheel shaker, but this psychotic car was just yanking the wheel in different directions as I was trying to merge onto a highway.
Must be what a malfunctioning MCAS felt like in a 737 MAX, but thankfully without the hundreds of pounds of hydraulic force.
> Illegal (no signal) lane change results in a steering wheel shaker (and optional audio alert).
To be clear, tying the warning to the signal isn't about preventing unsignaled lane changes, it's gauging driver intent (i.e. is he asleep and drifting or just trying to change lanes). It's just gravy that it will train bad drivers to use their signals properly.
Is a lane change without a signal always illegal? I know that it almost certainly makes you liable for any resulting accident, but I'm not sure that it is universally illegal.
> 142 (1) The driver or operator of a vehicle upon a highway before turning (...) from one lane for traffic to another lane for traffic (...) shall first see that the movement can be made in safety, and if the operation of any other vehicle may be affected by the movement shall give a signal plainly visible to the driver or operator of the other vehicle of the intention to make the movement. R.S.O. 1990, c. H.8, s. 142 (1).
That said there's zero cost to doing so regardless of whether other drivers are affected.
It could be one of those "if a tree falls in the forest" scenarios. If a cop is near enough to see you not signal, he could easily argue that he himself might have been affected by the turn or lane change.
Yes, failure to signal is a traffic violation. At least everywhere I've lived/traveled in the US. It's also a rather convenient excuse for police to "randomly" pull you over (I've been pulled over by Chicago PD for not signaling for a lane change, despite actually having done so).
I have no idea, but the point wasn't so much that the lane change is illegal, but that lack of signal is used to indicate lack of driver attention. I shouldn't have used "illegal" in my original post.
Just to add, I have a 2021 Honda, and disabling this functionality is a 1-button-press toggle on the dash to the left of the steering wheel. Not mandatory.
Interesting, I assumed it didn't, given the prevalence of stories about drivers watching movies on their phones. I guess they just leave one hand lightly on the wheel, but are still able to be ~100% disengaged from driving the car.
On most or all roads below 100km/h autopilot won’t allow speeding, and therefore I drive at the limit, which I know I would not have done if I controlled it. It also stays in the lane better than I do, keeps distance better, and more. Sometimes it’s wonky when the street lines are unclear. It’s not perfect but a better driver than I am in 80% of cases.
My insurance company gives a lower rate if you buy the full autopilot option, and that to me indicates they agree it drives better than I, or other humans, do.
On most or all roads below 100km/h autopilot won’t allow speeding, and therefore I drive at the limit, which I know I would not have done if I controlled it
If following the speed limit makes cars safer, another way to achieve that without autopilot is to just have all cars limit their speed to the speed limit.
Sometimes it’s wonky when the street lines are unclear. It’s not perfect but a better driver than I am in 80% of cases
The problem is in those 20% of cases where you're lulled into boredom by autopilot as you concentrate on designing your next project in your head, then suddenly autopilot says "I lost track of where the road is, here you do it!" and you have to quickly gain context and figure out what the right thing to do is.
Some autopilot systems use eye tracking to make sure that the driver is at least looking at the road, but that doesn't guarantee that he's paying attention. But at least that's harder to defeat than Tesla's "nudge the steering wheel once in a while" method.
> just have all cars limit their speed to the speed limit.
The devil is in the details... GPS may not provide sufficient resolution. Construction zones. School zones with variable hours. Tunnels. Adverse road conditions. Changes to the underlying roads. Different classes of vehicles. Etc.
By the time you account for all the mapping and/or perception, you could've just improved the autonomous driving and eliminated the biggest source of driving errors: the human.
The single system you're describing, with all of its complexity, is a subset of what is required for autonomous vehicles. We will continue to have road construction, tunnels, and weather long past the last human driver. Improving the system here simply improves the system here -- you cannot forsake this work by saying "oh the autonomous system will solve it" -- this is part of the autonomous system.
But you can still impose a maximum speed based on available data to cover most normal driving conditions, while leaving it on the driver to slow down further when appropriate. And that could be implemented today, not a decade from now when autonomous driving is trustworthy.
The parent post said that autopilot won't let him go over the speed limit and implies that makes him safer. My point is that you don't need full autopilot for that.
So this is not a technical problem at all, but a political one. As the past year has shown, people won't put up with any inconvenience or restriction, even if it could save lives (not even if it could save thousands of lives).
GPS is extremely accurate honestly. My Garmin adjusts itself the very instant I cross over a speed limit sign to a new speed, somehow. Maybe they have good metadata, but it's all public anyway under some department of transportation domain and probably not hard to mine with the price of compute these days. Even just setting a top speed in residential areas of like 35mph would be good and save a lot of the lives that are lost when pedestrians meet cars traveling at 50mph. A freeway presents a good opportunity to add sensors to the limited on- and off-ramps for the car to detect that it's on a freeway. Many freeways already have some sort of sensor-based system for charging fees.
What would be even easier than all of that, though, is just installing speeding cameras and mailing tickets.
Just add all those to the map system. It could be made incredibly accurate, if construction companies are able to actually submit their work zones and "geofence" them off on the map.
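As a rough illustration of what that could look like (invented data and names, not any real map API), posted limits plus geofenced overrides reduce to "take the most restrictive rule that applies here":

```python
# Toy sketch: posted limit from map metadata, with geofenced overrides
# (work zones, school zones, etc.) layered on top. Everything here is invented.
from dataclasses import dataclass

@dataclass
class Geofence:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    limit_kmh: int
    reason: str

    def contains(self, lat: float, lon: float) -> bool:
        return self.min_lat <= lat <= self.max_lat and self.min_lon <= lon <= self.max_lon

def effective_limit(posted_kmh: int, lat: float, lon: float,
                    overrides: list[Geofence]) -> int:
    """The most restrictive applicable rule wins."""
    limits = [posted_kmh] + [g.limit_kmh for g in overrides if g.contains(lat, lon)]
    return min(limits)

if __name__ == "__main__":
    work_zone = Geofence(45.4200, 45.4210, -75.7000, -75.6980, 60, "construction")
    print(effective_limit(100, 45.4205, -75.6990, [work_zone]))  # prints 60
```

The hard part is keeping the override data fresh, which is exactly why submissions from construction companies (or transport departments) would matter.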
During the 3 years I’ve owned it there are 2 places where lines are wonky and I know to take over.
I have not yet struggled to stay alert when it drives me, and it has driven better than I would have - so it certainly is an improvement over me driving 100% of the time. It does not have road rage and it does not enjoy the feeling of speeding, like I do when I drive, nor does it feel like driving is a competition, like I must admit I do when I am hungry, stressed, or tired.
> just have all cars limit their speed to the speed limit
No way I’d buy a car that does not accelerate when I hit the pedal. Would you buy a machine that is not your servant?
> Of course this is impossible in the real world. Nobody is going to buy a car that will randomly make its own decisions, that will pull the wheel from your hands every time it thinks you are making an illegal lane change.
Yeah, and add to that the unreliability of Tesla's system, which means it cannot be allowed to pull the wheel from the driver, because it's not unusual for it to want to do something dangerous and need to be stopped. You don't want it to "fix" a mistake by driving someone into the median divider.
> Let the human do all the driving and have the robot jump in whenever the human makes a mistake.
Because when the human disagrees with the machine, the machine is usually the one making a mistake. It might prevent accidents, but it would also cause them, and you lose predictability in the process (you have to model the human and the machine).
> Want safety? How about a Tesla that is electronically incapable of speeding.
That would be unsafe in many situations. If the flow of traffic is substantially above the speed limit--which it often is--being unable to match it increases the risk of accident. This is known as the Solomon curve [1].
> Subsequent research suggests significant biases in the Solomon study, which may cast doubt on its findings
With the logic presented in the theoretical foundation section, it seems that the safer move would actually be slow down and match the speed of all the trucks and other large vehicles... which won't happen.
Matching speed sounds great, except there are always people willing to go faster and faster. In my state they raised the speed limit from 70 to 75; it just means more people are going 85-90. How is that safer?
To address your last paragraph, everyone going 85-90 is less safe than everyone going 70-75, you are correct.
However, you individually going 70-75 when everyone else is going 85-90 is less safe than you going 85-90 like everyone else in the exact same situation.
> there are always people willing to go faster and faster
That’s why no one says “go as fast as the fastest vehicle you see”; it’s “go with the general speed of traffic”. Figuring that out is an exercise in human judgment, which is why IMO it isn’t a smart idea to have the car automatically lock you out of overriding the speed limit.
> However, you individually going 70-75 when everyone else is going 85-90 is less safe than you going 85-90 like everyone else in the exact same situation.
And yet the roads are full of vehicles literally incapable of going 85. Many trucks cannot do more than 69mph.
People are going faster because they feel it's safer, not because of the speed limit. You can design roads that cause humans to slow down and be more careful.
A self driving car obviously needs to be aware of other cars on the road. I don't see any reason why the car couldn't observe other cars, see what speed they are going at, and refuse to go faster than the rest. A car that refuses to do 120mph when all the other cars are doing 60mph in a 50mph zone should be trivial.
(Trivial if the self driving tech works at all....)
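A rough sketch of that cap (the margin and the use of the median are arbitrary choices of mine, not anything from the thread):

```python
# Toy sketch: allow matching the general flow of traffic, but refuse to wildly exceed it.
import statistics

def speed_cap_mph(posted_limit: float, neighbor_speeds: list[float],
                  margin: float = 5.0) -> float:
    """Cap at the median observed speed plus a small margin, never below the posted limit."""
    if not neighbor_speeds:
        return posted_limit
    flow = statistics.median(neighbor_speeds)
    return max(posted_limit, flow + margin)

# The comment's example: everyone else is doing ~60 mph in a 50 zone.
# The cap lands around 65 mph, so asking for 120 mph is simply refused.
print(speed_cap_mph(50, [58, 60, 61, 62]))  # prints 65.5
```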
You're getting downvoted for this comment apparently, but I'm still of the firm belief that we will never see full autonomous driving without some sort of P2P network among cars/infrastructure.
There's just too much shit that can't be "seen" with a camera/sensor in congested traffic. Having a swarm of vehicles all gathering/sharing data is one of the only true ways forward IMO.
> How about a Tesla that is electronically incapable of speeding. Good luck selling that one.
Instead they did the exact opposite with the plaid mode model S, lol. It kind of works against their claims that they prioritize safety when their hottest new car - fully intended for public roads - has as its main selling point the ability to accelerate from 60-120 mph faster than any other car.
I keep saying the same thing actually whenever people say that manual driving will be outlawed. Like, no, it won't be - because the computers will still save you in most cases either way, autopilot enabled or not.
>>How about a Tesla that is electronically incapable of speeding. Good luck selling that one.
From 2022 all cars sold in the EU have to have an electronic limiter that keeps you to the posted speed limit (by cutting power if you are already going faster) - the regulation does allow the system to be temporarily disabled, however.
Your summary is incorrect. The ETSC recommends that Intelligent Speed Assistance should be able to be overridden.[1] It's supposed to not accelerate as much if you're exceeding the speed limit, and if you override by pressing the accelerator harder, it should show some warning messages and make an annoying sound. It's stupid, but it doesn't actually limit the speed of your car.
I think it's a silly law and I'm very glad I don't live in a place that requires such annoyances, but it's not as bad as you're claiming.
I hired a new car with Intelligent Speed Assistance this summer, though it was set (and I left it) just to "ping" rather than do any limiting. I drove it to a fairly unusual place, though still in Europe and with standard European signs. It did not have a GPS map of the area.
It could reliably recognize the speed limit signs (red circle), but it never recognized the similar grey-slash end-of-limit signs. It also didn't recognize the start-of-town or end-of-town signs, so it didn't do anything about the limits they implied.
I would certainly have had to disable it, had it been reducing the acceleration in the way that document describes.
What? Cars with collision detection systems already exist, and they can handle both side on and head on collision avoidance when a human is driving.
People literally are buying cars that “make their own decisions”. Importantly though, these systems only activate in the case of an imminent collision IF the corrective measure won’t cause another collision.
> that will pull the wheel …
Yeah, of course no one is going to buy a car that does what you describe, because what you describe is insane and inherently unsafe. Unless a collision is imminent, nothing happens.
This line of thinking is flawed because it assumes a smooth surface over the safety space, where if you make incremental improvements you will head towards some maximum of safety. E.g.: the wing fell off; investigate; find that you can't use brittle aluminum; tell aircraft manufacturers to use a more ductile alloy. Self-driving technology isn't like that -- you can't just file a bug "don't mistake a human for a plastic bag", fix that bug and move on to the next one. No number of incremental fixes will produce self-driving that works the way any reasonable human would expect it to.
This argument is flawed, because when regulators investigate a Tesla crash, Waymo doesn't care in the slightest. The technologies (emphasis on skeuomorphic cameras vs. lidar), approaches (emphasis on generating as many situations as possible in simulated worlds and carefully transitioning to the business case vs. testing as early as possible in the real world with background data capture) and results are so different between the actors in this specific industry that one's flaws being fixed or improved won't necessarily translate into others benefitting from it.
Conversely, when Waymo iterates and improves their own safety ratios by a significant amount, that evidently does not result in Tesla's improving in return.
Asking someone to pay attention when they are not doing anything is unrealistic.
I would be constantly bored / distracted. My wife would instantly fall asleep. Etc etc.
I largely agree with you, but I just wish regulators would start by only allowing these assist programs for people that are already known to be poor drivers. The elderly and convicted drunk drivers, for example. That way we could have the best of both worlds.
Requiring people to buy a special car/system to be able to drive doesn't seem like an incentive - it seems similar to the interlock system we currently require drunk drivers to purchase to be able to drive.
If anything a driver monitoring system seems even better than the interlock system, for example you couldn't have your kids/friends blow for you to bypass it.
I disagree. I would not put people who have shown poor judgment into situations where they can further hurt others or themselves. People like that are more likely not to pay attention and to do other irresponsible things.
Tesla's safety report lacks data and is extremely misleading.
1. Autopilot only works on (or is intended to work on) highways. But they are comparing their highway record to all accident records, including city driving, where the accident rate is far higher than on highways.
2. They're also comparing with every vehicle in the United States including millions of older vehicles. Modern vehicles are built for higher safety and have a ton of active safety features (emergency braking, collision prevention etc). Older vehicles are much more prone to accidents and that skews the numbers.
The reality is Teslas are no safer than any other vehicles in its class ($40k+). Their safety report is purely marketing spin.
They also include miles driven by previous versions of their software in the “safe miles driven” tally. There’s no guarantee any improvement would not have resulted in more accidents. They should reset the counter on every release.
> The reality is Teslas are no safer than any other vehicles in its class ($40k+).
Would another way of saying this be that they are as safe as other vehicles in that class? And that therefore Autopilot is not more unsafe than driving those other cars?
Do you know many vehicles $40K+ that don't have BLIS and rear/front cross-traffic alerts? While a radar-based blind spot alert (one that warns if a car behind is moving too fast to safely merge) is probably irrelevant for city driving, the cross-traffic alert is extremely useful when pulling out of a driveway obstructed by parked cars; I personally have seen several accidents just on my street that could have been prevented with cross-traffic detection. I think the expensive models (S/X) still have the front radar so they may have the front alert, but I don't think any model ever had the rear radar for the rear cross-traffic alert.
I would probably agree, but I also think it’s a case of “need more data”.
We should really compare Autopilot with its competitors like GM’s Super Cruise or Ford’s Blue Cruise, both of which offer more capabilities than Autopilot. That will show if Tesla’s driver assist system is more or less safe than their competitors product.
What capabilities does GM or Ford have that Tesla doesn't? Neither GM nor Ford have rolled out automatic lane changing. Teslas have been doing that since 2019.
The reason GM's Super Cruise got a higher rating by Consumer Reports was because CR didn't even test the capabilities that only Tesla had (such as automatic lane change and taking offramps/onramps). Also, the majority of the evaluation criteria weren't about capabilities. eg: "unresponsive driver", "clear when safe to use", and "keeping the driver engaged".[1]
> In the 1st quarter, we registered one accident for every 4.19 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.05 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 978 thousand miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.
I think the comparison should be Tesla with/without AI, not Tesla/not-Tesla; so roughly either x2 or x4 depending on what the other active safety features do.
It’s not nothing, but it’s much less than the current sales pitch — and the current sales pitch is itself the problem here, for many legislators.
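For anyone checking the quoted Q1 figures, the ratios work out roughly like this (a quick back-of-the-envelope calculation, nothing more):

```python
# Miles per reported accident, taken from the quote above.
miles_per_accident = {
    "Autopilot engaged": 4_190_000,
    "Active safety only": 2_050_000,
    "Neither": 978_000,
    "NHTSA US average": 484_000,
}
baseline = miles_per_accident["Autopilot engaged"]
for label, miles in miles_per_accident.items():
    print(f"Autopilot engaged vs {label}: {baseline / miles:.1f}x")
# roughly 2.0x vs Teslas with active safety only, 4.3x vs Teslas with neither,
# and 8.7x vs the NHTSA all-vehicle average that the sales pitch leans on.
```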
> we registered one accident for every 4.19 million miles driven in which drivers had Autopilot engaged [...] for those driving without Autopilot but with our active safety features, we registered one accident for every 2.05 million miles driven
This still isn't the correct comparison. Major selection bias with comparing miles with autopilot engaged to miles without it engaged, since autopilot cannot be engaged in all situations.
A better test would be to compare accidents in Tesla vehicles with the autopilot feature enabled (engaged or not) to accidents in Tesla vehicles with the autopilot feature disabled.
As was stated elsewhere, most accidents happen in city driving where Autopilot cannot be activated, so the with/without AI comparison is meaningless. We need to figure out when the AI could have been activated but wasn't; if you do that, then you are correct.
On the contrary to your overall point: The fatal crash rate per miles driven is almost 2 times higher in rural areas than urban areas. Urban areas may have more accidents, but the speeds are likely lower (fender benders).
Tesla's vehicles are almost 10X safer than the average vehicle. Whether their autopilot system is contributing positively or negatively to that safety record is unclear.
The real test of this would be: of all Tesla vehicles, are the ones with autopilot enabled statistically safer or less safe than the ones without autopilot enabled?
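In rough terms, that cohort test is just a rate comparison over all miles driven, regardless of whether Autopilot was engaged at the moment of a crash. A sketch with placeholder numbers, not real data:

```python
# Hypothetical cohort comparison: cars with the feature enabled vs. cars without it,
# counting every mile and every crash, not just the Autopilot-engaged ones.
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

enabled = crashes_per_million_miles(crashes=120, miles=500_000_000)    # placeholder
disabled = crashes_per_million_miles(crashes=300, miles=900_000_000)   # placeholder
print(f"enabled: {enabled:.2f}/M mi, disabled: {disabled:.2f}/M mi, "
      f"ratio: {enabled / disabled:.2f}")
```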
I have a Subaru Forester base model with lane keeping and adaptive cruise control.
I need to be touching the wheel and applying some force to it or it begins yelling at me and eventually brings me slowly to a stop.
I’ve had it for a year now and I cannot conceive of a way, without physically altering the system (like hanging a weight from the wheel maybe?), that would allow me to stop being an active participant.
I think the opposite is true: Tesla’s move fast and kill people approach is the mistake. Incremental mastering of autonomous capabilities is the way to go.
I own a Model Y and am a pretty heavy Autopilot user. You have to regularly give input on the steering wheel and if you fail a few times it won't let you re-engage until you park and start again.
Personally Autopilot has actually made driving safer for me... I think there's likely abuse of the system though that Tesla could work harder to prevent.
I personally think the issue boils down to their use of the term "Autopilot" for a product that is not an autopilot (and never will be with the sensor array they're using, IMO).
They are sending multiple signals that this car can drive itself (going so far as charging people money explicitly for the "self-driving" feature) when it cannot in the slightest do much more than stay straight on an empty highway.
They should be forced to change the name of the self-driving features, I personally think "Backseat Driver" would be more appropriate.
> the issue boils down to their use of the term "Autopilot" for a product that is not an autopilot
It is literally an autopilot. Just like an autopilot on an airplane, it keeps you stable and in a certain flight corridor. There's virtually no difference except for Tesla's Autopilot's need to deal with curved trajectories.
Well, they still need to avoid the collisions more reliably, apparently. Once they do it perfectly reliably I will add it into the list of the things it does differently from an airplane autopilot. ;)
Autopilot is precisely the correct term -
An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators.
> physically altering the system (like hanging a weight from the wheel maybe?)
was exactly what people were doing. But it's also possible to be physically present, applying force, but being "zoned out", even without malicious intent.
> I need to be touching the wheel and applying some force to it or it begins yelling at me and eventually brings me slowly to a stop.
> I’ve had it for a year now and I cannot perceive of a way, without physically altering the system (like hanging a weight from the wheel maybe?) that would allow me to stop being an active participant.
That's exactly what people were doing with the Tesla. Hanging a weight to ensure the safety system doesn't kick in. [0][1]
If people are consciously modifying their car to defeat obvious safety systems, I have a really hard time seeing how the auto manufacturer should be responsible.
I guess the probe will reveal what share of fatal accidents are caused by this.
Well, it doesn't help when the CEO of the company publicly states that the system is good enough to drive on its own and those safety systems are only there because of regulatory requirements.
GM's Supercruise (which is the actual king of the hill for L2 systems) uses cameras to track the driver's eye position to ensure they are paying attention. It's significantly harder to defeat, is geofenced to prevent use in incompatible situations like surface streets, and has a much more graceful disengagement process. Most of the time autopilot is smooth, but sometimes it just hands control back to the driver without warning.
Teslas can famously be tricked by wedging an orange between the rim and spoke of the steering wheel to produce enough torque on the wheel to satisfy the detection. There are enough videos of it on YouTube that Tesla could easily be found negligent for not doing enough to prevent drivers from defeating a safety system, given that alternative technology that more directly tracks attention is available and tricking Tesla's detection method became common knowledge.
You're literally describing how the Tesla system works. It requires you to keep your hand on the wheel and apply a slight pressure every so often. The cabin camera watches the driver and if they're looking down or at their phone, it does that much more often.
People causing these problems almost certainly are putting something over the cabin camera and a defeat device on the steering wheel.
Personally I’ve found this to be sufficient in my Forester. Even holding the wheel but not being “there” isn’t enough. The car is really picky about it.
One problem that is often ignored in these debates is that people already don't always pay attention while driving. Spend some time looking at other drivers next time you are a passenger in slow traffic. The number of drivers on their phones, eating, doing makeup, shaving, or even reading a book is scary.
It therefore isn't a clean swap of a human paying attention to a human who isn't. It becomes a complicated equation that we can't just dismiss with "people won't pay attention". It is possible that a 90%/10% split of drivers paying attention to not paying attention is more dangerous when they are all driving manually than a 70%/30% split if those drivers are all using self-driving tech to cover for them. Wouldn't you feel safer if the driver behind you who is answering texts was using this incremental self-driving tech rather than driving manually?
No one has enough data on the performance of these systems or how the population of drivers use them to say definitively that they are either safer or more dangerous on the whole. But it is definitely something that needs to be investigated and researched.
That is the fatal flaw in anything but a perfect system - any kind of system that takes the decisions about steering away from the driver is going to result in the driver at best thinking about other things and at worst getting into the back seat to change. If you had to develop a system to make sure someone was paying attention, you wouldn't make them sit in a warm comfy seat looking at a screen - you would make them actively engage with what they were looking at - like steering.
And ultimately it doesn't matter how many hundreds of thousands of hours of driving you teach your system with, it may eventually be able to learn about parked cars, kerbs and road signs, but there won't be enough examples of different accidents and how emergency vehicles behave to ever make it behave safely. Humans can cope with driving emergencies fairly well (not perfectly admittedly) no matter how many they've been involved in using logic and higher level reasoning.
I remember reading Donald Norman's books decades ago, and one of the prime examples of the dangers of automation in cars was adaptive cruise control- which would then suddenly accelerate forward in a now-clear off-ramp, surprising the heck out of the previously-complacent driver, and leading to accidents.
We've known for a very long time that this sort of automation/manual control handoff failure is a very big deal, and yet there seems to be an almost willful blindness from the manufacturers to address it in a meaningful way.
To tell you the truth, I generally do and haven't used it for ages. Where I drive, the roads have some amount of traffic. I find (traditional) cruise control encourages driving at a constant speed to a degree that I wouldn't as a driver with a foot on the gas. So I don't "hate" regular cruise control but I basically never use it.
Maybe a fairly small sample size but I don't know the last time I've been in a car where the driver has turned on cruise control. But it probably varies by area of the country. In the Northeast, there's just enough traffic in general that it's not worth it for me.
In traffic is where traffic-aware cruise control is most useful. A lot of people I knew who bought Teslas in the Bay Area specifically bought them so their commutes would be less stressful with the bumper-to-bumper traffic. I drove 3000+ miles across the country last year with > 90% of it on AP, and I was way less tired with AP on vs. off; it allowed me to just stay focused on the road and look out for any issues.
One thing worth noting about Subaru's approach to this that is specifically relevant to bumper-to-bumper traffic, is that it will stop by itself, but it won't start moving by itself - the driver needs to tap the accelerator for that. It will warn you when the car in front starts moving, though.
Yes. I was (explicitly) talking about traditional "dumb" cruise control. I haven't used adaptive cruise control but I agree it sounds more useful than traditional cruise control once you get above minimal traffic.
I disagree. One feature my car has is to pull me back into the lane when I veer out of it (Subaru's lane keep assist). That is still incremental improvement towards "full self driving". I agree, however, that Tesla's Autopilot is not functional enough, and any tool designed to allow humans to remove their hands from the wheel should not require their immediate attention in any way.
I think people just assume Tesla's Autopilot is more capable than it really is.
My car has adaptive cruise control and lane keep assist, but I'm not relying on either for anything more complex than sipping a drink while on the highway.
Yep, if anything they’re just a way to make long drives or stop and go highway traffic more tolerable. When I got my first car with those features it seemed like a gimmick, but they really help to reduce fatigue.
I think that's actually a step towards a local maximum that makes us less likely to achieve actual FSD. The safer we can make AI-guided driving where a person is still in control, the higher the bar becomes for a solo AI to be significantly safer than the alternatives.
A car that beeps when I drift out of lane, or beeps when I go too fast before a curve, or beeps like hell if I cross over the center median would be hugely useful, because a record of every warning would be there, whether correct or not.
Conversely, if it didn't warn me right before an accident, then the absence of that warning would be useful too.
All of that information should be put back into the model based on crash reporting. Everything else can be ignored.
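A small sketch of what feeding that back could look like: join crash reports to the warnings (or conspicuous lack of them) that preceded each crash. The field names and window size below are my own invention, purely for illustration.

```python
# Toy sketch: label each crash with the warnings that fired shortly before it.
# An empty list is itself a signal -- the system missed the event entirely.
from dataclasses import dataclass

@dataclass
class Warning:
    t: float      # seconds since trip start
    kind: str     # e.g. "lane_drift", "curve_speed", "median_cross"

@dataclass
class Crash:
    t: float

WINDOW_S = 10.0   # how far before a crash a warning still "counts"

def label_crashes(warnings: list[Warning], crashes: list[Crash]) -> list[dict]:
    out = []
    for c in crashes:
        preceding = [w.kind for w in warnings if c.t - WINDOW_S <= w.t <= c.t]
        out.append({"crash_t": c.t, "warnings_before": preceding})
    return out

print(label_crashes([Warning(101.0, "lane_drift")], [Crash(105.0), Crash(300.0)]))
# first crash had a lane_drift warning 4s before it; the second had no warning at all
```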
I would argue that the information should be available to all automakers (perhaps using the NHTSA as a conduit), so that each of them have the same safety information, but can still develop their own models. The FAA actually does this already with the FAA Accident and Incident Data Systems [0] and it has worked pretty darn well.
The new Toyota RAV4s have this feature -- if you go out of bounds in your lane they beep and the steering wheel gives a bit of resistance.
It also reads the speed limit signs and places a reminder in the display. I think it can brake if it detects something in front of it, but I'm not certain.
My main point (perhaps buried more than it should have been) is that centralizing accident data along with whether an alert went off (or not), and sharing that with all automobile manufacturers, can help this process proceed better.
Right now the data is highly fragmented and there is not really a common objective metric by which to make decisions to improve models.
I think the fundamental flaw is indisputable. Everyone is aware of in-between stage problems. I don't think it's an insurmountable flaw.
These things are on the road already. They have issues, but so do human only cars. Tweaks probably get made, like some special handling of emergency vehicle scenarios. But, it's not enough to stop it.
Meanwhile, it's not a permanent state. Self-driving technology is advancing, becoming more common on roads. Procedures, as well as infrastructure, are growing around the existence of self-driven cars. Human supervisor or not, the way these things use the road affects the design of roads. If your emergency speed sign isn't being heeded by self-driven cars, your emergency speed sign has a bug.
It depends. As long as the resulting package (flawed self driving system + the average driver) isn't significantly more dangerous than the average unassisted human driver, I don't consider it irresponsible to deploy it.
"The average driver" includes everyone, ranging from drivers using it as intended with close supervision, drivers who become inattentive because nothing is happening, and drivers who think it's a reasonable idea to climb into the back seat with a water bottle duct taped to the steering wheel to bypass the sensor.
OTOH, the average driver for the unassisted scenario also includes the driver who thinks they're able to drive a car while texting.
> As long as the resulting package (flawed self driving system + the average driver) isn't significantly more dangerous than the average unassisted human driver...
Shouldn't that be compared to "average driver + myriad of modern little safety features" instead of "average unassisted driver"? Someone who has the means to drive a Tesla with the "full self-driving" mode certainly has the means to buy, say, a Toyota full of assistance/safety features (lane change assist, unwanted lane change warning and whatnot).
It almost certainly is, at least when combined with the intentional inattention that follows.
Making it a crime isn't an "obvious solution" to actually make it not happen. Drunk driving is a crime and yet people keep doing it. Same with texting and driving.
The problem is determining who is liable for damages, not prevention. Shifting the liability for willfully disabling a safety control puts them on notice.
Prevention as a goal is how we end up with dystopia.
Does that even matter? If the state doesn’t care to enforce its laws against reckless driving, why should the manufacturer be encumbered with that responsibility?
> drivers using it as intended with close supervision
Doesn't this hide a paradox? Using a self-driving car as intended implies that the driver relinquishes a part of the human decision making process to the car. While close supervision implies that the driver can always take control back from the car, and therefore carries full personal responsibility of what happens.
The caveat here is that the car might make decisions in a rapidly changing, complex context which the driver might disagree with, but has no time to correct for through manual intervention. e.g. hitting a cyclist because the autonomous system made an erroneous assertion.
Here's another way of looking at this: if you're in a self-driving car, are you a passenger or a driver? Do you intend to drive the car yourself or let the car transport you to your destination?
In the unassisted scenario, it's clear that both intentions are one and the same. If you want to get to your location, you can't but drive the car yourself. Therefore you can't but assume full personal responsibility for your driving. Can the same be said about a vehicle that's specifically designed and marketed as "self-driving" and "autonomous"?
As a driver, you don't just relinquish part of the decision making process to the car, what essentially happens is that you put your trust in how the machine learning processes that steer the car were taught to perceive the world by their manufacturer. So, if both car and occupant disagree and the ensuing result is an accident, who's at fault? The car? The occupant? The manufacturer? Or the person seeking damages because their dog ended up wounded?
The issue here isn't that self-driving cars are inherently more dangerous than their "dumb" counterparts. It's that driving a self-driving car creates its own separate class of liabilities and questions regarding responsible driving when accidents do happen.
The average driver breaks multiple laws on every trip. Most of the time no one gets hurt. But calibrating performance against folks violating traffic and criminal laws sets the bar too low for an automated system. We should be aiming for standards that either match European safety levels or the safety of modes of air travel or rail travel.
Except that doesn't work if you're trying to produce a safe product. Investigations into crashes in the airline industry have proven that removing pilots from active participation in the control loop of the airplane results in distraction and an increased response time when an abnormal situation occurs. Learning how to deal with this is part of pilots' training, plus they have a co-pilot to keep an eye on things and back them up.
An imperfect self-driving vehicle is the worst of all worlds: it lulls the driver into the perception that the vehicle is safe while not being able to handle abnormal situations. The fact that there are multiple crashes on the record where Teslas have driven into stationary trucks and obstacles on roads is pretty damning proof that drivers can't always react in the time required when an imperfect self-driving system is in use. They're not intrinsically safe.
At the very least, drivers should be required to complete additional training to operate these systems. Like pilots, drivers need to be taught how to recognize when things go awry and how to react to possible failures. Anything less is not rooted in safety culture, and it's good to see there are at least a few people starting to shine a light on how these systems are being implemented from a safety perspective.
> Perfect is the enemy of good, and rejecting a better system because it isn't perfect seems like an absurd choice.
Nothing absurd about thinking a system which has parity with the average human driver is too risky to buy unless you consider yourself to be below average at driving. (As it is, most people consider themselves to be better than average drivers, and some of them are even right!) The accident statistics that comprise the "average human accident rate" are also disproportionately caused by humans you'd try to discourage from driving in those circumstances...
Another very obvious problem is that an automated system which kills at the same rate per mile as the average human driver will tend to be driven a lot more, because it requires no effort (and it will probably replace better-than-average commercial drivers long before teenagers and occasional-but-disproportionately-deadly drivers can afford it).
Yes, I agree. We should hold automated systems to a higher standard. Unless you’re proposing we ban automated systems until they’re effectively perfect because that would perversely result in a worse outcome: being stuck with unassisted driving forever.
Full self driving is one of those things where getting 80% of the way there will take 20% of the effort and getting the remaining 20% of the way there will take 80% of the effort.
Tesla auto-drive seems like it's about 80% of the way there.
Is it? Tesla is still alive because they're selling cars.
It's just that the companies that are NOT doing incremental approaches are largely at the mercy of some investors who don't know a thing about self-driving, and they may die at any time.
I agree with you that it is technically flawed, but it may still be viable in the end. At least their existence is not dependent on the mercy of some fools who don't get it, they just sell cars to stay alive.
That's one of the major problems of today's version of capitalism -- it encourages technically flawed ways to achieve scientific advancement.
Would be interesting to know how many buy based on the FSD hype (including the ones who don't pay for the package) and how many buy because of the "green" factor.
However many there are who buy because of the FSD promise, all that revenue is coming from vaporware (beta-ware at best) and is possible due to lack of regulatory enforcement.
History shows that the longer self-regulating entities take the piss, the harder the regulatory hammer eventually comes down.
I mean, there are videos of a vehicle's occupant sitting in the rear seats making food and drinks while the vehicle is tricked into operating solely off of its sensors.
It is not solely the trust and dependence; it also includes the group of idiots with access to wealth and no regard for human life.
I expect that to design self-driving you need to push the limits a bit (with some accidents), backed by a bunch of telemetry. Going from not-much to full self-driving requires a lot of design increments.
Today, at least one of the most advanced neural networks enters each car: a human being. If we could just implement the AI to augment this person rather than replace them...
What would such an AI even look like? If it spots every real danger but also hallucinates even a few dangers that aren’t really there, it gets ignored or switched off for needlessly slowing the traveler down (false positives, apparently an issue with early Google examples [0]); if it only spots real dangers but misses most of them, it is not helping (false negatives, even worse if a human is blindly assuming the machine knows best and what happened with e.g. Uber [1]); if it’s about the same as humans overall but makes different types of mistake, people rely on it right up until it crashes then go apoplectic because it didn’t see something any human would consider obvious (e.g. Tesla, which gets slightly safer when the AI is active, but people keep showing the AI getting confused about things that they consider obvious [2]).
This is the bit nobody likes to realize. FSD at its best... is still about as fallible as a human driver. Minus free will (it is hoped).
I will be amused, intrigued, and possibly a bit horrified if, by the time FSD hits level 5 (assuming they stick with the neural-net-of-neural-nets architecture), there isn't a rash of system-induced variance in behavior as emergent phenomena take shape.
Imagined news:
All Teslas on I-95 engaged in creating patterns whereby all non-Tesla traffic was bordered by a Tesla on each side. Almost like a game of Go, says expert. Researchers stumped.
Then again, that'd imply you had an NN capable of retraining itself on the fly to some limited degree, which I assume no one sane would put into service... Hopefully this comment doesn't suffer a fate of not aging well.
All Level 2 systems need to better integrate with the driver. Upon engagement, driver and driver assist are a team where communication and predictability are crucial.
This is a misrepresentation of the dumpster fire that was the WMATA train situation. Yes, the fatal crash was the last straw, but the root problem was not the automation system but rather the complete lack of maintenance that led to its inability to work properly. Congress refusing to fund maintenance and then falling behind 10-15 years on it led to all kinds of systems failing. The fatal fire in the blue line tunnel under the river occurred with a human at the controls, but we’re similarly not blaming that incident on the perils of human operation.
I don’t blame the operator for the crash. The other train was behind a blind curve and she hit the emergency brake within a reasonable amount of time given what she could see. However the speeds of the system were set too high for the operator to safely stop because they assumed the ATC would work perfectly.
In a plane you also have far more time to react once the autopilot disconnects for whatever reason, than the fraction of a second that a car gives you.
The difference is, they have traffic controllers, and the trains have their own dedicated rails, almost no obstructions, and a train-into-train crash danger rarely arises.
The planes have a lot of maneuvering space to all sides.
Car traffic and streets are denser and often have humans crossing them without regard to the law, plus bicycles, motorbikes, road construction and bad weather.
Not saying one auto pilot system is better than the other, however, they operate in different environments.
We have an education problem. People have no idea what computers do because they're illiterate (literacy would mean knowing at least one language well enough to read and write in it) so they just take other people's word that they can do some magical thing with software updates. The most extreme examples of this were the iPhone hoaxes telling people that software updates provided waterproofing or microwave charging.
Your reasoning doesn’t apply to incremental improvements to self-driving, but rather to Tesla’s decision to allow all cars to use TACC/auto-steer. They haven’t even given people “the button” to enroll in the FSD beta, likely because they know it would be extremely bad PR when a bunch of people use it without paying attention.