Hacker News | jasoncartwright's comments

Just cancelled my annual GitHub Copilot Pro+ subscription. The removal of Opus 4.6 stung, but the repeated downtime makes it unusable for me. Very disappointed.

The no-fuss, instant refund of my unused subscription (£160) was appreciated.


Doesn’t GitHub Copilot Pro+ only have a month-to-month payment option?

Only Pro (without plus) can be paid annually for some reason.


Pro+ does have an annual plan, but recently they paused or dropped the annual plans because they are trying to adjust the pricing model.

I paid 390 USD for a year Pro+ subscription in November 2025.

I used all the 'Premium Requests' every month on (mainly) Opus 4.5 & 4.6. From what I've read on here it seems I was probably a rather unprofitable customer - it felt like a steal.


Yes, it was definitely good value for devs using those models. I was hoping that since GitHub Copilot was rarely talked about compared to the Anthropic/OpenAI offerings, MS would continue to subsidize it to encourage people to move over, but maybe it just got too expensive.

What will you use now?

Claude Code

Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.

SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.

I think you mean SAE Level 3. SAE Level 2 is “lane centering” and “adaptive cruise control” [1]. (Level 3 is “when the feature requests, you must drive.”)

[1] https://www.ncdd.com/images/blog/diagram.png


I meant to include both SAE 2 and 3. I think having both lane keeping and cruise control on at the same time will tend to cause people to lose focus in a way they wouldn't if they had to do one or the other.

I don't even use cruise control. I like to be actively switched on all the time constantly making little decisions, including speed, so that I actually am instantly ready if I need to make some big decision.

People these days who let the car drive, thinking they can spring into action, are underestimating just how cold their cache lines are getting and the major page fault they'll take when they try to take over.

And I've seen comments by people that they were letting the car drive itself into a bad situation they could see developing, but didn't jump in to take over right away in anticipation (effectively betting on the car over their own skill but still realizing they had to jump in if the car got it wrong--which is just so incredibly confused).


Interestingly, I think that similar types of arguments are made against "agentic coding".

If you don't pay constant attention, you will never notice when it slips in a bug or security issue


Sure, but you can do that in a diff after the event, rather than live.

Car crash deaths are better known than software bug caused deaths. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.

Today's car crash deaths are sometimes software-bug-caused deaths. Toyota failed a forensic audit of their drive-by-wire code back in 2013. https://capitolweekly.net/toyota-has-settled-hundreds-of-sud...

A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.

I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that AFAIK you can't reasonably expect to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, it's not going to be able to handle everything.

"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"

Too bad that project failed.

https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)


I think their point was "it's not ready yet."

Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.

There is a reason that pilots basically get taught the ins and outs of a specific plane. Imagine the outrage if people needed to do month-long training for a specific car just to be able to drive it (and not just a general "here is how cars roughly work and the laws of the road").

Would it be a vote of no confidence in Full Self Flying?

No, it would be an acknowledgement of the lack of perfection in human systems so far.

I mean, they kinda are.

Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.


That presents an interesting failure mode challenge.

Well we don't have any self driving cars outside of San Francisco. Only cars with advanced driver assistance.


Also in Vegas (Zoox), and China has their own competitive market of self-driving taxis.

How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.

Treat it like a driver assistance system. I treat FSD the same as I treat Augmented Cruise Control and Lane Keep Assist in my CRV. I keep my hands on the steering wheel and follow along with the decision making.

Reminds me of a situation not long ago.

I’m in left lane on highway. Tesla ahead of me but quite a ways away.

I realize as I’m driving that the Tesla is moving quite slow for left-lane driving. And before you say it, yes, there are lots of people speeding in highway left lanes too.

So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.

Your point about using FSD the way you do makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you are in, unlike my example.


One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now, you can only select a profile. You're either going the exact speed limit, 2-3mph over, or essentially 'with the flow of traffic' which can lead to speeding +15 over the limit.

Didn't know about that feature. Thanks for the illumination. On verge of going full electric and looking at BMW, Lucid, Porsche, Rivian, Tesla.

I wonder what's taught to new drivers about this sort of situation. My intuitive feeling (driving for almost 30 years) is you drive with the flow of traffic when traffic is present. I don't see too many left-lane drivers glued to speed limits, but it's obvious when someone is fast or slow.


It's worth noting that older Teslas, pre-2024, are stuck on an old version of FSD due to compute limitations. Recent FSD generally does not hang out in the left lane and is very good at recognizing when vehicles approach from the rear. It will move to the right lane to allow them to pass.

Excellent -- noted.

> the Tesla is moving quite slow for left-lane driving. And before you say it, yes, there are lots of people speeding in highway left lanes too.

Is that code for "the Tesla was following the law by driving within the speed limit and I don't find that acceptable" or what?

> I passed on the right rather than tailgate.

... right, since those are the only two options. Tailgating is just one of the potential valid options to choose from after all.

> And driving 10-15 mph slower than you’d expect in that lane.

So not "slower than the speed limit", but rather "slower than you'd expect". Sigh.


I won't comment on whether it's acceptable to speed or not. I don't think that's the point.

Most highways I drive on exhibit a predictable pattern. Slower folks in right lane. Faster folks in left lane. Maybe those slower folks are at the speed limit, or above, or below. Left lane folks somewhat faster.

Should everyone obey the speed limit? Sure! Hard to argue that point.

My observation was a Tesla driving at - let's call it "right lane speed" in the left lane. Maybe slower. Slow enough that you'd soon see a predictable back-up behind the car - some tailgating, brake usage, etc. The stuff that in my view leads to more accidents, swerving, and phantom traffic that occurs when people pile on each other, use brakes excessively, and end up slowing to a crawl.

FWIW: The "is speeding acceptable" question is somewhat resolved by police. I rarely see people pulled over for speeding within the flow of traffic, vs. somewhat swerving in/out or just driving much faster than everyone.

Don't remember the last time I saw an officer pick a car out of a normally flowing left lane to issue just that one driver a ticket.


Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?

I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)


Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turns green and FSD does not accelerate. I figured it was somehow confused and I was starting to move toward hitting the accelerator myself when a car comes flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.

It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.


Huh.

I appreciate your thoughtful and detailed response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of amateur motor vehicle operators.

(Jumping a light in the dark? Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter? I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.)


> Jumping a light in the dark?

I don't know what this means.

> Not thinking or learning to navigate by verbal instructions from your satnav or phone, instead of compromising the primary sense you must constantly use to drive without risking manslaughter?

Navigating involves reading street signs, block numbers, and traffic markings. These are all visual elements that can distract from safety monitoring. How many minor accidents result from drivers trying to figure out where they are, or where they need to go?

> I'm sorry, but if this is the standard, I really can't describe it other than it is...to say nothing of your considering safety less important, as you say, in the "inner city" that is my home.

My claim isn't that safety is less important in city driving, it's that driving is far safer due to lower speeds. There's more time to react and lower risk of catastrophic results when driving at 35mph. The challenge for a driver isn't sudden loss of control as you may experience at 65+mph. The city driving challenge is trying to track markings, signage, pedestrians, and parked cars while also navigating and managing the vehicle's basic operation. FSD can track all of that without distraction and leave the driver responsible for more human reasoning tasks.


> I don't know what this means.

You failed, in this case by hastening to cross the intersection as soon as the light came green, to account for the possibility of another driver's error. If you weren't taught to do that, as I was, then the mistake is not entirely your own. It was still a mistake, which you have already acknowledged would have led you into an accident had your vehicle not rescued you.

> There's more time to react and lower risk of catastrophic results when driving at 35mph.

Not for me. You're the one wearing power armor, remember.


> You failed, in this case by hastening to cross the intersection as soon as the light came green, to account for the possibility of another driver's error. If you weren't taught to do that, as I was, then the mistake is not entirely your own. It was still a mistake, which you have already acknowledged would have led you into an accident had your vehicle not rescued you.

Even if we accept your interpretation of the situation as true, you're making the case for FSD. You can think of FSD (or other self-driving solutions) as raising the floor for bad drivers. If I'm a driver with some otherwise dangerous habits (nobody is perfect) then FSD is filling the gaps in my skill.

> Not for me. You're the one wearing power armor, remember.

But this is a joint interaction between the pedestrian and the vehicle. I can't make the pedestrian more aware. I can't give the pedestrian super-human reaction time. I can, however, give those traits to the vehicle. That's a major selling point for autonomous vehicles.


Well, sure. As I said yesterday, it hadn't previously occurred to me to think of someone else's FSD as helping keep me safe from them. (Thank you again, by the way, for helping me put two and two together on that!)

As a pedestrian, I don't need superhuman reaction time, because unlike some I move at speeds a human mind can comprehend. Nor, I promise you, need I be "more aware" - what a frankly foolish thing to say, when there is nothing on Earth even remotely as dangerous to me, day to day, than you and those like you! I assure you, I am about as aware as it is possible to be. I have to be! Look at you.

But this again is a splendid illustration of the problem, for which I again must give my gratitude: the old-school motorhuckle lifestyler dingbats were right all along, it turns out, to call cars "cages." You carry yours around in your head all the time, I see.


Glad you're ok!

I was watching the Tesla display on my way back home from LaGuardia airport last week (passenger, not driver).

No accidents or close calls, but it was obvious that I might be focused on 1 or 2 things in that very busy and chaotic environment whereas the car (FSD or otherwise) sees more than 2 things and possibly avoids something on my behalf.


> I might be focused on 1 or 2 things in that very busy and chaotic environment...

...so you hired a professional to do that job for you, instead of risking the wellbeing of everyone nearby. This was the correct decision!


Which is just worse.

When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.

Making me have to try and guess what the car is going to do at any given time is adding complexity to the process: am I changing lanes now, oh I guess I am because the autonomy thinks we should etc.


Not sure about your car but the car I have with augmented cruise requires hands on wheel. Turns off otherwise. (Volvo XC90)

I agree that there are situations where what I do as a trained driver is different from augmented cruise.

A good example (or perhaps I'm wrong) is this: in a lane, a car pulls into the lane in front of me, between me and the car further ahead. Now I don't have enough space between me and that new entrant. But instead of using the brakes (unless it's egregious), I bleed speed until I have the space I want. Augmented cruise doesn't do that - it hits the brakes.

So, from behind, I think it looks like I'm using my brakes a lot more on augmented cruise than I do when driving myself. And excessive brake use distracts the driver behind me.


Sure, but the practical experience is that FSD is fairly predictable. It's just a matter of personal preference that comes from experience. I wouldn't impose a system like FSD on everybody.

I'm a >90% FSD user, and I approve this sentiment. My wife hates it for the mistakes it makes (e.g. there seems to be a recent shadow-recognition regression) and "errors in judgement" (not getting in the turn lane in a timely manner); she would never use it on her own.

I've got plenty of experience, and (feel as though) I know most of its failure points. I had to drive my 30-minute commute last week, and it was decidedly unfun. I have seen the future and I don't want to go back.


96% here, including DC and Baltimore. Besides the bizarre navigation choices and waiting too long for lane changes, FSD has reached essentially zero interventions outside of bad mapping situations. I really wish Tesla would use better map data, for sure.

To be fair, that report says

> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”

It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.


It's an insane reversal of roles. In a standard level 2 ADAS, the system detects a pending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of pumping the brakes. It's pure insanity.

Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.

[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...


AEB is enabled even when FSD is off, which sounds like the L2 ADAS behaviour you're describing. Just because FSD disengages, it doesn't mean that no other Collision Avoidance Assist features are operating.

> because FSD deactivated itself 4 seconds before impact

This isn't accurate. The driver deactivated FSD 4 seconds before impact. Don't get me wrong, the video looks pretty much like FSD wouldn't have been able to do anything better than the driver did, but she didn't give it a chance.


IDK, this has the same unethical energy as police turning off body cameras.

in the BEST CASE, this is a confluence of coincidences. Engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.

In the worst case, this is intentional.

In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.

This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.


> the "right thing to do" is NOT turn off the cameras just before a collision

Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)


This is a policy that Tesla put in place, period. Handing control to the driver suddenly, in a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.

Yep, your comment reminds me of a time my mother was about to hit a bird in the road. However, she was too busy arguing with the passenger to notice, and her driving was already starting to become erratic. I decided not to tell her because I knew the shock could cause her to do something more drastic, like crash the car trying to avoid it.

I guess I'll step in for the counter.

How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?


Maybe the car should not have this dangerous feature in the first place? Or maybe train drivers thoroughly and frequently, so that when this situation arises it becomes less dangerous.

It seems to me FSD for Tesla is not ready to go into Prod as it is now.


> Isn't it the driver who should see a situation that looks dicey for FSD and take control?

How does a driver judge what is and is not "dicey" from the FSD's perspective?

If you don't have confidence in FSD, then you wouldn't use it in the first place. If you do have confidence, then why (or how often) would you ever take over?

Is there some kind of 'confidence gauge' that the FSD displays in how well it thinks it can handle the situation? If there is/was, perhaps the driver could see it dropping and prime himself to take over.


> How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate?

By anticipating further ahead. If it finds itself in a situation that it can't get itself out of, it means it should have made more defensive choices earlier or relinquished control earlier. And if it doesn't have either the reasoning capacity or the spatial awareness data to do that, it is not fit for general usage and should be pulled.


Was this case FSD, or was this the earliest generation of the technology? And does this still happen?

I agree you're right; that's what you'd expect to happen.


This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event rather than requiring self-driving be activated at the exact moment of impact.

Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.

The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.


> It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.

This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.


So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.

That's still not a good look.

And it does mean that FSD shouldn't be trusted as much as it is, because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.


The few Tesla post-mortems I’ve read early on stated that FSD turned off before impact and used this as a defence of their system. If they had shared that this happened 1 second before impact (far too late for a human to respond), I’d have sympathy. I have never read a Tesla statement that contained this information.

For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.


Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:

> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]

In theory, that should more than cover the common perception-response times of around ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver assistance systems return control to the driver and its impact on driver response times and their overall alertness.

If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we’re now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus that they're now barreling down on.

For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver has slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive, during normal braking.

0. https://www.tesla.com/fsd/safety#:~:text=within five seconds
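For concreteness, here's a minimal sketch of the attribution rule as the quoted policy describes it. The 5-second window comes straight from the quote; the field names and the disengagement bookkeeping are made-up assumptions for illustration, not Tesla's actual telemetry schema.

    # Sketch of the quoted 5-second attribution rule. Field names are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    ATTRIBUTION_WINDOW_S = 5.0  # per the quoted Vehicle Safety Report policy

    @dataclass
    class Collision:
        impact_time_s: float
        # Last moment FSD was engaged; None if it was never engaged on this drive.
        fsd_last_active_s: Optional[float]

    def counts_as_fsd_collision(c: Collision) -> bool:
        """FSD counts as 'engaged' if it was active at any point within 5s of impact."""
        if c.fsd_last_active_s is None:
            return False
        return c.impact_time_s - c.fsd_last_active_s <= ATTRIBUTION_WINDOW_S

    # A disengagement 4s before impact (as in the Houston case discussed above)
    # would still be counted as an FSD collision under this rule:
    print(counts_as_fsd_collision(Collision(impact_time_s=10.0, fsd_last_active_s=6.0)))  # True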


It's been well known for a while now, and it's not to avoid being recorded as active; it's to prevent a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips? AP/FSD has no training on that, and the wheels could keep spinning at full speed while first responders try to secure the car.

AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control will be disabled while the computer and electronics are still perfectly operational, to make the car more secure for the passengers and first responders after the event.

EDIT: IIRC the threshold for disengagement is 1s.


>> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

> It's been well known for a while now, and it's not to avoid being recorded as active; it's to prevent a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips? AP/FSD has no training on that, and the wheels could keep spinning at full speed while first responders try to secure the car.

That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.


I'm just stating what I remember, I'm not trying to defend Tesla.

For 1), it's the first time I've heard it from a technical point of view - Tesla's dashcam records continuously for the last 10 minutes, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible, AFAIR (I'm an owner). IIRC it's not the first case, though, where Tesla claimed the data wasn't available or was corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but I also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.

2) I often hear it as FUD, as in: AP/FSD was off, the user just did it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had the data released and showed some of the following: user touched steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), user was pressing accelerator pedal by accident, user was pressing accelerator instead of brake, etc etc



A news aggregator https://crawl.news


Came here to comment how much it's like the superb diskprices.com. Excellent work.


Thank you. I appreciate it. I love that site and always sat around thinking about building something like it, but for a product I buy frequently. And I finally built it.


PaaS


If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?


Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.


Or at some point subscribing to a service may be easier than owning the damn thing.


All according to plan


It already doesn't make sense to own a car for me. It's cheaper to just call an Uber.


I'm guessing that's a fairly city-centric viewpoint. My car is set up with a roof rack and carries a lot of other gear I want. I'm regularly in places without reliable cell coverage, etc. Visiting friends can easily be an hour's drive.


Yes, a city viewpoint. I usually just walk, but when I don't I most often take the subway, not even Uber. Though I feel like in Toronto the subway, or some part thereof, is closed or under maintenance or whatever way too often. It's not very reliable.


Depends how often.

Multiple Ubers per day are expensive. ($55 x 365 = $20,000)

All in, a budget car costs less than half of that per year.

But if you replace some of that with public transportation, or a car is otherwise impractical, the math changes.
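Back of the envelope, using the figures above plus an assumed all-in ownership cost (none of this is real pricing data):

    # Ride-hailing vs. ownership with the numbers from the comment above.
    # The $9,000/yr all-in car cost is an assumption ("less than half" of $20k).
    uber_per_day = 55
    uber_annual = uber_per_day * 365   # $20,075
    car_annual = 9_000                 # assumed: payment, insurance, fuel, maintenance

    breakeven_days = car_annual / uber_per_day
    print(f"Uber: ${uber_annual:,}/yr; car wins above ~{breakeven_days:.0f} Uber-days/yr")
    # ~164 days/yr: below that, ride-hailing is cheaper under these assumptions.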


For some this is the case. For others, this is not the case.


some -> most ?


In west/east coast cities maybe.

Talk to anyone from the midwest about not owning a car and they'll laugh you out of the room.

Well, unless it's because you're proposing they switch to ATVs and snowmobiles, in which case some people can technically get by without a traditional automobile.


If you take off the conspiracy hat, you will see that there are many advantages to not owning a product, such as the vendor's incentives being better aligned with yours. For example, if the thing breaks, it is in __their__ best interest to fix it (or to not let it break in the first place). This also has positive implications for sustainability.


It’s also in their best interest to set the price so as to maximize their own profits. If switching costs or monopoly power allow them to set a higher price, they will do so.

Have we learned nothing from a decade of subscription services?


Nobody said we should allow monopolies?


Especially Adam Smith. The claims are scattered throughout The Wealth of Nations, but he hated them with specificity. He said they raise prices and lower quality, misallocate capital, and corrupt politics, among other things.


but Tesla is the operator


Aside from the human in the vehicle holding the steering wheel with a foot on the pedal, that is.


That’s today. If Tesla ever becomes fully autonomous, you won’t need that.


If I ever marry Oprah I’ll be a rich man :)


google what F stands for in FSD


Ah, but could one not argue that the owner of the self-driving car is _not_ the operator, and it is the car, or perhaps Tesla, which operates it?


All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.

Also, even if a system is fully automated, that doesn’t necessarily legally isolate the person who owns it or set it into motion from liability. Vehicle law would generally need to be updated to change this.


But that might be considered a legal trick. Suppose that, when you pay for a taxi, the standard conditions of carriage would make it your responsibility to supervise the vehicle operation and alert the driver so as to avoid accidents. Would the taxi driver and taxi company be able to eschew liability through that formalism? Probably not. The fact that Tesla makes you sign something does not automatically make the signed document valid and enforceable.

It may be that it is; but then, if you are required to be watchful at all times, and be able to take over from the autonomous vehicle at all times, then the autonomy doesn't really help you all that much, does it?


No, Tesla doesn’t assign you liability by making you sign something. The law makes the driver of a vehicle liable for the operation, as it always has.

My first sentence was to say that even if the law treats autonomous vehicles differently, Tesla doesn’t sell one.


> The law makes the driver of a vehicle liable for the operation, as it always has.

So, either those Teslas don't really self-drive (which may be the case, I don't know, but then the whole discussion is moot), or they do, in which case the human wasn't the one driving and may thus avoid liability.

Then of course there is the possibility that the court might be convinced the car was being driven collaboratively by the human and the car/the computer, in which case Tesla and the human might share the liability. IANA(US)L though.


> either those Tesla's don't really self-drive

All Teslas are level 2 ADAS and require the human behind the wheel to monitor the vehicle and intervene when necessary.

> or they do, in which case, the human wasn't the one driving and may thus avoid liability.

That is not legally true. Automation does not absolve someone from liability. Owners of a piece of machinery have liability just by being the owner and placing it into operation.

Forget about cars for a second -- we already have many products that are entirely automated already, for example: an elevator. If you own a building with an elevator, and it hurts someone, the building owner is absolutely going to be sued over it, and "oh, it's automated" isn't a get-out-of-court free card.

There are still responsibilities that the owner has: did they properly maintain it? were they aware of an issue but decided to operate it anyway? were they in a position to intervene and avoid the accident, but failed to do so?


Mercedes agrees. They take on liability when their system is operated appropriately.


They say they will, but until relevant laws are updated, this is mostly contractual and not a change to legal liability. It is similar to how an insurance company takes responsibility for the way you operate your car.

If your local legal system does not absolve you from liability when operating an autonomous vehicle, you can still be sued, and Mercedes has no say in this… even though they could reimburse you.


No. They don’t. It was vaporware made to fool people including you. You could never actually order it and it’s canceled now in favor of an L2 system.


Because that's the law of the land currently.

The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.

I don't think there's law that would allow Tesla (or anyone else) to sell a passenger car with unsupervised system.

If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.

That's because they operate on limited state laws that allow them to provide such service but the law doesn't allow selling such cars to people.

That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.


    > Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
You raise an important point here. Is it economically feasible for system makers to bear the responsibility of self-driving car accidents? It seems impossible, unless the cars are much more expensive to cover the potential future costs. I'm very curious how Waymo insures their cars today. I assume they have a bespoke insurance contract negotiated with a major insurer. Also, do we know the initial cost of each Waymo car (to say nothing of ongoing costs from compute/mapping/etc.)? It must be very high (2x?) given all of the special navigation equipment that is added to each car.


Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.


It's only to differentiate it from their "Unsupervised FSD", which is what they call it now.


That is redundant and doesn't make the other any less contradictory


I agree, but I think context is important here. It was called FSD, but they got into trouble; now it's "Supervised" so people know it's not, well, unsupervised FSD. Yes, I know it doesn't make sense.


> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.

This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.


I imagine insurance would be split in two in that case. Carmakers would not want to be liable for e.g. someone striking you in a hit-and-run.


If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay. Otherwise it's a human and the situation falls into the bucket of what we already have today.

So yes, carmakers would pay in a hit-and-run.


> If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay

Why? That's not their fault. If a car hits and runs my uninsured bicycle, the manufacturer isn't liable. (My personal umbrella or other insurance, on the other hand, may cover it.)


They're describing a situation of liability, not mere damage. If your bicycle is hit, you didn't do anything wrong.

If you run into someone on your bike and are at fault then you generally would be liable.

They're talking about the hypothetical where you're on your bike, which was sold as an autonomous bike whose manufacturer's software fully drives it, and it runs into someone and is at fault.


You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.


Waymo is also a livery service, which you normally aren't liable for as a passenger of a taxi or limousine unless you have deep pockets. /IANAL


I see. So the product Tesla is using to sell insurance around isn't actually "Full Self-Driving" or "Autonomous" like the page says.


My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".

FSD isn't perfect, but it is everyday amazing and useful.


> My current FSD usage is 90% over ~2000 miles

I'd guess my Subaru's lane-keeping utilisation is in the same ballpark. (By miles, not minutes. And yes, I'm safer when it and I are watching the road than when I'm watching the road alone.)


My favorite feature of Subaru's system is when you change lanes, and it stays locked onto the car in the slower lane and slams on the brakes. People behind you love that.


I don't want to minimize the efforts of other manufacturers (I'm sure they'll all have Tesla's features in the next generation), but: my wife has a Subaru Outback, and the two systems are as close in functionality as humans are to chimpanzees. The differences are many, stark and subtle (that Subaru screen). I'd just say take a test drive with FSD.


If it was full self driving, wouldn't your usage be 100%?


> It's not perfect,

Probably about 90% perfect! Obviously we don't agree on the definition.


Sometimes a car is fun to drive.


It refuses to engage above, like, 80.


Yet still relying on you to cover it with your insurance. Again, clearly not autonomous.


Liability is a separate matter from autonomy. I assume you'd consider yourself autonomous, yet it's your employer's insurance that will be liable if you have an accident while driving a company vehicle.

If the company required a representative to sit in the car with you and participate in the driving (e.g. by monitoring and taking over before an accident), then there's a case to be made that you're not fully autonomous.


> it's your employer's insurance that will be liable if you have an accident while driving a company vehicle

I think you're mixing some concepts.

There's car insurance paid by the owner of the car, for the car. There's workplace accident insurance, paid by the employer for the employee. The liability isn't assigned by default, but by determining who's responsible.

The driver is always legally responsible for accidents caused by their negligence. If you play with your phone behind the wheel and kill someone, even while working and driving a company car, the company's insurance might pay for the damage but you go to prison. The company will recover the money from you. Their work accident insurance will pay nothing.

The test you can run in your head: will you get arrested if you fall asleep at the wheel and crash? If yes, then it's not autonomous or self driving. It just has driver assistance. It's not that the car can't drive itself at all, just that it doesn't meet the bar for the entire legal concept of "driver/driving".

"Almost" self driving is like jumping over a canyon and almost making it to the other side. Good effort, bad outcome.


[flagged]


Disagree. I appreciate their viewpoint tethering corporate claims to reality by illustrating that Tesla is obfuscating the classification of its machines as autonomous when they actually aren't. Their comments in other thread chains proved fruitful when not derailed by agitators looking to dismiss critique by citing website rules, like the post adding additional detail on how Tesla muddles legal claims by cooking up cherry-picked evidence that works against the driver despite being the insurer.


Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.

Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.

Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.


> Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.

Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.


Never say never—it’s not physically impossible. But yes, as it stands, it seems that Tesla will not be self driving any time soon (if ever).


They literally just (in the last few days) started unsupervised robotaxis in Austin.

They are as self-driving as a car can be.

This is different than the one where they had a human supervisor in the passenger seat (which they still do elsewhere).

And different than the one where they didn't have human supervisor but did have a follow car.

Now they have a few robotaxis that are self driving.


Have you actually ridden in one? It's unclear whether this is a real thing or not.


This is a very low effort post. Was it really too difficult for you to Google: "youtube tesla robotaxis in Austin"? It took me about 7 seconds. There are lots of videos.


It was just another marketing stunt to pump the stock price before their terrible earnings report. One "unsupervised" unit, with the supervisor in a follow car, that nobody could actually get and drive around in.

https://electrek.co/2026/01/28/teslas-unsupervised-robotaxis...



If your minor child breaks something, or your pet bites someone, you are liable.

This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.

You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.


Can a third party reprogram my dog or child at any moment? Or even take over and control them?


Seems like the role of the human operator in the age of AI is to be the entity they can throw in jail if the machine fails (e.g. driver, pilot)


I’ve said for years that pragmatically, our definition of a “person” is an entity that can accept liability and take blame.


LLCs can't go to jail though


Because LLCs aren't people


Not to be confused with “human” thanks to SCOTUS.


> Surely if it's Tesla making the decisions, they need the insurance?

Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.

And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)

[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...


Cruise control is hardly relevant to a discussion of liability for autonomous vehicle operation.


In the context of ultramodern cruise control (eg comma.ai), which has a radar to track the distance to the car (if any) in front of you, and cameras so the car can wind left or right and track the freeway, I think it does.


Not unless they are marketing it as “autopilot” or some such that a random consumer would reasonably assume meant autopilot.

And I’d include “AI driver” as an example.


A random consumer doesn't actually understand what Autopilot means. Most people don't have pilot's licenses. And cars don't fly. Did you not see all the debacles around it when it first came out?


The coder and sensor manufacturers need the insurance for wrongful death lawsuits

and Musk for removing lidar, so it keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth

99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?


Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk was accepted and paid for by Tesla, then the cost would simply be passed down to consumers. All consumers, including those that want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works, by amortizing lots of risk to make it highly improbable to make a loss in the long run.
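The amortization claim is easy to check numerically. A minimal simulation, assuming independent, identical risks (real books of business only approximate this, and the crash probability and claim size here are illustrative, not actuarial data):

    # Why pooling works: the volatility of the *average* loss per car
    # shrinks roughly as 1/sqrt(n). All constants are assumptions.
    import math, random

    CRASH_PROB, CLAIM_COST = 0.05, 20_000   # expected loss = $1,000/car/yr

    def avg_loss(n_cars: int) -> float:
        crashes = sum(1 for _ in range(n_cars) if random.random() < CRASH_PROB)
        return crashes * CLAIM_COST / n_cars

    for n in (10, 1_000, 100_000):
        draws = [avg_loss(n) for _ in range(100)]
        mean = sum(draws) / len(draws)
        sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / len(draws))
        print(f"pool of {n:>7}: avg ${mean:,.0f}/car, sd ${sd:,.0f}")
    # A big pool almost never loses money if premiums sit a bit above $1,000;
    # a 10-car pool swings wildly, which is the whole point of pooling.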


Not an expert here, but I recall reading that certain European countries (Spain???) allow liability to be put on the autonomous driving system, not the person in the car. Does anyone know more about this?


That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.

Cars are traditionally sold with the customer having liability. Nothing stops a car maker (or even an individual dealer) from selling cars today and taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers will be sure to buy those cars since it is a better deal for them, and in turn a worse deal for good drivers), but they could.

Self-driving is currently sold with the customer having liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability. I could be wrong, though.


It’s because you bought it. Don’t buy it if you don’t want to insure.


Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.


If you bought and owned it, you could sell it to another auto manufacturer for some pretty serious amounts of money.

In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.


You can actually do that. Except that they could just buy one themselves.

Companies exist that buy cars just to tear them down and publish reports on what they find.


> Companies exist that buy cars just to tear them down and publish reports on what they find.

What does it mean to tear down software, exactly? Are you thinking of something like decompilation?

You can do that, but you're probably not going to learn all that much, and you still can't use it in any meaningful sense, as you never bought it in the first place. You only licensed use of it as a consumer (and now that it is subscription-only, maybe not even that). If you have to rebuild the whole thing yourself anyway, what have you really gained? It's not exactly a secret how the technology works, only costly to build.

> Except that they could just buy one themselves.

That is unlikely, unless you mean buying Tesla outright? Getting a license to use it as a manufacturer is much more realistic, but still a license.


Check out Munro and Associates. I'm not talking about software. The whole car.


For what reason?

In case you have forgotten, the discussion is about self-driving technology, and specifically Tesla's at that. The original questioner asked why he is liable when it is Tesla's property that is making the decisions. Of course, the most direct answer is because Tesla disclaims any liability in the license agreement you must agree to in order to use said property.

Which has nothing to do with an independent consulting firm or "the whole car" as far as I can see. The connection you are trying to establish is unclear. Perhaps you pressed the wrong 'reply' button by mistake?


I started responding to this. I interpreted it to be referring to the whole car.

> Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.


I don't think Tesla lets you buy FSD


If they don’t let you buy, you don’t own. If you don’t own, how is that insurance even available to you?


They do, until Feb 14th.


Even now I think it's a revocable license


I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2,000/month if robotaxis start dominating cities.


> if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.

The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?

One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.


Drunk driving isn't the primary mover of high risk driving. Rather you have:

1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)

2. Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)

3. Older people who don't trust technology.

None of those are good risk pools to be in. Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self driving cars are expensive:

* If you hit one and are at fault, you might pay out $100k-200k; most states only require $25k-50k of coverage... so you need more coverage or expect to pay more per incident.

* Self-driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often won't (they'll have evidence that you were at fault). Whereas before, fault might have been much more hazy (both at fault, or both no fault).

The biggest factor comes if self driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be in a pool of the remaining accidents that they will have to cover on their own.


Classic car insurance is dirt cheap, even for daily driven stuff. Removing people who don't want to drive and don't care to not suck at it hugely improves the risk pool.

If there's only a small minority of human drivers, people like you will have bigger fish to screech about, and there will be substantially less political will to perpetuate the system; it'll probably go away in favor of a far simpler and cheaper "post up a bond" type thing, and much of the expensive mechanisms for grading drivers will be dismantled.


> Drunk driving isn't the primary mover of high risk driving.

It kind of is. They're responsible for something like 30% of traffic fatalities despite being a far smaller percentage of drivers.

> People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)

https://pubmed.ncbi.nlm.nih.gov/30172108/

But also, wouldn't they already have this by using the vehicle model and year?

> Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)

Again something that seems like it would already be accounted for by vehicle model.

> Older people who don't trust technology.

How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age.

> Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight

Are they? They're more likely to include the people who spend the most time in cars, which is another higher-risk pool. Self-driving lets those people spend that time on a phone or laptop instead of driving, which is worth more the more time they spend in the car, and so more easily justifies the cost of a newer vehicle.

> Oh, and those self driving cars are expensive

Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?

> Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often won't (they'll have evidence that you were at fault). Whereas before, fault might have been much hazier (both at fault, or both no fault).

Which is only a problem for the worse drivers who are actually at fault, which makes them more likely to move into the self-driving car pool.

> The biggest factor comes if self driving cars really are much safer than human drivers.

The biggest factor is which drivers switch to self-driving cars. If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected. How safe they are is only relevant insofar as it affects your chances of getting into a collision with another vehicle, and if they're safer, then having more of them on the road makes that chance go down.
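To make that concrete, here's a back-of-envelope sketch in Python. All the loss figures and pool sizes are invented; the point is only that random exits leave the per-driver average untouched, while selective exits don't:

    import random

    # Hypothetical population: 80% low-risk drivers ($400/yr expected loss),
    # 20% high-risk ($2,400/yr). Figures are made up for illustration.
    random.seed(0)
    drivers = [400] * 80_000 + [2400] * 20_000

    def fair_premium(pool):
        # Break-even premium: average expected loss per driver in the pool.
        return sum(pool) / len(pool)

    print(f"everyone drives:     ${fair_premium(drivers):,.0f}/yr")

    # Half the drivers switch to self-driving, chosen uniformly at random:
    # the low/high-risk mix is preserved, so the premium barely moves.
    remaining = random.sample(drivers, len(drivers) // 2)
    print(f"random half switch:  ${fair_premium(remaining):,.0f}/yr")

    # If instead mostly the *safe* drivers switch, the premium jumps.
    stragglers = [400] * 10_000 + [2400] * 18_000
    print(f"safe drivers switch: ${fair_premium(stragglers):,.0f}/yr")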


Only 0.61% of car crashes involve fatalities, so that's like 0.2% of car crashes you are referring to. Probably more are due to alcohol, but we don't know the share of all accidents that involve alcohol, which would be more telling.

> How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age

Boomers are already the primary anti-EV demographic, with the complaint that real cars have engines. It doesn't matter if they know your age if state laws keep them from acting on it.

> Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?

I think you misunderstood me: if you get into an accident and are found at fault, you are responsible for damage to the other car. Now, if it's a clunker Toyota, that will be a few thousand dollars; if it's a Rolls-Royce, it's a few hundred thousand dollars. The reason insurance premiums are increasing lately is that the average car on the road is more expensive than it was ten years ago, so insurance companies are paying out more. If most cars are $250k Waymo cars, and you hit one…and you are at fault, ouch. And we will know whether it is your fault or not, since the Waymo is constantly recording.

> If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected.

That’s not how the math works out (smaller risk pools are more expensive per person, period). And it won’t be people switching at random to self driving cars (the ones not switching will be the ones that are more likely to have accidents).


> Only 0.61% of car crashes involve fatalities, so that's like 0.2% of car crashes you are referring to. Probably more are due to alcohol, but we don't know the share of all accidents that involve alcohol, which would be more telling.

Fatalities get more thoroughly investigated so we have better numbers on them, but if you had to guess whether the people who get behind the wheel drunk were similarly disproportionately likely to bang up their cars in a non-fatal way, what would your guess be?

> Boomers are already the primary anti-EV demographic, with the complaint that real cars have engines.

EVs and self-driving are two different things. Fox News tells boomers that EVs are bad because Republicans have the oil companies as a constituency.

> It doesn’t matter if they know your age if state laws keep them from acting on it.

The only states that do that are Hawaii and Massachusetts.[1]

[1] https://www.cnbc.com/select/best-car-insurance-seniors/

> If most cars are $250k Waymo cars, and you hit one…and you are at fault, ouch. And we will know if it is your fault or not since the Waymo is constantly recording.

If X% of cars are Waymos and you hit another car in your normally priced car and you're at fault, there is an X% chance it will be expensive. If the Waymo hits another car and it's at fault, there is a 100% chance it will be expensive because it will damage itself, and an additional X% chance that it will be very expensive because both cars are.
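To put rough numbers on that structure (the prices and the 20% share below are invented, since the argument is just expected values):

    # Expected at-fault payout under the argument above, with made-up prices.
    waymo_share = 0.20                    # "X%": share of robotaxis on the road
    robotaxi, ordinary = 250_000, 30_000  # assumed vehicle values

    # Expected value of whichever car you happen to hit.
    other_car = waymo_share * robotaxi + (1 - waymo_share) * ordinary

    # Human at fault in an ordinary car: liable for the other car only.
    print(f"human at fault:    ${other_car:,.0f}")

    # Robotaxi at fault: wrecks its own expensive hardware *and* the other car.
    print(f"robotaxi at fault: ${robotaxi + other_car:,.0f}")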

And again, that's assuming the price stays as high as it is when the production volume increases. A $250,000 car can't become the majority of cars, because the majority of people can't afford one.

> That’s not how the math works out (smaller risk pools are more expensive per person, period).

Smaller risk pools don't have higher risk; they have higher volatility, and if they're too small insurers have to charge a volatility premium. But the auto insurance market is very large, and for it to shrink to the point of having volatility issues, that shrinkage would have to be a consequence rather than a cause of the large majority of people switching to self-driving cars.
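A toy model of that distinction (the loss parameters and loading factor are invented):

    import math

    # Premium = expected loss + a loading proportional to the volatility of
    # the pool's *average* loss, which shrinks like 1/sqrt(n).
    mean_loss, sd_loss = 800, 4_000   # per-driver annual loss: mean, std dev
    loading_factor = 2.0              # insurer's risk-capital multiplier

    def premium(n):
        volatility = sd_loss / math.sqrt(n)  # std dev of the pool average
        return mean_loss + loading_factor * volatility

    for n in (1_000, 100_000, 10_000_000):
        print(f"pool of {n:>10,}: ${premium(n):,.2f}/yr")

Even at a pool of 100k drivers the volatility loading is pocket change next to the expected loss, which is why the market would have to shrink enormously before this effect bites.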

> And it won’t be people switching at random to self driving cars (the ones not switching will be the ones that are more likely to have accidents).

You keep saying that but it's still not obvious that it's what would happen, and in any event the ones more likely to have accidents are already the ones paying higher insurance premiums -- which is precisely a reason they would have the incentive to be the first to switch to self-driving cars.


The fact that you think $200 per month is sane is amusing to people in other countries.


Hell, I was paying €180/yr for my New Beetle a decade ago...


Haha, yes, today already sucks badly in many US markets. Imagine what will happen when the only people driving cars manually are "enthusiasts".


Is that low or high?


I'm guessing that other developed countries don't need 6-7 figure injury coverage.


That's probably the future; Mercedes currently does do this in limited form:

https://www.roadandtrack.com/news/a39481699/what-happens-if-...


Not "currently," "used to": https://www.theverge.com/transportation/860935/mercedes-driv...

It was way too limited to be useful to anyone.


Why is the ship owner paying for the insurance when it's the captain making all the decisions?


Because the operator is liable? Tesla as a company isn't driving the car; it's an ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?


Sounds like it's neither self-driving, nor autonomous, if I'm on the hook if it goes wrong.


Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.

And the system is designed to set up drivers for failure.

An HCI challenge with mostly-autonomous systems is that operators lose situational awareness, and when things go wrong you can easily get worse outcomes than if the system were fully manual with an engaged operator.

This is a well-known challenge in the nuclear energy sector and the airline industry (Air France 447): how do you keep operators fully engaged even though they almost never need to intervene? Otherwise they're likely to be missing critical context and make wrong decisions when they finally do have to act. These days you could probably argue the same is true of software engineers reviewing LLM code that's often, but not always, correct.


> has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible

Really? That's crazy.


Especially since they can push regressions over the air and you could be lulled into a sense of safety and robustness that isn’t there and bam you pay the costs of the regressions, not Tesla.


It's neither self-driving nor autonomous, and eventually not even a car (as Tesla slowly exits the car business)! It will be 'insurance' on Speculation as a Service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ

The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to go.


I wonder if they will try to sell off the car business once they can hype up something else. It seems odd to just let the car business die.


Wild prediction, would love to hear the rest of it


Who’s the “operator” of an “autonomous” car? If I sit in it and it drives me around, how am I an “operator”?


If you get on a horse and let go of the reins you are also considered the operator of the horse. Such are the definitions in our society.


Great analogy, lol


The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.

Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.

A car can't have 2 drivers. The only real one is the one the law holds responsible.


Not all insurance claims are based off of the choices of the driver.


> If it autonomous or self-driving then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?

Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.

Which is the same as it is now. It's your car so you pay to insure it.

I mean think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no, and then they turn off your car after e.g. 10 years, which is quite objectionable; or that the answer is yes, but then you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.


I like your thesis, but what about this: all this self driving debate is nonsense if you require Tesla to pay all damages plus additional damages, "because you were hit by a robot!". That should make sure Tesla improves the system, and that it operates above human safety levels. Then one can forget about legislation and Tesla can do its job.

So to circle back to your thesis: when the car is operating autonomously, the manufacturer is responsible. If it goes broke then what? Then the owner will need to insure the car privately. So Tesla insurance might have to continue to operate (and be profitable).

The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.


> That should make sure Tesla improves the system, and that it operates above human safety levels.

There are two problems with this.

The first is that insurance covers things that weren't really anyone's fault, or where it's not clear whose fault it was. For example, the most direct and preventable cause of many car crashes is poorly designed intersections, but then the city exempts itself from liability and people still expect someone to pay, so it falls to insurance. There isn't really much the OEM can do about the poorly designed intersection, the improperly banked curve, snowy roads, etc.

The second is that you would then need to front-load a vehicle-lifetime's worth of car insurance into the purchase price of the car, which significantly raises the cost to the consumer over paying as you go because of the time value of money. It also compounds the cost of insurance, because if the price of the car includes the cost of insurance and then the car gets totaled, the insurance would have to pay out the now-higher cost of the car.
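A quick present-value illustration of that second point (the premium, lifetime, and discount rate are all assumed):

    # Compare a lifetime of pay-as-you-go premiums against front-loading
    # the same nominal amount into the sticker price. Figures are made up.
    annual_premium = 1_500   # $/yr
    years = 15               # assumed vehicle lifetime
    rate = 0.05              # discount rate (time value of money)

    nominal_total = annual_premium * years

    # Present value of the same premium stream (ordinary annuity formula).
    pv = annual_premium * (1 - (1 + rate) ** -years) / rate

    print(f"front-loaded at purchase:  ${nominal_total:,.0f}")
    print(f"present value paid yearly: ${pv:,.0f}")

The gap between those two figures is the time-value cost the buyer eats if insurance gets baked into the purchase price.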

> The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.

This is precisely the argument for not doing it that way. Why should we want to destroy ownership and push everyone onto a subscription service? What happens to poor people who could have had a used car, but now all the older cars go to the crusher because it allows the OEMs to sustain artificial scarcity for the service?


You insure the property, not the person.


Well, it's the risk: the combination of both.

It's why young drivers pay more for insurance.


It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.

(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)


Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.

I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.


No? Insurance costs would be passed through to consumers in the form of up-front purchase price. And probably the cost to insure L5 systems for liability will be very low. If it isn't low, the autonomous system isn't very safe.


The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.


Built using Django!


You use kerbside charging. Unlike petrol, electricity comes to you.


This is how it needs to work, but in practice it doesn't really exist right now. (And, in the few places where it does exist, the price basically destroys a lot of the running costs advantages of an EV).


I've been charging from lampposts and other street chargers for 11 years now


Yeah, it's really quick because there is pretty much nothing on it


I agree. Not impressed, frankly. Cloudflare Workers is just an even-more-localized CDN, and the benefit is so tiny that it's not worth the investment or maintenance costs. (I wrote extensively about this non-thing here: https://wskpf.com/takes/you-dont-need-a-cdn-for-seo). My site (https://wskpf.com), which has way more elements and, err, stuff, loads in 50ms, and unless you are Superman or an atomic clock, you wouldn't care. Same Lighthouse scores as this one, but with no CDN nor Cloudflare Workers, and it actually has stuff on it.


TCP performance gets quite poor over long distances. CDNs are very helpful if you're trying to make your site work well far away from your servers.
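As a crude sketch of why: on a cold HTTPS connection every phase costs at least a round trip, so latency multiplies. This model assumes a TLS 1.3 one-RTT handshake, classic slow-start doubling, and a 14 KB initial window (all simplifications):

    # Rough time-to-load for a cold HTTPS fetch as a function of RTT.
    def cold_fetch_ms(rtt_ms, page_kb, init_cwnd_kb=14):
        rtts = 3  # DNS lookup + TCP handshake + TLS 1.3 handshake
        # Slow start roughly doubles the congestion window every round trip.
        cwnd, sent = init_cwnd_kb, 0
        while sent < page_kb:
            sent += cwnd
            cwnd *= 2
            rtts += 1
        return rtts * rtt_ms

    for rtt in (10, 50, 150):  # nearby CDN edge vs. cross-ocean origin
        print(f"RTT {rtt:>3}ms: ~{cold_fetch_ms(rtt, 500):,}ms for a 500 KB page")

At 150ms RTT that "one blink" of latency turns into well over a second before a modest page finishes arriving.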


I think the bottleneck is rarely the lack of a CDN. Think about it: my server sits in Germany. My target audience is in the US. My latency to the west coast is 150ms. I can see that being a big thing in a competitive online game, but for website load performance it's less than the blink of an eye. The real bottleneck is usually a poorly configured page or some bloated JS.


Your site took over a second to load for me from Brazil. Are you sure CDNs are that worthless?


I do, because the 120ms of latency that a CDN solves is a drop in the bucket compared to the 2.5 seconds (desktop) or 8 seconds (mobile) it takes for the average website to load, almost entirely due to unoptimized images and poor code (based on https://www.hostinger.com/in/tutorials/website-load-time-sta...)

