I'm quite skeptical of Tesla's reliability claims. But for exactly that reason, I welcome a company like Lemonade betting actual money on those claims. Either way, this is bound to generate some visibility into the actual accident rates.
One thing that was unclear to me from the stats cited on the website is whether the quoted 52% reduction in crashes is when FSD is in use, or overall. This matters because people are much more likely to use FSD in situations where driving is easier. So, if the reduction is just during those times, I'm not even sure that would be better than a human driver.
As an example, let's say most people use FSD on straight US Interstate driving, which is very easy. That could artificially make FSD seem safer than it really is.
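To make that selection effect concrete, here's a toy sketch with entirely made-up numbers (none of these rates come from Tesla or Lemonade): even if FSD exactly matches human per-mile safety on every road type, engaging it mostly on easy highway miles still produces a big apparent "reduction" in the blended rate.

```python
# Made-up crash rates (crashes per million miles). Note FSD and humans are
# IDENTICAL on each road type -- any difference below is pure mix shift.
crash_rate = {
    ("human", "highway"): 1.0,
    ("human", "city"):    5.0,
    ("fsd",   "highway"): 1.0,
    ("fsd",   "city"):    5.0,
}
# Made-up mileage mix (millions of miles): FSD gets engaged on highways,
# humans drive the hard city miles themselves.
miles = {
    ("human", "highway"): 2.0,
    ("human", "city"):    8.0,
    ("fsd",   "highway"): 8.0,
    ("fsd",   "city"):    2.0,
}

def blended_rate(driver):
    """Overall crashes per million miles, averaged over the mileage mix."""
    crashes = sum(crash_rate[(driver, road)] * miles[(driver, road)]
                  for road in ("highway", "city"))
    total_miles = sum(miles[(driver, road)] for road in ("highway", "city"))
    return crashes / total_miles

human = blended_rate("human")   # 4.2
fsd = blended_rate("fsd")       # 1.8
print(f"human: {human:.1f}, fsd: {fsd:.1f}, apparent reduction: {1 - fsd/human:.0%}")
```

With these invented numbers the blended stats show a ~57% "reduction" in crashes even though FSD is no safer than a human on any given road. That's why "52% reduction while engaged" and "52% reduction overall" are very different claims.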
My prior is that supervised FSD ought to be safer, so the 52% number kind of surprised me, however it's computed. I would have expected more like a 90-95% reduction in accidents.
I think this might be right, but it does two interesting things:
1) it lets Lemonade reward you for taking safer driving routes (or living in an area that's safer to drive in, whatever that means)
2) it (for better or worse) encourages drivers to use it more. This will improve Tesla's training data but also might negatively impact the FSD safety record (an interesting experiment!)
> ...but also might negatively impact the FSD safety record (an interesting experiment!)
As a father of kids in a neighborhood with a lot of Teslas, how do I opt out of this experiment?
The insurance industry is a commercial prediction market.
It is often an indicator of true honesty, providing there is no government intervention. Governments intervene in insurance/risk markets when they do not like the truth.
I tried to arrange insurance for an obese western expatriate several years ago in an Asian country, and the (western) insurance company wrote a letter back saying the client was morbidly obese and statistically likely to die within 10 years, and they should lose x weight before they could consider having insurance.
> quite skeptical of Tesla's reliability claims
I'm sceptical of Robotaxi/Cybercab. I'm less sceptical that FSD, supervised, is safer than fully-manual control.
Where I live isn't particularly challenging to drive (rural Washington), but I'm constantly disengaging FSD for doing silly and dangerous things.
Most notably, my driveway meets the road at a blind Y-intersection, and my Model 3 just blasts out into the road even though you cannot see cross traffic.
FSD stresses me out. It's like I'm monitoring a teenager with their learner's permit. I can probably count the number of trips where I haven't had to take over on one hand.
> I'm constantly disengaging FSD for doing silly and dangerous things.
You meant “I disable FSD because it does silly things”
I read “I disable FSD so I can do silly things”
It's edging into the intersection to get a better view on the camera. It's further than you would normally pull out, but it will NOT pull into traffic.
It's not edging; it enters the street at a consistent speed (usually >10 mph) from my driveway. The area is heavily wooded, and I don't think it "sees" the cross direction until it's already in the road. Or perhaps the lack of signage or curb makes it think it has the right of way.
My neighbor joked that I should install a stop sign at the end of my driveway to make it safer.
The software probably has a better idea of the car's dimensions than a human driver, so it should be able to get a better view of traffic by pulling out at just the right distance.
Do you have HW3 or HW4?
HW3, unfortunately. Missed the HW4 refresh by a couple of months.
The newest FSD on HW4 was very good in my opinion. Multiple 45min+ drives where I don’t need to touch the controls.
Still not paying $8k for it. Or $100 per month. Maybe $50 per month.
Having handed over control of my vehicles to FSD many times, I’ve yet to come away from the experience feeling that my vehicle was operating in a safer regime for the general public than within my own control.
Keeping a 1-2 car-length stopping distance alone is likely worth over a 50% reduction in at-fault damages.
I think you greatly overestimate humans
We aren’t talking about the average human here.
On average you include sleep-deprived people, people driving way over the speed limit, at night, in bad weather, while drunk, and talking to someone. FSD is very likely situationally useful.
But you can know most of those adverse conditions don’t apply when you engage FSD on a given trip. As such the standard needs to be extremely high to avoid increased risks when you’re sober, wide awake, the conditions are good, and you have no need to speed.
The problem IMO is the transition period. A mostly safe system will make the driver feel at ease, but when an emergency occurs and the driver must take over, it's likely that they won't be paying full attention.
> you greatly overestimate humans
Tesla's FSD still goes full-throttle dumbfuck from time to time. Like randomly deciding it wants to speed into an intersection despite the red light, having done absolutely nothing beforehand. Or swerving because of glare that you can't see, and that a Toyota Corolla could discern with its radar, but which hits the cameras and so fires up the orange cat it's simulating on its CPU.
> I'm less sceptical that FSD, supervised, is safer than fully-manual control.
I'm very skeptical that the average human driver properly supervises FSD or any other "full" self driving system.
this ^^
Lemonade will have some actual claims data to support this already, rather than relying on Tesla's word.
> betting actual money on those claims
Insurance companies can let marketing influence rates to some degree, with programs that tend to be tacked on after the initial rate is set. This self-driving car program sounds an awful lot like safe-driver programs such as GEICO Clean Driving Record, State Farm Good Driver Discount, Progressive Safe Driver, Progressive Snapshot, and Allstate Drivewise. The risk assessment seems to be less thorough than the general underwriting process, and to fall within some sort of risk margin, so to me it seems gimmicky and not a true innovation at this point.
If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
Or at some point subscribing to a service may be easier than owning the damn thing.
All according to plan
It already doesn't make sense to own a car for me. It's cheaper to just call an Uber.
I'm guessing that's a fairly city-centric viewpoint. My car is set up with a roof rack and carries a lot of other gear I want. I'm regularly in places without reliable cell service, etc. Visiting friends can easily be an hour's drive.
Yes, a city viewpoint. I usually just walk, but when I don't I most often take the subway, not even Uber. Though I feel like in Toronto the subway, or some part thereof, is closed or under maintenance way too often. It's not very reliable.
For some this is the case. For others, this is not the case.
some -> most ?
Ah, but could one not argue that the owner of the self-driving car is _not_ the operator, and it is the car, or perhaps Tesla, which operates it?
All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.
Also, even if a system is fully automated, that doesn’t necessarily legally isolate the person who owns it or set it into motion from liability. Vehicle law would generally need to be updated to change this.
Mercedes agrees. They take on liability when their system is operated appropriately.
They say they will, but until relevant laws are updated, this is mostly contractual and not a change to legal liability. It is similar to how an insurance company takes responsibility for the way you operate your car.
If your local legal system does not absolve you from liability when operating an autonomous vehicle, you can still be sued, and Mercedes has no say in this… even though they could reimburse you.
Because that's the law of the land currently.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's law that would allow Tesla (or anyone else) to sell a passenger car with unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate on limited state laws that allow them to provide such service but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.
It's only to differentiate it from their "Unsupervised FSD", which is what they call it now.
I imagine insurance would be split in two in that case. Carmakers would not want to be liable for e.g. someone striking you in a hit-and-run.
If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay. Otherwise it's a human and the situation falls into the bucket of what we already have today.
So yes, carmakers would pay in a hit-and-run.
> If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay
Why? That's not their fault. If a car hits and runs my uninsured bicycle, the manufacturer isn't liable. (My personal umbrella or other insurance, on the other hand, may cover it.)
They're describing a situation of liability, not mere damage. If your bicycle is hit, you didn't do anything wrong.
If you run into someone on your bike and are at fault then you generally would be liable.
They're talking about the hypothetical where you're on your bike, which was sold as an autonomous bike whose manufacturer's software fully drives it, and it runs into someone and is at fault.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
Waymo is also a livery service which you normally aren’t liable for as a passenger of taxi or limousine unless you have deep pockets. /IANAL
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
I see. So the Tesla product they're selling insurance around isn't "Full Self-Driving" or "Autonomous" like the page says.
My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".
FSD isn't perfect, but it is everyday amazing and useful.
> My current FSD usage is 90% over ~2000 miles
I'd guess my Subaru's lane-keeping utilisation is in the same ballpark. (By miles, not minutes. And yes, I'm safer when it and I are watching the road than when I'm watching the road alone.)
My favorite feature of Subaru's system is how, when you change lanes, it stays locked onto the car in the slower lane and slams on the brakes. People behind you love that.
If it was full self driving, wouldn't your usage be 100%?
Sometimes a car is fun to drive.
Yet it still relies on you to cover it with your insurance. Again, clearly not autonomous.
Liability is a separate matter from autonomy. I assume you'd consider yourself autonomous, yet it's your employer's insurance that will be liable if you have an accident while driving a company vehicle.
If the company required a representative to sit in the car with you and participate in the driving (e.g. by monitoring and taking over before an accident), then there's a case to be made that you're not fully autonomous.
> it's your employer's insurance that will be liable if you have an accident while driving a company vehicle
I think you're mixing some concepts.
There's car insurance paid by the owner of the car, for the car. There's workplace accident insurance, paid by the employer for the employee. The liability isn't assigned by default, but by determining who's responsible.
The driver is always legally responsible for accidents caused by their negligence. If you play with your phone behind the wheel and kill someone, even while working and driving a company car, the company's insurance might pay for the damage but you go to prison. The company will recover the money from you. Their work accident insurance will pay nothing.
The test you can run in your head: will you get arrested if you fall asleep at the wheel and crash? If yes, then it's not autonomous or self driving. It just has driver assistance. It's not that the car can't drive itself at all, just that it doesn't meet the bar for the entire legal concept of "driver/driving".
"Almost" self driving is like jumping over a canyon and almost making it to the other side. Good effort, bad outcome.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
> Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
Never say never—it’s not physically impossible. But yes, as it stands, it seems that Tesla will not be self driving any time soon (if ever).
They literally just (in the last few days) started unsupervised robotaxis in Austin.
They are as self-driving as a car can be.
This is different than the one where they had a human supervisor in passenger seat (which they still do elsewhere).
And different than the one where they didn't have human supervisor but did have a follow car.
Now they have a few robotaxis that are self driving.
Have you actually ridden in one? It's unclear whether this is a real thing or not.
If your minor child breaks something, or your pet bites someone, you are liable.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
Can a third party reprogram my dog or child at any moment? Or even take over and control them?
Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk was accepted and paid for by Tesla, then the cost would simply be passed down to consumers. All consumers, including those that want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works, by amortizing lots of risk to make it highly improbable to make a loss in the long run.
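That amortization point can be sketched in a few lines (crash probability and claim cost below are arbitrary placeholders, not real actuarial figures): the expected per-car loss is the same at any fleet size, but the year-to-year swing in the average loss shrinks as the pool grows, which is why a large fleet can afford to carry the risk itself.

```python
import random

random.seed(0)
CRASH_P = 0.05        # assumed annual at-fault crash probability per car
MEAN_COST = 20_000    # assumed average claim cost in dollars

def per_car_cost_spread(fleet_size, trials=2000):
    """Std dev of the average per-car annual loss across simulated years."""
    outcomes = []
    for _ in range(trials):
        # Total claims this simulated year: each car independently crashes
        # with probability CRASH_P.
        total = sum(MEAN_COST for _ in range(fleet_size)
                    if random.random() < CRASH_P)
        outcomes.append(total / fleet_size)
    mean = sum(outcomes) / trials
    var = sum((x - mean) ** 2 for x in outcomes) / trials
    return var ** 0.5

small = per_car_cost_spread(10)
large = per_car_cost_spread(1000)
print(f"spread with 10 cars: ~${small:,.0f}; with 1000 cars: ~${large:,.0f}")
```

The mean loss per car is about $1,000 either way, but the 10-car owner faces roughly ten times the volatility of the 1,000-car fleet, so only the big pool can comfortably eat bad years.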
Seems like the role of the human operator in the age of AI is to be the entity they can throw in jail if the machine fails (e.g. driver, pilot)
I’ve said for years that pragmatically, our definition of a “person” is an entity that can accept liability and take blame.
LLCs can't go to jail though
Because LLCs aren't people
Not to be confused with “human” thanks to SCOTUS.
> Surely if it's Tesla making the decisions, they need the insurance?
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...
Cruise control is hardly relevant to a discussion of liability for autonomous vehicle operation.
In the context of ultramodern cruise control (e.g. comma.ai), which has radar to track the distance to the car (if any) in front of you, and cameras so the car can steer left or right and track the freeway, I think it does.
Not unless they are marketing it as “autopilot” or some such that a random consumer would reasonably assume meant autopilot.
And I’d include “AI driver” as an example.
A random consumer doesn't actually understand what Autopilot means. Most people don't have pilot's licenses. And cars don't fly. Did you not see all the debacles around it when it first came out?
I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2,000/month if robotaxis start dominating cities.
> if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
Drunk driving isn't the primary mover of high risk driving. Rather you have:
1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
2. Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
3. Older people who don't trust technology.
None of those are good risk pools to be in. Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self driving cars are expensive:
* If you hit one and are at fault, you might pay out 1-200k; most states only require $25k-50k of coverage... so you need more coverage, or expect to pay more per incident.
* Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
The biggest factor comes if self driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be in a pool of the remaining accidents that they will have to cover on their own.
> Drunk driving isn't the primary mover of high risk driving.
It kind of is. They're responsible for something like 30% of traffic fatalities despite being a far smaller percentage of drivers.
> People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
https://pubmed.ncbi.nlm.nih.gov/30172108/
But also, wouldn't they already have this by using the vehicle model and year?
> Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
Again something that seems like it would already be accounted for by vehicle model.
> Older people who don't trust technology.
How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age.
> Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight
Are they? They're more likely to include the people who spend the most time in cars, which is another higher risk pool, because it allows those people to spend the time on a phone/laptop instead of driving the car, which is worth more to people the more time they spend doing it and so justifies the cost of a newer vehicle more easily.
> Oh, and those self driving cars are expensive
Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?
> Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
Which is only a problem for the worse drivers who are actually at fault, which makes them more likely to move into the self-driving car pool.
> The biggest factor comes if self driving cars really are much safer than human drivers.
The biggest factor is which drivers switch to self-driving cars. If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected. How safe they are is only relevant insofar as it affects your chances of getting into a collision with another vehicle, and if they're safer then it would make that chance go down to have more of them on the road.
Only 0.61% of car crashes involve fatalities, so that's like 0.2% of car crashes you are referring to. Probably more are due to alcohol, but we don't know the ratio of all accidents (not just fatal ones) that involve alcohol, which would be more telling.
> How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age
Boomers are already the primary anti-EV demographic, with the complaint that real cars have engines. It doesn't matter if they know your age if state laws keep them from acting on it.
> that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?
I think you misunderstood me: if you get into an accident and are found at fault, you are responsible for damage to the other car. Now, if it's a clunker Toyota, that will be a few thousand dollars; if it's a Rolls-Royce, it's a few hundred thousand dollars. The reason insurance rates are increasing lately is that the average car on the road is more expensive than it was ten years ago, so insurance companies are paying out more. If most cars are $250k Waymo cars, and you hit one... and you are at fault, ouch. And we will know whether it is your fault or not, since the Waymo is constantly recording.
> If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected.
That’s not how the math works out (smaller risk pools are more expensive per person period). And it won’t be people switching at random to self driving cars (the ones not switching will be the ones that are more likely to have accidents).
The fact that you think $200 per month is sane is amusing to people in other countries.
Haha, yes, today already sucks badly in many US markets. Imagine what will happen when the only people driving cars manually are "enthusiasts".
Hell, I was paying €180/yr for my New Beetle a decade ago...
Is that low or high?
That's probably the future; Mercedes currently does do this in limited form:
https://www.roadandtrack.com/news/a39481699/what-happens-if-...
Not "currently," "used to": https://www.theverge.com/transportation/860935/mercedes-driv...
It was way too limited to be useful to anyone.
Why is the ship's owner paying for the insurance when it's the captain making all the decisions?
Because the operator is liable? Tesla as a company isn't driving the car, it's a ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?
Sounds like it's neither self-driving, nor autonomous, if I'm on the hook if it goes wrong.
Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
> has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible
Really? Thats crazy.
It's neither self-driving, nor autonomous, and eventually not even a car! (as Tesla slowly exits the car business). It will be 'insurance' on Speculation as a Service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to shut down.
I wonder if they will try to sell off the car business once they can hype up something else. It seems odd to just let the car business die.
Wild prediction, would love to hear the rest of it
Especially since they can push regressions over the air and you could be lulled into a sense of safety and robustness that isn’t there and bam you pay the costs of the regressions, not Tesla.
Who’s the “operator” of an “autonomous” car? If I sit in it and it drives me around, how am I an “operator”?
If you get on a horse and let go of the reins you are also considered the operator of the horse. Such are the definitions in our society.
Great analogy, lol
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
> If it autonomous or self-driving then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean, think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no, and then they turn off your car after e.g. 10 years, which is quite objectionable; or that the answer is yes, but then you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.
Not all insurance claims are based off of the choices of the driver.
It’s because you bought it. Don’t buy it if you don’t want to insure.
Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.
If you bought and owned it, you could sell it to another auto manufacturer for some pretty serious amounts of money.
In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.
You can actually do that. Except that they could just buy one themselves.
Companies exist that buy cars just to tear them down and publish reports on what they find.
> Companies exist that buy cars just to tear them down and publish reports on what they find.
What does it mean to tear down software, exactly? Are you thinking of something like decompilation?
You can do that, but you're probably not going to learn all that much, and you still can't use it in any meaningful sense, as you never bought it in the first place. You only licensed use of it as a consumer (and now that it is subscription-only, maybe not even that). If you have to rebuild the whole thing yourself anyway, what have you really gained? It's not exactly a secret how the technology works, only costly to build.
> Except that they could just buy one themselves.
That is unlikely, unless you mean buying Tesla outright? Getting a license to use it as a manufacturer is much more realistic, but still a license.
Check out Munro and Associates. I'm not talking about software. The whole car.
For what reason?
In case you have forgotten, the discussion is about self-driving technology, and specifically Tesla's at that. The original questioner asked why he is liable when it is Tesla's property that is making the decisions. Of course, the most direct answer is because Tesla disclaims any liability in the license agreement you must agree to in order to use said property.
Which has nothing to do with an independent consulting firm or "the whole car" as far as I can see. The connection you are trying to establish is unclear. Perhaps you pressed the wrong 'reply' button by mistake?
I don't think Tesla lets you buy FSD
They do, until Feb 14th.
Even now I think it's a revocable license
You insure the property, not the person.
Well, it's the risk, the combination of the two..
it's why young drivers pay more for insurance
Not an expert here, but I recall reading that certain European countries (Spain???) allow liability to be put on the autonomous driving system, not the person in the car. Does anyone know more about this?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold with the customer bearing liability. Nothing stops a car maker (or even an individual dealer) from selling cars today and taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers would be sure to buy those cars since it's a better deal for them, and in turn a worse deal for good drivers), but they could.
Self driving is currently sold with the customer bearing liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth someone else taking on the liability - I could be wrong, though.
The coders and sensor manufacturers need the insurance for wrongful-death lawsuits,
and Musk needs it for removing lidar, so it keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth
99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
No? Insurance costs would be passed through to consumers in the form of up-front purchase price. And probably the cost to insure L5 systems for liability will be very low. If it isn't low, the autonomous system isn't very safe.
The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.
I own a Model Y with hardware version 4. FSD prevented me from getting in an accident with a drunk driver. It reacted much faster to the situation than I could have. Ever since, I'm sold that in a lot of circumstances, machines can drive better than humans.
So does AEB in any modern car.
Tesla fans have not realized that every car made since 2021ish can do this.
It does more than AEB. It also knows to swerve out of the way. E: https://www.youtube.com/watch?v=c1MWml-81e0
Generally known as AES, for example from BMW available with Active Driving Assistant or Driving Assistant Plus packages.
I have a late model Audi, and a Tesla Model 3. Audi has all the bells and whistles.
Doesn't come close to the safety I feel in the Tesla. Not even close. I know that's anecdotal.
Design of these safety features for euro cars generally aims to be invisible unless active. You don't "feel" the car in control.
AEB has been around for ages. Even my 2010 Mazda had it. It's nowhere near Tesla's capabilities though. Not sure what you're trying to achieve with such dunks?
Obviously.
My 2016 Honda Civic has automatic braking (and it has lanekeep assist, so it's technologically superior to a 2026 Tesla).
Not directly related to the topic, I spose, but I have a Model 3, and absolutely love it, but the Smart Cruise Control/Driver Assist is, I hate to admit it, pretty annoying (I think it's gotten worse, too). It's incredibly "jumpy" and over-cautious. A car could pull out in your way 300m ahead of you, totally safely, and the car will shit itself and slam on the brakes to be over-cautious. Same thing with pedestrians who are walking alongside the road, posing no risk.
It's so jarring at times that I'll often skip the Cruise Control if I have my wife in the car (so as not to give her car sickness) or other passengers (so as not to make them think I'm a terrible driver!).
I've now developed a totally new skill, which is to temporarily disengage it when I see a mistake incoming, then re-engage it immediately after the moment passes.
NB I am in Australia and don't have FSD so this is all just using Adaptive Cruise Control. Perhaps the much harder challenge of FSD (or near-FSD) is executed a lot better, but you wouldn't assume so.
FSD is way beyond Autopilot (the free Traffic-Aware Cruise Control + Lane Keep). Autopilot uses an entirely different, hand-coded system from several years ago, which they haven't updated at all. FSD is a deep learning neural-network-based system.
Tesla has their own insurance product, which is already very competitive compared to other providers. Not sure if Lemonade can beat them. Tesla's insurance product already has a similar objective in place, where it rewards self driving over manual driving.
Tesla is cooperating with Lemonade on this by providing them necessary user driving data.
If Tesla didn't want Lemonade to provide this, they could block them.
Strategically, Tesla doesn't want to be an insurer. They started the insurance product years ago, before Lemonade also offered this, to make FSD more attractive to buyers.
But the expansion stalled, maybe because the state bureaucracy or maybe because Tesla shifted priority to other things.
In conclusion: Tesla is happy that Lemonade offers this. It makes Tesla cars more attractive to buyers without Tesla doing the work of starting an insurance company in every state.
> But the expansion stalled, maybe because the state bureaucracy or maybe because Tesla shifted priority to other things.
If the math was mathing, it would be malpractice not to expand it. I'm betting that their scheme simply wasn't workable, given the extremely high costs of claims (Tesla repairs aren't cheap) relative to the low rates that they were collecting on premiums. The cheap premiums are probably a form of market dumping to get people to buy their FSD product, the sales of which boosts their share price.
It was not workable. They have a loss ratio of >100% [1], as in they paid out more in claims than received in premiums before even accounting for literally any other costs. Industry average is ~60-80% to stay profitable when including other costs.
They released the Tesla Insurance product because their cars were excessively expensive to insure, increasing ownership costs, which was impacting sales. By releasing the unprofitable Tesla Insurance product, they could subsidize ownership costs, making the cars more attractive to buy right now, which pumped revenues immediately in return for an "accidental" write-down in the future.
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
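To make the loss-ratio arithmetic above concrete, a minimal sketch (the ~60-80% band is from the comment above; the dollar figures are made up for illustration):

```python
# Loss ratio = claims paid / premiums earned. Above 100%, the insurer
# loses money on claims alone, before salaries, reinsurance, etc.
# All numbers here are hypothetical, just to show the mechanics.

def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    return claims_paid / premiums_earned

# A healthy book: $70 of claims per $100 of premium (inside the ~60-80% band).
healthy = loss_ratio(70, 100)

# A book like the one reported for Tesla Insurance: claims exceed premiums.
underwater = loss_ratio(110, 100)

print(healthy)     # 0.7
print(underwater)  # 1.1 -> unprofitable before any other expenses
```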
Who was paying for this?
You as the consumer when you buy a tesla car that's twice the price of what you can get it for in Asia. Teslas are very cheap to produce.
Remember with their own insurance they also have access to the parts at cost.
That is not true. Since Tesla was losing money on their insurance to boost sales, the customers were not paying for it; they were receiving a service below cost.
The people paying were actually the retirement funds who fronted Tesla's cash reserves when they purchased Tesla stock and the US government paying for it in the form of more tax credits on sales that would not have otherwise materialized without this financial fraud. But do not worry, retirement funds and the US government may have lost, but it boosted Tesla sales and stock valuation so that Elon Musk could reach his KPIs to get his multiple tens of billions of dollars of payout.
Wow this went deep ahaha
The math should've mathed. Better data === lower losses right? They probably weren't able to get it to work quite right on the tech side and were eating fat losses during an already bad time in the market.
It'll come back.
Lemonade or Tesla, if you find this, let's pilot. I'm a founder in Sunnyvale, insurtech vertical at pnp.
You'd be very surprised. Distribution works wonders. You could have a large carrier taking over Tesla's own vehicles in markets they care about. The difference then would be loss ratios on the data collection, like does LIDAR data really beat Progressive Snapshot?
The two are measuring data for different sources of losses for carriers.
You're telling me this car insurance drives itself?
I was curious what the break-even is where the insurance discount covers the $99/mo FSD subscription. I got a Lemonade quote around $240/mo (12k mi/yr lease on a Model 3), so 50% off would save ~$120/mo - i.e. it would cover FSD and still leave ~$21/mo net. Or, "free FSD if you use it".
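That break-even can be sanity-checked with a quick sketch; the $240 quote, 50% discount, and $99 subscription are the figures from my case, so treat them as assumptions and plug in your own:

```python
# Rough break-even check: does the FSD insurance discount cover the
# FSD subscription? Figures are from my own quote, illustrative only.

def net_monthly_savings(premium: float, discount_rate: float, fsd_sub: float) -> float:
    """Monthly discount on the premium, minus the FSD subscription cost."""
    return premium * discount_rate - fsd_sub

def break_even_premium(discount_rate: float, fsd_sub: float) -> float:
    """Premium at which the discount exactly pays for the subscription."""
    return fsd_sub / discount_rate

quote = 240.0    # my Lemonade quote, $/month (assumption - yours will differ)
discount = 0.50  # advertised max discount for FSD miles
fsd = 99.0       # FSD subscription, $/month

print(net_monthly_savings(quote, discount, fsd))  # 21.0 -> FSD covered, plus $21
print(break_even_premium(discount, fsd))          # 198.0 -> any premium above this wins
```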
I believe, at the end of the day, insurance companies will be the ones driving FSD adoption. The media will sensationalize the outlier issues of FSD software, but insurance companies will set the incentives for humans to stop driving.
$240 per month? That's literally eight times what I pay in the UK. Ok I don't have a fancy electric car but still... what.
> $240 per month?
Are Teslas still ridiculously-expensive to repair? (I pay $1,100 a year (~$92/month) to insure my Subaru, which costs more than a Model 3.)
I don't have a car so I don't know what is normal. I just went through the Lemonade quote process. (I have a license and my record is clean, though, so there shouldn't be any high-risk flags.)
We have contingency fee personal injury lawyers and you have loser pays. Your system works better.
Yep, also people who will spend thousands of dollars to get a tiny scratch repaired, because for some reason in the US everyone expects cars to be utterly perfect.
Yep - that's the way to get adoption. While people set the bar too high for self-driving cars, the bar should just be "safer than the average person". An old greying socialist here, saying that capitalism drives the right outcomes. Same with low-carbon: insurance will help with climate change mitigation.
Hmmm. The source for the "FSD is safer" claim might not be wholly independent: "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
I would be surprised if that was what they were actually looking at. They are an established insurance company with their own data and the actuaries to analyze it. I can't imagine them doing this without at least validating a substantial drop in claims relating to FSD capable cars.
Now that they are offering this program, they should start getting much better data by being able to correlate claims with actual FSD usage. They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future.
They are a grossly unprofitable insurance company. Your actuaries can undervalue risk to the point you are losing money on every claim and still achieve that.
In fact, Tesla Insurance, the people who already have direct access to the data, loses money on every claim [1].
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
> They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future
What do you mean?
It doesn't really matter, because the insurance company itself will learn whether that is correct when the claims start coming in.
It's their own bet to make.
> "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
Teslas only do FSD on motorways where you tend to have far fewer accidents per mile.
Also, they switch to manual driving if they can't cope, and because the driver isn't paying attention this usually results in a crash. But hey, it's in manual driving, not FSD, so they get to claim FSD is safer.
FSD is not and never will be safer than a human driver.
> Teslas only do FSD on motorways where you tend to have far fewer accidents per mile.
They have been end to end street level for the past two years.
Not successfully.
Successful enough for me and many other people I know. End to end from my house to grocery store, kids schools, friends houses, etc. Multiple times per day for the past year.
It’s not perfect but I’d consider it a smashing success for something I rely on for safely transporting my family every day.
I wouldn't feel safe in one. I've seen how they drive, and I've had a brief shot of a car with FSD.
They are not safe and they will never be safe.
> They are not safe
Debatable, but you won't be convinced.
> they will never be safe.
Define safe? Would be interested to see you provide a benchmark that is reasonable, and lock it in now so we can see if this statement is falsified in the future.
If they're not safe then why does the data show they're safer than humans?
Why will they never be safe?
Also can you define safe?
A 50% discount is pretty damning empirical evidence for FSD being better at driving your Tesla than you are.
Oh, you've totally forgotten about selling to third parties and making tons of money off of what you do and where you go.
A discount they get to set, on a subset of miles of their choice, may just be a marketing expense for an insurance startup that makes losses, relies on VC capital, and needs growth: https://en.wikipedia.org/wiki/Lemonade,_Inc.#2015%E2%80%9320... I was impressed by this until I looked Lemonade up.
No, it does not. A 50% discount with the insurer still making industry-average profit, or at least being profitable at all, would tell you that. Selling at a loss does not indicate your costs are actually lower. You need to wait until we learn whether it is actually at a loss.
Sir, your bias is extreme.
Ah yes, posting well documented video evidence of reality is bias. How silly of me. The only unbiased take is to ignore my lying eyes and make logically unsound arguments in favor of endangering the public. That is what unbiased people do.
I also like how you completely avoided addressing my argument in favor of an attempted ad hominem.
> ignore my lying eyes and make logically unsound arguments
Why haven't you acknowledged this video as fake? https://www.youtube.com/watch?v=Tu2N8f3nEYc
I am utterly baffled by what you are trying to argue with this post except for distracting from the weakness of your original argument with more attempted(?) ad hominems. Must be my bias showing.
Uh huh
We don't know if 50% makes it actually cheaper than other car insurance companies, or the coverage is comparable, or if they have comparable service. Or if they sell your location information to marketers.
Assuming this discount is offered broadly and indefinitely. Otherwise these might just be marketing dollars.
I'm sure from an underwriting perspective, they could also offer a significant discount for miles driven with LKAS turned on for the same reason they can do it for FSD: you only do it in certain (lower risk) conditions.
A 50% discount when using FSD, or just doubled insurance company profits when not using FSD? The only evidence that actually matters is cost in comparison to other insurance companies. If this product is cheaper for you, then it probably does indicate FSD is better at driving than you (well, than the average driver in your demographic). Maybe this is damning with faint praise.
Yeah I'm actually very curious about this, it's the first I've heard.
I'd like to know what data this is based on, and if Tesla is providing any kind of subsidy or guarantee.
There's also a big difference between the value of car damages and, well, death. E.g. what if FSD is much less likely to get into otherwise common fender benders that don't harm you, but more likely to occasionally accidentally drive you straight into a divider, killing you?
I will sell you a loaf of bread for $10 and a tortilla for $100.
Analysts saying tortilla industry in shambles.
Or a price hike if the fleet API tattles on you for negative driving behaviors.
It may not be on the marketing copy but it’s almost certainly present in the contract.
Yes, this is giving away everything about your vehicle's driving to a third party, for sale or otherwise. I don't like this personally and I don't like it for my vehicle either. Where I go in my vehicle and when I do it is my business. With vehicles being IoT connected, we are forced to surrender that data, as there is no opt-out except for disconnecting the antenna. Not to mention what kind of data is pulled off when it goes in to be serviced.
Lemonade purchased Metromile and significantly increased prices, 2.5x if I recall correctly. This forced me to move to Geico. Now that prices have increased and the new self-driving car insurance gives a discount, are you effectively paying the same old rate?
Just curious about this: this was Lemonade's insurance integrated with the Tesla, right? How's Geico for you? Probably just fine, right? Any differences?
This is the biggest scam I can think of. You are not driving the car. If it crashes while it's "self driving", the company that made the self-driving system should be at fault. The End!
Gees! Come on!
PS: I'm not talking about Tesla's fake FSD.
$99/month is more than I have been willing to pay for FSD, but if it lowers my insurance by $200/month, I could be convinced.
Lowering by $200? Full coverage on two recent model cars here and that's nearly three quarters of my monthly insurance bill. Insane what people are paying for insurance these days.
The whole point of self-driving cars (to me) is I don't have to own or insure it, someone else deals with that and I just make it show up with my phone when I need it.
Imagine this for a whole neighborhood! Maybe it'd be more efficient for the transport to come at regular intervals though. And while we're at it, let's pick up other people along the way, you'll need a bigger vehicle though, perhaps bus-sized...
Half-jokes aside, if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car. This is all but guaranteed based on all SaaS services so far.
This only works in neighborhoods that are veritable city blocks, with buildings several stories tall standing close by. Not something like northern Houston, TX; it barely works for places like Palo Alto, CA. You cannot run buses on every lane, at a reasonable distance from every house.
The point of a car is that it takes you door to door. There's no expectation to walk three blocks from a stop; many US places are not intended for walking anyway. Consider heavy bags from grocery shopping, or similar.
Public transit works in proper cities, those that became cities before the advent of the car, and were not kept in the shape of large suburban sprawls by zoning. Most US cities only qualify in their downtowns.
Elsewhere, rented / hailed self-driving cars would be best. First of all, fewer of them would be needed.
> if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car
Maybe for you, I already don't own it and have not found that to be true. I pretty much order an uber whenever I don't feel like riding my bike or the bus, and that costs <$300 most months. Less than the average used car payment in the US before you even consider insurance, fuel, storage, maintenance, etc.
I also rent a car now and then for weekend trips, that also is a few hundred bucks at most.
I would be surprised if robotaxis were more expensive long term.
Self-driving municipal busses would be fantastic.
Also, a real nightmare for the municipal trade unions. (Do you know why every NYC subway train needs to have not one but two operators, even though it could run automatically just fine?)
Why?
Because the Transport Workers Union fought tooth and nail for it. Laying off hundreds of operators would be a politically very dangerous move.
Huh. I wonder if that makes any sense. It doesn't seem to make sense to keep employing people if you no longer need them. It sucks to be laid off, but that's just how it works.
It also shows a lack of imagination. If you have to provide a union with a job bank, why not re-deploy employees to other roles? With one person per train, re-deploy people to run more trains, therefore decreasing the interval between trains. Stations used to have medics, but this was cut; how about re-training people to be those medics? The subway could use a signaling upgrade and positive train control. Installing platform screen doors to greatly reduce the incidence of people falling onto the tracks is going to need a lot of labor.
Why would you need buses?
Mass transit is a capacity multiplier. If 35 people are headed in the same direction compare that with the infrastructure needed to handle 35 cars. Road capacity, parking capacity, car dealerships, gas stations, repair shops, insurance, car loans.
Believe it or not, in some cities that have near grid-lock rush-hour traffic - there's between 50-100%+ as many people traveling by bus as by car.
If all of those people switch to cars, you end up with it taking an hour to travel 1 mile by car.
It's almost as if they have busses for a reason.
First, these cities should be fixed by removing the traffic magnets. We're far past the point of the old, obsolete ideology of trying to supply as much traffic capacity as possible.
But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.
> First, these cities should be fixed by removing the traffic magnets.
If you remove the jobs and housing, traffic does get a lot better. But it's not much of a city without jobs and housing.
Car traffic magnets like highways inside urban cores? Or people traffic magnets like office buildings, colleges, sports stadiums, performing arts venues, shopping malls?
6-seater self-driving municipal minivans would be fantastic, too. (I would still call that a "bus", but I don't care what we call it.)
> Maybe it'd be more efficient for the transport to come at regular intervals though
Efficient for who, is the problem
Focusing only on price, renting a beefy shared "cloud" computer is cheaper than buying one and replacing it every 5 years. It's not always an issue for idle hardware.
Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
Cars and personal computers have advantages over shared resources that often make them worth the cost. If you want your transport/compute at busy times you may find limitations. (Ever got on the train and had to stand because there are no seats? Ever had to wait for your compute job to start because the machines are all busy? Both have happened to me.)
I ran the numbers, and for most non-braindead cities something like a fleet of 6-seater minivans will easily replace all of local transit.
And with just 6 people, the overhead of an imperfect route and additional stops will be measured in minutes.
And of course, it's pretty easy to imagine an option to pay a bit more for a fully personal route.
This exists in a way -- I'd wager every city has a commercial service that will shuttle you to, say, the airport. They're not cheap, however.
Yep. And it's indeed a good model for this mode of transportation. And they ARE cheap.
For example, in Seattle I can get a shared airport shuttle for $40 with the pick-up/drop-off at my front door. And this is a fully private ADA-compliant commercial service, with a healthy profit margin, not a rideshare that offloads vehicle costs onto the driver. And a self-driving van can be even cheaper than that, since it doesn't need a driver.
Meanwhile, transit also costs around $40 per trip and takes at least 1 hour more. And before you tell me: "no way, the transit ticket is only $2.5", the TRUE cost of a transit ride in Seattle is more than $20. It's just that we're subsidizing most of it.
So you can see why transit unions are worrying about self-driving. It'll kill transit completely.
You made too many false assumptions if you came up with those routes. Experts have run real numbers, including looking at what happens in the real world: https://humantransit.org/category/microtransit (as I write this you need to scroll to the second article to find the useful rebuttal of your idea)
Yeah, yeah: "Major US Public Transit Union Questions “Microtransit”" Read it. Go on. It's pure bullshit.
The _only_ issue with the old "microtransit" is the _driver_. Each van ends up needing on average MORE drivers than it moves passengers. It does solve the problem of throughput, though.
But once the driver is removed, this problem flips on its head. Each regular bus needs around 4 drivers for decent coverage. It's OK-ish only when the average bus load is at least 15-20 people. It's still much more expensive and polluting than cars, but not crazily so.
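A back-of-the-envelope sketch of that driver-overhead point (the 4-drivers-per-vehicle and ridership numbers are my assumptions from above, not real transit data):

```python
# Crude drivers-per-passenger comparison between a big bus and a
# small van, with and without a human driver. All figures are
# illustrative assumptions, not real transit figures.

def drivers_per_passenger(drivers_per_vehicle: float, avg_riders: float) -> float:
    return drivers_per_vehicle / avg_riders

bus = drivers_per_passenger(4, 15)  # ~4 driver shifts for coverage, ~15 riders avg
van = drivers_per_passenger(4, 3)   # same coverage, but only ~3 riders per van

print(bus)  # ~0.27 drivers per passenger
print(van)  # ~1.33 - more drivers than passengers moved, the microtransit trap

# The term that self-driving removes entirely:
print(drivers_per_passenger(0, 3))  # 0.0
```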
Scroll down to the other articles as I said in the first place.
Self driving changes some things, but there are a lot of other points in the many articles linked from there that don't change.
This article is just a bunch of propaganda. You can tell that by the picture with people in the shape of a bus next to the line of cars. Every time you see it, you can immediately blacklist the author and ignore whatever they are saying about cars.
Can you guess why?
Hint: think about the intervals between buses and how you should represent them to stay truthful. And that buses necessarily move slower than cars. And that passengers will waste some time due to non-optimal routes and transfers. And that passengers will waste some time because they need to walk to the station.
So back to my point, can you tell me EXACTLY what I should read in that article? Point out the paragraph, please.
> But why make them significantly cheaper when you can match the price and extract more profits?
Even better — charge 10% less and corner the market! As long as nobody charges 10% less than you…
> Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
Yeah, this would rely on robust competition.
Nah, I don't want to share my car with anyone. It's my own personal space where I can keep some of my stuff and set it up exactly the way I want.
That's how some people feel about airplanes. Presumably you're not one of them. For some people, the inconvenience of being responsible for a car would outweigh the benefit of setting up their stuff inside of one.
It's not even an inconvenience. I like my cars. Dealing with ride hailing services (autonomous or not) is certainly far more inconvenient than owning a car (unless maybe you're stuck living somewhere without convenient parking).
For the vast majority of people who own a car, continuing to own the car will remain the better deal. Most people need their car during "rush hour", so there isn't any savings from sharing, and worse, some people have "high standards" and will demand the rental be a clean car nicer than you would accept, thus raising the costs (particularly if you drive used cars). Any remaining argument for a shared car dies when you realize that you can leave your things in the car, and you never have to wait.
For the rest - many of them live in a place where not enough others will follow the same system and so they will be forced to own a car just like today. If you live in a not dense area but still manage to walk/bike almost everywhere (as I do), renting a car is on paper cheaper the few times when you need a car - but in practice you don't know about that need several weeks in advance and so they don't have one they can rent to you. Even if you know you will need the car weeks in advance, sometimes they don't have one when you arrive.
If you live in a very dense area such that you almost regularly use transit (but sometimes walk, bike), but need a car for something a few times per year, then not owning a car makes sense. In this case the density means shared cars can be a viable business model despite not being used very much.
In short, what you say sounds insightful, but the reality of how cars are used means it won't happen for most car owners.
> sometimes they don't have one when you arrive.
Or, if they are Hertz, they might have one but refuse to give it to you. This happened to my wife. In spite of payment already having been made to Hertz corporate online, the local agent wouldn't give up a car for a one-way rental. Hertz corporate was less than useless, telling us their system said a car was available, and suggesting we pay them hundreds of dollars again and go pick it up. When I asked the woman from corporate whether she could actually guarantee we would be given a car, she said she couldn't. When I suggested she call the local agent, she said she had no way to call the local office. Unbelievable.
Since it was last minute, there were... as you said, no cars available at any of the other rental companies. So we had to drive 8 hours to pick her up. Then 8 hours back, which was the drive she was going to make in the rental car in the first place.
Hertz will hurts you.
I personally would have changed it to a round-trip then just returned the car to the other Hertz location and let them figure it out.
Hertz this time, but things like that have happened with every rental company I know of.
This is the nightmare scenario for me. A forever subscription for the usage of a car.
Subscription for self driving will almost be a given with so many bad actors in tech nowadays, but never even being allowed to own the car is even worse.
I think this is purely psychological. The notion of paying for usage of some resource that you don't own is really rather mundane when you get down to it.
One mean tweet and your self driving subscription is taken away is way better than one mean tweet and your car is taken away.
Subscription for changes to maps and the law makes sense. I'd also pay for the latest safety improvements (but they better be real improvements). However they are likely to add a number of unrelated things and I object to those.
How do maps changes make sense to subscribe to when they are on OSM?
And what do you even mean by subscription to changes to the law?
If OSM is up to date - in many places it is very outdated (in others it is very good).
Law - when a government changes the driving laws. Government can be federal (I have driven to both Canada and Mexico; getting to Argentina is possible, though I don't think it has ever been safe; likewise it is possible to drive over the North Pole to Europe), or state (or whatever the country calls their equivalent). When a city changes the law they put up signs, but if a state passes a law I'm expected to know it even if I have never driven in that state before. Right-turn-on-red laws are the only ones I can think of where states differ, but there are likely others.
Laws also cover new traffic control systems that may not have been in the original program. If the self driving system can't figure out the next one (think roundabout) then it needs to be updated.
That's the point of self-driving fleets. Or maybe a special category of leased vehicles.
This is about a self-driving car you own.
I think part of the issue in California at least is that you must have insurance. You gonna get a giant fine if you don't.
So, here's a thought...
If FSD is going to be a subscription and you will never own your fancy autopilot feature, why should the user pay for insurance?
The user is paying for a service that they do not control and whose workings are completely opaque. How can responsibility ever lie with the user in such a situation?
If you buy an autonomous killing robot and ask it to kill someone, who's responsible?
It would be interesting to see if Lemonade requires a Driver Monitoring System (DMS) to see if the driver/operator is actually paying attention (or, like sleeping / watching Netflix / whatever) while at the driver's seat.
Anybody know??
Tesla FSD is still a supervised system (= ADAS), afaik.
What happens if you have FSD turned off and like to drive fast on public roads. Will they see this telemetry and raise your rates?
It seems the answer is yes. From their web site:
> Fair prices, based on how you drive [...] Get a discount, and earn a lower premium as you drive better.
Bummer... it's super fun to floor them off the line.
Someone with a Plaid will need to test this out to see how high they can make their Lemonade premium.
This (instant torque) is exciting for about the first week of electric car ownership, it gets old very fast. I have far more fun driving my much slower gas-engined cars.
Speak for yourself. Over 3 years and 100k km and still enjoying it.
MT gas cars are very fun to drive!
It doesn't get old. What a ludicrous statement.
I did get lots of traction issues with FWD EV, any sort of wet - you need to baby it.
There's much more to enjoying cars than speed in a straight line, though I don't disagree at all that most EVs are exceptional at that.
Booting the go pedal at every stop sign or light just feels like being a bit of a childish jerk after a short while on public roads once the novelty wears off.
You know what's weird? This is a company that has been using the Fleet API for quite a while now to monitor non-professional drivers using FSD on their daily commute, often while distracted doing other things. The latest versions even allow some phone usage.
And yet people are skeptical. I mean, they should be skeptical, given that the company is using this for marketing purposes. It doesn't make sense to just believe them.
But it is strange to see this healthy skepticism juxtaposed with the many unskeptical comments attached to recent Electrek articles with outlandish claims.
I wish this was available in Miami! I would switch in a heartbeat.
Marketing stunt
One's first thought is that they ought to be running away from underwriting this as fast as they can go. But then one realizes that it is all profit -- they need never pay a claim, because in accidents involving autonomous vehicles, it will never be possible to establish fault; and then one sees that the primary purpose of most automations is to obscure responsibility.
I think there's a narrow unregulated space where this could be true. I'm exercising my creativity trying to imagine it - where automations are built with the outcome of obscured responsibility in mind. And I could understand profit as a possible driving factor for that outcome.
As an extreme end of a spectrum example, there's been worry and debate for decades over automating military capabilities to the point where it becomes "push button to win war". There used to be, and hopefully still is, lots of restraint towards heading in that direction - in recognition of the need for ethics validation in automated judgements. The topic comes up now and then around Teslas, and the impossible decisions that FSD will have to make.
So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability seems to run into ethical constraints. In reality it's the ruthless bottom line focused corps - that don't seem to be the norm, but may have an outsized impact - that actually push up against ethical constraints. But even then, I would be wary as an executive documenting a decision to disregard potential harms at one of them shops. That line is being tested, but it's still there.
In my actual experience with automations, they've always been derived from laziness / reducing effort for everyone, or "because we can", and sometimes a need to reduce human error.
You're not making any sense. In terms of civil liability, fault is attached to the vehicle regardless of what autonomous systems might have been in use at the time of a collision.
Or even who was driving it, in the case of ordinary cars.
One might imagine that lower courts won't determine fault; one would be wrong.
> and then one sees that the primary purpose of most automations is to obscure responsibility.
Are you saying that Tesla's investments in FSD have been made with the goal of letting drivers get away with accidents? The law is black and white.
What is the "driver"? Who wrote which line of the software? Who tested it? Who approved its deployment? The rest is lawyers.
I'm 200% sure it's subsidized by Tesla, and that they have a deal where Tesla will reimburse Lemonade for any losses on these policies.
Fleet API gives location data, no? I bet this discount will be paid for by this location data
I have Lemonade for my home insurance. It's been reliable for several years and the customer service is great. I don't have a self-driving car but I wouldn't hesitate to sign up. Their rates are very affordable.
I've had their Home Insurance since they started up and grabbed their car insurance a couple years ago. Competitive price, excellent customer service, no notes.
Have you ever had a claim paid out by them?
> automatically tracking FSD miles versus manual miles through direct Tesla integration.
No thanks. I unplugged the cellular modem in my car precisely because I can't stand the idea that the manufacturer/dealer/insurance company or other unauthorized third parties could have access to my location and driving habits.
I also generally avoid dealers like the plague and only trust the kind of shops where the guy who answers the phone is the guy doing the work.
Now all that's missing is a self-insuring car and we're set.
TL;DR: 50% insurance discount for Tesla vehicles driven by Tesla FSD.
On the surface, this looks like an endorsement of Tesla's claims about FSD safety.
Assuming the non-discounted rates are market-competitive.
And that the sort of miles accrued when using FSD in Arizona aren't >50% less likely to result in a claim than the average mile driven, regardless of who's driving.
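That selection-bias worry can be made concrete with a little arithmetic. A hypothetical sketch - the road-type mix and claim rates below are invented for illustration, not Tesla's or Lemonade's actual figures:

```python
# Hypothetical claims per million miles by road type (made-up numbers).
claims_per_mm = {"highway": 1.0, "city": 5.0}

# Suppose drivers engage FSD mostly on easy highway miles,
# while manual miles skew toward city driving.
fsd_mix = {"highway": 0.9, "city": 0.1}     # share of FSD miles by road type
manual_mix = {"highway": 0.3, "city": 0.7}  # share of manual miles by road type

def expected_claims(mix):
    """Expected claims per million miles for a given road-type mix."""
    return sum(share * claims_per_mm[road] for road, share in mix.items())

fsd_rate = expected_claims(fsd_mix)        # 0.9*1.0 + 0.1*5.0 = 1.4
manual_rate = expected_claims(manual_mix)  # 0.3*1.0 + 0.7*5.0 = 3.8

# FSD miles look ~63% "safer" here even though, per road type, FSD drives
# exactly as well as a human -- the gap comes entirely from mile selection.
reduction = 1 - fsd_rate / manual_rate
print(f"FSD: {fsd_rate:.1f}, manual: {manual_rate:.1f}, reduction: {reduction:.0%}")
```

So a headline discount on FSD miles doesn't by itself tell you whether FSD is safer on a like-for-like road; you'd need claim rates conditioned on road type to separate the two effects.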