• z7 an hour ago

    The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

    There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.

    The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.

    Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
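
    To put a number on how huge: here's a rough sketch of the exact Poisson 95% interval around nine events in 500,000 miles (the crash count and mileage are the article's; treating crashes as a Poisson process is my simplification):

        from scipy.stats import chi2

        crashes = 9
        miles = 500_000          # cumulative robotaxi miles cited in the article
        alpha = 0.05

        # Exact (Garwood) 95% interval for a Poisson count of 9 events
        lo = 0.5 * chi2.ppf(alpha / 2, 2 * crashes)             # ~4.1 events
        hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (crashes + 1))   # ~17.1 events

        per_million = 1_000_000 / miles
        print(f"point estimate: {crashes * per_million:.0f} crashes per million miles")
        print(f"95% interval:   {lo * per_million:.1f} to {hi * per_million:.1f}")

    The upper bound is roughly four times the lower bound, so any multiple derived from these nine events is similarly fuzzy.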

    • fabian2k an hour ago

      I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.

      • xnx 31 minutes ago

        Tesla could share real, complete data at any time. The fact that they don't is likely an indicator that the data does not look good.

        • sigmoid10 33 minutes ago

          I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides that it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is whether insurers charge higher (or lower) rates for self-driving, because then I can be sure they have the data and did the math right to maximise their profits.

          • silon42 39 minutes ago

            "insurance-reported" or "damage/repair-needed" would be a better criteria for problematic events than "police-reported".

            • cbeach 17 minutes ago

              Also worth pointing out that Electrek's coverage of Tesla turned sour after its editor, Fred Lambert, took a particular dislike to the brand and to Elon Musk.

              It's pretty clear from his X feed:

              https://x.com/FredLambert

              The guy has serious Musk Derangement Syndrome.

              • touwer 14 minutes ago

                He's probably smart then

            • SilverBirch 2 hours ago

              To be honest I think the true story here is:

              > the fleet has traveled approximately 500,000 miles

              Let's say they average 10 mph and operate 10 hours a day. That's 5,000 car-days of travel, or, to put it another way, about 30 cars running over 6 months.
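
              Spelled out (the 10 mph average and the 10 hours a day are my assumptions; only the 500,000 miles comes from the article):

                  fleet_miles = 500_000       # cumulative miles, from the article
                  avg_speed_mph = 10          # assumption
                  hours_per_day = 10          # assumption
                  months = 6                  # roughly the service window discussed here

                  car_hours = fleet_miles / avg_speed_mph      # 50,000 car-hours
                  car_days = car_hours / hours_per_day         # 5,000 car-days
                  cars = car_days / (months * 30)              # ~28 cars on the road each day
                  print(round(car_days), round(cars))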

              That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

              One crash in this context is going to just completely blow out their statistics. So it's kind of dumb to even talk about the statistics today. The real takeaway is that the Robotaxis don't really exist; they're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage. That won't happen until they've built something that actually works, and that may never happen.

              • mcherm 41 minutes ago

                > The real takeaway is that the Robotaxis don't really exist

                More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.

                • razingeden 2 hours ago

                  >One crash in this context is going to just completely blow out their statistics.

                  One crash in 500,000 miles would merely put them on par with a human driver.

                  One crash every 50,000 miles would be more like having my sister behind the wheel.

                  I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

                  If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: in her state its license would have been suspended or revoked after the first two or three accidents, and it would have been thrown in JAIL as a "scofflaw" if it kept driving.
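
                  For reference, the arithmetic implied by the article's figures (a sketch; the one-per-500,000-miles "human par" is my claim above, not the article's):

                      crashes = 9
                      miles = 500_000

                      miles_per_crash = miles / crashes     # ~55,556 miles per crash
                      human_par = 500_000                   # my "on par with a human" figure above
                      print(f"one crash every ~{miles_per_crash:,.0f} miles, "
                            f"about {human_par / miles_per_crash:.0f}x the claimed human rate")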

                  • jacquesm 17 minutes ago

                    > One crash in 500,000 miles would merely put them on par with a human driver.

                    > One crash every 50,000 miles would be more like having my sister behind the wheel.

                    I'm not sure if that leads to the conclusion that you want it to.

                    • martin_a 11 minutes ago

                      > One crash every 50,000 miles would be more like having my sister behind the wheel.

                      Misogyny at its best. Idiot.

                      • burnished 6 minutes ago

                        They might have forgotten how to share an anecdote and their sister might just be a regular awful driver

                  • sammyjoe72 18 minutes ago

                    Elon promised self-driving cars in 12 months back in 2017? He's also promising Optimus robots doing surgery on humans in 3 years? Extrapolating… Optimus is going to kill some humans and it will all be worth it!

                    • mikkupikku an hour ago

                      All these self driving and "drivers assistance" features like lane keeping exist to satisfy consumer demand for a way to multitask when driving. Tesla's is particularly cancerous, but all of them should be banned. I don't care how good you think your lane keeping in whatever car you have is, you won't need it if you keep your hands on the wheel, eyes on the road, and don't drive when drowsy. Turn it off and stop trying to delegate your responsibility for what your two ton speeding death machine does!

                      • nkrisc an hour ago

                        I think it’s unfair to group all those features into “things for people who want to multitask while driving”.

                        I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.

                        These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.

                        • andrewaylett 26 minutes ago

                          There are two layers, both relating to concentration.

                          Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.

                          The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.

                        • plqbfbv 33 minutes ago

                          Have you ever driven more than 200 km at an average of 80 km/h, with plenty of curves on the highway? Perhaps after work, just to see your family once a month?

                          Driver fatigue is real, no matter how much coffee you drink.

                          Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at my destination with my Model 3 than when I drive the regular ICE car with its bad lane-assist UX.

                          EDIT: as for people who look at their phones, they'll still look at their phones with lane-keep active; it just makes things a little safer for them and everyone else, really.

                          • mikkupikku 3 minutes ago

                            If you're on a road trip, pull the fuck over and sleep. Your schedule isn't worth somebody else's life. If that's your commute, get a new apartment or get a new job. Endangering everybody else with drowsy driving isn't an option you should ever find tenable.

                          • melagonster an hour ago

                            But this is why people bought Teslas. Musk promised that the car would drive itself.

                            • Spooky23 30 minutes ago

                              Don’t be silly. Why would a reasonable person think “Full Self Driving” meant that a car would fully drive itself?

                            • fragmede an hour ago

                              We made drunk driving super illegal and that still doesn't stop people. I would rather they didn't in the first place, but since they're going to anyway, I'd really rather they have a computer that does it better than they do. FSD will pull over and stop if the driver has passed out.

                              • mikkupikku an hour ago

                                If we could ensure that only drunk people use driver assistance features, I'd be all for that. The reality is that 90% of the sober public are now driving like chronic drunks because they think their car has assumed the responsibility of watching the road. Ban it ALL.

                                • michaelsshaw 36 minutes ago

                                  What I'm hearing here is anecdotal and largely based on feelings. The facts are that automatic emergency braking (which should not activate under normal driving circumstances as it is highly uncomfortable) and lane-keeping are basic safety features that have objectively improved safety on the roads. Everything you've said is merely conjecture.

                            • rich_sasha 40 minutes ago

                                As much as I'd love to pile in on Tesla, the severity of the incidents is unclear to me (I know they are listed), as is whether human drivers would report such things.

                              "Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?

                              By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".

                              I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.

                              For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but they are overall of lower severity, leading to an overall improvement of safety. Judging by TFA alone I can't tell this isn't the case here. I can imagine a robotaxi having a different distribution of frequency and severity of accidents than a human driver.

                              • orwin 28 minutes ago

                                  He compared against estimated statistics for non-reported accidents (typically your example: incidents that involve only one vehicle and only result in scratched paint) to arrive at the 3x. Otherwise the title would have been 9x (which is in line with the 10x a data analyst blogger arrived at ~3 months ago).

                                > roundabouts in cities (in Europe anyway) tend to increase the number of crashes

                                  Not in France, according to the data. It depends on the speed limit, but roundabouts decrease accidents by 34% overall, and by almost 20% when the speed limit is 30 or 50 km/h.
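
                                  To make that explicit, the arithmetic is just a rescaling of the baseline (the 3-to-1 underreporting factor is an assumption consistent with the two headline numbers, not a figure from the article):

                                      multiple_vs_police_reported = 9.0   # the "9x" figure if only police-reported human crashes count
                                      underreporting_factor = 3.0         # assumption: roughly 2 in 3 human crashes never get a police report

                                      # Folding estimated unreported human crashes into the baseline shrinks the multiple
                                      multiple_vs_all_crashes = multiple_vs_police_reported / underreporting_factor
                                      print(multiple_vs_all_crashes)      # -> 3.0, the article's headline figure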

                                • serf 16 minutes ago

                                  >they tapped a bollard

                                    If a human had eyes on every angle of their car and still did that, it would represent a lapse in focus or control -- but humans don't have that advantage in the first place.

                                    With that said: I would be more concerned about what it represents when my sensor-covered autonomous car makes an error like that; it would make me presume there was an error in detection -- a big problem.

                                • fabian2k an hour ago

                                  As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

                                    Still, it's damning that the data is this bad even then. Good data wouldn't tell us anything, but bad data likely means the AI is bad, unless they were spectacularly unlucky. But since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.

                                  • repelsteeltje 32 minutes ago

                                    > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.

                                      I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.

                                    In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.

                                    In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.

                                    • fransje26 an hour ago

                                      > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

                                      Sorry that does not compute.

                                        It tells you exactly whether the AI is any good: despite the safety drivers on board, 9 crashes happened, which implies that even more crashes would have happened without them. Over 500,000 miles, that's pretty bad.

                                        Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention...

                                      • fabian2k an hour ago

                                        I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.

                                        But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.

                                    • artembugara an hour ago

                                          By the standards of the law of large numbers, this isn't a significant distance.

                                      • lvl155 32 minutes ago

                                            I am so tired of people defending Tesla. I wrote Tesla off a long time ago, but what gets me are the people defending their tech. We can all go see the products and experience them.

                                            The tech needs to be at least 100x less error-prone than humans. It cannot be merely on par with the human error rate.

                                        • cbeach 5 minutes ago

                                          We tend to defend companies that push the frontiers of self-driving cars, because the technology has the potential to save lives and make life easier and cheaper for everyone.

                                          As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.

                                              Tesla's approach has been more risk-averse and conservative than others'. It has compiled data and trained its models on billions of miles of real-world telemetry from its own fleet (all of whose vehicles are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and in only two areas.

                                              I defend Tesla's tech because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors' current offerings.

                                          I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable.

                                              Unlike many commenters on this platform I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening its patents and its Supercharger network to competitors' vehicles. Its ethics are sound.

                                          • flanked-evergl 9 minutes ago

                                            Cite?

                                          • viraptor 17 minutes ago

                                            > showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025.

                                            Comparing stats from this many miles to just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it from the monthly numbers varying between 1 and 4.

                                                This is a bad comparison with not enough data. It's like saying my household's average number of teeth per person is ~25% higher than the world average! (Includes one baby.)
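
                                                A quick sketch of how much a count this small wanders on chance alone (the crash count is the article's; treating it as Poisson noise is my simplification):

                                                    import numpy as np

                                                    rng = np.random.default_rng(0)
                                                    observed = 9                     # crashes over ~500k miles, per the article

                                                    # If the true expected count were 9 over this mileage, resampling shows how
                                                    # far the observed count (and any "Nx worse than humans" headline, which
                                                    # scales with it) can drift purely by chance.
                                                    sims = rng.poisson(lam=observed, size=10_000)
                                                    lo, hi = np.percentile(sims, [2.5, 97.5])
                                                    print(f"middle 95% of simulated counts: {lo:.0f} to {hi:.0f}")   # roughly 4 to 15

                                                So the headline multiple can roughly halve or nearly double without anything about the system changing.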

                                            Edit: feel free to actually respond to the claim rather than downvote