7.1 million miles, 3 minor injuries: Waymo’s safety data looks good
Waymo says its cars cause injuries six times less often than human drivers.

  • @theluddite@lemmy.ml
    23 · 11 months ago

    If those same miles had been driven by typical human drivers in the same cities, we would have expected around 13 injury crashes.

    I’m going to set aside my distrust of self-reported safety statistics from tech companies for a sec to say two things:

    First, I don’t think that’s the right comparison. You need to compare them to taxis.

    Second, we need to know how often Waymo’s employees intervene. Per the NYT, Cruise employed 1.5 staff members per car, intervening to assist these not-so-self-driving vehicles every 2.5 to 5 miles, making them actually less autonomous than regular cars.

    Source: https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html?unlocked_article_code=1.7kw.o5Fq.5WLwCg2_ONB9&smid=url-share
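
    Quick back-of-the-envelope on the quoted numbers (3 injury crashes vs. roughly 13 expected over 7.1 million miles); the headline’s “six times” figure presumably comes from Waymo’s own baseline, which isn’t spelled out here:

    ```python
    # Back-of-the-envelope rates implied by the figures quoted above.
    miles = 7.1e6                     # miles from the headline
    waymo_injury_crashes = 3          # reported minor-injury crashes
    expected_human_crashes = 13       # expected for typical human drivers

    waymo_rate = waymo_injury_crashes / miles * 1e6     # per million miles
    human_rate = expected_human_crashes / miles * 1e6   # per million miles

    print(f"Waymo:  {waymo_rate:.2f} injury crashes per million miles")
    print(f"Humans: {human_rate:.2f} injury crashes per million miles")
    print(f"Ratio:  {human_rate / waymo_rate:.1f}x")    # roughly 4.3x
    ```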

    • @n2burns@lemmy.ca
      9 · 11 months ago

      I agree that “autonomous” taxis need to be compared to professional drivers, and I’d even take it further by combining your two points. If they want to say “autonomous” vehicles are currently safer than professional drivers, they need a way to account for how many humans are involved too. I’m sure we could make conventional taxis safer as well if they not only had a driver but also a command centre where drivers are observed and alerted to dangerous situations!

      • @theluddite@lemmy.ml
        7 · 11 months ago

        Yeah, that’s a great point! Taxis also drive different kinds of miles than typical human drivers, who mostly drive at rush hour when it’s more dangerous, whereas I’d expect taxis to log disproportionately more miles during safer times.

    • ZickZack
      5 · 11 months ago (edited)

      First, I don’t think that’s the right comparison. You need to compare them to taxis.

      It’s not just that: you generally have a significant distribution shift when comparing self-drivers/driving assistants to normal human drivers. That’s because people only use self-driving in situations where it has a chance of working, which is especially true with something like Tesla’s self-driving, where people won’t even engage Autopilot when conditions get tricky (never mind intervening dynamically: they won’t start it in the first place!).

      For instance, one of the most common confounding factors is the ratio of highway to non-highway driving: highways are inherently less accident-prone since you don’t have to deal with intersections, oncoming traffic, people merging in from every random house, or children chasing a ball into the street. Self-drivers tend to report a lot more highway mileage than ordinary drivers, because the availability of the technology dictates where you end up measuring. You can correct for that by, e.g., explicitly computing the conditional likelihood p(accident | highway) and reweighting it with a common p(highway) derived from the entire population of car traffic.
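
      To make that concrete, here’s a tiny sketch of the reweighting idea; the rates and mileage shares are made-up placeholders, not real fleet data:

      ```python
      # Minimal sketch of correcting for the highway/non-highway mileage mix.
      # All rates and shares below are invented placeholders, not real data.

      fleet_rate_highway = 1.2e-6   # fleet's p(accident | highway), per mile
      fleet_rate_city    = 4.0e-6   # fleet's p(accident | non-highway), per mile

      fleet_share_highway      = 0.80  # fleet drives mostly highway miles
      population_share_highway = 0.30  # common p(highway) for all car traffic

      # Naive rate: weighted by the fleet's own (skewed) mileage mix.
      naive_rate = (fleet_rate_highway * fleet_share_highway
                    + fleet_rate_city * (1 - fleet_share_highway))

      # Corrected rate: same conditional rates, reweighted by the common
      # population mix, so it is comparable to ordinary drivers.
      corrected_rate = (fleet_rate_highway * population_share_highway
                        + fleet_rate_city * (1 - population_share_highway))

      print(f"naive:     {naive_rate:.2e} accidents per mile")
      print(f"corrected: {corrected_rate:.2e} accidents per mile")
      ```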