There’s a version of this question that’s easy to answer, and a version that’s genuinely complicated. The easy version: compared to human drivers, the data increasingly suggests that fully autonomous vehicles are significantly safer. The complicated version: the technology is still young, public trust is low, and not everything on the road calling itself “self-driving” is what you think it is.

Let’s work through what the numbers actually say.


The Human Driver Problem We’re Comparing Against

Before asking whether driverless cars are safe, it’s worth acknowledging what they’re being compared to. Roughly 94% of car accidents are attributed to human error — distraction, fatigue, impairment, misjudgement. In the US alone, more than 40,000 people die in road accidents every year. San Francisco recorded 42 traffic deaths in 2024, more than its homicide count. The status quo is not remotely safe. It’s just familiar.

That context matters when evaluating autonomous vehicle data, because the question isn’t “are driverless cars perfect?” — it’s “are they safer than the alternative?”


What Waymo’s Data Actually Shows

Waymo is the most transparent company in the space, publishing detailed safety data regularly. Through December 2025, it had driven 170.7 million fully driverless miles — what amounts to more than 150 human driving lifetimes of experience.
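That “driving lifetimes” framing is easy to sanity-check with back-of-envelope arithmetic. The lifetime figure below is an illustrative assumption, not a number from Waymo’s report — one million miles is a commonly used round figure for a lifetime of US driving (roughly 13,500 miles a year over about 75 years):

```python
# Back-of-envelope check on the "more than 150 driving lifetimes" framing.
# LIFETIME_MILES is an illustrative assumption, not a figure from the article.
LIFETIME_MILES = 1_000_000          # ~13,500 miles/year over ~75 years
waymo_miles = 170_700_000           # fully driverless miles cited above

lifetimes = waymo_miles / LIFETIME_MILES
print(f"roughly {lifetimes:.0f} driving lifetimes")  # prints "roughly 171 driving lifetimes"
```

Under those assumptions the total comes out around 170 lifetimes, comfortably clearing the “more than 150” claim.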

A peer-reviewed study published in Traffic Injury Prevention in 2025 analysed 56.7 million rider-only miles and found Waymo vehicles had an 85% lower crash rate for any injury compared to human benchmarks. A separate analysis by Swiss Re — one of the world’s largest reinsurers — found 92% fewer bodily injury claims and 88% fewer property damage claims over 25 million autonomous miles. These aren’t press releases. They’re actuarial data from a firm with no incentive to be generous.

Waymo’s vehicles are now completing over 250,000 fully driverless rides per week across San Francisco, Los Angeles, Phoenix, and Austin, with expansion to 20 more cities — including Tokyo and London — planned for 2026.


The Nuance: Not All “Self-Driving” Is the Same

The most important distinction to understand is the difference between driver assistance systems and actual autonomous driving.

Tesla’s Autopilot and Full Self-Driving — the most widely used systems in consumer vehicles — are classified as SAE Level 2. They assist the driver but require constant human supervision. The driver is legally and practically responsible at all times. When Tesla FSD crashes, it’s being counted under driver assistance data, not autonomous vehicle data. This matters enormously when interpreting accident statistics.

Through November 2025, there have been 5,202 autonomous vehicle incidents reported to the NHTSA — but the vast majority of these involve driver assistance systems in vehicles where a human was expected to be in control. Only a small fraction involve fully driverless systems like Waymo.

Of all reported AV incidents, 7.4% resulted in injury. That sounds concerning until you compare it with the share of human-driven crashes that result in injury, which is considerably higher.


Where the Concerns Are Real

The technology has had genuine failures. A 2024 incident in Phoenix saw a Waymo vehicle hit a utility pole during a low-speed manoeuvre, prompting a voluntary recall of 672 vehicles for a software update. A 2025 Tesla Cybertruck crashed while in FSD mode, triggering an investigation. These incidents are real and shouldn’t be dismissed.

There’s also the broader question of edge cases — the unusual scenarios that aren’t well-represented in training data. A police officer directing traffic around an accident. A road layout that has changed since it was last mapped. A child darting between parked cars in the dark. Autonomous vehicles handle routine driving exceptionally well. The genuinely unpredictable remains harder.

Public trust reflects this uncertainty. A 2024 AAA survey found that 66% of Americans still don’t fully trust autonomous technology. That scepticism isn’t irrational — it’s a reasonable response to a technology that’s impressive in aggregate data but occasionally fails in very visible ways.


The Honest Answer

Fully autonomous vehicles from companies like Waymo are, based on current evidence, statistically safer than human drivers at the types of driving they’ve been designed for — urban road use within defined service areas. The peer-reviewed data is consistent and increasingly robust.

But the technology remains constrained. It works within geofenced zones, in mapped cities, in conditions the system has been trained on. Level 5 autonomy — anywhere, any weather, any road — doesn’t exist commercially yet.

The question isn’t whether driverless cars can be safe. The evidence says they already are, in the right conditions. The question is how long it takes to expand those conditions to cover everywhere we actually need to go.
