Not long ago, self-driving cars felt like something out of a Spielberg film. Today, Waymo robotaxis are giving millions of rides in San Francisco and Phoenix without a human hand on the wheel. Aurora launched its first commercial self-driving truck service in Texas in May 2025. And at CES 2026, Uber announced plans to roll out its autonomous ride service in San Francisco before the end of the year. The technology has quietly crossed from science fiction into the morning commute.
But how does it actually work? What stops a car traveling at 100 km/h from mistaking a plastic bag for a child?
The answer is a combination of several technologies working simultaneously — and the results, while not perfect, are already more reliable than most people realize.
The Eyes: Sensors Everywhere
A self-driving car doesn’t have one set of eyes. It has dozens. The main technologies working together are LiDAR, radar, cameras, and ultrasonic sensors.
LiDAR — Light Detection and Ranging — fires laser pulses in every direction and measures how long they take to bounce back. This builds a real-time 3D map of everything within roughly 200 metres of the vehicle: other cars, cyclists, pedestrians, road barriers, even fallen branches. Waymo’s latest vehicles use multiple LiDAR units that together scan the full 360-degree environment hundreds of times per second.
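The maths behind that measurement is simple time-of-flight: the pulse travels out and back at the speed of light, so range is half the round-trip time multiplied by c. A minimal sketch (the timing figure is illustrative, not taken from any real sensor):

```python
# Time-of-flight ranging: a LiDAR pulse travels to the object and back
# at the speed of light, so range = (speed_of_light * round_trip) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return after ~1.33 microseconds corresponds to an object roughly
# 200 metres away — about the outer edge of the scan described above.
print(round(lidar_range(1.334e-6), 1))
```

Fire that calculation across millions of pulses per second, and the point cloud those distances form is the 3D map the car navigates by.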
Radar fills in what LiDAR can struggle with — detecting object speed and distance in rain, fog, and darkness. Cameras read lane markings, traffic signals, and road signs. Ultrasonic sensors handle the close-range stuff, like nudging into a parking space without scraping the bumper. Rivian’s Gen 2 platform alone runs 11 cameras, 5 radar units, and 12 ultrasonic sensors simultaneously.
Tesla takes a different approach entirely — no LiDAR at all, relying purely on camera vision and neural networks. The debate over which method is superior is very much ongoing in 2026.
The Brain: AI Making Decisions at 100 km/h
Raw sensor data is useless without something to interpret it. That’s where the AI comes in. Onboard computers — NVIDIA’s DRIVE AGX Thor platform delivers 2,000 trillion operations per second — process every sensor feed in real time, constantly asking: what is around me, where is it going, and what should I do about it?
Deep learning models, trained on billions of miles of real-world driving data, allow the car to recognize and predict behavior — a cyclist wobbling toward the white line, a child stepping off a curb, a lorry braking suddenly three cars ahead. The system doesn’t just react; it anticipates.
Path planning algorithms then decide the safest route through whatever the road throws at it — and they recalculate continuously, not just once at the start of a journey.
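That continuous replanning can be sketched as a loop that re-scores a few candidate manoeuvres each tick against the latest predictions and picks the cheapest. Real planners evaluate thousands of trajectories against learned behaviour models; the manoeuvre names and cost numbers below are invented for illustration:

```python
# Toy perceive-predict-plan cycle: every tick, re-score candidate
# manoeuvres against fresh collision-risk predictions and pick the
# cheapest. All costs here are illustrative, not from a real planner.

def plan(predicted_risk: dict[str, float]) -> str:
    """Pick the manoeuvre with the lowest combined cost.

    predicted_risk maps a manoeuvre name to a collision-risk score
    (0 = clear, 1 = certain collision) from the prediction stage.
    """
    comfort_cost = {"keep_lane": 0.0, "change_left": 0.3, "brake_hard": 0.6}
    return min(comfort_cost,
               key=lambda m: predicted_risk.get(m, 0.0) * 10 + comfort_cost[m])

# Tick 1: the road ahead is clear, so staying in lane wins.
print(plan({"keep_lane": 0.0}))                       # keep_lane
# Tick 2: prediction now flags a slowing lorry ahead; the plan changes.
print(plan({"keep_lane": 0.9, "change_left": 0.02}))  # change_left
```

The point of the toy: the decision is never made once. The same scoring runs again on the next tick, so a lorry that starts braking between ticks changes the answer immediately.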
The Map: Knowing Where You Are to Within 20 Centimetres
GPS alone isn’t precise enough for autonomous driving. A margin of error of a few metres means the difference between your lane and the one next to it. Self-driving vehicles use HD mapping systems — like HERE HD Live Map — that are accurate to within 20 centimetres and updated in real time with traffic conditions, road works, and weather changes.
The car constantly cross-references its sensor data with these maps to confirm its exact position on the road and plan accordingly.
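One standard way to do that cross-referencing is inverse-variance weighting: blend the coarse GPS fix with the precise LiDAR-to-map match, trusting each in proportion to how accurate it is. The error figures below are illustrative (GPS off by a few metres, the HD map match by about 0.2 m), and the function is a simplification of what is in practice a full Kalman-style filter:

```python
# Blend a coarse GPS fix with a precise map-matched LiDAR fix, weighting
# each by the inverse of its variance. Error figures are illustrative.

def fuse(gps_pos: float, gps_sigma: float,
         map_pos: float, map_sigma: float) -> float:
    """Inverse-variance weighted estimate of lateral position (metres)."""
    w_gps = 1 / gps_sigma ** 2
    w_map = 1 / map_sigma ** 2
    return (w_gps * gps_pos + w_map * map_pos) / (w_gps + w_map)

# GPS says we're 2.5 m off the lane centre; the LiDAR-to-map match says
# 0.1 m. The fused estimate sits almost entirely on the precise source.
print(round(fuse(2.5, 3.0, 0.1, 0.2), 2))
```

Because the map match is roughly fifteen times more precise than GPS here, it dominates the estimate, which is exactly why a few metres of GPS error no longer means a few metres of lane error.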
The Levels: Where We Actually Stand
The industry uses a scale from Level 0 to Level 5 to describe autonomy. Level 0 is a regular car. Level 5 means no steering wheel needed, ever, in any conditions. As of early 2026, most consumer vehicles sit at Level 2 — the car handles steering, acceleration, and braking, but you must stay alert and ready to take over.
Honda is introducing a Level 3 system in 2026 with its 0 Series. Mercedes-Benz’s Drive Pilot already operates at Level 3 — and will use a turquoise roof light to signal to other drivers that the car is in autonomous mode. Waymo and a handful of robotaxi operators are operating at Level 4 in specific geofenced zones.
Full Level 5 — anywhere, any weather, no driver needed — remains years away. But the gap is closing faster than most people expected.
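The scale above comes from the SAE J3016 standard, and it is compact enough to write down as a lookup table. The one-line summaries below paraphrase the descriptions in this article rather than the formal standard text:

```python
# The SAE J3016 autonomy scale as a simple lookup table. Summaries are
# paraphrased from the article, not quoted from the standard itself.
SAE_LEVELS = {
    0: "No automation: a regular car, the human does everything",
    1: "Driver assistance: steering OR speed, never both at once",
    2: "Partial automation: steering and speed, driver must stay alert",
    3: "Conditional automation: car drives, human takes over on request",
    4: "High automation: no driver needed, within a geofenced zone",
    5: "Full automation: no driver needed anywhere, in any conditions",
}

def describe(level: int) -> str:
    return SAE_LEVELS.get(level, "Unknown level")

print(describe(4))
```

By that table, Waymo's robotaxis are a Level 4 entry, and the Level 2 systems in most consumer cars today are three rows short of the no-steering-wheel future.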
Is It Safe?
A 2024 AAA survey found that 66% of Americans still don’t fully trust autonomous technology. That skepticism isn’t irrational. There have been high-profile incidents, including a Tesla Cybertruck crash in FSD mode in early 2025. But the data tells a more nuanced story. Allianz predicts a 20% reduction in traffic accidents across Europe by 2035 as automated driving becomes more widespread, and over 50% by 2060.
The biggest remaining challenge isn’t the technology itself — it’s the edge cases. A snowy road with no visible lane markings. A construction zone where the rules have temporarily changed. A child chasing a ball between parked cars at night. These are the scenarios that push even the most advanced systems to their limits.
The car of the near future won’t be infallible. But it’s already, in many situations, a better driver than we are.
