A Tesla Cybertruck in Autopilot mode allegedly tried to drive off a Houston overpass. The driver, Justine Saint Amour, is suing Tesla, and the lawsuit doesn't just allege malfunction - it targets Elon Musk's core technical bet: cameras instead of lidar.
This is the argument I've been having with Tesla fans for years. Every serious autonomy program - Waymo, Cruise, Aurora - uses lidar. Tesla uses cameras. Musk calls lidar a "crutch" and insists vision-only systems are not just sufficient but superior.
The lawsuit characterizes Tesla's sensor choice as "cheap video cameras" picked over lidar to cut costs. That's harsh but not entirely wrong. Lidar sensors add thousands of dollars to a vehicle's cost; cameras cost a small fraction of that. The question is whether the savings are worth the safety tradeoff.
Here's the technical reality: lidar directly measures distance by timing laser pulses. It doesn't care about lighting conditions or whether something is painted to look like part of the road (heavy fog and rain do degrade it, though generally less than they degrade cameras). Cameras have to infer depth from 2D images, which is a much harder computer vision problem.
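The difference is easy to see in code. Below is a minimal sketch (all numbers and function names are illustrative, not any vendor's actual pipeline): lidar depth falls directly out of a time-of-flight measurement, while stereo-camera depth is an inference from pixel disparity that collapses whenever pixel matching fails.

```python
# Illustrative sketch: measured depth vs. inferred depth.
# All constants and function names are hypothetical.

C = 299_792_458.0  # speed of light, m/s

def lidar_depth(round_trip_time_s: float) -> float:
    """Lidar: distance is a direct time-of-flight measurement,
    independent of lighting or road markings."""
    return C * round_trip_time_s / 2.0

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo cameras: depth must be inferred from pixel disparity.
    If matching fails (glare, low texture, confusing markings),
    the disparity is wrong and the depth estimate fails with it."""
    if disparity_px <= 0:
        raise ValueError("no reliable pixel match -> no depth estimate")
    return focal_px * baseline_m / disparity_px

# A 200 ns round trip corresponds to roughly 30 m, day or night:
print(round(lidar_depth(200e-9), 1))  # 30.0
```

The failure modes differ in kind: a lidar either gets a return or it doesn't, while a camera system can confidently report a wrong depth when the scene fools the matcher.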
Tesla's argument is that humans drive with eyes (cameras), so AI should be able to do the same. That's technically true but misses a crucial point: humans bring millions of years of evolved visual processing plus a general-purpose brain that reasons about the world. Tesla's neural networks have a few years of fleet training data.
When the system works, it's impressive. Tesla's Autopilot handles highways remarkably well most of the time. But "most of the time" isn't good enough. The failure modes matter.
What apparently happened in Houston: the Cybertruck, in Autopilot mode, didn't recognize an overpass exit as something to avoid. It tried to drive off. The driver had to intervene. That's exactly the kind of failure you'd expect from a vision-only system struggling with road-edge geometry.
Lidar would have immediately known "there's a 30-foot drop here." Cameras have to infer that from visual cues, and if those cues are ambiguous - weird lighting, confusing road markings, construction - the system can fail catastrophically.
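To make that concrete, here is a hedged sketch of how a drop-off shows up in lidar data (the function, threshold, and numbers are hypothetical, not a real perception stack). Ground-scanning beams hit pavement at steadily increasing ranges; where the road ends, the next beam overshoots the edge and the range jumps discontinuously. Detecting that jump is arithmetic, not inference.

```python
# Hypothetical sketch: spotting a drop-off in successive lidar ground returns.
# Threshold and range values are illustrative only.

def drop_ahead(ground_ranges_m: list[float], jump_threshold_m: float = 2.0) -> bool:
    """Consecutive ground-scanning beams should return gradually
    increasing ranges on continuous pavement. A sudden large jump
    means the beam overshot an edge: a direct geometric cue."""
    for near, far in zip(ground_ranges_m, ground_ranges_m[1:]):
        if far - near > jump_threshold_m:
            return True
    return False

# Continuous pavement: ranges grow smoothly.
print(drop_ahead([5.0, 5.6, 6.2, 6.9]))   # False
# Road edge: the beam past the edge returns from far beyond/below.
print(drop_ahead([5.0, 5.6, 6.2, 14.8]))  # True
```

A camera-based system has no equivalent direct signal; it must infer the same edge from texture, shading, and markings, which is exactly what ambiguous lighting or construction can corrupt.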

