When AI gets deployed in an operating room and starts giving bad guidance, there's no undo button. Now lawsuits are emerging that claim an AI-powered surgical navigation system has injured patients instead of helping them - including two who allegedly suffered strokes.
The TruDi Navigation System, developed by Johnson & Johnson and later sold to Integra LifeSciences, was designed to assist ear, nose, and throat specialists in treating chronic sinusitis. Think of it like GPS for surgery - the system uses imaging and sensors to help surgeons navigate complex anatomy and avoid critical structures.
The technology launched in 2021 and, by all accounts, performed well enough. But then came AI software modifications, and according to lawsuits filed in Texas, things went wrong.
Plaintiffs claim that "the product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented." That's not a statement you want to read about surgical equipment.
The alleged injuries are serious. Two patients reportedly suffered strokes following accidental major artery injuries during surgery. Another case involved a surgeon accidentally puncturing a patient's skull base. These aren't minor complications - these are life-altering outcomes.
Integra LifeSciences denies the claims, stating there is "no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries." Reuters noted they were unable to independently verify the lawsuit claims.
But even if these specific allegations turn out to be unfounded, they raise a question the medical device industry needs to answer: what does "good enough" mean when AI is making real-time decisions in surgery?
This isn't like deploying AI to write code or generate images. If an AI code tool makes a mistake, you debug it. If a surgical navigation system makes a mistake during an active procedure, someone might have a stroke.
The approval process for medical devices is supposed to catch these issues. The FDA requires extensive testing, clinical trials, and ongoing monitoring. But AI introduces new failure modes. Traditional medical devices are deterministic - same input, same output, every time. AI systems are probabilistic. They can behave unpredictably on edge cases that weren't represented in the training data, and their behavior can shift whenever the model is updated.
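The deterministic-versus-probabilistic distinction is easiest to see in code. Below is a minimal toy sketch (not the TruDi system's actual logic - all names, thresholds, and the noise model are hypothetical) contrasting a fixed-rule safety check with a simulated learned model whose score fluctuates near its decision boundary:

```python
import random

def deterministic_check(distance_mm: float) -> str:
    """Classic device logic: a fixed rule always gives the same answer."""
    return "WARN" if distance_mm < 2.0 else "OK"

class ProbabilisticNavigator:
    """Toy stand-in for an AI model (hypothetical). Gaussian noise here
    simulates model uncertainty: near the decision boundary, identical
    inputs can produce different outputs across runs."""

    def __init__(self, seed: int):
        self._rng = random.Random(seed)

    def check(self, distance_mm: float) -> str:
        score = distance_mm + self._rng.gauss(0.0, 0.5)
        return "WARN" if score < 2.0 else "OK"

# The fixed rule: same input, same output, every time.
assert all(deterministic_check(2.1) == "OK" for _ in range(100))

# The simulated model: the same borderline input (2.1 mm) yields
# a mix of answers across repeated calls.
nav = ProbabilisticNavigator(seed=42)
results = {nav.check(2.1) for _ in range(100)}
```

The point isn't that real surgical AI injects random noise - it's that a learned model's output near a decision boundary is not guaranteed to be stable the way a hard-coded threshold is, which is exactly where traditional verification assumptions break down.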

