Some 2,400 Kaiser Permanente mental health professionals are on strike in Northern California. But this isn't a typical labor dispute over wages or benefits. They're striking over AI: specifically, over how Kaiser is deploying artificial intelligence in patient care without adequate oversight or therapist input.
This matters because it's not technophobic Luddism. These are licensed professionals with advanced degrees saying "this specific AI implementation is harmful to our patients." That distinction is crucial. We're not talking about workers afraid of losing their jobs to automation. We're talking about clinicians pushing back on AI deployment before it becomes a patient safety crisis.
According to the AP report, Kaiser has been rolling out AI tools for patient triage, treatment recommendations, and scheduling optimization. On paper, these sound reasonable: AI handling routine tasks to free up therapist time for actual patient care. The problem, according to striking workers, is that the AI systems are making clinical decisions without adequate human oversight.
One therapist quoted in the coverage described an AI system that triaged patients as "low risk" for self-harm based on questionnaire responses, overriding clinician judgment. Another mentioned treatment plan recommendations that didn't account for trauma histories or cultural factors that any human therapist would immediately flag. These aren't abstract concerns – these are patient safety issues happening right now.
Kaiser's response has been telling. The company emphasizes that AI tools are "augmenting" rather than "replacing" clinicians, and that human oversight is always involved. But the therapists on strike say that's not their experience on the ground. The AI recommendations carry institutional weight, making them difficult to override without extensive documentation and justification.
Having spent time in Silicon Valley, I've seen this pattern before: a company deploys AI tools to improve efficiency, and the tools make superficially reasonable decisions based on their training data while lacking the contextual understanding that experts bring. The difference is that in mental healthcare, the consequences of algorithmic errors are measured in human suffering rather than just lost revenue.