Grammarly has quietly killed its Expert Review feature, an AI tool that offered editing suggestions "in the style of" famous authors and public intellectuals — without asking any of those authors for permission first.
The feature, which launched earlier this year, let users generate feedback that mimicked the voice of writers like Malcolm Gladwell, Roxane Gay, and various academics. The idea was that an aspiring writer could get notes "as Gladwell would give them," using AI trained on the author's published work to simulate their editorial perspective.
It's exactly as legally and ethically dubious as it sounds.
A class-action lawsuit was filed within weeks, arguing that Grammarly had effectively appropriated the professional identities of working writers and monetized them without consent or compensation. The suit pointed out the obvious: editing in someone's style isn't just mimicry. It's a professional service that real editors charge for. Grammarly was selling a cut-rate simulation, undercutting the very people it was impersonating.
The company has now removed the feature entirely, though its statement was carefully worded to avoid admitting fault. "We've decided to sunset Expert Review to focus on other priorities," a spokesperson said. Translation: our lawyers told us this was indefensible.
This case matters far beyond Grammarly. It's one of the first major legal challenges to the premise that AI companies can hoover up someone's life's work, train a model on it, and sell access to a synthetic version of that person's expertise — all without permission.
The precedent here is crucial. If Grammarly can offer "editing by Roxane Gay" without paying Roxane Gay, then what's to stop someone from offering "therapy by Brené Brown" or "legal advice by Lawrence Lessig"? At what point does training an AI on someone's public work cross the line into identity theft?
