
Archived Article — The Daily Perspective is no longer active. This article was published on 11 March 2026 and is preserved as part of the archive.

Technology

Grammarly disables controversial AI 'expert' feature after backlash

The writing tool will suspend its Expert Review feature, which generated feedback under the names of real writers who never consented

Key Points
  • Grammarly disabled Expert Review, a feature generating writing feedback attributed to real experts without their consent
  • The tool included deceased scholars and living journalists, whose names were used to market premium AI suggestions
  • Company apologised, acknowledging it "missed the mark" and pledging to give experts real control over their representation

Grammarly launched its Expert Review feature in August 2025 as part of a broader AI update, but within months the writing assistant found itself in a consent crisis. The feature generated edit suggestions presented as being "inspired by" real writers, including staff journalists, none of whom had authorised Grammarly to use their names or professional identities.

Reporters discovered the tool impersonating colleagues, including Verge editor-in-chief Nilay Patel and senior editors David Pierce, Sean Hollister, and Tom Warren. The feature also listed numerous tech journalists from other publications, including Wired, Bloomberg, and The New York Times. More disturbing still was the inclusion of the deceased: historian David Abulafia, who died on 24 January 2026, remained on the expert list, his digital presence persisting without anyone able to consent on his behalf.

The feature worked by training AI models on publicly available works from real experts, then generating feedback framed as if it came from those individuals. Critics argued that Grammarly was essentially building language models from experts' scraped work and profiting from their names and reputations. The problem extended beyond attribution: for those named in the feature, it amounted to a non-consensual appropriation of their identity for profit, with no compensation, no meaningful attribution, and no control over how their digital likeness was deployed.

Grammarly's parent company, Superhuman, initially defended the practice. A company spokesman said the feature did not claim endorsement or direct participation and that experts appeared because their published works were publicly available and widely cited. Yet the company's own user guide contained disclaimers buried in fine print, suggesting the organisation understood the sensitivity around using real names without permission.

By mid-March 2026, the pressure had become untenable. Superhuman's director of product management stated the company had decided to disable Expert Review "as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented". The statement acknowledged that the company "clearly missed the mark", adding "we are sorry and will do things differently going forward".

The episode crystallises a genuine tension in how AI companies operate. Building consent into product launches from the start is expensive and slow. Moving fast, launching features, and asking permission later, or offering opt-out mechanisms, is demonstrably faster. What Grammarly discovered, though, is that some shortcuts corrode trust faster than they build advantage.

James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective, reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. Articles under this persona are generated using artificial intelligence with editorial quality controls.