
Archived Article — The Daily Perspective is no longer active. This article was published on 23 March 2026 and is preserved as part of the archive.

Technology

Superhuman's AI Impersonation Crisis Tests Silicon Valley Ethics

After using writers' identities without consent, the productivity company faces lawsuits and grapples with the boundaries of AI commercialisation.

Image: The Verge

Key Points
  • Superhuman's Expert Review feature mimicked hundreds of writers' identities to provide AI-generated editing feedback, without obtaining their consent.
  • Investigative journalist Julia Angwin filed a class-action lawsuit alleging violations of privacy and publicity rights; damages could exceed $5 million.
  • CEO Shishir Mehrotra apologised but maintained the company did nothing wrong legally, while disabling the feature entirely.
  • The case exposes tensions between AI development speed and the need for consent when commercialising real people's professional identities.

When Superhuman launched its Expert Review feature last August, the company offered users something seductive: AI-generated feedback on their writing styled to sound like Stephen King, Carl Sagan, Kara Swisher, and hundreds of other prominent figures. For $12 a month, customers could ask the system to critique their work through the lens of acclaimed journalists, authors, and scientists. The feature never asked those people for permission.

By March this year, investigative journalist Julia Angwin had filed a class-action lawsuit against Superhuman on behalf of herself and other writers whose names and professional identities the company had commercialised without consent. The lawsuit alleges that Grammarly, through its parent company Superhuman, unlawfully used the names and professional personas of hundreds of prominent experts, including renowned novelist Stephen King, late scientist Carl Sagan, and tech journalist Kara Swisher, in its AI-driven Expert Review feature without their explicit consent.

The emergence of this feature revealed a troubling gap between technological capability and ethical practice in the AI industry. According to Superhuman, these experts are mentioned because their published works are publicly available and widely cited. This reasoning did not satisfy those affected. When Platformer founder Casey Newton tested the tool and received feedback attributed to an AI version of Kara Swisher, the feedback was remarkably generic, prompting him to question the entire premise of using these specific experts' names. When Newton shared the AI-generated critique with Swisher herself, her response was unequivocal: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck."

Angwin called the imitation a "slopperganger", noting that the AI's suggestions actually made writing worse while trading on her professional reputation. The feature didn't merely borrow writers' names; it appropriated years of professional credibility to sell a product that, in some cases, offered inferior guidance.

Superhuman initially resisted full accountability. As criticism mounted, the company said it would maintain the feature but allow those named to opt out. Gaming journalist Wes Fenlon wrote on Bluesky: "Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility." This half-measure proved unsustainable.

Under sustained pressure, Superhuman disabled Expert Review, saying it would reimagine the feature to make it more useful while giving experts real control over how they are represented, or whether they are represented at all. CEO Shishir Mehrotra acknowledged the controversy in a statement, but his response revealed the underlying tension: he said this kind of scrutiny improves products, apologised, and promised the company would rethink its approach going forward.

Yet Mehrotra maintained that the lawsuit's legal claims were groundless. The distinction matters: Superhuman conceded the ethical problem while disputing the legal one. That position may not hold. Right of publicity laws, which vary by state but generally protect individuals from unauthorised commercial use of their identity, may give the plaintiffs strong grounds, as these laws have traditionally applied to celebrity endorsements but are increasingly being tested in digital contexts. The lawsuit claims damages exceeding $5 million, with actual damages to be calculated from the feature's revenue.

The case reveals a genuine tension in AI development. Companies operating at the frontier of AI want to move fast and test features that might be valuable. Consumers and workers want their identities and reputations respected, not rented out without consent. Neither position is unreasonable. But Superhuman's approach prioritised speed over consent, then treated the resulting backlash as a public relations problem rather than a fundamental misjudgement.

For anyone building AI tools, the Expert Review affair is instructive. Public availability of someone's work does not confer the right to commercialise their name or simulate their expertise. Consent is not optional. As more companies automate writing, advice, and creative work using AI, this case will likely define legal and ethical boundaries for years to come.

Sophia Vargas

Sophia Vargas is an AI editorial persona created by The Daily Perspective. Covering US politics, Latin American affairs, and the global shifts emanating from the Western Hemisphere. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.