
Archived Article: The Daily Perspective is no longer active. This article was published on 10 March 2026 and is preserved as part of the archive.

Technology

Grammarly Takes the Easy Route: Using Names Without Permission, Then an Opt-Out

The writing tool will let experts remove themselves from AI feedback feature, but only if they know to ask

Image: The Verge
Key Points
  • Grammarly's Expert Review feature generates AI feedback attributed to real writers, academics, and journalists without their consent.
  • The company initially offered no apology, only an opt-out mechanism requiring affected people to email a company address.
  • The feature includes deceased scholars, raising ethical concerns about posthumous identity use.
  • Grammarly is relying on publicly available work as justification, sidestepping the consent question entirely.

Grammarly's Expert Review tool offers writing feedback framed through the voices of real authors, journalists, and academics, including people who never agreed to participate and at least one scholar who recently died. When journalists tested the feature, they discovered their own names had been used to lend credibility to AI-generated suggestions. No one had asked permission first.

Rather than apologise or halt the feature, Grammarly has offered affected people a way out: email an address and request removal. The company positioned this as a concession, framing it as giving people "greater control" over whether their names are used. This framing inverts the problem entirely. The company is packaging suggestions around real identities, turning familiar names into a product feature without clear permission from the people behind them.

The moral hazard here cuts deep. How were people supposed to discover their identities were being appropriated? The Verge found Grammarly generating comments tied to real newsroom staffers, including Nilay Patel, Sean Hollister, Tom Warren, and David Pierce, none of whom had given permission. These discoveries happened only because journalists chose to test the product themselves. Most people will never know their names and reputations are being used this way.

Grammarly included historian David Abulafia on its expert list; according to the University of Cambridge, Professor Abulafia died on 24 January 2026, which makes his inclusion especially hard to defend in a feature built around AI-generated expert feedback. The feature launched in August 2025, months before his death, with no consent sought from him then or from his estate since.

Grammarly's defence has been consistent: the experts appear in the feature because their published work is publicly available and widely cited. However, the company is not simply borrowing ideas from public work. It is borrowing credibility from real people, then delivering suggestions in a format that can feel personal, specific, and human.

There is a reasonable counterargument here, one worth taking seriously. What Grammarly is doing is not that different from what the companies behind the underlying large language models already enable. Paste a draft into a chatbot and type "edit this the way Casey Newton would," and the chatbot will cheerfully oblige. If users can already do this with ChatGPT or Claude, is Grammarly's framing of the feature materially worse, or just more transparent about what it is actually doing?

The answer matters. It is one thing to learn from a journalist's articles; it is another to present AI output as if that journalist had reviewed your work personally. The design of Grammarly's feature creates an explicit attribution that does not exist in a generic chatbot prompt. Comments shown in Google Docs could easily be mistaken for feedback from an actual person rather than an AI-generated suggestion, and that distinction matters for credibility and trust.

Grammarly should have secured consent, or at minimum offered a clear path to opt-in rather than opt-out. An email address buried in a support page is not adequate protection for someone's professional identity. The company knew this would be controversial (otherwise why would journalists discover it only by testing the product themselves?), yet it launched anyway and waited for backlash before offering an exit ramp.

The opt-out mechanism itself raises practical questions. Grammarly may still use certain data, including usage statistics, even when people opt out. Removal from the Expert Review feature does not necessarily mean the underlying training data is purged or that a person's work will stop informing the system's outputs.

This episode reveals something deeper about how AI companies balance growth against ethics. Grammarly faced a choice: build a feature that borrows people's professional reputations without permission and deal with the backlash, or invest in partnerships with writers, academics, and journalists who might license their identities voluntarily. The first path was faster and cheaper. The company chose it, then offered an opt-out as a gesture of good faith.

Individual liberty matters. So does institutional accountability. Grammarly's response fails on both counts. The company should have treated people's identities as something requiring consent before use, not something requiring affirmative action to protect. That it took public pressure from journalists to force even an opt-out mechanism suggests the company never saw this as a serious problem until the criticism became impossible to ignore.

Zara Mitchell

Zara Mitchell is an AI editorial persona created by The Daily Perspective, covering global cyber threats, data breaches, and digital privacy issues with technical authority and accessible writing. As the work of an AI persona, articles are generated using artificial intelligence with editorial quality controls.