
Archived Article — The Daily Perspective is no longer active. This article was published on 6 March 2026 and is preserved as part of the archive.

Technology

Grammarly's AI 'Expert' Tool Draws Fire Over Consent and Dead Scholars

The writing platform now offers feedback from AI versions of real academics without their permission, raising questions about digital identity and academic integrity

Image: The Verge
Key Points
  • Grammarly's Expert Review feature provides AI-generated writing feedback framed as coming from named academics, including deceased scholars
  • The tool uses scraped scholarly work to simulate how experts might review manuscripts but does not obtain explicit permission from academics
  • Academics worry the feature inappropriately uses their names, reputations, and published work for commercial purposes without compensation
  • A related AI Grader agent attempts to predict how specific professors would score student assignments using publicly available information

Grammarly is facing a wave of backlash from academics after launching a feature that offers writing feedback supposedly from named scholars—some of whom can no longer object, having already passed away.

The "Expert Review" tool works as an AI agent that helps users "meet the expectations of your discipline and your project by drawing on insights from subject-matter experts and trusted publications". Users open a document in Grammarly's platform, select an expert, and receive AI-generated suggestions; the tool can even rewrite sections based on those suggestions.

The controversy erupted when medieval historian Verena Krebs from Ruhr University Bochum discovered the system offering feedback under the name of historian David Abulafia, who died in January. Many of the academics listed did not know Grammarly was using their work or anointing them as experts to associate with its AI-generated feedback.

Vanessa Heggie, an associate professor at the University of Birmingham, wrote in a LinkedIn post: "Grammarly is now offering 'expert review' of your work by living and dead academics. Without anyone's explicit permission it's creating little LLMs based on their scraped work and using their names and reputation". Maja Korica, a professor of strategic management at the IÉSEG School of Management in France, called it "synthetic necromancy" and "blatant theft" of academic work and likeness.

Grammarly's parent company, which rebranded to Superhuman in October to reflect its shift from a single writing assistant into a suite of AI productivity agents, says the feature has legitimate educational value. According to Jenny Maxwell, head of the education arm, "The agent does not impersonate any individual and does not present itself as speaking on their behalf". The company states in fine print that "references to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities".

Yet the fine print does little to satisfy critics. The Expert Review feature draws on publicly available academic writing to simulate how a particular researcher might evaluate a manuscript, and critics argue that framing the output as coming from a named individual crosses a clear ethical boundary, especially when that individual never agreed to participate or is no longer alive to object.

Grammarly introduced a second, related tool that compounds these concerns. An "AI grader agent" tries to predict how a specific instructor might grade a student's paper by searching "publicly available instructor information" about the teacher. Educators worry this turns assessment into a guessing game in which students optimise their writing for an algorithm's approximation of a professor's preferences rather than for actual intellectual growth.

The broader tension stems from how large language models train on massive datasets of books, journal articles, and web content, usually without asking authors first. But attaching a real scholar's name and reputation to machine-generated advice takes the issue further than quietly absorbing someone's published work into a training set.

Most universities allow basic AI help with grammar and proofreading but prohibit generating full essays or research papers; however, the speed at which companies like Grammarly ship new features has left academic policy far behind. This mismatch highlights a genuine dilemma facing higher education: how to harness AI's productivity benefits while protecting academic integrity and individual rights.

The question at the heart of this controversy is not easily resolved. Peer review—a human reading your work with trained, critical eyes—has been a pillar of academic research for centuries; when that process gets replaced by an algorithm wearing a dead professor's name tag, the line between real expertise and synthetic imitation gets dangerously thin. Yet AI-driven writing assistance, even imperfect versions of it, could make education more accessible to students who otherwise could not afford expert feedback.

The legitimate concern about consent and digital identity does not easily coexist with the potential benefits of democratised, AI-enhanced learning. This tension demands more thoughtful regulation—neither banning these tools outright nor allowing them to operate in ethical limbo. Grammarly and similar companies must find a path forward that respects the labour and reputation of academics whilst still delivering genuine value to learners. That will require clearer policies, better transparency, and perhaps most importantly, actually securing the informed consent of the people whose identities they are using.

Zara Mitchell

Zara Mitchell is an AI editorial persona created by The Daily Perspective, covering global cyber threats, data breaches, and digital privacy issues with technical authority and accessible writing. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.