
Archived Article — The Daily Perspective is no longer active. This article was published on 27 March 2026 and is preserved as part of the archive.

Technology

Wikipedia draws a line: the case for banning AI-generated articles

The community votes overwhelmingly to block artificial intelligence from writing content, citing risks to accuracy and institutional integrity

Key Points
  • English Wikipedia has banned using large language models to generate or rewrite article content, with 44 votes in favour and only 2 opposed.
  • Two narrow exceptions permit AI for basic copyediting of editors' own work and for initial translations, both requiring human verification.
  • AI-generated text can introduce inaccuracies, fabricated sources, and altered meanings unsupported by cited sources.
  • Volunteer editors have been flooded with poor-quality AI-drafted content; detection remains difficult despite recent efforts.
  • Other Wikipedia language editions operate independently; Spanish Wikipedia has adopted an even stricter ban with no exceptions.

Wikipedia has finally drawn a line in the sand. On 20 March 2026, the English Wikipedia community voted overwhelmingly, 44 in favour and just 2 opposed, to adopt a new policy prohibiting contributors from using large language models to generate or rewrite article content. This is not a symbolic gesture from a tech corporation trying to appear thoughtful. It is an institutional decision made by the thousands of volunteers who maintain the world's largest free encyclopedia.

The fundamental question is whether an online platform built entirely on the premise of community-verified knowledge should accept content that machines produce at speed but humans can verify only slowly, and only by hand. The policy states that text generated by large language models often violates several of Wikipedia's core content policies. Consider what this means in practice. A large language model can go beyond what you ask of it and change the meaning of a text so that it is no longer supported by the sources cited. The technology does not merely assist human writing; it shapes it in ways invisible to the user.

The practical burden tells its own story. Generating AI content takes seconds, but verifying and cleaning it up takes hours, placing a disproportionate burden on Wikipedia's volunteer editor community. This is not an abstract problem. A Princeton University study found that about 5% of 3,000 articles newly created on English Wikipedia in August 2024 had been written using AI. The volunteers tasked with maintaining the encyclopedia have reported being overwhelmed. One editor described being flooded with poor-quality AI-drafted material.

The counter-argument deserves serious consideration: perhaps AI could assist editors if deployed carefully. The policy itself partly acknowledges this. Editors can use large language models to refine their own writing, but only if the copy is checked for accuracy. Editors can also use large language models to assist with language translation, provided they are fluent enough in both languages to catch errors. These carveouts suggest that the community does not reject AI wholesale. Rather, it rejects AI as a content generator and accepts it only as a tool subordinate to human expertise and verification.

One of the most significant problems is that detection itself remains imperfect. Identifying text written by large language models is not an exact science, so Wikipedia's human moderators will inevitably miss some AI-generated material, particularly on pages that receive less frequent moderation. This creates a cascading problem. Inaccurate or hallucinated text enters the encyclopedia, gets scraped by AI companies, and re-enters future model training data. Wikipedia becomes both a victim of misinformation and, indirectly, a vector for spreading it.

The policy did not emerge overnight. Earlier proposals for an all-encompassing community guideline on large language models failed: editors broadly agreed on the goals but objected to specific provisions, criticising them as either too vague or too prescriptive. This detail matters. The final compromise reflected genuine deliberation, not ideological certainty.

It is also worth noting that Wikipedia is not monolithic. Each language edition has its own independent rules and editing community. Spanish Wikipedia currently bans the use of large language models to create new articles from scratch or to expand existing entries, with no carveouts for translation or writing assistance. Other language editions may reach different conclusions.

Strip away the talking points and what remains is this: a volunteer-run platform designed to aggregate human knowledge has voted to keep that knowledge human in origin, verified by human judgment. This is not a rejection of technology. The Wikimedia Foundation is exploring AI tools to help editors spot bias, flag unsourced claims, and catch fabricated references. But using AI to generate the articles themselves crosses a line that the community believes it cannot uncross without compromising the foundation on which Wikipedia's credibility rests.

The policy succeeds because it does not pretend the problem is simple. It allows exceptions where experience shows AI can reliably serve human editors. It requires verification and review. It acknowledges that enforcement will be imperfect. But it establishes a principle: content should originate with people who can be held accountable, not algorithms that generate plausible falsehoods at scale. In an age when artificial intelligence is being forced into every institution, that principle matters.

Daniel Kovac

Daniel Kovac is an AI editorial persona created by The Daily Perspective, providing forensic political analysis with sharp rhetorical questioning and a cross-examination style. As an AI persona, his articles are generated using artificial intelligence with editorial quality controls.