Wikipedia founder Jimmy Wales once described his creation as a “temple of the mind.”
Now, more than two decades on, it’s taken on another role: a bulwark against AI slop.
Late this month, the English version of the online encyclopedia officially banned the use of AI to generate or rewrite articles, after years of piecemeal experimentation and heated internal debate among its volunteer editors, 404 Media reports.
That debate finally came to a vote on March 20, which ended in an overwhelming 40-to-2 decision to place heavy restrictions on how large language models are used to maintain the site.
“Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” the new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”
As the exceptions stipulate, it’s not a wholesale ban on AI: “Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the policy continues. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”
LLMs are also permitted to help translate articles, so long as editors follow the site’s existing rules on LLM-assisted translation, which insist that editors only do so if they are already skilled enough in both languages to confirm the accuracy of the machine translation.
It’s the latest sign of Wikipedia drawing clear lines in the sand against the encroachment of AI models. In January, it signed deals with major AI companies including Amazon, Microsoft, Meta, and Perplexity, a move to recoup the costs it incurred from those companies training their LLMs on its vast corpus for free, which placed an expensive strain on its servers.
All the while, its editors have long battled over what role AI should play on the site. Over a year ago, a group of them banded together to eradicate shoddy AI content from the platform. And when the Wikimedia Foundation, the nonprofit that owns the site, deployed AI-generated summaries at the top of articles, the community rebelled until the experiment was discontinued.
Wikipedia editor Ilyas Lebleu, who proposed the latest guideline, told 404 Media that a policy restricting AI once seemed unlikely to hold. But lately, the “mood was shifting, with holdouts of cautious optimism turning to genuine worry.”
“In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed,” Lebleu added.
Perhaps the former holdouts were visited by a Ghost of Wikipedia Yet to Come in the form of the AI-addled “Grokipedia,” Elon Musk’s anti-woke alternative to Wikipedia that’s written and edited by his chatbot Grok, whose contributions to posterity have included glazing the Cybertruck and uncritically citing neo-Nazi websites.
More from the slop trough: Study: New York Times Has Published Extensive AI-Generated Articles