Wikipedia Wanted to Follow Google’s Lead With AI Systems. However, Editors Have Said, ‘Enough Is Enough’

The Wikimedia Foundation suspended AI-generated summaries after editors protested against automation.


Javier Lacort

Senior Writer
  • Adapted by:

  • Karen Alfaro


The Wikimedia Foundation paused an experiment displaying AI-generated summaries at the top of articles after a flood of criticism from its editors.

Why it matters. Wikipedia remains one of the last great bastions of human-generated content on the Internet. It stands against the wave of automation that has degraded other platforms. Its democratic governance model just put the brakes on a significant technological advance.

What happened. The “Simple Article Summaries” experiment aimed to make complex articles more accessible by providing automatic summaries marked as “unverified.” These summaries came from a Cohere model called Aya.

Editors responded with comments such as “very bad idea,” “my strongest rejection,” and simply “yuck.”

The background. OpenAI continues to pursue its goal of becoming the next Google, and Google itself has embraced generative AI, even incorporating it into its search engine. In contrast, Wikipedia has maintained article quality by committing to human editors.

Its editors actively filter out AI-generated content, keeping the platform a reliable source of information. You know there won’t be any slop.

Between the lines. These protests reflect something deeper than resistance to synthetic content.

  • Wikipedia must evolve to attract new generations.
  • However, its editors fear that AI could destroy decades of collaborative work.

“No other community has mastered collaboration to such a wondrous extent, and this would throw that away,” one editor said, as quoted by 404 Media.

Yes, but. The Foundation hasn’t ruled out AI entirely—at least not yet. It has promised that future features will require “editor participation” and “human moderation workflows.” That sounds like a tactical pause.

This experiment also emerged from discussions at Wikimania 2024, where some editors recognized the potential of the format.

In short. The question now is whether Wikipedia can maintain its enormous historical relevance—already eroded since ChatGPT entered our lives—without sacrificing the human judgment that sets it apart.

The answer will determine whether Wikipedia remains a reasonably reliable oasis of knowledge or just another space in the automated noise of the internet.

Image | Luke Chesser (Unsplash)

Related | Google Has a Tool for Tagging and Detecting AI-Generated Text. It’s a Nice Concept, but There’s Still a Problem With It
