Wikipedia has moved to restrict the use of artificial intelligence (AI) in writing and editing its articles, permitting it only for basic edits and translations. The platform says AI-generated content often fails to meet its standards of accuracy and reliability, raising concerns about misinformation. The decision comes as AI tools are increasingly being used to create online content.
For over two decades, Wikipedia has remained one of the world’s most widely used sources of information, built by volunteers who rely on verifiable and trusted sources. Its open model has allowed millions to contribute, but always under strict rules of accuracy and neutrality.
Now, with the rapid rise of AI tools such as chatbots and automated writing systems, Wikipedia is taking a cautious step. Under its updated editing guidelines, contributors are not allowed to use AI to write or rewrite article content. The platform argues that such tools can produce text that sounds convincing but does not always meet factual standards.
Exceptions to the Ban
The policy does leave room for limited use, however. Editors may use AI for minor tasks, such as correcting spelling mistakes or adjusting formatting, but only after human review. Even in these cases, Wikipedia warns that AI can unintentionally alter the meaning of a sentence.
Another exception applies to translations. AI tools may assist in translating articles from one language to another, but only if the person using them is fluent in both languages. The responsibility for accuracy still lies with the human editor.
The move reflects growing concerns about the reliability of AI-generated content. While these tools can produce text quickly, they are also known to make errors or present unverified claims as facts. For a platform like Wikipedia, where credibility is central, such risks are difficult to ignore.