By Manik Aftab | 2 months ago | 2 min read
Study Reveals AI Assistants Frequently Misrepresent News Content

Nearly half of all responses generated by leading AI assistants misrepresent news content, according to a new study published on Wednesday by the European Broadcasting Union (EBU) and the BBC.

The international research analyzed 3,000 responses from leading artificial intelligence assistants, tools that process natural-language queries to complete user tasks. It evaluated their performance across 14 languages, focusing on accuracy, sourcing, and the ability to distinguish fact from opinion. The study covered ChatGPT, Copilot, Gemini, and Perplexity.

The results showed that 45% of the AI assistants’ responses contained at least one major issue, while 81% had some form of problem. Gemini, Google’s AI assistant, has previously stated that it welcomes user feedback to help improve accuracy and user experience. Meanwhile, OpenAI and Microsoft have acknowledged that hallucinations, or the generation of misleading information, remain a key challenge they are working to fix.

Perplexity claims that its “Deep Research” mode achieves 93.9% factual accuracy, according to information published on its website.

Study Finds Major Sourcing Errors in AI-Generated Responses

The study found that a third of all AI-generated responses contained serious sourcing errors, including missing, misleading, or incorrect attributions. Around 72% of Gemini's responses showed significant sourcing issues, compared with fewer than 25% for the other assistants.

Additionally, 20% of the analyzed responses contained factual inaccuracies, such as outdated information. The report cited examples including Gemini incorrectly describing a change in vape laws and ChatGPT referring to Pope Francis as the current pope months after his death.

The study involved 22 public-service media organizations from 18 countries, including France, Germany, Spain, Ukraine, the United Kingdom, and the United States.

The EBU warned that as AI assistants increasingly replace traditional search engines for news, public trust could be at risk.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” EBU Media Director Jean Philip De Tender said in a statement.

According to the Reuters Institute's Digital News Report 2025, 7% of online news consumers, and 15% of users under 25, rely on AI assistants to access news. The EBU-BBC report urged that AI developers be held accountable and that they strengthen how their platforms handle news-related queries.

RELATED: How Generative Intelligence Is Redrawing the Landscape of Work