According to the research, 45 percent of responses to questions about the news had at least one ‘significant’ issue.
Published On 22 Oct 2025
AI models such as ChatGPT routinely misrepresent news events, providing faulty responses to questions almost half the time, a study has found.
The study, published on Wednesday by the European Broadcasting Union (EBU) and the BBC, assessed the accuracy of more than 2,700 responses given by OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Perplexity.
Twenty-two public media outlets, representing 18 countries and 14 languages, posed a common set of questions to the AI assistants between late May and early June for the study.
Overall, 45 percent of responses had at least one “significant” issue, according to the research.
Sourcing was the most common problem: 31 percent of responses contained information unsupported by the cited source, incorrect attribution, or unverifiable attribution, among other issues.
Inaccuracy was the next biggest contributor to faulty answers, affecting 20 percent of responses, followed by a lack of appropriate context, at 14 percent.
Gemini performed worst, with significant issues in 76 percent of its responses, largely due to sourcing problems, according to the study.
All the AI models studied made basic factual errors, according to the research.
The cited errors include Perplexity claiming that surrogacy is illegal in the Czech Republic and ChatGPT naming Pope Francis as the sitting pontiff months after his death.
OpenAI, Google, Microsoft and Perplexity did not immediately respond to requests for comment.
In a foreword to the report, Jean Philip De Tender, the EBU’s deputy director general, and Pete Archer, the head of AI at the BBC, called on tech firms to do more to reduce errors in their products.
“They have not prioritised this issue and must do so now,” De Tender and Archer said.
“They also need to be transparent by regularly publishing their results by language and market.”
Jonathan Hendrickx, an assistant professor in media studies at the University of Copenhagen, said the findings highlighted the need to foster media literacy among news consumers from a young age.
“The rise of dis- and misinformation and AI-generated content more than ever blurs the boundaries of what is real or not,” Hendrickx told Al Jazeera.
“This poses a major issue for media practitioners, regulators and educators that needs urgent care and attention.”