News-related queries aren't AI's game anymore (Pic: EdexLive Desk)

AI assistants stumble on news accuracy, European study finds

The report, which analysed 3,000 responses, found that 45 per cent of answers from AI assistants had "at least one significant issue"

EdexLive Desk

A comprehensive study by the European Broadcasting Union (EBU), released on Wednesday, October 22, 2025, revealed that AI assistants like ChatGPT make errors in about half of their responses to news-related queries, reported AFP.

The report, which analysed 3,000 responses, found that 45 per cent of answers from AI assistants had "at least one significant issue," regardless of the language or country of origin. One in five responses contained "major accuracy issues, including hallucinated details and outdated information," highlighting the unreliability of these tools for news consumption.

Types of errors: From parody to fabrication

The study identified common errors, including mistaking parody for factual news, providing incorrect dates, and fabricating events entirely. For instance, when asked about a satirical claim regarding Elon Musk’s alleged Nazi salute at Donald Trump’s inauguration, Gemini misinterpreted a comedian’s column, stating that the billionaire had "an erection in his right arm." Such mistakes underscore the AI assistants’ struggles to distinguish credible sources from satire or misinformation.

Performance of AI assistants

The EBU examined four popular AI assistants: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity. Google’s Gemini performed the worst, with significant issues in 76 per cent of its responses, "more than double the other assistants, largely due to its poor sourcing performance." The study, conducted between late May and early June by 22 public media outlets from 18 mostly European countries, highlighted outdated information as a prevalent issue across all platforms.

Notable inaccuracies in responses

Specific errors included outdated or incorrect information about prominent figures. For example, when asked "Who is the Pope?", ChatGPT, Copilot, and Gemini all told Finnish and Dutch broadcasters (Yle, NOS, and NPO) that it was "Francis," even though Pope Francis had by then been succeeded by Leo XIV. Such inaccuracies reflect the assistants' reliance on stale data and their failure to verify current information.

Concerns over reliability and growing usage

The report raises concerns about the growing use of AI assistants for news, particularly among younger audiences. A June report by the Reuters Institute noted that 15 per cent of people under 25 use AI assistants weekly for news summaries. However, Jean Philip De Tender, deputy director general at the EBU, and Pete Archer, head of AI at the BBC, warned that "AI assistants are still not a reliable way to access and consume news," urging caution even as these tools gain popularity.
