Technology reporter

Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC.
The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website, then asked them questions about the news.
It said the resulting answers contained "significant inaccuracies" and distortions.
In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought "endless opportunities" but the companies developing the tools were "playing with fire".
"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?", she asked.
An OpenAI spokesperson said: "We support publishers and creators by helping 300 million weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution."
The other tech companies which own the chatbots have been approached for comment.
'Pull back'
In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.
It got journalists who were relevant experts in the subject of the article to rate the quality of the answers from the AI assistants.
It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.
Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.
In her blog, Ms Turness said the BBC was seeking to "open up a new conversation with AI tech providers" so we can "work together in partnership to find solutions".
She called on the tech companies to "pull back" their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.
Some examples of inaccuracies found by the BBC included:
- Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
- ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
- Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"
In general, Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity, which counts Jeff Bezos as one of its investors.
The BBC usually blocks its content from AI chatbots, but it opened its website up during the tests in December 2024.
The report said that as well as containing factual inaccuracies, the chatbots "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context".
The BBC's Programme Director for Generative AI, Pete Archer, said publishers "should have control over whether and how their content is used and AI companies should show how assistants process news including the scale and scope of errors and inaccuracies they produce".
An OpenAI spokesperson told BBC News: "We've collaborated with partners to improve in-line citation accuracy and respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in their robots.txt. We'll keep enhancing search results."
Robots.txt is a file on a website which tells automated bots, such as search crawlers, which pages they are asked not to access or use.
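As a rough illustration of how a publisher might manage this, a minimal robots.txt entry asking OpenAI's search crawler, OAI-SearchBot, to stay away from every page on a site could look like the sketch below (the rules a real publisher applies will vary).
User-agent: OAI-SearchBot
Disallow: /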