I recently came across a study that caught my attention: AI assistants misrepresent news content more often than you might think. According to the research, a whopping 45% of AI-generated answers had at least one significant issue, ranging from sourcing problems to outright inaccuracies.
The study found that 31% of responses had serious sourcing issues, such as missing or misleading attributions. Meanwhile, 20% contained major accuracy issues, including ‘hallucinated’ details and outdated information. It’s concerning to think that we might be getting incorrect or incomplete information from the AI assistants we rely on.
What’s even more interesting is that performance varied widely across AI assistants. Gemini, for example, performed the worst, with significant issues in 76% of its responses.
The study’s findings are a good reminder to fact-check and verify the information we get from AI assistants. While they can be incredibly helpful, it’s clear that they’re not perfect.
If you’re curious about the study, you can find the full report on the BBC’s website. The executive summary and recommendations are a quick and easy read, even if the full report is a bit of a slog.
So, what do you think? Have you ever caught an AI assistant in a mistake? How do you think we can improve their accuracy and reliability?

