Deborah Turness - AI Distortion is new threat to trusted information

Deborah Turness

CEO, BBC News and Current Affairs
Published: 12:01 am, 11 February 2025
Updated: 11:09 am, 11 February 2025

Disinformation. By now we are all aware of its polarising effects and real-world consequences.

But how many of us are aware of the new and growing threat to trusted information that’s emerging from Generative AI’s explosion on the scene?

I’m talking about ‘distortion’. Distortion is what happens when an AI assistant ‘scrapes’ information to respond to a question and serves up an answer that’s factually incorrect, misleading, and potentially dangerous.

Don’t get me wrong - AI is the future and brings endless opportunities. Here at BBC News we are already forging ahead with AI tools that will help us deliver more trusted journalism to more consumers in more formats – and on platforms where they need it. And we are in discussions with tech companies around new AI applications that could further enhance and improve our output.

But the price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact. In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion.

It’s not hard to see how quickly AI’s distortion could undermine people’s already fragile faith in facts and verified information.

We live in troubled times. How long will it be before an AI-distorted headline causes significant real-world harm?

The companies developing Gen AI tools are playing with fire.

And that’s why we at the BBC want to open up a new conversation with AI tech providers and other leading news brands so we can work together in partnership to find solutions.

But before this conversation can begin, we needed to find out the scale of the problem with the distortion of news. In the absence of any current research we could find, we made a start - and we hope that regulators who oversee the online space will consider further work in this area.

Today we are making public that research, which shows how distortion is affecting the current generation of AI Assistants.

Our researchers tested market-leading consumer AI tools – ChatGPT, Perplexity, Microsoft Copilot and Google Gemini – by giving them access to the BBC News website and asking them to answer one hundred basic questions about the news, prompting them to use BBC News articles as sources.
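For the technically minded, here is a minimal sketch of what a single round of such a test might look like. It is an illustration only, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt wording and example question are placeholders of mine, not the methodology our researchers actually used.

# Minimal sketch: pose one news question to an AI assistant and
# prompt it to use BBC News articles as sources. Illustrative only;
# model name, prompts and question are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Who is the current UK Prime Minister?"  # example question

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer using BBC News articles as sources where "
                    "possible, and quote them verbatim when you do."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)

# Reviewers would then compare the answer - and any quotations it
# attributes to BBC articles - against the cited pages, flagging
# factual errors, altered quotes and editorialised wording.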

The results? The team found ‘significant issues’ with just over half of the answers generated by the assistants.

The AI assistants introduced clear factual errors into around a fifth of answers they said had come from BBC material.

And where AI assistants included ‘quotations’ from BBC articles, more than one in ten had either been altered or didn’t exist in the article.

Part of the problem appears to be that AI assistants do not distinguish between fact and opinion in news coverage; do not make a distinction between current and archive material; and tend to inject opinions into their answers.

The results they deliver can be a confused cocktail of all of these – a world away from the verified facts and clarity that we know consumers crave and deserve.

The full research is published on the BBC website but I’ll share a couple of examples to illustrate the point.

A Perplexity response on the escalation of conflict in the Middle East, giving the BBC as its source, said Iran initially showed ‘restraint’ and described Israel’s actions as ‘aggressive’ – yet those descriptions hadn’t been used in the BBC’s impartial reporting.

In December 2024, ChatGPT told us that Rishi Sunak was still in office; Copilot made a similar error, saying Nicola Sturgeon was. Neither was.

Gemini misrepresented NHS advice about vaping.

Of course, AI assistants will often include disclaimers about the accuracy of their results, but there is clearly a problem here. Because when it comes to news, we all deserve accurate information we can trust - not a confusing mash-up presented as fact.

At least one of the big tech companies is taking this problem seriously.

Last month Apple pressed ‘pause’ on their AI feature that summarises news notifications, after BBC News alerted them to serious issues. The Apple Intelligence feature had hallucinated and distorted BBC News alerts to create wildly inaccurate headlines, alongside the BBC News logo.

Where the BBC News alert said LA officials had ‘arrested looters’ during the city’s wildfires, Apple’s AI-generated summary said it was LA officials themselves who had been arrested for looting.

There were many more examples, but Apple’s bold and responsible decision to pull back their AI summaries feature for news alerts shows they recognise the high stakes of distorted news and information.

And if Generative AI technology is not yet ready to scrape and serve news without distorting and contorting the facts, isn’t it in everyone’s interest to do as Apple has done?

We’d like other tech companies to hear our concerns, just as Apple did. It’s time for the news industry and tech companies to work together – and of course government has a big role to play here too.

There is a wider conversation to be had around regulation to ensure that in this new version of our online world, consumers can still find clarity through accurate news and information from sources they know they can trust.

Earning trust has never been more critical. As CEO of BBC News, it is my number one priority.

And this new phenomenon of distortion – an unwelcome sibling to disinformation – threatens to undermine people’s ability to trust any information whatsoever.

So I’ll end with a question: how can we work urgently together to ensure that this nascent technology is designed to help people find trusted information, rather than add to the chaos and confusion?

We at the BBC are ready to host the conversation.
