
Deepfake detection for journalism: How we’re tackling manipulated media

We’re developing in-house tools to detect manipulated media and support trustworthy journalism.

Woody Bayliss

Senior data scientist
Published: 5 November 2025

In an era where artificial intelligence is redefining how we see and understand the world, the BBC Research & Development computer vision team is leading the charge. From pioneering deepfake detection to agentic AI, our mission is to ensure that audiences can continue to trust what they see, hear, and share.

The rise of generative AI has unlocked extraordinary creative possibilities, but it has also made convincing manipulation easier than ever. Hyper-realistic 'deepfakes' can manipulate voices and images, blurring the line between reality and fabrication. For the BBC - a global broadcaster whose reputation is built on accuracy and integrity - this challenge needs urgent attention.

That’s why the BBC Research & Development computer vision team is pioneering AI-driven verification tools that help journalists, editors, and content producers authenticate media in real time. We’re building models that don’t just aim to detect manipulated content but to explain it, equipping journalists with the confidence to verify before they publish.

Every line of code we write, every dataset we build, and every prototype we release contributes to one goal: safeguarding trust in the digital age.

Deepfake detection

Our flagship deepfake detection pilot is setting new standards for how AI can serve journalism. We’re developing prototype tools that identify AI-generated or manipulated images and videos, giving journalists the ability to spot fabrications that might otherwise go unnoticed.

Unlike many third-party services, we build our models in-house. This gives us full transparency and control over data, algorithms, and outputs - a crucial advantage when working with sensitive or confidential content. It also lets us customise features for editorial needs - like explainability and trust indicators - and build them straight into newsroom workflows.
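As a rough illustration of what such editorial trust indicators might look like inside a workflow, here is a minimal Python sketch. The field names, labels, and thresholds are our own invented assumptions, not the actual BBC tooling:

```python
from dataclasses import dataclass, field

# Illustrative only: a hypothetical shape for a detector result carrying the
# editorial signals mentioned above (score, verdict, explanation, highlights).
# None of these names come from the real BBC prototype.
@dataclass
class DetectionResult:
    asset_id: str              # newsroom identifier for the image
    manipulation_score: float  # 0.0 (likely authentic) .. 1.0 (likely manipulated)
    verdict: str               # e.g. "likely AI-generated", "no manipulation found"
    explanation: str           # human-readable rationale shown to journalists
    regions_of_interest: list = field(default_factory=list)  # areas to highlight

def to_trust_indicator(result: DetectionResult) -> str:
    """Map a raw score onto a coarse label a journalist can act on.
    The 0.4/0.8 cut-offs are arbitrary placeholders."""
    if result.manipulation_score >= 0.8:
        return "High risk - verify before publishing"
    if result.manipulation_score >= 0.4:
        return "Uncertain - seek a second source"
    return "Low risk - no manipulation detected"
```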

An image from BBC Verify, showing a scene of a child in a person's arms, being carried in or out of a car. A big circle surrounds the child, and an 'AI-generated' warning message appears underneath it.

We collaborate closely with BBC Verify, the organisation’s team dedicated to fact-checking and authenticity verification. Selected Verify journalists have been testing our prototype, helping us refine the tool with real-world feedback. We’re also working with BBC Studios to evaluate how our detector can flag AI-generated content submitted by users before the BBC shares or highlights it to audiences of our programmes or to our followers on social media.

And our innovation doesn’t stop at the newsroom. Our deepfake detection work has caught the eye of Weather Watchers, a BBC community platform where users submit photos of local weather. By combining our AI detector with content provenance standards like C2PA, we can help editors instantly verify the authenticity of user-generated imagery, ensuring accuracy at every level of the BBC.
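To make that combination concrete, here is a minimal Python sketch of how a detector score and a C2PA provenance check might be used together to triage a user submission. The `detector` and `read_manifest` callables are hypothetical stand-ins, not real BBC or C2PA SDK functions, and the 0.4 threshold is an arbitrary placeholder:

```python
def triage_submission(image_path: str, detector, read_manifest) -> str:
    """Combine provenance and detection into a single editorial decision.

    detector: callable returning a 0..1 manipulation score for the image.
    read_manifest: callable returning a cryptographically validated C2PA
    manifest as a dict, or None if the image carries no content credentials.
    """
    manifest = read_manifest(image_path)
    score = detector(image_path)

    if manifest is not None and manifest.get("signature_valid"):
        # Valid provenance plus a low detector score: safe to fast-track.
        return "verified" if score < 0.4 else "review"

    # No (or invalid) credentials: rely entirely on the detector.
    return "accept" if score < 0.4 else "review"
```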

The science behind the trust

We ground our approach to deepfake detection in rigorous science and collaboration. We’ve built the largest proprietary dataset of its kind, containing over one million examples of partially manipulated images. This data underpins our in-house models, which use foundation models to detect manipulation.
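As a sketch of that general recipe - fine-tuning a pretrained vision foundation model as a binary manipulated/authentic classifier - the following Python uses `timm` and PyTorch. The backbone choice and hyperparameters are illustrative assumptions; the BBC's actual models and training setup are not public:

```python
import timm
import torch
from torch import nn

# Take a pretrained vision transformer and repurpose its head for a
# two-class problem: authentic (0) vs manipulated (1).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step.
    images: (B, 3, 224, 224) normalised batch; labels: 0=authentic, 1=manipulated."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```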

We’ve also conducted extensive benchmarking of existing detection tools to assess the performance, bias, and reliability of leading commercial systems. This commitment to transparency ensures that our technology is not just effective but also trustworthy and accountable.
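The sort of benchmarking described above might look like the following sketch: score each tool on a labelled test set, overall and per subgroup, to surface performance gaps and potential bias. The grouping variable and column names are illustrative, and the per-group AUC assumes each subgroup contains both classes:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def benchmark(y_true: np.ndarray, y_score: np.ndarray, groups: np.ndarray) -> dict:
    """y_true: 1 = manipulated, 0 = authentic; y_score: a tool's 0..1 score;
    groups: a subgroup label per image, e.g. camera type or image source."""
    report = {
        "overall_auc": roc_auc_score(y_true, y_score),
        "overall_acc": accuracy_score(y_true, y_score >= 0.5),
    }
    # Per-subgroup AUC exposes tools that only work well on some content.
    for g in np.unique(groups):
        mask = groups == g
        report[f"auc[{g}]"] = roc_auc_score(y_true[mask], y_score[mask])
    return report
```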

NeurIPS 2025: Recognition on the global stage

Our research isn’t just making an impact within the BBC. It’s earning recognition from the global AI community. In collaboration with our academic partners, we had a poster presenting our RADAR method accepted at NeurIPS 2025, one of the world’s most prestigious AI conferences.

The work introduces RADAR, a groundbreaking method that leverages multi-modal signals to detect inpainted or diffusion-edited regions with unprecedented accuracy. This achievement highlights how industry-academic collaboration can drive meaningful progress in AI safety and media authenticity.
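The paper has the details of RADAR itself. Purely as a generic illustration of the broader idea - combining complementary signals to localise edited regions - the following PyTorch sketch pairs an RGB stream with a fixed high-pass "noise residual" stream and predicts a per-pixel manipulation mask. It is our own toy construction, not the RADAR architecture:

```python
import torch
from torch import nn

class TwoStreamLocaliser(nn.Module):
    """Toy two-signal localiser: inpainting tends to disturb the sensor and
    compression noise patterns that a high-pass filter exposes."""
    def __init__(self):
        super().__init__()
        # Fixed Laplacian-style high-pass filter applied per channel.
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]]) / 4.0
        self.highpass = nn.Conv2d(3, 3, 3, padding=1, bias=False, groups=3)
        self.highpass.weight.data = hp.repeat(3, 1, 1, 1)
        self.highpass.weight.requires_grad = False
        self.rgb_stream = nn.Conv2d(3, 16, 3, padding=1)
        self.noise_stream = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel manipulation logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.rgb_stream(x), self.noise_stream(self.highpass(x))], dim=1
        )
        return self.head(torch.relu(feats))  # (B, 1, H, W) mask logits
```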

For our team, NeurIPS isn’t just a milestone; it’s a message. It shows that BBC R&D’s AI research can stand shoulder to shoulder with the best in the world, and that the skills and creativity of our researchers are shaping international conversations about the responsible use of AI.

Illustration showing the words BBC R&D AI Research on lime green and yellow background.

The next challenge

While our image detection tools are now being used across multiple BBC divisions, our next frontier is video deepfake detection - a rapidly emerging area with immense social impact.

Conversations with BBC Verify journalists revealed a pressing need for technology capable of verifying not just still images but moving footage. Our early experiments are already producing promising results. We’re now adapting our proprietary model to handle both image and video detection, exploring how temporal information can enhance overall performance.
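One simple way temporal information could help - stated here as an assumption about the general approach, not the BBC's actual video model - is to run the image detector per frame and let a small recurrent model aggregate the sequence, rather than averaging frames independently:

```python
import torch
from torch import nn

class TemporalAggregator(nn.Module):
    """Aggregate per-frame embeddings into one video-level decision."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        """frame_feats: (B, T, feat_dim) per-frame embeddings from the image model."""
        _, h = self.gru(frame_feats)    # h: (1, B, 64) final hidden state
        return self.head(h.squeeze(0))  # (B, 1) video-level manipulation logit
```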

As part of this evolution, we’re building a bespoke video test dataset designed to ensure fair evaluation against existing commercial models. Many open-source datasets have already been absorbed into commercial training pipelines, which would inflate those models’ scores on any benchmark built from them, so we’re taking a more careful and transparent approach.
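One illustrative way to guard against that kind of leakage - a sketch using the `imagehash` library, with an arbitrary threshold - is to screen candidate test frames against perceptual hashes of known public datasets and drop near-duplicates:

```python
import imagehash
from PIL import Image

def is_leaked(frame_path: str, public_hashes: set, max_distance: int = 4) -> bool:
    """Return True if a candidate test frame is a near-duplicate of any frame
    from a known public dataset (i.e. likely seen by commercial models).
    public_hashes: precomputed imagehash.phash values for public-dataset frames."""
    h = imagehash.phash(Image.open(frame_path))
    # Subtracting two hashes gives their Hamming distance.
    return any(h - ph <= max_distance for ph in public_hashes)
```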

Collaboration

Our work thrives on collaboration. We maintain close relationships with several leading universities, as well as with a range of external organisations.

These relationships keep us at the cutting edge of AI research, while grounding our work in real-world applications. They put our models through rigorous testing and peer review to ensure that they meet the BBC’s standards of accuracy and integrity.

The road ahead

At BBC R&D, we don’t just build technology; we help redefine how the world tells truth from fiction.

We’re a team of researchers and creative technologists working together to use AI responsibly and for the public good. Our work sits at the fast-moving intersection of technology and journalism, and the challenges involved are complex and constantly evolving.

As AI continues to evolve, so will we. In the coming months, we’ll be rolling out new versions of our detection tool prototypes, planning integration across BBC platforms, and expanding our research into cross-modal detection where audio, image, and video insights come together for even more reliable verification.

We’ll continue to share our findings, publish at leading conferences, and engage the public in conversations about AI transparency, because trust in media is not just a BBC priority; it’s something the world depends on.
