
On our RADAR: Our new approach to identifying AI-manipulated content

Our research into tools that can detect AI-manipulated images for safer, more reliable reporting.

Published: 5 November 2025
  • Woody Bayliss, Senior Data Scientist
  • Marc Górriz Blanch, Senior Data Scientist
  • Juil Sock, Senior Principal Data Scientist

Recently, our world has become inundated with Artificial Intelligence Generated Content (AIGC), a broad term for digital content produced by AI. You may have used this technology yourself to create images, songs, videos, or even simple things like emails and text. Easy-to-use platforms from OpenAI, Microsoft, and many others make this possible for people who aren't technically minded.

AIGC is increasingly convincing thanks to larger, higher-fidelity artificial intelligence (AI) models that are much better at following human instruction. As a result, this content is becoming ubiquitous and is often consumed without the viewer knowing that the material is computer-generated or 'fake'.

BBC Research & Development has taken steps to reduce the risk of such content by developing an AIGC detection method for images. RADAR is a joint research project undertaken by BBC R&D and the University of Oxford. NeurIPS has accepted our research, and it will be on show from 2-7 December.

An illustration by Rens Dimmendaal: a photograph of a man walking down a street of brightly coloured terraced houses (right), alongside early and late stages (left and middle) of reconstructing it from randomly mutated, semi-transparent coloured polygons.
Rens Dimmendaal / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

AIGC has opened many doors for creators and academics alike; indeed, we have been developing in-house tools using facial deepfake technology to anonymise the identity of contributors for documentaries like Matched with a Predator. This approach allows viewers to see and feel the emotions and facial expressions of at-risk interviewees during crucial parts of the story whilst protecting their identities.

Though these tools have the power to do good, they can also open the door to disinformation, targeted scams, and information distortion.

We are building systems and performing research to tackle disinformation, as seen through our commitment to the C2PA standard, our previous review of deepfake detection tools, and our ongoing work towards reliable deepfake detection.

BBC News formalised this effort with the creation of BBC Verify in May 2023. BBC Verify tackles “fact-checking, verifying video, countering disinformation, analysing data and - crucially - explaining complex stories in the pursuit of truth”. AIGC directly affects the ability of journalists to report on what is true, so, recognising this challenge, we have moved swiftly to develop tools and to work with academics and reporters to reduce the risk to journalists.

BBC R&D has a long history of working with the University of Oxford and recently started a new relationship with TVG, one of the university's research groups. For this project, I teamed up with external collaborators from the group, alongside our own internal team. Together we created a novel approach to help with the detection of AIGC.

To directly address the issue of AIGC, we developed RADAR, which will feature as a paper at NeurIPS. This is the first time BBC R&D research will be published at NeurIPS. I am planning to attend the conference along with Alex; if you wish to meet and discuss topics mentioned in this blog post or the paper, please contact me at woody.bayliss@bbc.co.uk.

A visual representation of the training pipeline for RADAR; please see the paper for more details.

This work has also seen the creation of a new dataset, built by BBC R&D to aid the detection of AIGC. The dataset is being released alongside RADAR and is called BBC-PAIR.

We developed the dataset internally to act as training data for machine learning (ML) models capable of detecting AIGC. When creating it, we deliberately included partially manipulated images, aiming to improve detection of this kind of content; partial AI manipulation often causes more confusion because an image mixes verifiable and fake information. We're currently working with our colleagues in R&D to release the dataset as open source and hope to have it available in the next few weeks, well ahead of the NeurIPS conference.
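
To make the idea of paired, partially manipulated training data concrete, here is a minimal sketch of loading one sample: a pristine image, its AI-edited counterpart, and a mask marking the tampered region. The file layout and field names are hypothetical assumptions for this example; the real structure of BBC-PAIR will be defined by its open-source release.

```python
# Hypothetical loader for a paired sample: a pristine image, its partially
# AI-edited counterpart, and a binary mask marking the manipulated region.
# The BBC-PAIR file layout is assumed here, not taken from the real release.
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class PairedSample:
    original: np.ndarray     # pristine image, H x W x 3
    edited: np.ndarray       # partially manipulated counterpart, H x W x 3
    tamper_mask: np.ndarray  # H x W, 1 where pixels were AI-edited


def load_sample(root: Path, stem: str) -> PairedSample:
    original = np.asarray(Image.open(root / f"{stem}_original.png").convert("RGB"))
    edited = np.asarray(Image.open(root / f"{stem}_edited.png").convert("RGB"))
    mask = np.asarray(Image.open(root / f"{stem}_mask.png").convert("L")) > 127
    return PairedSample(original, edited, mask.astype(np.uint8))
```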

An example of an AI-edited image (middle) with its original counterpart (left) and the region of tampering predicted by RADAR (right).

We built RADAR on pre-trained foundation models, allowing us to use larger models than we could feasibly train ourselves. Our technique combines features from different image modalities, something only possible with the BBC-PAIR dataset, as no other dataset currently matches its scale and diversity. We also found that by incorporating auxiliary contrastive losses, RADAR equalled and, in many cases, exceeded the performance of current state-of-the-art (SOTA) academic models. We've shown that our model generalises effectively, performing well on AIGC outside its training data. This means our work should apply to real-world scenarios, making it ideal for use in journalism.
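
As a rough sketch of the ideas in this paragraph (not RADAR's actual architecture; see the paper for that), the following PyTorch snippet fuses features from two frozen backbones, trains a small detection head, and adds an auxiliary supervised contrastive loss. All module names, dimensions, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionDetector(nn.Module):
    """Fuse features from two frozen backbones and classify real vs AI-edited."""

    def __init__(self, dim_a: int, dim_b: int, embed_dim: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(dim_a + dim_b, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        self.classify = nn.Linear(embed_dim, 1)  # one logit per image

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor):
        z = self.fuse(torch.cat([feats_a, feats_b], dim=-1))
        return self.classify(z).squeeze(-1), F.normalize(z, dim=-1)


def contrastive_loss(z: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1):
    """Auxiliary supervised contrastive term: pull same-label embeddings together."""
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = (z @ z.T / temperature).masked_fill(eye, -1e9)  # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = ((labels[:, None] == labels[None, :]) & ~eye).float()
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()


# One illustrative training step: detection loss plus the contrastive term.
model = FusionDetector(dim_a=768, dim_b=512)
feats_a = torch.randn(8, 768)  # stand-ins for frozen backbone A features
feats_b = torch.randn(8, 512)  # stand-ins for frozen backbone B features
labels = torch.randint(0, 2, (8,)).float()  # 1 = AI-edited, 0 = pristine
logits, z = model(feats_a, feats_b)
loss = F.binary_cross_entropy_with_logits(logits, labels) + 0.5 * contrastive_loss(z, labels)
loss.backward()
```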

To compare with SOTA models, we developed a new benchmark using the BBC-PAIR dataset. The benchmark covers 28 diffusion models and can be used by academics to test AIGC detectors against each other fairly.
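
To illustrate how such a per-generator benchmark might be consumed, here is a sketch that scores a detector separately on each generator's split and reports per-generator and mean accuracy. The detector interface, split format, and function names are assumptions for this example, not the published benchmark's API.

```python
# Illustrative evaluation loop for a per-generator benchmark: score a detector
# on images from each diffusion model separately, then report per-generator
# and mean accuracy. Interfaces here are placeholders, not the real API.
from statistics import mean
from typing import Callable, Dict, Iterable, Tuple

ImageBytes = bytes  # placeholder for whatever image representation the detector takes


def evaluate_detector(
    detector: Callable[[ImageBytes], float],              # returns P(AI-edited)
    splits: Dict[str, Iterable[Tuple[ImageBytes, int]]],  # generator -> (image, label)
    threshold: float = 0.5,
) -> Dict[str, float]:
    results: Dict[str, float] = {}
    for generator, samples in splits.items():
        correct = total = 0
        for image, label in samples:
            prediction = int(detector(image) >= threshold)
            correct += int(prediction == label)
            total += 1
        results[generator] = correct / max(total, 1)  # accuracy on this generator
    results["mean"] = mean(results.values())  # average across all generator splits
    return results
```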


In the future, we will continue to work with Oxford University and TVG to address pressing challenges such as misinformation, interpretability, and many other issues. We also plan to explore methods for detecting AI-generated or manipulated video, helping to spot new risks early and deal with them effectively. Alongside this, we will work closely with journalists to evaluate how these technologies can best support real-world verification workflows.

At BBC R&D, we strive to bring AI technology to everyone, whether that be journalists or any of the creators in the media production chain. Our collaborations and publications will continue to push this envelope, to infinity and beyond.
