Did AI researchers let AI hallucinations into scientific papers?
Investigating the claim that 100 made-up citations were found in AI research papers.
AI can make mistakes, and AI chatbots like ChatGPT warn you about that whenever you ask them anything.
These mistakes sometimes involve making up entirely fictitious, factually false statements known as "hallucinations".
Whether these hallucinations matter depends on what you're using AI for, and whether they are spotted and corrected.
The team on More or Less were slightly surprised to read a headline in Fortune magazine claiming that a top academic AI conference had accepted research papers containing 100 AI-hallucinated citations.
You might think that the top AI researchers in the world would be careful about using AI to write their research papers.
Alex Cui, CTO and co-founder of GPTZero, the company that discovered the hallucinations, explains what's going on.
CREDITS:
Presenter and producer: Tom Colls
Sound mix: James Beard
Production co-ordinator: Brenda Brown
Editor: Richard Vadon
Broadcasts
- Sat 21 Feb 2026 05:50 GMT BBC World Service except Australasia, East and Southern Africa, East Asia & South Asia
- Sun 22 Feb 2026 05:50 GMT BBC World Service East and Southern Africa
- Sun 22 Feb 2026 09:50 GMT BBC World Service Europe and the Middle East & West and Central Africa only
- Sun 22 Feb 2026 11:50 GMT BBC World Service except East and Southern Africa, Europe and the Middle East & West and Central Africa
