The reputable journal Science and its related titles will implement an AI-based quality-control program to screen for doctored images in research submitted for publication.
Writing in an editorial published today, the group’s editor-in-chief Holden Thorp confirmed Science would deploy the Proofig platform as part of its image screening processes. The platform claims to use AI for detecting image reuse and duplication.
That includes imagery manipulated to mislead readers – whether peer reviewers, science professionals or the public.
Thorp confirmed the technology had been trialled for several months “with clear evidence that problematic figures can be detected prior to publication” and would work alongside the text plagiarism-detection software already in use by the group.
Until now, Science staff have manually checked images, but Thorp says the adoption of AI screening is a “natural next step”.
As part of the image-vetting process, Science will use Proofig to flag concerning items and issue a ‘please explain’ notice to manuscript authors. Authors who received such notices during the trial “generally provided a satisfactory response”, Thorp noted, while others were stopped from progressing through the editorial process.
Image manipulation has been a hot topic in academic circles for decades. In the early 2000s, the then managing editor of the Journal of Cell Biology, Mike Rossner, implemented an image-vetting policy amid a growing number of digital submissions from researchers and the increasing accessibility of image-editing software. In 2004, he and his colleague Ken Yamada published guidelines for image vetting.
More recently, science integrity watchdogs have highlighted the growing risk of image manipulation in research. A decade ago, Enrico Bucci, now an adjunct professor at Temple University, US, ran a software analysis experiment on more than 1,300 open-access papers, finding 5.7% contained at least one example of suspected image manipulation.
Leading scientific integrity consultant Elisabeth Bik has spent a decade investigating image manipulation in academic publications. In 2016, she reported that around 4% of papers in a 20,000-strong sample had manipulated images. In a 2022 opinion piece for the New York Times, she flagged concerns that AI, despite its promise of accelerating review processes, may also be used for nefarious purposes.
Bik also told Cosmos she reports at least one example of image manipulation per day.
Science joins other journal publishers like Cell Press (owned by Elsevier) in using software to vet submitted images. Nature advises prospective authors that its journal editors may digitally screen images for manipulation.