With a click of a button, could you change your mind about ‘good’ science?

As the scientific community grapples with misinformation and the undermining of trust, one group of AI and psychology researchers has proposed a simple button that could help better inform the public. 

They say that, as a small but potentially valuable measure, social media platforms could flag material built on retracted research, informing users of what is (and isn’t) reliable science. 

It’s a suggestion from a group of University of Sydney scientists, who say flagging retracted papers is a comparatively easy way to manage mis- and disinformation on social media.  
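
Such a flag could plausibly be automated. Retraction notices for many journals are indexed by Crossref, whose public REST API can list editorial updates (including retractions) to a given DOI through its ‘updates’ filter. The sketch below is illustrative only, not the study’s system: it assumes a platform can extract a DOI from a shared link, and the function name and example DOI (the retracted 1998 Wakefield paper) are chosen purely for demonstration.

```python
# Minimal sketch (not the study's implementation): ask Crossref whether
# any published record declares itself a retraction of a given DOI.
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def retraction_notices(doi: str) -> list:
    """Return Crossref records that are retractions/withdrawals of `doi`."""
    query = urllib.parse.urlencode({"filter": f"updates:{doi}"})
    with urllib.request.urlopen(f"{CROSSREF_API}?{query}") as resp:
        items = json.load(resp)["message"]["items"]
    # Keep only updates explicitly typed as retractions or withdrawals.
    return [
        item for item in items
        if any(update.get("type") in {"retraction", "withdrawal"}
               for update in item.get("update-to", []))
    ]

if __name__ == "__main__":
    # Example: Wakefield et al. 1998, retracted by The Lancet in 2010.
    doi = "10.1016/S0140-6736(97)11096-0"
    print("Retracted" if retraction_notices(doi) else "No retraction notice found")
```

In principle, a platform could run a check like this when a link is shared and surface the result as the kind of button the researchers tested.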

“There are a lot of cases where there are papers that are published and people promulgate the message from those and then later we discover there’s an error, or that the paper is retracted for whatever reason,” says Judy Kay, a computer scientist at the University of Sydney and editor-in-chief of the International Journal of Artificial Intelligence in Education. 

“Publishers themselves, at least the reputable ones, have a real reason to tackle this problem.” 

Knowledge of retractions is, however, lacking. Among the 44 undergraduate psychology students who participated in the study, none of whom had yet taken coursework on retraction, around a third had no understanding of the word’s meaning and half didn’t understand the concept in a scientific context. 

The participants were then split into two groups, each presented with a set of social media posts promoting a mix of valid and retracted research, on topics including the efficacy of face masks in disease prevention, the health benefits of the Mediterranean diet, and weight gain from food consumption in movie theatres. 

One group saw a ‘More Information’ button on posts, which revealed when a research paper related to the content had been retracted. Participants exposed to this retraction notification were more likely to view the content with scepticism. 

The control group could click through to the article itself and check the journal’s landing page for a retraction notice, but few did so during the study.  

An example of the More Information button used in the study. Credit: University of Sydney.

The finding highlights the challenge for those seeking to ensure accurate scientific information reaches the public: even when journals publish retraction notices, readers must still find them and understand what they mean.  

“The ones who could click on the ‘More’ button could discover it had been retracted, the control group could still click on the article, go into the articles… a few of them did dig down and look at the article, but even then, there weren’t many,” Kay says. 

“The point is that if we had a little button that you could find out more… there really is a real possibility to provide this information.” 

Science accountability groups like Retraction Watch are leading the charge against manipulated science, and demands from the academic community to call out poor or fake research are growing.  

The rise of generative AI has thrown another curveball: entire academic papers can now be written by large language models. 

Kay thinks cooperative measures to introduce ‘vetting’ technology for published content are something companies like Meta, Google and X would adopt.  

X itself has a useful, though user-generated, Community Notes feature that allows users to collectively debunk or contextualise erroneous content for others. 

“I do think they would like to improve the situation and they have done various things with various buttons in various places and one of the issues is whether people notice them. 

“Our [button] was sitting there, hard to miss.” 

The study was published in the Proceedings of the ACM on Human-Computer Interaction. 
