ChatGPT can fabricate pretty convincing medical data, according to a new paper published in Patterns.
This will make it easier than ever to publish fraudulent research, according to the paper’s authors.
“Reasons for fabricating research using an AI-based technology include financial gain, potential fame, promotion in academia, and curriculum vitae building, especially for medical students who are in increasingly competitive waters,” they write.
The researchers asked ChatGPT to generate an abstract for a scientific paper about the effects of two different drugs on rheumatoid arthritis, using data from 2012 to 2020.
The chatbot returned a convincing-sounding abstract, complete with plausible-looking figures, and – when the researchers prompted it – claimed that one drug worked better than the other.
ChatGPT’s training data, however, only extends to 2019, so it could not have had any real figures from 2020.
It also claimed to have taken the numbers from a private database that requires a fee to access.
“Within one afternoon, one can find themselves with dozens of abstracts that can be submitted to various conferences for publication,” caution the researchers.
“Upon acceptance of an abstract for publication, one can use this same technology to write their manuscript, completely built upon fabricated data and falsified results.”
The researchers point out that there are legitimate ways for researchers to use AI.
“Utilising an AI for research is not an inherently malicious endeavour,” they write.
“Asking an AI to grammar-check work or write a conclusion for legitimate results found in a study are other uses an AI may incorporate into the research process to cut out busywork that may slow down the scientific research process.”
They note that their own paper was grammar-checked by an AI.
“The issue arises when one utilises data that are not existent to fabricate results to write research, which may easily bypass human detection and make its way into a publication.
“These published works pollute legitimate research and may affect the generalisability of legitimate works.”
They say the research community should be thinking about how best to introduce safeguards against this threat.
Originally published by Cosmos as ChatGPT can make real-seeming fake data
Ellen Phiddian is a science journalist at Cosmos. She has a BSc (Honours) in chemistry and science communication, and an MSc in science communication, both from the Australian National University.