ChatGPT can fabricate pretty convincing medical data, according to a new paper published in Patterns.
This will make it easier than ever to publish fraudulent research, according to the paper’s authors.
“Reasons for fabricating research using an AI-based technology include financial gain, potential fame, promotion in academia, and curriculum vitae building, especially for medical students who are in increasingly competitive waters,” they write.
The researchers asked ChatGPT to generate an abstract for a scientific paper about the effects of two different drugs on rheumatoid arthritis, using data from 2012 to 2020.
The chatbot returned a convincing abstract, complete with plausible-sounding figures, and – when the researchers prompted it – claimed that one drug worked better than the other.
But ChatGPT's training data only extends to 2019, so it could not have drawn on any real figures from 2020.
It also claimed to have taken the numbers from a private database that requires a fee to access.
“Within one afternoon, one can find themselves with dozens of abstracts that can be submitted to various conferences for publication,” caution the researchers.
“Upon acceptance of an abstract for publication, one can use this same technology to write their manuscript, completely built upon fabricated data and falsified results.”
The researchers point out that there can be positive ways for researchers to use AI.
“Utilising an AI for research is not an inherently malicious endeavour,” they write.
“Asking an AI to grammar-check work or write a conclusion for legitimate results found in a study are other uses an AI may incorporate into the research process to cut out busywork that may slow down the scientific research process.”
They note that their own paper was grammar-checked by an AI.
“The issue arises when one utilises data that are not existent to fabricate results to write research, which may easily bypass human detection and make its way into a publication.
“These published works pollute legitimate research and may affect the generalisability of legitimate works.”
They say the research community should be considering how best to put safeguards in place against this threat.