Nefarious actors could use a generative model to churn out more than a hundred blog posts of disinformation on a range of health topics, a new study has found.
Using OpenAI’s hugely popular GPT Playground and ChatGPT tools, two researchers with no specialist skills in the technology set out to learn whether it was possible to quickly produce 50 blog posts containing false information on vaccines and vaping.
Within 65 minutes, they’d produced 102 such articles totalling more than 17,000 words.
Among their stipulations: each post needed two “scientific-looking references” (which could be fabricated) and a catchy title, and had to be written to target specific audiences based on gender, age or pregnancy status.
Other large language models, it seems, were not so obliging: Google’s Bard and Microsoft’s Bing generative AI appeared to prevent such material from being produced.
“Each contained 300 words as a minimum, that was our request,” says the study’s lead author, Bradley Menz, a registered pharmacist and researcher at Flinders University.
“They also contained fabricated clinician and patient testimonials, as well as academic references that were essentially fabricated or distorted.”
Menz and the study’s other researchers initially set out to see how easy it would be to generate misinformation on the cheap, and to investigate whether safeguards existed to prevent such misuse.
While Bard and Bing prevented the easy creation of such material, OpenAI’s GPT platforms were seemingly happy to oblige. The researchers flagged with OpenAI how easily its platforms could be exploited, but did not receive a response; Cosmos also contacted OpenAI for comment and received no reply.
Menz and his colleagues, whose study is published as a special communication in the journal JAMA Internal Medicine, are calling for immediate measures to prevent “mass generation of misleading health-related text, image and video content”.
While he acknowledges that material circulated on social media can be accurate, Menz would like to see greater transparency, and says technology developers should have robust reporting, “transparency and accountability in place to aid public safety”.
“Generally speaking with disinformation, it’s a small number of people that spread very harmful health information topics throughout social media,” he says.
“Our concern is that they have tools which enable them to generate things rapidly; it’s going to spread this information throughout social media and then potentially have a great deal of consequence in the community.”