The influence of bots on vaccine-related discussions on social media may be far smaller than commonly feared.
A new study led by Australia’s University of Sydney has found that the overwhelming majority of the vaccine-related content seen by typical users of Twitter in the US between 2017 and 2019 was generated by human-operated accounts.
Only a small fraction of bot-generated vaccine content ever reached active social media users.
Adam Dunn and colleagues tracked more than 53,000 randomly selected active Twitter users and monitored their interactions with more than 20 million vaccine-related tweets posted by both human-operated and bot accounts.
“The reality is that most of what people see about vaccines on social media is neither critical nor misinformation,” Dunn says.
“It is convenient to blame problems in public health and politics on orchestrated and malicious activities, so many investigations focus on simply tallying up what vocal anti-vaccine groups post, without measuring what everyone else actually sees and engages with.”
The study, published in the American Journal of Public Health, found that a typical Twitter user potentially saw 757 vaccine-related posts over the three years, but just 27 of those were critical of vaccination, and most users were unlikely to have ever seen vaccine-related content from a bot.
More than a third of active Twitter users posted or retweeted about vaccines, but only 4.5% ever retweeted vaccine-critical information, and just 2.1% retweeted content from a bot.
A subgroup of 5.8% of users was embedded in communities more engaged with vaccination in general, but even within this group the vast majority never engaged with vaccine-related posts from bots; instead, they engaged with vaccine-critical content posted by other people in their communities.
The researchers note that the study did not examine engagement with trolls: human-operated accounts that use a range of approaches to gain followers and post misinformation.
They suggest that the resources social media platforms and policymakers invest in controlling bots and trolls might be more effectively spent on interventions to educate users and improve media literacy.
“By focusing investigations only on counting what bots, trolls, and malicious users post, without looking at what people potentially see and engage with, there is a risk of unnecessarily amplifying that content, which could make it seem much more important than it really is,” says Dunn.