
Bots tend to elicit mixed feelings from people – aside, presumably, from those creating them.
These social media accounts, controlled by software rather than by humans, serve a variety of purposes.
Some are reasonably benign – such as news aggregation and automated customer assistance for online retailers – but bots have been under the spotlight in recent years as they’re regularly part of large-scale efforts on social media to manipulate public opinion, such as during electoral campaigns.
Now, research published in Frontiers in Physics has revealed the presence of short-term behavioural trends in humans that are absent in social-media bots, providing an example of a “human signature” on social media – which could be leveraged to develop more sophisticated bot detection strategies.
The study is the first of its kind to apply user behaviour during a social media session to the problem of bot detection.
“Remarkably, bots continuously improve to mimic more and more of the behaviour humans typically exhibit on social media,” says co-author Emilio Ferrara, from the University of Southern California Information Sciences Institute, US.
“Every time we identify a characteristic we think is prerogative of human behaviour, such as sentiment or topics of interest, we soon discover that newly developed open-source bots can now capture those aspects.”
The researchers studied how the behaviour of humans and bots changed over the course of an activity session using a large Twitter dataset associated with recent political events.
Over the course of sessions, they measured various factors to capture user behaviour – such as the propensity to engage in social interactions – and then compared these results between bots and humans.
To study the behaviour of both types of users, the researchers focused on indicators of the quantity and quality of social interactions a user engaged in, including the number of retweets, replies and mentions, as well as the length of tweets written.
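The paper’s exact feature definitions aren’t reproduced in this article, but a minimal Python sketch gives the flavour of such session-dynamics measures. The Tweet fields and the slope-based trend summary below are illustrative assumptions, not the study’s published code:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Tweet:
    # Hypothetical schema -- the study's actual data fields aren't published here.
    is_retweet: bool
    is_reply: bool
    num_mentions: int
    text: str

def session_trends(session: list[Tweet]) -> dict:
    """Fit a linear trend (slope) to each behavioural indicator against
    the tweet's position within the session. Per the study, humans show
    rising interaction and falling tweet length over a session, while
    bots stay roughly flat."""
    if len(session) < 2:
        raise ValueError("need at least two tweets to fit a trend")
    positions = np.arange(len(session))

    def slope(values):
        # The first coefficient of a degree-1 polynomial fit is the slope.
        return float(np.polyfit(positions, values, 1)[0])

    return {
        "retweet_trend": slope([float(t.is_retweet) for t in session]),
        "reply_trend": slope([float(t.is_reply) for t in session]),
        "mention_trend": slope([t.num_mentions for t in session]),
        "length_trend": slope([len(t.text) for t in session]),
    }
```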
They then leveraged these behavioural results to inform a classification system for bot detection, to observe whether the inclusion of features based on the session dynamics could improve the performance of the detector.
A range of machine-learning techniques was used to train two different sets of classifiers: one including the features describing the session dynamics, and – as a baseline – one without those features.
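Again as an illustrative sketch rather than the study’s actual pipeline: the code below assumes a feature matrix X_base of conventional account-level features, a matrix X_session of session-dynamics features like those above, labels y marking known bots, and a random forest standing in for the range of techniques the researchers trained:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def compare_models(X_base: np.ndarray, X_session: np.ndarray, y: np.ndarray):
    """Train one classifier without and one with the session-dynamics
    features, and compare mean cross-validated accuracy."""
    baseline = RandomForestClassifier(n_estimators=200, random_state=0)
    full = RandomForestClassifier(n_estimators=200, random_state=0)
    # The full model sees the baseline features plus the session dynamics.
    X_full = np.hstack([X_base, X_session])
    acc_base = cross_val_score(baseline, X_base, y, cv=5).mean()
    acc_full = cross_val_score(full, X_full, y, cv=5).mean()
    return acc_base, acc_full
```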
The researchers found human behavioural trends that were not present among bots. Humans showed an increase in social interaction over the course of a session, illustrated by a rising fraction of retweets, replies and mentions per tweet.
Humans also showed a decrease in the amount of content produced, illustrated by a decreasing trend in average tweet length. These trends are thought to arise because, as sessions progress, human users grow tired and become less likely to undertake complex activities such as composing original content.
Another possible explanation is that as time goes by, users are exposed to more posts, increasing the likelihood that they will react to and interact with content.
In both cases, bots appeared unaffected by such factors, exhibiting no behavioural change over the course of a session.
When the researchers added the session-dynamics features to the classification system, the full model significantly outperformed the baseline model in bot-detection accuracy.
These results highlight that the behaviour of bot and human users on social media evolves in a measurably different manner over the course of an activity session. They also suggest that these differences can be used to make better bot detection systems, or to improve existing ones.
“Bots are constantly evolving – with fast-paced advancements in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms,” says Ferrara.
“We are continuously trying to identify dimensions that are particular to the behaviour of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”

Ian Connellan
Ian Connellan is the Editor-in-Chief of the Royal Institution of Australia.