What if instead of blaming readers of misinformation, we showed them how to tell the difference between facts and falsehoods?

People die because of misinformation. When hydroxychloroquine was publicised as a possible treatment for COVID-19, a couple from the United States ingested fish tank cleaner because it contained chloroquine phosphate. The husband died and the wife was hospitalised. There was no clear evidence then that hydroxychloroquine was an effective treatment for COVID-19, and there still isn’t, yet Australian politicians continue to misinterpret studies and promote it as a treatment.

Stories like these can teach us important things about misinformation, including the gap between who spreads information and who is affected by it. They can also tell us where we might best focus our efforts when trying to manage its impact on the decisions people make.

A small number of people are disproportionately responsible for disseminating misinformation, and certain communities might be more susceptible to absorbing it in ways that influence their attitudes and behaviours. Those two groups may not overlap as much as we believe: many of the people who act on misinformation in harmful ways are not the ones responsible for spreading it.

But we focus most of our media and research attention on the people who create or promote misinformation, rather than on the people who are most affected – people like the couple who ingested fish tank cleaner after hearing about chloroquine, the people who died after replacing effective cancer treatments with alternative therapies, and the young man who was jailed for firing an assault rifle in a pizzeria after reading Pizzagate conspiracy theories about child trafficking rings in online forums.

It seems easier to focus on the influential people who use their platforms to spread misinformation, because they are highly visible and easy to blame. It is much harder to reach out and empathise with the people who absorb it.

Should we turn to AI?

Machine learning is the branch of artificial intelligence where large volumes of data are used to train models to solve complex tasks that would usually be done by a human.

More than a decade ago, access to data and computing power led to a renaissance in machine learning for images, when computer scientists discovered that they could use millions of images of cats and dogs from the Internet to train models to solve all kinds of other unrelated tasks.

We now have cars that can largely drive themselves, and thanks to a similar, more recent renaissance in language and text data, we have criminal justice software that predicts the risk of re-offending and language models that can write articles about themselves for a newspaper.

Given what can be done, it should be no surprise that AI-based tools could be created to both help and hinder our misinformation problem.

In my research group, we combine methods from machine learning, epidemiology, and social psychology to try to observe misinformation in the real world so we can build tools to reduce its impact.

For example, we developed tools for scoring vaccine-related webpages based on their credibility. This was an AI-based approach for spotting specific issues in news articles and blog posts, such as promoting research findings without the necessary context, or appealing to authority instead of presenting evidence.
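
To give a sense of what this kind of scoring involves, here is a minimal sketch of a credibility classifier. It is an illustration rather than the tool itself: the example pages, labels and model choice are all invented for the purpose of the example.

```python
# Minimal sketch of a credibility classifier, not the tool described above.
# The example pages and labels are invented; a real system would be trained on
# pages labelled by experts against explicit credibility criteria.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: page text paired with a credibility label (1 = credible).
pages = [
    "A randomised trial reported with its sample size, confidence intervals and limitations.",
    "Doctors say this one trick cures everything - experts agree, no study needed.",
]
labels = [1, 0]

# Word and bigram TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pages, labels)

# Score a new page: estimated probability that it reads as credible.
new_page = "This preprint has not been peer reviewed, but its findings are presented as settled fact."
print(model.predict_proba([new_page])[0][1])
```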

In other work, we tried to predict who would end up posting about conspiracy theories on Reddit months in advance. Two users might both start posting comments in r/mylittlepony at the same time, but differences in the language they use and in the other forums they post to over time can distinguish which of the two will become embedded in the r/conspiracy community discussing Pizzagate and QAnon.
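
As a sketch of how that prediction can be framed (and not the study’s actual pipeline), the task reduces to ordinary classification over features summarising a user’s early posting history. The features and numbers below are made up to show the shape of the problem.

```python
# Illustrative only: features, numbers and labels are invented to show the framing,
# not taken from the Reddit work described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-user features built from early posting history:
# [share of posts in fringe forums, rate of us-versus-them language,
#  number of distinct forums posted in, posts per week]
X = np.array([
    [0.05, 0.01, 12, 3.0],
    [0.40, 0.09, 5, 11.0],
    [0.02, 0.00, 20, 1.5],
    [0.55, 0.12, 4, 9.0],
])
y = np.array([0, 1, 0, 1])  # 1 = later becomes an active r/conspiracy poster

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predicted probability that a new user with these early features ends up in r/conspiracy.
print(clf.predict_proba([[0.30, 0.05, 8, 7.0]])[0][1])
```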

But building AI tools isn’t the same as deploying them, and many have produced unintended consequences when they are actually used. There have been fatal crashes involving self-driving cars, and racially biased decision-making built into criminal justice software.

These problems aren’t happening because the machine learning algorithms are bad. Rather, we have always struggled to integrate new technologies into society, because we can’t predict in advance how new technologies will be shaped by human behaviour or how human behaviour will be shaped by new technologies.

To make sure the benefits of AI-based misinformation tools outweigh the risks when they are deployed, we would do well to first consider who the tools are designed to help.

Could social bots confront misinformation?

Imagine an AI-based appraisal tool that can evaluate the credibility of any webpage by looking at patterns of language, images, and links to other pages. Because it uses the newest algorithms, it can also highlight and annotate the information it used to make its decision.

A tool like this could be used by journalists to check sources (or their own writing), or by curious information consumers who lack the experience or training to judge credibility themselves.

It could also be implemented by social media platforms that want to add friction to sharing links to low-credibility webpages. Most social media platforms already flag posts when they include certain combinations of keywords. Our hypothetical AI-based appraisal tool could be deployed as a social bot on social media platforms. Scanning and appraising webpages the first time they are shared, social bots could reply or comment on misinformation and direct users to higher-credibility pages on the same topic.
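
The loop such a bot would run is conceptually simple: appraise the page, and respond publicly only when it scores poorly. The sketch below is purely illustrative; the scoring model, the lookup for an alternative source and the reply function it assumes are hypothetical stand-ins, not real platform APIs.

```python
# Illustrative sketch of an "appraise, then respond" bot. All of the functions it
# is handed (score_page, find_alternative, reply_to) are hypothetical stand-ins.

CREDIBILITY_THRESHOLD = 0.4  # assumed cut-off below which the bot responds


def handle_shared_link(post, score_page, find_alternative, reply_to):
    """Appraise a newly shared webpage and respond publicly if it scores poorly."""
    score = score_page(post.url)              # credibility score in [0, 1]
    if score >= CREDIBILITY_THRESHOLD:
        return                                # nothing to add; the bot stays quiet

    alternative = find_alternative(post.url)  # higher-credibility page on the same topic
    reply_to(
        post,
        "This page scored low on several credibility checks. "
        f"A more reliable source on the same topic: {alternative}",
    )
```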

When correcting misinformation on social media, the aim is not to change the mind of the person disseminating misinformation, but to influence the audience watching the conversation. Recent experiments from social psychology designed to mimic online social spaces suggest that responding to misinformation there can be an effective way to correct beliefs in the audience. Instead of censoring users who post misinformation, we could see this as an opportunity to reach audiences who otherwise trust or see value in what those users say.

If correcting misinformation can be a useful way to mitigate its impact on what people believe, then should we deploy an army of social bots to correct misinformation rather than simply removing the misinformation?

Imagine a second AI-based tool that uses information about what people post and who they connect with to predict their beliefs. This is a form of user profiling, pioneered by researchers at Cambridge University who were approached by Cambridge Analytica for access to Facebook data to help target people during elections (and said no).

Our hypothetical AI-based user profiling tool could predict people’s beliefs associated with misinformation using only information gathered about them online. Social media platforms and governments could then use the tool to help identify people who are more vulnerable to misinformation, more likely to share misinformation, or more likely to act on misinformation in ways that harm themselves or others.
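
Conceptually, such a tool combines signals from what people write with signals from who they connect to. The toy example below shows that combination with invented features and labels; it is not a description of any deployed profiling system.

```python
# Toy example only: the features, labels and model are invented to show how text-derived
# and network-derived signals might be combined into a single belief prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features:
# [distrust-of-institutions language score, share of connections in anti-vaccine groups]
X = np.array([
    [0.10, 0.02],
    [0.72, 0.35],
    [0.20, 0.05],
    [0.65, 0.50],
])
y = np.array([0, 1, 0, 1])  # 1 = self-reported belief in a vaccine misinformation claim

model = LogisticRegression().fit(X, y)

# Estimated probability that a new user holds the belief, given only their online footprint.
print(model.predict_proba([[0.55, 0.30]])[0][1])
```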

If this all sounds a little like Big Brother, remember that large tech companies already deploy AI to do targeted advertising. If we can predict whether someone is at high risk of harmful behaviour based on the misinformation circulating in their communities, isn’t targeting them to change their beliefs the same thing?

These tools are no longer just the topic of 1950s dystopian science fiction. As a computer scientist specialising in machine learning and misinformation, these are the questions that keep me awake at night. I think about the gap between developing new tools and how they could be deployed in society, and I worry about the potential for unintended consequences.

I think AI-based tools could be used to help us deal with our misinformation problem and avoid situations where people end up ingesting fish tank cleaner or taking a rifle to a pizzeria. But first we need to drop our fixation on blaming and amplifying the people who spread misinformation and instead aim to educate and empower those who are most vulnerable to it.
