Defunding disinformation

There are many motives for peddling misinformation. The Russian government has put disinformation to work in service of its expansionist geopolitical aims. The Chinese Communist Party uses it to further entrench its political authority and grow its global influence. Professional influence operators act as guns for hire on behalf of business and industry, from oil to tobacco to big pharma and beyond. Trolls on platforms such as 4chan often spread it in service of pure nihilism.

But by far the most common and compelling motivation to spread online disinformation is profit, and it’s led the world to a veritable disinformation crisis.

The biggest global companies are those that provide the machinery to capture and monetise audience attention at scale; today’s internet is powered by businesses that profit from “clicks and eyeballs”.

How online advertising and disinformation became intertwined

Who provides the money for this machine? Sometimes, audiences are monetised through merchandise sales or solicitation of direct donations. Most often, the cash comes from advertising. Advertisers subsidise the web to the tune of more than US$400 billion (A$556 billion) a year in digital ad spend. They pay into a complex ecosystem dominated by two outsized ad tech platforms – Google and Facebook (which each take a sizable commission) – and their money ultimately makes its way to content creators and publishers on the open web.
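
As a back-of-the-envelope illustration of that flow, here is one advertiser dollar moving through the chain; the 30% platform commission is an assumed round number for illustration, not a reported figure.

```python
# Toy model of one advertiser dollar in programmatic ad tech.
# The 30% commission is an assumed round number, not a reported figure.

ad_spend = 1.00                               # what the advertiser pays
platform_cut = 0.30 * ad_spend                # ad tech platforms' commission (assumed)
publisher_revenue = ad_spend - platform_cut   # what the publisher ultimately receives

print(f"Advertiser pays:    ${ad_spend:.2f}")
print(f"Platforms keep:     ${platform_cut:.2f}")
print(f"Publisher receives: ${publisher_revenue:.2f}")
```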

How much those publishers and content creators make is determined mainly by the size and spending power of the audience they capture. And so our modern information ecosystem has become a race for eyeballs: a race won by the most salacious, infuriating, divisive and – by definition – engaging content.

On the moneymaking side of this transaction, everyone wins. The publishers who capture an audience’s attention make money, as do the platforms that take a commission on every ad that gets placed. Nearly US$250 million (A$348 million) a year is estimated to go into subsidising online disinformation.

Those on the other side of this transaction lose out. Advertisers that pay money into this system can end up with their brands appearing alongside unsuitable content, which can harm their reputation and cost them money. It affects what people choose to buy: about 51% of the 1500 millennials and Gen Xers surveyed in 2020 said they were less likely to purchase from a company with an “unsafe” brand placement, and three times less likely to recommend that brand to others.

The Global Disinformation Index (GDI) is a not-for-profit seeking to balance out that equation. It operates in more than 10 different languages and 20 countries; its principles are simple – neutrality, independence and transparency – and its aims clear: “to disrupt, defund and down-rank disinformation sites”.

Disrupting and defunding disinformation

The GDI’s raison d’être is simple: advertisers were missing data on where on the web disinformation was occurring. With that information, they could exclude those sites from their automated ad campaigns, safeguarding their brands and redirecting funds away from disinformation peddlers. What the open web needed was a transparent, independent, neutral index of so-called “disinformation risk”.

The aim presented a technical challenge: traffic on the internet approximately follows a power law, meaning a small number of high-profile websites receive a sizable fraction of all traffic. But the distribution also has a “long tail”: a vast number of websites that individually attract little traffic, but together capture a great deal.
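
A toy model makes this concrete. The sketch below assumes a Zipf-like distribution (a common power-law model of web traffic) over 100,000 hypothetical sites; the exponent and counts are illustrative assumptions, not GDI measurements.

```python
# Illustrative only: Zipf-distributed traffic across 100,000 sites,
# showing that the "head" and the "long tail" each capture a large share.
# The exponent (s = 1) and site count are assumptions, not GDI figures.

N = 100_000
weights = [1 / rank for rank in range(1, N + 1)]   # Zipf's law with s = 1
total = sum(weights)

head_share = sum(weights[:100]) / total     # the top 100 sites
tail_share = sum(weights[1000:]) / total    # everything beyond the top 1,000

print(f"Top 100 sites:          {head_share:.0%} of traffic")
print(f"Sites beyond top 1,000: {tail_share:.0%} of traffic")
# Both shares are substantial: hand-rating only high-profile sites
# would leave much of the traffic unassessed.
```

Under these assumptions, the top 100 sites draw roughly 40% of all traffic, yet the sites beyond the top 1,000 still capture nearly as much.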

For that reason, it was imperative that the disinformation risk ratings be built using a hybrid of human-powered assessments (to capture high-profile media with the requisite nuance and fidelity) and large-scale automation (to keep pace with the vast number of “long tail” sites).

The human-powered portion of the methodology seeks to assess the journalistic integrity, and thus the disinformation risk, of publishers across over 20 different media markets to date. This methodology comports with the Journalism Trust Initiative, an international standards effort launched and operated by Reporters Without Borders. GDI assesses content and operational policies, looking for conflicts of interest, prior evidence of disinformation risk, and lapses in journalistic standards as part of its assessments.

Meanwhile, GDI’s automated systems crawl hundreds of thousands of sites, assessing millions of pieces of new content each week and identifying those that peddle the adversarial narratives of the day. When a site crosses a minimum risk threshold, it is flagged for additional human review.
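
GDI has not published the internals of this pipeline, but the triage logic described above can be sketched in a few lines. Everything below – the keyword “classifier”, the cutoffs, the site names – is a hypothetical illustration of the general approach: score new content, aggregate per site, and queue high-scoring sites for human review.

```python
# Hypothetical sketch of automated triage: score articles, aggregate per
# site, and queue sites over a threshold for human review. The keyword
# matcher stands in for a real trained classifier; all names and numbers
# are illustrative assumptions, not GDI's actual system.

def narrative_risk(article_text: str) -> float:
    """Stand-in for a classifier scoring adversarial narratives (0 to 1)."""
    risky_phrases = ("miracle cure", "rigged election", "plandemic")
    hits = sum(p in article_text.lower() for p in risky_phrases)
    return min(1.0, hits / 2)

def sites_for_human_review(articles_by_site: dict[str, list[str]],
                           article_cutoff: float = 0.5,
                           site_threshold: float = 0.2) -> list[str]:
    """Flag sites whose share of risky articles crosses a threshold.
    Flagged sites are queued for human review, not auto-labelled."""
    queue = []
    for site, articles in articles_by_site.items():
        risky = sum(narrative_risk(a) >= article_cutoff for a in articles)
        if articles and risky / len(articles) >= site_threshold:
            queue.append(site)
    return queue

# Example: one ordinary site, one that crosses the review threshold.
crawl = {
    "example-news.com": ["Council approves new library", "Rainfall update"],
    "example-clicks.net": ["The plandemic hoax!", "Rigged election proof"],
}
print(sites_for_human_review(crawl))   # ['example-clicks.net']
```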

Ultimately, all of these processes feed into data sets that GDI then provides to advertisers and digital ad platforms to prevent them from inadvertently buying or selling ads on sites trafficking in disinformation. This not only keeps advertisers’ brands safe, but also helps to funnel ad revenue away from disinformation and toward higher quality news.
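
In practice, a dataset like this is typically consumed as an exclusion list at ad-buying time. The sketch below illustrates the idea; the ratings, domains and function names are hypothetical, and real integrations are platform-specific.

```python
# Illustrative use of a site-risk dataset as an ad-buying exclusion list.
# Ratings and domains are made up; real feeds and APIs differ by platform.

RISK_RATINGS = {                      # domain -> disinformation risk rating
    "example-clicks.net": "maximum",
    "example-news.com": "minimum",
}

BLOCKED_LEVELS = {"high", "maximum"}

def should_bid(ad_slot_domain: str) -> bool:
    """Decline to place ads on domains rated high or maximum risk.
    Unrated domains are allowed here; a stricter policy could block them."""
    return RISK_RATINGS.get(ad_slot_domain) not in BLOCKED_LEVELS

for domain in ("example-news.com", "example-clicks.net", "unrated-site.org"):
    print(domain, "->", "bid" if should_bid(domain) else "skip")
```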

Australia’s disinformation landscape

In September 2021 the GDI cast its eye over the Australian disinformation landscape, profiling 34 Australian media outlets in a report compiled in collaboration with the Queensland University of Technology’s Digital Media Research Centre (DMRC).

The outlets were chosen for both their reach and their relevance. Sites assessed include the ABC, Sky News, News Corp, Nine Newspapers, SBS, 7news.com.au, Nine.com.au, Crikey, New Matilda and Pedestrian TV.

On the whole, the results were heartening: of the sites sampled in the report, which covered April–September 2021, nearly 75% were found to have a low to minimum risk of peddling disinformation to online users.

“Only a limited number of Australia’s sites present high or maximum levels of disinformation risk and just one site was rated as having a maximum level of disinformation risk,” said DMRC Director Professor Patrik Wikstrom at the time the report was released.

Wikstrom cautioned, though, that the concentration of media in Australia has encouraged a climate in which many Australians – particularly those outside of major centres – turn to alternative platforms to access news.

“Citizens with limited local news access, which is increasingly the case for those in regional Australia, are turning more to social media for news – fertile ground for the spread of fake news.

“The disinformation surrounding the COVID-19 pandemic is the perfect example of the dangers inherent in this. By disrupting society’s shared sense of accepted facts, these narratives undermine public health, safety, and government responses.”

And there is a distinct imbalance in vulnerability to the disinformation that circulates on these platforms, leaving some communities at greater risk than others.

Recent research has shown that for many migrant and ethnic minority groups, social media platforms are important sources of information on COVID-19. Analysing data from six countries, including the US, UK and China, researchers have found that roadblocks ranging from language barriers to low health literacy can exclude many minority groups from accessing official public health information, and push them into the swirling winds of the social media maelstrom.

There are clear opportunities for these platforms to have a positive impact – such as through the sharing of personalised and culturally tailored public health information. But there’s also evidence to suggest that the ease with which disinformation circulates through these unregulated mediums may be associated with lower participation in vital public health measures, such as vaccine uptake.

Taming the disinformation beast

The GDI is only one part of a larger ecosystem of organisations – spanning media literacy education, tech reform policy, counter-messaging, and platform trust and safety – all contributing to the conversation, and to the ultimate goal of disrupting online disinformation.

By some estimates, GDI has already cut ad revenue to disinformation purveyors by half through partnerships with over a dozen major ad platforms. But there’s still a long way to go to protect democracy and cut the funding to disinformation.

In the meantime, it’s important that individuals realise they have power to act. Tanya Notley, Associate Professor of Media at Western Sydney University and Deputy Chair of the Australian Media Literacy Alliance (AMLA), says there are steps we can take right here, right now to empower citizens, while organisations like the GDI work behind the scenes to tame the intangible beast.

“Misinformation is not going away and facing this challenge is complicated,” she says. “Developing the media literacy of both children and adults is one way to push back against the problem, and build a sustainable future for a global information and media ecosystem.”

While stemming the flow of misinformation is clearly the ideal end game, Notley believes that increasing people’s ability to detect misinformation is crucial to curbing its effects in the short term. She says it’s vital that we foster critical thinking in the population and build public knowledge of how media industries operate.

“A fully media-literate citizen will be aware of the many ways they can use media to participate in society. They will know how media are created, funded, regulated, and distributed and they will understand their rights and responsibilities in relation to data and privacy.

“Misinformation won’t disappear, but teaching the community to spot it can strip the falsehoods of their power.”

This article was published in partnership with 360info.org
