Trust in science wavers because of messaging

In a period of history marked by competing claims about what information is authentic – think fake news, and real news derided as fake – the robustness of published scientific research is critical.

All too often, however, journal papers reporting the results of long, peer-reviewed research are either ignored by the wider community or derided as the product of dark motivations and even darker ideologies.

This type of hostile reception sometimes surprises researchers but, according to a group of scientists led by Kathleen Hall Jamieson from the University of Pennsylvania, it is a problem of their own making.

Writing in the journal PNAS, Jamieson and colleagues observe that although scientists routinely adhere to methods that ensure transparency, critical evaluation and (in the hard sciences, at least) replicability, they often neglect to broadcast this fact.

And in science, as in media, if nobody witnesses something, then for all practical purposes it didn’t happen.

“Trust in science increases when scientists and the outlets certifying their work honour science’s norms,” they write.

“Scientists often fail to signal to other scientists and, perhaps more importantly, the public that these norms are being upheld.”

The norms in question revolve around clear descriptions of experimental methods, presentation of full results, submission to peer review, transparency around funding sources and honesty in relation to competing interests.

The research community, the writers note, goes to great lengths to design experiments and review methods that “thwart human biases”.

“Yet, there has been no corresponding community agreement on optimal ways to signal the existence and application of such practices within the outputs and narratives generated,” they continue.

The absence of such signalling often has consequences well beyond simple indifference among the general public.

Experimentally based, peer-reviewed and replicable findings about, for instance, environmental management are frequently classified in public and policy debate as simply one opinion among many. Science’s necessary embrace of probability, especially in the area of climate research, is often cynically misrepresented by the fossil fuel industry as unreliable uncertainty.

“Practices that scientists use to reinforce the norms of science may not be immediately transparent to a more general audience,” write Jamieson and colleagues.

“Moreover, they can be misunderstood, or the message may be manipulated by other interests to create misinformation. Researchers can improve the understanding of how the norms of science are honoured by communicating the value of these practices more explicitly and transparently and not inadvertently supporting misconceptions of science.”

To that end, the writers recommend making plain (or plainer) that research practice necessarily involves critique and transparency, and that self-correction is a critical undertaking.

Any such message, they maintain, must be unambiguous and obvious, and they suggest exploring ways to protect provenance (perhaps borrowing from the art world, where blockchain is increasingly used to verify authenticity).

Other suggested methods include systems of badges and checklists that establish transparency around data sources, experimental methods and funding.

Such approaches, the writers suggest, will go a long way towards ensuring scientists are able to trust the work of other scientists.

“But beyond this peer-to-peer communication,” they conclude, “the research community and its institutions also can signal to the public and policy makers that the scientific community itself actively protects the trustworthiness of its work.”

Finding ways to achieve that aim represents a time-critical challenge, still in many ways unmet.