An international group of medical researchers is calling for a better understanding of “spin” in research papers.
They say that authors, peer reviewers and editors need to be able to recognise and fix misleading reporting in medical studies.
The researchers, who have published a paper in Annals of Internal Medicine, also offer guidance on how to do this.
“Spin occurs when there is misrepresentation of the findings, either through reporting, interpretation, or extrapolation,” lead author Dr Riaz Qureshi, an assistant professor of ophthalmology at the University of Colorado, US, tells Cosmos.
“When the results are inappropriately conveyed to the reader, particularly in a review’s discussion/conclusions or the abstract, the reader may draw incorrect inferences.”
The research focusses on medical reviews of harms – studies that collate previous research on whether or not certain practices or treatments can damage one’s health.
Qureshi says that it’s particularly easy for people reading medical studies to draw the wrong conclusions when there’s spin in this field.
“There are so many known issues with their collection, analysis, and reporting in primary literature that systematic reviews are themselves at high risk of not having the complete picture.
“The ultimate effect is a potential for harm in clinical practice because people using the reviews – either directly or in clinical guidelines – may accept the ‘spun’ results as the best representation of harms results and make decisions that are misinformed.”
The researchers built a framework that divides spin into domains, categories and specific types, to help clarify what sort of spin they were dealing with.
The 3 domains were misleading or selective reporting, misleading interpretation, and misleading or selective extrapolation. Spin categories and types included things like “Selective reporting of or over-/underemphasis on harm outcomes” (category 1), or “Review authors assess harms in very specific samples but assert that the results (whether they show increased risk for harm or not) are applicable to a much broader population, intervention, or setting” (category 12).
“I think the medical community understands that spin is not usually a good thing and is able to spot common and egregious examples when they see them,” says Qureshi.
“However, I think the scope of the problem is probably not known by most in the community.
“The more nuanced types of spin, such as linguistic spin and inappropriate extrapolation – which are easy to do unintentionally, even by native English speakers – may not be recognised immediately as spin by most if they don’t consider the conclusions in the context of the review.”
The researchers analysed a random sample of 100 systematic reviews, looking for examples of spin.
“We used multiple independent reviewers to carefully assess the methods used by all included reviews to assess harms, as well as their results and conclusions for harms,” says Qureshi.
“We checked that all discussion and conclusions regarding harms appropriately represented the findings in the context of the setting and evidence that was found.”
They found that, of the 58 reviews that assessed harms, 28 (48%) contained at least 1 type of spin. Of the 42 reviews that didn’t assess harms, just 6 (14%) contained spin.
The team then showed how examples of spin could be revised to be more accurate. They also identified 6 examples of good reporting in medical journals.
“I think it is important for all researchers to have a clear understanding of spin,” says Qureshi.
“It is important for those who produce research to appropriately present their findings, for those who peer review and editorialise manuscripts to catch inappropriate reporting, interpretation, and extrapolation, and for consumers of research to recognise spin when they see it and not take all conclusions at face value.”