Ten years ago epidemiologist John Ioannidis blew the whistle on science.
His paper, “Why Most Published Research Findings Are False”, was published in August 2005 in PLOS Medicine and became one of the journal’s most-cited articles. While climate sceptics, anti-vaccination campaigners and the rest of the pseudo-science community have dined out on the paper, arguably it has been a shot in the arm for science.
Ioannidis (then at the University of Ioannina, Greece, now at Stanford University, California) argued that researchers’ inherent biases made them too flexible with their study designs. Sample sizes might be too small to be meaningful, say; or if the initial data didn’t yield dramatic results, they were re-analysed until they produced “better numbers”. In some cases, data that did not conform were simply eliminated (called “cleaning the data”). These tendencies were more pronounced when financial or ideological interests were at stake.
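The statistical mechanism behind “re-analysing until you get better numbers” is easy to demonstrate. Below is a minimal Python sketch (my own illustration, not from Ioannidis’s paper): it compares two groups drawn from the same distribution, so any “significant” difference is a false positive. A single honest test triggers at roughly the nominal 5% rate, but a researcher willing to take 20 looks at fresh noise will find a publishable “effect” most of the time.

```python
import random
import statistics

def one_test(rng, n=30):
    """Compare two groups drawn from the SAME distribution.
    Returns True if a naive t-statistic looks 'significant'
    (|t| > 2, roughly p < 0.05 at this sample size)."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.0

def hacked_result(rng, looks=20):
    """'Re-analyse until you get better numbers': take up to 20
    looks at fresh noise, report success if ANY look passes."""
    return any(one_test(rng) for _ in range(looks))

rng = random.Random(42)
runs = 500
honest = sum(one_test(rng) for _ in range(runs)) / runs
hacked = sum(hacked_result(rng) for _ in range(runs)) / runs
print(f"honest false-positive rate: {honest:.2f}")  # typically near 0.05
print(f"p-hacked 'success' rate:    {hacked:.2f}")  # typically well over 0.5
```

Since each look has a 5% false-positive rate, 20 independent looks succeed with probability about 1 − 0.95²⁰ ≈ 64% — which is why flexible analysis so reliably delivers “results”.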
In psychology, such practices have become the norm. An anonymous questionnaire of 2,000 psychologists, published by Harvard Business School researcher Leslie John and colleagues in Psychological Science in 2012, found almost 100% of respondents had excluded contradictory data from their research papers. But no discipline is immune. Even in physics, the reported discovery of gravitational waves in March 2014 was later dismissed.

Drug companies conducting clinical trials sometimes neglect to publish the entire data set, potentially hiding unfavourable results. But drug companies are also victims. In 2011, drug company Bayer reported it could replicate only 25% of published findings related to drug targets for cancer, women’s health and cardiovascular medicine. In 2012, the company Amgen could replicate only 11% of cancer research results. This is shocking, but also understandable.
A career in academic research is wildly competitive. University scientists have to raise grant money constantly, and to do so they have to convince the funding agency that their project will work, based on their past results. Only innovative work is funded. The rewards for success are huge: your salary depends on it.
The editors of scientific journals also play their part, driven by the desire for high-impact research and advertising. And the busy scientists they rely upon for the unpaid task of peer review can be lax about vetting a paper’s scientific rigour. Each link in the chain encourages dishonesty.
Richard Horton, the editor of The Lancet, wrote in April: “Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
Something needs to change. In this spirit, in November 2011, a group of American scientists led by Brian Nosek, a psychologist at the University of Virginia, began The Reproducibility Project.
They began with psychology, selecting 100 experiments that had been published in peer-reviewed journals and 270 expert scientists to repeat them. To ensure they were doing the experiments correctly, they asked the original authors to participate. The findings were published in August 2015 in Science – 10 years after Ioannidis’s paper. More than 60% of the experiments did not reproduce the original results. Even in the successfully replicated studies, the effect was about half that of the original studies.
The good news is that this seems to be the beginning of a new wave of making science accountable. Nosek says major psychology journals have started publishing replications alongside original research. A reproducibility project for cancer research is next.
What will be the impact? Knowing that research is going to come under replication scrutiny may lift the game for researchers and the journals that publish their work.
Some studies will always be non-reproducible – that’s the way of science. As the authors of the Science reproducibility paper say: “Even research of exemplary quality may have irreproducible empirical findings because of random or systematic error.”