Peer review: is it fit for purpose?

There is widespread concern among scientists that peer review too often fails in its task of ensuring that published research meets basic standards of sound science and rigorous evaluation.

A Cosmos survey of scientists who have been part of the peer review system revealed that many hold significant concerns about the process.

This comes at a time when facts are under scrutiny like never before, yet issues such as climate change and pandemic response demand that the public be provided with quality data, analysis and information they can trust.

Peer review is one of the pillars of science, providing independent assessment of the validity, quality and originality of articles submitted for publication and building confidence in research results.

In the Cosmos survey, completed in September by 187 respondents across Australia holding PhDs or professorships, almost one in four (24%) rated the peer-review system overall as “poor” or “terrible”.

On a personal level, when asked to think back to when their first paper went through the peer-review process, 17% said the process was “poor” or “terrible”, 35% said it was “fair”, and the rest felt it was “good” (42%) or “great” (6.45%).

The National Health and Medical Research Council (NHMRC), one of the world’s biggest funding bodies for research, was given the survey results, and said in an email to Cosmos: “Standards of peer review can vary widely. This has been demonstrated in the case of scientific journals of which there are many thousands, ranging from those that seek to achieve the highest standard to those that lack transparent peer review processes altogether.”

The Cosmos survey was conducted to assist with the preparation of a major feature series by Cosmos Weekly on the problems with peer review. The series, by science journalist Clare Kenyon, drew heavily on the work of the post-publication review websites Retraction Watch and PubPeer, and of science sleuth Elizabeth Bik. The series will be published in Cosmos Weekly on October 14.


About the survey:

  • Request to participate in survey emailed to 1700 people
  • 187 respondents
  • 72 female (39%), 113 male (60%) and 2 (1%) other
  • Main degree: 17 arts and humanities (9%), 175 science including medicine (94%)
  • Source of main degree: 119 Australia (64%), 68 overseas (36%)
  • 175 had participated as a peer reviewer (94%)

The Retraction Watch online database lists 36,104 retracted papers and 2249 ‘expressions of concern’ (of which 772 have been upgraded to a full retraction). Of these, 475 retractions and 41 expressions of concern (with three upgraded to retraction) are linked to Australia.

This might sound like a lot, but Ivan Oransky, one of the two men behind the site, told Kenyon it barely scratches the surface. “We estimate that probably one in 50 papers should be retracted. Right now, it’s about eight out of 10,000, so just under one in 1000 is retracted.”

Elizabeth Bik has now reviewed more than 100,000 papers, carefully trawling through images and data to detect signs of potential manipulation.

“It’s a balance,” she says. “I hate that people cheat, but I also have a drive to warn others, to warn people about papers that might have areas of concern.”

Bik typically reports at least one detection a day, often on social media. She hopes this transparency and openness is helping create a more encouraging environment for whistleblowers and others who notice discrepancies to come forward.

Almost half of the respondents to the Cosmos survey said they felt they had been affected by a reviewer’s conflict of interest. Another 29% said “maybe, not sure” and 24% said “no”.

About 14% felt they had been subject to sexism by the peer reviewers.

The typical peer-review system does not always require that authors’ identities be withheld from reviewers, but 58% of the survey respondents felt it should.


Our five-part series on peer review appears tomorrow in Cosmos Weekly.


One of the big flaws in the system is that reviewers are rarely able to check the raw data underlying a paper; 64% of respondents to the survey said they should be able to. Surprisingly, 43% of those surveyed who had acted as a peer reviewer said they had held concerns about the original data.

In the Cosmos Weekly series there are numerous examples of retractions that came after the raw data was found to be falsified.

The NHMRC said it supports review by independent experts as the “international gold standard” for the assessment of research grant applications (seeking funding for future research) and manuscripts submitted for publication in scientific journals (generally reporting the outcomes of completed research).

“To achieve the highest standard, peer review must be impartial, transparent and rigorous – that is, conflicts of interest must be minimised, assessment criteria must be published, and the reviewer must undertake their review fairly and diligently against the criteria,” the NHMRC said.

“The survey is about peer review conducted by scientific journals, not funding bodies such as the National Health and Medical Research Council (NHMRC). NHMRC is not involved with peer review conducted by journals.”

In its internal, grant-based peer review process, the NHMRC says it “also surveys peer reviewers and applicants, monitors international experience in peer review processes, and discusses peer review processes with its advisory committees to identify opportunities for quality improvements.”

One respondent to the survey who offered further insight was Associate Professor Michael Brown from the School of Physics and Astronomy at Monash University. “I undertook this survey and answered truthfully, but felt it didn’t cover some of my concerns about peer review.

“Many of the ethical concerns I have with peer review I’ve encountered outside of my work as a peer reviewer. This includes papers where it appears editors selected reviewers suggested by authors, who consequently approved the publication of deeply flawed work (be it suspect data or flawed analyses).”

Ron Borland, Professor of Psychology at the University of Melbourne, criticised Cosmos for undertaking such a limited survey: “Your survey is not of any use, except to create misleading headlines.

“In my experience the vast majority of peer reviews are useful but a percentage have problems. The vast majority of the science I review, virtually none of which has commercial significance, is generally robust, (although) conclusions often go beyond what the data support.

“Overall, for a system that operates on a cottage industry mentality with reviewers doing much of the reviewing in their spare time, the system works remarkably well, but of course could be better.

“If you’re really serious about confronting the issues around peer review, something much more substantial is required.”

Dr Trevor Garnett from the University of Adelaide’s Faculty of Science was another who thought the survey might have gone further.

“There is so much wrong with the review process. Aside from the brutal aspects of having your manuscripts reviewed, the amount of time that researchers spend reviewing papers is a nightmare, especially with so many more papers published than there used to be.

“So many lesser-quality papers are pushed out by researchers because that is the only way they can keep their jobs: minimal publishable units. You have journals making considerable profits and making researchers pay to submit papers, and other researchers reviewing papers and not getting paid for it.

“For a lot of journals, even higher-ranking journals, there is pressure for reviewers and editors to accept papers rather than reject them, because the more they publish, the more money they make.”