This is part of a five-part series from Cosmos Weekly investigating the peer review process.
Scientific principles have – to borrow from Isaac Newton – evolved by standing on the shoulders of giants over centuries.
Science, underpinned by what’s known as the “scientific method”, is set apart from philosophy, rhetoric, journalism, opinion, and guesswork by its rigorous establishment of facts. And it is under siege.
It’s not just a science problem. George Orwell’s 1949 novel 1984 imagined a world in which facts had given way to spin: “Freedom is slavery”; “Ignorance is strength”. But, as he wrote in 1943, “In the past, people deliberately lied, or they unconsciously coloured what they wrote, or they struggled after the truth, well knowing that they must make many mistakes; but in each case they believed that ‘the facts’ existed and were more or less discoverable.”
In 2017, Kellyanne Conway, counsellor to US President Donald Trump, coined the phrase “alternative facts” in response to a question about the number of people attending Trump’s inauguration. The questioner, NBC’s Chuck Todd, replied: “Look, alternative facts are not facts. They’re falsehoods.”
Are facts losing their lustre?
Science and the scientific method aim to stand between the truth and everything else.
Most practising scientists would recognise that the scientific method has roots in the logic of Karl Popper, a 20th-century philosopher. Popper drew a line between what can and what cannot be considered science via the concept of falsifiability – to constitute science, a theory, question or idea must be testable and must hold up to scrutiny under repeated testing.
This leads to several key ideas:
- If an idea is not testable, it’s not science. For example, someone could claim that invisible, undetectable aliens walk amongst us. Since they are undetectable, there is no way to falsify this statement and therefore it doesn’t constitute science.
- Science is not infallible – scientific theories are only as good as their first failure, after which they must be replaced by a better theory.
- There is no way to prove a hypothesis, as some future test may yet falsify it; however, as a hypothesis survives repeated testing, confidence in it grows.
Much current scientific practice involves repeated testing of existing theories, discovering unexplained phenomena, postulating new or modified hypotheses and testing them, ad infinitum, until they break. The practice of science involves rigorous investigation, reporting, scrutiny and testing.
But, especially given some of the more high-profile article retractions and alleged frauds (such as Aβ*56, social spiders, microplastic-eating reef fish and Himalayan fossils), you might be led to wonder: who, these days, is doing the scrutinising, and who is doing the testing?
Scrutiny comes from other scientists in the field. Given the highly specialised nature of niche scientific disciplines these days, this has to be the case.
Peer review is considered an integral part of the publishing and grant-awarding processes, and amongst the academic community it’s the gold standard. If the journal isn’t peer-reviewed, it’s not considered scholarly. (Notably, there are some areas of academia – such as data science and programming – that don’t quite operate in this manner.)
The relationship between peer review and modern scientific practice hasn’t always been so tight.
Although it might be natural to assume that peer review has always gone hand in hand with the scientific method, in truth it’s a relatively young idea. Despite the first academic journal being published by the Royal Society of London in 1665, peer review in its current form – in which papers are reviewed by peers unaffiliated with the society associated with the journal – only arrived on the scene in the mid-1970s.
Peer review is just 50 years old.
Before then, it was essentially a group of mostly white, wealthy men corresponding with each other about their scientific exploits. These would take the form of letters, short articles and books. There were also public and private presentations, after which work would be recommended for publishing by other members of the close-knit science community.
Although peer review of the day mostly consisted of favourable assessments from the author’s closest chemistry chums or physics pals, that doesn’t necessarily mean all science publishing was poor quality or disreputable.
The journal Nature, first published on 4 November 1869, was enormously influential in energising the scientific method and peer review. That’s not surprising in a publication that has advocated for free education, theorised on flight (1891), first described X-rays (1896), devoted an entire issue to the theory of relativity (1921), and published extensively on the structure of DNA, proteins and cloning.
As Associate Professor Melinda Baldwin describes in detail, before editor David Davies made external peer review a requirement for publication in August 1973, Nature’s editors would often not bother sending papers out for external review if a paper had been submitted or reviewed by scientists the editors trusted.
An oft-discussed example of this is Watson and Crick’s monumental ‘double helix structure of DNA’ paper, submitted to Nature in 1953. It passed through the journal’s review process swiftly, in part due to its accompanying letter from the authors and other researchers at Cambridge’s Cavendish Laboratory recommending the paper. A later editor of Nature, John Maddox, went so far as to write that the Watson and Crick paper “could not have been refereed: its correctness is self-evident”. Maddox, writes Baldwin, “would also print papers he admired without external refereeing”.
Although this would never pass muster in well-regarded academic journals today, it is important to remember that these attitudes are a result of peer review’s origins. In the 1800s, “these referees were never intended to play the part of scientific gatekeepers”, writes Alex Csiszar in an article in Nature. Referees of the time were to place the research in “the landscape of knowledge”, rather than taking a “fine-tooth comb to arguments”. In the early 1900s, the idea began to take hold that editors and referees, taken as one large machinery of judgement, ought to ensure the integrity of the scientific literature as a whole, but these ideals were slow to evolve.
Large amounts of government funding poured in as scientific research and knowledge became a burgeoning industry in its own right in the years after the Second World War. According to Csiszar, peer review became synonymous with ideals of scientific trust and integrity. It “became a mighty public symbol of the claim that these powerful and expensive investigators of the natural world had procedures for regulating themselves and for producing consensus”. Thus, modern external peer review processes developed as a response “to political demands for public accountability”.
The increase in scientific activity and associated funding also resulted in a diverse and ever-specialising field, plus an increasing number of articles submitted to scientific journals for publication. Editors no longer chose articles for their length or to fill pages, nor did they need to solicit papers from scientists – they were suddenly inundated.
Science was practised more and more widely and became so specialised that editors or publishing staff alone were no longer able to decide what was worthy of publication in their journals. External review – the practice of sending out a paper to other specialists in the field for expert commentary – became ubiquitous and the sign of a trustworthy scientific journal.
Today, peer review grapples with its growing pains
Although integrity and rigour in academic publishing are clearly essential, the modern practices of academic peer review have some unfortunate consequences.
The rate of submission of papers to academic journals – particularly those considered prestigious in their respective research fields – increases every year. Publishers are swamped and struggle to find appropriate, willing and able reviewers.
For centuries, scientists have reviewed manuscripts for journals unpaid, for the public good. Today, some view it as a duty that the field’s gatekeepers are privileged to perform; some aim to burnish their credentials as an “expert” or add manuscript reviews to their CVs to bolster their reputations; others do it to get early access to novel research.
An obvious tension exists between researchers wanting to publish their work and the limited space available in journals, compounded by the often high fees for papers accepted for publication, which have anecdotally been reported to range anywhere from $US25 to $US5,000.
In an environment where the number of publications is a metric used for promotions, hiring and allocation of grants – not to mention standing in the science community – it’s easy to understand how people try to circumvent the process and why a market might pop up to support that desire.
Open the spam inbox of a typical academic’s email account and you might find messages from senders purporting to be from an academic journal, inviting the academic to publish their next research article with them. Often these are from predatory journals, which “accept articles for publication – along with author’s fees – without performing promised quality checks for issues such as plagiarism or ethical approval”, Dr Agnes Grudniewicz and co-authors explained in Nature in 2019.
It’s articles from these journals – similar to the mainstream book trade’s vanity publishers – that you will often see passed around social media as ‘science’ underpinning the latest conspiracy theory, which, of course, further muddies public understanding of, and trust in, the processes of science.
The CRAP paper
An almost unbelievable example of the lack of a rigorous review process in journals like these is the publication of an article known as the ‘CRAP paper’. In 2009, Philip Davis, a graduate student at Cornell University, US, and Kent Anderson from The New England Journal of Medicine submitted a paper to The Open Information Science Journal, published by Bentham Science Publishers. The article, titled “Deconstructing Access Points”, was composed entirely of text created by a nonsense-generating computer program. It’s called the CRAP paper because the authors (who wrote under the names David Phillips and Andrew Kent) listed their institutional affiliation as the Centre for Research in Applied Phrenology (phrenology being a debunked method of analysing a person’s character via bumps on the skull).
It was accepted by the journal, for a publication fee of $US800.
Davis declined their offer and withdrew the paper.
Davis’s experiment could be considered a bit of a practical joke – albeit one with a real point. The consequences, however, are very serious when a peer-review process relies on the goodwill of external reviewers to act as the integrity gatekeepers of science, and when that process is entwined with individual researchers’ academic progression and standing.
Every day, articles are posted on sites such as PubPeer and on social media calling out instances of manipulated images that have evaded pre-publication detection by external peer reviewers. Every day, new retractions are listed on Retraction Watch, bringing attention to the latest papers of concern or retraction – often relating to publications many years old.
From allegedly manipulated images ‘proving’ that Aβ*56 causes Alzheimer’s disease, to microplastics befuddling tropical reef fish, the very flavour of science is beginning to taste a bit rotten…
This story is part of a five-part series from Cosmos Weekly on peer review. Read the other four:
- Is it time to review, the review?
- When peer review fails, people get hurt
- Putting science under the microscope reveals not all is well
- Scientific fraud, poor research and honest mistakes lead to thousands of retractions
Next Monday: a video in which Clare Kenyon interviews Elisabeth Bik and Ivan Oransky. Subscribe to our free daily newsletter to be the first to see it.