Investigation raises multiple questions over ethics of TB trial involving 2800 babies
Major medical journal slams Oxford University team for allegedly burying unfavourable animal test results ahead of a human clinical trial. Andrew Masterson reports.
A major investigation by the British Medical Journal has slammed Oxford University researchers for allegedly misrepresenting the results of animal experiments in order to secure backing for a TB booster vaccine trial that involved 2797 human babies.
Combining a lengthy feature written by associate editor Deborah Cohen and two hard-hitting editorials, the journal outlines a saga in which preclinical trial data was allegedly used selectively – and in at least one case effectively suppressed – in order to secure funding and ethics approval for a large clinical trial.
The journal’s investigation finds that by the time applications were submitted for the trial – which began in 2009 among infants in South Africa – the failure of a monkey-based trial had been known to the researchers for several months, yet was at first not mentioned, and later acknowledged in a way that some academics suggest was selective and deceptive.
All the research centred on a booster called MVA85A, created by a team headed by Helen McShane, professor of vaccinology at the Jenner Institute in Oxford. The scientists set out to test the contention that MVA85A combined with the standard Bacillus Calmette–Guérin (BCG) TB vax worked better than BCG alone.
In applying for funding and permission to test the contention on humans, the team had already conducted a number of animal studies – using cattle, guinea pigs, mice and monkeys – and made available results which, it was claimed, demonstrated the value of the booster.
The clinical trial went ahead, with parents of the babies giving consent based on written information declaring the trial vax had “been tested in animals and was shown to be safe and effective”. This was despite evidence from monkey research that the booster might hinder rather than help the efficacy of the standard existing vaccine.
The results of the two-year clinical trial were published in 2013, with the researchers reporting in The Lancet that there were 32 TB cases in the cohort that received the booster, and 39 in the placebo group – a difference not considered statistically significant.
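The reported case counts alone hint at why the difference fell short of significance. A minimal sketch, assuming the roughly 2800 infants were split evenly between the two arms (the article reports only the case counts, so the arm sizes here are an assumption):

```python
# Hypothetical illustration: a pooled two-proportion z-test on the
# reported TB case counts. Arm sizes (~1399 each) are assumed, not
# stated in the article.
import math

def two_proportion_z(cases_a, n_a, cases_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal, via erfc.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Booster arm: 32 cases; placebo arm: 39 cases.
z, p = two_proportion_z(32, 1399, 39, 1398)
print(round(z, 2), round(p, 2))  # p well above the usual 0.05 threshold
```

With event rates this low, a gap of seven cases between arms of this size is well within chance variation.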
The BMJ reports that among potential funders, the trial result was seen as a failure to translate ostensibly successful animal results into human ones, which led to a downturn in investment across the whole TB research field.
However, the journal reveals that in 2015, an independent review, led by Paul Garner, head of the Centre for Evidence Synthesis in Global Health at the Liverpool School of Tropical Medicine, looked at eight MVA85A studies conducted between 2003 and 2010, involving 192 animals, and concluded there was no evidence that the booster improved the performance of BCG.
The review found that MVA85A did seem to increase protection in two of the trials, one in mice and the other with guinea pigs, but in both cases there were major differences in the composition or administration of the drug compared to that used later on babies.
Of the others, Garner particularly flagged his concern regarding a year-long trial that started in November 2006, conducted at the UK government’s laboratories at Porton Down, using rhesus macaque monkeys. The experiment involved 16 monkeys, of which four were unvaccinated, six were given BCG, and six given BCG and MVA85A in combination. All were then exposed to the bacterium that causes TB.
The unprotected monkeys died. Four in the BCG cohort survived, compared to just one in the combination group.
In her feature, Cohen writes, “Garner told The BMJ that although the difference between the BCG and MVA85A groups wasn’t statistically significant, the Porton Down study gave a strong signal that the MVA85A vaccine was hastening the development of TB in the macaques, raising the possibility that MVA85A was actually impairing the effectiveness of BCG.”
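Garner’s point about small groups can be illustrated with a quick calculation. This is a hypothetical sketch using a one-sided Fisher’s exact test on the reported survival counts – the choice of test is my assumption; the BMJ does not say which was used:

```python
# Hypothetical illustration: with only six monkeys per arm, even a
# 4-vs-1 survival difference cannot be distinguished from chance.
# Survival counts are from the article; the test choice is assumed.
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(>= a successes in row 1) under the hypergeometric null
    for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d
    return sum(
        comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
        for x in range(a, min(row1, col1) + 1)
    )

# BCG alone: 4 of 6 survived; BCG + MVA85A: 1 of 6 survived.
p = fisher_one_sided(4, 2, 1, 5)
print(round(p, 3))  # ~0.12, above the conventional 0.05 cut-off
```

Hence the result could not be called statistically significant, even though – as Garner argues – the direction of the signal was worrying.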
The BMJ investigation found that these results were known to McShane and her colleagues 18 months before the start of the human efficacy trial, and three months before the beginning of a smaller human-based safety trial.
With the assistance of another leading Oxford University TB researcher, the journal discovered that the adverse macaque results were left out of an MVA85A presentation at a 2008 international conference. They were also left out – along with negative results from other animal trials – of documents sent by McShane’s team to the South African Medicines Control Council and the University of Cape Town seeking approval for the clinical trial.
(The other TB researcher, Peter Beverley, made several complaints to his institution about what he regarded as serious flaws in McShane’s work. Soon afterwards he was barred from using the university’s research laboratories; he is now at the Imperial College Tuberculosis Research Unit.)
The BMJ also reveals a letter written by McShane to Oxford University in which she said the results of one mouse study showed that MVA85A “does not consistently enhance protection … to an extent that can be reliably detected with small group sizes.” The letter also dismisses a guinea pig study.
Speaking directly to the BMJ, McShane dismissed the monkey trial, calling it a “failed” experiment. This view is disputed by other researchers, including JoAnne Flynn, professor at the department of microbiology and molecular genetics at the University of Pittsburgh School of Medicine, and at least one Porton Down scientist who was involved in the experiment itself.
In an attempt to clarify matters, the BMJ made repeated requests to both Porton Down and Oxford University for the release of the protocol on which the experiment was based. The requests were knocked back.
The lack of transparency around test design and results emerges as a primary theme of the BMJ investigation, with the journal citing a “systemic failure” around non-human pre-clinical trials.
In an editorial, University of Edinburgh neurologist Malcolm Macleod comments: “We need to develop better and more systematic ways to establish when a drug is ready for clinical trials in humans – and importantly, when it is not.”
In a second editorial, Netherlands researchers Merel Ritskes-Hoitinga and Kim Wever cite poor regulation and design of animal experiments as a major cause of subsequent failures in human trials.
“Preclinical animal studies aim to establish safety and efficacy before patients are exposed to new treatments,” they write.
“However, the translational success rate from animal studies to humans is quite low, and the non-reproducibility of preclinical studies ranges between 51% and 89%. The inadequate conduct, reporting, and evaluation of animal research underpinning human trials is one reason why big promises of better outcomes for patients so often remain unfulfilled.”