For my next trick…
Too many medical trials move their goalposts halfway through. A new initiative aims to change that
“…Paxil was a blockbuster. It was introduced by its inventors, GlaxoSmithKline (GSK), in 1992, as an antidepressant. By the early 2000s it was earning the firm nearly $2 billion a year. It was being prescribed to millions of children and teenagers on the basis of a trial, called Study 329, which suggested it was a good treatment for depressed youngsters. But when British regulators took a second look at Study 329, in 2003, they concluded that it had been misleadingly presented. Not only did Paxil do little to help youngsters with depression, it often made things worse—to the extent of making some who took it suicidal. In 2012 the American authorities imposed the biggest fine in the history of the pharmaceutical industry, $3 billion, on GSK for misreporting data on a variety of drugs, of which Paxil was one…”
Since then, Study 329 has become one of the best-known examples of a piece of academic sleight-of-hand called “outcome switching”. This is a procedure in which the questions that a scientific study was set up to answer are swapped part way through for a different lot. Study 329 set out to measure the impact of Paxil on eight different variables, all based around how participants scored on a variety of depression tests. None showed that it was any better than a placebo sugar pill, but the researchers who wrote the paper came up with 19 new measures. Most of those showed no benefit either, but four did. In the paper, those four were presented as if they had been the main measures all along.
Outcome switching is a good example of the ways in which science can go wrong. This is a hot topic at the moment, with fields from psychology to cancer research going through a “replication crisis”, in which published results evaporate when people try to duplicate them. Now, a team of researchers at the Centre for Evidence-Based Medicine, at Oxford University, have set up a project called COMPare, in the hope of doing something about it.
Outcome switching can sometimes be done for good reasons: participants may refuse to fill in a long form, for instance, meaning that no data can be collected from it. But it can also let unscrupulous researchers go on “fishing expeditions” to prove whatever they want. Collect enough data, and correlations that look statistically significant will appear by chance. Pick them out after the event and you have, unless you re-test to demonstrate that they were not flukes, proved nothing.
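The arithmetic behind such fishing expeditions is easy to demonstrate. The sketch below (in Python, which the article itself does not use) is a minimal Monte Carlo illustration under one assumption: a drug with no real effect, tested against 27 independent outcomes — mirroring Study 329’s eight pre-specified measures plus 19 added ones. Under the null hypothesis each test’s p-value is uniformly distributed, so each outcome has a 5% chance of looking “significant” purely by luck.

```python
import random

random.seed(1)

ALPHA = 0.05       # conventional significance threshold
N_OUTCOMES = 27    # 8 pre-specified + 19 added, as in Study 329
N_SIMULATED = 100_000

# Under the null hypothesis (the drug does nothing), each test's p-value
# is uniform on [0, 1]. Count how often at least one of the 27 outcomes
# crosses the significance threshold by chance alone.
hits = 0
for _ in range(N_SIMULATED):
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if any(p < ALPHA for p in p_values):
        hits += 1

rate = hits / N_SIMULATED
print(f"Chance of at least one spurious 'significant' outcome: {rate:.1%}")
# Analytically: 1 - 0.95**27, roughly 75%
```

In other words, an ineffective drug measured this many ways will, about three times in four, hand its sponsors at least one outcome that looks like a success — which is exactly why the outcomes must be fixed in advance.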
Study 329 finished in 1998. These days, such shenanigans are supposed to be impossible. American and European regulators require trials to be registered before they begin, complete with information about what they will be investigating and how they will go about it, so that researchers can check their colleagues have done what they promised to do. But enforcement is lax. A meta-analysis—a study of studies—published in BMC Medicine in 2015 found that 31% of clinical trials did not stick to the measurements they had planned to use. Another paper, published in PLOS ONE, also in 2015, examined 137 medical trials over a six-month period and found that 18% had altered their primary outcomes halfway through the trial, while 64% had done the same with secondary, less-important measures of success.
The COMPare team’s results are similar. They analysed all the clinical trials reported between October and January in the five most prestigious medical journals—specifically, the New England Journal of Medicine, the Journal of the American Medical Association, the Lancet, Annals of Internal Medicine and the BMJ—looking for evidence of outcome switching.
That came to a total of 67 different trials. Of those, nine were perfect—they had done exactly what they had said they would do, or if they had changed their measurements, they had said so plainly and given their reasons. The other 58, though, had flaws. Between them they contained 300 outcomes that should have been reported but were not, while 357 new outcomes, not specified in the documents describing what the trial would be doing, were silently added.
Where previous research has merely described the problem, says Ben Goldacre, a British doctor and epidemiologist who is leading the project, COMPare hopes to do something about it. For every imperfect trial, the team wrote a letter to the editors of the relevant journal, pointing out the inconsistencies with the aim of setting the record straight.
So far, responses have been mixed. Of the 58 letters COMPare has sent out since the project began, seven have been published. Another 16 were rejected by the journals, which argued either that the problem was insignificant or that attentive, industrious readers could work out for themselves what had happened. The rest have seemingly been ignored.
Dr Goldacre—who has built a reputation as a crusader for open science—says some journals’ responses surprised him. He points out that all five have signed up to guidelines that require them to police outcome switching and to make sure papers they publish do not engage in it. The COMPare team plans to collate the responses into another scientific paper, to be published shortly. “I would regard this as a provocation study,” says Dr Goldacre, using the immunological meaning of the term. “When you provoke the system, the responses you get tell you a lot about how the system works. But we’re not doing this to be provocative and snide, we’re doing it to understand the pathology.”