STAT: New rule on clinical trial reporting doesn’t go far enough


The clinical trial industry, which I work in, is in crisis.

Roughly half of clinical trials go unreported. Industry-sponsored trials are four times more likely to produce positive results than non-industry trials. And even when trials are registered, investigators usually fail to share their study results: nearly 90 percent of trials on ClinicalTrials.gov lack results.

Failure to report clinical trial results puts patients in danger. Here’s one example: GlaxoSmithKline, the maker of the antidepressant Paxil, recently paid $3 billion in a settlement over charges that included failing to disclose trial data showing that Paxil was not only no more effective than placebo but was also linked to increased suicide attempts among teenagers. The effectiveness of statins, the anti-flu medicine Tamiflu, antipsychotics, and other drugs has been called into question due to improperly reported data. Without complete disclosure of trial results, physicians can’t make informed decisions for their patients.

A final rule recently issued by the Department of Health and Human Services now requires that applicable clinical trials of FDA-regulated drugs, biologics, and devices be registered and their summary results submitted to ClinicalTrials.gov. A complementary policy from the National Institutes of Health extends the same registration and results-reporting requirements to all NIH-funded trials, including those not covered by the final rule.

Trials that go unreported are subject to fines of up to $11,833 per day. Researchers have 90 days after the rule takes effect on January 18, 2017, to comply with it. Excellent summaries of the rule have been published by the NIH and in the New England Journal of Medicine.

The final rule should help address some of the troubling trends in the clinical trial industry. It clears up ambiguous reporting requirements and explicitly requires investigators to submit clinical trial results, adverse events, and statistical methods. These are steps in the right direction that could limit the unscientific practices plaguing the trial industry.

But the final rule doesn’t go far enough, mainly because the FDA lacks the staff and the political will to adequately enforce it. As STAT reported in December 2015, the FDA had never levied a single fine for clinical trial reporting violations. Representatives from the FDA cite legal complexities and lack of employees, yet critics have also pointed out that the FDA is effectively on the pharmaceutical industry’s payroll. Under the Prescription Drug User Fee Act, the FDA supplements its budget by charging pharmaceutical companies drug application fees that totaled $855 million in fiscal year 2015.

The current FDA commissioner, Dr. Robert Califf, has said that the FDA will not be adding staff to enforce the final rule. That’s a mistake. How else can we expect the rule to be enforced? I work in a research group that conducts more than a dozen clinical trials and know firsthand that researchers have little impetus to report their trials unless there are strong incentives to do so, like enforcement and the threat of fines.

In a perfect world, the FDA would receive more funding to hire employees so it could independently enforce this policy. In the meantime, researchers can check the reporting practices of their own institutions or sign a petition to support the AllTrials campaign. Another project called OpenTrials, a collaboration between Open Knowledge International and the University of Oxford DataLab, aims to “locate, match, and share all publicly accessible data and documents, on all trials conducted, on all medicines and other treatments, globally.” It is seeking volunteers to contribute clinical trial data.
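For researchers who want to do that checking themselves, a little scripting against ClinicalTrials.gov goes a long way. What follows is a minimal sketch, not part of the original STAT piece: it assumes the public ClinicalTrials.gov study-record endpoint and a top-level "hasResults" flag in each record, both of which should be verified against the current API documentation, and the NCT IDs shown are placeholders.

# Minimal sketch: check whether registered trials have results posted on ClinicalTrials.gov.
# Assumptions to verify against the current API docs: the /api/v2/studies/{nct_id}
# endpoint returns a JSON study record containing a top-level boolean "hasResults".
import requests

def has_posted_results(nct_id: str) -> bool:
    """Return True if ClinicalTrials.gov shows posted results for this NCT ID."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    study = response.json()
    return bool(study.get("hasResults", False))

if __name__ == "__main__":
    # Placeholder NCT IDs; replace with the trials your institution sponsors.
    for nct_id in ["NCT01234567", "NCT07654321"]:
        status = "results posted" if has_posted_results(nct_id) else "no results posted"
        print(f"{nct_id}: {status}")

Looped over every trial a research group has registered, a check like this gives a quick, if rough, picture of how much of its own work remains unreported.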

I know from personal experience that clinical trial reporting can be tedious and seemingly unrewarding work. But the transparent exchange of scientific data is integral to evidence-based medicine and public health. While the new final rule is a step in the right direction, the public and the research community also need to support efforts like AllTrials and OpenTrials.

Chris Cai is a clinical research coordinator at Massachusetts General Hospital in Boston.


Dr David Healy… “Study 329 Trick, Treat or Treximet”…


http://davidhealy.org/study-329-trick-treat-or-treximet/

Study 329 Trick, Treat or Treximet

October 31, 2016 | 21 Comments

The Paxil/Seroxat Study 329 Story in 2016: Project Censored: Downplayed stories illuminate larger patterns in inequality, spying, the environment and corporate influence…


http://www.sfreporter.com/santafe/article-12640-project-censored.html

Crisis in Evidence-Based Medicine

The role of science in improving human health has been one of humanity’s greatest achievements, but the profit-oriented influence of the pharmaceutical industry has created a crisis situation: research simply cannot be trusted. Burying truth for profit is a recurrent theme for Project Censored. The top story in 1981 concerned fraudulent testing from a single lab responsible for one-third of the toxicity and cancer testing of chemicals in America. But this problem is much more profound.

“Something has gone fundamentally wrong,” said Richard Horton, editor of The Lancet, commenting on a UK symposium on the reproducibility and reliability of biomedical research: “Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. … The apparent endemicity of bad research behavior is alarming.”

Horton’s conclusion echoed that of Marcia Angell, a former editor of the New England Journal of Medicine, who went public in 2009.

A classic case was Study 329 in 2001, which reported that paroxetine (Paxil in the United States and Seroxat in the United Kingdom) was safe and effective for treating depressed children and adolescents, leading doctors to prescribe Paxil to more than 2 million US children and adolescents by the end of 2002 before the study was called into question. The company responsible (now GlaxoSmithKline) agreed to pay $3 billion in 2012, the “largest healthcare fraud settlement in US history,” according to the US Department of Justice.

Nonetheless, the study has not been retracted or corrected, and “none of the authors have been disciplined,” Project Censored points out. This, despite a major reanalysis which “‘starkly’ contradicted the original report’s claims.” The reanalysis was seen as the first major success of a new open data initiative known as Restoring Invisible and Abandoned Trials.

While Project Censored noted one Washington Post story on the reanalysis, there was only passing mention of the open data movement. “Otherwise, the corporate press ignored the reassessment of the paroxetine study,” and beyond that, “Richard Horton’s Lancet editorial received no coverage in the US corporate press.”

Source: The Lancet 385, no. 9976, 2015; Cooper, Charlie, “Anti-Depressant was Given to Millions of Young People ‘After Trials Showed It was Dangerous,’” The Independent, 2015; Boseley, Sarah, “Seroxat Study Under-Reported Harmful Effects on Young People, Say Scientists,” The Guardian, 2015.

Seroxat Study 329: The Taper Phase (New Post on Dr Healy’s Blog)


http://davidhealy.org/study-329-taper-phase/

Study 329 Taper Phase

October 10, 2016 | 1 Comment

Psychiatrist Mickey Nardo’s Latest Post On GSK Study 329…


http://1boringoldman.com/index.php/2016/06/22/out-of-this-mess/

Posted on Wednesday 22 June 2016

David Healy, Jon Jureidini, Bernard Carroll, and Ben Goldacre

Some day there’ll be a best seller, a popular science book that will tell a story currently still in the making – and near the beginning the book will have a chapter about the interchange between David Healy and Charlie Nemeroff in Toronto in 2000 when Healy lost a new job because he talked about the potentially fatal side effects of SSRI [and Nemeroff, then boss of bosses] undermined his job change in retaliation. And there will be a piece about how Jon Jureidini, a pediatric psychiatrist, publicly protested a published study in 2003 that fraudulently claimed that a SSRI was safe and effective in adolescent depression. And that best-selling-author-to-be will add the efforts of Bernard Carroll and Bob Rubin in 2003 and later 2006 in exposing that same Charlie Nemeroff and others for promoting treatments they had a personal financial interest in without acknowledging those interests. Then there’s Ben Goldacre who will be cited for calling attention to the essential role of data transparency in bringing the truth to light with the AllTrials initiative, or getting at a major mechanism of deceit with his COMPare project. There will be so many more who will figure into this unfolding story. But right now, in spite of a lot of prequels, that book can’t be written because the story’s not over yet. Sure enough, there’s been progress but the main story line continues, lacking an in·place general solution…

Recently, the pioneers have been mighty busy. In September, David Healy, Jon Jureidini, and their colleagues republished the 2001 study that had become a paradigm for a jury-rigged Clinical Trial report, reanalyzing it from the original dataset using the authors’ own Protocol, and found that despite the earlier claims, the drug was neither effective nor safe in adolescents [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. Then in March, Jon Jureidini and some other colleagues were back with another SSRIs·in·adolescents·study, this time with access to internal documents showing again how a negative Clinical Trial had been published as positive [The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance]. This study had been used as a basis for FDA Approval, and used the same technique of altering the a priori protocol – something Ben Goldacre’s COMPare project calls “outcome switching.”

We now know that this problem of misreported Clinical Trials will never be solved so long as the raw data remains hidden. Without access to that information, we’ll never put a stop to these dark days of pseudoscience in the medical literature. But that’s not enough, because it’s the analysis of that data where the misbehavior has been centered. Today, one of those pioneers has a blog post that suggests a viable next step, a concrete general solution to the problem:
Health Care Renewal
by Bernard Carroll
June 22, 2016

There is a disconnection between the FDA’s drug approval process and the reports we see in medical journals. Pharmaceutical corporations exploit this gap through adulterated, self-serving analyses, and the FDA sits on its hands. I suggest we need a new mechanism to fix the problem – by independent analyses of clinical trials data.

When they analyze and publish their clinical trials in medical journals, pharmaceutical corporations have free rein to shape the analyses. The FDA conducts independent analyses of the data submitted by the corporations, and it may deny or delay approval. But the FDA does not challenge the reports that flood our medical journals, both before and after FDA approval. It is no secret that these publications are routinely biased for marketing effect, but the FDA averts its gaze. That failure of the FDA – a posture known as enforcement discretion – has been well documented. The question is why? At the same time, exposing the biases has been difficult for outsiders because the data are considered proprietary secrets.
This is just a teaser. The whole post is online. In the next section, Carroll outlines the problem using Jureidini’s latest paper as a case example, then proposes a solution:
A Specific Proposal
Our primary defense against such perversions of scientific reporting is fidelity to the registered IND protocol and plan of statistical analysis. The solution is not hard to see: We need independent analyses of clinical trials because we cannot trust the corporate analyses. In effect, we need something like the Underwriters Laboratory to verify the statistical analyses of clinical trials. Nobody takes the manufacturing corporation’s word for it concerning the safety and performance of X-ray machines or cardiac defibrillators. Why treat the statistical analysis of drug trials any differently? It’s highly technical work. Who should assume that responsibility? Why not the FDA? After all, they alone see all the data. My specific proposal is for Congress to mandate that the FDA analyze all clinical trials data strictly according to the registered protocols and analysis plans. That requirement should apply to new drugs or to approved drugs being tested for new indications. It should apply also to publications reporting new trials of approved drugs. Corporations and investigators should be prohibited from publishing their own in-house statistical analyses unless verified by FDA oversight.
There follows a section on why the time to act is at hand and the potential counterarguments:
It is time for Congress to grasp this nettle. The time for enforcement discretion is past, and we need Congress either to direct the FDA to act or to create a new mechanism of oversight. To do nothing would be unthinkable.

There are other suggested solutions beginning to appear, and I’ll cover some of them in subsequent blog posts. But this one comes first because it’s the one that makes the most sense to me. In all of the work that went into our Paxil Study 329 paper, where my part was the efficacy analysis, I became convinced that insisting that the analyses follow the a priori Protocol and Statistical Analysis Plan to the letter is the only way to ensure that the analysis is worthwhile. After we finished our paper, I went back and checked, and every questionable trial I’d looked at had suspicious variables. My problem was that finding those Protocols was spotty. My hat’s off to Goldacre’s team for being able to run them down. The other ubiquitous problem was from inappropriate statistical testing. So Carroll’s proposal seems right as rain. The FDA has the capabilities to do the analyses, and already does them in many cases.

I picked the four investigators up top, not because they work together, or even necessarily agree. I picked them because each has been a central part of my own growing understanding of a way out of this mess. My way of saying “thanks!”

Update: Dr. Carroll’s proposal was cross posted on Naked Capitalism with some interesting comments.

Doctor Ben Goldacre Knows Best…



Note to Ben Goldacre (from me):



“….Missing data poisons the well for everybody. If proper trials are never done, if trials with negative results are withheld, then we simply cannot know the true effects of the treatments we use. Evidence in medicine is not an abstract academic preoccupation. When we are fed bad data, we make the wrong decisions, inflicting unnecessary pain and suffering, and death, on people just like us….”

Ben Goldacre 2012

http://www.bibliotecapleyades.net/ciencia/ciencia_industrybigpharma105.htm


“…As one of the authors of the RIAT restoration of Paxil Study 329 who was around for the whole process, I don’t actually know the answer to Leonie’s question about why it was so hard to get our paper published.

I don’t know if a Conflict of Interest had anything to do with that, but in a way, that’s the whole point – when there’s a significant Conflict of Interest, you can’t ever really know.

It’s a variable that can’t be evaluated.

So her question stands whether it can be answered or not. Should the original Study 329 report be retracted?

That’s not in our hands.

My choice would be that it should never have been published in the first place..”

(Comment by Mickey Nardo, one of the authors of the BMJ-published RIAT study of GSK’s Study 329 on paroxetine in adolescents, posted on “Club 329”, David Healy’s blog.)



There is a fascinating debate happening over on Dr David Healy’s blog about a lecture that Dr Ben Goldacre gave at Dublin’s Royal College of Surgeons last week. It all started when patient activist and blogger Leonie Fennell, who attended the lecture, published her opinion of Ben’s talk in a post on Dr Healy’s blog titled ‘Club 329’. The post seems to be sparking some very interesting reactions, not least from Ben Goldacre himself (who, incidentally, has already accused Dr Healy of misrepresenting his views because the post is published on Healy’s web page).

Personally, I don’t think that Leonie misrepresented Ben at all. Ironically, Ben claims that the audio of the lecture confirms this misrepresentation, when in fact it does the opposite: it upholds and confirms Leonie’s views.

You can listen to the audio here:

https://soundcloud.com/truthman-thirty/bg-11-06-2016-1333

Ben released it on Twitter, and I thought it might be helpful (for clarity) to cut it to the exact parts which are under debate.

The following is Leonie’s full blog post, on Dr Healy’s blog.

I will follow underneath it with some commentary.


http://davidhealy.org/club-329-part-1/#comments

Club 329: Part 1

June 7, 2016 | 24 Comments

 


https://seroxatsecrets.wordpress.com/2012/09/22/the-drugs-dont-work-a-modern-medical-scandal-by-dr-ben-goldacre/

The doctors prescribing the drugs don’t know they don’t do what they’re meant to. Nor do their patients. The manufacturers know full well, but they’re not telling. Drugs are tested by their manufacturers, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that exaggerate the benefits.

Reboxetine is a drug I have prescribed. Other drugs had done nothing for my patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than a placebo, and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year, around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and agreed it was the right treatment to try next. I signed a prescription.

But we had both been misled. In October 2010, a group of researchers was finally able to bring together all the data that had ever been collected on reboxetine, both from trials that were published and from those that had never appeared in academic papers. When all this trial data was put together, it produced a shocking picture. Seven trials had been conducted comparing reboxetine against a placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal, for doctors and researchers to read. But six more trials were conducted, in almost 10 times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials was published. I had no idea they existed.

It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature; but when we saw the unpublished studies, it turned out that patients were more likely to have side-effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side-effects, if they were taking reboxetine rather than one of its competitors.

I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them, I discussed them with the patient and we made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill and, worse, it does more harm than good. As a doctor, I did something that, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.

Nobody broke any law in that situation, reboxetine is still on the market and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us. These problems have been protected from public scrutiny because they’re too complex to capture in a soundbite. This is why they’ve gone unfixed by politicians, at least to some extent; but it’s also why it takes detail to explain. The people you should have been able to trust to fix these problems have failed you, and because you have to understand a problem properly in order to fix it, there are some things you need to know.

Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug’s true effects. Regulators see most of the trial data, but only from early on in a drug’s life, and even then they don’t give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.

In their 40 years of practice after leaving medical school, doctors hear about what works ad hoc, from sales reps, colleagues and journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are, too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all.

Now, on to the details.

In 2010, researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – then measured two key features: were they positive, and were they funded by industry? They found more than 500 trials in total: 85% of the industry-funded studies were positive, but only 50% of the government-funded trials were. In 2007, researchers looked at every published trial that set out to explore the benefits of a statin. These cholesterol-lowering drugs reduce your risk of having a heart attack and are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. They found that industry-funded trials were 20 times more likely to give results favouring the test drug.

These are frightening results, but they come from individual studies. So let’s consider systematic reviews into this area. In 2003, two were published. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies in the intervening four years: it found 20 more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.

It turns out that this pattern persists even when you move away from published academic papers and look instead at trial reports from academic conferences. James Fries and Eswar Krishnan, at the Stanford University School of Medicine in California, studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug.

In general, the results section of an academic paper is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The “ranges” are given, subgroups are explored, statistical tests conducted, and each detail is described in table form, and in shorter narrative form in the text. This lengthy process is usually spread over several pages. In Fries and Krishnan (2004), this level of detail was unnecessary. The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence:

“The results from every randomised controlled trial (45 out of 45) favoured the drug of the sponsor.”

How does this happen? How do industry-sponsored trials almost always manage to get a positive result? Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good. But after all these methodological quirks comes one very simple insult to the integrity of the data. Sometimes, drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them.

Because researchers are free to bury any result they please, patients are exposed to harm on a staggering scale throughout the whole of medicine. Doctors can have no idea about the true effects of the treatments they give. Does this drug really work best, or have I simply been deprived of half the data? No one can tell. Is this expensive drug worth the money, or has the data simply been massaged? No one can tell. Will this drug kill patients? Is there any evidence that it’s dangerous? No one can tell. This is a bizarre situation to arise in medicine, a discipline in which everything is supposed to be based on evidence.

And this data is withheld from everyone in medicine, from top to bottom. Nice, for example, is the National Institute for Health and Clinical Excellence, created by the British government to conduct careful, unbiased summaries of all the evidence on new treatments. It is unable either to identify or to access data on a drug’s effectiveness that’s been withheld by researchers or companies: Nice has no more legal right to that data than you or I do, even though it is making decisions about effectiveness, and cost-effectiveness, on behalf of the NHS, for millions of people.

In any sensible world, when researchers are conducting trials on a new tablet for a drug company, for example, we’d expect universal contracts, making it clear that all researchers are obliged to publish their results, and that industry sponsors – which have a huge interest in positive results – must have no control over the data. But, despite everything we know about industry-funded research being systematically biased, this does not happen. In fact, the opposite is true: it is entirely normal for researchers and academics conducting industry-funded trials to sign contracts subjecting them to gagging clauses that forbid them to publish, discuss or analyse data from their trials without the permission of the funder.

This is such a secretive and shameful situation that even trying to document it in public can be a fraught business. In 2006, a paper was published in the Journal of the American Medical Association (Jama), one of the biggest medical journals in the world, describing how common it was for researchers doing industry-funded trials to have these kinds of constraints placed on their right to publish the results. The study was conducted by the Nordic Cochrane Centre and it looked at all the trials given approval to go ahead in Copenhagen and Frederiksberg. (If you’re wondering why these two cities were chosen, it was simply a matter of practicality: the researchers applied elsewhere without success, and were specifically refused access to data in the UK.) These trials were overwhelmingly sponsored by the pharmaceutical industry (98%) and the rules governing the management of the results tell a story that walks the now familiar line between frightening and absurd.

For 16 of the 44 trials, the sponsoring company got to see the data as it accumulated, and in a further 16 it had the right to stop the trial at any time, for any reason. This means that a company can see if a trial is going against it, and can interfere as it progresses, distorting the results. Even if the study was allowed to finish, the data could still be suppressed: there were constraints on publication rights in 40 of the 44 trials, and in half of them the contracts specifically stated that the sponsor either owned the data outright (what about the patients, you might say?), or needed to approve the final publication, or both. None of these restrictions was mentioned in any of the published papers.

When the paper describing this situation was published in Jama, Lif, the Danish pharmaceutical industry association, responded by announcing, in the Journal of the Danish Medical Association, that it was “both shaken and enraged about the criticism, that could not be recognised”. It demanded an investigation of the scientists, though it failed to say by whom or of what. Lif then wrote to the Danish Committee on Scientific Dishonesty, accusing the Cochrane researchers of scientific misconduct. We can’t see the letter, but the researchers say the allegations were extremely serious – they were accused of deliberately distorting the data – but vague, and without documents or evidence to back them up.

Nonetheless, the investigation went on for a year. Peter Gøtzsche, director of the Cochrane Centre, told the British Medical Journal that only Lif’s third letter, 10 months into this process, made specific allegations that could be investigated by the committee. Two months after that, the charges were dismissed. The Cochrane researchers had done nothing wrong. But before they were cleared, Lif copied the letters alleging scientific dishonesty to the hospital where four of them worked, and to the management organisation running that hospital, and sent similar letters to the Danish medical association, the ministry of health, the ministry of science and so on. Gøtzsche and his colleagues felt “intimidated and harassed” by Lif’s behaviour. Lif continued to insist that the researchers were guilty of misconduct even after the investigation was completed.

Paroxetine is a commonly used antidepressant, from the class of drugs known as selective serotonin reuptake inhibitors or SSRIs. It’s also a good example of how companies have exploited our long-standing permissiveness about missing trials, and found loopholes in our inadequate regulations on trial disclosure.

To understand why, we first need to go through a quirk of the licensing process. Drugs do not simply come on to the market for use in all medical conditions: for any specific use of any drug, in any specific disease, you need a separate marketing authorisation. So a drug might be licensed to treat ovarian cancer, for example, but not breast cancer. That doesn’t mean the drug doesn’t work in breast cancer. There might well be some evidence that it’s great for treating that disease, too, but maybe the company hasn’t gone to the trouble and expense of getting a formal marketing authorisation for that specific use. Doctors can still go ahead and prescribe it for breast cancer, if they want, because the drug is available for prescription, it probably works, and there are boxes of it sitting in pharmacies waiting to go out. In this situation, the doctor will be prescribing the drug legally, but “off-label”.

Now, it turns out that the use of a drug in children is treated as a separate marketing authorisation from its use in adults. This makes sense in many cases, because children can respond to drugs in very different ways and so research needs to be done in children separately. But getting a licence for a specific use is an arduous business, requiring lots of paperwork and some specific studies. Often, this will be so expensive that companies will not bother to get a licence specifically to market a drug for use in children, because that market is usually much smaller.

So it is not unusual for a drug to be licensed for use in adults but then prescribed for children. Regulators have recognised that this is a problem, so recently they have started to offer incentives for companies to conduct more research and formally seek these licences.

When GlaxoSmithKline applied for a marketing authorisation in children for paroxetine, an extraordinary situation came to light, triggering the longest investigation in the history of UK drugs regulation. Between 1994 and 2002, GSK conducted nine trials of paroxetine in children. The first two failed to show any benefit, but the company made no attempt to inform anyone of this by changing the “drug label” that is sent to all doctors and patients. In fact, after these trials were completed, an internal company management document stated: “It would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the profile of paroxetine.” In the year after this secret internal memo, 32,000 prescriptions were issued to children for paroxetine in the UK alone: so, while the company knew the drug didn’t work in children, it was in no hurry to tell doctors that, despite knowing that large numbers of children were taking it. More trials were conducted over the coming years – nine in total – and none showed that the drug was effective at treating depression in children.

It gets much worse than that. These children weren’t simply receiving a drug that the company knew to be ineffective for them; they were also being exposed to side-effects. This should be self-evident, since any effective treatment will have some side-effects, and doctors factor this in, alongside the benefits (which in this case were nonexistent). But nobody knew how bad these side-effects were, because the company didn’t tell doctors, or patients, or even the regulator about the worrying safety data from its trials. This was because of a loophole: you have to tell the regulator only about side-effects reported in studies looking at the specific uses for which the drug has a marketing authorisation. Because the use of paroxetine in children was “off-label”, GSK had no legal obligation to tell anyone about what it had found.

People had worried for a long time that paroxetine might increase the risk of suicide, though that is quite a difficult side-effect to detect in an antidepressant. In February 2003, GSK spontaneously sent the MHRA a package of information on the risk of suicide on paroxetine, containing some analyses done in 2002 from adverse-event data in trials the company had held, going back a decade. This analysis showed that there was no increased risk of suicide. But it was misleading: although it was unclear at the time, data from trials in children had been mixed in with data from trials in adults, which had vastly greater numbers of participants. As a result, any sign of increased suicide risk among children on paroxetine had been completely diluted away.

Later in 2003, GSK had a meeting with the MHRA to discuss another issue involving paroxetine. At the end of this meeting, the GSK representatives gave out a briefing document, explaining that the company was planning to apply later that year for a specific marketing authorisation to use paroxetine in children. They mentioned, while handing out the document, that the MHRA might wish to bear in mind a safety concern the company had noted: an increased risk of suicide among children with depression who received paroxetine, compared with those on dummy placebo pills.

This was vitally important side-effect data, being presented, after an astonishing delay, casually, through an entirely inappropriate and unofficial channel. Although the data was given to completely the wrong team, the MHRA staff present at this meeting had the wit to spot that this was an important new problem. A flurry of activity followed: analyses were done, and within one month a letter was sent to all doctors advising them not to prescribe paroxetine to patients under the age of 18.

How is it possible that our systems for getting data from companies are so poor, they can simply withhold vitally important information showing that a drug is not only ineffective, but actively dangerous? Because the regulations contain ridiculous loopholes, and it’s dismal to see how GSK cheerfully exploited them: when the investigation was published in 2008, it concluded that what the company had done – withholding important data about safety and effectiveness that doctors and patients clearly needed to see – was plainly unethical, and put children around the world at risk; but our laws are so weak that GSK could not be charged with any crime.

After this episode, the MHRA and EU changed some of their regulations, though not adequately. They created an obligation for companies to hand over safety data for uses of a drug outside its marketing authorisation; but ridiculously, for example, trials conducted outside the EU were still exempt. Some of the trials GSK conducted were published in part, but that is obviously not enough: we already know that if we see only a biased sample of the data, we are misled. But we also need all the data for the more simple reason that we need lots of data: safety signals are often weak, subtle and difficult to detect. In the case of paroxetine, the dangers became apparent only when the adverse events from all of the trials were pooled and analysed together.

That leads us to the second obvious flaw in the current system: the results of these trials are given in secret to the regulator, which then sits and quietly makes a decision. This is the opposite of science, which is reliable only because everyone shows their working, explains how they know that something is effective or safe, shares their methods and results, and allows others to decide if they agree with the way in which the data was processed and analysed. Yet for the safety and efficacy of drugs, we allow it to happen behind closed doors, because drug companies have decided that they want to share their trial results discreetly with the regulators. So the most important job in evidence-based medicine is carried out alone and in secret. And regulators are not infallible, as we shall see.

Rosiglitazone was first marketed in 1999. In that first year, Dr John Buse from the University of North Carolina discussed an increased risk of heart problems at a pair of academic meetings. The drug’s manufacturer, GSK, made direct contact in an attempt to silence him, then moved on to his head of department. Buse felt pressured to sign various legal documents. To cut a long story short, after wading through documents for several months, in 2007 the US Senate committee on finance released a report describing the treatment of Buse as “intimidation”.

But we are more concerned with the safety and efficacy data. In 2003 the Uppsala drug monitoring group of the World Health Organisation contacted GSK about an unusually large number of spontaneous reports associating rosiglitazone with heart problems. GSK conducted two internal meta-analyses of its own data on this, in 2005 and 2006. These showed that the risk was real, but although both GSK and the FDA had these results, neither made any public statement about them, and they were not published until 2008.

During this delay, vast numbers of patients were exposed to the drug, but doctors and patients learned about this serious problem only in 2007, when cardiologist Professor Steve Nissen and colleagues published a landmark meta-analysis. This showed a 43% increase in the risk of heart problems in patients on rosiglitazone. Since people with diabetes are already at increased risk of heart problems, and the whole point of treating diabetes is to reduce this risk, that finding was big potatoes. Nissen’s findings were confirmed in later work, and in 2010 the drug was either taken off the market or restricted, all around the world.

Now, my argument is not that this drug should have been banned sooner because, as perverse as it sounds, doctors do often need inferior drugs for use as a last resort. For example, a patient may develop idiosyncratic side-effects on the most effective pills and be unable to take them any longer. Once this has happened, it may be worth trying a less effective drug if it is at least better than nothing.

The concern is that these discussions happened with the data locked behind closed doors, visible only to regulators. In fact, Nissen’s analysis could only be done at all because of a very unusual court judgment. In 2004, when GSK was caught out withholding data showing evidence of serious side-effects from paroxetine in children, their bad behaviour resulted in a US court case over allegations of fraud, the settlement of which, alongside a significant payout, required GSK to commit to posting clinical trial results on a public website.

Nissen used the rosiglitazone data, when it became available, and found worrying signs of harm, which they then published to doctors – something the regulators had never done, despite having the information years earlier. If this information had all been freely available from the start, regulators might have felt a little more anxious about their decisions but, crucially, doctors and patients could have disagreed with them and made informed choices. This is why we need wider access to all trial reports, for all medicines.

Missing data poisons the well for everybody. If proper trials are never done, if trials with negative results are withheld, then we simply cannot know the true effects of the treatments we use. Evidence in medicine is not an abstract academic preoccupation. When we are fed bad data, we make the wrong decisions, inflicting unnecessary pain and suffering, and death, on people just like us.

Dr David Healy’s Blog: Restoring Study 329: Letter to BMJ


http://davidhealy.org/restoring-study-329-letter-to-bmj/

Restoring Study 329: Letter to BMJ

May 2, 2016 | Reply

Seroxat/Paxil Study 329: Republic to Empire



“….I asked Rob whether his company would have launched an internal damage limitation exercise like GSK/SKB did around the Panorama programs about paroxetine: Of course he said – and that it would have probably been successful. Employees would have been reassured that the company – their company- would never have deliberately harmed children. Success of the exercise would have been sustained partly through loyalty to the company but mainly because no one could afford to think too closely about whether it was true. Ask too many questions and you would be out on your ear with no chance of getting a reference for a future job…”

Sally McGregor, 2016, Dr David Healy’s Blog.

This is an interesting blog post by Sally McGregor, on Prof Healy’s blog.

Basically it seems that Sally is aiming to give readers a view from the other side of the pharma fence (so to speak). Regular readers will know very well what side of the fence I’m on.

In her post, Sally explains the state (and mindset) of the industry at the time, and she puts into context how bad stuff like Seroxat Study 329, or the Zyprexa scandal, happens…

And, effectively, how unethical decisions from the top trickle down, leading to harm to consumers…

I’m not quite sure I completely go along with the view from this perspective (for reasons which I will explain later); however, it’s an interesting and insightful read nonetheless, and well worth your time…

 


 

http://davidhealy.org/study-329-republic-to-empire/

Study 329: Republic to Empire

March 16, 2016 | 1 Comment

Seroxat Study 329: How open data can improve medicine


How open data can improve medicine

A study arguing an antidepressant isn’t safe for teens has researchers calling for open data

Christopher Labos



Jon Jureidini, a research leader of the University of Adelaide’s critical and ethical mental health research group, remembers when he first read Study 329. The paper, published 14 years ago in the Journal of the American Academy of Child and Adolescent Psychiatry, found that antidepressants, in particular paroxetine (brand name Paxil), were safe and effective. Its results would be used as proof that such drugs could and should be used by adolescents. A staunch opponent of prescribing antidepressants to teens, Jureidini assumed that he had it all wrong. But the more he read, the more he started to see problems with the paper. “It was seemingly deliberately misleading. And I got more and more worried about it,” he says.

But now, Jureidini and an international group of researchers have reassessed the original findings after a long and protracted fight with the drug’s manufacturer to gain access to the original raw data. In a study published in the British Medical Journal, they argue that scientific data published all of those years ago, claiming the antidepressant was safe and effective, were manipulated to cast the drugs in a more favourable light. They found that the drug was not only ineffective in teens, it also increased the risk of suicidal thoughts and behaviour. The maker of Paxil, Glaxo­SmithKline (GSK), disagrees with the BMJ findings and stands behind Study 329, saying “it accurately reflects the honestly held views of the clinical investigator authors.”

The saga of Study 329 has had many twists and turns. Its publication led to soaring numbers of off-label prescriptions for Paxil to adolescents and children. But almost immediately, the criticism from scientists started. Many questioned the statistical analysis. In 2003, reports linking Paxil to increased risk of suicidal thinking, suicide attempts or self-harm led GSK, following discussions with Health Canada, to issue a warning that Paxil should not be used in anyone under 18 until further information was available.

In 2004, GSK reached a US$2.5-million settlement with the New York attorney general over misrepresenting data on prescribing Paxil to juveniles. In 2012, it paid US$3 billion to the U.S. Department of Justice, to, in part, resolve liability over how it “unlawfully promoted Paxil for treating depression in patients under age 18.”


As part of the New York settlement, GSK was required to post clinical study reports from the trials on its website. Scientists should have been able to evaluate the Study 329 data. But they couldn’t. Peter Doshi of the University of Maryland’s school of pharmacy noticed that some of the appendices were missing. After prodding from the attorney general, GSK uploaded more pages, but one appendix was still missing. After lengthy negotiations, Jureidini and his collaborators got to see the missing data, but only under restrictive conditions. They had to view the 77,000 pages via a website that only allowed one person to view the data at a time and didn’t allow them to print, annotate, or sort. It took 1,000 hours of work to review just one-third of the documents. “It needn’t have been that hard,” says Doshi. “The amount of effort, the amount of letters, the amount of lobbying, the difficulty of using the data sources once they’re provided. You see many obstacles to making this go smoothly.”

Doshi explains that, historically, scientists haven’t had access to raw data because it is considered “confidential business information.” The issue here, scientists argue, is that without independent confirmation, it becomes too easy to manipulate data. In 2013, Doshi and a group of other researchers founded the Restoring Invisible and Abandoned Trials (RIAT) initiative. RIAT is trying to make study data openly available to researchers.

Jureidini and his team’s reanalysis of Study 329 found that the discrepancy between the findings of the original study and the current one lay in how adverse effects were recorded. In the original, serious events were grouped with more benign symptoms. Episodes of headaches were lumped together with psychiatric events under the category of “nervous system.” After examining the raw data of 93 participants, Jureidini and his colleagues found 11 suicide or self-harm attempts in patients taking Paxil compared to one definite case in those taking a placebo. The original paper referred to “suicidal ideation/gestures” as “emotional lability.”

Erick Turner, a psychiatrist at Oregon Health and Science University, has conducted research that made him worry about how data from studies is handled. In 2008, Turner was lead author of an article in the New England Journal of Medicine. He found that nearly a third of the trials on antidepressants submitted to the Food and Drug Administration had never been published. But more worrisome was that the unpublished studies largely showed no benefit to these medications. When you consider all the data, both published and unpublished, half the studies suggest that they’re not effective. “Clearly these drugs are not as effective as we would like them to be,” says Turner.

But Turner warns that this is not an issue of just antidepressants. At its core, the issue is how science handles its data. He, like many physicians, used to believe that the published literature was the authoritative source. He no longer believes that. Jureidini says that the message of his paper is about trust in the scientific process and access to study data. “Without that access,” he warns, “you can’t be sure of the integrity of any journal article.” Doshi says it is no longer acceptable to keep data secret because “those who possess the data control the story.” After 14 years of scientific and legal battles, the complete story of Study 329 is finally public.

Study 329: Conflicts of Interest (From Dr David Healy’s Blog)


http://davidhealy.org/study-329-conflicts-of-interest/

Study 329: Conflicts of Interest

October 20, 2015 | Reply