2. Crisis in Evidence-Based Medicine

by Project Censored

In April 2015, the Lancet’s editor, Richard Horton, wrote, “Something has gone fundamentally wrong with one of our greatest human creations.” Describing the upshot of a UK symposium held that month on the reproducibility and reliability of biomedical research, Horton summarized the “case against science”: “Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness…. The apparent endemicity of bad research behaviour is alarming.”

Horton is not the first editor of a prominent medical journal to raise these concerns. In 2009, Marcia Angell, a former editor of the New England Journal of Medicine, made comparable claims in an article for the New York Review of Books: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as editor of The New England Journal of Medicine.”

Countering the pharmaceutical industry’s undue influence on the medical profession, Angell concluded, would require “a sharp break from an extremely lucrative pattern of behavior.” Horton’s Lancet editorial echoed Angell’s assessment: “Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.”

No biomedical study better epitomizes the corruption and conflicts of interest noted by insider critics like Angell and Horton than Study 329, a now notorious clinical trial published in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. Study 329 reported that paroxetine—marketed by SmithKline Beecham (now GlaxoSmithKline, or GSK) as Paxil in the US and as Seroxat in the UK—was safe and effective for treating depressed children and adolescents. A GSK marketing campaign built on the published study, touting the drug’s “remarkable efficacy and safety,” led to doctors prescribing Paxil to more than two million US children and adolescents by the end of 2002.

However, within a year of the original report, the US Food and Drug Administration declared Study 329 a “failed trial” because further evidence indicated that adolescents prescribed the drug to treat depression fared no better than those on a placebo. In 2003, UK drug regulators instructed doctors not to prescribe Seroxat to adolescents. In 2012, in what the US Department of Justice described as the “largest health care fraud settlement in U.S. history,” GSK paid $3 billion to resolve its liability over allegations of fraud and failure to report safety data.

In 2015 the BMJ published a major reanalysis of GSK’s Study 329. Charlie Cooper of the Independent reported that the reanalysis—conducted by an international team of researchers from Australia, Canada, the US, and the UK, and based on thousands of pages of newly available GSK data—“starkly” contradicted the original report’s claims. Furthermore, Cooper noted, the reassessment of Study 329 marked “a milestone in the medical community’s campaign to open up clinical trial data held by pharmaceutical companies to independent scientific scrutiny.”

As Sarah Boseley reported for the Guardian, the reanalysis of Study 329 found that paroxetine’s beneficial effects were far less, and its harmful effects far greater, than the original study reported. In particular, by examining the full set of clinical trials data, the researchers who conducted the reassessment found that eleven of the 275 children and adolescents on the drug developed suicidal or self-harming behavior. The original study had acknowledged only five of these cases. David Healy, a psychiatry professor and one of the reassessment’s coauthors, observed, “This is a very high rate of kids going on to become suicidal. It doesn’t take expertise to find this. It takes extraordinary expertise to avoid finding it.” Boseley’s report also documented renewed calls for the Journal of the American Academy of Child and Adolescent Psychiatry to retract the original GSK study, whose lead author was Martin Keller of Brown University. Peter Doshi, the BMJ’s associate editor, observed, “It is often said that science self-corrects. But for those who have been calling for a retraction of the Keller paper for many years, the system has failed.” Neither the journal’s editors nor any of the paper’s twenty-two listed authors have intervened to correct the record, and none of the authors have been disciplined, Doshi noted.

Nevertheless, as documented by Charlie Cooper for the Independent and Sarah Boseley of the Guardian, the reanalysis of the complete set of original clinical trials data for Study 329 is the first major success of a new open data initiative known as Restoring Invisible and Abandoned Trials (RIAT), which has been promoted by the BMJ. As Cooper reported, “The BMJ’s final judgment on the infamous ‘Study 329’ represents a symbolic victory for the burgeoning ‘open data’ movement in health.” RIAT is part of a broader movement to force pharmaceutical companies to make all of their data available for independent scientific scrutiny. The AllTrials campaign, which calls for open publication of all clinical trials results, now has the backing of over 600 medical and research organizations, Cooper reported. Boseley’s Guardian article quoted BMJ editor in chief Fiona Godlee, who said that the reanalysis of Study 329 showed “the extent to which drug regulation is failing us.” Godlee called for independent rather than industry-funded and industry-managed clinical trials, as well as legislation “to ensure that the results of all clinical trials are made fully available” to third-party scrutiny. Both news stories noted the cooperation of GlaxoSmithKline in making the original data available for reanalysis. GSK posted 77,000 pages of de-identified case reports from the trial on a website—though, it should be noted, the company was obliged to do so under the terms of its settlement.

Richard Horton’s Lancet editorial received no coverage in the US corporate press. The Washington Post featured one story on the reanalysis of the original paroxetine study. The article provided a great deal of information about the misrepresentation of the original study—including, for instance, that the discrepancy between the original report and the BMJ reanalysis was partly due to “the miscoding of a serious suicide attempt as ‘emotional lability,’ a temporary condition that involves uncontrollable episodes of crying.” However, the Washington Post report made only passing mention of the open data movement and did not identify any of the specific initiatives (such as RIAT or AllTrials) by name. Otherwise, the corporate press ignored the reassessment of the paroxetine study.

In May 2014, President Obama signed the Digital Accountability and Transparency Act. Although it requires federal agencies to make data—including funding sources for clinical trials—publicly available, the DATA Act’s requirements do not apply to privately funded biomedical research.

Richard Horton, “What is Medicine’s 5 Sigma?,” Lancet 385, no. 9976, April 11, 2015, http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736%2815%2960696-1.pdf.

Charlie Cooper, “Anti-Depressant was Given to Millions of Young People ‘After Trials Showed It was Dangerous’,” Independent, September 16, 2015, http://www.independent.co.uk/life-style/health-and-families/health-news/anti-depressant-was-given-to-millions-of-young-people-after-trials-showed-it-was-dangerous-10504555.html.

Sarah Boseley, “Seroxat Study Under-Reported Harmful Effects on Young People, Say Scientists,” Guardian, September 16, 2015, https://www.theguardian.com/science/2015/sep/16/seroxat-study-harmful-effects-young-people.

Student Researchers: Joshua Gill-Sutton and Adaeze Iroka (San Francisco State University)

Faculty Evaluator: Kenn Burrows (San Francisco State University)