Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs

by Jonathan Kimmelman

In my experience, peer review greatly improves a manuscript in the vast majority of cases. There are times, however, when peer review improves a manuscript on one less important axis, while impoverishing it in another more important one. This is the case with our recent article in Annals of Neurology.

Briefly, in our manuscript we assembled a sample of FDA-approved neurological drugs, as well as a matched sample of neurological drugs that did not receive FDA approval but instead stalled in development (i.e., a 3-year pause in testing). We then used clinicaltrials.gov to identify trials of drugs in both groups and determined the proportion of trials that were published for approved drugs and for non-approved drugs. We found, not surprisingly, that trials involving stalled neurological drugs were significantly less likely to be published. The bigger surprise, for us, was that the proportion of trials published at 5 years or more after closure was a mere 32% for stalled neurological drugs (56% for licensed drugs). Think about what that means in terms of the volume of information we lose, and the disrespect we show to neurological patients who volunteer their bodies to test drugs that turn out to be ineffective and/or unsafe.

We shopped the manuscript around – eventually landing at Annals of Neurology. The paper received glowing reviews. Referee 1: “The research is careful and unbiased and the conclusions sound and impactful.” Referee 2: “This is an excellent and very important paper. It rigorously documents a very important issue in clinical trial conduct and reporting. The authors have done a superb job of identifying a crucial question, studying it carefully and fairly with first-rate quantification, and presenting the results in a clear, well-written, and illuminating manner… I have no major concerns, but some small points may be helpful…” Ka-ching!

However, after we submitted minor revisions, the manuscript was sent to a statistical referee who was highly critical of elements that seemed minor given the thrust of the manuscript. [Disclosure: from here forward, this blog reflects my opinion but not necessarily the opinion of my two co-authors.] We were told to expunge the word “cohort” from the manuscript (since there was variable follow-up time). Odd, but not worth disputing. We were urged “to fit a Cox model from time of completion of the trial to publication, with a time-varying covariate that is set to 0 until the time of FDA approval, at which time it is changed to 1. The associated coefficient of this covariate is the hazard ratio for publication comparing approved drugs to unapproved drugs.” That seemed fastidious – we’re not estimating survival of a drug to make policy here – but not unreasonable. We were told we must remove our Kaplan-Meier curves of time to publication. I guess. So we did it and resubmitted, keeping some of our unadjusted analyses in (of course labeling them as unadjusted).
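
For readers who want to see what that kind of model looks like in practice, here is a minimal sketch, not our actual analysis or data: a Cox regression from trial completion to publication with a time-varying “approved” covariate. It assumes Python with the lifelines package, and the column names and tiny long-format dataset are invented placeholders.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per trial per interval over which covariates are constant.
# "approved" switches from 0 to 1 at the (hypothetical) month of FDA approval.
rows = [
    # trial 1: drug approved at month 12; trial published at month 30
    {"trial_id": 1, "start": 0,  "stop": 12, "approved": 0, "published": 0},
    {"trial_id": 1, "start": 12, "stop": 30, "approved": 1, "published": 1},
    # trial 2: drug approved at month 6; trial published at month 18
    {"trial_id": 2, "start": 0,  "stop": 6,  "approved": 0, "published": 0},
    {"trial_id": 2, "start": 6,  "stop": 18, "approved": 1, "published": 1},
    # trial 3: stalled drug; trial published at month 48
    {"trial_id": 3, "start": 0,  "stop": 48, "approved": 0, "published": 1},
    # trials 4 and 5: stalled drugs; still unpublished at month 60 (censored)
    {"trial_id": 4, "start": 0,  "stop": 60, "approved": 0, "published": 0},
    {"trial_id": 5, "start": 0,  "stop": 60, "approved": 0, "published": 0},
    # trial 6: drug approved at month 10; trial still unpublished at month 60
    {"trial_id": 6, "start": 0,  "stop": 10, "approved": 0, "published": 0},
    {"trial_id": 6, "start": 10, "stop": 60, "approved": 1, "published": 0},
]
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="trial_id", event_col="published",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) on "approved" is the hazard ratio for publication

The point of the long format is simply that each trial contributes one row for the period before the drug’s approval and another for the period after, so the covariate can change mid-follow-up.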

The reviewer pressed further. He/she wanted all presentation of proportions and aggregate data removed (here I will acknowledge a generous aspect of the referee and editors: he/she agreed to use track changes to cut content from the ms [I am not being snarky here – this went beyond normal protocol at major journals]). We executed a “search and destroy” mission for just about all percentages in the manuscript: we cut two tables’ worth of data describing the particular drugs, the characteristics of trials in our sample, and the proportions of trials for which data were obtainable in abstract form or on company websites. Although one referee had signed off (“My high regard for this paper persists. Differences in views concerning the statistical approach are understandable. I see the paper as providing very important data about the trajectory of publication or non-publication of data depending on the licensing fate of the drug being studied, and see the survival analysis as bolstering that approach”), the editors insisted that we make the revisions requested by the statistical reviewer.

So, in the end, we had to present what we believe to be an impoverished, data-starved, and somewhat less accessible version in Annals of Neurology. And not surprisingly, upon publication we were (fairly) faulted online for not providing enough information about our sample. To our mind, the real version, the one we think incorporates the referee’s productive suggestions while respecting our discretion as authors, can be accessed here. And we are making our complete dataset available here.

BibTeX

@Manual{stream2017-1325,
    title = {Nonpublication of Neurology Trials for Stalled Drugs \& the Ironic Nonpublication of Data on those Stalled Drugs},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 5,
    url = {http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/}
}

MLA

Jonathan Kimmelman. "Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs" Web blog post. STREAM research. 05 Jun 2017. Web. 24 Jun 2017. <http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/>

APA

Jonathan Kimmelman. (2017, Jun 05). Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs [Web log post]. Retrieved from http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/


Recapping the recent plagiarism scandal

by Benjamin Gregory Carlisle

Parts of the paper that are nearly identical to my blog

A year ago, I received a message from Anna Powell-Smith about a research paper written by two doctors from Cambridge University that was a mirror image of a post I wrote on my personal blog1 roughly two years prior. The structure of the document was the same, as was the rationale, the methods, and the conclusions drawn. There were entire sentences that were identical to my post. Some wording changes were introduced, but the words were unmistakably mine. The authors had also changed some of the details of the methods, and in doing so introduced technical errors, which confounded proper replication. The paper had been press-released by the journal,2 and even noted by Retraction Watch.3

I checked my site’s analytics and found a record of a user from the University of Cambridge computer network accessing the blog post in question three times on 2015 December 7 and again on 2016 February 16, ten days prior to the original publication of the paper on 2016 February 26.4

At first, I was amused by the absurdity of the situation. The blog post was, ironically, a method for preventing certain kinds of scientific fraud. I was flattered that anyone noticed my blog at all, and I believed that academic publishing would have a means for correcting itself when the wrong people are credited with an idea. But as time went on, I became more and more frustrated by the fact that none of the institutions that were meant to prevent this sort of thing were working.

The journal did not catch the similarities between this paper and my blog in the first place, and the peer review of the paper was flawed as well. The journal employs an open peer review process in which the reviewers’ identities are published. The reviewers must all make a statement saying, “I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.” Despite this process, none of the reviewers made an attempt to analyse the validity of the methods used.

After examining the case, the journal informed us that updating the paper to cite me after the fact would undo any harm done by failing to credit the source of the paper’s idea. A new version was hastily published that cited me, using a non-standard citation format that omitted the name of my blog, the title of my post, and the date of original publication. The authors did note that the idea had been proposed in “the grey literature,” so I re-named my blog to “The Grey Literature” to match.

I was shocked by the journal’s response. Authorship of a paper confers authority in a subject matter, and the journal’s cavalier attitude toward this, especially given the validity issues I had raised with them, seemed irresponsible to me. In the meantime, the paper was cited favourably by the Economist5 and in the BMJ6, crediting Irving and Holden.

I went to Retraction Watch with this story,7 which brought to light even more problems with this example of open peer review. The peer reviewers were interviewed, and rather than re-evaluating their support for the paper, they doubled down, choosing instead to disparage my professional work and call me a liar. One reviewer wrote, “It is concerning that this blogger would be attempting a doctorate and comfortably ascribe to a colleague such falsehoods.”

The journal refused to retract the paper. It was excellent press for the journal and for the paper’s putative authors, and it would have been embarrassing for them to retract it. The journal had rolled out the red carpet for this paper after all,2 and it was quickly accruing citations.

The case was forwarded to the next meeting of the Committee on Publication Ethics (COPE) for their advice. Three months later, at the August 2016 COPE meeting, the case was presented and voted on.8 It was surreal for me to be forced to wait for a seemingly unaccountable panel of journal editors to sit as a de facto court, deciding whether or not someone else would be credited with my words, all behind locked doors, with only one side of the case—the journal editors’—represented. In the end, they all but characterised my complaints as “punitive,” and dismissed them as if my only reason for desiring a retraction was that I was hurt and wanted revenge. The validity issues that I raised were acknowledged but no action was recommended. Their advice was to send the case to the authors’ institution, Cambridge University, for investigation. I do not know if Cambridge did conduct an investigation, and there has been no contact with me.

There is, to my knowledge, no way to appeal a decision from COPE, and I know of no mechanism of accountability for its members should they give a journal the wrong advice. As of January 2017, the journal officially considered the case closed.

It is very easy to become disheartened and jaded when things like this happen—as the Economist article citing Irving and Holden says, “Clinical trials are a murky old world.”5 The institutions that are supposed to protect the integrity of the academic literature sometimes act in ways that fall short of the lofty standards we expect from modern science.

Fortunately, the scientific community turned out to be a bigger place than I had given it credit for. There are people like Anna, who let me know that this was happening in the first place, and Ben Goldacre, who provided insight and support. My supervisor and my colleagues in the STREAM research group were incredibly supportive and invested in the outcome of this case. A number of bloggers (Retraction Watch,7,9 Neuroskeptic,10 Jordan Anaya11—if I missed one, let me know!) picked up this story and drew attention to it, and in the end, the paper was reviewed by Daniel Himmelstein,12 whose persistence and thoroughness convinced the journal to re-open the case and invite Dr Knottenbelt’s decisive review.

While it is true that the mistakes introduced into the methods are what finally brought about the paper’s retraction, those mistakes happened in the first place because the authors did not come up with the idea themselves. It is a fallacy to think that issues of scientific integrity can be considered in isolation from issues of scientific validity, and this case very clearly shows how that sort of thinking can lead to a wrong decision.

Of course, there are still major problems with academic publishing. But there are also intelligent and conscientious people who haven’t given up yet. And that is an encouraging thought.

References

1. Carlisle, B. G. Proof of prespecified endpoints in medical research with the bitcoin blockchain. The Grey Literature (2014).

2. F1000 Press release: Doctors use Bitcoin tech to improve transparency in clinical trial research. (2016). Available at: http://f1000.com/resources/160511_Blockchain_FINAL.pdf. (Accessed: 23rd June 2016)

3. In major shift, medical journal to publish protocols along with clinical trials. Retraction Watch (2016).

4. Irving, G. & Holden, J. How blockchain-timestamped protocols could improve the trustworthiness of medical science. F1000Research 5, 222 (2017).

5. Better with bitcoin | The Economist. Available at: http://www.economist.com/news/science-and-technology/21699099-blockchain-technology-could-improve-reliability-medical-trials-better. (Accessed: 23rd June 2016)

6. Topol, E. J. Money back guarantees for non-reproducible results? BMJ 353, i2770 (2016).

7. Plagiarism concerns raised over popular blockchain paper on catching misconduct. Retraction Watch (2016).

8. What extent of plagiarism demands a retraction vs correction? | Committee on Publication Ethics: COPE. Available at: http://publicationethics.org/case/what-extent-plagiarism-demands-retraction-vs-correction. (Accessed: 16th August 2016)

9. Authors retract much-debated blockchain paper from F1000. Retraction Watch (2017).

10. Neuroskeptic. Blogs, Papers, Plagiarism and Bitcoin – Neuroskeptic. (2016).

11. Anaya, J. Medical students can’t help but plagiarize, apparently. Medium (2016). Available at: https://medium.com/@OmnesRes/medical-students-cant-help-but-plagiarize-apparently-f81074824c17. (Accessed: 21st July 2016)

12. Himmelstein, Daniel. Satoshi Village. The most interesting case of scientific irreproducibility? Available at: http://blog.dhimmel.com/irreproducible-timestamps/. (Accessed: 8th March 2017)

BibTeX

@Manual{stream2017-1280,
    title = {Recapping the recent plagiarism scandal},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 2,
    url = {http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/}
}

MLA

Benjamin Gregory Carlisle. "Recapping the recent plagiarism scandal" Web blog post. STREAM research. 02 Jun 2017. Web. 24 Jun 2017. <http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/>

APA

Benjamin Gregory Carlisle. (2017, Jun 02). Recapping the recent plagiarism scandal [Web log post]. Retrieved from http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/


Scientists should be cognizant of how the public perceives uncertainty

by Daniel Benjamin

Scientific results are inherently uncertain. The public views uncertainty differently than scientists do. One key to understanding when and how scientific research gets misinterpreted is to understand how the public thinks about scientific uncertainty.

A recent paper in the Journal of Experimental Psychology: General explores how laypersons perceive uncertainty in science. Broomell and Kane use principal component analysis to discover three underlying dimensions that describe how the public characterizes uncertainty: precision, mathematical abstraction, and temporal distance. These three dimensions, in turn, predict how people rate the quality of a research field. Precision – loosely defined in this context as the accuracy of the measurements, predictions, and conclusions drawn within a research field – is the dominant factor. One interpretation is that the public is primarily concerned with definitiveness when evaluating scientific claims.
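
As a rough, hypothetical illustration of that analytic pipeline (not the paper’s items, data, or code), one could run a principal component analysis on a matrix of survey ratings and then regress perceived field quality on the extracted dimensions, e.g. in Python with scikit-learn:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder survey data: rows are (respondent, field) observations,
# columns are rated uncertainty characteristics (invented, not the paper's items).
ratings = rng.normal(size=(200, 10))
quality = rng.normal(size=200)   # placeholder ratings of field quality

pca = PCA(n_components=3)        # extract three dimensions, as the paper reports
dims = pca.fit_transform(ratings)
print(pca.explained_variance_ratio_)

# Regress perceived quality on the three dimensions to see which one dominates.
reg = LinearRegression().fit(dims, quality)
print(reg.coef_)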

Members of the public lose confidence when fields of study are described as being more uncertain. This is relevant for scientists to consider when communicating results. On the one hand, over-selling the certainty of an outcome can mislead. On the other hand, the public might tend to dismiss important scientific findings when researchers describe uncertainty honestly and openly, as we have seen in the public rejection of vaccination and denial of climate change. Perceptions of a research field do not seem to influence how people view individual studies, so each study should be treated as its own communiqué.

Broomell et al. found some evidence that people with different personal characteristics interpret scientific uncertainty in different ways. Self-identified Republicans are more concerned about expert disagreement, while self-identified Democrats are more concerned with the quality of evidence. Such individual differences suggest that the type of uncertainty surrounding scientific findings shapes the way members of the public receive scientific claims. Consider how this might play out in medical research and informed consent. Clinical equipoise is the idea that research on human subjects is only ethical if experts are uncertain about which treatment in a randomized trial is better. If one treatment is thought to be better than another, it is unethical to deny the preferred treatment to patients. The findings of Broomell et al. suggest that the structure of uncertainty, namely unsettled evidence versus expert disagreement, is perceived differently by laypersons. Perhaps some patients are more concerned with who determines that a treatment is successful, while others are more concerned with why.

BibTeX

@Manual{stream2017-1261,
    title = {Scientists should be cognizant of how the public perceives uncertainty},
    journal = {STREAM research},
    author = {Daniel Benjamin},
    address = {Montreal, Canada},
    date = 2017,
    month = may,
    day = 26,
    url = {http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/}
}

MLA

Daniel Benjamin. "Scientists should be cognizant of how the public perceives uncertainty" Web blog post. STREAM research. 26 May 2017. Web. 24 Jun 2017. <http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/>

APA

Daniel Benjamin. (2017, May 26). Scientists should be cognizant of how the public perceives uncertainty [Web log post]. Retrieved from http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/


Into the Unknown: Methodological and Ethical Issues in Phase I Trials

by Esther Vinarov

MUHCtalk

Tuesday, April 18, 2017
12:00 – 1:00pm
RI auditorium, Glen Site – E S1.1129

With the current push to transform Montréal into a hub for early phase research, there is a pressing need to explore the issues that researchers and research ethics boards (REBs) encounter in Phase I trials.

In this two-part presentation, recent examples from healthy volunteer and oncology studies will be used to illustrate how protocol design and ethics review can be enhanced.

BibTeX

@Manual{stream2017-1252,
    title = {Into the Unknown: Methodological and Ethical Issues in Phase I Trials},
    journal = {STREAM research},
    author = {Esther Vinarov},
    address = {Montreal, Canada},
    date = 2017,
    month = apr,
    day = 17,
    url = {http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/}
}

MLA

Esther Vinarov. "Into the Unknown: Methodological and Ethical Issues in Phase I Trials" Web blog post. STREAM research. 17 Apr 2017. Web. 24 Jun 2017. <http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/>

APA

Esther Vinarov. (2017, Apr 17). Into the Unknown: Methodological and Ethical Issues in Phase I Trials [Web log post]. Retrieved from http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/


Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research

by Esther Vinarov


Tuesday, October 4, 2016
1 PM
3647 Peel St., Room 101

Everyone acknowledges that biomedical research needs the public trust that it continuously solicits and receives. An ethical precondition of soliciting trust is knowing the extent to which that trust is deserved. What makes biomedical research deserving of the public trust requires in-depth attention. This session will review three different criteria of trustworthiness in research – reliability, social value, and ethical conduct – to explore the extent to which the biomedical research enterprise warrants public trust.

Mark Yarborough, PhD, is Professor of General Medicine and Geriatrics and Dean’s Professor of Bioethics in the Bioethics Program at the University of California, Davis.

Photo by clarita

BibTeX

@Manual{stream2016-1149,
    title = {Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research},
    journal = {STREAM research},
    author = {Esther Vinarov},
    address = {Montreal, Canada},
    date = 2016,
    month = sep,
    day = 12,
    url = {http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/}
}

MLA

Esther Vinarov. "Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research" Web blog post. STREAM research. 12 Sep 2016. Web. 24 Jun 2017. <http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/>

APA

Esther Vinarov. (2016, Sep 12). Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research [Web log post]. Retrieved from http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/


Accelerated Drug Approval and Health Inequality

by Jonathan Kimmelman

Since the 1960s, the U.S. FDA has served as a model for drug regulation around the world with its stringent standards for approval of new drugs. Increasingly, however, a coalition of libertarians, patient advocates, and certain commercial interests has been pressing for a relaxation of these stringent standards. Examples of legislative initiatives that would weaken regulatory standards of evidence for drug approval include the “Regrow Act” and the “21st Century Cures Act,” as well as various “Right to Try” laws passed in U.S. states.

Much has been written in support of, and against, relaxation of current regulatory standards. Typically, these debates are framed in terms of a conflict between public welfare (i.e., the public needs to be protected from unproven and potentially dangerous drugs) and individual choice (i.e., desperately ill patients are entitled to make their own personal decisions about risky new drugs).

In a recent commentary, my co-author Alex London and I take a different tack on this debate. Rather than framing it as “public welfare” vs. “individual choice,” we examine the subtle ways that relaxed standards for drug approval would redistribute the burdens of uncertainty, raising questions of fairness. We suggest weakened standards would shift greater burdens of uncertainty a) from advantaged populations to ones that already suffer greater burdens from medical uncertainty; b) from research systems toward healthcare systems; c) from private and commercial payers toward public payers; and d) from comprehending and voluntary patients toward less comprehending and less voluntary patients. We hope our analysis stimulates a more probing discussion of the way regulatory standards determine how medical uncertainty is distributed.

BibTeX

@Manual{stream2016-1090,
    title = {Accelerated Drug Approval and Health Inequality},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2016,
    month = jul,
    day = 18,
    url = {http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/}
}

MLA

Jonathan Kimmelman. "Accelerated Drug Approval and Health Inequality" Web blog post. STREAM research. 18 Jul 2016. Web. 24 Jun 2017. <http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/>

APA

Jonathan Kimmelman. (2016, Jul 18). Accelerated Drug Approval and Health Inequality [Web log post]. Retrieved from http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/


Shedding (Dim) Light on Clinical Benefit in Biomarker-Based Drug Development

by Brianna Barsanti-Innes

Despite the appeal of personalized medicine (that is, treatment selection based on the presence of a particular biomarker), uncertainty remains regarding the broad utility of this selection strategy in oncology. A recent meta-analysis by Jardim et al. in the Journal of the National Cancer Institute attempted to provide some clarity by comparing efficacy outcomes between personalized and non-personalized clinical trial designs leading to new FDA drug approvals between 1998 and 2013. The publication concluded that using a biomarker-based selection strategy led to improved response rate, progression-free survival, and overall survival across a range of cancer subtypes and selection biomarkers.

While the study should be applauded for its unique approach to determining the benefit of personalized drug development, the paper’s conclusions are qualified by five issues.

No information about drugs that do not receive a license


The study only evaluated efficacy outcomes for trials directly leading to the FDA approval of the drug (the authors acknowledge this). This may limit the generalizability of the conclusions, as it does not capture drugs that failed during testing. However, this search strategy also excluded studies from earlier in the development of approved drugs, in which they were explored unsuccessfully for various indications or biomarker subgroups. In contrast to FDA approval for non-personalized drugs, which just requires identifying the proper indication, personalized strategies additionally require finding optimal test conditions for the biomarkers used in patient selection. It is therefore conceivable that more failed exploration goes into the development of a personalized strategy, and that an overall comparison of efficacy outcomes between personalized and non-personalized designs would not reach the same conclusions as a comparison of the FDA approval trials.

Doesn’t address dangers of premature biomarker enrichment

While there may indeed be a benefit to using biomarker-based trial designs, the study does not capture the potential harm that can arise when trials prematurely enrich for a particular biomarker population. Early enrichment precludes evaluating the drug in biomarker-negative patients and can prolong uncertainty regarding a drug’s utility in biomarker-negative groups. The approval of trastuzumab for HER2+ breast cancer provides an example of this. The two clinical trials leading to the FDA approval of the drug were based on a personalized strategy, but now, nearly 20 years later, the biomarker originally used for patient selection is being reevaluated in a large-scale phase 3 study.

May not properly classify “personalized therapy”

A third issue concerns the authors’ classification of “personalized therapy”. The paper’s definition includes both trials selecting patients who express rare biomarkers and studies in which at least 50% of the patient population is known to harbor the mutation (in the study, just over half of the personalized trials fell into the latter category, with a number of those including markers present in nearly 100% of patients). While a biomarker is implicated in the response to therapy in both situations, comparing these two groups may not be appropriate. As no selection process was needed to identify patients from the overall population for inclusion in the 50%-criteria trials, those trials more appropriately reflect “population-based” rather than “personalized” medicine. One of the most pressing issues in developing personalized treatments is grappling with properly selecting the patients who have an increased chance of benefit. It is conceivable that the risk/benefit profile of personalized trials using low-frequency mutations (which require applying often-complex selection criteria to identify the proper population) may not be comparable to that of the “population” marker trials.

Doesn’t quantify clinical benefit post-approval

Another issue not addressed in their conclusion is the actual clinical impact of biomarker-based treatment selection once a treatment has been approved. There is general concern over the current unbalanced cost/benefit of drug development, and, as many biomarkers exist at low frequencies in the population, it is conceivable that the net benefit of drugs approved based on personalized strategies is lower than that of non-personalized strategies – or that the impact of drugs approved based on the 50% criteria is greater than that of other biomarker-based drugs. It is therefore unclear whether a biomarker-based study design is just better for getting drugs approved, or better for getting better drugs approved.

May not predict the future of personalized medicine

Finally, several commentators have noted that large-scale trials (such as those evaluated in this study) may not be sustainable for the future of personalized medicine drug development. There is a growing trend toward next-generation clinical trials, which include N-of-1 trials, basket designs, and adaptive treatment allocation. Each of these enrolls small populations because the frequency of patients expressing the biomarkers of interest is generally very low, and therefore one should be cautious in extrapolating the methods and conclusions of the publication (especially due to the inclusion of “population” markers) to future evaluations of the efficacy of personalized medicine.

While not complete, this publication is the first step in a much-needed rigorous evaluation of the utility of biomarker-based strategies in cancer treatment and drug development.

BibTeX

@Manual{stream2016-986,
    title = {Shedding (Dim) Light on Clinical Benefit in Biomarker-Based Drug Development},
    journal = {STREAM research},
    author = {Brianna Barsanti-Innes},
    address = {Montreal, Canada},
    date = 2016,
    month = may,
    day = 9,
    url = {http://www.translationalethics.com/2016/05/09/shedding-dim-light-on-clinical-benefit-in-biomarker-based-drug-development/}
}

MLA

Brianna Barsanti-Innes. "Shedding (Dim) Light on Clinical Benefit in Biomarker-Based Drug Development" Web blog post. STREAM research. 09 May 2016. Web. 24 Jun 2017. <http://www.translationalethics.com/2016/05/09/shedding-dim-light-on-clinical-benefit-in-biomarker-based-drug-development/>

APA

Brianna Barsanti-Innes. (2016, May 09). Shedding (Dim) Light on Clinical Benefit in Biomarker-Based Drug Development [Web log post]. Retrieved from http://www.translationalethics.com/2016/05/09/shedding-dim-light-on-clinical-benefit-in-biomarker-based-drug-development/


How do researchers decide early clinical trials?

by Hannah Grankvist

Launch of clinical investigation represents a substantial escalation in commitment to a particular clinical translation trajectory; it also exposes human subjects to poorly understood interventions. Despite these high stakes, there is little to guide decision-makers on the scientific and ethical evaluation of early phase trials.

In our recent article published in Medicine, Health Care and Philosophy, we review policies and consensus statements on human protections, drug regulation, and research design surrounding trial launch, concentrating on the evidentiary factors used to justify launch of clinical development and to evaluate risk and benefit for subjects. We conclude that existing policy grants very wide moral and scientific discretion to research teams and sponsors. We then review what is currently understood about how research teams exercise this discretion, and find that decision-making surrounding trial launch is not simply, or even primarily, centered on proof of principle or concerns about subject welfare. It involves a constellation of commercial, regulatory, and professional considerations. Investigators are adept at establishing and maintaining their authority over decisions surrounding trial launch, and they emphasize that preclinical research is an important resource in legitimizing trial launch and enrolling other actors. However, nothing in this last point suggests that preclinical studies are mere rhetorical devices. If preclinical research is a key resource in enrolling other actors, it is surely because it contains content that resolves certain uncertainties.

We close by laying out a research agenda for characterizing the way investigators, sponsors, and reviewers approach decision-making in early phase research. We suggest that by investigating how various stakeholders describe, reason about, approach, and resolve questions of ethics and study validity, guidance can be established on design and review principles for trial launch. Such an approach can pay dividends by improving human protections, reducing attrition in drug development, and reducing the costly uncertainties encountered in deciding whether to launch clinical development.

BibTeX

@Manual{stream2016-970,
    title = {How do researchers decide early clinical trials?},
    journal = {STREAM research},
    author = {Hannah Grankvist},
    address = {Montreal, Canada},
    date = 2016,
    month = mar,
    day = 7,
    url = {http://www.translationalethics.com/2016/03/07/how-do-researchers-decide-early-clinical-trials/}
}

MLA

Hannah Grankvist. "How do researchers decide early clinical trials?" Web blog post. STREAM research. 07 Mar 2016. Web. 24 Jun 2017. <http://www.translationalethics.com/2016/03/07/how-do-researchers-decide-early-clinical-trials/>

APA

Hannah Grankvist. (2016, Mar 07). How do researchers decide early clinical trials? [Web log post]. Retrieved from http://www.translationalethics.com/2016/03/07/how-do-researchers-decide-early-clinical-trials/


Clinical Trial Disaster in France

by

Days after receiving the experimental medication BIA 10-2474 in a first-in-human trial, one man was brain dead and another five were hospitalized. According to recent reports, three are likely to have neurological deficits.

To my knowledge, this is the first time since 2001 that a healthy volunteer has died in a medical experiment. And it is the first major drug disaster in a phase 1 trial since 2006, when six men were hospitalized after developing a life-threatening (but not ultimately fatal) immune response to the drug TGN1412.

Details surrounding the BIA 10-2474 trial are sketchy – and are likely to remain so as long as a manslaughter investigation is underway. Here is what we do know. The drug was a small-molecule inhibitor of an enzyme involved in endocannabinoid metabolism, fatty acid amide hydrolase (FAAH). Other FAAH inhibitors have been tested in human beings without incident. Nor have any shown clinical activity. We also know that the men who developed life-threatening toxicities were the first to receive multiple doses of BIA 10-2474. And, based on a study protocol released by Le Figaro, multiple doses within this cohort were not staggered.

Healthy volunteer phase 1 studies are creepy. The realm of phase 1 testing is secretive, and most studies are conducted in private contract research organizations rather than academic medical centers. Few studies are ever published. Indeed, drug regulators exempt companies from even registering them in public databases. This makes it difficult to know anything about their volume, record of safety, the demographics of study participants, or the nature of study procedures.

Another reason phase 1 studies are creepy is that this is one of the few areas where doctors perform medical procedures – including administering unknown substances – that have no conceivable medical benefit for subjects. The risk/medical benefit ratio is infinite. All research on human beings, in a sense, treats people as (consenting) biological objects. But nowhere is this moral dynamic more stark than in healthy volunteer phase 1 trials, where people are valued not for exercising distinctly human capacities like labor or character – but rather for their biological passivity.

Another reason phase 1 studies give one pause is the financial element. They are funded by one of the most profitable sectors of the contemporary economy: the pharmaceutical industry. And some of that economic might is used to recruit volunteers who are probably financially disadvantaged or underemployed. Based on the figures I’ve seen, you can make a handsome sum being a professional “guinea pig.” When I lived in Berlin, I visited a large phase 1 clinic operated by Parexel near Westend. It was one of the few places far outside the museum and Reichstag districts where one saw English signage. The signs were not trying to reach tourists, or native Germans.

We should pay attention to this creepiness, but we should also discipline it with reason. Almost every modern drug we ingest – including many cancer drugs – began its human career in healthy volunteers. I’m hard pressed to think of any morally preferable alternatives to healthy volunteer testing if we value medical advance. Asking patients who are already debilitated by illness to commit their time and bodies for such studies is hardly more appealing. Based on what little has been published on them, healthy volunteer phase 1 studies are mostly benign medically. And I am not being an apologist when I point out that the medical screening procedures performed by contract research organizations provide a service to precisely those populations that are probably underserved in primary care. Visit a phase 1 healthy volunteer clinic and you’ll see armies of people – of various races and ages – listlessly peering at their laptops or plugged into ear buds, waiting around for the next blood draw.

Phase 1 trials can and should be done better. The lack of transparency – including nonpublication – is unacceptable. Indeed, drug companies should be expected not only to publish their phase 1 studies, but also the preclinical research leading to them. In the case of BIA 10-2474, I have been unable to find a single published preclinical study of the compound; in fact, I only learned of its composition through the leaked study protocol. In addition, there are probably many phase 1 studies whose contribution to the pharmacopeia is marginal, because the drugs lack a sound biological rationale or are not directed at medically urgent applications.

It is far too early to draw any specific conclusions about the conduct of regulators, BIAL, contract research organizations, or the physicians involved in the BIA 10-2474 debacle. However, here are two interpretations I urge we avoid.

On the one hand, we should avoid the temptation to explain events like this as inevitable, “long tail” phenomena. Some commentators argued this view after the TGN1412 disaster. It’s wrong, because behind every debacle is a chain of rectifiable human events that led to it. Every realm of risk and technology – airplane travel, nuclear power, chemical manufacture, mining – has proven it is possible to devise systems that render avoidable what some might call “inevitable” disaster. When all is said and done, the BIA 10-2474 debacle will reveal some correctable problem in the incentives, practices, organizational structure, and environment in which drug research is pursued.

On the other hand, we should resist the temptation to vilify the institution of phase 1 healthy volunteer testing. To be sure, trials can be better justified and reported. And there is no obvious way to cleanse them of the taint described above. Healthy volunteer phase 1 studies remind us of ineluctable moral tensions in all human research. But, to quote Paul Ramsey, we should continue to “sin bravely.”

This commentary was also published on Impact Ethics: https://impactethics.ca/2016/02/02/clinical-trial-disaster-in-france/

BibTeX

@Manual{stream2016-949,
    title = {Clinical Trial Disaster in France},
    journal = {STREAM research},
    author = {STREAM admin},
    address = {Montreal, Canada},
    date = 2016,
    month = feb,
    day = 2,
    url = {http://www.translationalethics.com/2016/02/02/clinical-trial-disaster-in-france/}
}

MLA

STREAM admin. "Clinical Trial Disaster in France" Web blog post. STREAM research. 02 Feb 2016. Web. 24 Jun 2017. <http://www.translationalethics.com/2016/02/02/clinical-trial-disaster-in-france/>

APA

STREAM admin. (2016, Feb 02). Clinical Trial Disaster in France [Web log post]. Retrieved from http://www.translationalethics.com/2016/02/02/clinical-trial-disaster-in-france/


Why clinical translation cannot succeed without failure

by Jonathan Kimmelman

Attrition in drug development – that is, the failure of drugs that show promise in animal studies to show efficacy when tested in patients – is often viewed as a source of inefficiency in drug development. Surely, some attrition is just that. However, in our recent Feature article in eLife, my long-time collaborator Alex London and I argue that some attrition and failure in drug development directly and indispensably contributes to the evidence base used to develop drugs and practice medicine.

How so? We offer five reasons. Among them is the fact that negative drug trials provide a read on the validity of the theories driving drug development, and that negative drug trials provide clarity about how far clinicians can extend the label of approved drugs. Another is that it is far less costly to deploy cheap (but error-prone) methods to quickly screen vast oceans and continents of drug/indication/dose/co-intervention combinations. To be clear, our argument is not that failure in drug development is a necessary evil. Rather, we are arguing that at least some failure is constitutive of a healthy research enterprise.

So what does this mean for policy? For one, much of the information produced in unsuccessful drug development remains locked inside the filing cabinets of drug companies (see our BMJ and BJP articles). For another, even the information that is published is probably underutilized (see, for example, Steven Greenberg’s analysis of how “negative” basic science findings are underutilized in the context of inclusion body myositis). Our analysis also suggests that attempts to reduce certain sources of attrition in drug development (e.g., shortened approval times; use of larger samples or more costly but probative methods in early phase trials) seem likely to lead to other sorts of inefficiencies.

One question our paper does not address is: what is the socially optimal rate of failure in drug development (and how far have we departed from that optimum)? This question is impossible to answer without information about the number of drug candidates being developed against various indications, the costs of trials for those treatments, base rates of success for various indications, and other variables. We nevertheless hope our article might inspire efforts by economists and modellers to estimate an optimum for given disease areas. One thing we think such an analysis is likely to show is that we are currently underutilizing the information generated in unsuccessful translation trajectories.
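
As a deliberately crude illustration of what such a model would need, here is a toy calculation of our own devising (not from the eLife article). Every number in it is a placeholder assumption; the point is only that the answer flips as those inputs change.

def cost_per_effective_approval(n_candidates, base_rate, sensitivity,
                                specificity, screen_cost, trial_cost):
    """Expected total cost divided by expected number of truly effective drugs
    that pass an early screen and go on to (assumed definitive) confirmatory trials.
    All arguments are illustrative placeholders, not empirical estimates."""
    true_positives = n_candidates * base_rate * sensitivity
    false_positives = n_candidates * (1 - base_rate) * (1 - specificity)
    total_cost = (n_candidates * screen_cost
                  + (true_positives + false_positives) * trial_cost)
    return total_cost / true_positives if true_positives else float("inf")

# A cheap, error-prone early screen vs. a stricter, costlier one (made-up numbers):
cheap = cost_per_effective_approval(1000, 0.05, 0.9, 0.6, 0.1, 20.0)
strict = cost_per_effective_approval(1000, 0.05, 0.8, 0.9, 1.0, 20.0)
print(f"cheap screen:  {cheap:.1f} cost units per effective approval")
print(f"strict screen: {strict:.1f} cost units per effective approval")
# Which strategy "wins" depends entirely on the assumed candidate pool, base rate,
# screen accuracy, and trial costs - which is exactly the point made above.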

BibTeX

@Manual{stream2015-921,
    title = {Why clinical translation cannot succeed without failure},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2015,
    month = nov,
    day = 27,
    url = {http://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/}
}

MLA

Jonathan Kimmelman. "Why clinical translation cannot succeed without failure" Web blog post. STREAM research. 27 Nov 2015. Web. 24 Jun 2017. <http://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/>

APA

Jonathan Kimmelman. (2015, Nov 27). Why clinical translation cannot succeed without failure [Web log post]. Retrieved from http://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/



All content © STREAM research

admin@translationalethics.com
Twitter: @stream_research
3647 rue Peel
Montreal QC H3A 1X1