When is it legitimate to stop a clinical trial early?

by Benjamin Gregory Carlisle

Stopping early?

Inspired by a paper that I’m working on with a few of my colleagues from the STREAM research group on subject accrual in human research, I’ve been reading through a number of articles related to the question: When is it legitimate to stop a clinical trial that is already in progress?

Lavery et al. identify a problem in the allocation of research resources in their 2009 debate piece, “In Global Health Research, Is It Legitimate To Stop Clinical Trials Early on Account of Their Opportunity Costs?” The development of next-generation drug products often outstrips the capacity for testing them, resulting in a queue and possibly a less-than-optimal use of the resources devoted to developing new drugs. They suggest that there should be a mechanism for ending a clinical trial early on the basis of “opportunity costs”: while trials are already terminated early on the basis of futility, efficacy, or safety concerns, an ongoing trial might not be the best use of scarce healthcare resources, and there should be a way to divert those resources to something more promising. Two options are proposed: a scientific oversight committee, or an expanded mandate for the DSMB (Data and Safety Monitoring Board). The procedure for making such decisions is based on Daniels’ “accountability for reasonableness.”

Buchanan responds that it would be both unethical and impractical to do so. Unethical, he argues, because such a practice could not be justified in terms of harm or benefit to patients; indeed, it is difficult to see who would be harmed if ongoing trials were not stopped on account of opportunity costs. Impractical, because a procedure based on accountability for reasonableness would mire drug development in corporate lobbying, appeals, and deliberations over a virtually unlimited range of value considerations.

Lavery et al. rebut that the “interest [of advancing science] might be better served by adopting our proposal, rather than locking participants into a trial of a potentially inferior product,” and that, contrary to Buchanan’s claims, DSMB decisions are rarely certain. Buchanan closes the exchange by rejecting the position of Lavery et al. in no uncertain terms.

While this article points to a genuine problem, the solutions proposed by Lavery et al. raise practical difficulties for a number of reasons, as Buchanan argues. The ethics of early termination is given short shrift in this exchange, and a more nuanced discussion is needed.

BibTeX

@Manual{stream2013-205,
    title = {When is it legitimate to stop a clinical trial early?},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = {2013-05-24},
    url = {https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/}
}

MLA

Benjamin Gregory Carlisle. "When is it legitimate to stop a clinical trial early?" Web blog post. STREAM research. 24 May 2013. Web. 28 Mar 2024. <https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/>

APA

Benjamin Gregory Carlisle. (2013, May 24). When is it legitimate to stop a clinical trial early? [Web log post]. Retrieved from https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/


Hypothesis Generator

by Jonathan Kimmelman

Is good medical research directed at testing hypotheses? Or is there a competing model of good medical research that sees hypothesis-generating research as a valuable end in itself? In an intriguing essay appearing in the August 21, 2009 issue of Cell, Maureen O’Malley and co-authors show how current funding mechanisms at agencies like NIH and NSF center their model of scientific merit on the testing of hypotheses (e.g., does molecule X cause phenomenon Y? Does drug A outperform drug B?). However, as the authors (and others) point out, many areas of research are not based on such “tightly bounded spheres of inquiry.” They suggest that a “more complete representation of the iterative, interdisciplinary, and multidimensional relationships between various modes of scientific investigation could improve funding agency guidelines.”

The questions presented by this article have particular relevance for translational clinical research. As I argue in my book, the traditional clinical trial apparatus (and the corresponding discourse on research ethics) is overwhelmingly directed towards the type of hypothesis testing typified by the randomized controlled trial. However, many early phase studies involve a large component of hypothesis-generating research as well. The challenge for O’Malley et al.’s argument, and mine, is

(photo credit: Gouldy99, 2008)

BibTeX

@Manual{stream2010-74,
    title = {Hypothesis Generator},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2010-02-09},
    url = {https://www.translationalethics.com/2010/02/09/hypothesis-generator/}
}

MLA

Jonathan Kimmelman. "Hypothesis Generator." Web blog post. STREAM research. 09 Feb 2010. Web. 28 Mar 2024. <https://www.translationalethics.com/2010/02/09/hypothesis-generator/>

APA

Jonathan Kimmelman. (2010, Feb 09). Hypothesis Generator [Web log post]. Retrieved from https://www.translationalethics.com/2010/02/09/hypothesis-generator/


Help Wanted, Part 2

by Jonathan Kimmelman

So, what are some of the intriguing ethical questions raised by Kolata’s August 2nd article? Here is one: when researchers conduct studies and ethics committees review protocols, resource allocation is an important consideration. If, as Kolata alleges, mediocre trials siphon eligible patients away from good trials, then there is a case to be made that IRBs and investigators need to ponder carefully the effects a proposed trial will have on other studies, even when the proposed trial has a favorable direct benefit-risk balance for the volunteers who enter it.


Second, if resource allocation is a key consideration in realms where patients are scarce, investigators (and IRBs) need reliable criteria for assessing the broader social value of study protocols. They also need some way to compare one protocol against the body of others that are either underway or in the pipeline. The current system provides no straightforward way of doing this.

Third, if 50% of trials fail to recruit sufficient numbers to produce meaningful results, investigators, IRBs, DSMBs, and granting agencies are doing a lousy job ensuring high ethical standards in human research. It is well established that, for any study to redeem the burdens that volunteers endure on enrollment, it must produce valuable findings. It is disturbing, to say the least, that many volunteers enter studies that go nowhere, and that investigators, IRBs, and funding agencies are not realistically projecting recruitment.
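
To see concretely why under-recruitment undermines a study’s value, consider a rough power calculation. The sketch below is purely illustrative and not drawn from Kolata’s article: it assumes a two-arm comparison of means with a hypothetical standardized effect size and made-up enrollment numbers, and uses the standard normal approximation to show how enrolling only half the planned sample erodes the probability of detecting a real effect.

    # Illustrative only: how under-recruitment erodes statistical power.
    # Assumes a two-arm trial comparing means with a standardized effect
    # size (Cohen's d); all numbers are hypothetical, not from the article.
    from scipy.stats import norm

    def power_two_arm(n_per_arm, effect_size, alpha=0.05):
        """Approximate power of a two-sample z-test for a difference in means."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_effect = effect_size * (n_per_arm / 2) ** 0.5
        return norm.cdf(z_effect - z_alpha)

    planned = 128  # hypothetical planned enrollment per arm (~80% power at d = 0.35)
    actual = 64    # what recruiting only half the planned sample would leave

    for label, n in [("planned", planned), ("actual", actual)]:
        print(f"{label}: n per arm = {n}, power ~ {power_two_arm(n, 0.35):.2f}")

With these invented numbers, power falls from roughly 0.80 to roughly 0.51: a study that recruits only half its target is closer to a coin flip than to a reliable test, which is one way of cashing out the claim that such studies fail to redeem the burdens volunteers take on.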

Last, Kolata suggests that many cancer trials are merely aimed at “polishing a doctor’s résumé.” It would be a useful contribution to the field of cancer research, and to bioethics, to measure the frequency of this practice. In the meantime, the inability of IRBs to detect this kind of conduct and stop it in its tracks signals an important deficiency in human protections. Which leads me to my next post… (photo credit: ziggy fresh, 2006)

BibTeX

@Manual{stream2009-88,
    title = {Help Wanted, Part 2},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2009-08-09},
    url = {https://www.translationalethics.com/2009/08/09/help-wanted-part-2/}
}

MLA

Jonathan Kimmelman. "Help Wanted, Part 2." Web blog post. STREAM research. 09 Aug 2009. Web. 28 Mar 2024. <https://www.translationalethics.com/2009/08/09/help-wanted-part-2/>

APA

Jonathan Kimmelman. (2009, Aug 09). Help Wanted, Part 2 [Web log post]. Retrieved from https://www.translationalethics.com/2009/08/09/help-wanted-part-2/


Help Wanted- For the War on Cancer

by Jonathan Kimmelman

Earlier this week (Aug 2), Gina Kolata of the NYTimes ran a fascinating story about the challenges of recruiting patients to cancer clinical trials. The story contains interesting facts, credible claims, and analysis, along with, unfortunately, some misleading conjectures. The problem of patient recruitment also invites some hard-headed ethical analysis.


First, the facts. According to the article, one in five National Cancer Institute-funded trials fails to enroll a single subject; half fail to recruit enough patients to produce meaningful results. Now some credible claims: many trials are “aimed at polishing a doctor’s résumé” and produce meaningless results; many oncologists avoid cancer studies because they can be money losers; and many patients shy away from trial participation, particularly when their cancer is less advanced and they can obtain treatment outside of trials.


The article, however, is swathed in misleading conjectures. It suggests that problems with recruitment are “one reason” for, and “the biggest barrier” to, major strides in the “war on cancer” (hence the recruitment poster in the graphic above). It is hard to reconcile this with Kolata’s contention elsewhere that many trials are useless. It is also hard to square the claim with Kolata’s point, earlier in the article, that trials involving really promising drugs usually have no problem recruiting. In one famous case, a phase 1 trial of endostatin at Harvard received 1,000 inquiries from patients for 3 slots (pop quiz: see if you can guess which New York Times reporter wrote an article on endostatin that many commentators criticized for sensationalizing the drug’s promise). Third, with only about 1 in 20 cancer drug candidates making it from phase 1 testing to FDA approval, a reasonable question to ask is whether preclinical researchers are validating their drug candidates properly. And finally, the article makes no mention of the fact that many studies have exceedingly narrow eligibility criteria: many patients may be solicited for trial participation, but only a fraction meet the criteria.
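
On that last point, a small worked example helps show how quickly stacked eligibility criteria shrink the pool of patients who can actually enroll. The pass rates below are entirely invented (none come from Kolata’s article), and the criteria are assumed, for simplicity, to apply independently.

    # Hypothetical illustration of how stacked eligibility criteria shrink
    # the pool of enrollable patients. Pass rates are invented and assumed
    # independent for simplicity; real criteria are correlated.
    criteria = {
        "correct tumour type and stage": 0.60,
        "no prior experimental therapy": 0.70,
        "adequate organ function": 0.80,
        "no excluded comorbidities": 0.75,
        "lives near a site and consents": 0.50,
    }

    eligible_fraction = 1.0
    for name, pass_rate in criteria.items():
        eligible_fraction *= pass_rate
        print(f"after '{name}': {eligible_fraction:.1%} of solicited patients remain")

    # With these made-up numbers, roughly 1 in 8 solicited patients
    # ends up both eligible and willing to enroll.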

Still, Kolata’s article is enlightening and raises a number of intriguing questions that demand ethical analysis. I’ll discuss some of these in my next posting. (photo credits: Joan Thewlis, 1918 Recruitment Poster, 2009)

BibTeX

@Manual{stream2009-90,
    title = {Help Wanted- For the War on Cancer},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2009-08-06},
    url = {https://www.translationalethics.com/2009/08/06/help-wanted-for-the-war-on-cancer/}
}

MLA

Jonathan Kimmelman. "Help Wanted- For the War on Cancer." Web blog post. STREAM research. 06 Aug 2009. Web. 28 Mar 2024. <https://www.translationalethics.com/2009/08/06/help-wanted-for-the-war-on-cancer/>

APA

Jonathan Kimmelman. (2009, Aug 06). Help Wanted- For the War on Cancer [Web log post]. Retrieved from https://www.translationalethics.com/2009/08/06/help-wanted-for-the-war-on-cancer/


The Octopus of Reference Standards

by Jonathan Kimmelman

When gene transfer researchers perform an experiment, how do they measure the dose of their vectors?  For that matter, when investigators perform a study using any novel biologic, how have they characterized their agents?


Few would think the questions are ethically significant. But consider the fact that research teams often use different techniques to determine vector dosage, and as a consequence, results from different human and animal studies cannot be compared with each other.

In the July issue of Molecular Therapy, Richard Snyder and Philippe Moullier outline international efforts to establish reference standards for AAV vectors (a class of viral vectors widely used in gene transfer studies). After about six years of effort (and nearly twice that many years of human trials), reference standards for two serotypes (AAV2 and AAV8) will soon be available. The authors describe a number of logistical and funding problems in establishing reference standards. Not mentioned are the numerous sociological challenges of herding feline-like researchers.
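
A toy example of what a shared reference standard buys you: if each lab reports vector doses in the units of its own in-house assay, doses from different studies cannot be compared directly, but once each lab also assays the common reference material, doses can be expressed relative to that standard. The lab names and titer values below are invented for illustration and are not taken from Snyder and Moullier’s article.

    # Toy illustration of cross-lab dose comparison via a reference standard.
    # Each lab measures vector titer with its own assay (arbitrary units),
    # but also assays the same AAV reference standard; expressing study doses
    # as a ratio to the reference makes them comparable. All values invented.
    labs = {
        # lab: (reported study dose, that lab's reading of the reference standard)
        "Lab A (qPCR titer)":     (4.0e12, 2.0e12),
        "Lab B (dot-blot titer)": (9.0e11, 3.0e11),
    }

    for lab, (study_dose, reference_reading) in labs.items():
        relative_dose = study_dose / reference_reading
        print(f"{lab}: {relative_dose:.1f}x the reference standard")

    # The raw titers (4.0e12 vs 9.0e11) look wildly different, but relative
    # to the shared standard the doses are 2.0x and 3.0x, and can now be
    # compared across the two studies.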

In my book, I argue that standards represent a critical vehicle for risk management in novel research arenas: small-scale human safety studies have limited value unless their results can be linked with those of other human studies to identify trends. Standard setting, and the octopus-like extension of standards into all preclinical and clinical studies, thus has important ethical implications. IRBs reviewing early phase protocols for novel agents should be on the alert when study agents are not characterized using reference standards. (photo credit: Roadsidepictures, 2007)

BibTeX

@Manual{stream2008-133,
    title = {The Octopus of Reference Standards},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2008-09-23},
    url = {https://www.translationalethics.com/2008/09/23/the-octopus-of-reference-standards/}
}

MLA

Jonathan Kimmelman. "The Octopus of Reference Standards." Web blog post. STREAM research. 23 Sep 2008. Web. 28 Mar 2024. <https://www.translationalethics.com/2008/09/23/the-octopus-of-reference-standards/>

APA

Jonathan Kimmelman. (2008, Sep 23). The Octopus of Reference Standards [Web log post]. Retrieved from https://www.translationalethics.com/2008/09/23/the-octopus-of-reference-standards/

