Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)


“Adenocarcinoma of Ascending Colon Arising in Villous Adenoma,” Ed Uthman on Flickr, March 29, 2007

In my previous post, I offered some reflections on my recent paper (with Marcin Waligora and colleagues) on pediatric phase 1 cancer trials and laid out three plausible implications. In this post, I want to highlight two reasons why I think it's worth facing up to one of them- namely (b), that a lot of shitty phase 1 trials in children are pulling the average estimate of benefit down, making it hard to discern the truly therapeutic ones.

First, the quality of reporting in phase 1 pediatric trials- like cancer trials in general (see here and here and here- oncology: wake up!!)- is bad. For example:

“there was no explicit information about treatment-related deaths (grade 5 AEs) in 58.82% of studies.”

This points to the generally low scientific standards we tolerate in high-risk pediatric research. We should be doing better. I note that living by the Noble Lie that these trials are therapeutic makes it easier to live with such reporting deficiencies, since researchers, funders, editors, IRBs, and referees can always console themselves with the notion that, even if trials don't report findings properly, at least children benefited from study participation.

A second important finding:

“The highest relative difference between responses was again identified in solid tumors. When 3 or fewer types of malignancies were included in a study, response rate was 15.01% (95% CI 6.70% to 23.32%). When 4 or more different malignancies were included in a study, response rate was 2.85% (95% CI 2.28% to 3.42%); p < 0.001.”

This may be telling us that when we have a strong biological hypothesis- such that we are very selective about which populations we enroll in trials- risk/benefit is much better. When we use a "shotgun" approach of testing a drug in a mixed population- that is, when we lack a strong biological rationale- risk/benefit is a lot worse. Perhaps we should be running fewer and better justified phase 1 trials in children. If that is the case (and- to be clear- our meta-analysis is insufficient to prove it), then it's the research that needs changing, not the regulations.
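
For readers who want to see the arithmetic behind figures like these, here is a minimal sketch in Python- using hypothetical response counts rather than the actual study data, and a simple Wald interval and two-proportion z-test rather than the meta-analytic pooling used in our paper- of how a response rate, its 95% confidence interval, and a crude comparison between two groups of trials can be computed:

from math import sqrt
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts (NOT the data from our meta-analysis):
# (responders, participants) in narrowly vs. broadly enrolled trials.
narrow = (11, 73)     # e.g. 11 objective responses among 73 children
broad  = (29, 1018)   # e.g. 29 objective responses among 1018 children

def rate_with_wald_ci(responders, n, z=1.96):
    """Response rate with a simple Wald 95% confidence interval."""
    p = responders / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

for label, (responders, n) in [("narrow", narrow), ("broad", broad)]:
    p, lo, hi = rate_with_wald_ci(responders, n)
    print(f"{label}: {p:.2%} (95% CI {lo:.2%} to {hi:.2%})")

# Crude two-proportion z-test; the published analysis was more involved.
z_stat, p_value = proportions_ztest([narrow[0], broad[0]], [narrow[1], broad[1]])
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")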

Nota Bene: Huge thanks to an anonymous referee for our manuscript. Wherever you are- you held us to appropriately high standards and greatly improved the paper. Also, a big congratulations to the first author, Professor Marcin Waligora- very impressive work- I'm honored to have him as a collaborator!

BibTeX

@Manual{stream2018-1583,
    title = {Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2018,
    month = feb,
    day = 27,
    url = {https://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/}
}

MLA

Jonathan Kimmelman. "Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)" Web blog post. STREAM research. 27 Feb 2018. Web. 14 Oct 2024. <https://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/>

APA

Jonathan Kimmelman. (2018, Feb 27). Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2) [Web log post]. Retrieved from https://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/


Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)


Photo from art.crazed Elizabeth on Flickr, March 16, 2010.

In years of studying the ethics of early phase trials in patients- for example, cancer phase 1 trials- I’ve become more and more convinced that it is a mistake to think of these trials as having a therapeutic impetus.

To be sure, the issues are complex, and many people who share my view do so for the wrong reasons. But in general, it seems to me difficult to reconcile the concept of competent medical care with giving patients a drug that will almost certainly cause major toxicities- and for which there is, at best, highly fallible animal evidence to support its activity (and, at worst, no animal evidence at all).

For this reason, I think groups like ASCO- which (in a manner that is self-serving) advocate phase 1 trials as a vehicle for care when patients qualify- do a major disservice to patients and the integrity of medicine.

But surely there are cases where the risks of phase 1 trial enrollment might plausibly be viewed as outweighed by the prospect of direct benefit. As I've argued elsewhere, the institution of phase 1 testing comprises a heterogeneous set of materials and activities. With a drug, you can specify its composition and dose on a product label, and declare the drug "therapeutic" or "nontherapeutic" for a specific patient population. With phase 1 trials, there is no standard composition or dose- phase 1 trials cannot be put in a bottle and labeled as a homogeneous entity that has or does not have therapeutic value. If so, it seems plausible that some phase 1 trials come closer to- and perhaps exceed- the threshold of risk, benefit, and uncertainty that establishes a therapeutic claim. That is, it seems conceivable that phase 1 trials may be done under conditions, or with sufficiently strong supporting evidence, such that one can present them as a therapeutic option for certain patients without lying or betraying the integrity of medicine.

U.S. regulations (and those elsewhere) state that, when exposing children to research risks exceeding a "minor increase over minimal," those risks must be "justified by the anticipated benefit to the subjects…the relation of… anticipated benefit to the risk [must be] at least as favorable to the subjects as that presented by available alternative approaches." Given that the risks of drugs tested in phase 1 cancer trials exceed a minor increase over minimal, U.S. regulations require that we view phase 1 trial participation as therapeutic when we enroll children.

Can this regulatory standard be reconciled with my view? I used to think so. Here's why. Pediatric phase 1 trials are typically pursued only after drugs have been tested in adults. Accordingly, the 'dogs' of drug development have been thinned from the pack before testing in children. These trials also test a narrower dose range- and as such, a greater proportion of participants are likely to receive active doses of drug. Finally- rightly or wrongly- the ethos of protection that surrounds pediatric research, plus the stringency of regulations governing pediatric testing, would- one might think- tend toward demanding higher evidentiary standards before testing is launched.

This week, Marcin Waligora, colleagues, and I published the largest meta-analysis of pediatric phase 1 cancer trials to date- one that fills me with doubt about a therapeutic justification for phase 1 pediatric trials (for news coverage, see here). Before describing our findings, a few notes of caution.

First, our findings need to be interpreted with caution- crappy reporting practices for phase 1 trials make it hard to probe risk and benefit. Also, our analyses used methods and assumptions that are somewhat different from those used in similar meta-analyses of adults. Finally, who am I to impose my own risk/benefit sensibility on guardians (and children) who have reached the end of the line in terms of standard care options?

These provisos aside, our findings suggest that the risk/benefit for pediatric phase 1 cancer trials is not any better than it is for adult trials. Some salient findings:

  • On average, every pediatric participant will experience at least one severe or life-threatening side effect.
  • For monotherapy trials in children with solid tumors (where we can compare our data with previous studies of adults), about 2.5% of children had major tumor shrinkage. This compares with a decade-old estimate of 3.8% in adults. For combination therapy, the figures were 10.5% in children vs. 11.7% in adults.
  • Contrary to all the latest excitement about new treatment options, our data do not show clear time trends suggesting an improvement of risk/benefit with newer drugs.
  • 39% of children in phase 1 studies received less than the recommended dose of the investigational drug.

If, in fact, we reject the view that adult phase 1 studies can generally be viewed as therapeutic; if pediatric studies have a similar risk/benefit balance despite building on adult evidence; and if we accept that available care options outside a trial are no better or worse in their risk/benefit for children than for adults- then it follows (more or less, and assuming our meta-analysis presents an accurate view of risk/benefit) that phase 1 trials in children cannot generally be presented as having a therapeutic risk/benefit.

This puts medicine in a bind. Phase 1 trials are critical for advancing treatment options for children. But most cannot- in my view- be plausibly reconciled with research regulations. Either a) my above analysis is wrong, b) a lot of substandard phase 1 trials in children are pulling the average estimate of benefit down, making it hard to discern the truly therapeutic ones, c) we must accept- à la Plato- a noble lie and live with the fiction that phase 1 studies are therapeutic, d) we must cease phase 1 cancer drug trials in children, e) regulations are misguided, or f) phase 1 trials should undergo a specialized review process- so-called 407 review.

I would posit (b) and (e) and (f) as the most plausible implications of our meta-analysis.

In my next post, a few reflections. And stay tuned for further empirical and conceptual work on this subject.

BibTeX

@Manual{stream2018-1559,
    title = {Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2018,
    month = feb,
    day = 26,
    url = {https://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/}
}

MLA

Jonathan Kimmelman. "Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)" Web blog post. STREAM research. 26 Feb 2018. Web. 14 Oct 2024. <https://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/>

APA

Jonathan Kimmelman. (2018, Feb 26). Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1) [Web log post]. Retrieved from https://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/


The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?”


How accurately can researchers predict whether high-profile preclinical findings will reproduce? This week in PLoS Biology, STREAM reports the result of a study suggesting the answer is “not very well.” You can read about our methods, assumptions, results, claims, etc. in the original report (here) or in various press coverage (here and here). Instead, I will use this blog entry to reflect on how we pulled this paper off.

This was a bear of a study to complete. For many reasons. Studying experts is difficult- partly because, by definition, experts are scarce. They also have limited time. Defining who is and who is not an expert is also difficult. Another challenge is studying basic and preclinical research. Basic and preclinical researchers do not generally follow pre-specified protocols, and they certainly do not register their protocols publicly. This makes it almost impossible to conduct forecasting studies in this realm. We actually tried a forecast study asking PIs to forecast the results of experiments in their lab (we hope to write up results at a later date); to our surprise, a good many planned experiments were never done, or when they were done, they were done differently than originally intended, rendering forecasts irrelevant. So when it became clear that the Reproducibility Project: Cancer Biology was a go and that they were working with pre-specified and publicly registered protocols, we leapt at the opportunity.

For our particular study of preclinical research forecasting, there was another challenge. Early on, we were told that the Reproducibility Project: Cancer Biology was controversial. I got a taste of that controversy in many conversations with cancer biologists, including one who described the initiative as “radioactive- people don't even want to acknowledge it's there.”

This probably accounts for some of the challenges we faced in recruiting a meaningful sample, and to some extent in peer review. Regarding the former, my sedulous and perseverant postdoc, Danny Benjamin- working together with some great undergraduate research assistants- devised and implemented all sorts of methods to boost recruitment. In the end, we were able to get a good-sized (and, it turns out, representative) sample. But this is a tribute to Danny's determination.

Our article came in for some pretty harsh comments on initial peer review. In particular, one referee seemed fiendishly hostile to the RP:CB. The reviewer was critical of our focusing on xenograft experiments, which “we now know are impossible to evaluate due to technical reasons.” Yes- that's right, we NOW know this. What we were trying to determine was whether people could predict this!

The reviewer also seemed to pre-judge the replication studies (as well as the very definition of reproducibility, which is very slippery): “we already know that the fundamental biological discovery reported in several of these has been confirmed by other published papers and by drug development efforts in biopharma.” But our survey was not asking people to predict whether fundamental biological discoveries were true. We were asking whether particular experiments- when replicated based on publicly available protocols- could produce the same relationships.

The referee was troubled by our reducing reproducibility to a binary (yes/no). That was something we struggled with in design. But forecasting exercises are only useful insofar as events are verifiable and objective (there is no point in asking for forecasts if we can't define the goalposts, or if the goalposts move once we see the results). We toyed with creating a jury to referee reproducibility- and using jury judgments to verify forecasts. But in addition to being almost completely impractical, it would be methodologically dubious: forecasts would- in the end- be forecasts of jury judgments, not of objectively verifiable data. To be a good forecaster, you'd need to peer into the souls of the jurors, as well as the machinery of the experiments themselves. But we were trying to study scientific judgment, not social judgment.
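
To make concrete what a verifiable, binary outcome buys you: once each replication attempt can be scored yes/no, individual forecasts can be evaluated with standard tools such as the Brier score. Here is a minimal sketch in Python using made-up probabilities and outcomes (illustrative only- not necessarily the scoring used in our paper):

# The Brier score is the mean squared difference between a forecast
# probability and the observed binary outcome (0 = did not replicate,
# 1 = replicated). Lower is better; always guessing 50% scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: one researcher's probabilities for six replication attempts.
forecasts = [0.9, 0.7, 0.4, 0.8, 0.2, 0.6]
outcomes  = [1,   0,   0,   1,   0,   1]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.150 for these made-up numbers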

Our paper- in the end- potentially pours gasoline/petrol/das Benzin on a fiery debate about reproducibility (i.e. not only do many studies not reproduce- but also, scientists have limited awareness of which studies will reproduce). Yet we caution against facile conclusions. For one, there were some good forecasters in our sample. But perhaps more importantly, ours is one study- one 'sampling' of reality, subject to all the limitations that come with methodology, chance, and our own very human struggles with bias. In the end, I think the findings are hopeful insofar as they suggest that part of what we need to work on in science is not merely designing and reporting experiments, but learning to make proper inferences (and communicating effectively) about the generalizability of experimental results. Those inferential skills seem on display with one of our star forecasters, Yale grad student Taylor Sells (named on our leaderboard): “We often joke about the situations under which things do work, like it has to be raining and it's a Tuesday for it to work properly…as a scientist, we're taught to be very skeptical of even published results… I approached [the question of whether studies would reproduce] from a very skeptical point of view.”

BibTeX

@Manual{stream2017-1418,
    title = {The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2017,
    month = jul,
    day = 5,
    url = {https://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/}
}

MLA

Jonathan Kimmelman. "The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?" Web blog post. STREAM research. 05 Jul 2017. Web. 14 Oct 2024. <https://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/>

APA

Jonathan Kimmelman. (2017, Jul 05). The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce? [Web log post]. Retrieved from https://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/


Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs


In my experience, peer review greatly improves a manuscript in the vast majority of cases. There are times, however, when peer review improves a manuscript on one, less important axis while impoverishing it on another, more important one. This is the case with our recent article in Annals of Neurology.

Briefly, we assembled a sample of FDA-approved neurological drugs, as well as a matched sample of neurological drugs that did not receive FDA approval but instead stalled in development (i.e., a 3-year pause in testing). We then used clinicaltrials.gov to identify trials of drugs in both groups, and determined the proportion of trials that were published for approved drugs as well as for FDA non-approved drugs. We found- not surprisingly- that trials involving stalled neurological drugs were significantly less likely to be published. What was, for us, the bigger surprise was that the proportion of trials published at 5 years or more after closure was a mere 32% for stalled neurological drugs (56% for licensed drugs). Think about what that means in terms of the volume of information we lose, and the disrespect we show to neurological patients who volunteer their bodies to test drugs that show themselves to be ineffective and/or unsafe.

We shopped the manuscript around – eventually landing at Annals of Neurology. The paper received glowing reviews. Referee 1: “The research is careful and unbiased and the conclusions sound and impactful.” Referee 2: “This is an excellent and very important paper. It rigorously documents a very important issue in clinical trial conduct and reporting. The authors have done a superb job of identifying a crucial question, studying it carefully and fairly with first-rate quantification, and presenting the results in a clear, well-written, and illuminating manner… I have no major concerns, but some small points may be helpful…” Ka-ching!

However, after we submitted minor revisions, the manuscript was sent to a statistical referee who was highly critical of elements that seemed minor, given the thrust of the manuscript. [Disclosure: from here forward, this blog reflects my opinion but not necessarily the opinion of my two co-authors.] We were told to expunge the word “cohort” from the manuscript (since there was variable follow-up time). Odd, but not worth disputing. We were urged “to fit a Cox model from time of completion of the trial to publication, with a time-varying covariate that is set to 0 until the time of FDA approval, at which time it is changed to 1. The associated coefficient of this covariate is the hazard ratio for publication comparing approved drugs to unapproved drugs.” That seemed fastidious- we're not estimating survival of a drug to make policy here- but not unreasonable. We were told we must remove our Kaplan-Meier curves of time to publication. I guess. So we did it and resubmitted- keeping some of our unadjusted analyses in (of course labeling them as unadjusted).
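
For readers unfamiliar with the referee's suggestion, here is a rough sketch of that kind of model using the lifelines library in Python. The data below are entirely made up (trial identifiers, follow-up intervals in months, and approval times are hypothetical), so this shows the structure of the analysis rather than reproducing ours:

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: one row per (trial, follow-up interval).
# "approved" is the time-varying covariate: 0 until the drug's FDA approval, 1 afterward.
# "published" indicates whether the trial's results were published at the end of the interval.
rows = [
    (1, 0, 18, 0, 0), (1, 18, 40, 1, 1),   # drug approved at month 18; trial published at month 40
    (2, 0, 24, 0, 0), (2, 24, 30, 1, 1),
    (3, 0, 12, 0, 0), (3, 12, 60, 1, 0),   # approved drug, trial still unpublished (censored)
    (4, 0, 60, 0, 0),                      # stalled drug, never published (censored)
    (5, 0, 33, 0, 1),                      # stalled drug, published at month 33
    (6, 0, 48, 0, 0),
]
df = pd.DataFrame(rows, columns=["trial_id", "start", "stop", "approved", "published"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="trial_id", event_col="published",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) on "approved" is the hazard ratio for publication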

The reviewer pressed further. He/she wanted all presentation of proportions and aggregate data removed (here I will acknowledge a generous aspect of the referee and editors- he/she agreed to use track changes to cut content from the ms [I am not being snarky here- this went beyond normal protocol at major journals]). We executed a “search and destroy” mission for just about all percentages in the manuscript: in this case, we cut two tables' worth of data describing the particular drugs, the characteristics of trials in our sample, and the proportions of trials for which data were obtainable in abstract form or on company websites. Although one referee had signed off (“My high regard for this paper persists. Differences in views concerning the statistical approach are understandable. I see the paper as providing very important data about the trajectory of publication or non-publication of data depending on the licensing fate of the drug being studied, and see the survival analysis as bolstering that approach”), the editors insisted on our making the revisions requested by the reviewer.

So in the end, we had to present what we believe to be an impoverished, data-starved, and somewhat less accessible version in Annals of Neurology. And not surprisingly, upon publication, we were (fairly) faulted online for not providing enough information about our sample. To our mind, the **real** version- the one we think incorporates the referee's productive suggestions while respecting our discretion as authors- can be accessed here. And we are making our complete dataset available here.

BibTeX

@Manual{stream2017-1325,
    title = {Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 5,
    url = {https://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/}
}

MLA

Jonathan Kimmelman. "Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs" Web blog post. STREAM research. 05 Jun 2017. Web. 14 Oct 2024. <https://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/>

APA

Jonathan Kimmelman. (2017, Jun 05). Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs [Web log post]. Retrieved from https://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/


Accelerated Drug Approval and Health Inequality


Since the 1960s, the U.S. FDA has served as a model for drug regulation around the world with its stringent standards for approval of new drugs. Increasingly, however, a coalition of libertarians, patient advocates, and certain commercial interests has been pressing for a relaxation of these standards. Examples of legislative initiatives that would weaken regulatory standards of evidence for drug approval include the “Regrow Act,” the “21st Century Cures Act,” and various “Right to Try” laws passed in U.S. states.

Much has been written in support of- and against- relaxation of current regulatory standards. Typically, these debates are framed in terms of a conflict between public welfare (i.e., the public needs to be protected from unproven and potentially dangerous drugs) and individual choice (i.e., desperately ill patients are entitled to make their own personal decisions about risky new drugs).

In a recent commentary, my co-author Alex London and I take a different tack on this debate. Rather than framing it as “public welfare” vs. “individual choice,” we examine how relaxed standards for drug approval would subtly redistribute the burdens of uncertainty, in ways that raise questions of fairness. We suggest weakened standards would shift greater burdens of uncertainty a) from advantaged populations to ones that already suffer greater burdens from medical uncertainty; b) from research systems toward healthcare systems; c) from private and commercial payers toward public payers; and d) from comprehending and voluntary patients toward less comprehending and less voluntary patients. We hope our analysis stimulates a more probing discussion of the way regulatory standards determine how medical uncertainty is distributed.

BibTeX

@Manual{stream2016-1090,
    title = {Accelerated Drug Approval and Health Inequality},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2016,
    month = jul,
    day = 18,
    url = {https://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/}
}

MLA

Jonathan Kimmelman. "Accelerated Drug Approval and Health Inequality" Web blog post. STREAM research. 18 Jul 2016. Web. 14 Oct 2024. <https://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/>

APA

Jonathan Kimmelman. (2016, Jul 18). Accelerated Drug Approval and Health Inequality [Web log post]. Retrieved from https://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/


Why clinical translation cannot succeed without failure


Attrition in drug development- that is, the failure of drugs that show promise in animal studies to show efficacy when tested in patients- is often viewed as a source of inefficiency. Surely, some attrition is just that. However, in our recent Feature article in eLife, my longtime collaborator Alex London and I argue that some attrition and failure in drug development directly and indispensably contributes to the evidence base used to develop drugs and practice medicine.

How so? We offer five reasons. Among them are the facts that negative drug trials provide a read on the validity of theories driving drug development, and that they provide clarity about how far clinicians can extend the label of approved drugs. Another is that it is far less costly to deploy cheap (but error-prone) methods to quickly screen vast oceans and continents of drug/indication/dose/co-intervention combinations. To be clear- our argument is not that failure in drug development is a necessary evil. Rather, we are arguing that at least some failure is constitutive of a healthy research enterprise.

So what does this mean for policy? For one, much of the information produced in unsuccessful drug development remains locked inside the filing cabinets of drug companies (see our BMJ and BJP articles). For another, even the information that is published is probably underutilized (see, for example, Steven Greenberg's analysis of how “negative” basic science findings are underutilized in the context of inclusion body myositis). Our analysis also suggests that attempts to reduce certain sources of attrition in drug development (e.g. shortened approval times; use of larger samples or more costly but probative methods in early phase trials) seem likely to lead to other sorts of inefficiencies.

One question our paper does not address is: what is the socially optimal rate of failure in drug development (and how far have we departed from that optimum)? This question is impossible to answer without information about the number of drug candidates being developed against various indications, the costs of trials for those treatments, base rates of success for various indications, and other variables. We nevertheless hope our article might inspire efforts by economists and modellers to estimate an optimum for given disease areas. One thing we think such an analysis is likely to show is that we are currently underutilizing the information generated in unsuccessful translation trajectories.

BibTeX

@Manual{stream2015-921,
    title = {Why clinical translation cannot succeed without failure},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2015,
    month = nov,
    day = 27,
    url = {https://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/}
}

MLA

Jonathan Kimmelman. "Why clinical translation cannot succeed without failure" Web blog post. STREAM research. 27 Nov 2015. Web. 14 Oct 2024. <https://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/>

APA

Jonathan Kimmelman. (2015, Nov 27). Why clinical translation cannot succeed without failure [Web log post]. Retrieved from https://www.translationalethics.com/2015/11/27/why-clinical-translation-cannot-succeed-without-failure/


Is it ok for patients to pay for their own clinical trials?


Most trials are funded by public sponsors, charities, or private drug developers. Austere research funding environments, and the growing engagement of patient communities, have encouraged many to seek alternative funding. One such alternative is patient funding. In the August 6 issue of Cell Stem Cell, my co-authors Alex London and Dani Wenner ask whether “patient funded trials” represent an opportunity for research systems, or a liability.

Our answer: liability.

Current regulatory systems train the self-interest of conventional funders and scientists on the pursuit of well-justified, rigorously designed, and efficient clinical trials. These regulatory systems have little purchase on patients, or on clinics that offer patient-funded trials. Indeed, patient-funded trials create a niche whereby clinics can market unproven interventions in the guise of a trial. Do a few Google searches for patient-funded trials and you'll see what can flourish under this funding model.

On the other hand, our denunciation of the model is not categorical. Provided there is a system in place for independently vetting the quality of design and supporting evidence- and for preventing such studies from pre-empting other worthy scientific efforts- patient-funded trials may be ethically viable.

Until those mechanisms are in place, academic medical centers should refuse to host such studies.


Edit (2015-09-08): Dani Wenner's name was misspelled as “Danni” in the original posting. We regret the error.

BibTeX

@Manual{stream2015-816,
    title = {Is it ok for patients to pay for their own clinical trials?},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2015,
    month = aug,
    day = 14,
    url = {https://www.translationalethics.com/2015/08/14/is-it-ok-for-patients-to-pay-for-their-own-clinical-trials/}
}

MLA

Jonathan Kimmelman. "Is it ok for patients to pay for their own clinical trials?" Web blog post. STREAM research. 14 Aug 2015. Web. 14 Oct 2024. <https://www.translationalethics.com/2015/08/14/is-it-ok-for-patients-to-pay-for-their-own-clinical-trials/>

APA

Jonathan Kimmelman. (2015, Aug 14). Is it ok for patients to pay for their own clinical trials? [Web log post]. Retrieved from https://www.translationalethics.com/2015/08/14/is-it-ok-for-patients-to-pay-for-their-own-clinical-trials/


Search, Bias, Flotsam and False Positives in Preclinical Research


Photo credit: RachelEllen 2006

If you could change one thing- and only one thing- in preclinical proof of principle research to improve its clinical generalizability, what would it be? Require larger sample sizes? Randomization? Total data transparency?

In the May 2014 issue of PLoS Biology, my co-authors Uli Dirnagl and Jeff Mogil offer the following answer: clearly label preclinical studies as either “exploratory” or “confirmatory” studies.

Think of the downed jetliner Malaysia Airlines Flight 370. To find it, you need to explore vast swaths of open seas, using as few resources as possible. Such approaches are going to be very sensitive, but also prone to false positives.   Before you deploy expensive, specialized ships and underwater vehicles to locate the plane, you want to confirm that the signal identified in exploration is real.

So it is in preclinical research as well. Exploratory studies are aimed at identifying strategies that might be useful for treating disease- scanning the ocean for a few promising signals. The vast majority of preclinical studies today are exploratory in nature. They use small sample sizes, flexible designs, short study durations, surrogate measures of response, and many different techniques to demonstrate an intervention's promise. Fast and frugal, but susceptible to bias and random variation.

Right now, the standard practice is to move straight into clinical development on the basis of this exploratory information. Instead, we ought to be running confirmatory studies first. These would involve prespecified preclinical designs, large sample sizes, long durations, etc. Such studies are more expensive, but can effectively rule out random variation and bias in declaring a drug promising.
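
As a rough, back-of-envelope illustration of why confirmation costs more, here is a small Python sketch (hypothetical effect sizes, not drawn from our paper) of how the required sample size per group grows as the effect you are trying to confirm shrinks toward something realistic:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical scenario: an exploratory experiment is sized only to detect a very
# large effect (Cohen's d = 1.0), while a confirmatory study is sized to detect a
# more modest, realistic effect (d = 0.5), both at 80% power and two-sided alpha = 0.05.
for label, d in [("exploratory (d = 1.0)", 1.0), ("confirmatory (d = 0.5)", 0.5)]:
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                       alternative="two-sided")
    print(f"{label}: ~{n_per_group:.0f} animals per group")

# Prints roughly 17 per group for the exploratory scenario and 64 per group
# for the confirmatory one- a fourfold difference in animals, time, and cost.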

Our argument has implications for regulatory and IRB review of early phase studies, journal publication, and funding of research. Clearly labeling studies as one or the other would put consumers of this information on notice for the error tendencies of the study. An “exploratory” label tells reviewers that the intervention is not yet ready for clinical development- but also, that reviewers ought to relax their standards, somewhat, for experimental design and transparency. “Confirmatory,” on the other hand, would signal to reviewers and others that the study is meant to directly inform clinical development decisions- and that reviewers should evaluate very carefully whether effect sizes are confounded by random variation, bias, use of an inappropriate experimental system (i.e. threats to construct validity), or idiosyncratic features of the experimental system (i.e. threats to external validity).

BibTeX

@Manual{stream2014-525,
    title = {Search, Bias, Flotsam and False Positives in Preclinical Research},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2014,
    month = may,
    day = 23,
    url = {https://www.translationalethics.com/2014/05/23/search-bias-flotsam-and-false-positives-in-preclinical-research/}
}

MLA

Jonathan Kimmelman. "Search, Bias, Flotsam and False Positives in Preclinical Research" Web blog post. STREAM research. 23 May 2014. Web. 14 Oct 2024. <https://www.translationalethics.com/2014/05/23/search-bias-flotsam-and-false-positives-in-preclinical-research/>

APA

Jonathan Kimmelman. (2014, May 23). Search, Bias, Flotsam and False Positives in Preclinical Research [Web log post]. Retrieved from https://www.translationalethics.com/2014/05/23/search-bias-flotsam-and-false-positives-in-preclinical-research/


In Memoriam for Kathy Glass


I first met Kathy in August 2001 when, newly arrived in Montreal with a totally useless PhD in molecular genetics, I approached her, hat in hand, looking for a postdoctoral position in Biomedical Ethics. Actually, my hat wasn't in hand- it was on my head- I had a week earlier accidentally carved a canyon in my scalp when I left the spacer off my electric razor. Apparently, Kathy wasn't put off by my impertinent attire, and she hired me. That was the beginning of a beautiful mentorship and, years later, as the director of the Biomedical Ethics Unit, Kathy hired me as an Assistant Prof.

More than any one person I can think of, I owe Kathy my career. Kathy was a great teacher. She kindled in me- and others around her- a passion for research ethics, and a recognition of the way that science, method, law, and ethics constitute not separately contended arenas, but an integrated whole. After the life of her mentor- Benjy Freedman- was tragically cut short, Kathy picked up Benjy's torch and led the Clinical Trial Research Group here at McGill. Together with her CTRG colleagues, Kathy published a series of landmark papers on such issues as the use of placebo comparators, risk and pediatric research, the (mis)conduct of duplicative trials, and the testing of gene therapies- papers that belong in any respectable research ethics syllabus. I use them myself. Kathy led the CTRG- and for that matter, the BMEU- with fierce conviction and an unshakable fidelity to the weakest and most debilitated. Yet she also fostered an intellectually cosmopolitan environment, where dissenting voices were welcomed. And then softly disabused of their dissent.

Kathy was also a great mentor. She supervised 24 Master’s and doctoral students- many of whom went on to successful careers as bioethicists around the world- and many of whom show great promise as they continue their studies. Kathy also supervised 6 postdocs- 5 landed good academic jobs. Not bad. But what was most inspiring about Kathy was not her ability to energize talent. To some degree, talent runs on its own batteries. Instead, it was in the way Kathy was able to get pretty good, honest work out of less talented- but earnest- students. Kathy was elite, but not an elitist.

A mutual colleague has described Kathy as self-effacing. She took pleasure in her achievements, but still greater pleasure in the achievements of her collaborators and students. Over the last few weeks, I have fielded countless queries from colleagues far and wide- highly influential bioethicists who worked with her like Carl Elliott and Leigh Turner in Minnesota, or Michael McDonald in Vancouver. And look around you in this chapel and you will see some more leading lights of bioethics and clinical trials- Charles Weijer, Bartha Knoppers, Trudo Lemmens, Stan Shapiro, to name a few. Their presence and deep affection testify to Kathy's personal and professional impact, as does the recognition accorded by the Canadian Bioethics Society when Kathy received the Lifetime Achievement Award in 2011.

Kathy was a fundamentally decent human being. She confronted an unusual amount of personal adversity- the death of her son, her early experience with cancer and its later recurrence, the untimely death of her mentor- with courage, dignity, and a resilience that inspired all around her. Her work speaks so convincingly in part because it is informed by these personal experiences.

After her retirement and when she was able to muster the strength- and navigate the ice on Peel Street- Kathy would show up at my research group meetings and participate in discussions. I speak for my group- and also my colleagues in research ethics- when I say our universe will be smaller and a little less inviting without the presence of this gentle, inquisitive, selfless, and righteous woman.

-Jonathan Kimmelman, April 17, 2014

BibTeX

@Manual{stream2014-498,
    title = {In Memorium for Kathy Glass},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2014,
    month = apr,
    day = 19,
    url = {https://www.translationalethics.com/2014/04/19/in-memorium-for-kathy-glass/}
}

MLA

Jonathan Kimmelman. "In Memorium for Kathy Glass" Web blog post. STREAM research. 19 Apr 2014. Web. 14 Oct 2024. <https://www.translationalethics.com/2014/04/19/in-memorium-for-kathy-glass/>

APA

Jonathan Kimmelman. (2014, Apr 19). In Memorium for Kathy Glass [Web log post]. Retrieved from https://www.translationalethics.com/2014/04/19/in-memorium-for-kathy-glass/


Missing Reports: Research Biopsy in Cancer Trials


A growing number of drug trials are collecting tissue to determine whether the drug hits its molecular target. These are called “pharmacodynamics” studies. And in cancer, many pharmacodynamics studies involve collection of tumor tissue through biopsies. These procedures are painful, and are performed solely to answer scientific questions. That is, they generally have no diagnostic or clinical value. As such, some commentators worry about their ethics.

In a recent issue of Clinical Cancer Research, my Master's student Gina Freeman and I report on publication practices for pharmacodynamics studies involving tumor biopsy. The basic idea is this: the ethical justification for such invasive research procedures rests on a claim that they are scientifically valuable. However, if they are never published, it is harder to argue that they have a sound scientific justification. So we set out to determine how frequently results are published, and the reasons why some results are never reported. Briefly, we found that a third of promised analyses are not published- which is more or less in line with the frequency of nonpublication for trials in general. We also found that researchers who perform pharmacodynamics studies regard reporting quality as fair to poor, and many perceive the most common reason for nonpublication to be “strategic considerations” (as in: the result does not fit the narrative of the overall trial).

Does our article support a definitive statement about the ethics of research biopsy in cancer trials? No. But it does point to a number of ways the ethical justification can be strengthened- and to questions clinical investigators and ethics boards should be asking when designing and/or reviewing protocols involving research biopsy. (graphic: cole007 2011)

BibTeX

@Manual{stream2012-51,
    title = {Missing Reports: Research Biopsy in Cancer Trials},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2012,
    month = oct,
    day = 4,
    url = {https://www.translationalethics.com/2012/10/04/missing-reports-research-biopsy-in-cancer-trials/}
}

MLA

Jonathan Kimmelman. "Missing Reports: Research Biopsy in Cancer Trials" Web blog post. STREAM research. 04 Oct 2012. Web. 14 Oct 2024. <https://www.translationalethics.com/2012/10/04/missing-reports-research-biopsy-in-cancer-trials/>

APA

Jonathan Kimmelman. (2012, Oct 04). Missing Reports: Research Biopsy in Cancer Trials [Web log post]. Retrieved from https://www.translationalethics.com/2012/10/04/missing-reports-research-biopsy-in-cancer-trials/

