Predicting Risk, Benefit, and Success in Research

by Spencer Phillips Hey

The task set before clinical investigators is not easy. They are supposed to answer pressing scientific questions while using very few resources and exposing patient-subjects to as little risk as possible. In other words, we expect them to be rigorous scientists, stewards of the medical research enterprise, and guardians of their patients’ interests all at the same time. While the duties that emerge from these various roles sometimes pull in different directions, they intersect and align at the point of clinical trial design. Insofar as a trial is well-designed–meaning that it is likely to answer its scientific question, make efficient use of research resources, and minimize risk–the investigator has successfully discharged all of these duties.

What is more, there is a common activity underlying all of these requirements of good trial design: prediction. When investigators design studies, they are making an array of predictions about what they think will happen. When they decide which interventions to compare in a randomized trial, they are making predictions about risk/benefit balance. When they power a study, they are making a prediction about treatment effect sizes. The accuracy of these predictions can mean the difference between an informative and an uninformative outcome–a safe or an unsafe study.
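
To make that last prediction concrete: here is a minimal sketch, in Python with statsmodels, of how a predicted effect size drives the size of a study. The effect size and error rates are illustrative assumptions, not values from any real protocol:

```python
# A minimal power-calculation sketch. All numbers are illustrative
# assumptions, not values from any real protocol.
from statsmodels.stats.power import TTestIndPower

predicted_effect = 0.4   # predicted standardized effect size (Cohen's d)
alpha = 0.05             # two-sided type I error rate
power = 0.80             # desired power (1 - type II error rate)

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=predicted_effect,
                                 alpha=alpha, power=power,
                                 alternative='two-sided')
print(f"Patients required per arm: {n_per_arm:.0f}")

# If the prediction was too optimistic, the study is underpowered:
true_power = analysis.solve_power(effect_size=0.25, nobs1=n_per_arm,
                                  alpha=alpha, alternative='two-sided')
print(f"Actual power if the true effect is only 0.25: {true_power:.2f}")
```

An inaccurate effect-size prediction does not just waste resources; it exposes patient-subjects to a study that may be incapable of answering its question.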

The importance of these predictions is already implicitly recognized in many research ethics policies. Indeed, research policies often include requirements that studies should be based on a systematic evaluation of the available evidence. These requirements are really just another way of saying that the predictions underlying a study should be as accurate as possible given the state of available knowledge. Yet, trial protocols do not typically contain explicit predictions–e.g., likelihood statements about the various outcomes or events of ethical interest. And this makes it much more difficult to know whether or not investigators are adequately discharging their duties to the scientific community, to their patient-subjects, and to the research system as a whole.

In an article from this month’s Journal of Medical Ethics, I argue that investigators ought to be making explicit predictions in their protocols. For example, they should state exactly how likely they think it is that their study will meet its primary endpoint or exactly how many adverse events they expect to see. Doing so would then allow us to apply the tools of prediction science–to compare these predictions with outcomes and finally get a direct read on just how well investigators make use of the available body of evidence. This would, in turn, provide a number of other benefits–from facilitating more transparent ethical reviews to reducing the number of uninformative trials. It would also provide opportunities for other research stakeholders–like funding agencies–to better align their decision-making with the state of evidence.
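
To give a flavor of what those tools look like: the Brier score is one standard way to compare probability forecasts against realized outcomes. A toy sketch, with entirely hypothetical predictions and outcomes:

```python
# Toy sketch: scoring protocol predictions against trial outcomes.
# The predictions and outcomes below are entirely hypothetical.

# Stated probability that each trial meets its primary endpoint:
predictions = [0.8, 0.6, 0.9, 0.3, 0.7]
# What actually happened (1 = endpoint met, 0 = not met):
outcomes    = [1,   0,   1,   0,   0]

# Brier score: mean squared error of the probability forecasts.
# 0.0 is perfect foresight; 0.25 is what guessing 50/50 every time earns.
brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)
print(f"Brier score: {brier:.3f}")
```

Scores like this, accumulated over many protocols, would give a direct, quantitative read on how well an investigator (or a whole field) converts available evidence into foresight.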

The broad point here is that in the era of evidence-based medicine, we should be using this evidence to design better trials. Applying the science of prediction to clinical research allows us to take steps in this direction.

BibTeX

@Manual{stream2015-812,
    title = {Predicting Risk, Benefit, and Success in Research},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2015,
    month = jun,
    day = 26,
    url = {http://www.translationalethics.com/2015/06/26/predicting-risk-benefit-and-success-in-research/}
}

MLA

Spencer Phillips Hey. "Predicting Risk, Benefit, and Success in Research" Web blog post. STREAM research. 26 Jun 2015. Web. 09 Nov 2024. <http://www.translationalethics.com/2015/06/26/predicting-risk-benefit-and-success-in-research/>

APA

Spencer Phillips Hey. (2015, Jun 26). Predicting Risk, Benefit, and Success in Research [Web log post]. Retrieved from http://www.translationalethics.com/2015/06/26/predicting-risk-benefit-and-success-in-research/


Nope– It’s Still Not Ethical

by Spencer Phillips Hey

Last year, Jonathan and I published a critique of unequal allocation ratios in late-phase trials. In these trials, patient-subjects are randomly allocated among the treatment arms in unequal proportions, such as 2:1 or 3:1, rather than the traditional equal (1:1) proportion. Strangely, despite introducing an additional burden (i.e., requiring larger sample sizes), the practice of unequal allocation is often defended as being “more ethical”. In that piece, published in Neurology, we showed that these purported ethical advantages did not stand up to careful scrutiny.

In a new article at Clinical Trials, Jonathan and I extend this line of argument to trials that use outcome-adaptive allocation. In an outcome-adaptive trial, the allocation ratio is dynamically adjusted over the course of the study, becoming increasingly weighted toward the better-performing arm. In contrast to the fixed but unequal ratios described above, outcome-adaptive ratios can sometimes reduce the sample size necessary to answer the study question. However, this reduction in cost and patient burden is not guaranteed. In fact, it occurs only when the difference between the observed effect sizes is large. And since there is no way to know in advance what this difference is going to be, these potential gains in efficiency due to outcome-adaptive designs are something of a gamble.
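
To see the gamble concretely, here is a toy simulation of one outcome-adaptive scheme–a Thompson-style rule with Beta posteriors. The response rates are invented, and this is a generic illustration, not the specific design analyzed in our article:

```python
# Toy simulation of outcome-adaptive (Thompson-style) allocation.
# Response rates are invented; this is a generic illustration, not
# the specific design analyzed in the Clinical Trials article.
import random

def adaptive_trial(p_better, p_worse, n_patients, seed=1):
    rng = random.Random(seed)
    successes, failures = [0, 0], [0, 0]   # Beta(1, 1) priors on each arm
    assignments = []
    for _ in range(n_patients):
        # Allocation tilts toward whichever arm looks better so far.
        draws = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                 for i in range(2)]
        arm = 0 if draws[0] > draws[1] else 1   # arm 0 is the better arm
        assignments.append(arm)
        responded = rng.random() < (p_better if arm == 0 else p_worse)
        if responded:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return assignments

assignments = adaptive_trial(p_better=0.5, p_worse=0.2, n_patients=200)
print("Share on better arm, first 50 enrolled:",
      assignments[:50].count(0) / 50)
print("Share on better arm, last 50 enrolled: ",
      assignments[-50:].count(0) / 50)
```

When the true response rates are far apart, the ratio tilts quickly and the design can save patients; re-run it with nearly equal rates and, on average, the allocation hovers around 1:1, leaving the extra complexity with little to show for it. Note also what the output foreshadows about enrollment order, which brings us to the equipoise argument.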

Nevertheless, just as we saw with fixed unequal ratios, proponents of outcome-adaptive trials claim that this allocation scheme is “more ethical”. Setting aside the sample size issue, they argue that outcome-adaptive trials better accommodate clinical equipoise by collapsing the distinction between research and care. As it is sometimes put rhetorically: Would you rather be the last subject treated in a trial or the first subject treated in practice? The outcome-adaptive trial dissolves any ethical tension in this question. The treatment will be the same either way.

Of course, any long-time reader of this blog will recognize the misunderstanding of clinical equipoise embodied in that question. The salient issue is not a comparison between the last subject enrolled in a study and the first patient treated in the context of clinical care. Rather, it is about ensuring that no subject is systematically disadvantaged by participating in a trial (and that all participants receive competent medical care). In that case, the relevant rhetorical question needs to be rephrased as follows: Would you rather be the first patient enrolled in a study or the last? In a traditional 1:1 RCT design, clinical equipoise dissolves the ethical tension in that question. But for an outcome-adaptive design, you should hope to be the last–and that is a serious problem.

BibTeX

@Manual{stream2015-693,
    title = {Nope– It’s Still Not Ethical},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2015,
    month = feb,
    day = 10,
    url = {http://www.translationalethics.com/2015/02/10/nope-its-still-not-ethical/}
}

MLA

Spencer Phillips Hey. "Nope– It’s Still Not Ethical" Web blog post. STREAM research. 10 Feb 2015. Web. 09 Nov 2024. <http://www.translationalethics.com/2015/02/10/nope-its-still-not-ethical/>

APA

Spencer Phillips Hey. (2015, Feb 10). Nope– It’s Still Not Ethical [Web log post]. Retrieved from http://www.translationalethics.com/2015/02/10/nope-its-still-not-ethical/


The Landscape of Early Phase Research

by Spencer Phillips Hey

[Figure: a three-dimensional rendering of the ensemble-space landscape]

As Jonathan is fond of saying: Drugs are poisons. It is only through an arduous process of testing and refinement that a drug is eventually transformed into a therapy. Much of this transformative work falls to the early phases of clinical testing. In early phase studies, researchers are looking to identify the optimal values for the various parameters that make up a medical intervention. These parameters are things like dose, schedule, mode of administration, co-interventions, and so on. Once these have been locked down, the “intervention ensemble” (as we call it) is ready for the second phase of testing, where its clinical utility is either confirmed or disconfirmed in randomized controlled trials.

In our piece from this latest issue of the Kennedy Institute of Ethics Journal, Jonathan and I present a novel conceptual tool for thinking about the early phases of drug testing. As suggested in the image above, we represent this process as an exploration of a 3-dimensional “ensemble space.” Each x-y point on the landscape corresponds to some combination of parameters–a particular dose and delivery site, say. The z-axis is then the risk/benefit profile of that combination. This model allows us to re-frame the goal of early phase testing as an exploration of the intervention landscape–a systematic search through the space of possible parameters, looking for peaks that have promise of clinical utility.

We then go on to show how the concept of ensemble space can also be used to analyze the comparative advantages of alternative research strategies. For example, given that the landscape is initially unknown, where should researchers begin their search? Should they jump out into the deep end, so to speak, in the hopes of hitting the peak on the first try? Or should they proceed more cautiously–methodically working their way out from the least-risky regions, mapping the overall landscape as they go?
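
Here is a toy rendering of that question–a made-up landscape over a small grid of dose and schedule combinations, explored with the cautious strategy. Nothing here is taken from the article; the landscape function and search rule are invented for illustration:

```python
# Toy "ensemble space": x = dose level, y = schedule, z = risk/benefit.
# The landscape function and search rule are invented for illustration.

def risk_benefit(dose, schedule):
    # A made-up landscape with a single interior peak at (6, 3).
    return 40 - (dose - 6) ** 2 - (schedule - 3) ** 2

doses, schedules = range(10), range(6)   # candidate parameter values

def cautious_search(start, n_trials):
    """Expand outward from the visited region, always testing the most
    promising untested neighbor (one parameter step away)."""
    visited = {start}
    for _ in range(n_trials - 1):
        frontier = {(d + dd, s + ds)
                    for (d, s) in visited
                    for dd, ds in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                    if d + dd in doses and s + ds in schedules
                    and (d + dd, s + ds) not in visited}
        if not frontier:
            break
        visited.add(max(frontier, key=lambda p: risk_benefit(*p)))
    return max(visited, key=lambda p: risk_benefit(*p))

# Start from the least-risky corner: lowest dose, sparsest schedule.
print(cautious_search(start=(0, 0), n_trials=15))  # -> (6, 3)
```

A “deep end” strategy would instead sample far-out points immediately–potentially hitting the peak sooner, but at the cost of testing high-risk combinations whose properties are entirely unknown.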

I won’t give away the ending here, because you should go read the article! Although readers familiar with Jonathan’s and my work can probably infer which of those options we would support. (Hint: Early phase research must be justified on the basis of knowledge-value, not direct patient-subject benefit.)

UPDATE: I’m very happy to report that this paper has been selected as the editor’s pick for the KIEJ this quarter!

BibTeX

@Manual{stream2014-567,
    title = {The Landscape of Early Phase Research},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2014,
    month = jul,
    day = 4,
    url = {http://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/}
}

MLA

Spencer Phillips Hey. "The Landscape of Early Phase Research" Web blog post. STREAM research. 04 Jul 2014. Web. 09 Nov 2024. <http://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/>

APA

Spencer Phillips Hey. (2014, Jul 04). The Landscape of Early Phase Research [Web log post]. Retrieved from http://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/


The Ethics of Unequal Allocation

by Spencer Phillips Hey

In the standard model for randomized clinical trials, patients are allocated on an equal, or 1:1, basis between two treatment arms. This means that at the conclusion of patient enrollment, there should be roughly equal numbers of patients receiving the new experimental treatment and the standard treatment or placebo. This 1:1 allocation ratio is the most efficient from a statistical perspective, since it requires the fewest patient-subjects to achieve a given level of statistical power.
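
The efficiency claim is textbook power algebra rather than anything specific to our article; a quick sketch:

```latex
% Variance of the estimated treatment difference, assuming a common
% outcome variance \sigma^2 across arms of size n_1 and n_2:
\[
  \operatorname{Var}(\hat{\Delta}) \;=\; \sigma^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)
\]
% Splitting a fixed total N = n_1 + n_2 in ratio k:1 gives
% n_1 = kN/(k+1) and n_2 = N/(k+1), so
\[
  \frac{1}{n_1} + \frac{1}{n_2} \;=\; \frac{(k+1)^2}{kN},
\]
% which is minimized at k = 1. Holding power fixed, a k:1 trial thus
% needs (k+1)^2/(4k) times the patients of a 1:1 trial:
\[
  \frac{N_{k:1}}{N_{1:1}} \;=\; \frac{(k+1)^2}{4k}, \qquad
  \frac{N_{2:1}}{N_{1:1}} \;=\; \frac{9}{8} \;=\; 1.125 .
\]
```

Under these equal-variance assumptions, a 2:1 trial needs about 12.5% more patients for the same power–the source of the “12% more” figure cited below.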

However, many recent late-phase trials of neurological interventions have randomized their participants in an unequal ratio, e.g., on a 2:1 or 3:1 basis. In the case of 2:1 allocation, this means that there are twice as many patient-subjects receiving the new (and unproven) treatment as receiving the standard treatment or placebo. This practice is typically justified by the assumption that it is easier to enroll patient-subjects in a trial if they believe they are more likely to receive the new/active treatment.

In an article from this month’s issue of Neurology, Jonathan and I present three arguments for why investigators and oversight boards should be wary of unequal allocation. Specifically, we argue that the practice (1) leverages patients’ therapeutic misconceptions; (2) potentially interacts with blinding and thereby undermines a study’s internal validity; and (3) fails to minimize overall patient burden by introducing unnecessary inefficiencies into the research enterprise. Although these reasons do not universally rule out the practice–and indeed we acknowledge some circumstances under which unequal allocation is still desirable–they are sufficient to demand a more compelling justification for its use.

The point about inefficiency reflects a trend in Jonathan’s and my work–elucidating the consequences for research ethics when we look across a series of trials, instead of just within one protocol. So to drive this point home here, consider that the rate of successful translation in neurology is estimated at around 10%. This means that for every 10 drugs that enter the clinical pipeline, only 1 will ever be shown effective. Given the limited pool of human and material resources available for research and the fact that a 2:1 allocation ratio typically requires 12% more patients to achieve a given level of statistical power, this increased sample size and cost on a per trial basis may mean that we use up our testing resources before we ever find that 1 effective drug.

BibTeX

@Manual{stream2014-468,
    title = {The Ethics of Unequal Allocation},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2014,
    month = jan,
    day = 6,
    url = {http://www.translationalethics.com/2014/01/06/unequal-allocation/}
}

MLA

Spencer Phillips Hey. "The Ethics of Unequal Allocation" Web blog post. STREAM research. 06 Jan 2014. Web. 09 Nov 2024. <http://www.translationalethics.com/2014/01/06/unequal-allocation/>

APA

Spencer Phillips Hey. (2014, Jan 06). The Ethics of Unequal Allocation [Web log post]. Retrieved from http://www.translationalethics.com/2014/01/06/unequal-allocation/


No trial stands alone

by Spencer Phillips Hey

“The result of this trial speaks for itself!”

This often-heard phrase contains a troubling assumption: that an experiment can stand entirely on its own. That it can be interpreted without reference to other trials and other results. In a couple of articles published over the last two weeks, my co-authors and I deliver a one-two punch to this idea.

The first punch is thrown at the US FDA’s use of “assay sensitivity,” a concept defined as a clinical trial’s “ability to distinguish between an effective and an ineffective treatment.” This concept is intuitively appealing, since all it seems to say is that a trial should be well-designed. A well-designed clinical trial should be able to answer its question and distinguish an effective from an ineffective treatment. However, assay sensitivity has been interpreted to mean that placebo controls are “more scientific” than active controls. This is because superiority to placebo seems to guarantee that the experimental agent is effective, whereas superiority or equivalence to an active control does not rule out the possibility that both agents are actually ineffective. This makes placebo-controlled trials more “self-contained,” easier to interpret, and therefore, methodologically superior.

In a piece in Perspectives in Biology and Medicine, Charles Weijer and I dismantle the above argument by showing, first, that all experiments rely on some kind of “external information”–be it information about an active control’s effects, pre-clinical data, the methodological validity of various procedures, etc. Second, that a placebo can suffer from all of the same woes that might afflict an active control (e.g., the “placebo effect” is not one consistent effect, but can vary depending upon the type or color of placebo used), so there is no guarantee of assay sensitivity in a placebo-controlled trial. And finally, that the more a trial’s results can be placed into context, and interpreted in light of other trials, the more potentially informative it is.

This leads to punch #2: How should we think about trials in context? In a piece in Trials, Charles Heilig, Charles Weijer, and I present the “Accumulated Evidence and Research Organization (AERO) Model,” a graph-theoretic approach to representing the sequence of experiments and clinical trials that constitute a translational research program. The basic idea is to illustrate each trial in the context of its research trajectory using a network graph (or a directed acyclic graph, if you want to get technical), with color-coded nodes representing studies and their outcomes, and arrows representing the intellectual lineage between studies. This work is open-access, so I won’t say too much more about it here, but instead encourage you to go and give it a look. We provide a lot of illustrations to introduce the graphing algorithm, and then apply the approach to a case-study involving inconsistent results across a series of tuberculosis trials.
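
For readers who want to tinker with the idea, here is a minimal sketch of an AERO-style graph as a data structure, in Python with networkx. The trial names, outcomes, and color scheme are invented, and this is not the paper’s actual graphing algorithm:

```python
# Minimal sketch of an AERO-style research trajectory as a DAG.
# Trial names, outcomes, and colors are invented for illustration.
import networkx as nx

G = nx.DiGraph()

# Color-coded nodes represent studies and their outcomes.
studies = [
    ("preclinical_mouse",   "positive"),
    ("phase1_dose_finding", "positive"),
    ("phase2_efficacy",     "negative"),
    ("phase2b_high_dose",   "positive"),
    ("phase3_confirmatory", "pending"),
]
colors = {"positive": "green", "negative": "red", "pending": "gray"}
for name, outcome in studies:
    G.add_node(name, outcome=outcome, color=colors[outcome])

# Arrows represent intellectual lineage between studies.
G.add_edges_from([
    ("preclinical_mouse",   "phase1_dose_finding"),
    ("phase1_dose_finding", "phase2_efficacy"),
    ("phase2_efficacy",     "phase2b_high_dose"),  # negative result prompted a redesign
    ("phase2b_high_dose",   "phase3_confirmatory"),
])

assert nx.is_directed_acyclic_graph(G)
# Interpreting a trial "in context" means reading its ancestry:
print(sorted(nx.ancestors(G, "phase3_confirmatory")))
```

Even this stick-figure version makes the central point visible: the meaning of the phase 3 node depends on the path of results leading into it.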

In sum: Trials should not be thought of as self-contained. Self-containment is not even desirable! Rather, all trials (or at least trials in translational medicine) should be thought of as nodes in a complex, knowledge-producing network, each one adding something to our understanding. But none ever truly “speaks for itself,” because none should ever stand alone.

BibTeX

@Manual{stream2013-236,
    title = {No trial stands alone},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2013,
    month = jun,
    day = 16,
    url = {http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/}
}

MLA

Spencer Phillips Hey. "No trial stands alone" Web blog post. STREAM research. 16 Jun 2013. Web. 09 Nov 2024. <http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/>

APA

Spencer Phillips Hey. (2013, Jun 16). No trial stands alone [Web log post]. Retrieved from http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/


How Many Negative Trials Do We Need?

by Spencer Phillips Hey

There is a growing concern in the clinical research community about the number of negative phase 3 trials. Given that phase 3 trials are incredibly expensive to run and involve hundreds or sometimes thousands of patient-subjects, many researchers are now calling for more rigorous phase 2 trials–trials that are more predictive of the phase 3 result–in the hope of reducing the number of phase 3 negatives.

In a focus piece from this week’s Science Translational Medicine, Jonathan and I argue that more predictive phase 2 trials may actually have undesirable ethical consequences–ratcheting up the patient burdens and study costs at a point of greater uncertainty, without necessarily increasing social utility or benefiting the research enterprise as a whole. We articulate four factors that we think ought to guide the level of positive predictivity sought in a (series of) phase 2 trial(s). These are: (1) the upper and lower bounds on evidence needed to establish clinical equipoise and initiate phase 3 testing; (2) the need to efficiently process the volume of novel intervention candidates in the drug pipeline; (3) the need to limit non-therapeutic risks for vulnerable patient-subjects; and (4) the need for decisive phase 3 evidence–either positive or negative–in order to best inform physician practices.
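
To see how the “predictivity” of phase 2 cashes out numerically, consider the positive predictive value of a phase 2 “go” decision. All of the numbers below are invented for illustration; they are not figures from our article:

```python
# Back-of-the-envelope: phase 2 as a diagnostic test for phase 3 success.
# All numbers are invented for illustration.

prior = 0.10        # assumed fraction of pipeline drugs that truly work
sensitivity = 0.80  # P(phase 2 positive | drug works)
specificity = 0.70  # P(phase 2 negative | drug does not work)

# Positive predictive value of a phase 2 "go" decision (Bayes' rule):
ppv = (sensitivity * prior) / (
    sensitivity * prior + (1 - specificity) * (1 - prior))
print(f"P(drug works | positive phase 2) = {ppv:.2f}")   # about 0.23
```

With a low prior probability of efficacy, even a fairly rigorous phase 2 program forwards mostly ineffective drugs to phase 3–and pushing phase 2 specificity higher carries its own costs in patient burden, which is the trade-off at issue in the piece.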

We are confident that these four factors are valid, but they are certainly not exhaustive of the inputs needed to make a robust judgment about the appropriate levels of predictivity needed in phase 2 for a given domain. What are the total costs and benefits of a negative phase 3? How should we weigh these against the costs and benefits of a more rigorous program of phase 2 testing? How many negatives should we tolerate? And at what stage of the development process? Our piece is a first-step toward developing a more comprehensive framework that could provide researchers, funders, policy-makers, and review boards with much needed answers to these important questions.

BibTeX

@Manual{stream2013-44,
    title = {How Many Negative Trials Do We Need?},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2013,
    month = may,
    day = 10,
    url = {http://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/}
}

MLA

Spencer Phillips Hey. "How Many Negative Trials Do We Need?" Web blog post. STREAM research. 10 May 2013. Web. 09 Nov 2024. <http://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/>

APA

Spencer Phillips Hey. (2013, May 10). How Many Negative Trials Do We Need? [Web log post]. Retrieved from http://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/

