The Landscape of Early Phase Research

by Spencer Phillips Hey

The intervention landscape, rendered as a three-dimensional “ensemble space.”

As Jonathan is fond of saying: Drugs are poisons. It is only through an arduous process of testing and refinement that a drug is eventually transformed into a therapy. Much of this transformative work falls to the early phases of clinical testing. In early phase studies, researchers are looking to identify the optimal values for the various parameters that make up a medical intervention: the dose, schedule, mode of administration, co-interventions, and so on. Once these have been locked down, the “intervention ensemble” (as we call it) is ready for the later, confirmatory phases of testing, where its clinical utility is either confirmed or disconfirmed in randomized controlled trials.

In our article in the latest issue of the Kennedy Institute of Ethics Journal, Jonathan and I present a novel conceptual tool for thinking about the early phases of drug testing. As suggested in the image above, we represent this process as an exploration of a 3-dimensional “ensemble space.” Each x-y point on the landscape corresponds to some combination of parameters–a particular dose and delivery site, say. The z-axis is then the risk/benefit profile of that combination. This model allows us to reframe the goal of early phase testing as an exploration of the intervention landscape–a systematic search through the space of possible parameters, looking for peaks that show promise of clinical utility.
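
To make the picture concrete, here is a minimal toy sketch (my own illustration, not from the article) of the ensemble-space idea: a small grid of hypothetical dose and schedule combinations, each mapped to an invented risk/benefit score, with the “search” reduced to finding the highest point on that grid.

# A toy "ensemble space": every parameter combination gets a hypothetical
# risk/benefit score (the z-axis). All names and numbers are invented.
from itertools import product

doses = [1, 5, 10, 50, 100]        # mg (hypothetical)
schedules = ["daily", "weekly"]    # dosing schedules (hypothetical)

def risk_benefit(dose, schedule):
    """Toy stand-in for the z-axis: benefit with diminishing returns,
    minus a risk term that grows with dose and dosing frequency."""
    benefit = dose / (dose + 20.0)
    risk = 0.002 * dose * (2.0 if schedule == "daily" else 1.0)
    return benefit - risk

# On this picture, early phase testing is a search over the grid for peaks.
best = max(product(doses, schedules), key=lambda combo: risk_benefit(*combo))
print("Most promising ensemble in this toy landscape:", best)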

We then go on to show how the concept of ensemble space can also be used to analyze the comparative advantages of alternative research strategies. For example, given that the landscape is initially unknown, where should researchers begin their search? Should they jump straight into the deep end, so to speak, in the hopes of hitting the peak on the first try? Or should they proceed more cautiously–methodically working their way out from the least-risky regions, mapping the overall landscape as they go?

I won’t give away the ending here, because you should go read the article! Although readers familiar with Jonathan’s and my work can probably infer which of those options we would support. (Hint: Early phase research must be justified on the basis of knowledge-value, not direct patient-subject benefit.)

UPDATE: I’m very happy to report that this paper has been selected as the editor’s pick for the KIEJ this quarter!

BibTeX

@Manual{stream2014-567,
    title = {The Landscape of Early Phase Research},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2014,
    month = jul,
    day = 4,
    url = {https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/}
}

MLA

Spencer Phillips Hey. "The Landscape of Early Phase Research" Web blog post. STREAM research. 04 Jul 2014. Web. 28 Mar 2024. <https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/>

APA

Spencer Phillips Hey. (2014, Jul 04). The Landscape of Early Phase Research [Web log post]. Retrieved from https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/


The Ethics of Unequal Allocation

by Spencer Phillips Hey

In the standard model for randomized clinical trials, patients are allocated on an equal, or 1:1, basis between two treatment arms. This means that at the conclusion of patient enrollment, roughly equal numbers of patients should be receiving the new experimental treatment and the standard treatment or placebo. This 1:1 allocation ratio is the most efficient from a statistical perspective, since it requires the fewest patient-subjects to achieve a given level of statistical power.
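
To see the statistical point, here is a quick back-of-the-envelope sketch (mine, not from the post): holding the total sample size and the outcome variance fixed, the standard error of the treatment-control difference is smallest when the two arms are the same size, which is what gives 1:1 allocation its efficiency.

# Standard error of the difference in means under different allocation ratios,
# for a fixed (hypothetical) total sample size and a common outcome variance.
import math

N = 300          # total sample size (hypothetical)
sigma = 1.0      # assumed common standard deviation of the outcome

for k_exp, k_ctrl in [(1, 1), (2, 1), (3, 1)]:
    n_exp = N * k_exp // (k_exp + k_ctrl)
    n_ctrl = N - n_exp
    se = sigma * math.sqrt(1.0 / n_exp + 1.0 / n_ctrl)
    print(f"{k_exp}:{k_ctrl} allocation -> {n_exp}/{n_ctrl} patients, SE of difference = {se:.4f}")

# 1:1 yields the smallest standard error, and hence the greatest power.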

However, many recent late-phase trials of neurological interventions have randomized their participants in an unequal ratio, e.g., on a 2:1 or 3:1 basis. In the case of 2:1 allocation, this means that twice as many patient-subjects receive the new (and unproven) treatment as receive the standard treatment or placebo. This practice is typically justified by the assumption that it is easier to enroll patient-subjects in a trial if they believe they are more likely to receive the new/active treatment.

In an article from this month’s issue of Neurology, Jonathan and I present three arguments for why investigators and oversight boards should be wary of unequal allocation. Specifically, we argue that the practice (1) leverages patients’ therapeutic misconceptions; (2) potentially interacts with blinding and thereby undermines a study’s internal validity; and (3) fails to minimize overall patient burden by introducing unnecessary inefficiencies into the research enterprise. Although these reasons do not universally rule out the practice–and indeed we acknowledge some circumstances under which unequal allocation is still desirable–they are sufficient to demand a more compelling justification for its use.

The point about inefficiency reflects a trend in Jonathan’s and my work–elucidating the consequences for research ethics when we look across a series of trials, instead of just within one protocol. To drive this point home here, consider that the rate of successful translation in neurology is estimated at around 10%. This means that for every 10 drugs that enter the clinical pipeline, only 1 will ever be shown effective. Given the limited pool of human and material resources available for research, and the fact that a 2:1 allocation ratio typically requires about 12% more patients to achieve a given level of statistical power, the increased sample size and cost per trial may mean that we use up our testing resources before we ever find that 1 effective drug.
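
The roughly 12% figure can be checked with a short calculation (my own, not taken from the article): assuming equal outcome variances in the two arms, a k:1 allocation needs (k + 1)^2 / (4k) times the total sample size of a 1:1 trial to achieve the same standard error, and hence the same power.

# Relative total sample size required under k:1 allocation to match the power
# of a 1:1 trial (equal-variance assumption).
def inflation(k):
    return (k + 1) ** 2 / (4 * k)

for k in (2, 3):
    print(f"{k}:1 allocation needs about {100 * (inflation(k) - 1):.1f}% more patients than 1:1")

# 2:1 -> 12.5% more patients; 3:1 -> 33.3% more.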

BibTeX

@Manual{stream2014-468,
    title = {The Ethics of Unequal Allocation},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2014,
    month = jan,
    day = 6,
    url = {https://www.translationalethics.com/2014/01/06/unequal-allocation/}
}

MLA

Spencer Phillips Hey. "The Ethics of Unequal Allocation" Web blog post. STREAM research. 06 Jan 2014. Web. 28 Mar 2024. <https://www.translationalethics.com/2014/01/06/unequal-allocation/>

APA

Spencer Phillips Hey. (2014, Jan 06). The Ethics of Unequal Allocation [Web log post]. Retrieved from https://www.translationalethics.com/2014/01/06/unequal-allocation/


No trial stands alone

by Spencer Phillips Hey

“The result of this trial speaks for itself!”

This often-heard phrase contains a troubling assumption: that an experiment can stand entirely on its own. That it can be interpreted without reference to other trials and other results. In a couple of articles published over the last two weeks, my co-authors and I deliver a one-two punch to this idea.

The first punch is thrown at the US FDA’s use of “assay sensitivity,” a concept defined as a clinical trial’s “ability to distinguish between an effective and an ineffective treatment.” This concept is intuitively appealing, since all it seems to say is that a trial should be well-designed. A well-designed clinical trial should be able to answer its question and distinguish an effective from an ineffective treatment. However, assay sensitivity has been interpreted to mean that placebo controls are “more scientific” than active controls. This is because superiority to placebo seems to guarantee that the experimental agent is effective, whereas superiority or equivalence to an active control does not rule out the possibility that both agents are actually ineffective. This, the argument goes, makes placebo-controlled trials more “self-contained,” easier to interpret, and therefore methodologically superior.

In a piece in Perspectives in Biology and Medicine, Charles Weijer and I dismantle the above argument by showing, first, that all experiments rely on some kind of “external information”–be it information about an active control’s effects, pre-clinical data, the methodological validity of various procedures, etc.; second, that a placebo can suffer from all of the same woes that might afflict an active control (e.g., the “placebo effect” is not one consistent effect, but can vary depending upon the type or color of placebo used), so there is no guarantee of assay sensitivity in a placebo-controlled trial; and finally, that the more a trial’s results can be placed into context and interpreted in light of other trials, the more potentially informative the trial is.

This leads to punch #2: How should we think about trials in context? In a piece in Trials, Charles Heilig, Charles Weijer, and I present the “Accumulated Evidence and Research Organization (AERO) Model,” a graph-theoretic approach to representing the sequence of experiments and clinical trials that constitute a translational research program. The basic idea is to illustrate each trial in the context of its research trajectory using a network graph (or a directed acyclic graph, if you want to get technical), with color-coded nodes representing studies and their outcomes, and arrows representing the intellectual lineage between studies. This work is open-access, so I won’t say too much more about it here, but instead encourage you to go and give it a look. We provide a number of illustrations to introduce the graphing algorithm, and then apply the approach to a case study involving inconsistent results across a series of tuberculosis trials.
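
As a rough, self-contained sketch of the general idea (my simplification, not the authors' algorithm or notation), a research trajectory can be encoded as a directed acyclic graph in a few lines: each study is a node tagged with an outcome, and edges point back to the earlier work it builds on.

# A toy AERO-style trajectory: studies as nodes, lineage as directed edges.
# Study names, stages, and outcomes below are entirely hypothetical.
from dataclasses import dataclass, field

@dataclass
class Study:
    name: str
    stage: str                    # e.g., "preclinical", "phase 1", "phase 3"
    outcome: str                  # e.g., "positive", "negative", "inconclusive"
    builds_on: list = field(default_factory=list)   # names of parent studies

trajectory = {
    "mouse-efficacy": Study("mouse-efficacy", "preclinical", "positive"),
    "phase1-safety": Study("phase1-safety", "phase 1", "positive", ["mouse-efficacy"]),
    "phase3-rct": Study("phase3-rct", "phase 3", "negative", ["phase1-safety"]),
}

def lineage(name):
    """Trace a study's intellectual lineage back to its earliest ancestor
    (following the first listed parent at each step, for simplicity)."""
    chain = [name]
    while trajectory[chain[-1]].builds_on:
        chain.append(trajectory[chain[-1]].builds_on[0])
    return list(reversed(chain))

print(" -> ".join(lineage("phase3-rct")))
# mouse-efficacy -> phase1-safety -> phase3-rct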

In sum: Trials should not be thought of as self-contained. This is not even desirable! Rather, all trials (or at least trials in translational medicine) should be thought of as nodes in a complex, knowledge-producing network, each one adding something to our understanding. But none ever truly “speaks for itself,” because none should ever stand alone.

BibTeX

@Manual{stream2013-236,
    title = {No trial stands alone},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = 2013,
    month = jun,
    day = 16,
    url = {https://www.translationalethics.com/2013/06/16/no-trial-stands-alone/}
}

MLA

Spencer Phillips Hey. "No trial stands alone" Web blog post. STREAM research. 16 Jun 2013. Web. 28 Mar 2024. <https://www.translationalethics.com/2013/06/16/no-trial-stands-alone/>

APA

Spencer Phillips Hey. (2013, Jun 16). No trial stands alone [Web log post]. Retrieved from https://www.translationalethics.com/2013/06/16/no-trial-stands-alone/


The Problem with Models

by Jonathan Kimmelman

Chicago in plastic and balsa. If only animal models were as convincing as the one pictured above from the Museum of Science and Industry. 


The August 7 issue of Nature ran a fascinating feature on how many scientists are reassessing the value of animal models used in neurodegenerative preclinical research (“Standard Model,” by Jim Schnabel).

The story centers on the striking failure to translate promising preclinical findings into treatments for various neurodegenerative diseases. In one instance, a highly promising drug, minocycline, actually worsened symptoms in patients with ALS. In other instances, impressive results in mice have not been reproducible. According to the article, a cluster of patient advocacy groups, including organizations like Prize4Life and the non-profit biotechnology company ALS TDI, is spearheading a critical look at standard preclinical models and methodologies.

Much of the report is about the limitations of mouse models. Scientists from the Jackson Laboratories (perhaps the world’s largest supplier of research mice) warn that many mouse strains are genetically heterogeneous; others develop new mutations on breeding. Other problems described in the article: infections that spread in mouse colonies, problems matching sex or litter membership in experimental and control groups, and small sample sizes. The result is Metallica-like levels of noise in preclinical studies. Combine this with the nonpublication of negative studies, and you get many false positives.
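
To illustrate that closing point, here is a small simulation (my own, with invented numbers, not from the article): even when a drug has no effect at all, small group sizes combined with a file drawer of unpublished negative results leave a published record consisting entirely of false positives.

# Simulate many small, underpowered studies of a drug with no true effect,
# and "publish" only the nominally significant improvements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm = 1000, 8

published_positives = 0
for _ in range(n_studies):
    treated = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05 and treated.mean() > control.mean():
        published_positives += 1   # a spurious "positive" finding gets published

print(f"{published_positives} of {n_studies} null studies end up published as positive findings")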

The article bristles with interesting tidbits. One that struck me is the organizational challenge of changing the culture of model system use. According to the article, many academic researchers and grant referees have yet to warm to criticisms of the models, and some scientists and advocates are asking for leadership from the NIH. Another striking point, alluded to in the article’s closing, is a fragmentation of animal models that mirrors personalized medicine.

“Drugs into bodies.” That’s the mantra of translational research. It is an understandable sentiment, but also a pernicious one if it means more poorly conceived experiments on dying patients. What is needed is a way to make animal models–and the guidelines pertaining to them–as alluring as supermodels. (photo credit: Celikens 2008)

BibTeX

@Manual{stream2008-131,
    title = {The Problem with Models},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2008,
    month = oct,
    day = 10,
    url = {https://www.translationalethics.com/2008/10/10/the-problem-with-models/}
}

MLA

Jonathan Kimmelman. "The Problem with Models" Web blog post. STREAM research. 10 Oct 2008. Web. 28 Mar 2024. <https://www.translationalethics.com/2008/10/10/the-problem-with-models/>

APA

Jonathan Kimmelman. (2008, Oct 10). The Problem with Models [Web log post]. Retrieved from https://www.translationalethics.com/2008/10/10/the-problem-with-models/


STAIRing at Method in Preclinical Studies

by Jonathan Kimmelman

Medical research, we all know, is highly prone to bias. Researchers are, after all, human in their tendencies to mix desire with assessment. So too are trial participants. Since the late 1950s, epidemiologists have introduced a number of practices to clinical research designed to reduce or eliminate sources of bias, including randomization of patients, masking (or “blinding”) of volunteers and physician-investigators, and statistical analysis.


In past entries, I have argued for extending such methodological rigor to preclinical research. This position has three defenses. First, phase 1 human trials predicated on weak preclinical evidence are insufficiently valuable to justify their execution. Second, methodologically weak preclinical research is an abuse of animals. Third, publication of methodologically weak studies is a form of “publication pollution.”

Two recent publications underscore the need for greater rigor in preclinical studies. The first is a paper in the journal Stroke (published online August 14, 2008; also reprinted in the Journal of Cerebral Blood Flow and Metabolism). Many of the paper’s authors have doggedly pursued the cause of preclinical methodological rigor by publishing a series of meta-analyses of preclinical stroke studies. In this article, Malcolm Macleod and co-authors outline eight practices that journal editors and referees should look for when reviewing preclinical studies. Many are urged by STAIR (Stroke Therapy Academic Industry Roundtable), a consortium organized in 1999 to strengthen the quality of stroke research.

Their recommendations are:

1- Animals (precise species, strain, and other details should be provided)
2- Sample-size calculation
3- Inclusion and exclusion criteria for animals
4- Randomization of animals
5- Allocation concealment
6- Reporting of animals excluded from analysis
7- Masked outcome assessment
8- Reporting interest conflicts and funding

There’s an interesting, implicit claim in this paper: journal editors and referees partly bear the blame for poor methodological quality in preclinical research. In my next post, I will turn to a related news article about preclinical studies in Amyotrophic Lateral Sclerosis. (photo credit: 4BlueEyes, 2006)

BibTeX

@Manual{stream2008-132,
    title = {STAIRing at Method in Preclinical Studies},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2008,
    month = oct,
    day = 6,
    url = {https://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/}
}

MLA

Jonathan Kimmelman. "STAIRing at Method in Preclinical Studies" Web blog post. STREAM research. 06 Oct 2008. Web. 28 Mar 2024. <https://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/>

APA

Jonathan Kimmelman. (2008, Oct 06). STAIRing at Method in Preclinical Studies [Web log post]. Retrieved from https://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/

