No trial stands alone

by Spencer Phillips Hey

“The result of this trial speaks for itself!”

This oft-heard phrase contains a troubling assumption: that an experiment can stand entirely on its own. That it can be interpreted without reference to other trials and other results. In a couple of articles published over the last two weeks, my co-authors and I deliver a one-two punch to this idea.

The first punch is thrown at the US FDA’s use of “assay sensitivity,” a concept defined as a clinical trial’s “ability to distinguish between an effective and an ineffective treatment.” This concept is intuitively appealing, since all it seems to say is that a trial should be well designed: a well-designed clinical trial should be able to answer its question and distinguish an effective from an ineffective treatment. However, assay sensitivity has been interpreted to mean that placebo controls are “more scientific” than active controls. This is because superiority to placebo seems to guarantee that the experimental agent is effective, whereas superiority or equivalence to an active control does not rule out the possibility that both agents are actually ineffective. On this reading, placebo-controlled trials are more “self-contained,” easier to interpret, and therefore methodologically superior.

In a piece in Perspectives in Biology and Medicine, Charles Weijer and I dismantle the above argument by showing, first, that all experiments rely on some kind of “external information,” whether it is information about an active control’s effects, pre-clinical data, or the methodological validity of various procedures. Second, a placebo can suffer from all of the same woes that might afflict an active control (the “placebo effect,” for example, is not one consistent effect, but can vary depending upon the type or color of placebo used), so there is no guarantee of assay sensitivity in a placebo-controlled trial. And finally, the more a trial’s results can be placed into context and interpreted in light of other trials, the more potentially informative the trial is.

This leads to punch #2: how should we think about trials in context? In a piece in Trials, Charles Heilig, Charles Weijer, and I present the “Accumulated Evidence and Research Organization (AERO) Model,” a graph-theoretic approach to representing the sequence of experiments and clinical trials that constitute a translational research program. The basic idea is to illustrate each trial in the context of its research trajectory using a network graph (a directed acyclic graph, if you want to get technical), with color-coded nodes representing studies and their outcomes, and arrows representing the intellectual lineage between studies (see the sketch below). The paper is open-access, so I won’t say too much more about it here, but instead encourage you to go and give it a look. We provide a lot of illustrations to introduce the graphing algorithm, and then apply the approach to a case study involving inconsistent results across a series of tuberculosis trials.
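To make this a bit more concrete, here is a minimal sketch of how a research trajectory could be encoded as a directed acyclic graph in Python, using the networkx library. This is my own toy illustration, not the AERO graphing algorithm from the paper, and the study names, phases, and outcomes are all hypothetical:

# Toy sketch: a translational research trajectory as a DAG.
# Not the AERO model's published algorithm; the studies and
# outcomes below are hypothetical, for illustration only.
import networkx as nx

trajectory = nx.DiGraph()

# Nodes are studies. AERO diagrams color-code outcomes; here the
# outcome is simply stored as a node attribute.
trajectory.add_node("preclinical-1", phase="preclinical", outcome="positive")
trajectory.add_node("phase1-1", phase="1", outcome="positive")
trajectory.add_node("phase2-1", phase="2", outcome="negative")
trajectory.add_node("phase2-2", phase="2", outcome="positive")

# Directed edges capture intellectual lineage: which earlier
# results each trial builds upon.
trajectory.add_edge("preclinical-1", "phase1-1")
trajectory.add_edge("phase1-1", "phase2-1")
trajectory.add_edge("phase1-1", "phase2-2")

# A trajectory should be acyclic: trials only build on earlier work.
assert nx.is_directed_acyclic_graph(trajectory)

# Interpreting a trial "in context" then amounts to walking its
# ancestors: every study it directly or indirectly builds upon.
print(sorted(nx.ancestors(trajectory, "phase2-2")))
# -> ['phase1-1', 'preclinical-1']

The acyclicity check reflects the obvious temporal constraint (a trial can only build on studies that precede it), and the ancestor query is one simple way of recovering the context against which a given result should be read.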

In sum: trials should not be thought of as self-contained. Self-containment is not even desirable! Rather, all trials (or at least trials in translational medicine) should be thought of as nodes in a complex, knowledge-producing network, each one adding something to our understanding. But no trial ever truly “speaks for itself,” because none should ever stand alone.

BibTeX

@Manual{stream2013-236,
    title = {No trial stands alone},
    journal = {STREAM research},
    author = {Spencer Phillips Hey},
    address = {Montreal, Canada},
    date = {2013-06-16},
    url = {http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/}
}

MLA

Spencer Phillips Hey. "No trial stands alone" Web blog post. STREAM research. 16 Jun 2013. Web. 01 Sep 2024. <http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/>

APA

Spencer Phillips Hey. (2013, Jun 16). No trial stands alone [Web log post]. Retrieved from http://www.translationalethics.com/2013/06/16/no-trial-stands-alone/


The Future of Pharmaceutical Regulation

by Jonathan Kimmelman

The October 2008 issue of Nature Reviews Drug Discovery contains a very informative perspective piece on how drug regulators negotiate uncertainty, risk, and benefit when making approval decisions (“Balancing early market access to new drugs with the need for benefit/risk data: a mounting dilemma”). I have long argued that novel biologics like gene transfer will require creative approaches from regulators, because on the one hand, many types of adverse events might be latent and unpredictable, while on the other, many novel biologics will target highly morbid or lethal conditions like primary immunodeficiencies.


The authors (Hans-Georg Eichler et al.) are all employees of drug regulatory agencies in Europe. Not surprisingly, then, the article is balanced and presents drug agencies as making appropriate trade-offs between patient access and public safety. The article studiously avoids any criticism of pharmaceutical companies. And it makes some questionable claims. For example, in several passages it suggests that drug regulation is increasingly “risk averse” (it seems to me the opposite, but who knows?). Another problem is one of asymmetry: the article contains ample evidence that premature approval has had important health and economic costs, but nowhere does it provide clear evidence or anecdotes that delayed approval or restrictive evidentiary standards have had important public health or economic impacts (they might have, but if you are going to suggest that the balance is appropriately struck, you need a clear picture of the benefit side of the equation).

The article contains a number of observations and policy approaches that cry out for careful ethical analysis. Here are two:

1- Drug regulatory agencies accept greater uncertainty about safety and efficacy when new drugs address serious, unmet health needs. Thus, new cancer drugs can be approved on a weaker evidentiary base than new acid reflux drugs (in several instances, new drugs have been approved on the basis of uncontrolled studies using surrogate endpoints, e.g., gefitinib). Why should evidentiary standards be relaxed in this way? And to what degree? If the disease is serious enough and standard care non-existent, then what is the basis for any drug regulation?

2- The article states that “rare drug reactions will continue to be identified only after wider use in the market,” and that more sophisticated approaches to drug safety will increasingly “blur the line between pre-marketing and post-marketing activities.” How will this affect ethics review and oversight? How will privacy protections be maintained in this gulf-stream of flowing health data? How will trial registries absorb and respond to post-marketing studies?

This article contains multitudes. I highly recommend it to anyone interested, as I am, in the problems of uncertainty, risk, and drug regulation. (Photo credit: Alincolnt, schedule 5, 2006)

BibTeX

@Manual{stream2008-127,
    title = {The Future of Pharmaceutical Regulation},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2008-10-28},
    url = {http://www.translationalethics.com/2008/10/28/the-future-of-pharmaceutical-regulation/}
}

MLA

Jonathan Kimmelman. "The Future of Pharmaceutical Regulation" Web blog post. STREAM research. 28 Oct 2008. Web. 01 Sep 2024. <http://www.translationalethics.com/2008/10/28/the-future-of-pharmaceutical-regulation/>

APA

Jonathan Kimmelman. (2008, Oct 28). The Future of Pharmaceutical Regulation [Web log post]. Retrieved from http://www.translationalethics.com/2008/10/28/the-future-of-pharmaceutical-regulation/

