Uncaging Validity in Preclinical Research

by

Valerie Henderson

High attrition rates in drug development bedevil developers, ethicists, health care professionals, and patients alike. Increasingly, commentators suggest that the attrition problem partly reflects prevalent methodological flaws in the conduct and reporting of preclinical studies.

Preclinical efficacy studies involve administering a putative drug to animals (usually mice or rats) that model the disease experienced by humans.  The outcome sought in these laboratory experiments is efficacy, making them analogous to Phase 2 or 3 clinical trials.

However, that’s where the similarities end. Unlike trials, preclinical efficacy studies employ only a limited repertoire of the methodological practices aimed at reducing threats to clinical generalization. These quality-control measures, including randomization, blinding, and the performance of a power calculation, are standard in the clinical realm.
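To make the stakes concrete, here is a minimal sketch of the power-calculation step such guidelines call for, using the normal approximation to a two-sided, two-sample test (the effect sizes below are purely illustrative):

```python
import math
from statistics import NormalDist

def animals_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-arm preclinical study,
    via the normal approximation to a two-sided, two-sample test.
    effect_size is the standardized difference (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

print(animals_per_group(1.0))  # 16 per group for a "large" effect
print(animals_per_group(0.5))  # 63 per group; halving d roughly quadruples n
```

Even a large standardized effect of 1.0 calls for roughly 16 animals per group; a more modest effect quickly pushes the requirement far beyond the group sizes reported in many preclinical studies.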

This mismatch in scientific rigor hasn’t gone unnoticed, and numerous commentators have urged better design and reporting of preclinical studies.   With this in mind, the STREAM research group sought to systematize current initiatives aimed at improving the conduct of preclinical studies.  The results of this effort are reported in the July issue of PLoS Medicine.

In brief, we identified 26 guideline documents, extracted their recommendations, and classified each according to the particular validity type (internal, construct, or external) it was aimed at addressing. We also identified the practices that were most commonly recommended, and used these to create a STREAM checklist for designing and reviewing preclinical studies.

We found that guidelines mainly focused on practices aimed at shoring up internal validity and, to a lesser extent, construct validity.  Relatively few guidelines addressed threats to external validity.  Additionally, we noted a preponderance of guidance on preclinical neurological and cerebrovascular research; oddly, none addressed cancer drug development, an area with perhaps the highest rate of attrition.

So what’s next?  We believe the consensus recommendations identified in our review provide a starting point for developing preclinical guidelines in realms like cancer drug development.  We also think our paper identifies some gaps in the guidance literature – for example, a relative paucity of guidelines on the conduct of preclinical systematic reviews.  Finally, we suggest our checklist may be helpful for investigators, IRB members, and funding bodies charged with designing, executing, and evaluating preclinical evidence.

Commentaries and lay accounts of our findings can be found in PLoS Medicine, CBC News, McGill Newsroom and Genetic Engineering & Biotechnology News.

BibTeX

@Manual{stream2013-300,
    title = {Uncaging Validity in Preclinical Research},
    organization = {STREAM research},
    author = {Valerie Henderson},
    address = {Montreal, Canada},
    date = {2013-08-05},
    url = {http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/}
}

MLA

Valerie Henderson. "Uncaging Validity in Preclinical Research." Web blog post. STREAM research. 05 Aug 2013. Web. 21 Sep 2017. <http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/>

APA

Valerie Henderson. (2013, Aug 05). Uncaging Validity in Preclinical Research [Web log post]. Retrieved from http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/


Hypothesis Generator

by

Is good medical research directed at testing hypotheses? Or is there a competing model of good medical research that sees hypothesis generating research as a valuable end? In an intriguing essay appearing in the August 21, 2009 issue of Cell, Maureen O’Malley and co-authors show how current funding mechanisms at agencies like NIH and NSF center their model of scientific merit around the testing of hypotheses (e.g. does molecule X cause phenomenon Y? does drug A outperform drug B?). However, as the authors (and others) point out, many areas of research are not based on such “tightly bounded spheres of inquiry.” They suggest that a “more complete representation of the iterative, interdisciplinary, and multidimensional relationships between various modes of scientific investigation could improve funding agency guidelines.”

The questions presented by this article have particular relevance for translational clinical research. As I argue in my book, the traditional clinical trial apparatus, and the corresponding discourse on research ethics, is overwhelmingly directed towards the type of hypothesis testing typified by the randomized controlled trial. However, many early phase studies involve a large component of hypothesis generating research as well. The challenge for O’Malley et al.’s argument, and mine, is

(photo credit: Gouldy99, 2008)

BibTeX

@Manual{stream2010-74,
    title = {Hypothesis Generator},
    organization = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2010-02-09},
    url = {http://www.translationalethics.com/2010/02/09/hypothesis-generator/}
}

MLA

Jonathan Kimmelman. "Hypothesis Generator." Web blog post. STREAM research. 09 Feb 2010. Web. 21 Sep 2017. <http://www.translationalethics.com/2010/02/09/hypothesis-generator/>

APA

Jonathan Kimmelman. (2010, Feb 09). Hypothesis Generator [Web log post]. Retrieved from http://www.translationalethics.com/2010/02/09/hypothesis-generator/


The Problem with Models

by

Chicago in plastic and balsa. If only animal models were as convincing as the one pictured above from the Museum of Science and Industry. 


The August 7 issue of Nature ran a fascinating feature on how many scientists are reassessing the value of animal models used in neurodegenerative preclinical research (“Standard Model,” by Jim Schnabel).

The story centers on the striking failure to translate promising preclinical findings into treatments for various neurodegenerative diseases. In one instance, a highly promising drug, minocycline, actually worsened symptoms in patients with ALS. In other instances, impressive results in mice have not been reproducible. According to the article, a cluster of patient advocacy groups, including organizations like Prize4Life and the non-profit biotechnology company ALS TDI, is spearheading a critical look at standard preclinical models and methodologies.

Much of the report is about limitations of mouse models. Scientists from the Jackson Laboratory (perhaps the world’s largest supplier of research mice) warn that many mouse strains are genetically heterogeneous; others develop new mutations on breeding. Other problems described in the article: infections that spread in mouse colonies, problems matching sex or litter membership in experimental and control groups, and small sample sizes. The result is Metallica-like levels of noise in preclinical studies. Combine that with nonpublication of negative studies, and the result is many false positives.

The article bristles with interesting tidbits. One that struck me is the organizational challenge of changing the culture of model system use. According to the article, many academic researchers and grant referees have yet to warm to criticisms of models, and some scientists and advocates are asking for leadership from the NIH. Another striking point, alluded to in the article’s closing, is a fragmentation of animal models that mirrors personalized medicine.

“Drugs into bodies.” That’s the mantra of translational research. It is an understandable sentiment, but also a pernicious one if it means more poorly conceived experiments on dying patients. What is needed is a way to make animal models, and the guidelines pertaining to them, as alluring as supermodels. (photo credit: Celikens 2008)

BibTeX

@Manual{stream2008-131,
    title = {The Problem with Models},
    organization = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2008-10-10},
    url = {http://www.translationalethics.com/2008/10/10/the-problem-with-models/}
}

MLA

Jonathan Kimmelman. "The Problem with Models." Web blog post. STREAM research. 10 Oct 2008. Web. 21 Sep 2017. <http://www.translationalethics.com/2008/10/10/the-problem-with-models/>

APA

Jonathan Kimmelman. (2008, Oct 10). The Problem with Models [Web log post]. Retrieved from http://www.translationalethics.com/2008/10/10/the-problem-with-models/


Masks and Random Thoughts on Preclinical Research Validity

by

Epidemiologists and biostatisticians have evolved numerous ways of reducing bias in clinical trials. Randomizing patients and masking them to their treatment allocation are two. Another is masking the clinicians who assess their outcomes.


Why are these simple measures so rarely used in preclinical animal studies? And do animal studies show exaggerated effects as a consequence of poor methodology?
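Neither measure is hard to implement. Here is a minimal sketch for a 1:1 two-arm animal study, using Python's stdlib only (the cage-label scheme is hypothetical):

```python
import random

def randomize_and_mask(animal_ids, seed=None):
    """Allocate animals 1:1 to treatment or control, and issue opaque
    cage labels so outcome assessors cannot infer the arm. Returns
    (labels, key): labels are shown to assessors; the key stays sealed
    until all outcomes are scored."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    key = {a: ("treatment" if i < half else "control")
           for i, a in enumerate(ids)}
    # Assign labels in an independent random order, so the label number
    # itself carries no information about the allocation.
    order = list(ids)
    rng.shuffle(order)
    labels = {a: f"M{n:03d}" for n, a in enumerate(order, start=1)}
    return labels, key

labels, key = randomize_and_mask(range(20), seed=7)
```

The point of returning the allocation key separately is that it can be held by someone other than the person scoring outcomes, which is all that masked outcome assessment requires.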

The March 2008 issue of Stroke reports a “meta-meta-analysis” of 13 studies comprising over fifteen thousand animals. Perhaps surprisingly, the study did not show a relationship between the use of randomization or masked outcome assessment and the size of treatment effect. It did, however, show a positive relationship between size of treatment effect and failure to mask investigators during treatment allocation.

This is probably the largest analysis of its kind. It isn’t perfect: publication bias is very likely to skew the analysis. For example, size of treatment effect is likely to strongly influence whether a study gets published. If so, effects of methodological bias could be obscured; preclinical researchers might simply be stuffing their methodologically rigorous studies in their filing cabinets because no effect was observed.
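The filing-cabinet worry is easy to illustrate with a toy simulation (all numbers hypothetical): when only estimates that cross a significance-style threshold get published, the pooled published effect is inflated even when the true effect is zero.

```python
import random
import statistics

def pooled_published_effect(true_effect=0.0, n_studies=2000, n_per_study=10,
                            publish_all=False, seed=0):
    """Toy model: each 'study' estimates a standardized effect from
    n_per_study noisy observations (sd = 1). If publish_all is False,
    only estimates crossing roughly the two-sided 5% threshold in the
    positive direction get published and pooled."""
    rng = random.Random(seed)
    se = 1 / n_per_study ** 0.5   # standard error of one study's mean
    threshold = 1.96 * se         # rough significance cutoff
    published = []
    for _ in range(n_studies):
        estimate = statistics.fmean(rng.gauss(true_effect, 1.0)
                                    for _ in range(n_per_study))
        if publish_all or estimate > threshold:
            published.append(estimate)
    return statistics.fmean(published) if published else float("nan")
```

With `publish_all=True` the pooled estimate sits near the true effect of zero; with selective publication, the average of the "published" estimates comes out well above zero, despite nothing real being there.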

The conclusion I draw? Preclinical researchers should randomize and mask anyway. There is some evidence it matters. Moreover, the logical rationale is overwhelming, and the inconvenience for investigators seems more than manageable. (photo credit: Chiara Marra 2007)

BibTeX

@Manual{stream2008-175,
    title = {Masks and Random Thoughts on Preclinical Research Validity},
    organization = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = {2008-02-28},
    url = {http://www.translationalethics.com/2008/02/28/masks-and-random-thoughts-on-preclinical-research-validity/}
}

MLA

Jonathan Kimmelman. "Masks and Random Thoughts on Preclinical Research Validity." Web blog post. STREAM research. 28 Feb 2008. Web. 21 Sep 2017. <http://www.translationalethics.com/2008/02/28/masks-and-random-thoughts-on-preclinical-research-validity/>

APA

Jonathan Kimmelman. (2008, Feb 28). Masks and Random Thoughts on Preclinical Research Validity [Web log post]. Retrieved from http://www.translationalethics.com/2008/02/28/masks-and-random-thoughts-on-preclinical-research-validity/



All content © STREAM research

admin@translationalethics.com
Twitter: @stream_research
3647 rue Peel
Montreal QC H3A 1X1