
The Literature Isn’t Just Biased, It’s Also Late to the Party



Animal studies of drug efficacy are an important resource for designing and performing clinical trials. They provide evidence of a drug’s potential clinical utility, inform the design of trials, and establish the ethical basis for testing drugs in humans. Several recent studies suggest that many preclinical investigations are withheld from publication. Such nonreporting likely reflects that private drug developers have little incentive to publish preclinical studies. However, it potentially deprives stakeholders of complete evidence for making risk/benefit judgments and frustrates the search for explanations when drugs fail to recapitulate the promise shown in animals.

In a forthcoming issue of the British Journal of Pharmacology, my co-authors and I investigate how much preclinical efficacy evidence is actually available in the published literature, and when that evidence appears, if at all.

Although we identified a large number of preclinical studies, the vast majority were reported only after publication of the first trial. In fact, for 17% of the drugs in our sample, no efficacy studies were published before the first trial report. And when we performed a similar analysis matching preclinical studies and clinical trials by disease area, the numbers were even more dismal: for more than a third of the indications tested in trials, we were unable to identify any published efficacy studies in models of the same indication.

There are two possible explanations for this observation, both of which have troubling implications. Research teams might not be performing efficacy studies until after trials are initiated and/or published. Though this would seem surprising and inconsistent with ethics policies, FDA regulations do not emphasize the review of animal efficacy data when approving the conduct of phase 1 trials. Another explanation is that drug developers precede trials with animal studies, but withhold them or publish them only after trials are complete. This interpretation also raises concerns, as delay of publication circumvents mechanisms—like peer review and replication—that promote systematic and valid risk/benefit assessment for trials.

The take-home message is this: animal efficacy studies supporting specific trials are often published long after the trial itself is published, if at all. This represents a threat to human protections, animal ethics, and scientific integrity. We suggest that animal care committees, ethics review boards, and biomedical journals take measures to correct these practices, such as requiring prospective registration of preclinical studies or creating publication incentives that are meaningful for private drug developers.


@article{federico2014late,
    title = {The Literature Isn’t Just Biased, It’s Also Late to the Party},
    journal = {STREAM research},
    author = {Carole Federico},
    address = {Montreal, Canada},
    date = 2014,
    month = jun,
    day = 30,
    url = {http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/}
}


Carole Federico. "The Literature Isn’t Just Biased, It’s Also Late to the Party." Web blog post. STREAM research. 30 Jun 2014. Web. 22 Aug 2018. &lt;http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/&gt;


Carole Federico. (2014, Jun 30). The Literature Isn’t Just Biased, It’s Also Late to the Party [Web log post]. Retrieved from http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/

Uncaging Validity in Preclinical Research



High attrition rates in drug development bedevil drug developers, ethicists, health care professionals, and patients alike.  Increasingly, many commentators are suggesting the attrition problem partly relates to prevalent methodological flaws in the conduct and reporting of preclinical studies.

Preclinical efficacy studies involve administering a putative drug to animals (usually mice or rats) that model the disease experienced by humans.  The outcome sought in these laboratory experiments is efficacy, making them analogous to Phase 2 or 3 clinical trials.

However, that’s where the similarities end.  Unlike trials, preclinical efficacy studies employ a limited repertoire of methodological practices aimed at reducing threats to clinical generalization.  These quality-control measures, including randomization, blinding and the performance of a power calculation, are standard in the clinical realm.
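One of those quality-control measures, the power calculation, is easy to illustrate. The following is a minimal sketch in Python using the standard normal approximation for a two-group comparison of means; the effect size, alpha, and power values are conventional illustrative choices, not figures taken from the post:

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Animals needed per group to detect a standardized effect size
    (Cohen's d) in a two-group comparison, two-sided test, by the
    normal approximation. The exact t-based answer is slightly larger
    for small samples."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # value for desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "large" effect (d = 0.8) at the conventional 5% alpha and 80% power:
print(sample_size_per_group(0.8))  # 25 animals per group
# Halving the effect size roughly quadruples the required sample:
print(sample_size_per_group(0.4))  # 99 animals per group
```

The point of the exercise is what underpowered designs imply: with too few animals per group, a genuinely effective drug is likely to be missed, and the positive results that do get published are more likely to be inflated.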

This mismatch in scientific rigor hasn’t gone unnoticed, and numerous commentators have urged better design and reporting of preclinical studies.   With this in mind, the STREAM research group sought to systematize current initiatives aimed at improving the conduct of preclinical studies.  The results of this effort are reported in the July issue of PLoS Medicine.

In brief, we identified 26 guideline documents, extracted their recommendations, and classified each according to the particular validity type – internal, construct, or external – that the recommendation was aimed at addressing.   We also identified practices that were most commonly recommended, and used these to create a STREAM checklist for designing and reviewing preclinical studies.

We found that guidelines mainly focused on practices aimed at shoring up internal validity and, to a lesser extent, construct validity.  Relatively few guidelines addressed threats to external validity.  Additionally, we noted a preponderance of guidance on preclinical neurological and cerebrovascular research; oddly, none addressed cancer drug development, an area with perhaps the highest rate of attrition.

So what’s next?  We believe the consensus recommendations identified in our review provide a starting point for developing preclinical guidelines in realms like cancer drug development.  We also think our paper identifies some gaps in the guidance literature – for example, a relative paucity of guidelines on the conduct of preclinical systematic reviews.  Finally, we suggest our checklist may be helpful for investigators, IRB members, and funding bodies charged with designing, executing, and evaluating preclinical evidence.

Commentaries and lay accounts of our findings can be found in PLoS Medicine, CBC News, McGill Newsroom and Genetic Engineering & Biotechnology News.


@article{henderson2013uncaging,
    title = {Uncaging Validity in Preclinical Research},
    journal = {STREAM research},
    author = {Valerie Henderson},
    address = {Montreal, Canada},
    date = 2013,
    month = aug,
    day = 5,
    url = {http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/}
}


Valerie Henderson. "Uncaging Validity in Preclinical Research." Web blog post. STREAM research. 05 Aug 2013. Web. 22 Aug 2018. &lt;http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/&gt;


Valerie Henderson. (2013, Aug 05). Uncaging Validity in Preclinical Research [Web log post]. Retrieved from http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/

Dirty Windows of Drug Development


Think of clinical trial data as a window on the efficacy and safety of a drug. Think of data protection and trade secrecy as soot. That sooty window is the public’s view of drug safety and efficacy.

According to a recent report in Nature Biotechnology (Feb 2011), medicine may be getting some soapy water and a squeegee, thanks to several policy initiatives at drug regulatory authorities. In Europe, the main drug regulatory authority, EMA, recently issued a policy that will make publicly available “full clinical trial reports”– even for drugs that are not approved for licensure.

The reforms roughly parallel a series of proposed policies at FDA under the FDA Transparency Initiative. Among the items proposed to become publicly accessible: when an application has been submitted to the agency (or withdrawn); whether a significant safety issue triggered a withdrawal; and the reasons why the agency turned down an application.

Disclosure of such information carries some risk. Contrary to common belief, information disclosure does not level all power and influence, as some parties are better equipped to aggregate, analyze, and act on information. No doubt, such transparency will be used by various parties to harangue FDA for otherwise enlightened regulatory decisions.

However, what the public sees of safety and efficacy information – to mix metaphors – is merely the tip of the iceberg. The Nature Biotechnology report, for example, describes the case of Pfizer’s antidepressant Edronax (reboxetine). Published trials included data on 1600 patients, but in actuality, trials involved 4600 patients. When the complete data sets were obtained and reviewed, the drug turned out to be no better than placebo, and possibly unsafe (read more here). (Yet one more reason to wonder what the Canadian Institutes of Health Research was thinking when it appointed the Medical Director of Pfizer Canada to its Governing Council.)

Any transparency reforms would provide a much better basis for a) circumventing ethically suspect information practices so that healthcare systems can assess the totality of evidence on drug safety and efficacy, and b) getting a better understanding of the drug development process – warts and all. (photo credit: Lulu Vision 2007)


@article{kimmelman2011dirty,
    title = {Dirty Windows of Drug Development},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2011,
    month = feb,
    day = 9,
    url = {http://www.translationalethics.com/2011/02/09/dirty-windows-of-drug-development/}
}


Jonathan Kimmelman. "Dirty Windows of Drug Development." Web blog post. STREAM research. 09 Feb 2011. Web. 22 Aug 2018. &lt;http://www.translationalethics.com/2011/02/09/dirty-windows-of-drug-development/&gt;


Jonathan Kimmelman. (2011, Feb 09). Dirty Windows of Drug Development [Web log post]. Retrieved from http://www.translationalethics.com/2011/02/09/dirty-windows-of-drug-development/

All content © STREAM research

Twitter: @stream_research
3647 rue Peel
Montreal QC H3A 1X1