The Landscape of Early Phase Research

by Spencer Phillips Hey

As Jonathan is fond of saying: Drugs are poisons. It is only through an arduous process of testing and refinement that a drug is eventually transformed into a therapy. Much of this transformative work falls to the early phases of clinical testing. In early phase studies, researchers are looking to identify the optimal values for the various parameters that make up a medical intervention. These parameters are things like dose, schedule, mode of administration, co-interventions, and so on. Once these have been locked down, the “intervention ensemble” (as we call it) is ready for the second phase of testing, where its clinical utility is either confirmed or disconfirmed in randomized controlled trials.

In our piece from this latest issue of the Kennedy Institute of Ethics Journal, Jonathan and I present a novel conceptual tool for thinking about the early phases of drug testing. As suggested in the image above, we represent this process as an exploration of a 3-dimensional “ensemble space.” Each x-y point on the landscape corresponds to some combination of parameters–a particular dose and delivery site, say. The z-axis is then the risk/benefit profile of that combination. This model allows us to re-frame the goal of early phase testing as an exploration of the intervention landscape–a systematic search through the space of possible parameters, looking for peaks that have promise of clinical utility.

We then go on to show how the concept of ensemble space can also be used to analyze the comparative advantages of alternative research strategies. For example, given that the landscape is initially unknown, where should researchers begin their search? Should they jump out into the deep end, so to speak, in the hopes of hitting the peak on the first try? Or should they proceed more cautiously–methodically working their way out from the least-risky regions, mapping the overall landscape as they go?
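The cautious strategy can be illustrated with a toy model. In this sketch (all values and parameter names are invented purely for illustration, not drawn from the paper), the landscape is a small dose × schedule grid, and the cautious searcher hill-climbs outward from the least-risky corner:

```python
# Toy "ensemble space": LANDSCAPE[i][j] is the hypothetical risk/benefit
# profile of the ensemble with dose level i and schedule j.
# All numbers are invented for illustration.
LANDSCAPE = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.4],
    [0.2, 0.5, 0.3],
]

def cautious_search(z, start=(0, 0)):
    """Work outward from the least-risky corner: repeatedly move to the
    best adjacent parameter combination until no neighbor improves."""
    i, j = start
    path = [(i, j)]  # the regions of the landscape mapped along the way
    while True:
        neighbors = [
            (i + di, j + dj)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < len(z) and 0 <= j + dj < len(z[0])
        ]
        best = max(neighbors, key=lambda p: z[p[0]][p[1]])
        if z[best[0]][best[1]] <= z[i][j]:
            return (i, j), path  # reached a local peak
        i, j = best
        path.append(best)

peak, path = cautious_search(LANDSCAPE)
```

Starting from the low-dose corner, the climb maps three parameter combinations before settling on the peak; a "deep end" strategy would instead sample a single interior point and succeed or fail outright, learning little about the rest of the landscape.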

I won’t give away the ending here, because you should go read the article! Although readers familiar with Jonathan’s and my work can probably infer which of those options we would support. (Hint: Early phase research must be justified on the basis of knowledge-value, not direct patient-subject benefit.)

UPDATE: I’m very happy to report that this paper has been selected as the editor’s pick for the KIEJ this quarter!

BibTeX

@online{stream2014-567,
    title = {The Landscape of Early Phase Research},
    author = {Spencer Phillips Hey},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2014-07-04},
    url = {https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/}
}

MLA

Spencer Phillips Hey. "The Landscape of Early Phase Research." Web blog post. STREAM research. 04 Jul 2014. Web. 14 Oct 2024. <https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/>

APA

Spencer Phillips Hey. (2014, Jul 04). The Landscape of Early Phase Research [Web log post]. Retrieved from https://www.translationalethics.com/2014/07/04/the-landscape-of-early-phase-research/


The Literature Isn’t Just Biased, It’s Also Late to the Party

by Carole Federico

Animal studies of drug efficacy are an important resource for designing and performing clinical trials. They provide evidence of a drug’s potential clinical utility, inform the design of trials, and establish the ethical basis for testing drugs in humans. Several recent studies suggest that many preclinical investigations are withheld from publication. Such nonreporting likely reflects the fact that private drug developers have little incentive to publish preclinical studies. However, it potentially deprives stakeholders of complete evidence for making risk/benefit judgments and frustrates the search for explanations when drugs fail to recapitulate the promise shown in animals.

In a future issue of The British Journal of Pharmacology, my co-authors and I investigate how much preclinical evidence is actually available in the published literature, and when it makes an appearance, if at all.

Although we identified a large number of preclinical studies, the vast majority were reported only after publication of the first trial. In fact, for 17% of the drugs in our sample, no efficacy studies were published before the first trial report. And when a similar analysis was performed looking at preclinical studies and clinical trials matched by disease area, the numbers were even more dismal. For more than a third of indications tested in trials, we were unable to identify any published efficacy studies in models of the same indication.

There are two possible explanations for this observation, both of which have troubling implications. Research teams might not be performing efficacy studies until after trials are initiated and/or published. Though this would seem surprising and inconsistent with ethics policies, FDA regulations do not emphasize the review of animal efficacy data when approving the conduct of phase 1 trials. Another explanation is that drug developers precede trials with animal studies, but withhold them or publish them only after trials are complete. This interpretation also raises concerns, as delay of publication circumvents mechanisms—like peer review and replication—that promote systematic and valid risk/benefit assessment for trials.

The take-home message is this: animal efficacy studies supporting specific trials are often published long after the trial itself is published, if at all. This represents a threat to human protections, animal ethics, and scientific integrity. We suggest that animal care committees, ethics review boards, and biomedical journals should take measures to correct these practices, such as requiring the prospective registration of preclinical studies or creating publication incentives that are meaningful for private drug developers.

BibTeX

@online{stream2014-542,
    title = {The Literature Isn’t Just Biased, It’s Also Late to the Party},
    author = {Carole Federico},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2014-06-30},
    url = {https://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/}
}

MLA

Carole Federico. "The Literature Isn’t Just Biased, It’s Also Late to the Party." Web blog post. STREAM research. 30 Jun 2014. Web. 14 Oct 2024. <https://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/>

APA

Carole Federico. (2014, Jun 30). The Literature Isn’t Just Biased, It’s Also Late to the Party [Web log post]. Retrieved from https://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/


Uncaging Validity in Preclinical Research

by Valerie Henderson

High attrition rates in drug development bedevil drug developers, ethicists, health care professionals, and patients alike.  Increasingly, many commentators are suggesting the attrition problem partly relates to prevalent methodological flaws in the conduct and reporting of preclinical studies.

Preclinical efficacy studies involve administering a putative drug to animals (usually mice or rats) that model the disease experienced by humans.  The outcome sought in these laboratory experiments is efficacy, making them analogous to Phase 2 or 3 clinical trials.

However, that’s where the similarities end.  Unlike trials, preclinical efficacy studies employ a limited repertoire of methodological practices aimed at reducing threats to clinical generalization.  These quality-control measures, including randomization, blinding, and the performance of a power calculation, are standard in the clinical realm.
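For readers unfamiliar with the last of these: a power calculation fixes, in advance, the sample size needed to reliably detect a given effect. A minimal sketch using the standard normal-approximation formula for a two-group comparison (generic statistics, not specific to any guideline discussed here):

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Subjects (or animals) needed per arm to detect a standardized
    effect size (Cohen's d) in a two-group comparison, using the
    normal approximation to the two-sided two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided threshold
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at conventional thresholds:
sample_size_per_group(0.5)  # 63 per arm
```

Preclinical studies that skip this step risk being too small to detect the very effect they claim, one reason underpowered "significant" findings fail to generalize.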

This mismatch in scientific rigor hasn’t gone unnoticed, and numerous commentators have urged better design and reporting of preclinical studies.   With this in mind, the STREAM research group sought to systematize current initiatives aimed at improving the conduct of preclinical studies.  The results of this effort are reported in the July issue of PLoS Medicine.

In brief, we identified 26 guideline documents, extracted their recommendations, and classified each according to the particular validity type – internal, construct, or external – that the recommendation was aimed at addressing.   We also identified practices that were most commonly recommended, and used these to create a STREAM checklist for designing and reviewing preclinical studies.

We found that guidelines mainly focused on practices aimed at shoring up internal validity and, to a lesser extent, construct validity.  Relatively few guidelines addressed threats to external validity.  Additionally, we noted a preponderance of guidance on preclinical neurological and cerebrovascular research; oddly, none addressed cancer drug development, an area with perhaps the highest rate of attrition.

So what’s next?  We believe the consensus recommendations identified in our review provide a starting point for developing preclinical guidelines in realms like cancer drug development.  We also think our paper identifies some gaps in the guidance literature – for example, a relative paucity of guidelines on the conduct of preclinical systematic reviews.  Finally, we suggest our checklist may be helpful for investigators, IRB members, and funding bodies charged with designing, executing, and evaluating preclinical evidence.

Commentaries and lay accounts of our findings can be found in PLoS Medicine, CBC News, McGill Newsroom and Genetic Engineering & Biotechnology News.

BibTeX

@online{stream2013-300,
    title = {Uncaging Validity in Preclinical Research},
    author = {Valerie Henderson},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2013-08-05},
    url = {https://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/}
}

MLA

Valerie Henderson. "Uncaging Validity in Preclinical Research." Web blog post. STREAM research. 05 Aug 2013. Web. 14 Oct 2024. <https://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/>

APA

Valerie Henderson. (2013, Aug 05). Uncaging Validity in Preclinical Research [Web log post]. Retrieved from https://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/


How Many Negative Trials Do We Need?

by Spencer Phillips Hey

There is a growing concern in the clinical research community about the number of negative phase 3 trials. Given that phase 3 trials are incredibly expensive to run, and involve hundreds or sometimes thousands of patient-subjects, many researchers are now calling for more rigorous phase 2 trials, which are more predictive of a phase 3 result, in the hopes of reducing the number of phase 3 negatives.

In a focus piece from this week’s Science Translational Medicine, Jonathan and I argue that more predictive phase 2 trials may actually have undesirable ethical consequences–ratcheting up the patient burdens and study costs at a point of greater uncertainty, without necessarily increasing social utility or benefiting the research enterprise as a whole. We articulate four factors that we think ought to guide the level of positive predictivity sought in a (series of) phase 2 trial(s). These are: (1) the upper and lower bounds on evidence needed to establish clinical equipoise and initiate phase 3 testing; (2) the need to efficiently process the volume of novel intervention candidates in the drug pipeline; (3) the need to limit non-therapeutic risks for vulnerable patient-subjects; and (4) the need for decisive phase 3 evidence–either positive or negative–in order to best inform physician practices.
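The notion of a phase 2 trial's positive predictivity can be made concrete with Bayes' rule. In this sketch, the prior, sensitivity, and specificity values are purely illustrative assumptions, not figures from the piece:

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """Probability that a drug passing phase 2 truly works, given the
    prior probability of efficacy and the phase 2 error rates."""
    true_pos = sensitivity * prior            # effective drugs that pass
    false_pos = (1 - specificity) * (1 - prior)  # ineffective drugs that pass
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: 20% of candidates truly work; phase 2 catches
# 80% of them but also passes 30% of the failures.
positive_predictive_value(0.2, 0.8, 0.7)  # 0.4
```

Raising specificity (a more rigorous phase 2) raises this value, but only by making phase 2 studies larger, longer, and riskier for participants, which is precisely the tradeoff at issue.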

We are confident that these four factors are valid, but they are certainly not exhaustive of the inputs needed to make a robust judgment about the appropriate levels of predictivity needed in phase 2 for a given domain. What are the total costs and benefits of a negative phase 3? How should we weigh these against the costs and benefits of a more rigorous program of phase 2 testing? How many negatives should we tolerate? And at what stage of the development process? Our piece is a first step toward developing a more comprehensive framework that could provide researchers, funders, policy-makers, and review boards with much needed answers to these important questions.

BibTeX

@online{stream2013-44,
    title = {How Many Negative Trials Do We Need?},
    author = {Spencer Phillips Hey},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2013-05-10},
    url = {https://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/}
}

MLA

Spencer Phillips Hey. "How Many Negative Trials Do We Need?" Web blog post. STREAM research. 10 May 2013. Web. 14 Oct 2024. <https://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/>

APA

Spencer Phillips Hey. (2013, May 10). How Many Negative Trials Do We Need? [Web log post]. Retrieved from https://www.translationalethics.com/2013/05/10/how-many-negative-trials-do-we-need/


Teaching Kills Blogging: Somewhat Recent Developments…

by Jonathan Kimmelman

Dear Faithful Readers: Teaching has cut my blogging to a trickle, though the teaching has now begun to taper off. My silence is not for want of major developments in the last two months. Among a few highlights:


Obama picks members for his Bioethics advisory panel: The White House recently announced the membership of its “Presidential Commission for the Study of Bioethical Issues.” The group is smaller than past Presidential panels. Its membership is lean on working bioethicists (3 or 4 who clearly fit the classic definition–the others are scientists, clinicians, federal employees, university administrators, or a disease advocate).

Health care reform (+ Translational Research) passes in the U.S.: Among the intriguing elements here is the relationship between reform and biomedical research. When Clinton proposed healthcare reform in the 1990s, there was much consternation in the research community that this would spell a retreat from investment in basic research. Indeed, failure to enact reform propelled a massive expansion of the NIH budget through the 1990s. This time around, healthcare reform has specifically integrated basic research. The law includes language creating a “Cures Acceleration Network” that would fund up to $15M/year in translational research (though the budget will depend on direct appropriation from Congress, and there is no certainty that it will be funded).

Gene Patents Voided: Following an ACLU challenge, a U.S. District Court Judge threw out Myriad Genetics’ patent on BRCA1 and BRCA2 (genes associated with hereditary breast cancer; the company markets a $3K-a-pop test for mutations in the genes) by ruling that the genes are “products of nature.” Products of nature are not patentable, though products purified from nature (e.g. enzymes, wood chemicals, etc.) are. The logic behind the decision is that genes are better thought of as information rather than as chemicals, and that information extracted from natural entities does not have distinct properties in the way that chemicals do. If ever there were a demonstration of the power of metaphors, this is it. Suffice it to say, biotechnology companies will appeal. (photo credit: aurelian s 2008)

BibTeX

@online{stream2010-69,
    title = {Teaching Kills Blogging: Somewhat Recent Developments…},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2010-04-16},
    url = {https://www.translationalethics.com/2010/04/16/teaching-kills-blogging-somewhat-recent-developments/}
}

MLA

Jonathan Kimmelman. "Teaching Kills Blogging: Somewhat Recent Developments…" Web blog post. STREAM research. 16 Apr 2010. Web. 14 Oct 2024. <https://www.translationalethics.com/2010/04/16/teaching-kills-blogging-somewhat-recent-developments/>

APA

Jonathan Kimmelman. (2010, Apr 16). Teaching Kills Blogging: Somewhat Recent Developments… [Web log post]. Retrieved from https://www.translationalethics.com/2010/04/16/teaching-kills-blogging-somewhat-recent-developments/


California Dreamin: CIRM Announces New Stem Cell Awards

by Jonathan Kimmelman

California’s Institute for Regenerative Medicine just announced a series of large funding awards to fund translational research initiatives involving (mostly) stem cells. The projects funded are telling with respect to what was funded, and what they will attempt to achieve.


First, notwithstanding a press release containing the words “bringing stem cell therapies to the clinic,” several projects are really dressed up gene transfer studies. Thus, one team will use gene transfer in hematopoietic stem cells for sickle cell anemia; another two will use gene transfer to stem cells for treating brain malignancies; another RNAi for HIV. All this is only further evidence that the field of stem cells is devouring gene transfer. Other projects are aimed more at getting “stem cells out of the clinic” by using small molecules or monoclonal antibodies to destroy stem cells causing malignancies.

Second is the sweeping ambition. As it stands today, only one clinical trial involving embryonic stem cell-derived tissues has been initiated. The projects funded under these awards are “explicitly expected to result in a filing with the FDA to begin a clinical trial.” Given that these projects are funded for four years, CIRM seems to be banking on the prospect of at least a few of these initiating phase 1 trials within five years. Four of these proposals involve goals of implanting embryo-derived tissues, and two of these involve non-lethal conditions–macular degeneration and type I diabetes (technically, other awarded projects involve nonlethal, though extremely morbid conditions). Another involves implantation of embryo-derived tissues for Amyotrophic Lateral Sclerosis. It will be interesting to see how many of these meet their translational objectives, and how investigators will navigate the ethical, regulatory, and social complexity of initiating clinical testing. (photo credit: Michael Ransburg, 2008)

BibTeX

@online{stream2009-80,
    title = {California Dreamin: CIRM Announces New Stem Cell Awards},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2009-11-05},
    url = {https://www.translationalethics.com/2009/11/05/california-dreamin-cirm-announces-new-stem-cell-awards/}
}

MLA

Jonathan Kimmelman. "California Dreamin: CIRM Announces New Stem Cell Awards." Web blog post. STREAM research. 05 Nov 2009. Web. 14 Oct 2024. <https://www.translationalethics.com/2009/11/05/california-dreamin-cirm-announces-new-stem-cell-awards/>

APA

Jonathan Kimmelman. (2009, Nov 05). California Dreamin: CIRM Announces New Stem Cell Awards [Web log post]. Retrieved from https://www.translationalethics.com/2009/11/05/california-dreamin-cirm-announces-new-stem-cell-awards/


Mice- Three Different Ones: Towards More Robust Preclinical Experiments

by Jonathan Kimmelman

One of the most exciting and intellectually compelling talks thus far at the American Society of Gene Therapy meeting was Pedro Lowenstein’s.  A preclinical researcher who works on gene transfer approaches to brain malignancies (among other things), Lowenstein asked the question: why do so many gene transfer interventions that look promising in the laboratory fail during clinical testing? His answer: preclinical studies lack “robustness.”


In short, first-in-human trials are typically launched on the basis of a pivotal laboratory study showing statistically significant differences between treatment and control arms. In addition to decrying the “p-value” fetish–in which researchers, journal editors, and granting agencies view “statistical significance” as having magical qualities–Lowenstein also urged preclinical researchers to test the “nuances” and “robustness” of their systems before moving into human studies.

He provided numerous provocative examples where a single preclinical study showed very impressive, “significant” effects on treating cancer in mice. When the identical intervention was tried with seemingly small variations (e.g. different mouse strains used, different gene promoters tried, etc.), the “significant effects” vanished.  In short, Lowenstein’s answer to the question of why so many human trials fail to recapitulate major effects seen in laboratory studies is: we aren’t designing and reviewing preclinical studies properly. Anyone (is there one?) who has followed this blog knows: I completely agree. This is an ethical issue in scientific clothing. (photo credit: Rick Eh, 2008)

BibTeX

@online{stream2009-98,
    title = {Mice- Three Different Ones: Towards More Robust Preclinical Experiments},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2009-05-29},
    url = {https://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/}
}

MLA

Jonathan Kimmelman. "Mice- Three Different Ones: Towards More Robust Preclinical Experiments." Web blog post. STREAM research. 29 May 2009. Web. 14 Oct 2024. <https://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/>

APA

Jonathan Kimmelman. (2009, May 29). Mice- Three Different Ones: Towards More Robust Preclinical Experiments [Web log post]. Retrieved from https://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/


Found Figures: Picking up the Pieces after an HIV Vaccine Trial Fails

by Jonathan Kimmelman

In the November 29, 2008 issue of Lancet, two reports (plus a commentary) describe the famously disappointing outcome of a recent placebo-controlled study testing adenoviral vector-based vaccines against HIV. News reports over a year ago revealed that the study was halted after an interim analysis failed to show any prospect of the vaccine proving effective. More troubling, subgroup analysis suggested that vaccine recipients who had high pre-existing immunity to the adenoviral vectors showed higher rates of sero-conversion compared with placebo. As this vaccine was among the most promising and advanced in terms of development, these results were seen as a major setback.


The recent Lancet reports paint a complicated picture: if I read them correctly, the inference that vector might enhance sero-conversion is muddied by the finding that circumcision status might also have played a role in sero-conversion (men with higher rates of adenoviral immunity were also, coincidentally, less likely to be circumcised).

What is clear, from what I gather, is that this is a good example where rigorous preclinical testing, coupled with rigorous trial design, permits meaningful interpretation of (unfortunately) negative human trial results. As Merlin Robb notes in a commentary accompanying the Lancet reports: “the predictive value of the non-human SHIV-challenge model is not supported by this experience. The benchmarks for advancing candidate vaccines to efficacy testing and the priorities for vaccine research have been re-examined.”

Well-designed studies, supported by rigorous preclinical testing, should always produce valuable findings–like the unexpected “found figures” in the bark of a tree. (photo credit: Readwalker, Found figures, 2006)

BibTeX

@online{stream2009-110,
    title = {Found Figures: Picking up the Pieces after an HIV Vaccine Trial Fails},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2009-02-03},
    url = {https://www.translationalethics.com/2009/02/03/found-figures-picking-up-the-pieces-after-an-hiv-vaccine-trial-fails/}
}

MLA

Jonathan Kimmelman. "Found Figures: Picking up the Pieces after an HIV Vaccine Trial Fails." Web blog post. STREAM research. 03 Feb 2009. Web. 14 Oct 2024. <https://www.translationalethics.com/2009/02/03/found-figures-picking-up-the-pieces-after-an-hiv-vaccine-trial-fails/>

APA

Jonathan Kimmelman. (2009, Feb 03). Found Figures: Picking up the Pieces after an HIV Vaccine Trial Fails [Web log post]. Retrieved from https://www.translationalethics.com/2009/02/03/found-figures-picking-up-the-pieces-after-an-hiv-vaccine-trial-fails/


Stems and Blossoms (part 2): Really Informed Consent

by Jonathan Kimmelman

There is a strain within the clinical and bioethics community that takes a minimal view of informed consent: investigators are supposed to provide requisite information to volunteers; if research subjects fail to comprehend this information, pity for them. This view brings to mind a memorable exchange between Inspector Clouseau and a hotel clerk (Clouseau: “does your dog bite?” Clerk: “No.”  Clouseau then extends a hand; the dog lunges at him.  “I thought you said your dog doesn’t bite.” Clerk: “Zat is not my dog.”)


The ISSCR guidelines take a bold stand on informed consent. “Investigators involved in clinical research must carefully assess whether participants understand the essential aspects of the study.”  The guidelines go on to state “ideally, the subject’s comprehension of information should be assessed through a written test or an oral quiz during the time of obtaining consent.” Once again, ISSCR shows vision here in going well beyond the legalistic conception of informed consent described above.

The ISSCR guidelines also urge researchers to:
• explain possible irreversibility of some toxicities
• describe the sources of stem cells
• inform patients that researchers “do not know whether they will work as hoped”

These laudable recommendations aside, I might have hoped for more guarded language about the therapeutic value of early phase studies. For one, the guidelines use mostly “therapeutic” language, for example, using the aspirational term “cell therapy” instead of the neutral term “cell transfer.” Second, the third item above logically means that the probability of benefit is less than 100%; experience tells us, however, that when interventions are highly novel, major therapeutic benefits for early phase trials are very improbable. (photo credit: Helen K, Stems, 2008)

BibTeX

@online{stream2008-114,
    title = {Stems and Blossoms (part 2): Really Informed Consent},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2008-12-30},
    url = {https://www.translationalethics.com/2008/12/30/stems-and-blossoms-part-2-really-informed-consent/}
}

MLA

Jonathan Kimmelman. "Stems and Blossoms (part 2): Really Informed Consent." Web blog post. STREAM research. 30 Dec 2008. Web. 14 Oct 2024. <https://www.translationalethics.com/2008/12/30/stems-and-blossoms-part-2-really-informed-consent/>

APA

Jonathan Kimmelman. (2008, Dec 30). Stems and Blossoms (part 2): Really Informed Consent [Web log post]. Retrieved from https://www.translationalethics.com/2008/12/30/stems-and-blossoms-part-2-really-informed-consent/


Stems and Blossoms (part 1): Justice

by Jonathan Kimmelman

Shortly before I left for holiday, the International Society for Stem Cell Research (ISSCR) issued a policy paper, “Guidelines for the Clinical Translation of Stem Cells,” outlining ethical and scientific considerations for researchers designing translational trials involving stem cells (whether stem cell derived, adult, or embryonic).


In my opinion, the document wins the award for most forward thinking and comprehensive statement on the ethics of a translational enterprise. It shows that the stem cell research leadership has closely studied mistakes made by translational researchers in other highly innovative fields.  But the guidelines do more than look backwards; they proactively contemplate fairness and justice considerations as well.  Here are a few justice-related excerpts:

On responsiveness: “The ISSCR strongly discourages conduct of trials in a foreign country solely to benefit patients in the home country of the sponsoring agency. The test therapy, if approved, should realistically be expected to become available to the population participating in the clinical trial through existing health systems or those developed on a permanent basis in connection with the trial.”

On reasonable availability: “As far as possible, groups or individuals who participate in clinical stem cell research should be in a position to benefit from the results of this research.”

On diversity: “Stem cell collections with genetically diverse sources of cell lines should be established”

On access and licensing: “Commercial companies, subject to their financial capability, should offer affordable therapeutic interventions to persons living in resource-poor countries who would otherwise be wholly excluded from benefiting from that stem cell-based therapy. Academic and other institutions that are licensing stem cell therapeutics and diagnostic inventions should incorporate this requirement in their intellectual property license”

On review: “Regulatory and oversight agencies (local, national, and international) must explicitly include the consideration of social justice principles into their evaluations.”

On trial participation: “… the sponsor and principal investigator have an ethical responsibility to make good faith, reasonable efforts whenever possible to secure sufficient funding so that no person who meets eligibility criteria is prevented from being considered for enrollment because of his or her inability to cover the costs of the experimental treatment.”

In upcoming posts, I will comment on other aspects of the ISSCR guidelines. (photo credit: Helen K, Stems, 2008)

BibTeX

@online{stream2008-115,
    title = {Stems and Blossoms (part 1): Justice},
    author = {Jonathan Kimmelman},
    organization = {STREAM research},
    location = {Montreal, Canada},
    date = {2008-12-28},
    url = {https://www.translationalethics.com/2008/12/28/stems-and-blossoms-part-1-justice/}
}

MLA

Jonathan Kimmelman. "Stems and Blossoms (part 1): Justice." Web blog post. STREAM research. 28 Dec 2008. Web. 14 Oct 2024. <https://www.translationalethics.com/2008/12/28/stems-and-blossoms-part-1-justice/>

APA

Jonathan Kimmelman. (2008, Dec 28). Stems and Blossoms (part 1): Justice [Web log post]. Retrieved from https://www.translationalethics.com/2008/12/28/stems-and-blossoms-part-1-justice/

