Jeremy Howick visits STREAM on March 25th

Jeremy Howick’s research draws on his interdisciplinary training as a philosopher of science and clinical epidemiologist. He has two related areas of interest: (1) Evidence-Based Medicine (EBM), including EBM ‘hierarchies’ of evidence, clinical epidemiology, and how point-of-care tests might improve practice; and (2) philosophy of medicine, including the epistemological foundations of Evidence-Based Medicine and the ethics of placebos in trials and practice.

On March 25th at 3 PM, he will be speaking on “How Useful is Basic Mechanistic Research for Discovering Medical Treatments that Benefit Humans?” All are welcome, so please join us!

Wednesday, March 25th, 2015
3:00 – 5:00 PM
3647 Peel St., Room 102

2015 Mar

Semantic natural language processing and philosophy of science

On February 18th, 2015, James Overton visited the STREAM research group in Montreal, where he presented his research into what scientists are doing when they give an explanation for something. Many accounts of scientific explanation have been offered by philosophers of science over the years, but Overton’s differs in that he set out to establish his account by actually examining the scientific literature. Specifically, he took a year’s worth of papers from the journal Science, converted them to unformatted text, and then parsed them using the Python Natural Language Toolkit.

Overton’s methods combined an analysis of word frequencies with a random sampling of sentences that appeared to be offering explanations, to see what sorts of data are used to justify what other sorts of claims. The most surprising result, at least for me, was that the word “law” was almost never used in the sample that Dr Overton described. That’s not to say that there is no discussion of natural laws at all, but given how much space the description of laws takes up in most accounts of scientific explanation, this seemed a very striking finding at the least.
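For the curious, the kind of pipeline Overton described is easy to prototype. Below is a minimal sketch of a word-frequency analysis over a plain-text corpus using the Python Natural Language Toolkit; the corpus directory is a hypothetical stand-in, and this is my illustration rather than Dr Overton’s actual code.

```python
# Minimal sketch of a corpus word-frequency analysis with NLTK.
# The corpus directory (one plain-text file per article) is illustrative.
import os
import nltk

nltk.download("punkt", quiet=True)  # tokenizer models

CORPUS_DIR = "science_articles_txt"  # hypothetical path

freq = nltk.FreqDist()
for fname in os.listdir(CORPUS_DIR):
    with open(os.path.join(CORPUS_DIR, fname), encoding="utf-8") as f:
        tokens = nltk.word_tokenize(f.read().lower())
    freq.update(t for t in tokens if t.isalpha())

print(freq.most_common(20))               # the most frequent words overall
print("'law' occurrences:", freq["law"])  # the word that almost never appeared
```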

This technique is very versatile and could be applied to a number of projects, from exploring the nature of scientific explanation, as Dr Overton has done, to a simpler project analysing the frequency of phrases like “sorafenib showed a modest effect” or “adverse events were manageable,” and seeing whether there is any relationship between the wording chosen and the result being described.

2015 Mar

James Overton visits STREAM on February 18th

James Overton is the founder of Knocean, a consulting and development service at the intersection of philosophy, science, and software. Example projects include ontology development and deployment, building semantic web tools, and developing custom web applications for scientific and medical projects. He specializes in scientific database integration using biomedical ontologies.

On February 18th at 3 PM, he will be speaking on “Explanation in Science”. All are welcome, so please join us!

Wednesday, February 18, 2015
3:00 – 5:00 PM
3647 Peel St., Room 102

2015 Feb

Nope– It’s Still Not Ethical


Last year, Jonathan and I published a critique of unequal allocation ratios in late-phase trials. In these trials, patient-subjects are randomly allocated among the treatment arms in unequal proportions, such as 2:1 or 3:1, rather than the traditional equal (1:1) proportion. Strangely, despite introducing an additional burden (i.e., requiring larger sample sizes), the practice of unequal allocation is often defended as being “more ethical”. In that piece, published in Neurology, we showed that these purported ethical advantages do not stand up to careful scrutiny.
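To see why unequal allocation carries a sample-size penalty, a quick back-of-envelope calculation helps. Under the standard normal approximation for comparing two means, the total sample size at a k:1 allocation ratio is inflated by a factor of (1+k)²/4k relative to 1:1. The sketch below is illustrative, not drawn from the Neurology piece:

```python
# Back-of-envelope check (normal approximation, two-sided alpha=0.05,
# power=0.80): how much does an unequal allocation ratio inflate the
# total sample size needed to detect the same effect? Values illustrative.
from scipy.stats import norm

def total_n(effect_size, ratio, alpha=0.05, power=0.80):
    """Total N to compare two means at allocation ratio k:1 (normal approx.)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_control = (1 + 1 / ratio) * (z / effect_size) ** 2
    return n_control * (1 + ratio)

for k in (1, 2, 3):
    print(f"{k}:1 allocation -> total N ≈ {total_n(0.5, k):.0f}")
# 1:1 -> ~126; 2:1 -> ~141 (+12.5%); 3:1 -> ~168 (+33%)
```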

In a new article at Clinical Trials, Jonathan and I extend this line of argument to trials that use outcome-adaptive allocation. In an outcome-adaptive trial, the allocation ratio is dynamically adjusted over the course of the study, becoming increasingly weighted toward the better-performing arm. In contrast to the fixed but unequal ratios described above, outcome-adaptive ratios can sometimes reduce the necessary sample size to answer the study question. However, this reduction in cost and patient burden is not guaranteed. In fact, it only occurs when the difference between the observed effect sizes is large. And since there is no way to know in advance what this difference is going to be, these potential gains in efficiency due to outcome-adaptive designs are something of a gamble.
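For readers unfamiliar with these designs, here is a toy simulation of one common outcome-adaptive scheme, the randomized play-the-winner rule, in which the allocation probability drifts toward whichever arm is performing better. The response rates and urn parameters are invented for illustration:

```python
# Toy simulation of an outcome-adaptive scheme (randomized play-the-winner).
# True response rates and urn parameters are illustrative.
import random

random.seed(0)
p_response = {"A": 0.30, "B": 0.50}   # true (unknown) response rates
urn = {"A": 1, "B": 1}                # start balanced

assignments = []
for _ in range(200):
    total = urn["A"] + urn["B"]
    arm = "A" if random.random() < urn["A"] / total else "B"
    assignments.append(arm)
    if random.random() < p_response[arm]:
        urn[arm] += 1                              # success: reinforce this arm
    else:
        urn["A" if arm == "B" else "B"] += 1       # failure: reinforce the other

print("Allocated to A:", assignments.count("A"),
      "| to B:", assignments.count("B"))
# Late enrollees are much more likely to receive the better-performing arm.
```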

Nevertheless, just as we saw with fixed unequal ratios, proponents of outcome-adaptive trials claim that this allocation scheme is “more ethical”. Setting aside the sample size issue, they argue that outcome-adaptive trials better accommodate clinical equipoise by collapsing the distinction between research and care. As it is sometimes put rhetorically: Would you rather be the last subject treated in a trial or the first subject treated in practice? The outcome-adaptive trial dissolves any ethical tension in this question. The treatment will be the same either way.

Of course, long-time readers of this blog will recognize the misunderstanding of clinical equipoise embodied in that question. The salient issue is not a comparison between the last subject enrolled in a study and the first patient treated in the context of clinical care. Rather, it is about ensuring that no subject is systematically disadvantaged by participating in a trial (and that all participants receive competent medical care). In that case, the relevant rhetorical question needs to be re-phrased as follows: Would you rather be the first patient enrolled in a study or the last? In a traditional 1:1 RCT design, clinical equipoise dissolves the ethical tension in that question. But in an outcome-adaptive design, you should hope to be the last, and that is a serious problem.

2015 Feb

Charting the Unpredictable: Using fMRI patterns to determine outcome in acutely comatose patients


Every year in Canada, around 50,000 people suffer brain injuries, and those experiencing severe traumas often become comatose for days or weeks post-incident. While there exists a battery of physiological prognostic indicators, such as the pupillary light reflex (or lack thereof) and patterns of EEG activity, a significant subset of patients retains an indeterminate prognosis even after these tests are completed. Sophisticated imaging techniques like fMRI provide a modern way of mapping residual cognitive function in newly comatose patients. To date, three fMRI studies have examined the preservation of neural connectivity in two brain networks as potential markers of outcome. While all of these studies found a (modest) positive correlation between the BOLD signal strength of the intact network and better patient outcome, significant further work is required before the technique could become clinically useful.

Dr. Charles Weijer of Western University stresses, however, that this imminent research raises several ethical concerns: patients do not have decisional capacity, time constraints may not permit the proper procurement of surrogate informed consent, critically ill patients are clearly a vulnerable population, and it is not clear how the fMRI study results would impact patient prognosis and treatment decisions. There are also practical concerns, including the intra-hospital transport of patients to the fMRI machine and the time needed outside the ICU to perform the scans.

As a recent graduate in neuroscience, another particular concern struck me: why had the researchers of the previous fMRI studies considered only two networks? The first mapped the preservation of activity in S1 after a stimulus to the hand, while the following two studies assessed the resting-state connectivity of the default mode network. These are just two of several networks that have been mapped and are reliably found in healthy individuals. I would be curious to see whether analyzing other networks, such as the auditory or executive resting-state networks, contributes prognostic information. Exploring the integrity of several neural networks as potential prognostic indices may allow future research to home in on a target rather than testing networks on a one-by-one basis.

An analogous issue has emerged at STREAM regarding the trajectory of research in the field of cancer biomarkers and the proper method of exploring a new study space. Similar to the intended use of fMRI in the situation above, the biomarkers are being evaluated as predictive markers of outcome for specific cancer therapies. We have noticed that early studies in this field apply a very narrow set of research techniques to try to validate a biomarker. These methods are often suboptimal, and it is only much later down the road that researchers branch out into other, more successful methods. A notable example can be seen in our evaluation of the research trajectory of one potential biomarker in lung cancer, ERCC1. A non-specific antibody had been routinely used to detect the presence of the marker, and it wasn’t until years later that basic research into a more appropriate antibody was initiated. This is likely part of the reason for the notably sluggish progress in the field. We propose that, ideally, novel research programs would start with studies looking at a broad set of potential targets and then taper these down over time, as the accumulating evidence warrants. Acutely comatose patients are a new and important population for fMRI studies, and it seems to me that this research program might benefit from encouraging future studies to evaluate and compare the predictive use of multiple networks, so that they map the study space most rigorously.

Context: On January 12th, Charles Weijer, visiting from the Rotman Institute of Philosophy at Western University, gave the first talk in the new STREAM speaker series. He spoke on the ethical considerations involved in performing fMRI studies on acutely comatose patients in the ICU.

2015 Feb

Charles Weijer visits STREAM on January 12th

Charles Weijer is a philosopher, physician, and the Canada Research Chair in Bioethics at Western University. His academic interests center on the ethics of medical research. He has written about using placebos in clinical trials, weighing the benefits and harms of medical research, and protecting communities in research.

On January 12th at 3 PM, he will be speaking on “Ethical Considerations in Functional MRI Studies on Acutely Comatose Patients in the Intensive Care Unit”. All are welcome, so please join us!

Monday, January 12, 2015
3:00 – 5:00 PM
3647 Peel St., Room 101

2015 Jan

Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials

Ratio of actual enrolment to expected enrolment versus number of trials, for trials that completed and trials that terminated due to poor accrual in 2011

The moral acceptability of a clinical trial is rooted in the risk and benefit for patients, as well as the ability of the trial to produce generalisable and useful scientific knowledge. The ability of a clinical trial to justify its claims to producing new knowledge depends in part on its ability to recruit patients to participate—the fewer the patients, the less confident we can be in the knowledge produced. So when trials have recruitment problems, those trials also have ethical problems.

In a recently published issue of Clinical Trials, my colleagues and I investigate the prevalence of poor trial accrual, the impact of accrual problems on study validity and their ethical implications.

We used the National Library of Medicine clinical trial registry to capture all initiated phase 2 and 3 intervention clinical trials that were registered as closed in 2011. We then determined the number that had been terminated due to unsuccessful accrual and the number that had closed after less than 85% of the target number of human subjects had been enrolled.

Of 2579 eligible trials, 481 (19%) either terminated for failed accrual or completed with less than 85% of expected enrolment, seriously compromising their statistical power. In total, 48,027 patients had enrolled in trials closed in 2011 that were unable to answer their primary research question meaningfully.
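For those interested in the mechanics, the classification rule can be sketched in a few lines. The table and column names below are hypothetical stand-ins for a registry extract, not our actual analysis code:

```python
# Sketch of the classification used in the analysis, assuming a registry
# extract with one row per trial. File and column names are hypothetical.
import pandas as pd

trials = pd.read_csv("registry_closed_2011.csv")  # illustrative file name

accrual_ratio = trials["actual_enrollment"] / trials["expected_enrollment"]
terminated_accrual = (trials["status"] == "Terminated") & (
    trials["why_stopped"].str.contains("accrual|enrollment", case=False, na=False))
under_enrolled = (trials["status"] == "Completed") & (accrual_ratio < 0.85)

unsuccessful = terminated_accrual | under_enrolled
print(f"{unsuccessful.sum()} of {len(trials)} trials "
      f"({unsuccessful.mean():.0%}) closed with unsuccessful accrual")
```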

Not only that, but we found that many trials that should have been terminated were pursued to completion despite flagging rates of subject accrual: the proportion of trials that completed was much higher than the proportion that terminated, even at accrual levels as low as 30%. (See the figure above.)

The take-home message is that ethics bodies, investigators, and data monitoring committees should carefully scrutinize trial design, recruitment plans, and the feasibility of achieving accrual targets when designing and reviewing trials; monitor accrual once a trial is under way; and take corrective action when accrual is lagging.

2014 Nov

The Landscape of Early Phase Research


As Jonathan is fond of saying: Drugs are poisons. It is only through an arduous process of testing and refinement that a drug is eventually transformed into a therapy. Much of this transformative work falls to the early phases of clinical testing. In early phase studies, researchers are looking to identify the optimal values for the various parameters that make up a medical intervention. These parameters are things like dose, schedule, mode of administration, co-interventions, and so on. Once these have been locked down, the “intervention ensemble” (as we call it) is ready for the second phase of testing, where its clinical utility is either confirmed or disconfirmed in randomized controlled trials.

In our piece from this latest issue of the Kennedy Institute of Ethics Journal, Jonathan and I present a novel conceptual tool for thinking about the early phases of drug testing. As suggested in the image above, we represent this process as an exploration of a 3-dimensional “ensemble space.” Each x-y point on the landscape corresponds to some combination of parameters–a particular dose and delivery site, say. The z-axis is then the risk/benefit profile of that combination. This model allows us to re-frame the goal of early phase testing as an exploration of the intervention landscape–a systematic search through the space of possible parameters, looking for peaks that have promise of clinical utility.
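As a toy illustration of the idea (not the formalism from the paper), one can imagine a grid over two intervention parameters with an invented risk/benefit surface, and search it for the most promising combination:

```python
# Toy rendering of the "ensemble space" idea: two intervention parameters
# (here dose and schedule) define a grid, and each point carries a
# risk/benefit score. The surface below is invented for illustration.
import numpy as np

doses = np.linspace(0, 10, 50)        # mg/kg (illustrative)
schedules = np.linspace(1, 14, 50)    # days between doses (illustrative)
D, S = np.meshgrid(doses, schedules)

# Benefit rises then saturates with dose; risk grows with dose and
# with very frequent dosing. The z-axis is their difference.
benefit = D / (1 + D)
risk = 0.02 * D**2 + 0.5 / S
z = benefit - risk

i, j = np.unravel_index(np.argmax(z), z.shape)
print(f"Most promising ensemble: dose={D[i, j]:.1f}, "
      f"schedule=every {S[i, j]:.1f} days, score={z[i, j]:.2f}")
```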

We then go on to show how the concept of ensemble space can also be used to analyze the comparative advantages of alternative research strategies. For example, given that the landscape is initially unknown, where should researchers begin their search? Should they jump out into the deep end, so to speak, in the hopes of hitting the peak on the first try? Or should they proceed more cautiously, methodically working their way out from the least-risky regions, mapping the overall landscape as they go?

I won’t give away the ending here, because you should go read the article! Although readers familiar with Jonathan’s and my work can probably infer which of those options we would support. (Hint: Early phase research must be justified on the basis of knowledge-value, not direct patient-subject benefit.)

UPDATE: I’m very happy to report that this paper has been selected as the editor’s pick for the KIEJ this quarter!

2014 Jul

The Literature Isn’t Just Biased, It’s Also Late to the Party


Animal studies of drug efficacy are an important resource for designing and performing clinical trials. They provide evidence of a drug’s potential clinical utility, inform the design of trials, and establish the ethical basis for testing drugs in humans. Several recent studies suggest that many preclinical investigations are withheld from publication. Such nonreporting likely reflects the fact that private drug developers have little incentive to publish preclinical studies. However, it potentially deprives stakeholders of complete evidence for making risk/benefit judgments and frustrates the search for explanations when drugs fail to recapitulate the promise shown in animals.

In a forthcoming issue of The British Journal of Pharmacology, my co-authors and I investigate how much preclinical evidence is actually available in the published literature, and when, if at all, it makes an appearance.

Although we identified a large number of preclinical studies, the vast majority were reported only after publication of the first trial. In fact, for 17% of the drugs in our sample, no efficacy studies were published before the first trial report. And when a similar analysis was performed matching preclinical studies and clinical trials by disease area, the numbers were even more dismal: for more than a third of indications tested in trials, we were unable to identify any published efficacy studies in models of the same indication.
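The core of such a timing analysis is simple to sketch. Assuming one table of preclinical efficacy studies and one of first trial reports, each with publication dates per drug (all names below are hypothetical), the comparison looks like this:

```python
# Sketch of the timing comparison, assuming two tables with publication
# dates per drug. File and column names are hypothetical.
import pandas as pd

pre = pd.read_csv("preclinical_efficacy.csv", parse_dates=["pub_date"])
trials = pd.read_csv("first_trial_reports.csv", parse_dates=["pub_date"])

first_trial = trials.groupby("drug")["pub_date"].min()
first_preclin = pre.groupby("drug")["pub_date"].min()

timing = (pd.DataFrame({"trial": first_trial, "preclinical": first_preclin})
          .dropna(subset=["trial"]))  # keep only drugs that reached trials
no_prior = timing["preclinical"].isna() | (timing["preclinical"] >= timing["trial"])
print(f"{no_prior.mean():.0%} of drugs had no efficacy study published "
      "before the first trial report")
```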

There are two possible explanations for this observation, both of which have troubling implications. Research teams might not be performing efficacy studies until after trials are initiated and/or published. Though this would seem surprising and inconsistent with ethics policies, FDA regulations do not emphasize the review of animal efficacy data when approving the conduct of phase 1 trials. Another explanation is that drug developers precede trials with animal studies, but withhold them or publish them only after trials are complete. This interpretation also raises concerns, as delay of publication circumvents mechanisms—like peer review and replication—that promote systematic and valid risk/benefit assessment for trials.

The take-home message is this: animal efficacy studies supporting specific trials are often published long after the trial itself is published, if at all. This represents a threat to human protections, animal ethics, and scientific integrity. We suggest that animal care committees, ethics review boards, and biomedical journals should take measures to correct these practices, such as requiring the prospective registration of preclinical studies or creating publication incentives that are meaningful for private drug developers.

2014 Jun

Search, Bias, Flotsam and False Positives in Preclinical Research

Photo credit: RachelEllen 2006

If you could change one thing, and only one thing, in preclinical proof-of-principle research to improve its clinical generalizability, what would it be? Require larger sample sizes? Randomization? Total data transparency?

In the May 2014 issue of PLoS Biology, my co-authors Uli Dirnagl and Jeff Mogil offer the following answer: clearly label preclinical studies as either “exploratory” or “confirmatory” studies.

Think of the downed jetliner, Malaysia Airlines Flight 370. To find it, you need to explore vast swaths of open sea using as few resources as possible. Such approaches are going to be very sensitive, but also prone to false positives. Before you deploy expensive, specialized ships and underwater vehicles to recover the plane, you want to confirm that a signal identified in exploration is real.

So it is in preclinical research as well. Exploratory studies are aimed at identifying strategies that might be useful for treating disease: scanning the ocean for a few promising treatment strategies. The vast majority of preclinical studies today are exploratory in nature. They use small sample sizes, flexible designs, short study durations, surrogate measures of response, and many different techniques to demonstrate an intervention’s promise. Fast and frugal, but susceptible to bias and random variation.

Right now, the standard practice is to go straight into clinical development on the basis of this exploratory information. Instead, we ought to be running confirmatory studies first. These would involve prespecified preclinical designs, large sample sizes, long durations, and so on. Such studies are more expensive, but they can effectively rule out random variation and bias in declaring a drug promising.
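A small simulation makes the statistical point vivid. Below, both “designs” test two identical groups (no true effect); the exploratory one peeks at the data repeatedly as animals accrue, while the confirmatory one runs a single prespecified test. The peeking schedule and sample sizes are invented for illustration:

```python
# Simulation of why flexible exploratory designs inflate false positives:
# testing repeatedly as data accumulate ("optional stopping") versus a
# single prespecified confirmatory test. No true effect exists in either case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
SIMS = 2000

def false_positive(flexible):
    a, b = rng.normal(size=40), rng.normal(size=40)  # null: identical groups
    if flexible:  # exploratory: peek every 5 animals from n=10, stop if p<.05
        return any(stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05
                   for n in range(10, 41, 5))
    return stats.ttest_ind(a, b).pvalue < 0.05       # confirmatory: one test

for label, flex in [("exploratory (peeking)", True), ("confirmatory", False)]:
    rate = np.mean([false_positive(flex) for _ in range(SIMS)])
    print(f"{label}: false-positive rate ≈ {rate:.0%}")
# Peeking substantially inflates the nominal 5% error rate.
```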

Our argument has implications for regulatory and IRB review of early phase studies, journal publication, and the funding of research. Clearly labeling studies as one or the other would put consumers of this information on notice about the error tendencies of the study. An “exploratory” label tells reviewers that the intervention is not yet ready for clinical development, but also that they ought to relax their standards somewhat for experimental design and transparency. “Confirmatory,” on the other hand, would signal to reviewers and others that the study is meant to directly inform clinical development decisions, and that they should evaluate very carefully whether effect sizes are confounded by random variation, bias, use of an inappropriate experimental system (i.e., threats to construct validity), or idiosyncratic features of the experimental system (i.e., threats to external validity).

2014 May




Our mission

The STREAM Group applies empirical and philosophical tools to address scientific, ethical, and policy challenges in the development and translation of health technologies.


Who we are

The STREAM Group is a collaboration of researchers who share a common set of principles about the goals and methods for studying clinical translation. Our members work in ethics, epidemiology, biology, psychology, and various medical specialties. The network is centered at McGill University, and has affiliates throughout North America.
