Predicting Risk, Benefit, and Success in Research

The task set before clinical investigators is not easy. They are supposed to answer pressing scientific questions, using very few resources, and exposing patient-subjects to as little risk as possible. In other words, we expect them to be rigorous scientists, stewards of the medical research enterprise, and guardians of their patients’ interests all at the same time. While the duties that emerge from these various roles are sometimes in tension, they intersect and align at the point of clinical trial design. Insofar as a trial is well-designed, meaning that it is likely to answer its scientific question, make efficient use of research resources, and minimize risk, the investigator has successfully discharged all of these duties.

What is more, there is a common activity underlying all of these requirements of good trial design: prediction. When investigators design studies, they are making an array of predictions about what they think will happen. When they decide which interventions to compare in a randomized trial, they are making predictions about risk/benefit balance. When they power a study, they are making a prediction about treatment effect sizes. The accuracy of these predictions can mean the difference between an informative and an uninformative outcome, between a safe and an unsafe study.
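To make the powering example concrete, here is a minimal sketch of how an assumed effect size drives sample size, using the standard normal-approximation formula for a two-arm comparison of means. The effect sizes and standard deviation are hypothetical, not drawn from any particular trial.

```python
import math
from statistics import NormalDist

def per_arm_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm n for a two-sample z-test of means:
    n = 2 * (sigma / delta)^2 * (z_alpha + z_beta)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # quantile corresponding to desired power
    return math.ceil(2 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2)

# Predicting a 5-point effect (SD 10) calls for 63 patients per arm;
# predicting a smaller 4-point effect pushes that to 99.
print(per_arm_sample_size(delta=5, sigma=10))  # 63
print(per_arm_sample_size(delta=4, sigma=10))  # 99
```

If the true effect turns out smaller than predicted, the study is underpowered; so the accuracy of the prediction, not just the arithmetic, determines whether the trial can answer its question.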

The importance of these predictions is already implicitly recognized in many research ethics policies. Indeed, research policies often include requirements that studies should be based on a systematic evaluation of the available evidence. These requirements are really just another way of saying that the predictions underlying a study should be as accurate as possible given the state of available knowledge. Yet, trial protocols do not typically contain explicit predictions–e.g., likelihood statements about the various outcomes or events of ethical interest. And this makes it much more difficult to know whether or not investigators are adequately discharging their duties to the scientific community, to their patient-subjects, and to the research system as a whole.

In an article from this month’s Journal of Medical Ethics, I argue that investigators ought to make explicit predictions in their protocols. For example, they should state exactly how likely they think it is that their study will meet its primary endpoint, or exactly how many adverse events they expect to see. Doing so would then allow us to apply the tools of prediction science: to compare these predictions with outcomes and finally get a direct read on just how well investigators make use of the available body of evidence. This would, in turn, provide a number of other benefits, from facilitating more transparent ethical review to reducing the number of uninformative trials. It would also provide opportunities for other research stakeholders, like funding agencies, to better align their decision-making with the state of evidence.
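One standard tool from prediction science that could serve here is the Brier score, which compares stated probabilities against what actually happened. The numbers below are purely hypothetical, used only to show the computation.

```python
def brier_score(predictions, outcomes):
    """Mean squared gap between forecast probabilities and 0/1 outcomes.
    0.0 is a perfect forecaster; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical protocol predictions that each trial meets its primary
# endpoint, paired with what actually happened (1 = met, 0 = not met).
stated = [0.9, 0.8, 0.7, 0.6]
actual = [1, 0, 1, 0]
print(brier_score(stated, actual))  # 0.275, worse than chance: overconfident
```

A running score like this, computed across an investigator's protocols, is exactly the kind of "direct read" on evidence use that explicit predictions would make possible.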

The broad point here is that in the era of evidence-based medicine, we should be using this evidence to design better trials. Applying the science of prediction to clinical research allows us to take steps in this direction.

2015 Jun

Accessibility of trial reports for drugs stalling in development: a systematic assessment of registered trials

Non-publication of clinical trial results has been recognized as a serious scientific and ethical problem. Underreporting frustrates evaluation of a drug’s utility and safety, and fails to redeem the sacrifice of trial participants.

Thus far, policy measures to counteract non-publication have focused on trials of interventions used in practice. However, 9/10 interventions entering clinical testing never achieve marketing licensure. What happens to the results of those trials?

Figure depicting the rates of publication of trials of licensed drugs compared with trials of stalled drugs, overall and by major subgroup.

In our most recent publication, my colleagues and I systematically quantified the proportion of trials of unlicensed interventions that are not published.

We used trial registration records to create a sample of clinical trials of drugs that achieved licensure between 2005 and 2009 (“licensed drugs” or “translated drugs”) and drugs that stalled in clinical development (“stalled drugs”) in the same time period. Our sample included registered phase II, III or IV trials that closed between January 1st, 2006 and December 31st, 2008 and tested a drug in the treatment of cancer, cardiovascular disease or neurological disorders. We felt this sample provided a relevant and contemporary look into a wide swathe of drug development activity. We then searched Google Scholar, PubMed and Embase, and contacted investigators to determine the publication status of each trial in our sample at least 5 years after reported primary endpoint collection.

Whereas 75% (72/96) of registered trials of licensed drugs were published, only 37% (30/81) of trials of stalled drugs were. The adjusted hazard ratio for publication was 2.7 (95% confidence interval 1.7 to 4.3) in favour of licensed drug trials; that is, clinical trials of licensed drugs were almost three times as likely to publish findings as trials of stalled drugs. Higher publication rates for licensed drug trials were observed regardless of disease type, sponsorship (industry involvement versus not), trial phase, and geographic location.

Figure depicting the proportion of trials of licensed and unlicensed interventions that are published as a function of time from reported primary endpoint collection. The publication of stalled drug trials plateaus over time around 37%, whereas the publication of translated drug trials attains 75% in the same time period.

Moreover, a total of 20,135 patients participated in trials of stalled drugs that were never published. In addition to the alarming implications for these patients, trials in unsuccessful translation trajectories contain a wealth of scientific information for research planning, such as validation of pathophysiological theories driving drug development, as well as data about drug safety and pharmacology. All of this information vanishes when trials of unsuccessful interventions are not published.

Our key finding is that much of the information collected in unsuccessful drug trials is inaccessible to the broader research and practice communities. Our results provide an evidence base and rationale for policy reforms aimed at promoting transparency, ethics, and accountability in clinical research. One such potential reform is the Notice of Proposed Rulemaking entitled “Clinical Trials Registration and Results Submission” issued by the US Department of Health and Human Services in November 2014. The proposal, which moves to implement the FDAAA summary results reporting requirements for trials of licensed drugs and to extend them to trials of unlicensed drugs, was closed to public comments on March 23rd, 2015. The rule is now undergoing revision.

2015 May

So How Useful is Basic Science Research?

As part of the STREAM workshop series, Dr. Jeremy Howick gave a seminar on March 25th on the usefulness of basic science research. This is a great question to ask, and in his answer he makes two valid observations: 1) there’s a 70/30 split in research funding favoring basic science over clinical research, and 2) historically, more scientific advances have come about by accident than through the current system of finding a protein, characterizing its metabolic pathway, and developing and testing a drug that takes advantage of it. The best example is aspirin: we used salicylic acid for centuries before figuring out how it works.

Dr. Howick argues that we should test the wealth of information we already have – urban legends, homemade flu remedies, medicinal herbs – in more trials, and depend less on basic science research’s ability to develop drugs that we may (read: probably will not) end up using. He argues that this is something of a win-win situation. For example, consider the case of chicken soup: Grandmothers are under the impression that it cures colds. If we test its efficacy in trials and it really works, we can figure out how, and perhaps develop a drug that is more effective. Whereas if the soup fails to cure colds, we can then definitively disprove its effectiveness, saving a lot of people a lot of money in the canned-goods aisle.

Given the billions of dollars wasted each year in drug development, this seems a promising proposal. The presentation sparked an hour-and-a-half-long debate, however, probably because it was given to a room full of people with basic-science backgrounds. Although I agree with Dr. Howick on a lot of points, I don’t think using observational data to generate hypotheses is quite the panacea he makes it out to be. Sometimes testing folk remedies actually ends up creating more confusion. Take, for example, vitamin C: despite the dozens of clinical trials disproving its ability to cure the common cold, I’ve seen people (pharmacology students) swear up and down that it is the best cold remedy ever bottled. This just shows how pseudoscience (I’m looking at you, homeopathy) can be incredibly difficult to disprove.

2015 Apr

Kristin Voigt, co-hosted by STREAM and CIRST

For the first time, STREAM will be co-hosting a workshop event with CIRST, the Centre interuniversitaire de recherche sur la science et la technologie.

On April 9th at UQAM (see full location info below), Kristin Voigt will be speaking on “E-cigarettes and Smoking Norms: Do Concerns About the Renormalisation of Smoking Justify Regulation of E-cigarettes?” All are welcome, so please join us!

Dr. Voigt received her DPhil in political philosophy from the University of Oxford and has held post-doctoral positions at McGill, Harvard, Lancaster University and the European College of Liberal Arts. Her research focuses on egalitarian theories of distributive justice and the links between philosophy and social policy. Her recent and ongoing projects address issues such as conceptions and measures of health and health inequality; the use of incentives to improve health outcomes; (childhood) obesity; higher education policy; and smoking and tobacco control.

Thursday, April 9, 2015
3:00 – 5:00 PM
UQAM, 1205 rue Saint-Denis, Pavillon Paul-Gerin-Lajoie, Room N-8510

2015 Apr

Jeremy Howick visits STREAM on March 25th

Jeremy Howick’s research draws on his interdisciplinary training as a philosopher of science and clinical epidemiologist. He has two related areas of interest: (1) Evidence-Based Medicine (EBM), including EBM ‘hierarchies’ of evidence, clinical epidemiology, and how point of care tests might improve practice; and (2) philosophy of medicine, including the epistemological foundations of Evidence-Based Medicine, and the ethics of placebos in trials and practice.

On March 25th at 3 PM, he will be speaking on “How Useful is Basic Mechanistic Research for Discovering Medical Treatments that Benefit Humans?” All are welcome, so please join us!

Wednesday, March 25th, 2015
3:00 – 5:00 PM
3647 Peel St., Room 102

2015 Mar

Semantic natural language processing and philosophy of science

On 2015 February 18, James Overton visited the STREAM research group in Montreal, where he presented his research into what scientists are doing when they give an explanation for something. Many accounts of scientific explanation have been offered by philosophers of science over the years, but Overton’s offering differs in that he set out to establish his account of scientific explanation by actually examining the scientific literature. Specifically, he took a year’s worth of papers from the journal Science, converted them to unformatted text, and then parsed them using the Python Natural Language Toolkit.

Overton’s method combined an analysis of word frequencies with a random sampling of sentences that seemed to be making explanations, to see what sorts of data are used to justify what other sorts of claims. The most shocking result, at least for me, was that the word “law” was almost never used in the sample that Dr Overton described. That’s not to say that there is no discussion of natural laws at all, but given how much space the description of laws takes up in most accounts of scientific explanation, this seemed a very striking finding at the least.

This technique is very versatile and could be applied to a number of projects, from exploring the nature of scientific explanation, as Dr Overton has done, to a simpler project analysing the frequency of phrases like “sorafenib showed a modest effect” or “adverse events were manageable,” and seeing whether there is any relationship between the wording chosen and the result being described.
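As a rough illustration of the word-frequency side of this technique, here is a sketch that uses a simple stdlib tokenizer in place of the NLTK pipeline Overton used, applied to an invented snippet rather than actual Science text.

```python
import re
from collections import Counter

def term_frequencies(text, terms):
    """Lower-case and word-tokenize the text, then count the target terms."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return {t: counts[t] for t in terms}

# An invented abstract-like snippet, standing in for a parsed corpus.
snippet = ("We explain the observed effect by a mechanism in which the receptor "
           "is blocked; this model explains the data, and no law is invoked.")
print(term_frequencies(snippet, ["explain", "mechanism", "model", "law"]))
```

Run over a whole year of articles, counts like these are what make it possible to notice, for instance, how rarely “law” actually appears.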

2015 Mar

James Overton visits STREAM on February 18th

James Overton is the founder of Knocean, a consulting and development service at the intersection of philosophy, science, and software. Example projects include ontology development and deployment, building semantic web tools, and developing custom web applications for scientific and medical projects. He specializes in scientific database integration using biomedical ontologies.

On February 18th at 3 PM, he will be speaking on “Explanation in Science”. All are welcome, so please join us!

Wednesday, February 18, 2015
3:00 – 5:00 PM
3647 Peel St., Room 102

2015 Feb

Nope– It’s Still Not Ethical


Last year, Jonathan and I published a critique of unequal allocation ratios in late-phase trials. In these trials, patient-subjects are randomly allocated among the treatment arms in unequal proportions, such as 2:1 or 3:1, rather than the traditional equal (1:1) proportion. Strangely, despite introducing an additional burden (i.e., requiring larger sample sizes) the practice of unequal allocation is often defended as being “more ethical”. In that piece, published in Neurology, we showed that these purported ethical advantages did not stand up to careful scrutiny.
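The sample-size penalty of fixed unequal allocation follows from a standard variance argument; here is a quick sketch, assuming a two-sample comparison of means with equal variances (other endpoints behave slightly differently).

```python
def total_n_inflation(k):
    """Relative total sample size of a k:1 allocation versus 1:1.
    Power fixes 1/n1 + 1/n2; for a given total n, that sum is smallest
    at 1:1, and a k:1 split inflates the required total by the classic
    factor (k + 1)^2 / (4 * k)."""
    return (k + 1) ** 2 / (4 * k)

print(total_n_inflation(2))  # 1.125  -> a 2:1 trial needs ~12.5% more patients
print(total_n_inflation(3))  # ~1.333 -> a 3:1 trial needs ~33% more
```

This is the "additional burden" in question: the skewed ratio buys nothing statistically, and it costs extra participants.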

In a new article at Clinical Trials, Jonathan and I extend this line of argument to trials that use outcome-adaptive allocation. In an outcome-adaptive trial, the allocation ratio is dynamically adjusted over the course of the study, becoming increasingly weighted toward the better-performing arm. In contrast to the fixed but unequal ratios described above, outcome-adaptive ratios can sometimes reduce the necessary sample size to answer the study question. However, this reduction in cost and patient burden is not guaranteed. In fact, it only occurs when the difference between the observed effect sizes is large. And since there is no way to know in advance what this difference is going to be, these potential gains in efficiency due to outcome-adaptive designs are something of a gamble.
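To make the mechanics concrete, here is a minimal simulation of one common outcome-adaptive scheme, Thompson-style randomization; the response rates are invented, and the article itself is agnostic about the particular adaptive rule.

```python
import random

def adaptive_assign(successes, failures, rng=random):
    """Sample each arm's response rate from its Beta posterior and
    assign the next patient to the arm with the higher draw."""
    draws = [rng.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return draws.index(max(draws))

# Simulate a two-arm trial in which arm 1 truly responds more often.
random.seed(1)
true_rates = [0.3, 0.5]          # hypothetical response rates
successes, failures = [0, 0], [0, 0]
allocations = [0, 0]
for _ in range(200):
    arm = adaptive_assign(successes, failures)
    allocations[arm] += 1
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
print(allocations)  # allocation drifts toward the better-performing arm
```

Because the drift depends on the observed, not the true, effect sizes, the promised efficiency gains are exactly the gamble described above.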

Nevertheless, just as we saw with fixed unequal ratios, proponents of outcome-adaptive trials claim that this allocation scheme is “more ethical”. Setting aside the sample size issue, they argue that outcome-adaptive trials better accommodate clinical equipoise by collapsing the distinction between research and care. As it is sometimes put rhetorically: Would you rather be the last subject treated in a trial or the first subject treated in practice? The outcome-adaptive trial dissolves any ethical tension in this question. The treatment will be the same either way.

Of course, long-time readers of this blog will recognize the misunderstanding of clinical equipoise embodied in that question. The salient issue is not a comparison between the last subject enrolled in a study and the first patient treated in the context of clinical care. Rather, it is about ensuring that no subject is systematically disadvantaged by participating in a trial (and that all participants receive competent medical care). The relevant rhetorical question therefore needs to be re-phrased as follows: Would you rather be the first patient enrolled in a study or the last? In a traditional 1:1 RCT design, clinical equipoise dissolves the ethical tension in that question. But in an outcome-adaptive design, you should hope to be the last, and that is a serious problem.

2015 Feb

Charting the Unpredictable: Using fMRI patterns to determine outcome in acutely comatose patients


Every year in Canada around 50,000 people suffer brain injuries, with those experiencing severe trauma often becoming comatose for days or weeks post-incident. While there exists a battery of physiological prognostic indicators, such as the pupillary light reflex (or lack thereof) and patterns of EEG activity, a significant subset of patients retain an indeterminate prognosis even after these tests are completed. Sophisticated imaging techniques like fMRI provide a modern way of mapping residual cognitive function in newly comatose patients. To date, three fMRI studies have looked at the preservation of neural connectivity in two brain networks as potential markers of outcome. While all of these studies found a (modest) positive correlation between the BOLD signal strength of the intact network and better patient outcome, significant further work is required before the technique could become clinically useful.

Dr. Charles Weijer of Western University stresses, however, that this emerging research raises several ethical concerns: patients do not have decisional capacity, time constraints may not permit the proper procurement of surrogate informed consent, critically ill patients are clearly a vulnerable population, and it is not clear how the fMRI study results would impact patient prognosis and treatment decisions. There are also practical concerns, including the intra-hospital transport of patients to the fMRI machine and the time needed outside of the ICU to perform the scans.

As a recent graduate in neuroscience, another concern struck me: why had the researchers of the previous fMRI studies only considered two networks? The first mapped the preservation of activity in S1 after a stimulus to the hand, while the following two studies assessed the resting-state connectivity of the default mode network. These are just two of several networks that have been mapped and are reliably found in healthy subjects. I would be curious to see whether there is a prognostic contribution from analyzing other networks, like the auditory or executive resting-state networks. Exploring the integrity of several neural networks as potential prognostic indices may allow future research to home in on a target rather than just testing on a ‘one by one’ basis.

An analogous issue has emerged at STREAM regarding the trajectory of research in the field of cancer biomarkers and the proper method of exploring a new study space. Similar to the intended use of fMRI in the situation above, the biomarkers are being evaluated as predictive markers of outcome for specific cancer therapies. We have noticed that early studies in this field apply a very narrow set of research techniques to try to validate a biomarker. These methods are often suboptimal, and it is only much later down the road that researchers branch out into other, more successful methods. A notable example can be seen in our evaluation of the research trajectory of one potential biomarker in lung cancer, ERCC1. A non-specific antibody had been routinely used to detect the presence of the marker, and it wasn’t until years later that basic research into a more appropriate antibody was initiated. This is likely part of the reason for the notably sluggish progress in the field. We propose that, ideally, novel research programs would start with studies looking at a broad set of potential targets and then taper these down over time, as the accumulating evidence warrants. Acutely comatose patients are a new and important population for fMRI studies, and it seems to me that this research program might benefit from encouraging future studies to evaluate and compare the predictive use of multiple networks, so that they most rigorously map the study space.

Context: On January 12th, Charles Weijer, visiting from the Rotman Institute of Philosophy at Western University, gave the first talk in the new STREAM speaker series. He spoke on the ethical considerations involved in performing fMRI studies on acutely comatose patients in the ICU.

2015 Feb

Charles Weijer visits STREAM on January 12th

Charles Weijer is a philosopher, physician, and the Canada Research Chair in Bioethics at Western University. His academic interests center on the ethics of medical research. He has written about using placebos in clinical trials, weighing the benefits and harms of medical research, and protecting communities in research.

On January 12th at 3 PM, he will be speaking on “Ethical Considerations in Functional MRI Studies on Acutely Comatose Patients in the Intensive Care Unit”. All are welcome, so please join us!

Monday, January 12, 2015
3:00 – 5:00 PM
3647 Peel St., Room 101

2015 Jan




Our mission

The STREAM Group applies empirical and philosophical tools to address scientific, ethical, and policy challenges in the development and translation of health technologies.


Who we are

The STREAM Group is a collaboration of researchers who share a common set of principles about the goals and methods for studying clinical translation. Our members work in ethics, epidemiology, biology, psychology, and various medical specialties. The network is centered at McGill University, and has affiliates throughout North America.
