STREAM goes Preprint with an Analysis of the Development of the Anti-cancer drug Ixabepilone

by Amanda MacPherson

Today STREAM uploaded its first manuscript to medRxiv, a preprint server.

The manuscript, which can be accessed here, traces the clinical development process of a very uninteresting cancer drug you have never heard of: ixabepilone (approved in the USA, not in Europe). Our goal was to estimate the total patient benefit and burden associated with unlocking the therapeutic activity of this drug.

In research, the impulse is to study interesting things – in the case of drugs, cool, big-impact drugs like sunitinib, pembrolizumab, and imatinib. We do that all the time in STREAM. Why study uninteresting drugs? Well, because interesting things are, by definition, exceptional. That means a lot of patients participate in, and resources are expended on, research involving uninteresting drugs (how many? stay tuned for a forthcoming assessment from STREAM). As an aside, arguably we bioethicists and – at least historically – science and technology studies types probably spend too much time thinking about exceptional technologies (CRISPR/Cas9) and not enough about the mundane (e.g. asthma inhalers).1

What did we discover in our study of ixabepilone?

First, as with the interesting drugs sunitinib and sorafenib (members of yesterday’s wunderkind class of tyrosine kinase inhibitors), drug developers unlocked the clinical utility of ixabepilone with incredible efficiency. That is, the first indication they put into testing was the first indication to get an FDA approval. So much for dismissing preclinical and early phase research as hopelessly biased and misleading.

Second, as with sunitinib and sorafenib, drug developers spent a lot of energy trying to extend ixabepilone to other indications. Not as much energy as with sunitinib and sorafenib, but still a lot (17 different indications). Post-approval trials – which typically reach for fruit higher up the tree – were mostly a bust, leading to lots of harm but no new FDA approvals.

Third, we found that, summed across the whole drug development program, 16% of patients experienced objective response (i.e. tumour shrinkage, a quick and dirty way of assessing benefit) and 2.2% experienced drug-related fatalities. This compares with 16% and 1% of patients participating in sunitinib trials, and 12% and 2.2% for sorafenib. So to be clear: overall, the risk and burden associated with developing a barely useful drug, ixabepilone, are pretty much the same as those for developing the breakthrough drugs sunitinib and sorafenib.

Finally, about a quarter of trials in our sample were deemed uninformative using prespecified criteria. That compares with 26% in our study of sorafenib.

There are lots of normative and policy implications to unpack here. We leave that for later work. For now, we close with a few words about medRxiv. Publishing here is STREAM’s way of overcoming the obstacles to getting our novel work published in a peer-reviewed journal. We completed this manuscript in 2015 and submitted it to six different venues. Several were high-impact venues, and so desk rejection was not unexpected. Two journals were pretty low-impact, specialty cancer venues. In one case, a desk rejection took three months, only for us to learn that the editors were “unable to find referees,” despite our several attempts to suggest referees. (?!)  Our worst experience was at J Clin Epi. We submitted the manuscript in September 2015 and received a notice of rejection – after multiple queries as to its status – in March 2016 (six months). There, the paper was favourably received: no objections about the methodology or study quality. But the editor felt the piece duplicated our work on sunitinib (a bit like saying that a randomized trial of a drug in lung cancer is duplicative of a randomized trial of a totally unrelated drug in lung cancer), and objected that our conclusions were based on only one drug (well, yeah. Our abstract explains we set out to study one drug. Kind of like a meta-analysis, now that I think about it. If that was a concern, a desk rejection would have saved us and the two referees a lot of headache).

Thereafter, this important article sat on our hard drive until my PhD student (now graduated) proposed depositing it on medRxiv. STREAM has many other manuscripts sitting on our hard drives because of serial rejection arising from editorial discretion, and not (if we may say so ourselves) because of the quality of our work. This is a waste of research effort and resources, a drain on morale, and a threat to the validity of the scientific literature, since it contributes to a certain kind of publication bias. In the coming years, we will aim to upload our serially rejected work to preprint archives in order to ensure that important but unpopular research papers are available to the scientific community.

References

1. Timmermans, S. and Berg, M. (2003), The practice of medical technology. Sociology of Health & Illness, 25: 97-114. doi:10.1111/1467-9566.00342


BibTeX

@Manual{stream2019-1786,
    title = {STREAM goes Preprint with an Analysis of the Development of the Anti-cancer drug Ixabepilone},
    journal = {STREAM research},
    author = {Amanda MacPherson},
    address = {Montreal, Canada},
    date = 2019,
    month = aug,
    day = 1,
    url = {http://www.translationalethics.com/2019/08/01/stream-goes-preprint-with-an-analysis-of-the-development-of-the-anti-cancer-drug-ixabepilone/}
}

MLA

Amanda MacPherson. "STREAM goes Preprint with an Analysis of the Development of the Anti-cancer drug Ixabepilone" Web blog post. STREAM research. 01 Aug 2019. Web. 28 Apr 2024. <http://www.translationalethics.com/2019/08/01/stream-goes-preprint-with-an-analysis-of-the-development-of-the-anti-cancer-drug-ixabepilone/>

APA

Amanda MacPherson. (2019, Aug 01). STREAM goes Preprint with an Analysis of the Development of the Anti-cancer drug Ixabepilone [Web log post]. Retrieved from http://www.translationalethics.com/2019/08/01/stream-goes-preprint-with-an-analysis-of-the-development-of-the-anti-cancer-drug-ixabepilone/


Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)

by Jonathan Kimmelman

“Adenocarcinoma of Ascending Colon Arising in Villous Adenoma,” Ed Uthman on Flickr, March 29, 2007

In my previous post, I offered some reflections on my recent paper (with Marcin Waligora and colleagues) on pediatric phase 1 cancer trials, and I offered three plausible implications. In this post, I want to highlight two reasons why I think it’s worth facing up to one of the implications I posited, namely (b): a lot of shitty phase 1 trials in children are pulling the average estimate of benefit down, making it hard to discern the truly therapeutic ones.

First, the quality of reporting in phase 1 pediatric trials – like cancer trials in general (see here, here, and here; oncology: wake up!!) – is bad. For example:

“there was no explicit information about treatment-related deaths (grade 5 AEs) in 58.82% of studies.”

This points in general to the low scientific standards we tolerate in high-risk pediatric research. We should be doing better. I note that living by the Noble Lie that these trials are therapeutic makes it easier to live with such reporting deficiencies, since researchers, funders, editors, IRBs, and referees can always console themselves with the notion that, even if trials don’t report findings properly, at least children benefited from study participation.

A second important finding:

“The highest relative difference between responses was again identified in solid tumors. When 3 or fewer types of malignancies were included in a study, response rate was 15.01% (95% CI 6.70% to 23.32%). When 4 or more different malignancies were included in a study, response rate was 2.85% (95% CI 2.28% to 3.42%); p < 0.001.”

This may be telling us that when we have a strong biological hypothesis, such that we are very selective about which populations we enroll in trials, risk/benefit is much better. When we use a “shotgun” approach of testing a drug in a mixed population – that is, when we lack a strong biological rationale – risk/benefit is a lot worse. Perhaps we should be running fewer and better justified phase 1 trials in children. If that is the case (and, to be clear, our meta-analysis is insufficient to prove it), then it’s the research that needs changing, not the regulations.

Nota Bene: Huge thanks to an anonymous referee for our manuscript. Wherever you are, you held us to appropriately high standards and greatly improved our manuscript. Also, a big congratulations to the first author of this manuscript, Professor Marcin Waligora. Very impressive work; I’m honored to have him as a collaborator!

BibTeX

@Manual{stream2018-1583,
    title = {Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2018,
    month = feb,
    day = 27,
    url = {http://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/}
}

MLA

Jonathan Kimmelman. "Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2)" Web blog post. STREAM research. 27 Feb 2018. Web. 28 Apr 2024. <http://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/>

APA

Jonathan Kimmelman. (2018, Feb 27). Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 2) [Web log post]. Retrieved from http://www.translationalethics.com/2018/02/27/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-2/


Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)

by Jonathan Kimmelman

Photo from art.crazed Elizabeth on Flickr, March 16, 2010.

In years of studying the ethics of early phase trials in patients (for example, phase 1 cancer trials), I’ve become more and more convinced that it is a mistake to think of these trials as having a therapeutic impetus.

To be sure, the issues are complex, and many people who share my view do so for the wrong reasons. But in general, it seems to me difficult to reconcile the concept of competent medical care with giving patients a drug that will almost certainly cause major toxicities, and for which there is at best highly fallible animal evidence to support its activity (and at worst no animal evidence at all).

For this reason, I think groups like ASCO and others who (in a manner that is self-serving) advocate phase 1 trials as a vehicle for care for patients who qualify do a major disservice to patients and to the integrity of medicine.

But surely there are cases where the risks of phase 1 trial enrollment might plausibly be viewed as outweighed by the prospect of direct benefit. As I’ve argued elsewhere, the institution of phase 1 testing comprises a heterogeneous set of materials and activities. With a drug, you can specify its composition and dose on a product label, and declare the drug “therapeutic” or “nontherapeutic” for a specific patient population. With phase 1 trials, there is no standard composition or dose; phase 1 trials cannot be put in a bottle and labeled as a homogeneous entity that does or does not have therapeutic value. If that is so, it seems plausible that some phase 1 trials come closer to, and perhaps exceed, the threshold of risk/benefit/uncertainty that establishes a therapeutic claim. That is, it seems conceivable that phase 1 trials may be done under conditions, or with sufficiently strong supporting evidence, that one can present them as a therapeutic option for certain patients without lying or betraying the integrity of medicine.

U.S. regulations (and those elsewhere) state that, when exposing children to research risks exceeding a “minor increase over minimal,” the risks must be “justified by the anticipated benefit to the subjects…the relation of… anticipated benefit to the risk [must be] at least as favorable to the subjects as that presented by available alternative approaches.” Given that the risks of drugs tested in phase 1 cancer trials exceed a minor increase over minimal, U.S. regulations require that we view phase 1 trial participation as therapeutic when we enroll children.

Can this regulatory standard be reconciled with my view? I used to think so. Here’s why. Pediatric phase 1 trials are typically pursued only after drugs have been tested in adults. Accordingly, the ‘dogs’ of drug development have been thinned from the pack before testing in children. These trials also test a narrower dose range, so a greater proportion of participants are likely to receive active doses of drug. Finally, rightly or wrongly, the ethos of protection that surrounds pediatric research, plus the stringency of regulations governing pediatric testing, would (one might think) tend towards demanding higher evidentiary standards before testing is launched.

This week, Marcin Waligora, colleagues, and I published the largest meta-analysis of pediatric phase 1 cancer trials to date, and it fills me with doubt about a therapeutic justification for phase 1 pediatric trials (for news coverage, see here). Before describing our findings, a few notes of caution.

First, our findings need to be interpreted with caution: crappy reporting practices for phase 1 trials make it hard to probe risk and benefit. Also, our analyses used methods and assumptions that differ somewhat from those used in similar meta-analyses of adults. Finally, who am I to impose my own risk/benefit sensibility on guardians (and children) who have reached the end of the line in terms of standard care options?

These provisos aside, our findings suggest that the risk/benefit for pediatric phase 1 cancer trials is no better than it is for adult trials. Some salient findings:

  • On average, every pediatric participant will experience at least one severe or life-threatening side effect.
  • For monotherapy trials in children with solid tumors (where we can compare our data with previous studies of adults), about 2.5% of children had major tumor shrinkage, compared with a decade-old estimate of 3.8% in adults; for combination therapy, the figures were 10.5% in children vs. 11.7% in adults.
  • Contrary to all the latest excitement about new treatment options, our data do not show clear time trends suggesting an improvement of risk/benefit with newer drugs.
  • 39% of children in phase 1 studies received less than the recommended dose of the investigational drug.

If, in fact, we reject the view that adult phase 1 studies can generally be viewed as therapeutic; and if risk/benefit in pediatric studies is similar despite their building on adult evidence; and if we accept that available care options outside a trial are no better or worse in terms of their risk/benefit for children than for adults, then it follows (more or less, and assuming our meta-analysis presents an accurate view of risk/benefit) that phase 1 trials in children cannot generally be presented as having a therapeutic risk/benefit.

This puts medicine in a bind. Phase 1 trials are critical for advancing treatment options for children. But most cannot, in my view, be plausibly reconciled with research regulations. Either a) my above analysis is wrong, b) a lot of substandard phase 1 trials in children are pulling the average estimate of benefit down, making it hard to discern the truly therapeutic ones, c) we must accept, à la Plato, a noble lie and live the fiction that phase 1 studies are therapeutic, d) we must cease phase 1 cancer drug trials in children, e) regulations are misguided, or f) phase 1 trials should undergo a specialized review process, so-called 407 review.

I would posit (b), (e), and (f) as the most plausible implications of our meta-analysis.

In my next post, a few reflections. And stay tuned for further empirical and conceptual work on this subject.

BibTeX

@Manual{stream2018-1559,
    title = {Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2018,
    month = feb,
    day = 26,
    url = {http://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/}
}

MLA

Jonathan Kimmelman. "Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1)" Web blog post. STREAM research. 26 Feb 2018. Web. 28 Apr 2024. <http://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/>

APA

Jonathan Kimmelman. (2018, Feb 26). Risk/Benefit in Pediatric Phase 1 Cancer Trials: Noble Lie? (part 1) [Web log post]. Retrieved from http://www.translationalethics.com/2018/02/26/risk-benefit-in-pediatric-phase-1-cancer-trials-noble-lie-part-1/


The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?”

by Jonathan Kimmelman

How accurately can researchers predict whether high-profile preclinical findings will reproduce? This week in PLoS Biology, STREAM reports the results of a study suggesting the answer is “not very well.” You can read about our methods, assumptions, results, claims, etc. in the original report (here) or in various press coverage (here and here). Instead, I will use this blog entry to reflect on how we pulled this paper off.

This was a bear of a study to complete, for many reasons. Studying experts is difficult, partly because, by definition, experts are scarce. They also have limited time. Defining who is and who is not an expert is also difficult. Another challenge is studying basic and preclinical research. Basic and preclinical researchers do not generally follow pre-specified protocols, and they certainly do not register their protocols publicly. This makes it almost impossible to conduct forecasting studies in this realm. We actually tried a forecast study asking PIs to forecast the results of experiments in their lab (we hope to write up the results at a later date); to our surprise, a good many planned experiments were never done, or when they were done, they were done differently than originally intended, rendering the forecasts irrelevant. So when it became clear that the Reproducibility Project: Cancer Biology was a go and that they were working with pre-specified and publicly registered protocols, we leapt at the opportunity.

For our particular study of preclinical research forecasting, there was another challenge. Early on, we were told that the Reproducibility Project: Cancer Biology was controversial. I got a taste of that controversy in many conversations with cancer biologists, including one who described the initiative as “radioactive; people don’t even want to acknowledge it’s there.”

This probably accounts for some of the challenges we faced in recruiting a meaningful sample, and to some extent in peer review. Regarding the former, my sedulous and perseverant postdoc, Danny Benjamin, working together with some great undergraduate research assistants, devised and implemented all sorts of methods to boost recruitment. In the end, we were able to get a good-sized (and, it turns out, representative) sample. But this is a tribute to Danny’s determination.

Our article came in for some pretty harsh comments on initial peer review. In particular, one referee seemed fiendishly hostile to the RP:CB. The reviewer was critical of our focusing on xenograft experiments, which “we now know are impossible to evaluate due to technical reasons.” Yes, that’s right: we NOW know this. What we were trying to determine was whether people could predict this!

The reviewer also seemed to pre-judge the replication studies (as well as the very definition of reproducibility, which is very slippery): “we already know that the fundamental biological discovery reported in several of these has been confirmed by other published papers and by drug development efforts in biopharma.” But our survey was not asking people to predict whether fundamental biological discoveries were true. We were asking whether particular experiments, when replicated based on publicly available protocols, could produce the same relationships.

The referee was troubled by our reducing reproducibility to a binary (yes/no). That was something we struggled with in design. But forecasting exercises are only useful insofar as events are verifiable and objective (no point in asking for forecasts if we can’t define the goalposts, or if the goalposts move once we see the results). We toyed with creating a jury to referee reproducibility, and using jury judgments to verify forecasts. But in addition to being almost completely impractical, it would have been methodologically dubious: forecasts would, in the end, be forecasts of jury judgments, not of objectively verifiable data. To be a good forecaster, you’d need to peer into the souls of the jurors, as well as the machinery of the experiments themselves. But we were trying to study scientific judgment, not social judgment.
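
To make the “verifiable and objective” point concrete, here is a minimal sketch (with invented numbers, not data from our study) of how probabilistic forecasts of a binary replication outcome can be scored once the goalposts are fixed. The Brier score, one standard scoring rule, is simply the mean squared difference between forecast probabilities and the observed 0/1 outcomes.

    # Hypothetical forecasts and verified outcomes, for illustration only.
    import numpy as np

    forecasts = np.array([0.9, 0.7, 0.3, 0.8, 0.2])  # stated P(experiment will reproduce)
    outcomes = np.array([1, 0, 0, 1, 0])             # verified binary results (1 = reproduced)

    brier = np.mean((forecasts - outcomes) ** 2)     # lower is better; always guessing 0.5 scores 0.25
    print(f"Brier score: {brier:.3f}")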

Our paper, in the end, potentially pours gasoline/petrol/das Benzin on a fiery debate about reproducibility (i.e. not only do many studies not reproduce, but scientists also have limited awareness of which studies will reproduce). Yet we caution against facile conclusions. For one, there were some good forecasters in our sample. But perhaps more importantly, ours is one study, one ‘sampling’ of reality, subject to all the limitations that come with methodology, chance, and our own very human struggles with bias. In the end, I think the findings are hopeful insofar as they suggest that part of what we need to work on in science is not merely designing and reporting experiments, but learning to make proper inferences (and communicate effectively) about the generalizability of experimental results. Those inferential skills seem on display in one of our star forecasters, Yale grad student Taylor Sells (named on our leaderboard): “We often joke about the situations under which things do work, like it has to be raining and it’s a Tuesday for it to work properly…as a scientist, we’re taught to be very skeptical of even published results… I approached [the question of whether studies would reproduce] from a very skeptical point of view.”

BibTeX

@Manual{stream2017-1418,
    title = {The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2017,
    month = jul,
    day = 5,
    url = {http://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/}
}

MLA

Jonathan Kimmelman. "The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce?" Web blog post. STREAM research. 05 Jul 2017. Web. 28 Apr 2024. <http://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/>

APA

Jonathan Kimmelman. (2017, Jul 05). The Back Story on “Can cancer researchers accurately judge whether preclinical reports will reproduce? [Web log post]. Retrieved from http://www.translationalethics.com/2017/07/05/the-back-story-on-can-cancer-researchers-accurately-judge-whether-preclinical-reports-will-reproduce/


Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs

by Jonathan Kimmelman

In my experience, peer review greatly improves a manuscript in the vast majority of cases. There are times, however, when peer review improves a manuscript on one less important axis, while impoverishing it in another more important one. This is the case with our recent article in Annals of Neurology.

Briefly, our manuscript created a sample of FDA-approved neurological drugs, as well as a matched sample of neurological drugs that did not receive FDA approval but instead stalled in development (i.e. a 3-year pause in testing). We then used clinicaltrials.gov to identify trials of drugs in both groups, and determined the proportion of trials that were published for approved drugs as well as for non-approved drugs. We found, not surprisingly, that trials involving stalled neurological drugs were significantly less likely to be published. The bigger surprise, for us, was that the proportion of trials published at 5 years or more after closure was a mere 32% for stalled neurological drugs (56% for licensed drugs). Think about what that means in terms of the volume of information we lose, and the disrespect we show to neurological patients who volunteer their bodies to test drugs that show themselves to be ineffective and/or unsafe.

We shopped the manuscript around – eventually landing at Annals of Neurology. The paper received glowing reviews. Referee 1: “The research is careful and unbiased and the conclusions sound and impactful.” Referee 2: “This is an excellent and very important paper. It rigorously documents a very important issue in clinical trial conduct and reporting. The authors have done a superb job of identifying a crucial question, studying it carefully and fairly with first-rate quantification, and presenting the results in a clear, well-written, and illuminating manner… I have no major concerns, but some small points may be helpful…” Ka-ching!

However, after we submitted small revisions, the manuscript was sent to a statistical referee who was highly critical of elements that seemed minor given the thrust of the manuscript. [Disclosure: from here forward, this blog reflects my opinion but not necessarily the opinion of my two co-authors.] We were told to expunge the word “cohort” from the manuscript (since there was variable follow-up time). Odd, but not worth disputing. We were urged “to fit a Cox model from time of completion of the trial to publication, with a time-varying covariate that is set to 0 until the time of FDA approval, at which time it is changed to 1. The associated coefficient of this covariate is the hazard ratio for publication comparing approved drugs to unapproved drugs.” That seemed fastidious (we’re not estimating survival of a drug to make policy here) but not unreasonable. We were told we must remove our Kaplan-Meier curves of time to publication. I guess. So we did it and resubmitted, keeping some of our unadjusted analyses in (labeled, of course, as unadjusted).
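
For readers curious about what the referee was asking for, here is a minimal sketch of a Cox model with a time-varying FDA-approval covariate. This is not our actual analysis: the data are invented, the column names are hypothetical, and it assumes the Python lifelines library.

    # Each trial contributes one row per interval (months since trial closure).
    # `approved` flips from 0 to 1 at the drug's FDA approval date; `published`
    # is 1 only on the interval in which the trial's results were published.
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    intervals = pd.DataFrame({
        "trial_id":  [1, 1, 2, 3, 4, 4],
        "start":     [0, 24, 0, 0, 0, 12],
        "stop":      [24, 60, 48, 72, 12, 36],
        "approved":  [0, 1, 0, 0, 0, 1],
        "published": [0, 1, 1, 0, 0, 0],
    })

    ctv = CoxTimeVaryingFitter()
    ctv.fit(intervals, id_col="trial_id", event_col="published",
            start_col="start", stop_col="stop")
    ctv.print_summary()  # exp(coef) for `approved` is the hazard ratio for publication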

The reviewer pressed further. He/she wanted all presentation of proportions and aggregate data removed (here I will acknowledge a generous aspect of the referee and editors: he/she agreed to use track changes to cut content from the ms [I am not being snarky here; this went beyond normal protocol at major journals]). We executed a “search and destroy” mission for just about all percentages in the manuscript: we cut two tables’ worth of data describing the particular drugs, the characteristics of trials in our sample, and the proportions of trials for which data were obtainable in abstract form or on company websites. Although one referee had signed off (“My high regard for this paper persists. Differences in views concerning the statistical approach are understandable. I see the paper as providing very important data about the trajectory of publication or non-publication of data depending on the licensing fate of the drug being studied, and see the survival analysis as bolstering that approach”), the editors insisted that we make the revisions requested by the statistical reviewer.

So in the end, we had to present what we believe to be an impoverished, data-starved, and somewhat less accessible version in Annals of Neurology. And not surprisingly, upon publication, we were (fairly) faulted online for not providing enough information about our sample. To our mind, the **real** version, the one we think incorporates the referee’s productive suggestions while respecting our discretion as authors, can be accessed here. And we are making our complete dataset available here.

BibTeX

@Manual{stream2017-1325,
    title = {Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 5,
    url = {http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/}
}

MLA

Jonathan Kimmelman. "Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs" Web blog post. STREAM research. 05 Jun 2017. Web. 28 Apr 2024. <http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/>

APA

Jonathan Kimmelman. (2017, Jun 05). Nonpublication of Neurology Trials for Stalled Drugs & the Ironic Nonpublication of Data on those Stalled Drugs [Web log post]. Retrieved from http://www.translationalethics.com/2017/06/05/nonpublication-of-neurology-trials-for-stalled-drugs-the-ironic-nonpublication-of-data-on-those-stalled-drugs/


Recapping the recent plagiarism scandal

by Benjamin Gregory Carlisle

Parts of the paper that are nearly identical to my blog

A year ago, I received a message from Anna Powell-Smith about a research paper written by two doctors from Cambridge University that was a mirror image of a post I wrote on my personal blog1 roughly two years prior. The structure of the document was the same, as were the rationale, the methods, and the conclusions drawn. There were entire sentences that were identical to my post. Some wording changes were introduced, but the words were unmistakably mine. The authors had also changed some of the details of the methods, and in doing so introduced technical errors, which confounded proper replication. The paper had been press-released by the journal,2 and even noted by Retraction Watch.3

I checked my site’s analytics and found a record of a user from the University of Cambridge computer network accessing the blog post in question three times on 2015 December 7 and again on 2016 February 16, ten days prior to the original publication of the paper in question on 2016 February 26.4

At first, I was amused by the absurdity of the situation. The blog post was, ironically, a method for preventing certain kinds of scientific fraud. I was flattered that anyone noticed my blog at all, and I believed that academic publishing would have a means for correcting itself when the wrong people are credited with an idea. But as time went on, I became more and more frustrated by the fact that none of the institutions that were meant to prevent this sort of thing were working.

The journal did not catch the similarities between this paper and my blog in the first place, and the peer review of the paper was flawed as well. The journal employs an open peer review process in which the reviewers’ identities are published. The reviewers must all make a statement saying, “I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.” Despite this process, none of the reviewers made an attempt to analyse the validity of the methods used.

After the journal’s examination of the case, they informed us that updating the paper to cite me after the fact would undo any harm done by failing to credit the source of the paper’s idea. A new version was hastily published that cited me, using a non-standard citation format that omitted the name of my blog, the title of my post, and the date of original publication. The authors did note that the idea had been proposed in “the grey literature,” so I re-named my blog to “The Grey Literature” to match.

I was shocked by the journal’s response. Authorship of a paper confers authority in a subject matter, and their cavalier attitude toward this, especially given the validity issues I had raised with them, seemed irresponsible to me. In the meantime, the paper was cited favourably by the Economist5 and the BMJ6, crediting Irving and Holden.

I went to Retraction Watch with this story,7 which brought to light even more problems with this example of open peer review. The peer reviewers were interviewed, and rather than re-evaluating their support for the paper, they doubled down, choosing instead to disparage my professional work and call me a liar. One reviewer wrote, “It is concerning that this blogger would be attempting a doctorate and comfortably ascribe to a colleague such falsehoods.”

The journal refused to retract the paper. It was excellent press for the journal and for the paper’s putative authors, and it would have been embarrassing for them to retract it. The journal had rolled out the red carpet for this paper after all,2 and it was quickly accruing citations.

The case was forwarded to the next meeting of the Committee on Publication Ethics (COPE) for their advice. Three months later, at the August 2016 COPE meeting, the case was presented and voted on.8 It was surreal for me to be forced to wait for a seemingly unaccountable panel of journal editors to sit as a de facto court, deciding whether or not someone else would be credited with my words, all behind locked doors, with only one side of the case—the journal editors’—represented. In the end, they all but characterised my complaints as “punitive,” and dismissed them as if my only reason for desiring a retraction was that I was hurt and wanted revenge. The validity issues that I raised were acknowledged but no action was recommended. Their advice was to send the case to the authors’ institution, Cambridge University, for investigation. I do not know if Cambridge did conduct an investigation, and there has been no contact with me.

There is, to my knowledge, no way to appeal a decision from COPE, and I know of no mechanism of accountability for its members should they give a journal the wrong advice. As of January 2017, the journal officially considered the case closed.

It is very easy to become disheartened and jaded when things like this happen—as the Economist article citing Irving and Holden says, “Clinical trials are a murky old world.”5 The institutions that are supposed to protect the integrity of the academic literature sometimes act in ways that miss the lofty standards we expect from modern science.

Fortunately, the scientific community turned out to be a bigger place than I had given it credit for. There are people like Anna, who let me know that this was happening in the first place, and Ben Goldacre, who provided insight and support. My supervisor and my colleagues in the STREAM research group were incredibly supportive and invested in the outcome of this case. A number of bloggers (Retraction Watch,7,9 Neuroskeptic,10 Jordan Anaya11—if I missed one, let me know!) picked up this story and drew attention to it, and in the end, the paper was reviewed by Daniel Himmelstein,12 whose persistence and thoroughness convinced the journal to re-open the case and invite Dr Knottenbelt’s decisive review.

While it is true that the mistakes introduced into the methods are what finally brought about the paper’s retraction, those mistakes happened in the first place because the authors did not come up with the idea themselves. It is a fallacy to think that issues of scientific integrity can be considered in isolation from issues of scientific validity, and this case very clearly shows how that sort of thinking could lead to a wrong decision.

Of course, there are still major problems with academic publishing. But there are also intelligent and conscientious people who haven’t given up yet. And that is an encouraging thought.

References

1. Carlisle, B. G. Proof of prespecified endpoints in medical research with the bitcoin blockchain. The Grey Literature (2014).

2. F1000 Press release: Doctors use Bitcoin tech to improve transparency in clinical trial research. (2016). Available at: http://f1000.com/resources/160511_Blockchain_FINAL.pdf. (Accessed: 23rd June 2016)

3. In major shift, medical journal to publish protocols along with clinical trials. Retraction Watch (2016).

4. Irving, G. & Holden, J. How blockchain-timestamped protocols could improve the trustworthiness of medical science. F1000Research 5, 222 (2017).

5. Better with bitcoin | The Economist. Available at: http://www.economist.com/news/science-and-technology/21699099-blockchain-technology-could-improve-reliability-medical-trials-better. (Accessed: 23rd June 2016)

6. Topol, E. J. Money back guarantees for non-reproducible results? BMJ 353, i2770 (2016).

7. Plagiarism concerns raised over popular blockchain paper on catching misconduct. Retraction Watch (2016).

8. What extent of plagiarism demands a retraction vs correction? | Committee on Publication Ethics: COPE. Available at: http://publicationethics.org/case/what-extent-plagiarism-demands-retraction-vs-correction. (Accessed: 16th August 2016)

9. Authors retract much-debated blockchain paper from F1000. Retraction Watch (2017).

10. Neuroskeptic. Blogs, Papers, Plagiarism and Bitcoin – Neuroskeptic. (2016).

11. Anaya, J. Medical students can’t help but plagiarize, apparently. Medium (2016). Available at: https://medium.com/@OmnesRes/medical-students-cant-help-but-plagiarize-apparently-f81074824c17. (Accessed: 21st July 2016)

12. Himmelstein, Daniel. Satoshi Village. The most interesting case of scientific irreproducibility? Available at: http://blog.dhimmel.com/irreproducible-timestamps/. (Accessed: 8th March 2017)

BibTeX

@Manual{stream2017-1280,
    title = {Recapping the recent plagiarism scandal},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 2,
    url = {http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/}
}

MLA

Benjamin Gregory Carlisle. "Recapping the recent plagiarism scandal" Web blog post. STREAM research. 02 Jun 2017. Web. 28 Apr 2024. <http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/>

APA

Benjamin Gregory Carlisle. (2017, Jun 02). Recapping the recent plagiarism scandal [Web log post]. Retrieved from http://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/


Scientists should be cognizant of how the public perceives uncertainty

by Daniel Benjamin

Scientific results are inherently uncertain. The public views uncertainty differently than scientists. One key to understanding when and how scientific research gets misinterpreted is to understand how the public thinks about scientific uncertainty.

A recent paper in the Journal of Experimental Psychology: General explores how laypersons perceive uncertainty in science. Broomell and Kane use principal component analysis to discover three underlying dimensions that describe how the public characterizes uncertainty: precision, mathematical abstraction, and temporal distance. These three dimensions, in turn, predict how people rate the quality of a research field. Precision – loosely defined in this context as the accuracy of the measurements, predictions, and conclusions drawn within a research field – is the dominant factor. One interpretation is that the public is primarily concerned with definitiveness when evaluating scientific claims.
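
As a rough illustration of that kind of analysis (not Broomell and Kane’s actual code or data), here is a minimal sketch of principal component analysis applied to hypothetical ratings of research fields on uncertainty-related attributes, using scikit-learn; the attribute names and numbers are invented.

    # Rows = research fields, columns = invented uncertainty-related attributes;
    # the ratings are random stand-ins for averaged survey responses.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    attributes = ["measurement accuracy", "prediction accuracy", "use of math",
                  "reliance on models", "studies the past", "studies the future"]
    ratings = rng.normal(size=(40, len(attributes)))

    pca = PCA(n_components=3)
    scores = pca.fit_transform(StandardScaler().fit_transform(ratings))

    print(pca.explained_variance_ratio_)  # share of variance captured by each dimension
    print(pca.components_.round(2))       # loadings: which attributes define each dimension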

Members of the public lose confidence when fields of study are described as being more uncertain. This is relevant for scientists to consider when communicating results. On the one hand, over-selling the certainty of an outcome can mislead. On the other hand, the public might tend to dismiss important scientific findings when researchers describe uncertainty honestly and openly, as we have seen in the public denial of vaccinations and climate change. Perceptions of a research field do not seem to influence how people view individual studies, so each study should be treated as its own communique.

Broomell et al found some evidence that people with different personal characteristics interpret scientific uncertainty in different ways. Self-identified Republicans are more concerned about expert disagreement, while self-identified Democrats are more concerned with the quality of evidence. Such individual differences suggest that the type of uncertainty surrounding scientific findings shapes the way members of the public receive scientific claims. Consider how this might play out in medical research and informed consent. Clinical equipoise is the idea that research on human subjects is only ethical if experts are uncertain about which treatment in a randomized trial is better. If one treatment is thought to be better than another, it is unethical to deny the preferred treatment to patients. The findings of Broomell et al suggest that the structure of uncertainty, namely unsettled evidence versus expert disagreement, is perceived differently by laypersons. Perhaps some patients are more concerned with who deems a treatment successful, while others are more concerned with why.

BibTeX

@Manual{stream2017-1261,
    title = {Scientists should be cognizant of how the public perceives uncertainty},
    journal = {STREAM research},
    author = {Daniel Benjamin},
    address = {Montreal, Canada},
    date = 2017,
    month = may,
    day = 26,
    url = {http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/}
}

MLA

Daniel Benjamin. "Scientists should be cognizant of how the public perceives uncertainty" Web blog post. STREAM research. 26 May 2017. Web. 28 Apr 2024. <http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/>

APA

Daniel Benjamin. (2017, May 26). Scientists should be cognizant of how the public perceives uncertainty [Web log post]. Retrieved from http://www.translationalethics.com/2017/05/26/by-daniel-benjamin-phd/


Into the Unknown: Methodological and Ethical Issues in Phase I Trials

by Esther Vinarov

MUHCtalk

Tuesday, April 18, 2017
12:00 – 1:00pm
RI auditorium, Glen Site – E S1.1129

With the current push to transform Montréal into a hub for early phase research, there is a pressing need to explore the issues that researchers and research ethics boards (REBs) encounter in Phase I trials.

In this two-part presentation, recent examples from healthy volunteer and oncology studies will be used to illustrate how protocol design and ethics review can be enhanced.

BibTeX

@Manual{stream2017-1252,
    title = {Into the Unknown: Methodological and Ethical Issues in Phase I Trials},
    journal = {STREAM research},
    author = {Esther Vinarov},
    address = {Montreal, Canada},
    date = 2017,
    month = apr,
    day = 17,
    url = {http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/}
}

MLA

Esther Vinarov. "Into the Unknown: Methodological and Ethical Issues in Phase I Trials" Web blog post. STREAM research. 17 Apr 2017. Web. 28 Apr 2024. <http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/>

APA

Esther Vinarov. (2017, Apr 17). Into the Unknown: Methodological and Ethical Issues in Phase I Trials [Web log post]. Retrieved from http://www.translationalethics.com/2017/04/17/into-the-unknown-methodological-and-ethical-issues-in-phase-i-trials/


Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research

by Esther Vinarov

Tuesday, October 4, 2016
1 PM
3647 Peel St., Room 101

Everyone acknowledges the need for biomedical research to enjoy the public’s trust that it continuously solicits and receives. An ethical precondition of soliciting trust is knowing the extent to which that trust is deserved. What makes biomedical research deserving of the public trust requires in-depth attention. This session will review three different criteria of trustworthiness in research – reliability, social value, and ethical conduct – to explore the extent to which the biomedical research enterprise warrants public trust.

Mark Yarborough, PhD, is Professor of General Medicine and Geriatrics and Dean’s Professor of Bioethics in the Bioethics Program at the University of California, Davis.

Photo by clarita

BibTeX

@Manual{stream2016-1149,
    title = {Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research},
    journal = {STREAM research},
    author = {Esther Vinarov},
    address = {Montreal, Canada},
    date = 2016,
    month = sep,
    day = 12,
    url = {http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/}
}

MLA

Esther Vinarov. "Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research" Web blog post. STREAM research. 12 Sep 2016. Web. 28 Apr 2024. <http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/>

APA

Esther Vinarov. (2016, Sep 12). Who Cares if the Emperor is Immodestly Attired: An Exploration of the Trustworthiness of Biomedical Research [Web log post]. Retrieved from http://www.translationalethics.com/2016/09/12/stream-workshop-series-2016-october-4th-mark-yarborough/


Accelerated Drug Approval and Health Inequality

by Jonathan Kimmelman

Since the 1960s, the U.S. FDA has served as a model for drug regulation around the world with its stringent standards for approval of new drugs. Increasingly, however, a coalition of libertarians, patient advocates, and certain commercial interests has been pressing for a relaxation of these standards. Examples of legislative initiatives that would weaken regulatory standards of evidence for drug approval include the “Regrow Act,” the “21st Century Cures Act,” and various “Right to Try” laws passed in U.S. states.

Much has been written in support of, and against, relaxation of current regulatory standards. Typically, these debates are framed in terms of a conflict between public welfare (i.e. the public needs to be protected from unproven and potentially dangerous drugs) and individual choice (i.e. desperately ill patients are entitled to make their own personal decisions about risky new drugs).

In a recent commentary, my co-author Alex London and I take a different tack on this debate. Rather than framing this as “public welfare” vs. “individual choice,” we examine the subtle ways that relaxed standards for drug approval would redistribute the burdens of uncertainty in ways that raise questions of fairness. We suggest weakened standards would shift greater burdens of uncertainty a) from advantaged populations to ones that already suffer greater burdens from medical uncertainty; b) from research systems toward healthcare systems; c) from private and commercial payers toward public payers; and d) from comprehending and voluntary patients towards less comprehending and less voluntary patients. We hope our analysis stimulates a more probing discussion of the way regulatory standards determine how medical uncertainty is distributed.

BibTeX

@Manual{stream2016-1090,
    title = {Accelerated Drug Approval and Health Inequality},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2016,
    month = jul,
    day = 18,
    url = {http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/}
}

MLA

Jonathan Kimmelman. "Accelerated Drug Approval and Health Inequality" Web blog post. STREAM research. 18 Jul 2016. Web. 28 Apr 2024. <http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/>

APA

Jonathan Kimmelman. (2016, Jul 18). Accelerated Drug Approval and Health Inequality [Web log post]. Retrieved from http://www.translationalethics.com/2016/07/18/accelerated-drug-approval-and-health-inequality/

