Recapping the recent plagiarism scandal

by Benjamin Gregory Carlisle

Figure: Parts of the paper that are nearly identical to my blog

A year ago, I received a message from Anna Powell-Smith about a research paper written by two doctors from Cambridge University that was a mirror image of a post I wrote on my personal blog [1] roughly two years prior. The structure of the document was the same, as were the rationale, the methods, and the conclusions drawn. Entire sentences were identical to my post. Some wording changes were introduced, but the words were unmistakably mine. The authors had also changed some of the details of the methods, and in doing so introduced technical errors, which confounded proper replication. The paper had been press-released by the journal [2], and even noted by Retraction Watch [3].

I checked my site’s analytics and found a record of a user from the University of Cambridge computer network accessing the blog post in question three times on 2015 December 7 and again on 2016 February 16, ten days prior to the original publication of the paper on 2016 February 26 [4].

At first, I was amused by the absurdity of the situation. The blog post was, ironically, a method for preventing certain kinds of scientific fraud. I was flattered that anyone noticed my blog at all, and I believed that academic publishing would have a means for correcting itself when the wrong people are credited with an idea. But as time went on, I became more and more frustrated by the fact that none of the institutions that were meant to prevent this sort of thing were working.

The journal did not catch the similarities between this paper and my blog in the first place, and the peer review of the paper was flawed as well. The journal employs an open peer review process in which the reviewers’ identities are published. The reviewers must all make a statement saying, “I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.” Despite this process, none of the reviewers made an attempt to analyse the validity of the methods used.

After examining the case, the journal informed us that updating the paper to cite me after the fact would undo any harm done by failing to credit the source of the paper’s idea. A new version was hastily published that cited me, using a non-standard citation format that omitted the name of my blog, the title of my post, and the date of original publication. The authors did note that the idea had been proposed in “the grey literature,” so I renamed my blog “The Grey Literature” to match.

I was shocked by the journal’s response. Authorship of a paper confers authority in a subject matter, and their cavalier attitude toward this, especially given the validity issues I had raised with them, seemed irresponsible to me. In the meantime, the paper was cited favourably by the Economist [5] and in the BMJ [6], crediting Irving and Holden.

I went to Retraction Watch with this story [7], which brought to light even more problems with this example of open peer review. The peer reviewers were interviewed, and rather than re-evaluating their support for the paper, they doubled down, choosing instead to disparage my professional work and call me a liar. One reviewer wrote, “It is concerning that this blogger would be attempting a doctorate and comfortably ascribe to a colleague such falsehoods.”

The journal refused to retract the paper. It was excellent press for the journal and for the paper’s putative authors, and it would have been embarrassing for them to retract it. The journal had rolled out the red carpet for this paper after all [2], and it was quickly accruing citations.

The case was forwarded to the next meeting of the Committee on Publication Ethics (COPE) for their advice. Three months later, at the August 2016 COPE meeting, the case was presented and voted on [8]. It was surreal for me to be forced to wait for a seemingly unaccountable panel of journal editors to sit as a de facto court, deciding whether or not someone else would be credited with my words, all behind locked doors, with only one side of the case—the journal editors’—represented. In the end, they all but characterised my complaints as “punitive,” and dismissed them as if my only reason for desiring a retraction was that I was hurt and wanted revenge. The validity issues that I raised were acknowledged, but no action was recommended. Their advice was to send the case to the authors’ institution, Cambridge University, for investigation. I do not know whether Cambridge conducted an investigation, and no one there has contacted me about one.

There is, to my knowledge, no way to appeal a decision from COPE, and I know of no mechanism of accountability for its members should they give a journal the wrong advice. As of January 2017, the journal officially considered the case closed.

It is very easy to become disheartened and jaded when things like this happen—as the Economist article citing Irving and Holden says, “Clinical trials are a murky old world.” [5] The institutions that are supposed to protect the integrity of the academic literature sometimes fall short of the lofty standards we expect of modern science.

Fortunately, the scientific community turned out to be a bigger place than I had given it credit for. There are people like Anna, who let me know that this was happening in the first place, and Ben Goldacre, who provided insight and support. My supervisor and my colleagues in the STREAM research group were incredibly supportive and invested in the outcome of this case. A number of bloggers (Retraction Watch [7,9], Neuroskeptic [10], Jordan Anaya [11]—if I missed one, let me know!) picked up this story and drew attention to it, and in the end, the paper was reviewed by Daniel Himmelstein [12], whose persistence and thoroughness convinced the journal to re-open the case and invite Dr Knottenbelt’s decisive review.

While it is true that the mistakes introduced into the methods are what finally brought about the paper’s retraction, those mistakes happened in the first place because the authors did not come up with the idea themselves. It is a fallacy to think that issues of scientific integrity can be considered in isolation from issues of scientific validity, and this case very clearly shows how that sort of thinking can lead to a wrong decision.

Of course, there are still major problems with academic publishing. But there are also intelligent and conscientious people who haven’t given up yet. And that is an encouraging thought.

References

1. Carlisle, B. G. Proof of prespecified endpoints in medical research with the bitcoin blockchain. The Grey Literature (2014).

2. F1000 Press release: Doctors use Bitcoin tech to improve transparency in clinical trial research. (2016). Available at: http://f1000.com/resources/160511_Blockchain_FINAL.pdf. (Accessed: 23rd June 2016)

3. In major shift, medical journal to publish protocols along with clinical trials. Retraction Watch (2016).

4. Irving, G. & Holden, J. How blockchain-timestamped protocols could improve the trustworthiness of medical science. F1000Research 5, 222 (2017).

5. Better with bitcoin. The Economist (2016). Available at: http://www.economist.com/news/science-and-technology/21699099-blockchain-technology-could-improve-reliability-medical-trials-better. (Accessed: 23rd June 2016)

6. Topol, E. J. Money back guarantees for non-reproducible results? BMJ 353, i2770 (2016).

7. Plagiarism concerns raised over popular blockchain paper on catching misconduct. Retraction Watch (2016).

8. What extent of plagiarism demands a retraction vs correction? Committee on Publication Ethics (COPE) (2016). Available at: http://publicationethics.org/case/what-extent-plagiarism-demands-retraction-vs-correction. (Accessed: 16th August 2016)

9. Authors retract much-debated blockchain paper from F1000. Retraction Watch (2017).

10. Neuroskeptic. Blogs, Papers, Plagiarism and Bitcoin. (2016).

11. Anaya, J. Medical students can’t help but plagiarize, apparently. Medium (2016). Available at: https://medium.com/@OmnesRes/medical-students-cant-help-but-plagiarize-apparently-f81074824c17. (Accessed: 21st July 2016)

12. Himmelstein, D. The most interesting case of scientific irreproducibility? Satoshi Village. Available at: http://blog.dhimmel.com/irreproducible-timestamps/. (Accessed: 8th March 2017)

BibTeX

@Manual{stream2017-1280,
    title = {Recapping the recent plagiarism scandal},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2017,
    month = jun,
    day = 2,
    url = {https://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/}
}

MLA

Benjamin Gregory Carlisle. "Recapping the recent plagiarism scandal" Web blog post. STREAM research. 02 Jun 2017. Web. 19 Apr 2024. <https://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/>

APA

Benjamin Gregory Carlisle. (2017, Jun 02). Recapping the recent plagiarism scandal [Web log post]. Retrieved from https://www.translationalethics.com/2017/06/02/recapping-the-recent-plagiarism-scandal/


Semantic natural language processing and philosophy of science

by Benjamin Gregory Carlisle

On 2015 February 18, James Overton visited the STREAM research group in Montreal, where he presented his research into what scientists are doing when they give an explanation for something. Many accounts of scientific explanation have been offered by philosophers of science over the years, but Overton’s offering differs in that he set out to establish his account of scientific explanation by actually examining the scientific literature. Specifically, he took a year’s worth of papers from the journal Science, converted them to unformatted text, and then parsed them using the Python Natural Language Toolkit.

Overton’s methods included an analysis of word frequencies and a random sampling of sentences that appeared to be making explanations, to see what sorts of data are used to justify what other sorts of claims. The most shocking result, at least for me, was that the word “law” was almost never used in the sample that Dr Overton described. That’s not to say that there is no discussion of natural laws at all, but given how much space the description of laws takes up in most accounts of scientific explanation, this seemed a very striking finding, to say the least.

This technique is very versatile and could be applied to a number of projects, from exploring the nature of scientific explanation, as Dr Overton has done, to a simpler project analysing the frequency of phrases like “sorafenib showed a modest effect” or “adverse events were manageable,” to see whether there is any relationship between the wording chosen and the result being described.
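For readers who want to try something similar, here is a minimal sketch of this kind of word- and phrase-frequency analysis using the Python Natural Language Toolkit. The input file name is my own placeholder, and the details (tokeniser choice, phrase list) are assumptions of mine, not a reconstruction of Overton’s actual pipeline:

import nltk
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist

nltk.download("punkt", quiet=True)  # tokeniser models used by word_tokenize

# Hypothetical input: a year's worth of articles, already converted to plain text
with open("science_articles.txt") as f:
    text = f.read().lower()

# Keep alphabetic tokens only, dropping numbers and punctuation
tokens = [t for t in word_tokenize(text) if t.isalpha()]
fdist = FreqDist(tokens)

print(fdist.most_common(25))                  # the most frequent words overall
print("'law' occurs", fdist["law"], "times")  # checking for talk of natural laws

# Counting hedged result phrases is simple substring counting over the raw text
for phrase in ("modest effect", "adverse events were manageable"):
    print(phrase, "->", text.count(phrase))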

BibTeX

@Manual{stream2015-716,
    title = {Semantic natural language processing and philosophy of science},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2015,
    month = mar,
    day = 5,
    url = {https://www.translationalethics.com/2015/03/05/semantic-natural-language-processing-and-philosophy-of-science/}
}

MLA

Benjamin Gregory Carlisle. "Semantic natural language processing and philosophy of science" Web blog post. STREAM research. 05 Mar 2015. Web. 19 Apr 2024. <https://www.translationalethics.com/2015/03/05/semantic-natural-language-processing-and-philosophy-of-science/>

APA

Benjamin Gregory Carlisle. (2015, Mar 05). Semantic natural language processing and philosophy of science [Web log post]. Retrieved from https://www.translationalethics.com/2015/03/05/semantic-natural-language-processing-and-philosophy-of-science/


Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials

by Benjamin Gregory Carlisle

Figure: Ratio of actual enrolment to expected enrolment versus number of trials, for trials that completed and for trials that terminated due to poor accrual in 2011

The moral acceptability of a clinical trial is rooted in the risk and benefit for patients, as well as the ability of the trial to produce generalisable and useful scientific knowledge. The ability of a clinical trial to justify its claims to producing new knowledge depends in part on its ability to recruit patients to participate—the fewer the patients, the less confident we can be in the knowledge produced. So when trials have recruitment problems, those trials also have ethical problems.

In a recently published issue of Clinical Trials, my colleagues and I investigate the prevalence of poor trial accrual, the impact of accrual problems on study validity and their ethical implications.

We used the National Library of Medicine clinical trial registry to capture all initiated phase 2 and 3 intervention clinical trials that were registered as closed in 2011. We then determined the number that had been terminated due to unsuccessful accrual and the number that had closed after less than 85% of the target number of human subjects had been enrolled.
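As a rough illustration of that classification step, the logic might look like the following sketch in Python. The file and column names here are hypothetical placeholders of mine, not the actual registry field names:

import pandas as pd

# Hypothetical export of phase 2/3 intervention trials registered as closed in 2011
trials = pd.read_csv("closed_trials_2011.csv")

# Trials whose recorded reason for stopping mentions accrual
terminated = trials["why_stopped"].str.contains("accrual", case=False, na=False)

# Trials that closed with less than 85% of their anticipated enrolment
under_enrolled = trials["actual_enrollment"] < 0.85 * trials["anticipated_enrollment"]

flagged = trials[terminated | under_enrolled]
print(f"{len(flagged)} of {len(trials)} trials flagged "
      f"({len(flagged) / len(trials):.0%})")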

Of 2579 eligible trials, 481 (19%) either terminated for failed accrual or completed with less than 85% of expected enrolment, seriously compromising their statistical power. A total of 48,027 patients had enrolled in trials closed in 2011 that were unable to answer their primary research question meaningfully.
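To see how quickly under-enrolment erodes power, here is a minimal sketch using statsmodels; the effect size and target enrolment are illustrative numbers of my own choosing, not figures from the paper:

from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
target_n = 175  # per arm; gives roughly 80% power for a standardised effect of 0.3

for fraction in (1.0, 0.85, 0.50, 0.30):
    p = power.power(effect_size=0.3, nobs1=int(target_n * fraction), alpha=0.05)
    print(f"enrolment at {fraction:.0%} of target: power = {p:.2f}")

Under these assumptions, power has already slipped noticeably at 85% of target enrolment, and at 30% the trial has little chance of detecting the effect it was designed to find.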

Not only that, but we found that many trials that should have been terminated were pursued to completion despite flagging rates of subject accrual: the proportion of trials that completed was much higher than the proportion that terminated, even at accrual levels as low as 30%. (See the figure above.)

The take-home message is that ethics bodies, investigators, and data monitoring committees should carefully scrutinize trial design, recruitment plans, and feasibility of achieving accrual targets when designing and reviewing trials, monitor accrual once initiated, and take corrective action when accrual is lagging.

BibTeX

@Manual{stream2014-615,
    title = {Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2014,
    month = nov,
    day = 6,
    url = {https://www.translationalethics.com/2014/11/06/trial-accrual-and-ethics/}
}

MLA

Benjamin Gregory Carlisle. "Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials" Web blog post. STREAM research. 06 Nov 2014. Web. 19 Apr 2024. <https://www.translationalethics.com/2014/11/06/trial-accrual-and-ethics/>

APA

Benjamin Gregory Carlisle. (2014, Nov 06). Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials [Web log post]. Retrieved from https://www.translationalethics.com/2014/11/06/trial-accrual-and-ethics/


When is it legitimate to stop a clinical trial early?

by Benjamin Gregory Carlisle

Figure: Stopping early?

Inspired by a paper that I’m working on with a few of my colleagues from the STREAM research group on the accrual of subjects in human research, I’ve been reading through a number of articles related to the question: when is it legitimate to stop a clinical trial that is already in progress?

Lavery et al. identify a problem in allocating research resources in human research in their debate, In Global Health Research, Is It Legitimate To Stop Clinical Trials Early on Account of Their Opportunity Costs? (2009) The development of next-generation drug products often outstrips the capacity for testing them, resulting in a queue, and possibly less-than-optimal use of our resources in developing new drugs. They suggest that there should be a mechanism for ending a clinical trial early on the basis of “opportunity costs.” That is, while trials are already terminated early on the basis of futility, efficacy or safety concerns, an ongoing trial might not be the best use of scarce healthcare resources, and there should be a way to divert resources to something more promising. Two options are proposed: a scientific oversight committee, or an expanded mandate for the DSMB (Data and Safety Monitoring Board). The procedure for making such decisions is based on Daniels’ “accountability for reasonableness.”

Buchanan responds, saying it is unethical and impractical to do so. He argues it is unethical because such a practice could not be justified in terms of the harm or benefit to the patients, and that it is difficult to see who would be harmed if ongoing trials were not stopped for opportunity costs. He argues that it is impractical because an accountability for reasonableness-based procedure would mire drug development in corporate lobbying, appeals, and deliberations on a virtually unlimited range of value considerations.

Lavery et al. rebut that the “interest [of advancing science] might be better served by adopting our proposal, rather than locking participants into a trial of a potentially inferior product,” and contrary to Buchanan’s claims, DSMB decisions are rarely certain. Buchanan concludes the paper by rejecting the position of Lavery et al. in no uncertain terms.

While this article does point out a big problem, the solutions proposed by Lavery et al. may present practical problems for a number of reasons, as Buchanan argues. The ethics of early termination is given short shrift in this work, and a more nuanced discussion is needed.

BibTeX

@Manual{stream2013-205,
    title = {When is it legitimate to stop a clinical trial early?},
    journal = {STREAM research},
    author = {Benjamin Gregory Carlisle},
    address = {Montreal, Canada},
    date = 2013,
    month = may,
    day = 24,
    url = {https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/}
}

MLA

Benjamin Gregory Carlisle. "When is it legitimate to stop a clinical trial early?" Web blog post. STREAM research. 24 May 2013. Web. 19 Apr 2024. <https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/>

APA

Benjamin Gregory Carlisle. (2013, May 24). When is it legitimate to stop a clinical trial early? [Web log post]. Retrieved from https://www.translationalethics.com/2013/05/24/when-is-it-legitimate-to-stop-a-clinical-trial-early/


All content © STREAM research

admin@translationalethics.com
Twitter: @stream_research
3647 rue Peel
Montreal QC H3A 1X1