The MMR vaccine is linked to autism.
Embryonic stem cell lines can be established from cloned human embryos. These are just two claims made in articles—published in prestigious medical journals—that were retracted from the scientific literature, but not before they had a substantial impact on the scientific community and beyond. Consider that the 1998 Lancet article linking the MMR vaccine to autism led to a worldwide campaign against vaccination. The rate of MMR vaccination in the United States alone dropped from 93% to 79%, and the number of mumps cases increased 21-fold from 2005 to 2006.
Not all retractions have such a widespread effect. Still, many people accept scientific data or interpretations as valid before fraud is recognized. In listing the top 10 retractions of 2010, The Scientist noted that four articles were cited 200-300 times and the MMR vaccine-autism article was cited 640 times before it was retracted by The Lancet in 2010. Even more astonishing is the finding by Redman et al. that 325 retracted articles were cited 3,942 times before retraction and 4,501 times after retraction!
An exploration of some "striking trends" in retractions is the lead feature article in the March issue of the AMWA Journal. The author of the article, R. Grant Steen, found that 788 scientific articles were retracted in the past decade. Steen has been busy pursuing a better understanding of retractions: his AMWA Journal article is one of four he has published since November 2010. In the first of these articles, he reported that more than half of fraudulent articles were written by a first author who had written other retracted articles; in the second, he noted an increase in the level of retractions since 2000 (which may, he says, reflect either a real increase in misconduct or greater efforts to police the literature); and in the third, he noted that error is a more common reason for retraction than fraud.
Steen has found that most retractions (nearly three-quarters) were for errors (mistakes, duplicate publication, plagiarism, etc.) and that the remaining quarter or so were considered fraud—either data fabrication (15%) or falsification (13%). When Elizabeth Wager and Peter Williams, members of the UK-based Committee on Publication Ethics (COPE), analyzed 312 of 529 retractions in PubMed (from 1988 to 2008), they found data fabrication and falsification to be the least common reasons for retraction (5% and 4%, respectively), with "honest research errors" the most common (28%). Wager and Williams reported these findings at the 2009 Peer Review Congress.
It is difficult to know the actual reasons for retraction because most journals do not indicate a reason, according to a news article in the March issue of the Canadian Medical Association Journal. This is despite guidance from the International Committee of Medical Journal Editors (ICMJE) stating, "The text of the retraction should explain why the article is being retracted and include a complete citation reference to that article."
Regardless of the reasons for retractions, the big question is how to identify articles that should be retracted. Some point to the need for improved peer review, but most disagree with this as a solution. In an examination of the peer review process in a 1995 AMWA Journal article, Kendall Wills Sterling wrote that although peer review needs improvement, the process is unable to detect falsified data (or plagiarism). Similarly, in his 2006 Swanberg Address, Dale Hammerschmidt, MD, FACP, said, "…peer review was not introduced with the idea of detecting and preventing fraud, and peer reviewers are not now given that task by journals."
Whose job is it, then? The ICMJE guidance notes that if there is "substantial doubt" about the honesty or integrity of a manuscript (either submitted or published), it is the editor's responsibility "to ensure that the question is appropriately pursued, usually by the authors' sponsoring institution." Journal editors can find additional resources on dealing with suspected misconduct from the US Office of Research Integrity, which offers "Managing Allegations of Scientific Misconduct," and COPE, which has developed 17 flowcharts for handling suspected cases. In his AMWA Journal article, Steen notes that all stakeholders—first authors, coauthors, editors, referees, and peers—must share responsibility for maintaining scientific integrity.
The good news in all of this (I always look for that silver lining) is that medical writers seem to be far from the madding crowd of retractions. In her Keynote Address at the 2009 AMWA Annual Conference, Karen Wooley, PhD, noted that her research (also presented at the 2009 Peer Review Congress) showed that very few retracted articles had declared both medical writing and industry support, or medical writing support alone. Although the significance of this finding is uncertain because of the overall low rate of declared medical writing support, it is interesting that almost all of the retractions were articles with no declared industry funding—articles that have been criticized far less than those with industry support.
You can keep up with retractions by following Retraction Watch, a blog written by Adam Marcus (Managing Editor of Anesthesiology News) and Ivan Oransky, MD (Executive Editor of Reuters Health). Dr. Oransky, along with Liz Wager and editors of prominent medical journals, will speak at "What Can Editors Do to Deter and Detect Scientific Misconduct?" a session at the 2011 annual meeting of the Council of Science Editors (CSE). If you're attending the CSE meeting, your report on this session would be a valuable asset to the AMWA Journal. Contact the Journal Editor at email@example.com for more information.