The Scientist asks: Is peer review broken?
Peter Lawrence, a developmental biologist who is also an editor at the journal Development and former editorial board member at Cell, has been publishing papers in academic journals for 40 years. His first 70 or so papers were "never rejected," he says, but that's all changed. Now, he has significantly more trouble getting articles into the first journal he submits them to.
"The rising [rejections] means an increase in angry authors."
-Drummond Rennie
Lawrence, based at the MRC Laboratory of Molecular Biology at Cambridge, UK, says his earlier papers were always published because he and his colleagues first submitted them to the journals they believed were most appropriate for the work. Now, because of the intense pressure to get into a handful of top journals, instead of sending less-than-groundbreaking work to second- or third-tier journals, more scientists are first sending their work to elite publications, where they often clearly don't belong.
Consequently, across the board, editors at top-tier journals say they are receiving more submissions every year, leading in many cases to more rejections, appeals, and complaints about the system overall. "We reject approximately 6,000 papers per year" before peer review, and submissions are steadily increasing, says Donald Kennedy, editor-in-chief of Science. "There's a lot of potential for complaints."
Everyone, it seems, has a problem with peer review at top-tier journals. The recent discrediting of stem cell work by Woo-Suk Hwang at Seoul National University sparked media debates about the system's failure to detect fraud. Authors, meanwhile, are lodging a range of complaints: Reviewers sabotage papers that compete with their own, strong papers are sent to sister journals to boost those journals' profiles, and editors at commercial journals are too young and inexperienced, making mistakes about which papers to reject or accept (see Truth or Myth?). Still, even senior scientists are reluctant to give specific examples of being shortchanged by peer review, worrying that doing so could jeopardize their future publications.
So, do those complaints stem from valid concerns, or from the minds of disgruntled scientists who know they need to publish in Science or Nature to advance in their careers? "The rising [rejections] means an increase in angry authors," says Drummond Rennie, deputy editor at Journal of the American Medical Association (JAMA). The timing is right to take a good hard look at peer review, which, says Rennie, is "expensive, difficult, and blamed for everything."
What's wrong with the current system? What could make it better? Does it even work at all?
TOO MANY SUBMISSIONS
Editors at high-impact journals are reporting that the number of submissions is increasing every year (see "Facts and Figures" below). Researchers, it seems, want to get their data into a limited number of pages, sometimes taking extra measures to boost their chances. Lately, academia seems to place a higher value on the quality of the journals that accept researchers' data than on the quality of the data itself. In many countries, scientists are judged by how many papers they have published in top-tier journals; the more publications they rack up, the more funding they receive.
Consequently, Lawrence believes more authors are resorting to desperate measures to get their results accepted by top journals. An increasing number of scientists are spending more time networking with editors, given that "it's quite hard to reject a paper by a friend of yours," says Lawrence. Overworked editors need something flashy to catch their attention, so many authors are exaggerating their results, stuffing reports with findings, or stretching implications to human diseases, since those papers often rack up extra references. "I think that's happening more and more," Lawrence says. Indeed, in a paper presented at the 2005 International Congress on Peer Review and Biomedical Publication, a prospective review of 1,107 manuscripts submitted to the Annals of Internal Medicine, British Medical Journal (BMJ), and The Lancet in 2003 showed that many of the major changes demanded by peer reviewers involved toning down manuscripts' conclusions and highlighting their limitations. The study suggests that hyping findings compounds the problem, adding to the workload of already overburdened reviewers.
Indeed, sorting through hype can make a reviewer's job at a top journal even more difficult than it already is. Asking reviewers to judge whether a paper belongs in the top one percent of submissions from a particular field is an impossible task, says Hemai Parthasarathy, managing editor at Public Library of Science (PLoS) Biology. Consequently, editors and reviewers sometimes make mistakes, she notes, perhaps publishing something that really belongs in the top 10%, or passing on a genuinely strong paper. To an outsider, this pattern can look like "noise": some relatively weak papers are accepted while stronger ones aren't, prompting rejected authors to complain. But it's an inevitable result of the system, she notes.
THE RELIGION OF PEER REVIEW
Despite a lack of evidence that peer review works, most scientists (by nature a skeptical lot) appear to believe in peer review. It's something that's held "absolutely sacred" in a field where people rarely accept anything with "blind faith," says Richard Smith, former editor of the BMJ and now CEO of UnitedHealth Europe and board member of PLoS. "It's very unscientific, really."
Indeed, an abundance of data from a range of journals suggests peer review does little to improve papers. In one 1998 experiment designed to test what peer review uncovers, researchers intentionally introduced eight errors into a research paper. More than 200 reviewers identified an average of only two errors. That same year, a paper in the Annals of Emergency Medicine showed that reviewers couldn't spot two-thirds of the major errors in a fake manuscript. In July 2005, an article in JAMA showed that among recent clinical research articles published in major journals, 16% of the reports showing an intervention was effective were contradicted by later findings, suggesting reviewers may have missed major flaws.
Some critics argue that peer review is inherently biased, because reviewers favor studies with statistically significant results. Research also suggests that statistical results published in many top journals aren't even correct, again highlighting what reviewers often miss. "There's a lot of evidence to (peer review's) downside," says Smith. "Even the very best journals have published rubbish they wish they'd never published at all. Peer review doesn't stop that." Moreover, peer review can also err in the other direction, passing on promising work: Some of the most highly cited papers were rejected by the first journals to see them.
The literature is also full of reports highlighting reviewers' potential limitations and biases. An abstract presented at the 2005 Peer Review Congress, held in Chicago in September, suggested that reviewers were less likely to reject a paper if it cited their work, although the trend was not statistically significant. Another paper at the same meeting showed that many journals lack policies on reviewer conflicts of interest; fewer than half of the 91 biomedical journals surveyed said they had such a policy at all, and only three percent said they publish conflict disclosures from peer reviewers. Still another study found that only 37% of reviewers agreed on which manuscripts should be published. Peer review is a "lottery to some extent," says Smith.
Facts and Figures
Statistics are from editors at Journal of the American Medical Association (JAMA), Public Library of Science (PLoS) Biology, Science, Nature, and the New England Journal of Medicine (NEJM). The Scientist also contacted editors at Cell, The Lancet, and the Proceedings of the National Academy of Sciences; all declined to comment.
[table]