Cooperative Peer Review Results in Fewer Errors, Research Finds

  @IBTScience on November 9, 2011, 5:14 PM

Peer review, one of the most crucial steps in getting scientific research funded and published, is a process typically steeped in secrecy: grant and manuscript authors aren't supposed to know who refereed their work.

One group of biostatisticians argued Wednesday that changing anonymous peer review into a cooperative process between authors and referees generates fewer errors.

Biostatisticians at Johns Hopkins Bloomberg School of Public Health published the report Wednesday in the online journal PLoS ONE.

The lab study, led by Johns Hopkins biostatistician Jeffrey Leek, started with a theoretical model that was then expanded into a game resembling peer review.

The peer review game included around 10 scientists, all members of the same lab, who answered multiple-choice questions about a manuscript. Players could either fix problems they saw in the manuscript or review solutions their peers suggested. The players with the largest number of accepted manuscripts won cash rewards. Players played under two conditions: anonymous review or open, transparent review.

The questionnaires included problems that could be solved by lab members, such as "What is (4/10 + 0.005)/2?"
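For reference, the sample question works out to 0.2025. A minimal sketch checking it with exact rational arithmetic (the study's actual question format is not reproduced here):

```python
from fractions import Fraction

# Compute (4/10 + 0.005)/2 exactly, avoiding binary
# floating-point rounding by using rational numbers.
answer = (Fraction(4, 10) + Fraction(5, 1000)) / 2

print(answer)         # 81/400
print(float(answer))  # 0.2025
```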

Not only were open reviewers more cooperative (22 percent cooperative participants in open review versus 9 percent in closed review), but reviewing accuracy also increased 11 percent when submitter and reviewer acted cooperatively.

"Our results suggest that increasing cooperation in the peer review process could reduce the risk of reviewing errors," Jeffrey Leek, biostatistician and lead author of the study, said in a statement.

The Johns Hopkins group isn't the only team to examine the anonymous peer review system critically. "The current system of peer review lends itself to a situation whereby reviewers may bias their appraisal if the manuscript contents contradict either their own or mainstream thinking," John Phillips, a surgeon at the Norwich University Hospital, wrote in an editorial published in Current Medical Research & Opinion in October.

A team of researchers combed reviewers' recommendations for manuscripts for the Journal of General Internal Medicine between 2004 and 2008 and found that the journal editors agreed on recommendations to reject vs. accept/revise at levels barely beyond chance.

Interestingly, two of the authors are editors-in-chief at the Journal of General Internal Medicine, as noted in the 2010 study published in PLoS ONE.

The authors suggest three ways to improve peer review: more reviewers, better guidance for reviewers, and skipping recommendations altogether, instead relying on a system in which reviewers don't decide but discuss the strengths and weaknesses of a manuscript.

Peer review not only includes authors and referees, but also journal editors. An Australian team surveyed journal editors and found the group viewed social and subjective influences as beneficial additions to a reviewer's authority and expertise.

The researchers published the study in the journal Social Science & Medicine in April.

The authors concluded that the social and subjective dimensions of biomedical manuscript review should be made more explicit, accommodated and even encouraged, not only because these dimensions of human relationships and judgments are unavoidable, but because their explicit presence is likely to enrich, rather than threaten, the manuscript review process.
