Reviewing the Reviewers: a Study of Author Perception on Peer Reviews in Computer Science

  • Authors: Conny Kühne, Klemens Böhm, and Jing Zhi Yue
  • Source: Proceedings of the 6th International Conference on Collaborative Computing (CollaborateCom), Chicago, USA, 2010.
  • Date: October 9-12, 2010
  • Abstract: Peer reviewing is an important form of collaborative work that is used for quality assurance in science and in other domains like software development and knowledge management. Review ratings by authors have the potential to improve the quality of peer reviews by enabling the remuneration of good reviews. A significant problem, however, is that authors' perception is hardly neutral, but might be affected by the reviews. To gain insight into their perception of peer reviews, we have conducted a survey among the authors of papers submitted to a peer-reviewed computer science conference. One of our findings is that authors are satisfied with reviews whose comments they deem helpful, and when they feel that the reviewer has made an effort to understand the paper. Surprisingly, these results hold when controlling for the score given by the reviewer. Based on the study results, we discuss the suitability of author ratings for identifying high-quality reviews. We describe a remuneration function for reviews based on author ratings that aims to neutralize the effects of review scores.

    Download pdf