Theoretical computer scientists have been talking about double-blind peer review, and it’s an interesting discussion. The current incarnation of this discussion started when Rasmus Pagh and Suresh Venkatasubramanian used a double-blind refereeing process for submissions to the ALENEX18 conference they co-chaired. Venkatasubramanian posted about their motivations and how they pulled it off in a series of posts on his blog, The Geomblog (post 1, post 2, post 3).
Why double-blind? First, it’s the standard for computer science conferences outside of the theory subdiscipline. More importantly, many people worry that single-blind peer review, where the reviewer knows the identity of the author, leads to some objectionable outcomes based on implicit and explicit biases. More famous authors or authors from more prominent institutions may have their work reviewed more favorably, and more broadly, the bias in favor of these authors combined with other biases reviewers have can continue systemic bias against women and other groups that are underrepresented in the field.
Obviously, a major change in the paper submission system is not without controversy. The discussion has continued in posts by Boaz Barak, Michael Mitzenmacher, Omer Reingold, and Lance Fortnow. In general, the conversation I have seen has been civil and thoughtful. In one post, Venkatasubramanian writes,
“First up, I think it’s gratifying to see that the basic premise: ‘single blind review has the potential for bias, especially with respect to institutional status, gender and other signifiers of in/out groups’ is granted at this point. There was a time in the not-so-distant past that I wouldn’t be able to even establish this baseline in conversations that I’d have.
“The argument therefore has moved to one of tradeoffs: does the installation of DB review introduce other kinds of harm while mitigating harms due to bias?
A few math journals, mostly in math education and undergraduate research, as far as I can tell, do use double-blind peer review. But it is not standard. One of the biggest barriers to double-blind reviewing in computer science, physics, or math is the fact that so many preprints are posted on arXiv or authors’ websites before they are submitted, making it that much more difficult for a reviewer to avoid knowing who wrote the paper. (Venkatasubramanian writes about how they dealt with that problem in his posts; one point he makes is that double-blinding the process won’t necessarily prevent reviewers from eventually determining who the authors are, but it could prevent some knee-jerk reactions. He also points to a post by Regina Barzilay that delves into the issue in more depth.) In some fairly narrow subdisciplines, there are few enough researchers that even without seeing the paper online, others in the field will be able to tell who wrote it anyway.
As long as societies and the individuals in them have biases, there will be no way to completely eliminate those biases when people (or algorithms) make decisions about paper and conference submissions. It is important for academics to weigh the advantages and disadvantages of different strategies for mitigating the effects of bias. I am looking forward to seeing how this conversation evolves.