Simine Vazire is Professor in the Ethics and Wellbeing Hub at University of Melbourne’s School of Psychological Sciences, Co-director of the MetaMelb Lab, and Editor-in-Chief of UC Press’s open-access journal Collabra: Psychology.
UC Press: Happy Peer Review Week!
SV: Happy Peer Review Week to you!
UC Press: Collabra differentiates itself in many ways from other psychology journals, including with how it handles peer review. How do you approach peer review, and why is Collabra’s approach to peer review important?
SV: In psychology we’ve seen a significant push towards more transparency in our research practices. Authors are increasingly expected to share their materials, data, and code (and at Collabra we require this kind of transparency whenever ethically possible). So far, though, this expectation has been one-way: authors are expected to be transparent, but we don’t ask journals to be transparent about their peer review process. Of course there’s a case to be made for some kinds of privacy (e.g., if reviewers wish to remain anonymous, I think they should have that option). But when the entire peer review process is opaque to readers, journals are not accountable. Journals are essentially asking readers to trust that their peer review process is fair, thorough, and accurate. With modern technology, there’s no reason we can’t take steps to be more transparent and accountable about how we do peer review.

At Collabra, we’ve taken several steps in this direction. First, we publish the peer review history (reviews and editors’ decision letters) for all accepted manuscripts. This lets readers see what issues came up during peer review, and whether the process was thorough. Second, we give authors complete control over their reviews and decision letters in the case of rejected manuscripts: if authors want to post their Collabra reviews or decision letters publicly, they’re free to do so. This makes it possible for people to share negative (or positive!) experiences, and for our community to hold us accountable if there is a pattern of bad decisions (e.g., bias, errors, or sloppiness). Third, we list the name of the handling editor on every published article. This provides some accountability for our editors’ decisions, and transparency in the case of potential conflicts of interest. (We don’t require reviewers to share their identity; they can choose whether to sign their reviews or not.)
Another distinctive aspect of peer review at Collabra is that we focus exclusively on the rigor of the research presented – we don’t evaluate manuscripts based on how novel they are, or how much impact we think they’ll have (we do reserve the right to take into account the importance of the research question, but we only do this in extreme cases). That means we focus on things like: Is the research reported transparently enough that readers can critically evaluate the work? Is the study designed in such a way that it provides a rigorous test of the research question? Are the statistical analyses appropriate? Are the conclusions calibrated to the research design and evidence presented? We have high standards in these areas – submissions that would meet the methodological and reporting standards at other journals sometimes get rejected at Collabra. However, we are committed to accepting papers that meet our high standards for transparency and rigor, regardless of how exciting or groundbreaking the results are (indeed, we are more likely to reject papers that make exaggerated claims about the implications of the findings).
UC Press: How have authors and reviewers responded to Collabra’s transparent peer review?
SV: To be honest, I don’t really know! As editor, I only rarely get direct feedback from authors and reviewers, so all I have to go by is the submission rate and how often people accept review requests. On both of those fronts, we’re in good shape. I suspect most Collabra authors and reviewers have neutral-to-slightly-positive feelings about our transparent peer review policies. But really, although I hope transparent review helps reviewers who want credit for their work to receive that credit (if they sign their reviews and the paper is published, readers can see the reviewers’ contributions), the policy is more for Collabra readers than anyone else. I would love to hear from readers what they make of the peer review histories available for all papers. I have a fantasy that some enterprising metascientists will use them as material for a study – it would be great to know how we’re doing.
UC Press: What do you think might be in store for the future of peer review?
SV: My impression (and experience) is that many editors and reviewers treat peer review like a ritual – get the required number of reviewers to agree, aggregate their reviews, make a decision, move on. Little thought is put into whether the peer review was thorough, fair, focused on the right things, etc. It’s not enough to just go through the motions; we need to put more thought into what peer review is for, and whether it’s doing its job. That means thinking about what reviewers should be evaluating, who can provide fair and accurate reviews on which dimensions, how we can incentivize higher quality reviews and editorial evaluations, etc.
As we make the peer review process more and more transparent, the inadequacies of peer review are going to become more and more apparent. I think many of us have been coasting on the uncritically positive reputation that peer review has among both scientists and the public – as if peer review is an almost magical process. Showing how the sausage is made will hopefully put some pressure on peer review to come closer to living up to its reputation. But that’s going to be hard without some pretty dramatic changes. Really good, thorough peer reviews are rare and extremely valuable. We need to figure out how to reward them and train and incentivize more people to provide this incredible service. Good, critical reviewers are what make science credible and trusted by the public.
Happily, there are lots of exciting ideas going around about how we can reform peer review to live up to its responsibilities. These include: paying reviewers for their work; crowdsourcing peer review, e.g., using platforms like PREreview.org or the repliCATS platform (full disclosure: I am part of the repliCATS team); modular reviews, where reviewers evaluate just one aspect, e.g., computational reproducibility; updatable reviews (there’s no reason for peer review to stop when the paper is “accepted” by an editorial team); peer review overlaid on preprints so that it happens completely live and in public (though it would be nice to preserve a mechanism for reviewers to use pseudonyms: PREreview.org allows this by having reviewers create accounts linked to their ORCID, so they are known to the moderators and accountable, while giving them the option to use a pseudonym for public reviews); recognizing exceptional reviews (e.g., by up-voting); and more.
UC Press: Thank you for helping innovate Collabra’s editorial policies, and a big thank you to all of our peer reviewers out there!
SV: Yes, we are lucky to have more than our fair share of those truly exceptional reviewers (and editors!) volunteering their time for Collabra, and we are really grateful to them! Reviewers giving their time to a journal is a vote of confidence in the journal. We hope we’ll keep earning those votes, and we’re grateful for the incredible work reviewers do to help make Collabra as good as it can be!
Collabra: Psychology—the official journal of the Society for the Improvement of Psychological Science—is a mission-driven, open-access journal from University of California Press that shares not only the research it publishes, but also the value created by the psychology community during the peer-review process. Collabra: Psychology has seven sections representing the broad field of psychology, and a highlighted focus area of “Methodology and Research Practice” (which currently seeks a section editor).