Crowdsourced Review — DOA

The early death of peer-review.io, and lessons learned

In the wake of the Bill & Melinda Gates Foundation’s new policy to stop paying APCs (article processing charges) and instead focus on preprints with crowdsourced review, an astute reader sent along a few items from Daniel Bingham, who last month shut down peer-review.io, the system he built for crowdsourcing peer review of academic manuscripts.

Bingham wrote in a February 12th post:

Two years ago, I started a journey into academic publishing. I imagined using a reputation system to replace the journals with crowdsourcing. The reputation system would match reviewers to papers, incentivize good faith review, and identify bad actors. It wasn’t clear whether it would work in practice, but I wanted to find out.

I spent a year doing user research, building a beta (peer-review.io), and working to get people to give it a shot.

I am now convinced that it’s not going to work. . . .

Journals’ editorial teams are doing quite a bit of manual labor, some of which is very difficult to replace with technology. . . .

On the surface, most of [it] seems like work that a crowdsourcing system could potentially handle. I certainly thought the reputation system could handle a lot of it.

But I didn’t fully understand exactly what was happening with two items in particular.

Identifying and recruiting reviewers for papers . . . and convincing them to do the review.

This is a huge piece of the labor of editorial teams.

Journal editors are doing an enormous amount of moderation. And not just your standard internet discussion moderation. They’re doing a lot of a very specific kind of ego management.

Crowdsourced systems generally work as long as the average actor is a good actor.

A good actor in the context of academic publishing is someone who is willing to put aside their own ego in the pursuit of the best work possible. This is someone capable of recognizing when they’re wrong and letting their own ideas go. A good actor would see a paper that invalidated their work and be able to assess it purely on the merits.

It is unclear whether the average academic is a good actor in this sense. And it’s editors who keep that from tearing the whole system apart.

Bingham has two other posts related to this: one exploring why preprint review rates are stalling (all for solid reasons), and another, more speculative, about helping editors escape commercial publishers. Throughout, his insights are solid and reality-based.

Stephen Heard, writing about the end of peer-review.io, notes that journals actually represent volunteer peer-review systems that work, and for obvious and meaningful reasons:

People respond to people, not to abstract “opportunities,” and volunteer/service energy isn’t infinite. A lot of the systems we have – including journals – are designed to work from that fact.

Preprints represent just 6.4% of papers, and their growth is stalling or falling, while preprint review systems are being shut down or rethought.

The Gates Foundation’s new policy is bad because it makes no sense and has no chance of making scientific communication better.

Yet it has every chance of doing the opposite, and that’s the worry.