Gavia Libraria

Why open peer review does not fix the bad-journal problem

When the Loon mused in a preliminary fashion about how to police journal quality (on all sides of the business-model aisle) via stings, several commenters and Twitterers declared (in the Loon’s paraphrase) that open review solved the problem and we could all go home.

For purposes of this post, the Loon will pretend that open review has magically become universal. That it is not, and will not be for a good long time, is of course a problem for open review as a universal quality measure, but to some extent it is also a distraction. If universal open review truly solved the quality-measure problem, then moving toward open review truly would be the end of the story.

The Loon declares in return, however, that even universal open review does not solve all problems… and she adds that more clarity about what “the problems” are would be useful at this juncture.

As framed by such luminaries as Cameron Neylon, the major problem solved by open review is that of the researcher looking for needles in haystacks. Good research will be called out as such, bad research ignored, end of problem—the more so because both good and bad research will be treated correctly regardless of locus of publication.

(Other problems open review putatively solves include behind-the-scenes collusion to suppress innovative or challenging research. The Loon is doubtful, the more so when she considers how much unconscious bias it will re-introduce over blind review. If you believe that open review will boost the fortunes of female researchers, researchers of color, and independent researchers, you have a great deal more faith in reviewers—in society generally, for that matter—than does the Loon. Rant ranted, let us move on.)

Why does this not suffice? Because it does not offer the quickly applicable sort-by-quality criteria that many participants in academe and users of its products need so badly that, where reasonably reliable criteria are scarce (as they are now, of course), they fall back on manifestly unreliable and unfit-for-purpose ones. These participants and users include:

  • Hiring, tenure, and promotion committees in academe
  • Grant reviewers and decisionmakers
  • Policymakers at many levels of government
  • Teachers of every sort of student
  • Students of every sort of teacher
  • Science journalists
  • Ordinary journalists trying to cover research
  • Librarians, public librarians as well as academic, guiding patrons and students to reliable information
  • Bloggers
  • Layfolk trying to understand the world while avoiding pseudoscience, marketing disguised as science, political or religious know-nothingism disguised as science, and the many other ills disguised as science that afflict public discourse presently.

Many of these are not subject experts. None of them can spend hours answering the question “is this trustworthy?” one article at a time. Given that faking open review well enough to fool the lay gaze is trivial (Yelp and Amazon may serve as exemplars here), the simplest assessment algorithm for a single article that anyone has offered runs something like this (a rough code sketch follows the list):

  1. Does the article have no reviews? It must be bad or unimportant, then. (Just this stops the Loon dead in her webfooted tracks, knowing how little incentive there is to review openly or otherwise, but she sees no alternative.)
  2. Otherwise, for each review, check the review author’s credentials, and verify (somehow) that the putative author did in fact write the review.
  3. Assess the trustworthiness of the reviews based on what is known about their authors. (Remember, we cannot assume that our assessor understands the field or research methods well enough to evaluate either the article or the review directly.)
  4. Based on that, form an educated (?) guess about the article’s value and reliability.
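
To see how much this “simple” algorithm actually demands, here is a minimal sketch in Python of what the four steps would amount to if anyone troubled to automate them. Every name in it (Review, quick_assess, the credibility scores) is hypothetical, since no real review platform exposes such data; it is an illustration under those assumptions, not a description of any existing system.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Review:
        author: str                # claimed reviewer name
        author_verified: bool      # step 2: did the putative author really write it?
        author_credibility: float  # step 3: 0.0 (unknown) to 1.0 (trusted expert)
        favourable: bool           # does the review endorse the article?

    def quick_assess(reviews: List[Review]) -> Optional[float]:
        """Rough trust score for one article, or None if it has no reviews."""
        # Step 1: no reviews at all, so presumed bad or unimportant.
        if not reviews:
            return None
        # Step 2: discard reviews whose authorship cannot be verified.
        verified = [r for r in reviews if r.author_verified]
        if not verified:
            return 0.0
        # Steps 3 and 4: weight each verdict by what is known about its author,
        # since the assessor cannot judge the article or the review directly.
        weighted = sum((1.0 if r.favourable else -1.0) * r.author_credibility
                       for r in verified)
        return weighted / len(verified)

Even this toy version presumes verified identities and agreed-upon credibility scores for every reviewer, neither of which the assessors listed above can supply.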

Really? This is an acceptable quick-evaluation solution? Really? People limited to 24 hours in a day, people who are not stunningly obsessed, will actually do this? In what unfathomable ocean? Certainly not in the pond where the Loon swims. Apply this supposed method to a body of work rather than a single piece, and the time and accuracy problems only multiply.

Open review also does not help:

  • protect new entrants into academe from being gulled into placing their hard work with unworthy outlets
  • protect researchers in nations whose governments do not understand research from being forced by cargo-cult productivity requirements to throw money at bad journals
  • improve the image of research and researchers in broader society; bad journals reflect badly on academe’s oft-claimed self-policing
  • reduce the criminally wasteful money flow into the pockets of the bad actors who create bad journals

These problems wind up casting more aspersions on open-access publishers than on paywall publishers, who can and do conceal their many sins behind Big Deals and extortionate per-article prices. This being the case, the open-access movement (as many respondents to the Bohannon article have already pointed out) has a special interest in reducing the proliferation of bad actors and their bad journals.

If neither the apparent fact of peer review nor the apparent fact of open review (“apparent” because as discussed, claiming certain review processes is not the same as doing them, or doing them responsibly) suffices as a quick quality measure, we have two non-exclusive choices: find another such measure, or winnow the pool of journals to exclude as many bad actors as possible.

The Loon believes both choices have merit—she is certainly interested in alternative metrics as quick-judgment facilitators—but neither suffices in the absence of the other. She will therefore continue to ponder viable means of eliminating bad actors, and encourages her readers to do the same.

5 thoughts on “Why open peer review does not fix the bad-journal problem”

  1. Mike Taylor

    Wait — what do we mean by “open review”? It seems from this article that you may simply mean non-anonymous review. I’m talking about the reviews themselves being freely available to read, so you can see whether they’ve done a proper job. Are we on the same page?

    1. RMS

      Correct, Mike. The Loon is not talking about the kind of “open” peer review that F1000Research does (note: I have no affiliation with it), which is still closed in terms of who reviews the papers (i.e. the editors choose the reviewers, supposedly good ones), and open only in terms of posting the reviewers’ comments. This type of system lets us readers see whether a given paper has been properly reviewed and revised, giving us more confidence in the final product, or raising flags for the ones that squeaked through.

      Loon, remember what “peer” means. Letting the public review papers does not make it “peer review”. To qualify as peer review, the peers must review the paper, meaning in most cases professors, post-docs, qualified PhD students, and professional researchers.

  2. Chris Rusbridge

    Somewhere in the tech world I’ve seen a comment/review system where the reviewers get rated, possibly as a result of the extent to which their reviews support the consensus. I was wondering if an adaptation could apply here… I suspect not, as it is probably limited to quantitative/numerical assessments, and would also be susceptible to Twitter-like echo-chamber effects.
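
    To make the idea concrete, a scheme of that sort might look roughly like the sketch below (all names invented here, no particular system intended), which also shows why it is limited to numerical scores and why it tends to reward agreement with the crowd:

        from collections import defaultdict
        from statistics import mean
        from typing import Dict, List, Tuple

        # Each entry: (reviewer, paper, numerical score, e.g. on a 1-5 scale).
        Scores = List[Tuple[str, str, float]]

        def rate_reviewers(scores: Scores) -> Dict[str, float]:
            """Reputation per reviewer; higher means closer to each paper's consensus."""
            by_paper: Dict[str, List[float]] = defaultdict(list)
            for _, paper, score in scores:
                by_paper[paper].append(score)
            consensus = {paper: mean(vals) for paper, vals in by_paper.items()}

            deviations: Dict[str, List[float]] = defaultdict(list)
            for reviewer, paper, score in scores:
                deviations[reviewer].append(abs(score - consensus[paper]))

            # Agreement with the consensus is what gets rewarded, hence the
            # echo-chamber worry.
            return {rev: 1.0 / (1.0 + mean(devs))
                    for rev, devs in deviations.items()}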

    BTW, from Wikipedia (ref 42 of the Peer Review article) I found this 1998 Baxt article, “Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance”. I won’t include the DOI because it would likely hit your spam filter!

    1. Library Loon Post author

      The Loon does clean out her spam filter, though admittedly sometimes with delays. Please feel free to post links.

  3. Carl

    Interesting perspective, but I think it would help if you clarified a few points further. You seem to conflate the problem of identifying what literature is most important with simply identifying what literature actually received peer review, and you seem to conflate the notions of anonymous and non-anonymous open peer review.

    Bohannon’s “sting” highlighted the fact that it was difficult for both indices like the DOAJ and publishers like Elsevier to identify which journals were performing peer review and which were reviewing only formatting or nothing at all. Surely including reviews solves that problem for these parties?

    You seem primarily concerned that publishing reviews does not provide an easy filter for the myriads who need an easy filter. This seems like a red herring. Of course publishing reviews doesn’t provide a filter. Neither does publishing supplemental materials. But that’s not why folks recommend it.

    Publish the reviews (anonymously perhaps, for the concerns you highlight), and it becomes much harder to become a DOAJ-listed journal while not practicing peer review, right?