When the Loon mused in a preliminary fashion about how to police journal quality (on all sides of the business-model aisle) via stings, several commenters and Twitterers declared (in the Loon’s paraphrase) that open review solved the problem and we could all go home.
For purposes of this post, the Loon will pretend that open review has magically become universal. That it is not, and will not be for a good long time, is of course a problem for open review as a universal quality measure, but to some extent it is also a distraction: if universal open review truly solved the quality-measure problem, then moving toward open review would indeed be the end of the story.
The Loon declares in return, however, that even universal open review does not solve all problems… and she adds that more clarity about what “the problems” are would be useful at this juncture.
As framed by such luminaries as Cameron Neylon, the major problem solved by open review is that of the researcher looking for needles in haystacks. Good research will be called out as such, bad research ignored, end of problem—the more so because both good and bad research will be treated correctly regardless of locus of publication.
(Other problems open review putatively solves include behind-the-scenes collusion to suppress innovative or challenging research. The Loon is doubtful, the more so when she considers how much unconscious bias it will re-introduce over blind review. If you believe that open review will boost the fortunes of female researchers, researchers of color, and independent researchers, you have a great deal more faith in reviewers—in society generally, for that matter—than does the Loon. Rant ranted, let us move on.)
Why does this not suffice? Because it does not offer the quickly-applicable sort-by-quality criteria that many participants in academe and users of its products need so badly that, where reasonably reliable criteria are scarce (as now, of course), they fall back on manifestly unreliable and unfit-for-purpose ones. These participants and users include:
- Hiring, tenure, and promotion committees in academe
- Grant reviewers and decisionmakers
- Policymakers at many levels of government
- Teachers of every sort of student
- Students of every sort of teacher
- Science journalists
- Ordinary journalists trying to cover research
- Librarians, public librarians as well as academic, guiding patrons and students to reliable information
- Layfolk trying to understand the world while avoiding pseudoscience, marketing disguised as science, political or religious know-nothingism disguised as science, and the many other ills disguised as science that afflict public discourse presently.
Many of these are not subject experts. None of them can spend hours answering the question “is this trustworthy?” one article at a time. Given that faking open review well enough to fool the lay gaze is trivial (Yelp and Amazon may serve as exemplars here), the simplest assessment algorithm for a single article that anyone has offered runs something like this:
- Does the article have no reviews? It must be bad or unimportant, then. (Just this stops the Loon dead in her webfooted tracks, knowing how little incentive there is to review openly or otherwise, but she sees no alternative.)
- Otherwise, for each review, check the review author’s credentials, and verify (somehow) that the putative author did in fact write the review.
- Assess the trustworthiness of the reviews based on what is known about their authors. (Remember, we cannot assume that our assessor understands the field or research methods well enough to evaluate either the article or the review directly.)
- Based on that, form an educated (?) guess about the article’s value and reliability.
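To make the tedium of this procedure concrete, the four steps above can be sketched in code. This is strictly an illustrative sketch of the heuristic the Loon is critiquing, not any real system: every class, field name, and threshold below is invented for the example, and the hard parts (checking credentials, verifying authorship) are waved away as ready-made inputs, which is precisely where the real hours would go.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    author: str
    author_verified: bool        # could we somehow confirm the putative author wrote it?
    author_credibility: float    # 0.0–1.0, from whatever is known about the author

@dataclass
class Article:
    title: str
    reviews: list = field(default_factory=list)

def quick_assess(article: Article) -> str:
    """Rough trust label following the four steps described in the post."""
    # Step 1: no reviews at all -> presume bad or unimportant.
    if not article.reviews:
        return "presumed bad or unimportant"
    # Step 2: discard reviews whose authorship cannot be verified.
    verified = [r for r in article.reviews if r.author_verified]
    if not verified:
        return "unassessable (no verifiable reviews)"
    # Step 3: trust the reviews only as far as we trust their authors,
    # since our assessor cannot evaluate article or review directly.
    avg_credibility = sum(r.author_credibility for r in verified) / len(verified)
    # Step 4: form an educated (?) guess from that proxy. Thresholds are arbitrary.
    if avg_credibility >= 0.7:
        return "probably trustworthy"
    if avg_credibility >= 0.4:
        return "uncertain"
    return "probably untrustworthy"
```

Even granting magical inputs, note how much the verdict leans on a single averaged proxy for author reputation; run it over a researcher's whole body of work and the error compounds.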
Really? This is an acceptable quick-evaluation solution? Really? People limited to 24 hours in a day, people who are not stunningly obsessed, will actually do this? In what unfathomable ocean? Certainly not in the pond where the Loon swims. Apply this supposed method to a body of work rather than a single piece, and the time and accuracy problems only multiply.
Open review also does not help:
- protect new entrants into academe from being gulled into placing their hard work with unworthy outlets
- protect researchers in nations whose governments do not understand research from being forced by cargo-cult productivity requirements to throw money at bad journals
- improve the image of research and researchers in broader society; bad journals reflect badly on academe’s oft-claimed self-policing
- reduce the criminally wasteful money flow into the pockets of the bad actors who create bad journals
These problems wind up casting more aspersions on open-access publishers than on paywall publishers, who can and do conceal their many sins behind Big Deals and extortionate per-article prices. This being the case, the open-access movement (as many respondents to the Bohannon article have already pointed out) has a special interest in reducing the proliferation of bad actors and their bad journals.
If neither the apparent fact of peer review nor the apparent fact of open review (“apparent” because as discussed, claiming certain review processes is not the same as doing them, or doing them responsibly) suffices as a quick quality measure, we have two non-exclusive choices: find another such measure, or winnow the pool of journals to exclude as many bad actors as possible.
The Loon believes both choices have merit—she is certainly interested in alternative metrics as quick-judgment facilitators—but neither suffices in the absence of the other. She will therefore continue to ponder viable means of eliminating bad actors, and encourages her readers to do the same.