Now that we all know that Bohannon’s Science “sting” was embarrassing pseudo-science, it seems well worth considering how to do better and fairer ones.
The usual means that academic stakeholders, from tenure-and-promotion committees to collection-development librarians, use to judge journal quality are rapidly proving untrustworthy and easy to game. (Several never should have been relied upon to the extent they are, but leave that aside for now.) Peer review has known biases and other problems and passes too many bad works, not to mention that it is easy for a journal to claim peer review without actually doing it, or while merely faking it. Low acceptance rates are a dumb indicator, full stop, especially with the advent of the reputable mega-journal. The less said about the opaque, gameable journal impact factor (JIF), the better, though it is worth recalling that judging journals (rather than the authors who publish in them) is its actual function. Whatever one’s opinion of JIF, however, the continued existence of many, many journals with no or essentially zero JIF is undeniable, meaning that JIF cannot serve to weed out bad journals.
Negative indicators don’t work terribly well at present either. Library cancellations don’t help much because of Big Deal bundling. Retractions are endemic, often opaque, and, at many journals along the entire quality continuum, unfortunately not performed even when urgently needed. (The Retraction Watch weblog is one of the Loon’s favorites, and she recommends it unhesitatingly to everyone in academe, academic librarianship, and academic publishing.)
What’s left? Not a great deal. Might we not create something, then?
Make no mistake, doing systematic sting work would be an enormous undertaking. Bohannon, despite selling out his training and reputation for a cheap hit at OA, did get one methodological point right: too many journals sit on articles far too long, making a real sting operation tedious, long-term work. Length of consideration process, however, is itself a testable quality indicator, and even better, one many authors care deeply about. Creating a reporting mechanism in which authors can rate journals and answer relatively simple questions about their experiences with them seems worthwhile. Anti-gaming and anti-senseless-grudge guards would need to be in place; partnering with an academic-identity enterprise such as ORCID or an article-identity enterprise such as CrossRef might be wise, to start.
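For concreteness only, here is a rough, entirely hypothetical sketch (in Python) of what one record in such a reporting service might hold, assuming ORCID iDs stand in for verified author identity and DOIs for manuscript identity. Every name, field, and rule below is invented for illustration, not a design anyone has committed to.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Illustrative only: one author's report of one submission experience.
@dataclass(frozen=True)
class SubmissionReport:
    orcid: str               # verified author identity, e.g. "0000-0002-1825-0097"
    journal_issn: str        # the journal being reported on
    manuscript_doi: str      # or a preprint identifier, if no DOI exists yet
    submitted: date
    decided: Optional[date]  # None if the journal is still sitting on the paper
    claimed_peer_review: bool
    reviews_received: int

    def days_under_consideration(self, today: date) -> int:
        """Length of the consideration process, the indicator discussed above."""
        return ((self.decided or today) - self.submitted).days


def accept_report(existing: List[SubmissionReport], new: SubmissionReport) -> bool:
    """Crude anti-gaming guard: one report per author per manuscript per journal."""
    return not any(
        r.orcid == new.orcid
        and r.manuscript_doi == new.manuscript_doi
        and r.journal_issn == new.journal_issn
        for r in existing
    )
```

The one-report-per-verified-author-per-manuscript rule is merely the simplest imaginable anti-gaming guard; a real service would need considerably more.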
Doing stings with fake papers adds the burden of writing or otherwise securing half-believable fake papers across a broad spectrum of disciplines. For methodological consistency, too, the believability level would have to be roughly even across all the fakes, and the Loon isn’t even sure how to measure the believability of a fake paper! Perhaps the Nigerian-spam tactic is easiest, on the whole: make the fakes so obviously, even laughably, flawed that only the most careless journals would accept them. That won’t catch all or even most bad journals, but the journals it does catch will be known to be truly execrable.
The Loon might instead suggest a plagiarism sting, however. Cooperating authors, publishers, or pre/post-print repositories—copyright or copyright license permitting—could quietly donate published papers (or even better, their pre-review manuscripts) for resubmission to the journals being tested. (All authors of all papers employed in plagiarism stings should receive a receipt from the testers, to protect them when journals do the right thing and report the “attempted plagiarism” to institutions or grant agencies.) Two or perhaps three levels of testing then become possible: testing for straight republication, republication of a faithful translation, and (most labor-intensively) republication of what rhetoric scholars and information-literacy librarians call a “patchwritten” article.
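Again purely as a sketch, and again with every name and field invented for illustration: the receipt mentioned above might record little more than who donated which paper, to which journal, at which level of testing, plus a fingerprint of the exact file submitted, so the donating author can later prove the resubmission was part of a sanctioned test.

```python
import hashlib
from dataclasses import dataclass
from datetime import date
from enum import Enum

class TestLevel(Enum):
    STRAIGHT_REPUBLICATION = "straight"
    TRANSLATION = "translation"
    PATCHWRITING = "patchwriting"

@dataclass(frozen=True)
class StingReceipt:
    """Receipt issued to the donating author before the test submission is made."""
    donor_orcid: str          # the cooperating author's identity
    original_doi: str         # the already-published paper or preprint being reused
    target_journal_issn: str  # the journal under test
    level: TestLevel
    issued: date
    manuscript_sha256: str    # fingerprint of the exact file submitted, for later proof

def fingerprint(manuscript_bytes: bytes) -> str:
    """Hash the file to be submitted so the receipt is tied to it exactly."""
    return hashlib.sha256(manuscript_bytes).hexdigest()
```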
Journals that fail the initial test could perhaps be marked for further testing, including fake-paper testing. Reporting, though, raises thornier questions. Should journals be quietly warned that they have failed once and are under scrutiny? Or is immediate public transparency, with an invitation to the journal’s editors to respond, preferable?
Who might fund or do this work? Publishers, scholarly societies, and publishing-trade organizations are too compromised to be trustworthy process managers, sadly, though some of them might be shamed into providing funding for a trustworthy third-party tester. The DOAJ might do this on the open-access side, but that leaves no counterpart for toll-access journals (which, says the Loon’s activist heart, might be no bad thing for OA). Electronic journal aggregators might, as an internal quality measure, though they may well prefer not to, the better to keep dazzling credulous eyes with the size of their Big Deals. Perhaps the folks behind DORA could crowdsource something credible? Perhaps ARL, if the work can be firewalled away from SPARC? Perhaps a major research-library consortium? Perhaps a startup?
The Loon quite despairs of teaching academics (much less politicians—and yes, this also is a concern!) to gauge journal quality properly. The inevitable corollary is that academe needs more and better quick-use quality criteria. Let us all consider how best to accomplish this.